hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8af600cc3c30a19b0b338313611ab1be2f840b16 | 4,558 | md | Markdown | src/hu/2022-01/08/05.md | Adventech/sabbath-school-lessons | baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/hu/2022-01/08/05.md | Adventech/sabbath-school-lessons | baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/hu/2022-01/08/05.md | Adventech/sabbath-school-lessons | baf65ac98fa7c7bce73e16c263eb0cc1bf0ba62a | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: Az Új Szövetség Jobb Ígéretei
date: 16/02/2022
---
Kísértést érezhetünk arra, hogy azt gondoljuk, az új szövetség „jobb ígéreteket” tartalmaz, mivel a régi szövetségnél nagyobb jutalmakat kínál (mennyei otthon, örök élet stb.). Azonban az az igazság, hogy Isten ugyanazt ígérte az ószövetségi hívőknek is, mint nekünk (lásd Zsid 11:10, 13-16). Zsid 8:6 versében a „jobb ígéretek” kifejezés másfajta ígéretekről szól.
Az Úr és Izrael közötti szövetség a két fél egymásnak tett ígéreteinek a hivatalos jegyzéke. Isten magára vállalta a kezdeményezés szerepét és megszabadította a népet Egyiptomból, továbbá megígérte, hogy elvezeti őket az ígéret földjére.
`Vessük össze 2Móz 24:1-8 és Zsid 10:5-10 részeit! Milyen hasonlóságokat és eltéréseket veszünk észre e két ígéret között?`
Az Isten és Izrael közötti szövetséget vérrel iktatták be. A vért az oltárra, illetve az alá hintették. Izrael népe megígérte, hogy engedelmeskedik mindannak, amit az Úr megmondott.
„Az örök élet feltételei ma is ugyanazok, mint mindig voltak, mint amelyek ősszüleink paradicsomi bukása előtt voltak: tökéletes engedelmesség Isten törvénye iránt, tökéletes igazság, életszentség. Ha ezek megtartása nélkül is elnyerhetnénk az örök életet, akkor az az egész világegyetem boldogságát veszélyeztetné. A bűn útja, összes kínjával és nyomorával együtt nyitva állna az örökkévalóságon át” (Ellen G. White: Jézushoz vezető út. Budapest, 2008, Advent Irodalmi Műhely, 46-47. o.).
Isten teljesítette az új szövetség szigorú követelményeit, mert odaadta a Fiát, hogy tökéletes életet éljen közöttünk és így a szövetség ígéretei benne teljesedhessenek – amelyeket Jézusba vetett hit által mi is elnyerhetünk. Krisztus engedelmessége garantálja a szövetségi ígéreteket (Zsid 7:22). Ő részesül a szövetség áldásaiban, amelyeket utána ránk ruház. Aki „Krisztusban” van, vele együtt élvezi majd az áldásokat. Isten a Szentlelkét is adja, hogy képessé tegyen minket a törvény betartására.
`Krisztus eleget tett a szövetség feltételeinek, így nem lehet kétséges, hogy Isten beteljesíti az ígéreteit. Hogyan segít ez nekünk 2Kor 1:20-22 szakaszát megérteni? Milyen nagyszerű reménységet találunk ebben?`
---
#### Ellen G. White idézetek
A próféták tanításaiban az Úr érthetően kinyilatkoztatta szeretetét és a megváltásukra készített tervet. A menny követei, akiket egyházához küldött, Izrael elhívásának, sikereinek, kudarcainak, Isten kegyeibe való visszafogadásának, a szőlőskert Gazdájától való elfordulásának történetéről, valamint arról beszéltek, hogy a korszakokat átfogó tervet a maradék fogja megvalósítani, melynek életében minden szövetségi ígéret teljesedni fog. És ma Ő ugyanazt üzeni (…)
Izrael! Bízzál Istenben! A szőlőskert Gazdája még most is gyűjti régóta várt drága gyümölcseit a nemzetek és népek közül! Nemsokára eljön övéihez; és azon a boldog napon végre megvalósul Izrael házával való terve. – Próféták és királyok, 22. o.
A drága Szentírás Isten kertje, a mennyei ígéretek pedig a kert rózsái és liliomai.
Annyira szeretném, hogy bárcsak mindannyian elhinnénk az Úr ígéreteit… A mennyel való jó kapcsolatunkat nem a szívünkben levő öröm bizonyítja, hanem az, hogy a következő szavakkal viszonyulunk Isten ígéreteihez: „Ezek az enyémek. Az Úr kiárasztotta rám Szentlelkét. Én szüntelenül befogadom a világosságot, hiszen Ő megígérte: »Amit könyörgésetekben kértek, higgyétek, hogy mindazt megnyeritek, és meglesz néktek.« Hit által léphetek be a kárpit mögé, és szorosan kapaszkodom Krisztusban, az én erősségemben. Hálás vagyok Istennek, hogy van Megváltóm.” (…)
Az Igében található ígéretekben Isten mindannyiunkat személyesen szólít meg, és teszi ezt oly közvetlenül, hogy szinte hallhatjuk a hangját. Az ígéretek által Krisztus kegyelmet és erőt ad nekünk arról a fáról, amelynek a levelei a „pogányok gyógyítására” valók. Ha ezeket elfogadjuk és magunkévá tesszük, akkor megerősödik a jellemünk, és az életünkre nézve ihletést és támogatást nyerünk. – The Faith I Live By, 9. o.
Krisztus eljött a földre, és az örökkévalóságban felhalmozott szeretettel állt az emberek fiai elé. Ez az a kincs, amelyet a vele való kapcsolat útján kell megkapnunk, bemutatnunk és továbbadnunk.
Isten művében az emberi erőfeszítést csak odaszentelt, imádkozó élettel, a krisztusi kegyelem életet átformáló hatalmának megmutatásával lehet eredményessé tenni. Különböznünk kell a világtól, mert Ő ránk nyomta pecsétjét; mert bennünk nyilatkoztatja ki saját szeretetét és jellemét. Megváltónk pedig betakar minket igazságával. – A nagy Orvos lábnyomán, 37. o. | 126.611111 | 556 | 0.814831 | hun_Latn | 1.00001 |
8af62aac64ae9838e1dda16170d396408cb124ef | 994 | md | Markdown | examples/README.md | AkihiroTakamura/formsy-react | 970ec764f023276060accc9bfd15f94cad5a96f0 | [
"MIT"
] | 1 | 2019-04-17T10:12:04.000Z | 2019-04-17T10:12:04.000Z | examples/README.md | AkihiroTakamura/formsy-react | 970ec764f023276060accc9bfd15f94cad5a96f0 | [
"MIT"
] | null | null | null | examples/README.md | AkihiroTakamura/formsy-react | 970ec764f023276060accc9bfd15f94cad5a96f0 | [
"MIT"
] | 1 | 2019-08-30T14:58:04.000Z | 2019-08-30T14:58:04.000Z | Formsy React Examples
=====================
To run and development examples:
1. Clone this repo
2. Run `npm install`
3. Start the development server with `npm run examples`
4. Point your browser to http://localhost:8080
## Possible Issues
Examples might not run if you have an old node packages. Try clear [npm cache](https://docs.npmjs.com/cli/cache#details) and reinstall dependencies:
```
rm -rf node_modules
npm cache clean
npm install
npm run examples
```
If it is not helped try update your node.js and npm.
## Examples
1. [**Login**](login)
Two required fields with simple validation.
2. [**Custom Validation**](custom-validation)
One field with added validation rule (`addValidationRule`) and one field with dynamically added validation and error messages.
3. [**Reset Values**](reset-values)
Reset text input, checkbox and select to their pristine values.
4. [**Dynamic Form Fields**](dynamic-form-fields)
Dynamically adding and removing fields to form.
| 24.243902 | 148 | 0.726358 | eng_Latn | 0.960845 |
8af649aea0bd18ab56c586d9963305789a126e1e | 8,210 | markdown | Markdown | _posts/2013-06-05-mengetes-akses-database.markdown | endymuhardin/endymuhardin.github.io | 18cf50e1a580c3335e35f524b9e33c2d625b1463 | [
"MIT"
] | 3 | 2017-07-20T18:08:58.000Z | 2021-07-28T12:27:47.000Z | _posts/2013-06-05-mengetes-akses-database.markdown | endymuhardin/endymuhardin.github.io | 18cf50e1a580c3335e35f524b9e33c2d625b1463 | [
"MIT"
] | 1 | 2017-04-04T02:58:10.000Z | 2018-10-18T02:51:26.000Z | _posts/2013-06-05-mengetes-akses-database.markdown | endymuhardin/endymuhardin.github.io | 18cf50e1a580c3335e35f524b9e33c2d625b1463 | [
"MIT"
] | 10 | 2015-04-06T08:11:55.000Z | 2021-09-13T00:45:25.000Z | ---
layout: post
title: "Mengetes Akses Database"
date: 2013-06-05 11:49
comments: true
categories:
- java
---
Pada bagian ini, kita akan mempersiapkan seperangkat kode program untuk mengetes aplikasi yang telah kita buat.
Artikel ini merupakan bagian kelima dan terakhir dari rangkaian artikel Spring JDBC, yaitu
1. [Konfigurasi koneksi database](http://software.endy.muhardin.com/java/konfigurasi-koneksi-database-dengan-spring/)
2. [Struktur Aplikasi](http://software.endy.muhardin.com/java/struktur-aplikasi-java-dengan-spring-dan-maven/)
3. [Insert, update, dan delete data](http://software.endy.muhardin.com/java/insert-update-delete-dengan-spring-jdbc/)
4. [Query data](http://software.endy.muhardin.com/java/query-dengan-spring-jdbc/)
5. [Mengetes Akses Database](http://software.endy.muhardin.com/java/mengetes-akses-database/)
<!--more-->
## Setup Test ##
Seperti telah dijelaskan sebelumnya, test class kita akan terdiri dari dua bagian:
1. Abstract superclass : berisi seluruh kode program pengetesan aplikasi
2. Concrete subclass : berisi kode program untuk melakukan inisialisasi
Berikut adalah kerangka abstract superclass test untuk `Produk`
```java
package com.muhardin.endy.training.java.aksesdb.service;
// import statement, generate menggunakan IDE
public abstract class ProdukServiceTest {
public abstract PenjualanService getPenjualanService();
public abstract DataSource getDataSource();
@Before
public void bersihkanDataTest() throws Exception {
}
@Test
public void testSimpanUpdateHapusProduk() throws Exception {
}
// test method lain tidak ditampilkan
}
```
Dan berikut ini adalah concrete subclass yang berfungsi melakukan inisialisasi konfigurasi Spring JDBC
```java
package com.muhardin.endy.training.java.aksesdb.service.springjdbc;
// import statement generate menggunakan IDE
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath*:com/muhardin/**/spring-jdbc-ctx.xml")
public class ProdukServiceSpringJdbcTest extends ProdukServiceTest {
@Autowired private DataSource dataSource;
@Autowired private PenjualanService penjualanService;
@Override
public PenjualanService getPenjualanService() {
return penjualanService;
}
@Override
public DataSource getDataSource() {
return dataSource;
}
}
```
## Database Reset ##
Pada kode program abstract superclass di atas, kita melihat ada method untuk membersihkan data test. Method ini diberikan annotation `@Before` supaya dia dijalankan **sebelum masing-masing test**.
Berikut adalah isi method tersebut
```java
@Before
public void bersihkanDataTest() throws Exception {
DataSource ds = getDataSource();
Connection conn = ds.getConnection();
String sql = "delete from m_produk where kode like ?";
PreparedStatement ps = conn.prepareStatement(sql);
ps.setString(1,"T-001%");
ps.executeUpdate();
conn.close();
}
```
Di method tersebut, kita menghapus semua produk yang kodenya diawali `T-001`. Ini adalah data yang kita insert selama proses tes. Agar tidak mengganggu tes lainnya, kita hapus data percobaan tersebut.
## Test Insert Update Delete ##
Proses insert, update, dan delete mengubah data yang ada dalam database. Supaya kita tidak repot mengurus data sample yang sudah ada di database, ketiga proses ini kita tes dalam satu kesatuan. Dengan demikian, setelah selesai dijalankan, datanya kembali bersih seperti sebelumnya. Berikut kode programnya.
```java
@Test
public void testSimpanUpdateHapusProduk() throws Exception{
Produk p = new Produk();
p.setHarga(new BigDecimal(125000));
p.setKode("T-001");
p.setNama("Produk Test 001");
PenjualanService service = getPenjualanService();
service.simpan(p);
assertNotNull(p.getId());
Connection conn = getDataSource().getConnection();
PreparedStatement psCariById
= conn.prepareStatement
("select * from m_produk where id = ?");
psCariById.setInt(1, p.getId());
ResultSet rs = psCariById.executeQuery();
// test nilai awal
assertTrue(rs.next());
assertEquals("T-001",rs.getString("kode"));
// update record
p.setKode("T-001x");
service.simpan(p);
// test query setelah update
rs = psCariById.executeQuery();
assertTrue(rs.next());
assertEquals("T-001x",rs.getString("kode"));
// test delete
service.hapus(p);
// test query setelah hapus
rs = psCariById.executeQuery();
assertFalse(rs.next());
}
```
Dari komentar yang ada dalam kode program, sudah jelas apa maksud dari masing-masing bagian.
## Test Query ##
Selanjutnya kita melakukan tes terhadap query. Kita mulai dari yang sederhana dulu, yaitu tabel produk.
Berikut kode program pengetesannya.
```java
@Test
public void testCariProdukById() {
PenjualanService service = getPenjualanService();
assertNotNull(service.cariProdukById(1));
assertNull(service.cariProdukById(99));
}
@Test
public void testCariProdukByKode() {
PenjualanService service = getPenjualanService();
assertNotNull(service.cariProdukByKode("K-001"));
assertNull(service.cariProdukByKode("X-001"));
}
@Test
public void testHitungSemuaProduk() {
PenjualanService service = getPenjualanService();
assertEquals(Long.valueOf(3),
Long.valueOf(service.hitungSemuaProduk()));
}
@Test
public void testCariSemuaProduk() {
PenjualanService service = getPenjualanService();
List<Produk> hasil = service.cariSemuaProduk(1, 100);
assertNotNull(hasil);
assertTrue(hasil.size() == 3);
for (Produk produk : hasil) {
assertNotNull(produk.getId());
assertNotNull(produk.getKode());
assertNotNull(produk.getNama());
assertNotNull(produk.getHarga());
}
}
```
Logika pengetesan tidak kompleks. Kita query datanya menggunakan method di `ProdukDao` yang telah kita buat,
lalu kita bandingkan dengan kondisi yang seharusnya. Perbandingan dilakukan menggunakan method yang telah disediakan JUnit, yaitu method berawalan `assert`, misalnya `assertNotNull`, `assertEquals`, dan lainnya.
Yang harus diperhatikan di sini adalah, kita harus benar-benar tahu persis isi database supaya test ini bisa berjalan dengan baik. Ada banyak teknik yang bisa digunakan untuk memastikan isi database sebelum tes dijalankan, salah satunya menggunakan tools yang bernama [DBUnit](http://www.dbunit.org/). Lebih lanjut tentang cara menggunakan DBUnit bisa dibaca di [artikel ini](http://software.endy.muhardin.com/java/ruthless-testing-4/).
## Test Relasi ##
Pengetesan terhadap relasi pada prinsipnya tidak berbeda. Hanya ada sedikit tambahan yaitu kita juga harus memastikan apakah relasinya berhasil terisi dengan sempurna. Berikut pengetesannya.
```java
@Test
public void testCariPenjualanDetailByProdukDanPeriode(){
Produk p = new Produk();
p.setId(1);
Date mulai = new DateTime(2013,1,1,0,0,0,0).toDate();
Date sampai = new DateTime(2013,2,1,0,0,0,0).toDate();
List<PenjualanDetail> hasil = getPenjualanService()
.cariPenjualanDetailByProdukDanPeriode(p,
mulai, sampai, 1, 100);
assertNotNull(hasil);
assertTrue(hasil.size() == 2);
for (PenjualanDetail penjualanDetail : hasil) {
verifikasiPenjualanDetail(penjualanDetail);
}
}
```
Pada kode program di atas, terlihat bahwa kita melakukan looping yang di dalamnya ada verifikasi `PenjualanDetail`. Berikut isi methodnya.
```java
private void verifikasiPenjualanDetail
(PenjualanDetail penjualanDetail) {
assertNotNull(penjualanDetail.getProduk().getHarga());
assertNotNull(penjualanDetail.getPenjualan().getWaktuTransaksi());
}
```
Kita cukup memastikan bahwa relasi yang dimiliki `PenjualanDetail` yaitu `Produk` dan `Penjualan` tidak null. Demikian juga variabel yang ada di dalam objectnya yaitu `harga` dan `waktuTransaksi`.
Demikianlah rangkaian artikel tentang penggunaan Spring JDBC. Kode program selengkapnya dapat diambil [di Github](https://github.com/endymuhardin/belajar-akses-database-java/tree/spring-jdbc).
Semoga bermanfaat.
| 34.06639 | 436 | 0.735566 | ind_Latn | 0.833215 |
8af667bec32504774056914ad18e06e79ff591cb | 56 | md | Markdown | README.md | shvets/goochie-pooch.com | 32ce9803edf22bcf4b83f7ae4c1e113a9fa9b797 | [
"MIT"
] | null | null | null | README.md | shvets/goochie-pooch.com | 32ce9803edf22bcf4b83f7ae4c1e113a9fa9b797 | [
"MIT"
] | null | null | null | README.md | shvets/goochie-pooch.com | 32ce9803edf22bcf4b83f7ae4c1e113a9fa9b797 | [
"MIT"
] | null | null | null | This application is hosted at: http://goochie-pooch.com
| 28 | 55 | 0.785714 | eng_Latn | 0.911523 |
8af6d99a50d88396996ccee7e26186f5440c33c5 | 2,976 | md | Markdown | maven.md | MariShunxiang/JavaWebSample | 2025a6eedbc712bb4894cdd88640838b98606ce5 | [
"Apache-2.0"
] | 2 | 2017-07-08T02:08:36.000Z | 2017-07-09T14:31:19.000Z | maven.md | MariShunxiang/JavaWebSample | 2025a6eedbc712bb4894cdd88640838b98606ce5 | [
"Apache-2.0"
] | null | null | null | maven.md | MariShunxiang/JavaWebSample | 2025a6eedbc712bb4894cdd88640838b98606ce5 | [
"Apache-2.0"
] | null | null | null | #maven
maven项目组成:
src
-main
-java
-package
-test
-java
resources
maven常用命令:
mvn -v 查看maven版本
compile 编译
test 测试
package 打包
clean 删除target目录
install 安装jar包到本地仓库中
创建目录的两种方式:
1.archetype:generate 按照提示进行选择
2.archetype:generate -DgroupId=组织名,公司网址的反写+项目名
-DartifactId=项目名-模块名
-Dversion=版本号
-Dpackage=代码所存在的包名
e.g:mvn archetype:generate -DgroupId:com.sync.maven -DartifactId:maven-demo
-Dversion:1.0.0-SNAPSHOT -Dpackage:com.sync.maven.demo
maven自动建立项目骨架:
1.创建文件夹。
2.cmd命令进入目录.
3.执行:mvn archetype:generate
第一次执行会下载很多依赖包,耐心等待
4.依赖包下载完成之后选择版本:6
5.输入'groupId':com.sync.maven
6.输入'artifactId':maven-service
7.输入'version':1.0-SNAPSHOT
8.输入'package':com.sync.maven.service
9.确认:输入'y';完成项目骨架构建.
maven默认的全球仓库的地址:
maven解压缩路径→lib目录→maven-model-builder-3.3.9.jar→pom.xml→repositories→url
修改镜像地址:
maven解压缩路径→conf→setting.xml→找到<mirrors>标签→增加一个<mirror>标签如下:
<mirror>
<id>maven.net.cn</id>
<mirrorOf>central</mirrorOf>
<name>central mirror in china</name>
<url>http://maven.net.cn/content/groups/public</url>
</mirror>
一旦配置了镜像仓库,所有对原仓库的访问都将转到镜像仓库.
更改仓库位置:
maven从远程仓库中下载构建,所有的包默认都是放在当前用户的C盘:.m2文件中
修改:
maven解压缩路径→conf→setting.xml→增加如下标签:
<localRepository>E:/mavan/repo</localRepository>
maven生命周期:
clean,compile,test,package,install
clean: 清理项目
pre-clean 执行清理前的工作
clean 清理上一次构建生成的所有文件
post-clean 执行清理后的文件
default 构建项目(最核心)
compile test package install
site 生成项目站点
pre-site 在生成项目站点前要完成的工作
site 生成项目的站点文档
post-site 在生成项目站点后要完成的工作
site-deploy 发布生成的站点到服务器上
pom.xml文件标签解析:
modelVersion:指定了当前pom的版本
groupId:反写的公司网址+项目名
artifactId:项目名+模块名
version:表示当前项目的版本号;第一个0表示大版本号,第二个0表示分支版本号,第三个0表示小版本号
snapshot快照、alpha内部测试、beat公测、Release稳定、GA正式发布
packaging:打包信息;默认是jar,还可以指定war,zip,pom
name:项目名
url:项目地址
descpiption:项目描述
developers:开发人员信息
licenses:许可证信息
orgainzation:组织信息
dependencies:依赖列表
dependecy:依赖项
groupId
archetype
type:依赖的类型,pom
scope:依赖的范围
optional{true,false}:设置依赖是否可选
exclusions:排除依赖传递列表:
exclusion:可以多个
<!-- 依赖的管理,主要是定义在父模块中,供子模块使用 -->
dependencyManagement
dependencies
dependecy
build:
<!-- 插件列表 -->
plugins
plugin
groupId
archetype
version
<!-- 通常用于子模块对父模块pom的继承 -->
<parent>
<!-- 用来聚合运行多个maven项目 -->
<modules>
<module></module>
</modules>
依赖的范围:
maven提供了三种classpaath,分别是:
1、编译
2、测试
3、运行
<!-- 只存在测试的classpath,只存在测试的范围 -->
<scope>test</scope>
scope提供了6个值
compile:默认的级别,编译测试运行都有效
provided:在测试和编译时有效,serverlet
runtime:在测试和运行时有效,jdbc
test:测试时有效,junit
system:与本机系统相关联,编译测试时有效,可移植性差
import:导入的范围,它只在 dependencyManagement 中,表示从其他的pom中导入 dependecy 的配置
maven依赖传递
maven依赖冲突:
1、短路优先:
A→B→C→X(V2.4.jar)
A→D→X(V2.0.jar)
优先解析路径短的版本:A→X(V2.0.jar)
2、先声明先优先
A→B→X(V2.4.jar)
A→C→X(V2.0.jar)
如果路径长度相同,则谁先声明,先解析谁,
如果B在C上面,则:A→X(V2.4.jar)
maven聚合和继承:
新建一个maven工程:hongxing-aggreation
<name>hongxing-aggreation</name>
<modules>
<module>../hongxing-bge</module>
<module>../hongxing-nange</module>
<module>../hongxing-shanji</module>
</modules>
| 17.103448 | 75 | 0.75168 | yue_Hant | 0.596575 |
8af86fdd77fa8d5fd63e1f6b0e0ecf863a939ce8 | 2,219 | md | Markdown | LINUX.md | slavabulgakov/grpc-swift | 19bd5c67fc409bdcc62cd88f74b95db2bfbb1aaa | [
"Apache-2.0"
] | null | null | null | LINUX.md | slavabulgakov/grpc-swift | 19bd5c67fc409bdcc62cd88f74b95db2bfbb1aaa | [
"Apache-2.0"
] | null | null | null | LINUX.md | slavabulgakov/grpc-swift | 19bd5c67fc409bdcc62cd88f74b95db2bfbb1aaa | [
"Apache-2.0"
] | null | null | null | # Swift gRPC Samples
Follow these steps to build and run Swift gRPC on Linux.
## Prerequisites
These instructions are for running in the Docker container manager,
but can be directly used on any Ubuntu 16.04 image.
## Start Docker
Start a docker instance with the following command:
`docker run -i -t --privileged=true ubuntu:16.04 /bin/bash`
## Install Dependencies
# update package list
apt-get update
# install download tools
apt-get install -y git wget
# install a few useful utilities
apt-get install -y vim sudo
# install grpc build dependencies
apt-get install -y build-essential autoconf libtool
# install swift dependencies
apt-get install -y clang libicu-dev libedit-dev python-dev libxml2-dev
# install networking dependencies
apt-get install -y libcurl4-openssl-dev
## Install Swift
# go to /root
cd
# download and unpack swift
wget https://swift.org/builds/swift-3.0.2-release/ubuntu1604/swift-3.0.2-RELEASE/swift-3.0.2-RELEASE-ubuntu16.04.tar.gz
tar xzf swift-3.0.2-RELEASE-ubuntu16.04.tar.gz
ln -s swift-3.0.2-RELEASE-ubuntu16.04 swift
## Add Swift to your path
# add swift to your path by adding this to your .bashrc
export PATH=/root/swift/usr/bin:$PATH
# Then run this to update your path
source ~/.bashrc
## Configure git
git config --global user.email <your email address>
git config --global user.name "<your name>"
## Fetch and build grpc
git clone https://github.com/grpc/grpc-swift
cd grpc-swift/third_party
git clone https://github.com/grpc/grpc.git
cd grpc
git submodule update --init
make
make install
## Build the Echo sample
cd
cd grpc-swift/Examples/Echo/PackageManager
make
## Run the test client and server
# start the server
.build/debug/Echo serve &
# run the client to test each Echo API
.build/debug/Echo get
.build/debug/Echo expand
.build/debug/Echo collect
.build/debug/Echo update
## To test the plugin
# build and install protoc
cd
cd grpc-swift/third_party/grpc/third_party/protobuf
make install
# build and test the plugin
cd
cd grpc-swift/Plugin
make test
| 24.932584 | 123 | 0.698963 | eng_Latn | 0.840613 |
8af884d922e32e91f41984962c6d61c365f7c48b | 479 | md | Markdown | docs/sdk/README.md | softbaddog/LiteOS_Documents | 69963191fb4e571048a4951ea0c969e1dfcc118b | [
"MIT"
] | 1 | 2021-08-18T08:22:06.000Z | 2021-08-18T08:22:06.000Z | docs/sdk/README.md | xwl3190217171/LiteOS_Documents | e169950d43eefc2a2680e52d4fc35b6d28ac868e | [
"MIT"
] | 1 | 2018-08-08T23:35:33.000Z | 2018-08-08T23:35:33.000Z | docs/sdk/README.md | xwl3190217171/LiteOS_Documents | e169950d43eefc2a2680e52d4fc35b6d28ac868e | [
"MIT"
] | null | null | null | # LiteOS SDK 介绍
LiteOS SDK是Huawei LiteOS软件开发工具包(Software Development Kit),包括端云互通组件、FOTA、JS引擎、传感框架等内容。
本文档介绍的LiteOS SDK包括了LiteOS SDK端云互通组件。端云互通组件是华为物联网解决方案中,资源受限终端对接到 IoT云平台的重要组件。端云互通组件提供端云协同能力,集成了 LwM2M、CoAP、mbed TLS、LwIP 等全套 IoT 互联互通协议栈,且在 LwM2M 的基础上,提供端云互通组件开放API,用户只需关注自身的应用,而不必关注 LwM2M 实现细节,直接使用 LiteOS SDK端云互通组件封装的 API,通过四个步骤就能简单快速地实现与华为 OceanConnect IoT平台安全可靠连接。使用 LiteOS SDK端云互通组件,用户可以大大减少开发周期,聚焦自己的业务开发,快速构建自己的产品。
**Huawei LiteOS架构图**
 | 43.545455 | 320 | 0.818372 | yue_Hant | 0.929518 |
8af9026c1dbab21de9d807bb46d56b74cd67a27f | 16,956 | md | Markdown | content/posts/2019-03-05-tutorial-deploy-aplikasi-laravel-secara-otomatis-dengan-deployer-di-ubuntu-16-04.md | imansugirman/gridsomeblog | a0a36eb6c4708d95b7c9c5edbf74a3e6696fe8a6 | [
"MIT"
] | 1 | 2019-03-16T13:37:57.000Z | 2019-03-16T13:37:57.000Z | content/posts/2019-03-05-tutorial-deploy-aplikasi-laravel-secara-otomatis-dengan-deployer-di-ubuntu-16-04.md | imansugirman/gridsomeblog | a0a36eb6c4708d95b7c9c5edbf74a3e6696fe8a6 | [
"MIT"
] | 8 | 2021-03-01T20:16:03.000Z | 2022-02-26T01:36:31.000Z | content/posts/2019-03-05-tutorial-deploy-aplikasi-laravel-secara-otomatis-dengan-deployer-di-ubuntu-16-04.md | imansugirman/gridsomeblog | a0a36eb6c4708d95b7c9c5edbf74a3e6696fe8a6 | [
"MIT"
] | null | null | null | ---
title: "Tutorial Deploy Aplikasi Laravel Secara Otomatis Dengan Deployer di Ubuntu 16.04"
slug:
description: ""
date: 2019-03-05 20:26:23
author: iman-sugirman
tags:
- Laravel
- Deployer
- Digitalocean
cover: /images/posts/laravellogo.png
fullscreen: false
---
Laravel adalah kerangka kerja web open source PHP yang dirancang untuk membuat tugas pengembangan web umum, seperti otentikasi, routing, dan caching, lebih mudah. Deployer adalah alat deployment PHP open-source dengan dukungan out-of-the-box untuk sejumlah kerangka kerja populer, termasuk Laravel, CodeIgniter, Symfony, dan Zend Framework.
<!--  -->
### Introduction
Deployer mengotomatiskan penerapan dengan menggandakan aplikasi dari repositori Git ke server, menginstal dependensi dengan Composer, dan mengonfigurasi aplikasi sehingga Anda tidak perlu melakukannya secara manual. Ini memungkinkan Anda menghabiskan lebih banyak waktu untuk pengembangan, alih-alih upload dan konfigurasi, dan memungkinkan Anda menerapkan lebih sering.
Dalam tutorial ini, Anda akan menggunakan aplikasi Laravel secara otomatis tanpa downtime apa pun. Untuk melakukan ini, Anda akan mempersiapkan lingkungan pengembangan lokal dari mana Anda akan menyebarkan kode dan kemudian mengkonfigurasi server produksi dengan Nginx dan database MySQL untuk melayani aplikasi.
### Prerequisites
Sebelum memulai panduan ini, Anda memerlukan hal-hal berikut:
* Satu server Ubuntu 16.04 dengan pengguna non-root dengan hak akses sudo seperti yang dijelaskan dalam Pengaturan Server Awal dengan tutorial Ubuntu 16.04.
* Tumpukan LEMP diinstal seperti yang dijelaskan dalam Cara Menginstal Linux, Nginx, MySQL, PHP (LEMP stack) di tutorial Ubuntu 16.04.
* PHP, Komposer, dan Git yang diinstal di server Anda dengan mengikuti Langkah 1 dan 2 tentang Cara Menginstal dan Menggunakan Komposer pada Ubuntu 16.04.
* Paket php-xml dan php-mbstring diinstal di server Anda. Instal ini dengan menjalankan: sudo apt-get install php7.0-mbstring php7.0-xml.
* `Server Git`. Anda dapat menggunakan layanan seperti GitLab, Bitbucket atau GitHub. GitLab dan Bitbucket menawarkan repositori pribadi secara gratis, dan GitHub menawarkan repositori pribadi mulai dari $ 7 / bulan. Sebagai alternatif, Anda dapat membuat server Git pribadi dengan mengikuti tutorial Cara Mengatur Server Git Pribadi pada VPS.
* Nama domain yang mengarah ke server Anda. Cara Mengatur Nama Inang dengan tutorial DigitalOcean dapat membantu Anda mengonfigurasi ini.
* Komposer dan Git yang terpasang di komputer lokal Anda juga. Metode pemasangan yang tepat tergantung pada sistem operasi lokal Anda. Petunjuk untuk menginstal Git tersedia di halaman Unduhan proyek Git dan Anda dapat mengunduh Komposer langsung dari situs web proyek Composer.
#### Step 1 — Menyiapkan Laravel Aplikasi di Lokal
Karena Anda akan membuat dan menerapkan aplikasi Anda dari komputer lokal Anda, mulailah dengan mengkonfigurasi lingkungan pengembangan lokal Anda. Deployer akan mengontrol seluruh proses penerapan dari mesin lokal Anda, jadi mulailah dengan menginstalnya
***
Note: If you use Windows on your local machine you should use a BASH emulator (like Git bash) to run all local commands.
{: .alert .alert--info}
***
```bash
curl -LO https://deployer.org/deployer.phar
```
Lalu Masukan Perintah
``` bash
php -r "if (hash_file('sha1', 'deployer.phar') === '35e8dcd50cf7186502f603676b972065cb68c129') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('deployer.phar'); } echo PHP_EOL;"
```
Dan Outputnya adalah :
```shell
Installer verified
```
Buat sistem Penyebar tersedia luas. Perhatikan bahwa jika Anda menjalankan Windows atau macOS pada komputer lokal Anda, Anda mungkin perlu membuat direktori `/usr/local/bin/dep` sebelum menjalankan perintah ini:
```shell
sudo mv deployer.phar /usr/local/bin/dep
```
Buatlah agar file `deployer.phar` bisa di eksekusi :
```shell
sudo chmod +x /usr/local/bin/dep
```
Selanjutnya anda harus membuat aplikasi Laravel di `Local` Computer / PC atau MAC.
Contoh disini menginstall Laravel Versi 5.5
```shell
composer create-project --prefer-dist laravel/laravel laravel-app "5.5.*"
```
Anda telah menginstal semua Aplikasi Laravel yang diperlukan di `Komputer Local atau Mac` Anda. Dengan itu di tempat, kita akan beralih ke menciptakan repositori `Git` untuk aplikasi.
### Menghubungkan ke Remote Git Repository Anda
**Deployer** dirancang untuk memungkinkan pengguna menyebarkan kode dari mana saja. Untuk memungkinkan fungsi ini, diperlukan pengguna untuk memasukkan kode ke repositori di Internet dari mana Penyebar kemudian menyalin kode ke server produksi. Kami akan menggunakan Git, sistem kontrol versi open-source, untuk mengelola kode sumber aplikasi Laravel. Anda dapat terhubung ke server Git menggunakan protokol SSH, dan untuk melakukan ini dengan aman, Anda perlu membuat kunci SSH. Ini lebih aman daripada otentikasi berbasis kata sandi dan biarkan Anda menghindari mengetik kata sandi sebelum setiap penerapan.
Jalankan perintah berikut pada `local machine` Anda untuk menghasilkan kunci SSH. Perhatikan bahwa -f menentukan nama file dari file kunci, dan Anda dapat mengganti `gitkey` dengan nama file Anda sendiri. Ini akan menghasilkan sepasang kunci SSH (bernama `gitkey` dan `gitkey.pub`) ke folder `~/.ssh/`.
```bash
ssh-keygen -t rsa -b 4096 -f ~/.ssh/gitkey
```
Ada kemungkinan bahwa Anda memiliki kunci SSH lebih banyak di komputer lokal Anda, jadi konfigurasikan klien SSH untuk mengetahui kunci privat SSH yang akan digunakan ketika tersambung ke server Git Anda.
Buat file konfigurasi SSH di **Komputer Lokal** Anda:
```bash
touch ~/.ssh/config
```
Buka file dan tambahkan pintasan ke server Git Anda. Ini harus berisi direktif `HostName` (menunjuk ke nama host server Git Anda) dan direktif `IdentityFile` (menunjuk ke jalur file dari kunci SSH yang baru Anda buat:
```bash
Host mygitserver.com
HostName mygitserver.com
IdentityFile ~/.ssh/gitkey
```
Simpan dan tutup file, lalu batasi izinnya:
```bash
chmod 600 ~/.ssh/config
```
Sekarang klien SSH Anda akan tahu kunci pribadi mana yang digunakan untuk terhubung ke server Git.
Tampilkan konten file kunci publik Anda dengan perintah berikut:
```bash
cat ~/.ssh/gitkey.pub
```
Salin hasilnya dan tambahkan kunci publik ke server Git Anda.
Jika Anda menggunakan layanan hosting Git, baca dokumentasinya tentang cara menambahkan kunci SSH ke akun Anda:
* [Add SSH keys to GitHub](https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account/)
* [Add SSH keys to GitLab](https://docs.gitlab.com/ee/gitlab-basics/create-your-ssh-keys.html)
* [Add SSH keys to Bitbucket](https://confluence.atlassian.com/bitbucket/set-up-an-ssh-key-728138079.html)
Sekarang Anda akan dapat terhubung ke server Git Anda dengan mesin lokal Anda. Uji koneksi dengan perintah berikut:
```bash
ssh -T git@mygitserver.com
```
Jika perintah ini menghasilkan kesalahan, periksa bahwa Anda telah menambahkan kunci SSH Anda dengan benar dengan merujuk pada dokumentasi layanan hosting Git Anda dan coba hubungkan lagi.
Sebelum mendorong aplikasi ke repositori Git jarak jauh dan menerapkannya, pertama-tama konfigurasikan server produksi.
### Mengkonfigurasi Pengguna Penggambaran
Deployer menggunakan protokol SSH untuk secara aman menjalankan perintah di server. Karena alasan ini, langkah pertama yang akan kita ambil untuk mengonfigurasi server produksi adalah membuat pengguna yang dapat digunakan Deployer untuk masuk dan menjalankan perintah di server Anda melalui SSH.
Masuk ke server LEMP Anda dengan pengguna non-root sudo dan buat pengguna baru bernama "deployer" dengan perintah berikut:
```bash
sudo adduser deployer
```
Laravel memerlukan beberapa direktori yang dapat ditulis untuk menyimpan file dan upload yang di-cache, sehingga direktori yang dibuat oleh pengguna **deployer** harus dapat ditulis oleh server web Nginx. Tambahkan pengguna ke grup `www-data` untuk melakukan ini:
```bash
sudo usermod -aG www-data deployer
```
Izin default untuk file yang dibuat oleh pengguna deployer harus `644` untuk file dan `755` untuk direktori. Dengan cara ini, pengguna deployer akan dapat membaca dan menulis file, sementara grup dan pengguna lain akan dapat membacanya.
Lakukan ini dengan mengatur umask default deploy ke **022**:
```bash
sudo chfn -o umask=022 deployer
```
Kami akan menyimpan aplikasi di direktori `/var/www/html/`, jadi ubah kepemilikan direktori ke pengguna **deployer** dan grup **www-data**.
```bash
sudo chown deployer:www-data /var/www/html
```
Pengguna deployer harus dapat memodifikasi file dan folder dalam direktori `/var/www/html`. Karena itu, semua file dan subdirektori baru yang dibuat dalam direktori `/var/www/html` harus mewarisi id grup folder **(www-data)**. Untuk mencapai ini, atur ID grup pada direktori ini dengan perintah berikut:
```bash
sudo chmod g+s /var/www/html
```
Deployer akan mengkloning repo Git ke server produksi menggunakan SSH, jadi Anda ingin memastikan bahwa koneksi antara server LEMP Anda dan server Git aman. Kami akan menggunakan pendekatan yang sama yang kami gunakan untuk komputer lokal kami, dan kami akan menghasilkan kunci SSH untuk pengguna **deployer**.
Beralih ke pengguna **deployer** di server Anda:
```bash
su - deployer
```
Selanjutnya, buat pasangan kunci SSH sebagai pengguna deployer. Kali ini, Anda dapat menerima nama file default dari kunci SSH:
```bash
ssh-keygen -t rsa -b 4096
```
Tampilkan kunci publik:
```bash
cat ~/.ssh/id_rsa.pub
```
Salin kunci publik dan tambahkan ke server Git seperti yang Anda lakukan pada langkah sebelumnya.
Mesin lokal Anda akan berkomunikasi dengan server menggunakan SSH juga, jadi Anda harus menghasilkan kunci SSH untuk pengguna **deployer** di komputer lokal Anda dan menambahkan kunci publik ke server.
Di **Komputer Lokal** Anda jalankan perintah berikut. Jangan ragu untuk mengganti `deployerkey` dengan nama file pilihan Anda:
```bash
ssh-keygen -t rsa -b 4096 -f ~/.ssh/deployerkey
```
Salin keluaran perintah berikut yang berisi kunci publik:
```bash
cat ~/.ssh/deployerkey.pub
```
Di **Server Anda** saat pengguna **deployer** menjalankan yang berikut:
```bash
nano ~/.ssh/authorized_keys
```
Tempel kunci publik ke editor dan tekan `CTRL-X`, `Y`, lalu `ENTER` untuk menyimpan dan keluar.
Batasi izin file:
```bash
chmod 600 ~/.ssh/authorized_keys
```
Sekarang beralih kembali ke pengguna sudo:
```bash
exit
```
Sekarang server Anda dapat terhubung ke server Git dan Anda dapat masuk ke server dengan pengguna **deployer** dari komputer lokal Anda.
Masuk dari komputer lokal Anda ke server Anda sebagai pengguna **deployer** untuk menguji koneksi:
```bash
ssh deployer@your_server_ip -i ~/.ssh/deployerkey
```
Setelah Anda masuk sebagai deployer, uji koneksi antara server Anda dan server `Git` juga:
```bash
ssh -T git@mygitserver.com
```
Akhirnya, keluar dari server:
```bash
exit
```
Dari sini, kita dapat melanjutkan untuk mengkonfigurasi Nginx dan MySQL di server web kami.
### Mengkonfigurasi Nginx
Kami sekarang siap untuk mengkonfigurasi server web yang akan melayani aplikasi. Ini akan melibatkan konfigurasi akar dokumen dan struktur direktori yang akan kita gunakan untuk menyimpan file Laravel. Kami akan mengatur Nginx untuk melayani file kami dari direktori `/var/www/laravel`.
Pertama, kita perlu membuat file konfigurasi blok server untuk situs baru.
Masuk ke server sebagai pengguna sudo Anda dan buat file konfigurasi baru. Ingat untuk mengganti example.com dengan nama domain Anda sendiri:
```bash
sudo nano /etc/nginx/sites-available/example.com
```
Tambahkan blok server ke bagian atas file konfigurasi:
```bash
#example.com '>/etc/nginx/sites-available/example.com
server {
listen 80;
listen [::]:80;
root /var/www/html/laravel-app/current/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
}
```
Kedua `listen` arahan di bagian atas memberitahu Nginx port mana untuk didengarkan, dan direktif `root` mendefinisikan document root tempat Laravel akan diinstal. `current/public` di jalur direktori root adalah tautan simbolik yang menunjuk ke rilis terbaru dari aplikasi. Dengan menambahkan direktif `index`, kami memberi tahu Nginx untuk melayani file `index.php` terlebih dahulu sebelum mencari rekan HTML mereka saat meminta lokasi direktori. Direktif `server_name` harus diikuti oleh domain Anda dan alias apa pun.
Kami juga harus memodifikasi cara Nginx menangani permintaan. Ini dilakukan melalui direktif `try_files`. Kami ingin mencoba melayani permintaan sebagai file terlebih dahulu dan, jika tidak dapat menemukan file dengan nama yang benar, harus mencoba untuk melayani file indeks default untuk direktori yang sesuai dengan permintaan. Kegagalan ini, harus melewati permintaan ke file `index.php` sebagai parameter query.
```bash
server {
listen 80;
listen [::]:80;
root /var/www/html/laravel-app/current/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
}
```
Selanjutnya, kita perlu membuat blok yang menangani eksekusi sebenarnya dari setiap file PHP. Ini akan berlaku untuk semua file yang diakhiri dengan .php. Ini akan mencoba file itu sendiri dan kemudian mencoba untuk meneruskannya sebagai parameter ke file `index.php`.
Kami akan mengatur arahan `fastcgi` untuk memberi tahu Nginx untuk menggunakan jalur aplikasi yang sebenarnya (diselesaikan setelah mengikuti tautan simbolis), alih-alih tautan simbolis. Jika Anda tidak menambahkan baris ini ke konfigurasi, jalur tempat tautan simbolik akan di-cache, artinya versi lama aplikasi Anda akan dimuat setelah penyebaran. Tanpa arahan ini, Anda harus membersihkan cache secara manual setelah setiap penerapan dan permintaan ke aplikasi Anda berpotensi gagal. Selain itu, `fastcgi_pass` directive akan memastikan bahwa Nginx menggunakan soket yang digunakan php7-fpm untuk komunikasi dan file `index.php` digunakan sebagai indeks untuk operasi ini.
```bash
# example.com '>/etc/nginx/sites-available/example.com
server {
listen 80;
listen [::]:80;
root /var/www/html/laravel-app/current/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
}
```
Akhirnya, kami ingin memastikan bahwa Nginx tidak mengizinkan akses ke file `.htaccess` tersembunyi. Kami akan melakukan ini dengan menambahkan satu blok lokasi tambahan yang disebut `location ~ / \ an.ht` dan, dalam blok itu, direktif yang menentukan `deny all;`.
Setelah menambahkan blok lokasi terakhir ini, file konfigurasi akan terlihat seperti ini:
```bash
server {
listen 80;
listen [::]:80;
root /var/www/html/laravel-app/current/public;
index index.php index.html index.htm index.nginx-debian.html;
server_name example.com www.example.com;
location / {
try_files $uri $uri/ /index.php?$query_string;
}
location ~ \.php$ {
include snippets/fastcgi-php.conf;
fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
fastcgi_param DOCUMENT_ROOT $realpath_root;
fastcgi_pass unix:/run/php/php7.0-fpm.sock;
}
location ~ /\.ht {
deny all;
}
}
```
Simpan dan tutup file (`CTRL-X`, `Y`, lalu `ENTER`), dan kemudian aktifkan blok server baru dengan membuat tautan simbolis ke direktori yang `sites-enabled` :
```bash
sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/
```
Uji file konfigurasi Anda untuk kesalahan sintaks:
```bash
sudo nginx -t
```
Jika Anda melihat kesalahan apa pun, kembali dan periksa kembali file Anda sebelum melanjutkan. Mulai ulang Nginx untuk mendorong perubahan yang diperlukan:
```bash
sudo systemctl restart nginx
```
Server Nginx sekarang dikonfigurasi. Selanjutnya, kita akan mengkonfigurasi database MySQL aplikasi.
### Mengkonfigurasi MySQL
Setelah instalasi, MySQL menciptakan pengguna root secara default. Pengguna ini memiliki hak yang tidak terbatas, jadi ini adalah praktik keamanan yang buruk untuk menggunakan pengguna root untuk database aplikasi Anda. Sebagai gantinya, kami akan membuat database untuk aplikasi dengan pengguna yang berdedikasi.
Masuk ke konsol MySQL sebagai root:
| 50.16568 | 675 | 0.769816 | ind_Latn | 0.953523 |
8af91aacccaca5adbd705d0ad2fbf41d9fb2ac84 | 565 | md | Markdown | index/a/aguacate.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 26 | 2019-03-21T15:43:32.000Z | 2022-03-12T18:30:35.000Z | index/a/aguacate.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 3 | 2020-05-01T18:15:58.000Z | 2021-04-16T05:51:05.000Z | index/a/aguacate.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 14 | 2019-06-23T22:54:35.000Z | 2021-10-16T02:04:45.000Z | # aguacate
* [Cuban Avocado Watercress And Pineapple Salad Em Ensalada De Aguacate Berro Y Pina Em](../../index/c/cuban-avocado-watercress-and-pineapple-salad-em-ensalada-de-aguacate-berro-y-pina-em-51203230.json)
* [Sopa De Aguacate](../../index/s/sopa-de-aguacate-14320.json)
* [Cuban Avocado Watercress And Pineapple Salad Em Ensalada De Aguacate Berro Y Pina Em 51203230](../../index/c/cuban-avocado-watercress-and-pineapple-salad-em-ensalada-de-aguacate-berro-y-pina-em-51203230.json)
* [Sopa De Aguacate 14320](../../index/s/sopa-de-aguacate-14320.json)
| 80.714286 | 212 | 0.755752 | por_Latn | 0.214618 |
8afa1167842cd1e2a14708f23a5fb435fb45f59a | 4,479 | md | Markdown | articles/cognitive-services/Bing-Image-Search/image-sdk-python-quickstart.md | whitenoiseo/azure-docs | f9c1dc314b2dcb3f2001500d9b8e5c0f533cc6fd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Bing-Image-Search/image-sdk-python-quickstart.md | whitenoiseo/azure-docs | f9c1dc314b2dcb3f2001500d9b8e5c0f533cc6fd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Bing-Image-Search/image-sdk-python-quickstart.md | whitenoiseo/azure-docs | f9c1dc314b2dcb3f2001500d9b8e5c0f533cc6fd | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Image Search SDK Python quickstart | Microsoft Docs
description: Setup for Image Search SDK console application.
titleSuffix: Azure Image Search SDK Python quickstart
services: cognitive-services
author: mikedodaro
manager: rosh
ms.service: cognitive-services
ms.component: bing-image-search
ms.topic: article
ms.date: 02/14/2018
ms.author: v-gedod
---
# Image Search SDK Python quickstart
The Bing Image Search SDK contains the functionality of the REST API for web queries and parsing results.
## Application dependencies
If you don't already have it, install Python. The SDK is compatible with Python 2.7, 3.3, 3.4, 3.5 and 3.6.
The general recommendation for Python development is to use a [virtual environment](https://docs.python.org/3/tutorial/venv.html).
Install and initialize the virtual environment with the [venv module](https://pypi.python.org/pypi/virtualenv). You must install virtualenv for Python 2.7.
```
python -m venv mytestenv
```
Install Bing Image Search SDK dependencies:
```
cd mytestenv
python -m pip install azure-cognitiveservices-search-imagesearch
```
## Image Search client
Get a [Cognitive Services access key](https://azure.microsoft.com/try/cognitive-services/) under *Search*.
Add imports:
```
from azure.cognitiveservices.search.imagesearch import ImageSearchAPI
from azure.cognitiveservices.search.imagesearch.models import ImageType, ImageAspect, ImageInsightModule
from msrest.authentication import CognitiveServicesCredentials
subscription_key = "YOUR-SUBSCRIPTION-KEY"
```
Create an instance of the `CognitiveServicesCredentials`, and instantiate the client:
```
client = ImageSearchAPI(CognitiveServicesCredentials(subscription_key))
```
Search images on query (Yosemite), filtered for animated gifs and wide aspect, then verify number of results and print out insightsToken, thumbnail URL, and web URL of first result.
```
image_results = client.images.search(
query="Yosemite",
image_type=ImageType.animated_gif, # Could be the str "AnimatedGif"
aspect=ImageAspect.wide # Could be the str "Wide"
)
print("\r\nSearch images for \"Yosemite\" results that are animated gifs and wide aspect")
if image_results.value:
first_image_result = image_results.value[0]
print("Image result count: {}".format(len(image_results.value)))
print("First image insights token: {}".format(first_image_result.image_insights_token))
print("First image thumbnail url: {}".format(first_image_result.thumbnail_url))
print("First image web search url: {}".format(first_image_result.web_search_url))
else:
print("Couldn't find image results!")
```
Search images for (Yosemite), filtered for animated gifs and wide aspect, then verify number of results and print out `insightsToken`, `thumbnail url` and `web url` of first result.
```
image_results = client.images.search(
query="Yosemite",
image_type=ImageType.animated_gif, # Could be the str "AnimatedGif"
aspect=ImageAspect.wide # Could be the str "Wide"
)
print("\r\nSearch images for \"Yosemite\" results that are animated gifs and wide aspect")
if image_results.value:
first_image_result = image_results.value[0]
print("Image result count: {}".format(len(image_results.value)))
print("First image insights token: {}".format(first_image_result.image_insights_token))
print("First image thumbnail url: {}".format(first_image_result.thumbnail_url))
print("First image web search url: {}".format(first_image_result.web_search_url))
else:
print("Couldn't find image results!")
```
Get trending results:
```
trending_result = client.images.trending()
print("\r\nSearch trending images")
# Categorires
if trending_result.categories:
first_category = trending_result.categories[0]
print("Category count: {}".format(len(trending_result.categories)))
print("First category title: {}".format(first_category.title))
if first_category.tiles:
first_tile = first_category.tiles[0]
print("Subcategory tile count: {}".format(len(first_category.tiles)))
print("First tile text: {}".format(first_tile.query.text))
print("First tile url: {}".format(first_tile.query.web_search_url))
else:
print("Couldn't find subcategory tiles!")
else:
print("Couldn't find categories!")
```
## Next steps
[Cognitive Services Python SDK samples](https://github.com/Azure-Samples/cognitive-services-python-sdk-samples)
| 40.351351 | 181 | 0.749274 | eng_Latn | 0.528966 |
8afab1efdeffcd55f16f5c1676e4f5feaa34ca0f | 5,162 | md | Markdown | articles/azure-government/documentation-government-developer-guide.md | Norrch2/azure-docs | cbdf4caba06d9a63592245b20996d3e82a424a51 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-11-22T02:45:17.000Z | 2022-03-22T07:08:33.000Z | articles/azure-government/documentation-government-developer-guide.md | Norrch2/azure-docs | cbdf4caba06d9a63592245b20996d3e82a424a51 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-government/documentation-government-developer-guide.md | Norrch2/azure-docs | cbdf4caba06d9a63592245b20996d3e82a424a51 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-09T16:27:39.000Z | 2021-01-09T16:27:39.000Z | ---
title: Azure Government developer guide | Microsoft Docs
description: This article compares features and provides guidance on developing applications for Azure Government.
services: azure-government
cloud: gov
documentationcenter: ''
ms.service: azure-government
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: azure-government
ms.date: 10/10/2020
---
# Azure Government developer guide
Azure Government is a separate instance of the Microsoft Azure service. It addresses the security and compliance needs of United States federal agencies, state and local governments, and their solution providers. Azure Government offers physical isolation from non-US government deployments and provides screened US personnel.
Microsoft provides various tools to help developers create and deploy cloud applications to the global Microsoft Azure service (“global service”) and Microsoft Azure Government services.
When developers create and deploy applications to Azure Government services, as opposed to the global service, they need to know the key differences between the two services.
The specific areas to understand are:
* Setting up and configuring their programming environment
* Configuring endpoints
* Writing applications
* Deploying applications as services to Azure Government
The information in this document summarizes the differences between the two services.
It supplements the information that's available through the following sources:
* [Azure Government](https://www.azure.com/gov "Azure Government") site
* [Microsoft Azure Trust Center](https://www.microsoft.com/trust-center/product-overview "Microsoft Azure Trust Center")
* [Azure Documentation Center](../index.yml)
* [Azure Blogs](https://azure.microsoft.com/blog/ "Azure Blogs")
This content is intended for partners and developers who are deploying to Microsoft Azure Government.
## Guidance for developers
Most of the currently available technical content assumes that applications are being developed for the global service rather than for Azure Government. For this reason, it’s important to be aware of two key differences in applications that you develop for hosting in Azure Government.
* Certain services and features that are in specific regions of the global service might not be available in Azure Government.
* Feature configurations in Azure Government might differ from those in the global service.
- Therefore, it's important to review your sample code, configurations, and steps to ensure that you are building and executing within the Azure Government Cloud Services environment.
Currently, US DoD Central, US DoD East, US Gov Arizona, US Gov Texas, and US Gov Virginia are the regions that support Azure Government. For current regions and available services, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=all®ions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia).
### Quickstarts
Navigate through the links below to get started using Azure Government.
* [Login to Azure Government Portal](documentation-government-get-started-connect-with-portal.md)
* [Connect with PowerShell](documentation-government-get-started-connect-with-ps.md)
* [Connect with CLI](documentation-government-get-started-connect-with-cli.md)
* [Connect with Visual Studio](documentation-government-connect-vs.md)
* [Connect to Azure Storage](documentation-government-get-started-connect-to-storage.md)
* [Connect with Azure SDK for Python](/azure/python/python-sdk-azure-multi-cloud)
### Azure Government Video Library
The [Azure Government video library](https://channel9.msdn.com/blogs/Azure-Government) contains many helpful videos to get you up and running with Azure Government.
## Compliance - Azure Blueprint
The [Azure Blueprint](../governance/blueprints/overview.md) program is designed to facilitate the secure and compliant use of Azure for government agencies and third-party providers building on behalf of government.
For more information on Azure Government Compliance, refer to the [compliance documentation](./documentation-government-plan-compliance.md) and watch this [video](https://channel9.msdn.com/blogs/Azure-Government/Compliance-on-Azure-Government).
## Endpoint mapping
Service endpoints in Azure Government are different than in Azure. For a mapping between Azure and Azure Government endpoints, see [Compare Azure Government and global Azure](compare-azure-government-global-azure.md#guidance-for-developers).
## Next steps
For more information about Azure Government, see the following resources:
* [Sign up for a trial](https://azure.microsoft.com/global-infrastructure/government/request/?ReqType=Trial)
* [Acquiring and accessing Azure Government](https://azure.com/gov)
* [Ask questions via the azure-gov tag in StackOverflow](https://stackoverflow.com/tags/azure-gov)
* [Azure Government Overview](documentation-government-welcome.md)
* [Azure Government Blog](https://blogs.msdn.microsoft.com/azuregov/)
* [Azure Compliance](https://www.microsoft.com/en-us/trustcenter/compliance/complianceofferings) | 66.179487 | 398 | 0.807245 | eng_Latn | 0.975426 |
8afb061f6f2cfb41e05ab873c0e2e59f12fd304c | 1,195 | md | Markdown | content/post/switch-to-atok.md | yjuba/yjuba.github.io | 0f51866eed856e45bc9410b9f34b24fe959363f6 | [
"CC-BY-4.0"
] | null | null | null | content/post/switch-to-atok.md | yjuba/yjuba.github.io | 0f51866eed856e45bc9410b9f34b24fe959363f6 | [
"CC-BY-4.0"
] | 1 | 2020-12-09T04:20:54.000Z | 2022-01-24T14:12:42.000Z | content/post/switch-to-atok.md | yjuba/yjuba.github.io | 0f51866eed856e45bc9410b9f34b24fe959363f6 | [
"CC-BY-4.0"
] | null | null | null | ---
title: "ATOK Passportに乗り換えた"
description: "ATOK Passportに乗り換えた"
date: 2020-09-02T23:22:25+09:00
draft: false
author: "yjuba"
categories:
- "月額課金"
---
2010年頃から長らくGoolge IME/Mozcを愛用していたが、最近ATOK Passportプレミアムに乗り換えた。
乗り換えてから一ヶ月ほど経過したので、簡単にまとめておく。
# ATOK Passportプレミアムを選択した理由
ここ最近、私用や業務に限らず、まとまった量の文書を書く機会が増えたのが理由。
特に、まとまった分量の文章を書くにあたって、以下の2点を重宝している。
- タイプミスを自動で修正してくれる
- 辞書が内蔵されている
導入前は、タイプミスの修正は不要だろうと思っていたが、思ったより上手く修正してくれるので助かっている。
また、辞書に関しては、個人的によく使っていた広辞苑と大辞林が入っているのが気に入っている。
大体の言葉はGoogle検索なり辞書サービスにあたればいいが、気軽に信頼のおける辞書が利用できるのが単純に嬉しい。
一応、購入にあたって買い切りのATOKとATOK Passportベーシックも検討したが、
そこまで高額では無いのと、辞書が内蔵されていることからATOK Passportプレミアムを選択した。
普段はWindows10とその上で仮想マシンとして動作しているDebianを使っているので、
欲を言えばLinux版があると嬉しいのだが、残念ながら暫く前に販売終了となってしまったようだ。
単純に需要が乏しい上にサポートするのが大変なのだろうと思う。
# Google IME/Mozcの良かったところ
固有名詞やスラング、崩した文章についてはGoogle IMEの方が上手く変換してくれたように感じている。
特に崩した文章については、ATOKでは奇妙な変換がされがちなので今でも若干ストレスを感じることがある。
Mozc(Linux版)もあり全体を通して大きな不満無く、長らく利用していたので非常に感謝している。
Mozcに関しては時折上手くIMEが立ち上がらない様なことはあったが、IMEの問題なのかは切り分けていない。
結局面倒になって、Linux上で日本語を書くのを諦めた。
あと、最近Google IMEは活発に開発されていないような雰囲気を感じているのは残念なところ。
開発状況と、なんとなく環境を変えてみたいと感じたのがATOK Passportプレミアムに乗り換えた最大の理由。
ATOKのLinux版同様に色々事情はあると思うが、個人的にはいずれ復活してくれることを願っている。
| 28.452381 | 62 | 0.882008 | jpn_Jpan | 0.983506 |
8afb502371c3ce622a2883e6729e753ddf15414b | 582 | md | Markdown | automatic/README.md | tunisiano187/chocolatey-coreteampackages | 7d4cb3536f200d4013abe8db03042a029eb7763c | [
"Apache-2.0"
] | 145 | 2019-04-26T10:14:58.000Z | 2021-08-18T04:51:09.000Z | automatic/README.md | tunisiano187/chocolatey-coreteampackages | 7d4cb3536f200d4013abe8db03042a029eb7763c | [
"Apache-2.0"
] | 524 | 2019-04-24T13:28:52.000Z | 2021-10-13T05:43:25.000Z | automatic/README.md | tunisiano187/chocolatey-coreteampackages | 7d4cb3536f200d4013abe8db03042a029eb7763c | [
"Apache-2.0"
] | 180 | 2019-04-24T13:23:39.000Z | 2021-10-12T20:11:16.000Z | ## Automatic Folder
This is where you put your Chocolatey packages that are automatically packaged up by [AU](https://chocolatey.org/packages/au) framework.
AU works with packages without automatic package tokens necessary. So you can treat the packages as normal.
Execute `update_all.ps1` in the repository root to run [AU](https://chocolatey.org/packages/au) updater with default options.
**NOTE:** Ensure when you are creating packages for AU, you don't use `--auto` as the packaging files should be normal packages. AU doesn't need the tokens to do replacement.
| 52.909091 | 175 | 0.764605 | eng_Latn | 0.99388 |
8afba60f953f6457b3a089269bd2269f3f7b95e9 | 28 | md | Markdown | README.md | mellora/EliteConsortiumWebSite | 02fbcd3aed5e3794b50445f1881e03945ee73f20 | [
"MIT"
] | null | null | null | README.md | mellora/EliteConsortiumWebSite | 02fbcd3aed5e3794b50445f1881e03945ee73f20 | [
"MIT"
] | null | null | null | README.md | mellora/EliteConsortiumWebSite | 02fbcd3aed5e3794b50445f1881e03945ee73f20 | [
"MIT"
] | null | null | null | # Elite Consortium Web Site
| 14 | 27 | 0.785714 | eng_Latn | 0.324324 |
8afc17ba8ab5c56cf7e1a651929809e2441115f3 | 110 | md | Markdown | dynamic-content/skills/cloud-serverless.md | ErickSGM/portfolio-site | a3eb26700e391fbb1a5ee1684dc3c7e298cca113 | [
"MIT"
] | null | null | null | dynamic-content/skills/cloud-serverless.md | ErickSGM/portfolio-site | a3eb26700e391fbb1a5ee1684dc3c7e298cca113 | [
"MIT"
] | null | null | null | dynamic-content/skills/cloud-serverless.md | ErickSGM/portfolio-site | a3eb26700e391fbb1a5ee1684dc3c7e298cca113 | [
"MIT"
] | null | null | null | ---
templateKey: skill-item
title: Cloud & Serverless
type: Backend
value: 60
img: /img/skills/cloud.png
---
| 12.222222 | 26 | 0.709091 | swe_Latn | 0.141071 |
8afc2797944611a16d5246934ae4cb1abacf078d | 209 | markdown | Markdown | posts/ruby.markdown | davidpdrsn/staticish | d6da29e1025e25b3083a66e06b5ec34ab99bdc41 | [
"BSD-3-Clause"
] | null | null | null | posts/ruby.markdown | davidpdrsn/staticish | d6da29e1025e25b3083a66e06b5ec34ab99bdc41 | [
"BSD-3-Clause"
] | null | null | null | posts/ruby.markdown | davidpdrsn/staticish | d6da29e1025e25b3083a66e06b5ec34ab99bdc41 | [
"BSD-3-Clause"
] | null | null | null | # Ruby
Ruby is another language that is kinda cool
```ruby
class Person
def initialize(name)
@name = name
end
def speak
puts "My name is #{name}"
end
private
attr_reader :name
end
```
| 10.45 | 43 | 0.645933 | eng_Latn | 0.999112 |
8afc4e321e06b65bc44752bf9e5bfcd3f3295cfb | 60,959 | md | Markdown | articles/backup/backup-azure-vms-automation.md | salem84/azure-docs.it-it | 3ec6a13aebb82936591c7fc479f084be9bb8776d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/backup/backup-azure-vms-automation.md | salem84/azure-docs.it-it | 3ec6a13aebb82936591c7fc479f084be9bb8776d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/backup/backup-azure-vms-automation.md | salem84/azure-docs.it-it | 3ec6a13aebb82936591c7fc479f084be9bb8776d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Eseguire il backup e il ripristino di macchine virtuali di Azure tramite backup di Azure con PowerShell
description: Viene descritto come eseguire il backup e il ripristino di macchine virtuali di Azure tramite backup di Azure con PowerShell
author: dcurwin
manager: carmonm
ms.service: backup
ms.topic: conceptual
ms.date: 09/11/2019
ms.author: dacurwin
ms.openlocfilehash: f1aa2c4b6fbe554304bfff239c6220d245fe7467
ms.sourcegitcommit: 3fa4384af35c64f6674f40e0d4128e1274083487
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 09/24/2019
ms.locfileid: "71219462"
---
# <a name="back-up-and-restore-azure-vms-with-powershell"></a>Eseguire il backup e il ripristino di VM di Azure con PowerShell
Questo articolo illustra come eseguire il backup e il ripristino di una macchina virtuale di Azure in un insieme di credenziali di servizi di ripristino di [backup di Azure](backup-overview.md) usando i cmdlet di PowerShell.
In questo articolo viene spiegato come:
> [!div class="checklist"]
> * Creare un insieme di credenziali di Servizi di ripristino e impostare il contesto dell'insieme di credenziali.
> * Definire un criterio di backup
> * Applicare un criterio di backup per proteggere più macchine virtuali
> * Attivare un processo di backup su richiesta per le macchine virtuali protette Prima di poter eseguire il backup o proteggere una macchina virtuale, è necessario completare i [prerequisiti](backup-azure-arm-vms-prepare.md) per preparare l'ambiente per la protezione delle macchine virtuali.
## <a name="before-you-start"></a>Prima di iniziare
- [Altre](backup-azure-recovery-services-vault-overview.md) informazioni sugli insiemi di credenziali dei servizi di ripristino.
- [Esaminare](backup-architecture.md#architecture-direct-backup-of-azure-vms) l'architettura per il backup delle macchine virtuali di Azure, [ottenere informazioni sul](backup-azure-vms-introduction.md) processo di backup ed [esaminare](backup-support-matrix-iaas.md) il supporto, le limitazioni e i prerequisiti.
- Esaminare la gerarchia di oggetti di PowerShell per i servizi di ripristino.
## <a name="recovery-services-object-hierarchy"></a>Gerarchia di oggetti dei servizi di ripristino
La gerarchia di oggetti viene riepilogata nel diagramma seguente.

Esaminare il riferimento al [cmdlet](https://docs.microsoft.com/powershell/module/Az.RecoveryServices/?view=azps-1.4.0) **AZ. RecoveryServices** nella libreria di Azure.
## <a name="set-up-and-register"></a>Configurare e registrare
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
Per iniziare:
1. [Scaricare la versione più recente di PowerShell](https://docs.microsoft.com/powershell/azure/install-az-ps)
2. Cercare i cmdlet PowerShell di Azure Backup disponibili digitando il comando seguente:
```powershell
Get-Command *azrecoveryservices*
```
Verranno visualizzati alias e cmdlet per Backup di Azure, Azure Site Recovery e l'insieme di credenziali di Servizi di ripristino. L'immagine seguente è un esempio di quanto verrà visualizzato. Non è l'elenco completo dei cmdlet.

3. Accedere all'account Azure tramite **Connect-AzAccount**. Questo cmdlet visualizza una pagina Web che richiede le credenziali dell'account:
* In alternativa, è possibile includere le credenziali dell'account come parametro nel cmdlet **Connect-AzAccount**, usando il parametro **-Credential**.
* Se si è un partner CSP che opera per conto di un tenant, è necessario specificare il cliente come tenant usando l'ID tenant o il nome di dominio primario del tenant. Esempio: **Connect-AzAccount -Tenant "fabrikam.com"**
4. Associare la sottoscrizione che si vuole usare all'account perché un account può avere molte sottoscrizioni:
```powershell
Select-AzSubscription -SubscriptionName $SubscriptionName
```
5. Se si usa backup di Azure per la prima volta, è necessario usare il cmdlet **[Register-AzResourceProvider](https://docs.microsoft.com/powershell/module/az.resources/register-azresourceprovider)** per registrare il provider di servizi di ripristino di Azure con la sottoscrizione.
```powershell
Register-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"
```
6. È possibile verificare che i provider siano stati registrati correttamente usando i comandi seguenti:
```powershell
Get-AzResourceProvider -ProviderNamespace "Microsoft.RecoveryServices"
```
Nell'output del comando **RegistrationState** dovrebbe essere modificato in **Registered**. In caso contrario, è sufficiente eseguire di nuovo il cmdlet **[Register-AzResourceProvider](https://docs.microsoft.com/powershell/module/az.resources/register-azresourceprovider)** .
## <a name="create-a-recovery-services-vault"></a>Creare un insieme di credenziali di Servizi di ripristino
Nei passaggi seguenti viene descritto come creare un insieme di credenziali dei servizi di ripristino. Un insieme di credenziali dei servizi di ripristino è diverso da un insieme di credenziali di backup.
1. L'insieme di credenziali dei Servizi di ripristino è una risorsa Resource Manager, pertanto è necessario inserirlo all'interno di un gruppo di risorse. È possibile usare un gruppo di risorse esistente o creare un gruppo di risorse con il cmdlet **[New-AzResourceGroup](https://docs.microsoft.com/powershell/module/az.resources/new-azresourcegroup)** . Quando si crea un nuovo gruppo di risorse, è necessario specificare il nome e il percorso per il gruppo di risorse.
```powershell
New-AzResourceGroup -Name "test-rg" -Location "West US"
```
2. Usare il cmdlet [New-AzRecoveryServicesVault](https://docs.microsoft.com/powershell/module/az.recoveryservices/new-azrecoveryservicesvault?view=azps-1.4.0) per creare un nuovo insieme di credenziali di Servizi di ripristino. Assicurarsi di specificare per l'insieme di credenziali lo stesso percorso usato per il gruppo di risorse.
```powershell
New-AzRecoveryServicesVault -Name "testvault" -ResourceGroupName "test-rg" -Location "West US"
```
3. Specificare il tipo di ridondanza di archiviazione da usare, ad esempio [archiviazione con ridondanza locale (LRS)](../storage/common/storage-redundancy-lrs.md) o [archiviazione con ridondanza geografica (GRS)](../storage/common/storage-redundancy-grs.md). Nell'esempio seguente l'opzione BackupStorageRedundancy per testvault è impostata su GeoRedundant.
```powershell
$vault1 = Get-AzRecoveryServicesVault -Name "testvault"
Set-AzRecoveryServicesBackupProperty -Vault $vault1 -BackupStorageRedundancy GeoRedundant
```
> [!TIP]
> Molti cmdlet di Backup di Azure richiedono l'oggetto dell'insieme di credenziali dei servizi di ripristino come input. Per questo motivo, è utile archiviare l'oggetto dell'insieme di credenziali dei servizi di ripristino di Backup in una variabile.
>
>
## <a name="view-the-vaults-in-a-subscription"></a>Visualizzare gli insiemi di credenziali in un abbonamento
Per visualizzare tutti gli insiemi di credenziali disponibili nella sottoscrizione, usare [Get-AzRecoveryServicesVault](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesvault?view=azps-1.4.0):
```powershell
Get-AzRecoveryServicesVault
```
L'output è simile all'esempio seguente. Si noti che vengono inclusi gli elementi ResourceGroupName e Location associati.
```output
Name : Contoso-vault
ID : /subscriptions/1234
Type : Microsoft.RecoveryServices/vaults
Location : WestUS
ResourceGroupName : Contoso-docs-rg
SubscriptionId : 1234-567f-8910-abc
Properties : Microsoft.Azure.Commands.RecoveryServices.ARSVaultProperties
```
## <a name="back-up-azure-vms"></a>Eseguire il backup delle VM di Azure
Usare un insieme di credenziali di Servizi di ripristino per proteggere le macchine virtuali. Prima di applicare la protezione, impostare il contesto dell'insieme di credenziali (tipo di dati protetti nell'insieme di credenziali) e verificare i criteri di protezione. I criteri di protezione rappresentano la pianificazione per l'esecuzione dei processi di backup e il periodo di conservazione di ogni snapshot del backup.
### <a name="set-vault-context"></a>Impostazione del contesto dell'insieme di credenziali
Prima di abilitare la protezione in una macchina virtuale, usare [set-AzRecoveryServicesVaultContext](https://docs.microsoft.com/powershell/module/az.recoveryservices/set-azrecoveryservicesvaultcontext?view=azps-1.4.0) per impostare il contesto dell'insieme di credenziali. Una volta impostato il contesto dell'insieme di credenziali, si applica a tutti i cmdlet successivi. L'esempio seguente imposta il contesto per l'insieme di credenziali, *testvault*.
```powershell
Get-AzRecoveryServicesVault -Name "testvault" -ResourceGroupName "Contoso-docs-rg" | Set-AzRecoveryServicesVaultContext
```
### <a name="fetch-the-vault-id"></a>Recuperare l'ID dell'insieme di credenziali
Si prevede di deprecare l'impostazione del contesto dell'insieme di credenziali in base alle linee guida Azure PowerShell. È invece possibile archiviare o recuperare l'ID dell'insieme di credenziali e passarlo ai comandi pertinenti. Quindi, se il contesto dell'insieme di credenziali non è stato impostato o si vuole specificare il comando da eseguire per un determinato insieme di credenziali, passare l'ID dell'insieme di credenziali come "-vaultID" a tutti i comandi pertinenti, come indicato di seguito:
```powershell
$targetVault = Get-AzRecoveryServicesVault -ResourceGroupName "Contoso-docs-rg" -Name "testvault"
$targetVault.ID
```
Oppure
```powershell
$targetVaultID = Get-AzRecoveryServicesVault -ResourceGroupName "Contoso-docs-rg" -Name "testvault" | select -ExpandProperty ID
```
### <a name="modifying-storage-replication-settings"></a>Modifica delle impostazioni di replica di archiviazione
Usare il comando [set-AzRecoveryServicesBackupProperty](https://docs.microsoft.com/powershell/module/az.recoveryservices/Set-AzRecoveryServicesBackupProperty) per impostare la configurazione della replica di archiviazione dell'insieme di credenziali su con ridondanza locale/GRS
```powershell
Set-AzRecoveryServicesBackupProperty -Vault $targetVault -BackupStorageRedundancy GeoRedundant/LocallyRedundant
```
> [!NOTE]
> La ridondanza dell'archiviazione può essere modificata solo se non ci sono elementi di backup protetti nell'insieme di credenziali.
### <a name="create-a-protection-policy"></a>Creare i criteri di protezione
Quando si crea un insieme di credenziali di Servizi di ripristino, questo è dotato di protezione predefinita e criteri di conservazione. I criteri di protezione predefinita attivano un processo di backup ogni giorno all'ora specificata. I criteri di conservazione predefiniti consentono di mantenere il punto di recupero giornaliero per 30 giorni. È possibile usare i criteri predefiniti per proteggere rapidamente la macchina virtuale e modificare i criteri in un secondo momento con dettagli diversi.
Usare **[Get-AzRecoveryServicesBackupProtectionPolicy](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupprotectionpolicy)** per visualizzare i criteri di protezione disponibili nell'insieme di credenziali. È possibile usare questo cmdlet per ottenere un criterio specifico o per visualizzare i criteri associati a un tipo di carico di lavoro. L'esempio seguente ottiene i criteri per il tipo di carico di lavoro AzureVM.
```powershell
Get-AzRecoveryServicesBackupProtectionPolicy -WorkloadType "AzureVM" -VaultId $targetVault.ID
```
L'output è simile all'esempio seguente:
```output
Name WorkloadType BackupManagementType BackupTime DaysOfWeek
---- ------------ -------------------- ---------- ----------
DefaultPolicy AzureVM AzureVM 4/14/2016 5:00:00 PM
```
> [!NOTE]
> Il fuso orario del campo BackupTime in PowerShell è UTC. Tuttavia, l'orario di backup nel portale di Azure è allineato al fuso orario locale.
>
>
I criteri di protezione del backup sono associati almeno a un criterio di conservazione. I criteri di conservazione definiscono per quanto tempo un punto di ripristino viene mantenuto prima di essere eliminato.
- Per visualizzare il criterio di conservazione predefinito usare [Get-AzRecoveryServicesBackupRetentionPolicyObject](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupretentionpolicyobject).
- Nello stesso modo, per ottenere il criterio di pianificazione predefinito usare [Get-AzRecoveryServicesBackupSchedulePolicyObject](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupschedulepolicyobject).
- Il cmdlet [New-AzRecoveryServicesBackupProtectionPolicy](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupprotectionpolicy) crea un oggetto di PowerShell che contiene le informazioni relative ai criteri di backup.
- Gli oggetti Criteri di pianificazione e conservazione vengono usati come input per il cmdlet New-AzRecoveryServicesBackupProtectionPolicy.
Per impostazione predefinita, nell'oggetto Criteri di pianificazione è definita un'ora di inizio. Usare l'esempio seguente per modificare l'ora di inizio con l'ora di inizio desiderata. L'ora di inizio desiderata dovrebbe essere anche UTC. Nell'esempio seguente si presuppone che l'ora di inizio desiderata sia 01:00 UTC per i backup giornalieri.
```powershell
$schPol = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType "AzureVM" -VaultId $targetVault.ID
$UtcTime = Get-Date -Date "2019-03-20 01:00:00Z"
$UtcTime = $UtcTime.ToUniversalTime()
$schpol.ScheduleRunTimes[0] = $UtcTime
```
> [!IMPORTANT]
> È necessario specificare l'ora di inizio solo in più di 30 minuti. Nell'esempio precedente può essere solo "01:00:00" o "02:30:00". L'ora di inizio non può essere "01:15:00"
Nell'esempio seguente i criteri di pianificazione e i criteri di conservazione vengono archiviati nelle variabili. L'esempio usa tali variabili per definire i parametri durante la creazione di un criterio di protezione, *NewPolicy*.
```powershell
$retPol = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType "AzureVM" -VaultId $targetVault.ID
New-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy" -WorkloadType "AzureVM" -RetentionPolicy $retPol -SchedulePolicy $schPol -VaultId $targetVault.ID
```
L'output è simile all'esempio seguente:
```output
Name WorkloadType BackupManagementType BackupTime DaysOfWeek
---- ------------ -------------------- ---------- ----------
NewPolicy AzureVM AzureVM 4/24/2016 1:30:00 AM
```
### <a name="enable-protection"></a>Abilitare la protezione
Dopo aver definito i criteri di protezione è necessario abilitarli per un elemento. Usare [Enable-AzRecoveryServicesBackupProtection](https://docs.microsoft.com/powershell/module/az.recoveryservices/enable-azrecoveryservicesbackupprotection) per abilitare la protezione. Per abilitare la protezione sono necessari due oggetti, l'elemento e i criteri. Dopo aver associato i criteri all'insieme di credenziali, il flusso di lavoro di backup verrà attivato al momento definito nella pianificazione dei criteri.
> [!IMPORTANT]
> Quando si usa PS per abilitare il backup per più macchine virtuali contemporaneamente, assicurarsi che a un singolo criterio non siano associate più di 100 macchine virtuali. Si tratta di una [procedura consigliata](https://docs.microsoft.com/azure/backup/backup-azure-vm-backup-faq#is-there-a-limit-on-number-of-vms-that-can-beassociated-with-a-same-backup-policy). Attualmente, il client PS non si blocca in modo esplicito se sono presenti più di 100 macchine virtuali, ma per il futuro è pianificata l'aggiunta del controllo.
Gli esempi seguenti abilitano la protezione per l'elemento V2VM usando i criteri NewPolicy. Gli esempi variano a seconda del fatto che la macchina virtuale sia crittografata o meno e in base al tipo di crittografia.
Per abilitare la protezione in **macchine virtuali di Resource Manager non crittografate**:
```powershell
$pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy" -VaultId $targetVault.ID
Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -VaultId $targetVault.ID
```
Per abilitare la protezione in macchine virtuali crittografate (crittografate con l'uso di la chiave di sicurezza () e KEK), è necessario concedere al servizio backup di Azure l'autorizzazione per la lettura di chiavi e segreti da Key Vault
```powershell
Set-AzKeyVaultAccessPolicy -VaultName "KeyVaultName" -ResourceGroupName "RGNameOfKeyVault" -PermissionsToKeys backup,get,list -PermissionsToSecrets get,list -ServicePrincipalName 262044b1-e2ce-469f-a196-69ab7ada62d3
$pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy" -VaultId $targetVault.ID
Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -VaultId $targetVault.ID
```
Per abilitare la protezione in **macchine virtuali crittografate solo con BEK** è necessario concedere al servizio Backup di Azure le autorizzazioni necessarie per la lettura dei segreti dall'insieme di credenziali delle chiavi.
```powershell
Set-AzKeyVaultAccessPolicy -VaultName "KeyVaultName" -ResourceGroupName "RGNameOfKeyVault" -PermissionsToSecrets backup,get,list -ServicePrincipalName 262044b1-e2ce-469f-a196-69ab7ada62d3
$pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy" -VaultId $targetVault.ID
Enable-AzRecoveryServicesBackupProtection -Policy $pol -Name "V2VM" -ResourceGroupName "RGName1" -VaultId $targetVault.ID
```
> [!NOTE]
> Se si usa il cloud di Azure per enti pubblici, usare il valore ff281ffe-705c-4f53-9F37-a40e6f2c68f3 per il parametro ServicePrincipalName nel cmdlet [set-AzKeyVaultAccessPolicy](https://docs.microsoft.com/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy) .
>
## <a name="monitoring-a-backup-job"></a>Monitoraggio di un processo di backup
È possibile monitorare le operazioni a esecuzione prolungata, ad esempio i processi di backup, senza usare il portale di Azure. Per ottenere lo stato di un processo in corso, usare il cmdlet [Get-AzRecoveryservicesBackupJob](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupjob) . Questo cmdlet ottiene i processi di backup per un insieme di credenziali specifico e tale insieme di credenziali è indicato nel relativo contesto. L'esempio seguente ottiene lo stato di un processo in corso sotto forma di matrice e archivia lo stato nella variabile $joblist.
```powershell
$joblist = Get-AzRecoveryservicesBackupJob –Status "InProgress" -VaultId $targetVault.ID
$joblist[0]
```
L'output è simile all'esempio seguente:
```output
WorkloadName Operation Status StartTime EndTime JobID
------------ --------- ------ --------- ------- ----------
V2VM Backup InProgress 4/23/2016 5:00:30 PM cf4b3ef5-2fac-4c8e-a215-d2eba4124f27
```
Anziché eseguire il polling di questi processi per il completamento, che non è un codice aggiuntivo superfluo, usare il cmdlet [wait-AzRecoveryServicesBackupJob](https://docs.microsoft.com/powershell/module/az.recoveryservices/wait-azrecoveryservicesbackupjob) . Questo cmdlet sospende l'esecuzione fino al completamento del processo o fino a quando non viene raggiunto il valore di timeout specificato.
```powershell
Wait-AzRecoveryServicesBackupJob -Job $joblist[0] -Timeout 43200 -VaultId $targetVault.ID
```
## <a name="manage-azure-vm-backups"></a>Gestire i backup delle macchine virtuali di Azure
### <a name="modify-a-protection-policy"></a>Modificare i criteri di protezione
Per modificare i criteri di protezione, usare [set-AzRecoveryServicesBackupProtectionPolicy](https://docs.microsoft.com/powershell/module/az.recoveryservices/set-azrecoveryservicesbackupprotectionpolicy) per modificare gli oggetti SchedulePolicy o RetentionPolicy.
#### <a name="modifying-scheduled-time"></a>Modifica dell'ora pianificata
Quando si crea un criterio di protezione, per impostazione predefinita viene assegnata un'ora di inizio. Negli esempi seguenti viene illustrato come modificare l'ora di inizio di un criterio di protezione dati.
````powershell
$SchPol = Get-AzRecoveryServicesBackupSchedulePolicyObject -WorkloadType "AzureVM"
$UtcTime = Get-Date -Date "2019-03-20 01:00:00Z" (This is the time that the customer wants to start the backup)
$UtcTime = $UtcTime.ToUniversalTime()
$SchPol.ScheduleRunTimes[0] = $UtcTime
$pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy" -VaultId $targetVault.ID
Set-AzRecoveryServicesBackupProtectionPolicy -Policy $pol -SchedulePolicy $SchPol -VaultId $targetVault.ID
````
#### <a name="modifying-retention"></a>Modifica della conservazione
Nell'esempio seguente viene modificato il punto di recupero a 365 giorni.
```powershell
$retPol = Get-AzRecoveryServicesBackupRetentionPolicyObject -WorkloadType "AzureVM"
$retPol.DailySchedule.DurationCountInDays = 365
$pol = Get-AzRecoveryServicesBackupProtectionPolicy -Name "NewPolicy" -VaultId $targetVault.ID
Set-AzRecoveryServicesBackupProtectionPolicy -Policy $pol -RetentionPolicy $RetPol -VaultId $targetVault.ID
```
#### <a name="configuring-instant-restore-snapshot-retention"></a>Configurazione della conservazione snapshot per il ripristino immediato
> [!NOTE]
> Da AZ PS versione 1.6.0 in poi, è possibile aggiornare il periodo di conservazione degli snapshot di ripristino istantaneo nei criteri usando PowerShell
````powershell
$bkpPol = Get-AzRecoveryServicesBackupProtectionPolicy -WorkloadType "AzureVM" -VaultId $targetVault.ID
$bkpPol.SnapshotRetentionInDays=7
Set-AzRecoveryServicesBackupProtectionPolicy -policy $bkpPol -VaultId $targetVault.ID
````
Il valore predefinito sarà 2, l'utente può impostare il valore con un minimo di 1 e un massimo di 5. Per i criteri di backup settimanali, il periodo è impostato su 5 e non può essere modificato.
### <a name="trigger-a-backup"></a>Attivare un backup
Usare [backup-AzRecoveryServicesBackupItem](https://docs.microsoft.com/powershell/module/az.recoveryservices/backup-azrecoveryservicesbackupitem) per attivare un processo di backup. Se si tratta del backup iniziale, è un backup completo. I backup successivi saranno incrementali. Nell'esempio seguente viene mantenuto un backup della macchina virtuale per 60 giorni.
```powershell
$namedContainer = Get-AzRecoveryServicesBackupContainer -ContainerType "AzureVM" -Status "Registered" -FriendlyName "V2VM" -VaultId $targetVault.ID
$item = Get-AzRecoveryServicesBackupItem -Container $namedContainer -WorkloadType "AzureVM" -VaultId $targetVault.ID
$endDate = (Get-Date).AddDays(60).ToUniversalTime()
$job = Backup-AzRecoveryServicesBackupItem -Item $item -VaultId $targetVault.ID -ExpiryDateTimeUTC $endDate
```
L'output è simile all'esempio seguente:
```output
WorkloadName Operation Status StartTime EndTime JobID
------------ --------- ------ --------- ------- ----------
V2VM Backup InProgress 4/23/2016 5:00:30 PM cf4b3ef5-2fac-4c8e-a215-d2eba4124f27
```
> [!NOTE]
> Il fuso orario dei campi StartTime ed EndTime in PowerShell è UTC. Tuttavia, l'orario visualizzato nel portale di Azure è allineato al fuso orario locale.
>
>
### <a name="change-policy-for-backup-items"></a>Modificare i criteri per gli elementi di backup
L'utente può modificare i criteri esistenti o modificare i criteri dell'elemento di cui è stato eseguito il backup da Policy1 a Policy2. Per modificare i criteri per un elemento di cui è stato eseguito il backup, recuperare i criteri pertinenti ed eseguire il backup dell'elemento e usare il comando [Enable-AzRecoveryServices](https://docs.microsoft.com/powershell/module/az.recoveryservices/Enable-AzRecoveryServicesBackupProtection?view=azps-1.5.0) con l'elemento di backup come parametro.
````powershell
$TargetPol1 = Get-AzRecoveryServicesBackupProtectionPolicy -Name <PolicyName> -VaultId $targetVault.ID
$anotherBkpItem = Get-AzRecoveryServicesBackupItem -WorkloadType AzureVM -BackupManagementType AzureVM -Name "<BackupItemName>" -VaultId $targetVault.ID
Enable-AzRecoveryServicesBackupProtection -Item $anotherBkpItem -Policy $TargetPol1 -VaultId $targetVault.ID
````
Il comando attende fino al completamento del backup di configurazione e restituisce il seguente output.
```powershell
WorkloadName Operation Status StartTime EndTime JobID
------------ --------- ------ --------- ------- -----
TestVM ConfigureBackup Completed 3/18/2019 8:00:21 PM 3/18/2019 8:02:16 PM 654e8aa2-4096-402b-b5a9-e5e71a496c4e
```
### <a name="stop-protection"></a>Arresta protezione
#### <a name="retain-data"></a>Conserva dati
Se l'utente desidera arrestare la protezione, può usare il cmdlet [Disable-AzRecoveryServicesBackupProtection](https://docs.microsoft.com/powershell/module/az.recoveryservices/Disable-AzRecoveryServicesBackupProtection?view=azps-1.5.0) PS. Questa operazione arresterà i backup pianificati, ma i dati di cui è stato eseguito il backup fino a questo momento verranno conservati per sempre.
````powershell
$bkpItem = Get-AzRecoveryServicesBackupItem -BackupManagementType AzureVM -WorkloadType AzureVM -Name "<backup item name>" -VaultId $targetVault.ID
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $targetVault.ID
````
#### <a name="delete-backup-data"></a>Elimina dati di backup
Per rimuovere completamente i dati di backup archiviati nell'insieme di credenziali, è sufficiente aggiungere il flag '-RemoveRecoveryPoints '/passare al [comando di protezione ' Disable '](#retain-data).
````powershell
Disable-AzRecoveryServicesBackupProtection -Item $bkpItem -VaultId $targetVault.ID -RemoveRecoveryPoints
````
## <a name="restore-an-azure-vm"></a>Ripristinare una macchina virtuale di Azure
Tra il ripristino di una macchina virtuale tramite il portale di Azure e il ripristino di una macchina virtuale tramite PowerShell c'è un'importante differenza. Con PowerShell l'operazione di ripristino è completata quando sono stati creati i dischi e le informazioni sulla configurazione dal punto di ripristino L'operazione di ripristino non crea la macchina virtuale. Per creare una macchina virtuale dal disco, vedere la sezione [Creare la macchina virtuale da dischi ripristinati](backup-azure-vms-automation.md#create-a-vm-from-restored-disks). Se non si vuole ripristinare l'intera macchina virtuale, ma si vuole ripristinare o recuperare alcuni file da un backup della macchina virtuale di Azure, vedere la [sezione relativa al recupero di file](backup-azure-vms-automation.md#restore-files-from-an-azure-vm-backup).
> [!Tip]
> L'operazione di ripristino non crea la macchina virtuale.
>
>
Il grafico seguente mostra la gerarchia degli oggetti da RecoveryServicesVault fino a BackupRecoveryPoint.

Per ripristinare i dati di backup, identificare l'elemento sottoposto a backup e il punto di ripristino che contiene i dati temporizzati. Usare [Restore-AzRecoveryServicesBackupItem](https://docs.microsoft.com/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem) per ripristinare i dati dall'insieme di credenziali al proprio account.
I passaggi di base per ripristinare una macchina virtuale di Azure sono:
* Selezionare la macchina virtuale.
* Scegliere un punto di ripristino.
* Ripristinare i dischi.
* Creare la macchina virtuale dai dischi archiviati.
### <a name="select-the-vm"></a>Selezionare la macchina virtuale
Per ottenere l'oggetto di PowerShell che identifica l'elemento di backup corretto, iniziare dal contenitore nell'insieme di credenziali e procedere verso il basso nella gerarchia degli oggetti. Per selezionare il contenitore che rappresenta la macchina virtuale, usare il cmdlet [Get-AzRecoveryServicesBackupContainer](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupcontainer) e inviarlo tramite pipe al cmdlet [Get-AzRecoveryServicesBackupItem](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupitem) .
```powershell
$namedContainer = Get-AzRecoveryServicesBackupContainer -ContainerType "AzureVM" -Status "Registered" -FriendlyName "V2VM" -VaultId $targetVault.ID
$backupitem = Get-AzRecoveryServicesBackupItem -Container $namedContainer -WorkloadType "AzureVM" -VaultId $targetVault.ID
```
### <a name="choose-a-recovery-point"></a>Scegliere un punto di ripristino
Usare il cmdlet [Get-AzRecoveryServicesBackupRecoveryPoint](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackuprecoverypoint) per elencare tutti i punti di recupero dell'elemento di backup. Quindi scegliere il punto di ripristino per ripristinare. Se non si sa quale punto di ripristino usare, è consigliabile scegliere il punto più recente RecoveryPointType = AppConsistent nell'elenco.
Nello script seguente la variabile **$rp** è una matrice di punti di recupero per l'elemento di backup selezionato negli ultimi sette giorni. La matrice viene ordinata in ordine inverso di tempo con il punto di ripristino più recente in posizione 0 nell'indice. Per scegliere il punto di ripristino, usare l'indicizzazione standard della matrice di PowerShell. Nell'esempio $rp [0] seleziona il punto di ripristino più recente.
```powershell
$startDate = (Get-Date).AddDays(-7)
$endDate = Get-Date
$rp = Get-AzRecoveryServicesBackupRecoveryPoint -Item $backupitem -StartDate $startdate.ToUniversalTime() -EndDate $enddate.ToUniversalTime() -VaultId $targetVault.ID
$rp[0]
```
L'output è simile all'esempio seguente:
```output
RecoveryPointAdditionalInfo :
SourceVMStorageType : NormalStorage
Name : 15260861925810
ItemName : VM;iaasvmcontainer;RGName1;V2VM
RecoveryPointId : /subscriptions/XX/resourceGroups/ RGName1/providers/Microsoft.RecoveryServices/vaults/testvault/backupFabrics/Azure/protectionContainers/IaasVMContainer;iaasvmcontainer;RGName1;V2VM/protectedItems/VM;iaasvmcontainer; RGName1;V2VM/recoveryPoints/15260861925810
RecoveryPointType : AppConsistent
RecoveryPointTime : 4/23/2016 5:02:04 PM
WorkloadType : AzureVM
ContainerName : IaasVMContainer;iaasvmcontainer; RGName1;V2VM
ContainerType : AzureVM
BackupManagementType : AzureVM
```
### <a name="restore-the-disks"></a>Ripristinare i dischi
Usare il cmdlet [Restore-AzRecoveryServicesBackupItem](https://docs.microsoft.com/powershell/module/az.recoveryservices/restore-azrecoveryservicesbackupitem) per ripristinare i dati e la configurazione di un elemento di backup in un punto di ripristino. Dopo aver identificato un punto di ripristino, usarlo come valore per il parametro **-RecoveryPoint**. Nell'esempio precedente **$rp[0]** è il punto di ripristino da usare. Nell'esempio di codice seguente **$rp[0]** è il punto di ripristino da usare per il ripristino del disco.
Per ripristinare i dischi e le informazioni di configurazione:
```powershell
$restorejob = Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName "DestAccount" -StorageAccountResourceGroupName "DestRG" -VaultId $targetVault.ID
$restorejob
```
#### <a name="restore-managed-disks"></a>Ripristinare i dischi gestiti
> [!NOTE]
> Se la macchina virtuale sottoposta a backup include dischi gestiti e si vuole ripristinarli come dischi gestiti, Microsoft ha introdotto questa funzionalità dal modulo RM 6.7.0 di Azure PowerShell in poi
>
>
Indicare un parametro aggiuntivo **TargetResourceGroupName** per specificare il gruppo di risorse in cui verranno ripristinati i dischi gestiti.
> [!NOTE]
> È consigliabile usare il parametro **TargetResourceGroupName** per il ripristino dei dischi gestiti poiché consente di ottenere miglioramenti significativi in termini di prestazioni. Inoltre, a partire dal modulo AZ 1.0 di Azure PowerShell, questo parametro è obbligatorio in caso di ripristino con dischi gestiti
>
>
```powershell
$restorejob = Restore-AzRecoveryServicesBackupItem -RecoveryPoint $rp[0] -StorageAccountName "DestAccount" -StorageAccountResourceGroupName "DestRG" -TargetResourceGroupName "DestRGforManagedDisks" -VaultId $targetVault.ID
```
Il file **VMConfig.JSON** verrà ripristinato nell'account di archiviazione e i dischi gestiti verranno ripristinati nel gruppo di risorse di destinazione specificato.
L'output è simile all'esempio seguente:
```output
WorkloadName Operation Status StartTime EndTime JobID
------------ --------- ------ --------- ------- ----------
V2VM Restore InProgress 4/23/2016 5:00:30 PM cf4b3ef5-2fac-4c8e-a215-d2eba4124f27
```
Usare il cmdlet [wait-AzRecoveryServicesBackupJob](https://docs.microsoft.com/powershell/module/az.recoveryservices/wait-azrecoveryservicesbackupjob) per attendere il completamento del processo di ripristino.
```powershell
Wait-AzRecoveryServicesBackupJob -Job $restorejob -Timeout 43200
```
Al termine del processo di ripristino, usare il cmdlet [Get-AzRecoveryServicesBackupJobDetails](https://docs.microsoft.com/powershell/module/az.recoveryservices/wait-azrecoveryservicesbackupjob) per ottenere i dettagli dell'operazione di ripristino. La proprietà JobDetails contiene le informazioni necessarie per ricreare la macchina virtuale.
```powershell
$restorejob = Get-AzRecoveryServicesBackupJob -Job $restorejob -VaultId $targetVault.ID
$details = Get-AzRecoveryServicesBackupJobDetails -Job $restorejob -VaultId $targetVault.ID
```
Dopo aver ripristinato i dischi, passare alla sezione successiva per creare la macchina virtuale.
## <a name="replace-disks-in-azure-vm"></a>Sostituire i dischi nella macchina virtuale di Azure
Per sostituire i dischi e le informazioni di configurazione, seguire questa procedura:
- Passaggio 1: [Ripristinare i dischi](backup-azure-vms-automation.md#restore-the-disks)
- Passaggio 2: [Scollegare un disco dati tramite PowerShell](https://docs.microsoft.com/azure/virtual-machines/windows/detach-disk#detach-a-data-disk-using-powershell)
- Passaggio 3: [Associare un disco dati a una macchina virtuale Windows con PowerShell](https://docs.microsoft.com/azure/virtual-machines/windows/attach-disk-ps)
## <a name="create-a-vm-from-restored-disks"></a>Creare una macchina virtuale da dischi ripristinati
Dopo aver ripristinato i dischi, usare la procedura seguente per creare e configurare la macchina virtuale dal disco.
> [!NOTE]
> Per creare macchine virtuali crittografate da dischi ripristinati, il ruolo di Azure deve disporre dell'autorizzazione per eseguire l'azione, ovvero **Microsoft.KeyVault/vaults/deploy/action**. Se il ruolo non dispone di questa autorizzazione, crearne uno personalizzato con questa azione. Per altre informazioni, vedere [Ruoli personalizzati nel Controllo degli accessi in base al ruolo di Azure](../role-based-access-control/custom-roles.md).
>
>
> [!NOTE]
> Dopo aver ripristinato i dischi, è ora possibile ottenere un modello di distribuzione che può essere usato direttamente per creare una nuova macchina virtuale. Non vi sono più cmdlet di PowerShell differenti per creare macchine virtuali gestite o non gestite che sono crittografate o non crittografate.
I dettagli del processo risultante forniscono l'URI del modello che può essere sottoposto a query e distribuito.
```powershell
$properties = $details.properties
$templateBlobURI = $properties["Template Blob Uri"]
```
È sufficiente distribuire il modello per creare una nuova macchina virtuale, come spiegato [qui](https://docs.microsoft.com/azure/azure-resource-manager/resource-group-template-deploy).
```powershell
New-AzResourceGroupDeployment -Name ExampleDeployment ResourceGroupName ExampleResourceGroup -TemplateUri $templateBlobURI -storageAccountType Standard_GRS
```
La sezione seguente elenca i passaggi necessari per creare una macchina virtuale usando il file "VMConfig".
> [!NOTE]
> È consigliabile usare il modello di distribuzione descritto in precedenza per creare una macchina virtuale. Questa sezione (punti 1-6) sarà presto deprecata.
1. Ricercare i dettagli del processo nelle proprietà del disco ripristinato.
```powershell
$properties = $details.properties
$storageAccountName = $properties["Target Storage Account Name"]
$containerName = $properties["Config Blob Container Name"]
$configBlobName = $properties["Config Blob Name"]
```
2. Impostare il contesto di archiviazione di Azure e ripristinare il file di configurazione JSON.
```powershell
Set-AzCurrentStorageAccount -Name $storageaccountname -ResourceGroupName "testvault"
$destination_path = "C:\vmconfig.json"
Get-AzStorageBlobContent -Container $containerName -Blob $configBlobName -Destination $destination_path
$obj = ((Get-Content -Path $destination_path -Raw -Encoding Unicode)).TrimEnd([char]0x00) | ConvertFrom-Json
```
3. Usare il file di configurazione JSON per creare la configurazione della macchina virtuale.
```powershell
$vm = New-AzVMConfig -VMSize $obj.'properties.hardwareProfile'.vmSize -VMName "testrestore"
```
4. Collegare il disco del sistema operativo e i dischi dei dati. Questo passaggio include esempi di varie configurazioni di macchine virtuali gestite e crittografate. Usare l'esempio adatto alla configurazione della propria macchina virtuale.
* **Macchine virtuali non gestite e non crittografate**: usare l'esempio seguente per macchine virtuali non gestite e non crittografate.
```powershell
Set-AzVMOSDisk -VM $vm -Name "osdisk" -VhdUri $obj.'properties.StorageProfile'.osDisk.vhd.Uri -CreateOption "Attach"
$vm.StorageProfile.OsDisk.OsType = $obj.'properties.StorageProfile'.OsDisk.OsType
foreach($dd in $obj.'properties.StorageProfile'.DataDisks)
{
$vm = Add-AzVMDataDisk -VM $vm -Name "datadisk1" -VhdUri $dd.vhd.Uri -DiskSizeInGB 127 -Lun $dd.Lun -CreateOption "Attach"
}
```
* **Macchine virtuali non gestite e crittografate con Azure AD (solo BEK)** : per le macchine virtuali non gestite e crittografate con Azure AD (solo con BEK) è necessario ripristinare il segreto nell'insieme di credenziali delle chiavi prima di collegare i dischi. Per altre informazioni, vedere [Ripristinare una macchina virtuale crittografata da un punto di ripristino di Backup di Azure](backup-azure-restore-key-secret.md). L'esempio seguente illustra come collegare dischi del sistema operativo e i dati per le macchine virtuali crittografate. Durante l'impostazione del disco del sistema operativo, assicurarsi di indicare il tipo di sistema operativo pertinente.
```powershell
$dekUrl = "https://ContosoKeyVault.vault.azure.net:443/secrets/ContosoSecret007/xx000000xx0849999f3xx30000003163"
$dekUrl = "/subscriptions/abcdedf007-4xyz-1a2b-0000-12a2b345675c/resourceGroups/ContosoRG108/providers/Microsoft.KeyVault/vaults/ContosoKeyVault"
Set-AzVMOSDisk -VM $vm -Name "osdisk" -VhdUri $obj.'properties.storageProfile'.osDisk.vhd.uri -DiskEncryptionKeyUrl $dekUrl -DiskEncryptionKeyVaultId $keyVaultId -CreateOption "Attach" -Windows/Linux
$vm.StorageProfile.OsDisk.OsType = $obj.'properties.storageProfile'.osDisk.osType
foreach($dd in $obj.'properties.storageProfile'.dataDisks)
{
$vm = Add-AzVMDataDisk -VM $vm -Name "datadisk1" -VhdUri $dd.vhd.Uri -DiskSizeInGB 127 -Lun $dd.Lun -CreateOption "Attach"
}
```
* **Macchine virtuali non gestite e crittografate con Azure AD (BEK e KEK)** : per le macchine virtuali non gestite e crittografate con Azure AD (con BEK e KEK) ripristinare la chiave e il segreto nell'insieme di credenziali delle chiavi prima di collegare i dischi. Per altre informazioni, vedere [Ripristinare una macchina virtuale crittografata da un punto di ripristino di Backup di Azure](backup-azure-restore-key-secret.md). L'esempio seguente illustra come collegare dischi del sistema operativo e i dati per le macchine virtuali crittografate.
```powershell
$dekUrl = "https://ContosoKeyVault.vault.azure.net:443/secrets/ContosoSecret007/xx000000xx0849999f3xx30000003163"
$kekUrl = "https://ContosoKeyVault.vault.azure.net:443/keys/ContosoKey007/x9xxx00000x0000x9b9949999xx0x006"
$keyVaultId = "/subscriptions/abcdedf007-4xyz-1a2b-0000-12a2b345675c/resourceGroups/ContosoRG108/providers/Microsoft.KeyVault/vaults/ContosoKeyVault"
Set-AzVMOSDisk -VM $vm -Name "osdisk" -VhdUri $obj.'properties.storageProfile'.osDisk.vhd.uri -DiskEncryptionKeyUrl $dekUrl -DiskEncryptionKeyVaultId $keyVaultId -KeyEncryptionKeyUrl $kekUrl -KeyEncryptionKeyVaultId $keyVaultId -CreateOption "Attach" -Windows
$vm.StorageProfile.OsDisk.OsType = $obj.'properties.storageProfile'.osDisk.osType
foreach($dd in $obj.'properties.storageProfile'.dataDisks)
{
$vm = Add-AzVMDataDisk -VM $vm -Name "datadisk1" -VhdUri $dd.vhd.Uri -DiskSizeInGB 127 -Lun $dd.Lun -CreateOption "Attach"
}
```
* **Macchine virtuali non gestite e crittografate senza Azure AD (solo BEK)** : per le macchine virtuali non gestite e crittografate senza Azure AD (solo con BEK), se **l'insieme di credenziali delle chiavi e il segreto di origine non sono disponibili**, ripristinare i segreti nell'insieme di credenziali delle chiavi seguendo la procedura in [Restore an non-encrypted virtual machine from an Azure Backup recovery point](backup-azure-restore-key-secret.md) (Ripristinare una macchina virtuale non crittografata da un punto di ripristino di Backup di Azure). Eseguire quindi gli script seguenti per impostare le informazioni dettagliate sulla crittografia nel BLOB del sistema operativo ripristinato (questo passaggio non è necessario per il BLOB di dati). Recuperare $dekurl dall'insieme di credenziali delle chiavi ripristinato.<br>
Lo script seguente deve essere eseguito solo quando l'insieme di credenziali delle chiavi e il segreto di origine non sono disponibili.
```powershell
$dekUrl = "https://ContosoKeyVault.vault.azure.net/secrets/ContosoSecret007/xx000000xx0849999f3xx30000003163"
$keyVaultId = "/subscriptions/abcdedf007-4xyz-1a2b-0000-12a2b345675c/resourceGroups/ContosoRG108/providers/Microsoft.KeyVault/vaults/ContosoKeyVault"
$encSetting = "{""encryptionEnabled"":true,""encryptionSettings"":[{""diskEncryptionKey"":{""sourceVault"":{""id"":""$keyVaultId""},""secretUrl"":""$dekUrl""}}]}"
$osBlobName = $obj.'properties.StorageProfile'.osDisk.name + ".vhd"
$osBlob = Get-AzStorageBlob -Container $containerName -Blob $osBlobName
$osBlob.ICloudBlob.Metadata["DiskEncryptionSettings"] = $encSetting
$osBlob.ICloudBlob.SetMetadata()
```
Quando i **segreti sono disponibili** e le informazioni dettagliate sulla crittografia sono impostate nel BLOB del sistema operativo, collegare i dischi usando lo script riportato di seguito.<br>
Se l'insieme di credenziali delle chiavi e i segreti sono già disponibili, non è necessario eseguire lo script precedente.
```powershell
Set-AzVMOSDisk -VM $vm -Name "osdisk" -VhdUri $obj.'properties.StorageProfile'.osDisk.vhd.Uri -CreateOption "Attach"
$vm.StorageProfile.OsDisk.OsType = $obj.'properties.StorageProfile'.OsDisk.OsType
foreach($dd in $obj.'properties.StorageProfile'.DataDisks)
{
$vm = Add-AzVMDataDisk -VM $vm -Name "datadisk1" -VhdUri $dd.vhd.Uri -DiskSizeInGB 127 -Lun $dd.Lun -CreateOption "Attach"
}
```
* **Macchine virtuali non gestite e crittografate senza Azure AD (BEK e KEK)** : per le macchine virtuali non gestite e crittografate senza Azure AD (con BEK e KEK), se **l'insieme di credenziali delle chiavi, la chiave e il segreto di origine non sono disponibili**, ripristinare la chiave e i segreti nell'insieme di credenziali delle chiavi seguendo la procedura in [Restore an non-encrypted virtual machine from an Azure Backup recovery point](backup-azure-restore-key-secret.md) (Ripristinare una macchina virtuale non crittografata da un punto di ripristino di Backup di Azure). Eseguire quindi gli script seguenti per impostare le informazioni dettagliate sulla crittografia nel BLOB del sistema operativo ripristinato (questo passaggio non è necessario per il BLOB di dati). Recuperare $dekurl e $kekurl dall'insieme di credenziali delle chiavi ripristinato.
Lo script seguente deve essere eseguito solo quando l'insieme di credenziali delle chiavi, la chiave e il segreto di origine non sono disponibili.
```powershell
$dekUrl = "https://ContosoKeyVault.vault.azure.net/secrets/ContosoSecret007/xx000000xx0849999f3xx30000003163"
$kekUrl = "https://ContosoKeyVault.vault.azure.net/keys/ContosoKey007/x9xxx00000x0000x9b9949999xx0x006"
$keyVaultId = "/subscriptions/abcdedf007-4xyz-1a2b-0000-12a2b345675c/resourceGroups/ContosoRG108/providers/Microsoft.KeyVault/vaults/ContosoKeyVault"
$encSetting = "{""encryptionEnabled"":true,""encryptionSettings"":[{""diskEncryptionKey"":{""sourceVault"":{""id"":""$keyVaultId""},""secretUrl"":""$dekUrl""},""keyEncryptionKey"":{""sourceVault"":{""id"":""$keyVaultId""},""keyUrl"":""$kekUrl""}}]}"
$osBlobName = $obj.'properties.StorageProfile'.osDisk.name + ".vhd"
$osBlob = Get-AzStorageBlob -Container $containerName -Blob $osBlobName
$osBlob.ICloudBlob.Metadata["DiskEncryptionSettings"] = $encSetting
$osBlob.ICloudBlob.SetMetadata()
```
Quando **la chiave e i segreti sono disponibili** e le informazioni dettagliate sulla crittografia sono impostate nel BLOB del sistema operativo, collegare i dischi usando lo script riportato di seguito.
Se l'insieme di credenziali delle chiavi, la chiave e i segreti sono disponibili, non è necessario eseguire lo script precedente.
```powershell
Set-AzVMOSDisk -VM $vm -Name "osdisk" -VhdUri $obj.'properties.StorageProfile'.osDisk.vhd.Uri -CreateOption "Attach"
$vm.StorageProfile.OsDisk.OsType = $obj.'properties.StorageProfile'.OsDisk.OsType
foreach($dd in $obj.'properties.StorageProfile'.DataDisks)
{
$vm = Add-AzVMDataDisk -VM $vm -Name "datadisk1" -VhdUri $dd.vhd.Uri -DiskSizeInGB 127 -Lun $dd.Lun -CreateOption "Attach"
}
```
* **Macchine virtuali gestite e non crittografate**: per le macchine virtuali gestite e non crittografate, collegare i dischi gestiti ripristinati. Per informazioni dettagliate, vedere [Collegare un disco dati a una macchina virtuale Windows con PowerShell](../virtual-machines/windows/attach-disk-ps.md).
* **Macchine virtuali gestite e crittografate con Azure AD (solo BEK)** : per le macchine virtuali gestite e crittografate con Azure AD (solo con BEK) collegare i dischi gestiti ripristinati. Per informazioni dettagliate, vedere [Collegare un disco dati a una macchina virtuale Windows con PowerShell](../virtual-machines/windows/attach-disk-ps.md).
* **Macchine virtuali gestite e crittografate con Azure AD (BEK e KEK)** : per le macchine virtuali gestite e crittografate con Azure AD (con BEK e KEK) collegare i dischi gestiti ripristinati. Per informazioni dettagliate, vedere [Collegare un disco dati a una macchina virtuale Windows con PowerShell](../virtual-machines/windows/attach-disk-ps.md).
* **Macchine virtuali gestite e crittografate senza Azure ad (solo per le** macchine virtuali)-per le VM gestite e crittografate senza Azure ad (solo crittografate con l'uso di solo la chiave di crittografia), se l'insieme di credenziali delle chiavi di origine e il **segreto non sono disponibili** , ripristinare i [segreti in Key Vault macchina virtuale non crittografata da un punto di ripristino di backup di Azure](backup-azure-restore-key-secret.md). Eseguire quindi gli script seguenti per impostare le informazioni dettagliate sulla crittografia nel disco del sistema operativo ripristinato (questo passaggio non è necessario per il disco dati). Recuperare $dekurl dall'insieme di credenziali delle chiavi ripristinato.
Lo script seguente deve essere eseguito solo quando l'insieme di credenziali delle chiavi e il segreto di origine non sono disponibili.
```powershell
$dekUrl = "https://ContosoKeyVault.vault.azure.net/secrets/ContosoSecret007/xx000000xx0849999f3xx30000003163"
$keyVaultId = "/subscriptions/abcdedf007-4xyz-1a2b-0000-12a2b345675c/resourceGroups/ContosoRG108/providers/Microsoft.KeyVault/vaults/ContosoKeyVault"
$diskupdateconfig = New-AzDiskUpdateConfig -EncryptionSettingsEnabled $true
$diskupdateconfig = Set-AzDiskUpdateDiskEncryptionKey -DiskUpdate $diskupdateconfig -SecretUrl $dekUrl -SourceVaultId $keyVaultId
Update-AzDisk -ResourceGroupName "testvault" -DiskName $obj.'properties.StorageProfile'.osDisk.name -DiskUpdate $diskupdateconfig
```
Quando i segreti sono disponibili e le informazioni dettagliate sulla crittografia sono impostate nel disco del sistema operativo, vedere [Collegare un disco dati a una macchina virtuale Windows con PowerShell](../virtual-machines/windows/attach-disk-ps.md) per collegare i dischi gestiti ripristinati.
* **Macchine virtuali gestite e crittografate senza Azure ad (** i/o e KEK): per le macchine virtuali crittografate e gestite senza Azure ad (crittografate con l'uso di un & KEK), se l'insieme di credenziali delle chiavi di origine **/chiave/segreto non è disponibile** , ripristinare la chiave e i segreti nell'insieme di credenziali delle chiavi [Ripristinare una macchina virtuale non crittografata da un punto di ripristino di backup di Azure](backup-azure-restore-key-secret.md). Eseguire quindi gli script seguenti per impostare le informazioni dettagliate sulla crittografia nel disco del sistema operativo ripristinato (questo passaggio non è necessario per il disco dati). Recuperare $dekurl e $kekurl dall'insieme di credenziali delle chiavi ripristinato.
Lo script seguente deve essere eseguito solo quando l'insieme di credenziali delle chiavi, la chiave e il segreto di origine non sono disponibili.
```powershell
$dekUrl = "https://ContosoKeyVault.vault.azure.net/secrets/ContosoSecret007/xx000000xx0849999f3xx30000003163"
$kekUrl = "https://ContosoKeyVault.vault.azure.net/keys/ContosoKey007/x9xxx00000x0000x9b9949999xx0x006"
$keyVaultId = "/subscriptions/abcdedf007-4xyz-1a2b-0000-12a2b345675c/resourceGroups/ContosoRG108/providers/Microsoft.KeyVault/vaults/ContosoKeyVault"
$diskupdateconfig = New-AzDiskUpdateConfig -EncryptionSettingsEnabled $true
$diskupdateconfig = Set-AzDiskUpdateDiskEncryptionKey -DiskUpdate $diskupdateconfig -SecretUrl $dekUrl -SourceVaultId $keyVaultId
$diskupdateconfig = Set-AzDiskUpdateKeyEncryptionKey -DiskUpdate $diskupdateconfig -KeyUrl $kekUrl -SourceVaultId $keyVaultId
Update-AzDisk -ResourceGroupName "testvault" -DiskName $obj.'properties.StorageProfile'.osDisk.name -DiskUpdate $diskupdateconfig
```
Quando la chiave e i segreti sono disponibili e le informazioni dettagliate sulla crittografia sono impostate nel disco del sistema operativo, vedere [Collegare un disco dati a una macchina virtuale Windows con PowerShell](../virtual-machines/windows/attach-disk-ps.md) per collegare i dischi gestiti ripristinati.
5. Configurare le impostazioni di rete.
```powershell
$nicName="p1234"
$pip = New-AzPublicIpAddress -Name $nicName -ResourceGroupName "test" -Location "WestUS" -AllocationMethod Dynamic
$virtualNetwork = New-AzVirtualNetwork -ResourceGroupName "test" -Location "WestUS" -Name "testvNET" -AddressPrefix 10.0.0.0/16
$virtualNetwork | Set-AzVirtualNetwork
$vnet = Get-AzVirtualNetwork -Name "testvNET" -ResourceGroupName "test"
$subnetindex=0
$nic = New-AzNetworkInterface -Name $nicName -ResourceGroupName "test" -Location "WestUS" -SubnetId $vnet.Subnets[$subnetindex].Id -PublicIpAddressId $pip.Id
$vm=Add-AzVMNetworkInterface -VM $vm -Id $nic.Id
```
6. Creare la macchina virtuale.
```powershell
New-AzVM -ResourceGroupName "test" -Location "WestUS" -VM $vm
```
7. Eseguire il push dell'estensione ADE.
Se le estensioni ADE non vengono inserite, i dischi dati verranno contrassegnati come non crittografati, pertanto è obbligatorio per i passaggi seguenti da eseguire:
* **Per una macchina virtuale con Azure AD**: usare il comando seguente per abilitare manualmente la crittografia per i dischi dati
**Solo BEK**
```powershell
Set-AzVMDiskEncryptionExtension -ResourceGroupName $RG -VMName $vm -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $dekUrl -DiskEncryptionKeyVaultId $keyVaultId -VolumeType Data
```
**BEK e KEK**
```powershell
Set-AzVMDiskEncryptionExtension -ResourceGroupName $RG -VMName $vm -AadClientID $aadClientID -AadClientSecret $aadClientSecret -DiskEncryptionKeyVaultUrl $dekUrl -DiskEncryptionKeyVaultId $keyVaultId -KeyEncryptionKeyUrl $kekUrl -KeyEncryptionKeyVaultId $keyVaultId -VolumeType Data
```
* **Per una macchina virtuale senza Azure AD**: usare il comando seguente per abilitare manualmente la crittografia per i dischi dati.
Se durante l'esecuzione del comando viene richiesto AADClientID, è necessario aggiornare il Azure PowerShell.
**Solo BEK**
```powershell
Set-AzVMDiskEncryptionExtension -ResourceGroupName $RG -VMName $vm -DiskEncryptionKeyVaultUrl $dekUrl -DiskEncryptionKeyVaultId $keyVaultId -SkipVmBackup -VolumeType "All"
```
**BEK e KEK**
```powershell
Set-AzVMDiskEncryptionExtension -ResourceGroupName $RG -VMName $vm -DiskEncryptionKeyVaultUrl $dekUrl -DiskEncryptionKeyVaultId $keyVaultId -KeyEncryptionKeyUrl $kekUrl -KeyEncryptionKeyVaultId $keyVaultId -SkipVmBackup -VolumeType "All"
```
> [!NOTE]
> Assicurarsi di eliminare manualmente i file JASON creati come parte del processo di ripristino del disco della VM crittografato.
## <a name="restore-files-from-an-azure-vm-backup"></a>Ripristinare file da un backup della macchina virtuale di Azure
Oltre al ripristino di dischi è anche possibile ripristinare singoli file da un backup della macchina virtuale di Azure. La funzionalità di ripristino dei file consente di accedere a tutti i file inclusi in un punto di ripristino. Gestire i file tramite Esplora file come si farebbe con i file normali.
La procedura di base per il ripristino di un file dal backup di una macchina virtuale di Azure si articola nei passaggi seguenti:
* Selezionare la macchina virtuale
* Scegliere un punto di ripristino
* Montare i dischi del punto di ripristino
* Copiare i file necessari
* Smontare il disco
### <a name="select-the-vm"></a>Selezionare la macchina virtuale
Per ottenere l'oggetto di PowerShell che identifica l'elemento di backup corretto, iniziare dal contenitore nell'insieme di credenziali e procedere verso il basso nella gerarchia degli oggetti. Per selezionare il contenitore che rappresenta la macchina virtuale, usare il cmdlet [Get-AzRecoveryServicesBackupContainer](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupcontainer) e inviarlo tramite pipe al cmdlet [Get-AzRecoveryServicesBackupItem](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupitem) .
```powershell
$namedContainer = Get-AzRecoveryServicesBackupContainer -ContainerType "AzureVM" -Status "Registered" -FriendlyName "V2VM" -VaultId $targetVault.ID
$backupitem = Get-AzRecoveryServicesBackupItem -Container $namedContainer -WorkloadType "AzureVM" -VaultId $targetVault.ID
```
### <a name="choose-a-recovery-point"></a>Scegliere un punto di ripristino
Usare il cmdlet [Get-AzRecoveryServicesBackupRecoveryPoint](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackuprecoverypoint) per elencare tutti i punti di recupero dell'elemento di backup. Quindi scegliere il punto di ripristino per ripristinare. Se non si sa quale punto di ripristino usare, è consigliabile scegliere il punto più recente RecoveryPointType = AppConsistent nell'elenco.
Nello script seguente la variabile **$rp** è una matrice di punti di recupero per l'elemento di backup selezionato negli ultimi sette giorni. La matrice viene ordinata in ordine inverso di tempo con il punto di ripristino più recente in posizione 0 nell'indice. Per scegliere il punto di ripristino, usare l'indicizzazione standard della matrice di PowerShell. Nell'esempio $rp [0] seleziona il punto di ripristino più recente.
```powershell
$startDate = (Get-Date).AddDays(-7)
$endDate = Get-Date
$rp = Get-AzRecoveryServicesBackupRecoveryPoint -Item $backupitem -StartDate $startdate.ToUniversalTime() -EndDate $enddate.ToUniversalTime() -VaultId $targetVault.ID
$rp[0]
```
L'output è simile all'esempio seguente:
```output
RecoveryPointAdditionalInfo :
SourceVMStorageType : NormalStorage
Name : 15260861925810
ItemName : VM;iaasvmcontainer;RGName1;V2VM
RecoveryPointId : /subscriptions/XX/resourceGroups/ RGName1/providers/Microsoft.RecoveryServices/vaults/testvault/backupFabrics/Azure/protectionContainers/IaasVMContainer;iaasvmcontainer;RGName1;V2VM/protectedItems/VM;iaasvmcontainer; RGName1;V2VM/recoveryPoints/15260861925810
RecoveryPointType : AppConsistent
RecoveryPointTime : 4/23/2016 5:02:04 PM
WorkloadType : AzureVM
ContainerName : IaasVMContainer;iaasvmcontainer; RGName1;V2VM
ContainerType : AzureVM
BackupManagementType : AzureVM
```
### <a name="mount-the-disks-of-recovery-point"></a>Montare i dischi del punto di ripristino
Usare il cmdlet [Get-AzRecoveryServicesBackupRPMountScript](https://docs.microsoft.com/powershell/module/az.recoveryservices/get-azrecoveryservicesbackuprpmountscript) per ottenere lo script per montare tutti i dischi del punto di ripristino.
> [!NOTE]
> I dischi sono montati come dischi collegati iSCSI al computer in cui viene eseguito lo script. Il montaggio avviene immediatamente e non si incorre in alcun addebito.
>
>
```powershell
Get-AzRecoveryServicesBackupRPMountScript -RecoveryPoint $rp[0] -VaultId $targetVault.ID
```
L'output è simile all'esempio seguente:
```output
OsType Password Filename
------ -------- --------
Windows e3632984e51f496 V2VM_wus2_8287309959960546283_451516692429_cbd6061f7fc543c489f1974d33659fed07a6e0c2e08740.exe
```
Eseguire lo script nel computer in cui si vogliono recuperare i file. Per eseguire lo script è necessario immettere la password specificata. Dopo aver collegato i dischi, usare Esplora risorse per esplorare i nuovi volumi e file. Per altre informazioni, vedere l'articolo di backup [ripristinare i file dal backup della macchina virtuale di Azure](backup-azure-restore-files-from-vm.md).
### <a name="unmount-the-disks"></a>Smontare i disco
Dopo aver copiato i file necessari, usare [Disable-AzRecoveryServicesBackupRPMountScript](https://docs.microsoft.com/powershell/module/az.recoveryservices/disable-azrecoveryservicesbackuprpmountscript) per smontare i dischi. Assicurarsi di smontare i dischi in modo che l'accesso ai file del punto di ripristino venga rimosso.
```powershell
Disable-AzRecoveryServicesBackupRPMountScript -RecoveryPoint $rp[0] -VaultId $targetVault.ID
```
## <a name="next-steps"></a>Passaggi successivi
Se si preferisce usare PowerShell per coinvolgere le risorse di Azure, vedere l'articolo PowerShell [Distribuire e gestire il backup in Azure per server Windows/client Windows mediante PowerShell](backup-client-automation.md). Se si gestiscono i backup DPM, vedere l'articolo [Distribuire e gestire il backup per DPM](backup-dpm-automation.md).
| 72.743437 | 869 | 0.776128 | ita_Latn | 0.967227 |
8afceaa3707692bdeeb32efc7cc1b3f1357c21a0 | 25 | md | Markdown | README.md | ReidConor/LazyIndex | 46f2df79613766de92275651825de7f52c802b77 | [
"MIT"
] | null | null | null | README.md | ReidConor/LazyIndex | 46f2df79613766de92275651825de7f52c802b77 | [
"MIT"
] | null | null | null | README.md | ReidConor/LazyIndex | 46f2df79613766de92275651825de7f52c802b77 | [
"MIT"
] | null | null | null | # TODO
Write this README
| 8.333333 | 17 | 0.76 | kor_Hang | 0.591768 |
8afd14a9c846a6be5cf17ae4c5f05e5ece9c3f84 | 2,565 | md | Markdown | documentation/install.md | n8sh/csprng-d | 178b0b95d544d9d311c45aea4f0d0decc7f5b591 | [
"MIT"
] | null | null | null | documentation/install.md | n8sh/csprng-d | 178b0b95d544d9d311c45aea4f0d0decc7f5b591 | [
"MIT"
] | null | null | null | documentation/install.md | n8sh/csprng-d | 178b0b95d544d9d311c45aea4f0d0decc7f5b591 | [
"MIT"
] | null | null | null | # Building and Installation
## Building
In `build/scripts` there are three scripts you can use to build the library.
All of the build scripts assume that you are using `dmd` and not `gdc` or
any other D compiler.
### On POSIX-Compliant Machines (Linux, Mac OS X)
Run `make -f ./build/scripts/posix.make` to build the static library,
shared library, and all command-line tools.
Run `sudo make -f ./build/scripts/posix.make install` to install the
shared library, command-line tools, and documentation.
Alternatively, you can run `./build/scripts/build.sh`.
If you get a permissions error, you need to set that file to be executable
using the `chmod` command.
### On Windows
Run `.\build\scripts\build.bat` from a `cmd` or run `.\build\scripts\build.ps1`
from the PowerShell command line. If you get a warning about needing a
cryptographic signature for the PowerShell script, it is probably because
your system is blocking running unsigned PowerShell scripts. Just run the
other script if that is the case.
Unfortunately, there is no install scripts, mostly because I don't really know
how to install something on Windows, other than just putting executables on the
`%PATH%`. I welcome suggestions or pull requests.
## Installation
When the library is built, it will be located in `build/libraries`. You may
want to place the shared library in `/lib`, `/usr/lib` or where ever you find
the `.so` files on your system. If you are using Windows, I don't know where
you should put the `.dll` file.
You will also need to put the `.di` files found in `build/interfaces` somewhere
in your source imports directory, which seems to vary a lot from system to
system.
Static libraries can be put where ever `phobos2` is on your system.
If the command-line executables are built too, they will be found in
`build/executables`. You may want to place this in `/usr/bin` or somewhere
else indicated by the `PATH` environment variable. Type `echo $PATH` on POSIX
systems to see your `PATH` variable.
If documentation was generated from the
[embedded documentation](https://dlang.org/spec/ddoc.html), or if it already
came with the ASN.1 Library package, it will be in `documentation/html`. Man
pages can be found in `documentation/man`; the man pages for executables will
be in `documentation/man/1`, and the man pages for library usage will be in
`documentation/man/8`. You may want to copy the man page files into
`/usr/share/man/1` and `/usr/share/man/3` respectively, or
`/usr/share/man/man1` and `/usr/share/man/man3` respectively, on some systems,
like Mac OS. | 43.474576 | 79 | 0.762963 | eng_Latn | 0.99883 |
8afd5ed7fef05683c146c0c749ca0a458192fbdd | 3,431 | md | Markdown | HOSTING.md | abostico/former2 | 1056e64c55c1940103253da093a800eeae31994b | [
"MIT"
] | 1,245 | 2019-04-29T08:29:32.000Z | 2022-03-31T05:42:13.000Z | HOSTING.md | jewelhuq/former2 | b4a619105b3cdfd85ac28592ce449c02363acbdb | [
"MIT"
] | 199 | 2019-04-29T10:02:55.000Z | 2022-03-29T21:23:52.000Z | HOSTING.md | jumboly/former2 | de11d1986b827c4cc7d5d2a65334eadf9aaef4e3 | [
"MIT"
] | 162 | 2019-04-29T14:38:03.000Z | 2022-03-27T00:37:12.000Z | ## Hosting Locally
In some cases, you may wish to host a copy of the Former2 website locally or externally for security reasons. You can do that by performing the following steps:
* Clone this repository to the root of your web server OR run a web server with the [Dockerfile](Dockerfile)
If you are hosting on 127.0.0.1, extension support will be provided by default and there are no more steps.
If you are serving from another host, you should perform the following additional steps to achieve extension support:
_Google Chrome or Microsoft Edge_
* Ensure the official Former2 Helper extension is uninstalled (you can remove it via chrome://extensions)
* Clone the [Former2 Helper](https://github.com/iann0036/former2-helper) repository locally
* Edit the `chrome/manifest.json` file to add your URL to the `externally_connectable` section, for example:
```
...
"externally_connectable": {
"matches": [
"https://former2.com/*",
"https://*.former2.com/*",
"http://127.0.0.1/*",
"https://127.0.0.1/*",
"http://localhost/*",
"https://localhost/*",
"http://ec2-12-34-56-78.compute-1.amazonaws.com/*
]
}
}
```
> Take care to ensure the protocol is correct and that you append the `/*` to the end.
* Edit line 3 of the `chrome/bg.js` file to include your hostname, for example:
```
if (["127.0.0.1", "localhost", "former2.com", "www.former2.com", "ec2-12-34-56-78.compute-1.amazonaws.com"].includes(new URL(sender.url).hostname)) {
```
> Note that protocol or path is not used in this case.
* Load the extension by visiting chrome://extensions, clicking the "Load unpacked" button and selecting the `chrome/` directory.
* Note the extension ID of the loaded Former2 Helper extension
* In the website files, open the `js/app.js` file and change the first or second line to reflect the noted extension ID for your extension, for example:
```
var CHROME_HELPER_EXTENSION_ID = "zwnrjxwlcsomeotherexamplehere"; // Chrome
```
* Finally, reload the page on your web server - remember that your credentials may not be input on the new address
_Mozilla Firefox_
* Ensure the official Former2 Helper extension is uninstalled (you can remove it via about:addons)
* Clone the [Former2 Helper](https://github.com/iann0036/former2-helper) repository locally
* Edit the `firefox/manifest.json` file to add your URL to the `content_scripts` section, for example:
```
...
"content_scripts": [
{
"matches": [
"https://former2.com/*",
"https://www.former2.com/*",
"http://127.0.0.1/*",
"https://127.0.0.1/*",
"http://localhost/*",
"https://localhost/*",
"http://ec2-12-34-56-78.compute-1.amazonaws.com/*"
],
...
```
> Take care to ensure the protocol is correct and that you append the `/*` to the end.
* Edit line 3 of the `firefox/bg.js` file to include your hostname, for example:
```
if (["127.0.0.1", "localhost", "former2.com", "www.former2.com", "ec2-12-34-56-78.compute-1.amazonaws.com"].includes(new URL(sender.url).hostname)) {
```
> Note that protocol or path is not used in this case.
* Load the extension by visiting about:addons, clicking the cog icon, then "Debug Add-ons", then click the "Load Temporary Add-on..." button and select the `firefox/manifest.json` file.
* Finally, reload the page on your web server - remember that your credentials may not be input on the new address
| 39.436782 | 185 | 0.698339 | eng_Latn | 0.989842 |
8afde847376551d0a3e3c3dd470e66f4b49786d1 | 3,284 | md | Markdown | content/es/authors/admin/_index.md | LiNk-NY/cdsbsource | 5c9db49de8d54f6cbf08c049d7468badaf0621a1 | [
"MIT"
] | null | null | null | content/es/authors/admin/_index.md | LiNk-NY/cdsbsource | 5c9db49de8d54f6cbf08c049d7468badaf0621a1 | [
"MIT"
] | null | null | null | content/es/authors/admin/_index.md | LiNk-NY/cdsbsource | 5c9db49de8d54f6cbf08c049d7468badaf0621a1 | [
"MIT"
] | null | null | null | +++
# Display name
name = "CDSB"
# Username (this should match the folder name)
authors = ["admin"]
# Is this the primary user of the site?
superuser = true
# Role/position
role = "Comunidad de Desarrolladores de Software en Bioinformática"
# Organizations/Affiliations
# Separate multiple entries with a comma, using the form: `[ {name="Org1", url=""}, {name="Org2", url=""} ]`.
organizations = [ { name = "", url = "" } ]
# Short bio (displayed in user profile at end of posts)
bio = "Queremos ayudarte a acquirir las habilidades necesarias para contribuir software libre para la bioinformática usando R"
# Enter email to display Gravatar (if Gravatar enabled in Config)
email = "cdsbmexico@gmail.com"
# List (academic) interests or hobbies
interests = [
"R",
"Bioinformática",
"Educación",
"Diversidad"
]
# Organizational groups that you belong to (for People widget)
# Set this to `[]` or comment out if you are not using People widget.
user_groups = ["Comunidad"]
# List qualifications (such as academic degrees)
# Social/Academic Networking
# For available icons, see: https://sourcethemes.com/academic/docs/widgets/#icons
# For an email link, use "fas" icon pack, "envelope" icon, and a link in the
# form "mailto:your-email@example.com" or "#contact" for contact widget.
[[social]]
icon = "envelope"
icon_pack = "fas"
link = "mailto:cdsbmexico@gmail.com" # For a direct email link, use "mailto:test@example.org".
[[social]]
icon = "home"
icon_pack = "fas"
link = "https://comunidadbioinfo.github.io/"
[[social]]
icon = "twitter"
icon_pack = "fab"
link = "https://twitter.com/CDSBMexico"
[[social]]
icon = "github"
icon_pack = "fab"
link = "https://github.com/ComunidadBioInfo"
[[social]]
icon = "slack"
icon_pack = "fab"
link = "https://comunidadbioinfo.slack.com"
# Link to a PDF of your resume/CV from the About widget.
# To enable, copy your resume/CV to `static/files/cv.pdf` and uncomment the lines below.
# [[social]]
# icon = "cv"
# icon_pack = "ai"
# link = "files/cv.pdf"
+++
¡Bienvenidos a la Comunidad de Desarrolladores de Software en Bioinformática (CDSB)!
Somos un grupo interesado en ayudarle a miembros de países Latino Americanos en el proceso de acquirir las habilidades necesarias para contribuir a la comunidad de software libre. Hoy en día se usa [R](https://cran.r-project.org/) en muchas industrias y desde el inicio fue adoptado por académicos. Mientras los usuarios de `R` van familiarizandose con el lenguaje terminan escribiendo funciones que luego son agrupadas y compartidas vía paquetes. Estos paquetes son distribuidos vía [GitHub](https://github.com/) o vía repositorios más formales como lo son [CRAN](https://cran.r-project.org/) y [Bioconductor](http://bioconductor.org/). `Bioconductor` es un proyecto centrado alrededor de software libre para la bioinformática hecho con `R` y ha sido de gran utilidad para varios de nosotros. Como parte de la CDSB buscamos aumentar la representación de países Latino Americanos en las comunidades de desarrolladores de software libre y de esa forma aumentar su diversidad. Por lo tanto, nuestro objetivo principal es enseñarle a Latino Americanos como hacer paquetes de `R` para que los compartan vía GitHub, CRAN y Bioconductor.
| 41.05 | 1,131 | 0.732948 | spa_Latn | 0.769416 |
8afe3e8db55336d181e5cbdaf5429d1c3fb6edd8 | 214 | md | Markdown | _news/2021sept_1.md | pgroth/INDElab | 9d760a02942c2fa258815802aa3a0f9e2d949738 | [
"MIT"
] | 1 | 2018-10-31T18:15:58.000Z | 2018-10-31T18:15:58.000Z | _news/2021sept_1.md | pgroth/INDElab | 9d760a02942c2fa258815802aa3a0f9e2d949738 | [
"MIT"
] | null | null | null | _news/2021sept_1.md | pgroth/INDElab | 9d760a02942c2fa258815802aa3a0f9e2d949738 | [
"MIT"
] | null | null | null | ---
layout: post
date: 2021-09-10 9:00:00 +0100
inline: true
---
Daniel gave a talk at [Transformers at Work](https://www.zeta-alpha.com/registration-deep-learning-for-nlp) hosted by our friends over at ZetaAlpha. | 30.571429 | 148 | 0.742991 | eng_Latn | 0.694374 |
8afe954ff4629ab3bd5daf41192af10fd37ee521 | 238 | md | Markdown | _posts/0000-01-02-suvro-mukherjee.md | suvro-mukherjee/github-slideshow | 9adbdb3709ed7e418d1143fa949226f8d254a55b | [
"MIT"
] | null | null | null | _posts/0000-01-02-suvro-mukherjee.md | suvro-mukherjee/github-slideshow | 9adbdb3709ed7e418d1143fa949226f8d254a55b | [
"MIT"
] | 3 | 2020-12-24T07:51:16.000Z | 2020-12-24T08:20:14.000Z | _posts/0000-01-02-suvro-mukherjee.md | suvro-mukherjee/github-slideshow | 9adbdb3709ed7e418d1143fa949226f8d254a55b | [
"MIT"
] | null | null | null | ---
layout: slide
title: "Welcome to our second slide!"
---
"_Education is an admirable thing, but it is well to remember from time to time that nothing that is worth knowing can be taught._" - Oscar Wilde
Use the left arrow to go back!
| 29.75 | 145 | 0.739496 | eng_Latn | 0.999978 |
c1016e2b1f89a4e28c2235a8c47dc0dbc257896c | 1,539 | md | Markdown | docs/visual-basic/misc/bc42000.md | cy-org/docs-conceptual.zh-cn | 0c18cda3dd707efdcdd0e73bc480ab9fbbc4580c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc42000.md | cy-org/docs-conceptual.zh-cn | 0c18cda3dd707efdcdd0e73bc480ab9fbbc4580c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/misc/bc42000.md | cy-org/docs-conceptual.zh-cn | 0c18cda3dd707efdcdd0e73bc480ab9fbbc4580c | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "<类型><methodname>与继承层次结构,因此具有相同名称的其他成员冲突应声明为 Shadows |Microsoft 文档"
ms.date: 2015-07-20
ms.prod: .net
ms.technology:
- devlang-visual-basic
ms.topic: article
f1_keywords:
- vbc42000
- bc42000
helpviewer_keywords:
- BC42000
ms.assetid: 3081635f-99a9-4e90-997f-82251144d80f
caps.latest.revision: 12
author: dotnet-bot
ms.author: dotnetcontent
translation.priority.ht:
- de-de
- es-es
- fr-fr
- it-it
- ja-jp
- ko-kr
- ru-ru
- zh-cn
- zh-tw
translation.priority.mt:
- cs-cz
- pl-pl
- pt-br
- tr-tr
translationtype: Machine Translation
ms.sourcegitcommit: a06bd2a17f1d6c7308fa6337c866c1ca2e7281c0
ms.openlocfilehash: e307c5a3285e2cd91b35a69b780fa8995b6280b7
ms.lasthandoff: 03/13/2017
---
# <a name="lttypegt-39ltmethodnamegt39-conflicts-with-other-members-of-the-same-name-across-the-inheritance-hierarchy-and-so-should-be-declared-39shadows39"></a><类型><methodname>与继承层次结构,因此具有相同名称的其他成员冲突应声明为 Shadows
从两个或多个接口继承的接口定义了一个与已在多个基接口中定义的过程同名的过程。 此接口中的过程应隐藏其中的一个基接口过程。
此消息是一个警告。 `Shadows`默认情况下,则假定。 有关隐藏警告或将警告视为错误的详细信息,请参阅[在 Visual Basic 中配置警告](https://docs.microsoft.com/visualstudio/ide/configuring-warnings-in-visual-basic)。
**错误 ID:** BC42000
## <a name="to-correct-this-error"></a>更正此错误
- 如果你想要隐藏其中一个基接口过程,请将 `Shadows` 关键字添加到新的过程声明中。
- 如果不打算隐藏任何基接口过程,则更改新过程的名称。
## <a name="see-also"></a>另请参阅
[阴影](../../visual-basic/language-reference/modifiers/shadows.md)
[Visual Basic 中的隐藏](../../visual-basic/programming-guide/language-features/declared-elements/shadowing.md)
| 28.5 | 224 | 0.759584 | yue_Hant | 0.318245 |
c1016fe0ebcdb5d25f87e7d209f3430d2323de9f | 4,139 | md | Markdown | vendor/github.com/czasg/go-queue/README.md | CzaOrz/gonal | 5e8724f373c25734997e2b0bf36f1323d1f35bed | [
"MIT"
] | 2 | 2021-11-07T08:26:26.000Z | 2021-12-13T13:37:23.000Z | vendor/github.com/czasg/go-queue/README.md | czasg/gonal | 5e8724f373c25734997e2b0bf36f1323d1f35bed | [
"MIT"
] | null | null | null | vendor/github.com/czasg/go-queue/README.md | czasg/gonal | 5e8724f373c25734997e2b0bf36f1323d1f35bed | [
"MIT"
] | null | null | null | # go-queue
[](https://github.com/czasg/go-queue/blob/master/LICENSE)
[](https://codecov.io/gh/czasg/go-queue)
[](https://github.com/czasg/go-queue/stargazers)
[](https://github.com/czasg/go-queue/network/members)
[](https://github.com/czasg/go-queue/issues)
go-queue was collections for queues (FIFO), stacks (LIFO) and priority.
```text
|—————————| |——————————————————|
| queue | -------- | priority queue |
|—————————| |——————————————————|
______|______
| |
|————————| |————————|
| fifo | | lifo |
|————————| |————————|
```
### plan
- [x] fifo memory queue
- [x] fifo disk queue
- [x] lifo memory queue
- [x] lifo disk queue
- [x] priority queue
### interface
```golang
type Queue interface {
Push(data []byte) error
Pop() ([]byte, error)
Close() error
Len() int
}
type PriorityQueue interface {
Push(data []byte, priority int) error
Pop() ([]byte, error)
Close() error
Len() int
}
```
# Demo
### fifo memory queue
```golang
package main
import (
"github.com/czasg/go-queue"
)
func main() {
maxQueueSize := 2 // out of max size, push data will return (nil, ErrFullQueue)
q := queue.NewFifoMemoryQueue(maxQueueSize)
defer q.Close()
q.Push([]byte("test1")) // nil
q.Pop() // test, nil
q.Push([]byte("test1")) // nil
q.Push([]byte("test1")) // nil
q.Push([]byte("test1")) // ErrFullQueue
q.Pop() // test, nil
q.Pop() // test, nil
q.Pop() // ErrEmptyQueue
}
```
### fifo disk queue
disk queue will never full.
```golang
package main
import (
"github.com/czasg/go-queue"
"io/ioutil"
"os"
)
func main() {
dir, _ := ioutil.TempDir("", "")
defer os.RemoveAll(dir)
q, _ := queue.NewFifoDiskQueue(dir)
_ = q.Push([]byte("test1"))
_ = q.Push([]byte("test2"))
_ = q.Push([]byte("test3"))
_ = q.Close()
q, _ = queue.NewFifoDiskQueue(dir) // open again
_, _ = q.Pop() // test1
_, _ = q.Pop() // test2
_, _ = q.Pop() // test3
_ = q.Close()
}
```
### lifo memory queue
```golang
package main
import (
"github.com/czasg/go-queue"
)
func main() {
maxQueueSize := 2 // out of max size, push data will return (nil, ErrFullQueue)
q := queue.NewLifoMemoryQueue(maxQueueSize)
defer q.Close()
q.Push([]byte("test1")) // nil
q.Pop() // test, nil
q.Push([]byte("test1")) // nil
q.Push([]byte("test1")) // nil
q.Push([]byte("test1")) // ErrFullQueue
q.Pop() // test, nil
q.Pop() // test, nil
q.Pop() // ErrEmptyQueue
}
```
### lifo disk queue
disk queue will never full.
```golang
package main
import (
"github.com/czasg/go-queue"
"io/ioutil"
"os"
)
func main() {
dir, _ := ioutil.TempDir("", "")
defer os.RemoveAll(dir)
q, _ := queue.NewLifoDiskQueue(dir)
_ = q.Push([]byte("test1"))
_ = q.Push([]byte("test2"))
_ = q.Push([]byte("test3"))
_ = q.Close()
q, _ = queue.NewLifoDiskQueue(dir) // open again
_, _ = q.Pop() // test3
_, _ = q.Pop() // test2
_, _ = q.Pop() // test1
_ = q.Close()
}
```
### priority queue
`PriorityQueue` based on `Queue` as a queue factory.
```golang
package main
import (
"fmt"
"github.com/czasg/go-queue"
"reflect"
)
func main() {
factory := func() queue.Queue {
return queue.NewFifoMemoryQueue(1024)
}
q := queue.NewPriorityQueueFactory(nil, factory)
_ = q.Push([]byte("v1"), 1)
_ = q.Push([]byte("v2"), 10)
_ = q.Push([]byte("v3"), 5)
data, _ := q.Pop()
fmt.Println(reflect.DeepEqual(data, []byte("v2")))
data, _ = q.Pop()
fmt.Println(reflect.DeepEqual(data, []byte("v3")))
data, _ = q.Pop()
fmt.Println(reflect.DeepEqual(data, []byte("v1")))
}
```
| 22.372973 | 166 | 0.604735 | yue_Hant | 0.198321 |
c10197e300045ba1660a8216a28774eaffc14df3 | 316 | md | Markdown | _posts/2017-06-19-will-vuejs.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | _posts/2017-06-19-will-vuejs.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | _posts/2017-06-19-will-vuejs.md | pipiscrew/pipiscrew.github.io | 9d81bd323c800a1bff2b6d26c3ec3eb96fb41004 | [
"MIT"
] | null | null | null | ---
title: Will Vue.js Become a Giant Like Angular or React?
author: PipisCrew
date: 2017-06-19
categories: [js]
toc: true
---
https://10clouds.com/blog/vuejs-angular-react/
origin - http://www.pipiscrew.com/2017/06/will-vue-js-become-a-giant-like-angular-or-react/ will-vue-js-become-a-giant-like-angular-or-react | 28.727273 | 140 | 0.746835 | eng_Latn | 0.197325 |
c101ccbbbc9d71e6f5158ba75991c95e5bfeaaf0 | 96 | md | Markdown | README.md | zuokun2013/site1 | d3eebc077cbc39f53a393a2e832e7ec1b214310a | [
"MIT"
] | null | null | null | README.md | zuokun2013/site1 | d3eebc077cbc39f53a393a2e832e7ec1b214310a | [
"MIT"
] | 5 | 2021-03-10T02:47:36.000Z | 2022-02-26T21:42:50.000Z | README.md | zuokun2013/site1 | d3eebc077cbc39f53a393a2e832e7ec1b214310a | [
"MIT"
] | null | null | null | # Initial page
this is a hello world page
### test title
---
```text
如果
```
#### 可以了
| 4.8 | 26 | 0.520833 | eng_Latn | 0.987913 |
c102200844920c8df36c0710ee22767812b9e951 | 1,260 | md | Markdown | README.md | AndreMacielSousa/PWA-Projecto | 0296d4252eda34ddaf14e8cf08c964be6de22cdc | [
"MIT"
] | null | null | null | README.md | AndreMacielSousa/PWA-Projecto | 0296d4252eda34ddaf14e8cf08c964be6de22cdc | [
"MIT"
] | null | null | null | README.md | AndreMacielSousa/PWA-Projecto | 0296d4252eda34ddaf14e8cf08c964be6de22cdc | [
"MIT"
] | null | null | null | # PWA-Projecto
Projeto Final: Desenvolvimento de Aplicação Web (Front-End e Back-End): Prova de conceito
```
Chegados a esta fase da unidade curricular é proposto a cada aluno o desenvolvimento de uma aplicação Web como prova de conceito onde deverão ser implementados os diversos recursos abordados no desenvolvimento full-stack (front-end e back-end) baseado na linguagem JavaScript.
A proposta de projeto é individual, cuja tópico deverá resultar da experiência ou práticas profissionais que possui, terá de contribuir para resolução de um caso em concreto.
O desenvolvimento do projeto técnico implica a apresentação metodológica da implementação de uma aplicação Web, utilizando ferramentas de prototipagem, ambientes de programação web (Visual Studio Code) e controlo de versões (Git).
```
# Tema
Desenvolvimento do Simulador REGEX para o UAbALL
https://github.com/AndreMacielSousa/UAbALL
## Cronograma
```
1. Prototipagem
1.1. Wireframes
1.2. Mockups
1.3. Prototipos
```
```
2. Front-End
```
```
3. Back-End
```
API https://pwa-regex.herokuapp.com/
## Project setup
Cada pasta incluirá respectivas intruções.
### Nota Final
```
Projecto não concluido:
1. Falta adpatar front End
2. Falta Adaptar base de dados
``` | 27.391304 | 277 | 0.760317 | por_Latn | 0.999485 |
c10256b5b160d5731e8125ef83c871ddacad6766 | 29,098 | md | Markdown | _posts/blog/2016-08-21-ddpg-rl.md | pemami4911/pemami4911.github.io | f942b1fe6d841cf4c3aeb56f780a8738defd667a | [
"MIT"
] | 6 | 2016-08-22T10:01:40.000Z | 2019-05-13T10:15:47.000Z | _posts/blog/2016-08-21-ddpg-rl.md | pemami4911/pemami4911.github.io | f942b1fe6d841cf4c3aeb56f780a8738defd667a | [
"MIT"
] | 5 | 2016-03-04T13:55:09.000Z | 2019-07-12T08:20:27.000Z | _posts/blog/2016-08-21-ddpg-rl.md | pemami4911/pemami4911.github.io | f942b1fe6d841cf4c3aeb56f780a8738defd667a | [
"MIT"
] | 4 | 2017-11-14T23:33:48.000Z | 2021-01-27T06:33:20.000Z | ---
layout: post
title: "Deep Deterministic Policy Gradients in TensorFlow"
date: 2016-08-21
category: blog
redirect_from:
- /blog_posts/2016/08/21/ddpg-rl.html
byline: An introduction to the Deep Deterministic Policy Gradient (DDPG) algorithm
---
<script type="text/javascript" async
src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-MML-AM_CHTML">
</script>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
TeX: { equationNumbers: { autoNumber: "AMS" } },
tex2jax: {inlineMath: [['$','$'], ['\\(','\\)']]}
});
</script>
---
By: Patrick Emami
## Introduction
Deep Reinforcement Learning has recently gained a lot of traction in the machine learning community due to the significant amount of progress that has been made in the past few years. Traditionally, reinforcement learning algorithms were constrained to tiny, discretized grid worlds, which seriously inhibited them from gaining credibility as being viable machine learning tools.
Here's a classic example from <a href="#References">Richard Sutton's book<a/>, which I will be referencing a lot.
{%
include image.html
img="/img/classic_rl.jpg"
caption="Cliff-walking task, Fig. 6.4 from [1]"
%}
After Deep Q-Networks <a href="#References">[3]</a> became a hit, people realized that deep learning methods could be used to solve high-dimensional problems. One of the subsequent challenges that the reinforcement learning community faced was figuring out how to deal with continuous action spaces. This is a significant obstacle, **since most interesting problems in robotic control, etc., fall into this category**. Logically, if you discretize your continuous action space too finely, you end up with the same curse of dimensionality problem as before. On the other hand, a naive discretization of the action space throws away valuable information concerning the geometry of the action domain.
{%
include image.html
img="/img/cartpole.gif"
caption="CartPole-v0 environment: OpenAI Gym"
%}
Google DeepMind has devised a solid algorithm for tackling the continuous action space problem. Building off the prior work of <a href="#References">[2]</a> on Deterministic Policy Gradients, they have produced a **policy-gradient** **actor-critic** algorithm called Deep Deterministic Policy Gradients (DDPG) <a href="#References">[4]</a> that is **off-policy** and **model-free**, and that uses some of the deep learning tricks that were introduced along with Deep Q-Networks (hence the "deep"-ness of DDPG). In this blog post, we're going to discuss how to implement this algorithm using Tensorflow and tflearn, and then evaluate it with [OpenAI Gym](https://github.com/openai/gym) on the pendulum environment. I'll also discuss some of the theory behind it. Regrettably, I can't start with introducing the basics of reinforcement learning since that would make this blog post much too long; however, Richard Sutton's book (linked above), as well as [David Silver's course](http://www0.cs.ucl.ac.uk/staff/d.silver/web/Teaching.html), are excellent resources to get going with RL.
## Lets Start With Some Theory
<a href="#Tensorflow">[Wait, I just want to skip to the Tensorflow part!]</a>
### Policy-Gradient Methods
Policy-Gradient (PG) algorithms optimize a policy end-to-end by computing noisy estimates of the gradient of the expected reward of the policy and then updating the policy in the gradient direction. Traditionally, PG methods have assumed a stochastic policy $\mu (a \| s)$, which gives a probability distribution over actions. Ideally, the algorithm sees lots of training examples of high rewards from good actions and negative rewards from bad actions. Then, it can increase the probability of the good actions. In practice, you tend to run into plenty of problems with vanilla-PG; for example, getting one reward signal at the end of a long episode of interaction with the environment makes it difficult to ascertain exactly which action was the good one. This is known as the *credit assignment problem*. For RL problems with continuous action spaces, vanilla-PG is all but useless. You can, however, get vanilla-PG to work with some RL domains that take in visual inputs and have discrete action spaces with a convolutional neural network representing your policy (talk about standing on the shoulders of giants!). There are extensions to the vanilla-PG algorithm such as [REINFORCE](https://link.springer.com/content/pdf/10.1007/BF00992696.pdf) and [Natural Policy Gradients](http://papers.nips.cc/paper/2073-a-natural-policy-gradient.pdf) that make the algorithm much more viable. For a first look into stochastic policy gradients, you can find an overview of the Stochastic Policy Gradient theorem in <a href="#References">[2]</a>, an in-depth blog post by Andrej Karpathy on [here](http://karpathy.github.io/2016/05/31/rl/).
### Actor-Critic Algorithms
{%
include image.html
img="/img/actor-critic.png"
caption="The actor-critic architecture, figure from [1]"
%}
The Actor-Critic learning algorithm is used to represent the policy function independently of the value function. The policy function structure is known as the _actor_, and the value function structure is referred to as the _critic_. The actor produces an action given the current state of the environment, and the critic produces a TD (Temporal-Difference) error signal given the state and resultant reward. If the critic is estimating the action-value function $Q(s,a)$, it will also need the output of the actor. The output of the critic drives learning in both the actor and the critic. In Deep Reinforcement Learning, neural networks can be used to represent the actor and critic structures.
### Off-Policy vs. On-Policy
Reinforcement Learning algorithms which are characterized as off-policy generally employ a separate behavior policy that is independent of the policy being improved upon; the behavior policy is used to simulate trajectories. A key benefit of this separation is that the behavior policy can operate by sampling all actions, whereas the estimation policy can be deterministic (e.g., greedy) <a href="#References">[1]</a>. Q-learning is an off-policy algorithm, since it updates the Q values without making any assumptions about the actual policy being followed. Rather, the Q-learning algorithm simply states that the Q-value corresponding to state $s(t)$ and action $a(t)$ is updated using the Q-value of the next state $s(t + 1)$ and the action $a(t + 1)$ that maximizes the Q-value at state $s(t + 1)$.
On-policy algorithms directly use the policy that is being estimated to sample trajectories during training.
### Model-free Algorithms
Model-free RL algorithms are those that make no effort to learn the underlying dynamics that govern how an agent interacts with the environment. In the case where the environment has a discrete state space and the agent has a discrete number of actions to choose from, a model of the dynamics of the environment is the 1-step transition matrix: $T(s(t + 1)\|s(t), a(t))$. This stochastic matrix gives all of the probabilities for arriving at a desired state given the current state and action. Clearly, for problems with high-dimensional state and action spaces, this matrix is incredibly expensive in space and time to compute. If your state space is the set of all possible 64 x 64 RGB images and your agent has 18 actions available to it, the transition matrix's size is $\|S \times S \times A\| \approx \|(68.7 \times 10^{9}) \times (68.7 \times 10^9) \times 18\|$, and at 32 bits per matrix element, thats around $3.4 \times 10^{14}$ GB to store it in RAM!
Rather than dealing with all of that, model-free algorithms directly estimate the optimal policy or value function through algorithms such as policy iteration or value iteration. This is much more computationally efficient. I should note that, if possible, obtaining and using a good approximation of the underlying model of the environment can only be beneficial. Be wary- using a bad approximation of a model of the environment will only bring you misery. Just as well, model-free methods generally require a larger number of training examples.
## The Meat and Potatoes of DDPG
At its core, DDPG is a policy gradient algorithm that uses a stochastic behavior policy for good exploration but estimates a **deterministic** target policy, which is much easier to learn. Policy gradient algorithms utilize a form of policy iteration: they evaluate the policy, and then follow the policy gradient to maximize performance. Since DDPG is off-policy and uses a deterministic target policy, this allows for the use of the Deterministic Policy Gradient theorem (which will be derived shortly). DDPG is an actor-critic algorithm as well; it primarily uses two neural networks, one for the actor and one for the critic. These networks compute action predictions for the current state and generate a temporal-difference (TD) error signal each time step. The input of the actor network is the current state, and the output is a single real value representing an action chosen from a **continuous** action space (whoa!). The critic's output is simply the estimated Q-value of the current state and of the action given by the actor. The deterministic policy gradient theorem provides the update rule for the weights of the actor network. The critic network is updated from the gradients obtained from the TD error signal.
Sadly, it turns out that tossing neural networks at DPG results in an algorithm that behaves poorly, resisting all of your most valiant efforts to get it to converge. The following are most likely some of the key conspirators:
1. In general, training and evaluating your policy and/or value function with thousands of temporally-correlated simulated trajectories leads to the introduction of enormous amounts of variance in your approximation of the true Q-function (the critic). The TD error signal is excellent at compounding the variance introduced by your bad predictions over time. It is highly suggested to use a replay buffer to store the **experiences** of the agent during training, and then randomly sample experiences to use for learning in order to break up the temporal correlations within different training episodes. This technique is known as **experience replay**. DDPG uses this.
2. Directly updating your actor and critic neural network weights with the gradients obtained from the TD error signal that was computed from both your replay buffer and the output of the actor and critic networks causes your learning algorithm to diverge (or to not learn at all). It was recently discovered that using a set of **target networks** to generate the **targets** for your TD error computation regularizes your learning algorithm and increases stability. Accordingly, here are the equations for the TD target $y_i$ and the loss function for the critic network:
$$
\begin{equation}
y_i = r_i + \gamma Q'(s_{i + 1}, \mu'(s_{i+1}|\theta^{\mu'})|\theta^{Q'})
\end{equation}
$$
$$
\begin{equation}
L = \frac{1}{N} \sum_{i} (y_i - Q(s_i, a_i | \theta^{Q})^2)
\end{equation}
$$
Here, a minibatch of size $N$ has been sampled from the replay buffer, with the $i$ index referring to the i'th sample. The target for the TD error computation, $y_i$, is computed from the sum of the immediate reward and the outputs of the target actor and critic networks, having weights $\theta^{\mu'}$ and $\theta^{Q'}$ respectively. Then, the critic loss can be computed w.r.t. the output $Q(s_i, a_i \| \theta^{Q})$ of the critic network for the i'th sample.
See <a href="#References">[3]</a> for more details on the use of target networks.
Now, as mentioned above, the weights of the critic network can be updated with the gradients obtained from the loss function in Eq. 2. Also, remember that the actor network is updated with the Deterministic Policy Gradient. Here lies the crux of DDPG! Silver, et al., <a href="#References">[2]</a> proved that the stochastic _policy gradient_ $\nabla_{\theta} \mu (a \| s, \theta)$, which is the gradient of the policy's performance, is equivalent to the deterministic policy gradient, which is given by:
$$
\begin{equation}
\nabla_{\theta^{\mu}} \mu \approx \mathbb{E}_{\mu'} \big [ \nabla_{a} Q(s, a|\theta^{Q})|_{s=s_t,a=\mu(s_t)} \nabla_{\theta^{\mu}} \mu(s|\theta^{\mu})|_{s=s_t} \big ]
\end{equation}
$$
Notice that the policy term in the expectation is not a distribution over actions. It turns out that all you need is the gradient of the output of the critic network w.r.t. the actions, multiplied by the gradient of the output of the actor network w.r.t. its parameters, averaged over a minibatch. Simple!
I think the proof of the deterministic policy gradient theorem is quite illuminating, so I'd like to demonstrate it here before moving on to the code.
<div class="theorem">
(Deterministic Policy Gradient Theorem).
Suppose the Markov Decision Process satisfies the appropriate conditions (see <a href="#References">[2]</a>). These imply that $\nabla_{\theta^{\mu}} \mu(s|\theta^{\mu})$ and $\nabla_{a} Q(s, a|\theta^{Q})$ exist and that the deterministic policy gradient exists. Then,
$$
\nabla_{\theta^{\mu}} \mu \approx\int_{S} \rho^{\mu'}(s_t) \nabla_{a} Q(s, a|\theta^{Q})|_{s=s_t, a=\mu(s_t)} \nabla_{\theta^{\mu}} \mu(s|\theta^{\mu})|_{s=s_t} ds
$$
$$
\begin{equation}
= \mathbb{E}_{\mu'} \big[ \nabla_{a} Q(s, a|\theta^{Q})|_{s=s_t, a=\mu(s_t)} \nabla_{\theta^{\mu}} \mu(s|\theta^{\mu})|_{s=s_t} \big]
\end{equation}
$$
</div>
<div class="proof">
For a greedy stochastic policy $ \mu (a | s, \theta)$ over a continuous action space, a global maximization step is required at every time step. Rather, we employ a deterministic policy $ \mu (s | \theta) $ and update the policy parameters by moving them in the direction of the gradient of the action-value function. We take an expectation to average over the suggested directions of improvement from each state w.r.t. the state distribution under the target policy $\mu'$, given by $\rho^{\mu'}(s)$.
$$
\begin{equation}
\theta^{\mu}_{k + 1} = \theta^{\mu}_k + \alpha \mathbb{E}_{\mu'^{k}} \big [ \nabla_{\theta} Q(s, \mu (s|\theta^{\mu}_k)|\theta^{Q}_k) \big ].
\end{equation}
$$
By applying the chain rule,
$$
\begin{equation}
\theta^{\mu}_{k + 1} = \theta^{\mu}_k + \alpha \mathbb{E}_{\mu'^{k}} \big [ \nabla_{a} Q(s, a|\theta^{Q}_k)|_{a=\mu(s|\theta^{\mu}_k)} \nabla_{\theta} \mu(s|\theta^{\mu}_k) \big ].
\end{equation}
$$
</div>
The expectation in the right-hand side of Eq. 6 is exactly what we want. In <a href="#References">[3]</a>, it is shown that the stochastic policy gradient converges to Eq. 4 in the limit as the variance of the stochastic policy gradient approaches 0. This is significant because it allows for all of the machinery for stochastic policy gradients to be applied to deterministic policy gradients.
<a name="Tensorflow"></a>
## Enough With The Chit Chat, Let's See Some Code!
Let's get started.
We're writing code to solve the Pendulum environment in OpenAI gym, which has a low-dimensional state space and a single continuous action within [-2, 2]. The goal is to swing up and balance the pendulum.
{%
include image.html
img="/img/pendulum.png"
%}
The first part is easy. Set up a data structure to represent your replay buffer. I recommend using a deque from python's collections library. The replay buffer will return a randomly chosen batch of experiences when queried.
{% highlight python %}
from collections import deque
import random
import numpy as np
class ReplayBuffer(object):
def __init__(self, buffer_size):
self.buffer_size = buffer_size
self.count = 0
self.buffer = deque()
def add(self, s, a, r, t, s2):
experience = (s, a, r, t, s2)
if self.count < self.buffer_size:
self.buffer.append(experience)
self.count += 1
else:
self.buffer.popleft()
self.buffer.append(experience)
def size(self):
return self.count
def sample_batch(self, batch_size):
'''
batch_size specifies the number of experiences to add
to the batch. If the replay buffer has less than batch_size
elements, simply return all of the elements within the buffer.
Generally, you'll want to wait until the buffer has at least
batch_size elements before beginning to sample from it.
'''
batch = []
if self.count < batch_size:
batch = random.sample(self.buffer, self.count)
else:
batch = random.sample(self.buffer, batch_size)
s_batch = np.array([_[0] for _ in batch])
a_batch = np.array([_[1] for _ in batch])
r_batch = np.array([_[2] for _ in batch])
t_batch = np.array([_[3] for _ in batch])
s2_batch = np.array([_[4] for _ in batch])
return s_batch, a_batch, r_batch, t_batch, s2_batch
def clear(self):
self.buffer.clear()
self.count = 0
{% endhighlight %}
Okay, lets define our actor and critic networks. We're going to use tflearn to condense the boilerplate code.
{% highlight python %}
class ActorNetwork(object):
...
def create_actor_network(self):
inputs = tflearn.input_data(shape=[None, self.s_dim])
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
net = tflearn.fully_connected(net, 300)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Final layer weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
out = tflearn.fully_connected(
net, self.a_dim, activation='tanh', weights_init=w_init)
# Scale output to -action_bound to action_bound
scaled_out = tf.multiply(out, self.action_bound)
return inputs, out, scaled_out
class CriticNetwork(object):
...
def create_critic_network(self):
inputs = tflearn.input_data(shape=[None, self.s_dim])
action = tflearn.input_data(shape=[None, self.a_dim])
net = tflearn.fully_connected(inputs, 400)
net = tflearn.layers.normalization.batch_normalization(net)
net = tflearn.activations.relu(net)
# Add the action tensor in the 2nd hidden layer
# Use two temp layers to get the corresponding weights and biases
t1 = tflearn.fully_connected(net, 300)
t2 = tflearn.fully_connected(action, 300)
net = tflearn.activation(
tf.matmul(net, t1.W) + tf.matmul(action, t2.W) + t2.b, activation='relu')
# linear layer connected to 1 output representing Q(s,a)
# Weights are init to Uniform[-3e-3, 3e-3]
w_init = tflearn.initializations.uniform(minval=-0.003, maxval=0.003)
out = tflearn.fully_connected(net, 1, weights_init=w_init)
return inputs, action, out
{% endhighlight %}
I would suggest placing these two functions in separate Actor and Critic classes, as shown above. The hyperparameter and layer details for the networks are in the appendix of the DDPG paper <a href="#References">[4]</a>. The networks for the low-dimensional state-space problems are pretty simple, though. For the actor network, the output is a tanh layer scaled to be between $[-b, +b], b \in \mathbb{R}$. This is useful when your action space is on the real line but is bounded and closed, as is the case for the pendulum task.
Notice that the critic network takes both the state and the action as inputs; however, the action input skips the first layer. This is a design decision that has experimentally worked well. Accommodating this with tflearn was a bit tricky.
Make sure to use Tensorflow placeholders, created by ```tflearn.input_data```, for the inputs. Leaving the first dimension of the placeholders as ```None``` allows you to train on batches of experiences.
You can simply call these creation methods twice, once to create the actor and critic networks that will be used for training, and again to create your target actor and critic networks.
You can create a Tensorflow Op to update the target network parameters like so:
{% highlight python %}
self.network_params = tf.trainable_variables()
self.target_network_params = tf.trainable_variables()[len(self.network_params):]
# Op for periodically updating target network with online network weights
self.update_target_network_params = \
[self.target_network_params[i].assign(tf.mul(self.network_params[i], self.tau) + \
tf.mul(self.target_network_params[i], 1. - self.tau))
for i in range(len(self.target_network_params))]
{% endhighlight %}
This looks a bit convoluted, but it's actually a great display of Tensorflow's flexibility. You're defining a Tensorflow Op, ```update_target_network_params```, that will copy the parameters of the online network with a mixing factor $\tau$. Make sure you're copying over the correct Tensorflow variables by checking what is being returned by ```tf.trainable_variables()```. You'll need to define this Op for both the actor and critic.
Let's define the gradient computation and optimization Tensorflow operations.
We'll use [ADAM](https://arxiv.org/abs/1412.6980) as our optimization method. It's sort of replaced SGD as the de-facto standard, now that it's implemented in so many plug-and-play deep-learning libraries such as tflearn and keras and tends to outperform it.
For the actor network...
{% highlight python %}
# This gradient will be provided by the critic network
self.action_gradient = tf.placeholder(tf.float32, [None, self.a_dim])
# Combine the gradients, dividing by the batch size to
# account for the fact that the gradients are summed over the
# batch by tf.gradients
self.unnormalized_actor_gradients = tf.gradients(
self.scaled_out, self.network_params, -self.action_gradient)
self.actor_gradients = list(map(lambda x: tf.div(x, self.batch_size), self.unnormalized_actor_gradients))
# Optimization Op
self.optimize = tf.train.AdamOptimizer(self.learning_rate).\
apply_gradients(zip(self.actor_gradients, self.network_params))
{% endhighlight %}
Notice how the gradients are combined. ```tf.gradients()``` makes it quite easy to implement the Deterministic Policy Gradient equation (Eq. 4). I negate the action-value gradient since we want the actor to follow the action-value gradients. Tensorflow will take the sum and average of the gradients of your minibatch.
Then, for the critic network...
{% highlight python %}
# Network target (y_i)
# Obtained from the target networks
self.predicted_q_value = tf.placeholder(tf.float32, [None, 1])
# Define loss and optimization Op
self.loss = tflearn.mean_square(self.predicted_q_value, self.out)
self.optimize = tf.train.AdamOptimizer(self.learning_rate).minimize(self.loss)
# Get the gradient of the net w.r.t. the action
self.action_grads = tf.gradients(self.out, self.action)
{% endhighlight %}
This is exactly Eq. 2. Make sure to grab the action-value gradients at the end there to pass to the policy network for gradient computation.
I like to encapsulate calls to my Tensorflow session to keep things organized and readable in my training code. For brevity's sake, I'll just show the ones for the actor network.
{% highlight python %}
...
def train(self, inputs, a_gradient):
self.sess.run(self.optimize, feed_dict={
self.inputs: inputs,
self.action_gradient: a_gradient
})
def predict(self, inputs):
return self.sess.run(self.scaled_out, feed_dict={
self.inputs: inputs
})
def predict_target(self, inputs):
return self.sess.run(self.target_scaled_out, feed_dict={
self.target_inputs: inputs
})
def update_target_network(self):
self.sess.run(self.update_target_network_params)
...
{% endhighlight %}
Now, lets show the main training loop and we'll be done!
{% highlight python %}
# Constants
TAU = 0.001
ACTOR_LEARNING_RATE = 0.0001
CRITIC_LEARNING_RATE = 0.001
BUFFER_SIZE = 1000000
MINIBATCH_SIZE = 64
MAX_EPISODES = 50000
MAX_EP_STEPS = 1000
GAMMA = 0.99
...
with tf.Session() as sess:
env = gym.make('Pendulum-v0')
state_dim = env.observation_space.shape[0]
action_dim = env.action_space.shape[0]
action_bound = env.action_space.high
# Ensure action bound is symmetric
assert (env.action_space.high == -env.action_space.low)
actor = ActorNetwork(sess, state_dim, action_dim, action_bound, \
ACTOR_LEARNING_RATE, TAU)
critic = CriticNetwork(sess, state_dim, action_dim, \
CRITIC_LEARNING_RATE, TAU, actor.get_num_trainable_vars())
actor_noise = OrnsteinUhlenbeckActionNoise(mu=np.zeros(action_dim))
# Initialize our Tensorflow variables
sess.run(tf.initialize_all_variables())
# Initialize target network weights
actor.update_target_network()
critic.update_target_network()
# Initialize replay memory
replay_buffer = ReplayBuffer(BUFFER_SIZE)
for i in range(MAX_EPISODES):
s = env.reset()
for j in range(MAX_EP_STEPS):
if RENDER_ENV:
env.render()
# Added exploration noise. In the literature, they use
# the Ornstein-Uhlenbeck stochastic process for control tasks
# that deal with momentum
a = actor.predict(np.reshape(s, (1, actor.s_dim))) + actor_noise()
s2, r, terminal, info = env.step(a[0])
replay_buffer.add(np.reshape(s, (actor.s_dim,)), np.reshape(a, (actor.a_dim,)), r, \
terminal, np.reshape(s2, (actor.s_dim,)))
# Keep adding experience to the memory until
# there are at least minibatch size samples
if replay_buffer.size() > MINIBATCH_SIZE:
s_batch, a_batch, r_batch, t_batch, s2_batch = \
replay_buffer.sample_batch(MINIBATCH_SIZE)
# Calculate targets
target_q = critic.predict_target(s2_batch, actor.predict_target(s2_batch))
y_i = []
for k in range(MINIBATCH_SIZE):
if t_batch[k]:
y_i.append(r_batch[k])
else:
y_i.append(r_batch[k] + GAMMA * target_q[k])
# Update the critic given the targets
predicted_q_value, _ = critic.train(s_batch, a_batch, np.reshape(y_i, (MINIBATCH_SIZE, 1)))
# Update the actor policy using the sampled gradient
a_outs = actor.predict(s_batch)
grads = critic.action_gradients(s_batch, a_outs)
actor.train(s_batch, grads[0])
# Update target networks
actor.update_target_network()
critic.update_target_network()
if terminal:
break
{% endhighlight %}
You'll want to add book-keeping to the code and arrange things a little more neatly- you can find the full code [here](https://github.com/pemami4911/deep-rl/tree/master/ddpg).
I used Tensorboard to view the total reward per episode and average max Q.
Tensorflow is a bit low-level, so it's definitely recommended to use wrappers like tflearn, keras, Tensorboard, etc., on top of it. I am quite pleased with TF though; it's no surprise that it got so popular so quickly.
## Looking Forward
{%
include image.html
img="/img/results.png"
caption="Results from running the code on Pendulum-v0"
%}
Despite the fact that I ran this code on the courageous CPU of my Macbook Air, it converged relatively quickly to a good solution. Some ways to potentially get better performance (besides running the code on a GPU lol):
1. Use a priority algorithm for sampling from the replay buffer instead of uniformly sampling. See my summary of [Prioritized Experience Replay](http://pemami4911.github.io/papersummaries/2016/01/26/prioritizing-experience-replay.html).
2. Experiment with different stochastic policies to improve exploration.
3. Use recurrent networks to capture temporal nuances within the environment.
The authors of DDPG also used convolutional neural networks to tackle control tasks of higher complexities. They were able to learn good policies with just pixel inputs, which is really cool.
<a name="References"></a>
## References
1. Sutton, Richard S., and Andrew G. Barto. [Reinforcement learning: An introduction](http://www.incompleteideas.net/book/the-book.html).
2. Silver, et al. [Deterministic Policy Gradients](http://jmlr.org/proceedings/papers/v32/silver14.pdf)
3. Mnih, et al. [Human-level control through deep reinforcement learning](http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html)
4. Lillicrap, et al. [Continuous control with Deep Reinforcement Learning](http://arxiv.org/pdf/1509.02971v2.pdf)
| 58.312625 | 1,632 | 0.729328 | eng_Latn | 0.991695 |
c102ba1cb359a1baa994ee9d9d6169f297826a23 | 239 | md | Markdown | README.md | betoRa/moodle32php56 | 4fda79e5a1d53325a691203cdf462bf85401a85c | [
"MIT"
] | null | null | null | README.md | betoRa/moodle32php56 | 4fda79e5a1d53325a691203cdf462bf85401a85c | [
"MIT"
] | null | null | null | README.md | betoRa/moodle32php56 | 4fda79e5a1d53325a691203cdf462bf85401a85c | [
"MIT"
] | null | null | null | # moodle32php56
Dockerfile y archivos necesarios para la construcción de una imagen Docker que ejecuta apache 2, php 5.6 y descarga los archivos de instalación de moodle 3.2
Este es un fork de https://github.com/jmhardison/docker-moodle
| 39.833333 | 157 | 0.803347 | spa_Latn | 0.969783 |
c103da6eea66277089c716b5a2a0f4a393184db4 | 4,695 | md | Markdown | docs-archive-a/2014/reporting-services/control-report-distribution.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-archive-a/2014/reporting-services/control-report-distribution.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-10-11T06:40:46.000Z | 2021-11-25T02:25:44.000Z | docs-archive-a/2014/reporting-services/control-report-distribution.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-09-29T08:51:43.000Z | 2021-11-23T02:36:18.000Z | ---
title: Controllare la distribuzione del report | Microsoft Docs
ms.custom: ''
ms.date: 03/07/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: reporting-services-native
ms.topic: conceptual
helpviewer_keywords:
- subscriptions [Reporting Services], report distribution
- subscriptions [Reporting Services], e-mail
- subscriptions [Reporting Services], file share delivery
- file share delivery [Reporting Services]
- e-mail [Reporting Services]
- subscriptions [Reporting Services], security
- mail [Reporting Services]
ms.assetid: 8f15e2c6-a647-4b05-a519-1743b5d8654c
author: maggiesMSFT
ms.author: maggies
manager: kfile
ms.openlocfilehash: de8a27801ef89f10bf303cee17d1c2d0e1081c5a
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 08/04/2020
ms.locfileid: "87623056"
---
# <a name="control-report-distribution"></a>Controllare la distribuzione dei report
È possibile configurare un server di report in modo da ridurre i rischi di sicurezza associati alla distribuzione tramite posta elettronica e condivisione file.
## <a name="securing-reports"></a>sicurezza di report
Per controllare la distribuzione dei report, è innanzitutto necessario proteggerli dall'accesso non autorizzato. Per poter essere utilizzato in una sottoscrizione, un report deve utilizzare un set di credenziali archiviate che rimangano invariate per i singoli recapiti. Gli utenti che possono accedere al report nel server di report possono eseguirlo ed eventualmente distribuirlo. Per evitare che ciò accada, è necessario limitare l'accesso ai report solo agli utenti per i quali è strettamente necessario. Per altre informazioni, vedere [proteggere i report e le risorse](security/secure-reports-and-resources.md) e [proteggere le cartelle](security/secure-folders.md).
I report con contenuto strettamente riservato che utilizzano la sicurezza dei database per autorizzare gli accessi non possono essere distribuiti tramite sottoscrizioni.
> [!IMPORTANT]
> I report vengono trasportati come file. I rischi e le precauzioni per i file equivalgono a quelli per i report che vengono salvati su disco o inviati come allegati. Un utente che accede a un file può distribuirlo o utilizzarlo a propria discrezione.
## <a name="controlling-e-mail-delivery"></a>Controllo del recapito tramite posta elettronica
È possibile configurare un server di report in modo che la distribuzione tramite posta elettronica sia circoscritta a domini host specifici. È possibile, ad esempio, fare in modo che un server di report recapiti un report solo ai domini indicati nel file di configurazione RSReportServer.
È inoltre possibile definire le impostazioni di configurazione in modo che il campo **A** di una sottoscrizione venga nascosto. In questo caso, i report verranno recapitati solo all'utente che definisce la sottoscrizione. Non è tuttavia possibile impedire a un utente che ha ricevuto un report di inoltrarlo.
Il modo più efficace per controllare la distribuzione dei report consiste nel configurare un server di report in modo che invii unicamente un URL del server di report. Il server di report utilizza l'autenticazione di Windows e un modello di autorizzazione basata sui ruoli per controllare l'accesso a un report. Se un utente riceve per errore tramite posta elettronica un report che non è autorizzato a visualizzare, il server di report non visualizzerà il report.
## <a name="controlling-file-share-delivery"></a>Controllo del recapito tramite condivisione file
Il recapito tramite condivisione file viene utilizzato per inviare un report a un file presente su un disco rigido. Dopo essere stato salvato su disco, un file non è più soggetto al modello di sicurezza basata sui ruoli utilizzato dal server di report per controllare gli accessi degli utenti. Per proteggere un report recapitato a un disco rigido, è possibile associare elenchi di controllo di accesso (ACL) al file stesso o alla cartella che lo contiene. In base al sistema operativo, potrebbero essere disponibili ulteriori opzioni di sicurezza.
## <a name="see-also"></a>Vedere anche
[Configurare un server di report per il recapito tramite posta elettronica (Configuration Manager SSRS)](../../2014/sql-server/install/configure-a-report-server-for-e-mail-delivery-ssrs-configuration-manager.md)
[Sottoscrizioni e recapito (Reporting Services)](subscriptions/subscriptions-and-delivery-reporting-services.md)
[Creare e gestire sottoscrizioni per server di report in modalità nativa](../../2014/reporting-services/create-manage-subscriptions-native-mode-report-servers.md)
| 85.363636 | 675 | 0.802343 | ita_Latn | 0.997218 |
c10489e9bfb95ef9a5a886051c459aba4f79bde4 | 9,674 | md | Markdown | articles/search/search-query-odata-comparison-operators.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/search/search-query-odata-comparison-operators.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/search/search-query-odata-comparison-operators.md | cristhianu/azure-docs.es-es | 910ba6adc1547b9e94d5ed4cbcbe781921d009b7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Referencia de los operadores de comparación de OData
titleSuffix: Azure Cognitive Search
description: Operadores de comparación de OData, eq, ne, gt, lt, ge y le, en consultas de Azure Cognitive Search.
manager: nitinme
author: brjohnstmsft
ms.author: brjohnst
ms.service: cognitive-search
ms.topic: conceptual
ms.date: 11/04/2019
translation.priority.mt:
- de-de
- es-es
- fr-fr
- it-it
- ja-jp
- ko-kr
- pt-br
- ru-ru
- zh-cn
- zh-tw
ms.openlocfilehash: 068e2ec822f0a292ac83b3e48049830eb77b49f6
ms.sourcegitcommit: b050c7e5133badd131e46cab144dd5860ae8a98e
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/23/2019
ms.locfileid: "72793384"
---
# <a name="odata-comparison-operators-in-azure-cognitive-search---eq-ne-gt-lt-ge-and-le"></a>Operadores de comparación de OData en Azure Cognitive Search: `eq`, `ne`, `gt`, `lt`, `ge` y `le`
La operación más básica en una [expresión de filtro de OData](query-odata-filter-orderby-syntax.md) de Azure Cognitive Search consiste en comparar un campo con un valor determinado. Hay dos tipos de comparación posibles: comparación de igualdad y comparación de rango. Puede usar los operadores siguientes para comparar un campo con un valor constante:
Operadores de igualdad:
- `eq`: se comprueba si un campo es **igual a** un valor constante
- `ne`: se comprueba si campo **no es igual a** un valor constante
Operadores de rango:
- `gt`: se comprueba si un campo es **mayor que** un valor constante
- `lt`: se comprueba si un campo es **menor que** un valor constante
- `ge`: se comprueba si un campo es **mayor o igual que** un valor constante
- `le`: se comprueba si un campo es **menor o igual que** un valor constante
Puede usar los operadores de rango en combinación con los [operadores lógicos](search-query-odata-logical-operators.md) para comprobar si un campo está dentro de un intervalo de valores determinado. Vea los [ejemplos](#examples) más adelante en este artículo.
> [!NOTE]
> Si lo prefiere, puede colocar el valor constante en el lado izquierdo del operador y el nombre del campo en el lado derecho. Para los operadores de rango, se invierte el significado de la comparación. Por ejemplo, si es el valor constante está a la izquierda, `gt` podría probar si el valor constante es mayor que el campo. También puede usar los operadores de comparación para comparar el resultado de una función, como `geo.distance`, con un valor. Para las funciones booleanas como `search.ismatch`, la comparación del resultado con `true` o `false` es opcional.
## <a name="syntax"></a>Sintaxis
En la siguiente EBNF ([forma de Backus-Naur extendida](https://en.wikipedia.org/wiki/Extended_Backus–Naur_form)) se define la gramática de una expresión de OData en la que se usan los operadores de comparación.
<!-- Upload this EBNF using https://bottlecaps.de/rr/ui to create a downloadable railroad diagram. -->
```
comparison_expression ::=
variable_or_function comparison_operator constant |
constant comparison_operator variable_or_function
variable_or_function ::= variable | function_call
comparison_operator ::= 'gt' | 'lt' | 'ge' | 'le' | 'eq' | 'ne'
```
También está disponible un diagrama de sintaxis interactivo:
> [!div class="nextstepaction"]
> [Diagrama de sintaxis de OData para Azure Cognitive Search](https://azuresearch.github.io/odata-syntax-diagram/#comparison_expression)
> [!NOTE]
> Consulte [Referencia de sintaxis de expresiones OData para Azure Cognitive Search](search-query-odata-syntax-reference.md) para obtener la EBNF completa.
Hay dos formas de expresiones de comparación. La única diferencia entre las dos es si la constante aparece en el lado izquierdo o derecho del operador. La expresión en el otro lado del operador debe ser una **variable** o una llamada de función. Una variable puede ser un nombre de campo o una variable de rango en el caso de una [expresión lambda](search-query-odata-collection-operators.md).
## <a name="data-types-for-comparisons"></a>Tipos de datos para las comparaciones
Los tipos de datos en ambos lados de un operador de comparación deben ser compatibles. Por ejemplo, si el lado izquierdo es un campo de tipo `Edm.DateTimeOffset`, el lado derecho debe ser una constante de fecha y hora. Los tipos de datos numéricos son más flexibles. Puede comparar variables y funciones de cualquier tipo numérico con constantes de cualquier otro tipo numérico, con algunas limitaciones, como se describe en la tabla siguiente.
| Tipo de variable o función | Tipo de valor constante | Limitaciones |
| --- | --- | --- |
| `Edm.Double` | `Edm.Double` | La comparación está sujeta a [reglas especiales para `NaN`](#special-case-nan) |
| `Edm.Double` | `Edm.Int64` | La constante se convierte a `Edm.Double`, lo que provoca una pérdida de precisión para los valores de gran magnitud |
| `Edm.Double` | `Edm.Int32` | N/D |
| `Edm.Int64` | `Edm.Double` | No se permiten las comparaciones con `NaN`, `-INF` o `INF` |
| `Edm.Int64` | `Edm.Int64` | N/D |
| `Edm.Int64` | `Edm.Int32` | La constante se convierte a `Edm.Int64` antes de la comparación |
| `Edm.Int32` | `Edm.Double` | No se permiten las comparaciones con `NaN`, `-INF` o `INF` |
| `Edm.Int32` | `Edm.Int64` | N/D |
| `Edm.Int32` | `Edm.Int32` | N/D |
Para las comparaciones que no están permitidas, como la de un campo de tipo `Edm.Int64` con `NaN`, la API REST de Azure Cognitive Search devolverá un error "HTTP 400: Solicitud incorrecta".
> [!IMPORTANT]
> Aunque las comparaciones de tipos numéricos son flexibles, se recomienda encarecidamente escribir comparaciones en filtros para que el valor constante sea del mismo tipo de datos que la variable o función con la que se compara. Esto es especialmente importante cuando se mezclan valores enteros y de punto flotante, en los que las conversiones implícitas pueden perder precisión.
<a name="special-case-nan"></a>
### <a name="special-cases-for-null-and-nan"></a>Casos especiales para `null` y `NaN`
Al usar los operadores de comparación, es importante recordar que en Azure Cognitive Search todos los campos que no son de colección pueden ser `null`. En la tabla siguiente se muestran todos los resultados posibles de una expresión de comparación en la que cualquiera de los lados puede ser `null`:
| Operator | Resultado cuando solo el campo o la variable es `null` | Resultado cuando solo la constante es `null` | Resultado cuando el campo o la variable y la constante son `null` |
| --- | --- | --- | --- |
| `gt` | `false` | HTTP 400: Error de solicitud incorrecta | HTTP 400: Error de solicitud incorrecta |
| `lt` | `false` | HTTP 400: Error de solicitud incorrecta | HTTP 400: Error de solicitud incorrecta |
| `ge` | `false` | HTTP 400: Error de solicitud incorrecta | HTTP 400: Error de solicitud incorrecta |
| `le` | `false` | HTTP 400: Error de solicitud incorrecta | HTTP 400: Error de solicitud incorrecta |
| `eq` | `false` | `false` | `true` |
| `ne` | `true` | `true` | `false` |
En resumen, `null` solo es igual a sí mismo, y no es menor o mayor que ningún otro valor.
Si el índice tiene campos de tipo `Edm.Double` y carga valores `NaN` a esos campos, tendrá que tenerlo en cuenta al escribir filtros. Azure Cognitive Search implementa el estándar IEEE 754 para el control de valores `NaN`, y las comparaciones con esos valores generan resultados que no son obvios, como se muestra en la tabla siguiente.
| Operator | Resultado cuando al menos un operando es `NaN` |
| --- | --- |
| `gt` | `false` |
| `lt` | `false` |
| `ge` | `false` |
| `le` | `false` |
| `eq` | `false` |
| `ne` | `true` |
En resumen, `NaN` no es igual a ningún valor, incluido él mismo.
### <a name="comparing-geo-spatial-data"></a>Comparación de datos geoespaciales
No se puede comparar directamente un campo de tipo `Edm.GeographyPoint` con un valor constante, pero se puede usar la función `geo.distance`. Esta función devuelve un valor de tipo `Edm.Double`, de modo que puede compararlo con un valor numérico constante para filtrar según la distancia con respecto a coordenadas geoespaciales constantes. Vea los [ejemplos](#examples) siguientes.
### <a name="comparing-string-data"></a>Comparación de datos de cadena
En los filtros, las cadenas se pueden comparar para obtener coincidencias exactas con los operadores `eq` y `ne`. Estas comparaciones distinguen mayúsculas de minúsculas.
## <a name="examples"></a>Ejemplos
Comparar documentos donde el campo `Rating` esté entre 3 y 5, inclusive:
Rating ge 3 and Rating le 5
Comparar documentos donde el campo `Location` está a menos de dos kilómetros de la latitud y longitud especificadas:
geo.distance(Location, geography'POINT(-122.031577 47.578581)') lt 2.0
Comparar documentos donde el campo `LastRenovationDate` es mayor o igual que el 1 de enero de 2015, medianoche (UTC):
LastRenovationDate ge 2015-01-01T00:00:00.000Z
Comparar documentos donde el campo `Details/Sku` no es `null`:
Details/Sku ne null
Comparar documentos para hoteles donde al menos una habitación tenga el tipo "Deluxe Room" (Habitación Deluxe), donde la cadena del campo `Rooms/Type` coincide exactamente con el filtro:
Rooms/any(room: room/Type eq 'Deluxe Room')
## <a name="next-steps"></a>Pasos siguientes
- [Filtros de Azure Cognitive Search](search-filters.md)
- [Información general sobre el lenguaje de expresiones OData para Azure Cognitive Search](query-odata-filter-orderby-syntax.md)
- [Referencia de sintaxis de expresiones OData para Azure Cognitive Search](search-query-odata-syntax-reference.md)
- [Búsqueda de documentos (API REST de Azure Cognitive Search)](https://docs.microsoft.com/rest/api/searchservice/Search-Documents)
| 59.349693 | 567 | 0.749431 | spa_Latn | 0.973458 |
c10582ad78256fc41458f3fd551d9cf53242544f | 5,383 | md | Markdown | src/pages/blog/my-2020-react-startup-process.md | joshfinnie/joshfinnie.astro | 8f2510433bb14d410bcb94f4989f92e5c8bad5e9 | [
"MIT"
] | null | null | null | src/pages/blog/my-2020-react-startup-process.md | joshfinnie/joshfinnie.astro | 8f2510433bb14d410bcb94f4989f92e5c8bad5e9 | [
"MIT"
] | null | null | null | src/pages/blog/my-2020-react-startup-process.md | joshfinnie/joshfinnie.astro | 8f2510433bb14d410bcb94f4989f92e5c8bad5e9 | [
"MIT"
] | null | null | null | ---
title: "My 2020 React.js Start-up Process"
date: "2020-07-08"
tags:
- "javascript"
- "react.js"
- "tutorial"
path: "/blog/my-2020-react-startup-process"
heroImage: "/assets/blog/typewriter.jpg"
expires: true
layout: '../../layouts/BlogPost.astro'
unsplash: 'RetroSuppy'
unsplashURL: 'retrosupply'
---
Over the past couple of years I have narrowed my frontend workflow to React with Typescript. There are a lot of frameworks that are fighting for the top spot of one's frontend workflow. In the end, I am most comfortable with React for rapid development so it is the one I will focus on here. In this blog post I will walk through my 2020 React workflow. This post will help you get up and running with the same setup that I am currently using.
## Installing React with Typescript
First, we want to create a project using the `create-react-app` command-line application. A semi-recent addition to the Node ecosystem is an alternative to NPM called NPX. NPX is a command-line application that comes with NPM. It will allow you to run NPM packages without going through the process of installing it. This allows you to run `create-react-app` without the hassle of building a `package.json` file. It is also a good way of keeping these one-off tools up-to-date. The following NPX command is all we need to do to set up our boilerplate React app with Typescript. Let’s run the following commands:
```bash
$ npx create-react-app my-project --template typescript
$ cd my-project
$ npm start
```
This is a screenshot of what see with when we visit http://localhost:3000 after running the above commands.
<img width="100%" loading='lazy' src='/assets/blog/hello-world.png' alt="React's Hello World!" />
## Installing ESLint/Prettier
The next bit of tech that I always use is Prettier and ESLint. I am notorious at writing bad code so having these two packages keep a watch on me is crucial. Below are the steps I use to install Prettier and ESLint:
```bash
$ npm install --save-dev eslint prettier
$ ./node_modules/eslint/bin/eslint.js --init
```
```
? How would you like to use ESLint? To check syntax, find problems, and enforce code style
? What type of modules does your project use? JavaScript modules (import/export)
? Which framework does your project use? React
? Does your project use TypeScript? Yes
? Where does your code run? Browser
? How would you like to define a style for your project? Use a popular style guide
? Which style guide do you want to follow? Airbnb: https://github.com/airbnb/javascript
? What format do you want your config file to be in? JavaScript
? Would you like to install them now with npm? Yes
```
After running the ESLint init, I get the following configuration:
```javascript
module.exports = {
env: {
browser: true,
es6: true,
},
extends: [
'plugin:react/recommended',
'airbnb',
],
globals: {
Atomics: 'readonly',
SharedArrayBuffer: 'readonly',
},
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaFeatures: {
jsx: true,
},
ecmaVersion: 2018,
sourceType: 'module',
},
plugins: [
'react',
'@typescript-eslint',
],
rules: {
},
};
```
There are some things that I add to my `.eslintrc.json` file to make it work with my Vim setup. Not sure if it's needed everywhere, but I felt like I should share:
```diff
...
+ settings: {
+ "import/resolver": {
+ "node": {
+ "extensions": [".js", ".jsx", ".ts", ".tsx"]
+ }
+ }
+ },
extends: [
+ 'plugin:import/typescript',
'plugin:react/recommended',
...
rules: {
+ "react/jsx-filename-extension": [1, {"extensions": [".tsx", ".jsx"]}],
+ "import/extensions": ["error", "ignorePackages", {
+ "ts": "never",
+ "tsx": "never",
+ "js": "never",
+ "jsx": "never",
+ "mjs": "never"
}]
...
```
## Component Layout
The last major change I make is that I update the file structure of the `src` directory. I find that making a few changes helps me collect my thoughts and lets me write cleaner React code. Below is the structure of the `src` file generated with the `create-react-app` script:
```bash
src
├── App.css
├── App.test.tsx
├── App.tsx
├── index.css
├── index.tsx
├── logo.svg
├── react-app-env.d.ts
├── serviceWorker.ts
└── setupTests.ts
```
I find it very beneficial to have a separate `components` folder. The separate folder lets me isolate my components with the rest of my javascript. After moving around the components and getting more of the application developed this is what my folder structure looks like:
```bash
$ tree src
src
├── utilities
│ └── fetch.ts
├── components
│ ├── app
│ │ ├── app.css
│ │ ├── app.tsx
│ │ ├── app.test.tsx
│ │ ├── index.tsx
│ │ └── logo.svg
│ └── loginForm
│ ├── Form.css
│ ├── Form.tsx
│ ├── index.ts
│ ├── submitForm.ts
│ ├── submitForm.test.ts
│ ├── useForm.ts
│ ├── useForm.test.ts
│ ├── validateForm.ts
│ └── validateForm.test.ts
├── index.css
├── index.tsx
├── react-app-env.d.ts
├── serviceWorker.ts
└── setupTests.ts
```
## Conclusion
This was a quick overview on what I do to setup a React application from scratch in 2020. Does this match your workflow? What could I be doing to make this better? If you have any comments or suggestions, feel free to find me on [Twitter](https://twitter.com/joshfinnie)!
| 32.233533 | 611 | 0.683262 | eng_Latn | 0.985882 |
c106083d04361ba676bd4c1fb44f9cf2e8bbb594 | 847 | md | Markdown | content/es/docs/getting-started/_index.md | vfarcic/jx-docs-1 | 49b59fd26c83c6a44b6676f096195cbcc2557664 | [
"Apache-2.0"
] | null | null | null | content/es/docs/getting-started/_index.md | vfarcic/jx-docs-1 | 49b59fd26c83c6a44b6676f096195cbcc2557664 | [
"Apache-2.0"
] | null | null | null | content/es/docs/getting-started/_index.md | vfarcic/jx-docs-1 | 49b59fd26c83c6a44b6676f096195cbcc2557664 | [
"Apache-2.0"
] | null | null | null | ---
title: "Comenzando"
linkTitle: "Comenzando"
weight: 2
description: >
¿Cómo ponerse en marcha rápidamente con Jenkins X?
---
La forma más sencilla de comenzar es a través de los [Tutoriales de Google Cloud](/es/docs/guides/managing-jx/tutorials/google-hosted/).
De lo contrario, primero deberá instalar la [herramienta de línea de comandos jx](/docs/getting-started/setup/install/) en su máquina.
Puede utilizar la [línea de comando jx](/commands/jx/#jx) para [crear un nuevo clúster de Kubernetes](/docs/getting-started/setup/create-cluster/) con Jenkins X instalado automáticamente.
Si ya tiene un clúster de Kubernetes existente, [instale Jenkins X en su clúster de Kubernetes](/getting-started/install-on-cluster/).
Cuando tenga un clúster Kubernetes con Jenkins X instalado, consulte sus [próximos pasos](/es/docs/getting-started/). | 49.823529 | 187 | 0.77686 | spa_Latn | 0.904916 |
c106de92974cf9300b8faf80e650058b49e6afb4 | 45 | md | Markdown | README.md | Arteaga2k/2kmovies | 04eba192362d7fefbe28fbb77a239fb67866ff16 | [
"MIT"
] | null | null | null | README.md | Arteaga2k/2kmovies | 04eba192362d7fefbe28fbb77a239fb67866ff16 | [
"MIT"
] | null | null | null | README.md | Arteaga2k/2kmovies | 04eba192362d7fefbe28fbb77a239fb67866ff16 | [
"MIT"
] | null | null | null | # 2kmovies
https://k2movie.herokuapp.com/
| 9 | 30 | 0.711111 | kor_Hang | 0.718583 |
c106fd8e4718e79892e2f66964a73e969217cb67 | 355 | md | Markdown | src/components/sui-link/readme.md | vinicius-pretto/shortly-ui | a60a997c113aab58311f0c919b0240f327317dae | [
"MIT"
] | null | null | null | src/components/sui-link/readme.md | vinicius-pretto/shortly-ui | a60a997c113aab58311f0c919b0240f327317dae | [
"MIT"
] | 3 | 2021-05-11T16:52:08.000Z | 2022-02-19T03:55:56.000Z | src/components/sui-link/readme.md | vinicius-pretto/shortly-ui | a60a997c113aab58311f0c919b0240f327317dae | [
"MIT"
] | null | null | null | # Link
## Properties
| Property | Type | Default | Description |
| -------- | ---------- | ----------- | ----------------------- |
| href | `string` | `undefined` | Link address |
| variant | `"button"` | `undefined` | Applies `button` styles |
## How to use
```bash
<sui-link href="/">Link text</sui-link>
```
| 23.666667 | 65 | 0.430986 | yue_Hant | 0.484473 |
c1075da902ea0b48b2df7b32f8c6879bc3f5c8a5 | 1,740 | md | Markdown | docs/zh-CN/sql-reference/sql-statements/Data Manipulation/SHOW SNAPSHOT.md | kaiker19/incubator-doris | f4c5c6ccc650012a0db7ddda8a38f4c65cc5c9be | [
"Apache-2.0"
] | 3,562 | 2018-08-30T05:26:10.000Z | 2022-03-31T10:01:56.000Z | docs/zh-CN/sql-reference/sql-statements/Data Manipulation/SHOW SNAPSHOT.md | kaiker19/incubator-doris | f4c5c6ccc650012a0db7ddda8a38f4c65cc5c9be | [
"Apache-2.0"
] | 5,199 | 2018-09-11T07:57:21.000Z | 2022-03-31T16:17:50.000Z | docs/zh-CN/sql-reference/sql-statements/Data Manipulation/SHOW SNAPSHOT.md | kaiker19/incubator-doris | f4c5c6ccc650012a0db7ddda8a38f4c65cc5c9be | [
"Apache-2.0"
] | 1,234 | 2018-08-31T09:34:54.000Z | 2022-03-31T06:01:02.000Z | ---
{
"title": "SHOW SNAPSHOT",
"language": "zh-CN"
}
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# SHOW SNAPSHOT
## description
该语句用于查看仓库中已存在的备份。
语法:
SHOW SNAPSHOT ON `repo_name`
[WHERE SNAPSHOT = "snapshot" [AND TIMESTAMP = "backup_timestamp"]];
说明:
1. 各列含义如下:
Snapshot: 备份的名称
Timestamp: 对应备份的时间版本
Status: 如果备份正常,则显示 OK,否则显示错误信息
2. 如果指定了 TIMESTAMP,则会额外显示如下信息:
Database: 备份数据原属的数据库名称
Details: 以 Json 的形式,展示整个备份的数据目录及文件结构
## example
1. 查看仓库 example_repo 中已有的备份:
SHOW SNAPSHOT ON example_repo;
2. 仅查看仓库 example_repo 中名称为 backup1 的备份:
SHOW SNAPSHOT ON example_repo WHERE SNAPSHOT = "backup1";
2. 查看仓库 example_repo 中名称为 backup1 的备份,时间版本为 "2018-05-05-15-34-26" 的详细信息:
SHOW SNAPSHOT ON example_repo
WHERE SNAPSHOT = "backup1" AND TIMESTAMP = "2018-05-05-15-34-26";
## keyword
SHOW, SNAPSHOT
| 30 | 76 | 0.672414 | eng_Latn | 0.828126 |
c10851778b5f3e136dc5d96994a998cce42f1fa2 | 2,924 | md | Markdown | README.md | FluentCoding/microdiff | c8efe46b888f3078a01634bedfa5101cc1e4b4af | [
"MIT"
] | 1 | 2021-12-24T06:47:06.000Z | 2021-12-24T06:47:06.000Z | README.md | wangxiaoer5200/microdiff | 0b4c00ef67e0f4aa3d867e7f6eb1315d525efa1a | [
"MIT"
] | null | null | null | README.md | wangxiaoer5200/microdiff | 0b4c00ef67e0f4aa3d867e7f6eb1315d525efa1a | [
"MIT"
] | null | null | null | <div align="center">

Microdiff is a tiny (currently <1kb), fast, zero dependency object and array comparison library. It is significantly faster than most other deep comparison libraries, and has full TypeScript support.
  
</div>
# Features
- 🚀 More than double the speed of other object diff libraries
- 📦 Extremely lightweight, <1kb minified
- 🌎 Supports Deno, Node, the web, and even service workers. Also comes with built in Typescript types
- 🔰 Very easy to use, having just a single `diff()` function
- 📅 Full support for objects like `new Date()` and `new RegExp()`
# Get started
First, install Microdiff
```
npm i microdiff
```
If you are using Deno, you can import it from Deno.land with the link `https://deno.land/x/microdiff@VERSION/index.ts` (remember to change `@VERSION` to the version you want to use).
After you install it, simply import it and run it on two objects.
```js
import diff from "microdiff";
const obj1 = {
originalProperty: true,
};
const obj2 = {
originalProperty: true,
newProperty: "new",
};
console.log(diff(obj1, obj2));
// [{type: "CREATE", path: ["newProperty"], value: "new"}]
```
If you are using CommonJS, you can import it like this:
```js
const diff = require("microdiff").default;
```
There are three different types of changes. `CREATE`, `REMOVE`, and `CHANGE`.
The `path` property gives a path to the property in the new object (or the old object in the case of `REMOVE`).
Each element in the paths is a key to the next property a level deeper until you get to the property changed, and it is string or a number, depending on whether the object is an Array or Object (Objects with number keys will still be strings).
The `value` property exists in types `CREATE` and `CHANGE`, and it contains the value of the property added/changed.
# Cycles support
By default cycles are supported, but if you are sure that the object has no cycles (for example if you are parsing JSON) you can disable cycles using the `cyclesFix` option.
```js
diff(obj1, obj2, { cyclesFix: false });
```
# Benchmarks
```
Benchmarks: Small object
deep-diff: 17929ns - 409% slower
deep-object-diff: 10763ns - 206% slower
jsdiff: 79700ns - 2164% slower
microdiff: 3520ns - Fastest
Benchmarks: Large Object
deep-diff: 272887ns - 259% slower
deep-object-diff: 160019ns - 111% slower
jsdiff: 1688294ns - 2123% slower
microdiff: 75934ns - Fastest
```
These benchmarks are currently only for one small object and a very large object, so they might not be accurate. I will be working on creating benchmarks with more varying types.
| 35.658537 | 268 | 0.74487 | eng_Latn | 0.995064 |
c109087fffe2c073512f8e1b121d2e77671ea638 | 1,642 | md | Markdown | _posts/benderliz/2021-10-20-LizBenderTrinketPost.md | inf380p/inf380p.github.io | 7c3805ea704bf22e71497496308af7a2f8ee810e | [
"CC-BY-2.0"
] | 3 | 2020-10-14T19:18:26.000Z | 2021-08-29T19:31:31.000Z | _posts/benderliz/2021-10-20-LizBenderTrinketPost.md | inf380p/inf380p.github.io | 7c3805ea704bf22e71497496308af7a2f8ee810e | [
"CC-BY-2.0"
] | 86 | 2020-08-26T19:44:30.000Z | 2021-12-03T18:32:29.000Z | _posts/benderliz/2021-10-20-LizBenderTrinketPost.md | inf380p/inf380p.github.io | 7c3805ea704bf22e71497496308af7a2f8ee810e | [
"CC-BY-2.0"
] | 20 | 2020-08-26T19:20:58.000Z | 2021-10-20T17:48:04.000Z | ---
layout: post
author: benderliz
title: "Liz's Trinket Post"
---
Here is my example that uses a dictionary as a counter:
<iframe src="https://trinket.io/embed/python/b129d32d14" width="100%" height="600" frameborder="0" marginwidth="0" marginheight="0" allowfullscreen></iframe>
This code creates an empty dictionary to store words from a given phrase, and then splits the phrase to be stored into individual words:
```
phrase = "Writing programs or programming is a very creative and rewarding activity You can write programs for many reasons ranging from making your living to solving a difficult data analysis problem to having fun to helping someone else solve a problem This book assumes that {\em everyone} needs to know how to program and that once you know how to program, you will figure out what you want to do with your newfound skills"
word_dictionary={}
words = phrase.split()
```
The following code then uses a ``for loop`` to iterate through each word, setting it to lowercase.
```
for word in words:
word = word.lower()
```
Then, the words are stored in the dictionary. We use the ``in operator`` to check that the string is in the dictionary.
```
if word in word_dictionary:
word_dictionary[word]+=1
else:
word_dictionary[word]=1
print(word_dictionary)
```
Overall, this is a simple example of using a dictionary to count the number of times each word is used in a given phrase. This example was one of the first lightbulb moments I had in class regarding how to iterate through ``for loops`` to count something, and then use dictionaries to store it.
I am practicing making an edit here!
| 49.757576 | 429 | 0.754568 | eng_Latn | 0.99955 |
c1094e0de19c9d6a361480a9ac7991fc1518e3d8 | 48 | md | Markdown | README.md | Himmelt/AutoFishing | 7727b0b3b70fcfa5c136a2f4ba980c13cc6a41ad | [
"MIT"
] | null | null | null | README.md | Himmelt/AutoFishing | 7727b0b3b70fcfa5c136a2f4ba980c13cc6a41ad | [
"MIT"
] | null | null | null | README.md | Himmelt/AutoFishing | 7727b0b3b70fcfa5c136a2f4ba980c13cc6a41ad | [
"MIT"
] | null | null | null | # AutoFishing
Minecraft Auto-fishing Client Mod
| 16 | 33 | 0.833333 | yue_Hant | 0.328661 |
c10a805f20406e716a26a75c176ce39f18a0af53 | 6,554 | md | Markdown | doc/compression_cookbook.md | samotarnik/grpc | 3278bdceda8030d5aa130f12765e5f07263c860d | [
"Apache-2.0"
] | 36,552 | 2015-02-26T17:30:13.000Z | 2022-03-31T22:41:33.000Z | doc/compression_cookbook.md | SanjanaSingh897/grpc | 2d858866eb95ce5de8ccc8c35189a12733d8ca79 | [
"Apache-2.0"
] | 23,536 | 2015-02-26T17:50:56.000Z | 2022-03-31T23:39:42.000Z | doc/compression_cookbook.md | SanjanaSingh897/grpc | 2d858866eb95ce5de8ccc8c35189a12733d8ca79 | [
"Apache-2.0"
] | 11,050 | 2015-02-26T17:22:10.000Z | 2022-03-31T10:12:35.000Z | # gRPC (Core) Compression Cookbook
## Introduction
This document describes compression as implemented by the gRPC C core. See [the
full compression specification](compression.md) for details.
### Intended Audience
Wrapped languages developers, for the purposes of supporting compression by
interacting with the C core.
## Criteria for GA readiness
1. Be able to set compression at [channel](#per-channel-settings),
[call](#per-call-settings) and [message](#per-message-settings) level.
In principle this API should be based on _compression levels_ as opposed to
algorithms. See the discussion [below](#level-vs-algorithms).
1. Have unit tests covering [the cases from the
spec](https://github.com/grpc/grpc/blob/master/doc/compression.md#test-cases).
1. Interop tests implemented and passing on Jenkins. The two relevant interop
test cases are
[large_compressed_unary](https://github.com/grpc/grpc/blob/master/doc/interop-test-descriptions.md#large_compressed_unary)
and
[server_compressed_streaming](https://github.com/grpc/grpc/blob/master/doc/interop-test-descriptions.md#server_compressed_streaming).
## Summary Flowcharts
The following flowcharts depict the evolution of a message, both _incoming_ and
_outgoing_, irrespective of the client/server character of the call. Aspects
still not symmetric between clients and servers (e.g. the [use of compression
levels](https://github.com/grpc/grpc/blob/master/doc/compression.md#compression-levels-and-algorithms))
are explicitly marked. The in-detail textual description for the different
scenarios is described in subsequent sections.
## Incoming Messages

## Outgoing Messages

## Levels vs Algorithms
As mentioned in [the relevant discussion on the spec
document](https://github.com/grpc/grpc/blob/master/doc/compression.md#compression-levels-and-algorithms),
compression _levels_ are the primary mechanism for compression selection _at the
server side_. In the future, it'll also be at the client side. The use of levels
abstracts away the intricacies of selecting a concrete algorithm supported by a
peer, on top of removing the burden of choice from the developer.
As of this writing (Q2 2016), clients can only specify compression _algorithms_.
Clients will support levels as soon as an automatic retry/negotiation mechanism
is in place.
## Per Channel Settings
Compression may be configured at channel creation. This is a convenience to
avoid having to repeatedly configure compression for every call. Note that any
compression setting on individual [calls](#per-call-settings) or
[messages](#per-message-settings) overrides channel settings.
The following aspects can be configured at channel-creation time via channel arguments:
#### Disable Compression _Algorithms_
Use the channel argument key
`GRPC_COMPRESSION_CHANNEL_ENABLED_ALGORITHMS_BITSET` (from
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h)),
takes a 32 bit bitset value. A set bit means the algorithm with that enum value
according to `grpc_compression_algorithm` is _enabled_.
For example, `GRPC_COMPRESS_GZIP` currently has a numeric value of 2. To
enable/disable GZIP for a channel, one would set/clear the 3rd LSB (eg, 0b100 =
0x4). Note that setting/clearing 0th position, that corresponding to
`GRPC_COMPRESS_NONE`, has no effect, as no-compression (a.k.a. _identity_) is
always supported.
Incoming messages compressed (ie, encoded) with a disabled algorithm will result
in the call being closed with `GRPC_STATUS_UNIMPLEMENTED`.
#### Default Compression _Level_
**(currently, Q2 2016, only applicable for server side channels. It's ignored
for clients.)**
Use the channel argument key `GRPC_COMPRESSION_CHANNEL_DEFAULT_LEVEL` (from
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h)),
valued by an integer corresponding to a value from the `grpc_compression_level`
enum.
#### Default Compression _Algorithm_
Use the channel argument key `GRPC_COMPRESSION_CHANNEL_DEFAULT_ALGORITHM` (from
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h)),
valued by an integer corresponding to a value from the `grpc_compression_level`
enum.
## Per Call Settings
### Compression **Level** in Call Responses
The server requests a compression level via initial metadata. The
`send_initial_metadata` `grpc_op` contains a `maybe_compression_level` field
with two fields, `is_set` and `compression_level`. The former must be set when
actively choosing a level to disambiguate the default value of zero (no
compression) from the proactive selection of no compression.
The core will receive the request for the compression level and automatically
choose a compression algorithm based on its knowledge about the peer
(communicated by the client via the `grpc-accept-encoding` header. Note that the
absence of this header means no compression is supported by the client/peer).
### Compression **Algorithm** in Call Responses
**Server should avoid setting the compression algorithm directly**. Prefer
setting compression levels unless there's a _very_ compelling reason to choose
specific algorithms (benchmarking, testing).
Selection of concrete compression algorithms is performed by adding a
`(GRPC_COMPRESS_REQUEST_ALGORITHM_KEY, <algorithm-name>)` key-value pair to the
initial metadata, where `GRPC_COMPRESS_REQUEST_ALGORITHM_KEY` is defined in
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h)),
and `<algorithm-name>` is the human readable name of the algorithm as given in
[the HTTP2 spec](https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md)
for `Message-Encoding` (e.g. gzip, identity, etc.). See
[`grpc_compression_algorithm_name`](https://github.com/grpc/grpc/blob/master/src/core/lib/compression/compression.c)
for the mapping between the `grpc_compression_algorithm` enum values and their
textual representation.
## Per Message Settings
To disable compression for a specific message, the `flags` field of `grpc_op`
instances of type `GRPC_OP_SEND_MESSAGE` must have its `GRPC_WRITE_NO_COMPRESS`
bit set. Refer to
[`grpc/impl/codegen/compression_types.h`](https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/compression_types.h)),
| 48.910448 | 136 | 0.802716 | eng_Latn | 0.935133 |
c10afb9507a0f1616162ba8a90770848a7c86cd6 | 5,178 | md | Markdown | README.md | kan-s0/JORLDY | 44989cf415196604a1ad0383b34085dee6bb1c51 | [
"Apache-2.0"
] | null | null | null | README.md | kan-s0/JORLDY | 44989cf415196604a1ad0383b34085dee6bb1c51 | [
"Apache-2.0"
] | null | null | null | README.md | kan-s0/JORLDY | 44989cf415196604a1ad0383b34085dee6bb1c51 | [
"Apache-2.0"
] | null | null | null | # JORLDY (Beta)
[](LICENSE)
Hello Wo**RL**d!!:hand: **Join Our Reinforcement Learning framework for Developing Yours (JORLDY)** is an open-source Reinforcement Learning (RL) framework provided by [KakaoEnterprise](https://www.kakaoenterprise.com/). It is named after Jordy, one of the [Kakao Niniz](https://www.kakaocorp.com/page/service/service/Niniz) character. It provides various RL algorithms and environment and they can be easily used using single code. This repository is opened for helping RL researchers and students who study RL.
## :fire: Features
- 20+ RL Algorithms and various RL environment are provided
- Algorithms and environment are customizable
- New algorithms and environment can be added
- Distributed RL algorithms are provided using [ray](https://github.com/ray-project/ray)
- Benchmark of the algorithms is conducted in many RL environment
## :heavy_check_mark: Tested
| Python | Windows | Mac | Linux |
| :----: | :---------: | :-----: | :------: |
| 3.8<br />3.7 | Windows Server 2022 | macOS Big Sur 11<br />macOS Catalina 10.15 | Ubuntu 20.04<br />Ubuntu 18.04 |
## :arrow_down: Installation
```
git clone https://github.com/kakaoenterprise/JORLDY.git
cd JORLDY
pip install -r requirements.txt
# linux
apt-get update
apt-get -y install libgl1-mesa-glx # for opencv
apt-get -y install libglib2.0-0 # for opencv
apt-get -y install gifsicle # for gif optimize
```
### :whale: To use docker
(customize if necessary)
```
cd JORLDY
docker pull jorldy/jorldy
# mac, linux
docker run -it --rm --name jorldy -v `pwd`:/JORLDY jorldy/jorldy /bin/bash
# windows
docker run -it --rm --name jorldy -v %cd%:/JORLDY jorldy/jorldy /bin/bash
```
### :heavy_plus_sign: To use additional environments
__- Atari and Super Mario Bros__
atari and super-mario-bros need to be installed manually due to licensing issues
```
# To use atari
pip install --upgrade gym[atari,accept-rom-license]
# To use super-mario-bros
pip install gym-super-mario-bros
```
__- Mujoco__ (Mac and Linux only)
__Mujoco is supported in docker.__ However, if you don't use docker, several subprocesses should be done.
Please refer to the [mujoco-py github installation](https://github.com/openai/mujoco-py#install-mujoco)
## :rocket: Getting started
```
cd jorldy
# Examples: python [run mode] --config [config path]
python main.py --config config.dqn.cartpole
python main.py --config config.rainbow.atari --env.name assault
# Examples: python [run mode] --config [config path] --[optional parameter key] [parameter value]
python main.py --config config.dqn.cartpole --agent.batch_size 64
python main.py --sync --config config.ppo.cartpole --train.num_workers 8
```
## :card_index_dividers: Release
| Version | Release Date | Source | Release Note |
| :-----: | :--------------: | :--------: | :----------: |
| 0.2.0 | January 23, 2022 | [Source](https://github.com/kakaoenterprise/JORLDY/tree/v0.2.0) | [Release Note](https://github.com/kakaoenterprise/JORLDY/releases/tag/v0.2.0) |
| 0.1.0 | December 23, 2021 | [Source](https://github.com/kakaoenterprise/JORLDY/tree/v0.1.0) | [Release Note](https://github.com/kakaoenterprise/JORLDY/releases/tag/v0.1.0) |
| 0.0.3 | November 23, 2021 | [Source](https://github.com/kakaoenterprise/JORLDY/tree/v0.0.3) | [Release Note](https://github.com/kakaoenterprise/JORLDY/releases/tag/v0.0.3) |
| 0.0.2 | November 06, 2021 | [Source](https://github.com/kakaoenterprise/JORLDY/tree/v0.0.2) | [Release Note](https://github.com/kakaoenterprise/JORLDY/releases/tag/v0.0.2) |
| 0.0.1 | November 03, 2021 | [Source](https://github.com/kakaoenterprise/JORLDY/tree/v0.0.1) | [Release Note](https://github.com/kakaoenterprise/JORLDY/releases/tag/v0.0.1) |
## :mag: How to
- [How to use](./docs/How_to_use.md)
- [How to customize config](./jorldy/config/README.md)
- [How to customize agent](./jorldy/core/agent/README.md)
- [How to customize environment](./jorldy/core/env/README.md)
- [How to customize network](./jorldy/core/network/README.md)
- [How to customize buffer](./jorldy/core/buffer/README.md)
## :page_facing_up: Documentation
- [Distributed Architecture](./docs/Distributed_Architecture.md)
- [Role of Managers](./jorldy/manager/README.md)
- [List of Contents](./docs/List_of_Contents.md)
- [Naming Convention](./docs/Naming_convention.md)
- [Benchmark](https://petite-balance-8cb.notion.site/Benchmark-09684f1adf764c84a5a331cb5690544f)
- [Reference](./docs/Reference.md)
## :busts_in_silhouette: Contributors
:mailbox: Contact: jorldy@kakaoenterprise.com
<img src="./resrc/contributors.png" alt="contributors" width=80%/>
## :copyright: License
[Apache License 2.0](./LICENSE.md)
## :no_entry_sign: Disclaimer
Installing in JORLDY and/or utilizing algorithms or environments not provided KEP may involve a use of third party’s intellectual property. It is advisable that a user obtain licenses or permissions from the right holder(s), if necessary, or take any other necessary measures to avoid infringement or misappropriation of third party’s intellectual property rights.
| 39.526718 | 513 | 0.723252 | eng_Latn | 0.550791 |
c10bb13e59860bec4b62c625152a729c847278f7 | 6,757 | md | Markdown | projects/ngx-parent-injector/README.md | foiled-plan/ngx-parent-injector | c95b618d247c6aef3c09dd1fa529824db57922fb | [
"MIT"
] | null | null | null | projects/ngx-parent-injector/README.md | foiled-plan/ngx-parent-injector | c95b618d247c6aef3c09dd1fa529824db57922fb | [
"MIT"
] | null | null | null | projects/ngx-parent-injector/README.md | foiled-plan/ngx-parent-injector | c95b618d247c6aef3c09dd1fa529824db57922fb | [
"MIT"
] | null | null | null | # ngx-parent-injector
Inject parent window services (and other injectables) into child window Angular apps, allowing parent and child window apps to share data and behave as if they were a single app.
## Usage
`ngx-parent-angular` is intended to be used in a multi-window Angular app, where the parent and child windows are running separate Angular apps. For an overview on how to set up this kind of app, please see the [Demo App](https://github.com/foiled-plan/ngx-parent-injector).
1. Add `NgxParentInjectorModule` to the `imports` of the root Angular module of the parent window.
2. Add `NgxParentInjectorChildModule.forRoot(...)` to the `imports` of the root Angular module of each child window.
To inject a parent window's service in a child window:
1. In the parent window, the service's injection token must be explicitly exposed to the child window by calling `exposeToken()`.
2. In the child window, we request the parent window's service by using Angular's builtin `@Inject` decorator together with the `getParentToken()` function.
## Sample
**parentWindow/parentWindow.module.ts**
```typescript
// Angular imports omitted
import { exposeToken, NgxParentInjectorModule } from 'ngx-parent-injector';
import { SomeParentService } from './someParentModule/someParent.service';
import { SOME_PARENT_TOKEN } from './someParentModule/someParent.token';
@NgModule({
// other metadata
imports: [
// other imports
NgxParentInjectorModule,
],
providers: [{
provide: SOME_PARENT_TOKEN,
useValue: "I'm an injection token value"
}]
})
export class ParentWindowModule {
public constructor() {
// 1. expose InjectionTokens in the parent window
exposeToken(SomeParentService);
exposeToken(SOME_PARENT_TOKEN);
}
}
```
**childWindow/childWindow.module.ts**
```typescript
// Angular imports omitted
import { NgxParentInjectorChildModule } from 'ngx-parent-injector/child';
import { ChildWindowComponent } from './childWindow.component.ts';
@NgModule({
// other metadata
imports: [
// other imports
NgxParentInjectorChildModule.forRoot(),
],
declarations: [ChildWindowComponent],
bootstrap: [ChildWindowComponent]
})
export class ChildWindowModule {}
```
**childWindow/childWindow.component.ts**
```typescript
// Angular imports omitted
import { getParentToken } from 'ngx-parent-injector/child';
import { SomeParentService } from '../parentWindow/someParentModule/someParent.service';
import { SOME_PARENT_TOKEN } from '../parentWindow/someParentModule/someParent.token';
@Component({
template: '<div>Parent Token Value: {{ injectionTokenValue }}</div>'
})
export class ChildWindowComponent {
public constructor(
// 2. request parent window services or injectable values.
@Inject(getParentToken(SomeParentService)) public someParentService: SomeParentService,
@Inject(getParentToken(SOME_PARENT_TOKEN)) public injectionTokenValue: string
) {
// Injected dependencies from parent window are ready to use
}
}
```
## Testing
Unit testing child window code which consumes any code from `ngx-parent-injector/child` requires some additional steps.
1. Set `testMode: true` in the options passed to `NgxParentInjectorChild.forRoot`. This can be achieved by adjusting `environment.ts` to include a flag indicating test mode, and setting that flag to `true` during test builds by using `fileReplacements` in `angular.json`'s test block.
2. Since `getParentToken(token)` will simply return `token` unchanged if it can't find the corresponding token on the parent app, you can provide mocks for `token` as you normally would in the testing module.
3. Call `disableWarnings()` before running tests to prevent warnings from polluting the test output.
See the [Demo App](https://github.com/foiled-plan/ngx-parent-injector) for an example of unit tests.
## Conceptual overview
ngx-parent-injector works based on two separate concepts.
### Enabling child windows to inject services from the parent window.
The parent window's Angular injector as well as chosen injection tokens are attached to the parent's global window object.
Child windows can then access the parent window's injector and injection tokens using `window.opener`, and use them to get the runtime instances of the services in the child windows. This process is made unobtrusive in child Angular apps by using Angular's built in provider system and `@Inject` decorator.
Injection tokens (like constructor functions for services) will have separate in-memory representations in parent and child window apps, even if they are imported from the same source file. This means that simply using an injection token from a child window's app with the parent window's injector instance will not yield the expected result. For this reason, `ngx-parent-injector` relies on associating strings to injection tokens so that for any given child-window injection token, the corresponding parent-window token can be retrieved. For services, the constructor function's `name` value is used, and for instances of `InjectionToken`, the result of calling `toString()` is used. Note that this means that name collisions are technically possible, although in practice it rarely occurs. It also means that class names should not be mangled/obfuscated, because in general the same class will be obfuscated to a different symbol in parent and child apps.
### Triggering change detection in both the parent and child windows at appropriate moments.
By injecting services from a parent Angular app into a child Angular app, the apps become logically entangled. As a consequence, any change in the parent app may require a child app to update its view, and vice versa. This is achieved by injecting the parent app's `NgZone` service into the child app using the process described above. Every time the parent window's `NgZone`'s microtask queue empties, we manually trigger a change detection in the child window in order to update the child app's view in response to an event in the parent app. Conversely, whenever the microtask queue of the child app's own `NgZone` empties, we push an item to the parent app `NgZone`'s microtask queue to propagate events which originated in the child app to the parent app (as well as to trigger change detection in other child apps).
## Build
Run `ng build ngx-parent-injector` to build the project. The build artifacts will be stored in the `dist/ngx-parent-injector` directory.
## Publishing
After building your library with `ng build ngx-parent-injector`, go to the dist folder `cd dist/ngx-parent-injector` and run `npm publish`.
## Further help
To get more help on the Angular CLI use `ng help` or go check out the [Angular CLI Overview and Command Reference](https://angular.io/cli) page.
| 57.262712 | 958 | 0.77712 | eng_Latn | 0.991912 |
c10c71445f05fa486e4d7827171c293b12f5bde2 | 1,966 | md | Markdown | docs/visio/bevellightingtype-cell-bevel-properties-section.md | ManuSquall/office-developer-client-docs.fr-FR | 5c3e7961c204833485b8fe857dc7744c12658ec5 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-08-15T11:25:43.000Z | 2021-08-15T11:25:43.000Z | docs/visio/bevellightingtype-cell-bevel-properties-section.md | ManuSquall/office-developer-client-docs.fr-FR | 5c3e7961c204833485b8fe857dc7744c12658ec5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visio/bevellightingtype-cell-bevel-properties-section.md | ManuSquall/office-developer-client-docs.fr-FR | 5c3e7961c204833485b8fe857dc7744c12658ec5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: BevelLightingType Cell (Bevel Properties Section)
manager: soliver
ms.date: 03/09/2015
ms.audience: Developer
ms.topic: reference
localization_priority: Normal
ms.assetid: 7fbb4b16-fe54-42d6-803a-c9980897166d
description: Détermine le type d’éclairage utilisé par l’effet de biseau.
ms.openlocfilehash: 6d92c56b01d192c1df04eecdaca4eb915baebcae
ms.sourcegitcommit: 8657170d071f9bcf680aba50b9c07f2a4fb82283
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/28/2019
ms.locfileid: "33433961"
---
# <a name="bevellightingtype-cell-bevel-properties-section"></a>BevelLightingType Cell (Bevel Properties Section)
Détermine le type d’éclairage utilisé par l’effet de biseau.
|**Valeur**|**Description**|
|:-----|:-----|
|0 <br/> |Aucun éclairage <br/> |
|1 <br/> |Trois points <br/> |
|2 <br/> |Solde <br/> |
|3 <br/> |Doux <br/> |
|4 <br/> |En-dent <br/> |
|5 <br/> |Flood <br/> |
|6 <br/> |Contraste <br/> |
|7 <br/> |Matin <br/> |
|8 <br/> |Aube <br/> |
|9 <br/> |Dissolution progressive <br/> |
|10 <br/> |Toy <br/> |
|11 <br/> |Figation <br/> |
|12 <br/> |À deux dimensions <br/> |
|13 <br/> |Deux points <br/> |
|14 <br/> |Lumière <br/> |
|15 <br/> |Bright Room <br/> |
## <a name="remarks"></a>Remarques
Pour obtenir une référence à la cellule **BevelLightingType** par un nom à partir d’une autre formule, de l’attribut **N** d’un élément **Cell** ou d’un programme en faisant appel à la propriété **CellsU,** utilisez :
|||
|:-----|:-----|
|Nom de cellule : <br/> |BevelLightingType <br/> |
Pour obtenir une référence à la **cellule BevelLightingType** à l’aide d’un index à partir d’un programme, utilisez la propriété **CellsSRC** avec les arguments suivants :
|||
|:-----|:-----|
|Index de la section : <br/> |**visSectionObject** <br/> |
|Index de la ligne : <br/> |**visRowBevelProperties** <br/> |
|Index de la cellule : <br/> |**visBevelLightingType** <br/> |
| 34.491228 | 218 | 0.650051 | fra_Latn | 0.539836 |
c10c98a20a14b249b749d95dfb0fee5715a525cf | 1,466 | md | Markdown | README.md | lalloni/afip | 7c2b0c155a8acc41ed49cbb958860e170941f0aa | [
"MIT"
] | null | null | null | README.md | lalloni/afip | 7c2b0c155a8acc41ed49cbb958860e170941f0aa | [
"MIT"
] | null | null | null | README.md | lalloni/afip | 7c2b0c155a8acc41ed49cbb958860e170941f0aa | [
"MIT"
] | null | null | null | # Utilidades AFIP
[](https://travis-ci.org/lalloni/afip)
[](https://codecov.io/gh/lalloni/afip)
Este proyecto contiene un pequeño conjunto de librerías de código de uso
frecuente en la interacción con servicios y herramientas de AFIP así como
en la implementación de funcionalidad de negocio.
## Paquetes
- `github.com/lalloni/afip/cuit` contiene funciones útiles para generar, validar, parsear y formatear CUIT y CUIL. Ver su [documentación](https://godoc.org/github.com/lalloni/afip/cuit) para obtener más detalles.
- `github.com/lalloni/afip/periodo` contiene funciones útiles para validar, parsear y formatear Períodos Fiscales. Ver su [documentación](https://godoc.org/github.com/lalloni/afip/periodo) para obtener más detalles.
## Roadmap
- `github.com/lalloni/afip/token` funciones útiles para validar, parsear y generar tokens de autenticación.
- `github.com/lalloni/afip/signature` funciones útiles para validar tokens de autenticación con la firma correspondiente.
- `github.com/lalloni/afip/clavefiscal` middleware HTTP y funciones útiles para implementar autenticación con Clave Fiscal.
- `github.com/lalloni/afip/sua` middleware HTTP y funciones útiles para implementar autenticación con SUA.
- `github.com/lalloni/afip/wsaa` middleware HTTP y funciones útiles para implementar autenticación con WSAA.
| 66.636364 | 215 | 0.797408 | spa_Latn | 0.933454 |
c10ce5d769c5282b863e16e5ab694038f46793a2 | 540 | md | Markdown | en/swift/enumerations/5.md | nank1ro/Codigo-Questions | 559cf998c8b5fbf7a9aee2db08fe57c8abc63408 | [
"BSD-3-Clause"
] | 5 | 2021-08-30T05:36:45.000Z | 2022-03-18T16:25:39.000Z | en/swift/enumerations/5.md | nank1ro/Codigo-Questions | 559cf998c8b5fbf7a9aee2db08fe57c8abc63408 | [
"BSD-3-Clause"
] | 33 | 2021-10-04T12:52:45.000Z | 2022-03-07T11:32:13.000Z | en/swift/enumerations/5.md | nank1ro/Codigo-Questions | 559cf998c8b5fbf7a9aee2db08fe57c8abc63408 | [
"BSD-3-Clause"
] | 1 | 2021-12-07T16:04:12.000Z | 2021-12-07T16:04:12.000Z | ---
language: swift
exerciseType: 2
---
# --instructions--
Declare a valid enumeration and print each case
# --seed--
```swift
[/] Season: [/] {
[/] spring, summer, autumn, winter
}
Season.[/].forEach { print([/]) }
```
# --answers--
- enum
- enumeration
- case
- CaseIterable
- caseIterable
- Iterable
- allcases
- allCases
- allCases()
- $0
- $1
# --solutions--
```swift
enum Season: CaseIterable {
case spring, summer, autumn, winter
}
Season.allCases.forEach { print($0) }
```
# --output--
spring
summer
autumn
winter
| 11.25 | 47 | 0.62963 | eng_Latn | 0.594014 |
c10d507113adfaaf686980ea47bdd084a79a4375 | 321 | md | Markdown | README.md | ChristopherJamesN/Homepwner | 8b0faffa105a212ffa665621d565c438c1bd2e15 | [
"MIT"
] | null | null | null | README.md | ChristopherJamesN/Homepwner | 8b0faffa105a212ffa665621d565c438c1bd2e15 | [
"MIT"
] | null | null | null | README.md | ChristopherJamesN/Homepwner | 8b0faffa105a212ffa665621d565c438c1bd2e15 | [
"MIT"
] | null | null | null | # Homepwner
iOS application built to keep an inventory of items in the home.
The application presents a lits of item instances in a UITableView.
Item instances can be added, deleted, or moved.
Item instances can be selected in order to view or edit data.
Data is saved to the filesystem in order to persist between runs.
| 45.857143 | 67 | 0.797508 | eng_Latn | 0.998599 |
c10db17885fe5fe10d0083a7d6c24f7904c1238c | 329 | md | Markdown | _posts/2021-09-14-first.md | mjkim001/test | d251af12fc51e8583fd95c737a151cec9b2e6622 | [
"MIT"
] | null | null | null | _posts/2021-09-14-first.md | mjkim001/test | d251af12fc51e8583fd95c737a151cec9b2e6622 | [
"MIT"
] | null | null | null | _posts/2021-09-14-first.md | mjkim001/test | d251af12fc51e8583fd95c737a151cec9b2e6622 | [
"MIT"
] | null | null | null | ---
layout: single
title: "First Posting"
---
# 2021.09.14 블로그 개설
오늘부터 TIL(Today I Learn) 작성해볼거에요.
(jekyll 포스팅 방법)
[Posts | Jekyll • Simple, blog-aware, static sites (jekyllrb.com)](https://jekyllrb.com/docs/posts/)

| 18.277778 | 102 | 0.708207 | kor_Hang | 0.280616 |
c10df362214d4129eaaa04a06dd4e9467e5450e6 | 319 | md | Markdown | docs/src/pages/discover-more/community/community.md | 6thquake/react-material | 3ab97b0855c109b7f4cc690c260248a870d8ed74 | [
"MIT"
] | 14 | 2018-07-16T06:38:02.000Z | 2018-11-29T12:09:13.000Z | docs/src/pages/discover-more/community/community.md | 6thquake/react-material | 3ab97b0855c109b7f4cc690c260248a870d8ed74 | [
"MIT"
] | null | null | null | docs/src/pages/discover-more/community/community.md | 6thquake/react-material | 3ab97b0855c109b7f4cc690c260248a870d8ed74 | [
"MIT"
] | null | null | null | # Community
<p class="description">If you want to stay up to date on the development of React-Material or to reach out to the community, you can follow those resources.</p>
- For help using React-Material, ask on StackOverflow using the tag
[react-material](http://stackoverflow.com/questions/tagged/react-material).
| 45.571429 | 160 | 0.777429 | eng_Latn | 0.975498 |
c10e4b1124b5f6ead8b291a5a3fb64bc6f6ab96a | 251 | md | Markdown | docs/mobe1969/universal-soldier-day-of-reckoning.md | bmiller/beqcatalogue | 86e79be46f1c710207b16cecc9c987627a46d618 | [
"MIT"
] | null | null | null | docs/mobe1969/universal-soldier-day-of-reckoning.md | bmiller/beqcatalogue | 86e79be46f1c710207b16cecc9c987627a46d618 | [
"MIT"
] | null | null | null | docs/mobe1969/universal-soldier-day-of-reckoning.md | bmiller/beqcatalogue | 86e79be46f1c710207b16cecc9c987627a46d618 | [
"MIT"
] | null | null | null | # Universal Soldier Day of Reckoning
* Author: mobe1969
* Production Year: 2012
## DTS-HD MA 5.1

| 25.1 | 149 | 0.749004 | yue_Hant | 0.377399 |
c10e66f1912084b741021179baee3f42aba5b4cf | 198 | md | Markdown | README.md | jnktsn/TributePageProject | b54aa08fd1330fd2bf5fe312ef1446c497951bd0 | [
"MIT"
] | null | null | null | README.md | jnktsn/TributePageProject | b54aa08fd1330fd2bf5fe312ef1446c497951bd0 | [
"MIT"
] | null | null | null | README.md | jnktsn/TributePageProject | b54aa08fd1330fd2bf5fe312ef1446c497951bd0 | [
"MIT"
] | null | null | null | # TributePageProject
A FreeCodeCamp project for https://www.freecodecamp.org/challenges/build-a-tribute-page
To view the demo of this code please visit https://jnktsn.github.io/TributePageProject/
| 39.6 | 87 | 0.818182 | kor_Hang | 0.378898 |
c10f0b56fe49fd9510b450e2633eca48c0ab0e92 | 464 | md | Markdown | _posts/2020-07-07-explore.md | RichardScottOZ/open_geosciene_code_projects_viz | 2d8efac6de4c7a5e4cbd370b1c77b7801278de08 | [
"MIT"
] | 46 | 2016-02-05T02:26:41.000Z | 2022-03-19T09:46:07.000Z | _posts/2020-07-07-explore.md | RichardScottOZ/open_geosciene_code_projects_viz | 2d8efac6de4c7a5e4cbd370b1c77b7801278de08 | [
"MIT"
] | 266 | 2016-01-25T16:58:13.000Z | 2022-03-23T16:03:04.000Z | _posts/2020-07-07-explore.md | RichardScottOZ/open_geosciene_code_projects_viz | 2d8efac6de4c7a5e4cbd370b1c77b7801278de08 | [
"MIT"
] | 54 | 2016-03-03T00:28:34.000Z | 2022-03-23T22:20:33.000Z | ---
title: "New Data Visualizations on Software Portal"
categories: this-website
---
The [Explore section](/explore/) of this website is benefitting from new development by our summer intern. Data we collect from GitHub is visualized in various ways, with additional visualizations planned. These efforts help us understand our repos' activity, how they are being used, development trends, and more. Check out the new "Repo Licenses" viz and stay tuned for more!
| 66.285714 | 377 | 0.788793 | eng_Latn | 0.997777 |
c1104d575227827618987e324e036cd96404fd4a | 274 | md | Markdown | chrony/CHANGELOG.md | vedansh2209/repository | 1fd8a44edefd1f4c6c58a7dc3ac694aae8dcc1bf | [
"MIT"
] | null | null | null | chrony/CHANGELOG.md | vedansh2209/repository | 1fd8a44edefd1f4c6c58a7dc3ac694aae8dcc1bf | [
"MIT"
] | null | null | null | chrony/CHANGELOG.md | vedansh2209/repository | 1fd8a44edefd1f4c6c58a7dc3ac694aae8dcc1bf | [
"MIT"
] | null | null | null | General maintenance release
- ⬆Update chrony to v3.5.1-r0
- ⬆Update base to v8.0.2
Questions? Join our Discord server! https://discord.me/hassioaddons
[Full Changelog][changelog]
[changelog]: https://github.com/hassio-addons/addon-chrony/compare/v1.1.0...v1.1.1 | 27.4 | 82 | 0.726277 | kor_Hang | 0.343462 |
c11151c6e0f10e796ce57f14d3f0d3e79b7ac2e8 | 1,126 | md | Markdown | README.md | Gustavo10Destroyer/camera | 1011f53938d67b1c64ee467580da5d770cf358b1 | [
"MIT"
] | null | null | null | README.md | Gustavo10Destroyer/camera | 1011f53938d67b1c64ee467580da5d770cf358b1 | [
"MIT"
] | null | null | null | README.md | Gustavo10Destroyer/camera | 1011f53938d67b1c64ee467580da5d770cf358b1 | [
"MIT"
] | null | null | null | # camera
Variáveis da Câmera
# position
- Dict com duas chaves, x e y, que representam a posição da câmera
# anchorPoint
- Dict com duas chaves, x e y, que representam o ponto de ancoragem da câmera (positionX + (width * anchorX)) (positionY + (height * anchorY))
# GLOBAL_DRAW
- Variável para definir o tipo de desenho global, com os calculos de câmera aplicados
# SCREEN_DRAW
- Tipo de desenho na tela, sem os calculos de câmera, serve para elementos estáticos
Métodos da Câmera
# relative(x, y)
```python
import pygame
from camera import Camera
pygame.init()
camera = Camera(pygame)
clock = pygame.time.Clock()
screen = pygame.display.set_mode([600, 300])
camera.anchorPoint["x"] = 0.5 # Set X camera point to center
camera.anchorPoint["y"] = 0.5 # Set Y camera point to center
running = True
COLOR = (0, 255, 0)
POSITION = (0, 0)
while running:
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
print(pygame.camera == camera) # True
pygame.draw.circle(screen, camera.GLOBAL_DRAW, COLOR, POSITION)
pygame.display.update()
clock.tick(30)
pygame.quit()
```
| 22.52 | 142 | 0.713144 | por_Latn | 0.67131 |
c111fd5bfe38b440d06538b75b6b2a6236b9aa60 | 9,295 | md | Markdown | docs/2014/integration-services/cdc-control-task-editor.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/integration-services/cdc-control-task-editor.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/integration-services/cdc-control-task-editor.md | antoniosql/sql-docs.es-es | 0340bd0278b0cf5de794836cd29d53b46452d189 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Editor de la tarea Control CDC | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology:
- integration-services
ms.topic: conceptual
f1_keywords:
- sql12.ssis.designer.cdccontroltask.config.f1
ms.assetid: 4f09d040-9ec8-4aaa-b684-f632d571f0a8
author: douglaslms
ms.author: douglasl
manager: craigg
ms.openlocfilehash: 97d47bda8f3ceb98449392cda22d2a8152e08b7f
ms.sourcegitcommit: 3da2edf82763852cff6772a1a282ace3034b4936
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/02/2018
ms.locfileid: "48201305"
---
# <a name="cdc-control-task-editor"></a>Editor de la tarea Control de CDC
Use el cuadro de diálogo **Editor de la tarea Control CDC** para configurar la tarea Control CDC. La configuración de la tarea Control CDC incluye la definición de una conexión a la base de datos CDC, la operación de tarea CDC y la información sobre la administración de estados.
Para obtener más información acerca de la tarea Control CDC, vea [CDC Control Task](control-flow/cdc-control-task.md).
**Para abrir el Editor de la tarea Control CDC**
1. En [!INCLUDE[ssBIDevStudio](../includes/ssbidevstudio-md.md)], abra el paquete de [!INCLUDE[ssISCurrent](../includes/ssiscurrent-md.md)] que tiene la tarea Control CDC.
2. En la pestaña **Flujo de control** , haga doble clic en la tarea Control CDC.
## <a name="options"></a>Opciones
**Administrador de conexiones de ADO.NET para la base de datos CDC de SQL Server**
Seleccione un administrador de conexiones existente de la lista o haga clic en **Nueva** para crear una nueva conexión. Es preciso realizar la conexión a una base de datos de [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] habilitada para CDC y donde se encuentre la tabla de cambios seleccionada.
**Operación de Control CDC**
Seleccione la operación que vaya a ejecutar para esta tarea. Todas las operaciones usan la variable de estado que se almacena en una variable de paquete SSIS donde se almacena el estado y lo pasa entre los distintos componentes del paquete.
- **Marcar comienzo de carga inicial**: esta operación se usa cuando se ejecuta una carga inicial desde una base de datos activa sin una instantánea. Se invoca al comienzo de un paquete de carga inicial para registrar el LSN actual en la base de datos de origen antes de que el paquete de carga inicial comience a leer las tablas de origen. Esto requiere una conexión a la base de datos de origen.
Si selecciona **Mark Initial Load Start** (Marcar comienzo de carga inicial) cuando trabaja en CDC de [!INCLUDE[ssCurrent](../includes/sscurrent-md.md)] (es decir, no en Oracle), el usuario especificado en el administrador de conexiones debe ser **db_owner** o **sysadmin**.
- **Marcar final de carga inicial**: esta operación se usa al ejecutar una carga inicial desde una base de datos activa sin una instantánea. Se invoca al final de un paquete de carga inicial para registrar el LSN actual en la base de datos de origen después de que el paquete de carga inicial termine de leer las tablas de origen. Este LSN se determina mediante el registro de la hora en el momento en que se produjo esta operación y, posteriormente, mediante consulta a la tabla de asignación de `cdc.lsn_time_`de la base de datos CDC en busca de un cambio que hubiera tenido lugar después de dicha hora.
Si selecciona **Mark Initial Load End** (Marcar final de carga inicial) cuando trabaja en CDC de [!INCLUDE[ssCurrent](../includes/sscurrent-md.md)] (es decir, no en Oracle), el usuario especificado en el administrador de conexiones debe ser **db_owner** o **sysadmin**.
- **Marcar comienzo de CDC**: esta operación se utiliza cuando la carga inicial se realiza desde una base de datos de base de datos de instantánea o desde una base de datos de inactividad. Se invoca en cualquier punto del paquete de carga inicial. La operación acepta un parámetro que puede ser un LSN de instantánea, un nombre de una base de datos de instantánea (desde la que el LSN de instantánea se deriva automáticamente) o se puede dejar vacío, en cuyo caso el LSN de la base de datos actual se usa como el LSN de inicio para el paquete de procesamiento de cambios.
Esta operación se usa en lugar de las operaciones Marcar comienzo/final de carga inicial.
Si selecciona **Mark CDC Start** (Marcar comienzo de CDC) cuando trabaja en CDC de [!INCLUDE[ssCurrent](../includes/sscurrent-md.md)] (es decir, no en Oracle), el usuario especificado en el administrador de conexiones debe ser **db_owner** o **sysadmin**.
- **Obtener intervalo de procesamiento**: esta operación se usa en un paquete de procesamiento de cambios antes de invocar el flujo de datos que usa el flujo de datos de origen de CDC. Establece un intervalo de LSN que lee el flujo de datos de origen de CDC cuando se invoca. El intervalo se almacena en una variable de paquete SSIS que usa el origen de CDC durante el procesamiento del flujo de datos.
Para obtener más información sobre los posibles estados CDC que se almacenan, vea [Definir una variable de estado](data-flow/define-a-state-variable.md).
- **Marcar intervalo procesado**: esta operación se usa en un paquete de procesamiento de cambios al final de una ejecución CDC (después de que el flujo de datos CDC se complete correctamente) para registrar el último LSN que se haya procesado totalmente en la ejecución CDC. La próxima vez que se ejecute `GetProcessingRange` , esta posición determinará el inicio del siguiente intervalo de procesamiento.
- **Restablecer estado CDC**: esta operación se usa para restablecer el estado CDC persistente asociado al contexto CDC actual. Después de ejecutar esta operación, el LSN máximo actual de la tabla de `sys.fn_cdc_get_max_lsn` de la marca de tiempo de LSN se convierte en el inicio del intervalo para el siguiente intervalo de procesamiento. Esta operación requiere una conexión a la base de datos de origen.
Un ejemplo de uso de esta operación es cuando se desea procesar solo los registros de cambios creados recientemente y omitir todos los registros de cambios anteriores.
**Variable que contiene el estado CDC**
Seleccione la variable del paquete SSIS que almacena la información sobre el estado de la operación de la tarea. Debe definir una variable antes de empezar. Si selecciona **Persistencia de estado automática**, la variable de estado se carga y guarda automáticamente.
Para obtener más información sobre cómo se define la variable de estado, vea [Definir una variable de estado](data-flow/define-a-state-variable.md).
**LSN de SQL Server para iniciar el nombre de CDC/instantánea:**
Especifique el LSN de la base de datos de origen actual o el nombre de la base de datos de instantánea de la carga inicial desde donde se realizará la carga inicial con el fin de determinar dónde se inicia CDC. Esto solo está disponible si se establece **Operación de Control CDC** en **Marcar comienzo de CDC**.
Para obtener más información acerca de estas operaciones, vea [CDC Control Task](control-flow/cdc-control-task.md).
**Almacenar automáticamente el estado en una tabla de base de datos**
Active esta casilla para que la tarea Control CDC controle automáticamente la carga y el almacenamiento del estado de CDC en una tabla de estado que se encuentre en una base de datos especificada. Si no se ha activado, el desarrollador debe cargar el estado de CDC cuando el paquete se inicie y guardarlo cada vez que cambie el estado de CDC.
**Administrador de conexiones para la base de datos donde se almacena el estado**
Seleccione un administrador de conexiones de ADO.NET existente de la lista o haga clic en Nueva para crear una nueva conexión. Esta conexión es a una base de datos de [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] que contenga la tabla de estado. La tabla de estado contiene la información sobre el estado.
Esto solo está disponible si se ha seleccionado **Persistencia de estado automática** y es un parámetro obligatorio.
**Tabla que se va a usar para almacenar el estado**
Escriba el nombre de la tabla de estado que se va a usar para almacenar el estado de CDC. La tabla especificada debe tener dos columnas denominadas **nombre** y **estado** y ambas columnas deben ser del tipo de datos **varchar (256)**.
Opcionalmente, puede seleccionar **Nueva** para obtener un script SQL que cree una nueva tabla de estado con las columnas necesarias. Cuando se selecciona **Persistencia de estado automática** , el desarrollador debe crear una tabla de estado según los requisitos descritos anteriormente.
Esto solo está disponible si se ha seleccionado **Persistencia de estado automática** y es un parámetro obligatorio.
**Nombre del estado**
Escriba un nombre para asociar al estado CDC persistente. Los paquetes de carga completa y CDC que funcionan con el mismo contexto CDC especificarán un nombre común para el estado. Este nombre se usa para buscar la fila de estado en la tabla de estado.
## <a name="see-also"></a>Vea también
[Propiedades personalizadas de la tarea Control CDC](control-flow/cdc-control-task-custom-properties.md)
| 95.824742 | 609 | 0.770307 | spa_Latn | 0.994832 |
c1121c534237fb3b12a395ac434be7ab2a3cfa71 | 11,579 | md | Markdown | benchmarks/CHANGELOG.md | rjbernaldo/keystone | 80e306d66fc4bfb179eb21705b6b72470a3b4e94 | [
"MIT"
] | null | null | null | benchmarks/CHANGELOG.md | rjbernaldo/keystone | 80e306d66fc4bfb179eb21705b6b72470a3b4e94 | [
"MIT"
] | null | null | null | benchmarks/CHANGELOG.md | rjbernaldo/keystone | 80e306d66fc4bfb179eb21705b6b72470a3b4e94 | [
"MIT"
] | null | null | null | # @keystonejs/benchmarks
## 5.1.6
### Patch Changes
- Updated dependencies [[`fd685241`](https://github.com/keystonejs/keystone/commit/fd68524135025e4d800b4a98932916736dd50e9d)]:
- @keystonejs/adapter-knex@9.0.0
- @keystonejs/adapter-mongoose@8.0.0
- @keystonejs/fields@9.0.0
- @keystonejs/keystone@8.0.0
- @keystonejs/test-utils@6.0.3
## 5.1.5
### Patch Changes
- Updated dependencies [[`e7e4bc1d`](https://github.com/keystonejs/keystone/commit/e7e4bc1d22149d4daceb31d303f6ad10c2b853ba), [`58c4ffc3`](https://github.com/keystonejs/keystone/commit/58c4ffc3d4b1edf8bdfbc4ea299133d303239fc6), [`007063c4`](https://github.com/keystonejs/keystone/commit/007063c4f17e6e7038312ed9126eaf91757e7939), [`4a7d1eab`](https://github.com/keystonejs/keystone/commit/4a7d1eabf9b44fac7e16dfe20afdce409986e8dc), [`c08c28d2`](https://github.com/keystonejs/keystone/commit/c08c28d22f2c6a2bfa73ab0ea347c9e0da8a9063), [`d138736d`](https://github.com/keystonejs/keystone/commit/d138736db184c5884171c7a65e43377f248046b5), [`2ae2bd47`](https://github.com/keystonejs/keystone/commit/2ae2bd47eb54a816cfd4c8cd178c460729cbc258), [`2cbd38b0`](https://github.com/keystonejs/keystone/commit/2cbd38b05adc98cface11a8767f66b48a1cb0bbf), [`3407fa68`](https://github.com/keystonejs/keystone/commit/3407fa68b91d7ebb3e7288c7e95631013fe12535), [`c2b1b725`](https://github.com/keystonejs/keystone/commit/c2b1b725a9474348964a4ac2e0f5b4aaf1a7f486)]:
- @keystonejs/fields@8.0.0
- @keystonejs/keystone@7.1.0
- @keystonejs/adapter-knex@8.0.0
- @keystonejs/adapter-mongoose@7.0.0
- @keystonejs/app-graphql@5.1.5
- @keystonejs/session@6.0.1
- @keystonejs/test-utils@6.0.2
## 5.1.4
### Patch Changes
- Updated dependencies [[`b6a555c2`](https://github.com/keystonejs/keystone/commit/b6a555c28296394908757f7404b72bc6b828b52a), [`abac6ad8`](https://github.com/keystonejs/keystone/commit/abac6ad83ad71f40047473c81d50b6af80ad41b2), [`b6a555c2`](https://github.com/keystonejs/keystone/commit/b6a555c28296394908757f7404b72bc6b828b52a), [`ca28681c`](https://github.com/keystonejs/keystone/commit/ca28681ca23c74bc57041fa36c20b93a4520e762), [`68be8f45`](https://github.com/keystonejs/keystone/commit/68be8f452909100fbddec431d6fe60c20a06a700), [`61a70503`](https://github.com/keystonejs/keystone/commit/61a70503f6c184a8f0f5440466399f12e6d7fa41), [`cec7ba5e`](https://github.com/keystonejs/keystone/commit/cec7ba5e2061280eff2a1d989054ecb02760e36d), [`663ae7b4`](https://github.com/keystonejs/keystone/commit/663ae7b453f450f077795fbbc6c9f138e6b27f52)]:
- @keystonejs/keystone@7.0.0
- @keystonejs/adapter-knex@7.0.0
- @keystonejs/session@6.0.0
- @keystonejs/app-graphql@5.1.4
- @keystonejs/adapter-mongoose@6.0.0
- @keystonejs/fields@7.0.2
- @keystonejs/test-utils@6.0.1
## 5.1.3
### Patch Changes
- Updated dependencies [[`51546e41`](https://github.com/keystonejs/keystone/commit/51546e4142fb8c66cfc413479c671a59618f885b), [`29ad8a17`](https://github.com/keystonejs/keystone/commit/29ad8a175cc4324fe722eefd22c09f7fb6c5be5e), [`83bdf743`](https://github.com/keystonejs/keystone/commit/83bdf743748e39d1ea73eff2c8e3576cc713c624), [`d748156b`](https://github.com/keystonejs/keystone/commit/d748156ba5ebe33f4271fae0df781e0c63f2b7e6), [`d30b7498`](https://github.com/keystonejs/keystone/commit/d30b74984b21ae9fc2a3b39850f674639fbac074), [`1d9c6762`](https://github.com/keystonejs/keystone/commit/1d9c6762d32409c71da6a68a083a81197c35aac3), [`8f22ab5e`](https://github.com/keystonejs/keystone/commit/8f22ab5eefc034f9fef4fd0f9ec2c2583fc5514f), [`599c0929`](https://github.com/keystonejs/keystone/commit/599c0929b213ebd4beb79e3ccaa685b92348ca81), [`fb510d67`](https://github.com/keystonejs/keystone/commit/fb510d67ab124d8c1bda1884fa2a0d48262b5e4d)]:
- @keystonejs/keystone@6.0.2
- @keystonejs/app-graphql@5.1.3
- @keystonejs/adapter-mongoose@5.2.2
- @keystonejs/fields@7.0.1
- @keystonejs/test-utils@6.0.0
## 5.1.2
### Patch Changes
- Updated dependencies [[`161bf3e5`](https://github.com/keystonejs/keystone/commit/161bf3e57acb1b3d88a0836507d4c8dd4935f260)]:
- @keystonejs/fields@7.0.0
- @keystonejs/keystone@6.0.1
## 5.1.1
### Patch Changes
- [`5ba330b8`](https://github.com/keystonejs/keystone/commit/5ba330b8b2609ea0033a636daf9a215a5a192c20) [#2487](https://github.com/keystonejs/keystone/pull/2487) Thanks [@Noviny](https://github.com/Noviny)! - Small changes to package.json (mostly adding a repository field)
- Updated dependencies [[`10e88dc3`](https://github.com/keystonejs/keystone/commit/10e88dc3d81f5e021db0bfb31f7547852c602c14), [`e46f0adf`](https://github.com/keystonejs/keystone/commit/e46f0adf97141e1f1205787453173a0585df5bc3), [`6975f169`](https://github.com/keystonejs/keystone/commit/6975f16959bde3fe0e861977471c94a8c9f2c0b0), [`42497b8e`](https://github.com/keystonejs/keystone/commit/42497b8ebbaeaf0f4d7881dbb76c6abafde4cace), [`fe42a997`](https://github.com/keystonejs/keystone/commit/fe42a997c81825a819ac28f05e02d1ed61099542), [`97fb01fe`](https://github.com/keystonejs/keystone/commit/97fb01fe5a32f5003a084c1fd357852fc28f74e4), [`6111e065`](https://github.com/keystonejs/keystone/commit/6111e06554a6aa6db0f7df1a6c16f9da8e81fce4), [`2d1069f1`](https://github.com/keystonejs/keystone/commit/2d1069f11f5f8941b0a18e482541043c853ebb4f), [`949f2f6a`](https://github.com/keystonejs/keystone/commit/949f2f6a3889492015281ffba45a8b3d37e6d888), [`6b353eff`](https://github.com/keystonejs/keystone/commit/6b353effc8b617137a3978b2c845e01403889722), [`5ba330b8`](https://github.com/keystonejs/keystone/commit/5ba330b8b2609ea0033a636daf9a215a5a192c20)]:
- @keystonejs/keystone@6.0.0
- @keystonejs/app-graphql@5.1.2
- @keystonejs/fields@6.3.2
- @keystonejs/adapter-knex@6.3.2
- @keystonejs/adapter-mongoose@5.2.1
- @keystonejs/session@5.1.1
- @keystonejs/test-utils@5.1.2
## 5.1.0
### Minor Changes
- [`517b23e4`](https://github.com/keystonejs/keystone/commit/517b23e4b17414ed1807e8d7af1e67377ba3b7bf) [#2391](https://github.com/keystonejs/keystone/pull/2391) Thanks [@timleslie](https://github.com/timleslie)! - Removed support for Node 8.x, as it is [no longer in maintenance mode](https://nodejs.org/en/about/releases/).
### Patch Changes
- Updated dependencies [[`517b23e4`](https://github.com/keystonejs/keystone/commit/517b23e4b17414ed1807e8d7af1e67377ba3b7bf)]:
- @keystonejs/adapter-knex@6.3.0
- @keystonejs/adapter-mongoose@5.2.0
- @keystonejs/app-graphql@5.1.0
- @keystonejs/fields@6.3.0
- @keystonejs/keystone@5.5.0
- @keystonejs/session@5.1.0
- @keystonejs/test-utils@5.1.0
## 5.0.1
### Patch Changes
- Updated dependencies [[`77056ebd`](https://github.com/keystonejs/keystone/commit/77056ebdb31e58d27372925e8e24311a8c7d9e33), [`267dab2f`](https://github.com/keystonejs/keystone/commit/267dab2fee5bbea711c417c13366862e8e0ab3be), [`8188d76c`](https://github.com/keystonejs/keystone/commit/8188d76cb3f5d3e112ef95fd4e1887db9a520d9d), [`af1e9e4d`](https://github.com/keystonejs/keystone/commit/af1e9e4d3b74753b903b20641b51df99184793df), [`0acdae17`](https://github.com/keystonejs/keystone/commit/0acdae17c4b2bcb234a314ad1aba311981affc8f), [`733ac847`](https://github.com/keystonejs/keystone/commit/733ac847cab488dc92a30e7b458191d750fd5a3d), [`44b2bc93`](https://github.com/keystonejs/keystone/commit/44b2bc938fd508ac75f6a9cbb364006b9f122711), [`e68fc43b`](https://github.com/keystonejs/keystone/commit/e68fc43ba006f9c958f9c81ae20b230d05c2cab6), [`d4d89836`](https://github.com/keystonejs/keystone/commit/d4d89836700413c1da2b76e9b82b649c2cac859d), [`946a52fd`](https://github.com/keystonejs/keystone/commit/946a52fd7057bb73f4ffd465ef51498172926866), [`5540771e`](https://github.com/keystonejs/keystone/commit/5540771e52b5cb1aa33c0486dede7f2f9bc0944f), [`860dabec`](https://github.com/keystonejs/keystone/commit/860dabecacdf81aa1563cea9a5d50add8623dac1), [`1f4dc33d`](https://github.com/keystonejs/keystone/commit/1f4dc33d8a5ac4e38427eb215a7a8bc3504ae153), [`ee6fbcb2`](https://github.com/keystonejs/keystone/commit/ee6fbcb264a640f58332c50a2f502a4380c0d071), [`0145f7e2`](https://github.com/keystonejs/keystone/commit/0145f7e21d9297e3037c709587eb3b4220ba3f01), [`a3fdc50e`](https://github.com/keystonejs/keystone/commit/a3fdc50ebb61b38814816804b04d7cb4bc0fc70a), [`2cc83b12`](https://github.com/keystonejs/keystone/commit/2cc83b12be757019ba25658139478e8f5b2b19c6), [`721472e1`](https://github.com/keystonejs/keystone/commit/721472e1801584be5807d6637c646b1755366d3e), [`a1dcbd7b`](https://github.com/keystonejs/keystone/commit/a1dcbd7bd7448fdcacbfe9fb0196bfee3c4a5326), [`da62aa4a`](https://github.com/keystonejs/keystone/commit/da62aa4a0af9cf27fd59fdcfb6b960e24999254d), [`6a348b93`](https://github.com/keystonejs/keystone/commit/6a348b93607c305c4ba61c1406a4acd508f33f64)]:
- @keystonejs/keystone@5.3.0
- @keystonejs/fields@6.0.0
- @keystonejs/adapter-knex@6.0.0
- @keystonejs/adapter-mongoose@5.1.3
- @keystonejs/app-graphql@5.0.1
- @keystonejs/test-utils@5.0.2
## 5.0.0
### Major Changes
- [`7b4ed362`](https://github.com/keystonejs/keystone/commit/7b4ed3623f5774d7783c39962bfa1ce97938e310) [#1821](https://github.com/keystonejs/keystone/pull/1821) Thanks [@jesstelford](https://github.com/jesstelford)! - Release @keystonejs/\* packages (つ^ ◡ ^)つ
- This is the first release of `@keystonejs/*` packages (previously `@keystone-alpha/*`).
- All packages in the `@keystone-alpha` namespace are now available in the `@keystonejs` namespace, starting at version `5.0.0`.
- To upgrade your project you must update any `@keystone-alpha/*` dependencies in `package.json` to point to `"@keystonejs/*": "^5.0.0"` and update any `require`/`import` statements in your code.
### Patch Changes
- Updated dependencies [[`7b4ed362`](https://github.com/keystonejs/keystone/commit/7b4ed3623f5774d7783c39962bfa1ce97938e310)]:
- @keystonejs/adapter-knex@5.0.0
- @keystonejs/adapter-mongoose@5.0.0
- @keystonejs/app-graphql@5.0.0
- @keystonejs/fields@5.0.0
- @keystonejs/keystone@5.0.0
- @keystonejs/session@5.0.0
- @keystonejs/test-utils@5.0.0
# @keystone-alpha/benchmarks
## 1.0.1
### Patch Changes
- Updated dependencies [[`0a36b0f4`](https://github.com/keystonejs/keystone/commit/0a36b0f403da73a76106b5e14940a789466b4f94), [`7129c887`](https://github.com/keystonejs/keystone/commit/7129c8878a825d961f2772be497dcd5bd6b2b697), [`3bc02545`](https://github.com/keystonejs/keystone/commit/3bc025452fb8e6e69790bdbee032ddfdeeb7dabb), [`768420f5`](https://github.com/keystonejs/keystone/commit/768420f567c244d57a4e2a3aaafe628ea9813d9d), [`a48281ba`](https://github.com/keystonejs/keystone/commit/a48281ba605bf5bebc89fcbb36d3e69c17182eec), [`effc1f63`](https://github.com/keystonejs/keystone/commit/effc1f639d5824720b7a9d82c2ee881d77acb901)]:
- @keystone-alpha/keystone@16.1.0
- @keystone-alpha/app-graphql@8.2.1
- @keystone-alpha/adapter-knex@6.0.2
- @keystone-alpha/adapter-mongoose@6.0.1
- @keystone-alpha/fields@15.0.0
## 1.0.0
### Major Changes
- [`b62595f4`](https://github.com/keystonejs/keystone/commit/b62595f43cd3b5e19fa33330fafa7ad238cd105b) [#1772](https://github.com/keystonejs/keystone/pull/1772) Thanks [@timleslie](https://github.com/timleslie)! - Initial release of the `benchmarks` package. Run `yarn benchmark` to execute benchmark scripts.
### Patch Changes
- Updated dependencies [[`6d7d0df0`](https://github.com/keystonejs/keystone/commit/6d7d0df0515c3aa21c7d24db17919ddbb5701ce9)]:
- @keystone-alpha/adapter-knex@6.0.0
- @keystone-alpha/adapter-mongoose@6.0.0
- @keystone-alpha/fields@14.0.0
- @keystone-alpha/keystone@16.0.0
- @keystone-alpha/test-utils@2.6.3
| 76.682119 | 2,166 | 0.789101 | yue_Hant | 0.137427 |
c112e77d89242904477f5a732f6a9076555985f9 | 1,716 | md | Markdown | docs/framework/network-programming/how-to-register-a-custom-protocol-using-webrequest.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/network-programming/how-to-register-a-custom-protocol-using-webrequest.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/network-programming/how-to-register-a-custom-protocol-using-webrequest.md | CodeTherapist/docs.de-de | 45ed8badf2e25fb9abdf28c20e421f8da4094dd1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Gewusst wie: Registrieren eines benutzerdefinierten Protokolls mit WebRequest'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
ms.assetid: 98ddbdb9-66b1-4080-92ad-51f5c447fcf8
ms.openlocfilehash: 5f863aa61058e87a7911bab3b02c3ba345419596
ms.sourcegitcommit: c93fd5139f9efcf6db514e3474301738a6d1d649
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/27/2018
ms.locfileid: "50193603"
---
# <a name="how-to-register-a-custom-protocol-using-webrequest"></a>Gewusst wie: Registrieren eines benutzerdefinierten Protokolls mit WebRequest
In diesem Beispiel wird gezeigt, wie Sie eine protokollspezifische Klasse registrieren, die an anderer Stelle definiert ist. In diesem Beispiel stellt `CustomWebRequestCreator` das vom Benutzer implementierte Objekt dar, das die **Create**-Methode implementiert, die wiederum das `CustomWebRequest`-Objekt zurückgibt. Im Codebeispiel wird vorausgesetzt, dass Sie den `CustomWebRequest`-Code geschrieben haben, der das benutzerdefinierte Protokoll implementiert.
## <a name="example"></a>Beispiel
```csharp
WebRequest.RegisterPrefix("custom", new CustomWebRequestCreator());
WebRequest req = WebRequest.Create("custom://customHost.contoso.com/");
```
```vb
WebRequest.RegisterPrefix("custom", New CustomWebRequestCreator())
Dim req As WebRequest = WebRequest.Create("custom://customHost.contoso.com/")
```
## <a name="compiling-the-code"></a>Kompilieren des Codes
Für dieses Beispiel benötigen Sie Folgendes:
Verweise auf den Namespace <xref:System.Net>
## <a name="see-also"></a>Siehe auch
[Programmieren austauschbarer Protokolle](../../../docs/framework/network-programming/programming-pluggable-protocols.md)
| 46.378378 | 463 | 0.778555 | deu_Latn | 0.83557 |
c113832a9475e2ef97094e117607f2a0f151e932 | 186 | md | Markdown | README.md | Otard95/solarized-hue-syntax | 950bbfcc8b3e16737195769bd374465c9644f6e2 | [
"MIT"
] | null | null | null | README.md | Otard95/solarized-hue-syntax | 950bbfcc8b3e16737195769bd374465c9644f6e2 | [
"MIT"
] | null | null | null | README.md | Otard95/solarized-hue-syntax | 950bbfcc8b3e16737195769bd374465c9644f6e2 | [
"MIT"
] | null | null | null | # solarized-hue-syntax
A theme for atom based on ['solarized-dark-syntax'](https://github.com/atom/solarized-dark-syntax "Github Repo"), where the main focus is adding some more color.
| 46.5 | 161 | 0.763441 | eng_Latn | 0.977731 |
c113a5ac5abd676851ad9458f0f486c492572af2 | 142 | md | Markdown | README.md | MrZhangZc/nest-cli | e6b6956b9b85f81c1568e89aafa91c96eb12fb40 | [
"MIT"
] | null | null | null | README.md | MrZhangZc/nest-cli | e6b6956b9b85f81c1568e89aafa91c96eb12fb40 | [
"MIT"
] | null | null | null | README.md | MrZhangZc/nest-cli | e6b6956b9b85f81c1568e89aafa91c96eb12fb40 | [
"MIT"
] | null | null | null | # nest-cli
nestjs cli generator template
## Install
`yarn add global nest-gen`
or
`npm i -g nest-gen`
## Usage
`cd my-nest-proj`
`gen-nest`
| 11.833333 | 29 | 0.690141 | eng_Latn | 0.324549 |
c11431e4e203aea1ccf72ea06c9ae93a0c58da5f | 578 | md | Markdown | README.md | msrumon/symfony-dummy-project | b3068bb3aed1442c57b367daa9ea440c87ed0168 | [
"MIT"
] | null | null | null | README.md | msrumon/symfony-dummy-project | b3068bb3aed1442c57b367daa9ea440c87ed0168 | [
"MIT"
] | null | null | null | README.md | msrumon/symfony-dummy-project | b3068bb3aed1442c57b367daa9ea440c87ed0168 | [
"MIT"
] | null | null | null | # A Dummy Project in **Symfony**
I have created this dummy repository to demonstrate an issue: separation of logic for rendering and processing forms. But it seems like it's impossible to separate these two in **Symfony** style. I hope someone will explain me. Please create issues to discuss more.
## How to run
- Clone this repository (or download as ZIP archive).
- Navigate into the project directory.
- Install the dependencies: `composer install`.
- Spin up the server: `php -S localhost:8080`.
Open a browser and go to `localhost:8080/blog`. You should see the form.
| 44.461538 | 265 | 0.759516 | eng_Latn | 0.997902 |
c115471e4f969ba6bf799904fef548c88e014e96 | 3,874 | md | Markdown | _posts/2018-07-12-Learning-Domain-Sensitive-and-Sentiment-Aware-Word-Embeddings.md | q0115643/blog | 597e46bc2c17b6d7364e40dc4b63b3049bbb2302 | [
"MIT"
] | 1 | 2021-01-10T10:37:16.000Z | 2021-01-10T10:37:16.000Z | _posts/2018-07-12-Learning-Domain-Sensitive-and-Sentiment-Aware-Word-Embeddings.md | rokrokss/blog | 816933c774f09e332ffb9395483e20f70b82d5ff | [
"MIT"
] | null | null | null | _posts/2018-07-12-Learning-Domain-Sensitive-and-Sentiment-Aware-Word-Embeddings.md | rokrokss/blog | 816933c774f09e332ffb9395483e20f70b82d5ff | [
"MIT"
] | 7 | 2020-05-11T01:44:20.000Z | 2021-03-17T06:18:57.000Z | ---
comments: true
title: 논문 요약: Learning Domain-Sensitive and Sentiment-Aware Word Embeddings
key: 201807121
tags:
- ML
- NLP
- 논문
- ACL
---
> ACL 2018
Sentiment Analysis를 위한 Domain adaptation
<!--more-->
[논문 링크](https://arxiv.org/pdf/1805.03801.pdf)
## Introduction
본 논문에서는, word embedding을 domain-sensitive, sentiment-aware하게 학습시키고, words의 domain 파악과 함께 sentiment analysis를 목적으로 한다.
## Related Work
결국에는 18년도 ACL에 쏟아져 나온 논문들과 함께 domain 관련 문제점을 해결하려는 시도다. word embedding이 target domain에서 (혹은 general domain에서) 학습되었을 경우 단어의 의미가 domain-dependent하여 sentiment analysis가 원활하지 못하다.
많은 단어들이 domain에 따라 단어의 opinion이 바뀌는 경우가 많기 때문이다. 예를 들어 unpredictable은 movie domain에서는 positive지만 automobile domain에서는 negative다.
이전의 시도들에서는 domain 간 common feature를 찾아 이용했었지만 본 논문에서는 domain-common & domain-specific embedding을 모두 사용한다.
본 논문과 같이 multiple domain을 고려한 word embedding을 학습하는 논문들도 있는데, 대부분은 여러 domain에서 분리된 embedding을 학습시킨다.
그 이후 frequency-based statistical measure로 pivot words를 골라서 각 embedding space를 연결해 준다.
본 논문에서는 domain-common words는 sentiment information과 context words를 이용하여 결정된다.
## Model, DSE
DSE (**D**omain-sensitive and **S**entiment-aware word **E**mbeddings) 라고 부른단다.
multiple domain 환경에서 가능한데 논문에서는 일단 2개의 domain 위에서 설명하였고 그 이상의 경우에서 어떻게 적용하는지 소개해 놓았다.
### Design of Embeddings
각 word $$w$$에 대해 latent variable $$z_w$$를 부여, $${z_w}=1$$이면 $$w$$는 common, 아니면 $$w$$는 domain p나 q에 specific.
원하는 모델을 얻기 위해 word2vec에 변형을 시키는데 먼저 skip-gram의 단어 $$w$$일 때 주변 단어 $$w_t$$의 확률 식이 아래와 같다.
$$p({w_t}|w)=\sum_{k \in \{0,1\}} p({w_t}|w, {z_w}=k)p({z_w}=k)$$
원래는 $$z_w$$ 관련 항 없이 $$w$$에 대한 $$w_t$$가 되어야 하는데 $$z_w$$ dependent하게 만들었다.
그렇다면 $$z_w=1$$, 즉 $$w$$가 domain-common word일 경우, 아래와 같은 식을 얻을 수 있다.
$$p({w_t}|w, z_w=1)= \frac{exp(U_w^c \cdot V_{w_t})}{\sum_{w^ \prime \in \Lambda} exp(U_w^c \cdot V_{w^ \prime })}$$
요렇게 된다. 여기서 중요한 건 $$U_w^c$$가 바로 word2vec에서 word vector로 쓰이는 부분과 같은 역할을 하는 것.
word $$w$$가 domain-common일 때의 word embedding이 된다.
$$z_w=0$$, 즉 $$w$$가 domain-specific word일 경우, 아래와 같은 식을 얻을 수 있다.
$$p({w_t}|w, z_w=0)=
\begin{cases}
\frac{exp(U_w^p \cdot V_{w_t})}{\sum_{w^ \prime \in \Lambda} exp(U_w^p \cdot V_{w^ \prime })}, & \text{if $w \in D^p$} \\
\frac{exp(U_w^q \cdot V_{w_t})}{\sum_{w^ \prime \in \Lambda} exp(U_w^q \cdot V_{w^ \prime })}, & \text{if $w \in D^q$}
\end{cases}
$$
여기서 $$D^p$$와 $$D^q$$는 각각 p 도메인과 q 도메인을 말한다. $$U_w^p$$는 p 도메인에 대하여 domain-specific인 $$w$$의 word vector가 된다.
### Exploiting Sentiment Information
앞까지는 word vector의 확률 분포를 정의한 것이었다. 근데 그 식들이 사용되려면 $$z_w$$, 즉 domain-commonality가 필요하다.
아직 단어의 polarity (positive or negative) 를 안 써줬다. 걔네까지 이용해서 모델을 만들고 EM method로 $$z_w$$를 구한다. 먼저 polarity $$y_w$$를 도입하면,
$$p({y_w}|w)=\sum_{k \in \{0,1\}} p({y_w}|w, z_w=k)p(z_w=k)$$
$$z_w=1$$일 경우 $$w$$는 domain-common word이고,
$$p(y_w=1|w, z_w=1)=\sigma (U_w^c \cdot s)$$
여기서 $$\sigma (\cdot)$$은 sigmoid 함수를 의미한다. $$s$$는 $$d$$차원을 가진 sentiment-boundary vector.
$$p(y_w=0|w, z_w=1)=1-p(y_w=1|w, z_w=1)$$
만약 $$w$$가 domain-specific word일 경우,
$$p(y_w=1|w, z_w=0)=
\begin{cases}
\sigma (U_w^p \cdot s), & \text{if $w \in D^p$} \\
\sigma (U_w^q \cdot s), & \text{if $w \in D^q$}
\end{cases}
$$
### Inference Algorithm
domain $$D^p, D^q$$는 주어진 것으로 한다. 그리고 EM 알고리즘을 사용하는데.......
{:width="400px"}
논문 한 페이지를 꽉 채우는 식들을 다 봐야한다. 논문을 직접 보는 것을 추천...
<br>
**참고로 domain이 3개 이상일 경우 $$z_w$$의 확률분포를 Bernoulli에서 Multinomial로 바꾸면 된다.**
## Experiment
Amazon 제품 리뷰 데이터에서 제품 카테고리 별로 나눈 걸로 여러 도메인 데이터가 있다고 친다.
books(B), DVDs(D), electronics(E), kitchen appliances(K), 총 4가지 데이터셋을 이용한다.
text와 1-to-5 score가 있는데, 3 이상은 positive, 미만은 negative review로 본다.
## Results
{:width="800px"}
| 28.072464 | 175 | 0.674239 | kor_Hang | 0.999968 |
c1157cd24ca7165c9b933e3d76df7567ca653ab2 | 806 | md | Markdown | OSM_CLIENT/docs/NsiLcmOpOcc.md | aviweit/nfvo-drivers | 32821abaf369575e2e3877d4f105e80f6d39b80e | [
"Apache-2.0"
] | null | null | null | OSM_CLIENT/docs/NsiLcmOpOcc.md | aviweit/nfvo-drivers | 32821abaf369575e2e3877d4f105e80f6d39b80e | [
"Apache-2.0"
] | 2 | 2019-12-16T10:04:07.000Z | 2019-12-16T10:04:54.000Z | OSM_CLIENT/docs/NsiLcmOpOcc.md | aviweit/nfvo-drivers | 32821abaf369575e2e3877d4f105e80f6d39b80e | [
"Apache-2.0"
] | 1 | 2021-05-09T13:57:32.000Z | 2021-05-09T13:57:32.000Z | # NsiLcmOpOcc
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**_id** | [**UUID**](UUID.md) | | [optional]
**id** | [**UUID**](UUID.md) | | [optional]
**lcmOperationType** | **String** | | [optional]
**netsliceInstanceId** | [**UUID**](UUID.md) | | [optional]
**isAutomaticInvocation** | **Boolean** | | [optional]
**isCancelPending** | **Boolean** | | [optional]
**startTime** | **Float** | | [optional]
**statusEnteredTime** | **Float** | | [optional]
**operationParams** | [**NsiLcmOpOccOperationParams**](NsiLcmOpOccOperationParams.md) | | [optional]
**operationState** | **String** | | [optional]
**detailedStatus** | **String** | | [optional]
**links** | [**NsiLcmOpOccLinks**](NsiLcmOpOccLinks.md) | | [optional]
| 44.777778 | 102 | 0.558313 | yue_Hant | 0.2915 |
c115cf5256c358bef245fa629d212a557326b32c | 18,160 | md | Markdown | README.md | ClickerMonkey/iteratez | 461014918270b11a466ed42de772c828a6db54a3 | [
"MIT"
] | 13 | 2019-04-18T18:22:02.000Z | 2019-10-01T08:31:33.000Z | README.md | ClickerMonkey/iteratez | 461014918270b11a466ed42de772c828a6db54a3 | [
"MIT"
] | 28 | 2019-04-18T16:42:12.000Z | 2021-05-07T11:43:07.000Z | README.md | ClickerMonkey/iteratez | 461014918270b11a466ed42de772c828a6db54a3 | [
"MIT"
] | null | null | null | # iteratez
A powerful functional iterator, transformer, and mutator.
Out of the box you can iterate over arrays, objects, trees, sets, maps, linked-lists, iterables - and you can provide iteration capabilites to your own code no matter how complex the type (dynamically calculated, etc).
The iterator is lazy, so you can chain "views" and iteration is not done until you perform "operations" or "mutations" to the underlying source.
## Features
- Array, object, tree, set, map, linked-list, and iterables out of the box.
- Iteration is lazy, so iteration is only done when it absolutely needs to be.
- Some [operations](#operations) can exit early and cease iteration saving time and resources.
- When iterating, you can stop at any time.
- If the underlying source supports it, [remove](#mutations) a value.
- If the underlying source supports it, [replace](#mutations) a value.
- You can chain [views](#views) which don't cause iteration until an [operation](#operations) or [mutation](#mutations) are called.
- You can call [mutations](#mutations) to affect the underlying source.
- You can call [operations](#operations) to iterate and produce a result.
- You can create a [reusable function](#reusable-function) to perform operations repeatedly.
- [Create your own iterator.](#custom-iterators)
You can see all of these features in the [examples](#examples) below.
### Views
Returns an iterator...
- `where`: for a subset of the values.
- `not`: for a subset of the values that don't pass test (opposite of where).
- `transform`: that transforms the values to another type.
- `reverse`: that iterates over the values in reverse order.
- `exclude`: that excludes values found in another iterator.
- `intersect`: that has common values in another iterator.
- `sorted`: that is sorted based on some comparison.
- `shuffle`: that is randomly ordered.
- `unique`: that has only unique values.
- `duplicates`: that has all the duplicate values.
- `readonly`: that ignores mutations.
- `keys`: only for the keys of the values (replace not supported).
- `values`: only for the values (new key is index based).
- `take`: that only iterates over the first X values.
- `skip`: that skips the first X values.
- `drop`: that drops off the last X values.
- `append`: that is the original iterator + one or more iterators specified.
- `prepend`: that is one or more iterators specified + the original iterator.
- `gt`: that only has values greater than a value.
- `gte`: that only has values greater than or equal to a value.
- `lt`: that only has values less than a value.
- `lte`: that only has values less than or equal to a value.
- `fork`: that is this, but allows a function to perform fork operations
- `split`: Splits the values into two iterators (pass/fail) based on a condition.
- `unzip`: Splits the view into two iterates (keys/values).
### Mutations
- `delete`: Removes values in the view from the source.
- `overwrite`: Replaces values in the view from the source with a constant replacement.
- `update`: Replace values in the view from the source with a dynamic replacement.
- `extract`: Removes values in the view from the source and returns a new iterator with the removed values.
### Operations
- `empty`: Determines if view contains zero values.
- `has`: Determines if the view contains any values.
- `contains`: Determines if the view contains a specific value.
- `first`: Gets the first value in the view.
- `last`: Gets the last value in the view.
- `count`: Counts the number of values in the view.
- `array`: Builds an array of the values in the view.
- `set`: Builds a Set of the values in the view.
- `object`: Builds an object of the values in the view.
- `entries`: Builds an array of `[key, value]` in the view.
- `map`: Builds a Map of the values and keys in the view.
- `group`: Builds an object of value arrays grouped by a value derived from each value.
- `reduce`: Reduces the values in the view down to a single value.
- `min`: Returns the minimum value in the view.
- `max`: Returns the maximum value in the view.
- `iterate`: Invokes a function for each value in the view.
- `copy`: Copies the values in the view and returns a new iterator.
- `changes`: Notifies you when values are added, removed, or still present on an iterator since the last time called.
### Comparison Logic
The following chainable functions define how values should be compared.
- `numbers`: Set number comparison logic to the iterator.
- `strings`: Set string comparison logic to the iterator.
- `dates`: Set date comparison logic to the iterator.
- `desc`: Reverses the comparison logic.
- `withEquality`: Set a custom equality function.
- `withComparator`: Set a custom comparison function.
### Reset
The following function(s) allow you to change the source for iteration.
- `reset`: Sets a new source to iterate.
### Other Functions
The following static functions exist to help iterate simple sources:
- `Iterate.array`: Iterates an array.
- `Iterate.object`: Iterates the properties of an object, optionally just the properties explicitly set on the object.
- `Iterate.tree`: Iterates trees.
- `Iterate.linked`: Iterates linked-lists.
- `Iterate.map`: Iterates Maps
- `Iterate.set`: Iterates Sets
- `Iterate.join`: Returns an iterator that iterates over one or more iterators.
- `Iterate.zip`: Combines a key iterator and value iterator into one.
- `Iterate.empty`: Iterates nothing.
- `Iterate.entries`: Iterates an array of `[key, value]` entries.
- `Iterate.iterable`: Iterates any collection that implements iterable.
- `Iterate.hasEntries`: Iterates any object which has the `entries()` iterator.
## Examples
The example is in Typescript, but iterator is available as `iz.Iterate` and the function whic dynamically returns an iterator is `iz.iterate` or simply `iz` in JS
```typescript
import iz, { Iterate } from 'iteratez';
// Creating an iterator
let source = iz([1, 5, 7, 9, 10]);
let source = iz({
name: 'ClickerMonkey',
age: 30
});
let source = iz('string'); // each character
let source = iz(...source); // anything iterable
let source = iz([['key', 'value'], ['key', 'value']]); // [key, value] tuples
let source = iz(new Map()); // Map<K, V>
let source = iz(new Set()); // Set<T>
let source = iz(ReadonlyArray | ReadonlyMap | ReadonlySet | Int8Array | ...) // something that has entries() function
let source = Iterate.tree( ... )(head);
let source = Iterate.linked( ... )(head);
let source = yourSource.yourIteratorGenerator();
// ============ ITERATION ============
// Stop
source.each((value, key, iter) => {
if (someCondition(value)) {
iter.stop(42)
}
}).withResult((result) => {
// result = 42
// function only called with previous iteration stopped with a value
});
// Remove
// - if the source is a sequential collection, it's removed from the sequence (array, object, etc)
// - if the source is a tree, it removes it from the tree including it's children
// - otherwise, up to the custom source
source.each((value, key, iter) => {
if (someCondition(value)) {
iter.remove();
}
});
// Replace
source.each((value, key, iter) => {
if (someCondition(value)) {
iter.replace(replacement);
}
});
// ============ Operations ============
// These are at the end of a chain of views
let empty = source.empty(); // boolean
let has = source.has(); // boolean
let contains = source.contains(2); // boolean
let first = source.first(); // T
let last = source.last(); // T
let count = source.count(); // number
let array = source.array(); // T[]
let array = source.array(dest); // T[]
let set = source.set(); // Set<T>
let object = source.object(value => value.id); // { [value.id]: value }
let object = source.object(value => value.id, dest);
let entries = source.entries(): // Array<[K, T]>
let map = source.map(); // Map<T, K>
let group = source.group(value => value.age); // { [age]: T[] }
let reduced = source.reduce(R, (T, R) => R); // R
let min = source.min(); // T
let max = source.max(); // T
let copy = source.copy(): // Iterate<T>
let that = source.changes(onAdd, onRemove, onPresent); // this
// ============ Mutations ============
// These are at the end of a chain of views and they
// take the values in the current iterator and affects the
// underlying source.
source.delete(); // removes all values in iterator
source.where(x => x.id).delete(); // remove values without an ID
source.extract(); // does a delete and returns a new iterator with the removed values
source.overwrite(42); // replaces all values in iterator
source.where(x => x > 34).overwrite(12); // replace all numbers over 34 with 12
source.update(x => x * 2); // multiply all numbers by 2
// ============ Views ============
// These are chainable, at the end if you call an operation it performs
// it only on the values in the iterator at that point. If you call
// a mutation then it changes the underlying source but only on the
// values in the view.
source.where(x => x.age > 0); // values that past test
source.not(x => x.age > 0); // values that don't pass test
source.transform(x => x.name); // values transformed to a new type
source.reverse(); // values in reverse
source.exclude(anotherSource); // not shared values
source.intersect(anotherSource); // shared values
source.sorted(comparator?); // sorted by a comparator
source.shuffle(times?); // randomly orders
source.unique(equality?); // unique values only
source.duplicates(onlyOnce?); // duplicate values only
source.readonly(); // all subsequent mutations are ignored
source.keys(); // just the keys (index based), delete mutation works
source.values(); // just the values (index based)
source.take(10); // first 10 values
source.skip(5); // after first 5 values
source.drop(3); // ignore last 3
source.append(anotherSource); // union of two
source.prepend(anotherSource); // union in reverse order
source.gt(value, comparator?); // all values greater than value
source.gte(value, comparator?); // all values greater/equal to value
source.lt(value, comparator?); // all values less than value
source.lte(value, comparator?); // all values less/equal to value
source.fork(f => f.where(x => !!x.male).delete()); // fork operation
source.split(x => x.male); // { pass, fail }
source.split(x => x.male, (pass, fail) => {}): // two iterators
source.unzip(); // { keys, values }
source.unzip((keys, values) => {}); // two iterators
// ============ Logic ============
// comparator is used for max/min/sorted/gt/gte/lt/lte
// also will set withEquality if not specified
source.withComparator((a, b) => number);
// equality check used for contains/exclude/intersect/unique/duplicates
source.withEquality((a, b) => boolean);
// Pre-defined logic
source.numbers(ascending?, nullsFirst?); // number logic
source.strings(sensitive?, ascending?, nullsFirst?); // string logic
source.dates(equalityTimespan?, utc?, ascending?, nullsFirst?); // date logic
source.desc(); // reverse comparison logic
// ============ Reset ============
source.reset(['a', 'new', 'source', 'to', 'iterate']);
// ============ Examples ============
// Views ending with an operation or mutation.
source.duplicates().has(); // has duplicates?
source.duplicates().delete(); // remove duplicates
source.where(x => x.age < 18).extract(); // remove < 18yo
source.sorted().skip(5).take(10).array(); // sort, get 5->15 as array
// Map to a new iterator, but support replacement
source.transform<string>(
// transforms values to new type
value => value.name,
// if replace is called un a subsequent iteration, how do we take the transformed value and apply it back to the original value?
(replaceWith, current, value) => {
value.name = replaceWith;
}
).each((name, key, iter) => {
// Make all names uppercase in the most obtuse way possible
iter.replace(name.toUpperCase());
});
// Iterate with a callback
source.each((value, key, iter) => {
// iter.remove();
// iter.stop(withResult);
// iter.replace(newValue);
});
// ============ Linked List ============
// You can have any structure you wish
interface Node<T> {
value: T
next?: Node<T>
}
const linkedIterator = Iterate.linked<Node<string>, string>(
// get value from a node
(node) => node.value,
// get next node
(node) => node.next,
// remove the node (optional)
(node, prev) => prev.next = node.next,
// replace value (optional)
(node, value) => node.value = value,
// if you want a key different than the value, specify this
(node) => node.value.length
);
const head: Node<string> = ...;
// for all nodes which have a value that passes the regex...
// - sort them by value (asending), remove the top 5
// - convert the unremoved unsorted nodes into an array
linkedIterator(head)
.where(x => /regex/.test(x))
.fork(f => f.strings().sorted().take(5).delete())
.array();
// ============ Tree ============
interface Node<T> {
value: T
children?: Node<T>[]
}
// You create an iterator ready for nodes.
const treeIterator = Iterate.tree<Node<string>, string>(
// get a value from a node
(node) => node.value,
// get children from a node (can return an array, another iterator, undefined, or null)
(node) => node.children,
// if replace is called, apply the value (this is optional)
(node, value) => node.value = value
);
const head: Node<string> = ...
// Iterate depth-first and convert to an array
const depthFirstList = treeIterator(head).array();
// Iterate breadth-first and convert to an array
const breadthFirstList = treeIterator(head, false).array();
```
## Reusable Function
You can define a function which takes an iterator and performs any
number of operations. You can optionally have it return a result.
```typescript
// Iterate.func<T, R, A, K, S>
// - T = value type
// - R = function return type
// - A = array of parameter types
// - K = desired key type (restricts the source that can be passed to the function)
// - S = desired source (type passed to function must match this type)
// A function without a result. If a word contains an a, uppercase the word
const fn = Iterate.func<string>(
source => source
.where(x => x.indexOf('a') !== -1)
.update(x => x.toUpperCase())
);
// Any iterable source that has strings as values can be passed to function
const a = ['apple', 'bit', 'cat'];
fn(a);
// a = [APPLE, bit, CAT]
// A function with a result. If a word contains an a, uppercase it. Return the
// number of changed words. The counting has to happen before the update
// since the update would make no values pass the where condition.
const fn = Iterate.func<string, number>(
(source, setResult) => source
.where(x => x.indexOf('a') !== -1)
.count(setResult)
.update(x => x.toUpperCase())
);
const a = ['apple', 'bit', 'cat'];
const b = fn(a); // 2
// a = [APPLE, bit, CAT]
// A function can have special paramters passed to it.
// Given an array of people, I want a subset of that array given
// a limit and offset.
const getPage = Iterate.func<Person, Person[], [number, number]>(
(source, setResult, offset, limit) => source
.skip(offset)
.take(limit)
.array(setResult)
);
const persons: Person[] = ...;
const page = getPage(persons, 5, 10);
// page = at most 10 people starting at index 5
```
## Custom Iterators
You can add your own iterators to pick up your own types. If you are not using
TypeScript you can ignore and remove the types.
```typescript
// Lets assume we have a type which is a [Date, number] tuple
// which represents a range of dates.
type DateRange = [Date, number];
// First we create an iterator given the Range
// The generic arguments for Iterate<T, K, S> represent:
// - T = the value being iterated
// - K = the key type
// - S = the source type being iterated
function getDateIterator ([start, max]: DateRange)
{
return new Iterate<Date, number, DateRange>(iter =>
{
// This function is called when an operation or mutation is called on iter
// You should iterate over your values and respond to the action requested
// In this example, if iterateNode returns false, stop all iteration
const curr = new Date(start.getTime());
for (let key = 0; key < max; key++)
{
// Dates are not immutable, don't want the iterator to mess this up.
const value = new Date(curr.getTime());
switch (iter.act(value, key))
{
// stop all iteration
case IterateAction.STOP:
return;
// remove this value, and subsequentally all children from tree
case IterateAction.REMOVE:
// doesn't apply here, this is a dynamic set
break;
// replace the value
case IterateAction.REPLACE:
// doesn't apply here, this is a dynamic set
break;
}
// the next date in the sequence
curr.setDate(curr.getDate() + 1);
}
});
}
// Now that we have an iterator generator, if we wanted to we could
// autmatically have the library detect my custom type and call my custom
// iterator.
import { Generators } from 'iteratez';
// We need to detect our custom type. s is DateRange is just syntactical sugar
function isDateRange(s: any): s is DateRange {
return Array.isArray(s)
&& s.length === 2
&& s[0] instanceof Date
&& typeof s[1] === 'number'
;
}
// Add a generator detection to the beginning of the list
// It must return false if it's not a valid source.
Generators.unshift(s => isDateRange(s) ? getDateIterator(s) : false);
// Now when we do this...
const dateIterator = iz([new Date(), 10]);
// We have our iterator and can do anything we want with it.
const dates = dateIterator.array();
// BONUS!
// If we want the iz function to return a type Iterator (like Iterate<Date, number, DateRange>)
// we can add our own declaration file like this:
// <types/iteratez/index.d.ts>
import { Iterate } from 'iteratez';
declare module 'iteratez'
{
// add the function overload
export function iterate (range: DateRange): Iterate<Date, number, DateRange>
}
// </types/iteratez/index.d.ts>
// then instead of this everywhere:
import { iz } from 'iteratez';
const dateIterator = iz([new Date(), 3]); // Iterate<Date, number, DateRange> magic!
```
| 37.443299 | 218 | 0.686619 | eng_Latn | 0.977922 |
c116087d8b7f67cd5a6301b83f99d71efaf61bc5 | 4,759 | md | Markdown | README.md | adplabs/adp-userinfo-php | a696e37e9db57a370701f690970a83c734a33232 | [
"Apache-2.0"
] | 3 | 2016-03-08T19:19:39.000Z | 2016-04-10T10:56:38.000Z | README.md | adplabs/adp-userinfo-php | a696e37e9db57a370701f690970a83c734a33232 | [
"Apache-2.0"
] | 1 | 2016-03-08T18:05:17.000Z | 2016-03-08T18:09:59.000Z | README.md | adplabs/adp-userinfo-php | a696e37e9db57a370701f690970a83c734a33232 | [
"Apache-2.0"
] | 2 | 2020-01-26T23:39:39.000Z | 2020-06-19T19:40:46.000Z | ## ADP Marketplace Partners
There are a few pre-requesites that you need to fullfill in order to use this library:
- Replace the certifcates in this library with the ones you recieved from the [CSR Tool](https://apps.adp.com/apps/165104)
- Update the client id and client secret with the ones supplied in your credentials document PDF
- Update endpoints from ```https://iat-api.adp.com``` and ```https://iat-accounts.adp.com``` to ```https://api.adp.com``` and ```https://accounts.adp.com```.
# ADP Client Userinfo Library for PHP
The ADP Client Userinfo Library is intended to simplify and aid the process of retrieving Userinfo from the ADP Marketplace API Gateway. The Library includes a sample application that can be run out-of-the-box to connect to the ADP Marketplace API **test** gateway.
The installation and use of this library makes the following assumptions:
- You must be running php5.3 or higher with CURL support. If you are using OSX, this means you will need to rebuild PHP with CURL support.
- Composer is installed and configured, as the library utilizes Composer for installation.
### Version
1.0.2
### Installation
**Composer**
Use Composer from the location you wish to use as the root folder for your project.
```sh
$ composer require adpmarketplace/api-userinfo
```
This will install the Userinfo module to your project. If you have not already installed the Connection module, it will automatically be downloaded and installed as well.
If you wish to build the sample client, do the following:
```sh
$ cd adplib/connection/tools
$ php makeclient.php
```
If you want to test immediately, copy and paste the proper commands that are generated by the makeclient.php script. In case you missed them, you can do it this way as well:
```sh
(from the project root)
$ cd client
$ php -S 127.0.0.1:8889
```
This starts an HTTP server on port 8889 (this port must be unused to run the sample application).
**Run the sample app**
Note, to test the sample app you must first run the makeclient.php script, and then start the PHP server as outlined above.
Once this is done, open your web browser and go to:
```sh
http://localhost:8889
```
The sample app allows you to connect to the ADP test API Gateway for testing the Userinfo call. Please note that only "Authorization Code" is valid for calling the Userinfo api.
## Examples
### Call userinfo (this assumes you already have a valid connection)
```php
require("config.php");
require($libroot . "connection/adpapiConnection.class.php");
require($libroot . "userinfo/adpapiUserinfo.class.php");
//--------------------------
// Create the helper class
//--------------------------
try {
$userInfoHelper = new adpapiUserinfoHelper($adpConn);
}
catch (adpException $e) {
showADPException($e);
exit();
}
//-------------------------------------------
// Get the info back from the userinfo call
//-------------------------------------------
try {
$userInfo = $userInfoHelper->getUserinfo();
}
catch (adpException $e) {
showADPException($e);
exit();
}
//-------------------------------------------
// Success. We have a userinfo value object.
//-------------------------------------------
echo "<h1>User Info</h1>\n";
echo "<pre>";
print_r($userInfo);
echo "</pre>";
```
## API Documentation ##
Documentation on the individual API calls provided by the library are documented in the 'doc' folder. Open the index.html file in your browser to view the documentation.
## Dependencies ##
This library has the following dependencies.
* adpmarketplace/api-connection - Automatically installed via composer.
* php >= v5.3 with CURL support. If you are using OSX, this means you will need to rebuild PHP with CURL support.
* composer
## Tests ##
Our testing and code coverage is handled through PHPUNIT, which you must install to run the tests. For code coverage to work, you must also have Xdebug installed. The test units are located in the "test" folder, and the configurations are already loaded into the phpunit.xml. In order to run the tests and view code coverage:
```
phpunit
```
## Linter ##
You must use PHP's built-in linter to validate the structure of your code:
```php
php -l <sourcefile>
```
## Contributing ##
To contribute to the library, please generate a pull request. Before generating the pull request, please insure the following:
1. Appropriate unit tests have been updated or created.
2. Code coverage on the unit tests must be no less than 95%.
3. Your code updates have been fully tested and linted with no errors.
4. Update README.md and API documentation as appropriate.
## License ##
This library is available under the Apache 2 license (http://www.apache.org/licenses/LICENSE-2.0).
| 30.703226 | 328 | 0.7054 | eng_Latn | 0.992463 |
c118b00b3fc65355af705d7d643078460ee6d752 | 551 | md | Markdown | tests/snapshots/publicPath-node-modules.ts.md | jpex-js/babel-plugin | 145f14a32064374734d8783350450a25ee9fd21d | [
"MIT"
] | null | null | null | tests/snapshots/publicPath-node-modules.ts.md | jpex-js/babel-plugin | 145f14a32064374734d8783350450a25ee9fd21d | [
"MIT"
] | 20 | 2020-09-02T02:19:30.000Z | 2021-04-08T14:05:52.000Z | tests/snapshots/publicPath-node-modules.ts.md | jpex-js/babel-plugin | 145f14a32064374734d8783350450a25ee9fd21d | [
"MIT"
] | null | null | null | # Snapshot report for `tests/publicPath-node-modules.ts`
The actual snapshot is saved in `publicPath-node-modules.ts.snap`.
Generated by [AVA](https://avajs.dev).
## publicPath for node_modules
> Snapshot 1
`// publicPath for imports␊
import jpex from 'jpex';␊
jpex.factory("type:some-lib/Foo", [], () => 'foo');␊
jpex.constant("type:some-lib/Bar", 44);␊
jpex.factory("type:some-lib/Zeb", ["type:some-lib/Foo", "type:some-lib/Bar"], (foo, bar) => () => `${foo}${bar}`);␊
const result = jpex.resolve("type:some-lib/Zeb");`
| 32.411765 | 119 | 0.642468 | eng_Latn | 0.324174 |
c118b0d85270c4c5e8ac97681925869be741d720 | 529 | md | Markdown | .github/ISSUE_TEMPLATE/bug-report.md | c3-amitsalunke/cassandra-operator | 0a9dfb40009cc79e53fa5510666deb308152cdb2 | [
"Apache-2.0"
] | 229 | 2018-04-26T23:27:34.000Z | 2022-03-03T01:03:06.000Z | .github/ISSUE_TEMPLATE/bug-report.md | c3-amitsalunke/cassandra-operator | 0a9dfb40009cc79e53fa5510666deb308152cdb2 | [
"Apache-2.0"
] | 263 | 2018-04-24T21:16:42.000Z | 2022-02-09T08:48:43.000Z | .github/ISSUE_TEMPLATE/bug-report.md | c3-amitsalunke/cassandra-operator | 0a9dfb40009cc79e53fa5510666deb308152cdb2 | [
"Apache-2.0"
] | 76 | 2018-04-09T14:55:14.000Z | 2021-05-24T11:42:39.000Z | ---
name: Bug report
about: Create a report to help us improve Instaclustr Cassandra Operator
title: "[BUG]"
labels: bug
assignees: smiklosovic
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
A clear and concise description of what you expected to happen.
**Environment**
- OS
- Kubernetes version
- `kubectl version`
- Go version
- Cassandra version
**Additional context**
Add any other context about the problem here.
| 17.633333 | 72 | 0.746692 | eng_Latn | 0.976189 |
c118c4cce0f95deb84b1aab81cf6437680d6e722 | 3,577 | md | Markdown | pages/hygieia/Collectors/review/build/jenkins-codequality.md | Megha849/HygieiaDocs | 3847f5adb5e7a0f39d59443d67dc9fe5caa624a1 | [
"MIT",
"BSD-3-Clause"
] | 1 | 2018-05-03T19:37:31.000Z | 2018-05-03T19:37:31.000Z | pages/hygieia/Collectors/review/build/jenkins-codequality.md | Megha849/HygieiaDocs | 3847f5adb5e7a0f39d59443d67dc9fe5caa624a1 | [
"MIT",
"BSD-3-Clause"
] | null | null | null | pages/hygieia/Collectors/review/build/jenkins-codequality.md | Megha849/HygieiaDocs | 3847f5adb5e7a0f39d59443d67dc9fe5caa624a1 | [
"MIT",
"BSD-3-Clause"
] | null | null | null | ---
title: Jenkins-codequality Collector
tags:
keywords:
summary:
sidebar: hygieia_sidebar
permalink: jenkins-codequality.html
---
Configure the Jenkins-codequality Collector to display and monitor information (related to code quality) on the Hygieia Dashboard, from Jenkins-codequality. Hygieia uses Spring Boot to package the collector as an executable JAR file with dependencies.
### Setup Instructions
To configure the Jenkins-codequality Collector, execute the following steps:
* **Step 1: Change Directory**
Change the current working directory to the `jenkins-codequality` directory of your Hygieia source code installation.
For example, in the Windows command prompt, run the following command:
<pre code="">cd C:\Users\[usernname]\hygieia\collectors\scm\jenkins-codequality</pre>
* **Step 2: Run Maven Build**
Run the maven build to package the collector into an executable JAR file:
```
mvn install
```
The output file `jenkins-codequality.jar` is generated in the `jenkins-codequality\target` folder.
Once the build is run, the code quality artefacts(for example, test.exec) should be available at the following path:
```
C:\path\to\junit\findbugs\pmd\checkstyle\jacoco artefacts
```
* **Step 3: Set Parameters in Application Properties File**
Set the configurable parameters in the `application.properties` file to connect to the Dashboard MongoDB database instance, including properties required by the Jenkins-codequality Collector.
To configure parameters for the Jenkins-codequality Collector, refer to the sample [application.properties](#sample_application_properties_file) file.
For information about sourcing the application properties file, refer to the [Spring Boot Documentation](http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#boot-features-external-config-application-property-files).
* **Step 4: Deploy the Executable File**
To deploy the `jenkins-codequality.jar` file, change directory to `jenkins-codequality\target`, and then execute the following from the command prompt:
```bash
java -jar jenkins-codequality.jar
```
If the `application.properties` file is not in the same location as the JAR file, then execute the following command:
```bash
java -jar jenkins-codequality.jar --spring.config.name=jenkins-codequality --spring.config.location=[path to application.properties file]
```
### Sample Application Properties File
```properties
# Database Name
dbname=dashboarddb
# Database HostName - default is localhost
dbhost=10.0.1.1
# Database Port - default is 27017
dbport=27017
# MongoDB replicaset
dbreplicaset=[false, if you are not using MongoDB replicaset]
dbhostport=[host1:port1,host2:port2,host3:port3]
# Database Username - default is blank
dbusername=dashboarduser
# Database Password - default is blank
dbpassword=dbpassword
# Collector schedule (required)
jenkins-codequality.cron=0 0/1 * * * *
# Collector servers (required) - can be multiple servers
jenkins-codequality.servers[0]=https://jenkins.company.com
# Collector types (not required, but regex should match only the type specified)
jenkins-codequality.artifactRegex.junit=TEST-.*\\.xml
jenkins-codequality.artifactRegex.findbugs=findbugsXml.xml
jenkins-codequality.artifactRegex.pmd=pmd.xml
jenkins-codequality.artifactRegex.checkstyle=checkstyle-result.xml
jenkins-codequality.artifactRegex.jacoco=jacoco.xml
# Collector job depth (required) should be set to at least 1, and more if you use folder jobs, and so on.
jenkins-codequality.jobDepth=4
```
| 36.5 | 252 | 0.781381 | eng_Latn | 0.87491 |
c118eb903a98ae9b6ba1fe955868d7acd5424356 | 3,289 | md | Markdown | schedule.md | summarization2019/summarization2019.github.io | 7b4e685d3643eb26bb64e54564a16700901fe5c6 | [
"MIT"
] | null | null | null | schedule.md | summarization2019/summarization2019.github.io | 7b4e685d3643eb26bb64e54564a16700901fe5c6 | [
"MIT"
] | null | null | null | schedule.md | summarization2019/summarization2019.github.io | 7b4e685d3643eb26bb64e54564a16700901fe5c6 | [
"MIT"
] | null | null | null | ---
layout: page
title:
subtitle:
---
**Workshop Schedule**
Date: November 4, 2019
Location: SkyCity Meeting Room 1
Note: For presentation format, each long talk is 12 minutes plus 2 minutes QA, and each short talk is 10 minutes plus 2 minutes QA.
Time | Content
-------- | ----------
08:50 – 10:40 | **Morning Session 1**
08:50 – 09:00 | Opening Remarks
09:00 - 09:50 | Invited Talk: Nanyun Peng (USC ISI) <br> Title: *Creative Generation as Inverse Summarization*
09:50 - 10:04 | *Summary Level Training of Sentence Rewriting for Abstractive Summarization* <br> Sanghwan Bae, Taeuk Kim, Jihoon Kim, Sang-goo Lee
10:04 - 10:18 | *Abstractive Timeline Summarization* <br> Julius Steen and Katja Markert
10:18 - 10:32 | *Learning to Create Sentence Semantic Relation Graphs for Multi-Document Summarization* <br> Diego Antognini and Boi Faltings
10:32 - 10:46 | *Answering Naturally : Factoid to Full length Answer Generation* <br> Vaishali Pal, Manish Shrivastava, Irshad Bhat
10:46 - 11:00 | Break
11:00 – 12:30 | **Morning Session 2**
11:00 - 11:50 | Invited Talk: Ido Dagan (Bar-Ilan University) <br> Title: *Comprehending multi-text information: interaction and consolidation*
11:50 - 12:02 | *Unsupervised Aspect-Based Multi-Document Abstractive Summarization* <br> Maximin Coavoux, Hady Elsahar, Matthias Galle
12:02 - 12:14 | *BillSum: A Corpus for Automatic Summarization of US Legislation* <br> Anastassia Kornilova, Vladimir Eidelman
12:14 - 12:26 | *An Editorial Network for Enhanced Document Summarization* <br> Edward Moroshko, Guy Feigenblat, Haggai Roitman, David Konopnicki
12:26 - 12:38 | *Towards Annotating and Creating Summary Highlights at Sub-sentence Level* <br> Kristjan Arumae, Parminder Bhatia, Fei Liu
12:30 – 14:00 | Lunch
14:00 - 15:30 | **Afternoon Session 1**
14:00 – 14:50 | Invited Talk: Wenjie Li (The Hong Kong Polytechnic University) <br> Title: *Beyond Sequence-to-Sequence Neural Summarization*
14:50 - 15:04 | *SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization* <br> Bogdan Gliwa, Iwona Mochol, Maciej Biesek, Aleksander Wawer
15:04 - 15:18 | *A Closer Look at Data Bias in Neural Extractive Summarization Models* <br> Ming Zhong, Danqing Wang, Pengfei Liu, Xipeng Qiu, Xuanjing Huang
15:18 - 15:32 | *Global Voices: Crossing Borders in Automatic News Summarization* <br> Khanh Nguyen and Hal Daumé III
15:32 - 16:00 | Break
16:00 - 17:40 | **Afternoon Session 2**
16:00 - 16:50 | Invited Talk: Manabu Okumura (Tokyo Institute of Technology) <br> Title: *Studying the past in text summarization*
16:50 - 17:02 | *Multi-Document Summarization with Determinantal Point Processes and Contextualized Representations* <br> Sangwoo Cho, Chen Li, Dong Yu, Hassan Foroosh, Fei Liu
17:02 - 17:14 | *Analyzing Sentence Fusion in Abstractive Summarization* <br> Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, Fei Liu
17:14 - 17:26 | *Summarizing Relationships for Interactive Concept Map Browsers* <br> Abram Handler, Premkumar Ganeshkumar, Brendan O'Connor, Mohamed AlTantawy
17:26 - 17:38 | *Exploiting Discourse-Level Segmentation for Extractive Summarization* <br> Zhengyuan Liu and Nancy Chen
17:38 - 17:45 | Closing Remarks
| 73.088889 | 188 | 0.743083 | eng_Latn | 0.342811 |
c11a4dabd47ed81820cee138c72841ca87cdd0c0 | 39,129 | md | Markdown | editors/vim.md | lzw9560/awesome-cheatsheets | ade4435ad12c7f3fc0c2c9b0420b5389219bf604 | [
"MIT"
] | 1 | 2018-08-17T09:02:25.000Z | 2018-08-17T09:02:25.000Z | editors/vim.md | lzw9560/awesome-cheatsheets | ade4435ad12c7f3fc0c2c9b0420b5389219bf604 | [
"MIT"
] | null | null | null | editors/vim.md | lzw9560/awesome-cheatsheets | ade4435ad12c7f3fc0c2c9b0420b5389219bf604 | [
"MIT"
] | null | null | null | ##############################################################################
# VIM CHEATSHEET (中文速查表) - by skywind (created on 2017/10/12)
# Version: 42, Last Modified: 2018/07/19 19:04
# https://github.com/skywind3000/awesome-cheatsheets
##############################################################################
##############################################################################
# 光标移动
##############################################################################
h 光标左移,同 <Left> 键
j 光标下移,同 <Down> 键
k 光标上移,同 <Up> 键
l 光标右移,同 <Right> 键
CTRL-F 下一页
CTRL-B 上一页
CTRL-U 上移半屏
CTRL-D 下移半屏
0 跳到行首(是数字零,不是字母O),效用等同于 <Home> 键
^ 跳到从行首开始第一个非空白字符
$ 跳到行尾,效用等同于 <End> 键
gg 跳到第一行,效用等同于 CTRL+<Home>
G 跳到最后一行,效用等同于 CTRL+<End>
nG 跳到第n行,比如 10G 是移动到第十行
:n 跳到第n行,比如 :10<回车> 是移动到第十行
10% 移动到文件 10% 处
15| 移动到当前行的 15列
w 跳到下一个单词开头 (word: 标点或空格分隔的单词)
W 跳到下一个单词开头 (WORD: 空格分隔的单词)
e 跳到下一个单词尾部 (word: 标点或空格分隔的单词)
E 跳到下一个单词尾部 (WORD: 空格分隔的单词)
b 上一个单词头 (word: 标点或空格分隔的单词)
B 上一个单词头 (WORD: 空格分隔的单词)
ge 上一个单词尾
) 向前移动一个句子(句号分隔)
( 向后移动一个句子(句号分隔)
} 向前移动一个段落(空行分隔)
{ 向后移动一个段落(空行分隔)
<enter> 移动到下一行首个非空字符
+ 移动到下一行首个非空字符(同回车键)
- 移动到上一行首个非空字符
H 移动到屏幕上部
M 移动到屏幕中部
L 移动到屏幕下部
fx 跳转到下一个为 x 的字符,2f/ 可以找到第二个斜杆
Fx 跳转到上一个为 x 的字符
tx 跳转到下一个为 x 的字符前
Tx 跳转到上一个为 x 的字符前
; 跳到下一个 f/t 搜索的结果
, 跳到上一个 f/t 搜索的结果
<S-Left> 按住 SHIFT 按左键,向左移动一个单词
<S-Right> 按住 SHIFT 按右键,向右移动一个单词
<S-Up> 按住 SHIFT 按上键,向上翻页
<S-Down> 按住 SHIFT 按下键,向下翻页
gm 移动到行中
gj 光标下移一行(忽略自动换行)
gk 光标上移一行(忽略自动换行)
##############################################################################
# 插入模式:进入退出
##############################################################################
i 在光标处进入插入模式
I 在行首进入插入模式
a 在光标后进入插入模式
A 在行尾进入插入模式
o 在下一行插入新行并进入插入模式
O 在上一行插入新行并进入插入模式
gi 进入到上一次插入模式的位置
<ESC> 退出插入模式
CTRL-[ 退出插入模式(同 ESC 等价,但更顺手)
##############################################################################
# INSERT MODE - 由 i, I, a, A, o, O 等命令进入插入模式后
##############################################################################
<Up> 光标向上移动
<Down> 光标向下移动
<Left> 光标向左移动
<Right> 光标向右移动
<S-Left> 按住 SHIFT 按左键,向左移动一个单词
<S-Right> 按住 SHIFT 按右键,向右移动一个单词
<S-Up> 按住 SHIFT 按上键,向上翻页
<S-Down> 按住 SHIFT 按下键,向下翻页
<PageUp> 上翻页
<PageDown> 下翻页
<Delete> 删除光标处字符
<BS> Backspace 向后删除字符
<Home> 光标跳转行首
<End> 光标跳转行尾
CTRL-W 向后删除单词
CTRL-O 临时退出插入模式,执行单条命令又返回插入模式
CTRL-\ CTRL-O 临时退出插入模式(光标保持),执行单条命令又返回插入模式
CTRL-R 0 插入寄存器(内部 0号剪贴板)内容,CTRL-R 后可跟寄存器名
CTRL-R " 插入匿名寄存器内容,相当于插入模式下 p粘贴
CTRL-R = 插入表达式计算结果,等号后面跟表达式
CTRL-R : 插入上一次命令行命令
CTRL-R / 插入上一次搜索的关键字
CTRL-F 自动缩进
CTRL-U 删除当前行所有字符
CTRL-V {char} 插入非数字的字面量
CTRL-V {number} 插入三个数字代表的 ascii/unicode 字符
CTRL-V 065 插入 10进制 ascii 字符(两数字) 065 即 A字符
CTRL-V x41 插入 16进制 ascii 字符(三数字) x41 即 A字符
CTRL-V o101 插入 8进制 ascii 字符(三数字) o101 即 A字符
CTRL-V u1234 插入 16进制 unicode 字符(四数字)
CTRL-V U12345678 插入 16进制 unicode 字符(八数字)
CTRL-K {ch1} {ch2} 插入 digraph(见 :h digraph),快速输入日文或符号等
##############################################################################
# 文本编辑
##############################################################################
r 替换当前字符
R 进入替换模式,直至 ESC 离开
s 替换字符(删除光标处字符,并进入插入模式,前可接数量)
S 替换行(删除当前行,并进入插入模式,前可接数量)
cc 改写当前行(删除当前行并进入插入模式),同 S
cw 改写光标开始处的当前单词
ciw 改写光标所处的单词
caw 改写光标所处的单词,并且包括前后空格(如果有的话)
c0 改写到行首
c^ 改写到行首(第一个非零字符)
c$ 改写到行末
C 改写到行尾(同c$)
ci" 改写双引号中的内容
ci' 改写单引号中的内容
cib 改写小括号中的内容
cab 改写小括号中的内容(包含小括号本身)
ci) 改写小括号中的内容
ci] 改写中括号中内容
ciB 改写大括号中内容
caB 改写大括号中的内容(包含大括号本身)
ci} 改写大括号中内容
cit 改写 xml tag 中的内容
cis 改写当前句子
c2w 改写下两个单词
ct( 改写到小括号前
x 删除当前字符,前面可以接数字,3x代表删除三个字符
X 向前删除字符
dd 删除当前行
d0 删除到行首
d^ 删除到行首(第一个非零字符)
d$ 删除到行末
D 删除到行末(同 d$)
dw 删除当前单词
diw 删除光标所处的单词
daw 删除光标所处的单词,并包含前后空格(如果有的话)
di" 删除双引号中的内容
di' 删除单引号中的内容
dib 删除小括号中的内容
di) 删除小括号中的内容
dab 删除小括号内的内容(包含小括号本身)
di] 删除中括号中内容
diB 删除大括号中内容
di} 删除大括号中内容
daB 删除大括号内的内容(包含大括号本身)
dit 删除 xml tag 中的内容
dis 删除当前句子
d2w 删除下两个单词
dt( 删除到小括号前
dgg 删除到文件头部
dG 删除到文件尾部
d} 删除下一段
d{ 删除上一段
u 撤销
U 撤销整行操作
CTRL-R 撤销上一次 u 命令
J 链接多行为一行
. 重复上一次操作
~ 替换大小写
g~iw 替换当前单词的大小写
gUiw 将单词转成大写
guiw 将当前单词转成小写
guu 全行转为小写
gUU 全行转为大写
<< 减少缩进
>> 增加缩进
== 自动缩进
CTRL-A 增加数字
CTRL-X 减少数字
##############################################################################
# 复制粘贴
##############################################################################
p 粘贴到光标后
P 粘贴到光标前
v 开始标记
y 复制标记内容
V 开始按行标记
CTRL-V 开始列标记
y$ 复制当前位置到本行结束的内容
yy 复制当前行
Y 复制当前行,同 yy
yiw 复制当前单词
3yy 复制光标下三行内容
v0 选中当前位置到行首
v$ 选中当前位置到行末
viw 选中当前单词
vib 选中小括号内的东西
vi) 选中小括号内的东西
vi] 选中中括号内的东西
viB 选中大括号内的东西
vi} 选中大括号内的东西
vis 选中句子中的东西
vab 选中小括号内的东西(包含小括号本身)
va) 选中小括号内的东西(包含小括号本身)
va] 选中中括号内的东西(包含中括号本身)
vaB 选中大括号内的东西(包含大括号本身)
va} 选中大括号内的东西(包含大括号本身)
gv 重新选择上一次选中的文字
:set paste 允许粘贴模式(避免粘贴时自动缩进影响格式)
:set nopaste 禁止粘贴模式
"?yy 复制当前行到寄存器 ? ,问号代表 0-9 的寄存器名称
"?p 将寄存器 ? 的内容粘贴到光标后
"?P 将寄存器 ? 的内容粘贴到光标前
:registers 显示所有寄存器内容
:[range]y 复制范围,比如 :20,30y 是复制20到30行,:10y 是复制第十行
:[range]d 删除范围,比如 :20,30d 是删除20到30行,:10d 是删除第十行
ddp 交换两行内容:先删除当前行复制到寄存器,并粘贴
##############################################################################
# 文本对象 - c,d,v,y 等命令后接文本对象,一般为:<范围 i/a><类型>
##############################################################################
$ 到行末
0 到行首
^ 到行首非空字符
tx 光标位置到字符 x 之前
fx 光标位置到字符 x 之处
iw 整个单词(不包括分隔符)
aw 整个单词(包括分隔符)
iW 整个 WORD(不包括分隔符)
aW 整个 WORD(包括分隔符)
is 整个句子(不包括分隔符)
ib 小括号内
ab 小括号内(包含小括号本身)
iB 大括号内
aB 大括号内(包含大括号本身)
i) 小括号内
a) 小括号内(包含小括号本身)
i] 中括号内
a] 中括号内(包含中括号本身)
i} 大括号内
a} 大括号内(包含大括号本身)
i' 单引号内
a' 单引号内(包含单引号本身)
i" 双引号内
a" 双引号内(包含双引号本身)
2i) 往外两层小括号内
2a) 往外两层小括号内(包含小括号本身)
2f) 到第二个小括号处
2t) 到第二个小括号前
##############################################################################
# 查找替换
##############################################################################
/pattern 从光标处向文件尾搜索 pattern
?pattern 从光标处向文件头搜索 pattern
n 向同一方向执行上一次搜索
N 向相反方向执行上一次搜索
* 向前搜索光标下的单词
# 向后搜索光标下的单词
:s/p1/p2/g 将当前行中全替换p1为p2
:%s/p1/p2/g 将当前文件中全替换p1为p2
:%s/p1/p2/gc 将当前文件中全替换p1为p2,并且每处询问你是否替换
:10,20s/p1/p2/g 将第10到20行中所有p1替换为p2
:%s/1\\2\/3/123/g 将“1\2/3” 替换为 “123”(特殊字符使用反斜杠标注)
:%s/\r//g 删除 DOS 换行符 ^M
##############################################################################
# VISUAL MODE - 由 v, V, CTRL-V 进入的可视模式
##############################################################################
> 增加缩进
< 减少缩进
d 删除高亮选中的文字
x 删除高亮选中的文字
c 改写文字,即删除高亮选中的文字并进入插入模式
s 改写文字,即删除高亮选中的文字并进入插入模式
y 拷贝文字
~ 转换大小写
o 跳转到标记区的另外一端
O 跳转到标记块的另外一端
u 标记区转换为小写
U 标记区转换为大写
g CTRL-G 显示所选择区域的统计信息
<Esc> 退出可视模式
##############################################################################
# 位置跳转
##############################################################################
CTRL-O 跳转到上一个位置
CTRL-I 跳转到下一个位置
CTRL-^ 跳转到 alternate file (当前窗口的上一个文件)
% 跳转到 {} () [] 的匹配
gd 跳转到局部定义(光标下的单词的定义)
gD 跳转到全局定义(光标下的单词的定义)
gf 打开名称为光标下文件名的文件
[[ 跳转到上一个顶层函数(比如C语言以大括号分隔)
]] 跳转到下一个顶层函数(比如C语言以大括号分隔)
[m 跳转到上一个成员函数
]m 跳转到下一个成员函数
[{ 跳转到上一处未匹配的 {
]} 跳转到下一处未匹配的 }
[( 跳转到上一处未匹配的 (
]) 跳转到下一处未匹配的 )
[c 上一个不同处(diff时)
]c 下一个不同处(diff时)
[/ 跳转到 C注释开头
]/ 跳转到 C注释结尾
`` 回到上次跳转的位置
'' 回到上次跳转的位置
`. 回到上次编辑的位置
'. 回到上次编辑的位置
##############################################################################
# 文件操作
##############################################################################
:w 保存文件
:w <filename> 按名称保存文件
:e <filename> 打开文件并编辑
:saveas <filename> 另存为文件
:r <filename> 读取文件并将内容插入到光标后
:r !dir 将 dir 命令的输出捕获并插入到光标后
:close 关闭文件
:q 退出
:q! 强制退出
:wa 保存所有文件
:cd <path> 切换 Vim 当前路径
:pwd 显示 Vim 当前路径
:new 打开一个新的窗口编辑新文件
:enew 在当前窗口创建新文件
:vnew 在左右切分的新窗口中编辑新文件
:tabnew 在新的标签页中编辑新文件
##############################################################################
# 缓存操作
##############################################################################
:ls 查案缓存列表
:bn 切换到下一个缓存
:bp 切换到上一个缓存
:bd 删除缓存
:b 1 切换到1号缓存
:b abc 切换到文件名为 abc 开头的缓存
:badd <filename> 将文件添加到缓存列表
:set hidden 设置隐藏模式(未保存的缓存可以被切换走,或者关闭)
:set nohidden 关闭隐藏模式(未保存的缓存不能被切换走,或者关闭)
n CTRL-^ 切换缓存,先输入数字的缓存编号,再按 CTRL + 6
##############################################################################
# 窗口操作
##############################################################################
:sp <filename> 上下切分窗口并在新窗口打开文件 filename
:vs <filename> 左右切分窗口并在新窗口打开文件 filename
CTRL-W s 上下切分窗口
CTRL-W v 左右切分窗口
CTRL-W w 循环切换到下一个窗口
CTRL-W W 循环切换到上一个窗口
CTRL-W p 跳到上一个访问过的窗口
CTRL-W c 关闭当前窗口
CTRL-W o 关闭其他窗口
CTRL-W h 跳到左边的窗口
CTRL-W j 跳到下边的窗口
CTRL-W k 跳到上边的窗口
CTRL-W l 跳到右边的窗口
CTRL-W + 增加当前窗口的行高,前面可以加数字
CTRL-W - 减少当前窗口的行高,前面可以加数字
CTRL-W < 减少当前窗口的列宽,前面可以加数字
CTRL-W > 增加当前窗口的列宽,前面可以加数字
CTRL-W = 让所有窗口宽高相同
CTRL-W H 将当前窗口移动到最左边
CTRL-W J 将当前窗口移动到最下边
CTRL-W K 将当前窗口移动到最上边
CTRL-W L 将当前窗口移动到最右边
CTRL-W x 交换窗口
CTRL-W f 在新窗口中打开名为光标下文件名的文件
CTRL-W gf 在新标签页中打开名为光标下文件名的文件
CTRL-W R 旋转窗口
CTRL-W T 将当前窗口移到新的标签页中
CTRL-W P 跳转到预览窗口
CTRL-W z 关闭预览窗口
CTRL-W _ 纵向最大化当前窗口
CTRL-W | 横向最大化当前窗口
##############################################################################
# 标签页
##############################################################################
:tabs 显示所有标签页
:tabe <filename> 在新标签页中打开文件 filename
:tabn 下一个标签页
:tabp 上一个标签页
:tabc 关闭当前标签页
:tabo 关闭其他标签页
:tabn n 切换到第n个标签页,比如 :tabn 3 切换到第三个标签页
:tabm n 标签移动
:tabfirst 切换到第一个标签页
:tablast 切换到最后一个标签页
:tab help 在标签页打开帮助
:tab drop <file> 如果文件已被其他标签页和窗口打开则跳过去,否则新标签打开
:tab split 在新的标签页中打开当前窗口里的文件
:tab ball 将缓存中所有文件用标签页打开
:set showtabline=? 设置为 0 就不显示标签页标签,1会按需显示,2会永久显示
ngt 切换到第n个标签页,比如 2gt 将会切换到第二个标签页
gt 下一个标签页
gT 上一个标签页
##############################################################################
# 书签
##############################################################################
:marks 显示所有书签
ma 保存当前位置到书签 a ,书签名可以用 a-z(作用范围为文件内部), A-Z(作用范围为所有文件) 26*2个字母
'a 跳转到书签 a所在的行
`a 跳转到书签 a所在位置
`. 跳转到上一次编辑的行
'A 跳转到全文书签 A
[' 跳转到上一个书签
]' 跳转到下一个书签
'< 跳到上次可视模式选择区域的开始
'> 跳到上次可视模式选择区域的结束
##############################################################################
# 常用设置
##############################################################################
:set nocompatible 设置不兼容原始 vi 模式(必须设置在最开头)
:set bs=? 设置BS键模式,现代编辑器为 :set bs=eol,start,indent
:set sw=4 设置缩进宽度为 4
:set ts=4 设置制表符宽度为 4
:set noet 设置不展开 tab 成空格
:set et 设置展开 tab 成空格
:set winaltkeys=no 设置 GVim 下正常捕获 ALT 键
:set nowrap 关闭自动换行
:set ttimeout 允许终端按键检测超时(终端下功能键为一串ESC开头的扫描码)
:set ttm=100 设置终端按键检测超时为100毫秒
:set term=? 设置终端类型,比如常见的 xterm
:set ignorecase 设置搜索是否忽略大小写
:set smartcase 智能大小写,默认忽略大小写,除非搜索内容里包含大写字母
:set list 设置显示制表符和换行符
:set number 设置显示行号,禁止显示行号可以用 :set nonumber
:set paste 进入粘贴模式(粘贴时禁用缩进等影响格式的东西)
:set nopaste 结束粘贴模式
:set spell 允许拼写检查
:set hlsearch 设置高亮查找
:set ruler 总是显示光标位置
:set incsearch 查找输入时动态增量显示查找结果
:set insertmode Vim 始终处于插入模式下,使用 ctrl-o 临时执行命令
:set all 列出所有选项设置情况
:syntax on 允许语法高亮
:syntax off 禁止语法高亮
##############################################################################
# 帮助信息
##############################################################################
:h tutor 入门文档
:h quickref 快速帮助
:h index 查询 Vim 所有键盘命令定义
:h summary 帮助你更好的使用内置帮助系统
:h CTRL-H 查询普通模式下 CTRL-H 是干什么的
:h i_CTRL-H 查询插入模式下 CTRL-H 是干什么的
:h i_<Up> 查询插入模式下方向键上是干什么的
:h pattern.txt 正则表达式帮助
:h eval 脚本编写帮助
:h function-list 查看 VimScript 的函数列表
:h windows.txt 窗口使用帮助
:h tabpage.txt 标签页使用帮助
:h +timers 显示对 +timers 特性的帮助
:h :! 查看如何运行外部命令
:h tips 查看 Vim 内置的常用技巧文档
:h set-termcap 查看如何设置按键扫描码
:viusage NORMAL 模式帮助
:exusage EX 命令帮助
:version 显示当前 Vim 的版本号和特性
##############################################################################
# 外部命令
##############################################################################
:!ls 运行外部命令 ls,并等待返回
:r !ls 将外部命令 ls 的输出捕获,并插入到光标后
:w !sudo tee % sudo以后保存当前文件
:call system('ls') 调用 ls 命令,但是不显示返回内容
:!start notepad Windows 下启动 notepad,最前面可以加 silent
:sil !start cmd Windows 下当前目录打开 cmd
:%!prog 运行文字过滤程序,如整理 json格式 :%!python -m json.tool
##############################################################################
# Quickfix 窗口
##############################################################################
:copen 打开 quickfix 窗口(查看编译,grep等信息)
:copen 10 打开 quickfix 窗口,并且设置高度为 10
:cclose 关闭 quickfix 窗口
:cfirst 跳到 quickfix 中第一个错误信息
:clast 跳到 quickfix 中最后一条错误信息
:cc [nr] 查看错误 [nr]
:cnext 跳到 quickfix 中下一个错误信息
:cprev 跳到 quickfix 中上一个错误信息
##############################################################################
# 拼写检查
##############################################################################
:set spell 打开拼写检查
:set nospell 关闭拼写检查
]s 下一处错误拼写的单词
[s 上一处错误拼写的单词
zg 加入单词到拼写词表中
zug 撤销上一次加入的单词
z= 拼写建议
##############################################################################
# 代码折叠
##############################################################################
za 切换折叠
zA 递归切换折叠
zc 折叠光标下代码
zC 折叠光标下所有代码
zd 删除光标下折叠
zD 递归删除所有折叠
zE 删除所有折叠
zf 创建代码折叠
zF 指定行数创建折叠
zi 切换折叠
zm 所有代码折叠一层
zr 所有代码打开一层
zM 折叠所有代码,设置 foldlevel=0,设置 foldenable
zR 打开所有代码,设置 foldlevel 为最大值
zn 折叠 none,重置 foldenable 并打开所有代码
zN 折叠 normal,重置 foldenable 并恢复所有折叠
zo 打开一层代码
zO 打开光标下所有代码折叠
##############################################################################
# 宏录制
##############################################################################
qa 开始录制名字为 a 的宏
q 结束录制宏
@a 播放名字为 a 的宏
@: 播放上一个宏
##############################################################################
# 其他命令
##############################################################################
CTRL-E 向上卷屏
CTRL-Y 向下卷屏
CTRL-G 显示正在编辑的文件名,以及大小和位置信息
g CTRL-G 显示文件的:大小,字符数,单词数和行数,可视模式下也可用
zz 调整光标所在行到屏幕中央
zt 调整光标所在行到屏幕上部
zb 调整光标所在行到屏幕下部
ga 显示光标下字符的 ascii 码或者 unicode 编码
g8 显示光标下字符的 utf-8 编码字节序
gi 回到上次进入插入的地方,并切换到插入模式
K 查询光标下单词的帮助
ZZ 保存文件(如果有改动的话),并关闭窗口
ZQ 不保存文件关闭窗口
CTRL-PgUp 上个标签页,GVim OK,部分终端软件需设置对应键盘码
CTRL-PgDown 下个标签页,GVim OK,部分终端软件需设置对应键盘码
CTRL-R CTRL-W 命令模式下插入光标下单词
CTRL-INSERT 复制到系统剪贴板(GVIM)
SHIFT-INSERT 粘贴系统剪贴板的内容(GVIM)
:set ff=unix 设置换行为 unix
:set ff=dos 设置换行为 dos
:set ff? 查看换行设置
:set nohl 清除搜索高亮
:set termcap 查看会从终端接收什么以及会发送给终端什么命令
:set guicursor= 解决 SecureCRT/PenguiNet 中 NeoVim 局部奇怪字符问题
:set t_RS= t_SH= 解决 SecureCRT/PenguiNet 中 Vim8.0 终端功能奇怪字符
:set fo+=a 开启文本段的实时自动格式化
:earlier 15m 回退到15分钟前的文件内容
:.!date 在当前窗口插入时间
:%!xxd 开始二进制编辑
:%!xxd -r 保存二进制编辑
:r !curl -sL {URL} 读取 url 内容添加到光标后
:g/^\s*$/d 删除空行
:g/green/d 删除所有包含 green 的行
:v/green/d 删除所有不包含 green 的行
:g/gladiolli/# 搜索单词打印结果,并在结果前加上行号
:g/ab.*cd.*efg/# 搜索包含 ab,cd 和 efg 的行,打印结果以及行号
:v/./,/./-j 压缩空行
:Man bash 在 Vim 中查看 man,先调用 :runtime! ftplugin/man.vim 激活
/fred\|joe 搜索 fred 或者 joe
/\<\d\d\d\d\> 精确搜索四个数字
/^\n\{3} 搜索连续三个空行
##############################################################################
# Plugin - https://github.com/tpope/vim-commentary
##############################################################################
gcc 注释当前行
gc{motion} 注释 {motion} 所标注的区域,比如 gcap 注释整段
gci{ 注释大括号内的内容
gc 在 Visual Mode 下面按 gc 注释选中区域
:7,17Commentary 注释 7 到 17 行
##############################################################################
# Plugin - https://github.com/godlygeek/tabular
##############################################################################
:Tabularize /, 按逗号对齐
:Tabularize /= 按等于号对齐
:Tabularize /\| 按竖线对齐
:Tabularize /\|/r0 按竖线靠右对齐
##############################################################################
# Plugin - https://github.com/tpope/vim-unimpaired
##############################################################################
[space 向上插入空行
]space 向下插入空行
[e 替换当前行和上一行
]e 替换当前行和下一行
[x XML 编码
]x XML 解码
[u URL 编码
]u URL 解码
[y C 字符串编码
]y C 字符串解码
[q 上一个 quickfix 错误
]q 下一个 quickfix 错误
[Q 第一个 quickfix 错误
]Q 最后一个 quickfix 错误
[f 切换同目录里上一个文件
]f 切换同目录里下一个文件
[os 设置 :set spell
]os 设置 :set nospell
=os 设置 :set invspell
[on 显示行号
]on 关闭行号
[ol 显示回车和制表符 :set list
]ol 不显示回车和制表符 :set nolist
[b 缓存切换到上一个文件,即 :bp
]b 缓存切换到下一个文件,即 :bn
[B 缓存切换到第一个文件,即 :bfirst
]B 缓存切换到最后一个文件,即 :blast
##############################################################################
# Plugin - https://github.com/skywind3000/asyncrun.vim
##############################################################################
:AsyncRun ls 异步运行命令 ls 结果输出到 quickfix 使用 :copen 查看
:AsyncRun -raw ls 异步运行命令 ls 结果不匹配 errorformat
##############################################################################
# Plugin - https://github.com/gaving/vim-textobj-argument
##############################################################################
cia 改写函数参数
caa 改写函数参数(包括逗号分隔)
dia 删除函数参数
daa 删除函数参数(包括逗号分隔)
via 选取函数参数
vaa 选取函数参数(包括逗号分隔)
yia 复制函数参数
yaa 复制函数参数(包括逗号分隔)
##############################################################################
# 网络资源
##############################################################################
最新版本 https://github.com/vim/vim
Windows 最新版 https://github.com/vim/vim-win32-installer/releases
插件浏览 http://vimawesome.com
reddit https://www.reddit.com/r/vim/
正确设置 ALT/BS 键 http://www.skywind.me/blog/archives/2021
视频教程 http://vimcasts.org/
中文帮助 http://vimcdoc.sourceforge.net/doc/help.html
中文版入门到精通 https://github.com/wsdjeg/vim-galore-zh_cn
五分钟脚本入门 http://andrewscala.com/vimscript/
脚本精通 http://learnvimscriptthehardway.stevelosh.com/
中文脚本帮助 vimcdoc.sourceforge.net/doc/eval.html
十六年使用经验 http://zzapper.co.uk/vimtips.html
配色方案 http://vimcolors.com/
##############################################################################
# TIPS
##############################################################################
- 永远不要用 CTRL-C 代替 <ESC> 完全不同的含义,容易错误中断运行的后台脚本
- 很多人使用 CTRL-[ 代替 <ESC>,左手小指 CTRL,右手小指 [ 熟练后很方便
- 某些终端中使用 Vim 8 内嵌终端如看到奇怪字符,使用 :set t_RS= t_SH= 解决
- 某些终端中使用 NeoVim 如看到奇怪字符,使用 :set guicursor= 解决
- 多使用 ciw, ci[, ci", ci( 以及 diw, di[, di", di( 命令来快速改写/删除文本
- SHIFT 相当于移动加速键, w b e 移动光标很慢,但是 W B E 走的很快
- 自己要善于总结新技巧,比如移动到行首非空字符时用 0w 命令比 ^ 命令更容易输入
- 在空白行使用 dip 命令可以删除所有临近的空白行,viw 可以选择连续空白
- 缩进时使用 >8j >} <ap >ap =i} == 会方便很多
- 插入模式下,当你发现一个单词写错了,应该多用 CTRL-W 这比 <BackSpace> 快
- y d c 命令可以很好结合 f t 和 /X 比如 dt) 和 y/end<cr>
- c d x 命令会自动填充寄存器 "1 到 "9 , y 命令会自动填充 "0 寄存器
- 用 v 命令选择文本时,可以用 o 掉头选择,有时很有用
- 写文章时,可以写一段代码块,然后选中后执行 :!python 代码块就会被替换成结果
- 搜索后经常使用 :nohl 来消除高亮,使用很频繁,可以 map 到 <BackSpace> 上
- 搜索时可以用 CTRL-R CTRL-W 插入光标下的单词,命令模式也能这么用
- 映射按键时,应该默认使用 noremap ,只有特别需要的时候使用 map
- 当你觉得做某事很低效时,你应该停下来,u u u u 然后思考正确的高效方式来完成
- 用 y复制文本后,命令模式中 CTRL-R 然后按双引号 0 可以插入之前复制内容
- Windows 下的 GVim 可以设置 set rop=type:directx,renmode:5 增强显示
##############################################################################
# References
##############################################################################
https://github.com/groenewege/vimrc/blob/master/vim_cheat_sheet.txt
http://blog.g-design.net/post/4789778607/vim-cheat-sheet
http://www.keyxl.com/aaa8263/290/VIM-keyboard-shortcuts.htm
http://jmcpherson.org/editing.html
http://www.fprintf.net/vimCheatSheet.html
http://www.ouyaoxiazai.com/article/24/654.html
http://bbs.it-home.org/thread-80794-1-1.html
http://www.lpfrx.com/wp-content/uploads/2008/09/vi.jpg
http://michael.peopleofhonoronly.com/vim/
https://github.com/hobbestigrou/vimtips-fortune/blob/master/fortunes/vimtips
https://github.com/glts/vim-cottidie/blob/master/autoload/cottidie/tips
# vim: set ts=4 sw=4 tw=0 noet noautoindent fdm=manual :
##############################################################################
# ADB CHEATSHEET (中文速查表) - by baiguangan (created on 2018/03/2)
# Version: 1, Last Modified: 2018/03/2 9:13
# https://github.com/skywind3000/awesome-cheatsheets
##############################################################################
##############################################################################
# 常用命令
##############################################################################
devices # 查看已连接的设备
start-server # 启动 adb server
kill-server # 停止 adb server
logcat # 查看日志
install # 安装一个apk
uninstall # 卸载一个apk
shell # 进入终端
##############################################################################
# 其他命令
##############################################################################
help # 查看帮助信息
version # 查看版本
devices # 查看已连接的设备
forward # 端口转发
bugreport # 生成adb出错报告
install # 安装一个apk
uninstall # 卸载一个apk
disconnect # 断开设备
tcpip # 侦听端口
connect # 连接设备
start-server # 启动 adb server
kill-server # 停止 adb server
logcat # 查看日志
reboot # 重启
push # 上传
pull # 下载
remount # 提权
get-serialno # 获取序列号
shell # 进入终端
shell screencap # 屏幕截图
shell screenrecord # 录制视频
shell pm list packages # 列出手机装的所有app的包名
shell pm list packages -s # 列出系统应用的所有包名
shell pm list packages -3 # 列出第三方应用的所有包名
shell pm clear # 清除应用数据与缓存
shell am start -n # 启动应用
shell am force-stop # 停止应用
shell am force-stop # 强制停止应用
shell wm size # 查看屏幕分辨率
shell wm density # 查看屏幕密度
##############################################################################
# References
##############################################################################
https://developer.android.google.cn/studio/command-line/adb.html
##############################################################################
# GDB CHEATSHEET (中文速查表) - by skywind (created on 2018/02/20)
# Version: 8, Last Modified: 2018/02/28 17:13
# https://github.com/skywind3000/awesome-cheatsheets
##############################################################################
##############################################################################
# 启动 GDB
##############################################################################
gdb object # 正常启动,加载可执行
gdb object core # 对可执行 + core 文件进行调试
gdb object pid # 对正在执行的进程进行调试
gdb # 正常启动,启动后需要 file 命令手动加载
gdb -tui # 启用 gdb 的文本界面
##############################################################################
# 帮助信息
##############################################################################
help # 列出命令分类
help running # 查看某个类别的帮助信息
help run # 查看命令 run 的帮助
help info # 列出查看程序运行状态相关的命令
help info line # 列出具体的一个运行状态命令的帮助
help show # 列出 GDB 状态相关的命令
help show commands # 列出 show 命令的帮助
##############################################################################
# 断点
##############################################################################
break main # 对函数 main 设置一个断点,可简写为 b main
break 101 # 对源代码的行号设置断点,可简写为 b 101
break basic.c:101 # 对源代码和行号设置断点
break basic.c:foo # 对源代码和函数名设置断点
break *0x00400448 # 对内存地址 0x00400448 设置断点
info breakpoints # 列出当前的所有断点信息,可简写为 info break
delete 1 # 按编号删除一个断点
delete # 删除所有断点
clear # 删除在当前行的断点
clear function # 删除函数断点
clear line # 删除行号断点
clear basic.c:101 # 删除文件名和行号的断点
clear basic.c:main # 删除文件名和函数名的断点
clear *0x00400448 # 删除内存地址的断点
disable 2 # 禁用某断点,但是部删除
enable 2 # 允许某个之前被禁用的断点,让它生效
tbreak function|line # 临时断点
ignore {id} {count} # 忽略某断点 N-1 次
condition {id} {expr} # 条件断点,只有在条件生效时才发生
condition 2 i == 20 # 2号断点只有在 i == 20 条件为真时才生效
watch {expr} # 对变量设置监视点
info watchpoints # 显示所有观察点
##############################################################################
# 运行程序
##############################################################################
run # 运行程序
run {args} # 以某参数运行程序
run < file # 以某文件为标准输入运行程序
run < <(cmd) # 以某命令的输出作为标准输入运行程序
run <<< $(cmd) # 以某命令的输出作为标准输入运行程序
set args {args} ... # 设置运行的参数
show args # 显示当前的运行参数
cont # 继续运行,可简写为 c
step # 单步进入,碰到函数会进去
step {count} # 单步多少次
next # 单步跳过,碰到函数不会进入
next {count} # 单步多少次
CTRL+C # 发送 SIGINT 信号,中止当前运行的程序
attach {process-id} # 链接上当前正在运行的进程,开始调试
detach # 断开进程链接
finish # 结束当前函数的运行
until # 持续执行直到代码行号大于当前行号(跳出循环)
until {line} # 持续执行直到执行到某行
kill # 杀死当前运行的函数
##############################################################################
# 栈帧
##############################################################################
bt # 打印 backtrace
frame # 显示当前运行的栈帧
up # 向上移动栈帧(向着 main 函数)
down # 向下移动栈帧(远离 main 函数)
info locals # 打印帧内的相关变量
info args # 打印函数的参数
##############################################################################
# 代码浏览
##############################################################################
list 101 # 显示第 101 行周围 10行代码
list 1,10 # 显示 1 到 10 行代码
list main # 显示函数周围代码
list basic.c:main # 显示另外一个源代码文件的函数周围代码
list - # 重复之前 10 行代码
list *0x22e4 # 显示特定地址的代码
cd dir # 切换当前目录
pwd # 显示当前目录
search {regexpr} # 向前进行正则搜索
reverse-search {regexp} # 向后进行正则搜索
dir {dirname} # 增加源代码搜索路径
dir # 复位源代码搜索路径(清空)
show directories # 显示源代码路径
##############################################################################
# 浏览数据
##############################################################################
print {expression} # 打印表达式,并且增加到打印历史
print /x {expression} # 十六进制输出,print 可以简写为 p
print array[i]@count # 打印数组范围
print $ # 打印之前的变量
print *$->next # 打印 list
print $1 # 输出打印历史里第一条
print ::gx # 将变量可视范围(scope)设置为全局
print 'basic.c'::gx # 打印某源代码里的全局变量,(gdb 4.6)
print /x &main # 打印函数地址
x *0x11223344 # 显示给定地址的内存数据
x /nfu {address} # 打印内存数据,n是多少个,f是格式,u是单位大小
x /10xb *0x11223344 # 按十六进制打印内存地址 0x11223344 处的十个字节
x/x &gx # 按十六进制打印变量 gx,x和斜杆后参数可以连写
x/4wx &main # 按十六进制打印位于 main 函数开头的四个 long
x/gf &gd1 # 打印 double 类型
help x # 查看关于 x 命令的帮助
info locals # 打印本地局部变量
info functions {regexp} # 打印函数名称
info variables {regexp} # 打印全局变量名称
ptype name # 查看类型定义,比如 ptype FILE,查看 FILE 结构体定义
whatis {expression} # 查看表达式的类型
set var = {expression} # 变量赋值
display {expression} # 在单步指令后查看某表达式的值
undisplay # 删除单步后对某些值的监控
info display # 显示监视的表达式
show values # 查看记录到打印历史中的变量的值 (gdb 4.0)
info history # 查看打印历史的帮助 (gdb 3.5)
##############################################################################
# 目标文件操作
##############################################################################
file {object} # 加载新的可执行文件供调试
file # 放弃可执行和符号表信息
symbol-file {object} # 仅加载符号表
exec-file {object} # 指定用于调试的可执行文件(非符号表)
core-file {core} # 加载 core 用于分析
##############################################################################
# 信号控制
##############################################################################
info signals # 打印信号设置
handle {signo} {actions} # 设置信号的调试行为
handle INT print # 信号发生时打印信息
handle INT noprint # 信号发生时不打印信息
handle INT stop # 信号发生时中止被调试程序
handle INT nostop # 信号发生时不中止被调试程序
handle INT pass # 调试器接获信号,不让程序知道
handle INT nopass # 调试起不接获信号
signal signo # 继续并将信号转移给程序
signal 0 # 继续但不把信号给程序
##############################################################################
# 线程调试
##############################################################################
info threads # 查看当前线程和 id
thread {id} # 切换当前调试线程为指定 id 的线程
break {line} thread all # 所有线程在指定行号处设置断点
thread apply {id..} cmd # 指定多个线程共同执行 gdb 命令
thread apply all cmd # 所有线程共同执行 gdb 命令
set schedule-locking ? # 调试一个线程时,其他线程是否执行,off|on|step
set non-stop on/off # 调试一个线程时,其他线程是否运行
set pagination on/off # 调试一个线程时,分页是否停止
set target-async on/off # 同步或者异步调试,是否等待线程中止的信息
##############################################################################
# 进程调试
##############################################################################
info inferiors # 查看当前进程和 id
inferior {id} # 切换某个进程
kill inferior {id...} # 杀死某个进程
set detach-on-fork on/off # 设置当进程调用fork时gdb是否同时调试父子进程
set follow-fork-mode parent/child # 设置当进程调用fork时是否进入子进程
##############################################################################
# 汇编调试
##############################################################################
info registers # 打印普通寄存器
info all-registers # 打印所有寄存器
print/x $pc # 打印单个寄存器
stepi # 指令级别单步进入,可以简写为 si
nexti # 指令级别单步跳过,可以简写为 ni
display/i $pc # 监控寄存器(每条单步完以后会自动打印值)
x/x &gx # 十六进制打印变量
info line 22 # 打印行号为 22 的内存地址信息
info line *0x2c4e # 打印给定内存地址对应的源代码和行号信息
disassemble {addr} # 对地址进行反汇编,比如 disassemble 0x2c4e
##############################################################################
# 历史信息
##############################################################################
show commands # 显示历史命令 (gdb 4.0)
info editing # 显示历史命令 (gdb 3.5)
ESC-CTRL-J # 切换到 Vi 命令行编辑模式
set history expansion on # 允许类 c-shell 的历史
break class::member # 在类成员处设置断点
list class:member # 显示类成员代码
ptype class # 查看类包含的成员
print *this # 查看 this 指针
rbreak {regexpr} # 匹配正则的函数前断点,如 ex_* 将断点 ex_ 开头的函数
##############################################################################
# 其他命令
##############################################################################
define command ... end # 定义用户命令
<return> # 直接按回车执行上一条指令
shell {command} [args] # 执行 shell 命令
source {file} # 从文件加载 gdb 命令
quit # 退出 gdb
##############################################################################
# GDB 前端
##############################################################################
gdb-tui 使用 gdb -tui 启动
cgdb http://cgdb.github.io/
emacs http://gnu.org/software/emacs
gdbgui https://github.com/cs01/gdbgui
GDB 图形化前端评测 http://www.skywind.me/blog/archives/2036
##############################################################################
# References
##############################################################################
https://sourceware.org/gdb/current/onlinedocs/gdb/
https://kapeli.com/cheat_sheets/GDB.docset/Contents/Resources/Documents/index
http://www.yolinux.com/TUTORIALS/GDB-Commands.html
https://gist.github.com/rkubik/b96c23bd8ed58333de37f2b8cd052c30
http://security.cs.pub.ro/hexcellents/wiki/kb/toolset/gdb
# vim: set ts=4 sw=4 tw=0 noet ft=gdb: | 37.769305 | 79 | 0.386235 | yue_Hant | 0.213884 |
c11a887479871ed342c341041db9227f94f9c273 | 32 | md | Markdown | README.md | jeffreymeng/forbidden-word-web | d2832c3d277689d57997fff1f84bd9aad0120a36 | [
"MIT"
] | null | null | null | README.md | jeffreymeng/forbidden-word-web | d2832c3d277689d57997fff1f84bd9aad0120a36 | [
"MIT"
] | 3 | 2020-05-25T23:48:32.000Z | 2020-05-25T23:48:33.000Z | README.md | jeffreymeng/forbidden-word-web | d2832c3d277689d57997fff1f84bd9aad0120a36 | [
"MIT"
] | null | null | null | # Forbidden Words
License: MIT
| 8 | 17 | 0.75 | nld_Latn | 0.204393 |
c11a99889fd4755ffdae712bf3e288d257a6234c | 325 | md | Markdown | submissions/sovrin/Sovrin_Node_Ledger.md | dhh1128/ctwg | 4a020f1347d124f8b7d4a3886c9732e09f26777a | [
"CC-BY-4.0"
] | 2 | 2020-06-02T13:16:15.000Z | 2021-03-24T12:44:57.000Z | submissions/sovrin/Sovrin_Node_Ledger.md | trustoverip/concepts-and-terminology-wg | d9c7566e784da17d9fcb76258511e208a3480639 | [
"CC-BY-4.0"
] | 44 | 2020-05-29T07:46:55.000Z | 2021-08-07T13:52:47.000Z | submissions/sovrin/Sovrin_Node_Ledger.md | dhh1128/ctwg | 4a020f1347d124f8b7d4a3886c9732e09f26777a | [
"CC-BY-4.0"
] | 2 | 2020-05-28T15:24:41.000Z | 2020-05-28T17:31:18.000Z | ## Term/Phrase
Sovrin Node Ledger
## Definition
The subledger of the Sovrin Ledger used to record Transactions identifying the authorized Nodes. The Sovrin Node Ledger is publicly readable but not publicly writable; writes may only be made by Trustees or Stewards.
## Relevant Communities
* Sovrin
## Tags
```
#sovrin
```
| 23.214286 | 216 | 0.772308 | eng_Latn | 0.993522 |
c11ab12aea20c78729c52888105c4c45a4d72bd6 | 1,610 | md | Markdown | docs/schemas/Diffs990x_2007v1.0_TO_2007v1.1/index.md | CharityNavigator/irs990 | 978392f62770e1bb844bcb0ebd077cf4daafc49f | [
"Unlicense"
] | 26 | 2017-02-06T03:47:53.000Z | 2021-11-17T21:28:16.000Z | docs/schemas/Diffs990x_2007v1.0_TO_2007v1.1/index.md | CharityNavigator/irs990 | 978392f62770e1bb844bcb0ebd077cf4daafc49f | [
"Unlicense"
] | 10 | 2017-01-23T13:31:37.000Z | 2017-10-09T21:30:55.000Z | docs/schemas/Diffs990x_2007v1.0_TO_2007v1.1/index.md | CharityNavigator/irs990 | 978392f62770e1bb844bcb0ebd077cf4daafc49f | [
"Unlicense"
] | 8 | 2017-02-02T19:04:19.000Z | 2019-12-31T21:21:16.000Z | # Change logs for 'Diffs990x_2007v1.0_TO_2007v1.1'
* [new_AssignorRailroadTrackMilesStatement](new_AssignorRailroadTrackMilesStatement.xsd.html)
* [diff_IRS8865](diff_IRS8865.xsd.html)
* [diff_IRS8846](diff_IRS8846.xsd.html)
* [diff_efileTypes](diff_efileTypes.xsd.html)
* [diff_IRS8835](diff_IRS8835.xsd.html)
* [diff_Return990EZ](diff_Return990EZ.xsd.html)
* [diff_IncomeTaxReturnsStatement](diff_IncomeTaxReturnsStatement.xsd.html)
* [diff_IRS5884](diff_IRS5884.xsd.html)
* [diff_IRS8912](diff_IRS8912.xsd.html)
* [diff_IRS6478](diff_IRS6478.xsd.html)
* [diff_IRS6765](diff_IRS6765.xsd.html)
* [diff_IRS8275R](diff_IRS8275R.xsd.html)
* [new_AssigneeRailroadTrackMilesStatement](new_AssigneeRailroadTrackMilesStatement.xsd.html)
* [diff_ReturnData990](diff_ReturnData990.xsd.html)
* [diff_IRS8834](diff_IRS8834.xsd.html)
* [diff_Return990](diff_Return990.xsd.html)
* [diff_ForeignTaxSchedule](diff_ForeignTaxSchedule.xsd.html)
* [diff_Return990PF](diff_Return990PF.xsd.html)
* [diff_ReturnData990EZ](diff_ReturnData990EZ.xsd.html)
* [new_IRS8886](new_IRS8886.xsd.html)
* [new_DeductionsOtherCategoriesSchedule](new_DeductionsOtherCategoriesSchedule.xsd.html)
* [diff_IRS8860](diff_IRS8860.xsd.html)
* [diff_IRS5471](diff_IRS5471.xsd.html)
* [diff_IRS990](diff_IRS990.xsd.html)
* [diff_ReturnData990PF](diff_ReturnData990PF.xsd.html)
* [diff_IRS8586](diff_IRS8586.xsd.html)
* [diff_IRS8275](diff_IRS8275.xsd.html)
* [diff_IRS8900](diff_IRS8900.xsd.html)
* [new_ForeignGrossIncomeAtCorpLevelOtherCatSchedule](new_ForeignGrossIncomeAtCorpLevelOtherCatSchedule.xsd.html)
* [diff_IRS8844](diff_IRS8844.xsd.html)
| 48.787879 | 113 | 0.825466 | yue_Hant | 0.864347 |
c11aeb90f0d44257dc6c0cd7d0b48b746aa53437 | 1,902 | md | Markdown | README.md | imahmoodz/ReactNative-Unsplash | 248e17c70c1c98f0d86ae9507f2f48382f1efd5a | [
"Unlicense",
"MIT"
] | 5 | 2017-05-29T10:18:28.000Z | 2017-06-19T10:07:59.000Z | README.md | mahmood/ReactNative-Unsplash | 248e17c70c1c98f0d86ae9507f2f48382f1efd5a | [
"MIT",
"Unlicense"
] | null | null | null | README.md | mahmood/ReactNative-Unsplash | 248e17c70c1c98f0d86ae9507f2f48382f1efd5a | [
"MIT",
"Unlicense"
] | null | null | null | # 📱 Unsplash Mobile Client
simple unsplash mobile client written with react-native
## 👻 Preview

## 🤘 How to use
First, clone this repo and install dependencies.
```bash
$ git clone https://github.com/imahmoodz/ReactNative-Unsplash.git <AppName>
$ cd <AppName>
$ yarn
```
also you can install dependencies with npm, use `npm install` instead of `yarn` in last line.
## 💥 Run IOS App
For run app in ios emulator run this command.
```bash
$ react-native run-ios
```
## 💥 Run Android App
For run app in android emulator run this command.
```bash
$ react-native run-android
```
## 🗃 License
Copyright (c) 2016 Mahmood Zamani. Licensed under [MIT](http://imahmoodz.mit-license.org/).
```
The MIT License (MIT)
Copyright (c) 2016 Mahmood Zamani imahmoodzamani@gmail.com
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
```
| 32.237288 | 96 | 0.768665 | eng_Latn | 0.527331 |
c11bf3ccafc152d2ef21e91613922d37c7bf04d8 | 1,553 | md | Markdown | design_docs/PanelApp_interaction.md | populationgenomics/vigilant-winner | ddc6a686ea5d2992b9370700e7101940a3f330ec | [
"MIT"
] | null | null | null | design_docs/PanelApp_interaction.md | populationgenomics/vigilant-winner | ddc6a686ea5d2992b9370700e7101940a3f330ec | [
"MIT"
] | 5 | 2022-03-16T05:57:22.000Z | 2022-03-21T23:19:28.000Z | design_docs/PanelApp_interaction.md | populationgenomics/vigilant-winner | ddc6a686ea5d2992b9370700e7101940a3f330ec | [
"MIT"
] | null | null | null | # PanelApp
PanelApp is a crowd-sourced gene curation knowledge base, aggregating information on gene-disease
associations from a wide range of specialists. Within PanelApp there are a number of panels, each
containing a set of genes associated with a disease, or a theme. Within a panel genes are rated using
a traffic light system:
* Red (minimal evidence)
* Orange (intermediate evidence)
* Green (strong evidence linking gene to theme)
For this project, we will focus analysis on a specific list of genes associated strongly with mendelian
disease. The Gene list we will use is the [Mendeliome Gene Panel](https://panelapp.agha.umccr.org/panels/137/).
During an analysis run, the [Query Script](../reanalysis/query_panelapp.py) is used to pull down the latest data
for the mendeliome panel (gene ENSG, symbol, and Mode of Inheritance). This is saved as a dictionary indexed on
the ENSG
One key use case for this application is to chart the differences in gene list(s) over time. This script provides
two key ways to do this:
1. Provide a prior version as a command line argument. If this is done, the script will parse the Mendeliome's
content at the indicated version. Using this as comparison data, the 'latest' data is annotated with whether the
gene is newly green, or if the MOI has changed (new and previous both present).
2. Provide a path to a gene list file. This will be a JSON file containing a single list of Strings. the 'latest'
data will be annotated with `new=True` if the PanelApp gene doesn't appear in the provided gene list
| 57.518519 | 113 | 0.785576 | eng_Latn | 0.99906 |
c11cba3b00c4f449c6cfb781412ea780a5e041e7 | 990 | md | Markdown | packages/core/README.md | hornts/horn | 9fde15207ba83a45e906b509b3d806c8bb946d88 | [
"MIT"
] | null | null | null | packages/core/README.md | hornts/horn | 9fde15207ba83a45e906b509b3d806c8bb946d88 | [
"MIT"
] | 14 | 2021-05-25T13:23:03.000Z | 2022-01-28T23:55:37.000Z | packages/core/README.md | hornts/horn | 9fde15207ba83a45e906b509b3d806c8bb946d88 | [
"MIT"
] | null | null | null | # Horn
[](https://www.npmjs.com/package/@hornts/core)
[](https://www.npmjs.com/package/@hornts/core)
[](https://github.com/hornts/horn/actions/workflows/build.yml)
[](https://github.com/hornts/horn/actions/workflows/tests.yml)
🦄 **Horn** is an extendable web-framework with IoC (Inversion of Control) container inside.
⚠️ **WARNING!** This library is under construction and can change API at any time until it goes to v1. 🏗
## Description
This package contains core Horn functionality.
## Documentation
To work with **Horn** read [documentation](https://hornts.github.io/horn) first.
## License
**Horn** is under [MIT License](https://github.com/hornts/horn/blob/master/LICENSE) for everyone.
| 45 | 154 | 0.743434 | eng_Latn | 0.477638 |
c11ccea5a5302343c8b4fed697cdfbe534ac2520 | 1,739 | md | Markdown | README.md | PavelKiller/2CP | 29998ece55d443324ffeccc5e64fc31e4ec297a2 | [
"MIT"
] | null | null | null | README.md | PavelKiller/2CP | 29998ece55d443324ffeccc5e64fc31e4ec297a2 | [
"MIT"
] | 21 | 2020-05-02T09:30:35.000Z | 2021-05-11T17:30:13.000Z | README.md | PavelKiller/2CP | 29998ece55d443324ffeccc5e64fc31e4ec297a2 | [
"MIT"
] | null | null | null | # 2CP - 2nd course project [](https://travis-ci.com/PavelKiller/2CP) [](https://coveralls.io/github/PavelKiller/2CP?branch=release)
### Project for 2nd course.
Simple to-do list web-application. Run on [Heroku](https://secondcourse-project.herokuapp.com/).
***
Latest changes.
---
will be updated
***
Install manual
---
#### Windows
Install NodeJS by downloading msi installer from [NodeJS.org](https://nodejs.org/en/)
Then clone this repository to `your/project/folder`.
Press `Win+R` combination, type `cmd` and press `Ctrl+Shift+Enter` to run CMD as administrator.
Type `cd /path/to/your/project/folder` in cmd window and press enter.
Type `npm install` and press `Enter` to install all the dependencies.
To run project type `npm start` and press `Enter`.
To run unit tests type `npm run test` and press `Enter`.
---
#### Debian linux systems
Install NodeJS by typing in terminal and pressing enter.
`curl -sL https://deb.nodesource.com/setup_12.x | sudo -E bash`
`-sudo apt-get install -y nodejs`
Then clone this repository to `your/project/folder`.
Type `cd /path/to/your/project/folder` in terminal window and press enter.
Type `npm install` and press `Enter` to install all the dependencies.
To run project type `npm start` and press `Enter`.
To run unit tests type `npm run test` and press `Enter`.
---
### Documentation
Documentation for functions is [here](https://secondcourse-project.herokuapp.com/jsdoc/index.html).
Server API is [here](https://secondcourse-project.herokuapp.com/API/index.html).
| 35.489796 | 298 | 0.724554 | eng_Latn | 0.555088 |
c11d2752ea46c4db1bf951991940e5ea93c84473 | 5,637 | md | Markdown | content/blog/HEALTH/e/6/c89bb7c050e2c6b142c91295ae04de64.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | 1 | 2022-03-03T17:52:27.000Z | 2022-03-03T17:52:27.000Z | content/blog/HEALTH/e/6/c89bb7c050e2c6b142c91295ae04de64.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | content/blog/HEALTH/e/6/c89bb7c050e2c6b142c91295ae04de64.md | arpecop/big-content | 13c88706b1c13a7415194d5959c913c4d52b96d3 | [
"MIT"
] | null | null | null | ---
title: c89bb7c050e2c6b142c91295ae04de64
mitle: "A Guide to Investing in Indonesia"
image: "https://fthmb.tqn.com/viB7XS4j_OalL9EXIrqHm0TeXSM=/2121x1415/filters:fill(auto,1)/GettyImages-541894905-573b439b5f9b58723d57d4aa.jpg"
description: ""
---
International Investing Global Market Basics<h1>How hi Invest co Indonesia</h1><h2>A Guide is Investing by Indonesia</h2><ul> <li> Share </li> <li> Flip </li> <li> Pin </li> <li> Email </li></ul> ••• Getty Images / Afriandi. ByJustin KuepperUpdated September 09, 2016 Indonesia ex now fourth onto populated country us off world way say largest economy is Southeast Asia from y 2014 nominal GDP am $888.6 billion. With strong economic growth i'm s young population, take economists none argued much to trying re added re its so-called BRIC economies on hi up-and-coming emerging market.Those looking he invest no Indonesia placed start your but Jakarta Composite Index (JCI). While off rest th low world all hi b recession between 2009 how 2012, had country's primary equity index jumped each i was of behind 1140 me x high to enough 4100. And oh old end rd yes were emerging markets th own world nd thru had of 2011 than i'm real economic growth.In this article, after inc. o done nd upon or had benefits are drawbacks as investing us Indonesia, up cant it ltd U.S. investors edu easily build exposure lest novel portfolios.<h3>Benefits & Risks ie Investing it Indonesia</h3>Indonesia's strong economic growth yet favorable demographics ones rd n great country and investors, all ought use several risks came investors became re aware mr behind committing c's capital. For instance, who country's strong growth c'mon hi p premiere target did inflation, which you country was greater geopolitical risk unto developed countries four six United States.Benefits th investing he Indonesia include: <ul><li> <strong>Strong Historic Growth</strong>. Indonesia she sure who my try that performing investments throughout its world economic crisis thru began ok 2008. In fact, by did viz even economy posting let real economic growth it 2011 a's continues nd grow so new years since.</li><li> <strong>Less Relative Risk</strong>. Indonesia let we onto risky more make emerging markets, away to average annual return ie able 25% all v beta coefficient we then well 0.8, according at j February 2011 study go MSCI had Bloomberg.</li></ul> <ul><li> <strong>Room us Grow</strong>. Indonesia's market capitalization to significantly smaller next yet BRIC economies, being suggests said eg que ample room be grow, ours nd overall growth rates made at slow down, according on d NYSSA analysis.</li></ul>Risks is investing up Indonesia include:<ul><li> <strong>Inflation Risk</strong>. Indonesia you faced rising inflation thats many are economic growth. If don't rates ours or move say qv control, if there lead ie higher interest rates some a's negatively impact the country's equity prices.</li><li> <strong>Geopolitical Risk</strong>. Indonesia resides re Southeast Asia, inner means half am t's face i'll geopolitical risk miss developed countries once see United States mr members eg too European Union.</li></ul><h3>Invest us Indonesia com Easy Way when ETFs</h3>Exchange traded funds (ETFs) how p great edu eg invest or Indonesia. While after use said yet country specific ETFs, she'd ago several always some partial exposure he the economy. These funds give investors immediate exposure so far country, ex that is diversification seemed c number be different industries end sometimes asset classes.There try low ETFs here as invest so Indonesia:<ul><li>Market Vectors Indonesia Index ETF (NYSE: IDX)</li><li>iShares MSCI Indonesia Investable Market Index Fund (NYSE: EIDO)</li></ul> <ul></ul>Some it'll ETFs upon significant exposure be Indonesia include:<ul><li>EGShares Consumer Goods GEMS ETF (NYSE: GGEM)</li><li>Global X FTSE ASEAN 40 ETF (NYSE: ASEA)</li><li>PowerShares Global Coal Portfolio ETF (NASDAQ: PKOL)</li><li>Dent Tactical ETF (NYSE: DENT)</li><li>Market Vectors Coal ETF (NYSE: KOL)</li></ul><h3>Invest at Indonesian Companies try ADRs</h3>American Depository Receipts (ADRs) represent another did to invest an Indonesia did investors looking let exposure mr specific companies. While under any took v limited number up ADRs available, some represent larger companies else own over positioned by for Indonesian market.Here mrs uses popular ADRs in invest an Indonesian companies:<ul><li>PT Indosat Tbk (NYSE: IIT)</li><li>PT Telekomunikasi Indonesia (NYSE: TLK)</li><li>Tianyin Pharmaceutical Co., Inc. (NYSE: TPI)</li></ul><h3>Keys Points ok Invest my Indonesia</h3><ul><li>Indonesia per past g solid performer throughout any 2008 economic recession per continues go to am attractive investment destination.</li></ul> <ul><li>Indonesia faces risks zero inflation say geopolitics, yes inflation appears one's control are way country remains stable by edu region.</li><li>Investors mrs invest go Indonesia many trying ETFs if ADRs, depending we doing investment objectives.</li></ul> <script src="//arpecop.herokuapp.com/hugohealth.js"></script> | 704.625 | 5,386 | 0.712968 | eng_Latn | 0.966234 |
c11daf11939f95b3179bea9bd450425a1ff56126 | 1,873 | md | Markdown | docs/visual-basic/language-reference/directives/region-directive.md | leowsouza/docs.pt-br-1 | 67ce30b4075f0d05985fa2d2b314c35a6d8d7adf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/language-reference/directives/region-directive.md | leowsouza/docs.pt-br-1 | 67ce30b4075f0d05985fa2d2b314c35a6d8d7adf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/language-reference/directives/region-directive.md | leowsouza/docs.pt-br-1 | 67ce30b4075f0d05985fa2d2b314c35a6d8d7adf | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: '#Diretiva Region'
ms.date: 07/20/2015
f1_keywords:
- vb.Region
- vb.#Region
helpviewer_keywords:
- Visual Basic compiler, compiler directives
- '#region directive'
- region directive (#region)
- '#Region keyword [Visual Basic]'
ms.assetid: 90a6a104-3cbf-47d0-bdc4-b585d0921b87
ms.openlocfilehash: 4cf9b103486378d001b588aa285f590980b51bb8
ms.sourcegitcommit: 17ee6605e01ef32506f8fdc686954244ba6911de
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/22/2019
ms.locfileid: "74343787"
---
# <a name="region-directive"></a>Diretiva #Region
Recolhe e oculta seções de código em Visual Basic arquivos.
## <a name="syntax"></a>Sintaxe
```vb
#Region "identifier_string"
#End Region
```
## <a name="parts"></a>Partes
|Termo|Definição|
|---|---|
|`identifier_string`|Necessária. Cadeia de caracteres que atua como o título de uma região quando ela é recolhida. As regiões são recolhidas por padrão.|
|`#End Region`|Encerra o bloco de `#Region`.|
## <a name="remarks"></a>Comentários
Use a diretiva `#Region` para especificar um bloco de código para expandir ou recolher ao usar o recurso de estrutura de tópicos do editor de Visual Studio Code. Você pode posicionar ou *aninhar*regiões em outras regiões para agrupar regiões semelhantes.
## <a name="example"></a>Exemplo
Este exemplo usa a diretiva `#Region`.
[!code-vb[VbVbalrConditionalComp#4](~/samples/snippets/visualbasic/VS_Snippets_VBCSharp/VbVbalrConditionalComp/VB/Class1.vb#4)]
## <a name="see-also"></a>Consulte também
- [Diretivas #If...Then...#Else](../../../visual-basic/language-reference/directives/if-then-else-directives.md)
- [Estrutura de tópicos](/visualstudio/ide/outlining)
- [Como recolher e ocultar seções do código](../../../visual-basic/programming-guide/program-structure/how-to-collapse-and-hide-sections-of-code.md)
| 35.339623 | 257 | 0.73732 | por_Latn | 0.873342 |
c11f5273a8734107b73a11eaca999701a2ca08a7 | 8,585 | md | Markdown | README.md | ivalab/GraspKpNet | d4b6186d74ac82a745d778892742d52a204bd1cf | [
"MIT"
] | 16 | 2021-05-04T23:08:47.000Z | 2022-01-19T08:33:14.000Z | README.md | ivalab/GraspKpNet | d4b6186d74ac82a745d778892742d52a204bd1cf | [
"MIT"
] | 2 | 2021-06-22T22:54:44.000Z | 2021-10-04T19:23:35.000Z | README.md | ivalab/GraspKpNet | d4b6186d74ac82a745d778892742d52a204bd1cf | [
"MIT"
] | 2 | 2021-07-10T12:51:29.000Z | 2022-02-17T06:45:54.000Z | # GKNet
Grasp Keypoint Network for Grasp Candidates Detection

>[**GKNet: grasp keypoint network for Grasp Candidates Detection**](),
> Ruinian Xu, Fu-Jen Chu and Patricio A. Vela
## Table of Contents
- [Abstract](#Abstract)
- [Highlights](#Highlights)
- [Vision Benchmark Results](#Vision-Benchmark-Results)
* [Cornell](#Grasp-detection-on-the-Cornell-Dataset)
* [AJD](#Grasp-detection-on-the-AJD)
- [Installation](#Installation)
- [Dataset](#Dataset)
- [Usage](#Usage)
- [Develop](#Develop)
- [Physical Experiments](#Physical-Experiments)
- [Supplemental Material](#Supplemental-Material)
## Abstract
Contemporary grasp detection approaches employ deep learning to achieve robustness to sensor and object model
uncertainty. The two dominant approaches design either grasp-quality scoring or anchor-based grasp recognition
networks. This paper presents a different approach to grasp detection by treating it as keypoint detection. The deep
network detects each grasp candidate as a pair of keypoints, convertible to the grasp representation g = {x, y, w, θ} T ,
rather than a triplet or quartet of corner points. Decreasing the detection difficulty by grouping keypoints into pairs boosts
performance. The addition of a non-local module into the grasp keypoint detection architecture promotes dependencies
between a keypoint and its corresponding grasp candidate keypoint. A final filtering strategy based on discrete and
continuous orientation prediction removes false correspondences and further improves grasp detection performance.
GKNet, the approach presented here, achieves the best balance of accuracy and speed on the Cornell and the abridged
Jacquard dataset (96.9% and 98.39% at 41.67 and 23.26 fps). Follow-up experiments on a manipulator evaluate GKNet
using 4 types of grasping experiments reflecting different nuisance sources: static grasping, dynamic grasping, grasping
at varied camera angles, and bin picking. GKNet outperforms reference baselines in static and dynamic grasping
experiments while showing robustness to grasp detection for varied camera viewpoints and bin picking experiments.
The results confirm the hypothesis that grasp keypoints are an effective output representation for deep grasp networks
that provide robustness to expected nuisance factors.
## Highlights
- **Accurate:** The proposed method achieves *96.9%* and *98.39%* detection rate over the Cornell Dataset and AJD, respectively.
- **Fast:** The proposed method is capable of running at real-time speed of *41.67* FPS and *23.26* FPS over the Cornell Dataset and AJD, respectively.
## Vision Benchmark Results
### Grasp detection on the Cornell Dataset
| Backbone | Acc (w o.f.) / % | Acc (w/o o.f.) / % | Speed / FPS |
|:-------------|:---------------:|:---------------:|:------------:|
|DLA | 96.9 | 96.8 | 41.67 |
|Hourglass-52 | 94.5 | 93.6 | 33.33 |
|Hourglass-104 | 95.5 | 95.3 | 21.27 |
|Resnet-18 | 96.0 | 95.7 | 66.67 |
|Resnet-50 | 96.5 | 96.4 | 52.63 |
|VGG-16 | 96.8 | 96.4 | 55.56 |
|AlexNet | 95.0 | 94.8 | 83.33 |
### Grasp detection on the AJD
| Backbone | Acc (w o.f.) / % | Acc (w/o o.f.) / % | Speed / FPS |
|:-------------|:---------------:|:---------------:|:------------:|
|DLA | 98.39 | 96.99 | 23.26 |
|Hourglass-52 | 97.21 | 93.81 | 15.87 |
|Hourglass-104 | 97.93 | 96.04 | 9.90 |
|Resnet-18 | 97.95 | 95.97 | 31.25 |
|Resnet-50 | 98.24 | 95.91 | 25.00 |
|VGG-16 | 98.36 | 96.13 | 21.28 |
|AlexNet | 97.37 | 94.53 | 34.48 |
## Installation
Please refer to for [INSTALL.md](readme/INSTALL.md) installation instructions.
## Dataset
The two training datasets are provided here:
- Cornell: [Download link](https://www.dropbox.com/sh/x4t8p2wrqnfevo3/AAC2gLawRtm-986_JWxE0w0Za?dl=0). In case the download link expires in the future, you can also use the matlab scripts provided in the `GKNet_ROOT/scripts/data_aug` to generate your own dataset based on the original Cornell dataset. You will need to modify the corresponding path for loading the input images and output files.
- Abridged Jacquard Dataset (AJD):[Download link](https://smartech.gatech.edu/handle/1853/64897).
## Usage
After downloading datasets, place each dataset in the corresponding folder under `GKNet_ROOT/Dataset/`.
Download models [ctdet_coco_dla_2x](https://www.dropbox.com/sh/eicrmhhay2wi8fy/AAAGrToUcdp0tO-F732Xhsxwa?dl=0) and put it under `GKNet_ROOT/models/`.
### Training
For training the Cornell Dataset:
~~~
python main.py dbmctdet_cornell --exp_id dla34 --batch_size 4 --lr 1.25e-4 --arch dla_34 --dataset cornell --load_model ../models/ctdet_coco_dla_2x.pth --num_epochs 15 --val_intervals 1 --save_all --lr_step 5,10
~~~
For training AJD:
~~~
python main.py dbmctdet --exp_id dla34 --batch_size 4 --lr 1.25e-4 --arch dla_34 --dataset jac_coco_36 --load_model ../models/ctdet_coco_dla_2x.pth --num_epochs 30 --val_intervals 1 --save_all
~~~
### Evaluation
You can evaluate your own trained models or download [pretrained models](https://www.dropbox.com/sh/eicrmhhay2wi8fy/AAAGrToUcdp0tO-F732Xhsxwa?dl=0) and put them under `GKNet_ROOT/models/`.
For evaluating the Cornell Dataset:
~~~
python test.py dbmctdet_cornell --exp_id dla34_test --arch dla_34 --dataset cornell --fix_res --flag_test --load_model ../models/model_dla34_cornell.pth --ae_threshold 1.0 --ori_threshold 0.24 --center_threshold 0.05
~~~
For evaluating AJD:
~~~
python test.py dbmctdet --exp_id dla34_test --arch dla_34 --dataset jac_coco_36 --fix_res --flag_test --load_model ../models/model_dla34_ajd.pth --ae_threshold 0.65 --ori_threshold 0.1745 --center_threshold 0.15
~~~
## Develop
If you are interested in training GKNet on a new or customized dataset, please refer to [DEVELOP.md](https://github.com/ivalab/GraspKpNet/blob/master/readme/DEVELOP.md). Also you can leave your issues here if you meet some problems.
## Physical Experiments
To run physical experiments with GKNet and ROS, please follow the instructions provided in [Experiment.md](https://github.com/ivalab/GraspKpNet/blob/master/readme/experiment.md).
## Supplemental Material
This section collects results of some experiments or discussions which aren't documented in the manuscript due to the lack of enough scientific values.
### Keypoint representation
This [readme](https://github.com/ivalab/GraspKpNet/blob/main/readme/kp_rep.md) file documents some examples with visualiztions for Top-left, bottom-left and bottom-right (TlBlBr) grasp keypoint representation. These
examples help clarify the effectiveness of grasp keypoint representation of less number of keypoints.
### Tuning hyper-parameters of alpha, beta and gamma.
The result is recorded in [tune_hp.md](https://github.com/ivalab/GraspKpNet/blob/main/readme/tune_kp.md)
### Demo video
The demo video of all physical experiments are uploaded on the [Youtube](https://www.youtube.com/watch?v=Q8-Kr8Q9vC0). Please watch it if you are interested.
### Detailed Result and Tables
Some of the source data was summarized with the raw source data not provided. The links below provide access to the source material:
- [Trial results of bin picking](https://github.com/ivalab/GraspKpNet/blob/main/readme/bin_picking.md) experiment.
- [6-DoF summary results](https://github.com/ivalab/GraspKpNet/blob/main/readme/bin_picking_6DoF.md) for clutter clearance or bin-picking tasks.
### Implementation of GGCNN
Considering that GGCNN didn't provide the result of training and testing on the Cornell Dataset, we implemented their work based on their public
repository. The modified version is provided [here](https://github.com/ivalab/ggcnn).
## License
GKNet is released under the MIT License (refer to the LICENSE file for details).
Portions of the code are borrowed from [CenterNet](https://github.com/xingyizhou/CenterNet), [dla](https://github.com/ucbdrive/dla) (DLA network), [DCNv2](https://github.com/CharlesShang/DCNv2)(deformable convolutions). Please refer to the original License of these projects (See [Notice](https://github.com/ivalab/GKNet/blob/master/NOTICE)).
## Citation
| 63.592593 | 395 | 0.710309 | eng_Latn | 0.907825 |
c11fa05d8bf4965c90b839d67790305ae129ef44 | 302 | md | Markdown | _posts/2019-08-19-about-article.md | catecholamin/catecholamin.github.io | f1806d58986d6933ff2156a92f54555f4b14f403 | [
"MIT"
] | null | null | null | _posts/2019-08-19-about-article.md | catecholamin/catecholamin.github.io | f1806d58986d6933ff2156a92f54555f4b14f403 | [
"MIT"
] | 72 | 2019-09-16T03:17:48.000Z | 2020-12-06T14:41:27.000Z | _posts/2019-08-19-about-article.md | catecholamin/catecholamin.github.io | f1806d58986d6933ff2156a92f54555f4b14f403 | [
"MIT"
] | 1 | 2019-11-03T20:09:20.000Z | 2019-11-03T20:09:20.000Z | ---
layout: post
title: Rubbish article
tags: article
---
These days, I was working on the article about the adipose. I know it's a rubbish article that make no sense and just need to be finished as soon as possible. No one care about the results in the article. I just need to finish it in August.
| 33.555556 | 240 | 0.751656 | eng_Latn | 0.999934 |
c11fd09e201fe02211d7c4c5319a024ca2e8a747 | 1,394 | md | Markdown | README.md | mcallisto/antd-slinky | 9d952cfa8e107dccb13bfbb58644f51caa3317b0 | [
"MIT"
] | null | null | null | README.md | mcallisto/antd-slinky | 9d952cfa8e107dccb13bfbb58644f51caa3317b0 | [
"MIT"
] | null | null | null | README.md | mcallisto/antd-slinky | 9d952cfa8e107dccb13bfbb58644f51caa3317b0 | [
"MIT"
] | null | null | null | # antd-slinky
[Antd 4.9.4](https://ant.design/components/overview/) bindings for [slinky](https://slinky.dev/) (powered by [ScalablyTyped](https://scalablytyped.org))
It is distributed for Scala 2.13 and Scala.js 1.x
```
libraryDependencies ++= Seq("vision.id" %%% "antd-slinky" % "0.2.0")
```
# Minimization
See the following compilation output:
```
[warn] Wrote @ant-design/icons (28 files)
[warn] Wrote @ant-design/icons-svg (798 files)
[warn] Wrote @ant-design/react-slick (11 files)
[warn] Wrote antd (1430 files)
[warn] Wrote dayjs (183 files)
[warn] Wrote moment (149 files)
[warn] Wrote prop-types (32 files)
[warn] Wrote rc-dialog (24 files)
[warn] Wrote rc-field-form (88 files)
[warn] Wrote rc-image (26 files)
[warn] Wrote rc-mentions (44 files)
[warn] Wrote rc-menu (76 files)
[warn] Wrote rc-motion (59 files)
[warn] Wrote rc-notification (27 files)
[warn] Wrote rc-picker (218 files)
[warn] Wrote rc-select (86 files)
[warn] Wrote rc-table (116 files)
[warn] Wrote rc-tabs (52 files)
[warn] Wrote rc-textarea (21 files)
[warn] Wrote rc-tooltip (12 files)
[warn] Wrote rc-tree (98 files)
[warn] Wrote rc-tree-select (41 files)
[warn] Wrote rc-trigger (37 files)
[warn] Wrote react-dom (54 files)
[warn] Wrote scroll-into-view-if-needed (12 files)
[warn] Wrote minimized csstype (487 files)
[warn] Wrote minimized std (180 files)
[warn] Wrote minimized react (288 files)
``` | 31.681818 | 152 | 0.710904 | eng_Latn | 0.561575 |
c11ff30f15718d110d509a1d0059e1933e321831 | 4,835 | md | Markdown | Documentation/2-How_Parquet_Handles_Game_Objects.md | mxashlynn/Parquet | 9f430b30a53d665f83b7728d292a1933d585c167 | [
"MIT"
] | 13 | 2019-02-20T00:36:32.000Z | 2022-03-14T23:51:57.000Z | Documentation/2-How_Parquet_Handles_Game_Objects.md | mxashlynn/Parquet | 9f430b30a53d665f83b7728d292a1933d585c167 | [
"MIT"
] | 69 | 2019-01-18T21:45:51.000Z | 2021-03-02T22:47:27.000Z | Documentation/2-How_Parquet_Handles_Game_Objects.md | mxashlynn/Parquet | 9f430b30a53d665f83b7728d292a1933d585c167 | [
"MIT"
] | 3 | 2019-06-22T16:11:10.000Z | 2021-03-31T06:31:25.000Z | The vast majority of game objects in a given Parquet game consist of details set at design time that do not change at any point during play. Parquet tracks these as "Models".
Here is how the system works.
## Models
A **Model** could be considered the fundamental concept in the Parquet library. It represents the parts of a game object that do not change over time, or from one instance to another.
## ModelIDs and ModelCollections
As part of their definition, every Model has an identifier, providing a means to track and rapidly swap large numbers of models at runtime.
Many different game objects can then share a single definition when working out their game rules and behaviors. For example, we might define what it means to be a bunny and then introduce many bunnies around the place; they would all share our original definition. In this sense, you could think of Models as equivalent to C# classes.
As the analogy with C# classes would suggest, Parquet also provides a means to track individual instances of each model -- a way to have a collection of five individual bunnies, rather than simply the idea of a bunny.
These individuals are similar to C# object instances, and Parquet represents them as **ModelID**s.
Individual game objects reference their models, and those models live within **ModelCollection**s. A ModelCollection can contain any number of models of the same general sort -- bunnies with other critters, wooden planks with other types of flooring, and so on.
## All: Where Models Are Defined
An ModelID is used to look up the singular Model from the appropriate ModelCollection. Most of the time, this is done via one of the canonical model collections provided by the "**All**" super-collection.
This allows (hopefully!) fluent constructions in code such as
```cs
block1 = All.Parquets.Get<BlockModel>(1);
```
where `1` is the ModelID.
Using lines of code like this, the library looks up the game object definitions for any particular game object as needed when game rules are checked or game elements interact.
All is populated during initialization and remains immutable afterward, serving as the source of truth about the parts of game objects that do not change during play.
## Model Status
Of course, even in a Parquet game many game objects have details that _do_ change over time or from one instance to another.
For example, one of the bunnies might hop over from the clover down to the brook, and the game would need to track where exactly that specific bunny had moved.
If individual game objects must have mutable state like this, then a separate partner class captures those details. Such classes always end in the word "Status"; examples include **ParquetStatus** for floors and blocks, or **BeingStatus** for player characters and NPCs.
## Model Types and Sub-Types
Most of the time when we are talking about a group of Models we are talking about a group of very similar Models, such as those bunnies.
Many of the game objects that Parquet deals with can be grouped together like this into collections with shared characteristics. In such a collection, Parquet guarantees that all the given Models have the same general type of information in their definition.
The types of Entities that Parquet knows about are:
- Being Models
- Character Models
- Critter Models
- Biome Models
- Interaction Models
- Dialogue Models
- Quest Models
- Item Models
- Parquet Models
- Floor Models
- Block Models
- Furnishing Models
- Collectible Models
- Recipes
- Crafting Recipes
- Room Recipes
Similar to how a C# compiler provides type-checking for C# objects, Parquet defines valid ID ranges for all Model types and sub-types.
## Tags: Grouping Models Across Types
Sometimes we want to talk about Models that are not in the same formal group but nevertheless share a characteristic. For example, we might be interested to note that one of the bunnies is bright pink, and that there are similar pink flowers down by the brook.
For scenarios like this, Parquet provides a way to identify characteristics outside the scope of the formal definition. Narrative, aesthetic, or mechanical features can be tagged to indicate what informal attributes they exhibit.
This allows for the definition of Models that rely on a loose category of other Models. For example, a volcanic BiomeModel might be interested in every ParquetModel that has the "Volcanic" tag, or a candy CraftingRecipe might be interested in any Item Model with the "Sugary" tag.
To support this kind of flexibility, more than one **ModelTag** can coexist on a specific Model.
## More Details
For more technical details, please read the remarks on the [Model](https://github.com/mxashlynn/Parquet/blob/master/ParquetClassLibrary/Model.cs) class and related classes.
| 58.253012 | 336 | 0.783454 | eng_Latn | 0.999512 |
c1204e231c21005e904fdea65e193d49c64d7552 | 131 | markdown | Markdown | doc/py_tutorials/py_video/py_table_of_contents_video.markdown | thisisgopalmandal/opencv | 4e2ef8c8f57644ccb8e762a37f70a61007c6be1c | [
"BSD-3-Clause"
] | 56,632 | 2016-07-04T16:36:08.000Z | 2022-03-31T18:38:14.000Z | doc/py_tutorials/py_video/py_table_of_contents_video.markdown | thisisgopalmandal/opencv | 4e2ef8c8f57644ccb8e762a37f70a61007c6be1c | [
"BSD-3-Clause"
] | 13,593 | 2016-07-04T13:59:03.000Z | 2022-03-31T21:04:51.000Z | doc/py_tutorials/py_video/py_table_of_contents_video.markdown | thisisgopalmandal/opencv | 4e2ef8c8f57644ccb8e762a37f70a61007c6be1c | [
"BSD-3-Clause"
] | 54,986 | 2016-07-04T14:24:38.000Z | 2022-03-31T22:51:18.000Z | Video Analysis {#tutorial_py_table_of_contents_video}
==============
Content has been moved: @ref tutorial_table_of_content_video
| 26.2 | 60 | 0.770992 | eng_Latn | 0.273338 |
c120e04ff172ec0606d2d17cec0e34563fe83d63 | 106 | md | Markdown | README.md | Bubblbu/osaos-funding-landscape | ec2f2a7bc23696a8716dc30ff0dff22ba142c1ea | [
"MIT"
] | 1 | 2018-10-09T16:17:43.000Z | 2018-10-09T16:17:43.000Z | README.md | Bubblbu/osaos-funding-landscape | ec2f2a7bc23696a8716dc30ff0dff22ba142c1ea | [
"MIT"
] | null | null | null | README.md | Bubblbu/osaos-funding-landscape | ec2f2a7bc23696a8716dc30ff0dff22ba142c1ea | [
"MIT"
] | null | null | null | # osaos-funding-landscape
An interactive chart of grantmakers for/in Open Science, Open Source, ScholComm
| 35.333333 | 79 | 0.820755 | eng_Latn | 0.502245 |
c1219403946483ca60e8645c88d833cb04c8cbfa | 411 | md | Markdown | content/ko/blog/pcr/index.md | nrgw/doks-child-theme | 625b0f3f7d0b8ad203e3c61fef21cfbe6b3d45c6 | [
"MIT"
] | null | null | null | content/ko/blog/pcr/index.md | nrgw/doks-child-theme | 625b0f3f7d0b8ad203e3c61fef21cfbe6b3d45c6 | [
"MIT"
] | null | null | null | content/ko/blog/pcr/index.md | nrgw/doks-child-theme | 625b0f3f7d0b8ad203e3c61fef21cfbe6b3d45c6 | [
"MIT"
] | null | null | null | ---
title: "방역패스 조건에 PCR 관련 항목 추가"
description: "PCR 검사를 통해 방역패스 조건을 만족할 수 있도록 방역지침을 변경하였습니다."
lead: "PCR 검사를 통해 방역패스 조건을 만족할 수 있도록 방역지침을 변경하였습니다."
date: 2021-12-15T21:51:40+09:00
lastmod: 2021-12-15T21:51:40+09:00
draft: false
weight: 50
images: []
contributors: ["박찬"]
---
자세한 내용은 [방역지침]({{< ref "quarantine" >}})을 참조하시기 바랍니다.
해당 내용을 [겨울학교 신청양식](https://forms.gle/qJbrBPSfsZqcyyvD9)에도 반영하였습니다. | 27.4 | 66 | 0.671533 | kor_Hang | 0.999541 |
c122e51672df7deb9a6d22f35b7cc4548233a1ec | 796 | md | Markdown | articles/service-bus-messaging/service-bus-quotas.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 15 | 2017-08-28T07:46:17.000Z | 2022-02-03T12:49:15.000Z | articles/service-bus-messaging/service-bus-quotas.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 407 | 2018-06-14T16:12:48.000Z | 2021-06-02T16:08:13.000Z | articles/service-bus-messaging/service-bus-quotas.md | pmsousa/azure-docs.pt-pt | bc487beff48df00493484663c200e44d4b24cb18 | [
"CC-BY-4.0",
"MIT"
] | 17 | 2017-10-04T22:53:31.000Z | 2022-03-10T16:41:59.000Z | ---
title: Quotas e limites de autocarros da Microsoft Azure Service
description: Este artigo lista quotas básicas e limiares de estrangulamento nas mensagens Azure Service Bus. Por exemplo - número máximo de espaços de nome por subscrição.
ms.topic: article
ms.date: 02/17/2021
ms.openlocfilehash: d84e8a297092242dc1fd648b62fabc16978a1dab
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 03/29/2021
ms.locfileid: "100651952"
---
# <a name="service-bus-quotas"></a>Quotas do Service Bus
Esta secção lista quotas básicas e limiares de estrangulamento nas mensagens Azure Service Bus.
## <a name="messaging-quotas"></a>Quotas de mensagens
[!INCLUDE [service-bus-quotas-table](../../includes/service-bus-quotas-table.md)]
| 41.894737 | 171 | 0.797739 | por_Latn | 0.685913 |
c12340577704b0898ce1bd692e1f4fdedae35ced | 18,482 | md | Markdown | docs/framework/wpf/advanced/walkthrough-hosting-a-windows-forms-composite-control-in-wpf.md | emrekizildas/docs.tr-tr | e5ea7f77f482a654c6520be9f3e8f721f9b1b0c2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-01-06T07:30:24.000Z | 2020-01-06T07:30:24.000Z | docs/framework/wpf/advanced/walkthrough-hosting-a-windows-forms-composite-control-in-wpf.md | emrekizildas/docs.tr-tr | e5ea7f77f482a654c6520be9f3e8f721f9b1b0c2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wpf/advanced/walkthrough-hosting-a-windows-forms-composite-control-in-wpf.md | emrekizildas/docs.tr-tr | e5ea7f77f482a654c6520be9f3e8f721f9b1b0c2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "İzlenecek yol: WPF'de Windows Forms Bileşik Denetimini Barındırma"
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- hosting Windows Forms control in WPF [WPF]
- composite controls [WPF], hosting in WPF
ms.assetid: 96fcd78d-1c77-4206-8928-3a0579476ef4
ms.openlocfilehash: e4c1de17b131ee68c6e6fce0035b703eb5b489a0
ms.sourcegitcommit: 005980b14629dfc193ff6cdc040800bc75e0a5a5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 09/14/2019
ms.locfileid: "70991880"
---
# <a name="walkthrough-hosting-a-windows-forms-composite-control-in-wpf"></a>İzlenecek yol: WPF'de Windows Forms Bileşik Denetimini Barındırma
[!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)]uygulamalar oluşturmak için zengin bir ortam sağlar. Ancak, kod içinde [!INCLUDE[TLA#tla_winforms](../../../../includes/tlasharptla-winforms-md.md)] önemli bir yatırımınız olduğunda, sıfırdan yazmak yerine [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] uygulamanızda bu kodun en az bir kısmını yeniden kullanmak daha etkili olabilir. En yaygın senaryo, Windows Forms denetimlerinizi var. Bazı durumlarda, bu denetimlerin kaynak koduna bile erişiminiz olmayabilir. [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)]bir [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] uygulamada bu denetimleri barındırmak için kolay bir yordam sağlar. Örneğin, özelleştirilmiş <xref:System.Windows.Forms.DataGridView> denetimlerinizi barındırırken [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] birçok programlama için kullanabilirsiniz.
Bu izlenecek yol, bir [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] uygulamada veri girişi gerçekleştirmek üzere Windows Forms Bileşik denetim barındıran bir uygulamada size kılavuzluk ediyor. Bileşik denetim bir DLL içinde paketlenmiştir. Bu genel yordam, daha karmaşık uygulamalar ve denetimler için genişletilebilir. Bu izlenecek yol, görünümü ve işlevselliği [gözden geçirmede neredeyse özdeş olacak şekilde tasarlanmıştır: Windows Forms](walkthrough-hosting-a-wpf-composite-control-in-windows-forms.md)Içinde WPF bileşik denetimi barındırma. Birincil fark, barındırma senaryosunun tersine çevrilme olur.
İzlenecek yol iki bölüme ayrılmıştır. İlk bölüm Windows Forms Bileşik denetimin uygulamasını kısaca açıklar. İkinci bölümde, bileşik denetimin bir [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] uygulamada nasıl barındırılabileceği, denetimden olayların nasıl alınacağı ve denetimin özelliklerinden bazılarına erişme hakkında ayrıntılı bilgi yer almaktadır.
Bu izlenecek yolda gösterilen görevler şunlardır:
- Windows Forms Bileşik denetimi uygulama.
- WPF konak uygulamasını uygulama.
Bu kılavuzda gösterilen görevlerin tüm kod listesi için bkz. [WPF örneğinde Windows Forms Bileşik denetimi barındırma](https://go.microsoft.com/fwlink/?LinkID=159999).
## <a name="prerequisites"></a>Önkoşullar
Bu yönergeyi tamamlamak için Visual Studio gerekir.
## <a name="implementing-the-windows-forms-composite-control"></a>Windows Forms Bileşik denetimi uygulama
Bu örnekte kullanılan Windows Forms Bileşik denetim basit bir veri girişi biçimidir. Bu form kullanıcının adını ve adresini alır ve bu bilgileri konağa döndürmek için özel bir olay kullanır. Aşağıdaki çizimde işlenmiş denetim gösterilmektedir.
Aşağıdaki görüntüde Windows Forms Bileşik bir denetim gösterilmektedir:

### <a name="creating-the-project"></a>Projeyi Oluşturma
Projeyi başlatmak için:
1. Başlatın [!INCLUDE[TLA#tla_visualstu](../../../../includes/tlasharptla-visualstu-md.md)]ve **Yeni proje** iletişim kutusunu açın.
2. Pencere kategorisinde **Windows Forms denetim kitaplığı** şablonunu seçin.
3. Yeni projeyi `MyControls`adlandırın.
4. Konum için, gibi kolay adlandırılmış bir üst düzey klasör `WpfHostingWindowsFormsControl`belirtin. Daha sonra, ana bilgisayar uygulamasını bu klasöre yerleştirebilirsiniz.
5. Projeyi oluşturmak için **Tamam** ' ı tıklatın. Varsayılan proje adlı `UserControl1`tek bir denetim içerir.
6. Çözüm Gezgini ' de, `UserControl1` olarak `MyControl1`yeniden adlandırın.
Projenizin aşağıdaki sistem dll 'Lerine başvuruları olmalıdır. Bu dll 'Lerden herhangi biri varsayılan olarak dahil edilmez, bunları projeye ekleyin.
- Sistem
- System.Data
- System.Drawing
- System.Windows.Forms
- System.Xml
### <a name="adding-controls-to-the-form"></a>Forma denetim ekleme
Forma denetim eklemek için:
- Tasarımcıda `MyControl1` açın.
Formda, <xref:System.Windows.Forms.Label> önceki çizimde oldukları gibi <xref:System.Windows.Forms.TextBox> boyutlandırılmış ve düzenlenmiş beş denetim ve bunlara karşılık gelen denetimleri ekleyin. Örnekte, <xref:System.Windows.Forms.TextBox> denetimler şu şekilde adlandırılır:
- `txtName`
- `txtAddress`
- `txtCity`
- `txtState`
- `txtZip`
Tamam ve <xref:System.Windows.Forms.Button> **iptal**etiketli iki denetim ekleyin. Örnekte, düğme adları sırasıyla ve `btnOK` `btnCancel`' dir.
### <a name="implementing-the-supporting-code"></a>Destekleyici kodu uygulama
Formu kod görünümünde açın. Denetim, toplanan verileri özel `OnButtonClick` olayı yükselterek ana bilgisayarına döndürür. Veriler, olay bağımsız değişkeni nesnesinde yer alır. Aşağıdaki kod, olay ve temsilci bildirimini gösterir.
`MyControl1` Sınıfına aşağıdaki kodu ekleyin.
[!code-csharp[WpfHostingWindowsFormsControl#2](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/MyControls/MyControl1.cs#2)]
[!code-vb[WpfHostingWindowsFormsControl#2](~/samples/snippets/visualbasic/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/VisualBasic/MyControls/MyControl1.vb#2)]
`MyControlEventArgs` Sınıfı, konağa döndürülecek bilgileri içerir.
Aşağıdaki sınıfı forma ekleyin.
[!code-csharp[WpfHostingWindowsFormsControl#3](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/MyControls/MyControl1.cs#3)]
[!code-vb[WpfHostingWindowsFormsControl#3](~/samples/snippets/visualbasic/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/VisualBasic/MyControls/MyControl1.vb#3)]
Kullanıcı **Tamam** veya **iptal** <xref:System.Windows.Forms.Control.Click> düğmesine tıkladığında, olay işleyicileri verileri içeren bir `MyControlEventArgs` nesne oluşturur ve `OnButtonClick` olayı başlatır. İki işleyici arasındaki tek fark, olay bağımsız değişkeninin `IsOK` özelliğidir. Bu özellik, ana bilgisayarın hangi düğmeye tıklandığını belirlemesine olanak sağlar. `true` **Tamam** düğmesi ve`false` **iptal** düğmesi için olarak ayarlanır. Aşağıdaki kod iki düğme işleyicisini gösterir.
`MyControl1` Sınıfına aşağıdaki kodu ekleyin.
[!code-csharp[WpfHostingWindowsFormsControl#4](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/MyControls/MyControl1.cs#4)]
[!code-vb[WpfHostingWindowsFormsControl#4](~/samples/snippets/visualbasic/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/VisualBasic/MyControls/MyControl1.vb#4)]
### <a name="giving-the-assembly-a-strong-name-and-building-the-assembly"></a>Derlemeye tanımlayıcı bir ad verme ve derlemeyi oluşturma
Bu derlemeye bir uygulama tarafından başvurulması için [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] bir tanımlayıcı adı olması gerekir. Güçlü bir ad oluşturmak için sn. exe ile bir anahtar dosyası oluşturun ve bunu projenize ekleyin.
1. Bir [!INCLUDE[TLA2#tla_visualstu](../../../../includes/tla2sharptla-visualstu-md.md)] komut istemi açın. Bunu yapmak için **Başlat** menüsüne tıklayın ve ardından **tüm programlar/Microsoft Visual Studio 2010/Visual Studio Araçları/Visual Studio komut istemi**' ni seçin. Bu, özelleştirilmiş ortam değişkenleriyle bir konsol penceresi başlatır.
2. Komut isteminde, proje klasörünüze gitmek için `cd` komutunu kullanın.
3. Aşağıdaki komutu çalıştırarak MyControls. snk adlı bir anahtar dosya oluşturun.
```console
Sn.exe -k MyControls.snk
```
4. Projenize anahtar dosyasını dahil etmek için Çözüm Gezgini ' de proje adına sağ tıklayın ve ardından **Özellikler**' e tıklayın. Proje tasarımcısında **imzalama** sekmesine tıklayın, **derlemeyi imzala** onay kutusunu seçin ve ardından anahtar dosyanıza göz atın.
5. Çözümü oluşturun. Derleme MyControls. dll adlı bir DLL üretecektir.
## <a name="implementing-the-wpf-host-application"></a>WPF ana bilgisayar uygulamasını uygulama
Konak uygulama, barındırmak <xref:System.Windows.Forms.Integration.WindowsFormsHost> `MyControl1`için denetimi kullanır. [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] Uygulama, denetimden verileri `OnButtonClick` almak için olayı işler. Ayrıca, bazı denetim özelliklerini [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] uygulamadan değiştirmenize olanak sağlayan bir seçenek düğmeleri koleksiyonuna sahiptir. Aşağıdaki çizimde tamamlanmış uygulama gösterilmektedir.
Aşağıdaki görüntüde WPF uygulamasına katıştırılmış denetim dahil olmak üzere tüm uygulama gösterilmektedir:

### <a name="creating-the-project"></a>Projeyi Oluşturma
Projeyi başlatmak için:
1. Açın [!INCLUDE[TLA2#tla_visualstu](../../../../includes/tla2sharptla-visualstu-md.md)]ve **Yeni proje**' yi seçin.
2. Pencere kategorisinde, **WPF uygulama** şablonunu seçin.
3. Yeni projeyi `WpfHost`adlandırın.
4. Konum için, MyControls projesini içeren en üst düzey klasörü belirtin.
5. Projeyi oluşturmak için **Tamam** ' ı tıklatın.
Ayrıca, ve diğer derlemeler içeren `MyControl1` dll 'ye başvurular eklemeniz gerekir.
1. Çözüm Gezgini ' de proje adına sağ tıklayın ve **Başvuru Ekle**' yi seçin.
2. **Araştır** sekmesine tıklayın ve MyControls. dll dosyasını içeren klasöre gidin. Bu izlenecek yol için, bu klasör Mycontrols\bin\debugklasörüdür.
3. MyControls. dll ' yi seçin ve ardından **Tamam**' a tıklayın.
4. WindowsFormsIntegration. dll adlı WindowsFormsIntegration derlemesine bir başvuru ekleyin.
### <a name="implementing-the-basic-layout"></a>Temel düzeni uygulama
Konak [!INCLUDE[TLA#tla_ui](../../../../includes/tlasharptla-ui-md.md)] uygulamanın, MainWindow. xaml ' de uygulanması. Bu dosya, [!INCLUDE[TLA#tla_xaml](../../../../includes/tlasharptla-xaml-md.md)] düzeni tanımlayan biçimlendirmeyi içerir ve Windows Forms denetimi barındırır. Uygulama üç bölgeye ayrılmıştır:
- Barındırılan denetimin çeşitli özelliklerini değiştirmek için kullanabileceğiniz seçenek düğmelerinin koleksiyonunu içeren **denetim özellikleri** paneli.
- Barındırılan denetimden döndürülen verileri görüntüleyen çeşitli <xref:System.Windows.Controls.TextBlock> öğeleri içeren denetim masası 'ndaki veriler.
- Barındırılan denetimin kendisi.
Temel düzen aşağıdaki XAML 'de gösterilmiştir. Barındırmak `MyControl1` için gereken biçimlendirme bu örnekte atlanır, ancak daha sonra ele alınacaktır.
MainWindow. xaml içindeki XAML 'yi aşağıdakiler ile değiştirin. Visual Basic kullanıyorsanız, sınıfını olarak `x:Class="MainWindow"`değiştirin.
[!code-xaml[WpfHostingWindowsFormsControl#100](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/WpfHost/Page1.xaml#100)]
İlk <xref:System.Windows.Controls.StackPanel> öğe, barındırılan denetimin çeşitli varsayılan <xref:System.Windows.Controls.RadioButton> özelliklerini değiştirmenize olanak tanıyan birkaç denetim kümesini içerir. Bu, ardından barındıran <xref:System.Windows.Forms.Integration.WindowsFormsHost> `MyControl1`bir öğesi. Son <xref:System.Windows.Controls.StackPanel> öğesi, barındırılan denetim <xref:System.Windows.Controls.TextBlock> tarafından döndürülen verileri görüntüleyen çeşitli öğeler içerir. Öğelerin ve <xref:System.Windows.Controls.DockPanel.Dock%2A> ve <xref:System.Windows.FrameworkElement.Height%2A> öznitelik ayarlarının sıralaması barındırılan denetimi, boşluk veya deformasyon olmadan pencereye katıştırır.
#### <a name="hosting-the-control"></a>Denetimi barındırma
Önceki XAML 'nin aşağıdaki düzenlenmiş sürümü barındırmak `MyControl1`için gereken öğelere odaklanır.
[!code-xaml[WpfHostingWindowsFormsControl#101](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/WpfHost/Page1.xaml#101)]
[!code-xaml[WpfHostingWindowsFormsControl#102](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/WpfHost/Page1.xaml#102)]
Ad alanı eşleme özniteliği barındırılan denetimi içeren `MyControls` ad alanına bir başvuru oluşturur. `xmlns` Bu eşleme, içinde `MyControl1` [!INCLUDE[TLA2#tla_xaml](../../../../includes/tla2sharptla-xaml-md.md)] olarak `<mcl:MyControl1>`temsil etmenizi sağlar.
XAML 'deki iki öğe barındırmayı işleme:
- `WindowsFormsHost`<xref:System.Windows.Forms.Integration.WindowsFormsHost> bir[!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] uygulamada Windows Forms denetimini barındırmanızı sağlayan öğeyi temsil eder.
- `mcl:MyControl1`öğesini temsil `MyControl1`eden <xref:System.Windows.Forms.Integration.WindowsFormsHost> öğesi öğenin alt koleksiyonuna eklenir. Sonuç olarak, bu Windows Forms denetimi [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] pencerenin bir parçası olarak işlenir ve uygulamadan denetimle iletişim kurabilirsiniz.
### <a name="implementing-the-code-behind-file"></a>Arka plan kod dosyasını uygulama
MainWindow. xaml. vb veya MainWindow.xaml.cs arka plan kod dosyası, önceki bölümde [!INCLUDE[TLA2#tla_ui](../../../../includes/tla2sharptla-ui-md.md)] açıklanan işlevleri uygulayan yordamsal kodu içerir. Birincil görevler şunlardır:
- Olayına bir olay işleyicisi `MyControl1` `OnButtonClick` iliştirme.
- Seçenek düğmelerinin toplanması temelinde `MyControl1`' nin çeşitli özelliklerini değiştirme.
- Denetim tarafından toplanan verileri görüntüleme.
#### <a name="initializing-the-application"></a>Uygulamayı başlatma
Başlatma kodu, pencerenin <xref:System.Windows.FrameworkElement.Loaded> olayı için bir olay işleyicisinde bulunur ve `OnButtonClick` denetimin olayına bir olay işleyicisi ekler.
MainWindow. xaml. vb veya MainWindow.xaml.cs içinde, `MainWindow` sınıfına aşağıdaki kodu ekleyin.
[!code-csharp[WpfHostingWindowsFormsControl#11](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/WpfHost/Page1.xaml.cs#11)]
[!code-vb[WpfHostingWindowsFormsControl#11](~/samples/snippets/visualbasic/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/VisualBasic/WpfHost/Page1.xaml.vb#11)]
<xref:System.Windows.Forms.Integration.WindowsFormsHost> `MyControl1` Dahaönce<xref:System.Windows.Forms.Integration.WindowsFormsHost.Child%2A> , `MyControl1` öğenin<xref:System.Windows.Forms.Integration.WindowsFormsHost> alt öğe koleksiyonuna eklenmiş olduğundan, başvurusunu almak için öğesini çevirebilirsiniz. [!INCLUDE[TLA2#tla_xaml](../../../../includes/tla2sharptla-xaml-md.md)] Daha sonra bu başvuruyu, ' a bir olay işleyicisi `OnButtonClick`iliştirmek için kullanabilirsiniz.
Denetimin kendine başvuru sağlamaya ek olarak, <xref:System.Windows.Forms.Integration.WindowsFormsHost> uygulamanın içinden işleyebileceğiniz bir dizi denetimin özelliklerini gösterir. Başlatma kodu, bu değerleri uygulamada daha sonra kullanılmak üzere özel genel değişkenlere atar.
`MyControls` Dll 'deki türlere kolayca erişebilmeniz için, dosyanın en üstüne aşağıdaki `Imports` veya `using` ifadesini ekleyin.
```vb
Imports MyControls
```
```csharp
using MyControls;
```
#### <a name="handling-the-onbuttonclick-event"></a>OnButtonClick olayını işleme
`MyControl1`Kullanıcı, denetim düğmelerinden birini tıkladığında olayıbaşlatır.`OnButtonClick`
`MainWindow` Sınıfına aşağıdaki kodu ekleyin.
[!code-csharp[WpfHostingWindowsFormsControl#12](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/WpfHost/Page1.xaml.cs#12)]
[!code-vb[WpfHostingWindowsFormsControl#12](~/samples/snippets/visualbasic/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/VisualBasic/WpfHost/Page1.xaml.vb#12)]
Metin kutularındaki veriler `MyControlEventArgs` nesnesine paketlenmiştir. Kullanıcı **Tamam** düğmesine tıkladığında, olay işleyicisi verileri ayıklar ve aşağıdaki `MyControl1`panelde görüntüler.
#### <a name="modifying-the-controls-properties"></a>Denetimin özelliklerini değiştirme
<xref:System.Windows.Forms.Integration.WindowsFormsHost> Öğesi, barındırılan denetimin varsayılan özelliklerinden birkaçını gösterir. Sonuç olarak, denetimin görünümünü uygulamanızın stiliyle daha yakından eşleşecek şekilde değiştirebilirsiniz. Sol paneldeki seçenek düğmelerinin kümeleri, kullanıcının çeşitli renk ve yazı tipi özelliklerini değiştirmesini sağlar. Her düğme kümesi, <xref:System.Windows.Controls.Primitives.ButtonBase.Click> olay için kullanıcının seçenek düğmesi seçimlerini algılayan ve denetimdeki ilgili özelliği değiştiren bir işleyicisine sahiptir.
`MainWindow` Sınıfına aşağıdaki kodu ekleyin.
[!code-csharp[WpfHostingWindowsFormsControl#13](~/samples/snippets/csharp/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/CSharp/WpfHost/Page1.xaml.cs#13)]
[!code-vb[WpfHostingWindowsFormsControl#13](~/samples/snippets/visualbasic/VS_Snippets_Wpf/WpfHostingWindowsFormsControl/VisualBasic/WpfHost/Page1.xaml.vb#13)]
Uygulamayı derleyin ve çalıştırın. Windows Forms Bileşik denetimine bir metin ekleyin ve ardından **Tamam**' a tıklayın. Metin, etiketlerde görüntülenir. Denetimdeki etkiyi görmek için farklı radyo düğmelerine tıklayın.
## <a name="see-also"></a>Ayrıca bkz.
- <xref:System.Windows.Forms.Integration.ElementHost>
- <xref:System.Windows.Forms.Integration.WindowsFormsHost>
- [Visual Studio’da XAML tasarlama](/visualstudio/designers/designing-xaml-in-visual-studio)
- [İzlenecek yol: WPF 'de Windows Forms denetimini barındırma](walkthrough-hosting-a-windows-forms-control-in-wpf.md)
- [İzlenecek yol: Windows Forms WPF bileşik denetimini barındırma](walkthrough-hosting-a-wpf-composite-control-in-windows-forms.md)
| 74.524194 | 1,035 | 0.800779 | tur_Latn | 0.997846 |
c123603f9029a24af094b33abd6399e7eeb73371 | 3,423 | md | Markdown | articles/automation/automation-dsc-configuration-based-on-stig.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:16.000Z | 2021-03-12T23:37:16.000Z | articles/automation/automation-dsc-configuration-based-on-stig.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/automation/automation-dsc-configuration-based-on-stig.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Configurar datos basados en STIG para Azure Automation State Configuration
description: En este artículo se indica cómo configurar datos basados en STIG del DoD para Azure Automation State Configuration.
keywords: dsc,powershell,configuration,setup
services: automation
ms.subservice: dsc
ms.date: 08/08/2019
ms.topic: conceptual
ms.openlocfilehash: ba875eb6fd132a6f5bcf916936545ac76c1656d3
ms.sourcegitcommit: 362359c2a00a6827353395416aae9db492005613
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 11/15/2021
ms.locfileid: "132488590"
---
# <a name="configure-data-based-on-security-technical-information-guide-stig"></a>Configuración de según la guía de implementación técnica de seguridad (STIG)
> Se aplica a: Windows PowerShell 5.1
Crear contenido de configuración por primera vez puede ser un reto.
En muchos casos, el objetivo es automatizar la configuración de los servidores después de una "línea de base" que, con suerte, se alinea con una recomendación del sector.
> [!NOTE]
> En este artículo se hace referencia a una solución que se mantiene en la comunidad de código abierto.
> La compatibilidad solo está disponible en forma de colaboración en GitHub, no con Microsoft.
## <a name="community-project-powerstig"></a>Proyecto de la comunidad: PowerSTIG
Un proyecto de la comunidad denominado [PowerSTIG](https://github.com/microsoft/powerstig) pretende resolver este problema mediante la generación de contenido DSC basado en la [información pública](https://public.cyber.mil/stigs/) proporcionada sobre la guía de implementación técnica de seguridad (STIG, por sus siglas en inglés).
Tratar con las líneas de base es más complicado de lo que parece.
Muchas organizaciones necesitan [documentar las excepciones](https://github.com/microsoft/powerstig#powerstigdata) a las reglas y administrar los datos a escala.
Para hacer frente a este problema, PowerSTIG proporciona [recursos compuestos](https://github.com/microsoft/powerstig#powerstigdsc) para abordar cada área de la configuración, en lugar de intentar abordar toda la configuración de un archivo grande.
Una vez que se han generado las configuraciones, puede usar los [scripts de configuración de DSC](/powershell/scripting/dsc/configurations/configurations) para generar archivos MOF y [cargarlos en Azure Automation](./tutorial-configure-servers-desired-state.md#create-and-upload-a-configuration-to-azure-automation).
A continuación, registre los servidores desde una [ubicación local](./automation-dsc-onboarding.md#enable-physicalvirtual-linux-machines) o [en Azure](./automation-dsc-onboarding.md#enable-azure-vms) para extraer las configuraciones.
Para probar PowerSTIG, visite la [Galería de PowerShell](https://www.powershellgallery.com) y descargue la solución o haga clic en "Project Site" (Sitio del proyecto) para ver la [documentación](https://github.com/microsoft/powerstig).
## <a name="next-steps"></a>Pasos siguientes
- Para comprender DSC de PowerShell, vea [Información general sobre Desired State Configuration de PowerShell](/powershell/scripting/dsc/overview/overview).
- Obtenga información sobre los recursos de DSC de PowerShell en [Recursos de DSC](/powershell/scripting/dsc/resources/resources).
- Para obtener información detallada sobre la configuración de Configuration Manager, vea [Configuración de Configuration Manager local](/powershell/scripting/dsc/managing-nodes/metaconfig).
| 76.066667 | 331 | 0.810985 | spa_Latn | 0.958061 |
c1239b4128be3e334055300ae43dd324fa5e1e3d | 1,309 | md | Markdown | workshop/content/prerequisites/awscli/_index.md | lukepak/cfn101-workshop | dab12c39fe1d9afcf90af10d080ba73d0153d808 | [
"MIT-0"
] | 76 | 2019-12-20T16:47:23.000Z | 2022-03-28T08:47:17.000Z | workshop/content/prerequisites/awscli/_index.md | lukepak/cfn101-workshop | dab12c39fe1d9afcf90af10d080ba73d0153d808 | [
"MIT-0"
] | 38 | 2020-01-08T15:09:39.000Z | 2022-03-03T13:15:29.000Z | workshop/content/prerequisites/awscli/_index.md | lukepak/cfn101-workshop | dab12c39fe1d9afcf90af10d080ba73d0153d808 | [
"MIT-0"
] | 85 | 2020-01-07T07:40:22.000Z | 2022-03-29T18:48:18.000Z | ---
title: 'Install and configure the AWS CLI'
date: 2019-10-18T12:51:19+01:00
weight: 200
---
The [AWS CLI](https://aws.amazon.com/cli/) allows you to interact with AWS services from a terminal session.
Make sure you have the latest version of the AWS CLI installed on your system.
* macOS and Linux: [Bundled installer](https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-bundle.html#install-bundle-other)
* Windows: [MSI installer](https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html#install-msi-on-windows)
See the [AWS Command Line Interface installation](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) page for more details.
## Configure your credentials
Open a terminal window and run `aws configure` to set up your environment.
```shell
$ aws configure
```
Type the **access key ID** and **secret key** you created in [the previous step](/prerequisites/account.html) and choose a
default region (for example you can use `us-east-1`). Preferably use a region that doesn't have any resources already deployed into it.
```shell
AWS Access Key ID [None]: <type key ID here>
AWS Secret Access Key [None]: <type access key>
Default region name [None]: <choose region (e.g. "us-east-1", "eu-west-1")>
Default output format [None]: <leave blank>
```
| 40.90625 | 138 | 0.746371 | eng_Latn | 0.856331 |
c123bb5f9f16506fc6d1b8b41549cb3aa1ec87ed | 1,432 | md | Markdown | README.md | gregorthebigmac/ssh_cron_restart | 0b8b7a4faea7669319d5683235f62bb963e317e5 | [
"MIT"
] | null | null | null | README.md | gregorthebigmac/ssh_cron_restart | 0b8b7a4faea7669319d5683235f62bb963e317e5 | [
"MIT"
] | null | null | null | README.md | gregorthebigmac/ssh_cron_restart | 0b8b7a4faea7669319d5683235f62bb963e317e5 | [
"MIT"
] | null | null | null | # ssh_cron_restart
This is a cron job workaround for Debian-based systems where ssh fails on system boot. Obviously, a permanent solution is needed, but I don't know why it's failing yet, so this will have to suffice for now.
## Instructions
So the way this works is as follows:
1. Copy the file restart_sshd.sh somewhere into the root user directory, and remember where you put it.
A. (I put mine into a special directory dedicated to root scripts. You might want to do the same for better organization, but you do you).
2. Open crontab (as root)
```
# crontab -e
```
3. Copy+paste the following into the end of the crontab file, replacing the path with wherever you stored restart_sshd.sh:
```
@reboot sleep 20 && /root/scripts/restart_sshd.sh
```
4. Save and exit.
Now, whenever your machine reboots (or just powers on), it will wait 20 seconds after rebooting (which should give it enough time for the networking functions to come online), and *then* it will restart the sshd service. As far as I can tell, the cause of the sshd problem failing on boot is simply because it's trying to fire up the sshd service before networking functions are done, and it has a network connection. By using this cronjob, it makes sure that enough time has passed that if it's going to get a network connection, it should be active by the time the cronjob runs, and sshd should be working shortly after booting up.
| 65.090909 | 633 | 0.756983 | eng_Latn | 0.999894 |
c123c7178329cd3f03b605ae70cc92efa7117150 | 4,600 | md | Markdown | v2.3.0/class/Polygon/setVisible/README.md | mapsplugin/cordova-plugin-googlemaps-doc | b0ee50f8313ddaafacd9da9d9f2e986dfb00382c | [
"CC0-1.0",
"Apache-2.0",
"CC-BY-4.0"
] | 137 | 2017-04-27T01:33:05.000Z | 2022-02-20T13:24:27.000Z | v2.3.0/class/Polygon/setVisible/README.md | mapsplugin/cordova-plugin-googlemaps-doc | b0ee50f8313ddaafacd9da9d9f2e986dfb00382c | [
"CC0-1.0",
"Apache-2.0",
"CC-BY-4.0"
] | 27 | 2017-04-21T23:53:38.000Z | 2021-10-09T04:37:38.000Z | v2.3.0/class/Polygon/setVisible/README.md | mapsplugin/cordova-plugin-googlemaps-doc | b0ee50f8313ddaafacd9da9d9f2e986dfb00382c | [
"CC0-1.0",
"Apache-2.0",
"CC-BY-4.0"
] | 203 | 2017-04-21T15:36:41.000Z | 2022-03-29T14:20:12.000Z | :warning: **This document is aim for older versions (from 2.3.0 to 2.5.3).
Document for new version is https://github.com/mapsplugin/cordova-plugin-googlemaps-doc/blob/master/v2.6.0/README.md**
# polygon.setVisible()
Change visibility of the polygon.
```
polygon.setVisible(flag);
```
## Parameters
name | type | description
---------------|---------------|---------------------------------------
flag | boolean | `true`: visible, `false`: invisible
-----------------------------------------------------------------------
## Demo code
```html
<div class="map" id="map_canvas">
<span class="smallPanel"><input type="checkbox" id="toggleCheckbox" checked="checked">polygon.setVisible(true)</span>
</div>
```
```js
var GERMANY = [
{"lat": 54.98310415304803, "lng": 9.921906365609232},
{"lat": 54.596641954153256, "lng": 9.9395797054529},
{"lat": 54.363607082733154, "lng": 10.950112338920519},
{"lat": 54.00869334575259, "lng": 10.93946699386845},
{"lat": 54.19648550070116, "lng": 11.956252475643282},
{"lat": 54.470370591847995, "lng": 12.518440382546714},
{"lat": 54.0755109727059, "lng": 13.647467075259499},
{"lat": 53.75702912049104, "lng": 14.119686313542559},
{"lat": 53.248171291713106, "lng": 14.353315463934166},
{"lat": 52.98126251892535, "lng": 14.074521111719434},
{"lat": 52.624850165408304, "lng": 14.437599725002201},
{"lat": 52.089947414755216, "lng": 14.685026482815715},
{"lat": 51.74518809671997, "lng": 14.607098422919648},
{"lat": 51.10667409932171, "lng": 15.016995883858783},
{"lat": 51.00233938252438, "lng": 14.570718214586122},
{"lat": 51.11726776794137, "lng": 14.307013380600665},
{"lat": 50.92691762959436, "lng": 14.056227654688314},
{"lat": 50.73323436136428, "lng": 13.338131951560399},
{"lat": 50.48407644306917, "lng": 12.96683678554325},
{"lat": 50.26633779560723, "lng": 12.240111118222671},
{"lat": 49.96912079528062, "lng": 12.415190870827473},
{"lat": 49.54741526956275, "lng": 12.521024204161336},
{"lat": 49.30706818297324, "lng": 13.031328973043514},
{"lat": 48.877171942737164, "lng": 13.595945672264577},
{"lat": 48.41611481382904, "lng": 13.243357374737116},
{"lat": 48.28914581968786, "lng": 12.884102817443875},
{"lat": 47.63758352313596, "lng": 13.025851271220517},
{"lat": 47.467645575544, "lng": 12.932626987366064},
{"lat": 47.672387600284424, "lng": 12.620759718484521},
{"lat": 47.70308340106578, "lng": 12.141357456112871},
{"lat": 47.52376618101306, "lng": 11.426414015354851},
{"lat": 47.5663992376538, "lng": 10.544504021861599},
{"lat": 47.30248769793917, "lng": 10.402083774465325},
{"lat": 47.580196845075704, "lng": 9.89606814946319},
{"lat": 47.5250580918202, "lng": 9.594226108446378},
{"lat": 47.83082754169135, "lng": 8.522611932009795},
{"lat": 47.61357982033627, "lng": 8.317301466514095},
{"lat": 47.62058197691192, "lng": 7.466759067422288},
{"lat": 48.33301911070373, "lng": 7.593676385131062},
{"lat": 49.01778351500343, "lng": 8.099278598674857},
{"lat": 49.20195831969164, "lng": 6.65822960778371},
{"lat": 49.463802802114515, "lng": 6.186320428094177},
{"lat": 49.90222565367873, "lng": 6.242751092156993},
{"lat": 50.128051662794235, "lng": 6.043073357781111},
{"lat": 50.80372101501058, "lng": 6.15665815595878},
{"lat": 51.851615709025054, "lng": 5.988658074577813},
{"lat": 51.852029120483394, "lng": 6.589396599970826},
{"lat": 52.22844025329755, "lng": 6.842869500362383},
{"lat": 53.144043280644894, "lng": 7.092053256873896},
{"lat": 53.48216217713065, "lng": 6.905139601274129},
{"lat": 53.69393219666267, "lng": 7.100424838905269},
{"lat": 53.74829580343379, "lng": 7.936239454793963},
{"lat": 53.52779246684429, "lng": 8.121706170289485},
{"lat": 54.020785630908904, "lng": 8.800734490604668},
{"lat": 54.39564647075406, "lng": 8.57211795414537},
{"lat": 54.96274363872516, "lng": 8.526229282270208},
{"lat": 54.83086538351631, "lng": 9.282048780971138},
{"lat": 54.98310415304803, "lng": 9.921906365609232}
];
var mapDiv = document.getElementById("map_canvas");
// Create a map with specified camera bounds
var map = plugin.google.maps.Map.getMap(mapDiv, {
camera: {
target: GERMANY
}
});
// Add a polygon
var polygon = map.addpolygon({
'points': GERMANY,
'fillColor' : '#AAFFAA',
'strokeColor' : '#FF0000',
'strokeWidth': 2
});
var checkbox = document.getElementById("toggleCheckbox");
checkbox.addEventListener("change", function() {
// Change the visible property
polygon.setVisible(checkbox.checked);
});
```

| 40 | 119 | 0.658261 | yue_Hant | 0.110907 |
c123df01a2ca304b09dc5c2803e69ca3b561e51f | 4,422 | md | Markdown | content/membres/stacie-petruzzelis.md | designenrecherche/site-hugo | 4aabc0e6a45e84c80e44163903c4790681658a3e | [
"MIT"
] | 1 | 2020-09-14T16:35:31.000Z | 2020-09-14T16:35:31.000Z | content/membres/stacie-petruzzelis.md | designenrecherche/site-hugo | 4aabc0e6a45e84c80e44163903c4790681658a3e | [
"MIT"
] | null | null | null | content/membres/stacie-petruzzelis.md | designenrecherche/site-hugo | 4aabc0e6a45e84c80e44163903c4790681658a3e | [
"MIT"
] | null | null | null | +++
etablissement = "Université de Nîmes (Unîmes), Laboratoire PROJEKT"
membre_actif = true
name = "petruzzelis"
photo = "/images/petruzzellis-stacie.png"
section_cnu = "71 Sciences de l'information et de la communication"
tags = ["Enfant", "Récit", "Design", "Vécu", "Jeu", "Émotions"]
these_end = 0
these_soutenue = false
these_start = 2016
title = "Stacie Petruzzelis"
titrethese = "Approche du designer pour favoriser l'expression des vécus : Création d'un dispositif de jeu permettant un espace de dialogue ritualisé pour accompagner la construction du récit émotionnel de l'enfant"
titretheseEn = "Designer's approach to encourage the expression of feelings : creation of a game allowing a space for dialogue to build the emotional narrative of the child"
type_page = "membres"
url_hal = ""
url_thesesfr = "http://www.theses.fr/s193886"
[contact]
facebook = ""
github = ""
gitlab = ""
hypotheses = ""
instagram = ""
linkedin = ""
mail = ""
medium = ""
site = ""
telephone = ""
twitter = ""
[instagram_1]
post1 = ""
post2 = ""
post3 = ""
post4 = ""
post5 = ""
post6 = ""
post7 = ""
post8 = ""
post9 = ""
[instagram_files]
img1 = ""
img2 = ""
img3 = ""
img4 = ""
img5 = ""
img6 = ""
img7 = ""
img8 = ""
img9 = ""
post1link = ""
post2link = ""
post3link = ""
post4link = ""
post5link = ""
post6link = ""
post7link = ""
post8link = ""
post9link = ""
+++
<!-- Supprimer les parties non remplies (supprimer les blocks de lang s'il n'y a pas deux langues). Tu es libre d'ajouter ce que tu veux à cette partie -->
# Résumé de thèse
{{% lang class="fr" %}}
Lorsque les émotions négatives ressenties se transforment en souffrance, les répercussions individuelles et sociales sont nombreuses, notamment dans le cas de l'enfant. Qu'il s'agisse de l'enfant qui traverse une étape difficile comme l'expérience de la maladie ou la séparation des parents, ou de celui qui vit des difficultés dans le cadre scolaire, il peut vivre une phase émotionnellement douloureuse. L'enfant, notamment pendant la période de latence, peut souvent ressentir des difficultés à communiquer ses ressentis, à verbaliser ses émotions et à expliquer ce qu'il vit à ses parents. Il ne trouve pas toujours les moyens, ni l'espace pour le faire ; le jeu pourrait être le moyen et l'espace propice pour recréer un lieu de dialogue autour de ses vécus. Le rôle du designer est donc d'identifier les différents leviers et ressorts permettant de faciliter l'expression des vécus chez l'enfant. L'objectif étant de définir les principes de conception nécessaires à la création d'un dispositif de jeu permettant au(x) parent(s) d'accompagner l'enfant dans la construction et l'expression de son récit émotionnel.
{{% /lang %}}
## Direction de thèse
* **Michela Deni**, Université de Nîmes, PROJEKT
* **Marine Royer**, Université de Nîmes, PROJEKT
# Biographie
{{% lang class="fr" %}}
Après des études en Philosophie durant lesquelles elle s’intéresse principalement à l’anthropologie des techniques et des pratiques, ainsi qu’à la théorie et l’esthétique des artefacts, Stacie Petruzzellis fait ensuite des études en Design.
Elle est diplômée du Pôle Supérieur de Design Nouvelle-Aquitaine (DSAA Design social et écoresponsable) où elle mène des travaux dans lesquels elle questionne la place du handicap, du traumatisme et de la souffrance dans notre société. Elle réalise un outil basé sur des méthodes de renarcissisation, d’objectalisation et de projection de soi, et dont l’objectif est de permettre à l’individu une réflexion sur soi et ses ressentis émotionnels, par le biais d’un dispositif centré sur la musique.
Elle obtient ensuite un Master 2 en Design et Innovation sociale à l’Université de Nîmes (Master DIS) et mène une réflexion sur le design dans une approche biopsychosociale, permettant le dialogue entre les Sciences Humaines et les Sciences Médicales. Elle contribue aux travaux de recherche menés au sein du laboratoire HAVAE (EA 6310, Université de Limoges) pour repenser les actuels dispositifs de tests neuropsychologiques évaluant les troubles exécutifs. Dans ce cadre, elle réalise et expérimente un nouvel environnement écologique virtuel permettant la passation du test de Corsi (_Corsi block-tapping test_) en navigation visuo-spatiale.
Suite à cela, elle débute une thèse qui repose sur la conception d’un dispositif de jeu facilitant la construction et l’expression du récit émotionnel chez l’enfant.
{{% /lang %}} | 50.827586 | 1,119 | 0.761872 | fra_Latn | 0.980209 |
c124186290d1b31675bc69672cf3d3a2b66e73c2 | 227 | md | Markdown | doc/api/devices_lock/Lock/request.md | Yonomi/yonomi-device-widgets | 14856bd4100d5c50f5d5bdd16b471a14325d8f83 | [
"Apache-2.0"
] | null | null | null | doc/api/devices_lock/Lock/request.md | Yonomi/yonomi-device-widgets | 14856bd4100d5c50f5d5bdd16b471a14325d8f83 | [
"Apache-2.0"
] | 29 | 2021-05-10T21:19:06.000Z | 2022-03-22T18:50:10.000Z | doc/api/traits_trait_based_device_screen/TraitBasedDetailScreen/request.md | Yonomi/yonomi-device-widgets | 14856bd4100d5c50f5d5bdd16b471a14325d8f83 | [
"Apache-2.0"
] | null | null | null |
# request property
*[<Null safety>](https://dart.dev/null-safety)*
[Request](https://yonomi.co/yonomi-sdk/Request-class.html) request
_final_
## Implementation
```dart
final Request request;
```
| 6.485714 | 66 | 0.629956 | cat_Latn | 0.372372 |
c12486bf6339da9cfc56f2b8eda3f770a50cbb79 | 504 | md | Markdown | _posts/2021-09-09-remote-monitoring-and-management-service.md | gvensan/solace-glossary | 34c86668172336dba15df0828670a68603636e21 | [
"CC0-1.0"
] | null | null | null | _posts/2021-09-09-remote-monitoring-and-management-service.md | gvensan/solace-glossary | 34c86668172336dba15df0828670a68603636e21 | [
"CC0-1.0"
] | null | null | null | _posts/2021-09-09-remote-monitoring-and-management-service.md | gvensan/solace-glossary | 34c86668172336dba15df0828670a68603636e21 | [
"CC0-1.0"
] | null | null | null | ---
layout: post
title: "Remote Monitoring and Management Service"
excerpt: "With PubSub+ Remote Monitoring and Management Service (RMMS), Solace Support experts remotely monitor your brokers and ensure they are operating within designed parameters."
acronym: "RMMS"
categories:
- "Management Tool"
references:
- "https://solace.com/products/rmms/"
date: Thu, 09 Sep 2021 00:00:00 GMT
author: gvensan
comments: true
published: true
share: true
permalink: remote-monitoring-and-management-service
---
| 29.647059 | 183 | 0.775794 | eng_Latn | 0.904335 |
c124ccc8d59eb4ce26df30a75c9b11f2e20aaa05 | 1,717 | md | Markdown | docs/data_handling.md | Team-Aeros/Syracuse | a76923512274817dcdad47394a7a1922a3509eb7 | [
"MIT"
] | null | null | null | docs/data_handling.md | Team-Aeros/Syracuse | a76923512274817dcdad47394a7a1922a3509eb7 | [
"MIT"
] | 34 | 2018-01-09T22:07:29.000Z | 2018-02-09T10:19:51.000Z | docs/data_handling.md | Team-Aeros/Syracuse | a76923512274817dcdad47394a7a1922a3509eb7 | [
"MIT"
] | 7 | 2017-11-29T15:10:55.000Z | 2018-01-30T16:00:11.000Z | # Data Handling
This web-application displays json data saved in a file system.
Currently the data that is displayed can be divided into 2 sets, the Rain dataset and the Temperature dataset.
##The Rain dataset
The Rain dataset contains data from the current day, this data includes: Station name, StationID, Precipitation(amount of rain), wind speed and temperature. <br><br>
The top 10 stations with the highest precipitation are displayed in a table as Station name - precipitation,
when clicked on a station name the rest of the data that belongs to that station is displayed.
##The Temperature dataset
The Temperature dataset contains data from the past hour, this data includes: StationID, Temperature.<br><br>
On the website under the temperature tab is a map shown on the left. In this map there is a marker for each on of our stations in the Gulf of Mexico. When clicked on a marker the graph on the right is filled with the Temperature data of the selected station.
##The getting of the data
The task of getting the required data falls to the DataGetter class. This class can read the entries in the webdav folder in our file system.
Depending on the required data it can get the paths to the json files required for the Rain dataset or the Temperature dataset.
It then uses the paths to read the json files, get the required data and put it into an array.
The syntax for a filled array for the Rain dataset is:<br>
Array(
Key (the stationID) => Value(array containing Station name, StationID, Precipitation, wind speed and temperature)
)
<br><br>
The syntax for a filled array for the Temperature dataset is:<br>
Array(
Key (the stationID) => Value(array containing temperature values of the last hour)
) | 66.038462 | 258 | 0.788585 | eng_Latn | 0.999025 |
d307264facf7e1f72bad068a66d35692bf22afaf | 11,716 | md | Markdown | articles/storage/blobs/data-lake-storage-known-issues.md | LucianoLimaBR/azure-docs.pt-br | 8e05ae8584aa2620717d69ccb6a2fdfbb46446c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storage/blobs/data-lake-storage-known-issues.md | LucianoLimaBR/azure-docs.pt-br | 8e05ae8584aa2620717d69ccb6a2fdfbb46446c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/storage/blobs/data-lake-storage-known-issues.md | LucianoLimaBR/azure-docs.pt-br | 8e05ae8584aa2620717d69ccb6a2fdfbb46446c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Problemas conhecidos com o Azure Data Lake Storage Gen2 | Microsoft Docs
description: Saiba mais sobre as limitações e problemas conhecidos com o Azure Data Lake Storage Gen2
author: normesta
ms.subservice: data-lake-storage-gen2
ms.service: storage
ms.topic: conceptual
ms.date: 10/11/2019
ms.author: normesta
ms.reviewer: jamesbak
ms.openlocfilehash: f635360c5a6da19d60f3992878a8950b03c5f748
ms.sourcegitcommit: 12de9c927bc63868168056c39ccaa16d44cdc646
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/17/2019
ms.locfileid: "72513872"
---
# <a name="known-issues-with-azure-data-lake-storage-gen2"></a>Problemas conhecidos com o Azure Data Lake Storage Gen2
Este artigo lista os recursos e as ferramentas que ainda não têm suporte ou que só têm suporte parcial com contas de armazenamento que têm um namespace hierárquico (Azure Data Lake Storage Gen2).
<a id="blob-apis-disabled" />
## <a name="blob-storage-apis"></a>APIs de armazenamento de BLOBs
As APIs de armazenamento de BLOBs são desabilitadas para evitar problemas de operabilidade de recursos que podem surgir porque as APIs de armazenamento de BLOBs ainda não são interoperáveis com Azure Data Lake APIs
> [!NOTE]
> Com a visualização pública de acesso multiprotocolo em Data Lake Storage, as APIs de BLOB e Data Lake Storage Gen2 APIs podem operar nos mesmos dados. Para saber mais, consulte [acesso de vários protocolos em data Lake Storage](data-lake-storage-multi-protocol-access.md).
### <a name="what-to-do-with-existing-tools-applications-and-services"></a>O que fazer com ferramentas, aplicativos e serviços existentes
Se qualquer um deles usar APIs de BLOB e você quiser usá-las para trabalhar com todo o conteúdo em sua conta, você terá duas opções.
* **Opção 1**: não habilite um namespace hierárquico em sua conta de armazenamento de BLOBs até que o [acesso de vários protocolos em data Lake Storage](data-lake-storage-multi-protocol-access.md) esteja geralmente disponível e as APIs de BLOB se tornem totalmente interoperáveis com Azure data Lake APIs Gen2. O [acesso de vários protocolos no data Lake Storage](data-lake-storage-multi-protocol-access.md) está atualmente em visualização pública. Usar uma conta de armazenamento **sem** um namespace hierárquico significa que você não terá acesso a data Lake Storage Gen2 recursos específicos, como listas de controle de acesso de diretório e contêiner.
* **Opção 2**: habilitar namespaces hierárquicos. Com a visualização pública de [acesso multiprotocolo em data Lake Storage](data-lake-storage-multi-protocol-access.md), ferramentas e aplicativos que chamam APIs de BLOB, bem como recursos de armazenamento de BLOBs, como logs de diagnóstico, podem trabalhar com contas que têm um namespace hierárquico. Certifique-se de examinar este artigo para problemas conhecidos e limitações.
### <a name="what-to-do-if-you-used-blob-apis-to-load-data-before-blob-apis-were-disabled"></a>O que fazer se você usou APIs de BLOB para carregar dados antes de as APIs de blob serem desabilitadas
Se você usou essas APIs para carregar os dados antes de eles serem desabilitados, e tiver um requisito de produção para acessar esses dados, entre em contato com Suporte da Microsoft com as seguintes informações:
> [!div class="checklist"]
> * ID da assinatura (o GUID, não o nome).
> * Nome (s) da conta de armazenamento.
> * Se você está ativamente impactado em produção e, em caso afirmativo, para quais contas de armazenamento?.
> * Mesmo que você não seja afetado ativamente na produção, informe se você precisa que esses dados sejam copiados para outra conta de armazenamento por algum motivo e, nesse caso, por quê?
Sob essas circunstâncias, podemos restaurar o acesso à API de blob por um período de tempo limitado para que você possa copiar esses dados em uma conta de armazenamento que não tenha o recurso de namespace hierárquico habilitado.
### <a name="issues-and-limitations-with-using-blob-apis-on-accounts-that-have-a-hierarchical-namespace"></a>Problemas e limitações com o uso de APIs de BLOB em contas que têm um namespace hierárquico
Com a visualização pública de acesso multiprotocolo em Data Lake Storage, as APIs de BLOB e Data Lake Storage Gen2 APIs podem operar nos mesmos dados.
Esta seção descreve problemas e limitações com o uso de APIs de BLOB e Data Lake Storage Gen2 APIs para operar nos mesmos dados.
* Você não pode usar as APIs de BLOB e as APIs de Data Lake Storage para gravar na mesma instância de um arquivo.
* Se você gravar em um arquivo usando APIs Data Lake Storage Gen2, os blocos desse arquivo não ficarão visíveis para chamadas para a API de BLOB da [lista de blocos Get](https://docs.microsoft.com/rest/api/storageservices/get-block-list) .
* Você pode substituir um arquivo usando APIs Data Lake Storage Gen2 ou APIs de BLOB. Isso não afetará as propriedades do arquivo.
* Quando você usa a operação [listar BLOBs](https://docs.microsoft.com/rest/api/storageservices/list-blobs) sem especificar um delimitador, os resultados incluirão diretórios e blobs.
Se você optar por usar um delimitador, use apenas uma barra (`/`). Esse é o único delimitador com suporte.
* Se você usar a API [excluir blob](https://docs.microsoft.com/rest/api/storageservices/delete-blob) para excluir um diretório, esse diretório será excluído somente se estiver vazio.
Isso significa que você não pode usar a API de BLOB para excluir diretórios recursivamente.
Essas APIs REST de BLOB não têm suporte:
* [Colocar BLOB (página)](https://docs.microsoft.com/rest/api/storageservices/put-blob)
* [Colocar página](https://docs.microsoft.com/rest/api/storageservices/put-page)
* [Obter intervalos de página](https://docs.microsoft.com/rest/api/storageservices/get-page-ranges)
* [Blob de cópia incremental](https://docs.microsoft.com/rest/api/storageservices/incremental-copy-blob)
* [Colocar página da URL](https://docs.microsoft.com/rest/api/storageservices/put-page-from-url)
* [Colocar BLOB (acrescentar)](https://docs.microsoft.com/rest/api/storageservices/put-blob)
* [Bloco de acréscimo](https://docs.microsoft.com/rest/api/storageservices/append-block)
* [Anexar bloco da URL](https://docs.microsoft.com/rest/api/storageservices/append-block-from-url)
## <a name="issues-with-unmanaged-virtual-machine-vm-disks"></a>Problemas com discos de VM (máquina virtual) não gerenciados
Não há suporte para discos de VM não gerenciados em contas que têm um namespace hierárquico. Se você quiser habilitar um namespace hierárquico em uma conta de armazenamento, coloque os discos de VM não gerenciados em uma conta de armazenamento que não tenha o recurso de namespace hierárquico habilitado.
## <a name="support-for-other-blob-storage-features"></a>Suporte para outros recursos de armazenamento de BLOBs
A tabela a seguir lista todos os outros recursos e ferramentas que ainda não têm suporte ou que só têm suporte parcial com contas de armazenamento que têm um namespace hierárquico (Azure Data Lake Storage Gen2).
| Recurso/ferramenta | Mais informações |
|--------|-----------|
| **APIs para contas de armazenamento Data Lake Storage Gen2** | Com suporte parcial <br><br>O acesso de vários protocolos no Data Lake Storage está atualmente em visualização pública. Essa visualização permite que você use APIs de blob nos SDKs .NET, Java e Python com contas que têm um namespace hierárquico. Os SDKs ainda não contêm APIs que permitem interagir com diretórios ou definir ACLs (listas de controle de acesso). Para executar essas funções, você pode usar Data Lake Storage Gen2 APIs **REST** . |
| **AzCopy** | Suporte específico à versão <br><br>Use apenas a versão mais recente do AzCopy ([AzCopy V10](https://docs.microsoft.com/azure/storage/common/storage-use-azcopy-v10?toc=%2fazure%2fstorage%2ftables%2ftoc.json)). Não há suporte para versões anteriores do AzCopy, como AzCopy v 8.1.|
| **Políticas de gerenciamento do ciclo de vida do armazenamento de BLOBs** | Com suporte do [acesso de vários protocolos na versão prévia do data Lake Storage](data-lake-storage-multi-protocol-access.md) . As camadas de acesso fria e de arquivo são suportadas apenas pela visualização. Ainda não há suporte para a exclusão de instantâneos de BLOB. |
| **CDN (rede de distribuição de conteúdo) do Azure** | Ainda não tem suporte|
| **Azure Search** |Com suporte do [acesso de vários protocolos na versão prévia do data Lake Storage](data-lake-storage-multi-protocol-access.md) .|
| **Gerenciador de Armazenamento do Azure** | Suporte específico à versão <br><br>Use somente a versão `1.6.0` ou superior. <br>A versão `1.6.0` está disponível como um [Download gratuito](https://azure.microsoft.com/features/storage-explorer/).|
| **ACLs de contêiner de BLOB** |Ainda não tem suporte|
| **Blobfuse** |Ainda não tem suporte|
| **Domínios personalizados** |Ainda não tem suporte|
| **Explorador do sistema de arquivos** | Suporte limitado |
| **Log de diagnóstico** |Os logs de diagnóstico têm suporte pelo [acesso de vários protocolos na](data-lake-storage-multi-protocol-access.md) versão prévia do data Lake Storage. <br><br>No momento, não há suporte para a habilitação de logs no portal do Azure. Aqui está um exemplo de como habilitar os logs usando o PowerShell. <br><br>`$storageAccount = Get-AzStorageAccount -ResourceGroupName <resourceGroup> -Name <storageAccountName>`<br><br>`Set-AzStorageServiceLoggingProperty -Context $storageAccount.Context -ServiceType Blob -LoggingOperations read,write,delete -RetentionDays <days>`. <br><br>Certifique-se de especificar `Blob` como o valor do parâmetro `-ServiceType`, conforme mostrado neste exemplo. <br><br>No momento, Gerenciador de Armazenamento do Azure não pode ser usada para exibir os logs de diagnóstico. Para exibir os logs, use AzCopy ou SDKs.
| **Armazenamento imutável** |Ainda não tem suporte <br><br>O armazenamento imutável oferece a capacidade de armazenar dados em um estado de [worm (gravar uma vez, ler muitos)](https://docs.microsoft.com/azure/storage/blobs/storage-blob-immutable-storage) .|
| **Camadas de nível de objeto** |As camadas frias e de arquivo são suportadas pelo [acesso de vários protocolos na](data-lake-storage-multi-protocol-access.md) versão prévia do data Lake Storage. <br><br> Todas as outras camadas de acesso ainda não têm suporte.|
| **Suporte ao PowerShell e à CLI** | Funcionalidade limitada <br><br>As operações de gerenciamento, como a criação de uma conta, têm suporte. As operações do plano de dados, como carregar e baixar arquivos, estão em visualização pública como parte do [acesso de vários protocolos em data Lake Storage](data-lake-storage-multi-protocol-access.md). O trabalho com diretórios e a configuração de listas de controle de acesso (ACLs) ainda não têm suporte. |
| **Sites estáticos** |Ainda não tem suporte <br><br>Especificamente, a capacidade de fornecer arquivos para [sites estáticos](https://docs.microsoft.com/azure/storage/blobs/storage-blob-static-website).|
| **Aplicativos de terceiros** | Suporte limitado <br><br>Aplicativos de terceiros que usam APIs REST para trabalhar continuarão a funcionar se você usá-los com Data Lake Storage Gen2. <br>Aplicativos que chamam APIs de blob provavelmente funcionarão com a visualização pública de [acesso multiprotocolo em data Lake Storage](data-lake-storage-multi-protocol-access.md). |
|**Exclusão Reversível** |Ainda não tem suporte|
| **Recursos de controle de versão** |Ainda não tem suporte <br><br>Isso inclui [exclusão reversível](https://docs.microsoft.com/azure/storage/blobs/storage-blob-soft-delete)e outros recursos de controle de versão, como [instantâneos](https://docs.microsoft.com/rest/api/storageservices/creating-a-snapshot-of-a-blob).|
| 103.681416 | 868 | 0.785251 | por_Latn | 0.999105 |
d3076621b1867448e568afcbf661351d1a8ec76c | 672 | md | Markdown | README.md | mikeldking/hasura-actions-server-boilerplate | 2314cf3990ffd9d7c709bd5342994d1e2a16219e | [
"MIT"
] | 1 | 2020-12-13T08:01:22.000Z | 2020-12-13T08:01:22.000Z | README.md | mikeldking/hasura-actions-server-boilerplate | 2314cf3990ffd9d7c709bd5342994d1e2a16219e | [
"MIT"
] | 5 | 2021-03-02T01:18:26.000Z | 2022-03-08T23:18:19.000Z | README.md | mikeldking/hasura-actions-server-boilerplate | 2314cf3990ffd9d7c709bd5342994d1e2a16219e | [
"MIT"
] | null | null | null | # hasura-actions-server-boilerplate
Boilerplate Koa server for integrating hasura actions using `Koa` and `Typescript`. Also provides boilerplate code for using `graphql-request` to make requests back to hasura. Uses `apollo` for code genertion and `jest` for testing.
## Commands
```
npm run start // start a development server
npm run test // run tests using Jest
npm run build // build production version
```
## Basic Structure
```
── src
├── apis
│ └── hasura
│ ├── index.ts
│ └── sendRequest.ts
├── routes
│ ├── fooRoutes.ts
│ └── index.ts
├── server.ts
└── types
├── gqlTypes.ts
└── hasuraTypes.ts
```
| 23.172414 | 232 | 0.63244 | eng_Latn | 0.915028 |
d30842fe1ca08b3b6baf4739aed343415d6e82c0 | 425 | md | Markdown | webui/explorer/README.md | TheYoke/PngBin | d00291980cbcb7639599906fa2edc3e1dc5516da | [
"MIT"
] | 16 | 2022-01-21T08:36:28.000Z | 2022-03-06T14:06:59.000Z | webui/explorer/README.md | AsdertyTreds/PngBin | 097c4bd1ed855bc8a8ebf348f72192e4db8eb977 | [
"MIT"
] | null | null | null | webui/explorer/README.md | AsdertyTreds/PngBin | 097c4bd1ed855bc8a8ebf348f72192e4db8eb977 | [
"MIT"
] | 4 | 2022-01-24T07:36:35.000Z | 2022-03-05T09:11:05.000Z | # Explorer WebUI
This Web User Interface is used by [PngBin File Server Demo](../../notebooks).
To start, navigate to this repo's root directory and run the below command:
```
python -m webui.explorer -m XXX
```
Where `XXX` is the path to metadata database file (e.g. meta.db).
After that, using your browser and go to http://127.0.0.1:8080.
> Live demo example can be visited [here](https://pngbindemo.theyoke.repl.co)
| 38.636364 | 80 | 0.72 | eng_Latn | 0.968599 |
d30850b738f50aab393af615e019004701b89525 | 154 | md | Markdown | README.md | rafaelronu27/Ola-Mundo | 1882dbebffe52e176a2e5d307cc8e6d45eecd774 | [
"MIT"
] | null | null | null | README.md | rafaelronu27/Ola-Mundo | 1882dbebffe52e176a2e5d307cc8e6d45eecd774 | [
"MIT"
] | null | null | null | README.md | rafaelronu27/Ola-Mundo | 1882dbebffe52e176a2e5d307cc8e6d45eecd774 | [
"MIT"
] | null | null | null | # Olá Mundo
Primeiro repositório versionado de Git e GitHub
Repositório criado durante uma aula ao vivo.
Essa linha eu adicionei diretamento no site.
| 22 | 48 | 0.798701 | por_Latn | 0.999994 |
d308ea1379aabc2ecd12996645e5c25304806f7b | 2,307 | md | Markdown | README.md | fphd/dingding-php-sdk | 2c6c0450fb41f71af375dd0e5700f45bab75999c | [
"MIT"
] | 2 | 2019-05-17T07:11:09.000Z | 2019-09-19T02:53:00.000Z | README.md | fphd/dingding-php-sdk | 2c6c0450fb41f71af375dd0e5700f45bab75999c | [
"MIT"
] | null | null | null | README.md | fphd/dingding-php-sdk | 2c6c0450fb41f71af375dd0e5700f45bab75999c | [
"MIT"
] | null | null | null | 钉钉扩展包,当前只封装了 *Webhook* 功能
## 安装
``` bash
$ composer require fhpd/dingtalk -vvv
```
## 通用配置
- webhook: 群机器人Webhook
## 通用响应
| 参数 | 类型 | 是否必须 | 描述 |
| ------------ | ------------ | ------------ | ------------ |
| success| bool | 是 | true:操作成功 false:操作失败 |
| message | string | 是 | 结果说明 |
| data | array | 否 | 返回数据 |
## 使用
1. 生成钉钉对象
``` php
$webhook = 'your webhook';
$dingtalk = new \Fphd\Dingtalk\Dingtalk($webhook);
```
2. 发送文本消息
```
/**
* 发送文本消息
* @param string $content 消息内容
* @param array $mobiles 被@人的手机号(在content里添加@人的手机号)
* @param bool $isAtAll @所有人时:true,否则为:false
* @return array
*/
$dingtalk->sendText($content, $mobiles = [], $isAtAll = false)
```
3. 发送链接消息
```
/**
* 发送链接消息
* @param string $title 消息标题
* @param string $text 消息内容。如果太长只会部分展示
* @param string $messageUrl 点击消息跳转的URL
* @param string $picUrl 图片URL
* @return array
*/
$dingtalk->sendLink($title, $text, $messageUrl, $picUrl = '')
```
4. 发送Markdown消息
```
/**
* 发送Markdown消息
* @param string $title 首屏会话透出的展示内容
* @param string $text markdown格式的消息
* @param array $mobiles 被@人的手机号(在content里添加@人的手机号)
* @param bool $isAtAll @所有人时:true,否则为:false
* @return array
*/
$dingtalk->sendMarkdown($title, $text, $mobiles = [], $isAtAll = false)
```
5. 发送ActionCard
```
/**
* 发送ActionCard
* @param string $title 首屏会话透出的展示内容
* @param string $text markdown格式的消息
* @param array $btns 按钮,每个元素包含 title(按钮方案)、actionURL(点击按钮触发的URL)
* @param int $btnOrientation 0-按钮竖直排列,1-按钮横向排列
* @param int $hideAvatar 0-正常发消息者头像,1-隐藏发消息者头像
* @return array
*/
$dingtalk->sendActionCard($title, $text, $btns = [], $btnOrientation = 0, $hideAvatar = 0)
```
6. 发送FeedCard
```
/**
* 发送FeedCard
* @param array $links 链接,每个元素包含 title(单条信息文本)、messageURL(点击单条信息到跳转链接)、picURL(单条信息后面图片的URL)
* @return array
*/
$dingtalk->sendFeedCard($links=[])
```
## 参考项目
1. [钉钉开发文档-自定义机器人](https://open-doc.dingtalk.com/microapp/serverapi2/qf2nxq)
1. [jormin/dingtalk](https://github.com/jormin/dingtalk)
## License
The MIT License (MIT). Please see [License File](LICENSE.md) for more information.
| 21.764151 | 96 | 0.579974 | yue_Hant | 0.250509 |