---
title: C26117 | Microsoft Docs
ms.custom: ''
ms.date: 2018-06-30
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-devops-test
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- C26117
helpviewer_keywords:
- C26117
ms.assetid: cc7ebc8d-9826-4cad-a4d5-2d3ad5896734
caps.latest.revision: 13
author: corob-msft
ms.author: gewarren
manager: ghogen
---
# <a name="c26117"></a>C26117

[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]

The latest version of this topic can be found at [C26117](https://docs.microsoft.com/visualstudio/code-quality/c26117).

warning C26117: Releasing unheld lock \<lock> in function \<func>.

Because enforcement of syntactically scoped lock *acquire* and lock *release* pairs is not performed by the C/C++ language, a function may introduce a locking side effect by making an observable modification to the concurrency state. For example, a lock wrapper function might increment the lock count, or acquisition count, for a given lock. You can annotate a function that has a side effect from a lock acquire or lock release by using `_Acquires_lock_` or `_Releases_lock_`, respectively. Without such annotations, a function is expected not to change any lock count after it returns. If acquires and releases are not balanced, they are considered *orphaned*. Warning C26117 is issued when a function that has not been annotated with `_Releases_lock_` releases a lock that it does not hold, because the function must own the lock before it releases it.

## <a name="example"></a>Example

The following example generates warning C26117 because the function `ReleaseUnheldLock` releases a lock that it does not necessarily hold (the state of `flag` is ambiguous), and there is no annotation that specifies that it should.

```cpp
typedef struct _DATA {
    CRITICAL_SECTION cs;
} DATA;

int flag;

void ReleaseUnheldLock(DATA* p)
{
    if (flag)
        EnterCriticalSection(&p->cs);
    // code ...
    LeaveCriticalSection(&p->cs);
}
```

## <a name="example"></a>Example

The following code fixes the problem by guaranteeing that the released lock is also acquired under the same conditions.

```cpp
typedef struct _DATA {
    CRITICAL_SECTION cs;
} DATA;

int flag;

void ReleaseUnheldLock(DATA* p)
{
    if (flag)
    {
        EnterCriticalSection(&p->cs);
        // code ...
        LeaveCriticalSection(&p->cs);
    }
}
```

## <a name="see-also"></a>See also

[C26115](../code-quality/c26115.md)
---
title: "'Continue Do' can only appear inside a 'Do' statement"
ms.date: 07/20/2015
f1_keywords:
- vbc30782
- bc30782
helpviewer_keywords:
- BC30782
ms.assetid: c6b35e63-4d84-449d-9685-41a1bc0a7f35
---
# <a name="39continue-do39-can-only-appear-inside-a-39do39-statement"></a>'Continue Do' can only appear inside a 'Do' statement

A `Continue Do` statement can appear only inside a `Do...Loop` loop.

**Error ID:** BC30782

## <a name="to-correct-this-error"></a>To correct this error

1. If the `Continue Do` statement is inside a `For...Next` loop, change the statement to `Continue For`.

2. If the `Continue Do` statement is inside a `While...End While` loop, change the statement to `Continue While`.

3. Otherwise, remove the `Continue Do` statement.

## <a name="see-also"></a>See also

[Continue Statement](../../visual-basic/language-reference/statements/continue-statement.md)
[Do...Loop Statement](../../visual-basic/language-reference/statements/do-loop-statement.md)
---
title: StorSimple technical specifications | Microsoft Docs
description: Describes the technical specifications and regulatory compliance information for StorSimple hardware components.
services: storsimple
documentationcenter: NA
author: alkohli
manager: timlt
editor: ''
ms.assetid: ''
ms.service: storsimple
ms.devlang: NA
ms.topic: article
ms.tgt_pltfrm: NA
ms.workload: TBD
ms.date: 06/02/2017
ms.author: alkohli
---
# <a name="technical-specifications-and-compliance-for-the-storsimple-device"></a>Technical specifications and compliance for the StorSimple device

## <a name="overview"></a>Overview

[!INCLUDE [storsimple-8000-eol-banner](../../includes/storsimple-8000-eol-banner.md)]

The hardware components of your Microsoft Azure StorSimple device adhere to the technical specifications and regulatory standards outlined in this article. The technical specifications describe the power and cooling modules (PCMs), disk drives, storage capacity, and enclosures. The compliance information covers such things as international standards, safety, emissions, and cabling.

## <a name="power-and-cooling-module-specifications"></a>Power and cooling module specifications

The StorSimple device has two 100-240 V AC, dual-fan, SBB-compliant power and cooling modules (PCMs). This provides a redundant power configuration: if one PCM fails, the device continues to operate normally on the other PCM until the failed module is replaced. The EBOD enclosure uses a 580 W PCM, and the primary enclosure uses a 764 W PCM. The following tables list the technical specifications associated with the PCMs.

| Specification | 580 W PCM (EBOD) | 764 W PCM (primary) |
| --- | --- | --- |
| Maximum output power | 580 W | 764 W |
| Frequency | 50/60 Hz | 50/60 Hz |
| Voltage range selection | Auto-ranging: 90-264 V AC, 47/63 Hz | Auto-ranging: 90-264 V AC, 47/63 Hz |
| Maximum inrush current | 20 A | 20 A |
| Power factor correction | >95% at nominal input voltage | >95% at nominal input voltage |
| Harmonics | Meets EN61000-3-2 | Meets EN61000-3-2 |
| Output | +5 V standby voltage \@ 2.0 A | +5 V standby voltage \@ 2.7 A |
| | +5 V \@ 42 A | +5 V \@ 40 A |
| | +12 V \@ 38 A | +12 V \@ 38 A |
| Hot pluggable | Yes | Yes |
| Switches and LEDs | AC mains on/off switch and four status indicator LEDs | AC mains on/off switch and six status indicator LEDs |
| Enclosure cooling | Axial cooling fans with variable fan speed control | Axial cooling fans with variable fan speed control |
## <a name="power-consumption-statistics"></a>Power consumption statistics

The following table lists typical power consumption data (actual values may vary from the published values) for the various StorSimple device models.

| Conditions | Current (240 V AC) | Power (240 V AC) | Heat (240 V AC) | Current (110 V AC) | Power (110 V AC) | Heat (110 V AC) |
| --- | --- | --- | --- | --- | --- | --- |
| Fans slow, drives idle | 1.45 A | 0.31 kW | 1057.76 BTU/hr | 3.19 A | 0.34 kW | 1160.13 BTU/hr |
| Fans slow, drives being accessed | 1.54 A | 0.33 kW | 1126.01 BTU/hr | 3.27 A | 0.36 kW | 1228.37 BTU/hr |
| Fans fast, drives idle, two PSUs | 2.14 A | 0.49 kW | 1671.95 BTU/hr | 4.99 A | 0.54 kW | 1842.56 BTU/hr |
| Fans fast, drives idle, one PSU with the other idle | 2.05 A | 0.48 kW | 1637.83 BTU/hr | 4.58 A | 0.50 kW | 1706.07 BTU/hr |
| Fans fast, drives being accessed, two PSUs | 2.26 A | 0.51 kW | 1740.19 BTU/hr | 4.95 A | 0.54 kW | 1842.56 BTU/hr |
| Fans fast, drives being accessed, one PSU with the other idle | 2.14 A | 0.49 kW | 1671.95 BTU/hr | 4.81 A | 0.53 kW | 1808.44 BTU/hr |

## <a name="disk-drive-specifications"></a>Disk drive specifications

Your StorSimple device supports up to 12 3.5-inch serial attached SCSI (SAS) disk drives. The actual drives can be a combination of solid-state drives (SSDs) and hard disk drives (HDDs), depending on the product configuration. The 12 disk drive slots are in a 3-by-4 configuration at the front of the enclosure. The EBOD enclosure provides additional storage through a further 12 disk drives; these are always HDDs.

## <a name="storage-specifications"></a>Storage specifications

Both the 8100 and the 8600 StorSimple devices have a combination of hard disk drives and solid-state drives. The total usable capacity is roughly 15 TB for the 8100 and 38 TB for the 8600. The following table details the SSD, HDD, and cloud capacity in the context of the StorSimple solution capacity.
| Device model/capacity | 8100 | 8600 |
| --- | --- | --- |
| Number of hard disk drives (HDDs) | 8 | 19 |
| Number of solid-state drives (SSDs) | 4 | 5 |
| Single HDD capacity | 4 TB | 4 TB |
| Single SSD capacity | 400 GB | 800 GB |
| Spare capacity | 4 TB | 4 TB |
| Usable HDD capacity | 14 TB | 36 TB |
| Usable SSD capacity | 800 GB | 2 TB |
| Total usable capacity* | ~15 TB | ~38 TB |
| Maximum solution capacity (including cloud) | 200 TB | 500 TB |

<sup>*</sup> *Total usable capacity includes the capacity available for data, metadata, and buffers. You can provision locally pinned volumes of up to 8.5 TB on the 8100 device, or up to 22.5 TB on the larger 8600 device. For more information, see [StorSimple locally pinned volumes](storsimple-8000-local-volume-faq.md).*

## <a name="enclosure-dimensions-and-weight-specifications"></a>Enclosure dimensions and weight specifications

The following tables list the various enclosure specifications for dimensions and weight.

### <a name="enclosure-dimensions"></a>Enclosure dimensions

The following table lists the dimensions of the enclosure in millimeters and inches.

| Enclosure | Millimeters | Inches |
| --- | --- | --- |
| Height | 87.9 | 3.46 |
| Width across mounting flange | 483 | 19.02 |
| Width across body of enclosure | 443 | 17.44 |
| Depth from front mounting flange to rear edge of enclosure body | 577 | 22.72 |
| Depth from operations panel to rear-most extremity of enclosure | 630.5 | 24.82 |
| Depth from mounting flange to rear-most extremity of enclosure | 603 | 23.74 |

### <a name="enclosure-weight"></a>Enclosure weight

Depending on the configuration, a fully loaded primary enclosure can weigh from 21 kg to 33 kg and requires two people to handle it.

| Enclosure | Weight |
| --- | --- |
| Maximum weight (depending on configuration) | 30 kg - 33 kg |
| Empty (no drives installed) | 21 kg - 23 kg |

## <a name="enclosure-environment-specifications"></a>Enclosure environment specifications

This section lists the specifications related to the enclosure environment.
These include temperature, humidity, altitude, shock, vibration, orientation, safety, and electromagnetic compatibility (EMC).

### <a name="temperature-and-humidity"></a>Temperature and humidity

| Enclosure | Ambient temperature range | Ambient relative humidity | Maximum wet bulb |
| --- | --- | --- | --- |
| Operating | 5°C to 35°C (41°F to 95°F) | 20% to 80% noncondensing | 28°C (82°F) |
| Non-operating | -40°C to 70°C (-40°F to 158°F) | 5% to 100% noncondensing | 29°C (84°F) |

### <a name="airflow-altitude-shock-vibration-orientation-safety-and-emc"></a>Airflow, altitude, shock, vibration, orientation, safety, and EMC

| Enclosure | Operating specification |
| --- | --- |
| Airflow | System airflow is front to rear. The system must be operated with a low-pressure rear exhaust installation. Back pressure created by rack doors and obstacles should not exceed 5 pascals (0.5 mm water gauge). |
| Altitude, operating | -30 meters to 3,045 meters (-100 feet to 10,000 feet), with the maximum operating temperature derated by 5°C above 7,000 feet |
| Altitude, non-operating | -305 meters to 12,192 meters (-1,000 feet to 40,000 feet) |
| Shock, operating | 5 g, 10 ms, 1/2 sine |
| Shock, non-operating | 30 g, 10 ms, 1/2 sine |
| Vibration, operating | 0.21 g RMS, 5-500 Hz random |
| Vibration, non-operating | 1.04 g RMS, 2-200 Hz random |
| Vibration, relocation | 3 g sine, 2-200 Hz |
| Orientation and mounting | 19-inch rack mount (2 EIA units) |
| Rack rails | To fit racks of 700 mm (31.50 inches) minimum depth, compliant with IEC 297 |
| Safety and approvals | CE and UL; EN 61000-3, IEC 61000-3, UL 61000-3 |
| EMC | EN55022 (CISPR-A), FCC A |

## <a name="international-standards-compliance"></a>International standards compliance

Your Microsoft Azure StorSimple device complies with the following international standards:

* CE - EN 60950-1
* CB report to IEC 60950-1
* UL and cUL to UL 60950-1

## <a name="safety-compliance"></a>Safety compliance

Your Microsoft Azure StorSimple device meets the following safety ratings:

* System product type approval: UL, cUL, CE
* Safety compliance: UL 60950, IEC 60950, EN 60950

## <a name="emc-compliance"></a>EMC compliance

Your Microsoft Azure StorSimple device meets the following EMC ratings.

### <a name="emissions"></a>Emissions

The device complies with the EMC limits for conducted and radiated emission levels.

* Conducted emission limits: CFR 47 Part 15B Class A, EN55022 Class A, CISPR Class A
* Radiated emission limits: CFR 47 Part 15B Class A, EN55022 Class A, CISPR Class A

### <a name="harmonics-and-flicker"></a>Harmonics and flicker

The device complies with EN61000-3-2/3.

### <a name="immunity-limit-levels"></a>Immunity limit levels

The device complies with EN55024.

## <a name="ac-power-cord-compliance"></a>AC power cord compliance

The plug and the complete power cord assembly must meet the standards appropriate to the country/region in which the device is used, and must have safety approvals acceptable in that country/region. The following tables list the standards for the USA and Europe.

### <a name="ac-power-cords---usa-must-be-nrtl-listed"></a>AC power cords - USA (must be NRTL listed)

| Component | Specification |
| --- | --- |
| Cord type | SV or SVT, 18 AWG minimum, 3 conductor, 2.0 m maximum length |
| Plug | NEMA 5-15P grounding-type attachment plug rated 120 V, 10 A; or IEC 320 C14, 250 V, 10 A |
| Socket | IEC 320 C-13, 250 V, 10 A |

### <a name="ac-power-cords---europe"></a>AC power cords - Europe

| Component | Specification |
| --- | --- |
| Cord type | Harmonized, H05-VVF-3G 1.0 |
| Socket | IEC 320 C-13, 250 V, 10 A |

## <a name="supported-network-cables"></a>Supported network cables

For the 10 GbE network interfaces, DATA 2 and DATA 3, see the [list of supported network cables and modules](storsimple-supported-hardware-for-10-gbe-network-interfaces.md).

## <a name="next-steps"></a>Next steps

You are now ready to deploy your StorSimple device in your datacenter. For more information, see [Deploy your on-premises StorSimple device](storsimple-8000-deployment-walkthrough-u2.md).
---
ID: 226
post_title: gammatone filterbank
author: Alex
post_excerpt: ""
layout: page
permalink: >
  https://www.audiocontentanalysis.org/code/helper-functions/gammatone-filterbank/
published: true
post_date: 2012-06-22 19:59:29
---
<script src="https://gist-it.appspot.com/https://github.com/alexanderlerch/ACA-Code/blob/master/ToolGammatoneFb.m">
</script>
---
pid: '536'
object_pid: '3596'
author: E.A. Honig
comment: "<p>This drawing is a copy after Matthijs Bril; another copy of the same subject is the drawing in the Institut Neerlandais, inv. # 5884. Numerous other copies after M. Bril by Jan exist, at least 8 compositions, some in several versions. See Ruby in Munich 2013, p. 37 & note 10.</p>"
post_date: August 3, 2014
order: '535'
collection: discussion
---
---
title: Get started with the Visual Studio subscriptions administration portal | Visual Studio Marketplace
author: evanwindom
ms.author: amast
manager: shve
ms.assetid: 4c099fe8-883e-4789-9468-387ce5697dfe
ms.date: 11/22/2020
ms.topic: overview
description: How to get started managing Visual Studio subscriptions with the subscriptions administration portal.
---
# <a name="overview-of-the-visual-studio-subscriptions-administrator-portal"></a>Overview of the Visual Studio Subscriptions Administrator Portal

The Visual Studio subscriptions administration portal gives you the tools to manage your organization's subscriptions in one place. Take a tour of the portal:

> [!VIDEO https://www.microsoft.com/videoplayer/embed/RE4t9aW]

## <a name="important-considerations"></a>Important considerations

Keep the following points in mind as you explore the Visual Studio subscriptions administration portal:

- **Visual Studio subscriptions are licensed per user.** Each subscriber can use the software on as many computers as needed for development and testing.
- **Assign only one subscription level to each subscriber,** corresponding to the Visual Studio subscription that was purchased. If you have a subscriber with more than one subscription level assigned, edit their settings so that they have only one.
- **A subscriber's subscription level is updated when the subscription is upgraded** (after an "upgrade" license is purchased) or renewed at a lower level.
- **Don't share subscriptions between subscribers.** Subscriptions must be assigned to named individuals; assigning subscriptions to teams isn't allowed. You must assign a subscription to anyone who uses any of the subscription benefits, such as development and test software, Microsoft Azure, e-learning, and so on.

## <a name="the-subscribers-page"></a>The Subscribers page

After you assign subscriptions, the Manage Subscribers tab provides detailed information about your subscribers, including:

- Each subscriber's first and last name.
- The user's email address.
- The assigned subscription level.
- The date the subscription was assigned.
- The date the subscription expires.
- A reference field for additional notes.
- Whether the subscriber's downloads are enabled or disabled.
- The country/region in which they are located.
- Their language preference for assignment communication emails from the administrator portal.
- An optional field for a different email address to be used for communications instead of the sign-in address.

On the top left of the page, there are several icons that you can select to display subscription assignment overview information, agreement details, and the maximum usage report. Click the top icon to activate the fly-out panel and see additional information about the number of subscription licenses purchased, assigned, and still available in your organization for each agreement.

> [!div class="mx-imgBorder"]
> ![Visual Studio Subscriptions Administrator Portal Subscribers Page](_img/using-admin-portal/subscribers-page.png "The Subscribers page displays subscription counts by type.")

## <a name="the-details-page"></a>The Details page

To learn more about the agreement you are viewing, select the second icon to display the Agreement Details tab. The fly-out panel shows the agreement status, the purchasing account, organization details, super admins, and other related information.

> [!div class="mx-imgBorder"]
> ![Visual Studio Subscriptions Administrator Portal Details Page](_img/using-admin-portal/details-page.png "The Details page displays information about your agreement, including the names of your super admins.")

## <a name="resources"></a>Resources

- [Visual Studio licensing white paper](https://visualstudio.microsoft.com/wp-content/uploads/2019/06/Visual-Studio-Licensing-Whitepaper-May-2019.pdf)
- [Compare subscription options and pricing](https://visualstudio.microsoft.com/vs/pricing)
- [Compare Visual Studio IDE features](https://visualstudio.microsoft.com/vs/compare)
- [Subscription support for administrators](https://aka.ms/VSSAdminSupport)

## <a name="see-also"></a>See also

- [Visual Studio documentation](/visualstudio/)
- [Azure DevOps documentation](/azure/devops/)
- [Azure documentation](/azure/)
- [Microsoft 365 documentation](/microsoft-365/)

## <a name="next-steps"></a>Next steps

Learn more about the responsibilities of administrators:

- [Overview of admin responsibilities](admin-responsibilities.md)
- [Inventory of pre-production environment](admin-inventory.md)
- [Manage large teams and external contractors](manage-teams.md)
- [Track user assignments and process orders](assignments-orders.md)
- [Use maximum usage to track purchase commitments](maximum-usage.md)
# Luci-config FAQ

[TOC]

## When do I need to register a project?

If it needs [CQ], [Buildbucket], or the Chrome Infra console.

## How to register a project?

[File a bug to nodir@](https://code.google.com/p/chromium/issues/entry?labels=Infra-Config,Restrict-View-Google&summary=Register%20a%20new%20repo&comment=Register%20a%20new%20repo%0A%0AWhy%20the%20project%20is%20being%20registered:%20%0A%3Creason%3E%0A%0ARepository%20URL:%0A%3Curl%3E%0A%0ARemove%20%22Restrict-View-Google%22%20label%20if%20the%20repo%20is%20public.)

[CQ]: ../commit_queue/index.md
[Buildbucket]: ../buildbucket/index.md
# dotfiles-discussion

Discussing team dotfiles.
# EDMX-Label-Build-Runtime

NL for Business, The Netherlands
www.nl4b.com
(c) April, 2018

We built the EDMX Label Builder to bridge the gap between OData services generated automatically by S/4HANA (or coming from non-SAP systems) and the VDM generator of the S/4HANA Cloud SDK. The tool generates a properties file that can be modified by the user, checks the modified properties file for inconsistencies, and generates a modified, consistent EDMX file for the VDM generator.

Arguments:

```
-b,--build                  Build edmx file out of template and properties file
-c,--check                  Check properties files on duplicate labels
-e,--edmx-file <arg>        The name of the source or generated edmx file
-g,--generate               Generate edmx template and properties files
-h,--help                   Display this help
-o,--override-files         Flag to override existing files
-p,--edmx-properties <arg>  The name of the properties file to generate
-r,--release-info           Show release info
-t,--edmx-template <arg>    The name of the edmx template file
-v,--version                Prints the tool version info
-x,--extend-properties      Flag to extend an existing properties file
```

Usage:

1. Generate edmx template and properties file:

   ```
   java -jar edmx-sap-label-builder-1.0.4.jar --generate --edmx-file <arg> --edmx-template <arg> --edmx-properties <arg>
   ```

2. Validate edmx template and properties file:

   ```
   java -jar edmx-sap-label-builder-1.0.4.jar --check --edmx-template <arg> --edmx-properties <arg>
   ```

3. Build a new edmx file based on template and properties files:

   ```
   java -jar edmx-sap-label-builder-1.0.4.jar --build --edmx-file <arg> --edmx-template <arg> --edmx-properties <arg>
   ```

Release Notes:

Release 1.0.4 - April 25, 2018

* [FIX] Correct typo in argument --override-files
* [FIX] Sort properties in the check procedure after reading the properties file

Release 1.0.3 - April 24, 2018

* [FEATURE] Argument to override the template and properties files if they exist
* [FEATURE] Extend an existing properties file with missing properties
* [FIX] Properties in the properties file are sorted

Release 1.0.2 - April 23, 2018

* [FEATURE] Add sap:labels to the template for function imports, to include the nice name in the odata-generator
* [FEATURE] Check that sap:labels contain only spaces, numeric, and alphanumeric characters
* [FEATURE] Add release info to the application
* [FIX] Add sap:labels to the template when sap:label does not exist in an entity type property
* [FIX] Arguments in help in alphabetical order
* [FIX] Remove not-yet-implemented arguments from help

Release 1.0.1 - April 18, 2018

* [FEATURE] Feature to check properties
* [FEATURE] Check sap:label for empty labels
* [FEATURE] Check for duplicate sap:label within an entity type
* [FIX] Properties in the properties file are now sorted and grouped by entity type/property

Release 1.0.0 - April 16, 2018

* [FEATURE] Generate properties and template file out of an edmx file
* [FEATURE] Build a new edmx file out of a properties and template file
# Learn-Time-Series-Forecasting-From-Gold-Price

https://www.kaggle.com/arashnic/learn-time-series-forecasting-from-gold-price

https://wandb.ai/ranuga-d/Learn-Time-Series-Forecasting-From-Gold-Price
![vrtk logo](https://raw.githubusercontent.com/thestonefox/VRTK/master/Assets/VRTK/Examples/Resources/Images/logos/vrtk-capsule-clear.png) > ### VRTK - Virtual Reality Toolkit > A productive VR Toolkit for rapidly building VR solutions in Unity3d. ## VRTK has just launched a Kickstarter campaign to fund version 4 and beyond. [Visit the Kickstarter campaign and pledge today! :)](https://www.kickstarter.com/projects/thestonefox/virtual-reality-toolkit-vrtk-version-4-and-beyond) [![Slack](http://sysdia2.co.uk/badge.svg)](http://invite.vrtk.io) [![Twitter Follow](https://img.shields.io/twitter/follow/vr_toolkit.svg?style=flat&label=twitter)](https://twitter.com/VR_Toolkit) [![YouTube](https://img.shields.io/badge/youtube-channel-e52d27.svg)](http://videos.vrtk.io) [![Waffle](https://img.shields.io/badge/project-roadmap-blue.svg)](http://tracker.vrtk.io) | Supported SDK | Download Link | |---------------|---------------| | VR Simulator | Included | | SteamVR Unity Asset | [SteamVR Plugin] | | Oculus Utilities Unity Package | [Oculus Utilities] | ## Documentation The documentation for the project can be found within this repository in [DOCUMENTATION.md] which includes the up to date documentation for this GitHub repository. Alternatively, the stable versions of the documentation can be viewed online at [http://docs.vrtk.io](http://docs.vrtk.io). ## Frequently Asked Questions If you have an issue or question then check the [FAQ] document to see if your query has already been answered. ## Getting Started > *VRTK requires a supported VR SDK to be imported into your Unity3d Project.* * Clone this repository `git clone https://github.com/thestonefox/VRTK.git`. * Open `VRTK` within Unity3d. * Add the `VRTK_SDKManager` script to a GameObject in the scene. <details><summary>**Instructions for using the VR Simulator**</summary> * Drag the `VRSimulatorCameraRig` prefab from the VRTK/Prefabs into the scene. * Select the GameObject with the `VRTK_SDKManager` script attached to it. 
* Select `Simulator` for each of the SDK Choices.
* Click the `Auto Populate Linked Objects` button to find the relevant Linked Objects.
* Use the Left Alt to switch between mouse look and move a hand.
* Press Tab to switch between left/right hands.
* Hold Left Shift to change from translation to rotation for the hands.
* Hold Left Ctrl to switch between X/Y and X/Z axes.
* All above keys can be remapped using the inspector on the `VRSimulatorCameraRig` prefab.
* Button mappings for the VR controls are as follows:
  * Grip: Left mouse button
  * Trigger: Right mouse button
  * Touchpad Press: Q
  * Button One: E
  * Button Two: R

</details>

<details><summary>**Instructions for using the SteamVR Unity3d asset**</summary>

* Import the [SteamVR Plugin] from the Unity Asset Store.
* Drag the `[CameraRig]` prefab from the SteamVR plugin into the scene.
* Check that `Virtual Reality Supported` is ticked in the `Edit -> Project Settings -> Player` menu.
* Ensure that `OpenVR` is added in the `Virtual Reality SDKs` list in the `Edit -> Project Settings -> Player` menu.
* Select the GameObject with the `VRTK_SDKManager` script attached to it.
* Select `Steam VR` for each of the SDK Choices.
* Click the `Auto Populate Linked Objects` button to find the relevant Linked Objects.
* Optionally, browse the `Examples` scenes for example usage of the scripts.

</details>

<details><summary>**Instructions for using the Oculus Utilities Unity3d package**</summary>

* Download the [Oculus Utilities] from the Oculus developer website.
* Import the `OculusUtilities.unitypackage` into the project.
* Drag the `OVRCameraRig` prefab from the Oculus package into the scene.
* Check that `Virtual Reality Supported` is ticked in the `Edit -> Project Settings -> Player` menu.
* Ensure that `Oculus` is added in the `Virtual Reality SDKs` list in the `Edit -> Project Settings -> Player` menu.
* Select the GameObject with the `VRTK_SDKManager` script attached to it.
* Select `Oculus VR` for each of the SDK Choices.
* Click the `Auto Populate Linked Objects` button to find the relevant Linked Objects.

</details>

## What's In The Box

VRTK is a collection of useful scripts and concepts to aid building VR solutions rapidly and easily in Unity3d 5+.

It covers a number of common solutions such as:

* Locomotion within virtual space.
* Interactions like touching, grabbing and using objects.
* Interacting with Unity3d UI elements through pointers or touch.
* Body physics within virtual space.
* 2D and 3D controls like buttons, levers, doors, drawers, etc.
* And much more...

## Examples

A collection of example scenes has been created to aid with understanding the different aspects of VRTK.

A list of the examples can be viewed in [EXAMPLES.md] which includes an up to date list of examples showcasing the features of VRTK.

The examples have all been built to work with the [SteamVR Plugin] by default, but they can be converted over to using the [Oculus Utilities] package by following the instructions for using the Oculus Utilities package above.
> *If the examples are not working on first load, click the `[VRTK]` > GameObject in the scene hierarchy to ensure the SDK Manager editor > script successfully sets up the project and scene.* ## Made With VRTK [![image](https://cloud.githubusercontent.com/assets/1029673/21553226/210e291a-cdff-11e6-8639-91a3dddb1555.png)](http://store.steampowered.com/app/489380) [![image](https://cloud.githubusercontent.com/assets/1029673/21553234/2d105e4a-cdff-11e6-95a2-7dfdf7519e17.png)](http://store.steampowered.com/app/488760) [![image](https://cloud.githubusercontent.com/assets/1029673/21553257/5c17bf30-cdff-11e6-98ab-a017bc5cd00d.png)](http://store.steampowered.com/app/494830) [![image](https://cloud.githubusercontent.com/assets/1029673/21553262/6d82afd2-cdff-11e6-8400-882989a6252c.png)](http://store.steampowered.com/app/391640) [![image](https://cloud.githubusercontent.com/assets/1029673/21553270/7b8808f2-cdff-11e6-9adb-1e20fe557ae0.png)](http://store.steampowered.com/app/525680) [![image](https://cloud.githubusercontent.com/assets/1029673/21553293/9eef3e32-cdff-11e6-8dc7-f4a3866ac386.png)](http://store.steampowered.com/app/550360) [![image](https://cloud.githubusercontent.com/assets/1029673/21553635/3acbed36-ce01-11e6-80cd-4fe8d28d6b38.png)](http://store.steampowered.com/app/475520) [![image](https://cloud.githubusercontent.com/assets/1029673/21553649/53ded8d8-ce01-11e6-8314-d33a873db745.png)](http://store.steampowered.com/app/510410) [![image](https://cloud.githubusercontent.com/assets/1029673/21553655/63e21e0c-ce01-11e6-90b0-477b14af993f.png)](http://store.steampowered.com/app/499760) [![image](https://cloud.githubusercontent.com/assets/1029673/21553665/713938ce-ce01-11e6-84f3-40db254292f1.png)](http://store.steampowered.com/app/548560) [![image](https://cloud.githubusercontent.com/assets/1029673/21553680/908ae95c-ce01-11e6-989f-68c38160d528.png)](http://store.steampowered.com/app/511370) 
[![image](https://cloud.githubusercontent.com/assets/1029673/21553683/a0afb84e-ce01-11e6-9450-aaca567f7fc8.png)](http://store.steampowered.com/app/472720) Many games and experiences have already been made with VRTK. Check out the [Made With VRTK Document] to see the full list. ## Contributing I would love to get contributions from you! Follow the instructions below on how to make pull requests. For the full contribution guidelines see the [Contribution Document]. ## Pull requests 1. [Fork] the project, clone your fork, and configure the remotes. 2. Create a new topic branch (from `master`) to contain your feature, chore, or fix. 3. Commit your changes in logical units. 4. Make sure all the example scenes are still working. 5. Push your topic branch up to your fork. 6. [Open a Pull Request] with a clear title and description. ## License Code released under the [MIT License]. [SteamVR Plugin]: https://www.assetstore.unity3d.com/en/#!/content/32647 [SteamVR Plugin for Unity3d Github Repo]: https://github.com/ValveSoftware/openvr/tree/master/unity_package/Assets/SteamVR [Oculus Utilities]: https://developer3.oculus.com/downloads/game-engines/1.10.0/Oculus_Utilities_for_Unity_5/ [MIT License]: https://github.com/thestonefox/SteamVR_Unity_Toolkit/blob/master/LICENSE [Contribution Document]: https://github.com/thestonefox/SteamVR_Unity_Toolkit/blob/master/CONTRIBUTING.md [Made With VRTK Document]: https://github.com/thestonefox/SteamVR_Unity_Toolkit/blob/master/MADEWITHVRTK.md [DOCUMENTATION.md]: https://github.com/thestonefox/SteamVR_Unity_Toolkit/blob/master/DOCUMENTATION.md [EXAMPLES.md]: https://github.com/thestonefox/SteamVR_Unity_Toolkit/blob/master/EXAMPLES.md [Fork]: http://help.github.com/fork-a-repo/ [Open a Pull Request]: https://help.github.com/articles/using-pull-requests/ [FAQ]: https://github.com/thestonefox/SteamVR_Unity_Toolkit/blob/master/FAQ.md
52.087209
1,859
0.768278
eng_Latn
0.802277
2f3efbdf56569bd7a2f7796bbf641e4c10320a0d
2,483
md
Markdown
guide/english/mathematics/linear-equations/index.md
smonem/freeCodeCamp
f03f05d53de38fbc84ba50f1b6ee156e77959698
[ "BSD-3-Clause" ]
5
2020-07-09T10:19:39.000Z
2021-12-06T00:43:23.000Z
guide/english/mathematics/linear-equations/index.md
Nhatdth14/freeCodeCamp
9e82ae87b69a7bb5af87ee730da30be0be6cbf8a
[ "BSD-3-Clause" ]
58
2019-04-25T23:23:57.000Z
2021-07-28T23:18:44.000Z
guide/english/mathematics/linear-equations/index.md
Nhatdth14/freeCodeCamp
9e82ae87b69a7bb5af87ee730da30be0be6cbf8a
[ "BSD-3-Clause" ]
4
2019-06-28T13:50:36.000Z
2021-04-17T17:30:35.000Z
---
title: Linear Equations
---
## Linear Equations

A linear equation is an equation that can be written in the form

<p align='center'>a<sub>1</sub>x<sub>1</sub> + a<sub>2</sub>x<sub>2</sub> + &middot;&middot;&middot; + a<sub>n</sub>x<sub>n</sub> + b = 0,</p>

where the x<sub>i</sub> are the *variables* while b and the a<sub>i</sub> are the *coefficients*. The solutions to the equation, that is, the points (x<sub>1</sub>, x<sub>2</sub>, ..., x<sub>n</sub>) that make the equation true when plugged in, describe a graph (a [hyperplane](https://en.wikipedia.org/wiki/Hyperplane)) in n-dimensional space.

The most familiar example is in two dimensions, the Cartesian plane, where a linear equation describes a straight line. Here a linear equation is usually written as

<p align='center'>y = mx + b,</p>

where

* x and y are the coordinates,
* m is the *slope*, commonly called *rise over run*, which describes the ratio between the vertical change and the horizontal change as you move along the line, and
* b is the *y-intercept*, where the line described by the equation touches the y-axis. (Plugging x=0 into the equation shows this.)

Every non-vertical line can be described by such an equation. (While a vertical line can be described by the equation x = a for some number a, you no longer have the geometric interpretation from the values of m and b.)

For example, suppose we wish to draw the line connecting the two points (1,3) and (-2,2). Then, between these two points the rise is the difference in the y-values, namely 3 - 2 = 1, while the run is the difference in the x-values, 1 - (-2) = 3, so the slope is m = 1/3. (Or 2 - 3 = -1 and -2 - 1 = -3, so m = (-1)/(-3) = 1/3.) This means our line is given by the equation

<p align='center'>y = (1/3)x + b,</p>

where b is the y-intercept. To find b we now plug either point into the equation and solve for b. For example, we can use (1,3) to get 3 = (1/3) &middot; 1 + b, or b = 3 - (1/3) = 8/3.
Hence, the line going through the points (1,3) and (-2,2) is given by the equation <p align='center'>y = (1/3)x + 8/3.</p> While these may not appear terribly useful outside of simply working with lines (or hyperplanes in general), there are [many situations](https://en.wikipedia.org/wiki/Linear_approximation#Applications) where you can get a linear approximation of a complicated function and get valuable information. With the simplicity of linear equations, this can be a very powerful tool to study complex problems.
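As a quick check, the worked example above can be reproduced in a few lines of code; the `line_through` helper below is only an illustration, not part of the guide:

```python
# Compute the slope m and y-intercept b of the line through two points,
# reproducing the worked example from the text.
def line_through(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)  # rise over run (assumes a non-vertical line)
    b = y1 - m * x1            # solve y = mx + b for b using one point
    return m, b

m, b = line_through((1, 3), (-2, 2))
print(m, b)  # m = 1/3, b = 8/3
```

Running it with the points (1,3) and (-2,2) gives the same slope 1/3 and intercept 8/3 derived above.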
82.766667
592
0.715667
eng_Latn
0.999712
2f3ff866ce677e7a9dd1fb6674ac558ebc835e67
30
md
Markdown
ConvNets/README.md
taha-a/image
ed2aa77c7131edd3fdf324c59eb0f373bd4ea3ec
[ "BSD-3-Clause" ]
161
2017-01-13T05:44:30.000Z
2022-01-21T15:52:08.000Z
ConvNets/README.md
YunwenHuang/image-caption-generator
832f637c7d1b39b925b7f7d92b2b6bf2185c7b2e
[ "BSD-3-Clause" ]
39
2017-01-11T09:40:39.000Z
2020-02-20T03:09:52.000Z
ConvNets/README.md
YunwenHuang/image-caption-generator
832f637c7d1b39b925b7f7d92b2b6bf2185c7b2e
[ "BSD-3-Clause" ]
77
2016-10-29T14:47:07.000Z
2020-11-21T21:47:40.000Z
Put inception_v4.pb file here
15
29
0.833333
eng_Latn
0.733073
2f4004b7d20a12e8000307f64a04e1708cbd2551
1,357
md
Markdown
about.md
rogeruiz/built-with-ember
bc19b9b433edb3af57990e191e1d687e89d29377
[ "MIT" ]
null
null
null
about.md
rogeruiz/built-with-ember
bc19b9b433edb3af57990e191e1d687e89d29377
[ "MIT" ]
null
null
null
about.md
rogeruiz/built-with-ember
bc19b9b433edb3af57990e191e1d687e89d29377
[ "MIT" ]
null
null
null
---
layout: page
title: About
---

Built with Ember is a showcase of ambitious and inspirational web applications using [Ember.js](http://emberjs.com). It's designed, built, and maintained by the [Blimp](http://blimp.io) crew.

## Submissions

All submissions should be of sites built with Ember.js and must follow the instructions in the *How to submit* section below. We reserve the right to reject any submission. Submissions with inappropriate content will not be accepted.

## How to submit

To submit a site suggestion, [open an issue](https://github.com/getblimp/built-with-ember/issues/new) or [create a pull request](https://github.com/GetBlimp/built-with-ember). Pull requests will be given higher priority since they are easier to include.

Make sure the screenshot is 1000x800 and please double check that everything looks good before submitting. It's also a good idea to run the screenshot through an image optimizer like [ImageOptim](https://imageoptim.com/) or [TinyPNG](https://tinypng.com/) before including it. This will help keep the website fast and the repository as small as possible.

## About the site

Inspired by [Bootstrap Expo](http://expo.getbootstrap.com/). It's built with [Jekyll](http://jekyllrb.com), developed on [GitHub](https://github.com/getblimp/built-with-ember), and is hosted on [GitHub Pages](https://pages.github.com).
79.823529
605
0.771555
eng_Latn
0.989176
2f4022287620f2189268eba005dbd066b06e925a
325
md
Markdown
README.md
saucecontrol/core-imaging-playground
ad414ae4b0b1116d96f69fb38fa0b4b4667f3306
[ "MIT" ]
59
2016-12-30T22:06:36.000Z
2022-02-08T14:51:13.000Z
README.md
AJEETX/core-imaging-playground
3e295387822725d628724355a8155b0833a24aa4
[ "MIT" ]
18
2016-12-30T22:12:08.000Z
2022-02-07T23:22:47.000Z
README.md
AJEETX/core-imaging-playground
3e295387822725d628724355a8155b0833a24aa4
[ "MIT" ]
28
2016-12-30T21:59:17.000Z
2022-03-26T04:48:08.000Z
# core-imaging-playground This is just me playing around with .NET imaging libraries. NOTE: FreeImage requires the Visual C++ 2013 Redistributable package to run on Windows. You can find a link to the latest supported version [here](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads).
65
237
0.796923
eng_Latn
0.963095
2f40ac236939096a72da96a3de4990a6b6727fc2
52
md
Markdown
README.md
Nerzal/talks
355384c69d3071c1021083f4e841340c21d69a93
[ "Apache-2.0" ]
null
null
null
README.md
Nerzal/talks
355384c69d3071c1021083f4e841340c21d69a93
[ "Apache-2.0" ]
null
null
null
README.md
Nerzal/talks
355384c69d3071c1021083f4e841340c21d69a93
[ "Apache-2.0" ]
2
2021-03-28T13:04:41.000Z
2021-06-16T18:24:18.000Z
# talks
This repository holds some slides for talks.
17.333333
43
0.788462
eng_Latn
0.998319
2f413d3f562e1d77a74a517a016825c6fccb5d45
450
md
Markdown
README.md
HustBestCat/BC-2-Race
38c4a456a36056e42d4636507bc2b49e72dcbeb2
[ "Apache-2.0" ]
null
null
null
README.md
HustBestCat/BC-2-Race
38c4a456a36056e42d4636507bc2b49e72dcbeb2
[ "Apache-2.0" ]
null
null
null
README.md
HustBestCat/BC-2-Race
38c4a456a36056e42d4636507bc2b49e72dcbeb2
[ "Apache-2.0" ]
null
null
null
# AI-Studio PaddlePaddle Newcomer Competition: Steel Defect Detection Challenge - 2nd Place Solution
> This is the solution I submitted as the second-place finisher.

## Project Description
> 1. I chose Fast-RCNN and used ResNet101_vd as the backbone.
> 2. Images are scaled and normalized. For data augmentation, most augmentation methods did not help improve model accuracy, so only image flipping was used.
> 3. Since a pretrained model was used, warm-up was chosen for the initial learning rate, followed by cosine annealing decay over multiple cycles.
> 4. SGD with momentum was chosen as the optimizer, and an L2 regularization coefficient was set for all parameters.

## Project Structure
> ```
-README.MD
-飞桨新人赛:钢铁缺陷检测挑战赛-第2名方案.ipynb
```

## Usage
A: [Run this project](https://aistudio.baidu.com/aistudio/projectdetail/2585386) on AI Studio
B: Download 飞桨新人赛:钢铁缺陷检测挑战赛-第2名方案.ipynb and run it!
19.565217
81
0.782222
yue_Hant
0.500643
2f4149aa79e70780e86c68ff2a41e7a53b7cd223
1,030
md
Markdown
articles/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/cpp/prerequisites.md
eltociear/azure-docs.zh-cn
b24f1a5a0fba668fed89d0ff75ca11d3c691f09b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/cpp/prerequisites.md
eltociear/azure-docs.zh-cn
b24f1a5a0fba668fed89d0ff75ca11d3c691f09b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Speech-Service/includes/how-to/compressed-audio-input/cpp/prerequisites.md
eltociear/azure-docs.zh-cn
b24f1a5a0fba668fed89d0ff75ca11d3c691f09b
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
author: IEvangelist
ms.service: cognitive-services
ms.topic: include
ms.date: 03/09/2020
ms.author: dapine
ms.openlocfilehash: 590e5494a8c8f9d4e06b69af0708e83d53be72b5
ms.sourcegitcommit: 27bbda320225c2c2a43ac370b604432679a6a7c0
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 03/31/2020
ms.locfileid: "80409582"
---
Handling compressed audio is implemented using [GStreamer](https://gstreamer.freedesktop.org). For licensing reasons, the GStreamer binaries are not compiled into and linked with the Speech SDK. Developers need to install several dependencies and plug-ins.

# <a name="ubuntu-1604-1804-or-debian-9"></a>[Ubuntu 16.04, 18.04 or Debian 9](#tab/debian)

```sh
sudo apt install libgstreamer1.0-0 \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly
```

# <a name="rehl--centos"></a>[RHEL / CentOS](#tab/centos)

```sh
sudo yum install gstreamer1 \
gstreamer1-plugins-base \
gstreamer1-plugins-good \
gstreamer1-plugins-bad-free \
gstreamer1-plugins-ugly-free
```

> [!NOTE]
> On RHEL / CentOS, follow the instructions on how to [configure OpenSSL for Linux](../../../../how-to-configure-openssl-linux.md).

---
26.410256
117
0.751456
yue_Hant
0.220636
2f41ce78fc74b2fac160bf3aaac71ab3c96fb4cd
2,589
md
Markdown
README.md
evilband7/prime-thumbnailer
ce9ccd963481a1ad9e77becd3ca20f710734f499
[ "MIT" ]
1
2016-11-02T09:12:52.000Z
2016-11-02T09:12:52.000Z
README.md
evilband7/prime-thumbnailer
ce9ccd963481a1ad9e77becd3ca20f710734f499
[ "MIT" ]
null
null
null
README.md
evilband7/prime-thumbnailer
ce9ccd963481a1ad9e77becd3ca20f710734f499
[ "MIT" ]
null
null
null
# Web-Thumbnailator

- based on Reactive Stream API ([RxJava2](https://github.com/ReactiveX/RxJava)) and [Thumbnailator](https://github.com/coobird/thumbnailator)
- On-the-fly image processing such as image cropping or watermark adding by just changing the image request url. That's all.

# !!! UNDER DEVELOPMENT !!!
please wait...

# How Web-Thumbnailator works

For example, you want to store an image and serve a cropped 500x500 image on your home page.

1. Once you persist the image, you will get an `imageId`. Store it somewhere, for example in your Article table.
2. And then request the image using this pattern

[/contextPath][/WebThumbnailatorBaseUrl]/filterName/imageId

Let's assume that you have

- `example.com` as your host name.
- `/sampleApp` as your contextPath.
- `/images` as a Web-Thumbnailator baseUrl.
- `crop_500x500` as your filter name to crop the image to 500x500
- `/my/image/id/sample-article.jpg` as your imageId

then the url to serve the image which will be cropped to 500 x 500 would be

`http://example.com/sampleApp/images/crop_500x500/my/image/id/sample-article.jpg`

# Installation
//TODO

# Spring
[Go to spring-boot module](spring)

# Servlet
[Go to servlet module](servlet)

# RxNetty
//TODO

# Filters Chain
When you define filters in the configuration, you must define them in a list, which will be triggered as a chain. For example, your first filter may crop the image and then your second one may rotate it.

# Provided Filters
1. `io.prime.web.thumbnailator.filter.CropFilter`

//TODO add more filters

# Custom Filters
You can create your own filter by implementing the interface `ThumbnailatorFilter` and then adding your implementation to the configuration. For more information about how to define your own filter, please read more about [Thumbnailator](https://github.com/coobird/thumbnailator)

# Usage -> Image Persisting
1. Just autowire `io.prime.web.thumbnailator.util.ThumbnailatorUtil`
2. And then you can persist an image using a variety of overloaded methods of `ThumbnailatorUtil.create()`
3.
Once you have persisted your image, you will get the imageId as a `String`. Store it wherever you want.

# Usage -> Programmatic

Once you autowire `io.prime.web.thumbnailator.util.ThumbnailatorUtil` then you can use

- `ThumbnailatorUtil.create()` to persist an image
- `ThumbnailatorUtil.get()` to get a filtered image
- `ThumbnailatorUtil.getSource()` to get the source image

# Roadmap

- need to review and do some tweaks before releasing.
- change to full async on processing requests
- provide full configuration on every part of this project.

# Contributor

Mr. Siwapun Siwaporn
- email: map.siwapun@gmail.com
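The URL pattern described above can be sketched as a small helper. The library itself is Java-based, so this Python function and its name are purely illustrative of how the URL parts combine:

```python
# Build a Web-Thumbnailator image URL from its parts:
# [host][/contextPath][/WebThumbnailatorBaseUrl]/filterName/imageId
# Note: image_id is expected to start with "/" as in the README example.
def thumbnail_url(host, context_path, base_url, filter_name, image_id):
    return f"http://{host}{context_path}{base_url}/{filter_name}{image_id}"

url = thumbnail_url("example.com", "/sampleApp", "/images",
                    "crop_500x500", "/my/image/id/sample-article.jpg")
print(url)  # http://example.com/sampleApp/images/crop_500x500/my/image/id/sample-article.jpg
```

This reproduces the example URL given in the text for a 500x500 crop of `sample-article.jpg`.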
39.227273
188
0.772499
eng_Latn
0.968972
2f4216f6c6892d4f46fab19af92a0b112d7a6245
935
md
Markdown
_posts/2021-09-20-353721171.md
bookmana/bookmana.github.io
2ed7b023b0851c0c18ad8e7831ece910d9108852
[ "MIT" ]
null
null
null
_posts/2021-09-20-353721171.md
bookmana/bookmana.github.io
2ed7b023b0851c0c18ad8e7831ece910d9108852
[ "MIT" ]
null
null
null
_posts/2021-09-20-353721171.md
bookmana/bookmana.github.io
2ed7b023b0851c0c18ad8e7831ece910d9108852
[ "MIT" ]
null
null
null
---
title: "2022 해커스공무원 영어 고득점 문법 777제"
date: 2021-09-20 04:13:55
categories: [Domestic Books, Certification & Exam Prep]
image: https://bimage.interpark.com/goods_image/1/1/7/1/353721171s.jpg
description: ● Achieve a high score on civil service English grammar in just 30 days with 777 questions! 1. Intensively study the key exam points of each unit and practice applying them to problems. 2. Strengthen your applied grammar problem-solving skills with [9 comprehensive practice tests]. 3. Fully understand civil service English grammar with [past-exam points] and detailed explanations
---

## **Information**

- **ISBN: 9791166626838**
- **Publisher: Hackers Gongmuwon (Hackers Pass)**
- **Publication date: 20210903**
- **Author: Hackers Civil Service Exam Research Institute**

------

## **Summary**

● Achieve a high score on civil service English grammar in just 30 days with 777 questions! 1. Intensively study the key exam points of each unit and practice applying them to problems. 2. Strengthen your applied grammar problem-solving skills with [9 comprehensive practice tests]. 3. Fully understand civil service English grammar with [past-exam points] and detailed explanations. 4. Study civil service English grammar systematically with personalized [30-day/60-day study plans].

------

Achieve a high score on civil service English grammar in just 30 days with 777 questions! 1. Intensively study the key exam points of each unit and practice applying them to problems. 2. Strengthen your applied grammar problem-solving skills with [9 comprehensive practice tests]. 3. Fully understand civil service English grammar with [past-exam points] and detailed explanations. 4. Personalized...

------

2022 해커스공무원 영어 고득점 문법 777제

------
25.972222
225
0.64385
kor_Hang
1.00001
2f43dfa79e7697fbd5f04c52edc2a036632e17d0
64
md
Markdown
README.md
indeerad/htaccessdemoQuestion1
4c154b02cabef370f5008ebead5547d9285e3b85
[ "Unlicense" ]
null
null
null
README.md
indeerad/htaccessdemoQuestion1
4c154b02cabef370f5008ebead5547d9285e3b85
[ "Unlicense" ]
null
null
null
README.md
indeerad/htaccessdemoQuestion1
4c154b02cabef370f5008ebead5547d9285e3b85
[ "Unlicense" ]
null
null
null
# htaccessdemo Q1: Mapping URL to a file using .htaccess file
12.8
43
0.75
kor_Hang
0.875839
2f4441f1a8409936b625f968d552be1cbb0576c2
858
md
Markdown
CONTRIBUTING.md
dkg/lunr.js
aa5a878f62a6bba1e8e5b95714899e17e8150b38
[ "MIT" ]
6,250
2015-01-02T00:34:23.000Z
2022-03-31T13:04:38.000Z
CONTRIBUTING.md
dkg/lunr.js
aa5a878f62a6bba1e8e5b95714899e17e8150b38
[ "MIT" ]
758
2019-09-10T17:31:18.000Z
2022-03-03T19:47:57.000Z
CONTRIBUTING.md
dkg/lunr.js
aa5a878f62a6bba1e8e5b95714899e17e8150b38
[ "MIT" ]
519
2015-01-07T02:35:13.000Z
2022-03-26T06:51:57.000Z
Contributions are very welcome. To make the process as easy as possible please follow these steps: * Open an issue detailing the bug you've found, or the feature you wish to add. Simplified working examples using something like [jsFiddle](http://jsfiddle.net) make it easier to diagnose your problem. * Add tests for your code (so I don't accidentally break it in the future). * Don't change version numbers or make new builds as part of your changes. * Don't change the built versions of the library; only make changes to code in the `lib` directory. # Developer Dependencies A JavaScript runtime is required for building the library. Run the tests (using PhantomJS): make test The tests can also be run in the browser by starting the test server: make server This will start a server on port 3000, the tests are then available at `/test`.
40.857143
202
0.7669
eng_Latn
0.999514
2f4497ff2e41f48e79cb0820d554686ec155c724
3,936
md
Markdown
website/www/site/content/en/documentation/sdks/java-thirdparty.md
shitanshu-google/beam
9cd959f61d377874ee1839c2de4bb8f65a948ecc
[ "Apache-2.0" ]
1
2022-01-24T22:07:52.000Z
2022-01-24T22:07:52.000Z
website/www/site/content/en/documentation/sdks/java-thirdparty.md
shitanshu-google/beam
9cd959f61d377874ee1839c2de4bb8f65a948ecc
[ "Apache-2.0" ]
2
2021-08-25T16:16:20.000Z
2022-02-10T04:57:01.000Z
website/www/site/content/en/documentation/sdks/java-thirdparty.md
shitanshu-google/beam
9cd959f61d377874ee1839c2de4bb8f65a948ecc
[ "Apache-2.0" ]
1
2018-09-30T05:34:06.000Z
2018-09-30T05:34:06.000Z
---
type: languages
title: "Beam 3rd Party Java Extensions"
---
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Apache Beam 3rd Party Java Extensions

These are some of the 3rd party Java libraries that may be useful for specific applications.

## Parsing HTTPD/NGINX access logs.

### Summary

The Apache HTTPD webserver creates logfiles that contain valuable information about the requests that have been made to the webserver. The format of these log files is a configuration option in the Apache HTTPD server, so parsing this into useful data elements is normally very hard to do.

To solve this problem in an easy way, a library was created that works in combination with Apache Beam and is capable of doing this for both Apache HTTPD and NGINX.

The basic idea is that the logformat specification is the schema used to create the line. This parser is simply initialized with this schema and the list of fields you want to extract.
### Project page

[https://github.com/nielsbasjes/logparser](https://github.com/nielsbasjes/logparser)

### License

Apache License 2.0

### Download

    <dependency>
      <groupId>nl.basjes.parse.httpdlog</groupId>
      <artifactId>httpdlog-parser</artifactId>
      <version>5.0</version>
    </dependency>

### Code example

Assuming a WebEvent class that has the setters setIP, setQueryImg and setQueryStringValues

    PCollection<WebEvent> filledWebEvents = input
      .apply("Extract Elements from logline",
        ParDo.of(new DoFn<String, WebEvent>() {
          private Parser<WebEvent> parser;

          @Setup
          public void setup() throws NoSuchMethodException {
            parser = new HttpdLoglineParser<>(WebEvent.class,
              "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" \"%{Cookie}i\"");
            parser.addParseTarget("setIP", "IP:connection.client.host");
            parser.addParseTarget("setQueryImg", "STRING:request.firstline.uri.query.img");
            parser.addParseTarget("setQueryStringValues", "STRING:request.firstline.uri.query.*");
          }

          @ProcessElement
          public void processElement(ProcessContext c)
              throws InvalidDissectorException, MissingDissectorsException, DissectionFailure {
            c.output(parser.parse(c.element()));
          }
        })
      );

## Analyzing the Useragent string

### Summary

Parse and analyze the useragent string and extract as many relevant attributes as possible.
### Project page [https://github.com/nielsbasjes/yauaa](https://github.com/nielsbasjes/yauaa) ### License Apache License 2.0 ### Download <dependency> <groupId>nl.basjes.parse.useragent</groupId> <artifactId>yauaa-beam</artifactId> <version>4.2</version> </dependency> ### Code example PCollection<WebEvent> filledWebEvents = input .apply("Extract Elements from Useragent", ParDo.of(new UserAgentAnalysisDoFn<WebEvent>() { @Override public String getUserAgentString(WebEvent record) { return record.useragent; } @YauaaField("DeviceClass") public void setDC(WebEvent record, String value) { record.deviceClass = value; } @YauaaField("AgentNameVersion") public void setANV(WebEvent record, String value) { record.agentNameVersion = value; } }));
35.142857
136
0.683943
eng_Latn
0.864269
2f44b138f9ed8f8a1ad3a188372bd47bc6e299e9
859
md
Markdown
includes/event-grid-edge-persist-event-subscriptions.md
changeworld/azure-docs.pt-pt
8a75db5eb6af88cd49f1c39099ef64ad27e8180d
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/event-grid-edge-persist-event-subscriptions.md
changeworld/azure-docs.pt-pt
8a75db5eb6af88cd49f1c39099ef64ad27e8180d
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/event-grid-edge-persist-event-subscriptions.md
changeworld/azure-docs.pt-pt
8a75db5eb6af88cd49f1c39099ef64ad27e8180d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: include file
description: include file
services: event-grid
author: banisadr
ms.service: event-grid
ms.topic: include
ms.date: 01/16/2020
ms.author: babanisa
ms.custom: include file
ms.openlocfilehash: 42d1ebb23cf582c3dfbc375e4886ed449c21f493
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 03/28/2020
ms.locfileid: "76844561"
---
>[!NOTE]
> If you need to guarantee that pending events are persisted in case of a device restart, you will need to enable persistence for the event subscription. For more information about how to configure persistence, see the following articles: [Persist state on Linux](../articles/event-grid/edge/persist-state-linux.md) or [Persist state on Windows](../articles/event-grid/edge/persist-state-windows.md).
45.210526
424
0.810244
por_Latn
0.932423
2f44c3432c20a2edb30e08544463433bf81d7ac9
167
md
Markdown
.github/pull_request_template.md
mehrdad-shokri/flutter_appauth
c944c8a3124f9929875f9b7c9d36e1836abf754a
[ "BSD-3-Clause" ]
1
2021-12-31T20:07:15.000Z
2021-12-31T20:07:15.000Z
.github/pull_request_template.md
mehrdad-shokri/flutter_appauth
c944c8a3124f9929875f9b7c9d36e1836abf754a
[ "BSD-3-Clause" ]
null
null
null
.github/pull_request_template.md
mehrdad-shokri/flutter_appauth
c944c8a3124f9929875f9b7c9d36e1836abf754a
[ "BSD-3-Clause" ]
1
2020-03-19T03:34:20.000Z
2020-03-19T03:34:20.000Z
As this repository hosts two packages, please ensure the PR title starts with the name of the package that it relates to using square brackets (e.g. [flutter_appauth])
167
167
0.802395
eng_Latn
0.999832
2f44d4b964f32efdc94600b8895d29ebd2cdcc11
3,434
md
Markdown
wdk-ddi-src/content/gnssdriver/ni-gnssdriver-ioctl_gnss_set_supl_hslp.md
amrutha-chandramohan/windows-driver-docs-ddi
35e28164591cadf5ef3d6238cdddd4b88f2b8768
[ "CC-BY-4.0", "MIT" ]
176
2018-01-12T23:42:01.000Z
2022-03-30T18:23:27.000Z
wdk-ddi-src/content/gnssdriver/ni-gnssdriver-ioctl_gnss_set_supl_hslp.md
amrutha-chandramohan/windows-driver-docs-ddi
35e28164591cadf5ef3d6238cdddd4b88f2b8768
[ "CC-BY-4.0", "MIT" ]
1,093
2018-01-23T07:33:03.000Z
2022-03-30T20:15:21.000Z
wdk-ddi-src/content/gnssdriver/ni-gnssdriver-ioctl_gnss_set_supl_hslp.md
amrutha-chandramohan/windows-driver-docs-ddi
35e28164591cadf5ef3d6238cdddd4b88f2b8768
[ "CC-BY-4.0", "MIT" ]
251
2018-01-21T07:35:50.000Z
2022-03-22T19:33:42.000Z
--- UID: NI:gnssdriver.IOCTL_GNSS_SET_SUPL_HSLP title: IOCTL_GNSS_SET_SUPL_HSLP (gnssdriver.h) description: The IOCTL_GNSS_SET_SUPL_HSLP control code is used by the GNSS adapter to set the SUPL H-SLP address. old-location: gnss\ioctl_gnss_set_supl_hslp.htm tech.root: gnss ms.date: 02/15/2018 keywords: ["IOCTL_GNSS_SET_SUPL_HSLP IOCTL"] ms.keywords: IOCTL_GNSS_SET_SUPL_HSLP, IOCTL_GNSS_SET_SUPL_HSLP control, IOCTL_GNSS_SET_SUPL_HSLP control code [Sensor Devices], gnss.ioctl_gnss_set_supl_hslp, gnssdriver/IOCTL_GNSS_SET_SUPL_HSLP req.header: gnssdriver.h req.include-header: req.target-type: Windows req.target-min-winverclnt: req.target-min-winversvr: req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: req.dll: req.irql: targetos: Windows req.typenames: f1_keywords: - IOCTL_GNSS_SET_SUPL_HSLP - gnssdriver/IOCTL_GNSS_SET_SUPL_HSLP topic_type: - APIRef - kbSyntax api_type: - HeaderDef api_location: - gnssdriver.h api_name: - IOCTL_GNSS_SET_SUPL_HSLP --- # IOCTL_GNSS_SET_SUPL_HSLP IOCTL ## -description The <b>IOCTL_GNSS_SET_SUPL_HSLP</b> control code is used by the GNSS adapter to set the SUPL H-SLP address. ## -ioctlparameters ### -input-buffer A pointer to a <a href="/windows-hardware/drivers/ddi/gnssdriver/ns-gnssdriver-gnss_supl_hslp_config">GNSS_SUPL_HSLP_CONFIG</a> structure. ### -input-buffer-length Set to sizeof(GNSS_SUPL_HSLP_CONFIG). ### -output-buffer Set to NULL. ### -output-buffer-length Set to 0. ### -in-out-buffer ### -inout-buffer-length ### -status-block <b>Irp->IoStatus.Status</b> is set to STATUS_SUCCESS if the request is successful. Otherwise, <b>Status</b> to the appropriate error condition as a <a href="/windows-hardware/drivers/kernel/using-ntstatus-values">NTSTATUS</a> code. ## -remarks The driver sets one of the following NTSTATUS values to indicate result. 
<ul> <li> <b>STATUS_SUCCESS</b>, when the driver processes the SUPL H-SLP information successfully. </li> <li> <b>Failed</b>, when the driver does not process the SUPL H-SLP information successfully. </li> <li> <b>Ignored</b>, when the driver ignores the SUPL H-SLP information. </li> </ul> <h3><a id="GNSS_driver_notes"></a><a id="gnss_driver_notes"></a><a id="GNSS_DRIVER_NOTES"></a>GNSS driver notes</h3> The GNSS driver must pass the H-SLP information, contained in the input structure, to the SUPL component which should connect to the server address specified by the H-SLP. If the certificate with the same name is injected again, the GNSS driver should overwrite the previous certificate with the same name. The H-SLP address is always in the form of a FQDN. ## -see-also <a href="/windows-hardware/drivers/kernel/creating-ioctl-requests-in-drivers">Creating IOCTL Requests in Drivers</a> <a href="/windows-hardware/drivers/ddi/wdfiotarget/nf-wdfiotarget-wdfiotargetsendinternalioctlotherssynchronously">WdfIoTargetSendInternalIoctlOthersSynchronously</a> <a href="/windows-hardware/drivers/ddi/wdfiotarget/nf-wdfiotarget-wdfiotargetsendinternalioctlsynchronously">WdfIoTargetSendInternalIoctlSynchronously</a> <a href="/windows-hardware/drivers/ddi/wdfiotarget/nf-wdfiotarget-wdfiotargetsendioctlsynchronously">WdfIoTargetSendIoctlSynchronously</a>
29.603448
232
0.757135
eng_Latn
0.594082
2f45f7910645a0a8e646ebc746284ad7aef134eb
457
md
Markdown
README.md
elifoster/github-real-names
c200f71666f7e9a50952dd66bcab0914e8db619b
[ "MIT" ]
1
2020-01-22T08:25:52.000Z
2020-01-22T08:25:52.000Z
README.md
elifoster/github-real-names
c200f71666f7e9a50952dd66bcab0914e8db619b
[ "MIT" ]
8
2016-04-15T01:27:08.000Z
2017-08-11T17:05:42.000Z
README.md
elifoster/github-real-names
c200f71666f7e9a50952dd66bcab0914e8db619b
[ "MIT" ]
null
null
null
# github-real-names

GitHub Real Names is a Firefox addon that displays GitHub users' real names instead of their usernames. Currently, it supports issues, pull requests, commit history, and the GitHub dashboard.

[![ILoveFreeSoftware Review](http://cdn.ilovefreesoftware.com/wp-content/uploads/2011/03/ilovefreesoftware_reviewed_5Star.png)](http://www.ilovefreesoftware.com/17/windows/internet/plugins/firefox-add-show-real-names-github-users-issues.html)
65.285714
242
0.816193
eng_Latn
0.584003
2f45fcd781a620d776ede79e575f422677ec7fe4
441
md
Markdown
guides/sdks/official/java/cancellation.es.md
LuizPacheco/devsite-docs
6ee0218e94322b755625bb35627952eaedd7253d
[ "MIT" ]
null
null
null
guides/sdks/official/java/cancellation.es.md
LuizPacheco/devsite-docs
6ee0218e94322b755625bb35627952eaedd7253d
[ "MIT" ]
null
null
null
guides/sdks/official/java/cancellation.es.md
LuizPacheco/devsite-docs
6ee0218e94322b755625bb35627952eaedd7253d
[ "MIT" ]
null
null
null
## Create a cancellation

It is possible to cancel a specific purchase from the payment ID by using the SDK below. For details on the request parameters, see the [Cancellation](https://www.mercadopago[FAKER][URL][DOMAIN]/developers/es/reference/chargebacks/_payments_payment_id/put) API.

[[[
```java
PaymentClient client = new PaymentClient();

Long paymentId = 123456789L;

client.cancel(paymentId);
```
]]]
31.5
295
0.77551
spa_Latn
0.723027
2f46116613625f5c8f834d80ca106dca4f2cb69d
333
md
Markdown
README.md
RosoVRgarden/Max-for-the-Visual-Arts
46e7130da72ea0fa429f2c566f881d8ff1bfcb61
[ "MIT" ]
31
2015-08-11T23:16:59.000Z
2021-02-14T00:06:46.000Z
README.md
les-cites-obscures/Max-for-the-Visual-Arts
46e7130da72ea0fa429f2c566f881d8ff1bfcb61
[ "MIT" ]
1
2015-08-17T12:16:35.000Z
2015-08-17T19:44:12.000Z
README.md
les-cites-obscures/Max-for-the-Visual-Arts
46e7130da72ea0fa429f2c566f881d8ff1bfcb61
[ "MIT" ]
6
2015-11-11T15:21:57.000Z
2020-12-02T16:02:23.000Z
Max-for-the-Visual-Arts
=======================

Max for the Visual Arts (Max 7) is a self-learning tool and a repository for the Max patches made or used on the BA interaction design arts (IDA) and the MA interactive design communication (IDC) at the London College of Communication (University of the Arts London, United Kingdom).
66.6
283
0.732733
eng_Latn
0.985089
2f467316137ba6aaba138631d64a08433f782242
472
md
Markdown
_posts/2020-06-06-pantip-archive.md
sdeehub/i-learn-type-theme
0b019d3d16d37bb7a458e870cccdd42b6afa2dc9
[ "MIT" ]
null
null
null
_posts/2020-06-06-pantip-archive.md
sdeehub/i-learn-type-theme
0b019d3d16d37bb7a458e870cccdd42b6afa2dc9
[ "MIT" ]
49
2019-09-17T05:13:50.000Z
2020-01-15T15:34:31.000Z
_posts/2020-06-06-pantip-archive.md
sdeehub/i.learn
0b019d3d16d37bb7a458e870cccdd42b6afa2dc9
[ "MIT" ]
null
null
null
---
layout: post
feature-img: "https://res.cloudinary.com/sdees-reallife/image/upload/v1555658919/sample_feature_img.png"
title: 'Old Photos'
date: 2020-06-06T23:13:47+07:00
tags:
- 'Reading'
---

Old photos with explanations, from [PANTIP.COM](http://topicstock.pantip.com/isolate/topicstock/2012/11/M12951410/M12951410.html)

<i class="fa fa-child" style="color:plum"></i> Being male is a matter of birth, Being a man is a matter of age, Being a gentleman is a matter of choice.
33.714286
122
0.741525
eng_Latn
0.351138
2f46be5d29ab6232b71d486717bf194c0c120ae6
1,028
md
Markdown
content/blog/2011/12/29/adb-right-on-the-command-line.md
gauntface/gaunt.dev
2e75a365666d10fed1cf2d4152f45566ab7ee3fd
[ "Apache-2.0" ]
null
null
null
content/blog/2011/12/29/adb-right-on-the-command-line.md
gauntface/gaunt.dev
2e75a365666d10fed1cf2d4152f45566ab7ee3fd
[ "Apache-2.0" ]
1
2021-11-13T18:53:58.000Z
2021-11-14T04:56:07.000Z
content/blog/2011/12/29/adb-right-on-the-command-line.md
gauntface/gaunt.dev
2e75a365666d10fed1cf2d4152f45566ab7ee3fd
[ "Apache-2.0" ]
null
null
null
---
title: "ADB Right on the Command Line"
excerpt: "It's helpful having all of the Android tools on the command line so that whenever you need them, you aren't hunting around for them in the IDE or trying to remember where you stashed them on your system."
mainImage: "/images/blog/2014/06/15/5695056315-25a835a3be-o.jpg"
primaryColor: "#85a44c"
date: "2011-12-29T16:50:35-08:00"
updatedOn: "2011-12-29T16:50:35-08:00"
slug: "adb-right-on-the-command-line"
---

# ADB Right on the Command Line

Edit your `~/.profile` file, then add something along the lines of:

```
if [ -d "/home/matt/Development-Tools/android-sdks/tools" ] ; then
    PATH="/home/matt/Development-Tools/android-sdks/tools:$PATH"
fi

if [ -d "/home/matt/Development-Tools/android-sdks/platform-tools" ] ; then
    PATH="/home/matt/Development-Tools/android-sdks/platform-tools:$PATH"
fi
```

I tried '~/Development-Tools.....' but had no luck. Anyway, restart your machine and the job's a good'un.

Orig Photo: [https://flic.kr/p/9FfDQ8](https://flic.kr/p/9FfDQ8)
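A slightly more compact variant of the same idea, without the existence checks; the SDK path here is only an example, so substitute wherever you installed the SDK:

```shell
# Example only: adjust ANDROID_SDK to your own install location.
ANDROID_SDK="$HOME/Development-Tools/android-sdks"

# Prepend both tool directories so adb and friends are found first.
PATH="$ANDROID_SDK/platform-tools:$ANDROID_SDK/tools:$PATH"
export PATH
```

After editing `~/.profile`, running `source ~/.profile` (or logging out and back in) picks the change up without a full restart.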
44.695652
292
0.726654
eng_Latn
0.824461
2f4710ed534cb668052a9ba4386fbccee5e83de7
576
md
Markdown
README.md
xuyu92327/waveform-analysis
8216cc8d7a75fc38d3fbc236d8b6b6cba963f78c
[ "MIT" ]
null
null
null
README.md
xuyu92327/waveform-analysis
8216cc8d7a75fc38d3fbc236d8b6b6cba963f78c
[ "MIT" ]
null
null
null
README.md
xuyu92327/waveform-analysis
8216cc8d7a75fc38d3fbc236d8b6b6cba963f78c
[ "MIT" ]
null
null
null
# waveform-analysis

## Methods :pill:
+ tara DL
+ xuyu DL
+ gz246 EMMP
+ wyy delta
+ xiaopeip
+ xdcFT
+ lucyddm
+ mcmc

## Frame:
The process of algorithm evaluation is automated in the Makefile.

For each method:
+ generate & save the Answer for each training h5 file
+ record & save the efficiency of Answer generation
+ record & save the average w&p-dist of each Answer with respect to the corresponding training h5 file

## Makefile arguments:
+ set: jinp / juno
+ method: takara / xiaopeip / lucyddm / mcmc
+ mode: PEnum / Charge
+ rseq: the fileno to be extracted
+ chunk: the fileno for test
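For orientation, the arguments above combine into a single `make` invocation; the concrete values below are illustrative choices, not ones the project prescribes:

```shell
# Illustrative values only; pick the set/method/mode/fileno you need.
set_name=jinp
method=lucyddm
mode=PEnum
rseq=0
chunk=1

# Compose the invocation (echoed here rather than run).
cmd="make set=$set_name method=$method mode=$mode rseq=$rseq chunk=$chunk"
echo "$cmd"
```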
20.571429
89
0.729167
eng_Latn
0.934265
2f4758fe44ae8f35aa46acefeb847f64ed4fb5ad
7,885
md
Markdown
articles/remoteapp/remoteapp-usbredir.md
OpenLocalizationTestOrg/azure-docs-pr15_pt-BR
95dabd136ee50edd2caa1216e745b9f13ff7a1f2
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
1
2018-08-29T17:03:44.000Z
2018-08-29T17:03:44.000Z
articles/remoteapp/remoteapp-usbredir.md
OpenLocalizationTestOrg/azure-docs-pr15_pt-BR
95dabd136ee50edd2caa1216e745b9f13ff7a1f2
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/remoteapp/remoteapp-usbredir.md
OpenLocalizationTestOrg/azure-docs-pr15_pt-BR
95dabd136ee50edd2caa1216e745b9f13ff7a1f2
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties pageTitle="How do you redirect USB devices in Azure RemoteApp? | Microsoft Azure" description="Learn how to use redirection for USB devices in Azure RemoteApp." services="remoteapp" documentationCenter="" authors="lizap" manager="mbaldwin" />

<tags ms.service="remoteapp" ms.workload="compute" ms.tgt_pltfrm="na" ms.devlang="na" ms.topic="article" ms.date="08/15/2016" ms.author="elizapo" />

# <a name="how-do-you-redirect-usb-devices-in-azure-remoteapp"></a>How do you redirect USB devices in Azure RemoteApp?

> [AZURE.IMPORTANT]
> Azure RemoteApp is being discontinued. Read the [announcement](https://go.microsoft.com/fwlink/?linkid=821148) for details.

Device redirection lets your users use USB devices attached to their computer or tablet with Azure RemoteApp applications. For example, if you shared Skype through Azure RemoteApp, your users need to be able to use their device cameras.

Before going any further, make sure you read the USB redirection information in [Using redirection in Azure RemoteApp](remoteapp-redirection.md). Note, however, that the recommended **nusbdevicestoredirect:s:*** does not work for USB web cams and might not work for some USB multifunction devices or USB printers. By design and for security reasons, the Azure RemoteApp administrator has to enable redirection by device class GUID or device instance ID before your users can use those devices. Although this article talks about web cam redirection, you can use a similar approach to redirect USB printers and other USB multifunction devices that are not redirected by the **nusbdevicestoredirect:s:*** command.

## <a name="redirection-options-for-usb-devices"></a>Redirection options for USB devices

Azure RemoteApp uses mechanisms very similar to those available in Remote Desktop Services to redirect USB devices. The underlying technology lets you choose the right redirection method for a given device, getting the best of both high-level and RemoteFX USB redirection, through the **usbdevicestoredirect:s:** command. There are four elements to this command:

| Processing order | Parameter | Description |
|------------------|---------------------|----------------------------------------------------------------------------------------------------------------------------|
| 1 | * | Selects all devices that are not selected by high-level redirection. Note: by design, * does not work for USB web cams. |
| | {Device class GUID} | Selects all devices that match the specified device setup class. |
| | USB\InstanceID | Selects a specific USB device with the given instance ID. |
| 2 | -USB\InstanceID | Removes the redirection settings for the specified device. |

## <a name="redirecting-a-usb-device-by-using-the-device-class-guid"></a>Redirecting a USB device by using the device class GUID

There are two ways to find the GUID that can be used for device class redirection. The first option is to use the [System-Defined Device Setup Classes Available to Vendors](https://msdn.microsoft.com/library/windows/hardware/ff553426.aspx). Choose the class that most closely matches the device attached to the local computer. For digital cameras, this might be an imaging device class or a video capture device class.

You will need to experiment with the device classes to find the class GUID that works with the locally attached USB device (in our case, the web cam). A better way, the second option, is to follow these steps to find the specific device's class GUID:

1. Open Device Manager, find the device to be redirected, right-click it, and then open its properties.

	![Open Device Manager](./media/remoteapp-usbredir/ra-devicemanager.png)

2. On the **Details** tab, choose the **Class Guid** property. The value that appears is the class GUID for that type of device.

	![Camera properties](./media/remoteapp-usbredir/ra-classguid.png)

3. Use the Class Guid value to redirect devices that match it. For example:

		Set-AzureRemoteAppCollection -CollectionName <collection name> -CustomRdpProperty "nusbdevicestoredirect:s:<Class Guid value>"

You can combine multiple device redirections in the same cmdlet. For example, to redirect local storage and a USB web cam, the cmdlet looks like this:

		Set-AzureRemoteAppCollection -CollectionName <collection name> -CustomRdpProperty "drivestoredirect:s:*`nusbdevicestoredirect:s:<Class Guid value>"

When you configure device redirection by class GUID, all devices that match the class GUID in the specified collection are redirected. For example, if there are several computers on the local network that have the same USB web cams, you can run a single cmdlet to redirect all of the web cams.

## <a name="redirecting-a-usb-device-by-using-the-device-instance-id"></a>Redirecting a USB device by using the device instance ID

If you want finer-grained control and want to manage redirection per device, you can use the **USB\InstanceID** redirection parameter.

The hardest part of this method is finding the USB device instance ID. You need access to the computer and to the specific USB device. Then follow these steps:

1. Enable device redirection in the Remote Desktop session, as described in [How can I use my devices and resources in a Remote Desktop session?](http://windows.microsoft.com/en-us/windows7/How-can-I-use-my-devices-and-resources-in-a-Remote-Desktop-session)

2. Open a Remote Desktop Connection and click **Show Options**.

3. Click **Save As** to save the current connection settings to an RDP file.

	![Save the settings as an RDP file](./media/remoteapp-usbredir/ra-saveasrdp.png)

4. Choose a file name and location, for example "MyConnection.rdp" and "This PC\Documents", and save the file.

5. Open the MyConnection.rdp file in a text editor and find the instance ID of the device that you want to redirect.

Now use the instance ID in the following cmdlet:

	Set-AzureRemoteAppCollection -CollectionName <collection name> -CustomRdpProperty "nusbdevicestoredirect:s: USB\<Device InstanceID value>"

### <a name="help-us-help-you"></a>Help us help you

Did you know that, in addition to rating this article and leaving comments below, you can make changes to the article yourself? Something missing? Something wrong? Did I write something that is just confusing? Scroll up and click **Edit on GitHub** to make changes. These go out for review, and as soon as we can merge them in, you will see your changes and improvements here.
93.869048
586
0.737223
por_Latn
0.999043
2f478c56d871eeb40db298aa4a0e9672aec3c5d1
153
md
Markdown
content/items/dragonfly_spawn_egg.md
BiomeMakeover/biomemakeover.github.io
4609f18f2bcf81364324f303ce9fa093d95d3012
[ "MIT" ]
null
null
null
content/items/dragonfly_spawn_egg.md
BiomeMakeover/biomemakeover.github.io
4609f18f2bcf81364324f303ce9fa093d95d3012
[ "MIT" ]
null
null
null
content/items/dragonfly_spawn_egg.md
BiomeMakeover/biomemakeover.github.io
4609f18f2bcf81364324f303ce9fa093d95d3012
[ "MIT" ]
null
null
null
---
title: Dragonfly Spawn Egg
item: "dragonfly_spawn_egg"
---
{% assign it = site.data.items[page.item] %}
{% include item_template.liquid item=it %}
17
44
0.69281
eng_Latn
0.640228
2f4892d357e9f6eecd196eac26c2e37d52d873e7
1,492
md
Markdown
README.md
azzlack/owin.themable-errorpage
f15aa20c4b7ee8d7fdd470340e0669286d376a3c
[ "Apache-2.0" ]
null
null
null
README.md
azzlack/owin.themable-errorpage
f15aa20c4b7ee8d7fdd470340e0669286d376a3c
[ "Apache-2.0" ]
null
null
null
README.md
azzlack/owin.themable-errorpage
f15aa20c4b7ee8d7fdd470340e0669286d376a3c
[ "Apache-2.0" ]
null
null
null
# owin.themable-errorpage

Themable error page for OWIN, based on `Microsoft.Owin.Diagnostics`

### Usage

#### Basic initialization

The themable error page is initialized like a normal OWIN middleware in `Startup.cs`. By default it will use the Razor file located at `~/Views/Shared/Error.cshtml`.

```csharp
public void Configuration(IAppBuilder app)
{
    ...

    app.UseThemableErrorPage();

    ...
}
```

#### Set available tabs

```csharp
public void Configuration(IAppBuilder app)
{
    ...

    app.UseThemableErrorPage(new ThemableErrorPageOptions<ErrorPageViewModel>()
    {
        ShowCookies = true,
        ShowHeaders = true,
        ShowQuery = true,
        ShowEnvironment = false,
        ShowExceptionDetails = false,
        ShowSourceCode = false
    });

    ...
}
```

#### Custom viewmodel and error page path

```csharp
public void Configuration(IAppBuilder app)
{
    ...

    app.UseThemableErrorPage(new ThemableErrorPageOptions<FriendlyErrorPageViewModel>()
    {
        ShowCookies = true,
        ShowHeaders = true,
        ShowQuery = true,
        ShowEnvironment = false,
        ShowExceptionDetails = false,
        ShowSourceCode = false,
        ErrorPagePath = "Views/Shared/Error.cshtml",
        ConfigureViewModel = (x) =>
        {
            x.Debug = true;
            x.Deployment = "STAGING";
            x.FileVersion = "1.0.0.12-build56";
            x.Version = "1.0.0";

            return x;
        }
    });

    ...
}
```
23.68254
87
0.61059
kor_Hang
0.389567
2f48b5999d252f07a601b0a9f9458ec57439e780
6,882
md
Markdown
repos/mysql/remote/8.0.md
LaudateCorpus1/repo-info
f1e38ecb932245ea05e587645cc67cbea8a538b6
[ "Apache-2.0" ]
1
2021-08-14T22:17:05.000Z
2021-08-14T22:17:05.000Z
repos/mysql/remote/8.0.md
LaudateCorpus1/repo-info
f1e38ecb932245ea05e587645cc67cbea8a538b6
[ "Apache-2.0" ]
1
2021-08-14T22:16:53.000Z
2021-08-14T22:16:53.000Z
repos/mysql/remote/8.0.md
LaudateCorpus1/repo-info
f1e38ecb932245ea05e587645cc67cbea8a538b6
[ "Apache-2.0" ]
null
null
null
## `mysql:8.0` ```console $ docker pull mysql@sha256:8b928a5117cf5c2238c7a09cd28c2e801ac98f91c3f8203a8938ae51f14700fd ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: 1 - linux; amd64 ### `mysql:8.0` - linux; amd64 ```console $ docker pull mysql@sha256:516b92a7ccf2340c1a696a7ad2de1784393d0876d042cc4913bc33fb3f455a75 ``` - Docker Version: 20.10.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **150.6 MB (150592914 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:c60d96bd2b771a8e3cae776e02e55ae914a6641139d963defeb3c93388f61707` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["mysqld"]` ```dockerfile # Thu, 22 Jul 2021 00:45:43 GMT ADD file:45f5dfa135c848a348382413cb8b66a3b1dac3276814fbbe4684b39101d1b148 in / # Thu, 22 Jul 2021 00:45:44 GMT CMD ["bash"] # Thu, 22 Jul 2021 09:45:43 GMT RUN groupadd -r mysql && useradd -r -g mysql mysql # Thu, 22 Jul 2021 09:45:49 GMT RUN apt-get update && apt-get install -y --no-install-recommends gnupg dirmngr && rm -rf /var/lib/apt/lists/* # Thu, 22 Jul 2021 09:45:49 GMT ENV GOSU_VERSION=1.12 # Thu, 22 Jul 2021 09:45:58 GMT RUN set -eux; savedAptMark="$(apt-mark showmanual)"; apt-get update; apt-get install -y --no-install-recommends ca-certificates wget; rm -rf /var/lib/apt/lists/*; dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; gpgconf --kill all; rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; apt-mark auto '.*' > /dev/null; [ -z "$savedAptMark" ] || apt-mark manual 
$savedAptMark > /dev/null; apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; chmod +x /usr/local/bin/gosu; gosu --version; gosu nobody true # Thu, 22 Jul 2021 09:45:59 GMT RUN mkdir /docker-entrypoint-initdb.d # Thu, 22 Jul 2021 09:46:06 GMT RUN apt-get update && apt-get install -y --no-install-recommends pwgen openssl perl xz-utils && rm -rf /var/lib/apt/lists/* # Thu, 22 Jul 2021 09:46:09 GMT RUN set -ex; key='A4A9406876FCBD3C456770C88C718D3B5072E1F5'; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver keyserver.ubuntu.com --recv-keys "$key"; gpg --batch --export "$key" > /etc/apt/trusted.gpg.d/mysql.gpg; gpgconf --kill all; rm -rf "$GNUPGHOME"; apt-key list > /dev/null # Thu, 22 Jul 2021 09:46:09 GMT ENV MYSQL_MAJOR=8.0 # Thu, 22 Jul 2021 09:46:09 GMT ENV MYSQL_VERSION=8.0.26-1debian10 # Thu, 22 Jul 2021 09:46:10 GMT RUN echo 'deb http://repo.mysql.com/apt/debian/ buster mysql-8.0' > /etc/apt/sources.list.d/mysql.list # Thu, 22 Jul 2021 09:46:26 GMT RUN { echo mysql-community-server mysql-community-server/data-dir select ''; echo mysql-community-server mysql-community-server/root-pass password ''; echo mysql-community-server mysql-community-server/re-root-pass password ''; echo mysql-community-server mysql-community-server/remove-test-db select false; } | debconf-set-selections && apt-get update && apt-get install -y mysql-community-client="${MYSQL_VERSION}" mysql-community-server-core="${MYSQL_VERSION}" && rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/mysql && mkdir -p /var/lib/mysql /var/run/mysqld && chown -R mysql:mysql /var/lib/mysql /var/run/mysqld && chmod 1777 /var/run/mysqld /var/lib/mysql # Thu, 22 Jul 2021 09:46:27 GMT VOLUME [/var/lib/mysql] # Thu, 22 Jul 2021 09:46:27 GMT COPY dir:2e040acc386ebd23b8571951a51e6cb93647df091bc26159b8c757ef82b3fcda in /etc/mysql/ # Thu, 22 Jul 2021 09:46:28 GMT COPY file:345a22fe55d3e6783a17075612415413487e7dba27fbf1000a67c7870364b739 in /usr/local/bin/ # Thu, 22 Jul 2021 09:46:28 GMT 
RUN ln -s usr/local/bin/docker-entrypoint.sh /entrypoint.sh # backwards compat # Thu, 22 Jul 2021 09:46:29 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Thu, 22 Jul 2021 09:46:29 GMT EXPOSE 3306 33060 # Thu, 22 Jul 2021 09:46:29 GMT CMD ["mysqld"] ``` - Layers: - `sha256:33847f680f63fb1b343a9fc782e267b5abdbdb50d65d4b9bd2a136291d67cf75` Last Modified: Thu, 22 Jul 2021 00:50:35 GMT Size: 27.1 MB (27145795 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:5cb67864e624cb9385283d9c15d7d63cb2df3695df62f54616ceba589fb37ae0` Last Modified: Thu, 22 Jul 2021 09:48:50 GMT Size: 1.7 KB (1735 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:1a2b594783f5615223f4e91e8b6cfd89ac66aa5d678fb9296a6390cd64264f1c` Last Modified: Thu, 22 Jul 2021 09:48:51 GMT Size: 4.2 MB (4179259 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:b30e406dd9250eb8283fcf316c29450eae75eddbc22a3b05c1c67cca904bb879` Last Modified: Thu, 22 Jul 2021 09:48:48 GMT Size: 1.4 MB (1419410 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:48901e306e4c36bbb20c354393adb4e37707cc4313e4618c2dc2a5532b01d17d` Last Modified: Thu, 22 Jul 2021 09:48:47 GMT Size: 149.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:603d2b7147fdf54be4906fa8d2046e88d148de73e65b86f54b910f03a0481e78` Last Modified: Thu, 22 Jul 2021 09:48:52 GMT Size: 13.4 MB (13447526 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:802aa684c1c4a9a004fb5cde0c6f0611f8da574510da43e1aac509a7990922cf` Last Modified: Thu, 22 Jul 2021 09:48:47 GMT Size: 1.9 KB (1874 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:715d3c143a062c12a411d2917d5119eda4aeccf9bdcd316567a25528de9ba6a5` Last Modified: Thu, 22 Jul 2021 09:48:44 GMT Size: 224.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:6978e1b7a5113b48511c0757d860e7c35a15ae349191e6e9620cff6cfb446e3a` Last Modified: Thu, 22 Jul 2021 09:49:07 GMT 
Size: 104.4 MB (104390438 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:f0d78b0ac1be7141ac803e34695bebc7a7e8a291caf36e92907e3a87de2bac10` Last Modified: Thu, 22 Jul 2021 09:48:44 GMT Size: 843.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:35a94d251ed180a36bd9b75feb9b5bc15a215cce80c6c10502897bda639c0274` Last Modified: Thu, 22 Jul 2021 09:48:44 GMT Size: 5.5 KB (5540 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip - `sha256:36f75719b1a9b020a38ba3ffc0ad8b26ab97e8d2d51a4c62e34d7db787f9e689` Last Modified: Thu, 22 Jul 2021 09:48:45 GMT Size: 121.0 B MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
58.820513
981
0.744696
yue_Hant
0.210049
2f48b7b58853df0938daaa231c07c8c3c1430838
600
md
Markdown
LICENSE.md
playable-cn/phpPgAdmin6
caf1d3c4d346b9de6250f21a0799af5955fc7018
[ "MIT", "BSD-3-Clause" ]
33
2017-07-22T13:33:59.000Z
2021-04-23T14:29:19.000Z
LICENSE.md
playable-cn/phpPgAdmin6
caf1d3c4d346b9de6250f21a0799af5955fc7018
[ "MIT", "BSD-3-Clause" ]
311
2017-07-21T03:31:31.000Z
2022-03-26T07:02:41.000Z
LICENSE.md
playable-cn/phpPgAdmin6
caf1d3c4d346b9de6250f21a0799af5955fc7018
[ "MIT", "BSD-3-Clause" ]
17
2017-10-06T02:33:18.000Z
2021-11-17T09:05:55.000Z
# Licenses

This project is distributed under the licenses MIT OR GPL-2.0-or-later OR BSD-3-Clause. You can choose any of them.

- Distributed under an MIT license: See [LICENSE.MIT](LICENSE.MIT)
- This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. See [LICENSE.GPL-2.0-or-later](LICENSE.GPL-2.0-or-later)
- Released also under BSD license (BSD-3-Clause). See [LICENSE.BSD-3-Clause](LICENSE.BSD-3-Clause)
50
299
0.76
eng_Latn
0.986155
2f49b4bb423942b0e4f615d83b6621d76a9d9561
728
md
Markdown
_pages/publications.md
alpatania/alpatania.github.io
4a7a9927747c8a7e0ac434dea4c692f4f2103c0c
[ "MIT" ]
2
2020-04-22T15:51:42.000Z
2021-03-15T02:47:18.000Z
_pages/publications.md
alpatania/alpatania.github.io
4a7a9927747c8a7e0ac434dea4c692f4f2103c0c
[ "MIT" ]
null
null
null
_pages/publications.md
alpatania/alpatania.github.io
4a7a9927747c8a7e0ac434dea4c692f4f2103c0c
[ "MIT" ]
1
2018-12-28T19:44:50.000Z
2018-12-28T19:44:50.000Z
---
layout: archive
title: "Publications"
permalink: /publications/
bg_img: 'https://alpatania.github.io/images/bg_publications.png'
author_profile: true
---

{% include base_path %}

<ul>{% for post in site.publications reversed %}
  {% if post.label == "publications" %}
    {% include archive-single-cv.html %}
  {% endif %}
{% endfor %}</ul>

<p> Pre-prints </p>

<ul>{% for post in site.publications reversed %}
  {% if post.label == "pre-prints" %}
    {% include archive-single-cv.html %}
  {% endif %}
{% endfor %}</ul>

<p style="font-size:15px"> You can also find a full list of my articles on <u><a href="{{author.googlescholar}}">my Google Scholar profile</a>.</u></p>
25.103448
151
0.60989
eng_Latn
0.624827
2f4aa6edee87abe4597b88e7846a72661c8cebcd
736
md
Markdown
src/pages/ms/0147.md
MacbethJ/political-life
018f9a45b7a6ce1ee2f63841eb587bbfaa4d11bb
[ "MIT" ]
33
2019-01-08T02:52:22.000Z
2022-02-24T17:59:19.000Z
src/pages/ms/0147.md
MichaelZuo/political-life
018f9a45b7a6ce1ee2f63841eb587bbfaa4d11bb
[ "MIT" ]
9
2019-08-01T09:22:36.000Z
2022-01-04T06:50:04.000Z
src/pages/ms/0147.md
MichaelZuo/political-life
018f9a45b7a6ce1ee2f63841eb587bbfaa4d11bb
[ "MIT" ]
23
2018-09-22T19:57:05.000Z
2021-12-12T09:47:40.000Z
---
title: '24 September Saturday'
date: '1994-9-24'
---

Walked to Tiananmen Square. There is a festive atmosphere here, and National Day is approaching. The Tiananmen gate tower has been refurbished and the red walls repainted. Young Pioneers stood on the banks of the Jinshui River. There are big changes in the square: a large green belt has been laid out in its center. On the east side is a dragon made of flowers, on the west a phoenix; behind them is a pagoda made of flowers and a model ship named "Sea World". Crowds of people gathered in the square, beaming, taking photos and looking around. The big slogans on Chang'an Street have been put up, and national flags flutter in the wind. The Republic's birthday is almost here.
73.6
466
0.804348
zsm_Latn
0.678509
2f4ae2665c30eef091cb2d74ccf16aa22d96dab7
30,365
md
Markdown
docker-cloud/migration/cloud-to-kube-gke.md
xiaods/docker.github.io
978b1ac9645b2d223ec6c1057a36aeaaf567bcd1
[ "Apache-2.0" ]
2
2020-02-15T18:17:32.000Z
2021-06-16T03:48:28.000Z
docker-cloud/migration/cloud-to-kube-gke.md
xiaods/docker.github.io
978b1ac9645b2d223ec6c1057a36aeaaf567bcd1
[ "Apache-2.0" ]
1
2018-05-10T05:51:06.000Z
2018-05-10T05:51:06.000Z
docker-cloud/migration/cloud-to-kube-gke.md
xiaods/docker.github.io
978b1ac9645b2d223ec6c1057a36aeaaf567bcd1
[ "Apache-2.0" ]
null
null
null
--- description: How to migrate apps from Docker Cloud to GKE keywords: cloud, migration, kubernetes, google, gke title: Migrate Docker Cloud stacks to Google Kubernetes Engine --- ## GKE Kubernetes This page explains how to prepare your applications for migration from Docker Cloud to [Google Kubernetes Engine (GKE)](https://cloud.google.com/free/){: target="_blank" class="_"} clusters. GKE is a hosted Kubernetes service on Google Cloud Platform (GCP). It exposes standard Kubernetes APIs so that standard Kubernetes tools and apps run on it without needing to be reconfigured. At a high level, migrating your Docker Cloud applications requires that you: - **Build** a target environment (Kubernetes cluster on GKE). - **Convert** your Docker Cloud YAML stackfiles. - **Test** the converted YAML stackfiles in the new environment. - **Point** your application CNAMES to new service endpoints. - **Migrate** your applications from Docker Cloud to the new environment. To demonstrate, we **build** a target environment of GKE nodes, **convert** the Docker Cloud stackfile for [example-voting-app](https://github.com/dockersamples/example-voting-app){: target="_blank" class="_"} to a Kubernetes manifest, and **test** the manifest in the new environment to ensure that it is safe to migrate. > The actual process of migrating -- switching customers from your Docker Cloud applications to GKE applications -- will vary by application and environment. ## Voting-app example The Docker Cloud stack of our example voting application is defined in [dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}. This document explains how `dockercloud.yml` is converted to a Kubernetes YAML manifest file so that you have the tools to do the same for your applications. 
In the [dockercloud.yml](https://raw.githubusercontent.com/dockersamples/example-voting-app/master/dockercloud.yml){: target="_blank" class="_"}, the voting app is defined as a stack of six microservices: - **vote**: Web front-end that displays voting options - **redis**: In-memory k/v store that collects votes - **worker**: Stores votes in database - **db**: Persistent store for votes - **result**: Web server that pulls and displays results from database - **lb**: Container-based load balancer Votes are accepted with the `vote` service and stored in persistent backend database (`db`) with the help of services, `redis`, `worker`, and `lb`. The vote tally is displayed with the `result` service. ![image of voting app arch](images/votingapp-arch.png){:width="500px"} ## Migration prerequisites To complete the migration from Docker Cloud to Kubernetes on GKE, you need: - An active Google Cloud subscription with billing enabled. ## Build target environment Google Kubernetes Engine (GKE) is a managed Kubernetes service on the Google Cloud Platform (GCP). It takes care of all of the Kubernetes control plane management (the master nodes) -- delivering the control plane APIs, managing control plane HA, managing control plane upgrades, etc. You only need to look after worker nodes -- how many, the size and spec, where to deploy them, etc. High-level steps to build a working GKE cluster are: 1. Create a new GKE project. 2. Create a GKE cluster. 3. Connect to the GKE cluster. ### Create a new GKE project Everything in the Google Cloud Platform has to sit inside of a _project_. Let's create one. 1. Log in to the [Google Cloud Platform Console](https://console.cloud.google.com){: target="_blank" class="_"}. 2. Create a new project. Either: - Select **Create an empty project** from the home screen, or ... - Open **Select a project** from the top of the screen and click **+**. 3. Name the project and click **Create**. It may take a minute. 
> The examples in this document assume a project named, `proj-k8s-vote`. ### Create a GKE cluster In this section, we build a three-node cluster; your cluster should probably be based on the configuration of your Docker Cloud node cluster. Whereas Docker Cloud deploys work to all nodes in a cluster (managers and workers), _Kubernetes only deploys work to worker nodes_. This affects how you should size your cluster. If your Docker Cloud node cluster was working well with three managers and two workers of a particular size, you should probably size your GKE cluster to have five nodes of a similar size. > In Docker Cloud, to see the configuration of each of your clusters, select **Node Clusters** > _your_cluster_. Before continuing, ensure you know: - **Region and zone** in which you want to deploy your GKE cluster - **Number, size, and spec** of the worker nodes you want. To build: 1. Log into the [GCP Console](https://console.cloud.google.com){: target="_blank" class="_"}. 2. Select your project from **Select a project** at the top of the Console screen. 3. Click **Kubernetes Engine** from the left-hand menu. It may take a minute to start. 4. Click **Create Cluster**. 5. Configure the required cluster options: - **Name:** An arbitrary name for the cluster. - **Description:** An arbitrary description for the cluster. - **Location:** Determines if the Kubernetes control plane nodes (masters) are in a single availability zone or spread across availability zones within a GCP Region. - **Zone/Region:** The zone or region in which to deploy the cluster. - **Cluster version:** The Kubernetes version. You should probably use a 1.8.x or 1.9.x version. - **Machine type:** The type of GKE VM for the worker nodes. This should probably match your Docker Cloud node cluster. - **Node image:** The OS to run on each Kubernetes worker node. 
Use Ubuntu if you require NFS, glusterfs, Sysdig, or Debian packages, otherwise use a [COS (container-optimized OS)](https://cloud.google.com/container-optimized-os/).
   - **Size:** The number of _worker_ nodes that you want in the GKE cluster. It should probably match the _total_ number of nodes in your existing Docker Cloud node cluster (managers + workers).

   You should carefully consider the other configuration options; but most deployments should be OK with default values.

6. Click **Create**. It takes a minute or two for the cluster to create. Once the cluster is created, you can click its name to see more details.

### Connect to the GKE cluster

You can connect to your GKE cluster from the web-based [Google Cloud Shell](https://cloud.google.com/shell/){: target="_blank" class="_"}; but to do so from your laptop, or other local terminal, you must:

- Install and configure the `gcloud` CLI tool.
- Install the Kubernetes CLI (`kubectl`).
- Configure `kubectl` to connect to your cluster.

The `gcloud` tool is the command-line tool for interacting with the Google Cloud Platform. It is installed as part of the Google Cloud SDK.

1. Download and install the [Cloud SDK](https://cloud.google.com/sdk/){: target="_blank" class="_"} for your operating system.

2. Configure `gcloud` and follow all the prompts:

   ```
   $ gcloud init --console-only
   ```

   > Follow _all_ prompts, including the one to open a web browser and approve the requested authorizations. As part of the procedure you must copy and paste a code into the terminal window to authorize `gcloud`.

3. Install [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl):

   ```
   $ gcloud components list
   $ gcloud components install kubectl
   ```

   You can install `kubectl` with or without `gcloud`. If you have `kubectl` already installed, ensure that the current context is correct:

   ```
   $ kubectl config get-contexts
   $ kubectl config use-context <my_gke_context>
   ```

4. Configure `kubectl` to talk to your GKE cluster.
   - In GKE, click the **Connect** button at the end of the line representing your cluster.
   - Copy the long command and paste it into your local terminal window. Your command may differ.

   ```
   $ gcloud container clusters get-credentials clus-k8s-vote --zone europe-west2-c --project proj-k8s-vote
   Fetching cluster endpoint and auth data.
   kubeconfig entry generated for clus-k8s-vote.
   ```

5. Test the `kubectl` configuration:

   ```
   $ kubectl get nodes
   NAME                                           STATUS    ROLES     AGE       VERSION
   gke-clus-k8s-vote-default-pool-81bd226c-2jtp   Ready     <none>    1h        v1.9.2-gke.1
   gke-clus-k8s-vote-default-pool-81bd226c-mn4k   Ready     <none>    1h        v1.9.2-gke.1
   gke-clus-k8s-vote-default-pool-81bd226c-qjm2   Ready     <none>    1h        v1.9.2-gke.1
   ```

   If the values returned match your GKE cluster (number of nodes, age, and version), then you have successfully configured `kubectl` to manage your GKE cluster.

You now have a GKE cluster and have configured `kubectl` to manage it. Let's look at how to convert your Docker Cloud app into a Kubernetes app.

## Convert Docker Cloud stackfile

**In the following sections, we discuss each service definition separately, but you should group them into one manifest file with the `.yml` extension, for example, [k8s-vote.yml](#combined-manifest-k8s-vote.yml){: target="_blank" class="_"}.**

To prepare your applications for migration from Docker Cloud to Kubernetes, you must recreate your Docker Cloud stackfiles as Kubernetes _manifests_. Once you have each application converted, you can test and deploy.

Like Docker Cloud stackfiles, Kubernetes manifests are YAML files but usually longer and more complex.

> In Docker Cloud, to find the stackfiles for your existing applications, you can either: (1) Select **Stacks** > _your_stack_ > **Edit**, or (2) Select **Stacks** > _your_stack_ and scroll down.

In the Docker Cloud stackfile, the six Docker _services_ in our `example-voting-app` stack are defined as **top-level keys**:

```
db:
redis:
result:
lb:
vote:
worker:
```

Kubernetes applications are built from objects (such as [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/){: target="_blank" class="_"}) and object abstractions (such as [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/){: target="_blank" class="_"} and [Services](https://kubernetes.io/docs/concepts/services-networking/service/){: target="_blank" class="_"}).

For each _Docker service_ in our voting app stack, we create one Kubernetes Deployment and one _Kubernetes Service_. Each Kubernetes Deployment spawns Pods. A Pod is a set of containers and also the smallest unit of work in Kubernetes.

> A [Docker service](https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/){: target="_blank" class="_"} is one component of an application that is generated from one image.
> A [Kubernetes service](https://kubernetes.io/docs/concepts/services-networking/service/){: target="_blank" class="_"} is a networking construct that load balances Pods behind a proxy.

A Kubernetes Deployment defines the application "service" -- which Docker image to use and the runtime instructions (which container ports to map and the container restart policy). The Deployment is also where you define rolling updates, rollbacks, and other advanced features.

A Kubernetes Service object is an abstraction that provides stable networking for a set of Pods. A Service is where you can register a cluster-wide DNS name and virtual IP (VIP) for accessing the Pods, and also create cloud-native load balancers.

This diagram shows four Pods deployed as part of a single Deployment. Each Pod is labeled as “app=vote”.
The Deployment has a label selector, “app=vote”, and this combination of labels and label selector is what allows the Deployment object to manage Pods (create, terminate, scale, update, roll back, and so on). Likewise, the Service object selects Pods on the same label (“app=vote”), which allows the Service to provide a stable network abstraction (IP and DNS name) for the Pods.

![Voting app vote Kube pods](images/votingapp-kube-pods-vote.png){:width="500px"}

### db service

> Consider using a hosted database service for production databases. This is something that, ideally, should not change as part of your migration away from Docker Cloud stacks.

**Docker Cloud stackfile**: The Docker Cloud stackfile defines an image and a restart policy for the `db` service.

```
db:
  image: 'postgres:9.4'
  restart: always
```

**Kubernetes manifest**: The Kubernetes translation defines two object types or "kinds": a _Deployment_ and a _Service_ (separated by three dashes `---`). Each object includes an API version, metadata (labels and name), and a `spec` field for object configuration (that is, the Deployment Pods and the Service).

```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - image: postgres:9.4
        name: db
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None
  ports:
  - port: 55555
    targetPort: 0
  selector:
    app: db
```

About the Kubernetes fields in general:

- `apiVersion` sets the schema version for Kubernetes to use when managing the object.
- `kind` defines the object type. In this example, we only define Deployments and Services but there are many others.
- `metadata` assigns a name and set of labels to the object.
- `spec` is where we configure the object. In a Deployment, `spec` defines the Pods to deploy.
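
Stripped of the `db` specifics, every Deployment in this guide follows the same skeleton. The sketch below is illustrative only -- the `example` names are placeholders, not part of the voting app:

```
apiVersion: apps/v1beta1     # schema version for this object type
kind: Deployment             # the object type
metadata:
  name: example              # object name
  labels:
    app: example             # object labels
spec:
  selector:
    matchLabels:
      app: example           # which Pods this Deployment manages
  template:                  # the Pod definition
    metadata:
      labels:
        app: example         # Pod labels -- must match the selector above
    spec:
      containers:
      - image: example:latest
        name: example
      restartPolicy: Always
```

Once you can read this shape, each of the service manifests that follow differs only in names, images, ports, and replica counts.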

It is important that **Pod labels** (`Deployment.spec.template.metadata.labels`) match both the Deployment label selector (`Deployment.spec.selector.matchLabels`) and the Service label selector (`Service.spec.selector`). This is how the Deployment object knows which Pods to manage and how the Service object knows which Pods to provide networking for.

> Deployment and Service label selectors have different fields in the YAML file because Deployments use [set-based selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#set-based-requirement){: target="_blank" class="_"} and Services use [equality-based selectors](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#equality-based-requirement){: target="_blank" class="_"}.

For the `db` Deployment, we define a container called `db` based on the `postgres:9.4` Docker image, and define a restart policy. All Pods created by this Deployment have the label `app=db`, and the Deployment selects on them.

The `db` Service is a “headless” service (`clusterIP: None`). Headless services are useful when you want a stable DNS name but do not need the cluster-wide VIP. They create a stable DNS record, but instead of creating a VIP, they map the DNS name to multiple [A records](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records){: target="_blank" class="_"} -- one for each Pod associated with the Service.

The Service’s label selector (`Service.spec.selector`) has the value "app=db". This means the Service provides stable networking and load balancing for all Pods on the cluster labeled as “app=db”. Pods defined in the Deployment section are all labeled as "app=db". It is this mapping between the Service label selector and the Pod labels that tells the Service object which Pods to provide networking for.
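
As a quick reference, here are just the three label fields from the `db` manifest above that must carry the same value. This is a trimmed excerpt for illustration, not a complete manifest:

```
# Deployment (excerpt)
spec:
  selector:
    matchLabels:
      app: db        # (1) Deployment label selector
  template:
    metadata:
      labels:
        app: db      # (2) Pod labels -- must match (1) and (3)
---
# Service (excerpt)
spec:
  selector:
    app: db          # (3) Service label selector
```

If (1) and (2) disagree, the Deployment cannot manage its Pods; if (3) disagrees, the Service has no endpoints. The same three-way match appears in every service that follows.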

### redis service

**Docker Cloud stackfile**:

```
redis:
  image: 'redis:latest'
  restart: always
```

**Kubernetes manifest**:

```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: redis
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis:alpine
        name: redis
        ports:
        - containerPort: 6379
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis
  name: redis
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
```

Here, the Deployment object deploys a Pod from the `redis:alpine` image and sets the container port to `6379`. It also sets the `labels` for the Pods to the same value ("app=redis") as the Deployment’s label selector to tie the two together.

The Service object defines a cluster-wide DNS mapping for the name "redis" on port 6379. This means that traffic for `tcp://redis:6379` is routed to this Service and is load balanced across all Pods on the cluster with the "app=redis" label. The Service is accessed on the cluster-wide `port` and forwards to the Pods on the `targetPort`. Again, the label selector for the Service and the labels for the Pods are what tie the two together.

The diagram shows traffic intended for `tcp://redis:6379` being sent to the redis Service and then load balanced across all Pods that match the Service label selector.

![Voting app redis Kube pods](images/votingapp-kube-pods-redis.png){:width="500px"}

### lb service

The Docker Cloud stackfile defines an `lb` service to balance traffic to the vote service. On GKE, this is not necessary because Kubernetes lets you define a Service object with `type: LoadBalancer`, which creates a native GCP load balancer to do this job. We demonstrate this in the `vote` section.

### vote service

The Docker Cloud stackfile for the `vote` service defines an image, a restart policy, and a specific number of Pods (replicas: 5). It also enables the Docker Cloud `autoredeploy` feature.
We can tell that it listens on port 80 because the Docker Cloud `lb` service forwards traffic to it on port 80; we can also inspect its image.

> **Autoredeploy options**: Autoredeploy is a Docker Cloud feature that automatically updates running applications every time you push an image. It is not native to Docker CE, AKS or GKE, but you may be able to regain it with Docker Cloud auto-builds, using web-hooks from the Docker Cloud repository for your image back to the CI/CD pipeline in your dev/staging/production environment.

**Docker Cloud stackfile**:

```
vote:
  autoredeploy: true
  image: 'docker/example-voting-app-vote:latest'
  restart: always
  target_num_containers: 5
```

**Kubernetes manifest**:

```
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: vote
  name: vote
spec:
  selector:
    matchLabels:
      app: vote
  replicas: 5
  template:
    metadata:
      labels:
        app: vote
    spec:
      containers:
      - image: docker/example-voting-app-vote:latest
        name: vote
        ports:
        - containerPort: 80
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: vote
  name: vote
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: vote
```

Again, we ensure that both Deployment and Service objects can find the Pods with matching labels ("app=vote"). We also set the number of Pod replicas to five (`Deployment.spec.replicas`) so that it matches the `target_num_containers` from the Docker Cloud stackfile.

We define the Service with `type: LoadBalancer`. This creates a native GCP load balancer with a stable, publicly routable IP for the service. It also maps port 80 so that traffic hitting port 80 is load balanced across all five Pod replicas in the cluster. (This is why the `lb` service from the Docker Cloud app is not needed.)

### worker service

Like the `vote` service, the `worker` service defines an image, a restart policy, and a specific number of Pods (replicas: 3). It also defines the Docker Cloud `autoredeploy` policy (which is not supported in GKE).
> **Autoredeploy options**: Autoredeploy is a Docker Cloud feature that automatically updates running applications every time you push an image. It is not native to Docker CE, AKS or GKE, but you may be able to regain it with Docker Cloud auto-builds, using web-hooks from the Docker Cloud repository for your image back to the CI/CD pipeline in your dev/staging/production environment. **Docker Cloud stackfile**: ``` worker: autoredeploy: true image: 'docker/example-voting-app-worker:latest' restart: always target_num_containers: 3 ``` **Kubernetes manifest**: ``` apiVersion: apps/v1beta1 kind: Deployment metadata: labels: app: worker name: worker spec: selector: matchLabels: app: worker replicas: 3 template: metadata: labels: app: worker spec: containers: - image: docker/example-voting-app-worker:latest name: worker restartPolicy: Always --- apiVersion: v1 kind: Service metadata: labels: app: worker name: worker spec: clusterIP: None ports: - port: 55555 targetPort: 0 selector: app: worker ``` Again, we ensure that both Deployment and Service objects can find the Pods with matching labels ("app=worker"). The `worker` Service (like `db`) is another ["headless" service](https://kubernetes.io/docs/concepts/services-networking/service/#headless-services){: target="_blank" class="_"} where a DNS name is created and mapped to individual [A records](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records){: target="_blank" class="_"} for each Pod rather than a cluster-wide VIP. 
### result service **Docker Cloud stackfile**: ``` result: autoredeploy: true image: 'docker/example-voting-app-result:latest' ports: - '80:80' restart: always ``` **Kubernetes manifest**: ``` apiVersion: apps/v1beta1 kind: Deployment metadata: labels: app: result name: result spec: selector: matchLabels: app: result template: metadata: labels: app: result spec: containers: - image: docker/example-voting-app-result:latest name: result ports: - containerPort: 80 restartPolicy: Always --- apiVersion: v1 kind: Service metadata: labels: app: result name: result spec: type: LoadBalancer ports: - port: 80 selector: app: result ``` The Deployment section defines the usual names, labels and container spec. The `result` Service (like the `vote` Service) defines a GCP-native load balancer to distribute external traffic to the cluster on port 80. ### Combined manifest k8s-vote.yml You can combine all Deployments and Services in a single YAML file, or have individual YAML files per Docker Cloud service. The choice is yours, but it's usually easier to deploy and manage one file. > You should manage your Kubernetes manifest files the way you manage your application code -- checking them in and out of version control repositories etc. Here, we combine all the Kubernetes definitions explained above into one YAML file that we call, `k8s-vote.yml`. 
``` apiVersion: apps/v1beta1 kind: Deployment metadata: name: db labels: app: db spec: selector: matchLabels: app: db template: metadata: labels: app: db spec: containers: - image: postgres:9.4 name: db restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: db spec: clusterIP: None ports: - port: 55555 targetPort: 0 selector: app: db --- apiVersion: apps/v1beta1 kind: Deployment metadata: labels: app: redis name: redis spec: selector: matchLabels: app: redis template: metadata: labels: app: redis spec: containers: - image: redis:alpine name: redis ports: - containerPort: 6379 restartPolicy: Always --- apiVersion: v1 kind: Service metadata: labels: app: redis name: redis spec: ports: - port: 6379 targetPort: 6379 selector: app: redis --- apiVersion: apps/v1beta1 kind: Deployment metadata: labels: app: vote name: vote spec: selector: matchLabels: app: vote replicas: 5 template: metadata: labels: app: vote spec: containers: - image: docker/example-voting-app-vote:latest name: vote ports: - containerPort: 80 restartPolicy: Always --- apiVersion: v1 kind: Service metadata: labels: app: vote name: vote spec: type: LoadBalancer ports: - port: 80 selector: app: vote --- apiVersion: apps/v1beta1 kind: Deployment metadata: labels: app: worker name: worker spec: selector: matchLabels: app: worker replicas: 3 template: metadata: labels: app: worker spec: containers: - image: docker/example-voting-app-worker:latest name: worker restartPolicy: Always --- apiVersion: v1 kind: Service metadata: labels: app: worker name: worker spec: clusterIP: None ports: - port: 55555 targetPort: 0 selector: app: worker --- apiVersion: apps/v1beta1 kind: Deployment metadata: labels: app: result name: result spec: selector: matchLabels: app: result template: metadata: labels: app: result spec: containers: - image: docker/example-voting-app-result:latest name: result ports: - containerPort: 80 restartPolicy: Always --- apiVersion: v1 kind: Service metadata: labels: app: result name: 
    result
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: result
```

Save the Kubernetes manifest file (as `k8s-vote.yml`) and check it into version control.

## Test the app on GKE

Before migrating, you should thoroughly test each new Kubernetes manifest on a GKE cluster. Healthy testing includes _deploying_ the application with the new manifest file, performing _scaling_ operations, increasing _load_, running _failure_ scenarios, and doing _updates_ and _rollbacks_. These tests are specific to each of your applications. You should also manage your manifest files in a version control system.

The following steps explain how to deploy your app from the Kubernetes manifest file and verify that it is running. The steps are based on the sample application used throughout this guide, but the general commands should work for any app.

> Run from a [Google Cloud Shell](https://cloud.google.com/shell/){: target="_blank" class="_"} or local terminal with `kubectl` configured to talk to your GKE cluster.

1. Verify that your shell/terminal is configured to talk to your GKE cluster. If the output matches your cluster, you're ready to proceed with the next steps.

   ```
   $ kubectl get nodes
   NAME                                           STATUS    ROLES     AGE       VERSION
   gke-clus-k8s-vote-default-pool-81bd226c-2jtp   Ready     <none>    1h        v1.9.2-gke.1
   gke-clus-k8s-vote-default-pool-81bd226c-mn4k   Ready     <none>    1h        v1.9.2-gke.1
   gke-clus-k8s-vote-default-pool-81bd226c-qjm2   Ready     <none>    1h        v1.9.2-gke.1
   ```

2. Deploy your Kubernetes application to your cluster. The Kubernetes manifest here is `k8s-vote.yml` and lives in the current directory. To use a different manifest, substitute `k8s-vote.yml` with the name of your manifest file.

   ```
   $ kubectl create -f k8s-vote.yml
   deployment "db" created
   service "db" created
   deployment "redis" created
   service "redis" created
   deployment "vote" created
   service "vote" created
   deployment "worker" created
   service "worker" created
   deployment "result" created
   service "result" created
   ```

3.
Check the status of the app (both Deployments and Services): ``` $ kubectl get deployments NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE db 1 1 1 1 43s redis 1 1 1 1 43s result 1 1 1 1 43s vote 5 5 5 5 43s worker 3 3 3 3 43s $ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE db ClusterIP None <none> 55555/TCP 48s kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 6h redis ClusterIP 10.0.168.188 <none> 6379/TCP 48s result LoadBalancer 10.0.76.157 <pending> 80:31033/TCP 47s vote LoadBalancer 10.0.244.254 <pending> 80:31330/TCP 48s worker ClusterIP None <none> 55555/TCP 48s ``` Both `LoadBalancer` Services are `pending` because it takes a minute or two to provision a GCP load balancer. You can run `kubectl get svc --watch` to see when they are ready. Once provisioned, the output looks like this (with different external IPs): ``` $ kubectl get services <Snip> result LoadBalancer 10.0.76.157 52.174.195.232 80:31033/TCP 7m vote LoadBalancer 10.0.244.254 52.174.196.199 80:31330/TCP 8m ``` 4. Test that the application works in your new environment. For example, the voting app exposes two web front-ends -- one for casting votes and the other for viewing results: - Copy/paste the `EXTERNAL-IP` value for the `vote` service into a browser and cast a vote. - Copy/paste the `EXTERNAL-IP` value for the `result` service into a browser and ensure your vote registered. If you had a CI/CD pipeline with automated tests and deployments for your Docker Cloud stacks, you should build, test, and implement one for each application on GKE. > You can extend your Kubernetes manifest file with advanced features to perform rolling updates and simple rollbacks. But you should not do this until you have confirmed your application is working with the simple manifest file. ## Migrate apps from Docker Cloud > Remember to point your application CNAMES to new service endpoints. How you migrate your applications is unique to your environment and applications. 
- Plan with all developers and operations teams. - Plan with customers. - Plan with owners of other applications that interact with your Docker Cloud app. - Plan a rollback strategy if problems occur. Once your migration is in process, check that everything is working as expected. Ensure that users are hitting the new application on the GKE infrastructure and getting expected results. > Think before you terminate stacks and clusters > > Do not terminate your Docker Cloud stacks or node clusters until some time after the migration has been signed off as successful. If there are problems, you may need to roll back and try again. {: .warning}

---
title: 'IDebugExpressionEvaluator::SetRegistryRoot | Microsoft Docs'
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-sdk
ms.topic: reference
f1_keywords:
- IDebugExpressionEvaluator::SetRegistryRoot
helpviewer_keywords:
- IDebugExpressionEvaluator::SetRegistryRoot method
ms.assetid: 790886d8-1975-4d3c-9a75-cd86c1faf4ca
caps.latest.revision: 12
ms.author: gregvanl
manager: jillfra
---
# <a name="idebugexpressionevaluatorsetregistryroot"></a>IDebugExpressionEvaluator::SetRegistryRoot
[!INCLUDE[vs2017banner](../../../includes/vs2017banner.md)]

This method sets the registry root. It is used for side-by-side debugging.

## <a name="syntax"></a>Syntax

```cpp
HRESULT SetRegistryRoot (
   LPCOLESTR ustrRegistryRoot
);
```

```csharp
int SetRegistryRoot(
   string ustrRegistryRoot
);
```

#### <a name="parameters"></a>Parameters
`ustrRegistryRoot`
[in] The new registry root.

## <a name="return-value"></a>Return Value
If successful, returns `S_OK`; otherwise, returns an error code.

## <a name="remarks"></a>Remarks
The specified registry root is typically set when the expression evaluator is first instantiated, and points to the registry key for a specific version of Visual Studio (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\VisualStudio\\*x.y*, where *x.y* is a version number).

## <a name="see-also"></a>See Also
[IDebugExpressionEvaluator](../../../extensibility/debugger/reference/idebugexpressionevaluator.md)
--- name: Feature request about: Suggest an idea to make FreeSewing better title: Feature request labels: "\U0001F48E enhancement" assignees: '' --- **What is it that you would like to see happen?** A clear and concise description of what you want to happen. **Are you a FreeSewing patron?** - [ ] Yes, I am :hugs: - [ ] No, I am not :thinking: **Additional context** Add any other context or screenshots about the feature request here.
--- layout: post date: 2016-10-12 title: "The Dessy Group Dessy - Alfred Sung Style D632 Sleeveless Knee-Length Aline/Princess" category: The Dessy Group tags: [The Dessy Group,Aline/Princess ,Sweetheart,Knee-Length,Sleeveless] --- ### The Dessy Group Dessy - Alfred Sung Style D632 Just **$279.99** ### Sleeveless Knee-Length Aline/Princess <table><tr><td>BRANDS</td><td>The Dessy Group</td></tr><tr><td>Silhouette</td><td>Aline/Princess </td></tr><tr><td>Neckline</td><td>Sweetheart</td></tr><tr><td>Hemline/Train</td><td>Knee-Length</td></tr><tr><td>Sleeve</td><td>Sleeveless</td></tr></table> <a href="https://www.readybrides.com/en/the-dessy-group/13021-the-dessy-group-alfred-sung-style-d632.html"><img src="//img.readybrides.com/29549/the-dessy-group-alfred-sung-style-d632.jpg" alt="Dessy - Alfred Sung Style D632" style="width:100%;" /></a> <!-- break --><a href="https://www.readybrides.com/en/the-dessy-group/13021-the-dessy-group-alfred-sung-style-d632.html"><img src="//img.readybrides.com/29548/the-dessy-group-alfred-sung-style-d632.jpg" alt="Dessy - Alfred Sung Style D632" style="width:100%;" /></a> Buy it: [https://www.readybrides.com/en/the-dessy-group/13021-the-dessy-group-alfred-sung-style-d632.html](https://www.readybrides.com/en/the-dessy-group/13021-the-dessy-group-alfred-sung-style-d632.html)
# Fuzzy-Explorer-on-Terminal Powerful CUI Explorer. Version.0.1.3-beta ## What is this ![result](https://github.com/ShotaroKataoka/Fuzzy-Terminal-Explorer/blob/media/test.gif) ## Install Install command: ```bash git clone git@github.com:Fuzzy-Explorer/Fuzzy-Explorer-on-Terminal.git ~/.fet . ~/.fet/install ``` And, you can use Fuzzy-Explorer-on-Terminal with `. ~/.fet/fet`. Setting `alias fet='. $HOME/.fet/fet'` is recommended. If you want to preview files with syntax, please install [bat](https://github.com/sharkdp/bat#installation). ```bash sudo apt update sudo apt upgrade sudo apt install bat ``` If you want to use beautiful-preview function, please install [richcat](https://github.com/richcat-dev/richcat). ``` pip install richcat ``` # Related projects ## Plugins - [yamamoto-yuta/dir_history](https://github.com/yamamoto-yuta/fet_dir_history) - [yamamoto-yuta/fet_respawn](https://github.com/yamamoto-yuta/fet_respawn) # Contributors! - [@ShotaroKataoka](https://github.com/ShotaroKataoka) (Maintainer, main contributor) - [@yamamoto-yuta](https://github.com/yamamoto-yuta) (Contributor)
--- title: Run your mock REST APIs anywhere with Mockoon CLI excerpt: Learn how to create mock REST APIs and run them anywhere with the CLI meta: title: Run your mock REST APIs anywhere with Mockoon CLI description: Learn how to create mock REST APIs and run them in all headless and server environments with Mockoon CLI image: tutorial-getting-started-cli.png imageAlt: a terminal imageWidth: 1200 imageHeight: 400 order: 20 --- Mockoon is a set of free and open-source API mocking tools. They help you get ready to work in no time. Should you be a front-end or back-end developer or a QA tester, Mockoon got you covered with a flexible user interface and a CLI that allows you to bring your mocking scenarios on servers and headless environments. This tutorial will help you put up on track with the CLI and all its possibilities. > To learn more about API in general, head over to our [API guide](/tutorials/api-guide-what-are-api/) ## What is Mockoon CLI? Mockoon CLI is an [NPM package](https://www.npmjs.com/package/@mockoon/cli) that can run on all environments where Node.js is installed. A [Docker image](https://hub.docker.com/r/mockoon/cli) is also available (see [Step 8](#step-8-deploy-mockoon-cli-using-docker) below). The CLI is a companion application to Mockoon's main interface designed to receive a Mockoon data file. It has been written in JavaScript/TypeScript and uses some great libraries like [oclif](https://oclif.io/) and [PM2](https://pm2.io/). One of the benefits of using PM2 is that you can easily manage your running mock APIs through the CLI or by using PM2 commands if you are used to them. ## How to use the CLI? As Mockoon CLI is designed to work in pair with the main user interface, you will learn how to create your first mock API and how to use the mock data with the CLI. ### Step 1. Create a mock API using Mockoon interface One of the prerequisites for using the CLI is to create a mock API in the main application. 
If you already have a setup in Mockoon, you can jump straight to the next section. > To create a new mock API, we have a [Getting started tutorial](tutorials:getting-started) that will guide you step by step. Once your mock is created, come back to this tutorial to learn how to use it in the CLI. ### Step 2. Install the CLI Before importing your mock API in the CLI, you must install it. First, ensure that Node.js is installed on your computer by running `node -v` in your terminal: ```sh-sessions $ node -v v14.15.4 ``` If it's not installed, head over to [Node.js' download page](https://nodejs.org/en/download/) and follow the instructions for your operating system. You are now ready to install the CLI by running the following command `npm i -g @mockoon/cli`: ```sh-sessions $ npm i -g @mockoon/cli + @mockoon/cli@1.0.0 added 423 packages from 339 contributors in 15s ``` You can also install Mockoon CLI in the scope of a local project by running `npm i @mockoon/cli`. You will then need to use `npx mockoon-cli ...` to run it. ### Step 3. Prepare your data file The CLI can open and migrate data from older versions of Mockoon. However, it doesn't alter the file you provide and only migrates a copy. If you created your mock with a more recent version of the application, you need to update your CLI with the following command: `npm install -g @mockoon/cli`. #### Provide a Mockoon environment file You can run your mock in a single step by providing the actual location of your Mockoon environment file. To locate your environment file from the main application, right-click on an environment and select "Show in folder" in the context menu: ![show in folder menu entry{481x228}](/images/tutorials/getting-started-cli/environment-show-in-folder.png) Let's assume your file is named `data.json` and resides in the current directory. As an alternative, you can also provide a URL pointing to a Mockoon environment file, and Mockoon CLI will take care of downloading it. 
#### Use an OpenAPI specification file Another option is to directly pass an OpenAPI specification file. Mockoon supports both JSON and YAML formats in versions 2.0.0 and 3.0.0. As above, you can provide a path to a local OpenAPI specification file or the file's URL directly. ### Step 4. Start your mock API After locating your environment file, you are ready to run your API mock with the CLI. In your terminal, navigate to the folder where your Mockoon data file or OpenAPI file is located and run the following command: `mockoon-cli start --data ./data.json` Or: `mockoon-cli start --data ./openapi-spec.yaml` If you want to use a remotely hosted file, you can also provide a URL to the `--data` flag like this: `mockoon-cli start --data https://domain.com/data.json` You can also provide multiple parameters to customize your mock: - `--pname`: to provide a different name for the API mock process. The name will always be prefixed with 'mockoon-'. - `--port`: to override the port on which the mock process will run. You will find more information regarding the [`start` command](https://github.com/mockoon/mockoon/blob/main/packages/cli#mockoon-cli-start), including all the available flags, on the official repository. ### Step 5. Manage your API mock After running one or more mock API servers, you might want to check their health and status. To do so, you can type `mockoon-cli list`: ```sh-sessions $ mockoon-cli list Name Id Status Cpu Memory Hostname Port mockoon-test 0 online 0.1 45.6 MB 0.0.0.0 3000 ``` > Mockoon CLI uses [PM2](https://pm2.io/), the Node.js process manager, behind the scenes. It allows you to use all the usual PM2 commands to manage your running mock servers: `pm2 list`, `pm2 kill`, etc. To stop a process, type the following command: `mockoon-cli stop {id|name}`, where `id|name` is your process id or name. If you omit the id, you will be prompted to choose a mock to stop. You can also stop all running servers at once with `mockoon-cli stop all`. ### Step 6. 
View a running mock's logs Mockoon CLI logs all events, like requests and errors, in your user folder in the following files: `~/mockoon-cli/logs/{process_name}-out.log` and `~/mockoon-cli/logs/{process_name}-error.log`. The `{process_name}-error.log` file contains server errors that only occur at startup time and prevent the mock API from running (port in use, etc.). The `{process_name}-out.log` file contains all other log entries (all levels) produced by the running mock server. Most of the errors occurring in Mockoon, either in the CLI or the main application, are not mission-critical and are considered "normal" output. As an example, if Mockoon is unable to parse the incoming request's JSON body, it will log a JSON parsing error, but it won't block the normal execution of the application. ### Step 7. Run as a blocking process Using the `--daemon-off` flag will keep the CLI in the foreground. The mock API process will not be [managed by PM2](#step-5-manage-your-api-mock). When running as a blocking process, all the logs are sent both to stdout (console) and to the usual files. ```sh-sessions $ mockoon-cli start -d ./data.json --daemon-off {"level":"info","message":"Server started on port 3000","timestamp":"2022-02-02T14:49:23.367Z"} {"level":"info","message":"GET /test | 200","timestamp":"2022-02-02T14:49:31.286Z"} ... ``` ### Step 8. Deploy Mockoon CLI using Docker #### Using the generic Docker image published on Docker Hub A generic Docker image `mockoon/cli` is automatically built on Docker Hub's Mockoon CLI repository upon each release. It uses a `node:14-alpine` image and installs the latest version of Mockoon CLI. All of the `mockoon-cli start` flags (`--port`, etc.) must be provided when running the container. 
To load a data file, you can either mount a local file and pass `mockoon-cli start` flags at the end of the command: `docker run -d --mount type=bind,source=./data.json,target=/data,readonly -p 3000:3000 mockoon/cli:latest -d data -p 3000` Or directly pass a URL to the `mockoon-cli start` command: `docker run -d -p 3000:3000 mockoon/cli:latest -d https://raw.githubusercontent.com/mockoon/mock-samples/main/samples/generate-mock-data.json -p 3000` #### Using the dockerize command Mockoon CLI also offers a `dockerize` command which generates a new Dockerfile that will allow you to build a self-contained image. Thus, no Mockoon CLI-specific parameters will be needed at runtime. Run the `dockerize` command: `mockoon-cli dockerize --data ./data.json --port 3000 --output ./tmp/Dockerfile` Then, navigate to the `tmp` folder, where the Dockerfile has been generated, and build the image: `docker build -t mockoon-test .` You can finally run your container: `docker run -d -p <host_port>:3000 mockoon-test` ### Step 9. Use Mockoon CLI in a CI environment: GitHub Actions Being a JavaScript application, Mockoon CLI can run in any environment where Node.js is installed, including continuous integration systems like GitHub Actions or CircleCI. This is useful when you want to run a mock server while running integration tests for another application. For example, you could mock the backend when running a React front-end application's tests. Here is an example of a GitHub Action running a mock API before running some tests: ```yaml name: Run mock API server on: push: branches: - main jobs: test: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Use Node.js uses: actions/setup-node@v2 with: node-version: "14.x" - name: NPM install, build and test run: | npm ci npm run build # If mockoon-cli is not a devDependency: # npm install -D mockoon-cli npx mockoon-cli start --data https://domain.com/data.json --port 3000 npm run test ```
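For teams that prefer a declarative setup, the same container configuration can be expressed with Docker Compose. This is only a sketch mirroring the `docker run` example above: the service name and file paths are placeholders, and it assumes the `mockoon/cli` image keeps accepting the same `-d`/`-p` flags as container arguments:

```yaml
# docker-compose.yml (hypothetical example)
version: "3"
services:
  mock-api:
    image: mockoon/cli:latest
    # Same flags as in the `docker run` example above
    command: ["-d", "data", "-p", "3000"]
    volumes:
      # Mount the local data file read-only at the path passed to -d
      - ./data.json:/data:ro
    ports:
      - "3000:3000"
```

You can then start the mock with `docker-compose up -d` and inspect its output with `docker-compose logs`.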
48.920792
432
0.744485
eng_Latn
0.997332
2f4d1ed8d74efa1146630e1c02a8ed9c162f16a9
4,657
md
Markdown
_posts/software-foundations/logical-foundations/2020-03-06-logical-foundations-03-lists.md
roife/blog
09e9449ffedfd4c8fc3857e529d49a2dc6644979
[ "Apache-2.0" ]
9
2020-09-01T15:56:55.000Z
2022-01-28T08:21:55.000Z
_posts/software-foundations/logical-foundations/2020-03-06-logical-foundations-03-lists.md
roife/blog
09e9449ffedfd4c8fc3857e529d49a2dc6644979
[ "Apache-2.0" ]
8
2020-09-02T00:50:47.000Z
2021-12-29T04:57:19.000Z
_posts/software-foundations/logical-foundations/2020-03-06-logical-foundations-03-lists.md
roife/blog
09e9449ffedfd4c8fc3857e529d49a2dc6644979
[ "Apache-2.0" ]
8
2021-01-28T12:24:34.000Z
2022-03-25T14:52:21.000Z
--- layout: "post" title: "「SF-LF」 03 Lists" subtitle: "Working with Structured Data" author: "roife" date: 2020-03-06 tags: ["Software Foundations@Books@Series", "Logical Foundations@Books@Series", "Coq@Languages@Tags", "程序语言理论@Tags@Tags", "函数式编程@Tags@Tags", "形式化验证@Tags@Tags"] lang: zh catalog: true header-image: "" header-style: text --- # Pairs ## Definition ``` coq Inductive natprod : Type := | pair (n1 n2 : nat). ``` ## Basic operations ``` coq (* first element *) Definition fst (p : natprod) : nat := match p with | pair x y => x end. (* second element *) Definition snd (p : natprod) : nat := match p with | pair x y => y end. (* swap pair *) Definition swap_pair (p : natprod) : natprod := match p with | pair x y => pair y x end. Notation "( x , y )" := (pair x y). (* define the pair notation *) ``` Note: matching several values at once in a pattern is fundamentally different from matching on a pair (e.g. `2 3` is not the same as `(2, 3)`). ## destruct In a proof, destruct can be used to expose the components of a pair. Note: unlike case analysis on nat, this operation does not produce subgoals. ``` coq Theorem surjective_pairing : forall (p : natprod), p = (fst p, snd p). Proof. intros p. destruct p as [n m]. (* the pair is decomposed into its two elements *) reflexivity. Qed. ``` # Lists of numbers ## Definition ``` coq Inductive natlist : Type := | nil | cons (n : nat) (l : natlist). Notation "x :: l" := (cons x l) (at level 60, right associativity). Notation "[ ]" := nil. Notation "[ x ; .. ; y ]" := (cons x .. (cons y nil) ..). (* the third notation turns the n-ary form into nested binary constructors *) (** Definition mylist1 := 1 :: (2 :: (3 :: nil)). Definition mylist2 := 1 :: 2 :: 3 :: nil. Definition mylist3 := [1;2;3].*) ``` ## Some functions ``` coq (* return the length of a list *) Fixpoint length (l:natlist) : nat := match l with | nil => O | h :: t => S (length t) end. (* concatenate two lists *) Fixpoint app (l1 l2 : natlist) : natlist := match l1 with | nil => l2 | h :: t => h :: (app t l2) end. Notation "x ++ y" := (app x y) (right associativity, at level 60). (* return the head and the tail *) Definition hd (default:nat) (l:natlist) : nat := match l with | nil => default | h :: t => h end. Definition tl (l:natlist) : natlist := match l with | nil => nil | h :: t => t end. 
``` # destruct and induction in lists ## destruct ``` coq Theorem tl_length_pred : forall l:natlist, pred (length l) = length (tl l). Proof. intros l. destruct l as [| n l']. - reflexivity. - reflexivity. Qed. ``` ## induction ``` coq Theorem app_assoc : forall l1 l2 l3 : natlist, (l1 ++ l2) ++ l3 = l1 ++ (l2 ++ l3). Proof. intros l1 l2 l3. induction l1 as [| n l1' IHl1']. - reflexivity. - simpl. rewrite -> IHl1'. reflexivity. Qed. ``` Note: the :: and ++ operators have the same precedence, and both are right-associative. # reversing lists ``` coq Fixpoint rev (l:natlist) : natlist := match l with | nil => nil | h :: t => rev t ++ [h] end. ``` Some useful lemmas. ``` coq (* involutivity *) Lemma rev_involutive : forall l : natlist, rev (rev l) = l. Proof. intros l. induction l as [| n l' IHl']. - reflexivity. - simpl. rewrite rev_app_distr. simpl. rewrite IHl'. reflexivity. Qed. (* a clever proof that exploits involutivity *) Theorem rev_injective : forall (l1 l2 : natlist), rev l1 = rev l2 -> l1 = l2. Proof. intros l1 l2. intros H. rewrite <- rev_involutive. rewrite <- H. rewrite -> rev_involutive. reflexivity. Qed. ``` # natoption ``` coq Inductive natoption : Type := | Some (n : nat) | None. Definition option_elim (d : nat) (o : natoption) : nat := match o with | Some n' => n' | None => d end. ``` # if - if cond then exp1 else exp2 is a conditional expression ``` coq Fixpoint nth_error' (l:natlist) (n:nat) : natoption := match l with | nil => None | a :: l' => if n =? O then Some a else nth_error' l' (pred n) end. ``` Since Coq has no built-in bool type, any two-constructor inductive type can be used as the condition of an if expression: a value built with the first constructor behaves as true, and one built with the second constructor behaves as false. # partial map ## id ``` coq Inductive id : Type := | Id (n : nat). ``` ``` coq Theorem eqb_id_refl : forall x, true = eqb_id x x. Proof. destruct x as [n]. - simpl. rewrite <- eqb_refl. reflexivity. Qed. ``` ## dictionary ``` coq Inductive partial_map : Type := | empty | record (i : id) (v : nat) (m : partial_map). ``` ``` coq (* update a key by shadowing the old record *) Definition update (d : partial_map) (x : id) (value : nat) : partial_map := record x value d. 
(* lookup *) Fixpoint find (x : id) (d : partial_map) : natoption := match d with | empty => None | record y v d' => if eqb_id x y then Some v else find x d' end. ``` ``` coq Theorem update_eq : forall (d : partial_map) (x : id) (v: nat), find x (update d x v) = Some v. Proof. intros d x v. simpl. rewrite <- eqb_id_refl. reflexivity. Qed. ```
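The proof of `rev_involutive` above rewrites with `rev_app_distr`, a lemma these notes never state. A sketch of it, assuming `app_nil_r` (`l ++ [ ] = l`) has been proved as in the book:

``` coq
Lemma rev_app_distr : forall l1 l2 : natlist,
  rev (l1 ++ l2) = rev l2 ++ rev l1.
Proof.
  intros l1 l2. induction l1 as [| n l1' IHl1'].
  - simpl. rewrite app_nil_r. reflexivity.
  - simpl. rewrite IHl1'. rewrite app_assoc. reflexivity.
Qed.
```

The inductive case also reuses `app_assoc`, proved earlier in these notes.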
17.980695
159
0.57784
eng_Latn
0.442673
2f4d81aa3155b6817e8553fbb213c130d85f35fc
455
md
Markdown
README.md
omnipede/coin-vesting-contract
4360956ddd68b1f599ff0336c849c04e92f25dfd
[ "MIT" ]
1
2021-04-12T08:35:19.000Z
2021-04-12T08:35:19.000Z
README.md
omnipede/coin-vesting-contract
4360956ddd68b1f599ff0336c849c04e92f25dfd
[ "MIT" ]
null
null
null
README.md
omnipede/coin-vesting-contract
4360956ddd68b1f599ff0336c849c04e92f25dfd
[ "MIT" ]
null
null
null
# coin-vesting-contract Coin vesting contract ## Prerequisites 1. Install Docker: https://docs.docker.com/install 2. Install solc 0.4.24 ``` $ docker pull ethereum/solc:0.4.24 $ cp dockers/solc /usr/local/bin/solc ``` ## Install 1. Libraries ``` $ npm install ``` 2. Analyzer ``` $ npm run install_analyzer ``` ## Run ### Compile ``` $ npm run compile ``` ### Test ``` $ npm run test ``` ### Coverage ``` $ npm run coverage ```
9.1
45
0.606593
kor_Hang
0.32288
2f4df27edb7b55cd8766931153720cacebdfc396
565
md
Markdown
strongloop/node_modules/strongloop/node_modules/loopback-sdk-angular-cli/README.md
tsiry95/openshift-strongloop-cartridge
c027885328f0842d96eb639377cd637878d88af4
[ "MIT" ]
null
null
null
strongloop/node_modules/strongloop/node_modules/loopback-sdk-angular-cli/README.md
tsiry95/openshift-strongloop-cartridge
c027885328f0842d96eb639377cd637878d88af4
[ "MIT" ]
null
null
null
strongloop/node_modules/strongloop/node_modules/loopback-sdk-angular-cli/README.md
tsiry95/openshift-strongloop-cartridge
c027885328f0842d96eb639377cd637878d88af4
[ "MIT" ]
null
null
null
# loopback-sdk-angular-cli **NOTE: The loopback-sdk-angular-cli module supersedes [loopback-angular-cli](https://www.npmjs.org/loopback-angular-cli). Please update your package.json accordingly.** CLI tools for the [LoopBack AngularJS SDK](https://github.com/strongloop/loopback-sdk-angular). See the official [LoopBack AngularJS SDK documentation](http://docs.strongloop.com/display/LB/AngularJS+JavaScript+SDK) for more information. ## Mailing List Discuss features and ask questions on [LoopBack Forum](https://groups.google.com/forum/#!forum/loopbackjs).
40.357143
169
0.785841
kor_Hang
0.348041
2f4f0c960282b8ca2444e0685eec5da50c8d0fd0
1,208
md
Markdown
software/tinyclr/tutorials/timer.md
ghi-electronics/Docs
12f409e22ca060b746ef04c3ceb1720d285e3b50
[ "Apache-2.0" ]
5
2017-08-03T22:40:35.000Z
2018-11-06T22:53:42.000Z
software/tinyclr/tutorials/timer.md
ghi-electronics/Docs
12f409e22ca060b746ef04c3ceb1720d285e3b50
[ "Apache-2.0" ]
115
2017-07-05T18:30:18.000Z
2020-01-05T14:10:14.000Z
software/tinyclr/tutorials/timer.md
ghi-electronics/Docs
12f409e22ca060b746ef04c3ceb1720d285e3b50
[ "Apache-2.0" ]
42
2017-07-03T16:22:50.000Z
2020-03-18T18:24:11.000Z
# Timers --- A timer is used to call a method at a specific time. This example will call (invoke) `Ticker` initially after 3 seconds and then repeat once a second indefinitely. ```cs static void Ticker(object o) { Debug.WriteLine("Hello!"); } static void Main() { Timer timer = new Timer(Ticker, null, 3000, 1000); Thread.Sleep(Timeout.Infinite); } ``` A thread can also be created that loops once a second. The difference is that a thread with a 1 second sleep will always sleep for one second after whatever time was needed by the thread. So if a thread needed 0.5 seconds to complete what it is doing, sleeping for one second will cause the thread to execute every 1.5 seconds. This gets even more complex because a thread can be interrupted by the system; there are no timing guarantees with threads. A timer set to invoke a method every second will do so every second regardless of how long that method needs to complete its task. However, care must be taken: if a timer invokes a method every 10 milliseconds but the method needs more than 10 milliseconds to execute, you will end up flooding the system. The best practice is for timers to invoke methods that execute in a short time.
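A running timer can also be reconfigured at runtime with `Change`. The sketch below reuses the `Ticker` method from above and assumes `Timer.Change` is available, as in the full .NET `System.Threading.Timer`:

```cs
static void Main() {
    Timer timer = new Timer(Ticker, null, 3000, 1000);

    // Let the timer tick for 10 seconds, then suspend it.
    Thread.Sleep(10000);
    timer.Change(Timeout.Infinite, Timeout.Infinite);

    // Resume 5 seconds later: fire immediately, then every 500 ms.
    Thread.Sleep(5000);
    timer.Change(0, 500);

    Thread.Sleep(Timeout.Infinite);
}
```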
57.52381
440
0.764901
eng_Latn
0.999923
2f4f23c0a63ba8428c0d3ba76c548369a8bfbbc9
1,348
md
Markdown
treebanks/mdf_jr/mdf_jr-feat-Abbr.md
vistamou/docs
116b9c29e4218be06bf33b158284b9c952646989
[ "Apache-2.0" ]
204
2015-01-20T16:36:39.000Z
2022-03-28T00:49:51.000Z
treebanks/mdf_jr/mdf_jr-feat-Abbr.md
vistamou/docs
116b9c29e4218be06bf33b158284b9c952646989
[ "Apache-2.0" ]
654
2015-01-02T17:06:29.000Z
2022-03-31T18:23:34.000Z
treebanks/mdf_jr/mdf_jr-feat-Abbr.md
vistamou/docs
116b9c29e4218be06bf33b158284b9c952646989
[ "Apache-2.0" ]
200
2015-01-16T22:07:02.000Z
2022-03-25T11:35:28.000Z
--- layout: base title: 'Statistics of Abbr in UD_Moksha-JR' udver: '2' --- ## Treebank Statistics: UD_Moksha-JR: Features: `Abbr` This feature is universal. It occurs with 1 different values: `Yes`. 2 tokens (0%) have a non-empty value of `Abbr`. 2 types (0%) occur at least once with a non-empty value of `Abbr`. 2 lemmas (0%) occur at least once with a non-empty value of `Abbr`. The feature is used with 1 part-of-speech tags: <tt><a href="mdf_jr-pos-NOUN.html">NOUN</a></tt> (2; 0% instances). ### `NOUN` 2 <tt><a href="mdf_jr-pos-NOUN.html">NOUN</a></tt> tokens (0% of all `NOUN` tokens) have a non-empty value of `Abbr`. The most frequent other feature values with which `NOUN` and `Abbr` co-occurred: <tt><a href="mdf_jr-feat-Case.html">Case</a></tt><tt>=EMPTY</tt> (2; 100%), <tt><a href="mdf_jr-feat-Definite.html">Definite</a></tt><tt>=EMPTY</tt> (2; 100%), <tt><a href="mdf_jr-feat-Number.html">Number</a></tt><tt>=EMPTY</tt> (2; 100%), <tt><a href="mdf_jr-feat-Number-psor.html">Number[psor]</a></tt><tt>=EMPTY</tt> (2; 100%), <tt><a href="mdf_jr-feat-Person-psor.html">Person[psor]</a></tt><tt>=EMPTY</tt> (2; 100%). `NOUN` tokens may have the following values of `Abbr`: * `Yes` (2; 100% of non-empty `Abbr`): <em>И., Н.</em> * `EMPTY` (852): <em>лангс, ломаттне, ава, шиня, шись, Тишка, цёранц, шамац, визькс, вирьса</em>
48.142857
502
0.658754
eng_Latn
0.344751
2f4f7888c0edc47ef0cb3fb36f31500f4d9b9e85
820
md
Markdown
README.md
fabricegeib/gatsby-source-twitch
1fa6ad930eda3948f1f9ad30e0802a7e3ead1ae6
[ "MIT" ]
7
2018-02-16T21:28:32.000Z
2020-07-06T15:56:13.000Z
README.md
fabricegeib/gatsby-source-twitch
1fa6ad930eda3948f1f9ad30e0802a7e3ead1ae6
[ "MIT" ]
4
2019-05-14T19:02:11.000Z
2022-02-12T10:48:46.000Z
README.md
fabricegeib/gatsby-source-twitch
1fa6ad930eda3948f1f9ad30e0802a7e3ead1ae6
[ "MIT" ]
3
2018-03-14T15:13:41.000Z
2020-11-01T19:39:08.000Z
# gatsby-source-twitch A [gatsby](https://www.gatsbyjs.org/) source plugin for fetching all the videos and channel info for a Twitch user ID. Learn more about Gatsby plugins and how to use them here: https://www.gatsbyjs.org/docs/plugins/ ## Install `npm install --save gatsby-source-twitch` ## gatsby-config.js ```javascript plugins: [ { resolve: `gatsby-source-twitch`, options: { userID: '<<Twitch UserID eg. 6058227 >>', clientID: '<< Add your Twitch client_id here>>' }, }, ... ] ``` ## Examples of how to query: Get all the videos: ```graphql { allTwitchvideo { edges { node { title url type } } } } ``` Get the user/channel info: ```graphql { twitchuser { display_name description profile_image_url } } ```
14.642857
118
0.613415
eng_Latn
0.445847
2f4fa10ec0509120c12f39c0848bcb92e9024d27
4,052
md
Markdown
README.md
tunnckoCore/vez
3fc9e21fc7f61558c30a271f7efa764ec4f0d12a
[ "MIT" ]
1
2015-07-09T14:10:51.000Z
2015-07-09T14:10:51.000Z
README.md
hybridables/vez
3fc9e21fc7f61558c30a271f7efa764ec4f0d12a
[ "MIT" ]
3
2015-07-13T17:24:05.000Z
2015-07-23T22:43:52.000Z
README.md
hybridables/vez
3fc9e21fc7f61558c30a271f7efa764ec4f0d12a
[ "MIT" ]
null
null
null
# [vez][author-www-url] [![npmjs.com][npmjs-img]][npmjs-url] [![The MIT License][license-img]][license-url] > Middleware composition at new level. Ultimate alternative to `ware`, `plugins`, `koa-compose` and `composition` packages. Allows you to use callbacks, promises, generators and async/await functions as middlewares. [![code climate][codeclimate-img]][codeclimate-url] [![standard code style][standard-img]][standard-url] [![travis build status][travis-img]][travis-url] [![coverage status][coveralls-img]][coveralls-url] [![dependency status][david-img]][david-url] ## Install ``` npm i vez --save npm test ``` ## Usage > For more use-cases see the [tests](./test.js) ```js var vez = require('vez') var assert = require('assert') var Bluebird = require('bluebird') vez() .use(Bluebird.resolve(123)) .use(function () { assert.deepEqual(this, {a: 'b', c: 'd', e: 'f'}) return Bluebird.resolve(456) }) .use(function (foo, next) { assert.deepEqual(this, {a: 'b', c: 'd', e: 'f'}) next(null, foo, 789) }) .use(function * (first, second) { this.g = first + second assert.deepEqual(this, {a: 'b', c: 'd', e: 'f', g: 1245}) // because generators are handled by `co@4.6` return yield { gens: this.g } }) .run({a: 'b'}, {c: 'd'}, {e: 'f'}, function (err, res) { if (err) { return console.error(err) } assert.deepEqual(this, {a: 'b', c: 'd', e: 'f', g: 1245}) assert.deepEqual(res, [123, 456, [ 456, 789 ], { gens: 1245 }]) done() }) ``` ## Contributing Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](https://github.com/tunnckoCore/vez/issues/new). But before doing anything, please read the [CONTRIBUTING.md](./CONTRIBUTING.md) guidelines. 
## [Charlike Make Reagent](http://j.mp/1stW47C) [![new message to charlike][new-message-img]][new-message-url] [![freenode #charlike][freenode-img]][freenode-url] [![tunnckocore.tk][author-www-img]][author-www-url] [![keybase tunnckocore][keybase-img]][keybase-url] [![tunnckoCore npm][author-npm-img]][author-npm-url] [![tunnckoCore twitter][author-twitter-img]][author-twitter-url] [![tunnckoCore github][author-github-img]][author-github-url] [npmjs-url]: https://www.npmjs.com/package/vez [npmjs-img]: https://img.shields.io/npm/v/vez.svg?label=vez [license-url]: https://github.com/tunnckoCore/vez/blob/master/LICENSE.md [license-img]: https://img.shields.io/badge/license-MIT-blue.svg [codeclimate-url]: https://codeclimate.com/github/tunnckoCore/vez [codeclimate-img]: https://img.shields.io/codeclimate/github/tunnckoCore/vez.svg [travis-url]: https://travis-ci.org/tunnckoCore/vez [travis-img]: https://img.shields.io/travis/tunnckoCore/vez.svg [coveralls-url]: https://coveralls.io/r/tunnckoCore/vez [coveralls-img]: https://img.shields.io/coveralls/tunnckoCore/vez.svg [david-url]: https://david-dm.org/tunnckoCore/vez [david-img]: https://img.shields.io/david/tunnckoCore/vez.svg [standard-url]: https://github.com/feross/standard [standard-img]: https://img.shields.io/badge/code%20style-standard-brightgreen.svg [author-www-url]: http://www.tunnckocore.tk [author-www-img]: https://img.shields.io/badge/www-tunnckocore.tk-fe7d37.svg [keybase-url]: https://keybase.io/tunnckocore [keybase-img]: https://img.shields.io/badge/keybase-tunnckocore-8a7967.svg [author-npm-url]: https://www.npmjs.com/~tunnckocore [author-npm-img]: https://img.shields.io/badge/npm-~tunnckocore-cb3837.svg [author-twitter-url]: https://twitter.com/tunnckoCore [author-twitter-img]: https://img.shields.io/badge/twitter-@tunnckoCore-55acee.svg [author-github-url]: https://github.com/tunnckoCore [author-github-img]: https://img.shields.io/badge/github-@tunnckoCore-4183c4.svg [freenode-url]: 
http://webchat.freenode.net/?channels=charlike [freenode-img]: https://img.shields.io/badge/freenode-%23charlike-5654a4.svg [new-message-url]: https://github.com/tunnckoCore/messages [new-message-img]: https://img.shields.io/badge/send%20me-message-green.svg
38.226415
282
0.702369
yue_Hant
0.228108
2f500c5bd5ecfe5f798b252baaaab51496eee6da
1,926
md
Markdown
README.md
anapdh/re-former
f636584c70ea9378f2b56c898f76bcb03027060d
[ "MIT" ]
2
2021-04-22T03:19:05.000Z
2021-06-23T02:34:56.000Z
README.md
anapdh/re-former
f636584c70ea9378f2b56c898f76bcb03027060d
[ "MIT" ]
null
null
null
README.md
anapdh/re-former
f636584c70ea9378f2b56c898f76bcb03027060d
[ "MIT" ]
null
null
null
![](https://img.shields.io/badge/Microverse-blueviolet) ![](https://img.shields.io/badge/RoR-red) # Re-former This is part of the Forms Project in The Odin Project’s Ruby on Rails Curriculum, where forms are built using nearly-pure HTML before graduating to the helper methods that Rails provides. ## Built With - Ruby 2.7.2 - Rails 6.1.1 - Windows Subsystem for Linux (Ubuntu 20.04.1 LTS) ### Install Install Ruby and Rails on your local machine. ### Setup Open your terminal and go to the directory where you want to clone the repo. Type $ `git clone https://github.com/vichuge/re-former` to clone the repository to your local machine. Type $ `cd re-former` to go to the re-former directory. Type $ `bundle install` to install the necessary gems to run the project. Now, please run the command `rails db:migrate` to run all the migrations for the database and have all tables updated and ready to use. Now your environment is ready to run the project! Type `rails c` (for the console) or `rails s` (to start the server). Once your project is running in the server, you can open the browser and paste `http://localhost:3000/users/new` to create a new user. After creating your first user, you can edit it at the `http://localhost:3000/users/1/edit` URL path. To stop running the server in the terminal, type `Ctrl + C` inside it. ## Authors 👩🏼‍💻 **Ana Paula Hübner** - GitHub: [@anapdh](https://github.com/anapdh) - Twitter: [@dev_anahub](https://twitter.com/dev_anahub) - LinkedIn: [Ana Paula Hübner](https://www.linkedin.com/in/anapdh) 👤 **Victor Pacheco** - GitHub: [@vichuge](https://github.com/vichuge) - LinkedIn: [LinkedIn](https://www.linkedin.com/in/victor-pacheco-7946aab2/) ## 🤝 Contributing Contributions, issues, and feature requests are welcome! Feel free to check the [issues page](https://github.com/vichuge/re-former/issues). ## 📝 License This project is [MIT](./LICENSE) licensed.
33.789474
200
0.73676
eng_Latn
0.970847
2f5021b401d3066b6bd7682b0c40fc266f153e78
39
md
Markdown
README.md
Hraju07/java-coding-practice
19eebb3e3cf83efc10679dc0a65b43fb9ac53506
[ "MIT" ]
null
null
null
README.md
Hraju07/java-coding-practice
19eebb3e3cf83efc10679dc0a65b43fb9ac53506
[ "MIT" ]
null
null
null
README.md
Hraju07/java-coding-practice
19eebb3e3cf83efc10679dc0a65b43fb9ac53506
[ "MIT" ]
null
null
null
# java-coding-practice Problem solving
13
22
0.820513
eng_Latn
0.352344
2f508e226f6a3c42d46d2df89a1256af314c8dc2
104
md
Markdown
README.md
VladimirsHisamutdinovs/python-scraper
5d58e4ea47333e5276c8d16a12ea75995ec031b5
[ "Apache-2.0" ]
null
null
null
README.md
VladimirsHisamutdinovs/python-scraper
5d58e4ea47333e5276c8d16a12ea75995ec031b5
[ "Apache-2.0" ]
null
null
null
README.md
VladimirsHisamutdinovs/python-scraper
5d58e4ea47333e5276c8d16a12ea75995ec031b5
[ "Apache-2.0" ]
null
null
null
# python-scraper Scrapy - a python scraper I have used in my MSc project to collect data for my dataset
34.666667
86
0.778846
eng_Latn
0.999525
2f50b86588b878c771e06ba732812a967605a03d
1,707
md
Markdown
README.md
freshapi/protopub
05e2e3f6abde723fcebd4d69281f7b4e7474fc45
[ "Apache-2.0" ]
null
null
null
README.md
freshapi/protopub
05e2e3f6abde723fcebd4d69281f7b4e7474fc45
[ "Apache-2.0" ]
null
null
null
README.md
freshapi/protopub
05e2e3f6abde723fcebd4d69281f7b4e7474fc45
[ "Apache-2.0" ]
null
null
null
# Protopub Protopub allows you to publish your protobuf definitions to an OCI-compliant container registry (such as docker-registry). This opens new possibilities for working with protoreflect - especially dynamic interactive documentation and dynamic request routing. ## How to use We have a simple tutorial [here](docs/tutorial.md), but here are some usage examples: ```bash # build .proto files from current working directory into single descriptor set file using `protoc` $ protopub build my-descriptor-set.bin # login into registry $ protopub login docker.io # push descriptor into registry $ protopub push docker.io/freshapi/example:latest ./my-descriptor-set.bin # get info about image $ protopub inspect docker.io/freshapi/example:latest # pull from registry $ protopub pull docker.io/freshapi/example:latest ./pulled-descriptor-set.bin ``` That's it! ## Why This project is heavily inspired by three factors: 1. A desire to build a dynamic gRPC proxy (`grpc-router`) 2. A need for human-readable gRPC documentation in a running system 3. The [Buf](https://github.com/bufbuild/buf) project's idea of a protobuf schema registry. Technically this could be one of the implementations of it, but we'd surely like to have a custom backend which can perform certain validations (e.g. backwards compatibility checks). Maybe protopub could support Buf's image format in the future. ## What's next Currently, there's no publicly available software which uses an OCI registry to fetch protobuf schemas because I am working on it right now. As soon as this project becomes available, this section will be updated. If you want to use this mechanism in your internal project, feel free to import `pkg/protopub`.
38.795455
119
0.788518
eng_Latn
0.988591
2f50fb32e7e057d8d2b703621d30ec42f7c6f3d0
1,064
md
Markdown
readme.md
ORNL/mcr-container-tools
03ccf6dd5542e603dda86ca70e7b96a785adbde8
[ "MIT" ]
null
null
null
readme.md
ORNL/mcr-container-tools
03ccf6dd5542e603dda86ca70e7b96a785adbde8
[ "MIT" ]
null
null
null
readme.md
ORNL/mcr-container-tools
03ccf6dd5542e603dda86ca70e7b96a785adbde8
[ "MIT" ]
2
2019-01-18T21:58:26.000Z
2020-12-13T02:46:46.000Z
# MCR Tools This repository is a collection of [Docker](https://www.docker.com/what-docker) images, [BASH](https://www.gnu.org/software/bash/) scripts, etc. developed at [Oak Ridge National Laboratory (ORNL)](https://www.ornl.gov/) designed to augment the development and deployment of [MATLAB](https://www.mathworks.com/products/matlab.html) based analytics. # License Original code included in this repository is released under the [MIT License](https://opensource.org/licenses/MIT), unless otherwise noted, see the included LICENSE file for details. # Directories * matlab : Docker image that can execute binaries compiled with [MATLAB's Compiler](https://www.mathworks.com/products/compiler.html) R2016a, via the [MATLAB Runtime (MCR)](https://www.mathworks.com/products/compiler/matlab-runtime.html), in a [CentOS 7](https://seven.centos.org/) environment. * util : Tools to help determine Shared Object (SO) dependencies of other SO files such that the image can be rebuilt to support newer and older versions of the MCR. See matlab/readme.md for details.
118.222222
347
0.776316
eng_Latn
0.903718
2f517516352b95db2e97271eaf274548e9a0b831
5,801
md
Markdown
DEV.md
DanielShox/predictionio-buildpack
080487d4f1a858287b038f0ae3b55e6b4c293628
[ "MIT" ]
54
2016-09-24T02:24:06.000Z
2018-10-11T16:29:20.000Z
DEV.md
DanielShox/predictionio-buildpack
080487d4f1a858287b038f0ae3b55e6b4c293628
[ "MIT" ]
22
2016-10-07T03:40:57.000Z
2017-10-04T06:56:12.000Z
DEV.md
DanielShox/predictionio-buildpack
080487d4f1a858287b038f0ae3b55e6b4c293628
[ "MIT" ]
36
2016-09-27T16:17:15.000Z
2021-03-21T20:16:58.000Z
⚠️ **This project is no longer active.** No further updates are planned. # Local Development Use [predictionio-buildpack](README.md) to setup your local PredictionIO environment. To do any real development work with PredictionIO, you'll need to run it locally. We'll use the [`bin/local/` scripts](https://github.com/heroku/predictionio-buildpack/tree/master/bin/local) to simplify that setup procedure and ensure parity between the dev (local) & production (Heroku). ## Background This local dev technique sets up a complete installation of PredictionIO inside each engine you wish to work on. Each engine may have different configuration and dependencies, so the entire environment is contained within an engine directory. ## Supported Platforms This workflow augments the Heroku/Linux-based deployment, and so only supports similar platforms: ### Works * macOS ⭐️ **primary, best experience** * Debian/Ubuntu Linux ### Should Work * Linux via Docker or virtualization * Windows 10 w/ Linux subsystem ### Not Working * Windows MS/DOS or PowerShell * mobile OSs ## How-to ### 0. Remove previously installed `pio` If you previously used PredictionIO, then you might have added the `pio` command to the `PATH` of the shell. Setup will abort if `pio` already exists. Please remove any existing PredictionIO entries from the `PATH`. It may be set in `~/.profile`, `~/.bash_profile`, or `~/.bashrc`. ### 1. Install Dependencies ⚠️ *This step is only required once for your computer.* 1. Install [Java/JDK 8](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html) 1. Install [PostgreSQL 9.6](https://www.postgresql.org/download/) and start it * for macOS, we 💜 [Postgres.app](http://postgresapp.com) ### 2. The Buildpack ⚠️ *This step is only required once for your computer.* ```bash # First, change directory up to your top-level projects. 
cd ~/my/projects/ git clone https://github.com/heroku/predictionio-buildpack cd predictionio-buildpack/ # Capture this directory's path for use in next steps. export PIO_BUILDPACK_DIR="$(pwd)" ``` ### 3. The Engine ⚠️ *This step is required for each local engine directory.* With a few commands, we'll install PredictionIO & its dependencies into `./PredictionIO-dist/` and configure it all via env vars & rendered config files. ```bash # First, change directory to the target engine: cd ~/my/projects/engine-dir/ # Depending on the engine, an environment template may be available. # Copy & edit it: cp .env.local .env # # …or create a new one: echo 'PIO_EVENTSERVER_APP_NAME=my-engine' >> .env echo 'PIO_POSTGRES_OPTIONAL_SSL=true' >> .env # Ignore the local dev artifacts echo >> .gitignore echo 'bin/pio' >> .gitignore echo 'bin/dotenv' >> .gitignore echo '.env' >> .gitignore echo 'PredictionIO-dist/' >> .gitignore echo 'repo/' >> .gitignore # Setup this working directory: $PIO_BUILDPACK_DIR/bin/local/setup ``` #### Refreshing the setup ♻️ Run `bin/local/setup` whenever: * the buildpack is updated/moved * an environment variable (including the `.env` file) is changed that affects dependencies, like: * `PIO_S3_*` or * `PIO_ELASTICSEARCH_*` Here's how: ```bash # Capture the path to the buildpack on your machine (from Step 2.) export PIO_BUILDPACK_DIR=~/my/projects/predictionio-buildpack # Then, inside the engine to refresh: $PIO_BUILDPACK_DIR/bin/local/setup # Finally, verify the new setup is working; Postgres & optionally Elasticsearch must be running: bin/pio status ``` If you encounter errors, it may be necessary to [reset the local development installation](#user-content-reset).
The `.env` file should define the unique database connection like this: ✏️ *Replace `my_database_name` with a name for your engine's database.* ```bash DATABASE_URL=postgres://pio@localhost/my_database_name PIO_POSTGRES_OPTIONAL_SSL=true ``` 1. Create the Postgres user & database to match that configuration: ```bash createuser pio createdb my_database_name ``` 👓 These binary commands are [included with Postgres](https://www.postgresql.org/docs/9.6/static/reference-client.html). You may need to reference them directly from the database installation. ### 5. Elasticsearch (optional) ⚠️ *Only available if `PIO_ELASTICSEARCH_URL` is set during `bin/local/setup`.* #### Configure Elasticsearch 1. In the engine, open the `.env` file and add the default local address for ES: ```bash PIO_ELASTICSEARCH_URL=http://127.0.0.1:9200 ``` 1. [Refresh the setup](#user-content-refreshing-the-setup) #### Run Elasticsearch In a new terminal, from the engine's directory: ```bash cd PredictionIO-dist/vendors/elasticsearch/ bin/elasticsearch ``` ### 6. Finally, use `bin/pio` ```bash bin/pio status bin/pio app new my-engine-name bin/pio build --verbose # Importing data is required before training will succeed bin/pio train -- --driver-memory 8G bin/pio deploy ``` 👓 the `bin/pio` command reads the local environment (config vars) from the local `.env` file every time it's invoked. #### Run Eventserver In a new terminal, from the engine's directory: ```bash bin/pio eventserver ``` ## Deployment ▶️ [How to deploy to Heroku](CUSTOM.md) ## Reset If local dev seems broken, try clearing out the local install, and then reinstall. 🚨 **This will result in destruction of local state contained in these directories.** ```bash rm -rf PredictionIO-dist/ repo/ ``` Then, [refresh the setup](#user-content-refreshing-the-setup).
29.150754
288
0.728667
eng_Latn
0.953409
2f521316ed0002483215b7e1ed5f959133723a4a
743
md
Markdown
CatClock/README.md
fperez2511/xamarin-forms-samples
d3d1dc19d9c9a67c4def4c4c8008452d77b4dfc4
[ "Apache-2.0" ]
1
2019-12-10T11:46:07.000Z
2019-12-10T11:46:07.000Z
CatClock/README.md
SuperXCode/xamarin-forms-samples
521e0c8424e7e8f3bc72bfc564b8fda84eae8ea4
[ "Apache-2.0" ]
null
null
null
CatClock/README.md
SuperXCode/xamarin-forms-samples
521e0c8424e7e8f3bc72bfc564b8fda84eae8ea4
[ "Apache-2.0" ]
null
null
null
--- name: Xamarin.Forms - Cat Clock description: "Cat Clock is a Xamarin.Forms application that demonstrates various features of SkiaSharp graphics. It runs on iOS, Android, and UWP #skiasharp" page_type: sample languages: - csharp products: - xamarin urlFragment: catclock --- # Cat Clock Cat Clock is a Xamarin.Forms application that demonstrates various features of SkiaSharp graphics. It runs on iOS, Android, and Universal Windows Platform devices. This program was the focus of a webinar. To see the program built from the ground up, watch the video [SkiaSharp Graphics for Xamarin Forms](https://www.youtube.com/watch?v=fF0tzA6wUhA). ![Cat Clock application screenshot](Screenshots/01CatClock.png "Cat Clock application screenshot")
41.277778
186
0.792732
eng_Latn
0.955174
2f522e66688c24f2592823e7cced9c1113a0147c
5,008
md
Markdown
README.md
stjude/sjcloud-data-transfer
b4189b31176eb56898146f92793d9e27a963457f
[ "ECL-2.0", "Apache-2.0" ]
4
2018-06-13T11:59:16.000Z
2020-01-23T17:26:09.000Z
README.md
stjude/sjcloud-data-transfer
b4189b31176eb56898146f92793d9e27a963457f
[ "ECL-2.0", "Apache-2.0" ]
86
2017-10-19T23:34:41.000Z
2020-06-08T14:59:07.000Z
README.md
stjude/sjcloud-data-transfer
b4189b31176eb56898146f92793d9e27a963457f
[ "ECL-2.0", "Apache-2.0" ]
7
2018-04-11T21:43:20.000Z
2021-02-20T17:02:44.000Z
# St. Jude Cloud Data Transfer Application ## :warning: NOTICE: This repository has been deprecated. [![FOSSA Status](https://app.fossa.io/api/projects/git%2Bgithub.com%2Fstjude%2Fsjcloud-data-transfer-app.svg?type=shield)](https://app.fossa.io/projects/git%2Bgithub.com%2Fstjude%2Fsjcloud-data-transfer-app?ref=badge_shield) | Branch | Version | CI | Coverage | | ------ | ------- | -- | -------- | | Master | v1.6.0 | ![Node.js CI][ci-master-link] | [![Coverage Status][coverage-master-svg]][coverage-master-link] | | Development | v1.6.0 | ![Node.js CI][ci-development-link] | [![Coverage Status][coverage-development-svg]][coverage-development-link] | A desktop application written on top of the [Electron Framework](https://electron.atom.io/) facilitating easy uploading and downloading of genomic data to the St. Jude Cloud. Functionality includes: * Logging in using OAuth for both internal and external St. Jude users. * Reliably uploading and downloading genomic data files to/from the platform. You can find the latest built version of the tools on the [releases page](https://github.com/stjude/sjcloud-data-transfer-app/releases). ## Building ### Prerequisites If you'd like to build yourself, you'll also need the following prerequisites installed: | **Name** | **Install Link** | | -------- | ----------------------------------------------------------------------------------------------------------------- | | NodeJS | [Using NVM](https://github.com/creationix/nvm#install-script) or [Official Site](https://nodejs.org/en/download/) | You must use version 12 of NodeJS and version 6 of NPM. We recommend installing NVM, then running `nvm install 12`. This will handle installing both the correct Node version and the correct NPM version. 
### Process The process for installing the software in production mode: ```bash # download repository git clone git@github.com:stjude/sjcloud-data-transfer-app.git cd sjcloud-data-transfer-app npm i # install dependencies export NODE_ENV=production # set the Node environment. Can be 'production' or 'development'. # set NODE_ENV=production # if you're on Windows cmd.exe. # $Env:NODE_ENV = "production" # if you're on Windows powershell. npx gulp compile # compile the frontend/backend code. npm start # start the application. ``` ## Development Running the tool in development mode requires a few changes to the config: ```bash # download repository git clone -b development git@github.com:stjude/sjcloud-data-transfer-app.git cd sjcloud-data-transfer-app npm i # install dependencies export NODE_ENV=development # set the Node environment. Can be 'production' or 'development'. # set NODE_ENV=development # if you're on Windows cmd.exe. # $Env:NODE_ENV="development" # if you're on Windows powershell. npx gulp compile # compile the frontend/backend code. npm run start:dev # start the application. ``` Note that we recommend that you use the following environment variables when developing: ```bash export AUTOUPDATE_ENABLED="false" export CHROMIUM_MENU="true" # set AUTOUPDATE_ENABLED="false" # cmd.exe # set CHROMIUM_MENU="true" # cmd.exe # $Env:AUTOUPDATE_ENABLED="false" # PowerShell # $Env:CHROMIUM_MENU="true" # PowerShell ``` In practice, we recommend that you use the following command in a separate tab to recompile the code as you make changes: ```bash # continuously recompile frontend/backend code npx gulp develop ``` If you are only working with the front-end code, you can develop in the web browser, which should automatically open up [BrowserSync](https://www.browsersync.io/). # Testing End-to-end testing is as simple as running the following command.
```bash npx gulp test ``` ## Issues If you have any issues, please file a bug report at the [issues page](https://github.com/stjude/sjcloud-data-transfer-app/issues). [maintainability-master-link]: https://codeclimate.com/github/stjude/sjcloud-data-transfer-app/maintainability [maintainability-master-svg]: https://api.codeclimate.com/v1/badges/ce7eed7d778bf50ac81a/maintainability [ci-master-link]: https://github.com/stjude/sjcloud-data-transfer-app/workflows/Node.js%20CI/badge.svg?branch=master [ci-development-link]: https://github.com/stjude/sjcloud-data-transfer-app/workflows/Node.js%20CI/badge.svg?branch=development [coverage-master-link]: https://coveralls.io/github/stjude/sjcloud-data-transfer-app?branch=master [coverage-master-svg]: https://coveralls.io/repos/github/stjude/sjcloud-data-transfer-app/badge.svg?branch=master [coverage-development-link]: https://coveralls.io/github/stjude/sjcloud-data-transfer-app?branch=development [coverage-development-svg]: https://coveralls.io/repos/github/stjude/sjcloud-data-transfer-app/badge.svg?branch=development
46.803738
224
0.704673
eng_Latn
0.746845
2f529adcd39d6a1f4b434fee3e0351f25b4e475a
7,075
md
Markdown
data/issues/ZF-8209.md
zendframework/zf3-web
5852ab5bfd47285e6b46f9e7b13250629b3e372e
[ "BSD-3-Clause" ]
40
2016-06-23T17:52:49.000Z
2021-03-27T20:02:40.000Z
data/issues/ZF-8209.md
zendframework/zf3-web
5852ab5bfd47285e6b46f9e7b13250629b3e372e
[ "BSD-3-Clause" ]
80
2016-06-24T13:39:11.000Z
2019-08-08T06:37:19.000Z
data/issues/ZF-8209.md
zendframework/zf3-web
5852ab5bfd47285e6b46f9e7b13250629b3e372e
[ "BSD-3-Clause" ]
52
2016-06-24T22:21:49.000Z
2022-02-24T18:14:03.000Z
--- layout: issue title: "database requests return corrupt data when executed as a phpunit test" id: ZF-8209 --- ZF-8209: database requests return corrupt data when executed as a phpunit test ------------------------------------------------------------------------------ Issue Type: Bug Created: 2009-11-03T01:34:58.000+0000 Last Updated: 2010-08-25T06:36:56.000+0000 Status: Resolved Fix version(s): Reporter: Bryn Davies (eastzenders) Assignee: Michelangelo van Dam (dragonbe) Tags: - Zend\_Db Related issues: Attachments: - [error.png](/issues/secure/attachment/12404/error.png) ### Description Hi, I've been trying to create unit tests for my application that includes tests that access a database I've set up. In production code everything works as expected (i.e I can fetch a row, perform Inserts etc) but when running the same code with the same database I get corrupt results on any type of SELECT statement (be it a find(), fetchRow(), fetchAll() etc). The columns returned end up being either missed completely or named incorrectly (usually with the database/table name replacing the column name). If I have more than 5 columns in my table fastcgi.exe crashes completely. I've created a small test: public function testdbAccess() { $db = Zend\_Db::factory('Pdo\_Mysql', array( 'host' => 'localhost', 'username' => 'root', 'password' => 'xxxxx', 'dbname' => 'testdb' )); $sql = "select * from payment where id = ?"; $data = 1; $set = $db->fetchAll($sql,$data); } which still gives me problems. In the debugger, the variable window shows: $set | 0\_ Array[4] id 1 merchant OUR\_REF payment 1 testdb 3 And my Table is actually db: testdb table: payment id | merchant | payment\_type\_id | order\_id | track\_id 1 OUR\_REF 1 3 1 2 ... ... so I'm getting the table name replacing column 3's name, db name replacing column 4's name and column 5 is missing. If I have another column in the table I get the fastcgi.exe crash. Do you have any ideas about this? 
I found a similar issue reported (search ' too many columns crashes') but it was closed without an answer. Many thanks, Bryn ### Comments Posted by Michelangelo van Dam (dragonbe) on 2009-11-19T07:24:20.000+0000 A simple unit test shows that Zend\_Db behaves properly, independent from platform. <pre class="literal"> 1 <?php 2 3 require_once 'PHPUnit/Framework.php'; 4 require_once 'Zend/Db.php'; 5 class MyDbTest extends PHPUnit_Framework_TestCase 6 { 7 /** 8 * populated db table 'payment' with 3 rows 9 * 10 * id | merchant | payment_type_id | order_id | track_id 11 * 1|test 1|1|3|1 12 * 2|test 2|1|3|2 13 * 3|test 3|1|3|2 14 */ 15 public function testDbAccess() 16 { 17 $db = Zend_Db::factory( 18 'Pdo_SQLite', 19 array ('dbname' => './testdb.db') 20 ); 21 22 $sql = 'SELECT * FROM payment WHERE id = ?'; 23 $data = 1; 24 $set = $db->fetchAll($sql, $data); 25 $this->assertType('array', $set); 26 $this->assertSame(1, count($set)); 27 $this->assertArrayHasKey('id', $set[0]); 28 $this->assertArrayHasKey('merchant', $set[0]); 29 $this->assertArrayHasKey('payment_type_id', $set[0]); 30 $this->assertArrayHasKey('order_id', $set[0]); 31 $this->assertArrayHasKey('track_id', $set[0]); 32 } 33 } Output of PHPUnit is a success: <pre class="literal"> phpunit MyDbTest ./zf-8209.php PHPUnit 3.4.2 by Sebastian Bergmann. . Time: 0 seconds OK (1 test, 7 assertions) Posted by Bryn Davies (eastzenders) on 2009-11-19T08:00:45.000+0000 The problems I describe occur only with pdo\_mySql and not SQLite - in fact we are using sqlite in order to get around this issue. More than 4 columns in a table and you get a crash, less than 4 and you get corrupt record names. Cheers, Bryn Posted by Michelangelo van Dam (dragonbe) on 2009-11-20T05:21:48.000+0000 Ok, let me rework my unit test using a MySQL database then. Which flavor of MySQL are you having this experience with ? 
Posted by Michelangelo van Dam (dragonbe) on 2009-11-20T05:39:50.000+0000 Cannot reproduce the error, had following unit test run against both MySQL 5.0 and 5.1 databases. Please use following unit test to reproduce your error. <pre class="literal"> <?php require_once 'PHPUnit/Framework.php'; require_once 'Zend/Db.php'; class MyDbTest extends PHPUnit_Framework_TestCase { /** * populated db table 'payment' with 3 rows * * id | merchant | payment_type_id | order_id | track_id * 1|test 1|1|3|1 * 2|test 2|1|3|2 * 3|test 3|1|3|2 */ public function testDbAccess() { $db = Zend_Db::factory( 'Pdo_MySQL', array ( 'host' => 'localhost', 'username' => 'zfbhd', 'password' => 'secret', 'dbname' => 'test' ) ); $sql = 'SELECT * FROM payment WHERE id = ?'; $data = 1; $set = $db->fetchAll($sql, $data); $this->assertType('array', $set); $this->assertSame(1, count($set)); $this->assertArrayHasKey('id', $set[0]); $this->assertArrayHasKey('merchant', $set[0]); $this->assertArrayHasKey('payment_type_id', $set[0]); $this->assertArrayHasKey('order_id', $set[0]); $this->assertArrayHasKey('track_id', $set[0]); } } Until further notice, I set the status of this issue again to resolved... Posted by Bryn Davies (eastzenders) on 2009-11-20T07:05:19.000+0000 Problem summary Posted by Bryn Davies (eastzenders) on 2009-11-20T07:09:19.000+0000 Hi, I've attached a screenshot that highlights the issues I have when I run your test code. Running MySQL 5.1.36-community via TCP/IP Zend Studio Version 7.1.0.20091014 (Though this issue has appeared in previous versions). As I say, if this test code is chucked into say, a controller, it executes fine. But as a test it becomes corrupt. Cheers, Bryn Posted by Margus Koiduste (marguskoiduste) on 2010-08-25T06:36:56.000+0000 Hi, Check out my comment on following issue for possible fix: <http://zendframework.com/issues/browse/ZF-7734> Margus
30.76087
579
0.583604
eng_Latn
0.921861
2f52c714856ce23e661dc1e4875c9daa20bb0074
7,157
md
Markdown
docs/Endpoints.md
diesel-engineer/Moya
bd9d27a21ec94352bd57173884802b1b57665bd3
[ "MIT" ]
3
2017-10-30T16:03:30.000Z
2019-10-05T19:23:08.000Z
docs/Endpoints.md
diesel-engineer/Moya
bd9d27a21ec94352bd57173884802b1b57665bd3
[ "MIT" ]
null
null
null
docs/Endpoints.md
diesel-engineer/Moya
bd9d27a21ec94352bd57173884802b1b57665bd3
[ "MIT" ]
3
2018-03-08T20:42:35.000Z
2019-07-03T16:09:39.000Z
# Endpoints An endpoint is a semi-internal data structure that Moya uses to reason about the network request that will ultimately be made. An endpoint stores the following data: - The url. - The HTTP method (`GET`, `POST`, etc). - The HTTP request header fields. - `Task` to differentiate `upload`, `download` or `request`. - The sample response (for unit testing). [Providers](Providers.md) map [Targets](Targets.md) to Endpoints, then map Endpoints to actual network requests. There are two ways that you interact with Endpoints. 1. When creating a provider, you may specify a mapping from `Target` to `Endpoint`. 1. When creating a provider, you may specify a mapping from `Endpoint` to `URLRequest`. The first might resemble the following: ```swift let endpointClosure = { (target: MyTarget) -> Endpoint<MyTarget> in let url = URL(target: target).absoluteString return Endpoint(url: url, sampleResponseClosure: {.networkResponse(200, target.sampleData)}, method: target.method, task: target.task, httpHeaderFields: target.headers) } ``` This is actually the default implementation Moya provides. If you need something custom, or if you're creating a test provider that returns non-200 HTTP statuses in unit tests, this is where you would do it. Notice the `URL(target:)` initializer, Moya provides a convenient extension to create a `URL` from any `TargetType`. The second use is very uncommon. Moya tries to prevent you from having to worry about low-level details. But it's there if you need it. Its use is covered further below. Let's take a look at an example of the flexibility mapping from a Target to an Endpoint can provide. ## From Target to Endpoint In this closure you have absolute power over converting from `Target` to `Endpoint`. You can change the `task`, `method`, `url`, `headers` or `sampleResponse`. For example, we may wish to set our application name in the HTTP header fields for server-side analytics. 
```swift let endpointClosure = { (target: MyTarget) -> Endpoint<MyTarget> in let defaultEndpoint = MoyaProvider.defaultEndpointMapping(for: target) return defaultEndpoint.adding(newHTTPHeaderFields: ["APP_NAME": "MY_AWESOME_APP"]) } let provider = MoyaProvider<GitHub>(endpointClosure: endpointClosure) ``` *Note that header fields can also be added as part of the [Target](Targets.md) definition.* This also means that you can provide additional parameters to some or all of your endpoints. For example, say that there is an authentication token we need for all values of the hypothetical `MyTarget` target, with the exception of the target that actually does the authentication. We could construct an `endpointClosure` resembling the following. ```swift let endpointClosure = { (target: MyTarget) -> Endpoint<MyTarget> in let defaultEndpoint = MoyaProvider.defaultEndpointMapping(for: target) // Sign all non-authenticating requests switch target { case .authenticate: return defaultEndpoint default: return defaultEndpoint.adding(newHTTPHeaderFields: ["AUTHENTICATION_TOKEN": GlobalAppStorage.authToken]) } } let provider = MoyaProvider<GitHub>(endpointClosure: endpointClosure) ``` Awesome. Note that we can rely on the existing behavior of Moya and extend – instead of replace – it. The `adding(newHTTPHeaderFields:)` function allows you to rely on the existing Moya code and add your own custom values. Sample responses are a requirement of the `TargetType` protocol. However, they only specify the data returned. The Target-to-Endpoint mapping closure is where you can specify more details, which is useful for unit testing. Sample responses have one of these values: - `.networkError(NSError)` when the network failed to send the request, or failed to retrieve a response (e.g., a timeout). - `.networkResponse(Int, Data)` where `Int` is a status code and `Data` is the returned data. - `.response(HTTPURLResponse, Data)` where `HTTPURLResponse` is the response and `Data` is the returned data.
This one can be used to fully stub a response. ## Request Mapping As we mentioned earlier, the purpose of this library is not really to provide a coding framework with which to access the network – that's Alamofire's job. Instead, Moya is about a way to frame your thoughts about network access and provide compile-time checking of well-defined network targets. You've already seen how to map targets into endpoints using the `endpointClosure` parameter of the `MoyaProvider` initializer. That lets you create an `Endpoint` instance that Moya will use to reason about the network API call. At some point, that `Endpoint` must be resolved into an actual `URLRequest` to give to Alamofire. That's what the `requestClosure` parameter is for. The `requestClosure` is an optional, last-minute way to modify the request that hits the network. It has a default value of `MoyaProvider.defaultRequestMapping`, which uses the `urlRequest()` method of the `Endpoint` instance. This `urlRequest()` method throws three possible errors: - `MoyaError.requestMapping(String)` when a `URLRequest` could not be created for a given path - `MoyaError.parameterEncoding(Swift.Error)` when parameters couldn't be encoded - `MoyaError.encodableMapping(Swift.Error)` when an `Encodable` object couldn't be encoded into `Data` This closure receives an `Endpoint` instance and is responsible for invoking its `RequestResultClosure` argument (shorthand for `Result<URLRequest, MoyaError> -> Void`) with a request that represents the Endpoint. It's here that you'd do your OAuth signing or whatever. Since you may invoke the closure asynchronously, you can use whatever authentication library you like ([example](https://github.com/rheinfabrik/Heimdallr.swift)). Instead of modifying the request, you could simply log it. ```swift let requestClosure = { (endpoint: Endpoint<GitHub>, done: MoyaProvider.RequestResultClosure) in do { var request = try endpoint.urlRequest() // Modify the request however you like.
done(.success(request)) } catch { done(.failure(MoyaError.underlying(error))) } } let provider = MoyaProvider<GitHub>(requestClosure: requestClosure) ``` This `requestClosure` is useful for modifying properties specific to the `URLRequest` or providing information to the request that cannot be known until that request is created, like cookies settings. Note that the `endpointClosure` mentioned above is not intended for this purpose or any request-specific application-level mapping. This parameter is actually very useful for modifying the request object. `URLRequest` has many properties you can customize. Say you want to disable all cookies on requests: ```swift { (endpoint: Endpoint<ArtsyAPI>, done: MoyaProvider.RequestResultClosure) in do { var request: URLRequest = try endpoint.urlRequest() request.httpShouldHandleCookies = false done(.success(request)) } catch { done(.failure(MoyaError.underlying(error))) } } ``` You could also perform logging of network requests, since this closure is invoked just before the request is sent to the network.
45.297468
332
0.767361
eng_Latn
0.997213
2f535a1c9201818eb6a8a95372344449adf8bb56
2,721
md
Markdown
README.md
Jun-Hub/ClubhouseProgressBar
6c94adb721f5fec576f62b7577c4059a13ddf650
[ "Apache-2.0" ]
2
2021-02-25T06:35:32.000Z
2021-04-03T05:43:35.000Z
README.md
Jun-Hub/ClubhouseProgressBar
6c94adb721f5fec576f62b7577c4059a13ddf650
[ "Apache-2.0" ]
null
null
null
README.md
Jun-Hub/ClubhouseProgressBar
6c94adb721f5fec576f62b7577c4059a13ddf650
[ "Apache-2.0" ]
1
2021-02-25T21:41:26.000Z
2021-02-25T21:41:26.000Z
# ClubhouseProgressBar ⭕️ An indeterminate ProgressBar that looks exactly like the one in Clubhouse for iOS <br/> <br/> What is Clubhouse ProgressBar? ---------------------------------- <img src="https://user-images.githubusercontent.com/54348567/109168128-da4ffa00-77c1-11eb-8771-bf014bd6e075.GIF" width="300" height="620"> <br/> <br/> Demo ------------ <img src="https://user-images.githubusercontent.com/54348567/109168199-eb007000-77c1-11eb-8f16-ad109a8ef9d8.GIF" width="300" height="400"> <br/> <br/> Dependency ----------------- Add this to your root ```build.gradle``` file (not your module ```build.gradle``` file): ``` allprojects { repositories { ... maven { url 'https://jitpack.io' } } } ``` Then, add the library to your module ```build.gradle```: ``` dependencies { implementation 'com.github.Jun-Hub:ClubhouseProgressBar:1.0.2' } ``` <br/> <br/> Usage -------------- ```xml <com.github.joon.chprogressbar.ChProgressBar android:layout_width="wrap_content" android:layout_height="wrap_content" /> ``` <br/> <br/> Detailed Usage -------------------- ```xml <com.github.joon.chprogressbar.ChProgressBar android:layout_width="wrap_content" android:layout_height="wrap_content" app:dotCount="5" app:dotRadius="20dp" app:dotInterval="10dp" app:inactiveColor="@color/purple_200" app:activeColor="@color/purple_700" app:animationDuration="500" /> ``` <br/> You can also set the options programmatically. ```kotlin override fun onCreate(savedInstanceState: Bundle?)
{ super.onCreate(savedInstanceState) binding = ActivityMainBinding.inflate(layoutInflater) setContentView(binding.root) binding.chProgressBar.apply { setDotCount(5) setDotRadius(25f) setDotInterval(10f) setActiveColor(getColor(R.color.teal_700)) setInactiveColor(getColor(R.color.teal_200)) setAnimationDuration(500) } } ``` <br/> If you want a fadeout effect (the view ends up as ```View.GONE```) when your job is done, ```kotlin binding.chProgressBar.fadeOut() ``` or ```kotlin // you can set the fadeout duration binding.chProgressBar.fadeOut(2000) ``` <br/> <br/> License ----------- Copyright 2021 Joon Lee Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
24.736364
138
0.68688
eng_Latn
0.571254
2f53e913ddc4f0638149d6c3a823a10487400dce
659
md
Markdown
README.md
tallcoleman/CosmoToRGB
e8a9b35ff2cf4f178b399ab4b7bee3bb9716206f
[ "MIT" ]
null
null
null
README.md
tallcoleman/CosmoToRGB
e8a9b35ff2cf4f178b399ab4b7bee3bb9716206f
[ "MIT" ]
null
null
null
README.md
tallcoleman/CosmoToRGB
e8a9b35ff2cf4f178b399ab4b7bee3bb9716206f
[ "MIT" ]
null
null
null
# CosmoToRGB Dataset of RGB colours for 500 Cosmo (Lecien) embroidery floss colours ## Formats - Comma Separated Values (.csv) - MacStitch (.threads) - [Google Sheets](https://docs.google.com/spreadsheets/d/1iGxF2IGG0T30gD4D-IT3XFXjIJ9_RrB9EKvESWHApxA) <img src="https://github.com/tallcoleman/CosmoToRGB/raw/main/README_Assets/Google-Sheets-Cosmo-Colours.png" alt="Screenshot of Google Sheets document with Cosmo colour list" width="400"> ## Source Sampled from the [2020 version of Cosmo colour card with 500 colours](https://www.gela.ru/upload/iblock/1ef/leaflet_cosmo-size-25-embroidery-floss_-renewal-2020-ver.-500-solid-colors-_1_.pdf) using GIMP
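Once downloaded, the CSV variant of the dataset can be consumed from any language. Below is a minimal Python sketch of loading such a colour list; the column names (`Number`, `R`, `G`, `B`) and the sample values are assumptions for illustration — check the actual headers in the file before use.

```python
import csv
import io

# Hypothetical sample in the same shape as the dataset: a floss number
# plus its sampled RGB components (column names are an assumption here).
SAMPLE_CSV = """Number,R,G,B
100,234,205,199
101,228,183,179
"""

def load_floss_colours(text):
    """Parse the CSV text into {floss_number: '#rrggbb'} hex strings."""
    colours = {}
    for row in csv.DictReader(io.StringIO(text)):
        r, g, b = int(row["R"]), int(row["G"]), int(row["B"])
        colours[row["Number"]] = f"#{r:02x}{g:02x}{b:02x}"
    return colours

colours = load_floss_colours(SAMPLE_CSV)
print(colours["100"])  # → #eacdc7
```

For the real file, replace `io.StringIO(text)` with an `open(...)` call on the downloaded `.csv`.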
43.933333
202
0.787557
yue_Hant
0.651666
2f540781b9e01a46c742b725d01fef8e7989909c
69
md
Markdown
README.md
ketanhwr/CSN-212
a26473270e744850ef1661ec08a7322a72e230c7
[ "MIT" ]
null
null
null
README.md
ketanhwr/CSN-212
a26473270e744850ef1661ec08a7322a72e230c7
[ "MIT" ]
null
null
null
README.md
ketanhwr/CSN-212
a26473270e744850ef1661ec08a7322a72e230c7
[ "MIT" ]
null
null
null
# CSN-212 Assignments for CSN-212, Design and Analysis of Algorithms
23
58
0.797101
eng_Latn
0.84038
2f547e61da16863195a21662b0f18a8e813a8367
114
md
Markdown
content/perl/client/connected.md
xackery/eqquestapi
e3bb4d58651c7c2bb1ced94deb59115946eed3c5
[ "MIT" ]
null
null
null
content/perl/client/connected.md
xackery/eqquestapi
e3bb4d58651c7c2bb1ced94deb59115946eed3c5
[ "MIT" ]
1
2020-09-08T17:21:08.000Z
2020-09-08T17:21:08.000Z
content/perl/client/connected.md
xackery/eqquestapi
e3bb4d58651c7c2bb1ced94deb59115946eed3c5
[ "MIT" ]
1
2020-08-29T00:49:26.000Z
2020-08-29T00:49:26.000Z
--- title: Connected weight: 1 hidden: true menuTitle: Connected --- ## Connected ```perl $client->Connected() ```
11.4
20
0.675439
eng_Latn
0.966243
2f54f5ebbeb598bfb913ba2d63b24bb0bad67c59
11,635
md
Markdown
articles/azure-monitor/app/status-monitor-v2-api-enable-monitoring.md
Peterkingalex1972/azure-docs.es-es
5369c5fde8457f6d68fd46192cda191f5a4e3da3
[ "CC-BY-4.0", "MIT" ]
2
2019-09-04T06:39:25.000Z
2019-09-04T06:43:40.000Z
articles/azure-monitor/app/status-monitor-v2-api-enable-monitoring.md
Peterkingalex1972/azure-docs.es-es
5369c5fde8457f6d68fd46192cda191f5a4e3da3
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-monitor/app/status-monitor-v2-api-enable-monitoring.md
Peterkingalex1972/azure-docs.es-es
5369c5fde8457f6d68fd46192cda191f5a4e3da3
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Referencia de la API del Monitor de estado de Azure v2: Habilitación de la supervisión | Microsoft Docs' description: Referencia de la API del Monitor de estado de Azure v2. Enable-ApplicationInsightsMonitoring. Supervise el rendimiento de los sitios web sin volver a implementarlos. Funciona con las aplicaciones web de ASP.NET hospedadas en local, en las máquinas virtuales o en Azure. services: application-insights documentationcenter: .net author: MS-TimothyMothra manager: alexklim ms.assetid: 769a5ea4-a8c6-4c18-b46c-657e864e24de ms.service: application-insights ms.workload: tbd ms.tgt_pltfrm: ibiza ms.topic: conceptual ms.date: 04/23/2019 ms.author: tilee ms.openlocfilehash: d3963889e3604fb67cb526b992e7ca27b1212b59 ms.sourcegitcommit: 4b431e86e47b6feb8ac6b61487f910c17a55d121 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 07/18/2019 ms.locfileid: "68326353" --- # <a name="status-monitor-v2-api-enable-applicationinsightsmonitoring"></a>API del Monitor de estado v2: Enable-ApplicationInsightsMonitoring. Este artículo describe un cmdlet que forma parte del [módulo Az.ApplicationMonitor de PowerShell](https://www.powershellgallery.com/packages/Az.ApplicationMonitor/). ## <a name="description"></a>DESCRIPCIÓN Permite adjuntar sin código la supervisión de aplicaciones IIS en un equipo de destino. Este cmdlet modificará el archivo IIS applicationHost.config y establecerá algunas claves del Registro. También creará un archivo applicationinsights.ikey.config, que define la clave de instrumentación usada por cada aplicación. IIS cargará RedfieldModule durante el inicio, lo que insertará los SDK de Application Insights en aplicaciones como las de inicio. Reinicie IIS para que los cambios surtan efecto. Después de habilitar la supervisión, se recomienda que use [Live Metrics](live-stream.md) para comprobar rápidamente si la aplicación enviaba telemetría. > [!NOTE] > - Para empezar, necesita una clave de instrumentación. 
Para más información, consulte [Crear un recurso](create-new-resource.md#copy-the-instrumentation-key). > - Este cmdlet requiere que revise y acepte la licencia, y la declaración de privacidad. > [!IMPORTANT] > Este cmdlet requiere una sesión de PowerShell con permisos de administrador y una directiva de ejecución elevada. Para más información, consulte [Ejecute PowerShell como administrador con una directiva de ejecución con privilegios elevados](status-monitor-v2-detailed-instructions.md#run-powershell-as-admin-with-an-elevated-execution-policy). ## <a name="examples"></a>Ejemplos ### <a name="example-with-a-single-instrumentation-key"></a>Ejemplo con una única clave de instrumentación En este ejemplo, a todas las aplicaciones en el equipo actual se les asigna una única clave de instrumentación. ```powershell PS C:\> Enable-ApplicationInsightsMonitoring -InstrumentationKey xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx ``` ### <a name="example-with-an-instrumentation-key-map"></a>Ejemplo con un mapa de claves de instrumentación En este ejemplo: - `MachineFilter` busca la correspondencia con el equipo actual usando el comodín `'.*'`. - `AppFilter='WebAppExclude'` proporciona una clave de instrumentación `null`. No se puede instrumentar la aplicación especificada. - `AppFilter='WebAppOne'` asigna una clave de instrumentación única a la aplicación especificada. - `AppFilter='WebAppTwo'` asigna una clave de instrumentación única a la aplicación especificada. - Por último, `AppFilter` también usa el comodín `'.*'` para buscar la correspondencia de todas las aplicaciones web que no coincidan según las reglas anteriores y les asigna una clave de instrumentación predeterminada. - Se agregan espacios para mejorar la legibilidad. 
```powershell
PS C:\> Enable-ApplicationInsightsMonitoring -InstrumentationKeyMap @(@{MachineFilter='.*';AppFilter='WebAppExclude'}, @{MachineFilter='.*';AppFilter='WebAppOne';InstrumentationSettings=@{InstrumentationKey='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx1'}}, @{MachineFilter='.*';AppFilter='WebAppTwo';InstrumentationSettings=@{InstrumentationKey='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx2'}}, @{MachineFilter='.*';AppFilter='.*';InstrumentationSettings=@{InstrumentationKey='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxdefault'}})
```

## <a name="parameters"></a>Parameters

### <a name="-instrumentationkey"></a>-InstrumentationKey

**Required.** Use this parameter to supply a single instrumentation key for use by all apps on the target computer.

### <a name="-instrumentationkeymap"></a>-InstrumentationKeyMap

**Required.** Use this parameter to supply multiple instrumentation keys and a map of which instrumentation key is used by each app. You can create a single installation script for several computers by setting `MachineFilter`.

> [!IMPORTANT]
> Apps are matched against the rules in the order the rules are provided. So you should specify the most specific rules first and the most generic rules last.

#### <a name="schema"></a>Schema

`@(@{MachineFilter='.*';AppFilter='.*';InstrumentationSettings=@{InstrumentationKey='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'}})`

- **MachineFilter** is a required C# regex of the computer or VM name.
    - ".*" matches all
    - "ComputerName" matches only computers with the exact name specified.
- **AppFilter** is a required C# regex of the IIS site name. To get a list of sites on your server, run the command [get-iissite](https://docs.microsoft.com/powershell/module/iisadministration/get-iissite).
    - ".*" matches all
    - "SiteName" matches only the IIS site with the exact name specified.
- **InstrumentationKey** is required to enable monitoring of the apps that match the preceding two filters.
    - Leave this value null if you want to define rules to exclude monitoring.

### <a name="-enableinstrumentationengine"></a>-EnableInstrumentationEngine

**Optional.** Use this switch to enable the instrumentation engine to collect events and messages about what's happening during the execution of a managed process. These events and messages include dependency result codes, HTTP verbs, and SQL command text.

The instrumentation engine adds overhead and is off by default.

### <a name="-acceptlicense"></a>-AcceptLicense

**Optional.** Use this switch to accept the license and privacy statement in headless installations.

### <a name="-ignoresharedconfig"></a>-IgnoreSharedConfig

When you have a cluster of web servers, you might be using a [shared configuration](https://docs.microsoft.com/iis/web-hosting/configuring-servers-in-the-windows-web-platform/shared-configuration_211). The HttpModule can't be injected into this shared configuration, so this script will fail with a message that extra installation steps are required. Use this switch to ignore this check and continue installing prerequisites. For more information, see [Conflict with IIS shared configuration](status-monitor-v2-troubleshoot.md#conflict-with-iis-shared-configuration).

### <a name="-verbose"></a>-Verbose

**Common parameter.** Use this switch to display detailed logs.

### <a name="-whatif"></a>-WhatIf

**Common parameter.** Use this switch to test and validate your input parameters without actually enabling monitoring.
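Since the key map is resolved first-match-wins, rule ordering matters. The Python sketch below is purely illustrative (the `rules` list and `resolve_ikey` helper are hypothetical, not part of the Az.ApplicationMonitor module); it mimics how an ordered list of regex rules maps each IIS site to an instrumentation key, mirroring the InstrumentationKeyMap example above:

```python
import re

# Hypothetical rule list mirroring the InstrumentationKeyMap example:
# most specific rules first, catch-all last (first match wins).
rules = [
    {"machine": r".*", "app": r"WebAppExclude", "ikey": None},
    {"machine": r".*", "app": r"WebAppOne", "ikey": "ikey-1"},
    {"machine": r".*", "app": r"WebAppTwo", "ikey": "ikey-2"},
    {"machine": r".*", "app": r".*", "ikey": "ikey-default"},
]

def resolve_ikey(machine, site, rules):
    """Return the instrumentation key of the first rule matching both filters.

    A None key means the site is deliberately excluded from monitoring.
    """
    for rule in rules:
        if re.fullmatch(rule["machine"], machine) and re.fullmatch(rule["app"], site):
            return rule["ikey"]
    return None  # no rule matched: nothing to instrument

print(resolve_ikey("BUILD01", "WebAppExclude", rules))  # None (excluded)
print(resolve_ikey("BUILD01", "WebAppOne", rules))      # ikey-1
print(resolve_ikey("BUILD01", "SomeOtherSite", rules))  # ikey-default
```

Note that if the catch-all `'.*'` rule were listed first, it would shadow every other rule, which is why the doc insists on most-specific-first ordering.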
## <a name="output"></a>Output

#### <a name="example-output-from-a-successful-enablement"></a>Example output from a successful enablement

```
Initiating Disable Process
Applying transformation to 'C:\Windows\System32\inetsrv\config\applicationHost.config'
'C:\Windows\System32\inetsrv\config\applicationHost.config' backed up to 'C:\Windows\System32\inetsrv\config\applicationHost.config.backup-2019-03-26_08-59-52z'
in :1,237
No element in the source document matches '/configuration/location[@path='']/system.webServer/modules/add[@name='ManagedHttpModuleHelper']'
Not executing RemoveAll (transform line 1, 546)
Transformation to 'C:\Windows\System32\inetsrv\config\applicationHost.config' was successfully applied. Operation: 'disable'
GAC Module will not be removed, since this operation might cause IIS instabilities
Configuring IIS Environment for codeless attach...
Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IISADMIN[Environment]
Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC[Environment]
Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WAS[Environment]
Configuring IIS Environment for instrumentation engine...
Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\IISADMIN[Environment]
Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W3SVC[Environment]
Registry: skipping non-existent 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WAS[Environment]
Configuring registry for instrumentation engine...
Successfully disabled Application Insights Status Monitor
Installing GAC module 'C:\Program Files\WindowsPowerShell\Modules\Az.ApplicationMonitor\0.2.0\content\Runtime\Microsoft.AppInsights.IIS.ManagedHttpModuleHelper.dll'
Applying transformation to 'C:\Windows\System32\inetsrv\config\applicationHost.config'
Found GAC module Microsoft.AppInsights.IIS.ManagedHttpModuleHelper.ManagedHttpModuleHelper, Microsoft.AppInsights.IIS.ManagedHttpModuleHelper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
'C:\Windows\System32\inetsrv\config\applicationHost.config' backed up to 'C:\Windows\System32\inetsrv\config\applicationHost.config.backup-2019-03-26_08-59-52z_1'
Transformation to 'C:\Windows\System32\inetsrv\config\applicationHost.config' was successfully applied. Operation: 'enable'
Configuring IIS Environment for codeless attach...
Configuring IIS Environment for instrumentation engine...
Configuring registry for instrumentation engine...
Updating app pool permissions...
Successfully enabled Application Insights Status Monitor
```

## <a name="next-steps"></a>Next steps

View your telemetry:

- [Explore metrics](../../azure-monitor/app/metrics-explorer.md) to monitor performance and usage.
- [Search events and logs](../../azure-monitor/app/diagnostic-search.md) to diagnose problems.
- [Use Analytics](../../azure-monitor/app/analytics.md) for more advanced queries.
- [Create dashboards](../../azure-monitor/app/overview-dashboard.md).

Add more telemetry:

- [Create web tests](monitor-web-app-availability.md) to make sure your site stays live.
- [Add web client telemetry](../../azure-monitor/app/javascript.md) to see exceptions from web page code and to enable trace calls.
- [Add the Application Insights SDK to your code](../../azure-monitor/app/asp-net.md) so you can insert trace and log calls.
Do more with Status Monitor v2:

- Use our guide to [troubleshoot](status-monitor-v2-troubleshoot.md) Status Monitor v2.
- [Get the config](status-monitor-v2-api-get-config.md) to confirm that your settings were recorded correctly.
- [Get the status](status-monitor-v2-api-get-status.md) to inspect monitoring.
68.040936
345
0.799914
spa_Latn
0.853957
2f550f9c646be8181b08eae01360775c4a93aa6f
6,087
md
Markdown
articles/application-gateway/application-gateway-troubleshooting-502.md
tksh164/azure-docs.ja-jp
9e6da0958749572641dfdc56dd64e99ba8ee492d
[ "CC-BY-3.0" ]
null
null
null
articles/application-gateway/application-gateway-troubleshooting-502.md
tksh164/azure-docs.ja-jp
9e6da0958749572641dfdc56dd64e99ba8ee492d
[ "CC-BY-3.0" ]
null
null
null
articles/application-gateway/application-gateway-troubleshooting-502.md
tksh164/azure-docs.ja-jp
9e6da0958749572641dfdc56dd64e99ba8ee492d
[ "CC-BY-3.0" ]
null
null
null
---
title: Troubleshooting bad gateway (502) errors in Application Gateway | Microsoft Docs
description: Learn how to troubleshoot Application Gateway 502 errors
services: application-gateway
documentationcenter: na
author: amitsriva
manager: rossort
editor: ''
tags: azure-resource-manager

ms.service: application-gateway
ms.devlang: na
ms.topic: article
ms.tgt_pltfrm: na
ms.workload: infrastructure-services
ms.date: 09/02/2016
ms.author: amitsriva
---
# Troubleshooting bad gateway errors in Application Gateway

## Overview

After you configure an Azure Application Gateway, one of the errors you may see is "Server Error: 502 - Web server received an invalid response while acting as a gateway or proxy server". The main reasons this error occurs are:

* The backend pool of the Azure Application Gateway is not configured or is empty.
* None of the VMs or instances in the VM scale set are healthy.
* The backend VMs or instances in the VM scale set are not responding to the default health probe.
* The custom health probe configuration is invalid or improper.
* The request timed out, or there are connectivity issues with user requests.

## Empty BackendAddressPool

### Cause

If the backend address pool has no configured VMs or VM scale sets, the Application Gateway cannot route customer requests and throws a bad gateway error.

### Solution

Make sure the backend address pool is not empty. You can do this with PowerShell, the CLI, or the portal.

    Get-AzureRmApplicationGateway -Name "SampleGateway" -ResourceGroupName "ExampleResourceGroup"

The output of the preceding cmdlet should contain a non-empty backend address pool. The following example returns two pools configured with FQDNs or IP addresses for the backend VMs. The provisioning state of the BackendAddressPool must be "Succeeded".

    BackendAddressPoolsText: [{
    "BackendAddresses": [{
    "ipAddress": "10.0.0.10",
    "ipAddress": "10.0.0.11"
    }],
    "BackendIpConfigurations": [],
    "ProvisioningState": "Succeeded",
    "Name": "Pool01",
    "Etag": "W/"00000000-0000-0000-0000-000000000000"",
    "Id": "/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Network/applicationGateways/<application gateway name>/backendAddressPools/pool01"
    }, {
    "BackendAddresses": [{
    "Fqdn": "xyx.cloudapp.net",
    "Fqdn": "abc.cloudapp.net"
    }],
    "BackendIpConfigurations": [],
    "ProvisioningState": "Succeeded",
    "Name": "Pool02",
    "Etag":
"W/"00000000-0000-0000-0000-000000000000"", "Id": "/subscriptions/<subscription id>/resourceGroups/<resource group name>/providers/Microsoft.Network/applicationGateways/<application gateway name>/backendAddressPools/pool02" }]

## Unhealthy instances in the BackendAddressPool

### Cause

If all the instances of the BackendAddressPool are unhealthy, the Application Gateway has no backend to route user requests to. This can also happen when the backend instances are healthy but do not have the required application deployed.

### Solution

Ensure that the instances are healthy and that the application is properly configured. Check whether the backend instances can respond to a ping from another VM in the same VNet. If a public endpoint is configured, make sure browser requests to the web application are serviceable.

## Issues with the default health probe

### Cause

502 errors can also often indicate that the default health probe cannot reach the backend VMs. When an Application Gateway instance is provisioned, it automatically configures a default health probe for each BackendAddressPool using the properties of the BackendHttpSetting. No user input is required for this probe setting. Specifically, when a load-balancing rule is configured, an association is made between a BackendHttpSetting and a BackendAddressPool, and a default probe is configured for each of these associations. The Application Gateway initiates periodic health check connections to each instance in the BackendAddressPool on the port specified in the BackendHttpSetting element. The following table lists the values associated with the default health probe.

| Probe property | Value | Description |
| --- | --- | --- |
| Probe URL |http://127.0.0.1/ |URL path |
| Interval |30 |Probe interval in seconds |
| Timeout |30 |Probe timeout in seconds |
| Unhealthy threshold |3 |Probe retry count. The backend server is marked "down" after the consecutive probe failure count reaches the unhealthy threshold. |

### Solution

* Ensure that a default site is configured and is listening at 127.0.0.1.
* If the BackendHttpSetting specifies a port other than 80, the default site should be configured to listen at that port.
* A call to http://127.0.0.1:port must return an HTTP result code of 200 within the 30-second timeout period.
* Make sure the configured port is open and that no firewall rules or Azure network security groups block incoming or outgoing traffic on the configured port.
* If Azure classic VMs or a cloud service is used with an FQDN or public IP, make sure the corresponding [endpoint](../virtual-machines/virtual-machines-windows-classic-setup-endpoints.md) is open.
* If the VM is configured through Azure Resource Manager and is outside the VNet where the Application Gateway is deployed, the [network security group](../virtual-network/virtual-networks-nsg.md) must be configured to allow access on the desired port.

## Issues with the custom health probe

### Cause
Custom health probes allow additional flexibility over the default probing behavior. With custom probes, users can configure the probe interval, the URL and path to test, and how many failed responses to accept before marking the backend pool instance as unhealthy. The following additional properties are added:

| Probe property | Description |
| --- | --- |
| Name |Name of the probe. This name is used to refer to the probe in the backend HTTP settings. |
| Protocol |Protocol used to send the probe. The only valid protocol is HTTP. |
| Host |Host name to send the probe to. Applicable only when multi-site is configured on the Application Gateway. This is different from the VM host name. |
| Path |Relative path of the probe. The path must start with '/'. The probe is sent to <protocol>://<host>:<port><path>. |
| Interval |Probe interval in seconds. The time interval between two consecutive probes. |
| Timeout |Probe timeout in seconds. The probe is marked "failed" if a valid response is not received within this timeout period. |
| Unhealthy threshold |Probe retry count. The backend server is marked "down" after the consecutive probe failure count reaches the unhealthy threshold. |

### Solution

Verify that the custom health probe is configured correctly as shown in the preceding table. In addition to the preceding troubleshooting steps, also make sure of the following:

* Ensure that the protocol is set to HTTP only. HTTPS is not currently supported.
* Ensure that the probe is specified correctly according to the [guide](application-gateway-create-probe-ps.md).
* If the Application Gateway is configured for a single site, by default the host name should be specified as '127.0.0.1', unless otherwise configured in a custom probe.
* Ensure that a call to http://\<host>:<port><path> returns an HTTP result code of 200.
* Ensure that the interval, timeout, and unhealthy threshold are within the acceptable ranges.

## Request timeout

### Cause

When a user request is received, the Application Gateway applies the configured rules to the request and routes it to a backend pool instance. It then waits for a configurable interval of time for a response from the backend instance. By default, this interval is **30** seconds. If the Application Gateway does not receive a response from the backend application within this interval, the user request gets a 502 error.

### Solution

Application Gateway lets users configure this setting via the BackendHttpSetting and apply it to different pools. Different backend pools can have different BackendHttpSettings configured, and therefore different request timeouts.

    New-AzureRmApplicationGatewayBackendHttpSettings -Name 'Setting01' -Port 80 -Protocol Http -CookieBasedAffinity Enabled -RequestTimeout 60

## Next steps

If the preceding steps do not resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).

<!---HONumber=AcomDC_0907_2016-->
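As a concrete illustration of the probe properties above (in particular the unhealthy threshold), the following Python sketch mimics how a backend is marked down only after the configured number of *consecutive* probe failures, and how a single success resets the streak. The `ProbeState` class is purely illustrative and not part of any Azure SDK:

```python
class ProbeState:
    """Tracks consecutive health-probe failures for one backend instance."""

    def __init__(self, unhealthy_threshold=3):
        self.unhealthy_threshold = unhealthy_threshold
        self.consecutive_failures = 0
        self.healthy = True

    def record(self, probe_succeeded):
        """Record one probe result and return the resulting health state."""
        if probe_succeeded:
            # Any successful probe resets the failure streak.
            self.consecutive_failures = 0
            self.healthy = True
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.unhealthy_threshold:
                self.healthy = False  # marked down -> requests get 502
        return self.healthy

state = ProbeState(unhealthy_threshold=3)
print(state.record(False))  # True  (1 failure)
print(state.record(False))  # True  (2 failures)
print(state.record(False))  # False (3rd consecutive failure: backend down)
print(state.record(True))   # True  (recovered)
```

This is why intermittent single-probe failures alone do not produce 502s; the backend must fail the probe for threshold-many intervals in a row.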
48.309524
430
0.767209
yue_Hant
0.587137
2f5582db55dc1cdb81d957e99e8b6f816fb7da64
1,878
md
Markdown
docs/observability/Observability.md
aziule/patron
1eef62577c37d4669b567291f1d259328f91697b
[ "Apache-2.0" ]
null
null
null
docs/observability/Observability.md
aziule/patron
1eef62577c37d4669b567291f1d259328f91697b
[ "Apache-2.0" ]
null
null
null
docs/observability/Observability.md
aziule/patron
1eef62577c37d4669b567291f1d259328f91697b
[ "Apache-2.0" ]
null
null
null
# Observability

## Metrics and Tracing

Tracing and metrics are provided by Jaeger's implementation of the OpenTracing project and by Prometheus. Every component has been integrated with the above libraries and produces traces and metrics.

Metrics can be scraped by Prometheus via the default HTTP component at the `/metrics` route. Traces will be sent to a Jaeger agent, which can be set up through the environment variables mentioned in the config section. Sane defaults are applied to make usage easy.

The `component` and `client` packages implement capturing and propagating of metrics and traces.

## Prometheus Exemplars

[OpenTracing](https://opentracing.io) compatible tracing systems such as [Grafana Tempo](https://grafana.com/oss/tempo/) can work with [Prometheus Exemplars](https://grafana.com/docs/grafana/latest/basics/exemplars/).

Below are the prerequisites for enabling exemplars:

- Use Prometheus Go client library version 1.4.0 or above.
- Use the new `ExemplarObserver` for `Histogram` or `ExemplarAdder` for `Counter`, because the original interfaces have not been changed, for backward compatibility.
- Use the `ObserveWithExemplar` or `AddWithExemplar` methods, noting the `TraceID` key, which is needed later to configure Grafana so that it knows which label to use to retrieve the `TraceID`.

An example of enabling exemplars in an already instrumented Go application can be found [here](../../trace/metric.go), where exemplars are enabled for `Histogram` and `Counter` metrics.

The result of the above steps is trace IDs attached to metrics via exemplars. When querying the `/metrics` endpoint with `curl -H "Accept: application/openmetrics-text" <endpoint>:<port>/metrics`, exemplars will be present in the metric entries after `#`, in the [Open Metrics](https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#exemplars-1) format.
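To see what an exemplar-annotated metric line looks like when scraped, the short Python sketch below splits an OpenMetrics line at the `#` marker and pulls out the exemplar labels. It is illustrative only: the sample line and the `parse_exemplar` helper follow the OpenMetrics exemplar format referenced above, and are not part of Patron or any Prometheus tooling.

```python
import re

# A sample OpenMetrics line: the exemplar follows the ' # ' marker.
line = ('http_request_duration_seconds_bucket{le="0.25"} 205 '
        '# {TraceID="9a2f..."} 0.12 1609746387.123')

def parse_exemplar(metric_line):
    """Split an OpenMetrics line into (metric, exemplar labels) parts.

    Returns (metric_text, labels_dict), where labels_dict is {} when the
    line carries no exemplar.
    """
    metric, sep, exemplar = metric_line.partition(" # ")
    labels = dict(re.findall(r'(\w+)="([^"]*)"', exemplar)) if sep else {}
    return metric.strip(), labels

metric, labels = parse_exemplar(line)
print(labels.get("TraceID"))  # 9a2f...
```

Grafana does essentially the reverse of this lookup: it is told which exemplar label (here `TraceID`) holds the trace ID, then links each exemplar dot to the trace with that ID.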
62.6
179
0.78967
eng_Latn
0.985643
2f55c8439d9359befd2b95e3b75d12e11cef7abe
556
md
Markdown
CHANGELOG.md
mamal72/excuse-me
34ba2e01b4717a47e08c2300f6f3b876df9b3209
[ "MIT" ]
2
2021-08-30T14:23:12.000Z
2021-09-02T18:48:35.000Z
CHANGELOG.md
mamal72/excuse-me
34ba2e01b4717a47e08c2300f6f3b876df9b3209
[ "MIT" ]
null
null
null
CHANGELOG.md
mamal72/excuse-me
34ba2e01b4717a47e08c2300f6f3b876df9b3209
[ "MIT" ]
null
null
null
# Changelog All notable changes to this project will be documented in this file. See [standard-version](https://github.com/conventional-changelog/standard-version) for commit guidelines. ### [1.0.3](https://github.com/mamal72/excuse-me/compare/v1.0.2...v1.0.3) (2021-08-25) ### [1.0.2](https://github.com/mamal72/excuse-me/compare/v1.0.1...v1.0.2) (2021-08-25) ### Bug Fixes * add proper stuff to make it a global excutable cmd ([0b01a29](https://github.com/mamal72/excuse-me/commit/0b01a29c3db66423947b3b05067dfd8352ba0df1)) ### 1.0.1 (2021-08-25)
37.066667
174
0.726619
eng_Latn
0.260006
2f575f10aef48abfaee62df75015aace24e31598
229
md
Markdown
README.md
goodwaterwu/jetson_nano_eepromdump
43ddfacf7e27a8b3935260df51e12ebc25e6d368
[ "MIT" ]
null
null
null
README.md
goodwaterwu/jetson_nano_eepromdump
43ddfacf7e27a8b3935260df51e12ebc25e6d368
[ "MIT" ]
null
null
null
README.md
goodwaterwu/jetson_nano_eepromdump
43ddfacf7e27a8b3935260df51e12ebc25e6d368
[ "MIT" ]
1
2020-05-10T10:03:48.000Z
2020-05-10T10:03:48.000Z
# jetson_nano_eepromdump A script to parse EEPROM on Jetson Nano development kit <pre>Usage: eepromdump [options] -h show this help</pre> ![result](https://github.com/goodwaterwu/jetson_nano_eepromdump/blob/master/readme.png)
28.625
87
0.790393
yue_Hant
0.292521
2f579df6ede95e355070b21b10b8e022a3679afb
1,261
md
Markdown
docs/framework/windows-workflow-foundation/2577-trycatchexceptionduringcancelation.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/windows-workflow-foundation/2577-trycatchexceptionduringcancelation.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/windows-workflow-foundation/2577-trycatchexceptionduringcancelation.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 2577 - TryCatchExceptionDuringCancelation
ms.date: 03/30/2017
ms.assetid: 35ee9f55-227f-4566-bcb4-4c7c75dea85b
ms.openlocfilehash: c272dd91249dfc90e6f4c38a7339919a5a6446e5
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/23/2019
ms.locfileid: "61755622"
---
# <a name="2577---trycatchexceptionduringcancelation"></a>2577 - TryCatchExceptionDuringCancelation

## <a name="properties"></a>Properties

|||
|-|-|
|ID|2577|
|Keywords|WFActivities|
|Level|Warning|
|Channel|Microsoft-Windows-Application Server-Applications/Debug|

## <a name="description"></a>Description

Indicates that a child activity of a TryCatch activity has thrown an exception during cancelation.

## <a name="message"></a>Message

A child activity of the TryCatch activity '%1' has thrown an exception during cancelation.

## <a name="details"></a>Details

|Data item name|Data item type|Description|
|--------------------|--------------------|-----------------|
|DisplayName|xs:string|The display name of the activity.|
|AppDomain|xs:string|The string returned by AppDomain.CurrentDomain.FriendlyName.|
37.088235
119
0.73751
deu_Latn
0.368057
2f582ebd0d1c2fee33b6aede3827fcb9456c5f24
5,919
md
Markdown
README.md
essentiaone/omise-python
5e974c7027fe4506719a5ab86e4c4ca9ae91657b
[ "MIT" ]
2
2018-04-29T15:05:20.000Z
2018-05-05T13:51:43.000Z
README.md
essentiaone/omise-python
5e974c7027fe4506719a5ab86e4c4ca9ae91657b
[ "MIT" ]
null
null
null
README.md
essentiaone/omise-python
5e974c7027fe4506719a5ab86e4c4ca9ae91657b
[ "MIT" ]
null
null
null
# Omise Python Client [![Build Status](https://img.shields.io/travis/omise/omise-python.svg?style=flat-square)](https://travis-ci.org/omise/omise-python) [![Python Versions](https://img.shields.io/pypi/pyversions/omise.svg?style=flat-square)](https://pypi.python.org/pypi/omise/) [![PyPi Version](https://img.shields.io/pypi/v/omise.svg?style=flat-square)](https://pypi.python.org/pypi/omise/) [![](https://img.shields.io/badge/discourse-forum-1a53f0.svg?style=flat-square&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAA4AAAAOCAYAAAAfSC3RAAAAAXNSR0IArs4c6QAAAAlwSFlzAAALEwAACxMBAJqcGAAAAVlpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IlhNUCBDb3JlIDUuNC4wIj4KICAgPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4KICAgICAgPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIKICAgICAgICAgICAgeG1sbnM6dGlmZj0iaHR0cDovL25zLmFkb2JlLmNvbS90aWZmLzEuMC8iPgogICAgICAgICA8dGlmZjpPcmllbnRhdGlvbj4xPC90aWZmOk9yaWVudGF0aW9uPgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAgPC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KTMInWQAAAqlJREFUKBU9UVtLVFEU%2FvY%2B27mPtxl1dG7HbNRx0rwgFhJBPohBL9JTZfRQ0YO9RU%2FVL6iHCIKelaCXqIewl4gEBbEyxSGxzKkR8TbemmbmnDlzVvsYtOHbey1Y317fWh8DwCVMCfSHww3ElCs7CjuzbOcNIaEo9SbtlDRjZiNPY%2BvrqSWrTh7l3yPvrmh0KBZW59HcREjEqcGpElAuESRxopU648dTwfrIyH%2BCFXSH1cFgJLqHlma6443SG0CfqYY2NZjQnkV8eiMgP6ijjnizHglErlocdl5VA0mT3v102dseL2W14cYM99%2B9XGY%2FlQArd8Mo6JhbSJUePHytvf2UdnW0qen93cKQ4nWXX1%2FyOkZufsuZN0L7PPzkthDDZ4FQLajSA6XWR8HWIK861sCfj68ggGwl83mzfMclBmAQ%2BktrqBu9wOhcD%2BB0ErSiFFyEkdcYhKD27mal9%2F5FY36b4BB%2FTvO8XdQhlUe11F3WG2fc7QLlC8wai3MGGQCGDkcZQyymCqAPSmati3s45ygWseeqADwuWS%2F3wGS5hClDMMstxvJFHQuGU26yHsY6iHtL0sIaOyZzB9hZz0hHZW71kySSl6LIJlSgj5s5LO6VG53aFgpOfOFCyoFmYsOS5HZIaxVwKYsLSbJJn2kfU%2BlNdms5WMLqQRklX0FX26eFRnKYwzX0XRsgR0uUrWxplM7oqPIq8r8cZrdLNLqaABayxZMTTx2HVfglbP4xkcvqZEMNfmglevRi1ny5mGfJfTuQiBEq%2FMBvG0NqDh2TY47sbtJAuO%2Fe9%2Fn3STRFosm2WIxsFSFrFUfwHb11JNBNcaZSp8yb%2FEhHW3suWRNZRzDGvxb0oifk5lmnX2V2J2dEJkX1Q0baZ1MvYXPXHvhA
ga7x9PTEyj8a%2BF%2BXbxiTn78bSQAAAABJRU5ErkJggg%3D%3D)](https://forum.omise.co)

Please pop onto our [community forum](https://forum.omise.co) or contact [support@omise.co](mailto:support@omise.co) if you have any questions regarding this library and the functionality it provides.

## Installation

If you simply want to use the Omise Python client in your application, you can install it using [pip](http://www.pip-installer.org/en/latest/index.html):

```
pip install omise
```

Or `easy_install`, in case your system does not have pip installed:

```
easy_install omise
```

The Omise Python client officially supports the following Python versions:

* Python 2.7
* Python 3.3
* Python 3.4
* Python 3.5
* Python 3.6

Any versions not listed here _may_ work, but they are not automatically tested.

## Usage

Please refer to the examples in the [API documentation](https://docs.omise.co/) or to the [help](https://docs.python.org/2/library/functions.html#help) function for documentation.

For basic usage, you can use the module in your application by importing the `omise` module and setting the secret key and public key:

```python
>>> import omise
>>> omise.api_secret = 'skey_test_4xsjvwfnvb2g0l81sjz'
>>> omise.api_public = 'pkey_test_4xs8breq32civvobx15'
```

After both keys are set, you can now use all the APIs. For example, to create a new customer without any cards associated to the customer:

```python
>>> customer = omise.Customer.create(
>>>     description='John Doe',
>>>     email='john.doe@example.com'
>>> )
<Customer id='cust_test_4xtrb759599jsxlhkrb' at 0x7ffab7136910>
```

Then to retrieve, update, and destroy that customer:

```python
>>> customer = omise.Customer.retrieve('cust_test_4xtrb759599jsxlhkrb')
>>> customer.description = 'John W.
Doe'
>>> customer.update()
<Customer id='cust_test_4xtrb759599jsxlhkrb' at 0x7ffab7136910>
>>> customer.destroy()
>>> customer.destroyed
True
```

In case of any errors (such as authentication failure, invalid card, and others as listed in the [errors](https://docs.omise.co/api/errors/) section of the documentation), an error of a subclass of `omise.errors.BaseError` will be raised. The application code must handle these errors as appropriate.

### API version

In case you want to enforce the API version the application uses, you can specify it by setting `api_version`. The version specified by this setting will override the version setting in your account. This is useful if you have multiple environments with different API versions (e.g. development on the latest version but production on an older version).

```python
>>> import omise
>>> omise.api_version = '2014-07-27'
```

It is highly recommended to set this version to the current version you're using.

## Contributing

The Omise Python client uses [Vagrant](https://www.vagrantup.com/) for development environment provisioning and requires all changes to be tested against all supported Python versions. You can bootstrap the environment with the following instructions:

1. Install [Vagrant](https://www.vagrantup.com/) with the [provider](https://docs.vagrantup.com/v2/providers/index.html) of your choice (e.g. [VirtualBox](https://www.virtualbox.org/))
2. Run `vagrant up` and read Vagrant's [Getting Started](https://docs.vagrantup.com/v2/getting-started/index.html) while waiting.

After the box is up and running, you can SSH to the server and run [tox](http://tox.readthedocs.org/en/latest/) to test against all supported Python versions:

1. Run `vagrant ssh` to SSH into the provisioned box.
2. Run `cd /vagrant` to navigate to the working directory.
3. Run `tox` to run tests against all supported Python versions.

Any changes made locally to the source code will be automatically synced to the box.
After you're done with the changes, please open a [Pull Request](https://github.com/omise/omise-python/pulls).

## License

See [LICENSE.txt](https://github.com/omise/omise-python/blob/master/LICENSE.txt)
59.787879
1,684
0.819057
eng_Latn
0.596702
2f58e410bf46a4d50aba7981dedaf2b6f4fc76ee
355
md
Markdown
ext/native-decls/ResetVehiclePedsCanStandOnTopFlag.md
thorium-cfx/fivem
587eb7c12066a2ebf8631bde7bb39ee2df1b5a0c
[ "MIT" ]
5,411
2017-04-14T08:57:56.000Z
2022-03-30T19:35:15.000Z
ext/native-decls/ResetVehiclePedsCanStandOnTopFlag.md
thorium-cfx/fivem
587eb7c12066a2ebf8631bde7bb39ee2df1b5a0c
[ "MIT" ]
802
2017-04-21T14:18:36.000Z
2022-03-31T21:20:48.000Z
ext/native-decls/ResetVehiclePedsCanStandOnTopFlag.md
thorium-cfx/fivem
587eb7c12066a2ebf8631bde7bb39ee2df1b5a0c
[ "MIT" ]
2,011
2017-04-14T09:44:15.000Z
2022-03-31T15:40:39.000Z
--- ns: CFX apiset: client game: gta5 --- ## RESET_VEHICLE_PEDS_CAN_STAND_ON_TOP_FLAG ```c void RESET_VEHICLE_PEDS_CAN_STAND_ON_TOP_FLAG(Vehicle vehicle); ``` Resets whether or not peds can stand on top of the specified vehicle. Note this flag is not replicated automatically, you will have to manually do so. ## Parameters * **vehicle**: The vehicle.
20.882353
80
0.766197
eng_Latn
0.952071
2f5926ec4b3314949262e44987030a8597d244c8
1,059
md
Markdown
README.md
kuya-ui/Pizza-place
32ee2ac4d19653d4e455444dc612d897c2f27e61
[ "MIT" ]
null
null
null
README.md
kuya-ui/Pizza-place
32ee2ac4d19653d4e455444dc612d897c2f27e61
[ "MIT" ]
null
null
null
README.md
kuya-ui/Pizza-place
32ee2ac4d19653d4e455444dc612d897c2f27e61
[ "MIT" ]
null
null
null
# NINI'S PIZZA PLACE

#### A site about a place where customers can order pizzas from the comfort of their home, 25 6 2021.

#### By **MAXMILLAN KUYA**

## Description

A site about a pizza place where people can order pizza from the comfort of their homes. The pizzas have different prices according to their sizes, and they also come with different crusts and toppings. There is also delivery for customers who can't reach our shops.

## Setup/Installation Requirements

* $ sudo apt-get update
* $ sudo apt-get install code
* $ sudo apt-get install node.js
* $ sudo apt-get install npm

## Known Bugs

No known bugs

## BDD

The site is about a pizza place where people make their orders from the comfort of their homes. It covers sizes, crusts, toppings, orders, how to get deliveries, and the prices after ordering. We also use JavaScript to show alerts.

## Technologies Used

* HTML
* CSS
* JavaScript

## Support and contact details

* kuyamaxmillan@gmail.com
* moringaschool

### License

*Licensed under the [MIT LICENSE](license.txt)*

Copyright (c) {2021} **Maxmillan Kuya**
42.36
269
0.75543
eng_Latn
0.99605
2f59a3d25c328915182743c4e65b1e30015f0db8
1,576
md
Markdown
README.md
hashsploit/OpenDNAS
3c5523f33ce485880a9713452b37f656ad985bc5
[ "MIT" ]
6
2020-07-25T23:08:11.000Z
2021-01-04T16:55:37.000Z
README.md
hashsploit/OpenDNAS
3c5523f33ce485880a9713452b37f656ad985bc5
[ "MIT" ]
null
null
null
README.md
hashsploit/OpenDNAS
3c5523f33ce485880a9713452b37f656ad985bc5
[ "MIT" ]
3
2020-08-27T20:47:54.000Z
2021-01-04T01:18:32.000Z
### This project is no longer being worked on because nginx only supports HTTP/1.1, which does not work with DNAS (DNAS requires HTTP/1.0)

Use [clank-dnas](https://github.com/hashsploit/clank-dnas) instead.

## OpenDNAS

An Open Source replacement DNAS server.

### What is OpenDNAS

OpenDNAS is an Open Source implementation of the production DNAS servers hosted by SCEI for authenticating Sony PlayStation clients to play multiplayer games.

On April 4, 2016, SCEI discontinued the official DNAS servers, thus forcefully taking down hundreds of multiplayer game titles with them.

OpenDNAS aims to be a solution to this, providing successful authentication for emulators and genuine PlayStations.

### Requirements

- nginx (DNAS does not work with HTTP/1.1 ...)
- OpenSSL 1.0.2i (or older, as long as it supports SSLv2).
- php7.0.15-fpm (mcrypt_encrypt [removed in 7.2](https://www.php.net/manual/en/function.mcrypt-encrypt.php)).

### Installation

Please do not run this application on a production system directly. This application requires OpenSSL 1.0.2i (SSLv2) to be compiled, which is no longer secure. Instead, use a container such as [clank-dnas](https://github.com/hashsploit/clank-dnas).

A sample `nginx.vhost` has been provided.

- The `certs/` directory should become `/etc/nginx/certs`.
- The `public/` directory should become `/var/www/OpenDNAS/public`.
- The `nginx.vhost` file should be configured, added to `/etc/nginx/sites-available`, and then linked to `/etc/nginx/sites-enabled`.
- You will need to generate your own SSL cert for `opendnas.localhost`.
46.352941
160
0.766497
eng_Latn
0.983768
2f5aa4af2ef76ec407999f03f893b7bf8d46636d
3,073
md
Markdown
docs/reference/errors-and-warnings/NU3028.md
DalavanCloud/docs.microsoft.com-nuget.it-it
a64229582afdd0a42917dd2542051c62a33d884c
[ "MIT" ]
1
2019-01-05T03:19:42.000Z
2019-01-05T03:19:42.000Z
docs/reference/errors-and-warnings/NU3028.md
DalavanCloud/docs.microsoft.com-nuget.it-it
a64229582afdd0a42917dd2542051c62a33d884c
[ "MIT" ]
null
null
null
docs/reference/errors-and-warnings/NU3028.md
DalavanCloud/docs.microsoft.com-nuget.it-it
a64229582afdd0a42917dd2542051c62a33d884c
[ "MIT" ]
null
null
null
---
title: NuGet Warning NU3028
description: NU3028 warning code
author: zhili1208
ms.author: lzhi
ms.date: 06/25/2018
ms.topic: reference
ms.reviewer: anangaur
f1_keywords:
- NU3028
ms.openlocfilehash: ecfa650144e186fb75311bacfbc38eb773b97f05
ms.sourcegitcommit: 47858da1103848cc1b15bdc00ac7219c0ee4a6a0
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 09/12/2018
ms.locfileid: "44516191"
---
# <a name="nuget-warning-nu3028"></a>NuGet Warning NU3028

*NuGet 4.6.0+*

<pre>The author primary signature's timestamp found a chain building issue: The revocation function was unable to check revocation because the revocation server could not be reached. For more information, visit https://aka.ms/certificateRevocationMode</pre>

### <a name="issue"></a>Issue

Certificate chain building failed for the timestamp signature. The timestamp signing certificate is untrusted or revoked, or the certificate's revocation information is unavailable.

### <a name="solution"></a>Solution

Please use a valid and trusted certificate. Check your internet connectivity.

### <a name="revocation-check-mode-481"></a>Revocation check mode *(4.8.1+)*

If the machine has limited internet access (such as a build machine in a CI/CD scenario), installing or restoring a signed nuget package will result in this warning, because the certificate revocation servers cannot be reached. This is an expected condition. In some cases, however, it can have unexpected consequences, such as package install and restore taking longer than usual. If that is the case, you can work around it by setting the `NUGET_CERT_REVOCATION_MODE` environment variable to `offline`.
Questa operazione forzerà a controllare lo stato di revoca del certificato solo per l'elenco di revoche memorizzato nella cache NuGet e NuGet non prova a raggiungere i server di revoca. > [!Warning] > Non è consigliabile attivare la modalità di controllo di revoca non in linea in circostanze normali. In questo modo verrà NuGet a saltare il controllo di revoca in linea ed eseguire solo un controllo di revoca offline rispetto all'elenco di revoche di certificati certificato memorizzato nella cache che può essere aggiornata. Questo significa che i pacchetti in cui il certificato di firma sia stato revocato, continueranno a essere installata o il ripristino, che altrimenti sarebbe riuscito controllo di revoca e potrebbe non essere stato installato. Quando è impostata la modalità di controllo di revoca `offline`, l'avviso sarà possibile effettuare il downgrade a un'info. <pre>The author primary signature's timestamp found a chain building issue: The revocation function was unable to check revocation because the certificate is not available in the cached certificate revocation list and NUGET_CERT_REVOCATION_MODE environment variable has been set to offline. For more information, visit https://aka.ms/certificateRevocationMode.</pre>
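As a minimal sketch of the workaround described above (assuming a POSIX shell on the build machine; on Windows you would use `set` or `$env:` instead), the variable can be set before the restore step of a CI job:

```shell
# Enable offline certificate revocation checking for NuGet.
# NUGET_CERT_REVOCATION_MODE is the variable named in the doc above;
# "offline" restricts NuGet to the cached certificate revocation list.
export NUGET_CERT_REVOCATION_MODE=offline

# Show the active mode so the CI log records it; the restore command
# itself (e.g. `dotnet restore` or `nuget restore`) is unchanged.
echo "Revocation mode: ${NUGET_CERT_REVOCATION_MODE}"
```

The restore command that follows in the pipeline needs no other change; unsetting the variable returns NuGet to online revocation checking.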
76.825
555
0.812886
ita_Latn
0.992627
2f5b19d796121cdc3ed73c38ffa329d0958a7fed
4,320
md
Markdown
treebanks/nl_alpino/nl_alpino-dep-compound-prt.md
vistamou/docs
116b9c29e4218be06bf33b158284b9c952646989
[ "Apache-2.0" ]
204
2015-01-20T16:36:39.000Z
2022-03-28T00:49:51.000Z
treebanks/nl_alpino/nl_alpino-dep-compound-prt.md
vistamou/docs
116b9c29e4218be06bf33b158284b9c952646989
[ "Apache-2.0" ]
654
2015-01-02T17:06:29.000Z
2022-03-31T18:23:34.000Z
treebanks/nl_alpino/nl_alpino-dep-compound-prt.md
vistamou/docs
116b9c29e4218be06bf33b158284b9c952646989
[ "Apache-2.0" ]
200
2015-01-16T22:07:02.000Z
2022-03-25T11:35:28.000Z
--- layout: base title: 'Statistics of compound:prt in UD_Dutch-Alpino' udver: '2' --- ## Treebank Statistics: UD_Dutch-Alpino: Relations: `compound:prt` This relation is a language-specific subtype of . 1974 nodes (1%) are attached to their parents as `compound:prt`. 1161 instances of `compound:prt` (59%) are left-to-right (parent precedes child). Average distance between parent and child is 3.54103343465046. The following 10 pairs of parts of speech are connected with `compound:prt`: <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-ADP.html">ADP</a></tt> (1304; 66% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-ADV.html">ADV</a></tt> (194; 10% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-NOUN.html">NOUN</a></tt> (184; 9% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-ADJ.html">ADJ</a></tt> (171; 9% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt> (86; 4% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-DET.html">DET</a></tt> (26; 1% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-PRON.html">PRON</a></tt> (4; 0% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-SCONJ.html">SCONJ</a></tt> (2; 0% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-SYM.html">SYM</a></tt> (2; 0% instances), <tt><a href="nl_alpino-pos-VERB.html">VERB</a></tt>-<tt><a href="nl_alpino-pos-PROPN.html">PROPN</a></tt> (1; 0% instances). 
~~~ conllu # visual-style 5 bgColor:blue # visual-style 5 fgColor:white # visual-style 3 bgColor:blue # visual-style 3 fgColor:white # visual-style 3 5 compound:prt color:blue 1 Moeder moeder NOUN N|soort|ev|basis|zijd|stan Gender=Com|Number=Sing 3 nsubj 3:nsubj _ 2 Mien Mien PROPN N|eigen|ev|basis|zijd|stan Gender=Com|Number=Sing 1 appos 1:appos _ 3 spoorde aan_sporen VERB WW|pv|verl|ev Number=Sing|Tense=Past|VerbForm=Fin 0 root 0:root _ 4 haar haar PRON VNW|pers|pron|obl|vol|3|getal|fem Case=Acc|Person=3|PronType=Prs 3 obj 3:obj|9:nsubj:xsubj _ 5 aan aan ADP VZ|fin _ 3 compound:prt 3:compound:prt _ 6 toch toch ADV BW _ 9 advmod 9:advmod _ 7 op op ADP VZ|fin _ 9 compound:prt 9:compound:prt _ 8 te te ADP VZ|init _ 9 mark 9:mark _ 9 stappen op_stappen VERB WW|inf|vrij|zonder VerbForm=Inf 3 xcomp 3:xcomp SpaceAfter=No 10 . . PUNCT LET _ 3 punct 3:punct _ ~~~ ~~~ conllu # visual-style 6 bgColor:blue # visual-style 6 fgColor:white # visual-style 8 bgColor:blue # visual-style 8 fgColor:white # visual-style 8 6 compound:prt color:blue 1 Daar daar ADV VNW|aanw|adv-pron|obl|vol|3o|getal _ 2 advmod 2:advmod _ 2 besloot besluiten VERB WW|pv|verl|ev Number=Sing|Tense=Past|VerbForm=Fin 0 root 0:root _ 3 hij hij PRON VNW|pers|pron|nomin|vol|3|ev|masc Case=Nom|Person=3|PronType=Prs 2 nsubj 2:nsubj|8:nsubj:xsubj _ 4 wat wat DET VNW|onbep|pron|stan|vol|3o|ev _ 5 det 5:det _ 5 misverstanden misverstand NOUN N|soort|mv|basis Number=Plur 8 obj 8:obj _ 6 weg weg ADV BW _ 8 compound:prt 8:compound:prt _ 7 te te ADP VZ|init _ 8 mark 8:mark _ 8 werken weg_werken VERB WW|inf|vrij|zonder VerbForm=Inf 2 xcomp 2:xcomp SpaceAfter=No 9 . . 
PUNCT LET _ 2 punct 2:punct _ ~~~ ~~~ conllu # visual-style 5 bgColor:blue # visual-style 5 fgColor:white # visual-style 6 bgColor:blue # visual-style 6 fgColor:white # visual-style 6 5 compound:prt color:blue 1 Rotterdam Rotterdam PROPN N|eigen|ev|basis|onz|stan Gender=Neut|Number=Sing 6 nsubj 6:nsubj _ 2 kan kunnen AUX WW|pv|tgw|ev Number=Sing|Tense=Pres|VerbForm=Fin 6 aux 6:aux _ 3 dus dus ADV BW _ 6 advmod 6:advmod _ 4 nader nader ADJ ADJ|vrij|comp|zonder Degree=Cmp 6 advmod 6:advmod _ 5 kennis kennis NOUN N|soort|ev|basis|zijd|stan Gender=Com|Number=Sing 6 compound:prt 6:compound:prt _ 6 maken kennis_maken VERB WW|inf|vrij|zonder VerbForm=Inf 0 root 0:root _ 7 met met ADP VZ|init _ 9 case 9:case _ 8 de de DET LID|bep|stan|rest Definite=Def 9 det 9:det _ 9 liefhebbers liefhebber NOUN N|soort|mv|basis Number=Plur 6 obl 6:obl:met SpaceAfter=No 10 . . PUNCT LET _ 6 punct 6:punct _ 11 " " PUNCT LET _ 6 punct 6:punct _ ~~~
54.683544
1,313
0.723148
yue_Hant
0.297661
2f5b1babb42ffd453a33df85fe05ca39cc4bdcd7
259
md
Markdown
help/home/c-preparing-for-dashboard-installation/c-preparing-for-dashboard-installation.md
dahlstro/data-workbench.en
1bfea36c390d5e6aef76d5b2c9f12431571e68c8
[ "Apache-2.0" ]
null
null
null
help/home/c-preparing-for-dashboard-installation/c-preparing-for-dashboard-installation.md
dahlstro/data-workbench.en
1bfea36c390d5e6aef76d5b2c9f12431571e68c8
[ "Apache-2.0" ]
null
null
null
help/home/c-preparing-for-dashboard-installation/c-preparing-for-dashboard-installation.md
dahlstro/data-workbench.en
1bfea36c390d5e6aef76d5b2c9f12431571e68c8
[ "Apache-2.0" ]
null
null
null
--- description: null solution: Analytics title: Data Workbench Dashboard Administrator Guide topic: Data workbench uuid: 662f8cac-e5af-4892-afc1-78f705d3033e --- # Data Workbench Dashboard Administrator Guide{#data-workbench-dashboard-administrator-guide}
23.545455
93
0.814672
yue_Hant
0.426797
2f5b2d4b80dfbae17fb38f74abe6f56ddf26eda0
400
md
Markdown
README.md
huangshaolin/gallium3-setup
546eee5dbf957a3c69af5d3ce5f761a018443a01
[ "MIT" ]
null
null
null
README.md
huangshaolin/gallium3-setup
546eee5dbf957a3c69af5d3ce5f761a018443a01
[ "MIT" ]
null
null
null
README.md
huangshaolin/gallium3-setup
546eee5dbf957a3c69af5d3ce5f761a018443a01
[ "MIT" ]
null
null
null
# gallium3-setup Setup GalliumOS 3.0 for my Samsung Chromebook 3 ## Install ``` git clone https://github.com/huangshaolin/gallium3-setup.git cd gallium3-setup ``` ## Usage ``` # Setup dotfiles bash setup.sh # Optional: fix "the suspend option in xfce4-power-manager does not lock the screen" for Samsung Chromebook 3 cd optional/lid_suspend bash install.sh ``` ## License MIT © huangshaolin
14.814815
109
0.74
kor_Hang
0.393972
2f5b75f08ae64311a7febae80c40f1cb98f717ca
753
md
Markdown
README.md
JorkDev/pokemon-quiz-vanilla-js
5acb9fee5cb5071b284c037a6f086f869763392e
[ "MIT" ]
null
null
null
README.md
JorkDev/pokemon-quiz-vanilla-js
5acb9fee5cb5071b284c037a6f086f869763392e
[ "MIT" ]
null
null
null
README.md
JorkDev/pokemon-quiz-vanilla-js
5acb9fee5cb5071b284c037a6f086f869763392e
[ "MIT" ]
null
null
null
# pokemon quiz vanilla js A simple quiz with no frameworks or libraries used in it. Simple to use: just clone the repo and open it. You can also add more questions for the quiz in the questions.js file. There's a sprite for every Pokemon in the img folder, including every alternate form of each Pokemon. Pokemon is a registered trademark of Nintendo, Game Freak, and Creatures Inc; the use of Pokemon sprites is for educational purposes only. **This project was done for the challenge of 50 JavaScript projects in 5 days** <img src="https://media.discordapp.net/attachments/842503650017280039/850401511444185139/pokemon_ss.PNG?width=1149&height=559"> Programming Level: Beginner\ Project Type: Front-End Front-End: HTML, CSS, JavaScript\ Back-End: N/A
34.227273
139
0.772908
eng_Latn
0.988455
2f5b8ebbcf5d99437d03a30b39ae85a07914be09
576
md
Markdown
site/jbase/fileinfo/README.md
taful/docs
b63111a831566f262ea9d57ce56c10d209804eeb
[ "MIT" ]
7
2019-12-06T23:39:36.000Z
2020-12-13T13:26:23.000Z
site/jbase/fileinfo/README.md
taful/docs
b63111a831566f262ea9d57ce56c10d209804eeb
[ "MIT" ]
36
2020-01-21T00:17:12.000Z
2022-02-28T03:24:29.000Z
site/jbase/fileinfo/README.md
taful/docs
b63111a831566f262ea9d57ce56c10d209804eeb
[ "MIT" ]
33
2020-02-07T12:24:42.000Z
2022-03-24T15:38:31.000Z
# jBASE Files <PageHeader /> [Audit Logging](./../faq/introduction-to-audit-logging/README.md) [Encryption](./../encryption/README.md) [File Handling](./../files/README.md) [jEDI -> MongoDB](./../jedi/mongodb/mongodb-jedi-driver/README.md) [jEDI -> ODBC](./../jedi/odbc/introduction-to-the-odbc-jedi/README.md) [jDLS](./../daemons/manual-installation-of-jdls-service/README.md) [jRFS](./../jrfs/README.md) [Record Locking](./../record-locking/README.md) [Triggers](./../triggers-overview/README.md) Back to [Knowledgebase](./../README.md) <PageFooter />
32
72
0.671875
yue_Hant
0.850297
2f5c453673e850d8a8282c6b9e7ed0066ea6b235
92
md
Markdown
Chapter1/Readme.md
pandian3k/Ansible-QuickStart-Guide
7e23994b107e99bb257b5178dbae931ef7bff7aa
[ "MIT" ]
9
2018-09-25T16:36:59.000Z
2021-10-01T22:12:05.000Z
Chapter01/Readme.md
PacktPublishing/Ansible-Quick-Start-Guide
2ab57a0aa05493f9a827d1f9fe501ae410689304
[ "MIT" ]
null
null
null
Chapter01/Readme.md
PacktPublishing/Ansible-Quick-Start-Guide
2ab57a0aa05493f9a827d1f9fe501ae410689304
[ "MIT" ]
18
2018-09-15T15:09:38.000Z
2021-08-08T22:17:26.000Z
# Chapter 1: What is Ansible? These folders contain the code used when writing this chapter
30.666667
61
0.793478
eng_Latn
1.000006
2f5c839b033a6b4803841955da6c248b9db045aa
178
md
Markdown
README.md
javacc21/blue-sky
8c6470cad6b96189c3d4f73d61e9f2b53472f399
[ "BSD-2-Clause" ]
1
2020-01-29T10:14:41.000Z
2020-01-29T10:14:41.000Z
README.md
javacc21/blue-sky
8c6470cad6b96189c3d4f73d61e9f2b53472f399
[ "BSD-2-Clause" ]
3
2020-01-29T10:15:14.000Z
2020-03-07T19:06:33.000Z
README.md
javacc21/blue-sky
8c6470cad6b96189c3d4f73d61e9f2b53472f399
[ "BSD-2-Clause" ]
null
null
null
# Blue Sky Blue Sky is a repository for experimental code and trying out new ideas. If you want commit rights to this repository, just drop me a note at revusky@javacc.com.
22.25
72
0.764045
eng_Latn
0.995498
2f5d002fbde5b755c8656e97b05ac1a1e149fec5
42
md
Markdown
README.md
yesOrNo123/algorithms
6e353b83d59609e270789c1fae801e2b951d0f60
[ "Apache-2.0" ]
null
null
null
README.md
yesOrNo123/algorithms
6e353b83d59609e270789c1fae801e2b951d0f60
[ "Apache-2.0" ]
null
null
null
README.md
yesOrNo123/algorithms
6e353b83d59609e270789c1fae801e2b951d0f60
[ "Apache-2.0" ]
null
null
null
# algorithms Summary of algorithm problems
14
28
0.833333
eng_Latn
0.793684
2f5d40591cc88b414591bc874dbb085fa4d7dc87
12,314
md
Markdown
docs/2014/analysis-services/multidimensional-models-scripting-language-assl-xmla/performing-batch-operations-xmla.md
kirabr/sql-docs.ru-ru
08e3b25ff0792ee0ec4c7641b8960145bbec4530
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/analysis-services/multidimensional-models-scripting-language-assl-xmla/performing-batch-operations-xmla.md
kirabr/sql-docs.ru-ru
08e3b25ff0792ee0ec4c7641b8960145bbec4530
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/analysis-services/multidimensional-models-scripting-language-assl-xmla/performing-batch-operations-xmla.md
kirabr/sql-docs.ru-ru
08e3b25ff0792ee0ec4c7641b8960145bbec4530
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Performing Batch Operations (XMLA) | Microsoft Docs ms.custom: '' ms.date: 03/08/2017 ms.prod: sql-server-2014 ms.reviewer: '' ms.suite: '' ms.technology: - analysis-services - docset-sql-devref ms.tgt_pltfrm: '' ms.topic: reference helpviewer_keywords: - multiple projects - XML for Analysis, batches - parallel batch execution [XMLA] - transactional batches - serial batch execution [XMLA] - XMLA, batches - batches [XML for Analysis] - nontransactional batches ms.assetid: 731c70e5-ed51-46de-bb69-cbf5aea18dda caps.latest.revision: 12 author: minewiskan ms.author: owend manager: craigg ms.openlocfilehash: 186d5a0896814544f34531fe98ad88c8034ac63a ms.sourcegitcommit: c18fadce27f330e1d4f36549414e5c84ba2f46c2 ms.translationtype: MT ms.contentlocale: ru-RU ms.lasthandoff: 07/02/2018 ms.locfileid: "37304594" --- # <a name="performing-batch-operations-xmla"></a>Performing Batch Operations (XMLA) You can use the [Batch](../xmla/xml-elements-commands/batch-element-xmla.md) command in XML for Analysis (XMLA) to run multiple XMLA commands using a single call to the [Execute](../xmla/xml-elements-methods-execute.md) method. The commands contained in a `Batch` command can run either as a single transaction or as separate transactions for each command, serially or in parallel. You can also specify out-of-line bindings and other properties in the `Batch` command for processing multiple [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] objects. ## <a name="running-transactional-and-nontransactional-batch-commands"></a>Running Transactional and Nontransactional Batch Commands The `Batch` command runs commands in one of two ways. 
**Transactional** If the `Transaction` attribute of the `Batch` command is set to true, the `Batch` command runs all the commands it contains in a single transaction, a *transactional* batch. If any command in a transactional batch fails, [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] rolls back every command in the `Batch` command that ran before the failed command, and the `Batch` command ends immediately. Any commands contained in the `Batch` command that have not yet run are not executed. After the `Batch` command ends, the `Batch` command reports all errors that occurred for the failed command. **Nontransactional** If the `Transaction` attribute is set to false, the `Batch` command runs each command contained in the `Batch` command in a separate transaction, a *nontransactional* batch. If a command in a nontransactional batch fails, the `Batch` command continues to run the commands after the failed command. After the `Batch` command has tried to run all the commands it contains, the `Batch` command reports all errors that occurred. All results returned by commands contained in a `Batch` command are returned in the order in which the commands appear in the `Batch` command. The results returned by a `Batch` command differ depending on whether the `Batch` command is transactional or nontransactional. > [!NOTE] > If a `Batch` command contains a command that does not return output, such as the [Lock](../xmla/xml-elements-commands/lock-element-xmla.md) command, and that command succeeds, the `Batch` command returns an empty [root](../xmla/xml-elements-properties/root-element-xmla.md) element within the results element. The empty `root` element makes it possible to match each command contained in the `Batch` command with its corresponding `root` element in the results. 
### <a name="returning-results-from-transactional-batch-results"></a>Returning Results from Transactional Batch Results The results of commands that run within a transactional batch are returned only after the entire `Batch` command completes. The results of each command are not returned separately as the commands complete, because a failure of any command within a transactional batch rolls back the entire `Batch` command and all the commands it contains. If all the commands started and completed successfully, the [return](../xmla/xml-elements-properties/return-element-xmla.md) element of the [ExecuteResponse](../xmla/xml-elements-objects-executeresponse.md) element returned by the `Execute` method for the `Batch` command contains a single [results](../xmla/xml-elements-properties/results-element-xmla.md) element, which in turn contains one `root` element for each successfully completed command contained in the `Batch` command. If any command in the `Batch` command could not start or failed, the `Execute` method returns a SOAP fault for the `Batch` command that contains the error of the failed command. ### <a name="returning-results-from-nontransactional-batch-results"></a>Returning Results from Nontransactional Batch Results The results of commands that run within a nontransactional batch are returned in the order in which the commands appear in the `Batch` command and as they are returned by each command. If none of the commands contained in the `Batch` command could start, the `Execute` method returns a SOAP fault that contains an error for the `Batch` command. If at least one command could start, the `return` element of the `ExecuteResponse` element returned by the `Execute` method for the `Batch` command contains a single `results` element, which in turn contains one `root` element for each command contained in the `Batch` command. 
If one or more commands within a nontransactional batch could not start or failed to complete, the `root` element for the failed command contains an [error](../xmla/xml-elements-properties/error-element-xmla.md) element that describes the error. > [!NOTE] > A nontransactional batch is considered successful if at least one of its commands could start, even if every command contained in the batch returns an error in the results of the `Batch` command. ## <a name="using-serial-and-parallel-execution"></a>Using Serial and Parallel Execution You can use the `Batch` command to run the commands it contains either serially or in parallel. In serial execution, the next command in the `Batch` command cannot start until the currently running command in the `Batch` command completes. In parallel execution, the `Batch` command can run multiple commands simultaneously. To run commands in parallel, add the commands to be run in parallel to the [Parallel](../xmla/xml-elements-properties/parallel-element-xmla.md) property of the `Batch` command. Currently, [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] can run only contiguous, sequential [Process](../xmla/xml-elements-commands/process-element-xmla.md) commands in parallel. Any other XMLA command, such as [Create](../xmla/xml-elements-commands/create-element-xmla.md) or [Alter](../xmla/xml-elements-commands/alter-element-xmla.md), included in the `Parallel` property runs serially. [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] tries to run all `Process` commands included in the `Parallel` property in parallel, but it cannot guarantee that all the `Process` commands will run in parallel. 
The instance analyzes each `Process` command and, if it determines that the command cannot run in parallel, runs the `Process` command serially. > [!NOTE] > To run commands in parallel, the `Transaction` attribute of the `Batch` command must be set to true, because [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] supports only one active transaction per connection and nontransactional batches run each command in a separate transaction. Including the `Parallel` property in a nontransactional batch causes an error. ### <a name="limiting-parallel-execution"></a>Limiting Parallel Execution [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] tries to run as many `Process` commands in parallel as possible, within the capabilities of the computer on which the instance runs. You can limit the number of concurrently running `Process` commands by setting the `maxParallel` attribute of the `Parallel` property to a value that indicates the maximum number of `Process` commands that can run in parallel. For example, suppose a `Parallel` property contains the following commands: 1. `Create` 2. `Process` 3. `Alter` 4. `Process` 5. `Process` 6. `Process` 7. `Delete` 8. `Process` 9. `Process` The `maxParallel` attribute of this `Parallel` property is set to 2. This means the instance runs the preceding list of commands in the following order: - Command 1 runs serially, because it is a `Create` command and only `Process` commands can run in parallel. - Command 2 runs serially after command 1 completes. - Command 3 runs serially after command 2 completes. - Commands 4 and 5 run in parallel after command 3 completes. Although command 6 is also a `Process` command, it cannot run in parallel with commands 4 and 5, because the `maxParallel` property is set to 2. 
- Command 6 runs serially after commands 4 and 5 complete. - Command 7 runs serially after command 6 completes. - Commands 8 and 9 run in parallel after command 7 completes. ## <a name="using-the-batch-command-to-process-objects"></a>Using the Batch Command to Process Objects The `Batch` command includes several optional properties and attributes added specifically to support processing multiple [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] projects. - The `ProcessAffectedObjects` attribute of the `Batch` command indicates whether the instance should also process any object that needs to be reprocessed as a result of a `Process` command, included in the `Batch` command, processing a specified object. - The [Bindings](../xmla/xml-elements-properties/bindings-element-xmla.md) property contains a collection of out-of-line bindings used by all the `Process` commands in the `Batch` command. - The [DataSource](../xmla/xml-elements-properties/source-element-xmla.md) property contains an out-of-line binding for a data source used by all the `Process` commands in the `Batch` command. - The [DataSourceView](../xmla/xml-elements-properties/datasourceview-element-xmla.md) property contains an out-of-line binding for a data source view used by all the `Process` commands in the `Batch` command. - The [ErrorConfiguration](../xmla/xml-elements-properties/errorconfiguration-element-xmla.md) property specifies how the `Batch` command handles errors encountered by all the `Process` commands contained in the `Batch` command. > [!IMPORTANT] > A `Process` command cannot include the `Bindings`, `DataSource`, `DataSourceView`, `ErrorConfiguration`, or `Process` properties if it is contained in a `Batch` command. If you must specify these properties for a `Process` command, provide the necessary information in the corresponding properties of the `Batch` command that contains the `Process` command. ## <a name="see-also"></a>See 
also [Batch Element &#40;XML for Analysis&#41;](../xmla/xml-elements-commands/batch-element-xmla.md) [Process Element &#40;XML for Analysis&#41;](../xmla/xml-elements-commands/process-element-xmla.md) [Processing a Multidimensional Model](../multidimensional-models/processing-a-multidimensional-model-analysis-services.md) [Developing with XMLA in Analysis Services](developing-with-xmla-in-analysis-services.md)
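As an illustrative fragment of the concepts described above, a transactional `Batch` that processes two dimensions in parallel, capped at two concurrent `Process` commands via `maxParallel`, might look like the sketch below. The database and dimension IDs are placeholders, not objects from the source:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
       Transaction="true">
  <Parallel maxParallel="2">
    <Process>
      <Object>
        <DatabaseID>AdventureWorksDW</DatabaseID>
        <DimensionID>Dim Customer</DimensionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
    <Process>
      <Object>
        <DatabaseID>AdventureWorksDW</DatabaseID>
        <DimensionID>Dim Product</DimensionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>
```

Because `Transaction="true"`, a failure in either `Process` command rolls back both; with `Transaction="false"` the `Parallel` property could not be used at all.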
91.895522
1,039
0.786503
rus_Cyrl
0.948422
2f5f38de78235773d6db22c403250104c8fc18e0
72
md
Markdown
docs/properties/p/PlayerMediaFormat.md
jdehotin/IFC4.3.x-development
5652a25dac27242af8c60f1a33206d1697948ffa
[ "FSFAP" ]
1
2021-08-31T16:12:09.000Z
2021-08-31T16:12:09.000Z
docs/properties/p/PlayerMediaFormat.md
Moult/IFC4.3.x-development
5dfacdc91f04b446f9d7386b950099fc14e6587e
[ "FSFAP" ]
null
null
null
docs/properties/p/PlayerMediaFormat.md
Moult/IFC4.3.x-development
5dfacdc91f04b446f9d7386b950099fc14e6587e
[ "FSFAP" ]
null
null
null
PlayerMediaFormat ================= Indicates supported media formats.
14.4
34
0.652778
eng_Latn
0.754504
2f6000526a91732f29fa4dfdeb12367416f101b3
2,058
md
Markdown
README.md
tomascufaro/cursos-python
03b403b5dcc1a80048204df64d63f58eb6f2f049
[ "MIT" ]
1
2021-07-24T12:12:33.000Z
2021-07-24T12:12:33.000Z
README.md
tomascufaro/cursos-python
03b403b5dcc1a80048204df64d63f58eb6f2f049
[ "MIT" ]
null
null
null
README.md
tomascufaro/cursos-python
03b403b5dcc1a80048204df64d63f58eb6f2f049
[ "MIT" ]
null
null
null
# Cursos de Python ## Introducción En este repositorio podrán encontrar cursos sobre el lenguaje de programación Python. Orientan el desarrollo 4 ideas principales: 1- El material debe ser **abierto**: siendo que es cada vez más relevante el aprender a programar y dado que existen muchísimas instituciones y personas a quienes podrían serle de utilidad, decidimos que el material desarrollado sea públicamente accesible. Esto tiene por fin la **difusión** pero también invitar a colaborar a todas aquellas personas quienes quieran donar material. Para ello les invitamos a que nos contacten a cordoba.leonardoignacio@gmail.com, maria.gaska@gmail.com o mg@ihum.ai. Para colaborar empleamos un versión reducida de GitFlow. Pueden aprender sobre eso en cursos-python/0_Colaboracion y en https://medium.com/@ihumai/gitflow-colaborando-en-git-4046f4a95c9c podrán encontrar más detalles y un video. 2- El contenido tiene una orientación **local**: considerando que la mayor parte del material disponible en internet se encuentra en inglés y esto es una barrera para mucha gente, creemos que es importante que el material desarrollado sea en español. Además, a la hora de trabajar con datos generalmente usamos datasets reales y en español. 3- El material es **modular**: siendo que cada persona tiene necesidades de aprendizaje distintas, los contenidos se organizan en módulos chicos, cada cual se enfoca en un tema en particular. 4- El contenido está **curado**: todo lo publicado en la rama master fue revisado previamente a ser subido, y sigue pautas acordadas sobre estilo. En especial, tratamos de respetar PEP8, explicar el código y conceptos en Jupyter Notebooks, proponer ejercicios como parte de la clase y también ofrecer notebooks con tarea. ## Cursos 1- Introduccion: curso introductorio a Python. 2- AnalisisDeDatos: curso orientado a la manipulación y análisis de datos con Pandas y librerías de visualización. En desarrollo. 
3- DatosGeograficos: curso sobre datos vectoriales con GeoPandas y librerías de visualización. En desarrollo.
76.222222
205
0.808066
spa_Latn
0.998399
2f602f7ca7cd5a69f0e5b89d60e5cc091f12e6cb
1,764
md
Markdown
_posts/journal/2018-02-06-new_zealand_5.md
kdlovett/keith-website
d2619234bfd0cadd26dbd73a69bd7eae451e0a4a
[ "MIT" ]
1
2020-07-23T21:08:19.000Z
2020-07-23T21:08:19.000Z
_posts/journal/2018-02-06-new_zealand_5.md
kdlovett/keiths-site
d2619234bfd0cadd26dbd73a69bd7eae451e0a4a
[ "MIT" ]
null
null
null
_posts/journal/2018-02-06-new_zealand_5.md
kdlovett/keiths-site
d2619234bfd0cadd26dbd73a69bd7eae451e0a4a
[ "MIT" ]
null
null
null
--- layout: post title: "New Zealand 5 - Waitangi" date: 2018-02-06 category: journal --- <link rel="stylesheet" type="text/css" href="/keiths-site/css/main.css"> *Part 5 of a journal documenting shenanigans abroad in New Zealand for family and anyone else interested.* Last weekend I visited Waitangi, to learn more about Maori traditions. ![NZWaitangi](/keiths-site/image_dir/NZWaitangi.jpg) I then took a ferry in the Bay of Islands to the cozy town of Russell. ![NZSunset](/keiths-site/image_dir/NZSunset.jpg) Once considered a "hell hole" by missionaries who were appalled by the behavior of the whalers and plunderers who resided there, the town now embraces the title with irony: this is now the type of town where kids skip down the roads as the sun goes down, passing tiny cottages with flowerbeds in the windowsills. ![NZRussell](/keiths-site/image_dir/NZRussell.jpg) And indeed the sun did go down, and the stars that presided over the sky without the light pollution I'm now accustomed to were nice. Long Beach offered a particularly calming view, and the water there was almost eerily quiet, I think perhaps in part due to the recent supermoon. Today is Waitangi Day and I noticed last night a crowd gathering at Silo Park, overlooking the bridge leading into Auckland. "There's a lightshow starting on the bridge in a few minutes," someone told me, so I stood with them on the pier there. As the bridge dimmed before the show I could hear an occasional drum beat or horn echoing from various locations along the harbor. ![NZSkytower](/keiths-site/image_dir/NZSkytower.jpg) I'm not quite sure what this was, but I know the bridge has in the past been traversed by peaceful protestors demanding tino rangatiratanga, Maori self-sovereignty.
58.8
376
0.78458
eng_Latn
0.998652
2f606d5852fd4e68fc37243a72d7b3a3da0248c5
7,106
md
Markdown
_posts/2019/2019-09-23-status-internal-hackathon.md
Critic-A/sdslabs.github.com
e7c120fb957f515dd90e65ca2e57bff85c350c2c
[ "MIT" ]
6
2015-04-04T13:56:56.000Z
2020-04-08T21:21:25.000Z
_posts/2019/2019-09-23-status-internal-hackathon.md
Critic-A/sdslabs.github.com
e7c120fb957f515dd90e65ca2e57bff85c350c2c
[ "MIT" ]
9
2017-09-04T17:57:08.000Z
2022-01-27T15:21:36.000Z
_posts/2019/2019-09-23-status-internal-hackathon.md
Critic-A/sdslabs.github.com
e7c120fb957f515dd90e65ca2e57bff85c350c2c
[ "MIT" ]
6
2017-09-04T16:35:39.000Z
2022-01-31T18:54:57.000Z
--- layout: post title: How I learned to stop worrying and start pinging excerpt: Recently we held an internal hackathon. Here’s how we created an app that creates status pages. author: name: Vaibhav link: https://github.com/vrongmeal bio: Developer, SDSLabs image: vrongmeal.jpg --- *We have regular hackathons to experiment with new fields/technology. We decided to write blogs on any shippable product that we created during the hackathon. This blog is first in the series, where we created an app that creates status pages.* ![Whiteboard](/images/posts/status/whiteboard.jpg) ## Introduction ### What is a Status Page? Almost all organizations have a page where you can view the status (or uptime) of their services. For example, this is the SDSLabs Status page: [https://statusv2.sdslabs.co/](https://statusv2.sdslabs.co/). A webpage that displays the uptime/downtime statistics of its services is called a status page. ### Why do you need it? A status page tells you whether an application is up or not. Suppose your Facebook notifications just stopped working. You might start to fiddle with the settings on your phone, but it may be that the Facebook servers that send notifications are down. A status page tells you exactly that. It is also pretty useful for companies too, making them aware of the downtime before a user has a chance to complain. ### What did we make? We made a web application that creates these status pages. Many others provide the same service, such as Freshping, Apex Ping, and Atlassian Statuspage. However, these services are either paid or provide limited free access. Moreover, there are no open-source alternatives to them. So, we made one on our own (and yes it’s going to be open-source). ## Developing ### Architecture A status page consists of many services where it needs to check whether the service is working or not. However, what do we mean by the phrase “the service is working”? 
Usually, you ping the host, and if you receive a reply, the host server is up; otherwise, it’s down. However, ping only tells you if your request can reach the server. The code may or may not function properly. For that, more parameters need to be checked, say, the status code of the response. A “200” status means your page is working fine, but any other status means you need to fix some issues. Seeing that there might be so many parameters, we defined an abstraction called a Check. A check consists of the URL of the request, the type of request, the expected output, and any input sent by the user. For each kind of request, there are a limited number of request types and output types and correspondingly their values. A page might contain some (and not necessarily all) of the checks created by the user. A user might create multiple pages (each having a different set of checks). A page also needs to display any incidents that happened and whether they were fixed. The app also has a feature to add collaborators to your page. The diagram below illustrates the schema of the database. ![Status Schema](/images/posts/status/status-schema.png) The biggest challenge in making the app was implementing how the checks would work. Since a user can request to check for uptime every 30 seconds, assuming a large scale, it’s unreasonable to expect one server to handle everything correctly. So we wanted the app to be scalable over multiple servers. For this, we first thought of deploying a container for every check. We would schedule this container on any of the nodes defined by us. Soon, we rejected the idea, given that there can be way too many checks and you cannot have a container dedicated to each check. The next solution had a master that assigns checks to workers. The worker schedules the checks. Now, these workers can be scaled horizontally. This architecture was a much better fit than the previous solution. 
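The Check abstraction described above (URL, request type, expected output, optional user input) can be sketched in a few lines. This is an illustrative Python sketch with assumed field names, not the project's actual (Go) code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Check:
    """One uptime check; the field names here are assumptions for illustration."""
    url: str
    request_type: str               # e.g. "GET"
    expected_status: int            # the expected output, e.g. 200
    payload: Optional[str] = None   # any input sent by the user

def evaluate(check: Check, actual_status: int) -> bool:
    """A check passes only when the response matches the expected output,
    not merely when the host answered at all (the ping problem above)."""
    return actual_status == check.expected_status

health = Check(url="https://example.com/health", request_type="GET", expected_status=200)
print(evaluate(health, 200))  # True: the service is working
print(evaluate(health, 503))  # False: reachable, but still counts as down
```

The point of the abstraction is that "up" is defined per check, so the same machinery handles a ping-style reachability test and a strict status-code test.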
We also thought about some solutions to optimize the process of scheduling the checks, but each of them imposed limitations on the functionality of the app (like limiting the choices for the interval between requests or increasing the minimum interval after which we can send a request). Sacrificing functionality for the sake of minor optimizations was unacceptable. Finally, we needed to manage the lifecycle of the workers. Managing containers across multiple nodes (servers) is relatively complex, so we chose to rely on Kubernetes rather than reinventing the wheel ourselves. Kubernetes allows you to create custom resource objects. We only need to write the code that handles assigning checks to workers for the relevant events, such as adding or removing a worker from the cluster, and let Kubernetes handle the rest. Due to limited resources, we’re first going to release a version without the described Kubernetes architecture. For the initial release, we’re aiming at an app which can spawn and manage workers on a single server. ![Master-Worker](/images/posts/status/master-worker.png) ### Tech Stack We chose Golang since it’s a compiled language and all my team members were comfortable with writing code in Go. Being compiled was helpful because we could create binaries of agents that would run on worker containers, and Go is also much easier to write as compared to C/C++. Along with it, we used Docker containers and, as mentioned above, Kubernetes for managing their lifecycle. For storing metrics of the statuses, we needed a time-series database. Torn between Prometheus and InfluxDB, we found out about TimescaleDB. It’s an extension of PostgreSQL, so anything that works with Postgres works well with Timescale too. There were even more benefits to this, since we did not want to write queries specifically for InfluxDB, and Prometheus, being a pull-based DB, did not fit our needs. 
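The master-to-worker assignment described above has to stay stable while workers join and leave the cluster, so that only the departed worker's checks get reassigned. One standard technique with that property is rendezvous hashing; the sketch below is an illustration of the idea, not the project's implementation:

```python
import hashlib

def assign_worker(check_id: str, workers: list) -> str:
    """Give the check to the worker with the highest hash score.

    Rendezvous hashing: scores depend only on (check, worker) pairs, so
    removing one worker moves only that worker's checks; everything else
    keeps its current owner.
    """
    def score(worker: str) -> int:
        digest = hashlib.sha256(f"{check_id}:{worker}".encode()).hexdigest()
        return int(digest, 16)
    return max(workers, key=score)

workers = ["worker-0", "worker-1", "worker-2"]
owner = assign_worker("check-42", workers)
# Removing any non-owner worker leaves check-42's assignment unchanged.
for w in workers:
    if w != owner:
        reduced = [x for x in workers if x != w]
        assert assign_worker("check-42", reduced) == owner
```

A custom Kubernetes controller could recompute this assignment on worker add/remove events and reschedule only the checks whose owner actually changed.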
With Postgres, we could use a single database for both metrics and meta-data. Many people don’t require all this functionality and just want the metrics for a few checks. So there’s even a stand-alone mode where you can write a simple YAML file defining the properties of checks, and it’ll do the job. Also, integrating with Prometheus was an easy job, so the stand-alone user can plug in Prometheus too. ## Hackathon Organizing an internal hackathon turned out to be productive. Parts of the process that usually take a lot of time got completed in a matter of hours. It’s also encouraging to see everyone else around you working with such enthusiasm. It’s exciting to see all the products that others make in the end. We had a formal presentation of the work everyone completed in two days. The winner was also awarded a prize from the final year. The best part of it being an internal hackathon was that we could decide the parameters for rating. So, we did include code quality as well as the number of hours spent working on the project. We finished most of the app during the weekend. There’s still some work left to ship the complete product (at least a version that works flawlessly with minimal functionality). You’ll hopefully see it on GitHub soon :)
94.746667
559
0.791022
eng_Latn
0.99984
2f60f2826ea0608f331d2f612320f44dc55fadb4
199
md
Markdown
Computation/Geometry/point_in_triangle.md
gmlscripts/legacy
acd272c1ca983762f9b154838b51c0f5a778fd3d
[ "Zlib" ]
56
2015-06-19T10:18:14.000Z
2022-02-21T17:25:00.000Z
Computation/Geometry/point_in_triangle.md
gmlscripts/legacy
acd272c1ca983762f9b154838b51c0f5a778fd3d
[ "Zlib" ]
14
2015-01-29T04:51:05.000Z
2021-06-10T22:01:54.000Z
Computation/Geometry/point_in_triangle.md
gmlscripts/legacy
acd272c1ca983762f9b154838b51c0f5a778fd3d
[ "Zlib" ]
10
2015-05-15T03:25:06.000Z
2021-11-16T09:12:35.000Z
point_in_triangle ================= NOTE: The GameMaker:Studio function of the same name produces the same results and obsoletes this script. script: point_in_triangle.gml contributors: Yourself
19.9
79
0.753769
eng_Latn
0.927375
2f6101839a8fcc30de1cfa44c4c17d1d05dc11be
271
md
Markdown
src/test/docs_for_test/about/related-projects.md
software-architect-tools/docs4all
06df8b05af1e203ca942d29cd6d625b66316f37b
[ "MIT" ]
null
null
null
src/test/docs_for_test/about/related-projects.md
software-architect-tools/docs4all
06df8b05af1e203ca942d29cd6d625b66316f37b
[ "MIT" ]
null
null
null
src/test/docs_for_test/about/related-projects.md
software-architect-tools/docs4all
06df8b05af1e203ca942d29cd6d625b66316f37b
[ "MIT" ]
null
null
null
<!-- { "order":1, "targetAudience":"user", "version":"1.0.0" } --> - [Deploy Raneto to your servers with Ansible](https://github.com/ryanlelek/raneto-devops) (@ryanlelek) - [Run Raneto in a Vagrant container](https://github.com/draptik/vagrant-raneto) (@draptik)
24.636364
103
0.675277
eng_Latn
0.123335
2f61215d557f5f2119d1501b85e9bd13cf6e545f
3,363
md
Markdown
community-content/pytorch_text_classification_using_vertex_sdk_and_gcloud/python_package/README.md
gogasca/vertex-ai-samples
85984f117300768061c03a64e3982fae27c64126
[ "Apache-2.0" ]
213
2021-06-10T20:05:20.000Z
2022-03-31T16:09:29.000Z
community-content/pytorch_text_classification_using_vertex_sdk_and_gcloud/python_package/README.md
gogasca/vertex-ai-samples
85984f117300768061c03a64e3982fae27c64126
[ "Apache-2.0" ]
343
2021-07-25T22:55:25.000Z
2022-03-31T23:58:47.000Z
community-content/pytorch_text_classification_using_vertex_sdk_and_gcloud/python_package/README.md
gogasca/vertex-ai-samples
85984f117300768061c03a64e3982fae27c64126
[ "Apache-2.0" ]
143
2021-07-21T17:27:47.000Z
2022-03-29T01:20:43.000Z
# PyTorch - Python Package Training ## Overview The directory provides code to fine-tune a transformer model ([BERT-base](https://huggingface.co/bert-base-cased)) from the Hugging Face Transformers library for a sentiment analysis task. [BERT](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html) (Bidirectional Encoder Representations from Transformers) is a transformers model pre-trained on a large corpus of unlabeled text in a self-supervised fashion. In this sample, we use the [IMDB sentiment classification dataset](https://huggingface.co/datasets/imdb) for the task. We show you how to package a PyTorch training model and submit it to Vertex AI using pre-built PyTorch containers, handling Python dependencies through Python build scripts (`setup.py`). ## Prerequisites * Set up your project by following the instructions from the [documentation](https://cloud.google.com/vertex-ai/docs/start/cloud-environment) * Change directories to this sample. ## Directory Structure * `trainer` directory: all Python modules to train the model. * `scripts` directory: command-line scripts to train the model on Vertex AI. * `setup.py`: specifies the Python dependencies required for the training job. Vertex Training uses pip to install the package on the training instances allocated for the job. ### Trainer Modules | File Name | Purpose | | :-------- | :------ | | [metadata.py](trainer/metadata.py) | Defines: metadata for classification task such as predefined model dataset name, target labels. | | [utils.py](trainer/utils.py) | Includes: utility functions such as data input functions to read data, save model to GCS bucket. | | [model.py](trainer/model.py) | Includes: function to create model with a sequence classification head from a pretrained model. | | [experiment.py](trainer/experiment.py) | Runs the model training and evaluation experiment, and exports the final model. 
| | [task.py](trainer/task.py) | Includes: 1) Initialize and parse task arguments (hyperparameters), and 2) Entry point to the trainer. | ### Scripts * [train-cloud.sh](scripts/train-cloud.sh) This script submits a training job to Vertex AI. ## How to run For local testing, run: ``` !cd python_package && python -m trainer.task ``` For cloud training, once the prerequisites are satisfied, update the `BUCKET_NAME` environment variable in `scripts/train-cloud.sh`. You may then run the following script to submit a Vertex AI training job: ``` source ./python_package/scripts/train-cloud.sh ``` ## Run on GPU The provided trainer code runs on a GPU if one is available, including data loading and model creation. To run the trainer code on a different GPU configuration or the latest PyTorch pre-built container image, make the following changes to the trainer script. * Update the PyTorch image URI to one of the [PyTorch pre-built containers](https://cloud.google.com/vertex-ai/docs/training/pre-built-containers#available_container_images) * Update the [`worker-pool-spec`](https://cloud.google.com/vertex-ai/docs/training/configure-compute?hl=hr) in the gcloud command that includes a GPU Then, run the script to submit a custom job to Vertex AI Training: ``` source ./scripts/train-cloud.sh ``` ### Versions This script uses the pre-built PyTorch containers for PyTorch 1.7. * `us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-7:latest`
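As the trainer modules table above describes, `task.py` parses the task arguments (hyperparameters) and then hands off to the trainer. A minimal sketch of that pattern follows; the argument names are assumptions for illustration, not taken from the actual module:

```python
import argparse

def parse_args(argv=None):
    """Parse task arguments (hyperparameters); these names are illustrative."""
    parser = argparse.ArgumentParser(description="BERT fine-tuning task")
    parser.add_argument("--epochs", type=int, default=1,
                        help="number of training epochs")
    parser.add_argument("--batch-size", type=int, default=16,
                        help="per-device batch size")
    parser.add_argument("--job-dir", default="",
                        help="GCS location for checkpoints and the final model")
    return parser.parse_args(argv)

# Vertex AI passes such flags on the container command line; here we
# simulate that by supplying an explicit argv list.
args = parse_args(["--epochs", "2", "--batch-size", "32"])
print(args.epochs, args.batch_size)  # 2 32
```

The same parsed namespace would then be passed to the experiment entry point (e.g. `experiment.run(args)` in this repo's layout).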
57
725
0.770443
eng_Latn
0.951698
2f615e70cca00d46c65f208ef1fe2dd69929ba86
12,551
md
Markdown
iis/extensions/transform-manager/jobmanifest-class-microsoft-web-media-transformmanager.md
ccard587/iis-docs
8bf060c1840466bc64a15f334e02a42d0d020e23
[ "CC-BY-4.0", "MIT" ]
null
null
null
iis/extensions/transform-manager/jobmanifest-class-microsoft-web-media-transformmanager.md
ccard587/iis-docs
8bf060c1840466bc64a15f334e02a42d0d020e23
[ "CC-BY-4.0", "MIT" ]
null
null
null
iis/extensions/transform-manager/jobmanifest-class-microsoft-web-media-transformmanager.md
ccard587/iis-docs
8bf060c1840466bc64a15f334e02a42d0d020e23
[ "CC-BY-4.0", "MIT" ]
1
2022-03-24T00:22:20.000Z
2022-03-24T00:22:20.000Z
--- title: JobManifest Class (Microsoft.Web.Media.TransformManager) description: This article has information about inheritance hierarchy, syntax, properties, methods, version information, and thready safety for the JobManifest class. TOCTitle: JobManifest Class ms:assetid: T:Microsoft.Web.Media.TransformManager.JobManifest ms:mtpsurl: https://msdn.microsoft.com/library/microsoft.web.media.transformmanager.jobmanifest(v=VS.90) ms:contentKeyID: 35521078 ms.date: 06/14/2012 mtps_version: v=VS.90 f1_keywords: - Microsoft.Web.Media.TransformManager.JobManifest dev_langs: - csharp - jscript - vb - FSharp - cpp api_location: - Microsoft.Web.Media.TransformManager.Common.dll api_name: - Microsoft.Web.Media.TransformManager.JobManifest api_type: - Assembly topic_type: - apiref product_family_name: VS --- # JobManifest Class Provides capabilities to manipulate job-instance metadata. ## Inheritance Hierarchy [System.Object](https://msdn.microsoft.com/library/e5kfa45b) Microsoft.Web.Media.TransformManager..::..JobManifest **Namespace:** [Microsoft.Web.Media.TransformManager](microsoft-web-media-transformmanager-namespace.md) **Assembly:** Microsoft.Web.Media.TransformManager.Common (in Microsoft.Web.Media.TransformManager.Common.dll) ## Syntax ```vb 'Declaration Public Class JobManifest _ Implements IJobManifest 'Usage Dim instance As JobManifest ``` ```csharp public class JobManifest : IJobManifest ``` ```cpp public ref class JobManifest : IJobManifest ``` ``` fsharp type JobManifest = class interface IJobManifest end ``` ```jscript public class JobManifest implements IJobManifest ``` The JobManifest type exposes the following members. 
## Constructors |Method Type|Name|Description| |--- |--- |--- | |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[JobManifest](jobmanifest-constructor-microsoft-web-media-transformmanager.md)|Initializes a new instance of the JobManifest class.| ## Properties |Property Type|Name|Description| |--- |--- |--- | |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[Arguments](jobmanifest-arguments-property-microsoft-web-media-transformmanager.md)|Gets executable program task arguments that are associated with the data in the manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[FileName](jobmanifest-filename-property-microsoft-web-media-transformmanager.md)|Gets or sets the file name of the manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[Folder](jobmanifest-folder-property-microsoft-web-media-transformmanager.md)|Gets or sets the folder name of the manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[FullFileName](jobmanifest-fullfilename-property-microsoft-web-media-transformmanager.md)|Gets the folder name and file name of the manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[InputFileNames](jobmanifest-inputfilenames-property-microsoft-web-media-transformmanager.md)|Gets a collection of input file names for a job.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[InstanceFileIsManifest](jobmanifest-instancefileismanifest-property-microsoft-web-media-transformmanager.md)|Gets a value that indicates whether the file that initiates job creation is a SMIL 2.0-compliant manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[InstanceFileName](jobmanifest-instancefilename-property-microsoft-web-media-transformmanager.md)|Gets or sets the file name of the manifest 
instance.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[InstanceId](jobmanifest-instanceid-property-microsoft-web-media-transformmanager.md)|Gets or sets the ID of the manifest instance.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[JobDefinitionId](jobmanifest-jobdefinitionid-property-microsoft-web-media-transformmanager.md)|Gets the ID of the job definition.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[JobDefinitionName](jobmanifest-jobdefinitionname-property-microsoft-web-media-transformmanager.md)|Gets the name of the job definition from the job manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[JobDetails](jobmanifest-jobdetails-property-microsoft-web-media-transformmanager.md)|Gets a [JobDetails](jobdetails-class-microsoft-web-media-transformmanager.md) object that is based on details from the job manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[LogFolder](jobmanifest-logfolder-property-microsoft-web-media-transformmanager.md)|| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[ManifestAsString](jobmanifest-manifestasstring-property-microsoft-web-media-transformmanager.md)|Gets the manifest XML content.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[Name](jobmanifest-name-property-microsoft-web-media-transformmanager.md)|Gets or sets the name of the job manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[Priority](jobmanifest-priority-property-microsoft-web-media-transformmanager.md)|Gets the priority of a job.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[ProcessPriority](jobmanifest-processpriority-property-microsoft-web-media-transformmanager.md)|| |![Public 
property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[Programs](jobmanifest-programs-property-microsoft-web-media-transformmanager.md)|Gets a collection of tasks that are executable program files.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[RawManifest](jobmanifest-rawmanifest-property-microsoft-web-media-transformmanager.md)|Gets the manifest XML content.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[Status](jobmanifest-status-property-microsoft-web-media-transformmanager.md)|Gets or sets the status value from the manifest.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[TaskIndex](jobmanifest-taskindex-property-microsoft-web-media-transformmanager.md)|Gets or sets the task index value from the task index element.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[Template](jobmanifest-template-property-microsoft-web-media-transformmanager.md)|Gets an XML element that contains a set of sequential tasks that define a job.| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[WorkFolder](jobmanifest-workfolder-property-microsoft-web-media-transformmanager.md)|| |![Public property](images/Hh125762.pubproperty(en-us,VS.90).gif "Public property")|[WorkQueueRoot](jobmanifest-workqueueroot-property-microsoft-web-media-transformmanager.md)|Gets the root work folder name.| ## Methods |Method Type|Name|Description| |--- |--- |--- | |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")![Static member](images/Hh125771.static(en-us,VS.90).gif "Static member")|[CreateManifest](jobmanifest-createmanifest-method-microsoft-web-media-transformmanager.md)|Creates a JobManifest object by using the job definition, the root work folder name, scheduling information about a job, tasks that define a job, shared properties, the name of the file that is used to 
create the manifest, and the ID of the manifest instance.| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[Equals](https://msdn.microsoft.com/library/bsc2ak47)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)| |![Protected method](images/Hh125771.protmethod(en-us,VS.90).gif "Protected method")|[Finalize](https://msdn.microsoft.com/library/4k87zsw7)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[FindJobElement](jobmanifest-findjobelement-method-microsoft-web-media-transformmanager.md)|Returns the XML job element from the manifest.| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[GetHashCode](https://msdn.microsoft.com/library/zdee4b3y)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")![Static member](images/Hh125771.static(en-us,VS.90).gif "Static member")|[GetInputFileNames](jobmanifest-getinputfilenames-method-microsoft-web-media-transformmanager.md)|Returns a collection of input file names.| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")![Static member](images/Hh125771.static(en-us,VS.90).gif "Static member")|[GetManifestElement](jobmanifest-getmanifestelement-method-microsoft-web-media-transformmanager.md)|Returns an XML representation of the manifest metadata.| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[GetMetadataForTask](jobmanifest-getmetadatafortask-method-microsoft-web-media-transformmanager.md)|Returns the metadata for the specified task.| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[GetScheduler](jobmanifest-getscheduler-method-microsoft-web-media-transformmanager.md)|Creates and returns a new [Scheduler](scheduler-class-microsoft-web-media-transformmanager.md) object.| 
|![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[GetSchedulerInfo](jobmanifest-getschedulerinfo-method-microsoft-web-media-transformmanager.md)|Returns scheduling information about a job.| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[GetType](https://msdn.microsoft.com/library/dfwy45w9)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[Initialize](jobmanifest-initialize-method-microsoft-web-media-transformmanager.md)|Initializes member variables for a manifest that is loaded from disk instead of created as part of job submission.| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")![Static member](images/Hh125771.static(en-us,VS.90).gif "Static member")|[LoadManifest](jobmanifest-loadmanifest-method-microsoft-web-media-transformmanager.md)|Loads the manifest file.| |![Protected method](images/Hh125771.protmethod(en-us,VS.90).gif "Protected method")|[MemberwiseClone](https://msdn.microsoft.com/library/57ctke0a)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[Save](jobmanifest-save-method-microsoft-web-media-transformmanager.md)|Saves a job manifest file.| |![Public method](images/Hh125771.pubmethod(en-us,VS.90).gif "Public method")|[ToString](https://msdn.microsoft.com/library/7bxwbwt2)|(Inherited from [Object](https://msdn.microsoft.com/library/e5kfa45b).)| ## Fields |Field Type|Name|Description| |--- |--- |--- | |![Public field](images/Hh125771.pubfield(en-us,VS.90).gif "Public field")![Static member](images/Hh125771.static(en-us,VS.90).gif "Static member")|[ManifestExtension](jobmanifest-manifestextension-field-microsoft-web-media-transformmanager.md)|Represents a constant that is used as the job manifest file extension (".smil").| ## Remarks The job manifest holds the 
information about a job instance. The job manifest is a .smil file that conforms to the Synchronized Multimedia Integration Language (SMIL). It contains a body section that lists all of the files that triggered the job. The job manifest also contains Resource Description Framework (RDF) metadata in a head section that describes the job definition, job scheduler, and job template. This metadata is combined with the input files to create the manifest for a job. The manifest constitutes the instructions that a scheduler requires in order to create, run, and report on the job. ## Thread Safety Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe. ## See Also ### Reference [Microsoft.Web.Media.TransformManager Namespace](microsoft-web-media-transformmanager-namespace.md)
84.804054
606
0.790853
eng_Latn
0.392104
2f6214718e829ee69229c79d17d012f3074ca26b
15,628
md
Markdown
docs/ARGUMENTS_ENV_VARS.md
akamai/uls
c62237ac4b081749eaac7eb7fd0f0818a87ef9d5
[ "Apache-2.0" ]
16
2021-06-09T17:31:13.000Z
2022-03-07T15:08:37.000Z
docs/ARGUMENTS_ENV_VARS.md
akamai/uls
c62237ac4b081749eaac7eb7fd0f0818a87ef9d5
[ "Apache-2.0" ]
7
2021-06-17T07:44:15.000Z
2022-02-28T11:10:42.000Z
docs/ARGUMENTS_ENV_VARS.md
akamai/uls
c62237ac4b081749eaac7eb7fd0f0818a87ef9d5
[ "Apache-2.0" ]
2
2021-09-29T09:31:35.000Z
2022-03-04T03:21:38.000Z
# List of parameters / Environmental variables The following tables list all available command line parameters and their corresponding environmental variables (for advanced usage). ## Global | Parameter | Env - Var | Options | Default | Description | |--------------------|--------------|-------------------------------------------------|---------|-----------------------------------------------------------| | -h <br> --help | n/a | n/a | None | Display help / usage information | | -l <br> --loglevel | ULS_LOGLEVEL | 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL' | WARNING | Adjust the overall loglevel | | -v <br> --version | n/a | n/a | None | Display ULS version information (incl. CLI & OS versions) | ## INPUT | Parameter | Env - Var | Options | Default | Description | |---------------------------|-----------------|-------------------------------------------------------------------------------------------------------|---------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | -i <br> --input | ULS_INPUT | 'EAA', 'ETP', 'MFA' | None | Specify the desired INPUT source | | --feed | ULS_FEED | EAA: 'ACCESS', 'ADMIN', 'CONHEALTH'<br> ETP: 'THREAT', 'AUP', 'DNS', 'PROXY'<br> MFA: 'AUTH','POLICY' | None | Specify the desired INPUT feed | | --format | ULS_FORMAT | 'JSON', 'TEXT' | JSON | Specify the desired INPUT (=OUTPUT) format | | --inproxy<br>--inputproxy | ULS_INPUT_PROXY | HOST:PORT | None | Adjust proxy usage for INPUT data collection (cli) <br>If this parameter does not work as expected, [please read more about it here](./FAQ.md#--inputproxy-proxy-does-not-work-as-expected) | | --rawcmd | ULS_RAWCMD | \<cli command\> | None | USE with caution /!\ <br> This is meant only to be used when told by AKAMAI [Click here for more information](ADDITIONAL_FEATURES.md#rawcmd---rawcmd-feature) | | --edgerc | ULS_EDGERC | 
/path/to/your/.edgerc | '~/.edgerc' | Specify the location of the .edgerc EDGE GRID AUTH file | | --section | ULS_SECTION | edgerc_config_section | 'default' | Specify the desired section within the .edgerc file | | --starttime | ULS_STARTTIME | EPOCH timestamp | `cli_default` | Specify an EPOCH timestamp from where to start the log collection. | | --endtime | ULS_ENDTIME | EPOCH timestamp | None | Specify an EPOCH timestamp up until where to fetch logs. ULS will exit after reaching this point.<br>ULS will not continue reading logs on CLI errors !!! | ## OUTPUT | Parameter | Output Type | Env - Var | Options | Default | Description | |-------------------|-------------|----------------------|----------------------------------------|--------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | -o <br> --output | | ULS_OUTPUT | 'TCP', 'UDP', 'HTTP', 'RAW', 'FILE' | None | Specify the desired OUTPUT target | | --host | TCP / UDP | ULS_OUTPUT_HOST | xxx.xxx.xxx.xxx | None | Specify the desired OUTPUT target host (TCP/UDP only) | | | | | | | | | --port | TCP / UDP | ULS_OUTPUT_PORT | xxxx | None | Specify the desired OUTPUT target port (TCP/UDP only) | | | | | | | | | --httpurl | HTTP(S) | ULS_HTTP_URL | http(s)://\<host\>:\<port\>/\<path\> | None | The HTTP target URL. (HTTP only) <br> Do not use --host / --port for HTTP | | --httpformat | HTTP(S) | ULS_HTTP_FORMAT | '<http_output_format>' | '{"event": %s}' | Specify the expected output format (e.g. json) where %s will be replaced with the event data. 
/!\ %s can only be used once | | --httpauthheader | HTTP(S) | ULS_HTTP_AUTH_HEADER | '{"Authorization": "VALUE"}' | None | Specify an Authorization header to auth against the HTTP Server (HTTP only) <br>Example:<br>'{"Authorization": "Splunk xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"}' | | --httpinsecure | HTTP(S) | ULS_HTTP_INSECURE | True | False | Disable TLS CA certificate verification | | | | | | | | | --filehandler | FILE | ULS_FILE_HANDLER | 'SIZE','TIME' | SIZE | Select the handler which decides how the files are rotated if either specific SIZE or TIME has been reached | | --filename | FILE | ULS_FILE_NAME | '/path/to/file.name' | None | The PATH + FILENAME where ULS should create the file | | --filebackupcount | FILE | ULS_FILE_BACKUPCOUNT | '\<number of files to keep\>' | 3 | Select the number of files that should be kept on the file system when rotating the data | | --filemaxbytes | FILE (SIZE) | ULS_FILE_MAXBYTES | '\<bytes\>' | 50 * 1024 * 1024 = 50 MB | Filesize (in bytes) a file can reach before it will be rotated.<br>Only on SIZE - Handler (`--filehandler = size`) !! | | --filetime | FILE (TIME) | ULS_FILE_TIME | ['S','M','H','D','W0'-'W6','midnight'] | 'M' | Specifies the file rotation trigger unit.<br>S: seconds, M: minutes, H: hours, D: days, 'W0'-'W6' Weekday (W0=Monday), 'midnight': midnight. | | --fileinterval | FILE (TIME) | ULS_FILE_INTERVAL | '\<interval\>' | 30 | Specifies the file rotation interval based on `--filetime` unit value.<br>Example: 30 and filetime=M would rotate the file every 30 minutes | | --fileaction | FILE | ULS_FILE_ACTION | \<file_handler_script.sh '%s'\> | None | Specify a file handler script/binary (e.g. bash) where `'%s'` will be replaced with the absolute filename (e.g. /path/to/myfile.log). /!\ %s can only be used once! 
<br>This setting enforces '--filebackupcount' to be set to '1'<br>[Click here for more information](ADDITIONAL_FEATURES.md#) | ## Special Arguments | Parameter | Env - Var | Options | Default | Description | |---|---|---|---|---| | --filter | ULS_OUTPUT_FILTER | \<regular expression\> | None | Filter (regex) to reduce number of OUTPUT log lines<br> Only loglines **matching** the `--filter <expression>` argument will be sent to the output.<br>[Click here for more information](ADDITIONAL_FEATURES.md#filter---filter-feature) | | --transformation | ULS_TRANSFORMATION | 'MCAS', 'JMESPATH' | None | OPTIONAL: Specify an optional transformation to manipulate the output format<br> [Click here for more information](TRANSFORMATIONS.md) | | --transformationpattern | ULS_TRANSFORMATION_PATTERN | \<pattern\> | None | Specifies the pattern used to transform the log event for the selected transformation. [Click here for more information](TRANSFORMATIONS.md) | ## Autoresume | Parameter | Env - Var | Options | Default | Description | |---|---|---|---|---| | --autoresume | ULS_AUTORESUME | [True, False] | False | Enable automated resume based on a checkpoint upon API failure or crash (do not use alongside --starttime) | | --autoresumepath | ULS_AUTORESUME_PATH | '/path/to/store/checkpoints/' | var/ | Specify the path where checkpoint files should be written to. 
(Trailing /) | | --autoresumewriteafter | ULS_AUTORESUME_WRITEAFTER | <int> | 1000 | Specify after how many loglines a checkpoint should be written. |
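As a sketch of how the file-output settings above could be supplied via their environment-variable equivalents (variable names are taken from the Env - Var column; the launch command and its entry point name are assumptions, not documented here):

```shell
# Time-based rotation: rotate every 30 minutes, keep 3 rotated files.
export ULS_FILE_HANDLER='TIME'
export ULS_FILE_TIME='M'
export ULS_FILE_INTERVAL='30'
export ULS_FILE_BACKUPCOUNT='3'
export ULS_FILE_NAME='/var/log/uls/output.log'   # hypothetical path

# Hypothetical launch (entry point name is an assumption):
# python3 uls.py --output file

echo "rotate every ${ULS_FILE_INTERVAL}${ULS_FILE_TIME}, keep ${ULS_FILE_BACKUPCOUNT}"
```

The same settings could equally be passed as the `--file*` CLI flags documented above; the environment variables simply avoid long command lines in container deployments.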
256.196721
419
0.273739
eng_Latn
0.438466
2f625f9b6a8051ee42080ebcc03a4e3e4a7834ab
284
md
Markdown
Ayehu/SelfServicePortal/AY GetWorkflowResultForSelfServicePortalControl/Readme.md
Gstar7CodeMan/custom-activities
d930cb9f0b516f88d6ce8bcbf72ae38b4eb8bea4
[ "MIT" ]
null
null
null
Ayehu/SelfServicePortal/AY GetWorkflowResultForSelfServicePortalControl/Readme.md
Gstar7CodeMan/custom-activities
d930cb9f0b516f88d6ce8bcbf72ae38b4eb8bea4
[ "MIT" ]
null
null
null
Ayehu/SelfServicePortal/AY GetWorkflowResultForSelfServicePortalControl/Readme.md
Gstar7CodeMan/custom-activities
d930cb9f0b516f88d6ce8bcbf72ae38b4eb8bea4
[ "MIT" ]
null
null
null
# Ayehu

AY GetWorkflowResultForSelfServicePortalControl

Method: Get

OperationID: SelfServicePortal_GetWorkflowResultForSelfServicePortalControl

EndPoint:

/Api/selfServicePortal/getWorkflowResultForControl/{eventNumber}/{type}
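A minimal sketch of building a request for this endpoint; the base URL, event number, type value, and authorization scheme are all hypothetical placeholders, not taken from the Ayehu documentation:

```shell
# Fill in the {eventNumber}/{type} path parameters of the endpoint template.
BASE_URL="https://ayehu.example.com"   # hypothetical server
EVENT_NUMBER="12345"                   # hypothetical event number
TYPE="1"                               # hypothetical control type
URL="$BASE_URL/Api/selfServicePortal/getWorkflowResultForControl/$EVENT_NUMBER/$TYPE"
echo "$URL"
# curl -X GET "$URL" -H "Authorization: Bearer <token>"   # hypothetical auth header
```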
40.571429
84
0.799296
yue_Hant
0.790877
2f62ed8880ce550a9acd024e75f14eec614c7472
1,838
md
Markdown
docs/data/oledb/cdynamicstringaccessorw-class.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/data/oledb/cdynamicstringaccessorw-class.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/data/oledb/cdynamicstringaccessorw-class.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
1
2020-05-28T15:54:57.000Z
2020-05-28T15:54:57.000Z
---
title: CDynamicStringAccessorW Class
ms.date: 11/04/2016
f1_keywords:
- CDynamicStringAccessorW
helpviewer_keywords:
- CDynamicStringAccessorW class
ms.assetid: 9b7fd5cc-3a9b-4b57-b907-f1e35de2c98f
ms.openlocfilehash: 20ea4a2d795108e00c4b11c3abea6cf7b9953ca7
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "62230788"
---
# <a name="cdynamicstringaccessorw-class"></a>CDynamicStringAccessorW Class

Allows access to a data source when you have no knowledge of the database schema (underlying structure).

## <a name="syntax"></a>Syntax

```cpp
typedef CDynamicStringAccessorT<WCHAR, DBTYPE_WSTR> CDynamicStringAccessorW;
```

## <a name="remarks"></a>Remarks

Both classes require the provider to retrieve all data from the data store as string data, but `CDynamicStringAccessorW` requests Unicode string data.

`CDynamicStringAccessorW` inherits `GetString` and `SetString` from `CDynamicStringAccessor`. When these methods are used on a `CDynamicStringAccessorW` object, `BaseType` is **WCHAR**.

## <a name="requirements"></a>Requirements

**Header**: atldbcli.h

## <a name="see-also"></a>See also

[OLE DB Consumer Templates](../../data/oledb/ole-db-consumer-templates-cpp.md)<br/>
[OLE DB Consumer Templates Reference](../../data/oledb/ole-db-consumer-templates-reference.md)<br/>
[CAccessor Class](../../data/oledb/caccessor-class.md)<br/>
[CDynamicParameterAccessor Class](../../data/oledb/cdynamicparameteraccessor-class.md)<br/>
[CManualAccessor Class](../../data/oledb/cmanualaccessor-class.md)<br/>
[CDynamicAccessor Class](../../data/oledb/cdynamicaccessor-class.md)<br/>
[CDynamicStringAccessor Class](../../data/oledb/cdynamicstringaccessor-class.md)<br/>
40.844444
191
0.781828
ita_Latn
0.469838
2f6354a6b263e7beca31a21a7272bfa0113f8520
6,723
md
Markdown
vendor/bundle/ruby/2.6.0/gems/dnsruby-1.61.4/RELEASE_NOTES.md
connorslagle/connorslagle.github.io
344ff2142d6da508fec037bff26e8946cccf41b2
[ "MIT" ]
1
2020-05-20T05:29:28.000Z
2020-05-20T05:29:28.000Z
vendor/bundle/ruby/2.6.0/gems/dnsruby-1.61.4/RELEASE_NOTES.md
connorslagle/connorslagle.github.io
344ff2142d6da508fec037bff26e8946cccf41b2
[ "MIT" ]
7
2019-09-06T03:24:10.000Z
2021-11-01T21:29:00.000Z
vendor/bundle/ruby/2.6.0/gems/dnsruby-1.61.4/RELEASE_NOTES.md
connorslagle/connorslagle.github.io
344ff2142d6da508fec037bff26e8946cccf41b2
[ "MIT" ]
1
2021-01-07T10:10:29.000Z
2021-01-07T10:10:29.000Z
# Release Notes

## v1.61.4
* Dnsruby::Name : document .punycode
* gemspec enhancement
* add yard build file
* fix creating names that include URL special characters
* Fix uninitialized constant error when using via Rails
* Implement ECDSAP256SHA256 (13) / ECDSAP384SHA384 (14) algorithms for DNSKEY
* Reinitialize all IANA TAR keys with Dnssec.reset

## v1.61.3
* TCP timeout and port changes

## v1.61.2
* Add new root key

## v1.61.1
* Add Addressable as a gem runtime dependency

## v1.61.0
* Add URI, CDS and CDNSKEY records
* Supply port to DNS.new as optional parameter
* Supply timeout to zone transfer connect
* Fix multi-line strings
* Try absolute name as candidate in DNS even if no dot supplied
* Do not try to generate candidates if no domain is given
* Handle new OpenSSL interface as well as old
* Handle new DSA interface
* fix encode error select thread issue
* handle encoding errors
* add punycode support
* Make sure dnssec is enabled in verifier and also in digroot demo
* Other minor fixes and changes to test code and infrastructure

## v1.60.2
* Fix deletion of TXT records with spaces in dynamic updates (thanks Sean Dilda)
* Fix use of non-default ports in Dnsruby::Resolver (thanks Thomas Morgan)
* Fix NAPTR encoding for null rdata dynamic update packets
* Fix CAA resource record encoding
* Avoid changing ruby global thread abort behavior (thanks Brent Cook)

## v1.60.1
* DNSSEC validation switched OFF by default (but can still be switched on)
* Add APL RR support (thanks Manabu Sonoda)
* Various test fixes (thanks Keith Bennett)
* 'include' issues fixed (thanks Keith Bennett!)
* Fixnum replacement (thanks Keith Bennett)
* Zone transfer fixes (thanks Manabu Sonoda)
* Name decoding fix
* MX record passing error now raised
* CAA RR support (thanks Richard Luther)
* TLSA RR support (thanks Manabu Sonoda)

## v1.60.0
* TCP multi-packet support fixed
* Response 'Message' now included with exception.
* Docs added
* CNAME dynamic update fix

## v1.59.3
* Output TXT record multiple strings correctly
* NONE class encoding fix
* only add name labels if there are any

## v1.59.2
* Timeout error fix

## v1.59.1
* Support for HMAC SHA512 TSIG keys
* Fix TCP pipelining tests
* IDN encoding error returned as Dnsruby::OtherResolvError

## v1.59.0
* Add LICENSE file
* Add Cache max_size (github issue 64)
* Disable caching for SOA lookups in demo check_soa.rb
* Fix for invalid nameserver in config
* Fix encoding for OPT data (thanks Craig Despeaux)
* Various test system fixes
* OPT fixes
* DNSSEC verification failure handling wrt lack of DS chain
* DNSSEC validation policy name constants
* Fix for BOGUS DLV chains
* demo upgrades
* Resolver hints improvements

## v1.58.0
* Add TCP pipelining (reusing a single TCP connection for multiple requests).
* Enhance zone reading, including reading data from a string.
* Add add_answer! method for adding duplicate answers, as needed for an AXFR response.
* Add support for GPOS and NXT resource records.
* Test cleanup, including removal of use of Nominet servers, soak_test cleanup.
* Refactorings: MessageDecoder, Resolv, Resolver (part).
* Fix zone reader adding unwanted dot to relative hostnames being converted to absolute.
* Fix default access for tsig options in Resolver.
* Fix ZoneTransfer not to use deprecated SingleResolver.
* Fix Resolver bug in parameter to create_tsig_options.
* Fix tests to always use working copy and not gem.

## v1.57.0
* Add query_raw method as alias for send_plain_message, with option to raise or return error.
* Fixed a bug in RR hash calculation where TTL should have been ignored but wasn't.
* Add support for (obsolete) GPOS resource record type.
* Tweak Travis CI configuration.
* Fix zone reader for case where a line contains whitespace preceding a comment.
* Add post install message.
* Improve README.
* Moved content of NEWS to RELEASE_NOTES.md.
* Use git ls-files now to determine files for inclusion in gem.

## v1.56.0
* Drop support for Ruby 1.8, using lambda -> and hash 'key: value' notations.
* First release since the move from Rubyforge to Github (https://github.com/alexdalitz/dnsruby).
* Add EDNS client subnet support.
* Relocate CodeMapper subclasses, Resolv, RR, and RRSet classes.
* Add Travis CI and coveralls integration.
* Improve Google IPV6 support.
* Convert some file names to snake case.
* Remove trailing whitespace from lines, and ensure that comments have space between '#' and text.
* Restore test success when running under JRuby.
* Disabled attempt to connect to Nominet servers, which are no longer available.
* Convert from test/unit to minitest/autorun to support Ruby 2.1+.
* Remove setup.rb.
* Other minor refactoring and improvements to production code, test code, and documentation.

## v1.53
* Validation routine fixes
* Ruby 1.9 fixes
* Recursor fixes
* IPv4 Regex fixes
* Fixes for A/PTR lookups with IP-like domain name
* TXT and SSHFP processing fixes
* Default retry parameters in Resolver more sensible

## v1.48
* Fixed deadlock/performance issue seen on some platforms
* DNSSEC validation now disabled by default
* Signed root DS record can be added to validator
* ITAR support removed
* multi-line DS/RRSIG reading bug fixed (thanks Marco Davids!)
* DS algorithms of more than one digit can now be read from string
* LOC records now parsed correctly
* HINFO records now parsed correctly

## v1.42
* Complicated TXT and NAPTR records now handled correctly
* ZoneReader now handles odd escape characters correctly
* Warns when immediate timeout occurs because no nameservers are configured
* Easy hmac-sha1/256 options to Resolver#tsig=
* ZoneReader fixed for "IN CNAME @" notations
* ZoneReader supports wildcards
* Dnsruby.version method added - currently returns 1.42

## v1.41
* RFC3597 unknown classes (e.g. CLASS32) now handled correctly in RRSIGs
* Resolver#do_caching flag added for Resolver-level caching
* DNSKEY#key_tag now cached - only recalculated when key data changes
* Bugfix where Resolver would not time queries out if no nameservers were configured
* Recursor now performs A and AAAA queries in parallel
* Fix for zero length salt
* Fixing priming for signed root
* Fixes for DLV verification
* Other minor fixes

## v1.40
* Zone file reading support added (Dnsruby::ZoneReader)
* Name and Label speed-ups
* CodeMapper speed-ups
* DHCID RR added
* LOC presentation format parsing fixed
* KX RR added
* Quotations now allowed in text representation for ISDN, X25 and HINFO
* AFSDB from_string fixes
* Fixing CERT types and from_string
* CERT now allows algorithm 0
* Fix for DS record comparison
* HIP RR added
* Minor bug fixes
* IPSECKEY RR added
* Clients can now manipulate Name::Labels
31.269767
98
0.768258
eng_Latn
0.976055
2f637f9a0ba064776b2c29bc07b289cc3e47788f
11,980
md
Markdown
Bibel/at/Psalmen/78.md
lustremedia/Menge-Bibel
2a7e71b23fc22a3f37eb5829c33b1748f0ad2808
[ "CC0-1.0" ]
null
null
null
Bibel/at/Psalmen/78.md
lustremedia/Menge-Bibel
2a7e71b23fc22a3f37eb5829c33b1748f0ad2808
[ "CC0-1.0" ]
null
null
null
Bibel/at/Psalmen/78.md
lustremedia/Menge-Bibel
2a7e71b23fc22a3f37eb5829c33b1748f0ad2808
[ "CC0-1.0" ]
1
2020-12-14T15:49:01.000Z
2020-12-14T15:49:01.000Z
### Warnender Rückblick auf Israels wiederholten Ungehorsam __78__ <sup>1</sup><em>Ein Lehrgedicht</em><sup title="vgl. 32,1">&#x2732;</sup> <em>von Asaph</em><sup title="vgl. 50,1">&#x2732;</sup>. Gib acht, mein Volk, auf meine Belehrung, <blockquote> <blockquote> leiht euer Ohr den Worten meines Mundes! </blockquote> </blockquote> <sup>2</sup>Ich will auftun meinen Mund zur Rede in Sprüchen, <blockquote> <blockquote> will Rätsel verkünden von der Vorzeit her. </blockquote> </blockquote> <sup>3</sup>Was wir gehört und erfahren <blockquote> <blockquote> und unsere Väter uns erzählt haben, </blockquote> </blockquote> <sup>4</sup>das wollen wir ihren Kindern nicht verschweigen, <blockquote> <blockquote> sondern dem künftgen Geschlecht verkünden die Ruhmestaten des HERRN und seine Stärke und die Wunder, die er getan hat. </blockquote> </blockquote> <sup>5</sup>Denn er hat ein Zeugnis aufgerichtet in Jakob <blockquote> <blockquote> und festgestellt in Israel ein Gesetz, von dem er unsern Vätern gebot, es ihren Kindern kundzutun, </blockquote> </blockquote> <sup>6</sup>auf daß die Nachwelt Kenntnis davon erhielte: <blockquote> <blockquote> die Kinder, die geboren würden, sollten aufstehn und ihren Kindern davon erzählen, </blockquote> </blockquote> <sup>7</sup>daß sie auf Gott ihr Vertrauen setzten <blockquote> <blockquote> und die Taten Gottes nicht vergäßen und seine Gebote befolgten, </blockquote> </blockquote> <sup>8</sup>daß sie nicht wie ihre Väter würden, <blockquote> <blockquote> ein trotziges und widerspenstiges Geschlecht, ein Geschlecht mit wankelmütigem Herzen, dessen Geist sich nicht zuverlässig zu Gott hielt. </blockquote> </blockquote> <sup>9</sup>Ephraims Söhne, bogengerüstete Schützen, <blockquote> <blockquote> haben den Rücken gewandt am Tage des Kampfes. 
</blockquote> </blockquote> <sup>10</sup>Sie hielten den gottgestifteten Bund nicht <blockquote> <blockquote> und wollten nicht wandeln in seinem Gesetz; </blockquote> </blockquote> <sup>11</sup>nein, sie vergaßen seine Taten <blockquote> <blockquote> und seine Wunder, die er sie hatte sehen lassen. </blockquote> </blockquote> <sup>12</sup>Vor ihren Vätern hatte er Wunder getan <blockquote> <blockquote> im Lande Ägypten, im Gefilde von Zoan<span data-param="f3_19_78_12A" class="fussnote">A</span>. </blockquote> </blockquote> <sup>13</sup>Er spaltete das Meer und ließ sie hindurchziehn <blockquote> <blockquote> und türmte die Wasser auf wie einen Wall; </blockquote> </blockquote> <sup>14</sup>er leitete sie bei Tag durch die Wolke <blockquote> <blockquote> und während der ganzen Nacht durch Feuerschein; </blockquote> </blockquote> <sup>15</sup>er spaltete Felsen in der Wüste <blockquote> <blockquote> und tränkte sie reichlich wie mit Fluten; </blockquote> </blockquote> <sup>16</sup>Bäche ließ er aus dem Felsen hervorgehn <blockquote> <blockquote> und Wasser gleich Strömen niederfließen. </blockquote> </blockquote> <sup>17</sup>Dennoch fuhren sie fort, gegen ihn zu sündigen, <blockquote> <blockquote> und widerstrebten dem Höchsten in der Wüste; </blockquote> </blockquote> <sup>18</sup>ja, sie versuchten Gott in ihren Herzen, <blockquote> <blockquote> indem sie Speise verlangten für ihr Gelüst, </blockquote> </blockquote> <sup>19</sup>und redeten gegen Gott mit den Worten: <blockquote> <blockquote> »Kann Gott wohl einen Tisch in der Wüste uns decken? 
</blockquote> </blockquote> <sup>20</sup>Wohl hat er den Felsen geschlagen, daß Wasser <blockquote> <blockquote> flossen heraus und Bäche sich ergossen; doch wird er auch vermögen Brot zu geben oder Fleisch seinem Volke zu schaffen?« </blockquote> </blockquote> <sup>21</sup>Drum, als der HERR das hörte, ergrimmte er: <blockquote> <blockquote> Feuer entbrannte gegen Jakob, und Zorn stieg auf gegen Israel, </blockquote> </blockquote> <sup>22</sup>weil sie an Gott nicht glaubten <blockquote> <blockquote> und auf seine Hilfe nicht vertrauten. </blockquote> </blockquote> <sup>23</sup>Und doch gebot er den Wolken droben <blockquote> <blockquote> und tat die Türen des Himmels auf, </blockquote> </blockquote> <sup>24</sup>ließ Manna auf sie regnen zum Essen <blockquote> <blockquote> und gab ihnen himmlisches Brotkorn: </blockquote> </blockquote> <sup>25</sup>Engelspeise aßen sie allesamt, <blockquote> <blockquote> Reisekost sandte er ihnen zur Sättigung. </blockquote> </blockquote> <sup>26</sup>Hinfahren ließ er den Ostwind am Himmel <blockquote> <blockquote> und führte durch seine Kraft den Südwind herbei; </blockquote> </blockquote> <sup>27</sup>Fleisch ließ er auf sie regnen wie Staub <blockquote> <blockquote> und beschwingte Vögel wie Meeressand; </blockquote> </blockquote> <sup>28</sup>mitten in ihr Lager ließ er sie fallen, <blockquote> <blockquote> rings um ihre Wohnungen her. </blockquote> </blockquote> <sup>29</sup>Da aßen sie und wurden reichlich satt, <blockquote> <blockquote> und was sie gewünscht, gewährte er ihnen. </blockquote> </blockquote> <sup>30</sup>Noch hatten sie ihres Gelüsts sich nicht entschlagen, <blockquote> <blockquote> noch hatten sie ihre Speise in ihrem Munde, </blockquote> </blockquote> <sup>31</sup>da stieg der Ingrimm Gottes gegen sie auf <blockquote> <blockquote> und erwürgte die kräftigen Männer unter ihnen und streckte Israels junge Mannschaft zu Boden. 
</blockquote> </blockquote> <sup>32</sup>Trotz alledem sündigten sie weiter <blockquote> <blockquote> und glaubten nicht an seine Wunder<sup title="= Machttaten">&#x2732;</sup>. </blockquote> </blockquote> <sup>33</sup>Drum ließ er ihre Tage vergehn wie einen Hauch <blockquote> <blockquote> und ihre Jahre in angstvoller Hast. </blockquote> </blockquote> <sup>34</sup>Wenn er sie sterben ließ, dann fragten sie nach ihm <blockquote> <blockquote> und kehrten um und suchten Gott eifrig </blockquote> </blockquote> <sup>35</sup>und dachten daran, daß Gott ihr Fels sei <blockquote> <blockquote> und Gott, der Höchste, ihr Erlöser. </blockquote> </blockquote> <sup>36</sup>Doch sie heuchelten ihm mit ihrem Munde <blockquote> <blockquote> und belogen ihn mit ihrer Zunge; </blockquote> </blockquote> <sup>37</sup>denn ihr Herz hing nicht fest an ihm, <blockquote> <blockquote> und sie hielten nicht treu an seinem Bunde. </blockquote> </blockquote> <sup>38</sup>Doch er war barmherzig, vergab die Schuld <blockquote> <blockquote> und vertilgte sie nicht, nein, immer wieder hielt er seinen Zorn zurück und ließ nicht seinen ganzen Grimm erwachen; </blockquote> </blockquote> <sup>39</sup>denn er dachte daran, daß Fleisch sie waren, <blockquote> <blockquote> ein Windhauch, der hinfährt und nicht wiederkehrt. </blockquote> </blockquote> <sup>40</sup>Wie oft widerstrebten sie ihm in der Wüste, <blockquote> <blockquote> kränkten sie ihn in der Öde! </blockquote> </blockquote> <sup>41</sup>Und immer aufs neue versuchten sie Gott <blockquote> <blockquote> und betrübten den Heiligen Israels. </blockquote> </blockquote> <sup>42</sup>Sie dachten nicht mehr an seine starke Hand, <blockquote> <blockquote> an den Tag, wo er sie vom Bedränger erlöste, </blockquote> </blockquote> <sup>43</sup>als er seine Zeichen in Ägypten tat, <blockquote> <blockquote> seine Wunder im Gefilde von Zoan<sup title="V.12">&#x2732;</sup>. 
</blockquote> </blockquote> <sup>44</sup>Er verwandelte dort in Blut ihre Ströme<sup title="= Nilarme">&#x2732;</sup>, <blockquote> <blockquote> so daß man ihr fließendes Wasser nicht trinken konnte; </blockquote> </blockquote> <sup>45</sup>er sandte unter sie Ungeziefer, das sie fraß, <blockquote> <blockquote> und Frösche, die ihnen Verderben brachten; </blockquote> </blockquote> <sup>46</sup>er gab ihre Ernte den Freßgrillen preis <blockquote> <blockquote> und die Frucht ihrer Arbeit den Heuschrecken; </blockquote> </blockquote> <sup>47</sup>er zerschlug ihre Reben mit Hagel, <blockquote> <blockquote> ihre Maulbeerfeigenbäume mit Schloßen; </blockquote> </blockquote> <sup>48</sup>er gab ihr Vieh dem Hagel preis <blockquote> <blockquote> und ihren Besitz den Blitzen; </blockquote> </blockquote> <sup>49</sup>er sandte gegen sie seines Zornes Glut, <blockquote> <blockquote> Wut und Grimm und Drangsal: eine Schar<span data-param="f3_19_78_49A" class="fussnote">A</span> von Unglücksengeln; </blockquote> </blockquote> <sup>50</sup>er ließ seinem Ingrimm freien Lauf, <blockquote> <blockquote> entzog ihre Seele nicht dem Tode, überließ vielmehr ihr Leben der Pest; </blockquote> </blockquote> <sup>51</sup>er ließ alle Erstgeburt in Ägypten sterben, <blockquote> <blockquote> der Manneskraft Erstlinge in den Zelten Hams. </blockquote> </blockquote> <sup>52</sup>Dann ließ er sein Volk ausziehn wie Schafe <blockquote> <blockquote> und leitete sie in der Wüste wie eine Herde </blockquote> </blockquote> <sup>53</sup>und führte sie sicher, so daß sie nicht bangten; <blockquote> <blockquote> ihre Feinde aber bedeckte das Meer. 
</blockquote> </blockquote> <sup>54</sup>So brachte er sie nach seinem heiligen Gebiet, <blockquote> <blockquote> in das Bergland, das er mit seiner Rechten erworben, </blockquote> </blockquote> <sup>55</sup>und vertrieb vor ihnen her die Völker, <blockquote> <blockquote> verloste ihr Gebiet als erblichen Besitz und ließ in ihren Zelten die Stämme Israels wohnen. </blockquote> </blockquote> <sup>56</sup>Doch sie versuchten und reizten Gott, den Höchsten, <blockquote> <blockquote> und hielten sich nicht an seine Gebote, </blockquote> </blockquote> <sup>57</sup>sondern fielen ab und handelten treulos, ihren Vätern gleich; <blockquote> <blockquote> sie versagten wie ein trüglicher<sup title="oder: schlaffer">&#x2732;</sup> Bogen </blockquote> </blockquote> <sup>58</sup>und erbitterten ihn durch ihren Höhendienst <blockquote> <blockquote> und reizten ihn zum Eifer durch ihre Götzenbilder. </blockquote> </blockquote> <sup>59</sup>Als Gott es vernahm, ergrimmte er <blockquote> <blockquote> und verwarf Israel ganz und gar: </blockquote> </blockquote> <sup>60</sup>er gab seine Wohnung in Silo auf, <blockquote> <blockquote> das Zelt, das er aufgeschlagen unter den Menschen; </blockquote> </blockquote> <sup>61</sup>er ließ seine Macht in Gefangenschaft fallen <blockquote> <blockquote> und seine Zier in die Hand des Feindes; </blockquote> </blockquote> <sup>62</sup>er gab sein Volk dem Schwerte preis <blockquote> <blockquote> und war entrüstet über sein Erbteil<sup title="= Eigentumsvolk">&#x2732;</sup>; </blockquote> </blockquote> <sup>63</sup>seine jungen Männer fraß das Feuer, <blockquote> <blockquote> und seine Jungfraun blieben ohne Brautlied; </blockquote> </blockquote> <sup>64</sup>seine Priester fielen durchs Schwert, <blockquote> <blockquote> und seine Witwen konnten keine Totenklage halten.<span data-param="f3_19_78_64A" class="fussnote">A</span> </blockquote> </blockquote> <sup>65</sup>Da erwachte der Allherr wie ein Schlafender, <blockquote> <blockquote> wie ein 
vom Wein übermannter Kriegsheld; </blockquote> </blockquote> <sup>66</sup>er schlug seine Feinde von hinten <blockquote> <blockquote> und gab sie ewiger Schande preis. </blockquote> </blockquote> <sup>67</sup>Auch verwarf er das Zelt Josephs <blockquote> <blockquote> und erwählte nicht den Stamm Ephraim, </blockquote> </blockquote> <sup>68</sup>sondern erwählte den Stamm Juda, <blockquote> <blockquote> den Berg Zion, den er liebgewonnen; </blockquote> </blockquote> <sup>69</sup>und er baute den ragenden Bergen<sup title="oder: Palästen">&#x2732;</sup> gleich sein Heiligtum, <blockquote> <blockquote> fest wie die Erde, die er auf ewig gegründet. </blockquote> </blockquote> <sup>70</sup>Dann erwählte er David, seinen Knecht, <blockquote> <blockquote> den er wegnahm von den Hürden des Kleinviehs; </blockquote> </blockquote> <sup>71</sup>von den Mutterschafen holte er ihn, <blockquote> <blockquote> daß er Jakob weide, sein Volk, und Israel, seinen Erbbesitz. </blockquote> </blockquote> <sup>72</sup>Der weidete sie mit redlichem Herzen <blockquote> <blockquote> und leitete sie mit kundiger Hand. </blockquote> </blockquote>
26.387665
130
0.766444
deu_Latn
0.983085
2f63f15928764dfda95ff6be98547f75eaffef5e
204
md
Markdown
README.md
MBagrat/learn-html-and-css
845e369fd7ec2d36841f75a1e23c9894fe3ca32c
[ "MIT" ]
null
null
null
README.md
MBagrat/learn-html-and-css
845e369fd7ec2d36841f75a1e23c9894fe3ca32c
[ "MIT" ]
null
null
null
README.md
MBagrat/learn-html-and-css
845e369fd7ec2d36841f75a1e23c9894fe3ca32c
[ "MIT" ]
null
null
null
# learn-html-and-css

This repository contains the source code of the course **[Learn HTML5 and CSS3 From Scratch](https://www.youtube.com/watch?v=mU6anWqZJcc)** from the YouTube channel freeCodeCamp.org.
51
181
0.784314
eng_Latn
0.783284