---
layout: default
---
<div class="container-fluid">
<div class="row banner">
<div class="col-12 col-md-8">
<h1>Toronto Overwatch Beer League</h1>
<h4>Toronto's first social Overwatch LAN gaming league</h4>
<!-- <div class="text-center" style="padding-top:3em;"><a href="{{ site.baseurl }}/join/" class="btn btn-primary primary-cta">Sign up for TOBL</a></div> -->
<!-- <p class="text-center" style="font-size:80%;"><small>Registration for Season 5 is closed.</small></p> -->
</div>
<div class="col-12 col-md-4">
<div>
<img src="{{ site.baseurl }}/images/logo_white.png" class="img-responsive banner-logo" alt="Toronto Overwatch Beer League logo">
</div>
</div>
</div>
</div>
<div class="container">
<div class="row page-section">
<div class="col-12">
<h1 class="text-center">How it works</h1>
</div>
</div>
<div class="row">
<div class="col-10 col-sm-8 col-md-4 mx-auto">
<div class="feature">
<div class="text-center">
<img class="feature-icon" style="margin-top:-55px;" src="{{ site.baseurl }}/images/notes-icon.svg" alt="Notes icon">
</div>
<p class="text-center" style="margin-top:-26px;"><a href="{{ site.baseurl }}/join/">Create a team</a> or <a href="{{ site.baseurl }}/join/">sign up as an individual</a> and get drafted.</p>
</div>
</div>
<div class="col-10 col-sm-8 col-md-4 mx-auto">
<div class="feature">
<div class="text-center" style="margin-top:-5px;">
<img class="feature-icon" src="{{ site.baseurl }}/images/overwatch_logo.png" alt="Overwatch Logo">
</div>
<p class="text-center">Compete weekly in an organized Overwatch league. See the details <a href="{{ site.baseurl }}/league/">here</a>.</p>
</div>
</div>
<div class="col-10 col-sm-8 col-md-4 mx-auto">
<div class="feature">
<div class="text-center">
<img class="feature-icon" src="{{ site.baseurl }}/images/hive_logo.jpg" alt="The Hive eSports Logo" style="width:130px;">
</div>
<p class="text-center">Play on LAN at <a href="https://www.facebook.com/thehiveesports/">The Hive Esports</a>. Have fun and socialize.</p>
</div>
</div>
</div>
</div>
<div class="jumbotron-fluid">
<div class="container center-vertical">
<div class="row justify-content-center">
<div class="col-10 col-md-8 col-lg-6">
<div class="mailing-list-panel">
<h1>Stay notified</h1>
<!-- <p>Individual registration is now open! Sign up <a href="http://overwatchtoronto.org/join/" style="color:white">here</a> or join our mailing list for future beer league updates.</p> -->
<p>Registration opens soon. Sign up for our mailing list and you'll be the first to know.</p>
<!-- Begin MailChimp Signup Form -->
<style type="text/css">
#mc_embed_signup{/*background:#fff; clear:left; font:14px Helvetica,Arial,sans-serif;*/ width:100%;}
</style>
<div id="mc_embed_signup">
<form action="https://overwatchtoronto.us17.list-manage.com/subscribe/post?u=8b3de13b281e00b24f345f7e5&id=96eab85b72" method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form" class="validate" target="_blank" novalidate>
<div id="mc_embed_signup_scroll" class="mx-auto">
<div class="form-group">
<label for="mce-EMAIL" class="mailing-list-label">Email address</label>
<input type="email" value="" name="EMAIL" class="email form-control" id="mce-EMAIL" required>
</div>
<!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
<div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_8b3de13b281e00b24f345f7e5_96eab85b72" tabindex="-1" value=""></div>
<div class="form-group">
<div class="clear">
<input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button btn btn-block">
</div>
</div>
</div>
</form>
</div>
<!--End mc_embed_signup-->
</div>
</div>
</div>
</div>
</div>
<div class="container">
<div class="row page-section-no-line">
<div class="col-10 col-sm-10 col-md-8 col-lg-6 mx-auto">
<h1 class="text-center">The Hive Esports</h1>
<p>All of our matches take place on LAN at <a href="https://www.facebook.com/thehiveesports/">The Hive Esports</a>. Meet other Overwatch players, talk strategy, and enjoy some food and drink while socializing.</p>
<p>The Hive boasts 7000 square feet, 21 high end gaming PCs, 30 TVs, retro gaming machines, and a full service kitchen. Located just two minutes from St. Clair Station.</p>
</div>
</div>
<div class="row">
<div class="col-10 col-sm-10 col-md-8 col-lg-6 mx-auto">
<div class="map-responsive">
<iframe
width="600"
height="450"
frameborder="0" style="border:0"
src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d2885.1084227746137!2d-79.39813908425393!3d43.68750927912014!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x882b335b46a62d57%3A0x7299f91389e798f8!2sThe+Hive!5e0!3m2!1sen!2sca!4v1534813943290" allowfullscreen>
</iframe>
</div>
</div>
</div>
</div>
<div style="padding-bottom:4em"></div>
# Dark++
The Dark++ theme is a personal theme based on Dark+ (the VS Code default theme).
I have found it pleasant to use myself, so I am sharing it.
If you find it useful, please star the project on GitHub.
The original documentation was written in Chinese; non-Chinese readers may want to use translation software.

### 语法着色示例

### VSCODE 界面截图
# GitHub Source
## [Github](https://github.com/codetin/DarkPlusPlus.git)
# Changelog
## 0.2.1 - September 23, 2021
Added support for string.quoted.docstring.multi.python
## 0.1.7 - September 22, 2021
Separated the class color from the variable color
Added a color for property
## 0.0.5 - February 18, 2021
Updated some of the colors
Added an edge color strip for tabs
## 0.0.4 2019-12-16
Because there are far too many themes named Dark++, the theme was renamed to Dark+3 😄
## 0.0.3 2019-12-16
set constant.numeric to #91afd1
## 0.0.2 2019-12-16
### meta.function color
## 0.0.1 2019-12-16
### make meta.function color
---
title: Java
---
**What is Java?**
<a href='https://www.oracle.com/java/index.html' target='_blank' rel='nofollow'>Java</a> is a programming language developed by <a href='https://en.wikipedia.org/wiki/Sun_Microsystems' target='_blank' rel='nofollow'>Sun Microsystems</a> in 1995, which was later acquired by <a href='http://www.oracle.com/index.html' target='_blank' rel='nofollow'>Oracle</a>. It's now a full platform with lots of standard APIs, open source APIs, tools, and a huge developer community, and it is used to build the most trusted enterprise solutions by big and small companies alike. <a href='https://www.android.com/' target='_blank' rel='nofollow'>Android</a> application development is done fully with Java and its ecosystem. To learn more about Java, read <a href='https://java.com/en/download/faq/whatis_java.xml' target='_blank' rel='nofollow'>this</a> and <a href='http://tutorials.jenkov.com/java/what-is-java.html' target='_blank' rel='nofollow'>this</a>.
## Version
The latest version is <a href='http://www.oracle.com/technetwork/java/javase/overview' target='_blank' rel='nofollow'> Java 9</a>, which was released in 2017 with <a href='https://docs.oracle.com/javase/9/whatsnew/toc.htm#JSNEW-GUID-C23AFD78-C777-460B-8ACE-58BE5EA681F6' target='_blank' rel='nofollow'>various improvements</a> over the previous version, Java 8. But for all intents and purposes, we will use Java 8 in this wiki for all tutorials.
Java is also divided into several "Editions":
* <a href='http://www.oracle.com/technetwork/java/javase/overview/index.html' target='_blank' rel='nofollow'>SE</a> - Standard Edition - for desktop and standalone server applications
* <a href='http://www.oracle.com/technetwork/java/javaee/overview/index.html' target='_blank' rel='nofollow'>EE</a> - Enterprise Edition - for developing and executing Java components that run embedded in a Java server
* <a href='http://www.oracle.com/technetwork/java/embedded/javame/overview/index.html' target='_blank' rel='nofollow'>ME</a> - Micro Edition - for developing and executing Java applications on mobile phones and embedded devices
## Installation: JDK or JRE?
Download the latest Java binaries from the <a href='http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html' target='_blank' rel='nofollow'>official website</a>. Here you may face a question: which one to download, JDK or JRE? JRE stands for Java Runtime Environment, the platform-dependent Java Virtual Machine that runs Java code, and JDK stands for Java Development Kit, which consists of most of the development tools (most importantly the compiler `javac`) and also the JRE. So, for an average user the JRE would be sufficient, but since we will be developing with Java, we will download the JDK.
## Platform specific installation instructions
### Windows
* Download the relevant <a href='https://en.wikipedia.org/wiki/Windows_Installer' target='_blank' rel='nofollow'>.msi</a> file (x86 / i586 for 32bits, x64 for 64bits)
* Run the .msi file. It's a self-extracting executable file that will install Java on your system!
### Linux
* Download the relevant <a href='http://www.cyberciti.biz/faq/linux-unix-bsd-extract-targz-file/' target='_blank' rel='nofollow'>tar.gz</a> file for your system and install:

```bash
$ tar zxvf jdk-8uversion-linux-x64.tar.gz
```
* <a href='https://en.wikipedia.org/wiki/List_of_Linux_distributions#RPM-based' target='_blank' rel='nofollow'>RPM based Linux platforms</a>: download the relevant <a href='https://en.wikipedia.org/wiki/RPM_Package_Manager' target='_blank' rel='nofollow'>.rpm</a> file and install:

```bash
$ rpm -ivh jdk-8uversion-linux-x64.rpm
```
* Users have the choice to install an open source version of Java, OpenJDK, or the Oracle JDK. While OpenJDK is in active development and in sync with the Oracle JDK, they just differ in <a href='http://openjdk.java.net/faq/' target='_blank' rel='nofollow'>licensing</a>. However, a few developers complain about the stability of OpenJDK. Instructions for **Ubuntu**:

OpenJDK installation:

```bash
sudo apt-get install openjdk-8-jdk
```

Oracle JDK installation:

```bash
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
```
### Mac
* Either download the Mac OSX .dmg executable from Oracle Downloads
* Or use <a href='http://brew.sh/' target='_blank' rel='nofollow'>Homebrew</a> to <a href='http://stackoverflow.com/a/28635465/2861269' target='_blank' rel='nofollow'>install</a>:

```bash
brew tap caskroom/cask
brew install brew-cask
brew cask install java
```
### Verify Installation
Verify Java has been properly installed in your system by opening Command Prompt (Windows) / Windows Powershell / Terminal (Mac OS and *Unix) and checking the versions of the Java runtime and compiler:

```bash
$ java -version
java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

$ javac -version
javac 1.8.0_66
```
**Tip**: If you get an error such as "Command Not Found" on either `java` or `javac` or both, don't panic, it's just that your system PATH is not properly set. For Windows, see <a href='http://stackoverflow.com/questions/15796855/java-is-not-recognized-as-an-internal-or-external-command' target='_blank' rel='nofollow'>this StackOverflow answer</a> or <a href='http://javaandme.com/' target='_blank' rel='nofollow'>this article</a> on how to do it. There are also guides for <a href='http://stackoverflow.com/questions/9612941/how-to-set-java-environment-path-in-ubuntu' target='_blank' rel='nofollow'>Ubuntu</a> and <a href='http://www.mkyong.com/java/how-to-set-java_home-environment-variable-on-mac-os-x/' target='_blank' rel='nofollow'>Mac</a> as well. If you still can't figure it out, don't worry, just ask us in our <a href='https://gitter.im/FreeCodeCamp/java' target='_blank' rel='nofollow'>Gitter room</a>!
## JVM
Now that we are done with the installations, let's first understand the nitty-gritty of the Java ecosystem. Java is an <a href='http://stackoverflow.com/questions/1326071/is-java-a-compiled-or-an-interpreted-programming-language' target='_blank' rel='nofollow'>interpreted and compiled</a> language; that is, the code we write gets compiled to bytecode and interpreted to run. We write the code in .java files, and Java compiles them into <a href='https://en.wikipedia.org/wiki/Java_bytecode' target='_blank' rel='nofollow'>bytecodes</a>, which are run on a Java Virtual Machine, or JVM, for execution. These bytecode files typically have a .class extension.
Java is a pretty secure language, as it doesn't let your program run directly on the machine. Instead, your program runs on a Virtual Machine called the JVM. This Virtual Machine exposes several APIs for the low-level machine interactions you can make, but other than that you cannot play with machine instructions explicitly. This adds a huge bonus of security.
Also, once your bytecode is compiled, it can run on any Java VM. This Virtual Machine is machine-dependent, i.e., it has different implementations for Windows, Linux, and Mac. But your program is guaranteed to run on any system thanks to this VM. This philosophy is called <a href='https://en.wikipedia.org/wiki/Write_once,_run_anywhere' target='_blank' rel='nofollow'>"Write Once, Run Anywhere"</a>.
## Hello World!
Let's write a sample Hello World application. Open any editor / IDE of choice and create a file `HelloWorld.java`.
```java
public class HelloWorld {
    public static void main(String[] args) {
        // Prints "Hello, World" to the terminal window.
        System.out.println("Hello, World");
    }
}
```
**N.B.** Keep in mind that in Java the file name must be the **exact same name as the public class** in order to compile!
Now open the terminal / Command Prompt. Change your current directory in the terminal / Command Prompt to the directory where your file is located, and compile the file:

```bash
$ javac HelloWorld.java
```

Now run the file using the `java` command!

```bash
$ java HelloWorld
Hello, World
```
Congrats! Your first Java program has run successfully. Here we are just printing a string passing it to the API `System.out.println`. We will cover all the concepts in the code, but you are welcome to take a <a href='https://docs.oracle.com/javase/tutorial/getStarted/application/' target='_blank' rel='nofollow'>closer look</a>! If you have any doubt or need additional help, feel free to contact us anytime in our <a href='https://gitter.im/FreeCodeCamp/java' target='_blank' rel='nofollow'>Gitter Chatroom</a>!
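The `String[] args` parameter of `main` is worth a quick look: it receives any command-line arguments passed after the class name. As a small illustration (a hypothetical `Greeter` class written for this guide, not part of the official tutorial), here is a variant that greets whoever is named on the command line:

```java
// Hypothetical example: run with `java Greeter Campers` to print "Hello, Campers".
public class Greeter {
    // Builds the greeting string; separated out so it is easy to reuse.
    static String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Use the first command-line argument if one was passed, otherwise a default.
        String name = (args.length > 0) ? args[0] : "World";
        System.out.println(greet(name));
    }
}
```

Running `java Greeter` with no arguments falls back to the default greeting, just like the Hello World example above.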
## Documentation
Java is heavily <a href='https://docs.oracle.com/javase/8/docs/' target='_blank' rel='nofollow'>documented</a>, as it supports huge amounts of API's. If you are using any major IDE such as Eclipse or IntelliJ IDEA, you would find the Java Documentation included within.
Also, here is a list of free IDEs for Java coding:
* <a href='https://netbeans.org/' target='_blank' rel='nofollow'>NetBeans</a>
* <a href='https://eclipse.org/' target='_blank' rel='nofollow'>Eclipse</a>
* <a href='https://www.jetbrains.com/idea/features/' target='_blank' rel='nofollow'>IntelliJ IDEA</a>
* <a href='https://developer.android.com/studio/index.html' target='_blank' rel='nofollow'>Android Studio</a>
* <a href='https://www.bluej.org/' target='_blank' rel='nofollow'>BlueJ</a>
* <a href='http://www.jedit.org/' target='_blank' rel='nofollow'>jEdit</a>
* <a href='http://www.oracle.com/technetwork/developer-tools/jdev/overview/index-094652.html' target='_blank' rel='nofollow'>Oracle JDeveloper</a>
| 79.204918 | 935 | 0.746766 | eng_Latn | 0.917044 |
---
title: 'Recordset: Locking Records (ODBC)'
ms.date: 11/04/2016
helpviewer_keywords:
- locks [C++], recordsets
- optimistic locking
- pessimistic locking in ODBC
- recordsets [C++], locking records
- optimistic locking, ODBC
- ODBC recordsets [C++], locking records
- data [C++], locking
ms.assetid: 8fe8fcfe-b55a-41a8-9136-94a7cd1e4806
ms.openlocfilehash: abd5f817ad321241df2d8565bd6bf346c0792088
ms.sourcegitcommit: c123cc76bb2b6c5cde6f4c425ece420ac733bf70
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/14/2020
ms.locfileid: "81366967"
---
# <a name="recordset-locking-records-odbc"></a>Recordset: Locking Records (ODBC)
This topic applies to the MFC ODBC classes.
This topic covers:
- [What record-locking modes are available.](#_core_record.2d.locking_modes)
- [How to lock the records in your recordset during updates.](#_core_locking_records_in_your_recordset)
When you use a recordset to update a record on the data source, your application can lock the record so that no other user can update it at the same time. The state of a record updated by two users simultaneously is undefined, unless the system can guarantee that two users cannot update a record at the same time.
> [!NOTE]
> This topic applies to objects derived from `CRecordset` in which bulk row fetching has not been implemented. If you have implemented bulk row fetching, some of the information does not apply. For example, you cannot call the `Edit` and `Update` member functions. For more information about bulk row fetching, see [Recordset: Fetching Records in Bulk (ODBC)](../../data/odbc/recordset-fetching-records-in-bulk-odbc.md).
## <a name="record-locking-modes"></a><a name="_core_record.2d.locking_modes"></a>Record-locking modes
The database classes provide two [record-locking modes](../../mfc/reference/crecordset-class.md#setlockingmode):
- Optimistic locking (the default)
- Pessimistic locking
Updating a record occurs in three steps:
1. Begin the operation by calling the [Edit](../../mfc/reference/crecordset-class.md#edit) member function.
1. Modify the appropriate fields of the current record.
1. End the operation, and normally commit the update, by calling the [Update](../../mfc/reference/crecordset-class.md#update) member function.
Optimistic locking locks the record on the data source only during the `Update` call. If you use optimistic locking in a multiuser environment, your application must handle an `Update` failure condition. Pessimistic locking locks the record as soon as `Edit` is called and does not release it until `Update` is called (failures are indicated through the `CDBException` mechanism, not by a FALSE return value from `Update`). Pessimistic locking carries a potential performance cost for other users, because concurrent access to the same record might have to wait until your application's `Update` process completes.
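The three-step flow can be sketched with a minimal stand-in class. This is a mock written for illustration only; `CRecordset` is an MFC class, and its real API is not reproduced here — only the Edit → modify → Update call sequence is mimicked:

```cpp
#include <string>

// Mock recordset illustrating the Edit -> modify -> Update flow described
// above. Not MFC's CRecordset; it only mimics the call sequence.
class MockRecordset {
public:
    std::string m_strName;  // a field data member, as in an MFC recordset

    void Edit() { editing_ = true; }  // 1. begin the edit operation

    bool Update() {                   // 3. commit the change
        if (!editing_) return false;  // Update without a prior Edit fails
        committed_ = m_strName;
        editing_ = false;
        return true;
    }

    const std::string& Committed() const { return committed_; }

private:
    bool editing_ = false;
    std::string committed_;
};
```

With the mock, a typical update looks like `rs.Edit(); rs.m_strName = "New name"; rs.Update();` — the same shape an MFC recordset update takes.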
## <a name="locking-records-in-your-recordset"></a><a name="_core_locking_records_in_your_recordset"></a>Locking records in your recordset
If you want to change the default [locking mode](#_core_record.2d.locking_modes) of a recordset object, you must change the mode before calling `Edit`.
#### <a name="to-change-the-current-locking-mode-for-your-recordset"></a>To change the current locking mode for your recordset
1. Call the [SetLockingMode](../../mfc/reference/crecordset-class.md#setlockingmode) member function, specifying either `CRecordset::optimistic` or `CRecordset::pessimistic`.
The new locking mode remains in effect until you change it again or the recordset is closed.
> [!NOTE]
> Relatively few ODBC drivers currently support pessimistic locking.
## <a name="see-also"></a>See also
[Recordset (ODBC)](../../data/odbc/recordset-odbc.md)<br/>
[Recordset: Performing a Join (ODBC)](../../data/odbc/recordset-performing-a-join-odbc.md)<br/>
[Recordset: Adding, Updating, and Deleting Records (ODBC)](../../data/odbc/recordset-adding-updating-and-deleting-records-odbc.md)
Project Name: Vaccine Carrier Cold Box Temperature Monitoring
Author: Markel Robregado
Lemuel Adane
Date Created: 8/5/2021
Project Description:
- The AWS IoT EduKit collects sensor data via UART from a Texas Instruments Sub-1 GHz sensor network
and sends these data to AWS IoT. The Sub-1 GHz sensor network consists of collector and sensor nodes.
Each sensor node is attached to a vaccine carrier cold box.
Hardware:
- AWS IOT EduKit
- CC1352R Launchpad (collector)
- LPSTK-CC1352R (sensor node connected to vaccine carrier cold box)
Pre-requisites:
- See the getting started guide at this link: https://edukit.workshop.aws/en/getting-started.html
If you are an aspiring developer, creating a simple webpage is a great way to take your first steps into the exciting world of programming. The first skill you should learn is HTML.
HTML is the markup language that forms the backbone of the internet. In this course, you will build a clean, stunning webpage using HTML. As you build your page, we will show you how you can use GitHub Pages to host your website free of charge. Your new HTML skills will form an important foundation in your journey as a new developer.
In this course, you'll learn how to:
- Make a simple HTML website
- Use foundational HTML concepts, like tags, headers, lists, images, and links
- Publish your page to the web using GitHub Pages
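For a taste of what those concepts look like together, a minimal page might look something like this (an illustrative sketch, not part of the course materials):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <title>My First Page</title>
  </head>
  <body>
    <!-- A header, a list with a link, and an image -->
    <h1>Hello, web!</h1>
    <ul>
      <li><a href="https://pages.github.com/">GitHub Pages</a></li>
    </ul>
    <img src="photo.jpg" alt="A placeholder photo">
  </body>
</html>
```

By the end of the course you will understand every tag on this page and be able to publish a page like it for free.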
This course has a dedicated message board on the [GitHub Community]({{ communityBoard }}) website. If you want to discuss this course with GitHub Trainers or other participants, create a post over there. The message board can also be used to troubleshoot any issue you encounter while taking this course.
---
title: "Algorithm - Maximum Depth of Binary Tree"
categories:
- Algorithm
tags:
- Algorithm
- coding
- javascript
- leetcode
- algorithm-solutions
- practice
- DFS (depth-first search)
- recursion
toc: true
toc_sticky: true
comments: true
---
Given the root of a binary tree, return its maximum depth.
A binary tree's maximum depth is the number of nodes along the longest path from the root node down to the farthest leaf node.
## Example 1:
```console
Input: root = [3,9,20,null,null,15,7]
Output: 3
```
## Example 2:
```console
Input: root = [1,null,2]
Output: 2
```
## Example 3:
```console
Input: root = []
Output: 0
```
## Example 4:
```console
Input: root = [0]
Output: 1
```
## Constraints:
The number of nodes in the tree is in the range [0, 10^4].
-100 <= Node.val <= 100
## Solution
Since finding the maximum depth of nodes in a binary tree is a problem that has to be solved with DFS (depth-first search), we need to use recursion.
```javascript
/**
* Definition for a binary tree node.
* function TreeNode(val, left, right) {
* this.val = (val===undefined ? 0 : val)
* this.left = (left===undefined ? null : left)
* this.right = (right===undefined ? null : right)
* }
*/
/**
* @param {TreeNode} root
* @return {number}
*/
var maxDepth = function(root) {
  if (!root) return 0;
  let max = 0;
  (function calcDepth(root, currentCount = 0) {
    ++currentCount;
    if (max < currentCount) max = currentCount;
    if (root.left) calcDepth(root.left, currentCount);
    if (root.right) calcDepth(root.right, currentCount);
  })(root);
  return max;
};
```
> I'm not sure why this works this way, but in plain JavaScript, applying `![]` to an empty array object prints `false`, because the object itself still exists. On LeetCode, however, an empty array input is checked as `true` (treated as an empty tree).
We create a recursive function called `calcDepth` that takes the node to be split in two as its argument. If a node exists, that means a child node is present, so each time we increment the count by 1 to calculate the depth.
There will be a left node and a right node; if the left node's depth is longer, the final count would end up remembering the depth of the shorter right node. So we create a `max` variable to hold the longest depth, ensuring the deepest depth is remembered.
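The same DFS idea can also be written more compactly: a node's depth is 1 plus the larger of its two children's depths. An alternative sketch (not the post's original solution):

```javascript
// Alternative recursive form: depth(node) = 1 + max(depth(left), depth(right)).
function maxDepthConcise(root) {
  if (!root) return 0;
  return 1 + Math.max(maxDepthConcise(root.left), maxDepthConcise(root.right));
}

// The tree from Example 1: [3,9,20,null,null,15,7]
const exampleTree = {
  val: 3,
  left: { val: 9, left: null, right: null },
  right: {
    val: 20,
    left: { val: 15, left: null, right: null },
    right: { val: 7, left: null, right: null },
  },
};

console.log(maxDepthConcise(exampleTree)); // 3
```

Because the recursion returns the depth directly, no outer `max` variable is needed.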
# PaddleRec Contributing Code
> Placeholder
# Introduction
### Compare PredictIt shares, betting odds, 538 models, Economist models, and 538 polling in the 2020 U.S. Presidential election.
A simple Python script that compares, by state:
* The latest [general election polling](https://projects.fivethirtyeight.com/polls/president-general/) used by 538
* 538 [presidential election forecast](https://projects.fivethirtyeight.com/2020-election-forecast/)
* The Economist [presidential election forecast](https://projects.economist.com/us-2020-forecast/president)
* Current PredictIt [share prices](https://www.predictit.org/markets/13/Prez-Election)
* The [implied probabilities](https://help.smarkets.com/hc/en-gb/articles/214058369-How-to-calculate-implied-probability-in-betting) of gambling odds
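As a rough illustration of the odds-to-probability conversion linked above, American moneyline odds convert like this (a hedged sketch with a hypothetical `implied_probability` helper; the actual script's code may differ):

```python
def implied_probability(american_odds: int) -> float:
    """Convert American moneyline odds to an implied win probability.

    Illustrative helper only, not taken from this repository's script.
    """
    if american_odds < 0:
        # Favorite: you risk |odds| to win 100.
        return -american_odds / (-american_odds + 100)
    # Underdog: you risk 100 to win `american_odds`.
    return 100 / (american_odds + 100)

# A -150 favorite implies a 60% chance; a +200 underdog implies about 33.3%.
print(round(implied_probability(-150), 3))  # 0.6
print(round(implied_probability(200), 3))   # 0.333
```

These implied probabilities can then be compared side by side with PredictIt share prices and the model forecasts.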
## To run this script, first install the necessary libraries via pip:
```
pip install requests pandas numpy datetime
```
---
title: Overview of an Oracle disaster recovery scenario in an Azure environment | Microsoft Docs
description: Disaster recovery scenario for an Oracle Database 12c database in an Azure environment
author: dbakevlar
ms.service: virtual-machines
ms.subservice: oracle
ms.collection: linux
ms.topic: article
ms.date: 08/02/2018
ms.author: kegorman
ms.openlocfilehash: 68b5b9dfd205628c9d7c430df4c0230267752e01
ms.sourcegitcommit: b4647f06c0953435af3cb24baaf6d15a5a761a9c
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/02/2021
ms.locfileid: "101669948"
---
# <a name="disaster-recovery-for-an-oracle-database-12c-database-in-an-azure-environment"></a>Disaster recovery for an Oracle Database 12c database in an Azure environment
## <a name="assumptions"></a>Assumptions
- You have an understanding of Azure environments and design considerations with Oracle Data Guard.
## <a name="goals"></a>Goals
- Design a topology and configuration that meet disaster recovery (DR) requirements.
## <a name="scenario-1-primary-and-dr-sites-on-azure"></a>Scenariusz 1. Lokacje podstawowe i DR na platformie Azure
Klient ma bazę danych Oracle skonfigurowaną w lokacji głównej. Lokacja DR znajduje się w innym regionie. Klient korzysta z usługi Oracle Data Guard do szybkiego odzyskiwania między tymi lokacjami. Lokacja główna ma także pomocniczą bazę danych do raportowania i innych celów.
### <a name="topology"></a>Topologia
Poniżej znajduje się podsumowanie konfiguracji platformy Azure:
- Dwie lokacje (lokacja główna i lokacja odzyskiwania po awarii)
- Dwie sieci wirtualne
- Dwie bazy danych Oracle z funkcją ochrony danych (podstawowa i w stanie gotowości)
- Dwie bazy danych Oracle z złotą bramą lub funkcją ochrony danych (tylko lokacja główna)
- Dwie usługi aplikacji, jeden podstawowy i jeden w lokacji odzyskiwania po awarii
- *Zestaw dostępności,* który jest używany dla usługi bazy danych i aplikacji w lokacji głównej
- Jedna serwera przesiadkowego w każdej lokacji, która ogranicza dostęp do sieci prywatnej i zezwala na logowanie tylko przez administratora
- Serwera przesiadkowego, Usługa aplikacji, baza danych i Brama sieci VPN w różnych podsieciach
- SIECIOWEJ grupy zabezpieczeń wymuszane w podsieciach aplikacji i bazy danych

## <a name="scenario-2-primary-site-on-premises-and-dr-site-on-azure"></a>Scenariusz 2: lokacja podstawowa i usługa odzyskiwania po awarii na platformie Azure
Klient ma lokalną konfigurację bazy danych Oracle (lokacja główna). Lokacja odzyskiwania po awarii znajduje się na platformie Azure. Funkcja Oracle Data Guard służy do szybkiego odzyskiwania między tymi lokacjami. Lokacja główna ma także pomocniczą bazę danych do raportowania i innych celów.
Istnieją dwie podejścia do tej konfiguracji.
### <a name="approach-1-direct-connections-between-on-premises-and-azure-requiring-open-tcp-ports-on-the-firewall"></a>Podejście 1: bezpośrednie połączenia między środowiskiem lokalnym i platformą Azure, wymagające otwartych portów TCP zapory
Nie zalecamy bezpośrednich połączeń, ponieważ uwidaczniają one porty TCP na świecie zewnętrznym.
#### <a name="topology"></a>Topologia
Poniżej znajduje się podsumowanie konfiguracji platformy Azure:
- Jedna lokacja DR
- Jedna sieć wirtualna
- Jedna baza danych Oracle z funkcją Data Guard (aktywna)
- Jedna usługa aplikacji w witrynie odzyskiwania po awarii
- Jeden serwera przesiadkowego, który ogranicza dostęp do sieci prywatnej i zezwala na logowanie tylko przez administratora
- Serwera przesiadkowego, Usługa aplikacji, baza danych i Brama sieci VPN w różnych podsieciach
- SIECIOWEJ grupy zabezpieczeń wymuszane w podsieciach aplikacji i bazy danych
- Zasada sieciowej grupy zabezpieczeń/reguła zezwalająca na przychodzący port TCP 1521 (lub port zdefiniowany przez użytkownika)
- Zasada sieciowej grupy zabezpieczeń/reguła ograniczająca dostęp do sieci wirtualnej tylko w lokalnym adresie IP lub w aplikacji.

### <a name="approach-2-site-to-site-vpn"></a>Podejście 2: sieć VPN typu lokacja-lokacja
Sieci VPN typu lokacja-lokacja jest lepszym rozwiązaniem. Aby uzyskać więcej informacji na temat konfigurowania sieci VPN, zobacz [Tworzenie sieci wirtualnej z połączeniem sieci VPN typu lokacja-lokacja przy użyciu interfejsu wiersza polecenia](../../../vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md).
#### <a name="topology"></a>Topologia
Poniżej znajduje się podsumowanie konfiguracji platformy Azure:
- Jedna lokacja DR
- Jedna sieć wirtualna
- Jedna baza danych Oracle z funkcją Data Guard (aktywna)
- Jedna usługa aplikacji w witrynie odzyskiwania po awarii
- Jeden serwera przesiadkowego, który ogranicza dostęp do sieci prywatnej i zezwala na logowanie tylko przez administratora
- Serwera przesiadkowego, Usługa aplikacji, baza danych i Brama sieci VPN znajdują się w różnych podsieciach
- SIECIOWEJ grupy zabezpieczeń wymuszane w podsieciach aplikacji i bazy danych
- Połączenie sieci VPN typu lokacja-lokacja między środowiskiem lokalnym i platformą Azure

## <a name="additional-reading"></a>Materiały uzupełniające
- [Projektowanie i implementowanie bazy danych Oracle na platformie Azure](oracle-design.md)
- [Konfigurowanie środowiska Oracle Data Guard](configure-oracle-dataguard.md)
- [Konfigurowanie firmy Oracle — Złotej Bramy](configure-oracle-golden-gate.md)
- [Kopia zapasowa Oracle i odzyskiwanie](./oracle-overview.md)
## <a name="next-steps"></a>Następne kroki
- [Samouczek: Tworzenie maszyn wirtualnych o wysokiej dostępności](../../linux/create-cli-complete.md)
- [Eksplorowanie przykładów interfejsu wiersza polecenia platformy Azure wdrożenia maszyny wirtualnej](https://github.com/Azure-Samples/azure-cli-samples/tree/master/virtual-machine) | 60.252427 | 322 | 0.810345 | pol_Latn | 0.999906 |
26783cd35296526f02f6a5d96523a5fb9426f44b | 5,200 | md | Markdown | articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md | gliljas/azure-docs.sv-se-1 | 1efdf8ba0ddc3b4fb65903ae928979ac8872d66e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md | gliljas/azure-docs.sv-se-1 | 1efdf8ba0ddc3b4fb65903ae928979ac8872d66e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Translator/custom-translator/how-to-view-system-test-results.md | gliljas/azure-docs.sv-se-1 | 1efdf8ba0ddc3b4fb65903ae928979ac8872d66e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: View system test results and deployment - Custom Translator
titleSuffix: Azure Cognitive Services
description: When your training is complete, review the system tests to analyze your training results. If you're satisfied with the training results, place a deployment request for the trained model.
author: swmachan
manager: nitinme
ms.service: cognitive-services
ms.subservice: translator-text
ms.date: 05/26/2020
ms.author: swmachan
ms.topic: conceptual
ms.openlocfilehash: 3361241bf0a330abc18701f93460208b8804a7dc
ms.sourcegitcommit: fc718cc1078594819e8ed640b6ee4bef39e91f7f
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 05/27/2020
ms.locfileid: "83994270"
---
# <a name="view-system-test-results"></a>Visa testresultat för system
När din utbildning är klar kan du granska system testerna för att analysera dina utbildnings resultat. Om du är nöjd med utbildnings resultatet ska du placera en distributions förfrågan för den tränade modellen.
## <a name="system-test-results-page"></a>Sidan system test resultat
Välj ett projekt, Välj fliken modeller i projektet, leta upp den modell som du vill använda och välj slutligen fliken test.
På fliken test visas:
1. **System testresultat:** Resultatet av test processen i träningarna. Test processen genererar BLEU-poängen.
**Antal meningar:** Hur många parallella meningar som användes i test uppsättningen.
**Bleu Poäng:** BLEU Poäng som genererats för en modell efter att utbildningen har slutförts.
**Status:** Anger om test processen är slutförd eller pågår.

2. Click the system test result to go to the test result details page. This page shows the machine translation of the sentences included in the test data set.
3. The table on the test result details page has two columns, one for each language in the pair. The column for the source language shows the sentence to translate. The column for the target language contains two sentences in each row:
    **Ref:** This sentence is the reference translation of the source sentence, as provided in the test data set.
    **MT:** This sentence is the machine translation of the source sentence, produced by the model created after training completed.

## <a name="download-test"></a>Hämta test
Klicka på länken Hämta översättningar för att ladda ned en zip-fil. Zip-filen innehåller dator översättningar av käll meningar i data uppsättningen test.

The downloaded zip archive contains three files:
1. **Custom.mt.txt:** This file contains machine translations of the source-language sentences into the target language, produced by the model trained on the user's data.
2. **Ref.txt:** This file contains the user-provided translations of the source-language sentences into the target language.
3. **Source.txt:** This file contains the sentences in the source language.

## <a name="deploy-a-model"></a>Distribuera en modell
Så här begär du en distribution:
1. Välj ett projekt, gå till fliken modeller.
2. För en lyckad tränad modell visas knappen "distribuera", om den inte distribueras.

3. Click Deploy.
4. Select **Deployed** for the region(s) where you want your model deployed, and then click Save. You can select **Deployed** in multiple regions.

5. You can view the status of your model in the Status column.
>[!Note]
>Custom Translator supports 10 deployed models within a workspace at any given time.
## <a name="update-deployment-settings"></a>Update deployment settings
To update the deployment settings:
1. Select a project and go to the **Models** tab.
2. For a successfully deployed model, an **Update** button is shown.

3. Select **Update**.
4. Select **Deployed** or **Undeployed** for the region(s) where you want your model deployed or undeployed, and then click **Save**.

>[!Note]
>If you select **Undeployed** in all regions, the model is undeployed from all regions and enters an undeployed state. It is then no longer available for use.
## <a name="next-steps"></a>Next steps
- Start using your deployed custom translation model via [Translator v3](https://docs.microsoft.com/azure/cognitive-services/translator/reference/v3-0-translate?tabs=curl).
- Learn [how to manage settings](how-to-manage-settings.md) to share your workspace and manage subscription keys.
- Learn [how to migrate your workspace and project](how-to-migrate.md) from [Microsoft Translator Hub](https://hub.microsofttranslator.com).
| 48.148148 | 224 | 0.776346 | swe_Latn | 0.999701 |
2678a113f1998c15104ccfda769f4896392bfd37 | 55 | md | Markdown | README.md | NyeGuy/A4G | 598f0ddbf2f1895e746bee8edd923ff42834f7f0 | [
"MIT"
] | null | null | null | README.md | NyeGuy/A4G | 598f0ddbf2f1895e746bee8edd923ff42834f7f0 | [
"MIT"
] | null | null | null | README.md | NyeGuy/A4G | 598f0ddbf2f1895e746bee8edd923ff42834f7f0 | [
"MIT"
] | null | null | null | # A4G
Art and Animation for Games - SCAD Game Pipeline
| 18.333333 | 48 | 0.763636 | kor_Hang | 0.652586 |
2678fd789d6d6cbd14c82c88c42303fa1fded2f8 | 5,336 | md | Markdown | 2018/articles/ComplementProgramming.md | tutlane/Writing | 66e984101e342e2c8923c911224f779861c33dc2 | [
"CC-BY-4.0"
] | 208 | 2018-04-05T17:17:29.000Z | 2022-03-31T17:51:19.000Z | 2018/articles/ComplementProgramming.md | tutlane/Writing | 66e984101e342e2c8923c911224f779861c33dc2 | [
"CC-BY-4.0"
] | 73 | 2017-01-28T15:58:14.000Z | 2018-03-03T16:32:20.000Z | 2018/articles/ComplementProgramming.md | tutlane/Writing | 66e984101e342e2c8923c911224f779861c33dc2 | [
"CC-BY-4.0"
] | 85 | 2018-06-18T15:31:48.000Z | 2022-02-27T13:20:47.000Z | # Complement Programming to any topic you like
[<img src="https://images.unsplash.com/photo-1494253109108-2e30c049369b?ixlib=rb-0.3.5&ixid=eyJhcHBfaWQiOjEyMDd9&s=02261b49dc587eaecb3dfae7ccfbbcaa&auto=format&fit=crop&w=2250&q=80">](https://unsplash.com/photos/5E5N49RWtbA)
Photo by Cody Davis on Unsplash - https://unsplash.com/photos/5E5N49RWtbA
Since I started programming on my own while studying law and working in tax, I thought it was important to see coding not only as a path to already-established programming jobs, but rather as a way to create your own role with it. Here are some thoughts on how to complement programming with other fields of expertise.
(This article was created as brainstorming for a talk I gave)
## Table of Contents
<!-- TOC -->
- [Complement Programming to any topic you like](#complement-programming-to-any-topic-you-like)
  - [Table of Contents](#table-of-contents)
  - [Definitions](#definitions)
  - [Why are we using programming?](#why-are-we-using-programming)
  - [What are some basic principles for law?](#what-are-some-basic-principles-for-law)
  - [Combining principles - an example](#combining-principles---an-example)
  - [Legal Tech available](#legal-tech-available)
<!-- /TOC -->
## Definitions
First of all, it's important to define the term programming as I am going to use it. I refer not only to the mere act of writing code and creating an application, but to using the newest technology (which often involves programming) to improve the status quo. It is basically the idea of digitalization in a more concise form.
## Why are we using programming?
Loops, conditionals, variables and functions are all concepts that help to abstract a problem and solve it in the language of a computer.
Why are we using a computer? Because it is faster. Way faster. Or better, it is as fast as you like. ;) It runs also as long as you like and never gets tired.
So why not use it for other things than browsing the internet or playing computer games?
As a young programmer you tend to only look at things that are already available and try to work with them. But better try to use creativity and create something new.
For example: Law.
## What are some basic principles for law?
Elon Musk has said that he always reasons from first principles rather than by analogy. So what are some first principles of law?
This heavily depends on the type of law in your state. For simplicity, let's assume an established and working democracy. This means that the law is made by the people living in that state. Let's also assume that people, and all processes around the law, are bound by the law itself.
Now a first distinction has to be made: what type of law are we dealing with? This is important because the purpose and arrangement of the law depend on its goals.
To illustrate this, let's compare civil law and public law. Both are meant to regulate people's behavior, of course. Both provide guidelines on how to behave correctly toward other people. However, the main difference is that civil law is concerned with regulating the behavior of people interacting with other people, whereas public law addresses the rights of an individual against the state (which is ultimately the biggest group of people). The legislative text therefore has to take the imbalance of power into account. In civil law it can be fair to treat similar people similarly. In contrast, in public law the individual gains many more rights in a case against the state, to compensate for the imbalance in power.
A great example of a combination with programming is tax law. Not only is tax law a good use case for natural language processing, it also involves actually calculating a tax burden.
## Combining principles - an example
Let's say we want to reduce the tax burden of an individual. How can this be done using knowledge in programming?
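To make that concrete, here is a toy sketch of a programmatic tax calculation. The bracket thresholds and rates are invented for illustration and do not correspond to any real tax schedule:

```python
def progressive_tax(income: float,
                    brackets=((11000, 0.0), (30000, 0.25), (float("inf"), 0.42))) -> float:
    """Tax each slice of income up to a bracket's upper bound at that bracket's rate."""
    tax, lower = 0.0, 0.0
    for upper, rate in brackets:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
            lower = upper
        else:
            break
    return tax

print(progressive_tax(10000))            # 0.0 (below the first taxable bracket)
print(round(progressive_tax(40000), 2))  # 8950.0
```

Once the rules are code, you can search over deductible choices and compare the resulting burdens automatically, which is the kind of leverage a computer adds to legal work.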
## Legal Tech available
First you look at your available market. In my case these are Austrian law firms or international companies that have a subsidiary in Austria. I realized that, apart from document management, new technology is rarely used.
Having a look at the international market of technical solutions, there already exists software for:
- Contract Review (LawGeex)
- Legal Analytics (Brightflag)
- Prediction Software (Lexpredict)
- Improved Search/eDiscovery (Brainspace)
- Legal Research (Ross)
- Contract Due Diligence (Kira)
- Intellectual Property (TrademarkNow)
- Improved Customer Management/eBilling (Smokeball)
- Expertise Automation/Robot Lawyer (Lisa)
After having a look at some of these implementations it is much easier to see possibilities for your own ideas and create your own.
---
Thanks for reading my article! Feel free to leave any feedback!
---
Daniel is a LL.M. student in business law, working as a software engineer and organizer of tech related events in Vienna.
His current personal learning efforts focus on machine learning.
Connect on:
- [LinkedIn](https://www.linkedin.com/in/createdd)
- [Github](https://github.com/DDCreationStudios)
- [Medium](https://medium.com/@ddcreationstudi)
- [Twitter](https://twitter.com/DDCreationStudi)
- [Steemit](https://steemit.com/@createdd)
- [Hashnode](https://hashnode.com/@DDCreationStudio)
<!-- Written by Daniel Deutsch (deudan1010@gmail.com) -->
| 53.36 | 709 | 0.776799 | eng_Latn | 0.999006 |
2679e49036e90ff50cf952a9f2c401b48c53564c | 4,806 | md | Markdown | _posts/http/2018-03-04-cors-session.md | Raoul1996/raoul1996.github.io | 6760b7c2ef88130d66eb024d391b5b3f48de64e8 | [
"MIT"
] | 1 | 2018-03-16T15:55:35.000Z | 2018-03-16T15:55:35.000Z | _posts/http/2018-03-04-cors-session.md | Raoul1996/raoul1996.github.io | 6760b7c2ef88130d66eb024d391b5b3f48de64e8 | [
"MIT"
] | 12 | 2016-12-14T17:52:41.000Z | 2018-05-29T07:16:11.000Z | _posts/http/2018-03-04-cors-session.md | Raoul1996/raoul1996.github.io | 6760b7c2ef88130d66eb024d391b5b3f48de64e8 | [
"MIT"
] | null | null | null | ---
layout: post
title: Implementing a session-based image captcha in a front-end/back-end separated project
category: Http
keyword: Http session captcha
---
I had long heard that Session is a minefield; I recently fell into it while building an image captcha in Node, so here are my notes.
### Environment
- Eggjs 2.3.0+
- Vue 2.5.11+
- [Live voting site](https://votes.raoul1996.cn)
- [Live backend](https://api.raoul1996.cn)
- [Backend repository](https://github.com/Raoul1996/egg-vote)
- [Frontend repository](https://github.com/Raoul1996/vue-vote)
### What is a session?
This really is a well-worn topic. Because HTTP is a stateless protocol, the server needs some mechanism to identify users whenever it wants to keep track of state, and that is where sessions come in.
#### Common use cases
- Shopping carts
- Image captchas
- User tracking
- And so on
### How the server identifies a session
This is where cookies come in, another perennial topic. First, imagine you are the server: how do you recognize a client?
Because HTTP is stateless, the client has to tell the server who it is on every request.
Generally, on a user's first visit the server creates a conversation, called a session. Every session has a unique session ID that identifies it.
After creating the session, the server tells the client, through some mechanism, to record the session ID. On every later visit, the session ID has to be sent back to the server in some way so the server can identify the client.
So the question now is: **how do we send the session ID back to the server?**
Cookies are a perfect fit for this job. When the browser visits a domain, it automatically attaches the cookies for that domain and its parent domains by default, with no extra work on our part.
So we can write the session ID into a cookie; on every request the server then validates the session ID and recognizes the user's login state.
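The server-side lookup can be illustrated language-agnostically. This Python sketch parses a Cookie request header back into a session ID (`EGG_SESS` is used here as the session cookie name; treat the exact name as an assumption of this example):

```python
def parse_cookie_header(header: str) -> dict:
    """Parse a Cookie request header like 'a=1; EGG_SESS=abc' into a dict."""
    pairs = (item.split("=", 1) for item in header.split(";") if "=" in item)
    return {k.strip(): v.strip() for k, v in pairs}

# Server side: look up the session id the browser sent back.
cookies = parse_cookie_header("locale=zh-CN; EGG_SESS=9f86d081884c7d65")
session_id = cookies.get("EGG_SESS")
print(session_id)  # 9f86d081884c7d65
```

With the session ID in hand, the server can fetch the matching session data (for example, the expected captcha value) from its session store.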
#### What if the client disables cookies?
The usual workaround is to append the session ID to the URL for the server to read, but that approach isn't used here, so I won't go into it.
### How the server stores sessions
By default, sessions are stored in the server's memory, and only the session ID is written into the client's cookie, so the client can't touch or tamper with the key data kept in the session. Keeping sessions in memory has its problems, though. For example, how do you share sessions? If you deploy a server cluster, sessions kept in process memory certainly can't be shared.
#### Persisting sessions
Sessions can be stored in files or in various databases. Node projects usually pick Redis. Since I have no persistence requirement for now, I'll skip the details.
Egg supports storing sessions in Redis, but since I'm only using the session to hold the captcha value, Redis would be overkill; plain in-memory storage is fine.
### Carrying cookies across origins: frontend and backend implementation
The backend uses the `egg-cors` module to add the `Access-Control-Allow-Headers`, `Access-Control-Allow-Methods`, `Access-Control-Allow-Origin`, and `Access-Control-Allow-Credentials: true` entries to the response headers.
The frontend uses `axios` for the cross-origin requests.
#### Frontend implementation
[Frontend axios config file](https://github.com/Raoul1996/vue-vote/blob/master/src/service/axios.js)
The key configuration is a single line:
```js
// axios.js
instance.defaults.withCredentials = true
```
If you haven't defined an axios `instance`, set the option directly on `axios`.
#### Frontend configuration explained
We need cross-origin requests to carry cookies so the server can identify the session ID written in the cookie. So the question is:
**How does a cross-origin request carry cookies?**
[`XMLHttpRequest.withCredentials`](https://developer.mozilla.org/zh-CN/docs/Web/API/XMLHttpRequest/withCredentials) is a `Boolean` that defaults to false. It indicates whether credentials such as cookies, authorization headers, or TLS client certificates should be used when creating a cross-site Access-Control request. **Setting withCredentials has no effect on same-site requests.**
**It is also used as a flag for whether cookies in a response are to be ignored; the default is false.**
What does that mean?
If `XMLHttpRequest.withCredentials` isn't set to `true` before sending a cross-origin XMLHttpRequest, the browser won't store cookies for the other domain: no matter which `Access-Control` headers the server sets, cookies from the other domain can't be saved.
#### Backend implementation
[Backend CORS configuration](https://github.com/Raoul1996/egg-vote/blob/master/config/config.default.js#L86), [backend production domain whitelist configuration](https://github.com/Raoul1996/egg-vote/blob/master/config/config.prod.js#L15)
There are two key pieces of configuration, explained below.
```js
// config.default.js
module.exports = app => {
exports.cors = {
allowMethods: 'GET,HEAD,PUT,POST,DELETE,PATCH,OPTIONS',
credentials: true
}
}
```
```js
// config.prod.js
module.exports = app => {
const domainWhiteList = []
const portList = [8080, 7001]
portList.forEach(port => {
domainWhiteList.push(`http://localhost:${port}`)
domainWhiteList.push(`http://127.0.0.1:${port}`)
})
domainWhiteList.push('https://votes.raoul1996.cn')
domainWhiteList.push('http://egg.raoul1996.cn')
exports.security = {domainWhiteList}
}
```
#### Backend configuration explained
The configuration in [`config.default.js`](https://github.com/Raoul1996/egg-vote/blob/master/config/config.default.js#L86) adds `Access-Control-Allow-Credentials: true` to responses, which tells the browser that the server allows cross-origin requests to include cookies.
[`config.prod.js`](https://github.com/Raoul1996/egg-vote/blob/master/config/config.prod.js#L15) configures the whitelist of domains allowed to make cross-origin requests, which ties into the [internals of the `egg-cors` module](https://github.com/eggjs/egg-cors/blob/master/app.js):
```js
// egg-cors internals
'use strict';
module.exports = app => {
// put before other core middlewares
app.config.coreMiddlewares.unshift('cors');
// if security plugin enabled, and origin config is not provided, will only allow safe domains support CORS.
app.config.cors.origin = app.config.cors.origin || function corsOrigin(ctx) {
const origin = ctx.get('origin');
if (!ctx.isSafeDomain || ctx.isSafeDomain(origin)) {
return origin;
}
return '';
};
};
```
Because once cookies are allowed, the browser no longer accepts `*` as the value of `Access-Control-Allow-Origin`, the whitelist handling above is required.
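The whitelist-echo behavior can be mimicked in a few lines. This is a sketch of the idea only, not the actual `egg-cors` code, and the whitelist entries are just examples:

```python
WHITELIST = {"https://votes.raoul1996.cn", "http://localhost:8080"}

def cors_headers(request_origin: str) -> dict:
    """With credentials enabled, the server must echo one specific allowed
    origin back; a wildcard '*' would be rejected by the browser."""
    if request_origin in WHITELIST:
        return {
            "Access-Control-Allow-Origin": request_origin,  # never "*"
            "Access-Control-Allow-Credentials": "true",
        }
    return {}  # non-whitelisted origins get no CORS headers at all

print(cors_headers("https://votes.raoul1996.cn")["Access-Control-Allow-Origin"])
# https://votes.raoul1996.cn
```

Echoing the request's own `Origin` value, instead of `*`, is what lets credentialed cross-origin requests through while still limiting which sites may make them.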
### Implementing the image captcha
For the image captcha in Egg I've packaged a small [plugin](https://www.npmjs.com/package/egg-captcha) based on [`ccap`](https://www.npmjs.com/package/ccap); see [here](https://www.npmjs.com/package/egg-captcha) for details.
### Afterword
There is still a lot I haven't dug into, such as where the session mechanism came from and the details of session persistence.
### References
1. [Solving cross-origin session problems with ajax in a Vue 2 front-end/back-end separated project](https://segmentfault.com/a/1190000009208644)
2. [Cross-Origin Resource Sharing (CORS) in detail](http://www.ruanyifeng.com/blog/2016/04/cors.html)
3. [XMLHttpRequest.withCredentials](https://developer.mozilla.org/zh-CN/docs/Web/API/XMLHttpRequest/withCredentials)
| 29.666667 | 290 | 0.752185 | yue_Hant | 0.863887 |
267a0c29f9ded5914d3c36fb1dd1c785227a3aba | 1,069 | md | Markdown | treebanks/pl_pdb/pl_sz-dep-csubj.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | 204 | 2015-01-20T16:36:39.000Z | 2022-03-28T00:49:51.000Z | treebanks/pl_pdb/pl_sz-dep-csubj.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | 654 | 2015-01-02T17:06:29.000Z | 2022-03-31T18:23:34.000Z | treebanks/pl_pdb/pl_sz-dep-csubj.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | 200 | 2015-01-16T22:07:02.000Z | 2022-03-25T11:35:28.000Z | ---
layout: base
title: 'Statistics of csubj in UD_Polish-SZ'
udver: '2'
---
## Treebank Statistics: UD_Polish-SZ: Relations: `csubj`
This relation is universal.
4 nodes (0%) are attached to their parents as `csubj`.
3 instances of `csubj` (75%) are right-to-left (child precedes parent).
Average distance between parent and child is 4.25.
The following pair of parts of speech is connected with `csubj`: <tt><a href="pl_sz-pos-NOUN.html">NOUN</a></tt>-<tt><a href="pl_sz-pos-VERB.html">VERB</a></tt> (4; 100% instances).
~~~ conllu
# visual-style 1 bgColor:blue
# visual-style 1 fgColor:white
# visual-style 4 bgColor:blue
# visual-style 4 fgColor:white
# visual-style 4 1 csubj color:blue
1 To to VERB pred _ 4 csubj _ _
2 była być AUX praet:sg:f:imperf Aspect=Imp|Gender=Fem|Number=Sing|Tense=Past|VerbForm=Part|Voice=Act 4 cop _ _
3 istna istny ADJ adj:sg:nom:f:pos Case=Nom|Degree=Pos|Gender=Fem|Number=Sing 4 amod _ _
4 makabra makabra NOUN subst:sg:nom:f Case=Nom|Gender=Fem|Number=Sing 0 root _ SpaceAfter=No
5 ! ! PUNCT interp _ 4 punct _ _
~~~
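The statistics above can be re-derived from the sample itself. This is a minimal sketch with the columns trimmed to ID, form, lemma, UPOS, head, and relation; real tooling should use a full CoNLL-U parser:

```python
sample = """\
1 To      to      VERB  4 csubj
2 była    być     AUX   4 cop
3 istna   istny   ADJ   4 amod
4 makabra makabra NOUN  0 root
5 !       !       PUNCT 4 punct"""

rows = [line.split() for line in sample.splitlines()]
# Collect (token id, head id) pairs for the csubj relation.
deps = [(int(i), int(head)) for i, _, _, _, head, rel in rows if rel == "csubj"]
distances = [abs(i - head) for i, head in deps]
right_to_left = [i < head for i, head in deps]  # child precedes parent
print(deps, distances, right_to_left)  # [(1, 4)] [3] [True]
```

Running this over a whole treebank instead of one sentence yields the node counts, direction percentages, and average distance reported on this page.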
| 31.441176 | 185 | 0.725912 | eng_Latn | 0.555153 |
267e72ec35316833aa1246c16f1b5651cf150b1b | 6,296 | md | Markdown | docs/framework/windows-workflow-foundation/store-extensibility.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/windows-workflow-foundation/store-extensibility.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/windows-workflow-foundation/store-extensibility.md | adamsitnik/docs.pl-pl | c83da3ae45af087f6611635c348088ba35234d49 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Store extensibility
ms.date: 03/30/2017
ms.assetid: 7c3f4a46-4bac-4138-ae6a-a7c7ee0d28f5
ms.openlocfilehash: 46c1ea40925a5c79180171da9a705d7e6b7c8b89
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 04/23/2019
ms.locfileid: "61641609"
---
# <a name="store-extensibility"></a>Rozszerzalność magazynu
<xref:System.Activities.DurableInstancing.SqlWorkflowInstanceStore> Umożliwia użytkownikom promować właściwości niestandardowych, specyficzne dla aplikacji, które mogą być używane do wykonywania zapytań dla wystąpień w bazie danych trwałości. Działanie promocji właściwości powoduje, że wartości, które mają być dostępne w widoku specjalne w bazie danych. Te właściwości o podwyższonym poziomie (właściwości, które mogą być używane w kwerendach użytkownika) może być typów prostych, takich jak Int64, Guid, String i daty/godziny lub serializacji typu binary (byte[]).
<xref:System.Activities.DurableInstancing.SqlWorkflowInstanceStore> Klasa ma <xref:System.Activities.DurableInstancing.SqlWorkflowInstanceStore.Promote%2A> metodę, która służy do promowania właściwość jako właściwość, które mogą być używane w zapytaniach. Poniższy przykład jest przykładem rozszerzalność magazynu end-to-end.
1. W tym przykładowym scenariuszu dokument przetwarzania aplikacji (DP) zawiera przepływy pracy, z których każdy korzysta z działań niestandardowych do przetwarzania dokumentu. Te przepływy pracy mają zestaw zmienne stanu, które muszą być widoczne dla użytkownika końcowego. Aby to osiągnąć, aplikacja DP udostępnia rozszerzenie typu <xref:System.Activities.Persistence.PersistenceParticipant>, który jest używany przez działania umożliwiają określanie wartości zmienne stanu.
```csharp
class DocumentStatusExtension : PersistenceParticipant
{
public string DocumentId;
public string ApprovalStatus;
public string UserName;
public DateTime LastUpdateTime;
}
```
2. The new extension is then added to the host.
```csharp
static Activity workflow = CreateWorkflow();
WorkflowApplication application = new WorkflowApplication(workflow);
DocumentStatusExtension documentStatusExtension = new DocumentStatusExtension ();
application.Extensions.Add(documentStatusExtension);
```
    For more information about adding a custom persistence participant, see the [Persistence Participants](persistence-participants.md) sample.
3. The custom activities in the DP application populate the various state fields in their **Execute** methods.
```csharp
public override void Execute(CodeActivityContext context)
{
// ...
    context.GetExtension<DocumentStatusExtension>().DocumentId = Guid.NewGuid().ToString();
context.GetExtension<DocumentStatusExtension>().UserName = "John Smith";
context.GetExtension<DocumentStatusExtension>().ApprovalStatus = "Approved";
    context.GetExtension<DocumentStatusExtension>().LastUpdateTime = DateTime.Now;
// ...
}
```
4. When the workflow instance reaches a persistence point, the **CollectValues** method of the **DocumentStatusExtension** persistence participant saves these properties into the persisted data collection.
```csharp
class DocumentStatusExtension : PersistenceParticipant
{
const XNamespace xNS = XNamespace.Get("http://contoso.com/DocumentStatus");
protected override void CollectValues(out IDictionary<XName, object> readWriteValues, out IDictionary<XName, object> writeOnlyValues)
{
readWriteValues = new Dictionary<XName, object>();
readWriteValues.Add(xNS.GetName("UserName"), this.UserName);
readWriteValues.Add(xNS.GetName("ApprovalStatus"), this.ApprovalStatus);
readWriteValues.Add(xNS.GetName("DocumentId"), this.DocumentId);
readWriteValues.Add(xNS.GetName("LastModifiedTime"), this.LastUpdateTime);
writeOnlyValues = null;
}
// ...
}
```
    > [!NOTE]
    > These properties are passed to the **SqlWorkflowInstanceStore** by the persistence framework through the **SaveWorkflowCommand.InstanceData** collection.
5. The DP application initializes the SQL workflow instance store and invokes the **Promote** method to promote this data.
```csharp
SqlWorkflowInstanceStore store = new SqlWorkflowInstanceStore(connectionString);
List<XName> variantProperties = new List<XName>()
{
xNS.GetName("UserName"),
xNS.GetName("ApprovalStatus"),
xNS.GetName("DocumentId"),
xNS.GetName("LastModifiedTime")
};
store.Promote("DocumentStatus", variantProperties, null);
```
    Based on this promotion information, the **SqlWorkflowInstanceStore** surfaces the property data in columns of the [InstancePromotedProperties](#InstancePromotedProperties) view.
6. To query a subset of data from the promotion, the DP application adds a customized view on top of the promoted-properties view.
```sql
create view [dbo].[DocumentStatus] with schemabinding
as
select P.[InstanceId] as [InstanceId],
P.Value1 as [UserName],
P.Value2 as [ApprovalStatus],
P.Value3 as [DocumentId],
P.Value4 as [LastUpdatedTime]
from [System.Activities.DurableInstancing].[InstancePromotedProperties] as P
where P.PromotionName = N'DocumentStatus'
go
```
## <a name="InstancePromotedProperties"></a> Widok [System.Activities.DurableInstancing.InstancePromotedProperties]
|Nazwa kolumny|Typ kolumny|Opis|
|-----------------|-----------------|-----------------|
|InstanceId|Identyfikator GUID|Wystąpienie przepływu pracy, który należy do tej promocji.|
|PromotionName|nvarchar(400)|Nazwa sam podwyższenia poziomu.|
|Wartość1, wartość2, Wartość3,..., Value32|sql_variant|Wartość o podwyższonym poziomie samej właściwości. Większość typów pierwotnych danych SQL, z wyjątkiem pliku binarnego obiekty BLOB i ponad 8000 bajtów długości ciągów mieści się w sql_variant.|
|Value35 Value33, Value34,..., Value64|Varbinary(max)|Wartość właściwości o podwyższonym poziomie, które są jawnie zadeklarowane jako varbinary(max).|
| 52.466667 | 567 | 0.752859 | pol_Latn | 0.991703 |
267eca8ebcf2ec075aced8aeb348a2603305c7cc | 4,544 | md | Markdown | includes/virtual-wan-tutorial-s2s-site-include.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/virtual-wan-tutorial-s2s-site-include.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/virtual-wan-tutorial-s2s-site-include.md | sonquer/azure-docs.pl-pl | d8159cf8e870e807bd64e58188d281461b291ea8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: include file
description: include file
services: virtual-wan
author: cherylmc
ms.service: virtual-wan
ms.topic: include
ms.date: 11/04/2019
ms.author: cherylmc
ms.custom: include file
ms.openlocfilehash: 4bcee1097010bb8746b11185a470ca2584485c3f
ms.sourcegitcommit: c22327552d62f88aeaa321189f9b9a631525027c
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 11/04/2019
ms.locfileid: "73488844"
---
1. On the portal page for your virtual WAN, in the **Connectivity** section, select **VPN sites** to open the VPN sites page.
2. On the **VPN sites** page, click **+Create site**.

3. On the **Create VPN site** page, on the **Basics** tab, fill in the following fields:
   * **Region** - Previously called location. This is the location in which you want to create the site resource.
   * **Name** - The name by which you want to refer to your on-premises site.
   * **Device vendor** - The name of the VPN device vendor (for example: Citrix, Cisco, Barracuda). This helps the Azure team better understand your environment in order to add additional optimization possibilities, or to help you troubleshoot.
   * Selecting Enable **Border Gateway Protocol** means that all connections from this site will have BGP enabled. You will eventually configure the BGP information for each link from the VPN site in the Links section. Configuring BGP on a virtual WAN is equivalent to configuring BGP on an Azure virtual network gateway VPN. Your on-premises BGP peer address must not be the same as the public IP address of your VPN device or the address space of the VPN site's virtual network. Use a different IP address on the VPN device for the BGP peer IP; it can be an address assigned to the loopback interface on the device, but it cannot be an APIPA (169.254.x.x) address. Specify this address in the corresponding VPN site that represents the location. For BGP prerequisites, see [About BGP with Azure VPN Gateway](../articles/vpn-gateway/vpn-gateway-bgp-overview.md). You can always edit a VPN connection to update its BGP parameters (the peering IP on the link and the AS number) after the site's BGP setting is enabled.
   * **Private address space** - The IP address space located at your on-premises site. Traffic destined for this address space is routed to the on-premises site. This is required when BGP is not enabled for the site.
   * **Hubs** - The hub(s) you want to connect this site to. The site can only be connected to hubs that have a VPN gateway. If you don't see a hub, first create a VPN gateway in that hub.
4. Wybierz **linki** , aby dodać informacje o fizycznych linkach w gałęzi. Jeśli masz urządzenie z wirtualnym partnerem sieci WAN CPE, skontaktuj się z nimi, aby sprawdzić, czy te informacje są wymieniane z platformą Azure w ramach przekazywania informacji o gałęziach skonfigurowanych na podstawie ich systemów.

* **Nazwa łącza** — nazwa, która ma zostać połączona z łączem fizycznym w witrynie sieci VPN. Przykład: Mylink1.
* **Nazwa dostawcy** — nazwa fizycznego linku w witrynie sieci VPN. Przykład: ATT, Verizon.
* **Szybkość** — szybkość urządzenia sieci VPN w lokalizacji gałęzi. Przykład: 50, co oznacza, że 50 MB/s to szybkość urządzenia sieci VPN w lokacji oddziału.
* **Adres IP** — publiczny adres IP urządzenia Premium za pomocą tego linku. Opcjonalnie możesz podać prywatny adres IP lokalnego urządzenia sieci VPN znajdującego się za ExpressRoute.
5. Możesz użyć pola wyboru, aby usunąć lub dodać dodatkowe linki. Obsługiwane są cztery linki na witrynę sieci VPN. Na przykład jeśli masz czterech USŁUGODAWCów internetowych (usługodawców internetowych) w lokalizacji gałęzi, możesz utworzyć cztery linki. jeden dla każdego usługodawcy internetowego i podaj informacje dla każdego łącza.
6. Po zakończeniu wypełniania pól wybierz pozycję **Przegląd + Utwórz** , aby sprawdzić i utworzyć lokację.
7. Wyświetl stan na stronie witryny sieci VPN. Lokacja przejdzie do **połączenia** , ponieważ lokacja nie została jeszcze podłączona do centrum. | 113.6 | 1,248 | 0.797315 | pol_Latn | 0.999982 |
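For reference, a VPN site can typically also be created from the command line with the Azure CLI instead of the portal. The sketch below is illustrative only: the resource-group, WAN, site, and gateway names are placeholders, and you should check `az network vpn-site create --help` for the exact flags available in your CLI version.

```shell
# Create a VPN site with basic link information (all names are placeholder values)
az network vpn-site create \
    --resource-group MyRG \
    --name MyVpnSite \
    --virtual-wan MyVirtualWan \
    --ip-address 203.0.113.10 \
    --address-prefixes 10.1.0.0/24 \
    --device-vendor Cisco

# Connect the site to a hub's VPN gateway (the gateway must already exist)
az network vpn-gateway connection create \
    --resource-group MyRG \
    --gateway-name MyHubVpnGateway \
    --name MySiteConnection \
    --remote-vpn-site MyVpnSite
```

These commands require an authenticated Azure CLI session (`az login`) and an existing virtual WAN and hub gateway, so they are a configuration sketch rather than a runnable snippet.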
268058bfb0f4d8e0283a4015d1eae307138ea76b | 342 | md | Markdown | src/base/README.md | getsedona/sedona-components | d16aae7b20c8ef6727d789316652b8d0d25ffd87 | ["MIT"] | 1 | 2019-12-03T12:51:05.000Z | 2019-12-03T12:51:05.000Z | src/base/README.md | getsedona/sedona-components | d16aae7b20c8ef6727d789316652b8d0d25ffd87 | ["MIT"] | 2 | 2019-09-08T11:32:32.000Z | 2020-05-15T08:59:49.000Z | src/base/README.md | getsedona/sedona-components | d16aae7b20c8ef6727d789316652b8d0d25ffd87 | ["MIT"] | null | null | null |
# base
Basic markup tags.
[Markup](https://github.com/getsedona/sedona-components/blob/master/src/base/examples.html) · [Example](https://getsedona.github.io/sedona-components/base.html)
## Usage
```js
// index.js
import "sedona-components/src/base";
```
```less
// index.less
@import "~sedona-components/src/base/index";
```
| 19 | 161 | 0.722222 | zul_Latn | 0.08823 |
26815e65202d4280c13a5c0fa0e6fd596e3a2ba6 | 6,514 | md | Markdown | source/02_language_guide/20_Extensions.md | slidoooor/the-swift-programming-language-in-chinese | c173d544cd45154f385da5dd6dc970e2fa3e473e | ["CC-BY-4.0"] | 1 | 2020-02-10T02:56:36.000Z | 2020-02-10T02:56:36.000Z | source/02_language_guide/20_Extensions.md | slidoooor/the-swift-programming-language-in-chinese | c173d544cd45154f385da5dd6dc970e2fa3e473e | ["CC-BY-4.0"] | null | null | null | source/02_language_guide/20_Extensions.md | slidoooor/the-swift-programming-language-in-chinese | c173d544cd45154f385da5dd6dc970e2fa3e473e | ["CC-BY-4.0"] | null | null | null |
# Extensions
*Extensions* add new functionality to an existing class, structure, enumeration, or protocol type. This includes the ability to extend types for which you do not have access to the original source code (known as *retroactive modeling*). Extensions are similar to categories in Objective-C. (Unlike Objective-C categories, Swift extensions do not have names.)
Extensions in Swift can:
- Add computed instance properties and computed type properties
- Define instance methods and type methods
- Provide new initializers
- Define subscripts
- Define and use new nested types
- Make an existing type conform to a protocol
In Swift, you can even extend a protocol to provide implementations of its requirements, or add additional functionality that conforming types can take advantage of. For more details, see [Protocol Extensions](https://docs.swift.org/swift-book/LanguageGuide/Protocols.html#ID521).
> NOTE
>
> Extensions can add new functionality to a type, but they cannot override existing functionality.
## Extension Syntax {#extension-syntax}
Declare extensions with the `extension` keyword:
```swift
extension SomeType {
    // new functionality to add to SomeType goes here
}
```
An extension can extend an existing type to make it adopt one or more protocols. To add protocol conformance, you write the protocol names the same way as you would for a class or structure:
```swift
extension SomeType: SomeProtocol, AnotherProtocol {
    // implementation of protocol requirements goes here
}
```
Adding protocol conformance in this way is described in [Adding Protocol Conformance with an Extension](https://docs.swift.org/swift-book/LanguageGuide/Protocols.html#ID277).
An extension can be used to extend an existing generic type, as described in [Extending a Generic Type](https://docs.swift.org/swift-book/LanguageGuide/Generics.html#ID185). You can also extend a generic type to conditionally add functionality, as described in [Extensions with a Generic Where Clause](https://docs.swift.org/swift-book/LanguageGuide/Generics.html#ID553).
> NOTE
>
> If you define an extension to add new functionality to an existing type, the new functionality will be available on all existing instances of that type, even if they were created before the extension was defined.
## Computed Properties {#computed-properties}
Extensions can add computed instance properties and computed type properties to existing types. This example adds five computed instance properties to Swift's built-in `Double` type, to provide basic support for working with distance units:
```swift
extension Double {
    var km: Double { return self * 1_000.0 }
    var m: Double { return self }
    var cm: Double { return self / 100.0 }
    var mm: Double { return self / 1_000.0 }
    var ft: Double { return self / 3.28084 }
}
let oneInch = 25.4.mm
print("One inch is \(oneInch) meters")
// Prints "One inch is 0.0254 meters"
let threeFeet = 3.ft
print("Three feet is \(threeFeet) meters")
// Prints "Three feet is 0.914399970739201 meters"
```
These computed properties express that a `Double` value should be considered as a certain unit of length. Although they are implemented as computed properties, the names of these properties can be appended to a floating-point literal value with dot syntax, as a way of using that literal value to perform distance conversions.
In this example, a `Double` value of `1.0` is considered to represent "one meter". This is why the `m` computed property returns `self`: the expression `1.m` is considered to calculate a `Double` value of `1.0`.
Other units require some conversion. One kilometer is 1,000 meters, so the `km` computed property multiplies the value by `1_000.00` to convert from kilometers into meters. Similarly, there are 3.28084 feet in a meter, so the `ft` computed property divides the underlying `Double` value by `3.28084`, to convert from feet into meters.
These properties are read-only computed properties, so for brevity they are expressed without the `get` keyword. They return a `Double` value, and can be used within mathematical calculations wherever a `Double` is accepted:
```swift
let aMarathon = 42.km + 195.m
print("A marathon is \(aMarathon) meters long")
// Prints "A marathon is 42195.0 meters long"
```
> NOTE
>
> Extensions can add new computed properties, but they cannot add stored properties, or add property observers to existing properties.
## Initializers {#initializers}
Extensions can add new initializers to existing types. This enables you to extend other types to accept your own custom types as initializer parameters, or to provide additional initialization options beyond those included as part of the type's original implementation.
Extensions can add new convenience initializers to a class, but they cannot add new designated initializers or deinitializers to a class. Designated initializers and deinitializers must always be provided by the original class implementation.
If you use an extension to add an initializer to a value type that provides default values for all of its stored properties and doesn't define any custom initializers, you can call the default initializer and memberwise initializer for that value type from within your extension's initializer. This wouldn't be the case if you had written the initializer as part of the value type's original implementation, as described in [Initializer Delegation for Value Types](https://docs.swift.org/swift-book/LanguageGuide/Initialization.html#ID215).
If you use an extension to add an initializer to a structure that was declared in another module, the new initializer can't access `self` until it calls an initializer from the defining module.
The example below defines a custom `Rect` structure to represent a geometric rectangle. It also defines two supporting structures called `Size` and `Point`, both of which provide default values of `0.0` for all of their properties:
```swift
struct Size {
    var width = 0.0, height = 0.0
}
struct Point {
    var x = 0.0, y = 0.0
}
struct Rect {
    var origin = Point()
    var size = Size()
}
```
Because the `Rect` structure provides default values for all of its properties, it receives a default initializer and a memberwise initializer automatically, as described in [Default Initializers](https://docs.swift.org/swift-book/LanguageGuide/Initialization.html#ID213). These initializers can be used to create new `Rect` instances:
```swift
let defaultRect = Rect()
let memberwiseRect = Rect(origin: Point(x: 2.0, y: 2.0),
   size: Size(width: 5.0, height: 5.0))
```
You can extend the `Rect` structure to provide an additional initializer that takes a specific center point and size:
```swift
extension Rect {
    init(center: Point, size: Size) {
        let originX = center.x - (size.width / 2)
        let originY = center.y - (size.height / 2)
        self.init(origin: Point(x: originX, y: originY), size: size)
    }
}
```
This new initializer starts by calculating an appropriate origin point based on the provided `center` point and `size` value. The initializer then calls the structure's automatic memberwise initializer `init(origin:size:)`, which stores the new origin and size values in the appropriate properties:
```swift
let centerRect = Rect(center: Point(x: 4.0, y: 4.0),
    size: Size(width: 3.0, height: 3.0))
// centerRect's origin is (2.5, 2.5) and its size is (3.0, 3.0)
```
> NOTE
>
> If you provide a new initializer with an extension, you are still responsible for making sure that each instance is fully initialized once the initializer completes.
## Methods {#methods}
Extensions can add new instance methods and type methods to existing types. The following example adds a new instance method called `repetitions` to the `Int` type:
```swift
extension Int {
    func repetitions(task: () -> Void) {
        for _ in 0..<self {
            task()
        }
    }
}
```
The `repetitions(task:)` method takes a single argument of type `() -> Void`, which indicates a function that has no parameters and returns no value.
After defining this extension, you can call the `repetitions(task:)` method on any integer to perform a task that many times:
```swift
3.repetitions {
    print("Hello!")
}
// Hello!
// Hello!
// Hello!
```
### Mutating Instance Methods {#mutating-instance-methods}
Instance methods added with an extension can also modify (or *mutate*) the instance itself. Structure and enumeration methods that modify `self` or its properties must mark the instance method as `mutating`, just like mutating methods from an original implementation.
The example below adds a new mutating method called `square` to Swift's `Int` type, which squares the original value:
```swift
extension Int {
    mutating func square() {
        self = self * self
    }
}
var someInt = 3
someInt.square()
// someInt is now 9
```
## Subscripts {#subscripts}
Extensions can add new subscripts to an existing type. This example adds an integer subscript to Swift's built-in `Int` type. The subscript `[n]` returns the decimal digit `n` places in from the right of the number:
- `123456789[0]` returns `9`
- `123456789[1]` returns `8`
…and so on:
```swift
extension Int {
    subscript(digitIndex: Int) -> Int {
        var decimalBase = 1
        for _ in 0..<digitIndex {
            decimalBase *= 10
        }
        return (self / decimalBase) % 10
    }
}
746381295[0]
// returns 5
746381295[1]
// returns 9
746381295[2]
// returns 2
746381295[8]
// returns 7
```
If the `Int` value doesn't have enough digits for the requested index, the subscript implementation returns `0`, as if the number had been padded with zeros to the left:
```swift
746381295[9]
// returns 0, as if you had requested:
0746381295[9]
```
## Nested Types {#nested-types}
Extensions can add new nested types to existing classes, structures, and enumerations:
```swift
extension Int {
    enum Kind {
        case negative, zero, positive
    }
    var kind: Kind {
        switch self {
        case 0:
            return .zero
        case let x where x > 0:
            return .positive
        default:
            return .negative
        }
    }
}
```
This example adds a new nested enumeration to `Int`. This enumeration, called `Kind`, expresses the kind of number that a particular integer represents. Specifically, it expresses whether the number is negative, zero, or positive.
This example also adds a new computed instance property to `Int`, called `kind`, which returns the appropriate `Kind` enumeration case for that integer.
The nested enumeration can now be used with any `Int` value:
```swift
func printIntegerKinds(_ numbers: [Int]) {
    for number in numbers {
        switch number.kind {
        case .negative:
            print("- ", terminator: "")
        case .zero:
            print("0 ", terminator: "")
        case .positive:
            print("+ ", terminator: "")
        }
    }
    print("")
}
printIntegerKinds([3, 19, -27, 0, -6, 0, 7])
// Prints "+ + - 0 - 0 + "
```
The `printIntegerKinds(_:)` function takes an input array of `Int` values and iterates over those values in turn. For each integer in the array, the function considers the `kind` computed property for that integer, and prints an appropriate description.
> NOTE
>
> `number.kind` is already known to be of type `Int.Kind`. Because of this, all of the `Int.Kind` case values can be written in shorthand form inside the `switch` statement, such as `.negative` rather than `Int.Kind.negative`.
| 23.948529 | 225 | 0.678846 | yue_Hant | 0.832568 |
2681957aa26ed61641dc5c80a87ba8297637ca9a | 1,430 | md | Markdown | workshop/exercises/basic-state-management/README.md | OldMetalmind/FlutterWorkshop | 4c9f2338a4253d6f286cf228907119a60ec0673c | ["MIT"] | 3 | 2019-09-21T08:47:10.000Z | 2020-04-01T12:28:04.000Z | workshop/exercises/basic-state-management/README.md | FlutterPortugal/FlutterWorkshop | ee1e51bb2a40ad5420d2d4db1c8208084cbc1e6b | ["MIT"] | null | null | null | workshop/exercises/basic-state-management/README.md | FlutterPortugal/FlutterWorkshop | ee1e51bb2a40ad5420d2d4db1c8208084cbc1e6b | ["MIT"] | 6 | 2019-06-01T11:57:25.000Z | 2020-10-24T08:35:46.000Z |
---
prev: /exercises/networking/
---
# Basic State Management
Flutter is [declarative](https://en.wikipedia.org/wiki/Declarative_programming): when you want to change the UI, you don't change it by setting a value on the widget, like `TextWidget.setText("Hello")`, which is common in [imperative](https://en.wikipedia.org/wiki/Imperative_programming) languages. Instead, you change the data, and the UI updates accordingly.
State is the name we give to the current status of the data. In Flutter there are two kinds of state: *ephemeral* state, which is temporary and is used in `StatefulWidget`s, and another, more common kind: *app state*.
When to use one or the other?

[source](https://flutter.dev/docs/development/data-and-backend/state-mgmt/ephemeral-vs-app)
## Setup
- Replace the contents of your file `my_app/lib/main.dart` with the following:
<<< @/workshop/solutions/exercises/basic-state-management/main.dart
## Tasks
1. After copying the code, run the app. Try it out.
2. This one is more about experimenting with the code mentioned above. Notice how most of the code comes from the routing exercise; however, it also has an `InheritedWidget`. What is it doing? How is it called?
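If you want a quick way to try the exercise after pasting in the new `main.dart`, the usual Flutter CLI loop looks like the sketch below. It assumes a device or emulator is already available and that `my_app` is the project folder name used in the Setup section above:

```shell
cd my_app
flutter pub get   # fetch the dependencies declared in pubspec.yaml
flutter run       # build and launch on the connected device/emulator
# press "r" in the terminal for hot reload after editing main.dart
```

These commands require the Flutter SDK and a running device, so treat them as a reference sequence rather than a runnable snippet.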
## Project
Create an app with the following.
- Create an app with 3 Screens
- List of cat facts;
- Detail of 1 cat fact, after clicking an item of the list;
- Screen with the favorites cat facts;
| 42.058824 | 348 | 0.760839 | eng_Latn | 0.996735 |
26819d606cd16e7a551b2941b9808edcb5ecaa99 | 2,735 | md | Markdown | README.md | KrystynaMil/javascript-quiz | 91c220f90224f721af8c1c54cf5c86aed12b11c7 | [
"MIT"
] | null | null | null | README.md | KrystynaMil/javascript-quiz | 91c220f90224f721af8c1c54cf5c86aed12b11c7 | [
"MIT"
] | 14 | 2020-12-31T00:49:56.000Z | 2020-12-31T01:09:00.000Z | README.md | KrystynaMil/javascript-quiz | 91c220f90224f721af8c1c54cf5c86aed12b11c7 | [
"MIT"
] | 3 | 2020-12-20T23:53:04.000Z | 2021-07-18T10:09:24.000Z | > ...
## Getting Started
This repository comes with some nice extras like testing, documentation and CI, but in it's heart it's just an HTML/CSS/JS website boilerplate.
### Development
To run this project locally you will need to open `index.html` in your browser using a local server. LiveServer, `http-server`, `study-lenses`, or any other local static server will work.
### Deployment
Push your changes, turn on GitHub pages, that's all!
When your project is deployed to GitHub pages there will be buttons rendered at the top of your page to validate your HTML, CSS, accessibility and spelling, plus a link back to the project repository.
### Installing Dependencies
There are no dependencies needed to run the website, everything is prepared to work with vanilla JavaScript. However, if you want to run tests or if you want to generate documentation for your project you will need to install the development dependencies:
- `npm install`
### Documentation
To document your project you will need to write a JSDoc comment for each function in the `/handlers`, `/listeners` and `/logic`. You will also want to add an entry to the JSDoc in `/data.js` for each property you store in the object.
The JSDoc comments you write in the `/src` folder will be used to re-write the `DOCS.md` file each time you run `npm run document` from the root of your project.
### Testing
After installing the dev dependencies you can start writing and running tests for your .js files. Careful! In this project starter you can only test code that does not interact with the DOM, so only the `/logic` and `/view` functions will be testable (`/view` functions will be tested in Node.js using `jsdom`). There are two options for running tests:
- _Individually_: You can run the tests in a single `.spec.js` using the VSCode debugger. Open the spec file you want to run, open the debugger pane, select the "current .spec.js file" option, then start debugging!
- _All at Once_: You can also run every `.spec.js` in the `/src` directory at the same time using `npm run test`. When you run the `npm run test` command all test results will be logged to the console, and a report file will be generated next to each spec file. These report files will be helpful when reviewing PRs to the `main`/`master` branch.
### Continuous Integration
This repository comes with a GitHub Action to re-build the documentation and run all the tests whenever you push to `master`/`main`, and each time you open a PR to `master`/`main`. You don't need to do anything, it works!
Having this CI action means that your master branch will always have the most up-to-date documentation, and that you can easily check test results when reviewing Pull Requests.
> ...
| 63.604651 | 352 | 0.764168 | eng_Latn | 0.99937 |
2681ac3c6742ff5c375e5c72166549736447fa67 | 16,334 | md | Markdown | docs/framework/tools/mdbg-exe.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | ["CC-BY-4.0", "MIT"] | null | null | null | docs/framework/tools/mdbg-exe.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | ["CC-BY-4.0", "MIT"] | null | null | null | docs/framework/tools/mdbg-exe.md | juucustodio/docs.pt-br | a3c389ac92d6e3c69928c7b906e48fbb308dc41f | ["CC-BY-4.0", "MIT"] | null | null | null |
---
title: MDbg.exe (.NET Framework Command-Line Debugger)
description: Understand MDbg.exe, the command-line debugger for .NET, which helps tools vendors and application developers find and fix bugs in programs that target the CLR.
ms.date: 03/30/2017
helpviewer_keywords:
- command-line debugger [.NET Framework]
- MDbg.exe
ms.assetid: 28a3f509-07e2-4dbe-81df-874c5e969cc4
ms.openlocfilehash: 1c663474e5084afa1824f0f6b0740ae03a344e92
ms.sourcegitcommit: 3824ff187947572b274b9715b60c11269335c181
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/17/2020
ms.locfileid: "84904215"
---
# <a name="mdbgexe-net-framework-command-line-debugger"></a>MDbg.exe (.NET Framework Command-Line Debugger)
The .NET Framework Command-Line Debugger helps tools vendors and application developers find and fix bugs in programs that target the .NET Framework common language runtime. This tool uses the runtime debugging API to provide debugging services. You can use MDbg.exe to debug only managed code; there is no support for debugging unmanaged code.
This tool is available through NuGet. For installation information, see [MDbg 0.1.0](https://www.nuget.org/packages/MDbg/0.1.0). To run the tool, use the Package Manager Console. For more information about how to use the Package Manager Console, see [Package Manager Console](/nuget/tools/package-manager-console).
At the Package Manager command prompt, type the following:
## <a name="syntax"></a>Syntax
```console
MDbg [ProgramName[arguments]] [options]
```
## <a name="commands"></a>Commands
When you are in the debugger (as indicated by the **mdbg>** prompt), type one of the commands described in the next section:
**command** [*arguments*]
MDbg.exe commands are case-sensitive.
|Command|Description|
|-------------|-----------------|
|**ap**[**rocess**] [*number*]|Alterna para outro processo depurado ou imprime processos disponíveis. Os números não são PIDs (IDs de processo) reais, e sim uma lista indexada em 0.|
|**a**[**ttach**] [*pid*]|Anexa a um processo ou imprime processos disponíveis.|
|**b**[**reak**] [*ClassName.Method* | *FileName:LineNo*]|Define um ponto de interrupção no método especificado. Os módulos são verificados sequencialmente.<br /><br /> - **break** *FileName:LineNo* define um ponto de interrupção em um local na origem.<br />- **break** *~number* define um ponto de interrupção em um símbolo exibido recentemente com o comando **x**.<br />- **break** *module!ClassName.Method+IlOffset* define um ponto de interrupção no local totalmente qualificado.|
|**block**[**ingObjects**]|Exibe bloqueios de monitor, que estão bloqueando threads.|
|**ca**[**tch**] [*exceptionType*]|Faz o depurador parar em todas as exceções, e não apenas em exceções sem tratamento.|
|**cl**[**earException**]|Marca a exceção atual como tratada de forma que a execução possa continuar. Se a causa da exceção não tiver sido tratada, a exceção poderá ser rapidamente emitida novamente.|
|**conf**[**ig**] [*option value*]|Exibe todas as opções configuráveis e mostra como as opções são invocadas sem valores opcionais. Se a opção for especificada, define `value` como a opção atual. As seguintes opções estão disponíveis:<br /><br /> - `extpath` define o caminho para pesquisar extensões quando o comando `load` é usado.<br />- `extpath+` adiciona um caminho para carregar extensões.|
|**del**[**ETE**]|Exclui um ponto de interrupção.|
|**de**[**xar**]|É desanexado de um processo depurado.|
|**d**[**own**] [*frames*]|Move o registro de ativação ativo para baixo.|
|**echo**|Repete uma mensagem para o console.|
|**enableNotif**[**ication**] *typeName* 0|1|Habilita (1) ou desabilita (0) notificações personalizadas para o tipo especificado.|
|**ex**[**it**] [*exitcode*]|Sai do shell MDbg.exe e especifica como opção o código de saída do processo.|
|**fo**[**reach**] [*OtherCommand*]|Executa um comando em todos os threads. *OtherCommand* é um comando válido que opera no thread, **foreach** *OtherCommand* executa o mesmo comando em todos os threads.|
|**f**[**unceval**] [ `-ad` *num*] *FunctionName* [*args...* ]|Executa uma avaliação da função no thread ativo atual em que a função a ser avaliada é *functionName*. O nome da função deve ser totalmente qualificado, incluindo namespaces.<br /><br /> A opção `-ad` especifica o domínio do aplicativo a ser usado para resolver a função. Se a opção `-ad` não for especificada, o domínio do aplicativo para resolução assumirá como padrão o domínio do aplicativo em que o thread usado na avaliação da função está localizado.<br /><br /> Se a função que está sendo avaliada não for estática, o primeiro parâmetro passado deverá ser um ponteiro `this`. Todos os domínios de aplicativo são pesquisados em busca de argumentos para a avaliação da função.<br /><br /> Para solicitar um valor de um domínio do aplicativo, anteceda a variável com o módulo e o nome de domínio do módulo; por exemplo, `funceval -ad 0 System.Object.ToString hello.exe#0!MyClass.g_rootRef`. Esse comando avalia a função `System.Object.ToString` no domínio do aplicativo `0`. Como o método `ToString` é uma função de instância, o primeiro parâmetro deve ser um ponteiro `this`.|
|**g**[**o**]|Faz o programa continuar até encontrar um ponto de interrupção, o programa sai ou um evento (por exemplo, uma exceção sem tratamento) faz o programa parar.|
|**h**[**elp**] [*comando*]<br /><br /> -ou-<br /><br /> **?** [*comando*]|Exibe uma descrição de todos os comandos ou uma descrição detalhada de um comando especificado.|
|**ig**[**nore**] [*event*]|Faz o depurador parar somente em exceções sem tratamento.|
|**int**[**ercept**] *FrameNumber*|Reverte o depurador para um número de quadro especificado.<br /><br /> Se o depurador encontrar uma exceção, use esse comando para reverter o depurador para o número de quadro especificado. É possível alterar o estado do programa usando o comando **set** e continuar usando o comando **go**.|
|**k**[**mal**)|Para o processo ativo.|
|**l**[**ist**] [*modules* | *appdomains* | *assemblies*]|Exibe os módulos carregados, os domínios de aplicativo ou os assemblies.|
|**lo**[**ad**] *assemblyName*|Carrega uma extensão da seguinte maneira: o assembly especificado é carregado e uma tentativa é feita para, em seguida, executar o método estático `LoadExtension` com base no tipo `Microsoft.Tools.Mdbg.Extension.Extension`.|
|**log** [*eventType*]|Defina ou exiba os eventos que serão registrados em log.|
|**mo**[**de**] [*option on/off*]|Define opções de depurador diferentes. Use `mode` sem opções para obter uma lista dos modos de depuração e suas configurações atuais.|
|**mon**[**itorInfo**] *monitorReference*|Exibe informações de bloqueio do monitor do objeto.|
|**newo**[**bj**] *typeName* [*arguments...*]|Cria um novo objeto do tipo *typeName*.|
|**n**[**ext**]|Executa o código e avança para a próxima linha (mesmo que a próxima linha inclua muitas chamadas de função).|
|**Opendump** *pathToDumpFile*|Abre o arquivo de despejo especificado para depuração.|
|**o**[**UT**]|Move para o final da função atual.|
|**pa**[**th**] [*pathName*]|Procura os arquivos de origem no caminho especificado se o local nos binários não estiver disponível.|
|**p**[**rint**] [*var*] | [`-d`]|Imprime todas as variáveis no escopo (**print**), imprime a variável especificada (**print** *var*) ou imprime as variáveis do depurador (**print**`-d`).|
|**printe**[**xception**] [*-r*]|Imprime a última exceção no thread atual. Use a opção `–r` (recursiva) para percorrer a propriedade `InnerException` no objeto de exceção e obter informações sobre a cadeia inteira de exceções.|
|**pro**[**cessenum**]|Exibe os processos ativos.|
|**q**[**uit**] [*exitcode*]|Sai do shell MDbg.exe, especificando como opção o código de saída do processo.|
|**re**[**sume**] [`*` | [`~`]*threadNumber*]|Retoma o thread atual ou o thread especificado pelo parâmetro *threadNumber*.<br /><br /> Se o parâmetro *threadNumber* for especificado como `*` ou se o número de threads começar com `~`, o comando se aplicará a todos os threads, exceto o especificado por *threadNumber*.<br /><br /> A retomada de um thread não suspenso não tem nenhum efeito.|
|**r**[**un**] [ `-d` ( `ebug` ) |- `o` () `ptimize` |`-enc` ] [[*path_to_exe*] [*args_to_exe*]]|Para o processo atual (se houver algum) e inicia um novo. Se nenhum argumento executável for passado, esse comando executará o programa que foi executado anteriormente com o comando `run`. Se o argumento executável for fornecido, o programa especificado será executado usando os argumentos fornecidos como opção.<br /><br /> Se os eventos de carga da classe, de carga do módulo e de início do thread forem ignorados (como são por padrão), o programa parará na primeira instrução executável do thread principal.<br /><br /> É possível forçar o depurador a compilar o código com JIT (just-in-time) usando um destes três sinalizadores:<br /><br /> - `-d`*(* `ebug` *)* desabilita otimizações. Este é o padrão para MDbg.exe.<br />- `-o`*(* `ptimize` *)* força o código a ser executado mais como faz fora do depurador, mas também torna a experiência de depuração mais difícil. Esse é o padrão de uso fora do depurador.<br />- `-enc` habilita o recurso Editar e Continuar, mas incorre em um impacto no desempenho.|
|**Set** *variable*=*value*|Altera o valor de qualquer variável no escopo.<br /><br /> Também é possível criar variáveis próprias do depurador e atribuir valores de referência a elas dentro do aplicativo. Esses valores funcionam como identificadores para o valor original, e mesmo o valor original está fora do escopo. Todas as variáveis do depurador devem começar com `$` (por exemplo, `$var`). Desmarque estes identificadores definindo-os como nada usando o seguinte comando:<br /><br /> `set $var=`|
|Número de **SetIP** [ `-il` ] *number*|Define o IP (ponteiro da instrução) atual no arquivo como a posição especificada. Se você especificar a opção `-il`, o número representará um deslocamento MSIL (Microsoft Intermediate Language) no método. Do contrário, o número representa um número de linha de origem.|
|**sh**[**ow**] [*lines*]|Especifica o número de linhas que serão mostradas.|
|**s**[**tapa**]|Move a execução para a próxima função na linha atual ou move para a próxima linha se não houver função a ser realizada.|
|**su**[**spend**] [\* | [~]*threadNumber*]|Suspende o thread atual ou o thread especificado pelo parâmetro *threadNumber*. Se *threadNumber* for especificado como `*`, o comando se aplicará a todos os threads. Se o número do thread começar com `~`, o comando se aplicará a todos os threads, exceto o especificado por *threadNumber*. Os threads suspensos são excluídos da execução quando o processo é executado pelo comando **go** ou **step**. Se não houver threads não suspensos no processo e você emitir o comando **go**, o processo não continuará. Nesse caso, pressione CTRL-C para interromper o processo.|
|**sy**[**mbol**] *commandName* [*commandValue*]|Especifica um dos comandos a seguir:<br /><br /> - `symbol path` [`"value"`] – Exibe ou define o caminho de símbolo atual.<br />- `symbol addpath``"value"`-Adiciona ao seu caminho de símbolo atual.<br />- `symbol reload` [`"module"`] – Recarrega todos os símbolos ou os símbolos do módulo especificado.<br />- `symbol list` [`module`] – Mostra os símbolos carregados atualmente para todos os módulos ou o módulo especificado.|
|**t**[**hread**] [*newThread*] [-*apelido de Nick*`]`|O comando de thread sem parâmetros exibe todos os threads gerenciados no processo atual. Os threads costumam ser identificados pelos números de thread; se o thread tiver um apelido atribuído, o apelido será exibido no lugar. É possível usar o parâmetro `-nick` para atribuir um apelido a um thread.<br /><br /> - **thread** `-nick` *threadName* atribui um apelido ao thread em execução no momento.<br /><br /> Os apelidos não podem ser números. Se o thread atual já tiver um apelido atribuído, o apelido anterior será substituído pelo novo. Se o novo apelido for uma cadeia de caracteres vazia (""), o apelido do thread atual será excluído e nenhum apelido novo será atribuído ao thread.|
|**u**[**p**]|Move o registro de ativação ativo para cima.|
|**uwgc**[**handle**] [*var*] | [*address*]|Imprime a variável acompanhada por um identificador. O identificador pode ser especificado por nome ou endereço.|
|**ao**|Exibe as instruções `when` ativas no momento.<br /><br /> **quando** **excluir todos os** | `num` [ `num` [ `num` ...]]-exclui a `when` instrução especificada pelo número, ou todas as `when` instruções, se `all` for especificado.<br /><br /> **quando** `stopReason` [ `specific_condition` ] **fazer** `cmd` [ `cmd` [ `cmd` ...]]-O parâmetro *stopReason* pode ser um dos seguintes:<br /><br /> `StepComplete`, `ProcessExited`, `ThreadCreated`, `BreakpointHit`, `ModuleLoaded`, `ClassLoaded`, `AssemblyLoaded`, `AssemblyUnloaded`, `ControlCTrapped`, `ExceptionThrown`, `UnhandledExceptionThrown`, `AsyncStop`, `AttachComplete`, `UserBreak`, `EvalComplete`, `EvalException`, `RemapOpportunityReached`, `NativeStop`.<br /><br /> *specific_condition* pode ser um dos seguintes:<br /><br /> - *number* – Para `ThreadCreated` e `BreakpointHit`, dispara a ação somente quando parado por uma ID de thread/número do ponto de interrupção com o mesmo valor.<br />-[ `!` ]*Name* -for `ModuleLoaded` , `ClassLoaded` , `AssemblyLoaded` , `AssemblyUnloaded` , `ExceptionThrown` e e `UnhandledExceptionThrown` disparará a ação somente quando o nome corresponder ao nome do *stopReason*.<br /><br /> *specific_condition* deve estar vazio para outros valores de *stopReason*.|
|**w**[**aqui**] [ `-v` ] [ `-c` *profundidade*] [*ThreadID*]|Exibe informações de depuração sobre quadros de pilha.<br /><br /> – A opção `-v` fornece informações detalhadas sobre cada registro de ativação exibido.<br />– A especificação de um número para `depth` limita o número de quadros exibidos. Use o comando **all** para exibir todos os quadros. O padrão é 100.<br />– Se especificar o parâmetro *threadID*, você poderá controlar qual thread está associado à pilha. O padrão é apenas o thread atual. Use o comando **all** para obter todos os threads.|
|**x** [ `-c` *numSymbols*] [*módulo*[ `!` *padrão*]]|Exibe funções correspondentes ao `pattern` para um módulo.<br /><br /> Se *numSymbols* for especificado, a saída será limitada ao número especificado. Se `!` (indicando uma expressão regular) não for especificado para *pattern*, todas as funções serão exibidas. Se *module* não for fornecido, todos os módulos carregados serão exibidos. Os símbolos ( *~#* ) podem ser usados para definir pontos de interrupção usando o comando **Break** .|
## <a name="remarks"></a>Comentários
Compile o aplicativo a ser depurado usando sinalizadores específicos do compilador que fazem o compilador gerenciar símbolos de depuração. Consulte a documentação do compilador para obter mais informações sobre esses sinalizadores. É possível depurar aplicativos otimizados, mas algumas informações de depuração estarão ausentes. Por exemplo, muitas variáveis locais não serão visíveis e as linhas de origem serão imprecisas.
Depois de compilar o aplicativo, digite **mdbg** no prompt de comando para iniciar uma sessão de depuração, como mostrado no exemplo a seguir.
```console
C:\Program Files\Microsoft Visual Studio 8\VC>mdbg
MDbg (Managed debugger) v2.0.50727.42 (RTM.050727-4200) started.
Copyright (C) Microsoft Corporation. All rights reserved.
For information about commands type "help";
to exit program type "quit".
mdbg>
```
O prompt `mdbg>` indica que você está no depurador.
Depois que você estiver no depurador, use os comandos e os argumentos descritos na seção anterior.
## <a name="see-also"></a>Veja também
- [Ferramentas](index.md)
- [Prompts de comando](developer-command-prompt-for-vs.md)
| 148.490909 | 1,274 | 0.729093 | por_Latn | 0.999219 |
2681cda1f3b9074a501e7fc1046652bc32e12681 | 16 | md | Markdown | README.md | Salhazry-4456/Salhazry-4456- | 957bfc8200d85d17574887bdfcb163311b521198 | ["Unlicense"] | null | null | null | README.md | Salhazry-4456/Salhazry-4456- | 957bfc8200d85d17574887bdfcb163311b521198 | ["Unlicense"] | null | null | null | README.md | Salhazry-4456/Salhazry-4456- | 957bfc8200d85d17574887bdfcb163311b521198 | ["Unlicense"] | null | null | null |
# Salhazry-4456-
| 16 | 16 | 0.75 | por_Latn | 0.661946 |
26820c774cf58885bcd332fc2726c4a290532e35 | 155 | md | Markdown | interview-questions.md | mouse123/Notes | 395d52e495a4c696e11e7856d2e898a0ef34610d | ["MIT"] | null | null | null | interview-questions.md | mouse123/Notes | 395d52e495a4c696e11e7856d2e898a0ef34610d | ["MIT"] | 14 | 2020-12-31T00:49:56.000Z | 2020-12-31T01:09:00.000Z | interview-questions.md | mouse123/Notes | 395d52e495a4c696e11e7856d2e898a0ef34610d | ["MIT"] | 3 | 2020-12-20T23:53:04.000Z | 2021-07-18T10:09:24.000Z |
---
description: Brush questions
---
# Interview questions
**Passing data in Vue**
* props and $emit between parent and child components
* Vuex for handling global state
* vue-router query/params
* cookie/localStorage
| 9.6875 | 28 | 0.696774 | fra_Latn | 0.277503 |
2682574bebea266beccdbd6691fa274028f7ffa0 | 594 | md | Markdown | neurips2019/code/lubm1u/README.md | kant/e2r | df59055872564c0df5d23596478b60f1bcf815bf | [
"Apache-2.0"
] | 22 | 2020-01-04T07:13:27.000Z | 2022-02-28T12:30:15.000Z | neurips2019/code/lubm1u/README.md | kant/e2r | df59055872564c0df5d23596478b60f1bcf815bf | [
"Apache-2.0"
] | 2 | 2020-04-22T08:33:23.000Z | 2022-02-24T06:29:55.000Z | neurips2019/code/lubm1u/README.md | kant/e2r | df59055872564c0df5d23596478b60f1bcf815bf | [
"Apache-2.0"
] | 11 | 2019-11-02T06:49:52.000Z | 2022-02-09T17:52:03.000Z | This directory contains code and data used for experiment on LUBM1U dataset.
After downloading (and before running any of the scripts here), please unzip 'data.zip'. This should create a 'data/' directory containing the data needed for error-free execution of the scripts.
Simply running the script 'reasonE.test.py' should use the trained model stored in the 'model/' directory and generate the accuracy numbers reported in the paper.
The script 'reasonE.train.py' is for fresh training. The script 'reasonE.retrain.py' retrains the most recently trained model, starting from the point where the last training run ended.
| 74.25 | 189 | 0.799663 | eng_Latn | 0.999751 |
2682e24480c48d77817a6235341d972573c47d24 | 138 | md | Markdown | _includes/05-emphasis.md | phoye-developer/markdown-portfolio | 44400b7ee285ad76eeac952aa499d5bb7cf77846 | [
"MIT"
] | null | null | null | _includes/05-emphasis.md | phoye-developer/markdown-portfolio | 44400b7ee285ad76eeac952aa499d5bb7cf77846 | [
"MIT"
] | 5 | 2020-06-17T02:33:20.000Z | 2020-06-17T03:19:42.000Z | _includes/05-emphasis.md | phoye-developer/markdown-portfolio | 44400b7ee285ad76eeac952aa499d5bb7cf77846 | [
"MIT"
] | null | null | null | # Bold and Italics
I have been with my current employer for **8 years**, and I am a *subject matter expert* for many of our key products.
| 46 | 118 | 0.73913 | eng_Latn | 0.999904 |
2684153713f1b817972bd54823049dfb06d9093c | 2,349 | md | Markdown | README.md | farbenmeer/next-intl | 8f66e59bfc7eeb554ce1c0fa5e8c7c05b954210f | [
"MIT"
] | null | null | null | README.md | farbenmeer/next-intl | 8f66e59bfc7eeb554ce1c0fa5e8c7c05b954210f | [
"MIT"
] | null | null | null | README.md | farbenmeer/next-intl | 8f66e59bfc7eeb554ce1c0fa5e8c7c05b954210f | [
"MIT"
] | null | null | null | # next-intl 🌐
  
> A minimal, but complete solution for managing translations, date, time and number formatting in Next.js apps.
This library complements the [internationalized routing](https://nextjs.org/docs/advanced-features/i18n-routing) capabilities of Next.js by managing translations and providing them to components.
## Features
- 🌟 I18n is an essential part of the user experience, therefore this library doesn't compromise on flexibility and never leaves you behind when you need to fine tune a translation. Messages use the **proven [ICU syntax](https://formatjs.io/docs/core-concepts/icu-syntax)** which covers interpolation, plurals, ordinal pluralization, label selection based on enums and rich text.
- 📅 Built-in **date, time and number formatting** provides all the necessary parts you need for localisation. You can use global formats for a consistent look & feel of your app and integrate them with translations.
- 💡 A **hooks-only API** ensures that you can use the same API for `children` as well as for attributes which expect strings.
- ⚔️ Based on **battle-tested** building blocks from [Format.JS](https://formatjs.io/) (used by `react-intl`) and modern browser APIs, this library is a thin wrapper around high-quality, lower-level APIs for i18n.
- 🚀 By integrating with both **static as well as server side rendering** you always get the best possible performance from your app.
## What does it look like?
This library is based on the premise that messages can be grouped by namespaces (typically a component name).
```js
// en.json
{
"LatestFollower": {
"latestFollower": "{username} started following you",
"followBack": "Follow back"
}
}
```
```jsx
// LatestFollower.js
function LatestFollower({user}) {
const t = useTranslations('LatestFollower');
return (
<>
<Text>{t('latestFollower', {username: user.name})}</Text>
<IconButton aria-label={t('followBack')} icon={<FollowIcon />} />
</>
);
}
```
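The ICU syntax mentioned under **Features** goes well beyond simple interpolation. For example, a plural-aware message (illustrative only, not part of this repo) could look like:

```js
// en.json
{
  "Inbox": {
    "newMessages": "You have {count, plural, =0 {no new messages} one {# new message} other {# new messages}}"
  }
}
```

Calling `t('newMessages', {count: 2})` would then render "You have 2 new messages".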
## Docs
- [Installation guide](./docs/installation.md)
- [Usage guide](./docs/usage.md)
- [FAQ](./docs/faq.md)
- [Changelog](./CHANGELOG.md)
| 46.058824 | 378 | 0.735632 | eng_Latn | 0.977657 |
26841bb2e9c002cf8b8af5288e30beb064cad471 | 139 | md | Markdown | README.old.md | guandradeamm/car-wishlist | d60bbc62699c38d3ed08556e18640055c98f620b | [
"MIT"
] | null | null | null | README.old.md | guandradeamm/car-wishlist | d60bbc62699c38d3ed08556e18640055c98f620b | [
"MIT"
] | null | null | null | README.old.md | guandradeamm/car-wishlist | d60bbc62699c38d3ed08556e18640055c98f620b | [
"MIT"
] | null | null | null | # car-wishlist
A simple React.js app that implements an image carousel of cars I want to have in the future. Use it to motivate yourself.
| 46.333333 | 123 | 0.776978 | eng_Latn | 0.999832 |
2684442aa349b402644f1659290a751d72613e20 | 2,019 | md | Markdown | Binary/Protostar/Stack/Stack4.md | Kan1shka9/CTFs | 33ab33e094ea8b52714d5dad020c25730e91c0b0 | [
"MIT"
] | 21 | 2016-02-06T14:30:01.000Z | 2020-09-11T05:39:17.000Z | Binary/Protostar/Stack/Stack4.md | Kan1shka9/CTFs | 33ab33e094ea8b52714d5dad020c25730e91c0b0 | [
"MIT"
] | null | null | null | Binary/Protostar/Stack/Stack4.md | Kan1shka9/CTFs | 33ab33e094ea8b52714d5dad020c25730e91c0b0 | [
"MIT"
] | 7 | 2017-02-02T16:27:02.000Z | 2021-04-30T17:14:53.000Z | #### Stack4
###### About
- Stack4 takes a look at overwriting saved EIP and standard buffer overflows.
- This level is at ``/opt/protostar/bin/stack4``
- Hints
- A variety of introductory papers into buffer overflows may help.
- gdb lets you do ``run < input``
- EIP is not directly after the end of buffer, compiler padding can also increase the size.
```stack4.c```
```c
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
void win()
{
printf("code flow successfully changed\n");
}
int main(int argc, char **argv)
{
char buffer[64];
gets(buffer);
}
```
###### Solution
```sh
user@protostar:/opt/protostar/bin$ ./stack4
AAAA
user@protostar:/opt/protostar/bin$
```
```sh
root@kali:~# /usr/share/metasploit-framework/tools/exploit/pattern_create.rb -l 100
Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2A
root@kali:~#
```
```sh
user@protostar:/opt/protostar/bin$ ./stack4
Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2A
Segmentation fault
user@protostar:/opt/protostar/bin$
```
```sh
user@protostar:/opt/protostar/bin$ echo "Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2A" > /tmp/pay
```
```sh
user@protostar:/opt/protostar/bin$ gdb ./stack4 -q
Reading symbols from /opt/protostar/bin/stack4...done.
(gdb) r < /tmp/pay
Starting program: /opt/protostar/bin/stack4 < /tmp/pay
Program received signal SIGSEGV, Segmentation fault.
0x63413563 in ?? ()
(gdb)
```
```sh
root@kali:~# /usr/share/metasploit-framework/tools/exploit/pattern_offset.rb -q 63413563
[*] Exact match at offset 76
root@kali:~#
```
```sh
user@protostar:/opt/protostar/bin$ objdump -d stack4 | grep win
080483f4 <win>:
user@protostar:/opt/protostar/bin$
```
```sh
user@protostar:/opt/protostar/bin$ python -c "print 'A'*76 +'\xf4\x83\x04\x08'" | ./stack4
code flow successfully changed
Segmentation fault
user@protostar:/opt/protostar/bin$
```
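The one-liner above can also be written as a small Python 3 script. This is just a sketch of the same payload construction, using the values recovered in the walkthrough (offset 76 from `pattern_offset`, `win` at `0x080483f4` from `objdump`):

```python
import struct

offset = 76            # bytes of padding until the saved EIP
win_addr = 0x080483F4  # address of win()

# 76 filler bytes, then the return address packed little-endian for x86
payload = b"A" * offset + struct.pack("<I", win_addr)
print(payload)
```

Redirecting these bytes into the program's stdin (for example via a file, as with `/tmp/pay` above) overwrites the saved return address with `win`'s address, exactly like the inline `python -c` command.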
2684f51b3f6c6e7c48c2de26f27d00e604bfac13 | 3,347 | md | Markdown | wdk-ddi-src/content/d3dkmthk/ne-d3dkmthk-_d3dkmt_escapetype.md | amrutha-chandramohan/windows-driver-docs-ddi | 35e28164591cadf5ef3d6238cdddd4b88f2b8768 | [
"CC-BY-4.0",
"MIT"
] | 176 | 2018-01-12T23:42:01.000Z | 2022-03-30T18:23:27.000Z | wdk-ddi-src/content/d3dkmthk/ne-d3dkmthk-_d3dkmt_escapetype.md | amrutha-chandramohan/windows-driver-docs-ddi | 35e28164591cadf5ef3d6238cdddd4b88f2b8768 | [
"CC-BY-4.0",
"MIT"
] | 1,093 | 2018-01-23T07:33:03.000Z | 2022-03-30T20:15:21.000Z | wdk-ddi-src/content/d3dkmthk/ne-d3dkmthk-_d3dkmt_escapetype.md | amrutha-chandramohan/windows-driver-docs-ddi | 35e28164591cadf5ef3d6238cdddd4b88f2b8768 | [
"CC-BY-4.0",
"MIT"
] | 251 | 2018-01-21T07:35:50.000Z | 2022-03-22T19:33:42.000Z | ---
UID: NE:d3dkmthk._D3DKMT_ESCAPETYPE
title: _D3DKMT_ESCAPETYPE (d3dkmthk.h)
description: Do not use the D3DKMT_ESCAPETYPE enumeration; it is for testing purposes only.
ms.date: 10/19/2018
keywords: ["D3DKMT_ESCAPETYPE enumeration"]
ms.keywords: _D3DKMT_ESCAPETYPE, D3DKMT_ESCAPETYPE,
req.header: d3dkmthk.h
req.include-header:
req.target-type:
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.max-support:
req.typenames: D3DKMT_ESCAPETYPE
targetos: Windows
tech.root: display
f1_keywords:
- _D3DKMT_ESCAPETYPE
- d3dkmthk/_D3DKMT_ESCAPETYPE
- D3DKMT_ESCAPETYPE
- d3dkmthk/D3DKMT_ESCAPETYPE
topic_type:
- apiref
api_type:
- HeaderDef
api_location:
- d3dkmthk.h
api_name:
- _D3DKMT_ESCAPETYPE
- D3DKMT_ESCAPETYPE
product:
- Windows
---
# _D3DKMT_ESCAPETYPE enumeration
## -description
<b>Do not use the D3DKMT_ESCAPETYPE enumeration; it is for testing purposes only.</b>
## -enum-fields
### -field D3DKMT_ESCAPE_DRIVERPRIVATE
### -field D3DKMT_ESCAPE_VIDMM
### -field D3DKMT_ESCAPE_TDRDBGCTRL
### -field D3DKMT_ESCAPE_VIDSCH
### -field D3DKMT_ESCAPE_DEVICE
### -field D3DKMT_ESCAPE_DMM
### -field D3DKMT_ESCAPE_DEBUG_SNAPSHOT
### -field D3DKMT_ESCAPE_DRT_TEST
### -field D3DKMT_ESCAPE_DIAGNOSTICS
### -field D3DKMT_ESCAPE_OUTPUTDUPL_SNAPSHOT
### -field D3DKMT_ESCAPE_OUTPUTDUPL_DIAGNOSTICS
### -field D3DKMT_ESCAPE_BDD_PNP
### -field D3DKMT_ESCAPE_BDD_FALLBACK
### -field D3DKMT_ESCAPE_ACTIVATE_SPECIFIC_DIAG
### -field D3DKMT_ESCAPE_MODES_PRUNED_OUT
### -field D3DKMT_ESCAPE_WHQL_INFO
### -field D3DKMT_ESCAPE_BRIGHTNESS
### -field D3DKMT_ESCAPE_EDID_CACHE
### -field D3DKMT_ESCAPE_GENERIC_ADAPTER_DIAG_INFO
### -field D3DKMT_ESCAPE_MIRACAST_DISPLAY_REQUEST
### -field D3DKMT_ESCAPE_HISTORY_BUFFER_STATUS
### -field D3DKMT_ESCAPE_MIRACAST_ADAPTER_DIAG_INFO
### -field D3DKMT_ESCAPE_FORCE_BDDFALLBACK_HEADLESS
### -field D3DKMT_ESCAPE_REQUEST_MACHINE_CRASH
### -field D3DKMT_ESCAPE_HMD_GET_EDID_BASE_BLOCK
### -field D3DKMT_ESCAPE_SOFTGPU_ENABLE_DISABLE_HMD
### -field D3DKMT_ESCAPE_PROCESS_VERIFIER_OPTION
### -field D3DKMT_ESCAPE_ADAPTER_VERIFIER_OPTION
### -field D3DKMT_ESCAPE_IDD_REQUEST
### -field D3DKMT_ESCAPE_DOD_SET_DIRTYRECT_MODE
### -field D3DKMT_ESCAPE_LOG_CODEPOINT_PACKET
### -field D3DKMT_ESCAPE_LOG_USERMODE_DAIG_PACKET
### -field D3DKMT_ESCAPE_GET_EXTERNAL_DIAGNOSTICS
### -field D3DKMT_ESCAPE_GET_PREFERRED_MODE
### -field D3DKMT_ESCAPE_GET_DISPLAY_CONFIGURATIONS
### -field D3DKMT_ESCAPE_QUERY_IOMMU_STATUS
### -field D3DKMT_ESCAPE_WIN32K_START
### -field D3DKMT_ESCAPE_WIN32K_HIP_DEVICE_INFO
### -field D3DKMT_ESCAPE_WIN32K_QUERY_CD_ROTATION_BLOCK
### -field D3DKMT_ESCAPE_WIN32K_DPI_INFO
### -field D3DKMT_ESCAPE_WIN32K_PRESENTER_VIEW_INFO
### -field D3DKMT_ESCAPE_WIN32K_SYSTEM_DPI
### -field D3DKMT_ESCAPE_WIN32K_BDD_FALLBACK
### -field D3DKMT_ESCAPE_WIN32K_DDA_TEST_CTL
### -field D3DKMT_ESCAPE_WIN32K_USER_DETECTED_BLACK_SCREEN
### -field D3DKMT_ESCAPE_WIN32K_HMD_ENUM
### -field D3DKMT_ESCAPE_WIN32K_HMD_CONTROL
### -field D3DKMT_ESCAPE_WIN32K_LPMDISPLAY_CONTROL
### -field D3DKMT_ESCAPE_WIN32K_DISPBROKER_TEST
## -remarks
## -see-also
| 22.463087 | 97 | 0.772035 | yue_Hant | 0.975445 |
26854a20a69f2726f17ba5534a3d69306b866894 | 71 | md | Markdown | scala-impl/README.md | YoonSung/consistent-hashing | fee2e42eb943a7527602e4c25f104586f1c33204 | [
"Apache-2.0"
] | 1 | 2022-03-26T07:50:57.000Z | 2022-03-26T07:50:57.000Z | scala-impl/README.md | YoonSung/ConsistentHashing | fee2e42eb943a7527602e4c25f104586f1c33204 | [
"Apache-2.0"
] | null | null | null | scala-impl/README.md | YoonSung/ConsistentHashing | fee2e42eb943a7527602e4c25f104586f1c33204 | [
"Apache-2.0"
] | null | null | null | # Execute Kager (Hash Ring)
./sbt "runMain com.algamza.Application 1"
| 17.75 | 41 | 0.732394 | kor_Hang | 0.273651 |
2686042b68aeb2e0d6e886f800311310f4d9b9cf | 2,092 | md | Markdown | README.md | danguenet/dwolla-app | eb76247842be4fd46afd71e94f2e267c211aca35 | [
"MIT"
] | 1 | 2021-02-14T18:20:27.000Z | 2021-02-14T18:20:27.000Z | README.md | danguenet/dwolla-app | eb76247842be4fd46afd71e94f2e267c211aca35 | [
"MIT"
] | 1 | 2021-05-11T15:37:30.000Z | 2021-05-11T15:37:30.000Z | README.md | danguenet/dwolla-app | eb76247842be4fd46afd71e94f2e267c211aca35 | [
"MIT"
] | 2 | 2020-05-26T14:19:57.000Z | 2021-02-14T18:09:11.000Z | # Dwolla App
This repo contains each module from a Dwolla IT/IS Lunch and Learn. It showcases how to build an express application that uses the Dwolla API. Each module is numbered and builds off the previous. Each module will also have a README file to instruct you on what steps you need to take to reproduce the work completed in that module.
## Before You Start
1. Download [Node.js](https://nodejs.org/en/download/)
2. Download [Docker](https://www.docker.com/products/docker-desktop)
3. Review `Introduction-to-node`
## Getting Started
Open the `README.md` file in the `1-hello-world` folder and follow the instructions to get started on your own project. When you are done with the first module, continue with the `README.md` in the next one. Each module contains a `starter` and a `final` folder.
The `starter` folder shows what your code should look like when you start the module, and the `final` folder shows what it should look like when you are done. If you ever get lost, feel free to copy the contents of the `starter` folder and start again from the top of the `README.md` file.
If you plan on reusing the code in either the `starter` or `final` folder, remember to rename the project in the `package.json` file to match the name of the folder you copy the code into. If you do not update this, you may get an error when installing the dependencies into your `node_modules` folder. You may also need to rename `.env.example` to `.env` and fill in any empty fields.
## Cloning the Repo
It might be helpful to clone the repo as a reference. To run any of the instances of this project, navigate into a `starter` or `final` folder of the desired module. Then run the following:
```
npm install
npm run build-db
npm run dev
```
A reminder: if you move the code out of the `starter` or `final` folder, you need to rename the project within `package.json`. Without doing this, you may get an error when installing dependencies into the `node_modules` folder. You may also need to rename `.env.example` to `.env` and fill in any empty fields.
26860cc02bc0a650f9da2401ab0b24a8557c8ebd | 287 | markdown | Markdown | _day_by_day/2011-06-05-tarragona-road-trip.markdown | figarocorso/figarocorso.github.io | e2ede424ac26ef6c92233683656a8278f84c62d2 | [
"MIT"
] | 1 | 2020-07-23T20:37:52.000Z | 2020-07-23T20:37:52.000Z | _day_by_day/2011-06-05-tarragona-road-trip.markdown | figarocorso/figarocorso.github.io | e2ede424ac26ef6c92233683656a8278f84c62d2 | [
"MIT"
] | null | null | null | _day_by_day/2011-06-05-tarragona-road-trip.markdown | figarocorso/figarocorso.github.io | e2ede424ac26ef6c92233683656a8278f84c62d2 | [
"MIT"
] | null | null | null | ---
date: 2011-06-05 22:29:04+00:00
layout: post
image:
thumbnail: /images/blog/DSCN1740.jpg
slug: tarragona-road-trip
title: Tarragona Road Trip
categories: day-by-day
---
[](/images/blog/DSCN1740.jpg)
Nunca pensé que podría salir algo así ^_^ @amigos
| 20.5 | 59 | 0.724739 | spa_Latn | 0.220501 |
26875df5bec90a868af8b0b3cfb4c4040a580652 | 3,636 | md | Markdown | content/goldcard-holders-faq/at-work.md | ascemama/website | 22f98946e8671f54a96a71580d2eb2bf2391dcbd | [
"Apache-2.0"
] | null | null | null | content/goldcard-holders-faq/at-work.md | ascemama/website | 22f98946e8671f54a96a71580d2eb2bf2391dcbd | [
"Apache-2.0"
] | null | null | null | content/goldcard-holders-faq/at-work.md | ascemama/website | 22f98946e8671f54a96a71580d2eb2bf2391dcbd | [
"Apache-2.0"
] | null | null | null | ---
title: "At Work"
weight: 2
---
<!--- (c) Tom Fifield, licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. -->
## What do I need to tell my employer about the Gold Card?
Your Gold Card is a four-in-one document that contains your work permit, residence visa, reentry
permit and Alien Residence Certificate. You have open work rights and they do not need to apply
for a work permit or visa for you.
## What happens if I lose my job?
You may simply find a new job. Your Gold Card is not sponsored by your employer and will not be
affected by a change of job.
## Can I setup a company and employ myself?
Yes. With a Gold Card you are able to setup a Taiwanese limited liability company as a sole foreign investor
without any minimum capital requirement, and you can employ yourself in that company since you already have a
work permit through the Gold Card. You are also able to setup and employ yourself through a representative office
if you just intend to represent a foreign company.
## Can I be self-employed?
Yes. If you don't work for an employer and don't setup a company in Taiwan you can still be self-employed (either abroad or in Taiwan) while you have a Gold Card.
## Does the Gold Card provide an open work permit?
Yes. You may work for any organisation (or many organisations at the same time) in any industry,
in any job, provided it doesn't have licensing requirements that you don't meet.
## Is there a minimum salary requirement for Gold Card holders?
Yes. It is one of:
* NT$23,800 a month, or NT$158 per hour for "workers" (Labor Standards Act Article 21)
* NT$47,971 a month for those performing "specialised or technical work"
Despite many applicants qualifying for a gold card on the basis of previous employment with salary
of greater than NT$160,000, this is not a minimum salary requirement. _However_, those applying
under this category will have their financial data reviewed by the Ministry of Labor. The
regulations allow the Ministry of Labor to revoke a work permit if they are not satisfied with
the results of this review.
## Can I work for multiple companies at the same time?
Yes.
## What is salary withholding tax? At what rate should my salary be withheld?
In May each year you pay income tax on your earnings from the previous year. To smooth out your
payments (and when the government receives money) a little, the Taiwanese tax system is designed
so that your employer effectively pre-pays a little bit of your tax. To do that, they
"withhold" a percentage of your salary payment each time you get paid and remit it to
the government a few times a year. The tax you pay in May will be reduced by this amount.
There are two common amounts you will see for salary withholding tax: 18% and 5%. For
tax residents of Taiwan, employers should withhold 5% and non-tax-residents 18%.
However, some companies work on the assumption that all foreign residents have a danger of
leaving Taiwan each year before they achieve tax residency (i.e. less than 183 days).
These types of companies typically withhold 18% from January to June, and 5% for the rest of
the year, which protects the company from owing additional tax to the government if the
employee leaves.
Which withholding rate you are charged is not a large issue, as it directly contributes to your
income tax payment. If you've been charged too much, you will receive a refund from the
government once you file your income tax. However, it is always worth talking with your
employer to explain your circumstances and that you plan to reside in Taiwan long-term.
| 56.8125 | 162 | 0.777228 | eng_Latn | 0.999947 |
26879b5eb5d8b6ef9bb30dfa87a0d6a8c3f728a3 | 31,519 | md | Markdown | chapter2/README.md | ralacher/ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups | 7270add1809fb898c28d25522a8c446af7888aa2 | [
"MIT"
] | null | null | null | chapter2/README.md | ralacher/ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups | 7270add1809fb898c28d25522a8c446af7888aa2 | [
"MIT"
] | null | null | null | chapter2/README.md | ralacher/ms-identity-javascript-angular-spa-dotnetcore-webapi-roles-groups | 7270add1809fb898c28d25522a8c446af7888aa2 | [
"MIT"
] | null | null | null | # An Angular single-page application (SPA) calling a protected Core web API and using Security Groups to implement Role-Based Access Control (RBAC)
1. [Overview](#overview)
1. [Scenario](#scenario)
1. [Contents](#contents)
1. [Prerequisites](#prerequisites)
1. [Setup](#setup)
1. [Registration](#registration)
1. [Running the sample](#running-the-sample)
1. [Explore the sample](#explore-the-sample)
1. [About the code](#about-the-code)
1. [More information](#more-information)
1. [Community Help and Support](#community-help-and-support)
1. [Contributing](#contributing)
## Overview
In [Chapter 1](../chapter1) we learned how to use **App Roles** for **role-based access control**. In this chapter, we learn how to do the same with Azure AD **Security Groups**.
In the sample, a dashboard component allows signed-in users to see the tasks assigned to them or other users based on their memberships to one of the two security groups, **GroupAdmin** and **GroupMember**.
Authorization in Azure AD can also be done with **App Roles**, as shown in [chapter1](../chapter1/README.md). **Groups** and **App Roles** in Azure AD are by no means mutually exclusive - they can be used in tandem to provide even finer grained access control.
## Scenario
- The scenario is similar to Chapter 1, except we'd use **Security Groups** instead of **App Roles**

## Contents
| File/folder | Description |
|-------------------|--------------------------------------------|
| `AppCreationScripts/` | Contains Powershell scripts to automate app registrations. |
| `TodoListAPI/` | Source code of the TodoList API. |
| `TodoListSPA/` | Source code of the TodoList client SPA. |
| `CHANGELOG.md` | List of changes to the sample. |
| `CONTRIBUTING.md` | Guidelines for contributing to the sample. |
| `LICENSE` | The license for the sample. |
## Prerequisites
- Two security groups **GroupAdmin** and **GroupMember**, with users you want to test with assigned to them.
## Setup
Using a command line interface such as VS Code integrated terminal, follow the steps below:
### Step 1. Install .NET Core API dependencies
```console
cd chapter2
```
```console
cd TodoListAPI
dotnet restore
```
### Step 2. Trust development certificates
```console
dotnet dev-certs https --clean
dotnet dev-certs https --trust
```
Learn more about [HTTPS in .NET Core](https://docs.microsoft.com/aspnet/core/security/enforcing-ssl).
### Step 3. Install Angular SPA dependencies
```console
cd ../
cd TodoListSPA
npm install
```
## Registration
There are two projects in this sample. Each needs to be registered separately in your Azure AD tenant. To register these projects, you can:
- either follow the steps below for manual registration,
- or use PowerShell scripts that:
- **automatically** creates the Azure AD applications and related objects (passwords, permissions, dependencies) for you.
- modify the configuration files.
<details>
<summary>Expand this section if you want to use this automation:</summary>
1. On Windows, run PowerShell as **Administrator** and navigate to the root of the cloned directory
1. In PowerShell run:
```PowerShell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process -Force
```
1. Run the script to create your Azure AD application and configure the code of the sample application accordingly.
1. In PowerShell run:
```PowerShell
cd .\AppCreationScripts\
.\Configure.ps1
```
> Other ways of running the scripts are described in [App Creation Scripts](./AppCreationScripts/AppCreationScripts.md)
> The scripts also provide a guide to automated application registration, configuration and removal which can help in your CI/CD scenarios.
</details>
### Register the service app (TodoListAPI)
1. Navigate to the [Azure portal](https://portal.azure.com) and select the **Azure AD** service.
1. Select the **App Registrations** blade on the left, then select **New registration**.
1. In the **Register an application page** that appears, enter your application's registration information:
- In the **Name** section, enter a meaningful application name that will be displayed to users of the app, for example `TodoListAPI`.
- Under **Supported account types**, select **Accounts in this organizational directory only**.
1. Select **Register** to create the application.
1. In the app's registration screen, find and note the **Application (client) ID**. You use this value in your app's configuration file(s) later in your code.
1. Select **Save** to save your changes.
1. In the app's registration screen, select the **Certificates & secrets** blade in the left to open the page where we can generate secrets and upload certificates.
1. In the **Client secrets** section, select **New client secret**:
- Type a key description (for instance `app secret`),
- Select one of the available key durations (**In 1 year**, **In 2 years**, or **Never Expires**) as per your security posture.
- The generated key value will be displayed when you select the **Add** button. Copy the generated value for use in the steps later.
- You'll need this key later in your code's configuration files. This key value will not be displayed again, and is not retrievable by any other means, so make sure to note it from the Azure portal before navigating to any other screen or blade.
1. In the app's registration screen, select the **API permissions** blade in the left to open the page where we add access to the APIs that your application needs.
- Select the **Add a permission** button and then,
- Ensure that the **Microsoft APIs** tab is selected.
- In the *Commonly used Microsoft APIs* section, select **Microsoft Graph**
- In the **Delegated permissions** section, select the **User.Read**, **GroupMember.Read.All** in the list. Use the search box if necessary.
- Select the **Add permissions** button at the bottom.
1. In the app's registration screen, select the **Expose an API** blade to the left to open the page where you can declare the parameters to expose this app as an API for which client applications can obtain [access tokens](https://docs.microsoft.com/azure/active-directory/develop/access-tokens) for.
The first thing that we need to do is to declare the unique [resource](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow) URI that the clients will be using to obtain access tokens for this Api. To declare an resource URI, follow the following steps:
- Select `Set` next to the **Application ID URI** to generate a URI that is unique for this app.
- For this sample, accept the proposed Application ID URI (`api://{clientId}`) by selecting **Save**.
1. All APIs have to publish a minimum of one [scope](https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-auth-code-flow#request-an-authorization-code) for the client's to obtain an access token successfully. To publish a scope, follow the following steps:
- Select **Add a scope** button open the **Add a scope** screen and Enter the values as indicated below:
- For **Scope name**, use `access_as_user`.
- Select **Admins and users** options for **Who can consent?**.
- For **Admin consent display name** type `Access TodoListAPI`.
- For **Admin consent description** type `Allows the app to access TodoListAPI as the signed-in user.`
- For **User consent display name** type `Access TodoListAPI`.
- For **User consent description** type `Allow the application to access TodoListAPI on your behalf.`
- Keep **State** as **Enabled**.
- Select the **Add scope** button on the bottom to save this scope.
#### Configure the service app (TodoListAPI) to use your app registration
Open the project in your IDE (like Visual Studio or Visual Studio Code) to configure the code.
> In the steps below, "ClientID" is the same as "Application ID" or "AppId".
1. Open the `TodoListAPI\appsettings.json` file.
1. Find the app key `Domain` and replace the existing value with your Azure AD tenant name.
1. Find the app key `ClientId` and replace the existing value with the application ID (clientId) of the `TodoListAPI` application copied from the Azure portal.
1. Find the app key `TenantId` and replace the existing value with your Azure AD tenant ID.
1. Find the app key `ClientSecret` and replace the existing value with the key you saved during the creation of the `TodoListAPI` app, in the Azure portal.
### Register the client app (TodoListSPA)
1. Navigate to the [Azure portal](https://portal.azure.com) and select the **Azure AD** service.
1. Select the **App Registrations** blade on the left, then select **New registration**.
1. In the **Register an application page** that appears, enter your application's registration information:
- In the **Name** section, enter a meaningful application name that will be displayed to users of the app, for example `TodoListSPA`.
- Under **Supported account types**, select **Accounts in this organizational directory only**.
- In the **Redirect URI (optional)** section, select **Single-page application** in the combo-box and enter the following redirect URI: `http://localhost:4200/`.
1. Select **Register** to create the application.
1. In the app's registration screen, find and note the **Application (client) ID**. You use this value in your app's configuration file(s) later in your code.
1. Select **Save** to save your changes.
1. In the app's registration screen, select the **API permissions** blade in the left to open the page where we add access to the APIs that your application needs.
- Select the **Add a permission** button and then:
- Ensure that the **Microsoft APIs** tab is selected.
- In the *Commonly used Microsoft APIs* section, select **Microsoft Graph**
- In the **Delegated permissions** section, select the **User.Read**, **GroupMember.Read.All** in the list. Use the search box if necessary.
- Select the **Add permissions** button at the bottom.
- **GroupMember.Read.All** requires admin to consent. Select the **Grant/revoke admin consent for {tenant}** button, and then select **Yes** when you are asked if you want to grant consent for the requested permissions for all account in the tenant. You need to be an Azure AD tenant admin to do this.
#### Configure the client app (TodoListSPA) to use your app registration
Open the project in your IDE (like Visual Studio or Visual Studio Code) to configure the code.
> In the steps below, "ClientID" is the same as "Application ID" or "AppId".
1. Open the `TodoListSPA\src\app\app-config.json` file.
1. Find the app key `clientId` and replace the existing value with the application ID (clientId) of the **TodoListSPA** application copied from the Azure portal.
1. Find the app key `todoListApi.resourceUri` and replace the existing value with the base address of the **TodoListAPI** project (by default `https://localhost:44351/api/todolist`).
1. Find the app key `todoListApi.resourceScopes` and replace the existing value with *Scope* you created earlier `api://{clientId-of-service}/access_as_user`.
#### Configure Known Client Applications for service (TodoListAPI)
For a middle tier web API (`TodoListAPI`) to be able to call a downstream web API, the middle tier app needs to be granted the required permissions as well. However, since the middle tier cannot interact with the signed-in user, it needs to be explicitly bound to the client app in its **Azure AD** registration. This binding merges the permissions required by both the client and the middle tier web API and presents it to the end user in a single consent dialog. The user then consent to this combined set of permissions.
To achieve this, you need to add the **Application ID** of the client app to the `knownClientApplications` property in the manifest of the web API. Here's how:
1. In the [Azure portal](https://portal.azure.com), navigate to your `TodoListAPI` app registration, and select the **Manifest** section.
1. In the manifest editor, change the `"knownClientApplications": []` line so that the array contains the Client ID of the client application (`TodoListSPA`) as an element of the array.
For instance:
```json
"knownClientApplications": ["ca8dca8d-f828-4f08-82f5-325e1a1c6428"],
```
1. **Save** the changes to the manifest.
### Configure Security Groups (TodoListSPA and TodoListAPI)
You have two different options available to you on how you can further configure your application(s) to receive the `groups` claim.
1. [Receive **all the groups** that the signed-in user is assigned to in an Azure AD tenant, including nested groups](#configure-your-application-to-receive-all-the-groups-the-signed-in-user-is-assigned-to-including-nested-groups).
2. [Receive the **groups** claim values from a **filtered set of groups** that your application is programmed to work with](#configure-your-application-to-receive-the-groups-claim-values-from-a-filtered-set-of-groups-a-user-may-be-assigned-to) (Not available in the [Azure AD Free edition](https://azure.microsoft.com/pricing/details/active-directory/)).
> To get the on-premises group's `samAccountName` or `On Premises Group Security Identifier` instead of the Group ID, please refer to the document [Configure group claims for applications with Azure Active Directory](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-fed-group-claims#prerequisites-for-using-group-attributes-synchronized-from-active-directory).
:warning: The token configuration steps below should be performed for **both** the **TodoListSPA** and the **TodoListAPI**.
#### Configure your application to receive **all the groups** the signed-in user is assigned to, including nested groups
1. In the app's registration screen, select the **Token Configuration** blade on the left to open the page where you can configure the claims provided in tokens issued to your application.
1. Select the **Add groups claim** button on top to open the **Edit Groups Claim** screen.
1. Select `Security groups` **or** the `All groups (includes distribution lists but not groups assigned to the application)` option. Choosing both negates the effect of the `Security groups` option.
1. Under the **ID** section, select `Group ID`. This will result in Azure AD sending the [Object ID](https://docs.microsoft.com/graph/api/resources/group?view=graph-rest-1.0) of the groups the user is assigned to in the **groups** claim of the [ID Token](https://docs.microsoft.com/azure/active-directory/develop/id-tokens) that your app receives after signing-in a user.
#### Configure your application to receive the `groups` claim values from a **filtered set of groups** a user may be assigned to
##### Prerequisites, benefits and limitations of using this option
1. This option is useful when your application is interested in a selected set of groups that a signing-in user may be assigned to and not every security group this user is assigned to in the tenant. This option also saves your application from running into the [overage](#the-groups-overage-claim) issue.
1. This feature is not available in the [Azure AD Free edition](https://azure.microsoft.com/pricing/details/active-directory/).
1. **Nested group assignments** are not available when this option is utilized.
##### Steps to enable this option in your app
1. In the app's registration screen, select the **Token Configuration** blade on the left to open the page where you can configure the claims provided in tokens issued to your application.
1. Select the **Add groups claim** button on top to open the **Edit Groups Claim** screen.
1. Select `Groups assigned to the application`.
1. Choosing additional options like `Security Groups` or `All groups (includes distribution lists but not groups assigned to the application)` will negate the benefits your app derives from choosing to use this option.
1. Under the **ID** section, select `Group ID`. This will result in Azure AD sending the [Object ID](https://docs.microsoft.com/graph/api/resources/group?view=graph-rest-1.0) of the groups the user is assigned to in the `groups` claim of the [ID Token](https://docs.microsoft.com/azure/active-directory/develop/id-tokens) that your app receives after signing-in a user.
1. If you are exposing a web API using the **Expose an API** option, then you can also choose the `Group ID` option under the **Access** section. This will result in Azure AD sending the [Object ID](https://docs.microsoft.com/graph/api/resources/group?view=graph-rest-1.0) of the groups the user is assigned to in the `groups` claim of the [Access Token](https://docs.microsoft.com/azure/active-directory/develop/access-tokens) issued to the client applications of your API.
1. In the app's registration screen, select the **Overview** blade on the left to open the application overview screen. Select the hyperlink with the name of your application in **Managed application in local directory** (note that this field title may be truncated, for instance to `Managed application in ...`). When you select this link you will navigate to the **Enterprise Application Overview** page associated with the service principal for your application in the tenant where you created it. You can navigate back to the app registration page by using the *back* button of your browser.
1. Select the **Users and groups** blade on the left to open the page where you can assign users and groups to your application.
1. Select the **Add user** button on the top row.
1. Select **Users and groups** from the resultant screen.
1. Choose the groups that you want to assign to this application.
1. Click **Select** at the bottom to finish selecting the groups.
1. Select **Assign** to finish the group assignment process.
1. Your application will now receive these selected groups in the `groups` claim when a user signing in to your app is a member of one or more of these **assigned** groups.
1. Select the **Properties** blade on the left to open the page that lists the basic properties of your application. Set the **User assignment required?** flag to **Yes**.
> :bulb: **Important security tip**
>
> When you set **User assignment required?** to **Yes**, Azure AD will check that only users assigned to your application in the **Users and groups** blade are able to sign in to your app. You can assign users directly or by assigning security groups they belong to.
### Configure the client app (TodoListSPA) to recognize Group IDs
> :warning:
> During **Token Configuration**, if you have chosen any option other than **groupID** (e.g. **DNSDomain\sAMAccountName**), you should enter the **group name** (for example `contoso.com\Test Group`) instead of the **object ID** below:
1. Open the `TodoListSPA\src\app\app-config.json` file.
1. Find the app key `groups.groupAdmin` and replace the existing value with the **object ID** of the **GroupAdmin** group copied from the Azure portal.
1. Find the app key `groups.groupMember` and replace the existing value with the **object ID** of the **GroupMember** group copied from the Azure portal.
### Configure the service app (TodoListAPI) to recognize Group IDs
> :warning:
> During **Token Configuration**, if you have chosen any option other than **groupID** (e.g. **DNSDomain\sAMAccountName**), you should enter the **group name** (for example `contoso.com\Test Group`) instead of the **object ID** below:
1. Open the `TodoListAPI\appsettings.json` file.
2. Find the app key `Groups.GroupAdmin` and replace the existing value with the object ID of the **GroupAdmin** group copied from the Azure portal.
3. Find the app key `Groups.GroupMember` and replace the existing value with the object ID of the **GroupMember** group copied from the Azure portal.
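With these values in place, the `Groups` section of `appsettings.json` would look something like the sketch below (placeholder object IDs shown; the file also contains the usual `AzureAd` settings, omitted here):

```json
{
  "Groups": {
    "GroupAdmin": "11111111-1111-1111-1111-111111111111",
    "GroupMember": "22222222-2222-2222-2222-222222222222"
  }
}
```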
## Run the sample
Using a command-line interface such as the VS Code integrated terminal, navigate to the application directory. Then:
```console
cd ../
cd TodoListSPA
npm start
```
In a separate console window, execute the following commands:
```console
cd TodoListAPI
dotnet run
```
## Explore the sample
1. Open your browser and navigate to `http://localhost:4200`.
2. Sign in using the button on the top-right:

1. Click on the **Get My Tasks** button to access your (the signed-in user's) todo list:

1. If the signed-in user has the right privileges (i.e. is in the right "group"), click on the **See All Tasks** button to access all users' todo lists:

1. If the signed-in user does not have the right privileges, clicking on **See All Tasks** will give an error:

> :information_source: Consider taking a moment to [share your experience with us](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR73pcsbpbxNJuZCMKN0lURpUQ09BMkFPQ0cyWEczSEFJSVVQSVVTREw0TCQlQCN0PWcu)
## About the Code
Much of the specifics of implementing **RBAC** with **Security Groups** are the same as with **App Roles**, discussed in [Chapter 1](../Chapter1/). To avoid redundancy, here we discuss particular issues that might arise when using the **groups** claim.
### The Groups Overage Claim
To ensure that the token size doesn’t exceed HTTP header size limits, the Microsoft Identity Platform limits the number of object IDs that it includes in the **groups** claim.
If a user is a member of more groups than the overage limit (**150 for SAML tokens, 200 for JWT tokens, 6 for single-page applications**), then the Microsoft Identity Platform does not emit the group IDs in the `groups` claim in the token. Instead, it includes an **overage** claim in the token that indicates to the application to query the [MS Graph API](https://graph.microsoft.com) to retrieve the user’s group membership.
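To make the limits concrete, here is a tiny illustrative helper (not part of the sample — the names are assumptions) that encodes the documented thresholds:

```typescript
// Documented overage limits: 150 group IDs for SAML tokens, 200 for JWT
// tokens, and 6 when the token is issued to a single-page application.
type TokenKind = "saml" | "jwt" | "spa";

const OVERAGE_LIMITS: Record<TokenKind, number> = {
  saml: 150,
  jwt: 200,
  spa: 6,
};

// Returns true when Azure AD would omit the `groups` claim and emit an
// overage claim instead, i.e. when the group count exceeds the limit.
function groupsClaimOmitted(groupCount: number, tokenKind: TokenKind): boolean {
  return groupCount > OVERAGE_LIMITS[tokenKind];
}
```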
> We strongly advise you use the [group filtering feature](#configure-your-application-to-receive-the-groups-claim-values-from-a-filtered-set-of-groups-a-user-may-be-assigned-to) (if possible) to avoid running into group overages.
#### Create the Overage Scenario for testing
1. You can use the `BulkCreateGroups.ps1` provided in the [App Creation Scripts](./AppCreationScripts/) folder to create a large number of groups and assign users to them. This will help test overage scenarios during development. :warning: Remember to change the user's **objectId** provided in the `BulkCreateGroups.ps1` script.
1. When you run this sample and an overage occurs, you will see the `_claim_names` claim on the home page after the user signs in.
1. In case you cannot avoid running into group overage, we suggest you use the following logic to process the `groups` claim in your token.
    1. Check for the claim `_claim_names` with one of the values being `groups`. This indicates overage.
    1. If found, make a call to the endpoint specified in `_claim_sources` to fetch the user’s groups.
    1. If not found, read the user’s groups from the `groups` claim.
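The steps above can be sketched as a small helper. This is a hedged illustration — the claim shapes and helper name are assumptions for clarity, not code from the sample:

```typescript
// Minimal sketch of the overage-handling logic. Shapes are illustrative.
interface IdTokenClaims {
  groups?: string[];
  _claim_names?: { groups?: string };
  _claim_sources?: Record<string, { endpoint: string }>;
}

// Returns the group IDs found in the token, or the endpoint the app must
// call (per `_claim_sources`) when the overage claim is present.
function resolveGroups(
  claims: IdTokenClaims
): { groups: string[] } | { overageEndpoint: string } {
  const overageSource = claims._claim_names?.groups;
  if (overageSource !== undefined) {
    // Overage occurred: the token points at a claim source to query
    // instead of listing the groups inline. Fall back to the signed-in
    // user's membership endpoint if no source entry is present.
    const endpoint =
      claims._claim_sources?.[overageSource]?.endpoint ??
      "https://graph.microsoft.com/v1.0/me/memberOf";
    return { overageEndpoint: endpoint };
  }
  // No overage: read the groups directly from the `groups` claim.
  return { groups: claims.groups ?? [] };
}
```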
> When handling overage scenarios, which require a call to [Microsoft Graph](https://graph.microsoft.com) to read the signed-in user's group memberships, your app will need the [GroupMember.Read.All](https://docs.microsoft.com/graph/permissions-reference#group-permissions) permission for the [getMemberObjects](https://docs.microsoft.com/graph/api/user-getmemberobjects?view=graph-rest-1.0) call to execute successfully.
> Developers who wish to gain good familiarity with programming for Microsoft Graph are advised to go through the [An introduction to Microsoft Graph for developers](https://www.youtube.com/watch?v=EBbnpFdB92A) recorded session.
##### Angular *group-guard* service
Consider `group-guard.service.ts`. Here, we check whether the token for the user has the `_claim_names` claim, which indicates that the user has too many group memberships. If so, we redirect the user to the `/overage` page, where we initiate a call to the MS Graph API's `https://graph.microsoft.com/v1.0/me/memberOf` endpoint to query the full list of groups that the user belongs to. Finally, we check for the designated `groupID` in this list.
```typescript
@Injectable({
  providedIn: 'root'
})
export class GroupGuardService implements CanActivate {

  constructor(private authService: MsalService, private graphService: GraphService, private router: Router) {}

  canActivate(route: ActivatedRouteSnapshot): boolean {
    const expectedGroup = route.data.expectedGroup;
    let account: Account = this.authService.instance.getAllAccounts()[0];
    this.graphService.user.displayName = account.idTokenClaims?.preferred_username!;

    if (this.graphService.user.groupIDs.includes(expectedGroup)) {
      return true;
    }

    if (account.idTokenClaims?.groups) {
      this.graphService.user.groupIDs = account.idTokenClaims?.groups;
    } else {
      if (account.idTokenClaims?._claim_names) {
        window.alert('You have too many group memberships. The application will now query Microsoft Graph to get the full list of groups that you are a member of.');
        this.router.navigate(['/overage']);
        return false;
      }
      window.alert('Token does not have groups claim');
      return false;
    }

    window.alert('You do not have access for this');
    return false;
  }
}
```
In `app-routing.module.ts`, we add **GroupGuardService** to the routes for which we want to check group membership:
```typescript
const routes: Routes = [
  {
    path: 'todo-edit/:id',
    component: TodoEditComponent,
    canActivate: [
      MsalGuard,
      GroupGuardService
    ],
    data: {
      expectedGroup: auth.groups.groupMember
    }
  },
  {
    path: 'todo-view',
    component: TodoViewComponent,
    canActivate: [
      MsalGuard,
      GroupGuardService
    ],
    data: {
      expectedGroup: auth.groups.groupMember
    }
  },
  {
    path: 'dashboard',
    component: DashboardComponent,
    canActivate: [
      MsalGuard,
      GroupGuardService,
    ],
    data: {
      expectedGroup: auth.groups.groupAdmin
    }
  },
];
```
#### .NET Core web API and how to handle the Overage Scenario
1. In `Startup.cs`, the `OnTokenValidated` event calls the **GetSignedInUsersGroups** method defined in `GraphHelper.cs` to process the groups overage claim.
```csharp
services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApi(options =>
    {
        Configuration.Bind("AzureAd", options);
        options.Events = new JwtBearerEvents();

        options.Events.OnTokenValidated = async context =>
        {
            if (context != null)
            {
                // Calls method to process groups overage claim.
                await GraphHelper.GetSignedInUsersGroups(context);
            }
        };
    }, options => { Configuration.Bind("AzureAd", options); })
    .EnableTokenAcquisitionToCallDownstreamApi(options => Configuration.Bind("AzureAd", options))
    .AddMicrosoftGraph(Configuration.GetSection("GraphBeta"))
    .AddInMemoryTokenCaches();
```
`AddMicrosoftGraph` registers the `GraphServiceClient` service. The values for `BaseUrl` and `Scopes` are defined in the `GraphBeta` section of **appsettings.json**, bound in the code above.
1. In `GraphHelper.cs`, the **GetSignedInUsersGroups** method checks whether the incoming token contains a *Group Overage* claim; if so, it calls the **ProcessUserGroupsForOverage** method to retrieve the groups.
```csharp
public static async Task GetSignedInUsersGroups(TokenValidatedContext context)
{
    // Checks if the incoming token contained a 'Group Overage' claim.
    if (HasOverageOccurred(context.Principal))
    {
        await ProcessUserGroupsForOverage(context);
    }
}
```
##### Group authorization policy
The ASP.NET Core middleware supports roles populated from claims by specifying the claim in the `RoleClaimType` property of `TokenValidationParameters`.
Since the `groups` claim contains the object IDs of the security groups rather than their actual names by default, you'd use the group IDs instead of group names. See [Role-based authorization in ASP.NET Core](https://docs.microsoft.com/aspnet/core/security/authorization/roles) for more info.
```csharp
// Startup.cs

// The following lines of code instruct the ASP.NET Core middleware to use the data in the
// "groups" claim in the [Authorize] attribute and for User.IsInRole().
// See https://docs.microsoft.com/aspnet/core/security/authorization/roles
services.Configure<OpenIdConnectOptions>(OpenIdConnectDefaults.AuthenticationScheme, options =>
{
    // Use the groups claim for populating roles
    options.TokenValidationParameters.RoleClaimType = "groups";
});

// Adding ASP.NET Core authorization policies that enforce authorization using Azure AD groups.
services.AddAuthorization(options =>
{
    options.AddPolicy(AuthorizationPolicies.AssignmentToGroupAdminGroupRequired, policy => policy.RequireRole(Configuration["Groups:GroupAdmin"]));
    options.AddPolicy(AuthorizationPolicies.AssignmentToGroupMemberGroupRequired, policy => policy.RequireRole(Configuration["Groups:GroupMember"]));
});

// In code (controllers and elsewhere):
[Authorize(Roles = "Group-object-id")] // In controllers
// or
User.IsInRole("Group-object-id"); // In methods
```
> :information_source: Did the sample not work for you as expected? Did you encounter issues trying this sample? Then please reach out to us using the [GitHub Issues](../../../issues) page.
## Debugging the sample
To debug the .NET Core web API that comes with this sample, install the [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
Learn more about using [.NET Core with Visual Studio Code](https://docs.microsoft.com/dotnet/core/tutorials/with-visual-studio-code).
## Community Help and Support
Use [Stack Overflow](http://stackoverflow.com/questions/tagged/msal) to get support from the community.
Ask your questions on Stack Overflow first and browse existing issues to see if someone has asked your question before.
Make sure that your questions or comments are tagged with [`azure-active-directory` `azure-ad-b2c` `ms-identity` `msal`].
If you find a bug in the sample, raise the issue on [GitHub Issues](../../../issues).
To provide feedback on or suggest features for Azure Active Directory, visit [User Voice page](https://feedback.azure.com/forums/169401-azure-active-directory).
## Contributing
If you'd like to contribute to this sample, see [CONTRIBUTING.MD](/CONTRIBUTING.md).
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
RSS/Atom subscriptions
======================
Kanboard supports RSS feeds for projects and users.
- Project feeds contain only the activity of the project
- User feeds contain the activity stream of all projects the user is a member of
These subscriptions are only activated when public access is enabled in the user profile or in the project settings.
Enable/disable project RSS feeds
--------------------------------
Go to **Project settings > Public access**.

Enable/disable user RSS feeds
--------------------------------
Go to **User profile > Public access**.
The RSS link is protected by a random token; only people who know the URL can access the feed.
---
description: This topic shows you how to launch the compose SMS dialog so that users can send an SMS message. You can pre-populate the fields of the SMS with data before showing the dialog. The message is not sent until the user taps the send button.
title: Send an SMS message
ms.assetid: 4D7B509B-1CF0-4852-9691-E96D8352A4D6
keywords: contacts, SMS, send
ms.date: 02/08/2017
ms.topic: article
ms.localizationpriority: medium
ms.openlocfilehash: ceaeffdb0d8b3207a95e981f16590fe8e9e12e46
ms.sourcegitcommit: 7b2febddb3e8a17c9ab158abcdd2a59ce126661c
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 08/31/2020
ms.locfileid: "89154654"
---
# <a name="send-an-sms-message"></a>Send an SMS message
This topic shows you how to launch the compose SMS dialog so that users can send an SMS message. You can pre-populate the fields of the SMS with data before showing the dialog. The message is not sent until the user taps the send button.
To call this code, declare the **chat**, **smsSend**, and **chatSystem** capabilities in your package manifest. These are [restricted capabilities](../packaging/app-capability-declarations.md#special-and-restricted-capabilities), but you can use them in your app. You only need approval if you intend to publish your app to the Store. For more information, see [Account types, locations, and fees](../publish/account-types-locations-and-fees.md).
## <a name="launch-the-compose-sms-dialog"></a>Launch the compose SMS dialog
Create a new [**ChatMessage**](/uwp/api/windows.applicationmodel.chat.chatmessage) object and set the data that you want to pre-populate in the compose SMS dialog. Call [**ShowComposeSmsMessageAsync**](/uwp/api/windows.applicationmodel.chat.chatmessagemanager.showcomposesmsmessageasync) to show the dialog.
```cs
private async void ComposeSms(Windows.ApplicationModel.Contacts.Contact recipient,
    string messageBody,
    StorageFile attachmentFile,
    string mimeType)
{
    var chatMessage = new Windows.ApplicationModel.Chat.ChatMessage();
    chatMessage.Body = messageBody;

    if (attachmentFile != null)
    {
        var stream = Windows.Storage.Streams.RandomAccessStreamReference.CreateFromFile(attachmentFile);

        var attachment = new Windows.ApplicationModel.Chat.ChatMessageAttachment(
            mimeType,
            stream);

        chatMessage.Attachments.Add(attachment);
    }

    var phone = recipient.Phones.FirstOrDefault<Windows.ApplicationModel.Contacts.ContactPhone>();
    if (phone != null)
    {
        chatMessage.Recipients.Add(phone.Number);
    }
    await Windows.ApplicationModel.Chat.ChatMessageManager.ShowComposeSmsMessageAsync(chatMessage);
}
```
You can use the following code to determine whether the device your app is running on can send SMS messages.
```csharp
if (Windows.Foundation.Metadata.ApiInformation.IsTypePresent("Windows.ApplicationModel.Chat"))
{
    // Call code here.
}
```
## <a name="summary-and-next-steps"></a>Summary and next steps
In this topic, you learned how to launch the compose SMS dialog. For information about selecting contacts as SMS recipients, see [Selecting contacts](selecting-contacts.md). Download the [Universal Windows app samples](https://github.com/Microsoft/Windows-universal-samples) from GitHub to see more examples of sending and receiving SMS messages by using a background task.
## <a name="related-topics"></a>Related topics
* [Selecting contacts](selecting-contacts.md)
68P: Free circulation goods, previously exported, now being entered to an excise warehouse with a simultaneous claim to Returned Goods Relief
------------------------------------------------------------------------------------------------------------------------------------------------
### Description of procedure:
Re-importation of free circulation goods to RGR with simultaneous entry to an approved excise warehouse.
### Goods covered:
This Additional Procedure Code must only be used when both of the following apply:
* Release for free circulation of re-imported goods in an unaltered state simultaneously claiming Returned Goods Relief (RGR, Article 203 EU Reg. No. 952/2013 (UCC)) upon payment of any customs duties where the goods were temporarily exported to a third country.
* Release of goods for free circulation simultaneously placed under a warehousing procedure other than customs warehousing (placed in an excise warehouse), where neither excise nor (when applicable) VAT, has been paid. The law which governs the storage of goods in an excise warehouse may be found in section 17 of Notice 197.
This Additional Procedure Code may also be used where the goods are being released from a customs warehouse for entry to an excise warehouse simultaneously claiming RGR.
### Conditions for use:
The goods must meet the conditions applicable to RGR and excise warehousing to use this Additional Procedure Code.
Any customs duties not relieved under RGR must be paid or accounted for under Article 195, EU Reg. No. 952/2013 (UCC), in order for the goods to be released to this procedure.
These notes must be read in conjunction with the appropriate completion notes for the Procedure Code used on the declaration (DE 1/10).
This additional procedure code is only to be used where the full conditions of both RGR and excise warehousing are met.
### Returned Goods Relief (RGR)
This Additional Procedure Code is used to claim duty (and where applicable VAT) relief under RGR with simultaneous release for free circulation of goods re-imported in an unaltered state, where the goods were temporarily exported to a third country (Article 203 EU Reg. No. 952/2013 (UCC)).
Any customs duties or other charges previously refunded at export must be repaid prior to the release of the goods.
Evidence must be available to demonstrate the Union status of the goods at their original export.
The MRN of the export declaration or C21e must be declared as a previous document in DE 2/1 using previous document code MRN.
Where alternative evidence is provided in lieu of an export declaration/ C21e MRN, the evidence used must:
* Have a reference number declared against code ZZZ in DE 2/1
* clearly identify the goods,
* confirm the physical export of those goods and
* their duty status at export.
The following may be accepted as a form of alternative evidence where it meets the above conditions:
* a document that proves the goods were previously in the EU
* a copy of the export invoice
* a copy of the export airway bill or bill of lading
* a commercial certificate of shipment prepared at the time of export
* a certificate of posting relating to the export of the goods
* a copy of the import invoice if it clearly shows that the goods are being returned
* a suitable statement from the manufacturer or exporter if other than yourself
* a preferential origin form EUR1 in certain cases, contact our helpline on Telephone: 0300 200 3700 for further details
* in the case of collectable items, catalogue information or qualified opinion from collectors' or auction houses
* stock record book.
Goods must return in an unaltered state to the Union within 3 years of export (unless a waiver has been granted). A period of 6 years is allowed for Crown Servants.
No changes may be made to the goods, other than to maintain them in the same working condition as they were exported.
No changes may be made to the goods that will increase the value or upgrade the specification.
In order to be eligible for VAT relief at re-import, the legal declarant at export and re-import must be the same entity (the EORI number or named person shown in DE 3/1 or DE 3/2 at export and DE 3/15 or DE 3/16 at import must be the same).
VAT relief cannot be claimed on their removal from the excise warehouse if the goods were sold while outside the European Union unless the specific conditions in Notice 236 are met.
Where goods are being released from a customs warehouse with a simultaneous claim to RGR, DE 2/2 (PREMS AI Code), DE 2/3, and DE 3/39 should be completed with the relevant customs warehouse details.
Where goods are being released to RGR from a customs warehouse, a Customs Warehousing (CW) authorisation is needed to use this additional procedure code, see conditions and requirements detailed in Notice 3001.
The MRN of this declaration must be provided to the customs warehousekeeper for their records as evidence that the goods have been entered to a customs procedure.
Please see Notice 236 for additional conditions for eligibility for RGR.
### Excise Warehousing:
This Additional Procedure Code is used where the goods are released for free circulation but where excise duties (and where applicable VAT) are suspended by entering them into an approved excise warehouse.
These notes must be read in conjunction with the appropriate completion notes for the Procedure Code used on the declaration (DE 1/10).
### Restrictions on usage:
Goods may only use this Additional Procedure code where the goods are claiming a suspension of excise duty on entry to an approved excise warehouse.
Any excise duty suspended will be payable on their removal from the excise warehouse.
Evidence of eligibility for customs duty relief under RGR must be held.
Goods must not have been exported for the purpose of repair or process.
Any revenue re-claimed at export must be re-paid at import (for example, VAT zero-rated for export must be paid at re-import on removal from the excise warehouse).
The goods must use an appropriate Requested Procedure Code in the 07 series and comply with the full terms and conditions of excise warehousing and the associated previous procedure concerned (as applicable to the DE 1/10 Procedure Code being used).
The goods must be entered to an approved excise warehouse, registered tobacco store or other registered premises, declared in DE 2/7 (Identification of Warehouse) of the declaration, with suspension of excise duty and where applicable, VAT.
Release for free circulation is dependent on the application of customs formalities relating to the release of the goods. This includes the application of commercial policy measures, including any prohibitions or restrictions (Article 201, EU Reg. No. 952/2013 (UCC)).
This Additional Procedure Code cannot be used with Entry in Declarant's Records (EIDR).
### Notices:
Please refer to [Notice 236: Returned Goods Relief](https://www.gov.uk/government/publications/notice-236-returned-goods-relief) for details of the full conditions of RGR which must be met in order to use this Additional Procedure Code.
Information can be found on Excise suspense regimes on Gov.UK:
[Excise Notice 179: motor and heating fuels - general information and accounting for excise duty and VAT](https://www.gov.uk/guidance/motor-and-heating-fuels-general-information-and-accounting-for-excise-duty-and-vatexcise-notice-179).
[Excise Notice 179a: aviation turbine fuel](https://www.gov.uk/government/publications/excise-notice-179a-aviation-turbine-fuel).
[Excise Notice 179e: biofuels and other fuel substitutes](https://www.gov.uk/government/publications/excise-notice-179e-biofuels-and-other-fuel-substitutes).
[Excise Notice 196: excise goods - registration and approval of warehousekeepers, warehouse premises, owners of goods and registered consignors](https://www.gov.uk/government/publications/excise-notice-196-excise-goods-registration-and-approval-of-warehousekeepers-warehouse-premises-owners-of-goods-and-registered-consignors).
[Excise Notice 197: receipt into and removal from an excise warehouse of excise goods](https://www.gov.uk/guidance/receive-goods-into-and-remove-goods-from-an-excise-warehouse-excise-notice-197).
[Excise Notice 476: Tobacco Products Duty](https://www.gov.uk/government/publications/excise-notice-476-tobacco-products-duty).
Information on Customs Warehousing can be found on Gov.UK:
[Notice 3001: customs special procedures for the Union Customs Code](https://www.gov.uk/government/publications/notice-3001-special-procedures-for-the-union-customs-code)
### Specific fields in the declaration/notes on completion:
Please refer to the full completion rules in [Appendix 1: DE 1/10: Requested and Previous Procedure Codes](https://www.gov.uk/government/publications/appendix-1-de-110-requested-and-previous-procedure-codes-of-the-customs-declaration-service-cds) for the specific completion instructions for the requested and previous procedure being used in the 07 series. These notes only cover any additional RGR requirement.
### Additional Information (DE 2/2):
The following AI statements may be required in DE 2/2:
| Coverage | AI statement code | Details to be declared |
| --- | --- | --- |
| Relief claimed under Returned Goods Relief. Waiver of time limit claimed. | GEN03 | Enter 'Waiver of time limit claimed'. |
| Relief claimed under Returned Goods Relief. RGR time limit extension to 6 years claimed by Crown Servants.<br>Note: Crown Servants include diplomatic staff, armed forces, embassy and consular personnel. See Customs Information Paper 2017, no. 28 for details. | GEN3C | Enter 'RGR 6-year time limit claimed: Crown Servant'. |
| Consignment/work number. | GEN45 | Enter the unique reference number, allocated to the consignment/work by the authorisation holder, followed by 'Returned goods', which will identify that the goods are eligible for RGR and that no further customs documentation is required for release. |
| Duty calculation override.<br>Note: this code is only to be used where the amount of duty payable is being manually calculated, as required by this Additional Procedure Code.<br>See CDS Volume 3 Import Declaration Completion for details on how to declare the tax lines in DE 4/4 – 4/7 when code OVR01 is used. | OVR01 | Enter 'Duty override claimed' followed by a plain text description of the reason for the override.<br>For example: Duty override claimed RGR. |
| Code used to declare that the pallets or containers being re-imported by (foreign consignor's name) were previously in free circulation in the EU and:<br>• Are owned by the importer, being returned within 3 years of the original export.<br>• Are being returned to the importers by whom, or on whose behalf, they were previously exported, for import free of duty.<br>• Are being returned to the importers who originally declared the goods to export, for import free of VAT.<br>• That the VAT and any customs duty previously chargeable on the pallets or containers or material used in their manufacture has been accounted for and not later refunded.<br>• That the pallets or containers are eligible for RGR. | PAL05 | Enter 'RGR Pallets or Containers' followed by:<br>• The name of the Exporter from the declaration or clearance request that was used to originally export the goods.<br>• The date of the original export declaration or clearance request. |
| Premises Name and Address.<br>Note: If the premises code in DE 2/7 (Identification of Warehouse) is that of a UK allocated warehouse, do not complete a PREMS AI Statement. | PREMS | Enter the full name, address and country of the warehouse where the goods can be examined.<br>Enter the Premises Country Code as a suffix to the Premises Name and Address, separated by '-'. |
### Documents produced, certificates and authorisations, additional references (DE 2/3):
The specific document code references detailed below should be declared (as applicable).
Enter the following for the INF3 form (Form C&E 1158):
An INF3 is only required where RGR triangulation applies (see Notice 236, section 2.3 for details).
| Document code | Document identifier | Document status |
| --- | --- | --- |
| C605 | Enter the INF3 reference number | Use status code AC if certification is required otherwise use status code AE. |
Additional RGR national document codes which may be required:
Enter the following for the C1314 (RGR claim form):
| Document code | Document identifier | Document status |
| --- | --- | --- |
| 1314 | Enter the MRN of the export declaration, C21e or reference number of the alternative evidence the RGR claim relates to | The document status code as AC, AE, AF, AG, AP, AS, AT, JA, JE, or JP as appropriate. |
Enter the following for the C&E1246 (RGR duplicate lists):
| Document code | Document identifier | Document status |
| --- | --- | --- |
| 1246 | Enter the MRN of the export declaration, C21e or reference number of the alternative evidence the RGR claim relates to | The document status code as AC, AE, AF, AG, AP, AS, AT, JA, JE, or JP as appropriate. |
Customs Warehouse authorisation details (where applicable):
| Document code | Document identifier: country code | Document identifier: authorisation type code | Document identifier: authorisation number |
| --- | --- | --- | --- |
| C517 | e.g. GB | CWP (Private Customs Warehouse) | The customs warehouse authorisation number. |
| C518 | e.g. GB | CW1 (public customs warehouse type 1) | The customs warehouse authorisation number. |
| C519 | e.g. FR | CW2* (public customs warehouse type 2) | The customs warehouse authorisation number. |
*Please note: CW2 must not be used with GB.
### Security required:
–
### VAT:
VAT will be payable on the goods' removal from the excise warehouse, unless specifically relieved under RGR.
Use of this procedure code suspends VAT, which must be accounted for, if applicable, upon removal from the excise warehouse.
### Excise:
RGR does not provide relief from excise duty and any excise duty suspended under this Additional Procedure Code must be accounted for on removal from the excise warehouse.
### Post clearance action:
–
### Notes
Where RGR is being claimed on re-imported IP goods, Additional Procedure Codes F04 or F07 must be used instead of 63P.
Where RGR is being claimed on goods being re-imported and placed back under the end use procedure, Additional Procedure Code 1RL must be used instead of 63P.
Additional Procedure Code 68P can only be used with Requested and Previous Procedure code (DE 1/10): 0700, 0771.
### Additional notes:
The use of this Additional Procedure Code is a declaration by the importer that the conditions laid down in Council Regulation (EU) No. 952/2013, Commission Delegated Regulation (EU) No. 2015/2446 and Commission Implementing Regulation (EU) No. 2015/2447 are met.
* Entry under this procedure code is an undertaking by the importer/agent to pay to the Commissioners of HMRC immediately on demand any duties or other charges due in respect of the goods in question if the conditions of the relief aren't met.
* Agents must have prior written approval from the importer to enter goods to RGR on their behalf and ensure a copy of the declaration is returned to the holder.
* [Notice 236: Returned Goods Relief](https://www.gov.uk/government/publications/notice-236-returned-goods-relief) defines what is meant by 'Goods in free circulation'.
* All supporting materials (for example rope, wood and plastic) used for the storage and protection of the goods are also eligible for relief.
* Evidence of export and free circulation status, as described in [Notice 236: Returned Goods Relief](https://www.gov.uk/government/publications/notice-236-returned-goods-relief), must be held except where the export declaration reference is quoted in DE 2/1 (Previous Documents).
Entry under this Procedure Code is a declaration that:
* The goods are eligible to claim RGR.
* Any additional security which may be needed will be provided.
* All other conditions and requirements associated with claiming RGR have been met.
* The goods were in free circulation when previously exported from the customs territory of the EU or a territory with which the EU has formed a customs union.
* The goods are being entered for free circulation within 3 years of them being exported from that territory unless a valid waiver is claimed. A period of 6 years is allowed for Crown Servants.
* If the waiver of the 3-year time limit for reimportation is claimed due to special circumstances, insert 'Waiver of time limit claimed' in DE 2/2 as a GEN03 AI Statement. This waiver must be shown to be reasonable to HMRC when requested.
* The goods haven't undergone any process or repair outside that territory other than routine maintenance to keep them in good condition, or handling which only altered the goods' appearance, for example attaching operating instructions in foreign languages.
* If claiming VAT relief on removal from the excise warehouse, it is a further declaration that:
+ the goods were exported from the EU by the importer or on their behalf.
+ any VAT due on the goods was paid and not refunded on export from the EU.
### Additional documents needed:
–
% 23. Reader
[UP](index.html)
---
23. Reader
- [23.2. The Reader Dictionary](23.2.html)
---
[TOP](index.html), [Github](https://github.com/nptcl/npt-japanese)
---
title: "Publications"
sitemap:
priority : 0.6
---
A collection of articles, papers, presentations or talks, most likely on Development, Project Management and DevOps, because let's admit it, they are one and the same ;)
## Articles & Papers
__Coming Soon__
## Patents
### Event identification in sensor analytics
__Issued: Dec 31, 2013__
[_US 8620624B2_](https://patents.google.com/patent/US8620624B2/en)
#### Description
A method of detecting an event anomaly includes receiving one or more data points, in which each data point represents a spatial or temporal event; associating a unique identifier with each of the one or more data points to obtain one or more individualized data points; distributing the one or more individualized data points across a grid, in which the grid includes one or more cells; determining an event likelihood ratio for one or more of the grid cells; identifying one or more event clusters, in which each event cluster includes one or more of the grid cells; and storing in a data repository an event cluster having a significance level above a threshold significance level.
#### Other inventors
Greg Skibiski, Tony Jebara, Christine Lemke, Markus Loecher, Girish Rao, Jason Uechi, and 2 others
[See patent](https://patents.google.com/patent/US8620624B2/en)
### System and Method of Performing Location Analytics
__Issued: Mar 3, 2012__
[_us 20120071175_](https://patents.google.com/patent/US20120071175/en)
#### Description
A mobile terminal, a log information supplying method using the same, a detection system for a web platform, and a detection method using the same are provided to supply a detection result about the operation of a platform by obtaining log information regardless of the diverse platforms. CONSTITUTION: A log information manager(110) generates log data about state information of a web platform or web application executing information. A log message generator(150) generates a log message including a body and a header. A mobile communication terminal transmits the log message to a monitoring server.
#### Other inventors
Greg Skibiski, Tony Jebara, Christine Lemke, Girish Rao, Jason Uechi, Markus Loecher
[See patent](https://patents.google.com/patent/US20120071175/en)
### Anomaly Detection in Sensor Analytics
__Issued: Apr 1, 2010__
[_us 20100082301_](https://patents.google.com/patent/US20100082301/en)
#### Description
A method of detecting an event anomaly includes receiving one or more data points, in which each data point represents a spatial or temporal event, associating a unique identifier with each of the one or more data points to obtain one or more individualized data points, distributing the one or more individualized data points across a grid, in which the grid includes one or more cells, determining an event likelihood ratio for one or more of the grid cells, identifying one or more event clusters, in which each event cluster includes one or more of the grid cells, and storing in a data repository an event cluster having a significance level above a threshold significance level.
#### Other inventors
Markus Loecher, Tony Jebara, Christine Lemke, Alex 'Sandy' Pentland, Greg Skibiski, David Rosenberg, Girish Rao, and 2 others
[See patent](https://patents.google.com/patent/US20100082301/en)
### Comparing Spatial-Temporal Trails In Location Analytics
__Issued: Apr 1, 2010__
[_us 20100079336_](https://patents.google.com/patent/US20100079336/en)
#### Description
Systems and computer implemented methods are provided for comparing, associating and deriving associations between two or more spatial temporal data trails. One or more spatial-temporal data trails comprising one or more places are received at a processor. Each place is identified by a spatial temporal data point. And each spatial-temporal data trail is associated with an individual. The similarity between pairs of places is determined to establish one or more groups of places or one or more groups of individuals. Similarity and/groups can be determined based on demographics associated with the place or individual.
#### Other inventors
Markus Loecher, Tony Jebara, Alex 'Sandy' Pentland, Christine Lemke, Greg Skibiski, David Rosenberg, Girish Rao, and 2 others
[See patent](https://patents.google.com/patent/US20100079336/en)
### System and Method of Performing Location Analytics
__Issued: Dec 10, 2009__
[_us 20090307263_](https://patents.google.com/patent/US20090307263/en)
#### Description
A system and method are provided for associating location data from one or more unique sources. The place and time of a unique location enabled device are associated with stored demographic information relating to the particular place and particular time. The place and time of the unique location enabled device are associated with a historical record of past locations and time of locations that the device has been. Based on the association of demographical information and historical information, the unique location enable device is assigned to one or more groups or tribes. The location of all members of the group or tribe can be aggregated and exported for further analysis or display, thereby showing all group or tribe members at a particular time and place.
#### Other inventors
Girish Rao, Markus Loecher, Alex 'Sandy' Pentland, Tony Jebara, Christine Lemke, Greg Skibiski, Jason Uechi, Blake Shaw
[See patent](https://patents.google.com/patent/US20090307263/en)
| 61.681818 | 768 | 0.801216 | eng_Latn | 0.995277 |
26897a71883d7ad0ad42526e61653ea461aa1f27 | 210 | md | Markdown | README.md | usarskyy/ngx-input-number | 5077adbbb1eda54e1653af010f64b1e52a7eecc2 | [
"MIT"
] | null | null | null | README.md | usarskyy/ngx-input-number | 5077adbbb1eda54e1653af010f64b1e52a7eecc2 | [
"MIT"
] | null | null | null | README.md | usarskyy/ngx-input-number | 5077adbbb1eda54e1653af010f64b1e52a7eecc2 | [
"MIT"
] | null | null | null | # NgxInputNumber
An Angular directive only allows [0-9] and the feature of decimal numbers in the input box when typing, pasting or drag/dropping. This directive handles both Windows keyboard and Mac keyboard. | 70 | 192 | 0.809524 | eng_Latn | 0.999366 |
2689c46f32af38d59e77658d186e58447dc95c41 | 1,810 | md | Markdown | content/blog/seek-forum-for-arbitration-with-best-prospect-of-achieving-and-enforcing-successful-contract/index.md | sahilkanaya/clausehound-blog | f820b8e26ace29ae635d86c122529695a6647d10 | [
"MIT"
] | 1 | 2020-05-31T22:57:06.000Z | 2020-05-31T22:57:06.000Z | content/blog/seek-forum-for-arbitration-with-best-prospect-of-achieving-and-enforcing-successful-contract/index.md | JaninaFe/clausehound-blog | 80b09f7adf619bca44008e1a8586159dd4274abc | [
"MIT"
] | null | null | null | content/blog/seek-forum-for-arbitration-with-best-prospect-of-achieving-and-enforcing-successful-contract/index.md | JaninaFe/clausehound-blog | 80b09f7adf619bca44008e1a8586159dd4274abc | [
"MIT"
] | null | null | null | ---
title: "Seek Forum for Arbitration with Best Prospect of Achieving and Enforcing Successful Contract"
author: rajah@cobaltcounsel.com
tags: ["Dispute Resolution","Governing Law","Dispute Resolution","Rajah"]
date: 2015-07-25 00:00:00
description: "Links from this article:Read the article here.While international arbitration seeks to aid in the legal predictability and stability of inter..."
---
**Links from this article:**
[Read the article here.](http://www.lexology.com/library/detail.aspx?g=5fe9b5e8-05ea-43aa-848d-2d620a10745c)
While international arbitration seeks to aid in the legal predictability and stability of international contracts by providing for the neutral, impartial, centralized and enforceable resolution of disputes arising out of international contracts, parties should always seek to negotiate the forum where they have the best prospect of achieving and enforcing success. Parties should consider the best combination of governing law, arbitral institution and treaties applicable to their transaction.
Parties should also consider whether protection in the form of a bilateral or multilateral trade agreement is available. One innovation of the BIT for example, has been to provide foreign investors with the ability to directly seek recourse against the host state by international investment arbitration. With careful negotiation, it is possible to structure a deal, and the dispute resolution mechanisms, to provide for the same level of protection for international deals as can be found in a BIT for investment deals.
[Read the article here.](http://www.lexology.com/library/detail.aspx?g=5fe9b5e8-05ea-43aa-848d-2d620a10745c)**Take away:**
- Careful choice of jurisdiction and seat of arbitration is essential when structuring an international arbitration agreement. | 106.470588 | 520 | 0.816575 | eng_Latn | 0.996471 |
268a328070776520180ced3f17a67c2f8e4ebb65 | 1,746 | md | Markdown | _posts/2018-03-04-weekly-roundup.md | lingwhatics/lingwhatics.github.io | 7fa3fbf42bc0d9fc9a28a8acca7c89636794874b | [
"MIT"
] | null | null | null | _posts/2018-03-04-weekly-roundup.md | lingwhatics/lingwhatics.github.io | 7fa3fbf42bc0d9fc9a28a8acca7c89636794874b | [
"MIT"
] | null | null | null | _posts/2018-03-04-weekly-roundup.md | lingwhatics/lingwhatics.github.io | 7fa3fbf42bc0d9fc9a28a8acca7c89636794874b | [
"MIT"
] | null | null | null | ---
layout: post
title: Weekly roundup February 26 - March 4
date: 2018-03-04
type: post
published: true
status: publish
categories:
- link
tags: []
---
### Articles/Columns
[Why we should talk about masculinity more often](https://www.macleans.ca/opinion/why-we-should-talk-about-masculinity-more-often/ "Why we should talk about masculinity more often. By Tabatha Southey") *Maclean's*
[Why Paper Jams Persist](https://www.newyorker.com/magazine/2018/02/12/why-paper-jams-persist "Why Paper Jams Persist. By Joshua Rothman") *The New Yorker*
[MUNCHIES in North Korea: Tasting the Legendarily Terrible Koryo Burger](https://munchies.vice.com/en_us/article/vvqbbd/the-legend-of-north-koreas-mystery-meat-koryo-burger "MUNCHIES in North Korea: Tasting the Legendarily Terrible Koryo Burger. By Jamie Fullerton") *Vice*
[A little girl in Toronto lost to history – and now found](https://www.theglobeandmail.com/news/a-little-girl-in-toronto-lost-to-history-and-nowfound/article38198028/ "A little girl in Toronto lost to history – and now found. By Chris Bateman") *The Globe and Mail*
### Tweet
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">im at work crying at this video of a cat watching a horror movie <a href="https://t.co/kfwd0G1HKY">pic.twitter.com/kfwd0G1HKY</a></p>— ZORAH SECURE DA BAGDAROS (@METALTEARSOLID) <a href="https://twitter.com/METALTEARSOLID/status/969315863236001792?ref_src=twsrc%5Etfw">March 1, 2018</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
### Video
<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/If-n27SUAl8?rel=0" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
---
title: Usando o escopo de edição
ms.date: 03/30/2017
ms.assetid: 79306f9e-318b-4687-9863-8b93d1841716
ms.openlocfilehash: 268849c584c235a21a0818baa60f119cf8e49305
ms.sourcegitcommit: 3c1c3ba79895335ff3737934e39372555ca7d6d0
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/05/2018
ms.locfileid: "43749042"
---
# <a name="using-editing-scope"></a>Using editing scope
This sample demonstrates how to batch a set of changes so that they can be undone as a single atomic unit. By default, the actions performed by an activity designer author are automatically integrated into the undo/redo system.
## <a name="demonstrates"></a>Demonstrates
Editing scope, and undo and redo.
## <a name="discussion"></a>Discussion
This sample demonstrates how to batch a set of changes to the <xref:System.Activities.Presentation.Model.ModelItem> tree into a single unit of work. Note that when binding to <xref:System.Activities.Presentation.Model.ModelItem> values directly from a WPF designer, changes are applied automatically. This sample demonstrates what must be done when several changes that are to be grouped together are made with imperative code, rather than a single change.
In this sample, three activities are added. When editing begins, <xref:System.Activities.Presentation.Model.ModelItem.BeginEdit%2A> is called on an instance of <xref:System.Activities.Presentation.Model.ModelItem>. Changes made to the <xref:System.Activities.Presentation.Model.ModelItem> tree within this editing scope are grouped together. The call to <xref:System.Activities.Presentation.Model.ModelItem.BeginEdit%2A> returns an <xref:System.Activities.Presentation.Model.EditingScope>, which can be used to control this instance. <xref:System.Activities.Presentation.Model.EditingScope.OnComplete%2A> or <xref:System.Activities.Presentation.Model.EditingScope.OnRevert%2A> can be called to commit or revert the editing scope.
You can also nest <xref:System.Activities.Presentation.Model.EditingScope> objects, which allows multiple sets of changes to be tracked as part of a larger editing scope and to be controlled individually. One scenario that might use this feature is when the changes from several dialog boxes must be added or reverted separately, while being treated as a single atomic operation. In this sample, the editing scopes are stacked using an <xref:System.Collections.ObjectModel.ObservableCollection%601> of type <xref:System.Activities.Presentation.Model.ModelEditingScope>. <xref:System.Collections.ObjectModel.ObservableCollection%601> is used so that the nesting depth can be observed on the designer surface.
## <a name="to-set-up-build-and-run-the-sample"></a>To set up, build, and run the sample
1. Build and run the sample, and then use the buttons on the left to change the workflow.
2. Click **Open editing scope**.
    1. This command calls <xref:System.Activities.Presentation.Model.ModelItem.BeginEdit%2A>, which creates an editing scope and pushes it onto the editing stack.
    2. Three activities are added to the selected <xref:System.Activities.Presentation.Model.ModelItem>. Note that if the editing scope had not been opened with <xref:System.Activities.Presentation.Model.ModelItem.BeginEdit%2A>, the three new activities would appear on the designer canvas. Because this operation is still pending within the <xref:System.Activities.Presentation.Model.EditingScope>, the designer is not yet updated.
3. Press **Next editing scope** to commit the editing scope. The three activities appear in the designer.
> [!IMPORTANT]
> The samples may already be installed on your computer. Check for the following (default) directory before continuing.
>
> `<InstallDrive>:\WF_WCF_Samples`
>
> If this directory does not exist, go to [Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) samples for .NET Framework 4](https://go.microsoft.com/fwlink/?LinkId=150780) to download all Windows Communication Foundation (WCF) and [!INCLUDE[wf1](../../../../includes/wf1-md.md)] samples. This sample is located in the following directory.
>
> `<InstallDrive>:\WF_WCF_Samples\WF\Basic\CustomActivities\CustomActivityDesigners\UsingEditingScope`
---
# Documentation: https://sourcethemes.com/academic/docs/managing-content/
title: "Non-Interactive Proof Generation from Interactive Zero Knowledge Protocols"
summary: "Efficient implementations of interactive and non-interactive versions of Ligero-like protocols."
authors: [Guide - Dr Dhinakaran Vinayagamurthy, Dr. Nitin Singh]
tags: [Technical]
categories: [ZKP]
date: 2020-05-04T21:29:05+05:30
# Optional external URL for project (replaces project detail page).
external_link: ""
# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
caption: ""
focal_point: ""
preview_only: false
# Custom links (optional).
# Uncomment and edit lines below to show custom links.
links:
- name: Project repository
url: https://github.com/rahulbs98/Project_IBM
icon_pack: fab
icon: github
url_code: ""
url_pdf: ""
url_slides: ""
url_video: ""
# Slides (optional).
# Associate this project with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. `slides = "example-slides"` references `content/slides/example-slides.md`.
# Otherwise, set `slides = ""`.
slides: ""
---
**Work:**
- Designed a modular framework for Interactive Zero Knowledge Protocols, which was used to convert them into non-interactive protocols.
- Implemented additional features for the design to support oracles, protocol composition, etc., and tested existing protocols like Ligero on it.
**An abstraction of the implementation:**
 | 33.833333 | 144 | 0.751847 | eng_Latn | 0.844669 |
---
title: Requesting Excel workbook data from SharePoint Server using OData
ms.prod: SHAREPOINT
ms.assetid: 2f846e96-6c9e-4ed2-9602-4081ad0ab135
---
# Requesting Excel workbook data from SharePoint Server using OData
> [!NOTE]
> The Excel Services REST API applies to SharePoint 2013 and SharePoint 2016 on-premises. For Office 365 Education, Business, and Enterprise accounts, use the Excel REST APIs that are part of the [Microsoft Graph](http://graph.microsoft.io/en-us/docs/api-reference/v1.0/resources/excel) endpoint.
OData uses URLs to request information from a resource. You craft the URL in a specific way, using query options, to return the information that you are requesting. The following URL shows what a typical OData request looks like.
http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$top=20
This sample OData request is structured so that it gets the first 20 rows from a table named Table1 in the workbook ProductSales.xlsx, which is stored in the Documents folder on the contoso1 server. The URL uses the system query option **$top** to specify the number of rows to return. Looking closely at the URL, you can see its three-part structure: the service root URI; the resource path; and the query options.
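The split can be expressed in code. The following Python sketch is illustrative only; the `split_odata_url` helper is our own and not part of the Excel Services API. It recovers the three parts from a request URL:

```python
def split_odata_url(url):
    """Split an Excel Services OData request URL into its three parts:
    the service root URI, the resource path, and the query options."""
    # The fixed REST entry point marks the end of the service root.
    marker = "/_vti_bin/ExcelRest.aspx"
    root_end = url.index(marker) + len(marker)
    service_root = url[:root_end]
    # Everything after the service root, up to '?', is the resource path;
    # anything after '?' is the query options string.
    resource_path, _, query_options = url[root_end:].partition("?")
    return service_root, resource_path, query_options

parts = split_odata_url(
    "http://contoso1/_vti_bin/ExcelRest.aspx"
    "/Documents/ProductSales.xlsx/OData/Table1?$top=20"
)
print(parts)
# ('http://contoso1/_vti_bin/ExcelRest.aspx',
#  '/Documents/ProductSales.xlsx/OData/Table1', '$top=20')
```

Each of the three parts is described in the sections that follow.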
## Service root URI
The initial part of the URL is called the service root and remains the same for every OData request that you make to a SharePoint 2013 server except for the name of the server. It includes the name of the SharePoint server where the workbook is stored and the path, _vti_bin/ExcelRest.aspx, as in the following example.
http://contoso1/_vti_bin/ExcelRest.aspx
## Resource path
The second part of the URL has the path to the Excel workbook and specifies that this is an OData request.
/Documents/ProductSales.xlsx/OData
## System query options
The third part of the URL gives the system query options for the request. Query options always begin with a dollar sign ($) and are appended to the end of the URI as query parameters. In this example, the request is for the first 20 rows in the table named Table1.
/Table1?$top=20
System query options provide a way to get specific data from a resource. The Excel Services OData implementation supports a number of query options as listed in the following section.
## System query options
<a name="xlsSystemQueryOptions"> </a>
The Excel Services implementation of OData supports a number of the standard OData system query options. These query options are at the heart of OData requests since you use the options to specify what data you want to get from a resource. The following table lists the system query options that Excel Services implementation of OData currently supports.
**Table 1. Excel Services OData system query options**
|**System Query Option**|**Description**|
|:-----|:-----|
|`/<tableName>`|Returns all rows for the table specified by `<tableName>`, where `<tableName>` is the name of a table in an Excel workbook that contains the rows that you want to retrieve. > [!IMPORTANT] > This form of OData request returns no more than 500 rows at a time. Each set of 500 rows is one page. To get rows in further pages in a table that has more than 500 rows, use the **$skiptoken** query option (see below). The following example returns all rows up to the 500th row in Table1 in the ProductSales.xlsx workbook. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1 |
|**$metadata**|Returns all the available tables and the type information for all rows in each table in the specified workbook. The following example returns the tables and type information for the tables in the ProductSales.xlsx workbook. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/$metadata |
|**$orderby**|Returns rows in the specified table, sorted by the value specified by **$orderby**. The following example returns all rows from Table1, sorted by the Name column, in the ProductSales.xlsx workbook. > [!NOTE] > The default value for **$orderby** is ascending. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$orderby=Name |
|**$top**|Returns N rows from the table where N is a number specified by the value of **$top**. The following example returns the first 5 rows from Table1, sorted by the Name column, in the ProductSales.xlsx workbook. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$orderby=Name&$top=5 |
|**$skip**|Skips N rows, where N is the number specified by the value of **$skip**, and then returns the remaining rows of the table. The following example returns all remaining rows after the fifth row from Table1 in the ProductSales.xlsx workbook. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$skip=5 |
|**$skiptoken**|Seeks to the Nth row, where N is the row ordinal position indicated by the value of **$skiptoken**, and then returns all remaining rows, beginning at row N + 1. The collection is zero-based, so the second row, for example, is indicated by $skiptoken=1. The following example returns all remaining rows after the second row from Table1 in the ProductSales.xlsx workbook. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$skiptoken=1 You can also use the **$skiptoken** query option to get rows in pages after the first page from a table that contains more than 500 rows. The following example shows how to get the 500th row and greater from a table with more than 500 rows.http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$skiptoken=499 |
|**$filter**|Returns the subset of rows that satisfy the conditions specified in the value of **$filter**. For more information about the operators and set of functions that you can use with **$filter**, see the OData [documentation](http://www.odata.org/documentation/odata-version-2-0/uri-conventions/). The following example returns only those rows where the value of the Price column is greater than 100. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$filter=Price gt 100 |
|**$format**|The Atom XML format is the only supported value and is the default for the **$format** query option.|
|**$select**|Returns the entity specified by **$select**. The following example selects the Name column from Table1 in the ProductSales.xlsx workbook. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$select=Name |
|**$inlinecount**| Returns the number of rows in the specified table. **$inlinecount** can use one of the following two values. **allpages** - Returns the count for all rows in the table. **none** - Does not include a count of rows in the table. The following example returns the count for the total number of rows in Table1 in the ProductSales.xlsx workbook. http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$inlinecount=allpages|
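For illustration, the three-part structure described above can be assembled programmatically. The following Python sketch is not part of Excel Services; the server, library, workbook, and table names are the placeholders used in the examples above:

```python
from urllib.parse import quote

def build_odata_url(server, workbook_path, table, **options):
    # Service root: the same for every OData request to a given server.
    service_root = f"http://{server}/_vti_bin/ExcelRest.aspx"
    # Resource path: the workbook location plus the OData marker and table name.
    resource_path = f"/{quote(workbook_path)}/OData/{table}"
    # System query options: each option name begins with a dollar sign.
    query = "&".join(f"${name}={value}" for name, value in options.items())
    return service_root + resource_path + (f"?{query}" if query else "")

print(build_odata_url("contoso1", "Documents/ProductSales.xlsx", "Table1", top=20))
# http://contoso1/_vti_bin/ExcelRest.aspx/Documents/ProductSales.xlsx/OData/Table1?$top=20
```

Passing `top=20` reproduces the sample request from the beginning of this article; other system query options such as `skip` or `orderby` can be supplied the same way.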
## Additional resources
<a name="xlsAdditionalResources"> </a>
- [Using OData with Excel Services REST in SharePoint 2013](using-odata-with-excel-services-rest-in-sharepoint-2013.md)
- [What's new in Excel Services for developers](http://msdn.microsoft.com/library/09e96c8b-cb55-4fd1-a797-b50fbf0f9296.aspx)
- [OData specification documentation](http://www.odata.org)
---
title: "The weekend warrior"
date_published: "2018-05-06"
---

There's a special breed of workers called the weekend warriors.
They don't necessarily hate their jobs, although many do. Some might just be in need of a break.
Weekend warriors look forward to the end of the week. No, that's not Saturday. It's Friday.
Friday. When the work uniform is still on and feels perfect for a night out. Dressed up for free.
Drinks, food, chatter, and laughter. It feels right, and it feels earned.
But life is more than just two days a week.
We ought to make the other five feel right. Lest we forget, we already sleep about half of it away.
Earn it.
# rs\_marketing\_channel\_rulesets
|Name|Type|Description|
|----|----|-----------|
| **rsid** | `xsd:string` |A report suite ID.|
| **marketing\_channel\_rules** | [marketing\_channel\_ruleset\_array](r_marketing_channel_ruleset_array.md#) | An array of [marketing\_channel\_ruleset](r_marketing_channel_ruleset.md#). |
**Parent topic:** [Data Types](../data_types/c_datatypes.md)
+++
Talk_date = ""
Talk_start_time = ""
Talk_end_time = ""
Title = "Sensory Friendly Monitoring - Keeping the Noise Down"
Type = "talk"
Speakers = ["quintessence-anx"]
+++
The ability to monitor infrastructure has been exploding, with new tools on the market and new integrations so the tools can speak to one another, leading to even more tools and to a hypothetically very loud monitoring environment in which various members of the engineering team find themselves muting channels, individual alerts, or even alert sources so they can focus long enough to complete other tasks. There has to be a better way: a way to configure comprehensive alerts that send out notifications with the appropriate level of urgency to the appropriate persons at the appropriate time. And in fact there is: during this talk I'll be walking through different alert patterns and discussing what we need to know, who needs to know it, and how soon and how often they need to know it.
<a href="https://assets.devopsdays.org/events/2019/toronto/QuintessenceAnx_Sensory_Lg.jpg" target="_blank"><img src="https://assets.devopsdays.org/events/2019/toronto/QuintessenceAnx_Sensory.png" alt="Graphic Recording Sensory Friendly Monitoring - Keeping the Noise Down" /></a>
### JavaScript
#### Options
| Option | Default Selector | Description |
| ------------ | ------------------ | ----------------------------------------- |
| selectorInit | [data-numberinput] | The CSS selector to find the number input HTML |
#### Events
| Name | Description |
| ------ | ----------------------------------------------------------------- |
| click | Increases or decreases the value attribute of the number input |
| change | Emitted when a click event increases or decreases the number-input value |
Ansvia Project Generator (Ansprogen)
======================================
A tool for easily creating projects for any language, currently supporting Go and Scala.
You can easily extend it to support other languages by extending IGenerator in the `ansprogen/generators` directory.
Example Usage
---------------
Create a Go project:
$ progen -p Golang -o ./hello
Testing:
$ cd hello/
$ ls
Makefile hello.go
$ make
6g -o _go_.6 hello.go
6l -o hello _go_.6
$ ./hello
Yo World!
Create a Scala project:
$ progen -p Scala -o hello
Testing, create jar and execute using `java -jar`:
$ cd hello/
$ ls
build.xml src
$ ant Hello.jar
Buildfile: /Users/Robin/Development/ansvia/project.generator/hello/build.xml
init:
[mkdir] Created dir: /Users/Robin/Development/ansvia/project.generator/hello/build
[mkdir] Created dir: /Users/Robin/Development/ansvia/project.generator/hello/lib
build:
[mkdir] Created dir: /Users/Robin/Development/ansvia/project.generator/hello/build/classes
[scalac] Compiling 1 source file to /Users/Robin/Development/ansvia/project.generator/hello/build/classes
Hello.jar:
[mkdir] Created dir: /Users/Robin/Development/ansvia/project.generator/hello/build/jar
[copy] Copying 1 file to /Users/Robin/Development/ansvia/project.generator/hello/lib
[jar] Building jar: /Users/Robin/Development/ansvia/project.generator/hello/build/jar/Hello.jar
BUILD SUCCESSFUL
Total time: 29 seconds
$ java -jar build/jar/Hello.jar
Yo World!
More specific:
$ progen -p Golang -o /tmp/hello kind=cmd target_name=hello sources=hello.go
Or use interactive mode:
$ progen -p Golang -i
Creating Golang project
Interactive mode.
-> out_dir: ./hello
-> kind: cmd
-> target_name: hello
-> sources: hello.go
Every language has its own specific options; to see which options are supported, type `help` followed by the project name,
for example:
$ progen help Scala
Will show project spec details:
Generate Scala project
Parameters:
kind -- Project kind, currently support `exe` and `lib`.
`exe` for create executable jar target.
`lib` for create java library.
target_name -- Target output name.
sources -- scala sources, separated by whitespace.
main_class -- if project kind == `exe` then specify this one
for class entry point.
package -- Package name, ex: com.ansvia.myapp.
Examples:
$ progen -p Scala -o ./hello kind=exe target_name=hello.jar sources="Hello.scala" main_class="Hello"
Show all supported projects:
$ progen -t
Ansprogen 0.0.3
Supported generators:
* Golang
`help Golang` to show Golang's project spec details
* Scala
`help Scala` to show Scala's project spec details
* Scala:sbt
`help Scala:sbt` to show Scala:sbt's project spec details
Create Scala project with sbt tools:
$ progen -p Scala:sbt -o ~/hello_sbt
Installation
-------------
Using `easy_install`:
$ easy_install ansprogen
Using `pip`:
$ pip install ansprogen
Source code
-------------
Fork me on github: https://github.com/anvie/Ansprogen
Enhancements and new language support are welcome; feel free to fork and send a pull request.
# PHP ArCaptcha Library
[](https://packagist.org/packages/mohammadv184/arcaptcha)
[](https://packagist.org/packages/mohammadv184/arcaptcha)
[](https://packagist.org/packages/mohammadv184/arcaptcha)
[](https://travis-ci.com/mohammadv184/arcaptcha)
[](https://packagist.org/packages/mohammadv184/arcaptcha)
PHP library for ArCaptcha.
This package supports `PHP 7.3+`.
# List of contents
- [PHP ArCaptcha Library](#php-arcaptcha-library)
- [List of contents](#list-of-contents)
- [Installation](#installation)
- [Configuration](#configuration)
- [How to use](#how-to-use)
- [Widget usage](#widget-usage)
- [Verifying a response](#verifying-a-response)
- [Credits](#credits)
- [License](#license)
## Installation
Require this package with composer:
```bash
composer require mohammadv184/arcaptcha
```
## Configuration
You can create a new instance by passing in your site key and secret key.
You can get these at https://arcaptcha.ir/dashboard
```php
use Mohammadv184\ArCaptcha\ArCaptcha;
$ArCaptcha = new ArCaptcha($siteKey, $secretKey);
```
## How to use
How to use ArCaptcha.
### Widget usage
To show the ArCaptcha on a form, use the class to render the script tag and the widget.
```php
<?php echo $ArCaptcha->getScript() ?>
<form method="POST">
<?php echo $ArCaptcha->getWidget() ?>
<input type="submit" value="Submit" />
</form>
```
### Verifying a response
After the post, use the class to verify the response.
You get true or false back:
```php
if ($ArCaptcha->verify($_POST["arcaptcha-token"])) {
echo "OK!";
} else {
echo "FAILED!";
}
```
## Credits
- [Mohammad Abbasi](https://github.com/mohammadv184)
- [All Contributors](../../contributors)
## License
The MIT License (MIT). Please see [License File](LICENSE) for more information.
---
title: API
description: CSV Parse - stream, callback and sync APIs
keywords: ['csv', 'parse', 'parser', 'api', 'callback', 'stream', 'sync', 'promise']
sort: 3
---
# CSV Parse API
There are multiple APIs available, each with their own advantages and disadvantages. Under the hood, they are all based on the same implementation.
* [Stream API](/parse/api/stream/)
The stream API might not be the most pleasant API to use but is scalable. It
is the one upon which all the other implementations are based.
* [Callback API](/parse/api/callback/)
The callback API buffers all the emitted data from the stream API into a single
object which is passed to a user-provided function. Passing a function is
easier than implementing the stream event handlers, but it implies that the
whole dataset must fit into the available memory and will only be available
after the last record has been processed.
* [Stream + callback API](/parse/api/stream_callback/)
Replace the writable stream with a string or buffer and the readable stream with a callback function.
* [Sync API](/parse/api/sync/)
The sync API provides simplicity, readability and convenience. Like the
callback API, it is meant for small datasets which fit in memory and whose
usage tolerates waiting for the last record.
* [Async iterator API](/parse/api/async_iterator/)
The Async iterator API is both scalable and elegant. It takes advantage of
the native Readable Stream API, upon which the parser is built, to iterate
over the parsed records.
For additional usages and examples, you may refer to:
* [the API page](/parse/api/),
* [the "samples" folder](https://github.com/adaltas/node-csv/tree/master/packages/csv-parse/samples)
* [the "test" folder](https://github.com/adaltas/node-csv/tree/master/packages/csv-parse/test).
# SpringBoot
A Spring Boot project for Java testing and learning.
Contents include: MySQL connection (PageHelper)<br>
Using Kafka<br>
Using Redis<br>
Excel export<br>
File download: single files and zip files<br>
File reading and writing: synchronous I/O<br>
Command line: executing cmd commands from Java<br>
Reactive programming: non-blocking IO<br>
HTTP requests<br>
Java 8 programming<br>
The @Transactional transaction annotation<br>
Aspect-oriented programming (AOP)<br>
Using the RabbitMQ message queue<br>
Using easyexcel<br>
Using MinIO<br>
Using JetCache<br>
---
template: blog-post
title: My First Blog Post
slug: /my-first-blog
date: 2022-01-29 16:43
description: my first blog
featuredImage: /assets/florian-olivo-mf23rf8xary-unsplash.jpg
---
Hello World | 22 | 61 | 0.767677 | kor_Hang | 0.234636 |
26944bf5cedce1938c7bce972f4880583304080b | 1,695 | md | Markdown | README.md | pamil/money | ffbdc0403e5ebff02e4c3024b67beccb9879c450 | [
"MIT"
] | null | null | null | README.md | pamil/money | ffbdc0403e5ebff02e4c3024b67beccb9879c450 | [
"MIT"
] | null | null | null | README.md | pamil/money | ffbdc0403e5ebff02e4c3024b67beccb9879c450 | [
"MIT"
] | null | null | null | Money
=====
[](http://travis-ci.org/moneyphp/money)
PHP 5.5+ library to make working with money safer, easier, and fun!
> "If I had a dime for every time I've seen someone use FLOAT to store currency, I'd have $999.997634" -- [Bill Karwin](https://twitter.com/billkarwin/status/347561901460447232)
In short: You shouldn't represent monetary values by a float. Wherever
you need to represent money, use this Money value object. Since version
3.0 this library uses [strings internally](https://github.com/moneyphp/money/pull/136)
in order to support unlimited integers.
```php
<?php
use Money\Money;
$fiveEur = Money::EUR(500);
$tenEur = $fiveEur->add($fiveEur);
list($part1, $part2, $part3) = $tenEur->allocate(array(1, 1, 1));
assert($part1->equals(Money::EUR(334)));
assert($part2->equals(Money::EUR(333)));
assert($part3->equals(Money::EUR(333)));
```
The documentation is available at http://money.readthedocs.org
Installation
------------
Install the library using [composer][1].
``` bash
composer require mathiasverraes/money
```
Features
------------
- JSON Serialization
- Big integer support utilizing different, transparent calculation logic upon availability (bcmath, gmp, plain php)
- Money formatting (including intl formatter)
- Currency repositories (ISO currencies included)
- Money exchange (including Swap implementation)
Integration
-----------
See [`MoneyBundle`][2] or [`TbbcMoneyBundle`][4] for [Symfony integration][3].
[1]: http://getcomposer.org/
[2]: https://github.com/pink-tie/MoneyBundle/
[3]: http://symfony.com/
[4]: https://github.com/TheBigBrainsCompany/TbbcMoneyBundle
| 28.728814 | 177 | 0.725664 | eng_Latn | 0.683152 |
2694d4b91bf175965d7836ee9018e71007d7f0c8 | 956 | md | Markdown | gnn_dynamics/README.md | delta2323/gnn-asymptotics | 0246e29df9b64f49b2b4bd929e3e3393eadbb0d7 | [
"MIT"
] | 29 | 2020-02-25T20:24:22.000Z | 2021-11-30T12:05:39.000Z | gnn_dynamics/README.md | delta2323/gnn-asymptotics | 0246e29df9b64f49b2b4bd929e3e3393eadbb0d7 | [
"MIT"
] | 1 | 2020-09-09T09:00:13.000Z | 2021-04-21T06:30:20.000Z | gnn_dynamics/README.md | delta2323/gnn-asymptotics | 0246e29df9b64f49b2b4bd929e3e3393eadbb0d7 | [
"MIT"
] | 6 | 2020-05-21T01:40:23.000Z | 2021-11-30T12:05:41.000Z | Code for experiments in Section 6.1
Usage
```
bash -x run.sh
```
This command creates the following files:
```
4/
├── 0.5
│ ├── log.txt
│ └── streamplot.pdf
├── 1.0
│ ├── log.txt
│ └── streamplot.pdf
├── 1.2
│ ├── log.txt
│ └── streamplot.pdf
├── 1.5
│ ├── log.txt
│ └── streamplot.pdf
├── 2.0
│ ├── log.txt
│ └── streamplot.pdf
└── 4.0
├── log.txt
└── streamplot.pdf
15/
├── 0.5
│ ├── log.txt
│ └── streamplot.pdf
├── 1.0
│ ├── log.txt
│ └── streamplot.pdf
├── 1.2
│ ├── log.txt
│ └── streamplot.pdf
├── 1.5
│ ├── log.txt
│ └── streamplot.pdf
├── 2.0
│ ├── log.txt
│ └── streamplot.pdf
└── 4.0
├── log.txt
└── streamplot.pdf
```
The naming convention of the directory is `<seed>/<weight_value>/`. This directory contains the result when the seed is `<seed>` and the weight value `W` is `<weight_value>`. Seed=4 corresponds to Case 1 and Seed=15 corresponds to Case 2 in the paper, respectively.
| 18.384615 | 265 | 0.560669 | eng_Latn | 0.882859 |
269555481a7daa95e03e076bb85921a5457077b5 | 4,872 | md | Markdown | desktop-src/com/error-handling-strategies.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 552 | 2019-08-20T00:08:40.000Z | 2022-03-30T18:25:35.000Z | desktop-src/com/error-handling-strategies.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 1,143 | 2019-08-21T20:17:47.000Z | 2022-03-31T20:24:39.000Z | desktop-src/com/error-handling-strategies.md | citelao/win32 | bf61803ccb0071d99eee158c7416b9270a83b3e4 | [
"CC-BY-4.0",
"MIT"
] | 1,287 | 2019-08-20T05:37:48.000Z | 2022-03-31T20:22:06.000Z | ---
title: Error Handling Strategies
description: Error Handling Strategies
ms.assetid: 8d03ede8-0661-43dc-adaf-3c1f5fc1687e
ms.topic: article
ms.date: 05/31/2018
---
# Error Handling Strategies
Because interface methods are virtual, it is not possible for a caller to know the full set of values that may be returned from any one call. One implementation of a method may return five values; another may return eight.
The documentation lists common values that may be returned for each method; these are the values that you must check for and handle in your code because they have special meanings. Other values may be returned, but because they are not meaningful, you do not need to write special code to handle them. A simple check for zero or nonzero is adequate.
## HRESULT Values
The return value of COM functions and methods is an **HRESULT**. The values of some HRESULTs have been changed in COM to eliminate all duplication and overlapping with the system error codes. Those that duplicate system error codes have been changed to FACILITY\_WIN32, and those that overlap remain in FACILITY\_NULL. Common **HRESULT** values and their values are listed in the following table.
| HRESULT | Value | Description |
|----------------------------|-----------------------|----------------------------------------------------------------------------------------------------------------------------------------------------|
| E\_ABORT<br/> | 0x80004004<br/> | The operation was aborted because of an unspecified error.<br/> |
| E\_ACCESSDENIED<br/> | 0x80070005<br/> | A general access-denied error.<br/> |
| E\_FAIL<br/> | 0x80004005<br/> | An unspecified failure has occurred.<br/> |
| E\_HANDLE<br/> | 0x80070006<br/> | An invalid handle was used.<br/> |
| E\_INVALIDARG<br/> | 0x80070057<br/> | One or more arguments are invalid.<br/> |
| E\_NOINTERFACE<br/> | 0x80004002<br/> | The [**QueryInterface**](/windows/desktop/api/Unknwn/nf-unknwn-iunknown-queryinterface(q)) method did not recognize the requested interface. The interface is not supported.<br/> |
| E\_NOTIMPL<br/> | 0x80004001<br/> | The method is not implemented.<br/> |
| E\_OUTOFMEMORY<br/> | 0x8007000E<br/> | The method failed to allocate necessary memory.<br/> |
| E\_PENDING<br/> | 0x8000000A<br/> | The data necessary to complete the operation is not yet available.<br/> |
| E\_POINTER<br/> | 0x80004003<br/> | An invalid pointer was used.<br/> |
| E\_UNEXPECTED<br/> | 0x8000FFFF<br/> | A catastrophic failure has occurred.<br/> |
| S\_FALSE<br/> | 0x00000001<br/> | The method succeeded and returned the boolean value **FALSE**.<br/> |
| S\_OK<br/> | 0x00000000<br/> | The method succeeded. If a boolean return value is expected, the returned value is **TRUE**.<br/> |
## Network Errors
If the first four digits of the error code are 8007, this indicates a system or network error. You can use the **net** command to decode these types of errors. To decode the error, first convert the last four digits of the hexadecimal error code to decimal. Then, at the command prompt, type the following, where decimal code is replaced with the return value you want to decode:
**net helpmsg <***decimal\_code***>**
The net command returns a description of the error. For example, if COM returns the error 8007054B, convert the 054B to decimal (1355). Then type the following:
**net helpmsg 1355**
The net command returns the error description: "The specified domain did not exist".
## Related topics
<dl> <dt>
[Error Handling in COM](error-handling-in-com.md)
</dt> </dl>
| 71.647059 | 396 | 0.511494 | eng_Latn | 0.991447 |
2695cb9fa3491c341d65f59ed00465ec19432814 | 1,854 | markdown | Markdown | _posts/2009/2009-09-30-rna-fragmentation-ricks-trout-rbc-samples-prepped-earlier-today.markdown | AidanCox12/Aidans_Journal | 6bc80960ae7cc3f81aa097382d7c0bcc63f0c9f9 | [
"MIT"
] | null | null | null | _posts/2009/2009-09-30-rna-fragmentation-ricks-trout-rbc-samples-prepped-earlier-today.markdown | AidanCox12/Aidans_Journal | 6bc80960ae7cc3f81aa097382d7c0bcc63f0c9f9 | [
"MIT"
] | null | null | null | _posts/2009/2009-09-30-rna-fragmentation-ricks-trout-rbc-samples-prepped-earlier-today.markdown | AidanCox12/Aidans_Journal | 6bc80960ae7cc3f81aa097382d7c0bcc63f0c9f9 | [
"MIT"
] | 5 | 2019-12-18T06:47:34.000Z | 2022-03-15T23:47:41.000Z | ---
author: kubu4
comments: true
date: 2009-09-30 04:21:23+00:00
layout: post
slug: rna-fragmentation-ricks-trout-rbc-samples-prepped-earlier-today
title: RNA Fragmentation - Rick's trout RBC samples prepped earlier today
wordpress_id: 852
author:
- kubu4
categories:
- Miscellaneous
tags:
- EtOH precipitation
- library prep
- NanoDrop1000
- RBC
- Ribominus Concentration Module Kit
- RNA
- RNA fragmentation
- RNA quantification
- SOLiD
- SOLiD libraries
- trout
- Whole Transcriptome Analysis Kit
---
#### EtOH Precipitaiton - Rick's trout Ribosomoal-depleted RNA for SOLiD WTK (continued from yesterday)
Continued precipitation. Spun samples 30 mins, 16,000g, 4C. Removed supe. Added 1mL 70% EtOH. Spun samples 15mins, 16,000g, 4C. Removed supe. Resuspended in 8uL H2O. Proceeded with SOLiD WTK fragmentation.
#### RNA Fragmentation
Samples were fragmented according to the Whole Transcriptome Kit protocol. Samples were then cleaned up using Invitrogen's RiboMinus Concentration Module, according to SOLiD WTK protocol. Briefly:
* Added 1X volume of binding buffer (100uL)
* Added 100% EtOH (250uL)
* Eluted with 20uL of H2O.
Samples were spec'd.
Results:

Control Sample - Virtually nothing there. Hopefully it's just too dilute for the NanoDrop, however I have a feeling this sample is bad (degraded?) 1.5uL of the sample has been transferred to a 0.5mL snap cap tube to send off for the Bioanalyzer.
Poly I:C Sample - Looks great, excellent recovery. 0.25uL of this sample was transferred to a 0.5mL snap cap tube containing 1.25uL of H2O to send off for the Bioanalyzer.
Samples were stored @ 80C until resutls from the Bioanalyzer are received.
| 26.869565 | 245 | 0.752427 | eng_Latn | 0.924898 |
2696e7cedc9312e916b7958aa790e4d582b49c82 | 843 | md | Markdown | packages/readmodel-adapters/README.md | kilimondjaro/resolve | 3633be2272eccf31908bef28c85614dce0c19c32 | [
"MIT"
] | null | null | null | packages/readmodel-adapters/README.md | kilimondjaro/resolve | 3633be2272eccf31908bef28c85614dce0c19c32 | [
"MIT"
] | 1 | 2018-07-12T13:48:47.000Z | 2018-07-12T13:48:47.000Z | packages/readmodel-adapters/README.md | MrCheater/resolve | 29c44650223b5807cd733801fa162baf1777db8d | [
"MIT"
] | null | null | null | # **Read Model Adapters** 🛢
This folder contains [resolve-query](../resolve-query) read model adapters.
A read model adapter is an object that should contain the following functions:
* `buildProjection` - wraps the projection.
* `init` - initializes an adapter instance, returns an API for interaction with a read model storage.
* `reset` - disposes of an adapter instance.
The read model storage API consists of the following asynchronous functions:
* `getReadable` - provides an API to access (read-only) and retrieve data from a store.
* `getError` - returns the last internal adapter error if a failure occurred.
Available adapters:
* [resolve-readmodel-memory](./resolve-readmodel-memory)
Used to store a read model in Memory.
* [resolve-readmodel-mysql](./resolve-readmodel-mysql)
Used to store a read model in MySQL.
Source: taylorreiter/2020-paper-sourmash-gather, content/03.results.md (CC-BY-4.0, CC0-1.0)
We first describe FracMinHash, a sketching technique that supports
containment and overlap estimation for DNA sequencing datasets using
k-mers. We next frame reference-based metagenome content analysis as
the problem of finding a _minimum set cover_ for a metagenome using a
collection of reference genomes. We then evaluate the accuracy of this
approach using a taxonomic classification benchmark. Finally, we
demonstrate the utility of this approach by using the genomes from the
minimum metagenome cover as reference genomes for read mapping.
## FracMinHash sketches support accurate containment operations
We define the *fractional MinHash*, or FracMinHash, on an input domain
of hash values $W$, as follows:
$$\mathbf{FRAC}_s(W) = \{\,w \leq \frac{H}{s} \mid \forall w \in
W\,\}$$ where $H$ is the largest possible value in the domain of
$h(x)$ and $\frac{H}{s}$ is the *maximum hash value* allowed in the
FracMinHash sketch.
The FracMinHash is a mix of MinHash and ModHash
[@mash; @broder_minhash]. It keeps the selection of the smallest
elements from MinHash, while using the dynamic size from ModHash to
allow containment estimation. However, instead of taking $0 \mod m$
elements like $\mathbf{MOD}_m(W)$, a FracMinHash uses the parameter
$s$ to select a subset of $W$.
Like ModHash (but not MinHash), FracMinHash supports estimation
of the containment index:
{% raw %}
```{=latex}
\begin{equation}
\hat{C}_\text{scale}(A,B):=\frac{\vert \mathbf{FRAC}_S(A) \cap \mathbf{FRAC}_S(B)\vert }{\vert \mathbf{FRAC}_S(A)\vert}.
\end{equation}
```
{% endraw %}
See Methods for details.
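As a concrete illustration, the definition above can be sketched in a few lines of Python. The hash function used here (SHA-1 truncated to 64 bits) is a stand-in assumption, since sourmash itself uses MurmurHash, but any uniform 64-bit hash gives the same behavior:

```python
import hashlib

H = 2**64  # size of the 64-bit hash domain


def hash_kmer(kmer):
    # Any uniform 64-bit hash works; SHA-1 truncated to 8 bytes
    # is a stand-in for MurmurHash here.
    digest = hashlib.sha1(kmer.encode()).digest()
    return int.from_bytes(digest[:8], "big")


def fracminhash(sequence, k=31, scaled=1000):
    """FRAC_s(W): keep every k-mer hash below H / scaled."""
    max_hash = H // scaled
    return {h for i in range(len(sequence) - k + 1)
            if (h := hash_kmer(sequence[i:i + k])) < max_hash}


def containment(frac_a, frac_b):
    """Estimate C(A, B) = |A intersect B| / |A| from the sketches."""
    if not frac_a:
        return 0.0
    return len(frac_a & frac_b) / len(frac_a)
```

With `scaled=1` every hash is kept and the estimate is exact; larger `scaled` values trade accuracy for sketch size.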
<!-- CTB: do we want to discuss overlap? -->
<!--
FracMinHash supports containment estimation with high accuracy and
low bias. **(Analytic work from David HERE.)**
* approximation formula (eqn 13 from overleaf)
* for queries into large sets (large $|A|$), bias factor is low.
* refer to appendix for derivation.
-->
Given a uniform hash function $h$ and $s=m$, the cardinalities of
$\mathbf{FRAC}_s(W)$ and $\mathbf{MOD}_m(W)$ converge for large
$\vert W \vert$. The main difference is the range of possible values
in the hash space, since the FracMinHash range is contiguous and
the ModHash range is not. This permits a variety of convenient
operations on the sketches, including iterative downsampling of FracMinHash sketches as well as conversion to MinHash sketches.
## A FracMinHash implementation accurately estimates containment between sets of different sizes
We compare the FracMinHash method, implemented in the sourmash
software [@sourmash_joss], to _Containment MinHash_ [@cmash]
and Mash Screen (_Containment Score_) [@mash_screen] for containment
queries in data from the `podar mock` community, a mock bacterial and
archaeal community where the reference genomes are largely known
[@shakya_podar]; see also Table @tbl:genbank-cover, row 2. This data
set has been used in several methods evaluations
[@doi:10.1093/bioinformatics/btu395;@doi:10.1101/gr.213959.116;@doi:10.1101/155358;@doi:10.1186/s13059-019-1841-x].
![
**Letter-value plot [@doi:10.1080/10618600.2017.1305277] of the
differences from containment estimate to ground truth (exact).**
Each method is evaluated for $k=\{21,31,51\}$,
except for `Mash` with $k=51$, which is unsupported.
](images/containment.svg "Containment estimation between sourmash, CMash, and mash screen"){#fig:containment}
Figure @fig:containment shows containment analysis of genomes in this metagenome, with low-coverage and
contaminant genomes (as described in [@awad_podar] and
[@mash_screen]) removed from the database.
All methods are within 1\% of the exact containment on average (Figure
@fig:containment), with `CMash` consistently underestimating
the containment. `Mash
Screen` with $n=10000$ has the smallest difference to ground truth for
$k=\{21, 31\}$, followed by `sourmash` with `scaled=1000` and `Mash
Screen` with $n=1000$.
The sourmash sketch sizes varied between 431 hashes and 9540 hashes,
with a median of 2741 hashes.
<!-- CTB: discuss cmash consistently underestimating...-->
<!-- CTB: add sketch sizes for the figure; maybe note conversion -->
## FracMinHash can be used to construct a minimum set cover for metagenomes
We next ask: what is the smallest collection of genomes in a database
that contains all of the known k-mers in a metagenome?
Formally, for a
given metagenome $M$ and a reference database $D$, what is the minimum
collection of genomes in $D$ which contain all of the k-mers in the
intersection of $D$ and $M$? We wish to find the smallest set
$\{ G_n \}$ of genomes in $D$ such that, for the k-mer decomposition $k()$,
$$ k(M) \cap k(D) = \bigcup_n \{ k(M) \cap k(G_n) \} $$
This is a *minimum set covering* problem, for which there is a
polynomial-time approximation [@polynomial_minsetcov]:
1. Initialize $C \leftarrow \emptyset$
2. Define $f(C) = \vert \cup_{s \in C} \{ s \} \vert$
3. Repeat until $f(C) = f(M \cap D)$:
4. Choose $s \in G$ maximizing the contribution of the element $f(C \cup \{ s \}) - f(C)$
5. Let $C \leftarrow C \cup \{ s \}$
6. Return $C$
This greedy algorithm iteratively chooses reference genomes from $D$
in order of largest remaining overlap with $M$. This results in a
progressive classification of the known k-mers in the
metagenome to specific genomes.[^equivalent]
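The greedy procedure translates directly into Python over plain sets of k-mer hashes. This is a simplified sketch: the actual implementation works on FracMinHash sketches and applies an overlap threshold (see Methods), and ties are broken arbitrarily here as in the footnote below.

```python
def min_set_cover(metagenome, genomes):
    """Greedy minimum set cover of `metagenome` (a set of k-mer hashes)
    using `genomes`, a dict mapping genome name -> set of k-mer hashes.
    Returns matches in rank order, each with the k-mers it newly claims."""
    # Only k-mers present in both the metagenome and the database matter.
    remaining = set(metagenome) & set().union(*genomes.values())
    cover = []
    while remaining:
        # Choose the genome with the largest remaining overlap.
        best = max(genomes, key=lambda g: len(genomes[g] & remaining))
        claimed = genomes[best] & remaining
        if not claimed:
            break
        cover.append((best, claimed))
        remaining -= claimed  # shared k-mers are "claimed" by higher ranks
    return cover
```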
[^equivalent]: In our current implementation in `sourmash`, when
equivalent matches are available for a given rank, a match is chosen
at random. This is an implementation decision that is not intrinsic to
the algorithm itself.
In Figure @fig:gather0, we show an example of this progressive
classification of k-mers by matching GenBank genome for `podar mock`. The matching genomes are provided
in the order found by the greedy algorithm, i.e. by overlap with remaining k-mers in the metagenome.
The
high rank (early) matches reflect large and/or mostly-covered genomes
with high containment, while later matches reflect genomes that share
fewer k-mers with the remaining set of k-mers in the metagenome -
smaller genomes, less-covered genomes, and/or genomes with substantial
overlap with earlier matches. Where there are overlaps between
genomes, shared common k-mers are "claimed" by higher rank matches and
only k-mer content specific to the later genome is used to find
lower rank matches.
As one example of metagenome k-mers shared with multiple matches,
genomes from two strains of *Shewanella baltica* are present in the
mock metagenome. These genomes overlap in k-mer content by approximately 50%, and these shared k-mers are first claimed by
*Shewanella baltica* OS223 -- compare *S. baltica* OS223, rank
8, with *S. baltica* OS185, rank 33 in Figure
@fig:gather0. Here the difference between the red circles and green
triangles for *S. baltica* OS185 represents the k-mers claimed by
*S. baltica* OS223.
<!-- (CTB: maybe indicate or highlight these genomes in the figure?) -->
For this mock metagenome, 205m (54.8%) of 375m k-mers were found in
GenBank
(see
Table @tbl:genbank-cover, row 2). The remaining 169m (45.2%) k-mers had no matches, and
represent either k-mers introduced by sequencing errors or k-mers from
real but unknown community members.
![
**K-mer decomposition of a metagenome into constituent genomes.**
A rank ordering by remaining containment for the first 36 genomes from the minimum metagenome cover
of the `podar mock` synthetic metagenome [@shakya_podar],
calculated using 700,000 genomes from GenBank with scaled=2000, k=31. The Y axis is labeled with the NCBI-designated name of the
genome.
In the left plot, the X axis represents the estimated number of k-mers shared
between each genome and the metagenome. The red circles indicate the number
of matching k-mers that were not matched at previous ranks, while the green triangle symbols indicate all matching k-mers.
In the right plot, the X axis represents the estimated k-mer coverage of that
genome. The red circles indicate the percentage of the genome covered by k-mers remaining at
that rank, while the green triangles indicate overlap between
the genome and the entire metagenome, including those already assigned at previous ranks.
](images/gathergram-SRR606249.hashes.svg "minimum metagenome cover for podar"){#fig:gather0}
## Minimum metagenome covers can accurately estimate taxonomic composition
We evaluated the accuracy of min-set-cov for metagenome decomposition
using benchmarks from the Critical Assessment of Metagenome
Interpretation (CAMI), a community-driven
initiative for reproducibly benchmarking metagenomic methods [@cami1]. We used
the mouse gut metagenome dataset [@cami_tutorial],
in which a simulated
mouse gut metagenome (_MGM_) was derived from 791 bacterial and
archaeal genomes,
representing 8 phyla,
18 classes,
26 orders,
50 families,
157 genera,
and 549 species.
Sixty-four samples were generated with _CAMISIM_,
with 91.8 genomes present in each sample on average.
Each sample is 5 GB in size, and both short-read (Illumina) and
long-read (PacBio) simulated sequencing data is available.
Since min-set-cov yields only a collection of genomes, this collection must be
converted into a taxonomy for benchmarking with CAMI.
We developed the following procedure for
generating a taxonomic profile from a given metagenome
cover. For each genome match, we note
the species designation in the NCBI taxonomy for that genome. Then, we
calculate the fraction of the genome remaining in the metagenome
after k-mers belonging to higher-rank genomes have been removed (i.e. red
circles in Figure @fig:gather0 (a)). We use this fraction to weight
the contribution of the genome's species designation to the
metagenome taxonomy. This procedure produces an estimate of that
species' taxonomic contribution to the metagenome, normalized by the
genome size.
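This weighting procedure can be sketched as follows, assuming the cover is given as (genome, claimed k-mers) pairs in rank order together with a simple genome-to-species mapping; the final normalization to a profile summing to 1 is our assumption for illustration:

```python
def taxonomic_profile(cover, genomes, taxonomy):
    """Turn a minimum metagenome cover into species-level weights.

    cover    -- [(genome, claimed_kmer_set), ...] in rank order
    genomes  -- dict mapping genome name -> full set of k-mer hashes
    taxonomy -- dict mapping genome name -> species designation
    """
    weights = {}
    for genome, claimed in cover:
        # Fraction of this genome remaining at its rank, i.e. after
        # k-mers claimed by higher-rank genomes were removed.
        fraction = len(claimed) / len(genomes[genome])
        species = taxonomy[genome]
        weights[species] = weights.get(species, 0.0) + fraction
    total = sum(weights.values())
    return {species: w / total for species, w in weights.items()}
```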
{#fig:spider}
<!--
{#fig:ranks}
-->
{#fig:scores}
In Figures @fig:spider and @fig:scores we show an updated version of
Figure 6 from [@doi:10.1038/s41596-020-00480-3] that includes our
method, implemented in the `sourmash` software. Here we compare 10 different methods for taxonomic
profiling and their characteristics at each taxonomic rank. While
previous methods show reduced completeness -- the ratio of taxa
correctly identified in the ground truth -- below the genus level,
`sourmash` can reach 88.7\% completeness at the species level with the
highest purity (the ratio of correctly predicted taxa over all
predicted taxa) across all methods: 95.9\% when filtering predictions
below 1\% abundance, and 97\% for unfiltered results. `sourmash` also
has the second lowest L1-norm error,
the highest number of true positives and the lowest number of
false positives.
<!-- CTB: runtimes/memory are mentioned in the discussion saying they're in the appendix,
I think it would be nice to mention that they're in the appendix here -->
<!--
| Taxonomic binner | Time (hh:mm) | Memory (kbytes) |
|:--------------------------------|-------------:|----------------:|
| MetaPhlAn 2.9.21 | 18:44 | 5,139,172 |
| MetaPhlAn 2.2.0 | 12:30 | 1,741,304 |
| Bracken 2.5 (only Bracken) | **0:01** | **24,472** |
| Bracken 2.5 (Kraken and Bracken)| **3:03** | 39,439,796 |
| FOCUS 0.31 | 13:27 | 5,236,199 |
| CAMIARKQuikr 1.0.0 | 16:19 | 27,391,555 |
| mOTUs 1.1 | 19:50 | **1,251,296** |
| mOTUs 2.5.1 | 14:29 | 3,922,448 |
| MetaPalette 1.0.0 | 76:49 | 27,297,132 |
| TIPP 2.0.0 | 151:01 | 70,789,939 |
| MetaPhyler 1.25 | 119:30 | 2,684,720 |
| sourmash 3.4.0 | 16:41 | 5,760,922 |
Table: Updated Supplementary Table 12 from [@meyer_tutorial_2020].
Elapsed (wall clock) time (h:mm) and maximum resident set size
(kbytes) of taxonomic profiling methods on the 64 short read samples
of the CAMI II mouse gut data set. The best results are shown in
bold. Bracken requires to run Kraken, hence the times required to run
Bracken and both tools are shown. The taxonomic profilers were run on
a computer with an Intel Xeon E5-4650 v4 CPU (virtualized to 16 CPU
cores, 1 thread per core) and 512 GB (536.870.912 kbytes) of main
memory. {#tbl:gather-cami2}
When considering resource consumption and running times, `sourmash`
used 5.62 GB of memory with an _LCA index_ built from the RefSeq
snapshot (141,677 genomes) with $scaled=10000$ and $k=51$. Each
sample took 597 seconds to run (on average), totaling 10 hours and 37
minutes for 64 samples. MetaPhlan 2.9.21 was also executed in the
same machine, a workstation with an AMD Ryzen 9 3900X 12-Core CPU
running at 3.80 GHz, 64 GB DDR4 2133 MHz of RAM and loading data from
an NVMe SSD, in order to compare to previously reported times in Table
@tbl:gather-cami2 [@meyer_tutorial_2020]. MetaPhlan took 11 hours and
25 minutes to run for all samples, compared to 18 hours and 44 minutes
previously reported, and correcting the `sourmash` running time by
this factor it would likely take 16 hours and 41 minutes in the
machine used in the original comparison. After correction, `sourmash`
has similar runtime and memory consumption to the other best
performing tools (_mOTUs_ and _MetaPhlAn_), both gene marker and
alignment based tools.
Additional points are that `sourmash` is a single-threaded program, so
it didn't benefit from the 16 available CPU cores, and it is the only
tool that could use the full RefSeq snapshot, while the other tools
can only scale to a smaller fraction of it (or need custom databases).
The CAMI II RefSeq snapshot for reference genomes also doesn't include
viruses; this benefits `sourmash` because viral _Scaled MinHash_
sketches are usually not well supported for containment estimation,
since viral sequences require small scaled values to have enough
hashes to be reliable.
-->
## Minimum metagenome covers select small subsets of large databases
<!--
For very large reference databases such as GenBank (which contains
over 700,000 microbial genomes as of January 2021) and GTDB (230,000
genomes in release RS202), this is computationally challenging to do
exactly. (Estimate total number of k-mers in genbank!) We therefore
implemented the algorithm using _Scaled MinHash_ sketches to estimate
containment, and used an overlap threshold of 100,000 k-mers in order
to eliminate genomes with only small overlaps (see Methods).
-->
| data set | genomes >= 100k overlap | min-set-cov | % 31-mers identified |
| -------- | -------- | -------- | ------- |
| `zymo mock` | 405,839 | 19 | 47.1% |
| `podar mock` | 5,800 | 74 | 54.8% |
| `gut real` | 96,423 | 99 | 36.0% |
| `oil well real` | 1,235 | 135 | 14.9% |
Table: Four metagenomes and the number of genomes in the estimated minimum metagenome cover from GenBank, with scaled=2000 and k=31. Overlap and % 31-mers identified are estimated from FracMinHash sketch size. {#tbl:genbank-cover}
In Table @tbl:genbank-cover, we show the minimum metagenome cover
for four metagenomes against GenBank - two mock communities
[@https://www.zymoresearch.com/collections/zymobiomics-microbial-community-standards;@shakya_podar], a human gut microbiome data set
from iHMP [@doi:10.1038/s41586-019-1238-8], and an oil well sample
[@doi:10.1128/mBio.01669-15]. Our implementation provides estimates
for both the *total* number of genomes with substantial overlap to a
query genome, and the minimum set of genomes that account for k-mers
with overlap in the query metagenome. Note that only matches estimated
to have more than 100,000 overlapping k-mers are shown (see Methods for
details).
We find many genomes with overlaps for each metagenome, due to
the redundancy of the reference database. For example, `zymo mock`
contains a *Salmonella* genome, and there are over 200,000 *Salmonella*
genomes that match to it in GenBank. Likewise, `gut real`
matches to over 75,000 *E. coli* genomes in GenBank. Since neither
`podar mock` nor `oil well real` contain genomes from species with
substantial representation in GenBank, they yield many fewer total
overlapping genomes.
Regardless of the number of genomes in the database with substantial
overlap, the
estimated _minimum_ collection of genomes is always much smaller than the
number of genomes with overlaps. In
the cases where the k-mers in the metagenome are mostly identified,
this is because of database redundancy: e.g. in the case of `zymo
mock`, the min-set-cov algorithm chooses precisely one *Salmonella* genome
from the 200,000+ available. Conversely, in the case of `oil well real`,
much of the sample is not identified,
suggesting that the small size of the covering set is because much
of the sample is not represented in the database.
<!-- CTB provide some kind of taxonomic breakdown at, say, genus level? -->
## Minimum metagenome covers provide representative genomes for mapping
Mapping metagenome reads to representative genomes is an important
step in many microbiome analysis pipelines, but mapping approaches
struggle with large, redundant databases [@ganon;@metalign]. One specific use for a minimum
metagenome cover could be to select a small set of representative genomes
for mapping. We therefore developed a hybrid selection and
mapping pipeline that uses the rank-ordered min-set-cov results to
map reads to candidate genomes.
<!-- CTB do we have a citation for "mapping approaches struggle..."? -->
We first map all metagenome reads to the first ranked genome in the minimum metagenome
cover, and then remove successfully mapped reads from the metagenome.
Remaining unmapped reads are then mapped to the second rank genome, and
this then continues until all genomes have been used.
That is, all reads
mapped to the rank-1 genome in Figure @fig:gather0 are removed from
the rank-2 genome mapping, and all reads mapping to rank-1 and rank-2
genomes are removed from the rank-3 genome mapping, and so on. This produces
results directly analogous to those presented in Figure @fig:gather0,
but for reads rather than k-mers. This approach is implemented in the automated
workflow package `genome-grist`; see Methods for details.
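The subtractive mapping loop can be sketched as below; `map_reads` is a hypothetical stand-in for a real read mapper such as minimap2, and is assumed to return the subset of input reads that mapped to the given genome:

```python
def iterative_map(reads, cover_genomes, map_reads):
    """Map reads against genomes in min-set-cov rank order, removing
    successfully mapped reads from the pool before each subsequent genome.

    reads         -- set of read identifiers
    cover_genomes -- genome names in rank order from the metagenome cover
    map_reads     -- callable (reads, genome) -> set of mapped reads
    """
    unmapped = set(reads)
    per_genome = {}
    for genome in cover_genomes:
        mapped = map_reads(unmapped, genome)
        per_genome[genome] = mapped
        unmapped -= mapped  # leftover reads go to the next-rank genome
    return per_genome, unmapped
```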
<!-- (CTB: provide versions of figure 1 here as Suppl Figure?) -->
Figure @fig:mapping compares k-mer assignment rates and mapping rates
for the four evaluation metagenomes in Table
@tbl:genbank-cover. Broadly speaking, we see that k-mer-based
estimates of metagenome composition agree closely with the number of
bases covered by mapped reads: the Y axis has not been re-scaled, so
k-mer matches and read mapping coverage correspond well. This suggests
that the k-mer-based min-set-cov approach effectively selects
reference genomes for metagenome read mapping.
For mock metagenomes (Figure @fig:mapping (A) and (B)), there is a
close correspondence between mapping and k-mer coverage, while for
real metagenomes (Figure @fig:mapping (C) and (D)), mapping coverage
tends to be higher. This may be because the mock metagenomes are
largely constructed from strains with known genomes, so most 31-mers
match exactly, while the gut and oil well metagenomes contain a number
of strains where only species (and not strain) genomes are present in
the database, and so mapping performs better. Further work is needed
to evaluate rates of variation across a larger number of metagenomes.
<!--
(belongs in discussion)
This suggests that metagenome reads are being mapped to different
genomic elements from a species pangenome. While we do not have the
resolution to determine this, the most parsimonious interpretation
is that the "true" reference genome for the species present in the
sample is not in the database, and instead is being cobbled together
from core and accessory genome elements in the database.
(Maybe this is where we use R. gnavus genomes? Yes - take JUST reads
that map to R. gnavus, do gather, show what happens x all gnavus
genomes? Could also do withholding, to show that pangenome elements will
usually map one way or another.)
(Show plots with leftover mapping vs all mapping.)
-->
{#fig:mapping}
| 52.07277 | 398 | 0.758464 | eng_Latn | 0.997804 |
26985b2f8ddac477686dc1fdb3caafa263fba682 | 10,388 | md | Markdown | articles/iot-hub/iot-hub-live-data-visualization-in-power-bi.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/iot-hub/iot-hub-live-data-visualization-in-power-bi.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/iot-hub/iot-hub-live-data-visualization-in-power-bi.md | KazuOnuki/azure-docs.ja-jp | 4dc313dec47a4efdb0258a8b21b45c5376de7ffc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure IoT Hub から取得したデータのリアルタイム データの視覚化 – Power BI
description: Power BI を使用して、センサーから収集されて Azure IoT Hub に送信された気温と湿度のデータを視覚化します。
author: eross-msft
keywords: リアルタイム データの視覚化, ライブ データの視覚化, センサー データの視覚化
ms.service: iot-hub
services: iot-hub
ms.topic: conceptual
ms.tgt_pltfrm: arduino
ms.date: 7/23/2021
ms.author: lizross
ms.openlocfilehash: 15e297f65aad93cd3999ae44953a3410c7281788
ms.sourcegitcommit: 05c8e50a5df87707b6c687c6d4a2133dc1af6583
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 11/16/2021
ms.locfileid: "132555052"
---
# <a name="tutorial-visualize-real-time-sensor-data-from-azure-iot-hub-using-power-bi"></a>チュートリアル: Power BI を使用して Azure IoT Hub からのリアルタイム センサー データを視覚化する
Microsoft Power BI を使用して、Azure IoT ハブが受信したリアルタイム センサー データを視覚化できます。 これを行うには、IoT Hub からのデータを使用し、Power BI 内のデータセットにルーティングするように Azure Stream Analytics ジョブを構成します。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/end-to-end-diagram.png" alt-text="エンド ツー エンド ダイアグラム" border="false":::
[Microsoft Power BI](https://powerbi.microsoft.com/) は、大規模なデータ セットに対してセルフサービスおよびエンタープライズ ビジネス インテリジェンス (BI) を実行するために使用できるデータ視覚化ツールです。 [Azure Stream Analytics](https://azure.microsoft.com/services/stream-analytics/#overview) は、分析情報を取得、レポートを作成、またはアラートとアクションをトリガーするために使用できるデータの高速移動ストリームを分析および処理するための、フル マネージド リアルタイム分析サービスです。
このチュートリアルでは、以下のタスクを実行します。
> [!div class="checklist"]
> * IoT ハブのコンシューマー グループを作成します。
> * コンシューマー グループから温度テレメトリを読み取って Power BI に送信する Azure Stream Analytics ジョブを作成して構成します。
> * Power BI 内で温度データのレポートを作成し、Web で共有します。
## <a name="prerequisites"></a>前提条件
* 選択した開発言語で[テレメトリの送信](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp)に関するクイックスタートのいずれかを完了します。 または、温度テレメトリを送信する任意のデバイス アプリを使用することもできます。たとえば、[Raspberry Pi オンライン シミュレーターや](iot-hub-raspberry-pi-web-simulator-get-started.md)、[組み込みデバイス](../iot-develop/quickstart-devkit-mxchip-az3166.md)のクイックスタートの 1 つなどです。 これらの記事では、次の要件について取り上げています。
* 有効な Azure サブスクリプション
* サブスクリプション内の Azure IoT Hub。
* Azure IoT ハブにメッセージを送信するクライアント アプリ。
* Power BI アカウント ([Power BI を無料で試す](https://powerbi.microsoft.com/))
[!INCLUDE [iot-hub-get-started-create-consumer-group](../../includes/iot-hub-get-started-create-consumer-group.md)]
## <a name="create-configure-and-run-a-stream-analytics-job"></a>Stream Analytics ジョブの作成、構成、実行
まずは、Stream Analytics ジョブを作成しましょう。 ジョブを作成したら、入力、出力、およびデータを取得するためのクエリを定義します。
### <a name="create-a-stream-analytics-job"></a>Stream Analytics のジョブの作成
1. [Azure Portal](https://portal.azure.com) で、 **[リソースの作成]** を選択します。 検索ボックスに「*Stream Analytics ジョブ*」と入力し、ドロップダウン リストから選択します。 **[Stream Analytics ジョブ]** 概要ページで、 **[作成]** を選択します
2. 次の情報をジョブに入力します。
**ジョブ名**:ジョブの名前。 名前はグローバルに一意である必要があります。
**[リソース グループ]** :IoT ハブと同じリソース グループを使用します。
**[場所]** :リソース グループと同じ場所を使用します。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/create-stream-analytics-job.png" alt-text="Azure での Stream Analytics ジョブの作成":::
3. **[作成]** を選択します
### <a name="add-an-input-to-the-stream-analytics-job"></a>Stream Analytics ジョブへの入力の追加
1. Stream Analytics ジョブを開きます。
2. **[ジョブ トポロジ]** で、 **[入力]** を選択します。
3. **[入力]** ウィンドウで、 **[Add stream input]\(ストリーム入力の追加\)** を選択し、ドロップダウン リストから **[IoT Hub]** を選択します。 新しい入力ウィンドウで、次の情報を入力します。
**入力のエイリアス**:入力の一意のエイリアスを入力します。
**サブスクリプションから IoT Hub を選択する**: このラジオ ボタンを選択します。
**サブスクリプション**:このチュートリアルに使用している Azure サブスクリプションを選択します。
**IoT Hub**:このチュートリアルで使用している IoT Hub を選択します。
**エンドポイント**: **[メッセージング]** を選びます。
**共有アクセス ポリシー名**:Stream Analytics ジョブで IoT ハブに使用する共有アクセス ポリシーの名前を選択します。 このチュートリアルでは、*service* を選択できます。 *service* ポリシーは、新しい IoT ハブ上で既定で作成され、IoT ハブによって公開されるクライアント側エンドポイント上で送受信するためのアクセス許可を付与します。 詳細については、「[アクセス制御とアクセス許可](iot-hub-dev-guide-sas.md#access-control-and-permissions)」を参照してください。
**共有アクセス ポリシー キー**: このフィールドは、共有アクセス ポリシー名の選択内容に基づいて自動的に入力されます。
**コンシューマー グループ**:以前に作成したコンシューマー グループを選びます。
他のすべてのフィールドは既定値のままにします。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-input-to-stream-analytics-job.png" alt-text="Azure で Stream Analytics ジョブに入力を追加する":::
4. **[保存]** を選択します。
### <a name="add-an-output-to-the-stream-analytics-job"></a>Stream Analytics ジョブへの出力の追加
1. **[ジョブ トポロジ]** で、 **[出力]** を選択します。
2. **[出力]** ペインで **[追加]** を選択し、ドロップダウン リストから **[Power BI]** を選びます。
3. **[Power BI - New output]\(Power BI - 新規出力\)** ウィンドウで、 **[Authorize]\(承認\)** を選択し、指示に従って Power BI アカウントにサインインします。
4. Power BI にサインインした後、次の情報を入力します。
**出力のエイリアス**:出力の一意のエイリアス。
**グループ ワークスペース**:ターゲットのグループ ワークスペースを選択します。
**データセット名**:データセットの名前を入力します。
**テーブル名**:テーブルの名前を入力します。
**認証モード**:既定値のままにします。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-output-to-stream-analytics-job.png" alt-text="Azure で Stream Analytics ジョブに出力を追加する":::
5. **[保存]** を選択します。
### <a name="configure-the-query-of-the-stream-analytics-job"></a>Stream Analytics ジョブのクエリの構成
1. **[ジョブ トポロジ]** で、 **[クエリ]** を選択します。
2. `[YourInputAlias]` をジョブの入力エイリアスに置き換えます。
3. `[YourOutputAlias]` をジョブの出力エイリアスに置き換えます。
1. クエリの最後の行として、次の `WHERE` 句を追加します。 この行により、**temperature** プロパティを持つメッセージだけが Power BI に転送されます。
```sql
WHERE temperature IS NOT NULL
```
1. クエリは次のスクリーンショットのようになります。 **[クエリの保存]** を選択します。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/add-query-to-stream-analytics-job.png" alt-text="Stream Analytics ジョブにクエリを追加する":::
### <a name="run-the-stream-analytics-job"></a>Stream Analytics ジョブの実行
Stream Analytics ジョブで、 **[概要]** を選択してから、 **[開始]** > **[Now]\(今すぐ\)** > **[開始]** を選択します。 ジョブが正常に開始されると、ジョブの状態が **[停止済み]** から **[実行中]** に変わります。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/run-stream-analytics-job.png" alt-text="Azure での Stream Analytics ジョブの実行":::
## <a name="create-and-publish-a-power-bi-report-to-visualize-the-data"></a>データを視覚化する Power BI レポートの作成と公開
次の手順では、Power BI サービスを使用してレポートの作成と公開を行う方法を示しています。 Power BI で "新しい外観" を使用する場合は、いくつかの変更を加えて次の手順を実行できます。 "新しい外観" の相違点と移動方法を理解するには、「[Power BI サービスの "新しい外観"](/power-bi/fundamentals/desktop-latest-update)」を参照してください。
1. デバイス上でクライアント アプリが実行されていることを確認します。
2. [Power BI](https://powerbi.microsoft.com/) アカウントにサインインし、上部のメニューから **[Power BI サービス]** を選択します。
3. 使用したワークスペースである **[マイ ワークスペース]** を右側のメニューから選択します。
4. **[すべて]** タブまたは **[Datasets + dataflows]\(データセット + データフロー\)** タブに、Stream Analytics ジョブの出力を作成したときに指定したデータセットが表示されます。
5. 作成したデータセットをポイントし、 **[その他のオプション]** メニュー (データセット名の右側にある 3 つのドット) を選択して、 **[レポートの作成]** を選択します。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-create-report.png" alt-text="Microsoft Power BI レポートの作成":::
6. 時間の経過に伴う温度の変化を示す折れ線グラフを作成します。
1. レポート作成ページの **[視覚化]** ウィンドウで、折れ線グラフのアイコンを選択して折れ線グラフを追加します。 グラフの辺と角にあるガイドを使用して、サイズと位置を調整します。
2. **[フィールド]** ウィンドウで、Stream Analytics ジョブの出力を作成したときに指定したテーブルを展開します。
3. **EventEnqueuedUtcTime** を、 **[視覚化]** ウィンドウの **[軸]** にドラッグします。
4. **temperature** を **[値]** にドラッグします。
折れ線グラフが作成されます。 x 軸は日付と時刻 (UTC タイム ゾーン) を示し、 y 軸はセンサーから取得した温度を示します。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-add-temperature.png" alt-text="Microsoft Power BI レポートに温度の折れ線グラフを追加する":::
> [!NOTE]
> テレメトリ データの送信に使用するデバイスまたはシミュレートされたデバイスによっては、フィールドの一覧が若干異なる場合があります。
>
8. **[保存]** を選択してレポートを保存します。 メッセージが表示されたら、レポートの名前を入力します。 秘密度ラベルの入力を求めるメッセージが表示されたら、 **[パブリック]** を選択し、 **[保存]** を選択できます。
10. 引き続きレポート ペインで、 **[ファイル]** > **[Embed report]\(レポートを埋め込む\)** > **[Web サイトまたはポータル]** を選択します。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-select-embed-report.png" alt-text="Microsoft Power BI レポートの埋め込みレポート Web サイト選択する":::
> [!NOTE]
> 埋め込みコードを作成するには管理者に問い合わせることを求める通知が表示された場合は、管理者への連絡が必要である可能性があります。 この手順を完了する前に、埋め込みコードの作成を有効にする必要があります。
>
> :::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/contact-admin.png" alt-text="管理者への問い合わせを求める通知":::
11. 他のユーザーと共有できるレポート アクセス用のレポート リンクと、ブログまたは Web サイトにレポートを組み込むために使用できるコード スニペットが表示されます。 **[安全な埋め込みコード]** ウィンドウのリンクをコピーし、ウィンドウを閉じます。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/copy-secure-embed-code.png" alt-text="埋め込みレポート リンクのコピー":::
12. Web ブラウザーを開いて、リンクをアドレス バーに貼り付けます。
:::image type="content" source="./media/iot-hub-live-data-visualization-in-power-bi/power-bi-web-output.png" alt-text="Microsoft Power BI レポートの公開":::
Microsoft は [Power BI のモバイル アプリ](https://powerbi.microsoft.com/documentation/powerbi-power-bi-apps-for-mobile-devices/)も提供しています。これを使用すると、モバイル デバイスで Power BI のダッシュボードとレポートを表示して操作できます。
## <a name="cleanup-resources"></a>リソースをクリーンアップする
このチュートリアルでは、Power BI 内でリソース グループ、IoT ハブ、Stream Analytics ジョブ、およびデータセットを作成しました。
他のチュートリアルを実行する予定がある場合は、リソース グループと IoT ハブをそのままにしておき、後で再利用します。
IoT ハブまたは作成した他のリソースが不要になった場合は、ポータル内でリソース グループを削除できます。 そのためには、リソース グループを選択してから、 **[リソース グループの削除]** を選択します。 IoT ハブを保持する場合は、リソース グループの **[概要]** ペインから他のリソースを削除できます。 これを行うには、リソースを右クリックし、コンテキスト メニューから **[削除]** を選択して、プロンプトに従います。
### <a name="use-the-azure-cli-to-clean-up-azure-resources"></a>Azure CLI を使用して Azure リソースをクリーンアップする
リソース グループとそのすべてのリソースを削除するには、[az group delete](/cli/azure/group#az_group_delete) コマンドを使用します。
```azurecli-interactive
az group delete --name {your resource group}
```
### <a name="clean-up-power-bi-resources"></a>Power BI リソースをクリーンアップする
Power BI 内でデータセット **PowerBiVisualizationDataSet** を作成しました。 それを削除するには、[Power BI](https://powerbi.microsoft.com/) アカウントにサインインします。 左側のメニューの **[ワークスペース]** で、 **[マイ ワークスペース]** を選択します。 **[Datasets + dataflows]\(データセット + データフロー\)** タブの下にあるデータセットの一覧で、**PowerBiVisualizationDataSet** データセットの上にマウス ポインターを移動します。 データセット名の右側に表示される 3 つの垂直ドットを選択して **[その他のオプション]** メニューを開き、 **[削除]** を選択してプロンプトに従います。 データセットを削除すると、レポートも削除されます。
## <a name="next-steps"></a>次のステップ
このチュートリアルでは、次のタスクを実行することで、Power BI を使用して Azure IoT ハブからのリアルタイム センサー データを視覚化する方法について説明しました。
> [!div class="checklist"]
> * IoT ハブのコンシューマー グループを作成します。
> * コンシューマー グループから温度テレメトリを読み取って Power BI に送信する Azure Stream Analytics ジョブを作成して構成します。
> * Power BI 内で温度データのレポートを構成し、Web で共有します。
Azure IoT Hub からのデータを視覚化する別の方法については、次の記事を参照してください。
> [!div class="nextstepaction"]
> [Web アプリを使用した Azure IoT Hub からのリアルタイム センサー データの視覚化](iot-hub-live-data-visualization-in-web-apps.md)。 | 44.969697 | 409 | 0.745476 | yue_Hant | 0.558087 |
2698618e7341aa9b61372fa54dbf885cb7e28b2f | 31,104 | md | Markdown | CHANGELOG.md | gogainda/ruby-progressbar | 97a6610b48ba532229489ad4359b9c84b6aa7611 | [
"MIT"
] | 887 | 2015-01-02T16:50:42.000Z | 2022-03-31T02:10:27.000Z | CHANGELOG.md | gogainda/ruby-progressbar | 97a6610b48ba532229489ad4359b9c84b6aa7611 | [
"MIT"
] | 172 | 2015-01-07T04:31:03.000Z | 2022-03-02T00:19:30.000Z | CHANGELOG.md | gogainda/ruby-progressbar | 97a6610b48ba532229489ad4359b9c84b6aa7611 | [
"MIT"
] | 70 | 2015-01-18T01:50:56.000Z | 2022-03-16T11:16:13.000Z | Version v1.11.0 - December 30, 2020
================================================================================
Add
--------------------------------------------------------------------------------
* RUBY_PROGRESS_BAR_FORMAT Environment Variable
Merge
--------------------------------------------------------------------------------
* PR #165 - Show Unknown Time Remaining After Timer Reset
Fix
--------------------------------------------------------------------------------
* Show Unknown Time Remaining After Bar Is Reset
Uncategorized
--------------------------------------------------------------------------------
* Merge PR #167 - Convert To Github Actions
Version v1.10.1 - May 27, 2019
================================================================================
Change
--------------------------------------------------------------------------------
* Make Extra Sure We're Not Loading Ruby's Time Class
Fix
--------------------------------------------------------------------------------
* CHANGELOG URI in both gemspecs to point to master CHANGELOG.md
* Ruby 1.8/1.9 IO Doesn't Respond to winsize
Remove
--------------------------------------------------------------------------------
* allowed_push_host From gemspecs
Version v1.10.0 - August 3, 2018
================================================================================
Add
--------------------------------------------------------------------------------
* %W flag for complete_bar_with_percentage
* %W Flag for complete_bar_with_percentage
Change
--------------------------------------------------------------------------------
* Don't rely on default when building complete bar
Fix
--------------------------------------------------------------------------------
* NoMethodError on decrement when output is non-TTY
Uncategorized
--------------------------------------------------------------------------------
* Fix no method error on decrement when output is not TTY enabled
Version v1.9.0 - September 27, 2017
================================================================================
Performance
--------------------------------------------------------------------------------
* don't shell out when it's avoidable.
Change
--------------------------------------------------------------------------------
* Don't allow user to override total or starting_at in Enumerator
* print_and_flush to be explicitly a private method
Uncategorized
--------------------------------------------------------------------------------
* Enumerator#to_progressbar as a refinement
Remove
--------------------------------------------------------------------------------
* Explicit clear on start
Fix
--------------------------------------------------------------------------------
* Components::Time to allow #estimated_seconds_remaining to be called
Add
--------------------------------------------------------------------------------
* Base#to_h to expose all of the data about the current bar state
* Outputs::Null for users who don't want the bar sent anywhere
* Ability to specify a completely custom output stream
* %u format flag to show ?? if total is unknown
Version v1.8.3 - September 13, 2017
================================================================================
* Update warning_filter to fix `require_relative`
Version v1.8.2 - December 10, 2016
================================================================================
Fix
--------------------------------------------------------------------------------
* Predicates not available on 1.8.7
Add
--------------------------------------------------------------------------------
* progressbar as a gem build target
Removed
--------------------------------------------------------------------------------
* reek
Version v1.8.1 - May 13, 2016
================================================================================
Fixed
--------------------------------------------------------------------------------
* no dynamic length when working with spring
Version v1.8.0 - April 24, 2016
================================================================================
Added
--------------------------------------------------------------------------------
* Gem signing via certificate
* ActiveSupport Time-Traveling Compatibility
Changed
--------------------------------------------------------------------------------
* ProgressBar::Time to an instantiated class
Fixed
--------------------------------------------------------------------------------
* Progress#finish causing an exception when total was unknown
Version v1.7.5 - March 25, 2015
================================================================================
* Prevent `method redefined` warnings being generated by replacing uses of
`attr_accessor` with: `attr_reader` where a setter function is already
defined, `attr_writer` where a getter function is already defined
Version v1.7.4 - March 23, 2015
================================================================================
Version v1.7.3 - March 23, 2015
================================================================================
Version v1.7.2 - March 23, 2015
================================================================================
Added
--------------------------------------------------------------------------------
* rubygems config
Version v1.7.1 - December 21, 2014
================================================================================
Bugfix
--------------------------------------------------------------------------------
* ETA works again, when ProgressBar is initialized with a non zero
starting_at.
Uncategorized
--------------------------------------------------------------------------------
* Describe the wiki link
* Inline the cage image in the README
* THE CAGE
* Remove superfluous subtitle
* Remove sections from the README that were moved to the Wiki
* Add link to wiki
* Update logo
Version v1.7.0 - November 4, 2014
================================================================================
Feature
--------------------------------------------------------------------------------
* Massive internal refactoring. Now 236% faster!
* Add Timer#restart
Version v1.6.1 - October 30, 2014
================================================================================
Uncategorized
--------------------------------------------------------------------------------
* Update readme about output option
* Display warnings when testing
Bugfix
--------------------------------------------------------------------------------
* Remove warnings from uninitialized instance variable
* Instance variable @started_at not initialized
* Instance variable @out_of_bounds_time_format not initialized
* Change private attributes to protected
* `*' interpreted as argument prefix
* Prefix assigned but unused variables with underscores
* Ambiguous first argument
Version v1.6.0 - September 20, 2014
================================================================================
Feature
--------------------------------------------------------------------------------
* Add ability to disable auto-finish
* Add SCSS lint configuration
* Update JSHint config with our custom version
* Add right-justified percentages - Closes #77
Bugfix
--------------------------------------------------------------------------------
* Don't allow title to change for non-TTY output
* Percentage formatter failed when total was 0 or unknown
Version v1.5.1 - May 14, 2014
================================================================================
Uncategorized
--------------------------------------------------------------------------------
* Make grammar and spelling corrections in the README
* Add the ability to scale the rate component
* Add notes to the README about the new format components
* Add the %R flag to the formatting to show the rate with 2 decimal places of
precision
* Remove unused molecule cruft
* Add specs to make sure that rate works even if the bar is started in the
middle
* Add base functionality for the rate component
* Add Slack notification to Travis builds
* Upgrade rspectacular to v0.21.6
* Upgrade rspectacular to v0.21.5
* Upgrade rspectacular to v0.21.4
* Upgrade rspectacular to v0.21.3
* Upgrade rspectacular to v0.21.2
* Add badges to the README
* Upgrade rspectacular to v0.21.1
* Lower Timecop version for Ruby 1.8 compatibility
* Lower rake version to 0.9.6 so that it will be compatible with Ruby 1.8
* Update rspectacular to 0.21
* Add CODECLIMATE_REPO_TOKEN as a secure Travis ENV variable
* Upgrade rspectacular to v0.20
* Add the Code Climate test reporter gem
* Add Ruby 2.1 to Travis
* Convert to RSpec 3
Feature
--------------------------------------------------------------------------------
* The running average is always set back to 0 when the bar is reset
Version v1.4.2 - March 1, 2014
================================================================================
* Improve estimated timer for short durations
* Remove useless protection
* README Update
* Slight formatting changes on the PACMAN example to make it consistent with
the others
* Pacman-style progressbar
Version v1.4.1 - January 26, 2014
================================================================================
* Change from 'STDOUT.puts' to the more appropriate 'Kernel.warn'
* Add another spec which tests this in a different way
* Add an acceptance spec to mimic running fuubar with no specs
* Makes Timer#stop a no-op unless it has first been started.
Version v1.4.0 - December 28, 2013
================================================================================
* Displaying the call stack was probably too much
* Upgrade fuubar
* Add an error specifically for invalid progress so that, in parent libraries,
it can be caught properly
* Use the splat operator just to be clear
* Fix an issue with the estimated timers blowing up if the total was nil -
Closes #62
* Changed my mind. Rather than checking if the bar is stopped/started just
blow up when the attempt is made to increment/decrement the bar to an invalid
value
* Remove the CannotUpdateStoppedBarError
* Changes to the total should also be considered a change in progress and
should therefore not be allowed for a stopped bar
* Add a warning that any changes to progress while the bar is stopped, will
eventually be an exception
* Use the helper to divide the seconds. Don't know why I didn't do this before
* When finishing the bar, we also should stop the timers
* When checking 'finished?' make sure we check all progressables
* Always thought it was weird that the 'finished?' check was in the update
method
* Move the 'finished' logic into the progressable
* Rather than specifying @elapsed_time explicitly, use the with_timers helper
* Add a method to check to see whether the bar has been started
* Extract logic for updating progress into a 'update_progress' method
* Add placeholder for an Error which will be used in v2.0.0
* Update the copyright in the README to 2014 (we're almost there :)
* Add 'Zero dependencies' to the README as a benefit of using
  ruby-progressbar
Version v1.3.2 - December 15, 2013
================================================================================
* Try to fix issues with testing on 1.8 and 1.9 when 'console/io' is not
available
* Remove rspectacular so we can get the specs to pass on 1.8 and 1.9.2
Version v1.3.1 - December 15, 2013
================================================================================
* Even if the throttle rate is passed in as nil, use the default regardless
Version v1.3.0 - December 15, 2013
================================================================================
* Remove the 'Road Map' section in the README
* Add notes to the README about non-TTY output
* Add notes to the CHANGELOG
* Give the bar the option of whether or not to automatically start or if
`#start` has to be explicitly called
* Default to a non-TTY-safe format if there is no TTY support when outputting
the bar
* Do not output the bar multiple times if `#resume` is called when the bar is
already started
* Do not output the bar multiple times if `#stop` is called when the bar is
already stopped
* Do not output multiple bars if `#finish` is called multiple times
* Change progressbar variables in specs to be `let`'s instead
* Change output variables in specs to be `let`'s instead
* Update Gemfile.lock to use HTTPS for Rubygems
* Add Ruby 2.0.0 to the README as a supported Ruby version
* Test with Ruby 2.0.0 on Travis CI
* Use HTTPS RubyGems source
* Added an option to set the :remainder_mark (along the lines of
:progress_mark) that allows the user to set the character used to represent
the remaining progress to be made along the bar.
* Add specs for the ANSI color code length calculation
* Name the regex for the ANSI SGR codes so that it's more clear what we're
doing
* Remove comment
* allows to include ANSI SGR codes into molecules, preserving the printable
  length
* Switch from using 'git ls-files' to Ruby Dir globbing - Closes #54
Version v1.2.0 - August 12, 2013
================================================================================
* Add note to CHANGELOG about TTY updates
* Update benchmark script
* Update logic to describe the bar as being 'stopped' also when it is
'finished'
* Only print the bar output if we're printing to a TTY device, or any device
as long as the bar is finished
* Switch to `$stdout` instead of STDOUT so that it can be properly reassigned
  for redirections
* Move carriage return to the clear method
* Add better inspection now that we can have a nil total
* Add note about unknown progress to the changelog
* Add notes to the README about displaying unknown progress
* Fix missing throttle rate in README
* Allow the progress bar to have an 'unknown' amount of progress
* Add item to the changelog
* Update the benchmark script
* Add #log to progressbar for properly handling bar output when printing to
the output IO
* Add CHANGELOG
* Rename all of the requires lines to be consistent with the new lib file
* Remove depreciation code
Version v1.1.2 - August 11, 2013
================================================================================
* Fix the 'negative argument' problem - Closes #47
* Update a spec that was passing when it shouldn't have been and pend it until
we can implement the fix
* Upgrade rspec and fuubar
* When dividing up the remainder of the length and determining how much space
a completed bar should take up, round down so that the bar doesn't complete
until 100%
* Add tags file to gitignore
Version v1.1.1 - June 8, 2013
================================================================================
* Fix file modes to be world readable
* Filter out specs themselves from coverage report
* Add tags file to gitignore
* Simplify #with_progressables and #with_timers
Version v1.1.0 - May 29, 2013
================================================================================
* Upgrade simplecov so it is resilient to mathn being loaded
* fix progress format when core lib mathn is loaded
* Rename throttle_period to throttle_rate
* Set a default throttle_period of 100 times per second
* Use the new precise #elapsed_seconds in the throttle component
* Add #elapsed_seconds that gets a more precise value for the elapsed time
* Rename #elapsed_seconds to #elapsed_whole_seconds
* Add throttle_period documentation
* Made throttle API resemble other components
* Add throttle_period option to #create
* Add throttle component
* Use StringIO in the new spec so we don't get output to STDOUT
* fix for the ruby_debug error, where debug defines a start method on kernel
that is used erroneously by progressbar
* spec that recreates the problem we're seeing with ruby-debug under jruby
* fix terminal width crashing progressbar
* Add failing test for terminal width crashing progress bar
* Make sure we're using an up-to-date version of the JSON gem
* Fix gemspec since Date.today is no longer supported
* Update ruby-prof
* Upgrade timecop
* Upgrade simplecov
* Upgrade rake
* Make changes related to rspectacular
* Install rspectacular
* Remove guard
* Rework gem manifest so that it only calls ls-files once
* Replace .rvmrc with .ruby-version
* Rework #length specs now that we have a more complex set of specifications
* Fix overriding the progress bar length with an environment variable.
* Fix the `rdoc_options` specification in the gemspec
* Add Ruby Markdown code fencing to the README
Version v1.0.2 - October 7, 2012
================================================================================
* Remove superfluous comment
* The amount returned if the total is 0 should always be 100 (as in 100%) and
not the DEFAULT_TOTAL. Even though they currently happen to be the same
number.
* return DEFAULT_TOTAL for percentage_completed of total is zero, fixing
ZeroDivisionError
* Use io/console where available.
* Add tmux notifications to Guardfile
* Bundler is not a development dependency
* Hashes are not ordered and therefore when looking for the time mocking
method, we weren't selecting the proper one. Switched to an Array instead.
* Update development gems
* Move ruby-prof into the Gemfile so it is only loaded when it's MRI Ruby
* Add a script for benchmarking
* Now that we're memoizing Format::Base#bar_molecules, just use it to
calculate how many bar molecules are left
* Limit the API of the Format.Base class by making #non_bar_molecules and
#bar_molecules private
* Move Formatter#process into Format::Base because it is much more concerned
with the format
* Remove the Kernel#tap in Formatter#process and just use an instance variable
instead
* Now that we're not reparsing the format string each time, we can save some
cycles by memoizing the Format::Base#non_bar_molecules and #bar_molecules
* When setting the format string, if it hasn't changed, we don't need to
reparse it
* Extract the logic of setting the format string out into its own private
method ProgressBar::Formatter#format_string=
* Add 'ruby-prof' to the project as a development gem
Version v1.0.1 - August 28, 2012
================================================================================
* Add Ruby 1.8.7 back into Travis CI build
* Fixing string slice bug
* Add a Rakefile
* Update .gitignore
* Add Rake to the Gemfile
Version v1.0.0 - August 18, 2012
================================================================================
* Remove 1.8 from the Ruby Travis builds
* Add a spec for the %% molecule
* Fix bug where a progress bar with an integrated percentage miscalculated the
space it was taking up
* fix @terminal_width and bar_width calculation
* Fix more README typos
* Set the default bar mark to '='
* Make sure to blow up if a molecule is not value
* It's not sufficient to say that a molecule is 'a percent sign followed by
something that isn't a percent sign', we need to force it to be followed by a
letter
* Fix problems in the README
* Update the formatting to make sure the %b and %i formatting molecules can
coexist with each other
* Now that we can use the %b and %i flags, we can create a mirrored bar simply
by using a format string of '%i%b' and therefore this extra code is no longer
necessary
* Make sure that when the timer is started, then stopped, then started again,
it should not register as `stopped?`
* Allow %i to be used display the incomplete space of the bar
* Update `ProgressBar::Formatter#format` to reset the bar style to default if
it is called without passing in a format string
* Allow the %b molecule to be used to display the bar only without incomplete
space
* Update the %B format test to be more reasonable
* Make the %w molecule only return the bar with the percentage instead of
including empty space
* Remove the `length` argument when calling
`ProgressBar::Components::Bar#to_s` and instead set the attribute
* Rename `ProgressBar::Formatter#bar` to `#complete_bar`
* Change the %b (bar with percentage) format molecule to %w
* Swap the meaning of the %b and %B molecules
* There was a typo in the example formats in the README. The literal percent
sign needs to be included in the format string
* Make sure the '%%' molecule is formatted properly
* Little refactoring on the `ProgressBar::Formatter#process` method
* README update
* Remove all of the `ProgressBar::Base#update` calls and convert to method
calls that take a block `#with_update`
* Add an "In The Weeds" section to the README
* Add 'It's better than some other library' section to the README
* Add contributors to the README
* Add supported Rubies to the README
* Tons of README formatting updates
* Add time-mocking information to the README
* If Time is being mocked via Delorean, make sure that the progress bar always
uses the unmocked time
* If Time is being mocked via Timecop, make sure that the progress bar always
uses the unmocked time
* When testing, make sure that we're able to always get the proper version of
`now` that we need for our particular spec
* When calling `ProgressBar::Time.now` allow a Time-like object to be passed
in
* Add a `ruby-progressbar`-specific implementation of Time to encapsulate the
business logic
* Extract the notion of `now` into a method on the `Timer` module
* Remove extra `private`
* Use inheritance to put `title=` in the Formatter module where it belongs
* I didn't notice that #total and #progress were available in the Formatter
module
* Move logic specific to the modules into those modules and use the
inheritance chain to get at them
* Evidently Travis is having issues with Rubinius so we'll remove them from
our .travis.yml file to get a passing build
* Try and get better 1.8.7 compatibility when checking the end character in
the progressbar string
* Add the Travis-CI build status to the README
* Add the Travis-CI configuration file
* Update the other deprecation warnings outside of `ProgressBar::Base`
* Add the remaining method deprecation/warning messages
* Use a little metaprogramming to further dry up the deprecation messages
* fixup! c3e6991988107ab45ac3dac380750b287db3bc2e
* When displaying deprecation warnings for methods, only show them one time;
not every time the method is invoked
* Dry up the warning messages in `ProgressBar::Depreciable`
* Move `ProgressBar::Base#backwards_compatible_args_to_options_conversion` to
the `ProgressBar::Depreciable` module
* Add a new `ProgressBar::Depreciable` module to encapsulate all of the
deprecation logic
* Forgot to return the `options` hash from
`ProgressBar::Base#backwards_compatible_args_to_options_conversion`
* Add the old `bar_mark=` method back so it's more backwards compatible
* Update deprecation warnings to expire June 30th, 2013 instead of October
30th, 2013
* Update the README to reflect the new syntax for creating a ProgressBar
* Override `ProgressBar.new` and remain backward compatible with the pre-1.0
versions of the gem
* Convert the `ProgressBar` module to a class so that we can...
* Add `ProgressBar::Base#progress` and `#total`
* Update the gemspec
* Update the `EstimatedTimer` specs when smoothing is turned off such that the
`#decrement` spec is sufficiently different from the smoothing on `#decrement`
spec
* Update `EstimatedTimer` specs when smoothing is turned off to be more
consistent with the new smoothing specs
* Add `EstimatedTimer` specs to test when smoothing is turned on
* Update the spec text for the `EstimatedTimer` class so that it doesn't
contain the actual expected value but rather the general expectation
* Extract `smoothing` into its own `let` variable
* Add notes to the README about smoothing
* Invert the smoothing value such that 0.0 is no smoothing and 1.0 is maximum
smoothing
* Set the default smoothing value to 0.9
* Convert the `EstimatedTime#estimated_seconds_remaining` over to using the
running average
* Tell the `Progressable` module to update the running average any time the
`progress` is set
* Add the notion of a `smoothing` variable to the `Progressable` module for
use when calculating the running average
* Introduce `Progressable#running_average` and reset it any time
`Progressable#start` is called
* Add a RunningAverageCalculator so we can offload the logic for calculating
running averages in our Progressables
* Always refer to `total` using the accessor rather than the instance variable
* Fix place where we were using a literal string for our time format rather
than the TIME_FORMAT constant
* Make the `Progressable` initializer optional
* Fix README mistake regarding out of bounds ETAs
* In Progressable, rather than accessing the starting_position instance
variable, use an accessor
* Rather than having the logic in multiple places, use `Progressable#start`
where possible
* Update the Progressable module to always reference the `progress` accessor
rather than the instance variable
* Add the ability to customize the bar's title in real time
* Add a note to the README about customizing the bar in real time
* Add notes to the README about overriding the bar's length
* Update the deprecation date of
* Upgrade the README to describe the new 'integrated percentage' formatting
option
* Update Ruby version in .rvmrc
* Replace @out.print with @out.write to work better in dumb terminal like
Emacs' M-x shell.
* Document the smoothing attribute a little better.
* Rewrote smoothing stuff to something better.
* Offload handling of weird time values to format_time (isn't that its job?)
;-)
* Added "smoothing" attribute (default 0.9). It can be set to nil to use the
old ETA code.
* Make time estimate a smoothed moving average
* Use the inherited #initialize
* Add a format where the bar has an integrated percentage
* Just always run all specs
* Alias stopped? to paused?
* If the bar is completed, show the elapsed time, otherwise show the estimated
time
* estimated_time to estimated_time_with_no_oob
* Add a Guardfile
* Add the ability to set the progress mark at any point
* Upgrade RSpec in the Gemfile
* Allow :focused w/o the '=> true'
* More gem updates. Include guard
* Quotes
* Unindent private methods
* And again
* Consistency is key
* And again
* Change to new date and repo
* Upgraded RSpec uses RSpec not Rspec
* Not sure why I did this here
* Upgrade RSpec and SimpleCov
* Bump Ruby version to 1.9.3
* allow to customize the #title_width
* Detect whether the output device is a terminal, and use a simplified output
strategy when it is not.
* Use 1.9 compatible require in test.
* Add tests for Timecop and Delorean time mocking
* Make Progressbar resistant to time mocking
* Automatically tag gem builds as Date.today
* Replace the Bar's instance variable references
* Remove Options Parser
* The starting value should be passed on #start
* Remove Title class for now
* Change 'reversed bar' to 'mirrored bar'
* Rename `out` to `output` and access w/o variable
* Change default output to STDOUT
* Rename `output_stream` to `output`
* Rename `current` to `progress`
* Update README
* Add #decrement to the progress bar
* Backwards compatibility for instantiation
* Create `with_timers` helper
* Update spec_helper with new root gem file
* Update gemspec with new license file
* Update gemspec to auto-update Date
* Add deprecation and backwards compatibility helprs
* Add SimpleCov to the project
* Rename 'beginning_position' option to 'started_at'
* Fix require files
* Update README
* Update README
* Update README
* Remove Test::Unit test cases which are covered
* Replace licenses with the MIT license
* Begin updating README
* Add .gitignore
* Fix 'ArgumentError: negative argument' when using with Spork
* Bar can be forcibly stopped
* Autostart for now
* Add ability to pause/resume progress bar
* Bar resets the elapsed time when reset.
* Bar resets the estimated time when reset.
* Timers can now be reset
* #start determines #reset position
* On #reset, bar goes back to its starting position
* Bar can be reset back to 0
* Fix test typo
* Fix tests
* Reminder for autostart
* Move #title
* Delete unneeded code
* Stop Elapsed Timer on finish
* Progressable components finish properly
* Refactor out common 'Progressable' functionality
* Prepare for more 'finish' functionality
* Refactor common Timer functionality into a module
* Bar outputs a \n when it's finished
* Bar can now be "finished"
* Remove unnecessary (for now) code
* Resizing algorithm is much smarter
* Fix length_changed? check
* Move formatting methods and make them private
* Create #inspect method
* Remove implemented methods
* We have a LICENSE file. No need for this.
* Fix output problem
* Always show 2 decimal places with precise percentage
* Elapsed Time works properly with progress bar
* Estimated Timer works properly with progress bar
* %r format string works properly
* Estimated Timer can now be incremented
* Bar graphic can now be reversed
* Remove method arguments from molecule
* %e, %E and %f format the estimated time correctly
* Formatting
* Include Molecule specs
* Estimated Timer works with out of bounds times
* Estimated Timer displays estimated time correctly
* Estimated Timer displays unknown time remaining
* Estimated Time can now be displayed
* Make Timer work properly
* Move bar_spec to the proper locale
* Elapsed Time can now be displayed
* Percentage information can now be displayed
* Capacity information can now be displayed
* Move Bar and Title into Components submodule
* Base refactoring work laid out
* Add RSpec support files
* Create a Gemfile and other infrastructure files
* Update gemspec
* Fix to failing test: Adjusting the path to progressbar.rb file
* accessor for alternate bar mark
* Updated gem name to match project (so it would build)
* Add a gemspec.
* Move progressbar.rb into lib/.
* Add LICENSE files.
* Get rid of the ChangeLog. That's what revision logs are for.
* Make the readme use Markdown.
* Initial commit (based on Ruby/ProgressBar 0.9).
] | 280 | 2022-02-10T19:58:58.000Z | 2022-03-26T11:13:05.000Z | ### [CVE-2016-1556](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1556)



### Description
Information disclosure in Netgear WN604 before 3.3.3; WNAP210, WNAP320, WNDAP350, and WNDAP360 before 3.5.5.0; and WND930 before 2.0.11 allows remote attackers to read the wireless WPS PIN or passphrase by visiting unauthenticated webpages.
### POC
#### Reference
- http://packetstormsecurity.com/files/135956/D-Link-Netgear-FIRMADYNE-Command-Injection-Buffer-Overflow.html
#### Github
No PoCs found on GitHub currently.
| 42.666667 | 240 | 0.761719 | eng_Latn | 0.298329 |
26997c506f5825bb35f36a0f3fadf59aea124c35 | 1,807 | md | Markdown | languages/csharp/exercises/concept/wizards-and-warriors/.docs/introduction.md | jwarwick/exercism_v3 | db92721b9d62681c51b8f25fb0c6f5f97bfac44b | [
"MIT"
] | null | null | null | languages/csharp/exercises/concept/wizards-and-warriors/.docs/introduction.md | jwarwick/exercism_v3 | db92721b9d62681c51b8f25fb0c6f5f97bfac44b | [
"MIT"
] | 45 | 2020-01-24T17:04:52.000Z | 2020-11-24T17:50:18.000Z | languages/csharp/exercises/concept/wizards-and-warriors/.docs/introduction.md | jwarwick/exercism_v3 | db92721b9d62681c51b8f25fb0c6f5f97bfac44b | [
"MIT"
] | 1 | 2020-04-20T11:41:55.000Z | 2020-04-20T11:41:55.000Z | ## inheritance
In C#, a _class_ hierarchy can be defined using _inheritance_, which allows a derived class (`Car`) to inherit the behavior and data of its parent class (`Vehicle`). If no parent is specified, the class inherits from the `object` class.
Parent classes can provide functionality to derived classes in three ways:
- Define a regular method.
- Define a `virtual` method, which is like a regular method but one that derived classes _can_ change.
- Define an `abstract` method, which is a method without an implementation that derived classes _must_ implement. A class with `abstract` methods must be marked as `abstract` too. Abstract classes cannot be instantiated.
The `protected` access modifier allows a parent class member to be accessed in a derived class, but blocks access from other classes.
Derived classes can access parent class members through the `base` keyword.
```csharp
// Inherits from the 'object' class
abstract class Vehicle
{
// Can be overridden
public virtual void Drive()
{
}
// Must be overridden
protected abstract int Speed();
}
class Car : Vehicle
{
public override void Drive()
{
// Override virtual method
// Call parent implementation
base.Drive();
}
protected override int Speed()
{
// Implement abstract method
}
}
```
The constructor of a derived class will automatically call its parent's constructor _before_ executing its own constructor's logic. Arguments can be passed to a parent class' constructor using the `base` keyword:
```csharp
abstract class Vehicle
{
protected Vehicle(int wheels)
{
Console.WriteLine("Called first");
}
}
class Car : Vehicle
{
public Car() : base(4)
{
Console.WriteLine("Called second");
}
}
```
| 28.234375 | 236 | 0.710017 | eng_Latn | 0.995279 |
269997ab7afd7c954c83981dcd2a3f479c11aa6f | 1,188 | md | Markdown | org.OpenT2T.Sample.SuperPopular.TemperatureSensor/Test Thermostat/README.md | jepickett/translators | 705741f6457289e425b82e51e93477875031f881 | [
"MIT"
] | null | null | null | org.OpenT2T.Sample.SuperPopular.TemperatureSensor/Test Thermostat/README.md | jepickett/translators | 705741f6457289e425b82e51e93477875031f881 | [
"MIT"
] | null | null | null | org.OpenT2T.Sample.SuperPopular.TemperatureSensor/Test Thermostat/README.md | jepickett/translators | 705741f6457289e425b82e51e93477875031f881 | [
"MIT"
] | null | null | null | # Test Thermostat Translator
> <b>Note</b>: This translator is only for testing, and does not correspond to any real device. It just does
> some console logging.
To install dependencies for this translator, run:
```bash
npm install
```
After dependencies are installed, you can run this translator with some test onboarding data:
```bash
node test -t abc
```
You should see output that looks like:
```bash
Javascript initialized.
device.name : Test
device.props : { "id": "abc" }
getCurrentTemperature called.
returning random temperature: 70
getTemperatureTrend called.
returning random temperature trend: -0.23426573426573427
disconnect called.
device.name : Test
device.props : { "id": "abc" }
```
Let's step through what's going on here. The manifest.xml for this translator documents that its onboarding type
is org.OpenT2T.Onboarding.Manual. This basically just describes what sort of setup, pairing or
auth information is required to interact with the device. In the case of this onboarding type, success means you get
a token parameter. This token parameter needs to be provided to the translator for it to work.
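To make the token flow concrete, here is a small, hypothetical sketch (not the actual translator source) of how a translator-style `initDevice` function might validate and use the onboarding token handed to it via `device.props`:

```javascript
// Hypothetical sketch only -- the real translator's internals may differ.
// It mirrors the logged output above: a device object carries `name` and
// `props`, and the onboarding token shows up as `props.id`.
function initDevice(device) {
  if (!device.props || !device.props.id) {
    // Manual onboarding succeeded only if we were handed a token.
    throw new Error('onboarding token (id) is required');
  }
  console.log('device.name  : ' + device.name);
  console.log('device.props : ' + JSON.stringify(device.props));
  return { connected: true, id: device.props.id };
}

const state = initDevice({ name: 'Test', props: { id: 'abc' } });
```

Running `node test -t abc` is what supplies that `abc` token in the sample session above.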
| 32.108108 | 116 | 0.746633 | eng_Latn | 0.997533 |
2699d4e558e06bbd6772d560acd517fd357311ab | 164 | md | Markdown | content/lua/client/hasskill.md | xackery/eqquestapi | e3bb4d58651c7c2bb1ced94deb59115946eed3c5 | [
"MIT"
] | null | null | null | content/lua/client/hasskill.md | xackery/eqquestapi | e3bb4d58651c7c2bb1ced94deb59115946eed3c5 | [
"MIT"
] | 1 | 2020-09-08T17:21:08.000Z | 2020-09-08T17:21:08.000Z | content/lua/client/hasskill.md | xackery/eqquestapi | e3bb4d58651c7c2bb1ced94deb59115946eed3c5 | [
"MIT"
] | 1 | 2020-08-29T00:49:26.000Z | 2020-08-29T00:49:26.000Z | ---
title: HasSkill
searchTitle: Lua Client HasSkill
weight: 1
hidden: true
menuTitle: HasSkill
---
## HasSkill
```lua
Client:HasSkill(number skill_id); -- bool
``` | 14.909091 | 41 | 0.719512 | eng_Latn | 0.34336 |
2699d70df79e1e42f39aff72671164f00ed30491 | 7,163 | md | Markdown | README.md | fakecoinbase/LavacoreteamslashLava | e44959b81b708f995bce5611bf857bd180a39b55 | [
"MIT"
] | 14 | 2020-04-23T04:15:01.000Z | 2020-08-04T07:00:38.000Z | README.md | fakecoinbase/LavacoreteamslashLava | e44959b81b708f995bce5611bf857bd180a39b55 | [
"MIT"
] | null | null | null | README.md | fakecoinbase/LavacoreteamslashLava | e44959b81b708f995bce5611bf857bd180a39b55 | [
"MIT"
] | 5 | 2020-05-10T22:28:50.000Z | 2020-07-19T06:26:40.000Z | Lava
===============
[](https://opencollective.com/lava) [](https://github.com/Lavacoreteam/Lava/releases)
[](https://github.com/Lavacoreteam/Lava/releases)
[](https://github.com/Lavacoreteam/Lava/releases)
[](https://github.com/Lavacoreteam/Lava/graphs/commit-activity)
[](https://github.com/Lavacoreteam/Lava/graphs/code-frequency)
[](https://github.com/Lavacoreteam/Lava/commits/master)
[](https://lgtm.com/projects/g/lavaofficial/lava/alerts/)
[](https://lgtm.com/projects/g/lavaofficial/lava/context:cpp)
What is Lava?
--------------
[Lava](https://lava.money) is a privacy-focused cryptocurrency forked from Zcoin (https://zcoin.io/) that utilizes zero-knowledge proofs, which allow users to destroy coins and then redeem them later for brand new ones with no transaction history. Zcoin was the first project to implement the Zerocoin protocol and transitioned to the [Sigma protocol](https://lava.money/what-is-sigma-and-why-is-it-replacing-zerocoin-in-lava/) which has no trusted setup and small proof sizes. Lava also utilizes [Dandelion++](https://arxiv.org/abs/1805.11060) to obscure the originating IP of transactions without relying on any external services such as Tor/i2P.
How Lava’s and Zcoin's Privacy Technology Compares to the Competition
--------------

Read more: https://lava.money/lavas-privacy-technology-compares-competition/
Running with Docker
===================
If you are already familiar with Docker, then running Lava with Docker might be the easier method for you. To run Lava using this method, first install [Docker](https://store.docker.com/search?type=edition&offering=community). After this you may
continue with the following instructions.
Please note that we currently don't support the GUI when running with Docker. Therefore, you can only use RPC (via HTTP or the `lava-cli` utility) to interact with Lava via this method.
Pull our latest official Docker image:
```sh
docker pull lavaofficial/lavad
```
Start Lava daemon:
```sh
docker run --detach --name lavad lavaofficial/lavad
```
View current block count (this might take a while since the daemon needs to find other nodes and download blocks first):
```sh
docker exec lavad lava-cli getblockcount
```
View connected nodes:
```sh
docker exec lavad lava-cli getpeerinfo
```
Stop daemon:
```sh
docker stop lavad
```
Backup wallet:
```sh
docker cp lavad:/home/lavad/.lava/wallet.dat .
```
Start daemon again:
```sh
docker start lavad
```
Linux Build Instructions and Notes
==================================
Dependencies
----------------------
1. Update packages
sudo apt-get update
2. Install required packages
sudo apt-get install build-essential libtool autotools-dev automake pkg-config libssl-dev libevent-dev bsdmainutils libboost-all-dev
3. Install Berkeley DB 4.8
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:bitcoin/bitcoin
sudo apt-get update
sudo apt-get install libdb4.8-dev libdb4.8++-dev
4. Install QT 5
sudo apt-get install libminiupnpc-dev libzmq3-dev
sudo apt-get install libqt5gui5 libqt5core5a libqt5dbus5 qttools5-dev qttools5-dev-tools libprotobuf-dev protobuf-compiler libqrencode-dev
Build
----------------------
1. Clone the source:
git clone https://github.com/Lavacoreteam/Lava
2. Build Lava-core:
Configure and build the headless Lava binaries as well as the GUI (if Qt is found).
You can disable the GUI build by passing `--without-gui` to configure.
./autogen.sh
./configure
make
3. It is recommended to build and run the unit tests:
make check
macOS Build Instructions and Notes
=====================================
See [doc/build-macos.md](doc/build-macos.md) for instructions on building on macOS.
Windows (64/32 bit) Build Instructions and Notes
=====================================
See [doc/build-windows.md](doc/build-windows.md) for instructions on building on Windows 64/32 bit.
## Contributors
### Code Contributors
This project exists thanks to all the people who contribute. [[Contribute](CONTRIBUTING.md)].
<a href="https://github.com/Lavacoreteam/Lava/graphs/contributors"><img src="https://opencollective.com/lava/contributors.svg?width=890&button=false" /></a>
### Financial Contributors
Become a financial contributor and help us sustain our community. [[Contribute](https://opencollective.com/lava/contribute)]
#### Individuals
<a href="https://opencollective.com/lava"><img src="https://opencollective.com/lava/individuals.svg?width=890"></a>
#### Organizations
Support this project with your organization. Your logo will show up here with a link to your website. [[Contribute](https://opencollective.com/lava/contribute)]
<a href="https://opencollective.com/lava/organization/0/website"><img src="https://opencollective.com/lava/organization/0/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/1/website"><img src="https://opencollective.com/lava/organization/1/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/2/website"><img src="https://opencollective.com/lava/organization/2/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/3/website"><img src="https://opencollective.com/lava/organization/3/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/4/website"><img src="https://opencollective.com/lava/organization/4/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/5/website"><img src="https://opencollective.com/lava/organization/5/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/6/website"><img src="https://opencollective.com/lava/organization/6/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/7/website"><img src="https://opencollective.com/lava/organization/7/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/8/website"><img src="https://opencollective.com/lava/organization/8/avatar.svg"></a>
<a href="https://opencollective.com/lava/organization/9/website"><img src="https://opencollective.com/lava/organization/9/avatar.svg"></a>
| 45.050314 | 648 | 0.738099 | eng_Latn | 0.469212 |
2699e31a93bed44f80c654eb4237f1e9996853a8 | 58 | md | Markdown | README.md | NightBaron/.NET-Obfuscator | 0a2ae648e0d0f6db71c1bf9e49f8af4a7ff08d9d | [
"MIT"
] | 3 | 2017-07-25T13:29:07.000Z | 2021-08-09T06:06:03.000Z | README.md | NightBaron/.NET-Obfuscator | 0a2ae648e0d0f6db71c1bf9e49f8af4a7ff08d9d | [
"MIT"
] | null | null | null | README.md | NightBaron/.NET-Obfuscator | 0a2ae648e0d0f6db71c1bf9e49f8af4a7ff08d9d | [
"MIT"
] | 2 | 2019-04-30T06:10:08.000Z | 2021-08-09T06:06:05.000Z | # .NET-Obfuscator
Simple .net obfuscator using Mono Cecil
| 19.333333 | 39 | 0.793103 | ind_Latn | 0.367147 |
269a31359b366b992179c56ed1f94e7c0f80c038 | 1,505 | md | Markdown | index/g/grass.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 26 | 2019-03-21T15:43:32.000Z | 2022-03-12T18:30:35.000Z | index/g/grass.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 3 | 2020-05-01T18:15:58.000Z | 2021-04-16T05:51:05.000Z | index/g/grass.md | charles-halifax/recipes | 48268785a13d598a87de2e75c525056d00202e54 | [
"MIT"
] | 14 | 2019-06-23T22:54:35.000Z | 2021-10-16T02:04:45.000Z | # grass
* [Grass Fed Beef Meatloaf In A Bacon Blanket](../../index/g/grass-fed-beef-meatloaf-in-a-bacon-blanket-368549.json)
* [Grass Fed Strip Steak With Spicy Hoisin Sauce And Cucumber Relish](../../index/g/grass-fed-strip-steak-with-spicy-hoisin-sauce-and-cucumber-relish-356949.json)
* [Perfect Grass Fed Beef Burgers](../../index/p/perfect-grass-fed-beef-burgers-51210490.json)
* [Grass Fed Beef Sliders With Air Fried French Fries](../../index/g/grass-fed-beef-sliders-with-air-fried-french-fries.json)
* [Creamy Lemon Grass Ice Cream](../../index/c/creamy-lemon-grass-ice-cream.json)
* [Lemon Grass And Chicken Summer Rolls](../../index/l/lemon-grass-and-chicken-summer-rolls.json)
* [Lemon Grass Lemonade](../../index/l/lemon-grass-lemonade.json)
* [Steamed Lemon Grass Crab Legs](../../index/s/steamed-lemon-grass-crab-legs.json)
* [Vietnamese Lemon Grass Chicken Curry](../../index/v/vietnamese-lemon-grass-chicken-curry.json)
* [Cardamom Scented Grass Fed Rib Steak With Herb Vinaigrette 235237](../../index/c/cardamom-scented-grass-fed-rib-steak-with-herb-vinaigrette-235237.json)
* [Grass Fed Beef Meatloaf In A Bacon Blanket 368549](../../index/g/grass-fed-beef-meatloaf-in-a-bacon-blanket-368549.json)
* [Grass Fed Strip Steak With Spicy Hoisin Sauce And Cucumber Relish 356949](../../index/g/grass-fed-strip-steak-with-spicy-hoisin-sauce-and-cucumber-relish-356949.json)
* [Perfect Grass Fed Beef Burgers 51210490](../../index/p/perfect-grass-fed-beef-burgers-51210490.json)
| 94.0625 | 170 | 0.740864 | eng_Latn | 0.209979 |
269b11422ca63cfcd054e307b6d26dfd219d581a | 2,347 | md | Markdown | _pages/resources.md | anthonp/reverie | 6bc63a6afdfea00829753596565c6e77b2b1a199 | [
"MIT"
] | null | null | null | _pages/resources.md | anthonp/reverie | 6bc63a6afdfea00829753596565c6e77b2b1a199 | [
"MIT"
] | null | null | null | _pages/resources.md | anthonp/reverie | 6bc63a6afdfea00829753596565c6e77b2b1a199 | [
"MIT"
] | null | null | null | ---
layout: page
permalink: /resources/
title: Resources Available To You
---
### Tools:
|Tool | Description | Documentation | Link |
|---- | ----------- | --------------------- | ----------------|
|Linpeas | PrivEsc | [docs](https://www.aldeid.com/wiki/LinPEAS) | [link](https://linpeas.sh/)|
|Wireshark GeoIP | Geographic | [docs](https://www.chappell-university.com/post/geoip-mapping-in-wireshark) | [link](https://www.chappell-university.com/post/geoip-mapping-in-wireshark)|
|skanuvaty | Subdomain Enumerator | [docs](https://github.com/Esc4iCEscEsc/skanuvaty) | [link](https://github.com/Esc4iCEscEsc/skanuvaty)|
### YouTube:
Philosophy:
- The Hated One: [https://www.youtube.com/channel/UCjr2bPAyPV7t35MvcgT3W8Q](https://www.youtube.com/channel/UCjr2bPAyPV7t35MvcgT3W8Q)
- Mental Outlaw: [https://www.youtube.com/c/MentalOutlaw](https://www.youtube.com/c/MentalOutlaw)
Education:
- Network Chuck: [https://www.youtube.com/c/NetworkChuck](https://www.youtube.com/c/NetworkChuck)
- John Hammond: [https://www.youtube.com/c/JohnHammond010](https://www.youtube.com/c/JohnHammond010)
- DistroTube: [https://www.youtube.com/c/DistroTube](https://www.youtube.com/c/DistroTube)
- Chris Titus: [https://www.youtube.com/c/ChrisTitusTech](https://www.youtube.com/c/ChrisTitusTech)
Online Resources:
- OSINT: [https://start.me/p/ZME8nR/osint](https://start.me/p/ZME8nR/osint)
- Search Engines: [https://start.me/p/b56G5Q/search-engines](https://start.me/p/b56G5Q/search-engines)
| 55.880952 | 191 | 0.632723 | yue_Hant | 0.675969 |
269b3ccc19aad3459bed2ba03b4a3d6d6e5e6120 | 292 | md | Markdown | packages/docs/src/pages/modal/md/modal2.md | jeven2016/react-windy-ui | 93623d11fec4b282ecaeb3e09c60bf81ecf56c3f | [
"MIT"
] | 1 | 2022-01-22T14:31:30.000Z | 2022-01-22T14:31:30.000Z | packages/docs/src/pages/modal/md/modal2.md | jeven2016/react-windy-ui | 93623d11fec4b282ecaeb3e09c60bf81ecf56c3f | [
"MIT"
] | 2 | 2021-06-12T05:52:22.000Z | 2022-02-26T07:03:01.000Z | packages/docs/src/pages/modal/md/modal2.md | jeven2016/react-windy-ui | 93623d11fec4b282ecaeb3e09c60bf81ecf56c3f | [
"MIT"
] | null | null | null | ---
order: 2
type: sample
zh_CN: 在窗口顶部显示的Modal
en_US: Modal
editUrl: $BASE/pages/menu/md/modal2.md
---
+++ zh_CN
默认情况下,Modal会在屏幕居中显示。如果需要在顶部一定距离显示,可以<Code>center</Code>为<Code>false</Code>,同时在<Code>style</Code>
中设置<Code>top</Code>属性值。
+++ en_US
By default, the Modal is displayed centered on the screen. To show it at a fixed distance from the top instead, set <Code>center</Code> to <Code>false</Code> and set the <Code>top</Code> value in <Code>style</Code>.
+++ SampleCode
fileName: Modal2
| 16.222222 | 97 | 0.712329 | yue_Hant | 0.91926 |
269b6f6770852ddf35c1de40cbfbc75576d0c30c | 2,306 | md | Markdown | README.md | faneaatiku/zecwallet | 8cef0943a0dc9886d34bb1396e7615d498152453 | [
"MIT"
] | null | null | null | README.md | faneaatiku/zecwallet | 8cef0943a0dc9886d34bb1396e7615d498152453 | [
"MIT"
] | null | null | null | README.md | faneaatiku/zecwallet | 8cef0943a0dc9886d34bb1396e7615d498152453 | [
"MIT"
] | 2 | 2021-02-26T13:37:55.000Z | 2021-07-18T12:13:51.000Z | BZWallet Fullnode is a z-Addr first, Sapling compatible wallet and full node for bzedged that runs on Linux, Windows and macOS.


# Installation
**Note**: BZWallet Fullnode will download the **entire blockchain (about 4GB)**, and requires some familiarity with the command line. If you don't want to download the blockchain but prefer a Lite wallet, please check out [BZWallet Lite](https://getbze.com).
Head over to the releases page and grab the latest installers or binary. https://getbze.com
### Linux
If you are on Debian/Ubuntu, please download the '.AppImage' package and just run it.
```
./BZwallet.Fullnode-1.4.2.AppImage
```
If you prefer to install a `.deb` package, that is also available.
```
sudo apt install -f ./BZwallet_1.4.2_amd64.deb
```
### Windows
Download and run the `.msi` installer and follow the prompts. Alternatively, you can download the release binary, unzip it and double-click on `bzwallet.exe` to start.
### macOS
Double-click on the `.dmg` file to open it, and drag `BZwallet Fullnode` on to the Applications link to install.
## bzedged
BZWallet needs a BZEdge node running bzedged. If you already have a zcashd node running, BZWallet will connect to it.
If you don't have one, BZWallet will start its embedded bzedged node.
Additionally, if this is the first time you're running BZWallet or a bzedge daemon, BZWallet will download the zcash params (~777 MB) and configure `bzedge.conf` for you.
## Compiling from source
BZWallet is written in Electron/Javascript and can be built from source. Note that if you are compiling from source, you won't get the embedded bzedged by default. You can either run an external bzedged, or compile bzedged as well.
#### Pre-Requisits
You need to have the following software installed before you can build BZWallet Fullnode
- Nodejs v12.16.1 or higher - https://nodejs.org
- Yarn - https://yarnpkg.com
```
git clone https://github.com/bze-alphateam/bzwallet.git
cd bzwallet
yarn install
yarn build
```
To start in development mode, run
```
yarn dev
```
To start in production mode, run
```
yarn start
```
### [Troubleshooting](https://discord.gg/7KuDSSESVC)
Join our [Discord](https://discord.gg/7KuDSSESVC)
| 30.342105 | 258 | 0.753252 | eng_Latn | 0.985799 |
269c0930e186a4eeb4dabd325fd2397863be5564 | 1,109 | md | Markdown | content/events/2018-edinburgh/program/rachel-willmer.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 6 | 2016-11-14T14:08:29.000Z | 2018-05-09T18:57:06.000Z | content/events/2018-edinburgh/program/rachel-willmer.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 461 | 2016-11-11T19:23:06.000Z | 2019-07-21T16:10:04.000Z | content/events/2018-edinburgh/program/rachel-willmer.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 15 | 2016-11-11T15:07:53.000Z | 2019-01-18T04:55:24.000Z | +++
Talk_date = ""
Talk_start_time = ""
Talk_end_time = ""
Title = "Keynote: DevOps: Why the Startup CEO should care and how to convince them with numbers"
Type = "talk"
Speakers = ["rachel-willmer"]
+++
You run a startup. You’ve just raised your first serious money, and need to start spending it wisely to hit your goals you promised to your investor.
You’re hiring developers like crazy, but going slower than ever! What’s gone wrong?
Your Lead Developer keeps saying “we need to build a CD/CI pipeline, so I’m going to take one developer away to focus on that, it’ll make life better in the long run”. And you go “No! We need all hands on getting the next Shiny New Feature out, we have no time for whatever you just said..”
Sound familiar? We, as a profession, need to get better at explaining what we do in terms that a non-technical startup CEO can understand.
I’m going to take some examples of DevOps projects and show how the language we use to describe them can turn them from a cost into an investment. So your CEO will *want* to put money into your DevOps project, not fight to hold it back. | 61.611111 | 290 | 0.758341 | eng_Latn | 0.999534 |
269cdf6b6e1d0625a077bbebb02ff971cf57862a | 56 | md | Markdown | README.md | intoalmiala/tukoke2021-recorder | 57f409bb26fffe2a4ca1d18e00f98b3ec2aeca01 | [
"MIT"
] | null | null | null | README.md | intoalmiala/tukoke2021-recorder | 57f409bb26fffe2a4ca1d18e00f98b3ec2aeca01 | [
"MIT"
] | null | null | null | README.md | intoalmiala/tukoke2021-recorder | 57f409bb26fffe2a4ca1d18e00f98b3ec2aeca01 | [
"MIT"
] | null | null | null | # tukoke2021-recorder
Recording program for TuKoKe 2021
| 18.666667 | 33 | 0.839286 | eng_Latn | 0.842497 |
269d3c5d08a31727e9e987bf072848e7ea63be3f | 48 | md | Markdown | CONTRIBUTING.md | yagobski/loopback-connector-elastic | 8b68b2299b8f7928d986f9cee36b92aafd61ac72 | [
"MIT"
] | 2 | 2015-03-16T02:22:37.000Z | 2015-04-30T21:00:50.000Z | CONTRIBUTING.md | yagobski/loopback-connector-elastic | 8b68b2299b8f7928d986f9cee36b92aafd61ac72 | [
"MIT"
] | null | null | null | CONTRIBUTING.md | yagobski/loopback-connector-elastic | 8b68b2299b8f7928d986f9cee36b92aafd61ac72 | [
"MIT"
] | null | null | null | ## Contributing to this connector
### Bugfixes
| 12 | 33 | 0.729167 | eng_Latn | 0.997851 |
269d4aacab7148fef51d9954db87d547bccec830 | 705 | md | Markdown | e-board.md | uscsage/uscsage.github.io | 6a25b611060e9a360343b95d17c4d84d82a32561 | [
"MIT"
] | null | null | null | e-board.md | uscsage/uscsage.github.io | 6a25b611060e9a360343b95d17c4d84d82a32561 | [
"MIT"
] | null | null | null | e-board.md | uscsage/uscsage.github.io | 6a25b611060e9a360343b95d17c4d84d82a32561 | [
"MIT"
] | null | null | null | ---
layout: page
title: Meet the 2019-2020 E-Board!
---
### EXECUTIVE CO-DIRECTOR
Vivian Ren (she/her/hers)
### EXECUTIVE CO-DIRECTOR
Leanna Lugo (she/her/hers)
### ASSISTANT DIRECTOR
Celeste Pane (she/her/hers)
### MEMBERSHIP COORDINATOR
Lauren Okhovat (she/her/hers)
### CO-DIRECTOR OF PUBLIC RELATIONS AND MARKETING
Myra Wu (she/her/hers)
### CO-DIRECTOR OF PUBLIC RELATIONS AND MARKETING
Sara Okum (she/her/hers)
### ADVOCACY CO-DIRECTOR
Honor Hayball (she/her/hers)
### ADVOCACY CO-DIRECTOR
Faith Dearborn (she/her/hers)
### BODY LOVE MONTH DIRECTOR
Gwen Howard (she/her/hers/they/them/theirs)
### DIRECTOR OF FINANCE
Ellen Minkin (she/her/hers)
### FEM FEST DIRECTOR
_Pending_ | 14.6875 | 49 | 0.716312 | yue_Hant | 0.673405 |
269de13eb72bb5193109dea6dda161852bb35127 | 4,653 | md | Markdown | README.md | candrian/electronic-leadscrew | 0837df22dd4a2482d31b9acbc9e88f8ac45dd1fd | [
"MIT"
] | null | null | null | README.md | candrian/electronic-leadscrew | 0837df22dd4a2482d31b9acbc9e88f8ac45dd1fd | [
"MIT"
] | null | null | null | README.md | candrian/electronic-leadscrew | 0837df22dd4a2482d31b9acbc9e88f8ac45dd1fd | [
"MIT"
] | null | null | null | # Clough42 Lathe Electronic Leadscrew Controller
This is the firmware for an experimental lathe electronic leadscrew controller. The goal is to replace the change
gears or gearbox on a metalworking lathe with a stepper motor controlled electronically based on an encoder on the
lathe spindle. The electronic controller allows setting different virtual gear ratios for feeds and threading.
## Project Status
Design and prototyping.
**NOTE: This is still a work in progress and is subject to change. Many people have expressed interest in buying parts
and following along with the build. That's great! Just be aware that this hasn't been tested and refined end-to-end yet, so
things like stepper motors, servos, drivers, display boards are likely to change as the project progresses.**
## Documentation
For documentation, please [**visit the project wiki**](https://github.com/clough42/electronic-leadscrew/wiki).
## Current Concept
The project is still in the early stages of development. The current hardware looks like this:
* Use a TI F280049C Piccolo microcontroller on a LaunchXL prototyping board
* Use an inexpensive SPI "LED&Key" import display and keypad for the user interface
* Read the spindle position using a rotary encoder with a 3D-printed gear
* Use a standard stepper motor and driver on the lathe leadscrew
* Make as few modifications to the lathe as possible:
* Use the original leadscrew and halfnuts without machining
* Drive the leadscrew through the factory input shaft
* Drive the encoder from the factory spindle gear train
* Make it easy to restore the lathe to the original change gear configuration
The current plan is to support the following operations:
* Multiple feeds in thousandths of an inch per revolution
* Multiple feeds in hundredths of a millimeter per revolution
* Multiple imperial thread pitches (in threads-per-inch)
* Multiple metric thread pitches (in millimeters)
## Future Goals
While this project is starting out as a simple gearbox replacement, some features seem attractive, even for
that simple case:
* Threading/feeding up to a hard shoulder with automatic stop
* Automatic recall and resync with hybrid threads (e.g. metric threads on an imperial leadscrew)
## Non-Goals
This is not a CNC project. The goal is not to control the lathe automatically, but rather to transparently
replace the lathe change gears or quick-change gearbox with an electronic virtual gearbox that can be quickly
set to almost any ratio. The lathe will continue to operate normally, with the operator closing and opening
the half-nuts to control feeding and threading.
## Hardware Configuration
If you want to get this running on your own LaunchXL-F280049C board, here are the settings you will need:
### Physical Switches
The LaunchXL-F280049C has a number of switches on the board to configure the GPIO and peripheral signal routing.
This provides lots of flexibility for different uses of the board, but it means that you will have to change
some of the switches from their defaults in order to use the GPIO pins and eQEP periperals.
These are the required switch settings
* `S3.1 = 0` Connects eQEP1 to J12
* `S3.2 = 0` Connects eQEP2 to J13
* `S4 = 1` Connects eQEP1 to J12
* `S9 = 1` (default) -- Connects GPIO32/33 to BoosterPack site 2 pins
S6 and S8 can be in any position. 0 is the default.
### External Connections
The firmware assumes the following pins are connected externally.
> NOTE: the silkscreen on the LaunchXL-F280049C board has the labels for J6 and J8
> swapped. J6 is the header closest to the edge of the board.
#### Encoder
A quadrature encoder for the spindle is connected to the eQEP connector, J12.
#### Stepper Driver
A Step-direction stepper motor driver should be connected to the following GPIO pins:
* `GPIO0` (J8 pin 80) - Step
* `GPIO1` (J8 pin 79) - Direction
* `GPIO6` (J8 pin 78) - Enable
* `GPIO7` (J8 pin 77) - Alarm input
#### Control Panel
The LED&KEY control panel board **must be connected through a bidirectional level converter**, since
it is a 5V device, and the TI microcontroller is a 3.3V device. A standard BSS138-based converter works
well.
* `GPIO24` (J6 pin 55) - DIO
* `GPIO32` (J8 pin 71) - CLK
* `GPIO33` (J6 pin 53) - STB (chip select)
#### Debugging Outputs
In addition to the control wires for the hardware, the firmware also outputs signals on two additional
GPIO pins to allow timing of the interrupt and loop routines. An oscilloscope or logic analyzer may be
connected to these pins to debug and time the ISR routines:
* `GPIO2` (J8 pin 76) - Main state machine ISR
* `GPIO3` (J8 pin 75) - Secondary control panel loop
| 50.032258 | 125 | 0.776058 | eng_Latn | 0.998806 |
269e33b08932ca4412a69ea46d4e13cd98d3b91f | 1,084 | md | Markdown | content/api/ng-grid/grid.simplenodatarenderercomponent._constructor_.md | ressurectit/ressurectit.github.io | 09ed543e50e9b35594333afe6e98d79687849b04 | [
"MIT"
] | null | null | null | content/api/ng-grid/grid.simplenodatarenderercomponent._constructor_.md | ressurectit/ressurectit.github.io | 09ed543e50e9b35594333afe6e98d79687849b04 | [
"MIT"
] | null | null | null | content/api/ng-grid/grid.simplenodatarenderercomponent._constructor_.md | ressurectit/ressurectit.github.io | 09ed543e50e9b35594333afe6e98d79687849b04 | [
"MIT"
] | null | null | null | <!-- Do not edit this file. It is automatically generated by API Documenter. -->
[Home](./index.md) > [@anglr/grid](./grid.md) > [SimpleNoDataRendererComponent](./grid.simplenodatarenderercomponent.md) > [(constructor)](./grid.simplenodatarenderercomponent._constructor_.md)
## SimpleNoDataRendererComponent.(constructor)
Constructs a new instance of the `SimpleNoDataRendererComponent` class
<b>Signature:</b>
```typescript
constructor(gridPlugins: GridPluginInstances, _stringLocalization: StringLocalization, pluginElement: ElementRef, _changeDetector: ChangeDetectorRef, options?: SimpleNoDataRendererOptions<CssClassesSimpleNoDataRenderer>);
```
## Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| gridPlugins | <code>GridPluginInstances</code> | |
| \_stringLocalization | <code>StringLocalization</code> | |
| pluginElement | <code>ElementRef</code> | |
| \_changeDetector | <code>ChangeDetectorRef</code> | |
| options | <code>SimpleNoDataRendererOptions<CssClassesSimpleNoDataRenderer></code> | |
| 43.36 | 222 | 0.734317 | kor_Hang | 0.270505 |
269e3d7b5bb9c5645a42f1454f09d871e85eb28c | 1,419 | md | Markdown | docs/concepts/07-plugins.md | mrfambo/slate | c00f246c7e5b60642734fa41dfe3e336e36cf873 | [
"MIT"
] | 6 | 2020-12-16T21:59:07.000Z | 2021-12-18T23:38:53.000Z | docs/concepts/07-plugins.md | mrfambo/slate | c00f246c7e5b60642734fa41dfe3e336e36cf873 | [
"MIT"
] | 7 | 2021-06-28T20:48:17.000Z | 2022-02-26T02:09:39.000Z | docs/concepts/07-plugins.md | mrfambo/slate | c00f246c7e5b60642734fa41dfe3e336e36cf873 | [
"MIT"
] | 5 | 2020-08-12T11:05:34.000Z | 2021-02-11T16:40:21.000Z | # Plugins
You've already seen how the behaviors of Slate editors can be overridden. These overrides can also be packaged up into "plugins" to be reused, tested and shared. This is one of the most powerful aspects of Slate's architecture.
A plugin is simply a function that takes an `Editor` object and returns it after it has augmented it in some way.
For example, a plugin that marks image nodes as "void":
```js
const withImages = editor => {
const { isVoid } = editor
editor.isVoid = element => {
return element.type === 'image' ? true : isVoid(element)
}
return editor
}
```
And then to use the plugin, simply:
```js
import { createEditor } from 'slate'
const editor = withImages(createEditor())
```
This plugin composition model makes Slate extremely easy to extend!
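Because each plugin is just a function from editor to editor, several plugins compose by ordinary nesting. The sketch below is self-contained: `createEditor` here is a tiny stand-in object (an assumption for demonstration, not the real slate export), and `withVideos` is a hypothetical second plugin following the same pattern:

```javascript
// Tiny stand-in for slate's createEditor, so the sketch runs anywhere
const createEditor = () => ({
  isVoid: () => false,
})

const withImages = editor => {
  const { isVoid } = editor
  editor.isVoid = element => (element.type === 'image' ? true : isVoid(element))
  return editor
}

// Hypothetical second plugin, wrapping isVoid the same way
const withVideos = editor => {
  const { isVoid } = editor
  editor.isVoid = element => (element.type === 'video' ? true : isVoid(element))
  return editor
}

// Composition is just function nesting
const editor = withVideos(withImages(createEditor()))

console.log(editor.isVoid({ type: 'image' })) // true
console.log(editor.isVoid({ type: 'video' })) // true
console.log(editor.isVoid({ type: 'paragraph' })) // false
```

In a real app you would wrap slate's actual `createEditor()` result the same way.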
## Helper Functions
In addition to the plugin functions, you might want to expose helper functions that are used alongside your plugins. For example:
```js
import { Editor, Element } from 'slate'
const MyEditor = {
...Editor,
insertImage(editor, url) {
const element = { type: 'image', url, children: [{ text: '' }] }
Transforms.insertNodes(editor, element)
},
}
const MyElement = {
...Element,
isImageElement(value) {
return Element.isElement(element) && element.type === 'image'
},
}
```
Then you can use `MyEditor` and `MyElement` everywhere and have access to all your helpers in one place.
| 25.8 | 227 | 0.706836 | eng_Latn | 0.983423 |
269e64ab426ee186f13be6106cbfb8983f253068 | 5,540 | md | Markdown | docs/visual-basic/programming-guide/concepts/linq/how-to-chain-axis-method-calls-linq-to-xml.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/programming-guide/concepts/linq/how-to-chain-axis-method-calls-linq-to-xml.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/visual-basic/programming-guide/concepts/linq/how-to-chain-axis-method-calls-linq-to-xml.md | adamsitnik/docs.cs-cz | 7c534ad2e48aa0772412dc0ecf04945c08fa4211 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'How to: Chain axis method calls (LINQ to XML) (Visual Basic)'
ms.date: 07/20/2015
ms.assetid: e4e22942-39bd-460f-b3c0-9f09e53d3aa9
ms.openlocfilehash: 8c607915d83c49958e3aa86c9625fa1311a2274b
ms.sourcegitcommit: eb9ff6f364cde6f11322e03800d8f5ce302f3c73
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 08/01/2019
ms.locfileid: "68709840"
---
# <a name="how-to-chain-axis-method-calls-linq-to-xml-visual-basic"></a>How to: Chain axis method calls (LINQ to XML) (Visual Basic)
A common pattern that you use in your code is to call an axis method, and then call one of the extension method axes.
There are two axes named `Elements` that return a collection of elements: the <xref:System.Xml.Linq.XContainer.Elements%2A?displayProperty=nameWithType> method and the <xref:System.Xml.Linq.Extensions.Elements%2A?displayProperty=nameWithType> method. You can combine these two axes to find all elements of a specified name at a given depth in the tree.
## <a name="example"></a>Example
This example uses <xref:System.Xml.Linq.XContainer.Elements%2A?displayProperty=nameWithType> and <xref:System.Xml.Linq.Extensions.Elements%2A?displayProperty=nameWithType> to find all `Name` elements in all `Address` elements in all `PurchaseOrder` elements.
This example uses the following XML document: [Sample XML file: Multiple purchase orders (LINQ to XML)](../../../../visual-basic/programming-guide/concepts/linq/sample-xml-file-multiple-purchase-orders-linq-to-xml.md).
```vb
Dim purchaseOrders As XElement = XElement.Load("PurchaseOrders.xml")
Dim names As IEnumerable(Of XElement) = _
From el In purchaseOrders.<PurchaseOrder>.<Address>.<Name> _
Select el
For Each e As XElement In names
Console.WriteLine(e)
Next
```
This example produces the following output:
```xml
<Name>Ellen Adams</Name>
<Name>Tai Yee</Name>
<Name>Cristian Osorio</Name>
<Name>Cristian Osorio</Name>
<Name>Jessica Arnold</Name>
<Name>Jessica Arnold</Name>
```
This works because one of the implementations of the `Elements` axis is an extension method on <xref:System.Collections.Generic.IEnumerable%601> of <xref:System.Xml.Linq.XContainer>. <xref:System.Xml.Linq.XElement> derives from <xref:System.Xml.Linq.XContainer>, so you can call the <xref:System.Xml.Linq.Extensions.Elements%2A?displayProperty=nameWithType> method on the results of a call to the <xref:System.Xml.Linq.XContainer.Elements%2A?displayProperty=nameWithType> method.
## <a name="example"></a>Example
Sometimes you want to retrieve all elements at a specific depth, where intervening ancestor elements may or may not exist. For example, in the following document you might want to retrieve all `ConfigParameter` elements that are descendants of a `Customer` element, but not the `ConfigParameter` element that is a child of the `Root` element.
```xml
<Root>
<ConfigParameter>RootConfigParameter</ConfigParameter>
<Customer>
<Name>Frank</Name>
<Config>
<ConfigParameter>FirstConfigParameter</ConfigParameter>
</Config>
</Customer>
<Customer>
<Name>Bob</Name>
<!--This customer doesn't have a Config element-->
</Customer>
<Customer>
<Name>Bill</Name>
<Config>
<ConfigParameter>SecondConfigParameter</ConfigParameter>
</Config>
</Customer>
</Root>
```
To do this, you can use the <xref:System.Xml.Linq.Extensions.Elements%2A?displayProperty=nameWithType> axis, as follows:
```vb
Dim root As XElement = XElement.Load("Irregular.xml")
Dim configParameters As IEnumerable(Of XElement) = _
root.<Customer>.<Config>.<ConfigParameter>
For Each cp As XElement In configParameters
Console.WriteLine(cp)
Next
```
This example produces the following output:
```xml
<ConfigParameter>FirstConfigParameter</ConfigParameter>
<ConfigParameter>SecondConfigParameter</ConfigParameter>
```
## <a name="example"></a>Example
The following example shows the same technique for XML that is in a namespace. For more information, see [Namespaces Overview (LINQ to XML) (Visual Basic)](namespaces-overview-linq-to-xml.md).
This example uses the following XML document: [Sample XML file: Multiple purchase orders in a namespace](../../../../visual-basic/programming-guide/concepts/linq/sample-xml-file-multiple-purchase-orders-in-a-namespace.md).
```vb
Imports <xmlns:aw="http://www.adventure-works.com">
Module Module1
Sub Main()
Dim purchaseOrders As XElement = XElement.Load("PurchaseOrdersInNamespace.xml")
Dim names As IEnumerable(Of XElement) = _
From el In purchaseOrders.<aw:PurchaseOrder>.<aw:Address>.<aw:Name> _
Select el
For Each e As XElement In names
Console.WriteLine(e)
Next
End Sub
End Module
```
This example produces the following output:
```xml
<aw:Name xmlns:aw="http://www.adventure-works.com">Ellen Adams</aw:Name>
<aw:Name xmlns:aw="http://www.adventure-works.com">Tai Yee</aw:Name>
<aw:Name xmlns:aw="http://www.adventure-works.com">Cristian Osorio</aw:Name>
<aw:Name xmlns:aw="http://www.adventure-works.com">Cristian Osorio</aw:Name>
<aw:Name xmlns:aw="http://www.adventure-works.com">Jessica Arnold</aw:Name>
<aw:Name xmlns:aw="http://www.adventure-works.com">Jessica Arnold</aw:Name>
```
## <a name="see-also"></a>See also
- [LINQ to XML axes (Visual Basic)](../../../../visual-basic/programming-guide/concepts/linq/linq-to-xml-axes.md)
| 45.04065 | 454 | 0.723105 | ces_Latn | 0.913023 |
9acdcde394bed538a16143cedbfb8f618b9e88db | 1,617 | md | Markdown | results/referenceaudioanalyzer/zero/JVC HA-SR44X/README.md | thanasisttgr/AutoEq | 863436c40c8d4abf6874cc301bc1c7d8e8c5303c | [
"MIT"
] | 1 | 2020-07-17T03:48:21.000Z | 2020-07-17T03:48:21.000Z | results/referenceaudioanalyzer/zero/JVC HA-SR44X/README.md | datascientist1976/AutoEq | dd8ea7ea5edb5a9087a001ceeb862326e7d23cb9 | [
"MIT"
] | null | null | null | results/referenceaudioanalyzer/zero/JVC HA-SR44X/README.md | datascientist1976/AutoEq | dd8ea7ea5edb5a9087a001ceeb862326e7d23cb9 | [
"MIT"
] | null | null | null | # JVC HA-SR44X
See [usage instructions](https://github.com/jaakkopasanen/AutoEq#usage) for more options and info.
### Parametric EQs
When using a parametric equalizer, apply a preamp of **-7.2dB** and build the filters manually
with these parameters. The first 5 filters can be used independently.
When using an independent subset of the filters, apply a preamp of **-7.5dB**.
| Type | Fc | Q | Gain |
|:--------|:---------|:-----|:---------|
| Peaking | 27 Hz | 0.83 | 10.7 dB |
| Peaking | 51 Hz | 0.57 | -7.1 dB |
| Peaking | 1438 Hz | 0.79 | -10.7 dB |
| Peaking | 3340 Hz | 0.54 | 11.6 dB |
| Peaking | 7174 Hz | 1.33 | -9.1 dB |
| Peaking | 2607 Hz | 4.99 | 1.6 dB |
| Peaking | 4527 Hz | 1.08 | -1.3 dB |
| Peaking | 4761 Hz | 5.87 | 3.4 dB |
| Peaking | 10296 Hz | 4.21 | 1.1 dB |
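Each row of the parametric table maps to a standard second-order (biquad) peaking filter. The sketch below is illustrative only — AutoEq doesn't prescribe an implementation — and follows the widely cited RBJ Audio EQ Cookbook formulas; the 48 kHz sample rate is an assumption:

```js
// Illustrative: RBJ Audio EQ Cookbook peaking-EQ biquad coefficients,
// normalized so the a0 coefficient is 1.
function peakingBiquad(fs, fc, q, gainDb) {
  const A = Math.pow(10, gainDb / 40)
  const w0 = (2 * Math.PI * fc) / fs
  const alpha = Math.sin(w0) / (2 * q)
  const a0 = 1 + alpha / A
  return {
    b0: (1 + alpha * A) / a0,
    b1: (-2 * Math.cos(w0)) / a0,
    b2: (1 - alpha * A) / a0,
    a1: (-2 * Math.cos(w0)) / a0,
    a2: (1 - alpha / A) / a0,
  }
}

// Magnitude response in dB at frequency f, evaluating H(z) on the unit circle.
function gainDbAt(c, fs, f) {
  const w = (2 * Math.PI * f) / fs
  const re1 = Math.cos(w), im1 = -Math.sin(w)         // z^-1
  const re2 = Math.cos(2 * w), im2 = -Math.sin(2 * w) // z^-2
  const numRe = c.b0 + c.b1 * re1 + c.b2 * re2
  const numIm = c.b1 * im1 + c.b2 * im2
  const denRe = 1 + c.a1 * re1 + c.a2 * re2
  const denIm = c.a1 * im1 + c.a2 * im2
  return 20 * Math.log10(Math.hypot(numRe, numIm) / Math.hypot(denRe, denIm))
}

// First row of the parametric table above: 27 Hz, Q 0.83, +10.7 dB.
const coeffs = peakingBiquad(48000, 27, 0.83, 10.7)
console.log(gainDbAt(coeffs, 48000, 27).toFixed(1)) // 10.7
```

A peaking filter built this way hits exactly the table's gain at its center frequency, which is a quick sanity check for any EQ implementation.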
### Fixed Band EQs
When using a fixed band (also called graphic) equalizer, apply a preamp of **-8.0dB**
(if available) and set the gains manually with these parameters.
| Type | Fc | Q | Gain |
|:--------|:---------|:-----|:--------|
| Peaking | 31 Hz | 1.41 | 6.6 dB |
| Peaking | 62 Hz | 1.41 | -5.0 dB |
| Peaking | 125 Hz | 1.41 | -1.3 dB |
| Peaking | 250 Hz | 1.41 | -1.0 dB |
| Peaking | 500 Hz | 1.41 | 0.5 dB |
| Peaking | 1000 Hz | 1.41 | -6.3 dB |
| Peaking | 2000 Hz | 1.41 | -0.6 dB |
| Peaking | 4000 Hz | 1.41 | 8.6 dB |
| Peaking | 8000 Hz | 1.41 | -5.4 dB |
| Peaking | 16000 Hz | 1.41 | 0.7 dB |
### Graphs
 | 41.461538 | 136 | 0.571429 | eng_Latn | 0.680004 |
9acdda54133518c920ae3543b8f7dae80866d2d2 | 8,310 | md | Markdown | docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/architecture/cloud-native/deploy-eshoponcontainers-azure.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Deploying eShopOnContainers to Azure
description: Deploying the eShopOnContainers application using Azure Kubernetes Service, Helm, and Dev Spaces.
ms.date: 06/30/2019
ms.openlocfilehash: 21033cc904dc595193c69f3452ce2522740f8ff6
ms.sourcegitcommit: 55f438d4d00a34b9aca9eedaac3f85590bb11565
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 09/23/2019
ms.locfileid: "71183273"
---
# <a name="deploying-eshoponcontainers-to-azure"></a>Deploying eShopOnContainers to Azure
[!INCLUDE [book-preview](../../../includes/book-preview.md)]
The logic supporting the eShopOnContainers application can be supported by Azure using a wide variety of services. The recommended approach is to use Kubernetes with Azure Kubernetes Service (AKS). This can be combined with Helm deployment to ensure an easily repeatable infrastructure configuration. Optionally, developers can leverage Azure Dev Spaces for Kubernetes as part of their development process. Another option is to host the app's functionality using Azure serverless features such as Azure Functions and Azure Logic Apps.
## <a name="azure-kubernetes-service"></a>Azure Kubernetes Service
If you want to host the eShopOnContainers application in your own AKS cluster, the first step is to create the cluster. You can do this using the Azure portal, which will walk you through the required steps, or you can use the Azure CLI, making sure to enable role-based access control (RBAC) and application routing. The eShopOnContainers documentation describes the steps for creating an AKS cluster. Once the cluster is created, you must enable access to the Kubernetes dashboard, at which point you can browse to the Kubernetes dashboard to manage the cluster.
Once the cluster has been created and configured, you can deploy the application to it using Helm and Tiller.
## <a name="deploying-to-azure-kubernetes-service-using-helm"></a>Deploying to Azure Kubernetes Service using Helm
Basic deployments to AKS may use custom CLI scripts or simple deployment files, but more complex applications should use a dependency management tool like Helm. Helm is maintained by the Cloud Native Computing Foundation and helps you define, install, and upgrade Kubernetes applications. Helm is composed of a command-line client, helm, which uses Helm charts, and an in-cluster component, Tiller. Helm charts use standard YAML-formatted files to describe a related set of Kubernetes resources, and they are typically versioned alongside the application they describe. Helm charts range from simple to complex, depending on the requirements of the installation they describe.
The Helm charts for eShopOnContainers are in the /k8s/helm folder. Figure 2-6 shows how the different components of the application are organized into the folder structure that Helm uses to define and manage the deployments.

eShopOnContainers**Figura 2-6**. Cartella Helm eShopOnContainers.
Each individual component is installed using a `helm install` command. These commands are easily scripted, and eShopOnContainers provides a "deploy all" script that loops through the different components and installs them using their respective Helm charts. The result is a repeatable, versioned-with-the-app-in-source-control process that anyone on the team can use to deploy to an AKS cluster with a one-line script command. Especially when combined with Azure Dev Spaces, this makes it easy for developers to diagnose and test their individual changes to their microservice-based cloud-native apps.
## <a name="azure-dev-spaces"></a>Azure Dev Spaces
Azure Dev Spaces helps individual developers host their own unique version of AKS clusters in Azure during development. This minimizes local machine requirements and allows team members to quickly see how their changes behave in a real AKS environment. Azure Dev Spaces offers a CLI that developers can use to manage dev spaces and to deploy to a specific child dev space as needed. Each child dev space is referenced using a unique URL subdomain, allowing side-by-side deployments of modified clusters so that individual developers can avoid conflicting with each other's work in progress. In Figure 2-7 you can see how developer Susie has deployed her own version of the Bikes microservice into her dev space. She can then test her changes using a custom URL starting with the name of her space (susie.s.dev.myapp.eus.azds.io).

eShopOnContainers**Figura 2-7**. Developer Susie distribuisce la propria versione del microservizio Bikes e ne esegue il test.
At the same time, developer John is customizing the Reservations microservice and needs to test his changes. He can deploy his changes to his own dev space without conflicting with Susie's changes, as shown in Figure 2-8. He can test his changes using his own URL, prefixed with the name of his space (john.s.dev.myapp.eus.azds.io).

eShopOnContainers**Figura 2-8**. Developer Giorgio distribuisce la propria versione del microservizio prenotazioni e la testa senza conflitti con altri sviluppatori.
With Azure Dev Spaces, teams can work directly with AKS while independently changing, deploying, and testing their changes. This approach reduces the need for separate dedicated hosted environments, since every developer effectively has their own AKS environment. Developers can work with Azure Dev Spaces using its CLI, or launch their application to Azure Dev Spaces directly from Visual Studio. [Learn more about how Azure Dev Spaces works and is configured.](https://docs.microsoft.com/azure/dev-spaces/how-dev-spaces-works)
## <a name="azure-functions-and-logic-apps-serverless"></a>Azure Functions and Logic Apps (Serverless)
The eShopOnContainers sample includes support for tracking online marketing campaigns. An Azure Function is used to pull the marketing campaign details for a given campaign ID. Rather than creating a complete ASP.NET Core application for this purpose, a single Azure Function endpoint is simpler and sufficient. Azure Functions have a much simpler build and deployment model than full ASP.NET Core applications, especially when configured to run in Kubernetes. Deploying the function is scripted using Azure Resource Manager (ARM) templates and the Azure CLI. This campaign details microservice isn't customer-facing and doesn't have the same requirements as the online store, making it a good candidate for Azure Functions. The function requires some configuration to work properly, such as database connection string data and image base URI settings. You configure Azure Functions in the Azure portal.
## <a name="references"></a>References
- [eShopOnContainers: Create Kubernetes cluster in AKS](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Deploy-to-Azure-Kubernetes-Service-(AKS)#create-kubernetes-cluster-in-aks)
- [eShopOnContainers: Azure Dev Spaces](https://github.com/dotnet-architecture/eShopOnContainers/wiki/Azure-Dev-Spaces)
- [Azure Dev Spaces](https://docs.microsoft.com/azure/dev-spaces/about)
>[!div class="step-by-step"]
>[Previous](map-eshoponcontainers-azure-services.md)
>[Next](centralized-configuration.md)
| 134.032258 | 1,146 | 0.82142 | ita_Latn | 0.999416 |
9ace1c46761f11d02d1e0bcbb2fdb0b90b934aca | 5,528 | md | Markdown | index.md | peabee/puppylinux-woof-CE.github.io | e4f57046fc7658106fd5f43bb510ea999a8cb163 | [
"CC0-1.0"
] | null | null | null | index.md | peabee/puppylinux-woof-CE.github.io | e4f57046fc7658106fd5f43bb510ea999a8cb163 | [
"CC0-1.0"
] | null | null | null | index.md | peabee/puppylinux-woof-CE.github.io | e4f57046fc7658106fd5f43bb510ea999a8cb163 | [
"CC0-1.0"
] | null | null | null | ---
layout: default
title: Puppy Linux Home
---
## About Puppy Linux
Puppy Linux is a unique family of Linux distributions meant for the home-user computers. It was originally created by
[Barry Kauler](http://barryk.org/news) in 2003.
### Puppy Linux advantage
1. Ready to use → all tools for common daily computing usage already included.
2. Ease of use → grandpa-friendly certified ™
3. Relatively small size → 200 MB or less.
4. Fast and versatile.
5. Customisable within minutes → remasters.
6. Different flavours → optimised to support older computers, newer computers.
7. Variety → hundreds of derivatives ("puplets"), one of which will surely meet your needs.
If one of these things interest you, read on.
### Yes, but what does it look like?
[{: .cr-image }](screenshots.html "Screenshot Page")
### First thing first
Puppy Linux is _not_ a single Linux distribution like Debian.
Puppy Linux is also _not_ a Linux distribution with multiple flavours,
like Ubuntu (with its variants Kubuntu, Xubuntu, etc.),
though Puppy also comes in flavours.
Puppy Linux is **a collection of multiple Linux distributions**, built on
the _same shared principles_, built _using the same set of tools_, built on top
of a _unique set of puppy specific applications and configurations_ and
generally speaking provide _consistent behaviours and features_, no
matter which flavours you choose.
There are generally three broad categories of Puppy Linux distributions:
* _official_ Puppy Linux distributions → maintained by Puppy Linux team,
usually targeted for general purpose, and generally built using
Puppy Linux system builder (called [_Woof-CE_][woof-ce]).
* _woof-built_ Puppy Linux distributions → developed to suit specific needs
and appearances, also targeted for general purpose, and built using
Puppy Linux system builder (called [_Woof-CE_][woof-ce]) with some additional
or modified packages.
* _unofficial_ derivatives (_"puplets"_) → are usually remasters
(or remasters of remasters), made and maintained by Puppy Linux enthusiasts,
usually targeted for specific purposes.
<p id="download"/><!--do not edit this line-->
### Why not try it? Download now! (Official distributions)
Get the ISO, burn it to a CD/DVD using your favorite CD/DVD burner,
or _flash_ it using _dd_ ([Windows version](http://www.chrysocome.net/dd))
to your USB flash drive, or visit our [download](download.html) page
for more comprehensive information.
|Compatibility \* | Bits | Latest Version | Download link |
|------------------|---------|-----------------------|----------------------------------------------------|
|Slackware 14.1 | 32-bit | Slacko Puppy 6.3.2 | [Main][sl32] - [Mirror][sl32m] - [Checksum][sl32c] |
|Slackware64 14.1 | 64-bit | Slacko64 Puppy 6.3.2 | [Main][sl64] - [Mirror][sl64m] - [Checksum][sl64c] |
|Ubuntu Tahr | 32-bit | Tahrpup 6.0.5 | [Main][ta32] - [Mirror][ta32m] - [Checksum][ta32c] |
|Ubuntu Tahr 64 | 64-bit | Tahrpup64 6.0.5 | [Main][ta64] - [Mirror][ta64m] - [Checksum][ta64c] |
{: .table .table-striped .table-bordered }
[sl32]: http://distro.ibiblio.org/puppylinux/puppy-slacko-6.3.2/32/slacko-6.3.2-uefi.iso
[sl32m]: http://ftp.nluug.nl/ibiblio/distributions/puppylinux/puppy-slacko-6.3.2/32/slacko-6.3.2-uefi.iso
[sl32c]: http://distro.ibiblio.org/puppylinux/puppy-slacko-6.3.2/32/slacko-6.3.2-uefi.iso.md5.txt
[sl64]: http://distro.ibiblio.org/puppylinux/puppy-slacko-6.3.2/64/slacko64-6.3.2-uefi.iso
[sl64m]: http://ftp.nluug.nl/ibiblio/distributions/puppylinux/puppy-slacko-6.3.2/64/slacko64-6.3.2-uefi.iso
[sl64c]: http://distro.ibiblio.org/puppylinux/puppy-slacko-6.3.2/64/slacko64-6.3.2-uefi.iso.md5.txt
[ta32]: http://distro.ibiblio.org/puppylinux/puppy-tahr/iso/tahrpup%20-6.0-CE/tahr-6.0.5_PAE.iso
[ta32m]: http://ftp.nluug.nl/ibiblio/distributions/puppylinux/puppy-tahr/iso/tahrpup%20-6.0-CE/tahr-6.0.5_PAE.iso
[ta32c]: http://distro.ibiblio.org/puppylinux/puppy-tahr/iso/tahrpup%20-6.0-CE/tahr-6.0.5_PAE.iso.md5.txt
[ta64]: http://distro.ibiblio.org/puppylinux/puppy-tahr/iso/tahrpup64-6.0.5/tahr64-6.0.5.iso
[ta64m]: http://ftp.nluug.nl/ibiblio/distributions/puppylinux/puppy-tahr/iso/tahrpup64-6.0.5/tahr64-6.0.5.iso
[ta64c]: http://distro.ibiblio.org/puppylinux/puppy-tahr/iso/tahrpup64-6.0.5/tahr64-6.0.5.iso.md5.txt
> \* Compatibility: A Puppylinux distribution can also be built and assembled using packages
> and components from another Linux distribution called in Puppy the _"binary compatible"_
> distribution. The choice of a binary compatible distribution determines the availability of
> additional packages, among other things.
### Questions?
It has been said that the best experience of Puppy Linux is not from
the software itself, but from the community that gathers around it.
Whatever you have in mind - praises, curses, questions, suggestions,
or just plain chit-chat, we welcome you to join us at
[Puppy Linux Forum](http://murga-linux.com/puppy).
### I need more info before deciding to try ...
1. [Frequently Asked Questions (FAQ)][faq]
2. [Puppy Linux history][history]
3. [Puppy Linux family tree][family-tree]
4. [Puppy Linux Team][team]
5. [Puppy Linux Build System (_Woof-CE_)][woof-ce]
6. [Screenshots!][screen]
[faq]: faq.html
[woof-ce]: woof-ce.html
[history]: history.html
[team]: team.html
[family-tree]: family-tree.html
[screen]: screenshots.html
| 47.655172 | 118 | 0.72178 | eng_Latn | 0.838992 |
9ace29e74165e0d16f619cac2e7b5059976cb784 | 29 | md | Markdown | README.md | alexandrupta2017/microserve | c80ac347f48e8f62d8710833feb171d05e52eae0 | [
"Apache-2.0"
] | null | null | null | README.md | alexandrupta2017/microserve | c80ac347f48e8f62d8710833feb171d05e52eae0 | [
"Apache-2.0"
] | null | null | null | README.md | alexandrupta2017/microserve | c80ac347f48e8f62d8710833feb171d05e52eae0 | [
"Apache-2.0"
] | null | null | null | # microserve
microserve repo
| 9.666667 | 15 | 0.827586 | eng_Latn | 0.594109 |
9ace5e0cd993094717767f251f6148c7ffaf4532 | 295 | md | Markdown | docs/PublicPriceHistoryResponseData.md | Investabit/investabit-ruby-sdk | 31945e94449cdc2abfaab7d3876310516bebdfcf | [
"MIT"
] | null | null | null | docs/PublicPriceHistoryResponseData.md | Investabit/investabit-ruby-sdk | 31945e94449cdc2abfaab7d3876310516bebdfcf | [
"MIT"
] | null | null | null | docs/PublicPriceHistoryResponseData.md | Investabit/investabit-ruby-sdk | 31945e94449cdc2abfaab7d3876310516bebdfcf | [
"MIT"
] | null | null | null | # SwaggerClient::PublicPriceHistoryResponseData
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**price_history** | [**Array<PublicPriceHistoryResponseDataPriceHistory>**](PublicPriceHistoryResponseDataPriceHistory.md) | |
| 32.777778 | 134 | 0.627119 | yue_Hant | 0.726394 |
9acf66e3d0517b53d4aa4f9cf729c53446cee19b | 151 | md | Markdown | README.md | CreativeMetalCat/CSharp-CityBuildingGame | aef0118b19144f78d85904af15e85d6a9df406b6 | [
"MIT"
] | null | null | null | README.md | CreativeMetalCat/CSharp-CityBuildingGame | aef0118b19144f78d85904af15e85d6a9df406b6 | [
"MIT"
] | 5 | 2018-04-21T08:59:05.000Z | 2018-04-24T09:28:26.000Z | README.md | CreativeMetalCat/CSharp-CityBuildingGame | aef0118b19144f78d85904af15e85d6a9df406b6 | [
"MIT"
] | null | null | null | # CSharp-CityBuildingGame
This isn't a playable game. It was created for future GDI+ creations. But if you really want to, you can try it. It will be developed further.
| 50.333333 | 124 | 0.781457 | eng_Latn | 0.999733 |
9acfcd17b97d023f5a29d3c4bd3e34fa3a03e232 | 1,365 | md | Markdown | src/guide/advanced-guides/music.md | JDaniel4562/documentation | 95f7135eab3ec0479c1f68777ffff16fb8760ed6 | [
"Apache-2.0"
] | null | null | null | src/guide/advanced-guides/music.md | JDaniel4562/documentation | 95f7135eab3ec0479c1f68777ffff16fb8760ed6 | [
"Apache-2.0"
] | null | null | null | src/guide/advanced-guides/music.md | JDaniel4562/documentation | 95f7135eab3ec0479c1f68777ffff16fb8760ed6 | [
"Apache-2.0"
] | null | null | null | ---
description: A list of functions for playing music with your bot using Aoi.JS.
---
# Music
{% hint style="warning" %}
These functions only work on Aoi v4.6.0 or lower.
From v5 onward, the [Lavalink system](lavalink.md) is used.
{% endhint %}
Pull music from YouTube, Spotify, or SoundCloud and play it on Discord with your bot coded in Aoi.JS!
Here are all the music functions you can use in your bot:
* [$playSong](../../functions/usdplaysong.md)
* [$playSpotify](../../functions/usdplayspotify.md)
* [$playSoundCloud](../../functions/usdplaysoundcloud.md)
* [$songInfo](../../functions/usdsonginfo.md)
* [$volume](../../functions/usdvolume.md)
* [$queue](../../functions/usdqueue.md)
* [$queueLength](../../functions/usdqueuelength.md)
* [$pauseSong](../../functions/usdpausesong.md)
* [$resumeSong](../../functions/usdresumesong.md)
* [$skipSong](../../functions/usdskipsong.md)
* [$skipTo](../../functions/usdskipto.md)
* [$seekTo](../../functions/usdseekto.md)
* [$loopQueue](../../functions/usdloopqueue.md)
* [$loopSong](../../functions/usdloopsong.md)
* [$stopSong](../../functions/usdstopsong.md)
* [$pruneMusic](../../functions/usdprunemusic.md)
* [$moveSong](../../functions/usdmovesong.md)
* [$songFilter](../../functions/usdsongfilter.md)
* [$clearSongQueue](../../functions/usdclearsongqueue.md)
| 37.916667 | 104 | 0.693773 | spa_Latn | 0.324236 |
9ad02c1964451502c5650ce3173b7a1a780470c9 | 190 | md | Markdown | README.md | vonabol1/FiM2020Rhizobiome | 26f1a345dbb7d78f805fb8c1affab7c67c21947d | [
"MIT"
] | null | null | null | README.md | vonabol1/FiM2020Rhizobiome | 26f1a345dbb7d78f805fb8c1affab7c67c21947d | [
"MIT"
] | null | null | null | README.md | vonabol1/FiM2020Rhizobiome | 26f1a345dbb7d78f805fb8c1affab7c67c21947d | [
"MIT"
] | null | null | null | # FiM2020Rhizobiome
Repository for FiM Submission "Community Consolidation Dictates Hydroponic Root Microbiome"
This repository includes original sequence files related to the publication.
| 38 | 91 | 0.857895 | eng_Latn | 0.964152 |
9ad03a6eeb732665c74dea64363ca0476017afd3 | 1,995 | md | Markdown | security-advisories/2015-03-05-0-c++-addl-cpu-amplification.md | joshuawarner32/capnproto | 09759e9f4ed2f341ad7e8802cd69bae7d60eac60 | [
"MIT"
] | null | null | null | security-advisories/2015-03-05-0-c++-addl-cpu-amplification.md | joshuawarner32/capnproto | 09759e9f4ed2f341ad7e8802cd69bae7d60eac60 | [
"MIT"
] | null | null | null | security-advisories/2015-03-05-0-c++-addl-cpu-amplification.md | joshuawarner32/capnproto | 09759e9f4ed2f341ad7e8802cd69bae7d60eac60 | [
"MIT"
] | null | null | null | Problem
=======
CPU usage amplification attack, similar to previous vulnerability
[2015-03-02-2][1].
Discovered by
=============
David Renshaw <david@sandstorm.io>
Announced
=========
2015-03-05
CVE
===
CVE-2015-2313
Impact
======
- Remotely cause a peer to execute a tight `for` loop counting from 0 to
2^29, possibly repeatedly, by sending it a small message. This could enable
a DoS attack by consuming CPU resources.
Fixed in
========
- git commit [80149744bdafa3ad4eedc83f8ab675e27baee868][0]
- release 0.5.1.2:
- Unix: https://capnproto.org/capnproto-c++-0.5.1.2.tar.gz
- Windows: https://capnproto.org/capnproto-c++-win32-0.5.1.2.zip
- release 0.4.1.1:
- Unix: https://capnproto.org/capnproto-c++-0.4.1.2.tar.gz
- release 0.6 (future)
[0]: https://github.com/sandstorm-io/capnproto/commit/80149744bdafa3ad4eedc83f8ab675e27baee868
Details
=======
Advisory [2015-03-02-2][1] described a bug allowing a remote attacker to
consume excessive CPU time or other resources using a specially-crafted message.
The present advisory is simply another case of the same bug which was initially
missed.
The new case occurs only if the application invokes the `totalSize()` method
on an object reader.
The new case is somewhat less severe, in that it only spins in a tight `for`
loop that doesn't call any application code. Only CPU time is possibly
consumed, not RAM or other resources. However, it is still possible to create
significant delays for the receiver with a specially-crafted message.
[1]: https://github.com/sandstorm-io/capnproto/blob/master/security-advisories/2015-03-02-2-all-cpu-amplification.md
Preventative measures
=====================
Our fuzz test actually covered this case, but we didn't notice the problem
because the loop actually completes in less than a second. We've added a new
test case which is more demanding, and will make sure that when we do extended
testing with American Fuzzy Lop, we treat unexpectedly long run times as
failures.
| 29.338235 | 116 | 0.742857 | eng_Latn | 0.983308 |
9ad0d5730e9e8556748a4906abbbca21a0122f44 | 7,296 | md | Markdown | Assignments/Prog280Midterm2014.md | charliecalvert/CloudNotes | a3e52f94ac8600047b8bc1a4c8f7ffc6077736b5 | [
"CC-BY-3.0"
] | 2 | 2019-10-08T15:35:02.000Z | 2019-10-24T12:57:29.000Z | Assignments/Prog280Midterm2014.md | charliecalvert/CloudNotes | a3e52f94ac8600047b8bc1a4c8f7ffc6077736b5 | [
"CC-BY-3.0"
] | 1 | 2022-02-28T01:40:42.000Z | 2022-02-28T01:40:42.000Z | Assignments/Prog280Midterm2014.md | charliecalvert/CloudNotes | a3e52f94ac8600047b8bc1a4c8f7ffc6077736b5 | [
"CC-BY-3.0"
] | null | null | null | ## Prog 280 Midterm 2014
This document describes the Prog 280 midterm. Please check it regularly
for updates.
## Create Markdown Documents
Create 5 markdown documents based on Shakespeare's sonnets:
Sonnet 1
From fairest creatures we desire increase,
That thereby beauty's rose might never die,
But as the riper should by time decease,
His tender heir might bear his memory:
But thou contracted to thine own bright eyes,
Feed'st thy light's flame with self-substantial fuel,
Making a famine where abundance lies,
Thy self thy foe, to thy sweet self too cruel:
Thou that art now the world's fresh ornament,
And only herald to the gaudy spring,
Within thine own bud buriest thy content,
And tender churl mak'st waste in niggarding:
Pity the world, or else this glutton be,
To eat the world's due, by the grave and thee.
Sonnet 2
When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a tatter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.
Sonnet 3
Look in thy glass and tell the face thou viewest
Now is the time that face should form another;
Whose fresh repair if now thou not renewest,
Thou dost beguile the world, unbless some mother.
For where is she so fair whose unear'd womb
Disdains the tillage of thy husbandry?
Or who is he so fond will be the tomb,
Of his self-love to stop posterity?
Thou art thy mother's glass and she in thee
Calls back the lovely April of her prime;
So thou through windows of thine age shalt see,
Despite of wrinkles this thy golden time.
But if thou live, remember'd not to be,
Die single and thine image dies with thee.
Sonnet 4
Unthrifty loveliness, why dost thou spend
Upon thy self thy beauty's legacy?
Nature's bequest gives nothing, but doth lend,
And being frank she lends to those are free:
Then, beauteous niggard, why dost thou abuse
The bounteous largess given thee to give?
Profitless usurer, why dost thou use
So great a sum of sums, yet canst not live?
For having traffic with thy self alone,
Thou of thy self thy sweet self dost deceive:
Then how when nature calls thee to be gone,
What acceptable audit canst thou leave?
Thy unused beauty must be tombed with thee,
Which, used, lives th' executor to be.
Sonnet 5
Those hours, that with gentle work did frame
The lovely gaze where every eye doth dwell,
Will play the tyrants to the very same
And that unfair which fairly doth excel;
For never-resting time leads summer on
To hideous winter, and confounds him there;
Sap checked with frost, and lusty leaves quite gone,
Beauty o'er-snowed and bareness every where:
Then were not summer's distillation left,
A liquid prisoner pent in walls of glass,
Beauty's effect with beauty were bereft,
Nor it, nor no remembrance what it was:
But flowers distill'd, though they with winter meet,
Leese but their show; their substance still lives sweet.
Find the Google Drive folder you shared during the Online Presence
assignment.
- Create a new folder in it called **Poems**.
- Save all five documents to that folder as **Sonnet01.md**, **Sonnet02.md** etc.
- Also save them to your DropBox folder.
## Create a BuildAll Script
Configure the **BuildAll** script that I give you to convert all five
files to HTML and copy them out to your **/var/www/bc** folder. Include
or create a **/var/www/bc/index.html** that will allow you to click on a
link and display each file: **Sonnet01.html**, **Sonnet02.html**, etc.
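As a rough sketch of what a BuildAll-style script does (the real script in **JsObjects/Python/Utils** wires in the course templates and a proper Markdown converter; the function below is only a crude stand-in for illustration):

```python
import html
from pathlib import Path

def build_all(names, source_dir, out_dir):
    """Crude stand-in for BuildAll.py: wrap each Markdown source in a
    minimal HTML page and write it to the web root. There is no real
    Markdown rendering here; the course script plugs in its own
    converter and template files."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for name in names:
        text = Path(source_dir, name + ".md").read_text(encoding="utf-8")
        page = "<html><body><pre>{}</pre></body></html>".format(html.escape(text))
        (out / (name + ".html")).write_text(page, encoding="utf-8")
```

Called with the five sonnet names and `/var/www/bc` as the output directory, this mirrors the convert-and-copy step described above.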
The sample BuildAll script is found in:
JsObjects/Python/Utils
It is run with Python 3.3:
python3.3 BuildAll.py
You should also be able to run it like this:
./BuildAll.py
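The `./BuildAll.py` form only works once the file is executable and its first line is a shebang. A quick demonstration with a stand-in script (the file created here is illustrative; your real script comes from the repo):

```shell
# Create a tiny stand-in for BuildAll.py.
printf '#!/usr/bin/env python3\nprint("build ok")\n' > BuildAll.py

chmod +x BuildAll.py   # mark it executable so ./BuildAll.py works
out=$(./BuildAll.py)
echo "$out"
```

If `./BuildAll.py` fails with "Permission denied", re-run the `chmod` step; if it cannot find an interpreter, check the shebang line at the top of the file.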
## Display it on S3
Once you have everything working, copy the contents of your **index.html**
files and your five sonnets to S3.
## Turn It In
In your Google Drive folder, in a directory called Week07-Midterm, turn
in screen shots of:
- Your BuildAll script running in Linux.
- You editing your files in **StackEdit**. Be sure to show that
your document is linked to both Google Drive and DropBox.
When you submit the document, include a link to your files running
on S3.
Share a document in your existing Evernote **2014-Prog280-LastName**
folder that contains a link to your code running on S3.
## Hints
Here is the syntax for links in markdown:
- [DropBox](DropBox.html)
- [MongoMark](MongoMark.html)
Remember to put it in a file called **index.md**.
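For example, a minimal **index.md** for this assignment could look like this (link text is up to you):

```markdown
## My Sonnets

- [Sonnet 1](Sonnet01.html)
- [Sonnet 2](Sonnet02.html)
- [Sonnet 3](Sonnet03.html)
- [Sonnet 4](Sonnet04.html)
- [Sonnet 5](Sonnet05.html)
```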
Here is what my bucket on AWS looks like:

Notice that I have three HTML files. Yours will be called Sonnet01.html,
Sonnet02.html, etc. Notice also the three folders that I copied from
our bc folder in the Linux box. Notice also the URL that I created for
my S3 box:
bucket02.elvenware.com.
You should create one like that. Use all lower case.
## Bug Fix in BuildAll.py
I realized, rather belatedly, that there was a bug in the **start.html**
file from this directory:
/JsObjects/Utilities/Templates
This meant that some of the CSS for our files in **/var/www/bc** did
not resolve correctly. Your files would still show up, but they might
look a bit odd, with an image missing or some of the text not in the right
place.
I have created two new files called **StartLinux.html** and
**NavLinux.html** and put them in the **Templates** folder. If you pull the
latest from Git you will get them.
I also updated line 23 in **BuildAll.py** to make sure they get linked in:
markdown.runner(files, ['StartLinux.html', 'NavLinux.html', 'footer.html', 'end.html']);
If you pull down the latest from **JsObjects** you will also get
that change. If your HTML files do not look exactly right because
of the broken CSS, I will understand, but if you can make this
change all should be well with your code.
The only thing you need to do is pull down the latest from Git:
cd /Git/JsObjects
git pull
Then rerun the updated **BuildAll.py**. You want to make sure to
preserve your list of files to transfer to the /var/www/bc directory:
def prog280(markdown):
    files = ["DropBox", "MongoMark", "index"]  # Keep this list in sync with the files that you want to copy, i.e. (Sonnet01, Sonnet02, etc.)
    makeItSo(markdown, "", files)
The mistake in my **start.html** file caused three lines to have
the wrong path in them. They should read:
<link rel="shortcut icon" href="Images/favicon.png">
<script src="Scripts/elvenware.js" type="text/javascript"></script>
<link href="Styles/BootstrapIndex.css" rel="stylesheet" type="text/css" />
If, instead, you see the word **charlie** in those bits of text from
the top of your HTML file, then that is the bug, and you need to pull
the latest code and rerun **BuildAll.py**, being sure not to lose the
list of files that you want it to copy.
| 34.415094 | 137 | 0.753015 | eng_Latn | 0.998666 |
9ad18fa69e6cae36c03ea90783f29fbfbc5920f1 | 8,932 | md | Markdown | articles/cosmos-db/tutorial-develop-mongodb-nodejs-part6.md | changeworld/azure-docs.pt-pt | 8a75db5eb6af88cd49f1c39099ef64ad27e8180d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cosmos-db/tutorial-develop-mongodb-nodejs-part6.md | changeworld/azure-docs.pt-pt | 8a75db5eb6af88cd49f1c39099ef64ad27e8180d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cosmos-db/tutorial-develop-mongodb-nodejs-part6.md | changeworld/azure-docs.pt-pt | 8a75db5eb6af88cd49f1c39099ef64ad27e8180d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Adicione funções CRUD a uma app Angular com API do Azure Cosmos DB para MongoDB
description: Parte 6 da série do tutorial sobre como criar uma aplicação MongoDB com Angular e Node no Azure Cosmos DB mediante a utilização das mesmas APIs que são utilizadas para MongoDB.
author: johnpapa
ms.service: cosmos-db
ms.subservice: cosmosdb-mongo
ms.devlang: nodejs
ms.topic: tutorial
ms.date: 12/26/2018
ms.author: jopapa
ms.custom: seodec18
ms.reviewer: sngun
ms.openlocfilehash: 0c39ffe40a490ee23ac65f892c46fba2578bce74
ms.sourcegitcommit: 0947111b263015136bca0e6ec5a8c570b3f700ff
ms.translationtype: MT
ms.contentlocale: pt-PT
ms.lasthandoff: 03/24/2020
ms.locfileid: "75441095"
---
# <a name="create-an-angular-app-with-azure-cosmos-dbs-api-for-mongodb---add-crud-functions-to-the-app"></a>Criar uma app Angular com a API do Azure Cosmos DB para mongoDB - Adicionar funções CRUD à app
Este tutorial em várias partes demonstra como criar uma nova app escrita no Node.js com Express e Angular e, em seguida, conectá-la à sua [conta Cosmos configurada com a API da Cosmos DB para MongoDB](mongodb-introduction.md). A Parte 6 do tutorial é a continuação da [Parte 5](tutorial-develop-mongodb-nodejs-part5.md) e abrange as seguintes tarefas:
> [!div class="checklist"]
> * Criar as funções Post, Put e Delete para o serviço hero
> * Executar a aplicação
> [!VIDEO https://www.youtube.com/embed/Y5mdAlFGZjc]
## <a name="prerequisites"></a>Pré-requisitos
Antes de iniciar esta parte do tutorial, certifique-se de que concluiu os passos na [Parte 5](tutorial-develop-mongodb-nodejs-part5.md) do tutorial.
> [!TIP]
> Este tutorial orienta-o ao longo dos passos para criar a aplicação passo a passo. Se quiser transferir o projeto concluído, pode obter a aplicação terminada a partir do [repositório angular-cosmosdb](https://github.com/Azure-Samples/angular-cosmosdb) no GitHub.
## <a name="add-a-post-function-to-the-hero-service"></a>Adicionar uma função Post ao serviço hero
1. No Visual Studio Code, abra **routes.js** e **hero.service.js** lado a lado ao premir o botão **Dividir Editor**.
Veja se a linha 7 de routes.js chama a função `getHeroes` na linha 5 de **hero.service.js**. Temos de criar este emparelhamento para as funções post, put e delete.

Vamos começar por programar o serviço hero.
2. Copie o código seguinte para **hero.service.js** a seguir à função `getHeroes` e antes de `module.exports`. Este código:
* Utiliza o modelo de hero para publicar um hero novo.
* Verifica as respostas para ver se existe um erro e devolve o valor de estado de 500.
```javascript
function postHero(req, res) {
  const originalHero = { uid: req.body.uid, name: req.body.name, saying: req.body.saying };
  const hero = new Hero(originalHero);
  hero.save(error => {
    if (checkServerError(res, error)) return;
    res.status(201).json(hero);
    console.log('Hero created successfully!');
  });
}

function checkServerError(res, error) {
  if (error) {
    res.status(500).send(error);
    return error;
  }
}
```
3. Em **hero.service.js**, atualize o `module.exports` para incluir a função `postHero` nova.
```javascript
module.exports = {
  getHeroes,
  postHero
};
```
4. Em **routes.js**, adicione um router para a função `post` a seguir ao router `get`. Este router publica um hero de cada vez. Estruturar o ficheiro do router desta forma mostra claramente todos os pontos finais da API disponíveis e deixa o trabalho real a cargo do ficheiro **hero.service.js**.
```javascript
router.post('/hero', (req, res) => {
  heroService.postHero(req, res);
});
```
5. Execute a aplicação para confirmar que está tudo a funcionar. No Visual Studio Code, guarde todas as alterações, selecione o botão **Depurar**, no lado esquerdo, selecione o botão **Iniciar Depuração**.
6. Agora, regresse ao browser e prima F12, na maioria dos computadores, para abrir o separador Ferramentas do programador e Rede. Navegue [http://localhost:3000](http://localhost:3000) para ver as chamadas feitas através da rede.

7. Selecione o botão **Adicionar Hero Novo** para adicionar um hero novo. Introduza o ID “999”, o nome “Pedro” e a indicação “Olá” e, em seguida, selecione **Guardar**. Deverá ver, no separador Rede, que enviou um pedido POST para um hero novo.

Agora, vamos voltar atrás e adicionar as funções Put e Delete à aplicação.
## <a name="add-the-put-and-delete-functions"></a>Adicionar as funções Put e Delete
1. Em **routes.js**, adicione os routers `put` e `delete` a seguir ao router post.
```javascript
router.put('/hero/:uid', (req, res) => {
  heroService.putHero(req, res);
});

router.delete('/hero/:uid', (req, res) => {
  heroService.deleteHero(req, res);
});
```
2. Copie o código seguinte para **hero.service.js** a seguir à função `checkServerError`. Este código:
* Cria as funções `put` e `delete`
* Verifica se o hero foi encontrado
* Procede ao processamento de erros
```javascript
function putHero(req, res) {
  const originalHero = {
    uid: parseInt(req.params.uid, 10),
    name: req.body.name,
    saying: req.body.saying
  };
  Hero.findOne({ uid: originalHero.uid }, (error, hero) => {
    if (checkServerError(res, error)) return;
    if (!checkFound(res, hero)) return;
    hero.name = originalHero.name;
    hero.saying = originalHero.saying;
    hero.save(error => {
      if (checkServerError(res, error)) return;
      res.status(200).json(hero);
      console.log('Hero updated successfully!');
    });
  });
}

function deleteHero(req, res) {
  const uid = parseInt(req.params.uid, 10);
  Hero.findOneAndRemove({ uid: uid })
    .then(hero => {
      if (!checkFound(res, hero)) return;
      res.status(200).json(hero);
      console.log('Hero deleted successfully!');
    })
    .catch(error => {
      if (checkServerError(res, error)) return;
    });
}

function checkFound(res, hero) {
  if (!hero) {
    res.status(404).send('Hero not found.');
    return;
  }
  return hero;
}
```
3. Em **hero.service.js**, exporte os módulos novos:
```javascript
module.exports = {
  getHeroes,
  postHero,
  putHero,
  deleteHero
};
```
4. Agora que atualizámos o código, selecione o botão **Reiniciar** no Visual Studio Code.
5. Atualize a página do browser e selecione o botão **Adicionar novo hero**. Adicione um hero novo com o ID "9", o nome "Starlord" e a indicação "Hi". Selecione o botão **Guardar** para guardar o hero novo.
6. Agora, selecione o hero **Starlord**, mude a indicação de “Olá” para “Tchau” e, em seguida, selecione o botão **Guardar**.
Agora pode selecionar o ID no separador Rede para mostrar o payload. Pode ver, no payload, que a indicação está agora definida como “Bye”.

Também pode eliminar um dos heroes na IU e ver quanto tempo é que a operação delete demora a ser concluída. Experimente ao selecionar o botão “Eliminar” no hero com o nome “Pedro".

Se atualizar a página, o separador Rede mostra o tempo que demora a obter os heroes. Embora estes tempos sejam rápidos, muito está dependente de onde os seus dados estão localizados no mundo e da sua capacidade de georreplicá-los numa zona perto dos seus utilizadores. Pode encontrar mais informações sobre a georreplicação no tutorial seguinte, que vai ser lançado em breve.
## <a name="next-steps"></a>Passos seguintes
Nesta parte do tutorial, fez o seguinte:
> [!div class="checklist"]
> * Adicionou as funções Post, Put e Delete à aplicação
Regresse aqui para ver vídeos adicionais nesta série de tutoriais.
| 45.340102 | 431 | 0.707904 | por_Latn | 0.97843 |
9ad24a39f95bed30b79aa11cfdfd5a79cba3591a | 1,696 | md | Markdown | v1.0.0/views/processors.md | rauanmayemir/docs | e58e146390758b4005226eab53f81ff04a0b65b5 | [
"MIT"
] | null | null | null | v1.0.0/views/processors.md | rauanmayemir/docs | e58e146390758b4005226eab53f81ff04a0b65b5 | [
"MIT"
] | null | null | null | v1.0.0/views/processors.md | rauanmayemir/docs | e58e146390758b4005226eab53f81ff04a0b65b5 | [
"MIT"
] | null | null | null | # View Processors
Both Twig and Stempler engines include support for `Spiral\Views\ProcessorInterface`, which works much like HTTP middleware and modifies the raw view source before or after it is pushed through the render engine.
## ProcessorInterface
```php
interface ProcessorInterface
{
    /**
     * @param EnvironmentInterface $environment
     * @param ViewSource $view
     * @param string $code
     *
     * @return string
     */
    public function modify(
        EnvironmentInterface $environment,
        ViewSource $view,
        string $code
    ): string;
}
```
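As an illustration, a custom processor implementing this interface might look like the following (the class and its behavior are invented for the example; only the interface itself comes from Spiral):

```php
class CommentStripperProcessor implements ProcessorInterface
{
    /**
     * Strip HTML comments before the source reaches the render engine.
     */
    public function modify(
        EnvironmentInterface $environment,
        ViewSource $view,
        string $code
    ): string {
        return preg_replace('/<!--.*?-->/s', '', $code);
    }
}
```

Because processors run before or after compilation, a processor like this only pays its cost once per cache rebuild, not on every render.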
## Default Processors
Descriptions of the existing view processors.
### TranslateProcessor
Replaces all `[[strings]]` with locale specific translation.
> Note that by default the view environment depends on the current locale value; switching the language will
cause a new cache location and view recompilation.
### EnvironmentProcessor
Replaces all `@{constructs|default}` constructions with the corresponding environment dependency value, for example:
View config:
```php
'environment' => [
    'language' => ['translator', 'getLocale'],
    'basePath' => ['http', 'basePath'],
    /*{{environment}}*/
],
```
View:
```html
Current language @{language}
```
### ExpressionsProcessors
Creates a set of macros to handle PHP extraction for the toolkit and the EvaluateProcessor.
> This is an internal plugin.
### EvaluateProcessor
Provides the ability to execute PHP blocks with a "#compile" comment in compilation mode (only once).
```php
This view is compiled at <?= date('c') #compile ?>
```
> Blocks marked as #compile do not have access to view data.
### PrettifyProcessor
Replaces extra blank lines in the view and drops empty HTML attributes. | 26.5 | 206 | 0.709906 | eng_Latn | 0.924788 |
9ad3d3ec345ce959c9abe1f4b2d9c5480146fa8e | 9,897 | md | Markdown | _posts/2019-01-16-offsec-certs.md | capt-meelo/capt-meelo.github.io | bdc65dc7d121be72e9ea9eef68ba554ae96b0b43 | [
"MIT"
] | 5 | 2018-08-12T10:42:18.000Z | 2022-02-01T11:13:34.000Z | _posts/2019-01-16-offsec-certs.md | capt-meelo/capt-meelo.github.io | bdc65dc7d121be72e9ea9eef68ba554ae96b0b43 | [
"MIT"
] | null | null | null | _posts/2019-01-16-offsec-certs.md | capt-meelo/capt-meelo.github.io | bdc65dc7d121be72e9ea9eef68ba554ae96b0b43 | [
"MIT"
] | 5 | 2018-07-20T16:41:42.000Z | 2021-08-13T07:32:21.000Z | ---
layout: post
title: "OffSec Certs - Are They Still Worth the Money?"
date: 2019-01-16
categories: pentest
---
## Introduction

Offensive Security certifications are very popular and are sought after by people who are interested in the offensive side of information security. Even now, people are still willing to spend their money to take the courses and pass the certifications. However, several companies out there are establishing their own hacking courses/certifications/labs, and they're starting to become popular within the penetration testing community. With the rise of these alternatives, the question is "**Are Offensive Security courses/certifications still worth the price?**" Or is it wiser to spend money elsewhere?
I think it's time to write my thoughts about them now that I have completed OSCE, OSCP, and OSWP certifications.
>_**Take note that everything written here is based on my own opinion and views. I do not have any intention to discourage people from taking Offensive Security's courses and certifications. No hate or bitterness against Offensive Security.**_
## OSCP

Let's start with the most popular certification of Offensive Security - OSCP. This is the most sought-after certification by people who want to get into penetration testing. Nowadays, everyone wants this certification on their CV because of the higher chance of acceptance in the hiring process. Almost every job posting for a pentest position requires OSCP certification. The course material is great and it teaches the skills, mindset, and methodology needed to get students started with their hacking journey. However, the material is limited. Based on my experience, I was not able to utilize what I learned from this course when I started doing internal network and active directory pentesting. While the lab teaches students how to pivot from one machine/network to another, the lab still doesn't mimic a real-world corporate environment. The course also does not cover Man-in-the-Middle attacks, which are the most common attacks used by pentesters when doing internal network testing.
In my opinion, if you're the kind of person who's after the knowledge and skills, and don't mind going without the certification, it's better to check [Hack The Box (HTB)][htb]. You'll find that the machines in HTB are similar to the PWK lab. Another great thing is that HTB is way cheaper than the PWK lab. Also, if you want to practice your active directory pentesting skills, HTB has the Pro labs _Offshore_ and _RastaLabs_. **If students could only pay for the exam (excluding the course/lab)**, then they could save some money by doing HTB instead of the PWK lab as preparation for the exam.
You can also check [Pentester Academy's Windows Red Team Lab][redteam] which focuses on red teaming. I haven't signed up to this course yet, but I heard a lot of good things about it. It's kind of pricey though compared to HTB's Pro labs.
## OSCE

OSCE is my latest certification. The exploit development section of the CTP course focuses only on the Windows environment. Some might think it's one of the weaknesses of this course, but for me, it's not. I don't mind if exploit development on the Linux environment isn't covered. My biggest concern is the lack of value of the course material. We know that Offensive Security always wants their students to go the extra mile and to "try harder" - meaning they expect us to research related materials about a topic, and not just rely on what is presented in the course material. I get their motives and even support the need to go the extra mile. My problem is that the information from the course is lacking, and there is no deep explanation of some of the topics. Since I paid for the course material, I expected to learn a lot from it. Unfortunately, that's not the case. **I learned and gained more knowledge from [Corelan's][corelan] and [FuzzySecurity's][fuzzysec] tutorials, and from googling.**
For me, the lab is a waste of money because **you don't need lab access to practice what's being taught in the course.** If you refer to the [syllabus][osce-syllabus], you can easily set up your own lab environment by spinning up some VMs and installing vulnerable software. This is what I did - before signing up for the course, I practiced in my own lab by following the syllabus provided by Offensive Security. I highly recommend doing this, especially for those who have a slow network connection like mine. With all this in mind, I think the CTP course is not worth taking. **If there's an option to pay only for the exam (excluding the course), I would definitely do it.**
The exam is a different scenario. I find it gruelling and challenging, yet at the same time enjoyable. If you’re like me who’s not very experienced in exploit development, you’ll learn new things while doing the exam. I can say that taking the exam is worth the money, time, and effort.
## OSWP

In all honesty, I took OSWP for the sake of having it on my CV. As we all know, the contents of this course are very outdated. While the course teaches you the knowledge you need, the value you're getting is minimal. I recommend you just spend your money on other courses like [Pentester Academy's Wi-Fi Security and Pentesting][wifi], as you'll learn more from it; Vivek really spends his time and effort explaining the topics in depth.
Another issue I have with this certification is that the name is misleading. OSWP stands for Offensive Security **Wireless** Professional, and yet the course only teaches you about WiFi. It would have been better if the course included Bluetooth, Zigbee, RFID, Smart Cards, NFC, SDR, and other wireless technologies.
If you have the money and want to learn more about wireless hacking (aside from WiFi), I think the [SANS SEC617][sec617] course is worth taking. If you don't want to spend on expensive courses, you can sign up for Pentester Academy and take their [Wi-Fi Security and Pentesting][wifi] course. If you are cheap and don't want to spend at all, just watch some videos on YouTube and read blogs. Several tutorials on the web are out there waiting for you to read and watch.
Among Offensive Security’s courses, **I think this is the most disappointing and not worth-taking**.
## How About OSWE and OSEE?
I haven’t taken these courses/certifications since they’re not yet available online. However, there’s a rumor on [reddit][oswe] that the AWAE course will become online. I also heard that the course is now being updated. Since I haven’t taken the course yet, I don't have much to say about it. I’ll definitely take this course for the sake of having it on my CV. If you can’t wait for the online version of AWAE course, and don’t want to spend thousands of dollars, I suggest you take a look at [PentesterLab Pro][pentestlab].
For AWE/OSEE, I don't have much to say either, since I haven't taken the course/certification yet. Regarding the course's online availability, I read on [Jollyfrog's blog][osee] that it will be online by early 2020. However, some said that the online version might not happen since the course is being updated yearly. I also heard that this is the hardest exploit development course. If you're looking for a prep course, I think [Ptrace Security's Advanced Software Exploitation][ptrace] is good based on its syllabus. I haven't taken this course yet either, so I can't review it.
Again, these are just my opinions since I haven't taken either the AWAE or the AWE course. But I'll definitely take them to complete all of Offensive Security's certifications. Gotta catch 'em all!
## Conclusion
If you love challenges and really need the certifications on your CV, go ahead and take the Offensive Security courses. For me, I'm still going to take Offensive Security certifications because they don't expire and are still valuable on my CV. However, if you are the type of person who always wants the best value for their money, I suggest you consider other available courses.
Another thing: people should always choose **knowledge and skills over certification**. I know a lot of people who are not certified but are very skilled and talented in the infosec field, thus making them more valuable than those who are certified. It's not just about the certifications that you can add to your CV and hang on your wall, but it's more about the skills and the knowledge that you can gain. **Take the course for the sake of knowledge and skills, not just for the sake of being certified.**
If you're like me who does not earn much and who is frugal, keep on searching the web for a particular topic that you need and want to learn. There are a lot of people/companies who freely provide tutorials, writeups, and courses. Good examples of these are [Open Security Training][open] and [Sam Bowne][samclass]. Remember, **Google is your friend so be resourceful**.
[htb]: https://www.hackthebox.eu/
[redteam]: https://www.pentesteracademy.com/redteamlab
[corelan]: https://www.corelan.be/index.php/articles/
[fuzzysec]: http://www.fuzzysecurity.com/tutorials.html
[osce-syllabus]: https://www.offensive-security.com/documentation/cracking-the-perimeter-syllabus.pdf
[wifi]: https://www.pentesteracademy.com/course?id=9
[sec617]: https://www.sans.org/course/wireless-penetration-testing-ethical-hacking
[oswe]: https://old.reddit.com/r/netsecstudents/comments/a3s5ag/offsec_is_making_the_awae_course_online_about_time/
[pentestlab]: https://pentesterlab.com/
[osee]: https://www.jollyfrogs.com/osee-awestralia-2018-preparations/
[ptrace]: https://www.psec-courses.com/courses/advanced-software-exploitation
[open]: http://opensecuritytraining.info/Training.html
[samclass]: https://samsclass.info/
| 100.989796 | 1,003 | 0.783975 | eng_Latn | 0.999266 |
9ad45ca6a9ef73b5d846c0a1a2267cad56aff726 | 231 | md | Markdown | README.md | m3sserschmitt/algorithms | 396d85b5a1d381c88b5dee36d2a797a6b8876d2a | [
"MIT"
] | null | null | null | README.md | m3sserschmitt/algorithms | 396d85b5a1d381c88b5dee36d2a797a6b8876d2a | [
"MIT"
] | null | null | null | README.md | m3sserschmitt/algorithms | 396d85b5a1d381c88b5dee36d2a797a6b8876d2a | [
"MIT"
] | null | null | null | This repo contains basic implementations of some data structures and sorting algorithms:
Data structures:
* lists
* queues
* stacks
* binary trees
* red-black binary trees
___
Algorithms:
* insertion sort
* merge sort
* heap sort
| 15.4 | 87 | 0.774892 | eng_Latn | 0.823305 |
9ad486c4533985eae6d689c0dc1772cf5293f382 | 1,495 | md | Markdown | README.md | Jacophobia/GameNight | be16df0d505abf2196af640721b284040cb44c07 | [
"MIT"
] | null | null | null | README.md | Jacophobia/GameNight | be16df0d505abf2196af640721b284040cb44c07 | [
"MIT"
] | null | null | null | README.md | Jacophobia/GameNight | be16df0d505abf2196af640721b284040cb44c07 | [
"MIT"
] | null | null | null | # Overview
I really wanted to get some experience working in the front-end, so I decided that I would practice with React Native using JavaScript. I led a small team of four people to work on this project with me. Josh Bushman was in charge of the color palette, images, and overall visual finalization. Christian Viazzo was in charge of the app's visual structure and its implementation. Jeremy Busch worked on our Firebase connections, and I worked on the background functionality; I also helped with each of the aforementioned sections of our code.
Our app, Game Night, is designed to help users find fun local events where they can do what they want with like-minded people. It has built-in chat functions, authentication, and a live event feed.
[Software Demo Video](https://youtu.be/Rh-nIb0vYxg)
[Code Walk Through](https://youtu.be/mZvEerJSBjw)
# Development Environment
Our app uses React Native, a JavaScript framework for building native mobile apps.
# Useful Websites
* [Net Ninja JavaScript Tutorial](https://www.youtube.com/watch?v=iWOYAxlnaww&list=PL4cUxeGkcC9haFPT7J25Q9GRB_ZkFrQAc)
* [Net Ninja React Native Tutorial](https://www.youtube.com/watch?v=ur6I5m2nTvk&list=PL4cUxeGkcC9ixPU-QkScoRBVxtPPzVjrQ)
# Future Work
* Improve the firebase connections so they are a bit more consistent
* Add better live updates to the chat
* Allow the user to follow events
* Implement notifications
* Add event categories so the user can turn on and off notifications for specific categories. | 59.8 | 534 | 0.797993 | eng_Latn | 0.993765 |
9ad499de957a8b7f77b877523a7cf2b38d301394 | 3,488 | md | Markdown | docs/error-messages/compiler-warnings/c5045.md | Dinja1403/cpp-docs | 50161f2a9638424aa528253e95ef9a94ef028678 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-02-19T06:12:36.000Z | 2021-03-27T20:46:59.000Z | docs/error-messages/compiler-warnings/c5045.md | Dinja1403/cpp-docs | 50161f2a9638424aa528253e95ef9a94ef028678 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/error-messages/compiler-warnings/c5045.md | Dinja1403/cpp-docs | 50161f2a9638424aa528253e95ef9a94ef028678 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-04-08T13:58:30.000Z | 2020-04-08T13:58:30.000Z | ---
title: "Compiler Warning C5045"
ms.date: "06/21/2018"
f1_keywords: ["C5045"]
helpviewer_keywords: ["C5045"]
---
# Compiler Warning C5045
> Compiler will insert Spectre mitigation for memory load if /Qspectre switch specified
## Remarks
Warning C5045 lets you see what patterns in your code cause a Spectre mitigation, such as an LFENCE, to be inserted when the [/Qspectre](../../build/reference/qspectre.md) compiler option is specified. This lets you identify which code files are affected by the security issue. This warning is purely informational: the mitigation is not inserted until you recompile using the **/Qspectre** switch. The functionality of C5045 is independent of the **/Qspectre** switch, so you can use them both in the same compilation.
This warning is new in Visual Studio 2017 version 15.7, and is off by default. Use [/Wall](../../build/reference/compiler-option-warning-level.md) to enable all warnings that are off by default, or __/w__*n*__5045__ to enable C5045 as a level *n* warning. In the IDE, the default warning level is **/W3** and this warning can be enabled in the project **Property Pages** dialog. Open **Configuration Properties** > **C/C++** > **Command Line** and in the **Additional options** box, add */w35045*, then choose **OK**. For more information, see [Compiler warnings that are off by default](../../preprocessor/compiler-warnings-that-are-off-by-default.md). For information on how to disable warnings by compiler version, see [Compiler warnings by compiler version](compiler-warnings-by-compiler-version.md).
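If you only want the warning in specific files, you can also turn it on in source rather than on the command line. A sketch (the `lookup` function is an invented example of the two-load pattern the warning targets, not code from this article):

```cpp
// MSVC-specific: enable C5045 for this translation unit without /Wall.
#ifdef _MSC_VER
#pragma warning(default : 5045)
#endif

int table[256];

int lookup(int *data, int i, int bound) {
    int v = 0;
    if (i < bound) {         // speculated bounds check...
        v = table[data[i]];  // ...feeding a dependent load: C5045 reports here
    }
    return v;
}
```

The `#ifdef` guard keeps the pragma from generating unknown-pragma diagnostics on other compilers.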
## Example
The following example raises warning C5045 when compiled by Visual Studio 2017 version 15.7 with the **/Wall** or the **/w35045** and **/W3** options:
```cpp
// C5045.cpp
// Compile with: cl /EHsc /W3 /w35045 C5045.cpp

int G, G1, G2;

__forceinline
int * bar(int **p, int i)
{
    return p[i];
}

__forceinline
void bar1(int ** p, int i)
{
    if (i < G1) {
        auto x = p[i]; // C5045: mitigation here
        G = *x;
    }
}

__forceinline
void foo(int * p)
{
    G = *p;
}

void baz(int ** p, int i)
{
    if (i < G1) {
        foo(bar(p, i + G2));
    }
    bar1(p, i);
}

int main() { }
```
The compiler output when the warning is enabled looks something like this:
```Output
C:\Users\username\source\repos\C5045>cl /W3 /w35045 C5045.cpp
Microsoft (R) C/C++ Optimizing Compiler Version 19.14.26431 for x86
Copyright (C) Microsoft Corporation. All rights reserved.
C5045.cpp
c:\users\username\source\repos\c5045\c5045.cpp(16) : warning C5045: Compiler will insert Spectre mitigation for memory load if /Qspectre switch specified
c:\users\username\source\repos\c5045\c5045.cpp(15) : note: index 'i' range checked by comparison on this line
c:\users\username\source\repos\c5045\c5045.cpp(17) : note: feeds memory load on this line
Microsoft (R) Incremental Linker Version 14.14.26431.0
Copyright (C) Microsoft Corporation. All rights reserved.
/out:C5045.exe
C5045.obj
```
The warning message shows that a mitigation would have been inserted on line 16. It also notes that the mitigation is needed because the index `i`, range-checked on line 15, feeds the memory load on line 17. The speculation spans `bar` and `bar1`, but the mitigation is effective when placed at line 16.
## See also
[C++ Developer Guidance for Speculative Execution Side Channels](../../security/developer-guidance-speculative-execution.md)<br/>
[/Qspectre](../../build/reference/qspectre.md)<br/>
[spectre](../../cpp/spectre.md)
onReceive
-----
This callback is fired when data is received; it runs in a worker process. Function prototype:
```php
void onReceive(resource $server, int $fd, int $from_id, string $data);
```
Swoole is only responsible for the low-level communication; the data format, packing and unpacking, and packet-completeness checks must be handled at the application layer.
> The data received by the onReceive callback is at most 8192 bytes; this limit can be adjusted via the SW_BUFFER_SIZE macro in swoole_config.h
> Swoole supports binary formats, so $data may be binary data
The data arriving in onReceive must be checked for packet completeness, and you must decide whether to keep waiting for more data. You can add a $buffer = array() in your code, keyed by $fd, to store per-connection context data.
By default, the same fd is dispatched to the same worker, so its data can be concatenated. With dispatch_mode = 3, requests are dispatched preemptively, and data from the same fd may be assigned to different processes.
On the sticky-packet problem: with a protocol such as SMTP, the client may send two commands at once. In Swoole they may arrive in a single receive, in which case the application layer must split them itself. SMTP delimits packets with \r\n, so the business code needs explode("\r\n", $data) to split the data into packets.
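The per-fd buffering and \r\n splitting described above can be sketched as follows. This is an illustrative sketch, not code from the Swoole project: the server setup, port, and splitting logic are assumptions; only the four-argument onReceive prototype comes from this document.

```php
<?php
// Per-connection receive buffers, keyed by $fd.
$buffer = array();

$serv = new swoole_server("127.0.0.1", 9501);
$serv->on('receive', function ($serv, $fd, $from_id, $data) use (&$buffer) {
    // Append the newly received chunk to this connection's buffer.
    if (!isset($buffer[$fd])) {
        $buffer[$fd] = '';
    }
    $buffer[$fd] .= $data;

    // Extract every complete \r\n-terminated command; anything after the
    // last delimiter stays buffered until more data arrives.
    while (($pos = strpos($buffer[$fd], "\r\n")) !== false) {
        $command = substr($buffer[$fd], 0, $pos);
        $buffer[$fd] = substr($buffer[$fd], $pos + 2);
        // handle $command here ...
    }
});
$serv->on('close', function ($serv, $fd) use (&$buffer) {
    // Drop the buffer when the connection closes.
    unset($buffer[$fd]);
});
$serv->start();
```

With dispatch_mode = 3 this per-worker buffer is not reliable, since chunks from the same fd may land in different processes; in that case the context would have to be kept in shared storage instead.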
9ad576500a02956e5ce567d1155830958043fd58 | 692 | md | Markdown | src/content/uudised/2012-08-03-toimus-49-kokkutulek.md | sookoll/oviir.eu | 772dfd14ba53c21ec3bd56458ad33b6faccf3bb4 | [
"MIT"
] | null | null | null | src/content/uudised/2012-08-03-toimus-49-kokkutulek.md | sookoll/oviir.eu | 772dfd14ba53c21ec3bd56458ad33b6faccf3bb4 | [
"MIT"
] | 1 | 2021-05-10T20:03:03.000Z | 2021-05-10T20:03:03.000Z | src/content/uudised/2012-08-03-toimus-49-kokkutulek.md | sookoll/oviir.eu | 772dfd14ba53c21ec3bd56458ad33b6faccf3bb4 | [
"MIT"
] | null | null | null | ---
Title: The 49th Oviir family reunion took place
Description: Oviir.eu family website
Date: 2012-08-03
Image: https://oviir.eu/miuview-api?request=getimage&album=kokkutulekud&item=2012-49-kokkutulek-suur.jpg&size=600&mode=longest
Thumbnail: https://oviir.eu/miuview-api?request=getimage&album=kokkutulekud&item=2012-49-kokkutulek-suur.jpg&size=600&mode=square
Category: uudis
Author: Mihkel Oviir
Tags: '2012'
Edit: admins
---
The Suur family hosted this year's family reunion on 8 July at Männitukas, in Põhja village, Kuusalu municipality. Photos from the reunion can be viewed on the <a href="%base_url%/kokkutulekud/2012">reunion page</a>.
Thank you to the organizers and the participants!
Until the next reunion!
### Use muxed account for payment
In this example we will see how to use a muxed account in a payment.
Muxed accounts can furthermore be used in:
- payment operation destination
- payment strict receive operation destination
- payment strict send operation destination
- any operation source account
- account merge operation destination
- transaction source account
- fee bump transaction fee source
```dart
// Create two random key pairs; we will need them later for signing.
KeyPair senderKeyPair = KeyPair.random();
KeyPair receiverKeyPair = KeyPair.random();
// AccountIds
String accountCId = receiverKeyPair.accountId;
String senderAccountId = senderKeyPair.accountId;
// Create the sender account.
await FriendBot.fundTestAccount(senderAccountId);
// Load the current account data of the sender account.
AccountResponse accountA = await sdk.accounts.account(senderAccountId);
// Create the receiver account.
Transaction transaction = new TransactionBuilder(accountA)
.addOperation(
new CreateAccountOperationBuilder(accountCId, "10").build())
.build();
// Sign.
transaction.sign(senderKeyPair, Network.TESTNET);
// Submit.
SubmitTransactionResponse response = await sdk.submitTransaction(transaction);
// Now let's create the muxed accounts to be used in the payment transaction.
MuxedAccount muxedDestinationAccount = MuxedAccount(accountCId, 8298298319);
MuxedAccount muxedSourceAccount = MuxedAccount(senderAccountId, 2442424242);
// Build the payment operation.
// We use the muxed account objects for destination and for source here.
// This is not needed, you can also use only a muxed source account or muxed destination account.
PaymentOperation paymentOperation =
PaymentOperationBuilder.forMuxedDestinationAccount(
muxedDestinationAccount, Asset.NATIVE, "100")
.setMuxedSourceAccount(muxedSourceAccount)
.build();
// Build the transaction.
// To use a Med25519 muxed account with an id as the source of the transaction, we can simply set the id on our account object.
accountA.muxedAccountMed25519Id = 44498494844;
transaction = new TransactionBuilder(accountA)
.addOperation(paymentOperation)
.build();
// Sign.
transaction.sign(senderKeyPair, Network.TESTNET);
// Submit.
response = await sdk.submitTransaction(transaction);
// Have a look at the transaction and the contents of the envelope in Stellar Laboratory
// https://laboratory.stellar.org/#explorer?resource=transactions&endpoint=single&network=test
print(response.hash);
```
Since version 1.2.9, muxed "M..." addresses are also supported by default as source account ids and destination account ids.
For example:
```dart
String sourceAccountId = "MD6HSPJQPCQBMMSPP33SMERH32U5ZWIJN7DAD2V3UPWZBO347GN3EAAAAAAAAAPGE4NB4";
String destinationAccountId = "MBUZNQV4SSPFZPLIK55ZQ4SZWROKZLJQ62YF5Q3IKJAD5ICYCC3JSAAAAAAAABHOUBCZ4";
PaymentOperation paymentOperation = new PaymentOperationBuilder(destinationAccountId, Asset.NATIVE, "100")
.setSourceAccount(sourceAccountId)
.build();
``` | 36.710843 | 131 | 0.790614 | eng_Latn | 0.801304 |