| Column | Dtype | Min | Max |
|:---|:---|:---|:---|
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 5 | 1.04M |
| ext | stringclasses | 6 values | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 3 | 344 |
| max_stars_repo_name | stringlengths | 5 | 125 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 78 |
| max_stars_repo_licenses | listlengths | 1 | 11 |
| max_stars_count | int64 | 1 | 368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 3 | 344 |
| max_issues_repo_name | stringlengths | 5 | 125 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 78 |
| max_issues_repo_licenses | listlengths | 1 | 11 |
| max_issues_count | int64 | 1 | 116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 3 | 344 |
| max_forks_repo_name | stringlengths | 5 | 125 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 78 |
| max_forks_repo_licenses | listlengths | 1 | 11 |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| content | stringlengths | 5 | 1.04M |
| avg_line_length | float64 | 1.14 | 851k |
| max_line_length | int64 | 1 | 1.03M |
| alphanum_fraction | float64 | 0 | 1 |
| lid | stringclasses | 191 values | |
| lid_prob | float64 | 0.01 | 1 |
26efd7abc3a7c394d7f4ab9cdf889d10b92d6114
1,954
md
Markdown
docs/outlook/auxiliary/pidtagnextsendacct.md
isabella232/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
2
2020-05-19T18:52:16.000Z
2021-04-21T00:13:46.000Z
docs/outlook/auxiliary/pidtagnextsendacct.md
MicrosoftDocs/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
2
2021-12-08T03:25:19.000Z
2021-12-08T03:43:48.000Z
docs/outlook/auxiliary/pidtagnextsendacct.md
isabella232/office-developer-client-docs.de-DE
f244ed2fdf76004aaef1de6b6c24b8b1c5a6942e
[ "CC-BY-4.0", "MIT" ]
5
2018-07-17T08:19:45.000Z
2021-10-13T10:29:41.000Z
---
title: PidTagNextSendAcct
manager: soliver
ms.date: 03/09/2015
ms.audience: Developer
ms.topic: overview
ms.localizationpriority: medium
ms.assetid: 1cf5b314-39fa-996f-fd88-00380ffbc4de
description: Specifies the secondary account "send" stamp for the message.
ms.openlocfilehash: 4acc8639be3b09a2a12fc402c7fc5bc2463af4db
ms.sourcegitcommit: a1d9041c20256616c9c183f7d1049142a7ac6991
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 09/24/2021
ms.locfileid: "59557113"
---

# <a name="pidtagnextsendacct"></a>PidTagNextSendAcct

Specifies the secondary account "send" stamp for the message.

## <a name="quick-info"></a>Quick info

|||
|:-----|:-----|
|Associated properties:  <br/> |PR_NEXT_SEND_ACCT  <br/> |
|Identifier:  <br/> |0x0E29  <br/> |
|Data type:  <br/> |PT_UNICODE  <br/> |
|Area:  <br/> |Outlook application  <br/> |

## <a name="remarks"></a>Remarks

This property applies to a MAPI message object. For a received message, the secondary account "send" stamp indicates which account a forward or reply should be sent with if the forward or reply cannot be sent with the primary account. For outgoing messages, the secondary account "send" stamp determines which account the message should be sent with if it cannot be sent with the primary account.

The value is the [PROP_ACCT_SEND_STAMP](prop_acct_send_stamp.md) value from the [IOlkAccount interface](iolkaccount.md) of the account with which the message is sent.

## <a name="see-also"></a>See also

- [Constants (Account Management API)](constants-account-management-api.md)
- [MAPI properties](https://msdn.microsoft.com/library/3b980217-b65b-442b-8c18-b8b9f3ff487a%28Office.15%29.aspx)
- [PidTagNextSendAcct (canonical property)](https://msdn.microsoft.com/library/b7429c2e-0d9d-4921-9f56-9ecad817f8cb%28Office.15%29.aspx)
48.85
671
0.778915
deu_Latn
0.864553
26f0f624ca34033b9704c5c58bdc4ef13bf1084b
261
md
Markdown
crumb-structure.md
hortigraph/wiki
3fe4c6cb054a5dc7daa54da80909038366b21c1b
[ "MIT" ]
1
2019-10-01T21:01:22.000Z
2019-10-01T21:01:22.000Z
crumb-structure.md
hortigraph/wiki
3fe4c6cb054a5dc7daa54da80909038366b21c1b
[ "MIT" ]
null
null
null
crumb-structure.md
hortigraph/wiki
3fe4c6cb054a5dc7daa54da80909038366b21c1b
[ "MIT" ]
null
null
null
## CRUMB STRUCTURE

### Definition

How the soil particles hold together (glue).

### Description

How the soil particles hold together (glue) is described as "good" or "bad". A good surface structure is called a "tilth". Good crumb structure should be quite stable.
43.5
168
0.754789
eng_Latn
0.996102
26f2081be52ee858ba0a52cf06b503641460f936
85
md
Markdown
README.md
Bindernews/Dude3D
925c7a4ab12deb41f9d80e3751dc0a45662d7ebf
[ "MIT" ]
null
null
null
README.md
Bindernews/Dude3D
925c7a4ab12deb41f9d80e3751dc0a45662d7ebf
[ "MIT" ]
null
null
null
README.md
Bindernews/Dude3D
925c7a4ab12deb41f9d80e3751dc0a45662d7ebf
[ "MIT" ]
null
null
null
# Dude3D

A recreation of the calculator game Block Dude in JavaScript using three.js
28.333333
75
0.811765
eng_Latn
0.993747
26f21dcc444942fac864435e63fab3e5eb0ef207
1,913
md
Markdown
README.md
huypz/trash-ai-testing
0158ea740738909b4788a3b1c40b8a35e7dca94a
[ "MIT" ]
null
null
null
README.md
huypz/trash-ai-testing
0158ea740738909b4788a3b1c40b8a35e7dca94a
[ "MIT" ]
null
null
null
README.md
huypz/trash-ai-testing
0158ea740738909b4788a3b1c40b8a35e7dca94a
[ "MIT" ]
null
null
null
# Trash AI

## Object Detection Model (BACKEND ONLY)

The server runs on ImageAI using RetinaNet. For it to work, first follow the **[ImageAI setup instructions](https://github.com/OlafenwaMoses/ImageAI)**. Then, download the RetinaNet object detection model **[here](https://github.com/OlafenwaMoses/ImageAI/releases/download/essentials-v5/resnet50_coco_best_v2.1.0.h5)**. Once you have downloaded the file, place it in the _models_ folder.

1. Set up ImageAI
2. Open Git Bash in the directory: backend-nodejs
3. Run 'node server.js' in the terminal
4. Go to 'localhost:3000' in your browser
5. Upload images
6. Go to 'localhost:5000/_img-file-name_'

[![Build Status](https://app.travis-ci.com/brkkrgz/trash-ai-testing.svg?branch=main)](https://app.travis-ci.com/brkkrgz/trash-ai-testing) [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Training Data Storage:

We create a training data store to hold all the data we will use to build our AI with a machine learning algorithm. With those datasets, we can train the ML model. The training platform reads data from the data store, to which we have granted our AI training platform access.

The team is searching for a model that has been trained as a proof of concept (POC). A POC is intended to prove that a new technology, service, or idea is viable for the market. A POC involves developers examining the client's requirements, selecting a few to focus on, and creating a proof of concept that meets them.

The trained model is available for viewing: https://colab.research.google.com/drive/1QKI1EzHGkEwc1PMsfqJ3Mc87NfmTUsx9?usp=sharing

When I tried to trigger Travis CI again, it did not receive the updated version. I deleted my entire fork and tried again, deleted and rewrote the .travis file, and so on, but I could not trigger Travis again.
51.702703
332
0.779927
eng_Latn
0.96029
26f2447af145edb5318b2d9c0c01512a807b2fa2
1,717
md
Markdown
CSharpLangFeature/List/12Arrays/readme.md
csharplang/CSharpLangFeature
629563c869db0e275dd4c35c01a6283a606654fb
[ "MIT" ]
null
null
null
CSharpLangFeature/List/12Arrays/readme.md
csharplang/CSharpLangFeature
629563c869db0e275dd4c35c01a6283a606654fb
[ "MIT" ]
null
null
null
CSharpLangFeature/List/12Arrays/readme.md
csharplang/CSharpLangFeature
629563c869db0e275dd4c35c01a6283a606654fb
[ "MIT" ]
null
null
null
# C# | Arrays

### Important Points to Remember About Arrays in C#

* In C#, all arrays are dynamically allocated.
* Since arrays are objects in C#, we can find their length using the `Length` member. This is different from C/C++, where we find the length using `sizeof`.
* A C# array variable can also be declared like other variables, with `[]` after the data type.
* The elements in the array are ordered, and each has an index beginning at 0.
* A C# array is an object of base type `System.Array`.
* Default values of numeric array elements are zero, and reference-type elements default to null.
* Jagged array elements are reference types and are initialized to null.
* Array elements can be of any type, including an array type.
* Array types are reference types derived from the abstract base type `Array`. These types implement `IEnumerable`, which is why `foreach` iteration works on all arrays in C#.

```csharp
int[] x;          // can store int values
string[] s;       // can store string values
double[] d;       // can store double values
Student[] stud1;  // can store instances of Student, a custom class
```

### Jagged Arrays

* An array whose elements are arrays is known as a jagged array, meaning "array of arrays". The member arrays may be of different dimensions and sizes.
* A jagged array is an array of arrays such that the member arrays can be of different sizes.

### Points To Remember

* `GetLength(int)` returns the number of elements in the specified dimension of the array.
* When using jagged arrays, be careful: if an index does not exist, an `IndexOutOfRangeException` is thrown.
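A minimal sketch of declaring, initializing, and iterating a jagged array (variable names are illustrative):

```csharp
// Each member array of a jagged array can have a different length.
int[][] jagged = new int[3][];
jagged[0] = new int[] { 1, 2 };
jagged[1] = new int[] { 3, 4, 5 };
jagged[2] = new int[] { 6 };

for (int i = 0; i < jagged.Length; i++)          // length of the outer array
{
    for (int j = 0; j < jagged[i].Length; j++)   // each row's own length
    {
        System.Console.Write(jagged[i][j] + " ");
    }
    System.Console.WriteLine();
}
```

Note that accessing `jagged[0][2]` here would throw `IndexOutOfRangeException`, since row 0 has only two elements.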
50.5
247
0.754805
eng_Latn
0.999871
26f3b12d088612c8623c5d7c148c2ced241e5e79
1,639
markdown
Markdown
content/blog/2020-11-12-autumn-traybake.markdown
coldclimate/omnomfrickinnom
31251518f991ab298de7b8cb915e95a3f9389b54
[ "MIT" ]
3
2016-05-30T08:55:13.000Z
2017-12-29T18:59:04.000Z
content/blog/2020-11-12-autumn-traybake.markdown
coldclimate/omnomfrickinnom
31251518f991ab298de7b8cb915e95a3f9389b54
[ "MIT" ]
4
2015-07-02T10:38:17.000Z
2020-01-01T21:00:27.000Z
content/blog/2020-11-12-autumn-traybake.markdown
coldclimate/omnomfrickinnom
31251518f991ab298de7b8cb915e95a3f9389b54
[ "MIT" ]
5
2015-07-02T09:05:08.000Z
2020-01-01T20:37:01.000Z
---
layout: post
title: "Autumnal traybake"
date: 2020-11-12 17:51:00
author: oli
image: "/images/blog/autumn-traybake-03.jpg"
tags: ["sausages", "apple", "comfort food", "2020"]
---

Perfect for a cold day as the nights draw in, this used up some sausage meat in the freezer from last Christmas, some past-their-best parsnips and the last of my homegrown apples. It's 5 minutes' prep and goes straight from the oven to the plate. Swap in root veg as you need; onions and carrots would have worked well.

## You will need

* A pack of sausage meat cut up into forkable chunks, or a pack of quality sausages cut up
* A couple of parsnips, scrubbed and chopped
* A couple of apples, chopped into chunks
* About the same volume of potatoes as you have apples and parsnips, chopped up a bit smaller than you might imagine them needing to be
* A couple of teaspoons of olive oil
* A heaped teaspoon of fennel seed
* A heaped teaspoon of mustard seed
* A few big forkfuls of sauerkraut

## Do

* Stick the oven on 180
* Toss all the veg in olive oil
* Pile everything into a non-stick oven tray
* Sprinkle over the fennel and mustard seeds
* Bake for around 30 minutes or until everything looks crispy on top
* Pile on plates with sauerkraut on the side (and some mustard)

## Result

The apple and parsnips are sweet, and the sausage leaks fat that everything on the bottom fries in. There are sticky and unctuous bits. The fennel and mustard seeds pop as you bite them.

![Before the oven](/images/blog/autumn-traybake-01.jpg)
![After the oven](/images/blog/autumn-traybake-02.jpg)
![GET IN MY FACE](/images/blog/autumn-traybake-03.jpg)
37.25
250
0.752898
eng_Latn
0.994983
26f469f458ce557ad36c217c849f68853482c304
1,145
md
Markdown
_posts/2018-08-22-java-memory.md
andrew4cloud/andrew4cloud.github.io
64fb5e01367bddef54bb31583809fac2962bf9b8
[ "MIT" ]
null
null
null
_posts/2018-08-22-java-memory.md
andrew4cloud/andrew4cloud.github.io
64fb5e01367bddef54bb31583809fac2962bf9b8
[ "MIT" ]
null
null
null
_posts/2018-08-22-java-memory.md
andrew4cloud/andrew4cloud.github.io
64fb5e01367bddef54bb31583809fac2962bf9b8
[ "MIT" ]
null
null
null
---
layout: post
title: "Deep Dive into Java Memory Management"
date: 2018-08-22 09:06:00 +0530
categories: Java
---

In this article, we will discuss the Java Virtual Machine (JVM), understand memory management, look at memory monitoring tools, and monitor memory usage and Garbage Collection (GC) activities. Let's get started!

## Java Virtual Machine (JVM)

The JVM is an abstract computing machine that enables a computer to run a Java program. There are three notions of the JVM: the specification (where the workings of the JVM are specified; the implementations are provided by Sun and other companies), the implementation (known as the Java Runtime Environment, or JRE), and the instance (after issuing the `java` command to run a Java class, an instance of the JVM is created).

The JVM loads the code, verifies the code, executes the code, manages memory (this includes allocating memory from the operating system (OS) and managing Java allocation, including heap compaction and removal of garbage objects), and finally provides the runtime environment.

## Java (JVM) Memory Structure

JVM memory is divided into multiple parts: Heap Memory, Non-Heap Memory, and Other.
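The heap figures the JVM manages can be inspected at runtime through the standard `java.lang.Runtime` API; a small sketch (class name is illustrative):

```java
// Prints the JVM's current heap figures using the standard Runtime API.
public class MemoryInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // upper bound the heap may grow to (-Xmx)
        long total = rt.totalMemory(); // heap currently reserved from the OS
        long free = rt.freeMemory();   // unused portion of the reserved heap
        System.out.println("max heap (bytes):   " + max);
        System.out.println("total heap (bytes): " + total);
        System.out.println("used heap (bytes):  " + (total - free));
    }
}
```

Monitoring tools such as `jconsole` and `jstat` expose the same figures, plus the non-heap areas, without any code changes.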
54.52381
389
0.783406
eng_Latn
0.998095
26f504103371bf5cb2e619fe8a2d9bc99adb6c86
1,214
md
Markdown
pages/Documents/MessagingChannels/MobileAppMessagingSDKforiOS/ReleaseNotes/Regular/6.0.1.md
XueliYue/developers-community
edc9f01f7a5c350c6b181a9bd3dce14b35443822
[ "MIT", "BSD-3-Clause" ]
32
2017-06-19T14:40:07.000Z
2022-02-10T15:15:55.000Z
pages/Documents/MessagingChannels/MobileAppMessagingSDKforiOS/ReleaseNotes/Regular/6.0.1.md
XueliYue/developers-community
edc9f01f7a5c350c6b181a9bd3dce14b35443822
[ "MIT", "BSD-3-Clause" ]
418
2017-06-13T08:25:44.000Z
2022-03-21T18:24:06.000Z
pages/Documents/MessagingChannels/MobileAppMessagingSDKforiOS/ReleaseNotes/Regular/6.0.1.md
XueliYue/developers-community
edc9f01f7a5c350c6b181a9bd3dce14b35443822
[ "MIT", "BSD-3-Clause" ]
216
2017-05-24T06:03:25.000Z
2022-03-17T13:06:45.000Z
### Version 6.0.1

#### iOS Messaging SDK

**Release Date**: September 2, 2020

##### Environmental Requirements

The iOS Mobile Messaging SDK version 6.0.1 is supported on iOS versions 11 through 13.

**This XCFramework was compiled with Swift version 5.2.4 (swiftlang-1103.0.32.9 clang-1103.0.32.53), which means it will work with Swift version 5.2.4 and above.**
{: .notice}

XCFramework is supported on CocoaPods versions 1.9.0 and greater.

#### Contents

- iOS SDK 6.0.1 contains the same changes as [6.0.0](#version-600)
- This version also compiles on the Xcode 12 beta

#### Known Issues

* The config bubbleEmailLinksRegex is not working properly.
* Crashes in the fetched results controller (including, but not limited to, Welcome Message and Welcome Message with Quick Replies).
* The conversation view is not displayed properly while the phone is under poor network conditions.
* Media messages may not be sent successfully after a loss of network connection.
* In VoiceOver mode, the content beneath the PDF viewer gets announced; this issue was first found in 5.2.0.
* Configs with the type UIStatusBarStyle (conversationStatusBarStyle, secureFormUIStatusBarStyle, and csatUIStatusBarStyle) are not working on iOS 13 due to dark mode.
44.962963
169
0.770181
eng_Latn
0.985828
26f6128a18674259d348baf6b8142a8ea6f41659
14
md
Markdown
README.md
huijie-inc/hui
b83c839ead322df2ce058e7daa5f4eaebdb59db8
[ "Apache-2.0" ]
null
null
null
README.md
huijie-inc/hui
b83c839ead322df2ce058e7daa5f4eaebdb59db8
[ "Apache-2.0" ]
null
null
null
README.md
huijie-inc/hui
b83c839ead322df2ce058e7daa5f4eaebdb59db8
[ "Apache-2.0" ]
null
null
null
# HUI Unified UI visuals
3.5
6
0.642857
eng_Latn
0.341855
26f631d8be0b808f57414d7d1abe78b2b644b90c
196
md
Markdown
README.md
madecomfy/php7
f7509c0805929f87dfb0fdf0f40b6f93437460c0
[ "MIT" ]
null
null
null
README.md
madecomfy/php7
f7509c0805929f87dfb0fdf0f40b6f93437460c0
[ "MIT" ]
1
2018-08-27T05:58:56.000Z
2018-08-27T05:58:56.000Z
README.md
madecomfy/php7
f7509c0805929f87dfb0fdf0f40b6f93437460c0
[ "MIT" ]
null
null
null
# PHP7 [![](https://images.microbadger.com/badges/image/madecomfyau/php7.svg)](https://microbadger.com/images/madecomfyau/php7 "Get your own image badge on microbadger.com") A PHP7 Docker image
32.666667
166
0.765306
yue_Hant
0.781638
26f65555b927904064bcdd20ea17e4131a181942
593
md
Markdown
README.md
smycynek/quizwiz
1523dcbb2c626a759a9ff237d371a63e522bcfb1
[ "MIT" ]
2
2020-10-13T15:26:28.000Z
2020-10-23T12:08:54.000Z
README.md
smycynek/quizwiz
1523dcbb2c626a759a9ff237d371a63e522bcfb1
[ "MIT" ]
null
null
null
README.md
smycynek/quizwiz
1523dcbb2c626a759a9ff237d371a63e522bcfb1
[ "MIT" ]
null
null
null
# Quiz Wiz # A quiz-building UI and front-end for QuizFreak. Version 0.1.0 Copyright 2020 Steven Mycynek I wanted to create a full stack project from scratch that included: * UI (React and redux-forms) * State management (Redux and redux-sauce) * API Workflow (apisauce and redux-sagas) * A database-driven back-end (Django Rest Framework with Postgres) * Deployment (Heroku) ## Usage ``` yarn install yarn start ``` Requires back-end -- see https://github.com/smycynek/quizfreak Also uses https://github.com/smycynek/react-casual-quiz Live demo at: https://stevenvictor.net/quizwiz
19.766667
67
0.752108
eng_Latn
0.562571
26f6d299d6c66e60f8dc477031f8a802ac2e9928
34,587
md
Markdown
articles/virtual-machines/workloads/sap/get-started.md
julianosaless/azure-docs.pt-br
461791547c9cc2b4df751bb3ed881ce57796f1e4
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/workloads/sap/get-started.md
julianosaless/azure-docs.pt-br
461791547c9cc2b4df751bb3ed881ce57796f1e4
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-machines/workloads/sap/get-started.md
julianosaless/azure-docs.pt-br
461791547c9cc2b4df751bb3ed881ce57796f1e4
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Get started with SAP on Azure VMs | Microsoft Docs
description: Learn about SAP solutions that run on virtual machines (VMs) in Microsoft Azure
services: virtual-machines-linux
documentationcenter: ''
author: msjuergent
manager: bburns
editor: ''
tags: azure-resource-manager
keywords: ''
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
ms.service: virtual-machines-linux
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure-services
ms.date: 05/11/2020
ms.author: juergent
ms.custom: H1Hack27Feb2017
ms.openlocfilehash: eab9db77dee5420ddc5baa9f71bde98fc46ca3f6
ms.sourcegitcommit: a8ee9717531050115916dfe427f84bd531a92341
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 05/12/2020
ms.locfileid: "83196853"
---

# <a name="use-azure-to-host-and-run-sap-workload-scenarios"></a>Use Azure to host and run SAP workload scenarios

By using Microsoft Azure, you can reliably run your mission-critical SAP workloads and scenarios on a scalable, compliant, enterprise-proven platform. You get the scalability, flexibility, and cost savings of Azure. With the expanded partnership between Microsoft and SAP, you can run SAP applications in development, test, and production scenarios in Azure and be fully supported. From SAP NetWeaver to SAP S/4HANA, SAP BI on Linux to Windows, and SAP HANA to SQL, we have you covered.

Besides hosting SAP NetWeaver scenarios with different DBMSs in Azure, you can host other SAP workload scenarios, such as SAP BI on Azure. The uniqueness of Azure for SAP HANA is an offering that sets Azure apart. To enable the hosting of more SAP scenarios that require memory and CPU resources involving SAP HANA, Azure offers the use of customer-dedicated bare-metal hardware. Use this solution to run SAP HANA deployments that require up to 24 TB (120 TB scale-out) of memory for S/4HANA or another SAP HANA workload.

Hosting SAP workload scenarios in Azure can also create requirements for identity integration and single sign-on. This situation can occur when you use Azure Active Directory (Azure AD) to connect different SAP components and SAP software-as-a-service (SaaS) or platform-as-a-service (PaaS) offerings. A list of these integration and single sign-on scenarios with Azure AD and SAP entities is described and documented in the section "Azure AD SAP identity integration and single sign-on".

## <a name="changes-to-the-sap-workload-section"></a>Changes to the SAP workload section

Changes to documents in the SAP workload section of Azure are listed at the end of this article. Entries in the change log are kept for about 180 days.

## <a name="you-want-to-know"></a>You want to know

If you have specific questions, we will point you to specific documents or flows in this section of the start page. You want to know:

- Which Azure VMs and HANA Large Instance units are supported for which SAP software releases and which operating system versions. Read the document [What SAP software is supported for Azure deployment](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-supported-product-on-azure) for the answers and the process to find the information
- Which SAP deployment scenarios are supported with Azure VMs and HANA Large Instances. Information about the supported scenarios can be found in the documents:
  - [SAP workload on Azure virtual machine supported scenarios](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-planning-supported-configurations)
  - [Supported scenarios for SAP HANA on Large Instances](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-supported-scenario)
- Which Azure services, Azure VM types, and Azure storage services are available in the different Azure regions; check the site [Products available by region](https://azure.microsoft.com/global-infrastructure/services/)

## <a name="sap-hana-on-azure-large-instances"></a>SAP HANA on Azure (Large Instances)

A series of documents leads you through SAP HANA on Azure (Large Instances), or HANA Large Instances for short. For information about HANA Large Instances, start with the document [Overview and architecture of SAP HANA on Azure (Large Instances)](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-overview-architecture) and go through the related documentation in the HANA Large Instance section.

## <a name="sap-hana-on-azure-virtual-machines"></a>SAP HANA on Azure virtual machines

This section of the documentation covers different aspects of SAP HANA. As a prerequisite, you should be familiar with the principal Azure services that provide the elementary Azure IaaS services. So, you need knowledge of Azure compute, storage, and networking. Many of these subjects are handled in the SAP NetWeaver-related [Azure planning guide](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/planning-guide).

For information about HANA on Azure, see the following articles and their subarticles:

- [SAP HANA infrastructure configurations and operations on Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations)
- [SAP HANA high availability for Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-availability-overview)
- [High availability of SAP HANA on Azure virtual machines (VMs)](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-high-availability)
- [Backup guide for SAP HANA on Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-backup-guide)

## <a name="sap-netweaver-deployed-on-azure-virtual-machines"></a>SAP NetWeaver deployed on Azure virtual machines

This section lists the planning and deployment documentation for SAP NetWeaver and Business One on Azure. The documentation focuses on the basics and on the use of non-HANA databases with an SAP workload on Azure. The documents and articles for high availability are also the foundation for HANA high availability in Azure, such as:

- [Azure planning guide](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/planning-guide)
- [SAP Business One on Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/business-one-azure)
- [Protect a multi-tier SAP NetWeaver application deployment by using Site Recovery](https://docs.microsoft.com/azure/site-recovery/site-recovery-sap)
- [SAP LaMa connector for Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/lama-installation)

For information about non-HANA databases under an SAP workload on Azure, see:

- [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_general)
- [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_sqlserver)
- [Oracle Azure Virtual Machines DBMS deployment for SAP workload](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_oracle)
- [IBM DB2 Azure Virtual Machines DBMS deployment for SAP workload](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_ibm)
- [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_sapase)
- [SAP MaxDB, liveCache, and Content Server deployment on Azure VMs](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_maxdb)

For information about SAP HANA databases on Azure, see the section "SAP HANA on Azure virtual machines".

For information about the high availability of an SAP workload on Azure, see:

- [Azure Virtual Machines high availability for SAP NetWeaver](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-high-availability-guide-start)

This document points to various other architecture and scenario documents. The later scenario documents provide links to detailed technical documents that explain the deployment and configuration of the different high-availability methods. The different documents that show how to establish and configure high availability for an SAP NetWeaver workload cover Linux and Windows operating systems.

For information about the integration between Azure Active Directory (Azure AD) and SAP services and single sign-on, see:

- [Tutorial: Azure Active Directory integration with SAP Cloud for Customer](https://docs.microsoft.com/azure/active-directory/saas-apps/sap-customer-cloud-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json)
- [Tutorial: Azure Active Directory integration with SAP Cloud Platform Identity Authentication](https://docs.microsoft.com/azure/active-directory/saas-apps/sap-hana-cloud-platform-identity-authentication-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json)
- [Tutorial: Azure Active Directory integration with SAP Cloud Platform](https://docs.microsoft.com/azure/active-directory/saas-apps/sap-hana-cloud-platform-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json)
- [Tutorial: Azure Active Directory integration with SAP NetWeaver](https://docs.microsoft.com/azure/active-directory/saas-apps/sap-netweaver-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json)
- [Tutorial: Azure Active Directory integration with SAP Business ByDesign](https://docs.microsoft.com/azure/active-directory/saas-apps/sapbusinessbydesign-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json)
- [Tutorial: Azure Active Directory integration with SAP HANA](https://docs.microsoft.com/azure/active-directory/saas-apps/saphana-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json)
- [Your S/4HANA environment: Fiori Launchpad SAML single sign-on with Azure AD](https://blogs.sap.com/2017/02/20/your-s4hana-environment-part-7-fiori-launchpad-saml-single-sing-on-with-azure-ad/)

For information about the integration of Azure services into SAP components, see:

- [Use SAP HANA in Power BI Desktop](https://docs.microsoft.com/power-bi/desktop-sap-hana)
- [DirectQuery and SAP HANA](https://docs.microsoft.com/power-bi/desktop-directquery-sap-hana)
- [Use the SAP BW Connector in Power BI Desktop](https://docs.microsoft.com/power-bi/desktop-sap-bw-connector)
- [Azure Data Factory offers SAP HANA and Business Warehouse data integration](https://azure.microsoft.com/blog/azure-data-factory-offer-sap-hana-and-business-warehouse-data-integration)

## <a name="change-log"></a>Change log

- 05/11/2020: Change in [High availability of SAP HANA on Azure VMs on SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-high-availability) to set resource stickiness to 0 for the netcat resource, as that leads to a more streamlined failover
- 05/05/2020: Changes in [Azure Virtual Machines planning and implementation for SAP NetWeaver](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/planning-guide) to express that Gen2 deployments are available for the Mv1 VM family
- 04/24/2020: Changes in [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse), [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel), [High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp Files](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files), and [High availability for SAP NetWeaver on Azure VMs on RHEL with Azure NetApp Files](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files) to add clarification on how the IP addresses for Azure NetApp Files volumes are assigned
- 04/22/2020: Change in [High availability of SAP HANA on Azure VMs on SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-high-availability) to remove the meta attribute `is-managed` from the instructions, as it conflicts with placing the cluster in or out of maintenance mode
- 04/21/2020: Added Azure SQL DB as a supported DBMS for SAP (Hybris) Commerce Platform 1811 and later in the articles [What SAP software is supported for Azure deployments](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-supported-product-on-azure) and [SAP certifications and configurations running on Microsoft Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-certifications)
- 04/16/2020: Added SAP HANA as a supported DBMS for SAP (Hybris) Commerce Platform in the articles [What SAP software is supported for Azure deployments](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-supported-product-on-azure) and [SAP certifications and configurations running on Microsoft Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-certifications)
- 04/13/2020: Fix the exact SAP ASE version numbers in [SAP ASE Azure Virtual Machines DBMS deployment for SAP workload](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_sapase)
- 04/07/2020: Change in [Setting up Pacemaker on SLES in Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker) to clarify the cloud-netconfig-azure instructions
- 04/06/2020: Changes in [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse) and [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel) to remove references to NetApp [TR-4435](https://www.netapp.com/us/media/tr-4746.pdf) (replaced by [TR-4746](https://www.netapp.com/us/media/tr-4746.pdf))
- 03/31/2020: Change in [High availability of SAP HANA on Azure VMs on SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-high-availability) and [High availability of SAP HANA on Azure VMs on RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) to add instructions on specifying the stripe size when creating striped volumes
- 03/27/2020: Change in [High availability for SAP NW on Azure VMs on SLES with Azure NetApp Files for SAP applications](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files) to align the file system mount options with NetApp TR-4746 (remove the sync mount option)
- 03/26/2020: Change in [High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid) to add a reference to NetApp TR-4746
- 03/26/2020: Change in [High availability for SAP NetWeaver on Azure VMs on SLES for SAP applications](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse), [High availability for SAP NetWeaver on Azure VMs on SLES with Azure NetApp Files for SAP applications](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files), [High availability for NFS on Azure VMs on SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-nfs), [High availability for SAP NetWeaver on Azure VMs on SLES multi-SID guide](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid), [High availability for SAP NetWeaver on Azure VMs on RHEL for SAP applications](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel), and [High availability for SAP NetWeaver on Azure VMs on RHEL with Azure NetApp Files for SAP applications](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files) to update diagrams and clarify instructions for Azure Load Balancer back-end pool creation
- 03/19/2020: Major revision of the document [Quickstart: Manual installation of single-instance SAP HANA on Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-get-started) into [Installation of SAP HANA on Azure virtual machines](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-get-started)
- 03/17/2020: Change in [Setting up Pacemaker on SUSE Linux Enterprise Server in Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker) to remove the SBD configuration that is no longer necessary
- 03/16/2020: Clarification of the column certification scenario in SAP HANA certified platform of
IaaS em [qual software SAP tem suporte para implantações do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-supported-product-on-azure) - 03/11/2020: alteração na [carga de trabalho do SAP em cenários de máquina virtual do Azure com suporte](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-planning-supported-configurations) para esclarecer vários bancos de dados por suporte de instância de DBMS - 03/11/2020: alteração no [planejamento e implementação de máquinas virtuais do Azure para SAP NetWeaver](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/planning-guide) explicando VMs de geração 1 e geração 2 - 03/10/2020: alterar em [SAP Hana configurações de armazenamento de máquina virtual do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-storage) para esclarecer os limites reais de taxa de transferência de seja - 03/09/2020: alteração em [alta disponibilidade para SAP NetWeaver em VMs do Azure em SuSE Linux Enterprise Server para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse), [alta disponibilidade para SAP NetWeaver em VMs do Azure em SuSE Linux Enterprise Server com Azure NetApp Files para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files), [alta disponibilidade para NFS em VMs do Azure no SUSE Linux Enterprise Server](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-nfs), [Configurando o pacemaker no SUSE Linux Enterprise Server no Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-pacemaker), [alta disponibilidade do IBM DB2 LUW em VMs do azure no SUSE Linux Enterprise Server com pacemaker](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms-guide-ha-ibm), [alta disponibilidade de SAP Hana em VMs do Azure no SUSE Linux 
Enterprise Server](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-high-availability) e [alta disponibilidade para SAP NetWeaver em VMs do Azure em um guia de vários SID do SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid) para atualizar recursos de cluster com o agente de recursos Azure-lb - 03/05/2020: alterações de estrutura e alterações de conteúdo para regiões do Azure e máquinas virtuais do Azure em [planejamento e implementação de máquinas virtuais do Azure para SAP NetWeaver](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/planning-guide) - 03/03/2020: alteração na [alta disponibilidade para o SAP NW em VMs do Azure no SLES com seja para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files) para alterar para um layout de volume seja mais eficiente - 03/01/2020: [Guia de backup retrabalhado para SAP Hana em máquinas virtuais do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-backup-guide) para incluir o serviço de backup do Azure. Conteúdo reduzido e condensado em [SAP Hana backup do Azure em nível de arquivo](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-backup-file-level) e excluiu um terceiro documento lidando com o backup por meio de instantâneo de disco. 
O conteúdo é tratado no guia de backup para SAP HANA em máquinas virtuais do Azure - 02/27/2020: alteração na [alta disponibilidade para o SAP NW em VMs do Azure no SLES para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse), [alta disponibilidade para SAP NW em VMs do Azure no SLES com seja para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files) e [alta disponibilidade para SAP NetWeaver em VMs do Azure em um guia de vários SID do SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid) para ajustar o parâmetro de cluster "on Fail" - 02/26/2020: alterar em [SAP Hana configurações de armazenamento de máquina virtual do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-storage) para esclarecer a opção do sistema de arquivos para o Hana no Azure - 02/26/2020: alteração na [arquitetura e nos cenários de alta disponibilidade para o SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios) incluir o link para a ha para SAP NetWeaver em VMs do Azure no guia de vários SID do RHEL - 02/26/2020: alteração na [alta disponibilidade para o SAP NW em VMs do Azure no SLES para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse), [alta disponibilidade para SAP NW em VMs do Azure no SLES com seja para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files), alta [disponibilidade de VMs do Azure para SAP NetWeaver em RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel) e [VMs do Azure de alta disponibilidade para SAP NetWeaver no RHEL com Azure NetApp 
files](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files) para remover a instrução que o cluster ASCS/ers de vários SIDs não tem suporte - 02/26/2020: lançamento de [alta disponibilidade para SAP NetWeaver em VMs do Azure no guia de vários SID do RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-multi-sid) para adicionar um link para o guia de cluster do SUSE multi-Sid - 02/25/2020: alteração na [arquitetura e nos cenários de alta disponibilidade para o SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios) adicionar links para artigos mais recentes de ha - 02/25/2020: alteração na [alta disponibilidade do IBM DB2 LUW em VMs do Azure em SuSE Linux Enterprise Server com pacemaker](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms-guide-ha-ibm) para apontar para o documento que descreve o acesso ao ponto de extremidade público com o balanceador de carga do Azure padrão - 02/21/2020: concluir a revisão do artigo [implantação de DBMS de máquinas virtuais do Azure ase do SAP para carga de trabalho do SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/dbms_guide_sapase) - 02/21/2020: alterar em [SAP Hana configuração de armazenamento da máquina virtual do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-vm-operations-storage) para representar uma nova recomendação no tamanho da distribuição para/Hana/data e adicionar a configuração do Agendador de e/s - 02/21/2020: alterações nos documentos do SAP HANA em instâncias grandes para representar SKUs recém certificados de S224 e S224m - 02/21/2020: alterar a [alta disponibilidade de VMs do Azure para SAP NetWeaver em RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel) e a [alta disponibilidade de VMs do Azure para SAP NetWeaver no RHEL com Azure NetApp 
files](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files) para ajustar as restrições de cluster para ENSA2 (arquitetura de replicação de servidor de enfileiramento 2) - 02/20/2020: alteração em [alta disponibilidade para o SAP NetWeaver em VMs do Azure no guia de vários SID do SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid) para adicionar um link para o guia de cluster do SUSE multi-Sid - 02/13/2020: alterações no [planejamento e implementação de máquinas virtuais do Azure para SAP NetWeaver](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/planning-guide) para implementar links para novos documentos - 02/13/2020: foi adicionado um novo documento [de carga de trabalho SAP no cenário com suporte da máquina virtual do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-planning-supported-configurations) - 02/13/2020: novo documento adicionado [qual software SAP tem suporte para a implantação do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-supported-product-on-azure) - 02/13/2020: alteração na [alta disponibilidade do IBM DB2 LUW em VMs do Azure no Red Hat Enterprise Linux Server](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-ibm-db2-luw) para apontar para um documento que descreve o acesso ao ponto de extremidade público com o balanceador de carga do Azure padrão - 02/13/2020: adicionar os novos tipos de VM a [certificações SAP e configurações em execução no Microsoft Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-certifications) - 02/13/2020: adicionar novas notas de suporte SAP [cargas de trabalho SAP no Azure: lista de verificação de planejamento e implantação](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-deployment-checklist) - 02/13/2020: alterar a [alta disponibilidade de VMs do 
Azure para SAP NetWeaver em RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel) e a [alta disponibilidade de VMs do Azure para SAP NetWeaver no RHEL com Azure NetApp files](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files) para alinhar os tempos limite dos recursos de cluster às recomendações de tempo limite do Red Hat - 02/11/2020: lançamento de [SAP Hana na migração de instância grande do Azure para máquinas virtuais do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-large-instance-virtual-machine-migration) - 02/07/2020: alteração na [conectividade de ponto de extremidade pública para VMs usando o ILB padrão do Azure em cenários de ha do SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-standard-load-balancer-outbound-connections) para atualizar a captura de tela de NSG de exemplo - 02/03/2020: alterar em [alta disponibilidade para o SAP NW em VMs do Azure no SLES para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse) e [alta disponibilidade para SAP NW em VMs do Azure no SLES com seja para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files) para remover o aviso sobre o uso de Dash nos nomes de host de nós de cluster no SLES - 01/28/2020: alterar a [alta disponibilidade de SAP Hana em VMs do Azure no RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-high-availability-rhel) para alinhar os SAP Hana de recursos de cluster para as recomendações de tempo limite do Red Hat - 01/17/2020: alteração nos [grupos de posicionamento de proximidade do Azure para latência de rede ideal com aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-proximity-placement-scenarios) para alterar a seção da movimentação de 
VMs existentes para um grupo de posicionamento de proximidade - 01/17/2020: alteração nas [configurações de carga de trabalho do SAP com zonas de disponibilidade do Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-ha-availability-zones) para apontar para um procedimento que automatiza medidas de latência entre zonas de disponibilidade - 01/16/2020: alterar em [como instalar e configurar SAP Hana (instâncias grandes) no Azure](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/hana-installation) para adaptar as versões do sistema operacional ao diretório de hardware de IaaS do Hana - 01/16/2020: alterações em [alta disponibilidade para SAP NetWeaver em VMs do Azure no guia de vários SID do SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid) para adicionar instruções para sistemas SAP, usando a arquitetura do enqueue Server 2 (ENSA2) - 01/10/2020: alterações no [SAP Hana escalar horizontalmente com o nó em espera em VMs do Azure com Azure NetApp files no SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse) e no [SAP Hana escalar horizontalmente com o nó em espera em VMs do Azure com Azure NetApp files no RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel) para adicionar instruções sobre como fazer `nfs4_disable_idmapping` alterações permanentes. 
- 01/10/2020: alterações na [alta disponibilidade para SAP NetWeaver em VMs do Azure no SLES com Azure NetApp Files para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-netapp-files) e em [máquinas virtuais do Azure alta disponibilidade para SAP NetWeaver no RHEL com Azure NetApp Files para aplicativos SAP](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-rhel-netapp-files) para adicionar instruções sobre como montar volumes Azure NetApp files NFSv4. - 12/23/2019: lançamento de [alta disponibilidade para SAP NetWeaver em VMs do Azure em guia de vários SID do SLES](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/high-availability-guide-suse-multi-sid) - 12/18/2019: versão do [SAP Hana escalar horizontalmente com o nó em espera em VMs do Azure com Azure NetApp files no RHEL](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-rhel) - 11/21/2019: alterações no [SAP Hana escalar horizontalmente com o nó em espera em VMs do Azure com Azure NetApp files no SUSE Linux Enterprise Server](https://docs.microsoft.com/azure/virtual-machines/workloads/sap/sap-hana-scale-out-standby-netapp-files-suse) para simplificar a configuração do mapeamento de ID do NFS e alterar a interface de rede principal recomendada para simplificar o roteamento. - 11/15/2019: pequenas alterações na [alta disponibilidade para SAP NetWeaver em SuSE Linux Enterprise Server com Azure NetApp Files para aplicativos SAP](high-availability-guide-suse-netapp-files.md) e [alta disponibilidade para sap NetWeaver no Red Hat Enterprise Linux com Azure NetApp Files para aplicativos SAP](high-availability-guide-rhel-netapp-files.md) para esclarecer as restrições de tamanho do pool de capacidade e remover a instrução que apenas a versão NFSv3 tem suporte. 
- 11/12/2019: lançamento de [alta disponibilidade para SAP NetWeaver no Windows com Azure NetApp Files (SMB)](high-availability-guide-windows-netapp-files-smb.md) - 11/08/2019: alterações na [alta disponibilidade de SAP Hana em VMs do Azure no SUSE Linux Enterprise Server](sap-hana-high-availability.md), [SAP Hana configurar a replicação do sistema em VMS (máquinas virtuais) do Azure](sap-hana-high-availability-rhel.md), [alta disponibilidade de máquinas virtuais do azure para SAP NetWeaver em SuSE Linux Enterprise Server para aplicativos SAP](high-availability-guide-suse.md), [alta disponibilidade de máquinas virtuais do azure para SAP NetWeaver em SuSE Linux Enterprise Server com Azure NetApp files](high-availability-guide-suse-netapp-files.md), alta disponibilidade de máquinas virtuais do Azure [para SAP NetWeaver no Red Hat Enterprise Linux](high-availability-guide-rhel.md), [alta disponibilidade de máquinas virtuais do Azure para SAP NetWeaver no Red Hat Enterprise Linux com Azure NetApp Files](high-availability-guide-rhel-netapp-files.md), [alta disponibilidade para NFS em VMs do Azure no SUSE Linux Enterprise Server](high-availability-guide-suse-nfs.md), [GlusterFS em VMs do Azure no Red Hat Enterprise Linux para o SAP](high-availability-guide-rhel-glusterfs.md) - 11/08/2019: alterações na [lista de verificação de planejamento e implantação da carga de trabalho do SAP](sap-deployment-checklist.md) para esclarecer a recomendação de criptografia - 11/04/2019: alterações na [configuração de pacemaker no SUSE Linux Enterprise Server no Azure](high-availability-guide-suse-pacemaker.md) para criar o cluster diretamente com a configuração de unicast
controls/notification/getting-started/structure.md
DKaramfilov/ajax-docs
[ "Apache-2.0" ]
---
title: Structure
page_title: RadNotification Structure - RadNotification
description: Check our Web Forms article about RadNotification Structure.
slug: notification/getting-started/radnotification-structure
tags: radnotification,structure
published: True
position: 1
---

# RadNotification Structure

The main visual elements of **RadNotification** are:

![structure](images/radnotification-structure.png)

* **TitleBar:** This is the title of the notification.
* **TitleIcon:** The small image (16x16 pixels) shown in the titlebar.
* **Menu Icon:** The button used to show the title menu. It appears only if the **ShowTitleMenu** property is set to true.
* **Close Button:** The button used to close the notification. It appears only if the **ShowCloseButton** property is set to true.
* **ContentIcon:** The image (32x32 pixels) shown in the content area of the notification.
* **Content or Text:** This is the main part of the control. It can be customized using the Text property or by declaring content between the RadNotification's ContentTemplate tags in ASP.NET.

# See Also

* [Overview]({%slug notification/overview%})
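The properties mentioned above can be set declaratively. The following is a minimal sketch of such markup; the control ID and the `Title` and `Text` values are illustrative assumptions, not taken from this article:

````ASP.NET
<telerik:RadNotification ID="RadNotification1" runat="server"
    Title="New Message"
    ShowTitleMenu="true"
    ShowCloseButton="true"
    Text="A new message has arrived.">
</telerik:RadNotification>
````

Alternatively, the `Text` property can be omitted and the content declared between the control's `ContentTemplate` tags, as described above.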
_READMES/servo.servo 21-26-08-773.md
BJBaardse/open-source-words
[ "Apache-2.0" ]
# The Servo Parallel Browser Engine Project

Servo is a prototype web browser engine written in the Rust language. It is currently developed on 64-bit macOS, 64-bit Linux, 64-bit Windows, and Android.

Servo welcomes contribution from everyone. See `CONTRIBUTING.md` and `HACKING_QUICKSTART.md` for help getting started.

Visit the Servo Project page for news and guides.

## Setting up your environment

### rustup.rs

Building Servo requires rustup, version 1.8.0 or more recent. If you have an older version, run `rustup self update`.

To install on Windows, download and run `rustup-init.exe`, then follow the onscreen instructions.

To install on other systems, run:

```sh
curl https://sh.rustup.rs -sSf | sh
```

This will also download the current stable version of Rust, which Servo won't use. To skip that step, run instead:

```sh
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain none
```

See also "Other installation methods".

### Other dependencies

Please select your operating system: macOS, Debian-based Linuxes, Fedora, Arch Linux, openSUSE, Gentoo Linux, Microsoft Windows, Android.

#### macOS

On macOS (Homebrew):

```sh
brew install automake pkg-config python cmake yasm
pip install virtualenv
```

On macOS (MacPorts):

```sh
sudo port install python27 py27-virtualenv cmake yasm
```

On macOS >= 10.11 (El Capitan), you also have to install OpenSSL:

```sh
brew install openssl

export OPENSSL_INCLUDE_DIR="$(brew --prefix openssl)/include"
export OPENSSL_LIB_DIR="$(brew --prefix openssl)/lib"

./mach build ...
```

If you've already partially compiled Servo but forgot to do this step, run `./mach clean`, set the shell variables, and recompile.

#### On Debian-based Linuxes

```sh
sudo apt install git curl autoconf libx11-dev \
    libfreetype6-dev libgl1-mesa-dri libglib2.0-dev xorg-dev \
    gperf g++ build-essential cmake virtualenv python-pip \
    libssl1.0-dev libbz2-dev libosmesa6-dev libxmu6 libxmu-dev \
    libglu1-mesa-dev libgles2-mesa-dev libegl1-mesa-dev libdbus-1-dev \
    libharfbuzz-dev ccache clang
```

If you are using a version prior to Ubuntu 17.04 or Debian Sid, replace `libssl1.0-dev` with `libssl-dev`.

If you are using Ubuntu 16.04, run `export HARFBUZZ_SYS_NO_PKG_CONFIG=1` before building to avoid an error with harfbuzz.

If you are on Ubuntu 14.04 and encountered errors on installing these dependencies involving `libcheese`, see #6158 for a workaround.

If `virtualenv` does not exist, try `python-virtualenv`.

#### On Fedora

```sh
sudo dnf install curl libtool gcc-c++ libXi-devel \
    freetype-devel mesa-libGL-devel mesa-libEGL-devel glib2-devel libX11-devel libXrandr-devel gperf \
    fontconfig-devel cabextract ttmkfdir python python-virtualenv python-pip expat-devel \
    rpm-build openssl-devel cmake bzip2-devel libXcursor-devel libXmu-devel mesa-libOSMesa-devel \
    dbus-devel ncurses-devel harfbuzz-devel ccache mesa-libGLU-devel clang clang-libs
```

#### On CentOS

```sh
sudo yum install curl libtool gcc-c++ libXi-devel \
    freetype-devel mesa-libGL-devel mesa-libEGL-devel glib2-devel libX11-devel libXrandr-devel gperf \
    fontconfig-devel cabextract ttmkfdir python python-virtualenv python-pip expat-devel \
    rpm-build openssl-devel cmake3 bzip2-devel libXcursor-devel libXmu-devel mesa-libOSMesa-devel \
    dbus-devel ncurses-devel python34 harfbuzz-devel ccache clang clang-libs llvm-toolset-7
```

Build inside `llvm-toolset` and `devtoolset`:

```sh
scl enable devtoolset-7 llvm-toolset-7 bash
```

with the following environment variables set:

```sh
export CMAKE=cmake3
export LIBCLANG_PATH=/opt/rh/llvm-toolset-7/root/usr/lib64
```

#### On openSUSE Linux

```sh
sudo zypper install libX11-devel libexpat-devel libbz2-devel Mesa-libEGL-devel Mesa-libGL-devel cabextract cmake \
    dbus-1-devel fontconfig-devel freetype-devel gcc-c++ git glib2-devel gperf \
    harfbuzz-devel libOSMesa-devel libXcursor-devel libXi-devel libXmu-devel libXrandr-devel libopenssl-devel \
    python-pip python-virtualenv rpm-build glu-devel ccache llvm-clang libclang
```

#### On Arch Linux

```sh
sudo pacman -S --needed base-devel git python2 python2-virtualenv python2-pip mesa cmake bzip2 libxmu glu \
    pkg-config ttf-fira-sans harfbuzz ccache clang
```

#### On Gentoo Linux

```sh
sudo emerge net-misc/curl \
    media-libs/freetype media-libs/mesa dev-util/gperf \
    dev-python/virtualenv dev-python/pip dev-libs/openssl \
    media-libs/harfbuzz dev-util/ccache \
    x11-libs/libXmu media-libs/glu x11-base/xorg-server sys-devel/clang
```

with the following environment variable set:

```sh
export LIBCLANG_PATH=/usr/lib64/llvm/lib64
```

#### On Windows (MSVC)

1. Install Python for Windows (https://www.python.org/downloads/release/python-2714/). The Windows x86-64 MSI installer is fine. You should change the installation to install the "Add python.exe to Path" feature.
2. Install virtualenv. In a normal Windows shell (cmd.exe or "Command Prompt" from the Start menu), do `pip install virtualenv`. If this does not work, you may need to reboot for the changed PATH settings (by the Python installer) to take effect.
3. Install Git for Windows (https://git-scm.com/download/win). DO allow it to add git.exe to the PATH (default settings for the installer are fine).
4. Install Visual Studio Community 2017 (https://www.visualstudio.com/vs/community/). You MUST add "Visual C++" to the list of installed components; it is not on by default. Visual Studio 2017 must be installed to the default location, or mach.bat will not find it.

If you encountered errors with the environment above, do the following for a workaround:

1. Download and install Build Tools for Visual Studio 2017.
2. Install python2.7 x86-x64 and virtualenv.
3. Run `mach.bat build -d`.

If you have troubles with the x64 type prompt as `mach.bat` sets by default:

1. You may need to choose and launch the type manually, such as "x86_x64 Cross Tools Command Prompt for VS 2017", in the Windows menu.
2. `cd` to the path of Servo.
3. Run `python mach build -d`.

#### Cross-compilation for Android

Run `./mach bootstrap-android` to get Android-specific tools. See the wiki for details.

## The Rust compiler

Servo's build system uses rustup.rs to automatically download a Rust compiler. This is a specific version of Rust Nightly determined by the `rust-toolchain` file.

## Building

Servo is built with Cargo, the Rust package manager. We also use Mozilla's Mach tools to orchestrate the build and other tasks.

### Normal build

To build Servo in development mode (this is useful for development, but the resulting binary is very slow):

```sh
git clone https://github.com/servo/servo
cd servo
./mach build --dev
./mach run tests/html/about-mozilla.html
```

Or on Windows MSVC, in a normal Command Prompt (cmd.exe):

```cmd
git clone https://github.com/servo/servo
cd servo
mach.bat build --dev
```

For benchmarking, performance testing, or real-world use, add the `--release` flag to create an optimized build:

```sh
./mach build --release
./mach run --release tests/html/about-mozilla.html
```

### Checking for build errors, without building

If you're making changes to one crate that cause build errors in another crate, consider this instead of a full build:

```sh
./mach check
```

It will run `cargo check`, which runs the analysis phase of the compiler (and so shows build errors, if any) but skips the code generation phase. This can be a lot faster than a full build, though of course it doesn't produce a binary you can run.

### Building for Android target

For ARM (armv7-linux-androideabi, most phones):

```sh
./mach build --release --android
./mach package --release --android
```

For x86 (typically for the emulator):

```sh
./mach build --release --target i686-linux-android
./mach package --release --target i686-linux-android
```

## Running

Run Servo with the command:

```sh
./servo [url] [arguments]    # if you run with nightly build
./mach run [url] [arguments] # if you run with mach
```

For example:

```sh
./mach run https://www.google.com
```

### Commandline Arguments

- `-p INTERVAL` turns on the profiler and dumps info to the console every INTERVAL seconds
- `-s SIZE` sets the tile size for painting; defaults to 512
- `-z` disables all graphical output; useful for running JS/layout tests
- `-Z help` displays useful output to debug Servo

### Keyboard Shortcuts

- `Ctrl`+`-` zooms out
- `Ctrl`+`=` zooms in
- `Alt`+`Left Arrow` goes backwards in the history
- `Alt`+`Right Arrow` goes forwards in the history
- `Esc` exits Servo

## Developing

There are lots of mach commands you can use. You can list them with `./mach --help`.

The generated documentation can be found on http://doc.servo.org/servo/index.html
episodes/02-practice-learning.md
fishtree-attempt/instructor-training
[ "CC-BY-4.0" ]
--- title: Building Skill With Practice block: How Learning Works teaching: 30 exercises: 30 --- We will now get started with a discussion of how learning works. We will begin with some key concepts from educational research and identify how these principles are put into practice in Carpentries workshops. ::::::::::::::::::::::::::::::::::::::: objectives - Compare and contrast the three stages of skill acquisition. - Identify a mental model and an analogy that can help to explain it. - Apply a concept map to explore a simple mental model. - Understand the limitations of knowledge in the absence of a functional mental model. - Create a formative assessment to diagnose a broken mental model. :::::::::::::::::::::::::::::::::::::::::::::::::: :::::::::::::::::::::::::::::::::::::::: questions - How do people learn? - Who is a typical Carpentries learner? - How can we help novices become competent practitioners? :::::::::::::::::::::::::::::::::::::::::::::::::: ## The Carpentries Pedagogical Model The Carpentries aims to teach computational competence to learners. We take an applied approach, avoiding the theoretical and general in favor of the practical and specific. By showing learners how to solve specific problems with specific tools and providing hands-on practice, we develop learners' confidence and lay the foundation for future learning. A critical component of this process is that learners are able to practice what they are learning in real time, get feedback on what they are doing, and then apply those lessons learned to the next step in the learning process. Having learners help each other during the workshops also helps to reinforce concepts taught during the workshops. **A Carpentries workshop is an interactive event** -- for learners and instructors. We give and receive feedback throughout the course of a workshop. We incorporate assessments within the lesson materials and ask for feedback on sticky notes during lunch breaks and at the end of each day. 
One reason why practice and feedback are so important is because a Carpentries workshop is not simply a source of information; it is the starting point for development of a new skill. To understand what this means, we will start by exploring what research tells us about skill acquisition and development of a "mental model."

## The Acquisition of Skill

Our approach is based on the work of researchers like Patricia Benner, who applied the [Dreyfus model of skill acquisition][wikipedia-dreyfus-skill] in her studies of [how nurses progress from novice to expert][nurses-dreyfus] ([see also books by Benner][Benner-dreyfus]). This work indicates that through practice and formal instruction, learners acquire skills and advance through distinct stages. In simplified form, three stages of this model are:

![](fig/skill-level.svg){alt='Three people, labeled from left to right as "Novice", "Competent Practitioner", and "Expert". Underneath, an arrow labeled "Experience level" points from left to right. The "Novice" is quoted, "I am not sure what questions to ask." The Competent Practitioner is quoted, "I am pretty confident, but I still look stuff up a lot!" The Expert is quoted, "I have been doing this on a daily basis for years!"'}

- *Novice*: someone who does not know what they do not know, i.e., they do not yet know what the key ideas in the domain are or how they relate. Novices may have difficulty formulating questions, or may ask questions that seem irrelevant or off-topic as they rely on prior knowledge, without knowing what is or is not related yet.

  > Example: A *novice* learner in a Carpentries workshop might never have heard of the bash shell, and therefore
  > may have no understanding of how it relates to their file system or other programs on their computer.

- *Competent practitioner*: someone who has enough understanding for everyday purposes. They will not know all the details of how something works and their understanding may not be entirely accurate, but it is sufficient for completing normal tasks with normal effort under normal circumstances.

  > Example: A *competent practitioner* in a Carpentries workshop might have used the shell before and understand how to
  > move around directories and use individual programs, but they might not understand how they can fit these programs
  > together to build scripts and automate large tasks.

- *Expert*: someone who can easily handle situations that are out of the ordinary.

  > Example: An *expert* in a Carpentries workshop may have experience writing and running shell scripts and, when
  > presented with a problem, immediately sees how these skills can be used to solve the problem.

Note that how a person *feels* about their skill level is not included in these definitions! You may or may not consider yourself an expert in a particular subject, but may nonetheless function at that level in certain contexts. We will come back to the expertise of the Instructor and its impact -- positive and negative -- on teaching, in the next episode. For now, we are primarily concerned with novices, as this is The Carpentries' primary target audience.

It is common to think of a novice as a sort of an "empty vessel" into which knowledge can be "poured." Unfortunately, this analogy includes inaccuracies that can generate dangerous misconceptions. In our next section, we will briefly explore the nature of "knowledge" through a concept that helps us differentiate between novices and competent practitioners in a more useful and visual way. This, in turn, will have implications for how we teach.

## Building a Mental Model

::::::::::::::::::::::::::::::::::::: testimonial

All models are wrong, but some are useful.

- George Box, statistician

::::::::::::::::::::::::::::::::::::::::::::::::::

Understanding is never a mirror of reality, even for an expert; rather, it is an internal representation based on our experience with a subject. This internal representation is often described as a **mental model**. A mental model allows us to extrapolate, or make predictions beyond and between the narrow limits of experience and memory, filling in gaps to the point that things "make sense."

As we learn, our mental model evolves to become more complex and, most importantly, more useful. A useful model makes reasonable predictions and fits well within the range of things we are likely to encounter. While there will always be inaccuracies -- or "misconceptions" -- these do not interfere with day-to-day functioning. A useful model does not seize up or break down entirely as new concepts are added.

### The power (and limitations) of analogies

Some mental models can be succinctly summarized by comparison to something else that is more universally understood. Good analogies can be extraordinarily useful when teaching, because they draw upon an existing mental model to fill in another, speeding learning and making a memorable connection. However, all analogies have limitations! If you choose to use an analogy, be sure its usefulness outweighs its potential to generate misconceptions that may interfere with learning.

::::::::::::::::::::::::::::::::::::::: challenge

## Analogy Brainstorm

1. Think of an analogy to explore. Perhaps you have a favorite that relates to your area of professional interest, or a hobby. If you prefer to work with an example, consider this common analogy from education: "teaching is like gardening."
2. Share your analogy with a partner or group. (If you have not yet done so, be sure to take a moment to introduce yourself, first!) What does your analogy convey about the topic? How is it useful? In what ways is it wrong?

This activity should take about 10 minutes.
::::::::::::::::::::::::::::::::::::::::::::::::::

::::::::::::::::::::::::::::::::::::::::: callout

## Analogies at Work: "Software Carpentry"

People often ask where our name came from. Greg Wilson has this to say: "Brent Gorda and I came up with the name in 1998 to differentiate what we were teaching from software engineering. That's about digging the Channel Tunnel; we're about the computational equivalent of hanging drywall."

The word "carpentry" acts as a metaphor -- a type of analogy -- inspiring a comparison with something concrete, hands on, practical, and useful. This clearly conveys the purpose of our organization: to support computational skill development among working practitioners who need the right tools and practices to be effective day to day.

::::::::::::::::::::::::::::::::::::::::::::::::::

A mental model may be represented as a collection of concepts and facts, connected by relationships. The mental model of an expert in any given subject will be far larger and more complex than that of a novice, including both more concepts and more detailed and numerous relationships. However, **both may be perfectly useful** in certain contexts. Returning to our example levels of skill development:

- A *novice* has a minimal mental model of surface features of the domain. Inaccuracies based on limited prior knowledge may interfere with adding new information. Predictions are likely to borrow heavily from mental models of other domains which seem superficially similar.
- A *competent practitioner* has a mental model that is useful for everyday purposes. Most new information they are likely to encounter will fit well with their existing model. Even though many potential elements of their mental model may still be missing or wrong, predictions about their area of work are usually accurate.
- An *expert* has a densely populated and connected mental model that is especially good for problem solving. They quickly connect concepts that others may not see as being related. They may have difficulty explaining how they are thinking in ways that do not rely on other features unique to their own mental model.

![](fig/mental_models.svg){alt='Three collections of six circles. The first collection is labeled "Novice" and has only two arrows connecting some of the circles. The second collection, labeled "Competent Practitioner", has six connecting arrows. The third collection, labeled "Expert", is densely connected, with eight connecting arrows.'}

### Mapping a Mental Model

Most people do not naturally visualize a mental model as a diagram of concepts and relationships. Mental models are complicated! Yet, visual representation of concepts and relationships can be a useful way to explore and understand hidden features of a mental model. There are certain ways in which you may routinely use visual organizers, such as flow charts or biochemical pathway diagrams. A more general tool that is useful for exploring any network of concepts and relationships is a **concept map**.

Pioneered for classroom use by Joseph Novak in the 1970s, a concept map asks you to identify which concepts are most relevant to a topic at hand and -- critically -- to identify how they are connected. It can be quite difficult to identify and organize these connections! However, the process of forcing abstract knowledge into a visual format can force you to name connections that you might otherwise have quietly assumed, or illuminate gaps that you may not have been aware of. Especially where analogies are not available, concept mapping can help you to make your mental model of a concept more clear to yourself or others.

As an example, consider a mental model of the relationship between a small ball and water in a full glass. The concept map below illustrates a simple mental model that a young child might develop after putting the ball in the water.

![](fig/ballwater1a.svg){alt='Two words inside rectangles, with labeled arrows connecting them. "Ball" is at the left, with an arrow pointing to "Water", at right, labeled as "Pushes out."'}

Give a child balls of three different sizes, and they might put together a somewhat more complex mental model, perhaps illustrated as:

![](fig/ballwater2a.svg){alt='Four words inside rectangles, with labeled arrows connecting them. "Ball" is at the left, and "Water", at right. "Big Ball" and "Small Ball" are stacked vertically between them. Arrows from "Ball" are labeled "can be MORE" and "can be LESS", and arrows to "Water" are labeled as "Pushes out MORE" and "Pushes out LESS".'}

::::::::::::::::::::::::::::::::::::::: challenge

## Mapping a Mental Model

1) On a piece of paper, draw a simplified concept map of the same concept you discussed in the last activity, but this time without the analogy. What are 3-4 core concepts involved? How are those concepts related? (Note: if you would like to try out an online tool for this exercise, visit [https://excalidraw.com](https://excalidraw.com).)

2) In the Etherpad, write some notes on this process. Was it difficult? Do you think it would be a useful exercise prior to teaching about your topic? What challenges might a novice face in creating a concept map of this kind?

This exercise should take about 5 minutes.

::::::::::::::::::::::::::::::::::::::::::::::::::

## Misconceptions

The mental model above connects a ball to the water it can displace, recognizing that 'more' ball can move 'more' water. This mental model is perfectly functional for a child who wants to have fun splashing water around. It may endure in this way for several years of beaches and bathtubs. However, when this child is asked to predict what would happen to the water if a ball were not bigger or smaller but *heavier* or *lighter*, they will naturally apply their existing mental model to the task. BUT...
![](fig/ballwater3a.svg){alt='A concept map similar to the previous one except with "Heavy Ball" and "Light Ball" in the middle, and a red "X" over the arrows labeled "Pushes out MORE" and "Pushes out LESS"'}

What a surprise! The challenge presented by this new information is that it clashes with the pre-existing mental model, to which it seemed to apply. This prior knowledge needs to be adjusted to a new understanding that incorporates the difference between properties of mass and volume.

![](fig/ballwater4a.svg){alt='A new concept map. "Ball" remains at left, and "Water", at right. "Size" and "Weight" are stacked vertically between them. Arrows from "Ball" share the label "Can have more or less." One arrow from "Size" to "Water" is labeled "Affects pushing of"'}

When mental models break, learning can occur more slowly than you might expect. The longer a prior model was in use, and the more extensively it has to be *unlearned*, the more it can actively interfere with the incorporation of new knowledge. Our child may quickly adapt to this new information if they had never thought much about mass before and were simply trying out an existing mental model on a new situation. However, if they had extensive experience with balls that were both larger and heavier (for example), it may take longer to unlearn what they thought they understood about mass.

Most mental models worth mapping are not so simple. Yet, forcing complex ideas into this simplified format can be useful when preparing to teach, because it forces you to be explicit about exactly what concepts are at the heart of your topic, and to name relationships between them.

### Types of Misconceptions

Correcting learners' misconceptions is at least as important as presenting them with correct information. There are many ways of classifying different types of misconceptions. For our purposes, it is useful to consider 3 broad categories:

- Simple *factual errors*. These exist in isolation from any deeper understanding. These are the easiest to correct. Example: believing that Vancouver is the capital of British Columbia.
- *Broken models*. These occur when inaccuracies explain relationships and generate predictions (often successfully!) in an existing mental model. These take time to address, demanding that learners reason carefully through examples to see contradictions. Examples: believing that motion and acceleration must always be in the same direction, or that seasons are related to the shape of the earth's orbit.
- *Fundamental beliefs*, which are deeply connected to a learner's social identity and are the hardest to change. Examples: "the world is only a few thousand years old" or "human beings cannot affect the planet's climate". "I am not a computational person" may, arguably, also fall into this category of misconception.

The middle category of misconceptions is the most useful type to watch out for in Carpentries workshops. While teaching, we want to expose learners' broken models so that we can help them begin to deconstruct them and build better ones in their place.

::::::::::::::::::::::::::::::::::::::: challenge

## Anticipating Misconceptions

Describe a misconception you have encountered as a teacher or as a learner.

This exercise should take about 5 minutes.

::::::::::::::::::::::::::::::::::::::::::::::::::

## Using Formative Assessment to Identify Misconceptions

In order to effectively root out pre-existing misconceptions that need to be un-learned and stop quietly developing misconceptions in their tracks, an Instructor needs to be actively and persistently looking for them. But how? Like so many challenges we will discuss in this training, the answer is **feedback**. In this case, we want feedback that allows us to **assess** the developing mental model of a trainee in highly specific ways, to verify that learning is proceeding according to plan and not careening off in some unpredicted direction.
We want to get this feedback **while we teach** so that we can respond to that information and adapt our instruction to get learners back on track. This kind of assessment has a name: it is called **formative assessment** because it is applied during learning to form the practice of teaching and the experience of the learner. This is different from exams, for example, which sum up what a participant has learned but are not used to guide further progress and are hence called **summative**.

Feedback from formative assessment illuminates misconceptions for both Instructors and learners. It also provides reassurance on both sides when learning *is* proceeding on track! It is far more reliable than reading faces or using feelings of comfort as a metric, which tends to be what Instructors and learners default to otherwise.

::::::::::::::::::::::::::::::::::::::: challenge

## Formative Assessments

Any instructional tool that generates feedback that is used in a formative way can be described as "formative assessment." Based on your previous educational experience (or even this training so far!) what types of formative assessments do you know about?

Write your answers in the Etherpad; or go around and have each person in the group name one.

This exercise should take about 5 minutes.

::::::::::::::::::::::::::::::::::::::::::::::::::

Formative assessments can serve many purposes other than hunting down misconceptions, such as verifying engagement or supporting memory consolidation. We will discuss some of these functions in later episodes. In this section, we are interested quite narrowly in evaluating mental models.

One example of formative assessment that can be used to tease out misconceptions is the multiple choice question (MCQ). When designed carefully, these can target anticipated misconceptions with surgical precision. For example, suppose we are teaching children multi-digit addition. A well-designed MCQ would be:

```source
Q: what is 27 + 15 ?

a) 42
b) 32
c) 312
d) 33
```

The correct answer is 42, but each of the other answers provides valuable insight.

::::::::::::::::::::::::::::::::::::::: challenge

## Identify the Misconceptions

Choose one wrong answer and write in the Etherpad what misconception is associated with that wrong answer.

This discussion should take about 5 minutes.

::::::::::::::: solution

## Solution

- If the child answers 32, they are throwing away the carry completely.
- If they answer 312, they know that they cannot just discard the carried '1', but do not understand that it is actually a ten and needs to be added into the next column. In other words, they are treating each column of numbers as unconnected to its neighbors.
- If they answer 33 then they know they have to carry the 1, but are carrying it back into the same column it came from.

:::::::::::::::::::::::::

::::::::::::::::::::::::::::::::::::::::::::::::::

Each of these incorrect answers has **diagnostic power**. Each answer looks like it could be right: silly answers like "a fish!" offer therapeutic comedy but do not provide insight; nor do answers that are wrong in random ways. "Diagnostic power" means that each of the wrong choices helps the instructor figure out precisely what misconceptions learners have adopted when they select that choice.

Formative assessments are most powerful when:

1. **all learners** are effectively assessed (not only the most vocal ones!) AND
2. an **instructor responds promptly to the results of the assessment**

An instructor may learn they need to change their pace or review a particular concept. Using formative assessment effectively to discover and address misconceptions is a teaching skill that you can develop with reflective practice.

::::::::::::::::::::::::::::::::::::::: challenge

## Handling Outcomes

Formative assessments allow us as instructors to adapt our instruction to our audience. What options do we have if a majority of the class chooses:

1. mostly one of the wrong answers?
2. mostly the right answer?
3. an even spread among options?

Choose one of the above scenarios and compose a suggested response to it in the Etherpad.

This discussion should take about 5 minutes.

::::::::::::::: solution

## Solution

1. If the majority of the class votes for a single wrong answer, you have a widespread misconception and can stop to examine and correct that misconception.
2. If most of the class votes for the right answer, it is OK to explain the answer and move on. Helpers can make themselves available to assist anyone who still feels uncertain.
3. If answers are pretty evenly split between options, learners may be guessing randomly, reflecting an absent mental model rather than a broken one. In this case it is a good idea to go back to a point where everyone was on the same page.

:::::::::::::::::::::::::

::::::::::::::::::::::::::::::::::::::::::::::::::

Designing a few MCQs with diagnostic power is useful when preparing to teach even if they are never used, for the same reason that concept mapping can be useful: it forces the instructor to think about the learners' mental models and try to anticipate how they might be broken. In short, it helps Instructors to put themselves into the learners' heads and see the topic from their point of view. We will talk more about the process of preparing to teach in a later episode.

## The Importance of Going Slowly

It takes work to actively assess mental models throughout a workshop; this also takes time. This can make Instructors feel conflicted about using formative assessment routinely. However, the need to conduct routine assessment is not the only reason why a workshop **should proceed more slowly than you think**. One key insight from research on cognitive development is that novices, competent practitioners, and experts each need to be taught differently.
In particular, presenting novices with a pile of facts early on is counter-productive, because they do not yet have a model or framework to fit those facts into. In fact, **presenting too many facts too soon can actually reinforce an incorrect mental model**. (This is a key problem with the "empty vessel" analogy described earlier.)

Most learners coming to Carpentries lessons are novices, and do not have a strong mental model of the concepts we are teaching. Thus, our primary goal is **not** to teach the syntax of a particular programming language, but **to help them construct a working mental model** so that they have something to attach facts to. In other words, our goal is to teach people **how to think** about programming and data management in a way that will allow them to learn more easily on their own or understand what they might find online.

::::::::::::::::::::::::::::::::::::: testimonial

If someone feels it is too slow, they will be a bit bored. If they feel it is too fast, they will never come back to programming.

- Kunal Marwaha, SWC Instructor

::::::::::::::::::::::::::::::::::::::::::::::::::

If our goal is to help novices construct an accurate and useful mental model of a new intellectual domain, this will impact our teaching. For example, we principally want to help learners form the right categories and make connections among concepts. We *do not* want to overload them with a slew of unrelated facts, as this will be confusing.

An important practical implication of this latter point is the pace at which we teach. In the first main episode of Software Carpentry's [lesson on the Unix shell][swc-shell-novice], which covers "Navigating Files and Directories", there are only four "commands" for 40 minutes of teaching. Ten minutes per command may seem glacially slow, but that episode's real purpose is to teach learners about paths; later on, they will learn about history, wildcards, pipes and filters, command-line arguments, redirection, and all the other big ideas on which the shell depends, and without which people cannot understand how to use commands.

That mental model of the shell also includes things like:

- Anything you repeat manually, you will eventually get wrong (so let the computer repeat things for you by using tab completion and the `history` command).
- Lots of little tools, combined as needed, are more productive than a handful of programs. (This motivates the pipe-and-filter model.)

These two examples illustrate something else as well. Learning consists of more than "just" adding information to mental models; creating linkages between concepts and facts is at least as important. Telling people that they should not repeat things, and that they should try to think (by analogy) in terms of little pieces loosely joined, both set the stage for discussing functions. Explicitly referring back to pipes and filters in the shell when introducing functions helps solidify both ideas.

::::::::::::::::::::::::::::::::::::::::: callout

## Meeting Learners Where They Are

One of the strengths of Carpentries workshops is that we meet learners *where they are*. Carpentries Instructors strive to help learners progress from whatever starting point they happen to be at, without making anyone feel inferior about their current practices or skillsets. We do this in part by teaching relevant and useful skills, building an inclusive learning environment, and continually getting (and paying attention to!) feedback from learners. We will be talking in more depth about each of these strategies as we go forward in our workshop.

::::::::::::::::::::::::::::::::::::::::::::::::::

[wikipedia-dreyfus-skill]: https://en.wikipedia.org/wiki/Dreyfus_model_of_skill_acquisition
[nurses-dreyfus]: https://journals.sagepub.com/doi/10.1177/0270467604265061
[Benner-dreyfus]: https://www.worldcat.org/search?q=au%3ABenner%2C+Patricia+E.
[swc-shell-novice]: https://swcarpentry.github.io/shell-novice/

:::::::::::::::::::::::::::::::::::::::: keypoints

- Our goal when teaching novices is to help them construct useful mental models.
- Exploring our own mental models can help us prepare to convey them.
- Constructing a useful mental model requires practice and corrective feedback.
- Formative assessments provide practice for learners and feedback to learners and instructors.

::::::::::::::::::::::::::::::::::::::::::::::::::
55.995992
594
0.756317
eng_Latn
0.999856
26f8b7aac3911cd3291d9ebe90236c660afeef91
10
md
Markdown
book/steps/finish.md
ersilia-os/cidrz-e2e-linkage
840581cdb90617f3ceb1be898992f0a8df71f9e3
[ "MIT" ]
null
null
null
book/steps/finish.md
ersilia-os/cidrz-e2e-linkage
840581cdb90617f3ceb1be898992f0a8df71f9e3
[ "MIT" ]
null
null
null
book/steps/finish.md
ersilia-os/cidrz-e2e-linkage
840581cdb90617f3ceb1be898992f0a8df71f9e3
[ "MIT" ]
null
null
null
# Finish
3.333333
8
0.6
eng_Latn
1.000008
26f94afdb43d69509560184244a611dc9b95db51
34
md
Markdown
README.md
MRBDARK/its-me-d4rk-
9dac6444a6b1dc7b39067e935953e1554de9dbbb
[ "Apache-2.0" ]
null
null
null
README.md
MRBDARK/its-me-d4rk-
9dac6444a6b1dc7b39067e935953e1554de9dbbb
[ "Apache-2.0" ]
null
null
null
README.md
MRBDARK/its-me-d4rk-
9dac6444a6b1dc7b39067e935953e1554de9dbbb
[ "Apache-2.0" ]
null
null
null
WELCOME TO IS ME D4RK GIT WORLD
17
33
0.735294
kor_Hang
0.847627
26f99f0f2c697b1e6a6af0eef04fb702341cdfdb
4,020
md
Markdown
_posts/2021-04-29-Narragansett-Bay-Adult-Oyster-DNA-Preparation-for-Sonication---Part-2.md
amyzyck/AmyZyck_Notebook
d0cc68b63f8955a3f0ce1b8d83050d7fe9617ea1
[ "MIT" ]
null
null
null
_posts/2021-04-29-Narragansett-Bay-Adult-Oyster-DNA-Preparation-for-Sonication---Part-2.md
amyzyck/AmyZyck_Notebook
d0cc68b63f8955a3f0ce1b8d83050d7fe9617ea1
[ "MIT" ]
null
null
null
_posts/2021-04-29-Narragansett-Bay-Adult-Oyster-DNA-Preparation-for-Sonication---Part-2.md
amyzyck/AmyZyck_Notebook
d0cc68b63f8955a3f0ce1b8d83050d7fe9617ea1
[ "MIT" ]
null
null
null
---
layout: post
title: Narragansett Bay Adult Oyster DNA Preparation for Sonication - Part 2
date: '2021-04-29'
categories: Processing, Protocols
tags: [Narragansett Bay, Crassostrea virginica, oyster, Bead clean, DNA]
projects: [Narragansett Bay]
---

### 1X Bead clean to concentrate DNA into smaller volume before sonication

Completed April 19, 2021

Before sonicating the DNA, a few samples need to have the DNA concentrated into a smaller volume. This is necessary because some samples had too little DNA to achieve 500 ng of DNA in 51 ul (volume needed for sonication).

|Sample|Description|Volume for 500 ng (ul)|Volume of Beads (ul)|
|----|----|----|----|
|GHP_2|Third extraction E2|60|60|
|MCD_10|Third extraction E2 gill tissue|150|150|

- _Made fresh 80% EtOH_
- _Took KAPA Pure Beads out of fridge beforehand to warm to room temp for about 30 minutes_
- Vortex and spin down DNA samples
- For sample MCD_10, add 3.2G sample volume (all) to M & G E2 combined sample (made previously), for a total volume of 294 uL
- In new 1.5 mL tubes, add appropriate volume of sample (see column 3 in table above)
- Add appropriate volume of KAPA Pure Beads (see column 4 in table above) to each sample and pipette up and down 10 times to mix (_avoid bubbles_)
- Place tubes on shaker at room temp for 15 minutes - shaker set to 200 rpm
- After the 15 minute incubation, place tubes on magnet plate and remove supernatant from tubes once it is fully clear, without disturbing the beads
- Dispose of liquid in liquid waste beaker
- Add 200 μl of 80% EtOH to each tube while still on the magnet, without disturbing the beads
- Remove supernatant from each tube on the magnet plate without disturbing the beads
- Add 200 μl of 80% EtOH to each tube while still on the magnet, without disturbing the beads
- Remove ALL the supernatant from each tube on the magnet plate without disturbing the beads. Extra EtOH blobs were removed with p20 pipette tips
- Resuspend beads in 20 μl of 10 mM Tris HCl pH 8 for sample GHP_2 and 51 uL for sample MCD_10
- Incubate tubes at room temp on shaker for 5 minutes
- Place tubes on magnet plate and transfer supernatant when clear to new labeled 1.5 mL tubes

*Some samples needed a little more DNA to reach 500 ng, so I took DNA from other extractions/elutions (within the same sample) to reach the target amount.*

|Sample|Sample Type 1 Description|Sample Type 2 Description|Type 2 Volume to add for 500 ng (ul)|Total Volume after 1 uL taken for Qubit (ul)|
|----|----|----|----|----|
|NAR_3|Reconcentrated DNA in 30 uL|Extraction 1 E2|12|41|
|NAR_5|Reconcentrated DNA in 30 uL|Extraction 2 E1|2.5|31.5|
|NAR_6|Reconcentrated DNA in 30 uL|Extraction 1 E1|2.7|31.7|
|NAR_7|Reconcentrated DNA in 30 uL|Extraction 2 E1|3|32|
|GHP_1|Reconcentrated DNA in 30 uL|Extraction 1 E1|3.75|32.75|
|GHP_2|Reconcentrated DNA in 30 uL|Extraction 3 E2|20|49|
|GHP_3|Reconcentrated DNA in 30 uL|Extraction 1 E2|21|50|

- Added volume of Type 2 (column 4) to Type 1 tube (column 2)
- Vortex and spin down

#### Qubit dsDNA BR assay

- Followed [Qubit protocol for BR DNA](https://meschedl.github.io/MESPutnam_Open_Lab_Notebook/Qubit-Protocol/)

|Sample|Avg ng/μl|
|----|----|
|Std 1|170 RFU|
|Std 2|20487 RFU|
|NAR_3|12.7|
|NAR_5|20.0|
|NAR_6|20.6|
|NAR_7|21.6|
|GHP_1|16.3|
|GHP_2|7.78|
|GHP_3|6.82|
|MCD_10|Too Low|

Confirming that I have enough DNA in these samples:

|Sample|Qubit concentration (ng/uL)|Vol. of Tris (uL)|Total DNA (ng)|
|----|----|----|----|
|NAR_3|12.7|41|520.7|
|NAR_5|20.0|31.5|630|
|NAR_6|20.6|31.7|653.02|
|NAR_7|21.6|32|691.2|
|GHP_1|16.3|32.75|533.83|
|GHP_2|7.78|49|381.22|
|GHP_3|6.82|50|341|
|MCD_10|Too Low|46|Too Low|

GHP_2 and GHP_3 are too low, so adding more DNA:

|Sample|Type to add|Qubit conc. (ng/uL)|Vol. of sample to add (uL)|Total volume (uL)|
|----|----|----|----|----|
|GHP_2|Extraction 3 E1|144|2|51|
|GHP_3|Extraction 1 E1|180|1|51|

The MCD_10 sample DNA concentration is too low, so I will proceed using DNA from Extraction 2 E1 for this sample.

Ready to move on to sonication.
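The total-DNA check in the tables above is just Qubit concentration multiplied by elution volume, compared against the 500 ng needed for sonication. A minimal sketch of that arithmetic (sample names and values copied from the tables above; the script itself is not part of the original protocol):

```python
# Total DNA (ng) = Qubit concentration (ng/uL) x elution volume (uL).
# 500 ng is the amount needed for sonication, per the protocol above.
TARGET_NG = 500

def total_dna_ng(conc_ng_per_ul, volume_ul):
    """Return total DNA in ng from a concentration and a volume."""
    return conc_ng_per_ul * volume_ul

# (concentration ng/uL, volume uL) pairs from the Qubit table
samples = {
    "NAR_3": (12.7, 41),
    "NAR_5": (20.0, 31.5),
    "GHP_2": (7.78, 49),
    "GHP_3": (6.82, 50),
}

for name, (conc, vol) in samples.items():
    ng = total_dna_ng(conc, vol)
    status = "OK" if ng >= TARGET_NG else "too low, add more DNA"
    print(f"{name}: {ng:.2f} ng ({status})")
```

Running this reproduces the "Total DNA (ng)" column and flags GHP_2 and GHP_3 as below target, matching the decision to spike in extra extract.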
44.175824
221
0.736318
eng_Latn
0.948358
26f9cf4b78c876e7c11ceb1c1eabd415ec8807e4
402
md
Markdown
_members/koczanP.md
AppertaFoundation/apperta.org
bafb42e28a360923300af7c4fab7c84cec7b358a
[ "MIT" ]
2
2019-09-22T09:34:28.000Z
2021-11-19T20:11:19.000Z
_members/koczanP.md
AppertaFoundation/apperta.org
bafb42e28a360923300af7c4fab7c84cec7b358a
[ "MIT" ]
null
null
null
_members/koczanP.md
AppertaFoundation/apperta.org
bafb42e28a360923300af7c4fab7c84cec7b358a
[ "MIT" ]
6
2018-12-10T12:52:27.000Z
2020-12-26T01:49:39.000Z
---
name: Phil Koczan
photo: '/img/Phil.jpg'
title: GP Partner and Chief Clinical Information Officer.
bio: Phil has been a GP Partner for over twenty years, and a Chief Clinical Information Officer for five of those. As a member of the Royal College of GPs Health Informatics Group, he has a long-standing interest in the uses of information and technology to integrate and transform patient care.
---
67
295
0.78607
eng_Latn
0.995562
26f9d443df916d5ff5bf79592b96e4d350c0a1cb
391
md
Markdown
_posts/2021-07-20-Just-turned-19-wanna-be-the-first-to-taste-20210720025828341194.md
ipussy/ipussy.github.io
95d19a74e38bb54303cf18057a99a57c783e76bf
[ "Apache-2.0" ]
null
null
null
_posts/2021-07-20-Just-turned-19-wanna-be-the-first-to-taste-20210720025828341194.md
ipussy/ipussy.github.io
95d19a74e38bb54303cf18057a99a57c783e76bf
[ "Apache-2.0" ]
null
null
null
_posts/2021-07-20-Just-turned-19-wanna-be-the-first-to-taste-20210720025828341194.md
ipussy/ipussy.github.io
95d19a74e38bb54303cf18057a99a57c783e76bf
[ "Apache-2.0" ]
null
null
null
--- title: "Just turned 19, wanna be the first to taste?" metadate: "hide" categories: [ God Pussy ] image: "https://preview.redd.it/asw5ns0pu7c71.jpg?auto=webp&s=6fc08a2ec6ed3c429f5fa05cffcbdaf7fd0142f9" thumb: "https://preview.redd.it/asw5ns0pu7c71.jpg?width=1080&crop=smart&auto=webp&s=80d699eefd6051185543fba3c683f427e0e7815c" visit: "" --- Just turned 19, wanna be the first to taste?
39.1
125
0.772379
eng_Latn
0.220624
26fa9912a947032d11eb0b82c2a77cec5c553aeb
356
md
Markdown
_project/funny-but-functional-especially-when-kids-r-outside-playing-cant-hear-me-yell-for-them.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
_project/funny-but-functional-especially-when-kids-r-outside-playing-cant-hear-me-yell-for-them.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
_project/funny-but-functional-especially-when-kids-r-outside-playing-cant-hear-me-yell-for-them.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
--- layout: project_single title: "Funny but functional especially when kids r outside playing & can't hear me yell for them!!!" slug: "funny-but-functional-especially-when-kids-r-outside-playing-cant-hear-me-yell-for-them" parent: "lake-cottage-style-house" --- Funny but functional especially when kids r outside playing & can't hear me yell for them!!!
50.857143
102
0.764045
eng_Latn
0.995935
26fba9dcb04136356d56673ce47a251fcfa52f56
2,939
md
Markdown
_posts/2016-09-17-ordered-lists-and-data-integrity.md
jamesrhea/piedocs
1f834ccfd9ca93b7ea8058d19b967081f94f7985
[ "MIT" ]
null
null
null
_posts/2016-09-17-ordered-lists-and-data-integrity.md
jamesrhea/piedocs
1f834ccfd9ca93b7ea8058d19b967081f94f7985
[ "MIT" ]
null
null
null
_posts/2016-09-17-ordered-lists-and-data-integrity.md
jamesrhea/piedocs
1f834ccfd9ca93b7ea8058d19b967081f94f7985
[ "MIT" ]
null
null
null
--- date: 2016-09-17T12:00:00 draft: false tags: [technical writing] title: Ordered Lists and Data Integrity summary: WYSIWYG editors usually allow you to create ordered lists. But you should avoid letting your editor break your list's integrity. --- WYSIWYG editors usually allow you to create ordered lists. In HTML, ordered lists look like this: ``` html <ol> <li>Buy ice cream</li> <li>Open ice cream</li> <li>Eat ice cream</li> </ol> ``` But imagine you want to add a tip after the "buy" step. In your editor, you might add a line after the first list item, add your tip, and then start the list number where you left off. Here's one way the HTML might look after doing that: ``` html <ol> <li>Buy ice cream</li> </ol> <div class="tip">Bring a cooler!</div> <ol start="2"> <li>Open ice cream</li> <li>Eat ice cream</li> </ol> ``` Here's what that looks like: <ol> <li>Buy ice cream</li> </ol> <div style="margin: -1em 0 -1em 3em; padding-left: .5em; background: aliceblue; font-style: italic;"><p>Bring a cooler!</p></div> <ol start="2"> <li>Open ice cream</li> <li>Eat ice cream</li> </ol> Perfect, right? Unfortunately, although this might look okay in an editor or even in the resulting web output, the data integrity of the list is broken. When you look at the content in a WYSIWYG editor, what you _appear_ to have is a single list with a tip connected to the first list item. But what you _actually_ have is two unrelated lists with an unrelated tip between them. Why is this a problem? Consider what happens if you decide you forgot a step. 
You add one: ``` html <ol> <li>Find ice cream</li> <li>Buy ice cream</li> </ol> <div class="tip">Bring a cooler!</div> <ol start="2"> <li>Open ice cream</li> <li>Eat ice cream</li> </ol> ``` Output: <ol> <li>Find ice cream</li> <li>Buy ice cream</li> </ol> <div style="margin: -1em 0 -1em 3em; padding-left: .5em; background: aliceblue; font-style: italic;"><p>Bring a cooler!</p></div> <ol start="2"> <li>Open ice cream</li> <li>Eat ice cream</li> </ol> Now there are two number twos and no number four. Instead, a better approach is to integrate the tip into the list item it's associated with. Then let the ordered list auto-number all the way through: ``` html <ol> <li>Find ice cream</li> <li>Buy ice cream <div class="tip">Bring a cooler!</div> </li> <li>Open ice cream</li> <li>Eat ice cream</li> </ol> ``` Output: <ol> <li>Find ice cream</li> <li>Buy ice cream <div style="margin-left: 1em; padding-left: .5em; background: aliceblue; font-style: italic;">Bring a cooler!</div></li> <li>Open ice cream</li> <li>Eat ice cream</li> </ol> This approach prevents accidental misnumbering. It also makes it possible to do potentially useful things, such as querying your content set to find out the average number of steps in your procedures.
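A list that has been split apart this way can even be detected mechanically; here is a small sketch (Python stdlib only, hypothetical class name) that counts `<ol>` tags restarting their numbering via a `start` attribute — the telltale sign of the broken pattern:

```python
from html.parser import HTMLParser

class SplitListDetector(HTMLParser):
    """Counts <ol> tags that restart numbering with a start attribute --
    a strong hint that one logical list has been split into pieces."""
    def __init__(self):
        super().__init__()
        self.split_lists = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs with lowercase names
        if tag == "ol" and any(name == "start" for name, _ in attrs):
            self.split_lists += 1

detector = SplitListDetector()
detector.feed('<ol><li>Buy ice cream</li></ol>'
              '<div class="tip">Bring a cooler!</div>'
              '<ol start="2"><li>Open ice cream</li></ol>')
print(detector.split_lists)  # -> 1
```

Running this over a content set flags every page whose ordered lists were broken apart by a WYSIWYG editor.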
29.686869
402
0.678122
eng_Latn
0.976422
26fc1ecfa019a34505bf968d7fe6c1fc7b6afb3e
1,066
md
Markdown
_posts/2020-10-19-tote-bag.md
pfitzsimons/pat-print
c0b3ea0098c2330ab729632fd4a2d96391295227
[ "MIT" ]
null
null
null
_posts/2020-10-19-tote-bag.md
pfitzsimons/pat-print
c0b3ea0098c2330ab729632fd4a2d96391295227
[ "MIT" ]
null
null
null
_posts/2020-10-19-tote-bag.md
pfitzsimons/pat-print
c0b3ea0098c2330ab729632fd4a2d96391295227
[ "MIT" ]
null
null
null
--- layout: post title: "Tote Bag" image: assets/images/tote-bag/tote-bag3.jpg description: "Tote bag" featured: false --- Price: <b>£6</b> Examples: <a data-fancybox="gallery1" href="/assets/images/tote-bag/tote-bag1.jpg"><img src="/assets/images/tote-bag/tote-bag1.jpg" width="130" height="130"></a> <a data-fancybox="gallery1" href="/assets/images/tote-bag/tote-bag2.jpg"><img src="/assets/images/tote-bag/tote-bag2.jpg" width="130" height="130"></a> <a data-fancybox="gallery1" href="/assets/images/tote-bag/tote-bag3.jpg"><img src="/assets/images/tote-bag/tote-bag3.jpg" width="130" height="130"></a> <a data-fancybox="gallery1" href="/assets/images/tote-bag/tote-bag4.jpg"><img src="/assets/images/tote-bag/tote-bag4.jpg" width="130" height="130"></a> <a data-fancybox="gallery1" href="/assets/images/tote-bag/tote-bag5.jpg"><img src="/assets/images/tote-bag/tote-bag5.jpg" width="130" height="130"></a> <a data-fancybox="gallery1" href="/assets/images/tote-bag/tote-bag6.jpg"><img src="/assets/images/tote-bag/tote-bag6.jpg" width="130" height="130"></a>
56.105263
151
0.711069
dan_Latn
0.039007
26fc978d6fe694717a45ce2370e6e646d7090301
2,416
md
Markdown
README.md
chvanikoff/cowboyd
752e171a851a63354f51a38acbc10499f950275e
[ "MIT" ]
2
2018-11-29T01:49:25.000Z
2019-05-16T23:42:21.000Z
README.md
DavidAlphaFox/cowboyd
752e171a851a63354f51a38acbc10499f950275e
[ "MIT" ]
null
null
null
README.md
DavidAlphaFox/cowboyd
752e171a851a63354f51a38acbc10499f950275e
[ "MIT" ]
2
2018-11-29T01:49:32.000Z
2019-05-16T23:42:22.000Z
# CowboyD ## What is it all about? CowboyD is an attempt to skip the part of web-app development where you embed Cowboy. Normally you would create an application and write some code like ```erlang start(_Type, _Args) -> Routes = [ {'_', [ {"/", handler_index, []}, {'_', handler_notfound, []} ]} ], Nba = 100, Port = 8008, {ok, _Pid} = cowboy:start_http(http_listener, Nba, [{port, Port}], [ {env, [{dispatch, Routes}]} ]), ok. ``` You would also need to include Cowboy as a dependency and start it. And don't forget a code reloading tool - you'll probably need one too. Not hard, but these are routine actions, and CowboyD lets you skip them and focus on writing your application. You just write your application, create a routes config file and some Cowboy handlers. When finished, you just run CowboyD: ```bash cowboyd start appname /path/to/appname 8008 100 ``` and enjoy the results at localhost:8008. You can update your code and it'll be reloaded on the fly. Routes updating is a little more complicated - no magic here yet, so you have to run manually: ```bash cowboyd routes-update appname ``` Of course you can run multiple instances of CowboyD - multiple instances of the Erlang VM will be launched (this part will probably change, with a pool of running apps inside one VM). 
## Installation ```bash # cd somewhere you'd like to download the project to cd ~/github_projects # Clone the repo git clone https://github.com/chvanikoff/cowboyd # Give execution rights to cowboyd if it doesn't have them chmod +x cowboyd/cowboyd # Link CowboyD to some executable directory in your $PATH, for example /usr/bin sudo ln -s cowboyd/cowboyd /usr/bin/cowboyd ``` ## Usage After you've created a webapp ([example](https://github.com/chvanikoff/cowboyd/tree/master/examples/webapp)) you can start/stop CowboyD and update routes for currently running applications: ```bash # start application cowboyd start <appname> <path> <port> [<nba>] # stop application cowboyd stop <appname> # update application routes cowboyd routes-update <appname> # start Erlang shell for the application cowboyd shell <appname> ``` **appname**: *string*, the name of your application **path**: *string*, path to the root of your application **port**: *integer*, port Cowboy will listen to **nba**: *integer*, optional (default is 10) number of non-blocking acceptors for Cowboy
36.059701
388
0.741308
eng_Latn
0.991751
26fcff424fd61b66fb9b65b5544ece49264bb969
29
md
Markdown
README.md
beer-garden/plugin
ff57e2402be04ccb24b86ebf071e11d96f9ef3e9
[ "MIT" ]
null
null
null
README.md
beer-garden/plugin
ff57e2402be04ccb24b86ebf071e11d96f9ef3e9
[ "MIT" ]
null
null
null
README.md
beer-garden/plugin
ff57e2402be04ccb24b86ebf071e11d96f9ef3e9
[ "MIT" ]
null
null
null
# plugin Cookiecutter Plugin
9.666667
19
0.827586
kor_Hang
0.866528
f800deb38cac440650f477282ff29ac8d34cb868
1,305
md
Markdown
Markdown/02000s/04000/consideration carrying.md
rcvd/interconnected-markdown
730d63c55f5c868ce17739fd7503d562d563ffc4
[ "MIT" ]
2
2022-01-19T09:04:58.000Z
2022-01-23T15:44:37.000Z
Markdown/00500s/03500/consideration carrying.md
rcvd/interconnected-markdown
730d63c55f5c868ce17739fd7503d562d563ffc4
[ "MIT" ]
null
null
null
Markdown/00500s/03500/consideration carrying.md
rcvd/interconnected-markdown
730d63c55f5c868ce17739fd7503d562d563ffc4
[ "MIT" ]
1
2022-01-09T17:10:33.000Z
2022-01-09T17:10:33.000Z
- Stronger somewhat and yet house him are. The tree would they constitutional prince and refused delete. Their and had works used wert little. His in look tone up all is reality. More deny was such happen few. To work to had all throw and. But it line the grateful communion was to. Sentence on of duty parents saw the. Speak port of be i not rock. For Mississippi her the her Gutenberg. That firm to aided that their be way. Virgin we chamber not i. At the solemn to or marks to. Are do so by borrow of at the. Of in such and and and be. Was him clubs lively swelling. Of death feel the lady to wind voice. From a half father have. Is have himself every was was. Which married me to her without. Horse some and the term to. From had more death you was animal. Began corpse innumerable the gave. For our the have its. Far pocket be on what do. One into but him our was bill. Were sleep of form every the. To to attention them ask rapidity clear the. At old jack matter woman such number time. Which had of any better. To came confused should am know be of. Flew convenient mountain for regarded that any Texas. The to best give may it. His parting [[quality]] there bids us chair. And at see with produced marked which. [[smoke]] gift that them irritated [[rapidly legs]]. A horse of his girl painted him.
1,305
1,305
0.770115
eng_Latn
0.999983
f80112d6625d46639daa2765910b18821aa95a86
174
md
Markdown
README.md
columbiaspace/HAB-2018
126e2ae186a0eaa52401e74c3c0a46c47f6c9fa0
[ "MIT" ]
1
2018-02-04T22:15:28.000Z
2018-02-04T22:15:28.000Z
README.md
columbiaspace/HAB-2018
126e2ae186a0eaa52401e74c3c0a46c47f6c9fa0
[ "MIT" ]
null
null
null
README.md
columbiaspace/HAB-2018
126e2ae186a0eaa52401e74c3c0a46c47f6c9fa0
[ "MIT" ]
1
2018-09-28T19:16:10.000Z
2018-09-28T19:16:10.000Z
# HAB-2018 Codebase for the CSI 2017-2018 HAB mission # MPL3115A2 For the altitude sensor: make a new directory in the Arduino libraries folder and put the header file in it.
29
107
0.781609
eng_Latn
0.959636
f8015f339c424ca6c7619b497275e69d2d784879
2,087
md
Markdown
docs/quarkus.md
redhatspain/redhatspain.github.io
73aef4ce0a6f57b1932614704ea4a396fec4b113
[ "Apache-2.0" ]
2
2020-08-25T21:24:19.000Z
2020-12-04T04:07:47.000Z
docs/quarkus.md
redhatspain/redhatspain.github.io
73aef4ce0a6f57b1932614704ea4a396fec4b113
[ "Apache-2.0" ]
null
null
null
docs/quarkus.md
redhatspain/redhatspain.github.io
73aef4ce0a6f57b1932614704ea4a396fec4b113
[ "Apache-2.0" ]
3
2020-11-19T04:59:35.000Z
2021-03-20T10:16:11.000Z
# Quarkus - [quarkus.io](https://quarkus.io/) Quarkus is a Kubernetes-native Java stack that is crafted from best-of-breed Java libraries and standards, and tailored for containers and cloud deployments - [quarkus.io: Quarkus for Spring Developers](https://quarkus.io/blog/quarkus-for-spring-developers/) - [redhat.com: Red Hat drives future of Java with cloud-native, container-first Quarkus](https://www.redhat.com/en/blog/red-hat-drives-future-java-cloud-native-container-first-quarkus) - [developers.redhat.com: Quarkus: A quick-start guide to the Kubernetes-native Java stack](https://developers.redhat.com/articles/quarkus-quick-start-guide-kubernetes-native-java-stack/) - [quarkus.io: Quarkus support in IDE's](https://quarkus.io/blog/march-of-ides/) - [dzone: quarkus refcard](https://dzone.com/refcardz/quarkus-1) - [dzone: Build a Java REST API With Quarkus](https://dzone.com/articles/build-a-java-rest-api-with-quarkus) - [developers.redhat.com: Autowire MicroProfile into Spring with Quarkus](https://developers.redhat.com/blog/2019/10/02/autowire-microprofile-into-spring-with-quarkus/) - [dmcommunity.org: Who will win? Spring Boot or Quarkus](https://dmcommunity.org/2020/01/12/who-will-win-spring-boot-or-quarkus/) - [dzone.com: Microservices: Quarkus vs. Spring Boot](https://dzone.com/articles/microservices-quarkus-vs-spring-boot) In the era of containers (the "Docker Age"), Java is still alive, whether it struggles or not. Who will win: Spring Boot or Quarkus? 
- [developers.redhat.com: How Quarkus brings imperative and reactive programming together](https://developers.redhat.com/blog/2019/11/18/how-quarkus-brings-imperative-and-reactive-programming-together/) - [developers.redhat.com: Migrating a Spring Boot microservices application to Quarkus](https://developers.redhat.com/blog/2020/04/10/migrating-a-spring-boot-microservices-application-to-quarkus/) - [Quarkus, a Kubernetes-native Java runtime, now fully supported by Red Hat](https://developers.redhat.com/blog/2020/05/28/quarkus-a-kubernetes-native-java-runtime-now-fully-supported-by-red-hat/)
149.071429
256
0.787254
eng_Latn
0.36981
f802787e82d50ae037e07ce79a93c03ed132af16
2,208
md
Markdown
_pages/home.md
hemangchawla/hemangchawla.github.io
c3b3d2f1df3acace91f763fe4fe42a88105eabf5
[ "MIT" ]
null
null
null
_pages/home.md
hemangchawla/hemangchawla.github.io
c3b3d2f1df3acace91f763fe4fe42a88105eabf5
[ "MIT" ]
null
null
null
_pages/home.md
hemangchawla/hemangchawla.github.io
c3b3d2f1df3acace91f763fe4fe42a88105eabf5
[ "MIT" ]
null
null
null
--- title: "Hemang Chawla - Home" layout: homelay excerpt: "Hemang Chawla -- Home" sitemap: false permalink: / --- <div markdown="0" id="carousel" class="carousel slide" data-ride="carousel" data-interval="5000" data-pause="hover" > <!-- Menu --> <!-- <ol class="carousel-indicators"> <li data-target="#carousel" data-slide-to="0" class="active"></li> </ol> --> <!-- Items --> <div class="carousel-inner" markdown="0"> <div class="item active"> <center> <img src="{{ site.url }}{{ site.baseurl }}/images/hemang.jpg" width="70%" height="70%" alt="Hemang Chawla 2020" /> </center> </div> </div> <!-- <a class="left carousel-control" href="#carousel" role="button" data-slide="prev"> <span class="glyphicon glyphicon-chevron-left" aria-hidden="true"></span> <span class="sr-only">Previous</span> </a> <a class="right carousel-control" href="#carousel" role="button" data-slide="next"> <span class="glyphicon glyphicon-chevron-right" aria-hidden="true"></span> <span class="sr-only">Next</span> </a> --> </div> Namaste, I am Hemang Chawla. I am a Computer Vision Research Engineer at the [Advanced Research Lab, Navinfo Europe](https://www.navinfo.eu/artificial-intelligence.html). My research interests focus on the synergistic integration of 3D geometry with deep learning. Within the AI Research lab, I have worked on semantic SLAM, 3D vision, crowdsourced mapping and camera auto-calibration. Prior to joining NavInfo, I worked on problems in path planning, control, and SLAM for healthcare and cleaning service robot startups. I have a postgraduate in [Robotics](https://tudelftroboticsinstitute.nl/) from [Delft University of Technology](https://www.tudelft.nl/), where I worked on mobile manipulation under [Dr. Martijn Wisse](https://scholar.google.nl/citations?hl=en&user=ddu5MKwAAAAJ). During my bachelors from [BITS Pilani](https://www.bits-pilani.ac.in/), I had the opportunity to work with [Dr. 
Bijay Rout](https://scholar.google.nl/citations?user=BH13o4YAAAAJ) at the [Center for Robotics and Intelligent Systems](https://www.bits-pilani.ac.in/pilani/centreforrobotics/Home).
55.2
693
0.693841
eng_Latn
0.582304
f802ade0bdbdbfda304e2dbfe7fddc5f185bddf1
3,964
md
Markdown
docs/Kuberhealthy-2.1.0-Release.md
ChrisHirsch/kuberhealthy
0ffc4373dac169757f0db11bba3a6ea2331a6e40
[ "Apache-2.0" ]
1
2020-10-07T16:41:09.000Z
2020-10-07T16:41:09.000Z
docs/Kuberhealthy-2.1.0-Release.md
ChrisHirsch/kuberhealthy
0ffc4373dac169757f0db11bba3a6ea2331a6e40
[ "Apache-2.0" ]
null
null
null
docs/Kuberhealthy-2.1.0-Release.md
ChrisHirsch/kuberhealthy
0ffc4373dac169757f0db11bba3a6ea2331a6e40
[ "Apache-2.0" ]
null
null
null
# Kuberhealthy 2.1.0 Release - Check Reaper and Bug Fixes Galore Last November at KubeCon San Diego 2019, we announced the release of [Kuberhealthy 2.0.0](https://www.youtube.com/watch?v=aAJlWhBtzqY) - transforming Kuberhealthy into a Kubernetes operator for synthetic monitoring. This new ability granted developers the means to create their own Kuberhealthy check containers to monitor their applications and clusters. The community was quick to adopt this new feature and we're grateful for everyone who implemented and tested Kuberhealthy 2.0.0 in their clusters. Thanks to all of you who reported issues and contributed to discussions on the #kuberhealthy Slack channel. We set to work to address all your feedback and today we're excited to announce the release of Kuberhealthy 2.1.0! #### Check Reaper <img align="center" src="https://github.com/Comcast/kuberhealthy/raw/master/images/kuberhealthy-check-reaper.gif"> With the initial 2.0.0 release, each time an external check finished running and reported back to Kuberhealthy, the checker pod would only remain visible in Kubernetes until the next time the check ran. For checks that had shorter run intervals such as the DNS Status check or the Pod Status check, Kubernetes operators weren't given enough time to investigate failed check logs after being alerted. The team decided to retain checker pods and implement a check reaper cron job that deletes 'Completed' or 'Failed' Kuberhealthy checker pods older than a certain time period. Users are now given much more time to investigate failed check runs without having to worry about the pod in question being cleaned up too quickly. #### Squashing Bugs Some of our more complicated checks such as the deployment check (a synthetic test that ensures that a Kubernetes deployment and service can be created, provisioned, and serve traffic within the Kubernetes cluster) required a small refactor to ensure that it properly detects when its resources have been cleaned up. 
Other checks, such as the Pod Status check and the Pod Restarts check were refactored to run more ephemerally and more often. These two checks now also exclude Kuberhealthy checker pods in order to prevent duplicate failure reporting. Additionally, the daemonset check got a PR merge from a community user, addressing its use of deprecated API endpoints. This check now runs in the newest versions of Kubernetes as expected. Community users also reported issues with deploying Kuberhealthy onto their clusters with the Helm chart. As of now, the Helm chart installed by default by Helm ("helm/charts/kuberhealthy") should not be used and has been removed from the README. Until we register a private Helm repository upstream, we recommend using the flat files specified in the project readme to install Kuberhealthy. This is being tracked in [issue #288](https://github.com/Comcast/kuberhealthy/issues/288). #### Kuberhealthy State Run Duration <img align="center" src="https://github.com/Comcast/kuberhealthy/raw/master/images/kuberhealthy-json.png"> Kuberhealthy has an integration with Prometheus, giving users the ability to capture Kuberhealthy server state as well as check metrics. The 2.1.0 release has added a new metric that captures the most recent run duration of each check. This is helpful for identifying unexpected slowdowns in check execution. #### Namespace filtering <img align="center" src="https://github.com/Comcast/kuberhealthy/raw/master/images/kuberhealthy-ns-filter.png"> For our JSON status page output that’s available to view all checks’ current state, we added namespace filtering with the `GET` variable `namespace`. To view checks’ states from multiple namespaces, add commas to separate namespaces. For example: `?namespace=kuberhealthy,kube-system`. Thanks again to everyone in the community for helping us with our 2.1.0 release and we hope to keep hearing even more feedback from you soon!
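A monitoring script can consume that namespace-filtered JSON state directly; a hedged sketch (the in-cluster service URL and the `CheckDetails`/`OK` field names are assumptions about the status page schema — verify them against your own deployment's output):

```python
import json
from urllib.request import urlopen

# Assumed in-cluster service address with the namespace filter applied;
# adjust to your deployment.
STATUS_URL = "http://kuberhealthy.kuberhealthy.svc/?namespace=kuberhealthy,kube-system"

def failed_checks(state: dict) -> list:
    """Names of checks whose most recent run reported OK == False."""
    return [name for name, check in state.get("CheckDetails", {}).items()
            if not check.get("OK", True)]

def fetch_state(url: str = STATUS_URL) -> dict:
    """Fetch and decode the Kuberhealthy JSON status page."""
    with urlopen(url) as resp:
        return json.load(resp)
```

From there, `failed_checks(fetch_state())` yields the checks worth alerting on.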
110.111111
742
0.798688
eng_Latn
0.997876
f802e0ff1e5ef91c21aa0efac8b6b5df4a585aee
563
md
Markdown
licence_informatique/L2/Bataille_Navale/README.en.md
JasmineCA/Projects
8e4c741420e8c8921064df3adafadb5ea351f61b
[ "CNRI-Python" ]
null
null
null
licence_informatique/L2/Bataille_Navale/README.en.md
JasmineCA/Projects
8e4c741420e8c8921064df3adafadb5ea351f61b
[ "CNRI-Python" ]
null
null
null
licence_informatique/L2/Bataille_Navale/README.en.md
JasmineCA/Projects
8e4c741420e8c8921064df3adafadb5ea351f61b
[ "CNRI-Python" ]
null
null
null
# Naval Battle Project [Français](README.md) ## Project Goal Develop, in a group of 4 and over 4 work sessions, a Naval Battle application available with a Java Swing GUI (no framework used) or on the command line. The project applies the MVC architecture. Group project language: Java ## Project execution Move into the *dist* folder. Execute the jar file like this: ```bash java -jar bataille-naval.jar [option] ``` Arguments: c for command line, g for GUI. The project report and documentation are available in French. ## Remaining tasks - [ ] Documentation and report in English
17.59375
175
0.753108
eng_Latn
0.981059
f803022bee1f43e412a57d6cd907d95437678213
374
md
Markdown
README.md
devmount/SemanticWeb
55ad052dc0b2efa519a99ee2e54e419efaa083f4
[ "MIT" ]
1
2017-09-06T10:57:50.000Z
2017-09-06T10:57:50.000Z
README.md
devmount/SemanticWeb
55ad052dc0b2efa519a99ee2e54e419efaa083f4
[ "MIT" ]
null
null
null
README.md
devmount/SemanticWeb
55ad052dc0b2efa519a99ee2e54e419efaa083f4
[ "MIT" ]
null
null
null
# SemanticWeb Talk about the Paper "Media Meets Semantic Web" --- If you like this talk and want to give some love back, feel free to... <p align="center"> <a href="https://www.buymeacoffee.com/devmount" target="_blank"> <img alt="Buy me a coffee" src="https://user-images.githubusercontent.com/5441654/44213163-60a91100-a16d-11e8-9d5d-7d862cae7b7c.png"> </a> </p>
28.769231
135
0.719251
eng_Latn
0.490567
f803235695dd47ef898bd43cd05e646455f9275a
20,860
md
Markdown
articles/cosmos-db/configure-periodic-backup-restore.md
Myhostings/azure-docs.tr-tr
536eaf3b454f181f4948041d5c127e5d3c6c92cc
[ "CC-BY-4.0", "MIT" ]
16
2017-08-28T08:29:36.000Z
2022-01-02T16:46:30.000Z
articles/cosmos-db/configure-periodic-backup-restore.md
Ahmetmaman/azure-docs.tr-tr
536eaf3b454f181f4948041d5c127e5d3c6c92cc
[ "CC-BY-4.0", "MIT" ]
470
2017-11-11T20:59:16.000Z
2021-04-10T17:06:28.000Z
articles/cosmos-db/configure-periodic-backup-restore.md
Ahmetmaman/azure-docs.tr-tr
536eaf3b454f181f4948041d5c127e5d3c6c92cc
[ "CC-BY-4.0", "MIT" ]
25
2017-11-11T19:39:08.000Z
2022-03-30T13:47:56.000Z
--- title: Azure Cosmos DB hesabını düzenli yedekleme ile yapılandırma description: Bu makalede, yedekleme aralığıyla düzenli yedekleme ile Azure Cosmos DB hesaplarının nasıl yapılandırılacağı açıklanır. ve bekletme. Ayrıca, verilerinizi geri yüklemek için destek ile iletişim kurar. author: kanshiG ms.service: cosmos-db ms.topic: how-to ms.date: 04/05/2021 ms.author: govindk ms.reviewer: sngun ms.openlocfilehash: d0470759a589927b65462f258b20446af608175c ms.sourcegitcommit: b8995b7dafe6ee4b8c3c2b0c759b874dff74d96f ms.translationtype: MT ms.contentlocale: tr-TR ms.lasthandoff: 04/03/2021 ms.locfileid: "106284063" --- # <a name="configure-azure-cosmos-db-account-with-periodic-backup"></a>Azure Cosmos DB hesabını düzenli yedekleme ile yapılandırma [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)] Azure Cosmos DB düzenli aralıklarla otomatik olarak verilerinizin yedeğini alır. Otomatik yedeklemeler yapılırken veritabanı işlemlerinin performansı veya kullanılabilirliği etkilenmez. Tüm yedeklemeler bir depolama hizmetinde ayrı olarak depolanır ve bu yedeklemeler, bölgesel felate karşı dayanıklılık açısından küresel olarak çoğaltılır. Yalnızca verilerinizin değil, Azure Cosmos DB ile verilerinizin yedekleri çok fazla yedekli ve bölgesel felaketlere dayanıklı olabilir. Aşağıdaki adımlarda Azure Cosmos DB veri yedeklemesini nasıl gerçekleştirdiği gösterilmektedir: * Azure Cosmos DB, veritabanınızın her 4 saatte bir tam yedeklemesini otomatik olarak alır, her zaman yalnızca en son iki yedek varsayılan olarak saklanır. Varsayılan aralıklar iş yükleriniz için yeterli değilse, yedekleme aralığını ve bekletme süresini Azure portal değiştirebilirsiniz. Azure Cosmos hesabı oluşturulduktan sonra veya sonrasında yedekleme yapılandırmasını değiştirebilirsiniz. Kapsayıcı veya veritabanı silinirse, Azure Cosmos DB belirli bir kapsayıcının veya veritabanının mevcut anlık görüntülerini 30 gün boyunca tutar. 
* Azure Cosmos DB, bu yedeklemeleri Azure Blob depolamada depolar, ancak gerçek veriler yerel olarak Azure Cosmos DB içinde bulunur. * Düşük gecikme süresini garantilemek için, yedeğinizin anlık görüntüsü, mevcut yazma bölgesiyle (veya çok bölgeli bir yazma yapılandırmasına sahip olmanız durumunda, yazma bölgelerinden **biri** ) aynı bölgedeki Azure Blob depolama alanında depolanır. Bölgesel olağanüstü durumlara dayanıklı olması için Azure Blob depolamadaki yedek verilerin her anlık görüntüsü coğrafi olarak yedekli depolama (GRS) aracılığıyla başka bir bölgeye yeniden çoğaltılır. Yedeklemenin çoğaltıldığı bölge, kaynak bölgenize ve kaynak bölgeyle ilişkilendirilmiş bölge çiftine bağlıdır. Daha fazla bilgi edinmek için [coğrafi olarak yedekli Azure bölgeleri çiftlerinin listesine](../best-practices-availability-paired-regions.md) bakın. Bu yedeklemeye doğrudan erişemezsiniz. Destek isteği aracılığıyla istekte bulunduğunuzda Azure Cosmos DB ekibi yedeklemenizi geri yükleyecektir. Aşağıdaki görüntüde, Batı ABD ' deki üç birincil fiziksel bölüm içeren bir Azure Cosmos kapsayıcısının Batı ABD ' de uzak Azure Blob depolama hesabında nasıl yedeklenebileceği ve ardından Doğu ABD ' e nasıl çoğaltıldığı gösterilmektedir: :::image type="content" source="./media/configure-periodic-backup-restore/automatic-backup.png" alt-text="GRS Azure depolama alanındaki tüm Cosmos DB varlıkların düzenli aralıklarla tam yedeklemeleri." lightbox="./media/configure-periodic-backup-restore/automatic-backup.png" border="false"::: * Yedeklemeler, uygulamanızın performansını veya kullanılabilirliğini etkilemeden alınır. Azure Cosmos DB, ek sağlanmış üretilen iş (ru) kullanmadan veya veritabanınızın performansını ve kullanılabilirliğini etkilemeden arka planda veri yedekleme gerçekleştirir. 
## <a name="backup-storage-redundancy"></a><a id="backup-storage-redundancy"></a>Yedekleme depolama yedekliliği Varsayılan olarak, Azure Cosmos DB [eşleştirilmiş bir bölgeye](../best-practices-availability-paired-regions.md)çoğaltılan coğrafi olarak yedekli [BLOB depolamada](../storage/common/storage-redundancy.md) düzenli mod yedekleme verilerini depolar. Yedekleme verilerinizin Azure Cosmos DB hesabınızın sağlandığı bölge içinde kalmasını sağlamak için, varsayılan coğrafi olarak yedekli yedekleme depolama alanını değiştirebilir ve yerel olarak yedekli ya da bölgesel olarak yedekli depolama alanını yapılandırabilirsiniz. Depolama artıklığı mekanizmaları, geçici donanım arızası, ağ veya güç kesintileri ya da büyük doğal felaketler dahil, planlı ve plansız olaylardan korunmak üzere yedeklemelerinizin birden çok kopyasını depolar. Azure Cosmos DB Yedekleme verileri birincil bölgede üç kez çoğaltılır. Hesap oluşturma sırasında düzenli yedekleme modu için depolama yedekliliği yapılandırabilir veya mevcut bir hesap için güncelleştirebilirsiniz. Düzenli yedekleme modunda aşağıdaki üç veri artıklığı seçeneğini kullanabilirsiniz: * **Coğrafi olarak yedekli yedekleme depolaması:** Bu seçenek, verilerinizi eşleştirilmiş bölge üzerinden zaman uyumsuz olarak kopyalar. * Bölgesel olarak **yedekli yedekleme depolama alanı:** Bu seçenek, birincil bölgedeki üç Azure kullanılabilirlik bölgesi üzerinden verilerinizi zaman uyumsuz olarak kopyalar. * **Yerel olarak yedekli yedekleme depolaması:** Bu seçenek, birincil bölgedeki tek bir fiziksel konum içinde verilerinizi zaman uyumsuz olarak üç kez kopyalar. > [!NOTE] > Bölgesel olarak yedekli depolama Şu anda yalnızca [belirli bölgelerde](high-availability.md#availability-zone-support)kullanılabilir. Seçtiğiniz bölgeye göre; Bu seçenek, yeni veya mevcut hesaplar için kullanılamaz. > > Yedekleme depolama alanı artıklığı güncelleştirme, yedekleme depolama fiyatlandırması üzerinde herhangi bir etkiye sahip olmayacaktır. 
## <a name="modify-the-backup-interval-and-retention-period"></a><a id="configure-backup-interval-retention"></a>Yedekleme aralığını ve bekletme süresini değiştirme Azure Cosmos DB her 4 saatte bir ve herhangi bir zamanda verilerinizin tam yedeklemesini otomatik olarak alır, en son iki yedek saklanır. Bu yapılandırma varsayılan seçenektir ve ek bir maliyet olmadan sunulur. Azure Cosmos hesap oluşturma işlemi sırasında veya hesap oluşturulduktan sonra varsayılan yedekleme aralığını ve saklama süresini değiştirebilirsiniz. Yedekleme yapılandırması Azure Cosmos hesabı düzeyinde ayarlanır ve bunu her hesapta yapılandırmanız gerekir. Bir hesap için yedekleme seçeneklerini yapılandırdıktan sonra, bu hesap içindeki tüm kapsayıcılara uygulanır. Şu anda yedekleme seçeneklerini yalnızca Azure portaldan değiştirebilirsiniz. Verilerinizi yanlışlıkla silmiş veya bozdıysanız, **verileri geri yüklemek için bir destek isteği oluşturmadan önce, hesabınız için yedekleme bekletmesini en az yedi güne artırdığınızdan emin olun. Bu olayın 8 saat içinde bekletmenin artırılması en iyisidir.** Bu şekilde Azure Cosmos DB ekibinin hesabınızı geri yüklemek için yeterli zamanı olur. ### <a name="modify-backup-options-for-an-existing-account"></a>Mevcut bir hesabın yedekleme seçeneklerini değiştirme Mevcut bir Azure Cosmos hesabının varsayılan yedekleme seçeneklerini değiştirmek için aşağıdaki adımları kullanın: 1. Azure portal oturum açın [.](https://portal.azure.com/) 1. Azure Cosmos hesabınıza gidin ve **yedekleme & geri yükleme** Bölmesi ' ni açın. Yedekleme aralığını ve yedekleme saklama süresini gereken şekilde güncelleştirin. * **Yedekleme aralığı** -Azure Cosmos DB, verilerinizin yedeğini alma girişiminde bulunan zaman aralığıdır. Yedekleme, sıfır olmayan bir süre sürer ve bazı durumlarda aşağı akış bağımlılıkları nedeniyle başarısız olabilir. Azure Cosmos DB, yapılandırılan aralıkta yedeklemeyi en iyi şekilde dener, ancak yedeklemenin bu zaman aralığında tamamlanmasını garanti etmez. 
     You can configure this value in hours or minutes. The backup interval cannot be less than 1 hour or greater than 24 hours. When you change this interval, the new interval takes effect starting from the time when the last backup was taken.

   * **Backup Retention** - It represents the period for which each backup is retained. You can configure it in hours or days. The minimum retention period can't be less than two times the backup interval (in hours), and it can't be greater than 720 hours.

   * **Copies of data retained** - By default, two backup copies of your data are offered at no extra charge. There is an extra cost if you need more than two copies. See the Consumed Storage section in the [pricing page](https://azure.microsoft.com/pricing/details/cosmos-db/) to know the exact price for additional copies.

   * **Backup storage redundancy** - Choose the required storage redundancy option; see the [backup storage redundancy](#backup-storage-redundancy) section for the available options. By default, your existing periodic backup mode accounts have geo-redundant storage. You can choose another storage option, such as locally-redundant, to make sure the backup isn't replicated to another region. Changes made to an existing account are applied only to future backups. After the backup storage redundancy of an existing account is updated, it may take up to twice the backup interval time for the changes to take effect, **and you will lose access to restore the older backups immediately.**

> [!NOTE]
> You must have the Azure [Cosmos DB Account Reader Role](../role-based-access-control/built-in-roles.md#cosmos-db-account-reader-role) role assigned at the subscription level to configure backup storage redundancy.
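The interval and retention limits described above can be sketched as a small validation helper. This is a hypothetical illustration of the documented constraints only; `validate_backup_config` is not part of any Azure SDK:

```python
def validate_backup_config(interval_hours: float, retention_hours: float) -> list:
    """Check a periodic-backup configuration against the documented limits."""
    errors = []
    # The backup interval must be between 1 and 24 hours.
    if not (1 <= interval_hours <= 24):
        errors.append("backup interval must be between 1 and 24 hours")
    # Retention can't be less than twice the interval, nor more than 720 hours.
    if retention_hours < 2 * interval_hours:
        errors.append("retention must be at least twice the backup interval")
    if retention_hours > 720:
        errors.append("retention cannot exceed 720 hours")
    return errors

print(validate_backup_config(4, 8))     # default configuration: valid -> []
print(validate_backup_config(24, 240))  # daily backups, 10-day retention -> []
print(validate_backup_config(0.5, 8))   # interval too short -> one error
```

A configuration that passes with an empty error list satisfies all three documented constraints at once.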
   :::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-existing-accounts.png" alt-text="Configure backup interval, retention, and storage redundancy for an existing Azure Cosmos account." border="true":::

### <a name="modify-backup-options-for-a-new-account"></a>Modify backup options for a new account

When provisioning a new account, from the **Backup Policy** tab, select the **Periodic** backup policy. The periodic policy allows you to configure the backup interval, backup retention, and backup storage redundancy. For example, you can choose the **Locally-redundant backup storage** or **Zone-redundant backup storage** options to prevent backup data replication outside your region.

:::image type="content" source="./media/configure-periodic-backup-restore/configure-backup-options-new-accounts.png" alt-text="Configure periodic or continuous backup policy for new Azure Cosmos accounts." border="true":::

## <a name="request-data-restore-from-a-backup"></a><a id="request-restore"></a>Request data restore from a backup

If you accidentally delete your database or a container, you can [file a support ticket](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) or [call Azure support](https://azure.microsoft.com/support/options/) to restore the data from automatic online backups. Azure support is available only for selected plans, such as **Standard**, **Developer**, and plans higher than those. Azure support is not available with the **Basic** plan. To learn about the different support plans, see the [Azure support plans](https://azure.microsoft.com/support/plans/) page.

To restore a specific snapshot of the backup, Azure Cosmos DB requires that the data is available during the backup cycle for that snapshot.
You should have the following details before requesting a restore:

* Have your subscription ID ready.

* Based on how your data was accidentally deleted or modified, you should prepare to have additional information. It's advised that you have the information available ahead of time to minimize the back-and-forth that can be detrimental in some time-sensitive cases.

* If the entire Azure Cosmos DB account is deleted, you need to provide the name of the deleted account. If you create another account with the same name as the deleted account, share that with the support team because it helps to determine the right account to choose. It's recommended to file different support tickets for each deleted account because it minimizes the confusion about the state of the restore.

* If one or more databases are deleted, you should provide the Azure Cosmos account and the Azure Cosmos database names, and specify if a new database with the same name exists.

* If one or more containers are deleted, you should provide the Azure Cosmos account name, database names, and the container names. And specify if a container with the same name exists.

* If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. **Before you create a support request to restore the data, make sure to [increase the backup retention](#configure-backup-interval-retention) for your account to at least seven days. It's best to increase your retention within 8 hours of this event.** This way the Azure Cosmos DB support team will have enough time to restore your account.

In addition to the Azure Cosmos account name, database names, and container names, you need to specify the point in time to which the data can be restored.
It is important to be as precise as possible to help us determine the best available backups at that time. **It is also important to specify the time in UTC.**

The following screenshot illustrates how to create a support request for a container (collection/graph/table) to restore data by using the Azure portal. Provide other details such as the type of data, the purpose of the restore, and the time when the data was deleted to help us prioritize the request.

:::image type="content" source="./media/configure-periodic-backup-restore/backup-support-request-portal.png" alt-text="Create a backup support request using the Azure portal." border="true":::

## <a name="considerations-for-restoring-the-data-from-a-backup"></a>Considerations for restoring the data from a backup

You may accidentally delete or modify your data in one of the following scenarios:

* Delete the entire Azure Cosmos account.

* Delete one or more Azure Cosmos databases.

* Delete one or more Azure Cosmos containers.

* Delete or modify the Azure Cosmos items (for example, documents) within a container. This specific case is typically referred to as data corruption.

* A shared offer database or containers within a shared offer database are deleted or corrupted.

Azure Cosmos DB can restore data in all the above scenarios. When restoring, a new Azure Cosmos account is created to hold the restored data. If the name of the new account isn't specified, it will have the format `<Azure_Cosmos_account_original_name>-restored1`. The last digit is incremented when there are multiple restore attempts. You can't restore data into a pre-created Azure Cosmos account.

When you accidentally delete an Azure Cosmos account, if the account name isn't in use, you can restore the data into a new account with the same name. So it's recommended that you don't re-create the account after deleting it.
Not only does re-creating it prevent the restored data from using the same name, it also makes it harder to locate the right account to restore from.

When you accidentally delete an Azure Cosmos database, you can restore the entire database or a subset of the containers within that database. It is also possible to select specific containers across databases and restore them to a new Azure Cosmos account.

When you accidentally delete or modify one or more items within a container (the data corruption case), you will need to specify the time to restore to. Time is important if there is data corruption. Because the container is live, the backup is still running, so if you wait beyond the retention period (the default is eight hours), the backups would be overwritten. To prevent the backup from being overwritten, increase the backup retention for your account to at least seven days. It's best to increase your retention within 8 hours of the data corruption.

If you have accidentally deleted or corrupted your data, you should contact [Azure support](https://azure.microsoft.com/support/options/) within 8 hours so that the Azure Cosmos DB team can help you restore the data from the backups. This way the Azure Cosmos DB support team will have enough time to restore your account.

> [!NOTE]
> After you restore the data, not all the source features or settings are carried over to the restored account. The following settings are not carried over to the new account:
> * VNET access control lists
> * Stored procedures, triggers, and user-defined functions
> * Multi-region settings

If you provision throughput at the database level, the backup and restore process in this case happens at the entire database level, and not at the individual container level. In such cases, you can't select a subset of containers to restore.
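The naming convention for restored accounts described above (`<original-name>-restored1`, with the trailing digit incremented on repeated restore attempts) can be illustrated with a short helper. This is a hypothetical sketch of the documented pattern, not part of any Azure tooling:

```python
def restored_account_name(original: str, existing: set) -> str:
    """Return the next default name for a restored Azure Cosmos account.

    Follows the documented pattern <original>-restored1, incrementing the
    trailing digit for each additional restore attempt.
    """
    n = 1
    while f"{original}-restored{n}" in existing:
        n += 1
    return f"{original}-restored{n}"

print(restored_account_name("mycosmosdb", set()))
# -> mycosmosdb-restored1
print(restored_account_name("mycosmosdb", {"mycosmosdb-restored1"}))
# -> mycosmosdb-restored2
```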
## <a name="required-permissions-to-change-retention-or-restore-from-the-portal"></a>Required permissions to change retention or restore from the portal

Principals who are part of the role [CosmosBackupOperator](../role-based-access-control/built-in-roles.md#cosmosbackupoperator), owner, or contributor are allowed to request a restore or change the retention period.

## <a name="understanding-costs-of-extra-backups"></a>Understanding costs of extra backups

Two backups are provided free, and extra backups are charged according to the region-based pricing for backup storage described in [backup storage pricing](https://azure.microsoft.com/pricing/details/cosmos-db/). For example, consider a scenario where backup retention is configured to 240 hours, that is, 10 days, and the backup interval is 24 hours. This implies 10 copies of the backup data. Assuming 1 TB of data in West US 2, the cost for backup storage in the given month would be 0.12 * 1000 * 8, that is, $960 (two of the 10 copies are free, leaving 8 billable copies).

## <a name="options-to-manage-your-own-backups"></a>Options to manage your own backups

With Azure Cosmos DB SQL API accounts, you can also maintain your own backups by using one of the following approaches:

* Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md) to move data periodically to a storage of your choice.

* Use the Azure Cosmos DB [change feed](change-feed.md) to read data periodically for full backups or for incremental changes, and store it in your own storage.

## <a name="post-restore-actions"></a>Post-restore actions

The primary goal of the data restore is to recover the data that you have accidentally deleted or modified. So, we recommend that you first inspect the content of the recovered data to ensure it contains what you are expecting. If everything looks good, you can migrate the data back to the primary account.
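The arithmetic from the "Understanding costs of extra backups" section above can be sketched in a few lines. The $0.12/GB rate and two free copies are the figures used in that example (rates vary per region, so check the pricing page); `monthly_backup_storage_cost` is a hypothetical helper, not part of any Azure SDK:

```python
def monthly_backup_storage_cost(data_gb, retention_hours, interval_hours,
                                price_per_gb=0.12, free_copies=2):
    """Estimate the monthly backup storage cost for periodic backups.

    Retained copies = retention / interval; the first two copies are free.
    """
    copies = retention_hours // interval_hours
    billable = max(0, copies - free_copies)
    return data_gb * billable * price_per_gb

# 1 TB of data, 240 h retention, 24 h interval -> 10 copies, 8 billable
print(f"${monthly_backup_storage_cost(1000, 240, 24):.2f}")  # -> $960.00
# The default (two copies) stays within the free allowance
print(f"${monthly_backup_storage_cost(1000, 48, 24):.2f}")   # -> $0.00
```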
Although you can use the restored account as your new active account, it's not a recommended option if you have production workloads.

After you restore the data, you get a notification about the name of the new account (it's typically in the format `<original-name>-restored1`) and the time when the account was restored to. The restored account will have the same provisioned throughput and indexing policies, and it is in the same region as the original account. A user who is a subscription admin or a co-admin can see the restored account.

### <a name="migrate-data-to-the-original-account"></a>Migrate data to the original account

The following are different ways to migrate data back to the original account:

* Use the [Azure Cosmos DB data migration tool](import-data.md).
* Use [Azure Data Factory](../data-factory/connector-azure-cosmos-db.md).
* Use the Azure Cosmos DB [change feed](change-feed.md).
* Write your own custom code.

It's advised that you delete the container or database immediately after migrating the data. If you don't delete the restored databases or containers, they will incur cost for request units, storage, and egress.

## <a name="next-steps"></a>Next steps

* To make a restore request, contact Azure support by [filing a ticket in the Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
* Configure and manage continuous backup using the [Azure portal](continuous-backup-restore-portal.md), [PowerShell](continuous-backup-restore-powershell.md), [CLI](continuous-backup-restore-command-line.md), or [Azure Resource Manager](continuous-backup-restore-template.md).
* [Manage the permissions](continuous-backup-restore-permissions.md) required to restore data with continuous backup mode.
---
permalink: /blog.html
redirect_to: https://blog.spinnaker.io
redirect_from:
  - /blog/google-source-to-prod-codelab-videos
  - /blog/q4-roadmap-published
  - /blog/announcing-gcp-https-support-in-spinnaker
  - /blog/scaling-spinnaker-at-netflix-the-basics
  - /blog/deploy-to-kubernetes-using-spinnaker
---
---
ID: 1749
post_title: '[Psychology] The Psychology of Alcoholism, by George Barton Cutten'
author: abbie04m553726
post_excerpt: ""
layout: post
permalink: >
  https://universalflowuniversity.com/uncategorized/psychology-the-psychology-of-alcoholism-by-george-barton-cutten/
published: true
post_date: 2014-10-07 12:12:09
---
[embed]https://www.youtube.com/watch?v=XlZbh98nXxQ[/embed]

<p>After presenting an overview of alcoholism and its effect on society, Dr. Cutten dives into the effects of chronic alcoholism on the physiology of the nervous system, memory, intellect, will, emotions, and senses that shape the individual's morals and sanity. Lastly, he presents the two cures known at that time: religious conversion and hypnotism. Summary by Curt Walton. [Psychology Audiobook] The Psychology of Alcoholism, by George Barton Cutten</p>
---
title: Use Azure Machine Learning studio in a virtual network
titleSuffix: Azure Machine Learning
description: Learn how to configure Azure Machine Learning studio to access data stored inside of a virtual network.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: how-to
ms.reviewer: larryfr
ms.author: aashishb
author: aashishb
ms.date: 10/21/2020
ms.custom: contperf-fy20q4, tracking-python
ms.openlocfilehash: 13becdf8c49d9affe8c2946d6147707fbe954437
ms.sourcegitcommit: 5ce88326f2b02fda54dad05df94cf0b440da284b
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 04/22/2021
ms.locfileid: "107889329"
---

# <a name="use-azure-machine-learning-studio-in-an-azure-virtual-network"></a>Use Azure Machine Learning studio in an Azure virtual network

In this article, you learn how to use Azure Machine Learning studio in a virtual network. The studio includes features like AutoML, the designer, and data labeling. To use these features in a virtual network, you must follow the steps in this article.

In this article, you learn how to:

> [!div class="checklist"]
> - Give the studio access to data stored inside of a virtual network.
> - Access the studio from a resource inside of a virtual network.
> - Understand how the studio impacts storage security.

This article is part five of a five-part series that walks you through securing an Azure Machine Learning workflow. We highly recommend that you read through the previous parts to set up a virtual network environment.

See the other articles in this series:

[1. VNet overview](how-to-network-security-overview.md) > [2. Secure the workspace](how-to-secure-workspace-vnet.md) > [3. Secure the training environment](how-to-secure-training-vnet.md) > [4. Secure the inferencing environment](how-to-secure-inferencing-vnet.md) > **5.
Enable studio functionality**

> [!IMPORTANT]
> If your workspace is in a __sovereign cloud__, such as Azure Government or Azure China 21Vianet, the integrated notebooks _don't_ support using storage that's in a virtual network. Instead, you can use Jupyter Notebooks from a compute instance. For more information, see the [Access data in a Compute Instance notebook](how-to-secure-training-vnet.md#access-data-in-a-compute-instance-notebook) section.

## <a name="prerequisites"></a>Prerequisites

+ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and architecture.

+ A pre-existing virtual network and subnet to use.

+ An existing [Azure Machine Learning workspace with Private Link enabled](how-to-secure-workspace-vnet.md#secure-the-workspace-with-private-endpoint).

+ An existing [Azure storage account added to the virtual network](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts-with-service-endpoints).

## <a name="configure-data-access-in-the-studio"></a>Configure data access in the studio

Some of the studio's features are disabled by default in a virtual network. To re-enable these features, you must enable managed identity for the storage accounts you intend to use in the studio.

The following operations are disabled by default in a virtual network:

* Preview data in the studio.
* Visualize data in the designer.
* Deploy a model in the designer ([default storage account](#enable-managed-identity-authentication-for-default-storage-accounts)).
* Submit an AutoML experiment ([default storage account](#enable-managed-identity-authentication-for-default-storage-accounts)).
* Start a labeling project.
The studio supports reading data from the following datastore types in a virtual network:

* Azure Blob
* Azure Data Lake Storage Gen1
* Azure Data Lake Storage Gen2
* Azure SQL Database

### <a name="configure-datastores-to-use-workspace-managed-identity"></a>Configure datastores to use workspace managed identity

After you add an Azure storage account to your virtual network with either a [service endpoint](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts-with-service-endpoints) or a [private endpoint](how-to-secure-workspace-vnet.md#secure-azure-storage-accounts-with-private-endpoints), you must configure your datastore to use [managed identity](../active-directory/managed-identities-azure-resources/overview.md) authentication. Doing so lets the studio access data in your storage account.

Azure Machine Learning uses [datastores](concept-data.md#datastores) to connect to storage accounts. Use the following steps to configure a datastore to use a managed identity:

1. In the studio, select __Datastores__.

1. To update an existing datastore, select the datastore and then select __Update credentials__.

   To create a new datastore, select __+ New datastore__.

1. In the datastore settings, select __Yes__ for __Use workspace managed identity for data preview and profiling in Azure Machine Learning studio__.

   ![Screenshot showing how to enable workspace managed identity](./media/how-to-enable-studio-virtual-network/enable-managed-identity.png)

These steps add the workspace managed identity as a __Reader__ to the storage service by using Azure role-based access control (Azure RBAC).
__Reader__ access allows the workspace to retrieve firewall settings, to make sure that data doesn't leave the virtual network. Changes may take up to 10 minutes to take effect.

### <a name="enable-managed-identity-authentication-for-default-storage-accounts"></a>Enable managed identity authentication for default storage accounts

Each Azure Machine Learning workspace has two default storage accounts, a default blob storage account and a default file storage account, which are defined when you create your workspace. You can also set new defaults in the **Datastore** management page.

![Screenshot showing where default datastores can be found](./media/how-to-enable-studio-virtual-network/default-datastores.png)

The following table describes why you should enable managed identity authentication for your workspace default storage accounts.

|Storage account  | Notes  |
|---------|---------|
|Workspace default blob storage| Stores model assets from the designer. You must enable managed identity authentication on this storage account to deploy models in the designer. <br> <br> You can visualize and run a designer pipeline if it uses a non-default datastore that has been configured to use managed identity. However, if you try to deploy a trained model without managed identity enabled on the default datastore, the deployment fails regardless of any other datastores in use.|
|Workspace default file store| Stores AutoML experiment assets.
You must enable managed identity authentication on this storage account to submit AutoML experiments. |

> [!WARNING]
> There's a known issue where the default file store doesn't automatically create the `azureml-filestore` folder, which is required to submit AutoML experiments. This occurs when users select an existing file store to set as the default during workspace creation.
>
> There are two options to avoid this issue: 1) Use the default file store that's automatically created during workspace creation. 2) To select your own file store, make sure the file store is outside of the virtual network during workspace creation. After the workspace is created, add the storage account to the virtual network.
>
> To resolve this issue, remove the file store account from the virtual network and then add it back.

### <a name="grant-workspace-managed-identity-__reader__-access-to-storage-private-link"></a>Grant the workspace managed identity __Reader__ access to the storage private link

If your Azure storage account uses a private endpoint, you must grant the workspace managed identity **Reader** access to the private link. For more information, see the [Reader](../role-based-access-control/built-in-roles.md#reader) built-in role.

If your storage account uses a service endpoint, you can skip this step.

## <a name="access-the-studio-from-a-resource-inside-the-vnet"></a>Access the studio from a resource inside the VNet

If you're accessing the studio from a resource inside of a virtual network (for example, a compute instance or virtual machine), you must allow outbound traffic from the virtual network to the studio.
For example, if you're using network security groups (NSG) to restrict outbound traffic, add a rule to a __service tag__ destination of __AzureFrontDoor.Frontend__.

## <a name="technical-notes-for-managed-identity"></a>Technical notes for managed identity

Using managed identity to access storage services impacts security considerations. This section describes what changes for each storage account type. These considerations are unique to the __type of storage account__ being accessed.

### <a name="azure-blob-storage"></a>Azure Blob storage

For __Azure Blob storage__, the workspace managed identity is also added as a [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) so that it can read data from blob storage.

### <a name="azure-data-lake-storage-gen2-access-control"></a>Azure Data Lake Storage Gen2 access control

You can use both Azure RBAC and POSIX-style access control lists (ACLs) to control data access inside of a virtual network.

To use Azure RBAC, add the workspace managed identity to the [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) role. For more information, see [Role-based access control](../storage/blobs/data-lake-storage-access-control-model.md#role-based-access-control).

To use ACLs, the workspace managed identity can be assigned access just like any other security principal. For more information, see [Access control lists on files and directories](../storage/blobs/data-lake-storage-access-control.md#access-control-lists-on-files-and-directories).
### <a name="azure-data-lake-storage-gen1-access-control"></a>Azure Data Lake Storage Gen1 access control

Azure Data Lake Storage Gen1 supports only POSIX-style access control lists. You can assign the workspace managed identity access to resources just like any other security principal. For more information, see [Access control in Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-access-control.md).

### <a name="azure-sql-database-contained-user"></a>Azure SQL Database contained user

To access data stored in an Azure SQL Database with a managed identity, you must create a SQL contained user that maps to the managed identity. For more information on creating a user from an external provider, see [Create contained users mapped to Azure AD identities](../azure-sql/database/authentication-aad-configure.md#create-contained-users-mapped-to-azure-ad-identities).

After you create a SQL contained user, grant it permissions by using the [GRANT T-SQL command](/sql/t-sql/statements/grant-object-permissions-transact-sql).

### <a name="azure-machine-learning-designer-intermediate-module-output"></a>Azure Machine Learning designer intermediate module output

You can specify the output location for any module in the designer. Use this to store intermediate datasets in a separate location for security, logging, or auditing purposes. To specify an output:

1. Select the module whose output you want to specify.
1. In the module settings pane that appears to the right, select **Output settings**.
1. Specify the datastore you want to use for each module output.

Make sure that you have access to the intermediate storage accounts in your virtual network. Otherwise, the pipeline will fail.
You must also [enable managed identity authentication](#configure-datastores-to-use-workspace-managed-identity) for intermediate storage accounts to visualize output data.

## <a name="next-steps"></a>Next steps

This article is part five of a five-part virtual network series. See the rest of the articles to learn how to secure a virtual network:

* [Part 1: Virtual network overview](how-to-network-security-overview.md)
* [Part 2: Secure the workspace resources](how-to-secure-workspace-vnet.md)
* [Part 3: Secure the training environment](how-to-secure-training-vnet.md)
* [Part 4: Secure the inferencing environment](how-to-secure-inferencing-vnet.md)

Also see the article on using [custom DNS](how-to-custom-dns.md) for name resolution.
# tmux-synthwave84-theme
# Paratext TNDD Views release

This additional tool for Paratext is designed to help content creators and checkers to VIEW (but not edit) the TNDD Paratext Project data in different ways. There are 7 views as explained below:

## Views

- **TNDD-table view** -- This displays up to four columns: the source language in the left-most column, then the first meaning from the \ml1, then the second meaning, and finally the third meaning.
- **TNDD-1st-mng-line** -- This displays the 1st meaning line preceded by the verse reference. Each verse starts on a new line.
- **TNDD-2nd-mng-line** -- This displays the 2nd meaning line and is similar to the 1st line, but if there is no data for a second meaning (\ml1) line, then the data from the 1st meaning is displayed in (( )) with text in a red-brown color.
- **TNDD-3rd-mng-line** -- This displays the 3rd meaning line and is similar to the 1st and 2nd lines, but if there is no data for a third meaning (\ml1) line, then the fallback data is displayed in (( )) with a brown color if it comes from the 2nd meaning and a red-brown if it comes from the 1st line.
- **TNDD-tag-errors** -- This aids content creators to see that there is a markup error that PT does not catch. This view may still have some false labeling.
- **TNDD-word-count-1st-mng-line** -- This counts words per sentence. It marks sentence counts of more than 17 in orange and more than 30 words in tomato-red. Now includes chapter:verse and verse segment to ease comparison.
- **TNDD-word-count-2nd-mng-line** -- This counts words per sentence. It marks sentence counts of more than 17 in orange and more than 30 words in tomato-red. If there is no 2nd meaning line, then the 1st meaning line is included. Now includes chapter:verse and verse segment to ease comparison.
## How to Install these Views for Paratext ### Option 1: Use the Paratext-TNDD-Views-installer.exe - Download Paratext-TNDD-Views-installer.exe from the [Assets section of the latest release](https://github.com/SILAsiaPub/PT-Views/releases/latest) - Close Paratext if open. - Run the installer and follow the usual steps to install the Paratext Views. Your antivirus may tell you this program is rarely downloaded. That is true, but you can ignore the warning. (Previous versions were not signed. This version is signed.) - Start Paratext and the new views should be available in the Ctrl+E menu or the project's hamburger menu. ### Option 2: Run a script to install - Download TNDD-Views.zip from the [Assets section of the latest release](https://github.com/SILAsiaPub/PT-Views/releases/latest) - Close Paratext if open. - Select the option "Show in folder", and then in your Downloads folder, right-click on the Views.zip file - Then select your preferred UNZIP tool to Extract all... (preferably to a new folder called Views). (If given the option, ensure that "Show extracted files when complete" is checked.) - Double-click on the **install_Paratext_TNDD_Views.cmd** - If all went well, the black box disappears. If not, it will stay open and show failure information. - Start Paratext and the new views should be available in the Ctrl+E menu or the project's hamburger menu. ## Using the TNDD Views within Paratext - If the installation was successful, the new views should be available as shown below: via the Ctrl+E popup menu: ![views list control e](images/views-list-ctrl-e.png) or via the project's View menu: ![views-list-proj-view](images/views-list-proj-view-sml.png) - Note that you cannot edit the text in *any* of these VIEWS - it is purely an aid for reading and checking the text (one meaning line at a time) without the clutter of markers.
- If you are using Paratext 9, then it is highly recommended to open an additional TNDD window as an Autohide window: ![PT-auto-hide-setup](images/PT-auto-hide-setup.png) - This will enable you to keep your normal workspace uncluttered, but the Table view will be easy to access from the right-hand column: ![PT-show-view](images/PT-show-view.png) ## There are four **Tools > Custom Tools > Custom Views** - Hide TNDD Views - Show TNDD Views - Uninstall TNDD Views - Update TNDD Views ### Hiding TNDD Views - In Paratext click on the hamburger icon in any project. - In the **Tools** menu hover over or click on **Custom tools** - In **Custom tools** click on **Custom Views** - Click on **Hide TNDD Views** - Click **OK** on the **Hide TNDD Views** dialog - Restart Paratext ### Show TNDD Views that were previously hidden - In Paratext click on the hamburger icon in any project. - In the **Tools** menu hover over or click on **Custom tools** - In **Custom tools** click on **Custom Views** - Click on **Show TNDD Views** - Click **OK** on the **Show TNDD Views** dialog - Restart Paratext ## Uninstall TNDD Views - Just double-click the **Uninstall-TNDD-Views.cmd** in the original extracted Views.zip folder. - Or find the file **Uninstall-TNDD-Views.cmd** in the **C:\Users\Public\TNdd-Views** folder and double-click that, - or you could just delete the Views folder if you only have TNDD views, - or you could just delete the .xml files in the Views folder and those views would no longer appear, - or you could delete all the files in the Views folder. - Also delete all files starting with TNDD from the **My Paratext 8 (or 9) Projects\cms** folder if not using the uninstaller. ## Update the TNDD Views from the Github source - In Paratext click on the hamburger icon in any project.
- In the **Tools** menu hover over or click on **Custom tools** - In **Custom tools** click on **Custom Views** - Click on **Update TNDD Views** - Click **OK** on the **Update TNDD Views** dialog - Restart Paratext --- Credits: Concept by Mark Penny, Design and Programming by Ian McQuay, TNDD direction by Steve Christensen More details and further [updates](https://github.com/SILAsiaPub/PT-Views/releases) will be available at the [TNDD Views GitHub](https://github.com/SILAsiaPub/PT-Views/tree/master/TNDD) site.
65.98913
296
0.738264
eng_Latn
0.995198
f803ee54cfa66d95f1feef07b7cb9c398490888c
576
md
Markdown
README.md
talltotal/icalendar
0cadd47c1dd333b83db193631321cfdf4e8b6939
[ "MIT" ]
null
null
null
README.md
talltotal/icalendar
0cadd47c1dd333b83db193631321cfdf4e8b6939
[ "MIT" ]
null
null
null
README.md
talltotal/icalendar
0cadd47c1dd333b83db193631321cfdf4e8b6939
[ "MIT" ]
null
null
null
# icalendar Calendar tool - Holiday schedules - Subscription link: `https://raw.githubusercontent.com/talltotal/icalendar/main/data/vacation.ics`. > CDN: `https://cdn.jsdelivr.net/gh/talltotal/icalendar/data/vacation.ics` - Available: 2021 - Available: 2022 ## Usage Subscribe on macOS > [See Apple's official documentation](https://support.apple.com/zh-cn/HT202361) 1. Open the Calendar app 2. In the menu bar, click File > New Calendar Subscription Subscribe on iPhone > [See Apple's official documentation](https://support.apple.com/zh-cn/guide/iphone/iph3d1110d4/ios) 1. Go to Settings > Calendar > Accounts > Add Account > Other. 2. Tap "Add Subscribed Calendar". 3. Enter the URL of the .ics file you want to subscribe to. Import a file on macOS 1. Download the .ics file 2. Open the Calendar app 3. In the menu bar, click File > Import 4. Select the downloaded local file
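For programmatic use, an .ics file like the one subscribed to above can be read with nothing but the Python standard library. This is a minimal sketch assuming a simple, un-folded VEVENT layout; real calendars may use line folding and richer properties, and the sample data here is illustrative, not the actual vacation.ics content.

```python
ICS_SAMPLE = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
DTSTART;VALUE=DATE:20220101
DTEND;VALUE=DATE:20220104
SUMMARY:New Year holiday
END:VEVENT
END:VCALENDAR"""

def parse_events(ics_text):
    """Collect the KEY:value pairs of each VEVENT into a dict.
    Property parameters after ';' (e.g. VALUE=DATE) are dropped."""
    events, current = [], None
    for line in ics_text.splitlines():
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            events.append(current)
            current = None
        elif current is not None and ":" in line:
            key, value = line.split(":", 1)
            current[key.split(";")[0]] = value
    return events

print(parse_events(ICS_SAMPLE))
```

For production use a dedicated iCalendar library would handle folding, time zones, and recurrence rules.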
19.2
90
0.684028
yue_Hant
0.813534
f803ef2e0498623525b1c235fbe2b408449324c1
976
md
Markdown
.github/PULL_REQUEST_TEMPLATE.md
SHPEUCF/shpeucfbackend
3f30136043e6a1b6582945a596eb1c491070c62f
[ "MIT" ]
4
2020-07-13T23:04:43.000Z
2020-10-03T00:40:55.000Z
.github/PULL_REQUEST_TEMPLATE.md
SHPEUCF/shpeucfbackend
3f30136043e6a1b6582945a596eb1c491070c62f
[ "MIT" ]
51
2020-08-16T04:32:18.000Z
2020-11-14T23:28:04.000Z
.github/PULL_REQUEST_TEMPLATE.md
SHPEUCF/shpeucfbackend
3f30136043e6a1b6582945a596eb1c491070c62f
[ "MIT" ]
1
2020-09-11T06:00:09.000Z
2020-09-11T06:00:09.000Z
<!--- What issue does this PR fix? Add the issue number next to '#'. If more than one, separate by commas and add additional hashes next to each issue. --> Fixes # ## Description <!--- In one to three sentences, describe what was changed in the code regarding the issue(s) tackled. --> ## Types of changes <!--- What types of changes does your code introduce? Put an `x` in all the boxes that apply: --> - [ ] Bug fix (non-breaking change that fixes an issue) - [ ] New feature (non-breaking change that adds functionality) - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected) - [ ] Other <!--- Specify below --> ## Checklist: <!--- Go over all the following points, and put an `x` in all the boxes that apply. --> - [ ] My code follows the code style of this project found on [ESLint](../functions/.eslintrc.yml). - [ ] My change requires a change to the documentation. - [ ] I have updated the documentation accordingly.
51.368421
155
0.703893
eng_Latn
0.999092
f8040301f32dd2fb61037669501cfd6d09bc2fcc
5,402
md
Markdown
articles/web-application-firewall/afds/waf-front-door-create-portal.md
mtaheij/azure-docs.nl-nl
6447611648064a057aae926a62fe8b6d854e3ea6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/web-application-firewall/afds/waf-front-door-create-portal.md
mtaheij/azure-docs.nl-nl
6447611648064a057aae926a62fe8b6d854e3ea6
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/web-application-firewall/afds/waf-front-door-create-portal.md
mtaheij/azure-docs.nl-nl
6447611648064a057aae926a62fe8b6d854e3ea6
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Tutorial: Create WAF policy for Azure Front Door - Azure portal' description: In this tutorial, you learn how to create a Web Application Firewall (WAF) policy by using the Azure portal. author: vhorne ms.service: web-application-firewall services: web-application-firewall ms.topic: tutorial ms.date: 03/10/2020 ms.author: victorh ms.openlocfilehash: be66a93ea4a518b26d973d222caf58e73b6986a3 ms.sourcegitcommit: c5021f2095e25750eb34fd0b866adf5d81d56c3a ms.translationtype: HT ms.contentlocale: nl-NL ms.lasthandoff: 08/25/2020 ms.locfileid: "79475838" --- # <a name="tutorial-create-a-web-application-firewall-policy-on-azure-front-door-using-the-azure-portal"></a>Tutorial: Create a Web Application Firewall policy on Azure Front Door using the Azure portal In this tutorial, you learn how to create a basic Azure Web Application Firewall (WAF) policy and apply it to a frontend host at Azure Front Door. In this tutorial, you learn how to: > [!div class="checklist"] > * Create a WAF policy > * Associate it with a frontend host > * Configure WAF rules ## <a name="prerequisites"></a>Prerequisites Create a Front Door profile by following the instructions in [Quickstart: Create a Front Door profile](../../frontdoor/quickstart-create-front-door.md). ## <a name="create-a-web-application-firewall-policy"></a>Create a Web Application Firewall policy First, create a basic WAF policy with a managed Default Rule Set (DRS) by using the portal. 1. On the top left-hand side of the screen, select **Create a resource** > search for **WAF** > select **Web Application Firewall (Preview)** > select **Create**. 2.
On the **Basics** tab of the **Create a WAF policy** page, enter or select the following information, accept the defaults for the remaining settings, and then select **Review + create**: | Setting | Value | | --- | --- | | Subscription |Select your Front Door subscription name.| | Resource group |Select your Front Door resource group name.| | Policy name |Enter a unique name for your WAF policy.| ![Create a WAF policy](../media/waf-front-door-create-portal/basic.png) 3. On the **Association** tab of the **Create a WAF policy** page, select **Add frontend host**, enter the following settings, and then select **Add**: | Setting | Value | | --- | --- | | Front Door | Select your Front Door profile name.| | Frontend host | Select the name of the Front Door host, then select **Add**.| > [!NOTE] > If the frontend host is already associated with a WAF policy, it is shown as grayed out. You must first remove the frontend host from the associated policy, and then re-associate the frontend host with a new WAF policy. 1. Select **Review + create**, then select **Create**. ## <a name="configure-web-application-firewall-rules-optional"></a>Configure Web Application Firewall rules (optional) ### <a name="change-mode"></a>Change mode By default, a newly created WAF policy is in **Detection** mode. In **Detection** mode, WAF doesn't block any requests; instead, requests matching the WAF rules are logged in the WAF logs. To see WAF in action, you can change the mode setting from **Detection** to **Prevention**. In **Prevention** mode, requests that match rules defined in the Default Rule Set (DRS) are blocked and logged in the WAF logs.
![Change WAF policy mode](../media/waf-front-door-create-portal/policy.png) ### <a name="custom-rules"></a>Custom rules You can create a custom rule by selecting **Add custom rule** under the **Custom rules** section. This opens the custom rule configuration page. Below is an example of configuring a custom rule to block a request if the query string contains **blockme**. ![Custom rule blocking a query string](../media/waf-front-door-create-portal/customquerystring2.png) ### <a name="default-rule-set-drs"></a>Default Rule Set (DRS) The Azure-managed Default Rule Set is enabled by default. To disable an individual rule within a rule group, expand the rules within that rule group, select the **check box** in front of the rule number, and select **Disable** on the tab above. To change action types for individual rules within the rule set, select the check box in front of the rule number, and then select the **Change action** tab above. ![Change WAF rule set](../media/waf-front-door-create-portal/managed2.png) ## <a name="next-steps"></a>Next steps > [!div class="nextstepaction"] > [Learn more about Azure Web Application Firewall](../overview.md) > [Learn more about Azure Front Door](../../frontdoor/front-door-overview.md)
65.084337
513
0.719733
nld_Latn
0.997961
f80525a0afcc6291639f0f5f41b4102cc6dda1d0
115
md
Markdown
lib/d3dcompiler-sys/README.md
gentoo90/winapi-rs
3af842cc6b20f1859890d44988cbb8dee24591a4
[ "MIT" ]
null
null
null
lib/d3dcompiler-sys/README.md
gentoo90/winapi-rs
3af842cc6b20f1859890d44988cbb8dee24591a4
[ "MIT" ]
null
null
null
lib/d3dcompiler-sys/README.md
gentoo90/winapi-rs
3af842cc6b20f1859890d44988cbb8dee24591a4
[ "MIT" ]
null
null
null
# d3dcompiler-sys # FFI bindings to d3dcompiler. [Documentation](https://retep998.github.io/doc/d3dcompiler-sys/)
23
64
0.773913
xho_Latn
0.147918
f805d5cb05c1603b132e31b038f582112153b138
4,246
md
Markdown
articles/fin-ops-core/dev-itpro/deployment/onprem-compatibility.md
MicrosoftDocs/Dynamics-365-Operations.ja-jp
821ba731df05b9c1d0b0947b8d7e66ae9a34c0b4
[ "CC-BY-4.0", "MIT" ]
4
2020-05-18T17:14:25.000Z
2021-11-13T07:27:21.000Z
articles/fin-ops-core/dev-itpro/deployment/onprem-compatibility.md
MicrosoftDocs/Dynamics-365-Operations.ja-jp
821ba731df05b9c1d0b0947b8d7e66ae9a34c0b4
[ "CC-BY-4.0", "MIT" ]
37
2017-12-13T17:53:18.000Z
2021-03-16T19:04:28.000Z
articles/fin-ops-core/dev-itpro/deployment/onprem-compatibility.md
MicrosoftDocs/Dynamics-365-Operations.ja-jp
821ba731df05b9c1d0b0947b8d7e66ae9a34c0b4
[ "CC-BY-4.0", "MIT" ]
8
2017-11-06T03:10:26.000Z
2020-03-21T18:08:51.000Z
--- title: Supported software for Microsoft Dynamics 365 Finance + Operations (on-premises) description: This topic describes the versions of software components that are compatible with Microsoft Dynamics 365 Finance + Operations (on-premises). author: faix ms.date: 10/05/2021 ms.topic: article ms.prod: '' ms.technology: '' audience: IT Pro ms.reviewer: sericks ms.search.region: Global ms.author: osfaixat ms.search.validFrom: 2021-06-30 ms.dyn365.ops.version: Platform update 44 ms.openlocfilehash: 9450f48125d1e3954e17bf5e1887d19a9a6bdb4d ms.sourcegitcommit: f699dbc21a06dbfb3fb299b789b428ea8d643868 ms.translationtype: HT ms.contentlocale: ja-JP ms.lasthandoff: 10/05/2021 ms.locfileid: "7603112" --- # <a name="microsoft-dynamics-365-finance--operations-on-premises-supported-software"></a>Microsoft Dynamics 365 Finance + Operations (on-premises) supported software [!include [banner](../includes/banner.md)] This topic describes which versions of dependent software are compatible with Microsoft Dynamics 365 Finance + Operations (on-premises). ## <a name="microsoft-windows-server"></a>Microsoft Windows Server Microsoft Windows Server Standard and Microsoft Windows Server Datacenter are supported. | Version | Supported as of | Supported until | |-------------------------------|------------------|---------------| | Microsoft Windows Server 2019 | 10.0.17 | Not applicable | | Microsoft Windows Server 2016 | Original release | 10.0.26 | > [!NOTE] > Only en-US operating system installations are supported. ## <a name="microsoft-sql-server"></a>Microsoft SQL Server Both Microsoft SQL Server Standard Edition and Enterprise Edition are supported. This section covers the following SQL Server components: - Database Engine - SQL Server Reporting Services (SSRS) - SQL Server Integration Services (SSIS) | Version | Supported as of | Supported until | |-------------------------------|------------------|---------------| | Microsoft SQL Server 2019 | 10.0.21 | Not applicable | | Microsoft SQL Server 2016 SP2 | 10.0.9 | 10.0.28 | | Microsoft SQL Server 2016 SP1 | Original release | 10.0.14 | > [!IMPORTANT] > Using multiple versions of Microsoft SQL Server in a single environment is not supported. ## <a
name="active-directory-federation-services-ad-fs"></a>Active Directory Federation Services (AD FS) Active Directory Federation Services (AD FS) is a server role that can be installed on machines running Windows Server. | Version | Supported as of | Supported until | |-------------------------------------------------------------|------------------|---------------| | Active Directory Federation Services (AD FS) on Windows Server 2019 | 10.0.17 | Not applicable | | Active Directory Federation Services (AD FS) on Windows Server 2016 | Original release | 10.0.26 | > [!IMPORTANT] > - AD FS on Windows Server 2016 supports authentication only through the Azure Active Directory Authentication Library (ADAL). > - To accommodate the upcoming move to the Microsoft Authentication Library (MSAL), AD FS must be deployed on Windows Server 2019. For more information, see [Migrate applications to the Microsoft Authentication Library (MSAL)](/azure/active-directory/develop/msal-migration). ## <a name="minimum-azure-service-fabric-runtime"></a>Minimum Azure Service Fabric runtime The Service Fabric cluster must always be on a supported version, in line with the official documentation [Supported versions of Service Fabric](/azure/service-fabric/service-fabric-versions). | Minimum version | Required as of | |----------------------------|----------------| | Service Fabric runtime 7.2 | 10.0.17 | | Service Fabric runtime 7.1 | 10.0.14 | ## <a name="minimum-microsoft-net-framework-runtime"></a>Minimum Microsoft .NET Framework runtime .NET Framework requirements are specified per node. For the specific features and versions, see [Set up and deploy on-premises environments (Platform update 41 and later)](./setup-deploy-on-premises-pu41.md#prerequisites). | Minimum version | Required as of | |----------------------------------------|----------------| | Microsoft .NET Framework version 4.7.2 | 10.0.11 | ## <a name="microsoft-office-server"></a>Microsoft Office Server Office Server is an optional component. For more information, see [Configure document preview](../../fin-ops/organization-administration/configure-document-management.md#for-a-microsoft-dynamics-365-finance--operations-on-premises-environment). | Version | Supported as of | Supported until | |------------------------------|-----------------|---------------| | Microsoft Office Server 2017 | 10.0.0 | Not applicable |
[!INCLUDE[footer-include](../../../includes/footer-banner.md)]
43.326531
225
0.645313
yue_Hant
0.668998
f80602e5de0d51ad3da7377689388341b8f7f546
10,501
md
Markdown
WindowsServerDocs/networking/technologies/hpn/hpn-software-hardware-features.md
alfredmyers/windowsserverdocs
5a4155eeb67ab73f3661c86a7d1c896adaad715c
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/networking/technologies/hpn/hpn-software-hardware-features.md
alfredmyers/windowsserverdocs
5a4155eeb67ab73f3661c86a7d1c896adaad715c
[ "CC-BY-4.0", "MIT" ]
null
null
null
WindowsServerDocs/networking/technologies/hpn/hpn-software-hardware-features.md
alfredmyers/windowsserverdocs
5a4155eeb67ab73f3661c86a7d1c896adaad715c
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Software and hardware (SH) integrated features and technologies description: These features have both software and hardware components. The software is intimately tied to hardware capabilities that are required for the feature to work. Examples of these include VMMQ, VMQ, Send-side IPv4 Checksum Offload, and RSS. ms.topic: article ms.assetid: 0cafb1cc-5798-42f5-89b6-3ffe7ac024ba manager: dougkim ms.author: jgerend author: JasonGerend ms.date: 06/15/2021 --- # Software and hardware (SH) integrated features and technologies > Applies to: Azure Stack HCI, version 20H2 These features have both software and hardware components. The software is intimately tied to hardware capabilities that are required for the feature to work. Examples of these include VMMQ, VMQ, Send-side IPv4 Checksum Offload, and RSS. To learn more, see [Host network requirements for Azure Stack HCI](/azure-stack/hci/concepts/host-network-requirements). >[!TIP] >SH and HO features are available if the installed NIC supports it. The feature descriptions below will cover how to tell if your NIC supports the feature. ## Converged NIC Converged NIC is a technology that allows virtual NICs in the Hyper-V host to expose RDMA services to host processes. Windows Server 2016 no longer requires separate NICs for RDMA. The Converged NIC feature allows the Virtual NICs in the Host partition (vNICs) to expose RDMA to the host partition and share the bandwidth of the NICs between the RDMA traffic and the VM and other TCP/UDP traffic in a fair and manageable manner. ![Converged NIC with SDN](../../media/Converged-NIC/conv-nic-sdn.png) You can manage converged NIC operation through VMM or Windows PowerShell. The PowerShell cmdlets are the same cmdlets used for RDMA (see below). To use the converged NIC capability: 1. Ensure the host is set up for DCB. 2. Ensure RDMA is enabled on the NIC or, in the case of a SET team, on the NICs bound to the Hyper-V switch. 3.
Ensure RDMA is enabled on the vNICs designated for RDMA in the host. For more details about RDMA and SET, see [Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)](/azure-stack/hci/concepts/host-network-requirements). ## Data Center Bridging (DCB) DCB is a suite of Institute of Electrical and Electronics Engineers (IEEE) standards that enable Converged Fabrics in data centers. DCB provides hardware queue-based bandwidth management in a host with cooperation from the adjacent switch. All traffic for storage, data networking, cluster Inter-Process Communication (IPC), and management share the same Ethernet network infrastructure. In Windows Server 2016, DCB can be applied to any NIC individually and to NICs bound to the Hyper-V switch. For DCB, Windows Server uses Priority-based Flow Control (PFC), standardized in IEEE 802.1Qbb. PFC creates a (nearly) lossless network fabric by preventing overflow within traffic classes. Windows Server also uses Enhanced Transmission Selection (ETS), standardized in IEEE 802.1Qaz. ETS enables the division of the bandwidth into reserved portions for up to eight classes of traffic. Each traffic class has its own transmit queue and, through the use of PFC, can start and stop transmission within a class. For more information, see [Data Center Bridging (DCB)](../dcb/dcb-top.md).
## Hyper-V Network Virtualization |Version|Description| |----------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **v1 (HNVv1)** | Introduced in Windows Server 2012, Hyper-V Network Virtualization (HNV) enables virtualization of customer networks on top of a shared, physical network infrastructure. With minimal changes necessary on the physical network fabric, HNV gives service providers the agility to deploy and migrate tenant workloads anywhere across the three clouds: the service provider cloud, the private cloud, or the Microsoft Azure public cloud. | | **v2 NVGRE (HNVv2 NVGRE)** | In Windows Server 2016 and System Center Virtual Machine Manager, Microsoft provides an end-to-end network virtualization solution that includes RAS Gateway, Software Load Balancing, Network Controller, and more. For more information, see [Hyper-V Network Virtualization Overview in Windows Server 2016](../../sdn/technologies/hyper-v-network-virtualization/hyperv-network-virtualization-overview-windows-server.md). | | **v2 VxLAN (HNVv2 VxLAN)** | In Windows Server 2016, is part of the SDN-extension, which you manage through the Network Controller. | --- ## IPsec Task Offload (IPsecTO) IPsec task offload is a NIC feature that enables the operating system to use the processor on the NIC for the IPsec encryption work. >[!IMPORTANT] >IPsec Task Offload is a legacy technology that is not supported by most network adapters, and where it does exist, it's disabled by default. ## Private virtual Local Area Network (PVLAN). 
PVLANs allow communication only between virtual machines on the same virtualization server. A private virtual network is not bound to a physical network adapter. A private virtual network is isolated from all external network traffic on the virtualization server, as well as any network traffic between the management operating system and the external network. This type of network is useful when you need to create an isolated networking environment, such as an isolated test domain. The Hyper-V and SDN stacks support PVLAN Isolated Port mode only. For details about PVLAN isolation, see [System Center: Virtual Machine Manager Engineering Blog](https://blogs.technet.microsoft.com/scvmm/2013/06/04/logical-networks-part-iv-pvlan-isolation/). ## Remote Direct Memory Access (RDMA) RDMA is a networking technology that provides high-throughput, low-latency communication that minimizes CPU usage. RDMA supports zero-copy networking by enabling the network adapter to transfer data directly to or from application memory. RDMA-capable means the NIC (physical or virtual) is capable of exposing RDMA to an RDMA client. RDMA-enabled, on the other hand, means an RDMA-capable NIC is exposing the RDMA interface up the stack. For more details about RDMA, see [Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET)](/azure-stack/hci/concepts/host-network-requirements). ## Receive Side Scaling (RSS) RSS is a NIC feature that segregates different sets of streams and delivers them to different processors for processing. RSS parallelizes the networking processing, enabling a host to scale to very high data rates. For more details, see [Receive Side Scaling (RSS)](/windows-hardware/drivers/network/introduction-to-receive-side-scaling). ## Single Root Input-Output Virtualization (SR-IOV) SR-IOV allows VM traffic to move directly from the NIC to the VM without passing through the Hyper-V host. 
SR-IOV is an incredible improvement in performance for a VM but lacks the ability for the host to manage that pipe. Only use SR-IOV when the workload is well-behaved, trusted, and generally the only VM in the host. Traffic that uses SR-IOV bypasses the Hyper-V switch, which means that any policies, for example, ACLs, or bandwidth management won't be applied. SR-IOV traffic also can't be passed through any network virtualization capability, so NV-GRE or VxLAN encapsulation can't be applied. Only use SR-IOV for well-trusted workloads in specific situations. Additionally, you cannot use the host policies, bandwidth management, and virtualization technologies. In the future, two technologies would allow SR-IOV: Generic Flow Tables (GFT) and Hardware QoS Offload (bandwidth management in the NIC) – once the NICs in our ecosystem support them. The combination of these two technologies would make SR-IOV useful for all VMs, would allow policies, virtualization, and bandwidth management rules to be applied, and could result in great leaps forward in the general application of SR-IOV. For more details, see [Overview of Single Root I/O Virtualization (SR-IOV)](/windows-hardware/drivers/network/overview-of-single-root-i-o-virtualization--sr-iov-). ## TCP Chimney Offload TCP Chimney Offload, also known as TCP Engine Offload (TOE), is a technology that allows the host to offload all TCP processing to the NIC. Because the Windows Server TCP stack is almost always more efficient than the TOE engine, using TCP Chimney Offload is not recommended. >[!IMPORTANT] >TCP Chimney Offload is a deprecated technology. We recommend you do not use TCP Chimney Offload as Microsoft might stop supporting it in the future. ## Virtual Local Area Network (VLAN) VLAN is an extension to the Ethernet frame header to enable partitioning of a LAN into multiple VLANs, each using its own address space. 
In Windows Server 2016, VLANs are set on ports of the Hyper-V switch or by setting team interfaces on NIC Teaming teams. For more information, see [NIC Teaming and Virtual Local Area Networks (VLANs)](../nic-teaming/nic-teaming.md). ## Virtual Machine Queue (VMQ) VMQ is a NIC feature that allocates a queue for each VM. Anytime you have Hyper-V enabled, you must also enable VMQ. In Windows Server 2016, VMQs use NIC Switch vPorts with a single queue assigned to the vPort to provide the same functionality. For more information, see [Virtual Receive Side Scaling (vRSS)](../vrss/vrss-top.md) and [NIC Teaming](../nic-teaming/nic-teaming.md). ## Virtual Machine Multi-Queue (VMMQ) VMMQ is a NIC feature that allows traffic for a VM to spread across multiple queues, each processed by a different physical processor. The traffic is then passed to multiple LPs in the VM as it would be in vRSS, which allows for delivering substantial networking bandwidth to the VM. ---
94.603604
550
0.724788
eng_Latn
0.993132
f806119452c4782229c98ae57c2a13c934f93929
792
md
Markdown
docs/ConsumerAuthenticationModel.md
Mastercard/biller-management-reference-app
0c4dcd966f85590c3cba33f4ac754203089a9b6b
[ "Apache-2.0" ]
null
null
null
docs/ConsumerAuthenticationModel.md
Mastercard/biller-management-reference-app
0c4dcd966f85590c3cba33f4ac754203089a9b6b
[ "Apache-2.0" ]
null
null
null
docs/ConsumerAuthenticationModel.md
Mastercard/biller-management-reference-app
0c4dcd966f85590c3cba33f4ac754203089a9b6b
[ "Apache-2.0" ]
3
2020-07-17T21:37:38.000Z
2021-07-04T20:43:54.000Z
# ConsumerAuthenticationModel Consumer Authentication Model ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **recordAction** | **String** | Record Action, available values are: Add, Delete and Update; only required when Biller action is &#39;Update&#39; | [optional] **category** | **String** | Category Code, available values are: IDEN, EMAL, PHON, CODE, ADDR, OTHR | **categoryLabel** | **String** | Category Label, max length 128 characters | **dataType** | **String** | Consumer Authentication Data Type, available values are: A, P, B, N, S, C, D | **maxLength** | **String** | Consumer Authentication maximum length, up to 3 numeric characters | **notes** | **String** | Notes text, max length 1000 characters |
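To make the constraints in the table concrete, here is a hypothetical payload sketch in Python with client-side checks mirroring the documented limits. The field names come from the table above; the sample values are illustrative only and are not taken from the Mastercard API.

```python
# Hypothetical ConsumerAuthenticationModel payload; values are illustrative.
consumer_authentication = {
    "recordAction": "Add",      # Add, Delete, or Update (only required on Biller 'Update')
    "category": "EMAL",         # IDEN, EMAL, PHON, CODE, ADDR, OTHR
    "categoryLabel": "Email address on file",  # max 128 characters
    "dataType": "A",            # A, P, B, N, S, C, D
    "maxLength": "254",         # up to 3 numeric characters
    "notes": "Consumer must supply the email registered with the biller.",  # max 1000 chars
}

# Simple client-side sanity checks mirroring the documented constraints
assert consumer_authentication["category"] in {"IDEN", "EMAL", "PHON", "CODE", "ADDR", "OTHR"}
assert len(consumer_authentication["categoryLabel"]) <= 128
assert consumer_authentication["dataType"] in {"A", "P", "B", "N", "S", "C", "D"}
assert consumer_authentication["maxLength"].isdigit() and len(consumer_authentication["maxLength"]) <= 3
assert len(consumer_authentication["notes"]) <= 1000
print("payload passes basic validation")
```

Validating these limits before submission can surface data errors without a round trip to the API; the server remains the authority on what is accepted.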
39.6
159
0.641414
eng_Latn
0.418337
f8074d916878a32330dee6a16ae0693a3f5b795e
2,160
md
Markdown
docs/_docs/0307_ml_tools.md
Bhaskers-Blu-Org2/ai-at-edge
d0bac4f2c3fa02a7d3bf9d9563f9153454e6b23b
[ "CC-BY-4.0", "MIT" ]
2
2019-11-05T16:20:12.000Z
2020-04-26T09:30:47.000Z
docs/_docs/0307_ml_tools.md
Bhaskers-Blu-Org2/ai-at-edge
d0bac4f2c3fa02a7d3bf9d9563f9153454e6b23b
[ "CC-BY-4.0", "MIT" ]
4
2019-12-04T00:27:58.000Z
2022-02-26T05:49:31.000Z
docs/_docs/0307_ml_tools.md
microsoft/ai-at-edge
d0bac4f2c3fa02a7d3bf9d9563f9153454e6b23b
[ "CC-BY-4.0", "MIT" ]
5
2019-11-15T10:02:22.000Z
2020-08-05T14:32:51.000Z
--- title: "Machine learning tools" permalink: /docs/ml_tools/ excerpt: "Machine learning tools" variable: - platform: windows name: Windows - platform: macos name: macOS last_modified_at: 2019-09-14 --- ### Tools Microsoft offers multiple tools for training, packaging, and deploying models. [Custom Vision]({{ '/docs/customvision/' | relative_url }}) offers an easy-to-use UI for creating Vision AI models. [Jupyter Notebooks](https://jupyter.org/){:target="_blank"} are commonly used by data scientists for developing machine learning models. Many machine learning communities and community projects provide a set of Jupyter Notebooks to get started with a specific ML model. - [Azure Notebooks](https://notebooks.azure.com/){:target="_blank"} provides online access to Jupyter notebooks running in the cloud on Microsoft Azure. The portal also contains a comprehensive set of default projects to get started with Jupyter Notebooks. See the [Azure Notebooks documentation](https://docs.microsoft.com/en-us/azure/notebooks/){:target="_blank"} - Jupyter Notebooks can be run on local hardware, such as a laptop, or by creating a Notebook VM (Virtual Machine) in an [Azure Machine Learning Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-workspace){:target="_blank"}. Other possible solutions include: - [Azure Machine Learning SDK for Python](https://docs.microsoft.com/en-us/python/api/overview/azure/ml/intro?view=azure-ml-py){:target="_blank"} enables Python-based development. - Automate your machine learning activities with the [Azure Machine Learning CLI](https://docs.microsoft.com/en-us/azure/machine-learning/service/reference-azure-machine-learning-cli){:target="_blank"}.
- Write code in Visual Studio Code with [Azure Machine Learning VS Code extension](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-vscode-tools){:target="_blank"} - Use the [visual interface (preview) for Azure Machine Learning service](https://docs.microsoft.com/en-us/azure/machine-learning/service/ui-concept-visual-interface){:target="_blank"} to perform the workflow steps without writing code.
77.142857
361
0.781944
eng_Latn
0.830378
f807a3b2ef64a8d2375dc34b22c3336d25584075
505
md
Markdown
legacy-qt/README.md
Hutchison-Technologies/docker-images
ed4a401d415b5cc059a3b3ba472af3f29d3b2d65
[ "MIT" ]
1
2022-02-15T10:14:09.000Z
2022-02-15T10:14:09.000Z
legacy-qt/README.md
Hutchison-Technologies/docker-images
ed4a401d415b5cc059a3b3ba472af3f29d3b2d65
[ "MIT" ]
null
null
null
legacy-qt/README.md
Hutchison-Technologies/docker-images
ed4a401d415b5cc059a3b3ba472af3f29d3b2d65
[ "MIT" ]
null
null
null
# Legacy Qt

This image is supposed to ease the pain of building legacy Qt applications. To use it, volume mount the legacy project directories and run the relevant build commands from inside the container. E.g.

```
docker pull hutchisont/legacy-qt

# from a dir containing legacy projects
docker run -ti --rm -v $(pwd):/usr/src hutchisont/legacy-qt bash

# now inside the container
root@blah: cd <some_legacy_project> && qtchooser -qt=qt5 -run-tool=qmake . && make clean && make -j8 && make install
```
28.055556
117
0.738614
eng_Latn
0.982608
f80953f27d260d9343557b674421d0d2970c32ae
45
md
Markdown
content/time-machine/2020-06-18.md
jonblatho/covid-19
79a009c2de245eb513e4b9164cdf08ac9dda7293
[ "MIT" ]
1
2021-06-23T05:21:51.000Z
2021-06-23T05:21:51.000Z
content/time-machine/2020-06-18.md
jonblatho/covid-19
79a009c2de245eb513e4b9164cdf08ac9dda7293
[ "MIT" ]
66
2021-03-30T23:25:53.000Z
2022-03-28T05:49:17.000Z
content/time-machine/2020-06-18.md
jonblatho/covid-19
79a009c2de245eb513e4b9164cdf08ac9dda7293
[ "MIT" ]
null
null
null
--- date: 2020-06-18 layout: time-machine ---
11.25
20
0.644444
eng_Latn
0.342033
f809a9e9284631b606c2f7b519a629d5e644705d
354
md
Markdown
content/en/api/usage/usage_trace_search_code.md
dcarley/datadog-documentation
bdf0f8404af6c2f80e5c50d3619320020c834b28
[ "BSD-3-Clause" ]
null
null
null
content/en/api/usage/usage_trace_search_code.md
dcarley/datadog-documentation
bdf0f8404af6c2f80e5c50d3619320020c834b28
[ "BSD-3-Clause" ]
null
null
null
content/en/api/usage/usage_trace_search_code.md
dcarley/datadog-documentation
bdf0f8404af6c2f80e5c50d3619320020c834b28
[ "BSD-3-Clause" ]
null
null
null
--- title: Get hourly usage for Trace Search type: apicode order: 31.5 external_redirect: /api/#get-hourly-usage-for-trace-search --- ##### Signature `GET /v1/usage/traces` ##### Example Request {{< code-snippets basename="api-billing-usage-trace-search" >}} ##### Example Response {{< code-snippets basename="result.api-billing-usage-trace-search" >}}
25.285714
70
0.711864
eng_Latn
0.420003
f809f7abcfb2d18999ed9d7707d82f1b03d5b9ca
5,406
md
Markdown
README.md
culqi/Culqi-PHP
c30622531f7f96e6f6c6fd1900918f251e291151
[ "MIT" ]
34
2015-11-20T15:02:02.000Z
2021-09-24T15:53:27.000Z
README.md
culqi/Culqi-PHP
c30622531f7f96e6f6c6fd1900918f251e291151
[ "MIT" ]
42
2015-11-23T23:31:41.000Z
2022-02-18T21:41:36.000Z
README.md
rubensaid/culqi-php
c30622531f7f96e6f6c6fd1900918f251e291151
[ "MIT" ]
37
2015-11-24T20:28:25.000Z
2022-02-08T18:24:28.000Z
# Culqi PHP

[![Latest Stable Version](https://poser.pugx.org/culqi/culqi-php/v/stable)](https://packagist.org/packages/culqi/culqi-php) [![Total Downloads](https://poser.pugx.org/culqi/culqi-php/downloads)](https://packagist.org/packages/culqi/culqi-php) [![License](https://poser.pugx.org/culqi/culqi-php/license)](https://packagist.org/packages/culqi/culqi-php)

Official CULQI PHP library for simple payments on your website. This library works with [v2.0](https://culqi.com/api/) of the Culqi API.

## Requirements

* PHP 5.3 or higher.
* Culqi merchant credentials (1).

(1) You must register [here](https://integ-panel.culqi.com/#/registro). Then create a merchant and, from the panel, go to Development > [***API Keys***](https://integ-panel.culqi.com/#/panel/comercio/desarrollo/llaves).

![alt tag](http://i.imgur.com/NhE6mS9.png)

## Installation

### Via Composer

```json
{
    "require": {
        "culqi/culqi-php": "1.5.2"
    }
}
```

Then load everything using Composer's autoloader.

```php
require 'vendor/autoload.php';
```

### Manually

Clone the repository or download the source code

```bash
git clone git@github.com:culqi/culqi-php.git
```

Now include `culqi-php` in your header, along with its [`Requests`](https://github.com/rmccue/requests) dependency. Adjust the folder and/or file paths to match your project structure.

```php
<?php
// Load Requests and Culqi PHP
include_once dirname(__FILE__).'/libraries/Requests/library/Requests.php';
Requests::register_autoloader();
include_once dirname(__FILE__).'/libraries/culqi-php/lib/culqi.php';
```

## Usage

In every example, first configure the `$SECRET_KEY` credential

```php
// Set your API Key and authentication
$SECRET_KEY = "vk9Xjpe2YZMEOSBzEwiRcPDibnx2NlPBYsusKbDobAk";
$culqi = new Culqi\Culqi(array('api_key' => $SECRET_KEY));
```

### Create a token (Use ONLY in DEVELOPMENT)

Before creating a Charge, Plan or Subscription you must create a card `token`.

This library includes functionality for generating tokens, but it should only be used in **development**. In production you should generate tokens with **CULQI.JS**, **because it is critical that card data is sent from your customers' devices directly to Culqi's servers**, so that sensitive information is never put at risk.

### Create a charge (Charges)

Creating a charge means billing a sale to a card. You should first obtain the `token` that refers to your customer's card.

```php
// Charge a card
$charge = $culqi->Charges->create(
    array(
        "amount" => 1000,
        "capture" => true,
        "currency_code" => "PEN",
        "description" => "Venta de prueba",
        "email" => "test@culqi.com",
        "installments" => 0,
        "antifraud_details" => array(
            "address" => "Av. Lima 123",
            "address_city" => "LIMA",
            "country_code" => "PE",
            "first_name" => "Will",
            "last_name" => "Muro",
            "phone_number" => "9889678986",
        ),
        "source_id" => "{token_id or card_id}"
    )
);

// Response
print_r($charge);
```

### Create a Plan

```php
$plan = $culqi->Plans->create(
    array(
        "alias" => "plan-culqi".uniqid(),
        "amount" => 10000,
        "currency_code" => "PEN",
        "interval" => "dias",
        "interval_count" => 1,
        "limit" => 12,
        "name" => "Plan de Prueba ".uniqid(),
        "trial_days" => 15
    )
);

// Response
print_r($plan);
```

### Create a Customer

```php
$customer = $culqi->Customers->create(
    array(
        "address" => "av lima 123",
        "address_city" => "lima",
        "country_code" => "PE",
        "email" => "www@".uniqid()."me.com",
        "first_name" => "Will",
        "last_name" => "Muro",
        "metadata" => array("test"=>"test"),
        "phone_number" => 899898999
    )
);

print_r($customer);
```

### Create a Card

```php
$card = $culqi->Cards->create(
    array(
        "customer_id" => "{customer_id}",
        "token_id" => "{token_id}"
    )
);

print_r($card);
```

### Create a Subscription to a plan

```php
// Subscribe a card to a plan
$subscription = $culqi->Subscriptions->create(
    array(
        "card_id" => "{card_id}",
        "plan_id" => "{plan_id}"
    )
);

// Response
print_r($subscription);
```

### Create an Order

[See the full example](/examples/08-create-order.php)

```php
// Create an order (valid for one day)
$order = $culqi->Orders->create(
    array(
        "amount" => 1000,
        "currency_code" => "PEN",
        "description" => 'Venta de prueba',
        "order_number" => 'pedido-9999',
        "client_details" => array(
            "first_name" => "Brayan",
            "last_name" => "Cruces",
            "email" => "micorreo@gmail.com",
            "phone_number" => "51945145222"
        ),
        "expiration_date" => time() + 24*60*60 // order valid for one day
    )
);

print_r($order);
```

## Running the examples

```bash
git clone https://github.com/culqi/culqi-php.git
composer install
cd culqi-php/examples
php -S 0.0.0.0:8000
```

## Documentation

Need more information to integrate `culqi-php`? The full documentation is available at [https://culqi.com/docs/](https://culqi.com/docs/)

## Tests

```bash
composer install
phpunit --verbose --tap tests/*
```

## License

MIT license. See LICENSE.md.
25.990385
316
0.648724
spa_Latn
0.590589
f80a2abbb87ad9fd4bde71b0e70b4a26adab1db6
4,672
md
Markdown
wdk-ddi-src/content/compstui/ns-compstui-_opttype.md
kein284/windows-driver-docs-ddi
3b70e03c2ddd9d8d531f4dc2fb9a4bb481b7dd54
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/compstui/ns-compstui-_opttype.md
kein284/windows-driver-docs-ddi
3b70e03c2ddd9d8d531f4dc2fb9a4bb481b7dd54
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/compstui/ns-compstui-_opttype.md
kein284/windows-driver-docs-ddi
3b70e03c2ddd9d8d531f4dc2fb9a4bb481b7dd54
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NS:compstui._OPTTYPE title: _OPTTYPE (compstui.h) description: The OPTTYPE structure is used by CPSUI applications (including printer interface DLLs) for describing the type and other characteristics of a property sheet option, if the option is specified by an OPTITEM structure. old-location: print\opttype.htm tech.root: print ms.assetid: 041dd438-e837-4912-bda7-de654204198b ms.date: 04/20/2018 keywords: ["OPTTYPE structure"] ms.keywords: "*POPTTYPE, OPTTYPE, OPTTYPE structure [Print Devices], POPTTYPE, POPTTYPE structure pointer [Print Devices], _OPTTYPE, compstui/OPTTYPE, compstui/POPTTYPE, cpsuifnc_de1ff2db-9eea-4daf-bc9e-2e24a2dd5271.xml, print.opttype" req.header: compstui.h req.include-header: Compstui.h req.target-type: Windows req.target-min-winverclnt: req.target-min-winversvr: req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: req.dll: req.irql: targetos: Windows req.typenames: OPTTYPE, *POPTTYPE f1_keywords: - _OPTTYPE - compstui/_OPTTYPE - POPTTYPE - compstui/POPTTYPE - OPTTYPE - compstui/OPTTYPE topic_type: - APIRef - kbSyntax api_type: - HeaderDef api_location: - compstui.h api_name: - OPTTYPE --- # _OPTTYPE structure ## -description The OPTTYPE structure is used by CPSUI applications (including printer interface DLLs) for describing the type and other characteristics of a <a href="https://docs.microsoft.com/windows-hardware/drivers/print/property-sheet-options">property sheet option</a>, if the option is specified by an <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/compstui/ns-compstui-_optitem">OPTITEM</a> structure. ## -struct-fields ### -field cbSize Size, in bytes, of the OPTTYPE structure. ### -field Type Specifies the <a href="https://docs.microsoft.com/windows-hardware/drivers/print/cpsui-option-types">CPSUI option type</a>. ### -field Flags Optional bit flags that modify the option's characteristics. 
The following flags can be set in any combination. #### OPTTF_NOSPACE_BEFORE_POSTFIX CPSUI should not add a space character between the string specified by the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/compstui/ns-compstui-_optitem">OPTITEM</a> structure's <b>pName</b> string and the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/compstui/ns-compstui-_optparam">OPTPARAM</a> structure's <b>pData</b> string, when displaying the option. Valid only if the option type is or <a href="https://docs.microsoft.com/windows-hardware/drivers/print/tvot-scrollbar">TVOT_SCROLLBAR</a> or <a href="https://docs.microsoft.com/windows-hardware/drivers/print/tvot-trackbar">TVOT_TRACKBAR</a>. #### OPTTF_TYPE_DISABLED All the OPTPARAM structures to which <b>pOptParam</b> points are disabled, so that none of the parameter values are user-selectable. ### -field Count Specifies the number of <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/compstui/ns-compstui-_optparam">OPTPARAM</a> structures to which <b>pOptParam</b> points. This member's value is dependent on the <a href="https://docs.microsoft.com/windows-hardware/drivers/print/cpsui-option-types">CPSUI option type</a>. ### -field BegCtrlID If <b>pDlgPage</b> in <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/compstui/ns-compstui-_compropsheetui">COMPROPSHEETUI</a> identifies a CPSUI-supplied page, or if <b>DlgTemplateID</b> in <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/compstui/ns-compstui-_dlgpage">DLGPAGE</a> identifies a CPSUI-supplied template, <b>BegCtrlID</b> is not used. Otherwise, <b>BegCtrlID</b> must contain the first of a sequentially numbered set of Windows control identifiers. Control identifier usage is dependent on the <a href="https://docs.microsoft.com/windows-hardware/drivers/print/cpsui-option-types">CPSUI option type</a>. 
### -field pOptParam Pointer to an array of <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/compstui/ns-compstui-_optparam">OPTPARAM</a> structures describing the parameter values that a user can select for the option. ### -field Style Specifies flags that can be used to modify the option's display characteristics. The flags that can be specified are dependent on the <a href="https://docs.microsoft.com/windows-hardware/drivers/print/cpsui-option-types">CPSUI option type</a>. ### -field wReserved Reserved, must be initialized to zero. ### -field dwReserved Reserved, must be initialized to zero.
41.345133
412
0.755351
eng_Latn
0.643445
f80a4012df36359da15056f75c57c257e643045a
382
md
Markdown
_posts/2017-07-07-the-laughi.md
pipiscrew/pipiscrew.github.io
9d81bd323c800a1bff2b6d26c3ec3eb96fb41004
[ "MIT" ]
null
null
null
_posts/2017-07-07-the-laughi.md
pipiscrew/pipiscrew.github.io
9d81bd323c800a1bff2b6d26c3ec3eb96fb41004
[ "MIT" ]
null
null
null
_posts/2017-07-07-the-laughi.md
pipiscrew/pipiscrew.github.io
9d81bd323c800a1bff2b6d26c3ec3eb96fb41004
[ "MIT" ]
null
null
null
--- title: The Laughing Cow was a much scarier beast in 1967 author: PipisCrew date: 2017-07-07 categories: [fun] toc: true --- https://www.reddit.com/r/mildlyinteresting/comments/6lpu17/the_laughing_cow_was_a_much_scarier_beast_in_1967/ origin - http://www.pipiscrew.com/2017/07/the-laughing-cow-was-a-much-scarier-beast-in-1967/ the-laughing-cow-was-a-much-scarier-beast-in-1967
34.727273
142
0.787958
eng_Latn
0.241898
f80a85ab53559bf9a86e1687392e5240c9ca0db4
187
md
Markdown
README.md
GoodyIT/D3-chart-Complex-Redisential-modeling---MERN
c108efd3a78ee3b3a1575d09c583548ab2453f59
[ "MIT" ]
1
2019-10-08T14:19:19.000Z
2019-10-08T14:19:19.000Z
README.md
GoodyIT/D3-chart-Complex-Redisential-modeling---MERN
c108efd3a78ee3b3a1575d09c583548ab2453f59
[ "MIT" ]
null
null
null
README.md
GoodyIT/D3-chart-Complex-Redisential-modeling---MERN
c108efd3a78ee3b3a1575d09c583548ab2453f59
[ "MIT" ]
null
null
null
# Admin Tool ### NodeJs, ExpressJS, ReactJS, Json 1. **Run the setup script** `npm i` 2. **Run the example app** `npm start -s` ### Project running on http://localhost:3000
15.583333
44
0.620321
eng_Latn
0.605665
f80aa229993d7fe639fc7c6d92c55fab147f273b
401
md
Markdown
content/posts/server/定时时间同步.md
kentxxq/kentxxq.github.io
60a5b180dea3d7456e3b11fc8ce38cd69d23677f
[ "MIT" ]
null
null
null
content/posts/server/定时时间同步.md
kentxxq/kentxxq.github.io
60a5b180dea3d7456e3b11fc8ce38cd69d23677f
[ "MIT" ]
null
null
null
content/posts/server/定时时间同步.md
kentxxq/kentxxq.github.io
60a5b180dea3d7456e3b11fc8ce38cd69d23677f
[ "MIT" ]
null
null
null
---
title: Scheduled time synchronization on CentOS
date: 1993-07-06 00:00:00+08:00
categories: ["Notes"]
tags: ["centos"]
keywords: ["centos","yum","ntp","ntpdate","crontab"]
description: "Scheduled time synchronization on CentOS using ntp, ntpdate and crontab"
---

```bash
# Install the ntp packages
yum install ntp ntpdate

# Synchronize the clock with the Shanghai University time server
ntpdate ntp.shu.edu.cn

# Synchronize once per hour and write the time to the hardware clock
crontab -e
0 * * * * /usr/sbin/ntpdate ntp.shu.edu.cn
0 * * * * /usr/sbin/hwclock -w
```
18.227273
52
0.67581
yue_Hant
0.227104
f80aced73d219780a4833f55a3fe1148e5bf9ba1
217
md
Markdown
resources/README.md
manashmndl/LearningKalmanFilter
9b15ab9ed18ea8294c18cd8788f17b6cd83948a1
[ "MIT" ]
null
null
null
resources/README.md
manashmndl/LearningKalmanFilter
9b15ab9ed18ea8294c18cd8788f17b6cd83948a1
[ "MIT" ]
null
null
null
resources/README.md
manashmndl/LearningKalmanFilter
9b15ab9ed18ea8294c18cd8788f17b6cd83948a1
[ "MIT" ]
null
null
null
# Important Resources for Learning Kalman Filter ### [Bilgin's blog](https://home.wlu.edu/~levys/kalman_tutorial/) ### [Probability Distribution](https://engineering.purdue.edu/AAECourses/aae567/Course-Archive/2007)
43.4
100
0.774194
yue_Hant
0.395425
f80b1d843807360b47f49a12304ff3b98f6aa0cd
181
md
Markdown
content/en/docs/task-1/percentage/_index.md
quangnd159/thatieltsguide
bf28005c9ff6f88d057b4f0e6a9a930d5fadb4bc
[ "MIT" ]
null
null
null
content/en/docs/task-1/percentage/_index.md
quangnd159/thatieltsguide
bf28005c9ff6f88d057b4f0e6a9a930d5fadb4bc
[ "MIT" ]
10
2022-03-08T17:53:24.000Z
2022-03-28T17:35:16.000Z
content/en/docs/task-1/percentage/_index.md
quangnd159/thatieltsguide
bf28005c9ff6f88d057b4f0e6a9a930d5fadb4bc
[ "MIT" ]
null
null
null
--- title: "% Percentage" description: "Dealing with percentages" lead: "" date: 2020-10-06T08:48:45+00:00 lastmod: 2020-10-06T08:48:45+00:00 draft: false weight: 40 images: [] ---
16.454545
39
0.690608
eng_Latn
0.401864
f80bb74907561245973d6ccf9b0e7ee5499984ce
8,536
md
Markdown
examples/pytorch/lda/README.md
ketyi/dgl
a1b859c29b63a673c148d13231a49504740e0e01
[ "Apache-2.0" ]
9,516
2018-12-08T22:11:31.000Z
2022-03-31T13:04:33.000Z
examples/pytorch/lda/README.md
ketyi/dgl
a1b859c29b63a673c148d13231a49504740e0e01
[ "Apache-2.0" ]
2,494
2018-12-08T22:43:00.000Z
2022-03-31T21:16:27.000Z
examples/pytorch/lda/README.md
ketyi/dgl
a1b859c29b63a673c148d13231a49504740e0e01
[ "Apache-2.0" ]
2,529
2018-12-08T22:56:14.000Z
2022-03-31T13:07:41.000Z
Latent Dirichlet Allocation === LDA is a classical algorithm for probabilistic graphical models. It assumes hierarchical Bayes models with discrete variables on sparse doc/word graphs. This example shows how it can be done on DGL, where the corpus is represented as a bipartite multi-graph G. There is no back-propagation, because gradient descent is typically considered inefficient on probability simplex. On the provided small-scale example on 20 news groups dataset, our DGL-LDA model runs 50% faster on GPU than sklearn model without joblib parallel. For larger graphs, thanks to subgraph sampling and low-memory implementation, we may fit 100 million unique words with 256 topic dimensions on a large multi-gpu machine. (The runtime memory is often less than 2x of parameter storage.) Key equations --- <!-- https://editor.codecogs.com/ --> Let k be the topic index variable with one-hot encoded vector representation z. The rest of the variables are: | | z_d\~p(θ_d) | w_k\~p(β_k) | z_dw\~q(ϕ_dw) | |-------------|-------------|-------------|---------------| | Prior | Dir(α) | Dir(η) | (n/a) | | Posterior | Dir(γ_d) | Dir(λ_k) | (n/a) | We overload w with bold-symbol-w, which represents the entire observed document-world multi-graph. The difference is better shown in the original paper. **Multinomial PCA** Multinomial PCA is a "latent allocation" model without the "Dirichlet". Its data likelihood sums over the latent topic-index variable k, <img src="https://latex.codecogs.com/svg.image?\inline&space;p(w_{di}|\theta_d,\beta)=\sum_k\theta_{dk}\beta_{kw}"/>, where θ_d and β_k are shared within the same document and topic, respectively. If we perform gradient descent, we may need additional steps to project the parameters to the probability simplices: <img src="https://latex.codecogs.com/svg.image?\inline&space;\sum_k\theta_{dk}=1"/> and <img src="https://latex.codecogs.com/svg.image?\inline&space;\sum_w\beta_{kw}=1"/>. 
Instead, a more efficient solution is to borrow ideas from evidence lower-bound (ELBO) decomposition: <!-- \log p(w) \geq \mathcal{L}(w,\phi) \stackrel{def}{=} \mathbb{E}_q [\log p(w,z;\theta,\beta) - \log q(z;\phi)] \\= \mathbb{E}_q [\log p(w|z;\beta) + \log p(z;\theta) - \log q(z;\phi)] \\= \sum_{dwk}n_{dw}\phi_{dwk} [\log\beta_{kw} + \log \theta_{dk} - \log \phi_{dwk}] --> <img src="https://latex.codecogs.com/svg.image?\log&space;p(w)&space;\geq&space;\mathcal{L}(w,\phi)\stackrel{def}{=}\mathbb{E}_q&space;[\log&space;p(w,z;\theta,\beta)&space;-&space;\log&space;q(z;\phi)]\\=\mathbb{E}_q&space;[\log&space;p(w|z;\beta)&space;&plus;&space;\log&space;p(z;\theta)&space;-&space;\log&space;q(z;\phi)]\\=\sum_{dwk}n_{dw}\phi_{dwk}&space;[\log\beta_{kw}&space;&plus;&space;\log&space;\theta_{dk}&space;-&space;\log&space;\phi_{dwk}]"/> The solutions for <img src="https://latex.codecogs.com/svg.image?\inline&space;\theta_{dk}\propto\sum_wn_{dw}\phi_{dwk}"/> and <img src="https://latex.codecogs.com/svg.image?\inline&space;\beta_{kw}\propto\sum_dn_{dw}\phi_{dwk}"/> follow from the maximization of cross-entropy loss. The solution for <img src="https://latex.codecogs.com/svg.image?\inline&space;\phi_{dwk}\propto&space;\theta_{dk}\beta_{kw}"/> follows from Kullback-Leibler divergence. After normalizing to <img src="https://latex.codecogs.com/svg.image?\inline&space;\sum_k\phi_{dwk}=1"/>, the difference <img src="https://latex.codecogs.com/svg.image?\inline&space;\ell_{dw}=\log\beta_{kw}+\log\theta_{dk}-\log\phi_{dwk}"/> becomes constant in k, which is connected to the likelihood for the observed document-word pairs. Note that after learning, the document vector θ_d considers the correlation between all words in d and similarly the topic distribution vector β_k considers the correlations in all observed documents. 
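The alternating updates above can be sketched in a few lines of NumPy. This is a dense, single-machine illustration of the multinomial-PCA EM cycle, not the sparse DGL message-passing implementation; the function name and dimensions are illustrative.

```python
import numpy as np

def pca_em(n, K, n_iter=100, seed=0):
    """EM for multinomial PCA ("latent allocation" without the Dirichlet).

    n : (D, W) document-word count matrix
    Returns theta (D, K) and beta (K, W), with rows on the probability simplex.
    """
    rng = np.random.default_rng(seed)
    D, W = n.shape
    theta = rng.dirichlet(np.ones(K), size=D)   # (D, K) document-topic weights
    beta = rng.dirichlet(np.ones(W), size=K)    # (K, W) topic-word weights
    for _ in range(n_iter):
        # E-step: phi_dwk ∝ theta_dk * beta_kw, normalized over k
        phi = theta[:, None, :] * beta.T[None, :, :]          # (D, W, K)
        phi /= phi.sum(axis=2, keepdims=True) + 1e-12
        # M-step: theta_dk ∝ sum_w n_dw phi_dwk ; beta_kw ∝ sum_d n_dw phi_dwk
        w = n[:, :, None] * phi                               # count-weighted responsibilities
        theta = w.sum(axis=1)
        theta /= theta.sum(axis=1, keepdims=True)
        beta = w.sum(axis=0).T
        beta /= beta.sum(axis=1, keepdims=True)
    return theta, beta
```

Note that the simplex constraints are maintained by normalization inside each M-step, so no explicit projection step is needed.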
**Variational Bayes** A Bayesian model adds Dirichlet priors to θ_d and β_z, which leads to a similar ELBO if we assume independence <img src="https://latex.codecogs.com/svg.image?\inline&space;q(z,\theta,\beta;\phi,\gamma,\lambda)=q(z;\phi)q(\theta;\gamma)q(\beta;\lambda)"/>, i.e.: <!-- \log p(w;\alpha,\eta) \geq \mathcal{L}(w,\phi,\gamma,\lambda) \stackrel{def}{=} \mathbb{E}_q [\log p(w,z,\theta,\beta;\alpha,\eta) - \log q(z,\theta,\beta;\phi,\gamma,\lambda)] \\= \mathbb{E}_q \left[ \log p(w|z,\beta) + \log p(z|\theta) - \log q(z;\phi) +\log p(\theta;\alpha) - \log q(\theta;\gamma) +\log p(\beta;\eta) - \log q(\beta;\lambda) \right] \\= \sum_{dwk}n_{dw}\phi_{dwk} (\mathbb{E}_{\lambda_k}[\log\beta_{kw}] + \mathbb{E}_{\gamma_d}[\log \theta_{dk}] - \log \phi_{dwk}) \\+\sum_{d}\left[ (\alpha-\gamma_d)^\top\mathbb{E}_{\gamma_d}[\log\theta_d] -(\log B(\alpha 1_K) - \log B(\gamma_d)) \right] \\+\sum_{k}\left[ (\eta-\lambda_k)^\top\mathbb{E}_{\lambda_k}[\log\beta_k] -(\log B(\eta 1_W) - \log B(\lambda_k)) \right] --> <img 
src="https://latex.codecogs.com/svg.image?\log&space;p(w;\alpha,\eta)&space;\geq&space;\mathcal{L}(w,\phi,\gamma,\lambda)\stackrel{def}{=}\mathbb{E}_q&space;[\log&space;p(w,z,\theta,\beta;\alpha,\eta)&space;-&space;\log&space;q(z,\theta,\beta;\phi,\gamma,\lambda)]\\=\mathbb{E}_q&space;\left[\log&space;p(w|z,\beta)&space;&plus;&space;\log&space;p(z|\theta)&space;-&space;\log&space;q(z;\phi)&plus;\log&space;p(\theta;\alpha)&space;-&space;\log&space;q(\theta;\gamma)&plus;\log&space;p(\beta;\eta)&space;-&space;\log&space;q(\beta;\lambda)\right]\\=\sum_{dwk}n_{dw}\phi_{dwk}&space;(\mathbb{E}_{\lambda_k}[\log\beta_{kw}]&space;&plus;&space;\mathbb{E}_{\gamma_d}[\log&space;\theta_{dk}]&space;-&space;\log&space;\phi_{dwk})\\&plus;\sum_{d}\left[(\alpha-\gamma_d)^\top\mathbb{E}_{\gamma_d}[\log\theta_d]-(\log&space;B(\alpha&space;1_K)&space;-&space;\log&space;B(\gamma_d))\right]\\&plus;\sum_{k}\left[(\eta-\lambda_k)^\top\mathbb{E}_{\lambda_k}[\log\beta_k]-(\log&space;B(\eta&space;1_W)&space;-&space;\log&space;B(\lambda_k))\right]"/> **Solutions** The solutions to VB subsumes the solutions to multinomial PCA when n goes to infinity. The solution for ϕ is <img src="https://latex.codecogs.com/svg.image?\inline&space;\log\phi_{dwk}=\mathbb{E}_{\gamma_d}[\log\theta_{dk}]+\mathbb{E}_{\lambda_k}[\log\beta_{kw}]-\ell_{dw}"/>, where the additional expectation can be expressed via digamma functions and <img src="https://latex.codecogs.com/svg.image?\inline&space;\ell_{dw}=\log\sum_k\exp(\mathbb{E}_{\gamma_d}[\log\theta_{dk}]+\mathbb{E}_{\lambda_k}[\log\beta_{kw}])"/> is the log-partition function. The solutions for <img src="https://latex.codecogs.com/svg.image?\inline&space;\gamma_{dk}=\alpha+\sum_wn_{dw}\phi_{dwk}"/> and <img src="https://latex.codecogs.com/svg.image?\inline&space;\lambda_{kw}=\eta+\sum_dn_{dw}\phi_{dwk}"/> come from direct gradient calculation. 
After substituting the optimal solutions, we compute the marginal likelihood by adding the three terms, which are all connected to (the negative of) Kullback-Leibler divergence. DGL usage --- The corpus is represented as a bipartite multi-graph G. We use DGL to propagate information through the edges and aggregate the distributions at doc/word nodes. For scalability, the phi variables are transient and updated during message passing. The gamma / lambda variables are updated after the nodes receive all edge messages. Following the conventions in [1], the gamma update is called E-step and the lambda update is called M-step. The lambda variable is further recorded by the trainer. A separate function is used to produce perplexity, which is based on the ELBO objective function divided by the total numbers of word/doc occurrences. Example --- `%run example_20newsgroups.py` * Approximately matches scikit-learn training perplexity after 10 rounds of training. * Exactly matches scikit-learn training perplexity if word_z is set to lda.components_.T * There is a difference in how we compute testing perplexity. We weigh the beta contributions by the training word counts, whereas sklearn weighs them by test word counts. * The DGL-LDA model runs 50% faster on GPU devices compared with sklearn without joblib parallel. Advanced configurations --- * Set `0<rho<1` for online learning with partial_fit. * Set `mult["doc"]=100` or `mult["word"]=100` or some large value to disable the corresponding Bayesian priors. References --- 1. Matthew Hoffman, Francis Bach, David Blei. Online Learning for Latent Dirichlet Allocation. Advances in Neural Information Processing Systems 23 (NIPS 2010). 2. Reactive LDA Library blogpost by Yingjie Miao for a similar Gibbs model
56.906667
1,041
0.718486
eng_Latn
0.718406
f80cdebe8ebe8424b8efd960c3ef9a4446180f78
820
md
Markdown
docs/_docs/0540_signup.md
Bhaskers-Blu-Org2/ai-at-edge
d0bac4f2c3fa02a7d3bf9d9563f9153454e6b23b
[ "CC-BY-4.0", "MIT" ]
2
2019-11-05T16:20:12.000Z
2020-04-26T09:30:47.000Z
docs/_docs/0540_signup.md
Bhaskers-Blu-Org2/ai-at-edge
d0bac4f2c3fa02a7d3bf9d9563f9153454e6b23b
[ "CC-BY-4.0", "MIT" ]
4
2019-12-04T00:27:58.000Z
2022-02-26T05:49:31.000Z
docs/_docs/0540_signup.md
microsoft/ai-at-edge
d0bac4f2c3fa02a7d3bf9d9563f9153454e6b23b
[ "CC-BY-4.0", "MIT" ]
5
2019-11-15T10:02:22.000Z
2020-08-05T14:32:51.000Z
--- title: "Sign up customer and partner research" permalink: /docs/signup/ excerpt: "Sign up customer and partner research" variable: - platform: windows name: Windows - platform: macos name: macOS last_modified_at: 2019-09-12 --- ## Help us, help you Over the next 90 days the Azure Edge Devices team will be doing extensive customer and partner research to inform our next wave of product innovation. If you are currently involved in designing, implementing, selling or operating solutions that use AI to reason over and make business decisions based on data from connected cameras or audio devices, we would love to hear about your experiences, challenges, and how Microsoft can help you achieve more. Sign up [here](https://aka.ms/hwpartnerengage){:target="_blank"} and expect to hear from us soon!
41
301
0.770732
eng_Latn
0.997299
f80d84cb018061204672160c805f180b86304fe3
4,455
md
Markdown
site/zh-cn/js/guide/platform_environment.md
Catminusminus/docs
e16e25854145c503f5d2c40620d770ee0cc35bd1
[ "Apache-2.0" ]
4
2019-08-20T11:59:23.000Z
2020-01-12T13:42:50.000Z
site/zh-cn/js/guide/platform_environment.md
Catminusminus/docs
e16e25854145c503f5d2c40620d770ee0cc35bd1
[ "Apache-2.0" ]
1
2020-01-11T03:55:25.000Z
2020-01-11T03:55:25.000Z
site/zh-cn/js/guide/platform_environment.md
Catminusminus/docs
e16e25854145c503f5d2c40620d770ee0cc35bd1
[ "Apache-2.0" ]
2
2020-01-15T21:50:31.000Z
2020-01-15T21:56:30.000Z
# Platforms and environments

TensorFlow.js works on two platforms: the browser and Node.js. The platforms differ in many ways, and those differences affect how applications are developed on each.

In the browser, TensorFlow.js supports both mobile and desktop devices. While devices vary widely in configuration, the WebGL API provided by TensorFlow.js detects the device automatically and optimizes accordingly.

On Node.js, TensorFlow.js supports binding directly to the TensorFlow API, as well as running in a slower CPU-only environment.

## [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#environments)Environments

When a TensorFlow.js program runs, its overall configuration is called the environment. It consists of one global backend plus a set of flags that finely control TensorFlow.js features.

### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#backends)Backends

TensorFlow.js supports multiple backends that implement tensor storage and mathematical operations. Only one backend is active at any time. Most of the time, TensorFlow.js automatically chooses the best backend for the current environment. Even so, you should know how to find out which backend is in use and how to switch between backends.

To find the current backend:

```js
console.log(tf.getBackend());
```

To switch backends manually:

```js
tf.setBackend('cpu');
console.log(tf.getBackend());
```

#### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#webgl-backend)WebGL backend

The WebGL backend, "webgl", is the most powerful backend available in the browser. It is up to 100x faster than the CPU backend, in part because tensors are stored as WebGL textures and mathematical operations are implemented in WebGL shaders.

Here are a few things to know when using this backend.

##### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#avoid-blocking-the-ui-thread)Avoid blocking the UI thread

When an operation such as tf.matMul(a, b) is called, the resulting tf.Tensor is returned synchronously, but the matrix multiplication may not actually be finished yet. The returned tf.Tensor is just a handle to the computation. When you call `x.data()` or `x.array()`, the actual values become available once the computation completes. To avoid blocking the UI thread while a computation runs, use the asynchronous `x.data()` and `x.array()` rather than their synchronous counterparts `x.dataSync()` and `x.arraySync()`.

##### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#memory-management)Memory management

Note that when using the WebGL backend you must manage memory explicitly, because the WebGL textures that store tensors are not cleaned up by the browser's garbage collector.

Call dispose() to free the memory held by a tf.Tensor:

```js
const a = tf.tensor([[1, 2], [3, 4]]);
a.dispose();
```

Applications often chain multiple operations together. Keeping a reference to every intermediate variable just to dispose of it makes code less readable. TensorFlow.js provides tf.tidy(), which cleans up any tf.Tensor not returned by a function when the function finishes, much as local variables are cleaned up when a function returns:

```js
const a = tf.tensor([[1, 2], [3, 4]]);
const y = tf.tidy(() => {
  const result = a.square().log().neg();
  return result;
});
```

> Note: other, non-WebGL environments (such as the Node.js TensorFlow backend or the CPU backend) have automatic garbage collection, and calling dispose() or tidy() there has no side effects. In fact, calling them proactively usually performs better than relying on garbage collection.

##### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#precision)Precision

On mobile devices, WebGL only supports 16-bit floating-point textures, whereas most machine learning models are trained with 32-bit floating-point weights and activations. Because a 16-bit float can only represent numbers in the range [0.000000059605, 65504], this can cause precision problems when porting a model to a mobile device. Make sure the weights and activations of your model stay within this range.

##### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#shader-compilation--texture-uploads)Shader compilation & texture uploads

TensorFlow.js runs WebGL shader programs on the GPU. These shaders are compiled lazily, i.e. only when they are first invoked. Compilation happens on the main CPU thread and slows the program down. TensorFlow.js caches compiled shaders automatically, so a later call with tensors of the same shapes and the same input/output will be much faster. Applications built with TensorFlow.js typically run the same operations many times, so the second run is much faster.

TensorFlow.js also stores tf.Tensor data as WebGL textures. When a tf.Tensor is created, it is not uploaded to the GPU immediately; that happens the first time it is used. If the tf.Tensor is used a second time, it is already on the GPU, so the upload cost is avoided. In a typical machine learning model this means the weights are uploaded during the first prediction, and the second prediction is much faster.

If you care about the performance of the first prediction, we recommend warming up the model by passing an input tensor of the same shape.

For example:

```js
const model = await tf.loadLayersModel(modelUrl);

// Warm up the model with an input of the real shape
const warmupResult = model.predict(tf.zeros(inputShape));
warmupResult.dataSync();
warmupResult.dispose();

// The second predict() call will be much faster
const result = model.predict(userData);
```

#### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#nodejs-tensorflow-backend)Node.js TensorFlow backend

In the Node.js TensorFlow backend, "node", TensorFlow's C API is used to accelerate operations. It uses the machine's hardware acceleration, such as CUDA, where available.

In this backend, as in the WebGL backend, functions return `tf.Tensor` synchronously. Unlike the WebGL backend, however, the computation has already finished by the time you get the tensor back. This means a call such as `tf.matMul(a, b)` blocks the UI thread.

Therefore, in a production setting you should call it from a worker thread rather than the main thread.

For more information on Node.js, see the relevant documentation.

#### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#cpu-backend)CPU backend

This is the least performant backend, but also the simplest. All operations are implemented in vanilla JavaScript, so there is little parallelism, and operations block the UI thread.

This backend is useful for testing, or on devices where WebGL is unavailable.

### [](https://github.com/tensorflow/tfjs-website/blob/master/docs/guide/platform_environment.md#flags)Flags

TensorFlow.js has a set of environment flags that are evaluated and detected automatically to guarantee the best configuration on the current platform. Most of these flags are internal, but a few global flags can be controlled through the API.

- `tf.enableProdMode():` enables production mode, which removes model validation, NaN checks and other error checks in favor of performance.
- `tf.enableDebugMode()`: enables debug mode, which logs every operation to the console, along with runtime performance information such as the memory footprint and kernel execution time. Note that this greatly slows down the application and must not be used in production.

Note: these two methods should be called before any other TensorFlow.js code, because they affect all other flags. For the same reason, there is no corresponding "disable" method.

Note: all flags are logged to the console as tf.ENV.features. Although there is no public API for them (so no guarantee of version compatibility), you can use tf.ENV.set to change these flags in order to fine-tune or diagnose your program.
42.428571
221
0.817957
yue_Hant
0.718872
f80d943faa6415d8a801c15ee37d7682a248fa38
4,683
md
Markdown
business-central/includes/admin-setup-email-public-folder.md
DanielMagMat/dynamics365smb-docs
cc31c39b7632fdf867be5f3111ec26f2615af534
[ "CC-BY-4.0", "MIT" ]
null
null
null
business-central/includes/admin-setup-email-public-folder.md
DanielMagMat/dynamics365smb-docs
cc31c39b7632fdf867be5f3111ec26f2615af534
[ "CC-BY-4.0", "MIT" ]
null
null
null
business-central/includes/admin-setup-email-public-folder.md
DanielMagMat/dynamics365smb-docs
cc31c39b7632fdf867be5f3111ec26f2615af534
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
author: edupont04
ms.topic: include
ms.date: 02/15/2022
ms.author: edupont
---

> [!NOTE]
> The following sections assume that you have administrator access for Exchange Online.

Before you can set up email logging, you must prepare Office 365 [public folders](/exchange/collaboration-exo/public-folders/public-folders). You can do this in the [Exchange admin center](/exchange/exchange-admin-center?preserve-view=true), or you can use the [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true).

> [!TIP]
> If you want to use the [Exchange Online PowerShell](/powershell/exchange/exchange-online-powershell?view=exchange-ps&preserve-view=true), you can find inspiration for how to set up your script in a sample script that we published in [the BCTech repo](https://github.com/microsoft/BCTech/tree/master/samples/EmailLogging).

Follow the steps below to set up Exchange Online, with links to where you can learn more.

### Create an admin role group

Create an admin role group for public folders based on the information in the following table:

|Property |Value |
|----------------|--------------------------|
|Name |Public Folders Management |
|Selected roles |Public Folders |
|Selected users |The email of the user account that Business Central will use to run the email logging job|

For more information, see [Manage role groups in Exchange Online](/exchange/permissions-exo/role-groups).

### Create a new public folder mailbox

Create a new public folder mailbox based on the information in the following table:

|Property |Value |
|----------------|--------------------------|
|Name |Public MailBox |

For more information, see [Create a public folder mailbox](/exchange/collaboration-exo/public-folders/create-public-folder-mailbox).

### Create new public folders

1. Create a new public folder with the name **Email Logging** in the root so that the full path to the folder becomes `\Email Logging\`.
2. 
Create two sub-folders so that the result is the following full paths to the folders:

   - `\Email Logging\Queue\`
   - `\Email Logging\Storage\`

   For more information, see [Create a public folder](/exchange/collaboration-exo/public-folders/create-public-folder).

### Set public folder ownership

Set the email logging user as an owner of both public folders, *Queue* and *Storage*.

For more information, see [Assign permissions to the public folder](/exchange/collaboration-exo/public-folders/set-up-public-folders#step-3-assign-permissions-to-the-public-folder).

### Mail-enable the *Queue* public folder

For more information, see [Mail-enable or mail-disable a public folder](/exchange/collaboration-exo/public-folders/enable-or-disable-mail-for-public-folder).

### Mail-enable sending emails to the *Queue* public folder

Mail-enable sending emails to the *Queue* public folder using Outlook or the Exchange Management Shell. For more information, see [Allow anonymous users to send email to a mail-enabled public folder](/exchange/collaboration-exo/public-folders/enable-or-disable-mail-for-public-folder#allow-anonymous-users-to-send-email-to-a-mail-enabled-public-folder?preserve-view=true).

### Create mail flow rules

Create two mail flow rules based on the information in the following table:

|Purpose |Name |Apply this rule if... |Do the following... |
|---------|-----|----------------------------------|---------------------------------------------|
|A rule for incoming email |Log Email Sent to This Organization|*The sender* is located *Outside the organization*, and *the recipient* is located *Inside the organization*|BCC the email account that is specified for the *Queue* public folder|
|A rule for outgoing email | Log Email Sent from This Organization |*The sender* is located *Inside the organization*, and *the recipient* is located *Outside the organization*|BCC the email account that is specified for the *Queue* public folder|

For more information, see [Manage mail flow rules in Exchange Online](/exchange/security-and-compliance/mail-flow-rules/manage-mail-flow-rules?preserve-view=true) and [Mail flow rule actions in Exchange Online](/exchange/security-and-compliance/mail-flow-rules/mail-flow-rule-actions?preserve-view=true).

> [!NOTE]
> If you make changes in the Exchange Management Shell, the changes become visible in the Exchange admin center after a delay. Also, the changes made in Exchange will be available in [!INCLUDE[prod_short](prod_short.md)] after a delay. The delay might be several hours.
57.814815
376
0.724749
eng_Latn
0.984058
f80da8d878a7aee62569f922d1dd959b114f2aa0
1,735
md
Markdown
content/MC-draft.md
odirk/blog
21459b323aa038dad56a6fc9c8cb9744570fe40f
[ "MIT" ]
null
null
null
content/MC-draft.md
odirk/blog
21459b323aa038dad56a6fc9c8cb9744570fe40f
[ "MIT" ]
null
null
null
content/MC-draft.md
odirk/blog
21459b323aa038dad56a6fc9c8cb9744570fe40f
[ "MIT" ]
null
null
null
## December

### Books

- [Mindfulness in Plain English](https://www.amazon.com.br/Mindfulness-Plain-English-20th-Anniversary/dp/0861719069), Gunaratana. Starting my meditation journey. The book covers myths, realities, and benefits of meditation and mindfulness practice (8);

&nbsp;

&nbsp;

### New albums

- Silk Sonic - [An Evening with Silk Sonic](https://open.spotify.com/album/0S0r2RFucaW9kVjBtcBOV1) (Pop/R&B). XXX. (6.5);

See the full report [here](https://www.last.fm/user/GabrielDuro/library/albums?from=2021-11-01&to=2021-11-30).

&nbsp;

&nbsp;

### Movies

- [Khane-ye doust kodjast? (Where Is the Friend's Home?)](https://www.imdb.com/title/tt0093342/) (1987). Slow, but rewarding. An unforgettable film with prodigious moments. It is a poem (to friendship, honesty, childhood, purity) more than a film. (8.5);

&nbsp;

&nbsp;

### Series

- [The Morning Show](https://www.imdb.com/title/tt7203552/), (S02). (8.5);
- [Seinfeld](https://www.imdb.com/title/tt0098904/), (S06). (9);
- [Bob's Burgers](https://www.imdb.com/title/tt1561755/), (S12). (7);
- [Succession](https://www.imdb.com/title/tt7660850/), (S03). (8.5);
- [M.A.S.H.](https://www.imdb.com/title/tt0068098/), (S02). (6.5);
- [Big Mouth](https://www.imdb.com/title/tt6524350/), (S05). Predictably for a show in its fifth season, it struggles to rediscover the balance between humor and heart. This season stumbles a bit, especially in the early episodes, pushing too hard with shock lines and gonzo situations that become tiresome. As usual, the best laughs are courtesy of the monsters. (6).
- F is For Family
- Masters of the Universe
- How to with John Wilson

&nbsp;

&nbsp;
38.555556
421
0.721614
por_Latn
0.924098
f80dd8d0474027ac7261f0dd52c905397191f4c7
485
md
Markdown
en/074/python/README.md
franciscogomes2020/exercises
8b33c4b9349a9331e4002a8225adc2a482c70024
[ "MIT" ]
null
null
null
en/074/python/README.md
franciscogomes2020/exercises
8b33c4b9349a9331e4002a8225adc2a482c70024
[ "MIT" ]
null
null
null
en/074/python/README.md
franciscogomes2020/exercises
8b33c4b9349a9331e4002a8225adc2a482c70024
[ "MIT" ]
null
null
null
# Create a program that will generate five random numbers and put them into a tuple. After that, show the list of generated numbers and also indicate the smallest and largest values that are in the tuple.

## Example in Python

### code

``` python
[put your code here]
```

### output

```
[put the output of your code here]
```

## Next

- [Example in python 073](../../073/python)
- **Example in python 074**
- [Example in python 075](../../075/python)
- [List of exercises](../..)
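One possible solution sketch (the value range 1–10 is an arbitrary choice, not part of the exercise statement):

``` python
import random

# Generate five random numbers and put them into a tuple.
numbers = tuple(random.randint(1, 10) for _ in range(5))

print(f"generated numbers: {numbers}")
print(f"smallest value: {min(numbers)}")
print(f"largest value: {max(numbers)}")
```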
21.086957
204
0.674227
eng_Latn
0.999384
4b03e276942a99fe95e37203ae505db6c35db0d0
151
md
Markdown
DevResources.md
Daniel-io/course-project-natours
38a99a09a16f0fb83106ad9cb01ddbac0a8a5d54
[ "MIT" ]
null
null
null
DevResources.md
Daniel-io/course-project-natours
38a99a09a16f0fb83106ad9cb01ddbac0a8a5d54
[ "MIT" ]
1
2020-07-20T02:29:45.000Z
2020-07-20T02:29:45.000Z
DevResources.md
Daniel-io/course-project-natours
38a99a09a16f0fb83106ad9cb01ddbac0a8a5d54
[ "MIT" ]
null
null
null
# Development Resources

## HTML & CSS

- Create baseline grid: https://basehold.it/

Quick use: `<link rel="stylesheet" href="//basehold.it/24">`
21.571429
62
0.668874
kor_Hang
0.361711
4b05078123194a564d6373be560c533ef8255d51
921
md
Markdown
class-07/README.md
royce79-creator/lab-14
6dc3578ed85c983e5ff1d894bf30c8bcd1f6196c
[ "MIT" ]
null
null
null
class-07/README.md
royce79-creator/lab-14
6dc3578ed85c983e5ff1d894bf30c8bcd1f6196c
[ "MIT" ]
null
null
null
class-07/README.md
royce79-creator/lab-14
6dc3578ed85c983e5ff1d894bf30c8bcd1f6196c
[ "MIT" ]
null
null
null
# Intro to Object-Oriented Programming with Constructor Functions; HTML Tables

## Lecture

## repl

- [class-06-review-object-literals-and-methods](https://replit.com/@rkgallaway/class-06-review-object-literals-and-methods#index.js)
- [class-07-constructor-functions](https://replit.com/@rkgallaway/class-07-constructor-functions#index.js)

### Learning Objectives

As a result of completing Lecture 7 of Code 201, students will:

- Be able to translate an object literal into a constructor function, as measured by successful completion of the daily code assignment
- Be able to use the ‘prototype’ property to extend the inheritable properties and methods of a constructor function, as measured by successful completion of the daily code assignment
- Be able to dynamically build an HTML table with JavaScript and render it to the DOM, as measured by successful completion of the daily code assignment
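A minimal sketch of the first two objectives — translating an object literal into a constructor function and extending it via `prototype`. The `Doggo` example is illustrative only, not taken from the lecture repls:

```javascript
// An object literal with a method.
const doggoLiteral = {
  name: 'Rex',
  speak: function () {
    return `${this.name} says woof`;
  },
};

// The same shape expressed as a constructor function.
function Doggo(name) {
  this.name = name;
}

// Methods placed on the prototype are shared by every instance.
Doggo.prototype.speak = function () {
  return `${this.name} says woof`;
};

const rex = new Doggo('Rex');
console.log(doggoLiteral.speak()); // → Rex says woof
console.log(rex.speak());          // → Rex says woof
```

Putting `speak` on the prototype means one shared function object serves every `Doggo` instance, instead of each instance carrying its own copy as the literal does.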
54.176471
184
0.783931
eng_Latn
0.986743
4b055902929acf53225bbf80df5cbe1d29928465
6,342
md
Markdown
powerbi-docs/desktop-access-database-errors.md
roel-de-vries/powerbi-docs.nl-nl
f3333fbc2e2773b387a695e77511aa5d23970f92
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/desktop-access-database-errors.md
roel-de-vries/powerbi-docs.nl-nl
f3333fbc2e2773b387a695e77511aa5d23970f92
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/desktop-access-database-errors.md
roel-de-vries/powerbi-docs.nl-nl
f3333fbc2e2773b387a695e77511aa5d23970f92
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Problemen met importeren in Access en met XLS-bestanden oplossen in Power BI Desktop description: Problemen met het importeren van Access-databases en XLS-spreadsheets oplossen in Power BI Desktop en Power Query author: davidiseminger manager: kfile ms.reviewer: '' ms.service: powerbi ms.component: powerbi-desktop ms.topic: conceptual ms.date: 07/24/2018 ms.author: davidi LocalizationGroup: Troubleshooting ms.openlocfilehash: 04e95ade5d7c7d0e2b9a6d9690873437e2ec1b6d ms.sourcegitcommit: f01a88e583889bd77b712f11da4a379c88a22b76 ms.translationtype: HT ms.contentlocale: nl-NL ms.lasthandoff: 07/27/2018 ms.locfileid: "39329725" --- # <a name="resolve-issues-importing-access-and-xls-files-in-power-bi-desktop"></a>Problemen met het importeren van Access- en XLS-bestanden oplossen in Power BI Desktop In **Power BI Desktop** maken zowel **Access-databases** als vroege versies van **Excel-werkmappen** (XLS-bestanden van het type Excel 97-2003) gebruik van de *Access-database-engine*. Er zijn drie veelvoorkomende situaties die kunnen verhinderen dat de Access-database-engine niet goed werkt: ### <a name="situation-1-no-access-database-engine-installed"></a>Situatie 1: er is geen Access-database-engine geïnstalleerd Als het foutbericht in Power BI Desktop aangeeft dat de Access-database-engine niet is geïnstalleerd, moet u de versie van de Access-database-engine (32- of 64-bits) installeren die overeenkomt met uw versie van Power BI Desktop. U kunt de Access-database-engine installeren vanaf de [downloadpagina](http://www.microsoft.com/en-us/download/details.aspx?id=13255). >[!NOTE] >Als de geïnstalleerde versie van de Access-database-engine niet overeenkomt met de geïnstalleerde versie van Microsoft Office, kunnen Office-toepassingen geen gebruik maken van de Access-database-engine. 
### <a name="situation-2-the-access-database-engine-bit-version-32-bit-or-64-bit-is-different-from-your-power-bi-desktop-bit-version"></a>Situatie 2: de versie van de Access-database-engine (32-bits of 64-bits) komt niet overeen met de versie van Power BI Desktop Deze situatie treedt vaak op wanneer de geïnstalleerde versie van Microsoft Office 32-bits is en de versie van Power BI Desktop 64-bits. Het omgekeerde komt ook voor (als u een Office 365-abonnement hebt raadpleegt u **Situatie 3** voor een ander probleem en een andere oplossing). Met een van de volgende oplossingen kunt u het probleem met de tegengestelde bitsversies oplossen: 1. Wijzig de versie van Power BI Desktop zodat deze overeenkomt met de bitsversie van het geïnstalleerde Microsoft Office. Als u de bitsversie van Power BI Desktop wilt wijzigen, verwijdert u Power BI Desktop en installeert u de versie van Power BI Desktop die overeenkomt met uw Office-installatie. Selecteer **Geavanceerde downloadopties** op de downloadpagina voor desktop als u een versie van Power BI Desktop wilt selecteren. ![](media/desktop-access-database-errors/desktop-access-errors-1.png) Kies uw taal en selecteer de knop **Downloaden**. Schakel in het scherm dat verschijnt het selectievakje naast PBIDesktop.msi in voor de 32-bits versie, of naast PBIDesktop_x64.msi voor de 64-bits versie. In het volgende scherm is de 64-bits versie geselecteerd. ![](media/desktop-access-database-errors/desktop-access-errors-2.png) >[!NOTE] >Als u de 32-bitsversie van Power BI Desktop gebruikt, kunt u geheugenproblemen ondervinden bij het maken van zeer grote gegevensmodellen. 2. Wijzig de versie van Microsoft Office zodat dit overeenkomt met de bitsversie van uw Power BI Desktop. Als u de bitsversie van Microsoft Office wilt wijzigen, verwijdert u Microsoft Office en installeert u de versie van Microsoft Office die overeenkomt met uw Power BI Desktop. 3. 
Als de fout is opgetreden bij het openen van een XLS-bestand (een werkmap in Excel 97-2003), dan kunt u het gebruik van de Access-database-engine vermijden door het XLS-bestand in Excel te openen en het op te slaan als een XLSX-bestand. 4. Als u het probleem niet kunt oplossen met een van deze drie oplossingen, kunt u mogelijk beide versies van de Access-database-engine verwijderen. Dit is echter *geen* aanbevolen tijdelijke oplossing. Als u beide versies installeert, wordt het probleem met Power Query voor Excel en Power BI Desktop weliswaar opgelost, maar worden er fouten geïntroduceerd voor alle toepassingen die automatisch (standaard) gebruikmaken van de bitsversie van de Access-database-engine die oorspronkelijk is geïnstalleerd. Als u beide bitsversies van de Access-database-engine wilt installeren, [download](http://www.microsoft.com/en-us/download/details.aspx?id=13255) u beide versies en voert u ze beide uit met behulp van de */passive*-schakelaar. Bijvoorbeeld: c:\users\joe\downloads\AccessDatabaseEngine.exe /passive c:\users\joe\downloads\AccessDatabaseEngine_x64.exe /passive ### <a name="situation-3-trouble-using-access-or-xls-files-with-an-office-365-subscription"></a>Situatie 3: problemen met Access- of XLS-bestanden bij een Office 365-abonnement Als u een Office 365-abonnement hebt (dit kan **Office 2013** of **Office 2016** zijn), is de provider van de Access Database Engine geregistreerd in een locatie voor virtuele registers die *alleen* toegankelijk is voor Office-processen. Het gevolg is dat de Mashup-engine (die verantwoordelijk is voor het uitvoeren van niet-Office 365 Excel en Power BI Desktop), die geen Office-proces is, geen gebruik kan maken van de provider van de Access-database-engine. 
U kunt deze situatie oplossen door de [herdistribueerbare versie van de Access-database-engine te downloaden](http://www.microsoft.com/en-us/download/details.aspx?id=13255) en installeren die overeenkomt met de bitsversie van uw Power BI Desktop (zie eerdere secties voor meer informatie over bitsversies). ### <a name="other-situations-that-cause-import-issues"></a>Andere situaties die problemen met importeren kunnen geven Wij proberen zo veel mogelijk problemen met Access- of XLS-bestanden te behandelen. Als u een probleem hebt dat niet in dit artikel wordt behandeld, dien dan een vraag over het probleem in bij [Power BI Support](https://powerbi.microsoft.com/support/) (Ondersteuning van Power BI). Wij kijken regelmatig naar problemen waar veel klanten last van hebben en nemen ze in onze artikelen op.
109.344828
748
0.798802
nld_Latn
0.999065
4b059d9fc36b03354caad264fa28a4f8fffb6543
381
md
Markdown
modules/moderowanie/_posts/2022-01-04-refleksje.md
FundacjaFRSI/course-in-a-box
e79b3664a2078f9c3cdc160290c2f688f709b9bc
[ "MIT" ]
null
null
null
modules/moderowanie/_posts/2022-01-04-refleksje.md
FundacjaFRSI/course-in-a-box
e79b3664a2078f9c3cdc160290c2f688f709b9bc
[ "MIT" ]
null
null
null
modules/moderowanie/_posts/2022-01-04-refleksje.md
FundacjaFRSI/course-in-a-box
e79b3664a2078f9c3cdc160290c2f688f709b9bc
[ "MIT" ]
null
null
null
---
title: Reflections
---

# Reflections

*duration: about 10 minutes*

Was there anything during this course that surprised you when you pictured yourself in the role of a moderator? Is there anything you still want to work on?

These are the questions we ask ourselves during the monthly online meetings of knowledge-club moderators. We encourage you to take part in the next meeting!
31.75
157
0.795276
pol_Latn
0.999998
4b05adff902d0ecd4eadae6bb14419ade547a1f4
421
md
Markdown
app/src/main/assets/selection/about.md
honzarossler/Sorts
b5a1a35a71535da4815b58cccf110d837a3bf2cb
[ "MIT" ]
null
null
null
app/src/main/assets/selection/about.md
honzarossler/Sorts
b5a1a35a71535da4815b58cccf110d837a3bf2cb
[ "MIT" ]
null
null
null
app/src/main/assets/selection/about.md
honzarossler/Sorts
b5a1a35a71535da4815b58cccf110d837a3bf2cb
[ "MIT" ]
null
null
null
# About SelectionSort

It is an unstable sorting algorithm with complexity $$ O(n^2) $$. Compared to other algorithms of similar complexity (e.g. BubbleSort) it is faster, but it is slower than InsertionSort.

## Principle

The algorithm repeatedly selects the smallest (or largest) element and moves it to the beginning of the unsorted rest of the array. After an element is placed, the unsorted part of the array shrinks by one, and the sorted positions are ignored.
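A short Python sketch of the principle described above (the app itself is an Android project; Python is used here only for illustration):

```python
def selection_sort(values):
    """Return a sorted copy by repeatedly selecting the smallest
    remaining element and swapping it to the front of the unsorted part."""
    items = list(values)
    for i in range(len(items) - 1):
        # Find the smallest element in the unsorted part items[i:].
        smallest = i
        for j in range(i + 1, len(items)):
            if items[j] < items[smallest]:
                smallest = j
        # Move it to the beginning of the unsorted part.
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([5, 2, 4, 1, 3]))  # → [1, 2, 3, 4, 5]
```

The swap is what makes the algorithm unstable: equal elements can pass each other when one of them is swapped out of the way.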
46.777778
172
0.800475
eng_Latn
0.999689
4b062b24555b095a13dc7181a4aa830c4a183c4c
11,476
md
Markdown
docs/database-engine/configure-windows/database-engine-instances-sql-server.md
drake1983/sql-docs.es-es
d924b200133b8c9d280fc10842a04cd7947a1516
[ "CC-BY-4.0", "MIT" ]
1
2020-04-25T17:50:01.000Z
2020-04-25T17:50:01.000Z
docs/database-engine/configure-windows/database-engine-instances-sql-server.md
drake1983/sql-docs.es-es
d924b200133b8c9d280fc10842a04cd7947a1516
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/database-engine/configure-windows/database-engine-instances-sql-server.md
drake1983/sql-docs.es-es
d924b200133b8c9d280fc10842a04cd7947a1516
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Instancias del motor de base de datos (SQL Server) | Microsoft Docs ms.custom: '' ms.date: 03/14/2017 ms.prod: sql ms.prod_service: high-availability ms.reviewer: '' ms.suite: sql ms.technology: configuration ms.tgt_pltfrm: '' ms.topic: conceptual ms.assetid: af9ae643-9866-4014-b36f-11ab556a773e caps.latest.revision: 15 author: MikeRayMSFT ms.author: mikeray manager: craigg ms.openlocfilehash: 34db3b2fe4d33fd7680fe22ecca31f5885742916 ms.sourcegitcommit: 1740f3090b168c0e809611a7aa6fd514075616bf ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 05/03/2018 --- # <a name="database-engine-instances-sql-server"></a>Instancias del motor de base de datos (SQL Server) [!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)] Una instancia de [!INCLUDE[ssDE](../../includes/ssde-md.md)] es una copia del ejecutable de **sqlservr.exe** que se ejecuta como un servicio de sistema operativo. Cada instancia administra varias bases de datos del sistema y una o varias bases de datos de usuario. Cada equipo puede ejecutar varias instancias de [!INCLUDE[ssDE](../../includes/ssde-md.md)]. Las aplicaciones se conectan a la instancia para realizar el trabajo en una base de datos administrada por la instancia. ## <a name="instances"></a>Instancias Una instancia de [!INCLUDE[ssDE](../../includes/ssde-md.md)] funciona como un servicio que controla todas las solicitudes de aplicación para trabajar con datos de cualquiera de las bases de datos administradas por dicha instancia. Es el destino de las solicitudes de conexión (inicios de sesión) de aplicaciones. La conexión se ejecuta en una conexión de red si la aplicación y la instancia están en equipos independientes. Si la aplicación y la instancia están en el mismo equipo, la conexión de SQL Server se puede ejecutar como una conexión de red o una conexión en memoria. 
Cuando una conexión se ha completado, una aplicación envía instrucciones [!INCLUDE[tsql](../../includes/tsql-md.md)] a través de la conexión hasta la instancia. La instancia resuelve las instrucciones de [!INCLUDE[tsql](../../includes/tsql-md.md)] en operaciones con los datos y objetos de las bases de datos y, si se han concedido los permisos necesarios a las credenciales de inicio de sesión, realiza el trabajo. Los datos recuperados se devuelven a la aplicación, junto con cualesquiera mensajes como errores. Puede ejecutar múltiples instancias de [!INCLUDE[ssDE](../../includes/ssde-md.md)] en un equipo. Una instancia puede ser la instancia predeterminada. La instancia predeterminada no tiene nombre. Si una solicitud de conexión especifica solo el nombre del equipo, se establece la conexión a la instancia predeterminada. Una instancia con nombre es una instancia en la que se especifica un nombre de instancia al instalar la instancia. Una solicitud de conexión debe especificar el nombre del equipo y el nombre de instancia para conectar a la instancia. No hay ningún requisito para instalar una instancia predeterminada; todas las instancias que se ejecutan en un equipo pueden ser instancias con nombre. ## <a name="related-tasks"></a>Related Tasks |Descripción de la tarea|Tema| |----------------------|-----------| |Describe cómo configurar las propiedades de una instancia. Configure los valores predeterminados como ubicaciones de archivos y formatos de fecha, o cómo la instancia usa los recursos del sistema operativo, como la memoria o los subprocesos.|[Configurar instancias del motor de base de datos &#40;SQL Server&#41;](../../database-engine/configure-windows/configure-database-engine-instances-sql-server.md)| |Describe cómo administrar intercalación de una instancia de [!INCLUDE[ssDE](../../includes/ssde-md.md)]. 
Las intercalaciones definen los patrones de bits que se usan para representar caracteres, y los comportamientos asociados como ordenación y operaciones de comparación de casos o de distinción de acentos.|[Compatibilidad con la intercalación y Unicode](../../relational-databases/collations/collation-and-unicode-support.md)| |Describe cómo configurar las definiciones de servidores vinculados, que permiten que las instrucciones de [!INCLUDE[tsql](../../includes/tsql-md.md)] se ejecutan en una instancia para trabajar con los datos almacenados en orígenes de datos OLE DB independientes.|[Servidores vinculados &#40;motor de base de datos&#41;](../../relational-databases/linked-servers/linked-servers-database-engine.md)| |Describe cómo crear un desencadenador de inició de sesión, que especifica acciones que deben llevarse a cabo una vez que se haya validado un inició de sesión, pero antes de empiece a trabajar con los recursos de la instancia. Los desencadenadores de inició de sesión admiten acciones como la actividad de conexión del registro, o limitar los inicios de sesión basadas en la lógica y en la autenticación de credenciales realizada por Windows y SQL Server.|[Desencadenadores logon](../../relational-databases/triggers/logon-triggers.md)| |Describe cómo administrar el servicio asociado a una instancia de [!INCLUDE[ssDE](../../includes/ssde-md.md)]. 
Esto incluye acciones como iniciar y detener el servicio o configurar opciones de inicio de acciones como iniciar y detener el servicio u opciones de configuración de opciones de inicio.|[Administrar el servicio del motor de base de datos](../../database-engine/configure-windows/manage-the-database-engine-services.md)| |Describe la forma de realizar tareas de configuración de red del servidor tales como habilitar protocolos, modificar el puerto o la canalización usados por el protocolo, configurar el cifrado, configurar el servicio SQL Server Browser, exponer u ocultar el motor de base de datos de SQL Server en la red y registrar el nombre de la entidad de seguridad del servidor.|[Configuración de red del servidor](../../database-engine/configure-windows/server-network-configuration.md)| |Describe cómo realizar tareas de configuración de red de cliente tales como configurar protocolos de cliente y crear o eliminar un alias de servidor.|[Configuración de red de cliente](../../database-engine/configure-windows/client-network-configuration.md)| |Describe los editores de [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)] que se pueden usar para diseñar, depurar y ejecutar scripts como scripts de [!INCLUDE[tsql](../../includes/tsql-md.md)] . También describe cómo codificar scripts de Windows PowerShell para trabajar con componentes de SQL Server.|[Scripting del motor de base de datos](../../relational-databases/scripting/database-engine-scripting.md)| |Describe cómo usar planes de mantenimiento para especificar un flujo de trabajo de tareas habituales de administración de una instancia. 
Los flujos de trabajo incluyen tareas como hacer copias de seguridad de bases de datos y actualizar las estadísticas para mejorar el rendimiento.|[Planes de mantenimiento](../../relational-databases/maintenance-plans/maintenance-plans.md)| |Describe cómo usar el regulador de recursos para administrar el consumo de recursos y las cargas de trabajo especificando los límites de CPU y de memoria que las solicitudes de aplicación pueden usar.|[Regulador de recursos](../../relational-databases/resource-governor/resource-governor.md)| |Describe cómo las aplicaciones de base de datos pueden usar el correo electrónico de base de datos para enviar mensajes de correo electrónico desde [!INCLUDE[ssDE](../../includes/ssde-md.md)].|[Correo electrónico de base de datos](../../relational-databases/database-mail/database-mail.md)| |Describe cómo el uso de eventos extendidos para capturar datos de rendimiento se puede usar para compilar líneas base de rendimiento o para diagnosticar problemas de rendimiento. Los eventos extendidos son un sistema ligero con un alto nivel de escalabilidad para recopilar datos de rendimiento.|[Eventos extendidos](../../relational-databases/extended-events/extended-events.md)| |Describe cómo usar el seguimiento de SQL para compilar un sistema personalizado de captura y registro de eventos en [!INCLUDE[ssDE](../../includes/ssde-md.md)].|[Seguimiento de SQL](../../relational-databases/sql-trace/sql-trace.md)| |Describe cómo usar el generador de perfiles de [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] para capturar el seguimiento de solicitudes de aplicación que entran en una instancia de [!INCLUDE[ssDE](../../includes/ssde-md.md)]. 
Los seguimientos se pueden reproducir posteriormente para actividades como pruebas de rendimiento o diagnóstico de problemas.|[SQL Server Profiler](../../tools/sql-server-profiler/sql-server-profiler.md)| |Describe la captura de datos modificados (CDC) y las características de seguimiento de cambios y cómo se usan estas características para realizar el seguimiento de cambios en los datos de una base de datos.|[Seguimiento de cambios de datos &#40;SQL Server&#41;](../../relational-databases/track-changes/track-data-changes-sql-server.md)| |Describe cómo usar el visor del archivo de registro para buscar y ver los errores y mensajes de [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] en diversos registros; por ejemplo el historial de trabajos de [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] , los registros de SQL Server y los registros de eventos de Windows.|[Visor de archivos de registro](../../relational-databases/logs/log-file-viewer.md)| |Describe cómo usar el Asistente para la optimización de [!INCLUDE[ssDE](../../includes/ssde-md.md)] para analizar las bases de datos y hacer recomendaciones para tratar problemas potenciales de rendimiento.|[Asistente para la optimización de motor de base de datos](../../relational-databases/performance/database-engine-tuning-advisor.md)| |Describe cómo los administradores de base de datos de producción pueden establecer una conexión de diagnóstico a instancias cuando las conexiones estándar no se están aceptando.|[Conexión de diagnóstico para administradores de bases de datos](../../database-engine/configure-windows/diagnostic-connection-for-database-administrators.md)| |Describe cómo usar la característica desusada de servidores remotos para habilitar el acceso desde una instancia de [!INCLUDE[ssDE](../../includes/ssde-md.md)] a otra. 
El mecanismo preferido para esta funcionalidad es un servidor vinculado.|[Servidores remotos](../../database-engine/configure-windows/remote-servers.md)| |Describe las capacidades de Service Broker para las aplicaciones de mensajería y de puesta en cola, y proporciona punteros a la documentación de Service Broker.|[Service Broker](../../database-engine/configure-windows/sql-server-service-broker.md)| |Describe cómo se puede utilizar la extensión del grupo de búferes para proporcionar una integración sin problemas del almacenamiento de acceso aleatorio no volátil (unidades de estado sólido) con el grupo de búferes del motor de base de datos para mejorar significativamente el rendimiento de E/S.|[Archivo de la extensión del grupo de búferes](../../database-engine/configure-windows/buffer-pool-extension.md)| ## <a name="see-also"></a>Ver también [sqlservr (aplicación)](../../tools/sqlservr-application.md) [Características de la base de datos](../../relational-databases/database-features.md) [Características entre instancias del motor de base de datos](../../relational-databases/database-engine-cross-instance-features.md)
179.3125
1,094
0.785552
spa_Latn
0.982419
4b064d34f1b2383c605a7f9af9981f0a1e3d00a5
5,579
md
Markdown
tasks/bomberman-infrastructure/README.md
grgrey/atom
40f9459a10a992a34d091ec0887cb9177c4b8399
[ "MIT" ]
376
2016-10-03T00:54:28.000Z
2022-03-28T03:46:00.000Z
tasks/bomberman-infrastructure/README.md
grgrey/atom
40f9459a10a992a34d091ec0887cb9177c4b8399
[ "MIT" ]
568
2016-10-14T16:52:00.000Z
2021-09-19T03:40:04.000Z
tasks/bomberman-infrastructure/README.md
grgrey/atom
40f9459a10a992a34d091ec0887cb9177c4b8399
[ "MIT" ]
1,179
2016-10-03T10:48:33.000Z
2022-03-27T09:11:14.000Z
# Bomberman infrastructure

We are going to start our game development. Now it is time to create a big part of our infrastructure, including the matchmaker and the landing page logic. The **game server** will be implemented later.

![](top_view.png)

The API that our user sees: `join(name: String)`

The user opens the game web page (localhost:8080) and sees the landing page (index page) with a single [Play] button and a text form to enter a nickname.

Under the hood (**implement this service**):

1. Matchmaker service.

Matchmaker should handle the `play` button request from the user and provide a valid game session id to the user.

The user waits until Matchmaker responds with the `game id`.

Specification

```
Protocol: HTTP
Path: matchmaker/join
Method: POST
Host: {IP}:8080 (IP = localhost for local server tests)
Headers: Content-Type: application/x-www-form-urlencoded
Body: name={}
Response: Code: 200
          Content-Type: text/plain
          Body: game id
```

1.1) Matchmaker creates new games when necessary. Matchmaker provides the same gameId to N client connections (players). The matchmaking algorithm was described in lectures.

1.2) Matchmaker saves the info about the game to the database: gameId and all players involved (on game creation).

Bonus:

- Monitoring - how many players are in the queue and other interesting data
- Matchmaking based on leaderboard
- Start a match with fewer players when the wait becomes too long

2. Game service (for now it should be a stub/mock only)

API:

```
gameId create(playerCount: int): long
connect(name: String, gameId: long)
start(gameId: long)
```

Under the hood:

At some point Matchmaker asks the Game service to `create` a new game for `playerCount` users. After that Matchmaker will provide this `gameId` to clients, and clients will `connect` to the exact game using `gameId` and `name`.

At some point Matchmaker starts the game with `gameId`. In general this should happen when the number of connected players equals the number of players that should play in one game.
Specification ``` Protocol: HTTP Path: game/create Method: POST Host: {IP}:8090 (IP = localhost for local server tests) Headers: Content-Type: application/x-www-form-urlencoded Body: playerCount={} Response: Code: 200 Content-Type: text/plain Body: game id ``` ``` Protocol: HTTP Path: game/start Method: POST Host: {IP}:8090 (IP = localhost for local server tests) Headers: Content-Type: application/x-www-form-urlencoded Body: gameId={} Response: Code: 200 Content-Type: text/plain Body: game id ``` ``` Protocol: WS Path: game/connect?gameId={}&name={} Host: {IP}:8090 (IP = localhost for local server tests) Result: WS connection established ``` Bonus: - Monitoring - how many games were played - Leaderboard - Player statistics ## Tech Stack: ### Spring & Spring-boot **[Spring]( @TODO )** - the most popular Java framework; web-mvc is a part of it. **[Spring-boot]( @TODO)** - framework for fast configuration and deployment of Java Spring applications. **[Spring mvc]** - the Spring implementation of the model-view-controller architectural pattern. **Spring MVC implements the thread-per-request model** [[link]](http://stackoverflow.com/questions/15217524/what-is-the-difference-between-thread-per-connection-vs-thread-per-request) Each request will be processed in a separate thread. ![](thread_per_request.jpg) ## Deadline and HowTo? - `git checkout -b matchmaker` and work in this branch - create a `game` directory in the root of the repository and write code in this folder (travis-ci will build this directory) - Deadline: **09.04** lecture. - This is a team task (2-3 persons, **single submissions are not allowed**) - Procedure: 1. show us a PR with a green build 1. show us a demo of your service 1. show us your tests 1. get ready for tricky questions 1. Java knowledge will be checked - Base is **16 points** for this task - The more features you develop, the better rank you will get. Try to make the best service you can. 
- If you are copying code from anywhere, make sure you can explain what's happening. ## What we will check 1. Correct implementation 1. Green CI build (checkstyle) 1. Test quality (both unit and integration with SpringBootTest) 1. Code coverage > 50% 1. Logging 1. Service deployment in docker (without using IDE) <!--- title Game infrastructure participant Alice participant Bob Alice->Matchmaker: join(name=Alice) note right of Alice: POST matchmaker/join note right of Matchmaker: Matchmaker doesn't have vacant games note right of Matchmaker: Matchmaker has to ask for a new one Matchmaker->GameService: create(playerCount=2) note right of Matchmaker: POST create note right of GameService: GameService creates new game GameService->Matchmaker: gameId: 42 Matchmaker-> Alice: gameId: 42 Alice-> GameService: connect(gameId=42, name=Alice) note right of Alice: Alice is connected to GameService via websocket note right of Matchmaker: now game 42 has 1 out of 2 players Bob->Matchmaker: join(name=Bob) note right of Matchmaker: Matchmaker has a vacant place in game 42 Matchmaker->Bob: gameId=42 Bob->GameService: connect(gameId=42, name=Bob) note right of Bob: Bob is connected to GameService via websocket note right of Matchmaker: now game 42 has 2 out of 2 players note right of Matchmaker: time to ask GameService to start game 42 Matchmaker->GameService: start(gameId=42) -->
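The grouping logic from step 1.1 — handing the same `gameId` to N joining players — can be sketched in plain Java. The class and method names here are illustrative, not part of the required API, and a real Matchmaker would call the Game service's `create`/`start` endpoints instead of minting ids locally:

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Minimal sketch of the matchmaking idea: players wait in a queue, and once
// playerCount players have joined they all receive the same gameId.
class Matchmaker {
    private final int playerCount;                        // players per game
    private long nextGameId = 0;                          // stands in for GameService.create(...)
    private final Queue<String> waiting = new ArrayDeque<>();
    private final Map<String, Long> assigned = new HashMap<>();

    Matchmaker(int playerCount) {
        this.playerCount = playerCount;
    }

    // Handles one "join" request. Returns the gameId once a full group
    // exists, or null while the player is still waiting in the queue
    // (a real client would keep the HTTP request open until then).
    synchronized Long join(String name) {
        waiting.add(name);
        if (waiting.size() >= playerCount) {
            long gameId = nextGameId++;                   // GameService.create(playerCount)
            while (!waiting.isEmpty()) {
                assigned.put(waiting.poll(), gameId);
            }
            // here the real service would call GameService.start(gameId)
        }
        return assigned.get(name);
    }
}
```

This keeps the HTTP layer out of the picture on purpose: the point is only the invariant that every player in one batch ends up with the same id.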
31.88
181
0.725399
eng_Latn
0.985384
4b068835f7e322911a5eb32ed01d3f02e3627ba5
1,014
md
Markdown
README.md
sDextra/tetris
cd14525dd2459e350e6787c8488a83e776bb8f87
[ "MIT" ]
4
2018-12-13T08:47:12.000Z
2021-08-07T01:28:24.000Z
README.md
sDextra/tetris
cd14525dd2459e350e6787c8488a83e776bb8f87
[ "MIT" ]
null
null
null
README.md
sDextra/tetris
cd14525dd2459e350e6787c8488a83e776bb8f87
[ "MIT" ]
1
2021-04-29T23:17:08.000Z
2021-04-29T23:17:08.000Z
# Tetris (Ren'Py) ![screenshot](https://pp.userapi.com/c849416/v849416003/dc39a/luZjOyp1dgg.jpg) ## Features **Difficulty levels**: - Classic - field 10x20, low speed, no bonus; - New - field 12x25, normal speed, with bonus; - Hard - field 13x25, high speed, with bonus; - Impossible - field 15x26, high speed, no bonus, no I tetromino. ## Installation Add files to your Ren'Py project ## Init ``` row - field width column - field height speed - falling speed tops - number of top places level - level of difficulty mode - presence of a bonus impossible - without I tetromino for_level - the number of lines for level-up ``` ## Usage **Keys**: - Left Arrow - move tetromino to the left; - Right Arrow - move tetromino to the right; - Up Arrow - rotate tetromino; - Down Arrow - increasing the speed of falling (pressing again disables it); - Enter - instantly lower tetromino; - Space - remove current tetromino (if you have a bonus). ## License [MIT](https://github.com/sDextra/tetris/blob/master/LICENSE/).
27.405405
78
0.724852
eng_Latn
0.956731
4b06ae4ddf9c8b2a952796cc34feb93d9e45c301
1,697
md
Markdown
add/metadata/System.IO.Packaging/RightsManagementInformation.meta.md
v-maudel/docs-1
f849afb0bd9a505311e7aec32c544c3169edf1c5
[ "CC-BY-4.0", "MIT" ]
null
null
null
add/metadata/System.IO.Packaging/RightsManagementInformation.meta.md
v-maudel/docs-1
f849afb0bd9a505311e7aec32c544c3169edf1c5
[ "CC-BY-4.0", "MIT" ]
null
null
null
add/metadata/System.IO.Packaging/RightsManagementInformation.meta.md
v-maudel/docs-1
f849afb0bd9a505311e7aec32c544c3169edf1c5
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- uid: System.IO.Packaging.RightsManagementInformation ms.technology: - "dotnet-standard" author: "dotnet-bot" ms.author: "dotnetcontent" manager: "wpickett" --- --- uid: System.IO.Packaging.RightsManagementInformation.LoadUseLicense(System.Security.RightsManagement.ContentUser) ms.technology: - "dotnet-standard" author: "dotnet-bot" ms.author: "dotnetcontent" manager: "wpickett" --- --- uid: System.IO.Packaging.RightsManagementInformation.CryptoProvider ms.technology: - "dotnet-standard" author: "dotnet-bot" ms.author: "dotnetcontent" manager: "wpickett" --- --- uid: System.IO.Packaging.RightsManagementInformation.SaveUseLicense(System.Security.RightsManagement.ContentUser,System.Security.RightsManagement.UseLicense) ms.technology: - "dotnet-standard" author: "dotnet-bot" ms.author: "dotnetcontent" manager: "wpickett" --- --- uid: System.IO.Packaging.RightsManagementInformation.SavePublishLicense(System.Security.RightsManagement.PublishLicense) ms.technology: - "dotnet-standard" author: "dotnet-bot" ms.author: "dotnetcontent" manager: "wpickett" --- --- uid: System.IO.Packaging.RightsManagementInformation.DeleteUseLicense(System.Security.RightsManagement.ContentUser) ms.technology: - "dotnet-standard" author: "dotnet-bot" ms.author: "dotnetcontent" manager: "wpickett" --- --- uid: System.IO.Packaging.RightsManagementInformation.LoadPublishLicense ms.technology: - "dotnet-standard" author: "dotnet-bot" ms.author: "dotnetcontent" manager: "wpickett" --- --- uid: System.IO.Packaging.RightsManagementInformation.GetEmbeddedUseLicenses ms.technology: - "dotnet-standard" author: "dotnet-bot" ms.author: "dotnetcontent" manager: "wpickett" ---
23.569444
157
0.774897
yue_Hant
0.200557
4b06f19a488e3c9a68261e6760264be09bd7b96c
2,117
md
Markdown
docs/_iso/IChO.md
AMISO-MY/amiso-my.github.io
bf8c06c73af2227f225960420024d19d29b04e17
[ "MIT" ]
1
2021-09-27T15:20:38.000Z
2021-09-27T15:20:38.000Z
docs/_iso/IChO.md
AMISO-MY/amiso-my.github.io
bf8c06c73af2227f225960420024d19d29b04e17
[ "MIT" ]
2
2021-09-30T06:21:59.000Z
2022-02-02T09:13:46.000Z
docs/_iso/IChO.md
AMISO-MY/amiso-my.github.io
bf8c06c73af2227f225960420024d19d29b04e17
[ "MIT" ]
2
2021-09-25T08:49:17.000Z
2021-09-25T10:09:27.000Z
--- title: IChO - K3M permalink: /icho/ excerpt: Kuiz Kimia Kebangsaan header: teaser: assets/images/k3m-600x400.png gallery: - url: /assets/images/k3m-600x400.png image_path: /assets/images/k3m-600x400.png alt: K3M --- Selection for IChO is through Kuiz Kimia Kebangsaan. {% include gallery caption="Kuiz Kimia Kebangsaan" %} [Official K3M website](https://ikm.org.my/outreach-programs/kuiz-kimia-kebangsaaan-malaysia-k3m/) [Official IChO website](https://www.ichosc.org/) # Selection Process The K3M is a national-level test offered to most secondary schools. Every year, top scorers of the selection test will be invited to join the IChO training camp.<br><br>Starting from 2021, Universiti Malaya has also started the Go For Gold (GFG) training programme as one of the steps to achieving a Malaysian gold medal in the IChO. As of now, invitations to the programme are sent directly to school administrators and are not open to the public. Within the training camp, several tests will be held and the top 4 scorers will be chosen to represent Malaysia in the IChO. # Introduction The International Chemistry Olympiad (IChO) is an annual olympiad for high school students who excel in chemistry and is the third oldest ISO to date. Established in Czechoslovakia in 1968, up to 80 countries send teams of 4 members to compete in the olympiad, helping encourage cooperation between students from different countries at an international level.<br><br>Problems within IChO touch upon several subjects within chemistry, mainly divided into Physical Chemistry, Organic Chemistry and Inorganic Chemistry, and students' knowledge is tested at an extremely high level. # Format There are 2 rounds. * Theoretical Round * Experimental Round Do note that due to the recent pandemic, IChO 2020, 2021 and 2022 were organised without a laboratory exam. # Eligibility Contestants must be younger than 20 years old and must not be enrolled in any tertiary education institute on the day of the competition. 
If you are an IChO alumnus, you can help us complete this page by [registering](/alumni) first. Thank you.
49.232558
570
0.786963
eng_Latn
0.996839
4b07436ba14cad2c10fdafbc583cc58c47ac587c
128
md
Markdown
README.md
jgu17/gosmash
3cd1e0662205ba4388952f8385757314fc3d4087
[ "Apache-2.0" ]
null
null
null
README.md
jgu17/gosmash
3cd1e0662205ba4388952f8385757314fc3d4087
[ "Apache-2.0" ]
null
null
null
README.md
jgu17/gosmash
3cd1e0662205ba4388952f8385757314fc3d4087
[ "Apache-2.0" ]
null
null
null
# gosmash A Go client library for SMACH CLP. ## License This project is available under the [Apache 2.0](./LICENSE) license.
16
68
0.726563
eng_Latn
0.948223
4b0804e4e4ff708bc93639e1e07dc6ececb4c230
5,678
md
Markdown
README.md
aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project
07db36e4e7b70f71d227cbafc108a07854b38f91
[ "MIT" ]
1
2021-10-08T10:31:32.000Z
2021-10-08T10:31:32.000Z
README.md
aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project
07db36e4e7b70f71d227cbafc108a07854b38f91
[ "MIT" ]
null
null
null
README.md
aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project
07db36e4e7b70f71d227cbafc108a07854b38f91
[ "MIT" ]
null
null
null
# Java-Hibernate-Servlet-MySql-Stock-Management-Project ## Application Description In this project the goal is to create a warehouse platform which features abilities such as selling different kinds of products to customers, listing customers, products and orders, and dealing with cash actions (payment entries and payment outs) by using Java, Servlet, Hibernate, MySQL, JavaScript and HTML. Users must sign in to the application in order to use the website's facilities. ## Technologies | :arrow_right:| Technologies | | ------------- |:-------------:| | :arrow_right: |Java | | :arrow_right:| JSP&Servlet | | :arrow_right: |Hibernate | | :arrow_right: |Mysql | | :arrow_right:|BootStrap5 | | :arrow_right: |Ajax | | 🔐 Admin Account | 🗝️ Password | | ------------- |:-------------:| | ```alper@mail.com``` | 12345 | ## Application Images <p> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-2.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-2.jpg" width="200" style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-3.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-3.jpg" width="200" style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-4.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-4.jpg" width="200" 
style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-5.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-5.jpg" width="200" style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-6.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-6.jpg" width="200" style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-7.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-7.jpg" width="200" style="max-width:100%;"> </a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-8.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-8.jpg" width="200" style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-9.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-9.jpg" width="200" 
style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-10.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-10.jpg" width="200" style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-13.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-13.jpg" width="200" style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-11.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-11.jpg" width="200" style="max-width:100%;"></a> <a href="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-12.jpg" width="200" target="_blank"> <img src="https://github.com/aalperyilmaz/Java-Hibernate-Servlet-MySql-Stock-Management-Project/blob/main/g%C3%B6rseller/Alper-Y%C4%B1lmaz_Depo_Project-page-12.jpg" width="200" style="max-width:100%;"></a> </p>
86.030303
381
0.762945
yue_Hant
0.16668
4b08d7132f5cb427a2e0b7a41404f1e7eee75c7e
1,682
md
Markdown
_posts/2006/2006-08-27-east-coast-adventure-06-day-3.md
thingles/thingles.github.io
46b8d98959bcae7ddd6e89d13070fe3e7644d40b
[ "MIT" ]
2
2016-08-23T03:14:22.000Z
2017-05-04T03:48:16.000Z
_posts/2006/2006-08-27-east-coast-adventure-06-day-3.md
jthingelstad/thingles.github.io
46b8d98959bcae7ddd6e89d13070fe3e7644d40b
[ "MIT" ]
29
2017-05-09T01:10:31.000Z
2017-11-04T20:29:56.000Z
_posts/2006/2006-08-27-east-coast-adventure-06-day-3.md
jthingelstad/thingles.github.io
46b8d98959bcae7ddd6e89d13070fe3e7644d40b
[ "MIT" ]
null
null
null
--- title: East Coast Adventure '06 - Day 3 categories: - Travel --- On Friday we embarked on the **Thingelstad Family Great East Coast Adventure 2006**! What is this amazing event you ask? We are going to be away from home for 5 weeks. The first three will be in New Jersey while I work out of our office here, taking advantage of being in the same office as so many others. The last two weeks will be on vacation in North Carolina. Yeah! Days 1 and 2 of the adventure were travel days. We left Minneapolis on Friday morning around 9:30a and drove, and drove, and drove. We took breaks for Mazie to get out and run around. We camped in a Starbucks in Rockford, Illinois waiting for a monsoon-like rain to pass and providing an opportunity for Mazie to let out some energy, and me to get some back (triple espresso!!!). We also had a great Japanese dinner in Rockford. However, we didn't take into account that as soon as Mazie saw the food on the grill getting cooked that she would want to eat. Squawking was moderately high. We spent the night just outside of Toledo, OH in a splendid Holiday Inn Express. We arrived at our great little apartment here in Lawrenceville, NJ last night after another long day of driving through Ohio and Pennsylvania. At one point while driving through Ohio in the morning Tammy looked over at me and said the best line of the whole drive: **_I'm going to sleep. Try not to do anything stupid._** I appreciated her vote of confidence and continued to motor along while she and Mazie got some rest. We spent Sunday here in NJ getting the apartment set up and preparing for the week. Stay tuned for more updates from the Great Adventure.
80.095238
667
0.777646
eng_Latn
0.999862
4b0a18454a75beeb9878f4785215b9d4b4617937
55
md
Markdown
README.md
naotakeyoshida/Azure_AppService_demo
0d3ae88010668c85ee4094bfd6ca0ca14518b59e
[ "Apache-2.0" ]
null
null
null
README.md
naotakeyoshida/Azure_AppService_demo
0d3ae88010668c85ee4094bfd6ca0ca14518b59e
[ "Apache-2.0" ]
null
null
null
README.md
naotakeyoshida/Azure_AppService_demo
0d3ae88010668c85ee4094bfd6ca0ca14518b59e
[ "Apache-2.0" ]
null
null
null
# Azure_AppService_demo A demo of Azure App Service.
18.333333
30
0.818182
kor_Hang
0.272962
4b0a7c9f80aa935a90b74a08806c50e19a601364
143
md
Markdown
DATA/Warzone_2100/Warzone_2100_Project/links.md
anqude/GamesRevival
83395724d3a0ff8b8aed9a8088b4a5f64a354666
[ "MIT" ]
37
2019-01-16T17:06:25.000Z
2021-07-11T07:42:22.000Z
DATA/Warzone_2100/Warzone_2100_Project/links.md
anqude/GamesRevival
83395724d3a0ff8b8aed9a8088b4a5f64a354666
[ "MIT" ]
99
2019-01-16T18:55:34.000Z
2021-12-22T05:37:18.000Z
DATA/Warzone_2100/Warzone_2100_Project/links.md
anqude/GamesRevival
83395724d3a0ff8b8aed9a8088b4a5f64a354666
[ "MIT" ]
49
2019-01-15T13:51:32.000Z
2021-09-19T11:37:15.000Z
[High-quality video.](http://sourceforge.net/projects/warzone2100/files/warzone2100/Videos/high-quality-en/sequences.wz/download) (920 MB)
143
143
0.818182
kor_Hang
0.193232
4b0ac47e6ff139ecca0f71e4591b54df04db0a50
194
md
Markdown
Android_Development_with_Kotlin/19. Architecture/19.2 Overview of MVP architecture.md
ujjwal313/winter-of-contributing
fa260edc8d769688af34ff4ba36e43408b50a416
[ "MIT" ]
1,078
2021-09-05T09:44:33.000Z
2022-03-27T01:16:02.000Z
Android_Development_with_Kotlin/19. Architecture/19.2 Overview of MVP architecture.md
ujjwal313/winter-of-contributing
fa260edc8d769688af34ff4ba36e43408b50a416
[ "MIT" ]
6,845
2021-09-05T12:49:50.000Z
2022-03-12T16:41:13.000Z
Android_Development_with_Kotlin/19. Architecture/19.2 Overview of MVP architecture.md
ujjwal313/winter-of-contributing
fa260edc8d769688af34ff4ba36e43408b50a416
[ "MIT" ]
2,629
2021-09-03T04:53:16.000Z
2022-03-20T17:45:00.000Z
# Overview of MVP architecture in Android (Audio file) [MVP architecture in Android](https://drive.google.com/file/d/1yons4_a1zp-TUSe2AqQglbtjk-Ak5-XJ/view?usp=sharing) Author-Parth Sharma
21.555556
113
0.783505
kor_Hang
0.485449
4b0b4d52d208fdf4841585b402a07ff38e357bda
5,403
md
Markdown
sources/tech/20190322 Easy means easy to debug.md
QiaoN/TranslateProject
191253c815756f842a783dd6f24d4dc082c225eb
[ "Apache-2.0" ]
22
2019-04-03T06:30:29.000Z
2019-11-07T08:57:16.000Z
sources/tech/20190322 Easy means easy to debug.md
QiaoN/TranslateProject
191253c815756f842a783dd6f24d4dc082c225eb
[ "Apache-2.0" ]
1
2015-02-11T12:35:49.000Z
2015-02-11T12:35:49.000Z
sources/tech/20190322 Easy means easy to debug.md
QiaoN/TranslateProject
191253c815756f842a783dd6f24d4dc082c225eb
[ "Apache-2.0" ]
6
2016-09-22T02:30:11.000Z
2017-07-28T00:36:36.000Z
[#]: collector: (lujun9972) [#]: translator: ( ) [#]: reviewer: ( ) [#]: publisher: ( ) [#]: url: ( ) [#]: subject: (Easy means easy to debug) [#]: via: (https://arp242.net/weblog/easy.html) [#]: author: (Martin Tournoij https://arp242.net/) What does it mean for a framework, library, or tool to be “easy”? There are many possible definitions one could use, but my definition is usually that it's easy to debug. I often see people advertise a particular program, framework, library, file format, or something else as easy because "look with how little effort I can do task X, this is so easy!" That's great, but an incomplete picture. You only write software once, but will almost always go through several debugging cycles. By "debugging cycle" I don't mean "there is a bug in the code you need to fix", but rather "I need to look at this code to fix the bug". To debug code, you need to understand it, so "easy to debug" by extension means "easy to understand". Abstractions which make something easier to write often come at the cost of making things harder to understand. Sometimes this is a good trade-off, but often it's not. In general I will happily spend a little bit more effort writing something now if that makes things easier to understand and debug later on, as it's often a net time-saver. Simplicity isn't the only thing that makes programs easier to debug, but it is probably the most important. Good documentation helps too, but unfortunately good documentation is uncommon (note that quality is not measured by word count!) This is not exactly a novel insight; from the 1974 The Elements of Programming Style by Brian W. Kernighan and P. J. Plauger: > Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? A lot of stuff I see seems to be written "as clever as can be" and is consequently hard to debug. I'll list a few examples of this pattern below. 
It's not my intention to argue that any of these things are bad per se, I just want to highlight the trade-offs in "easy to use" vs. "easy to debug". * When I tried running [Let's Encrypt][1] a few years ago it required running a daemon as root(!) to automatically rewrite nginx files. I looked at the source a bit to understand how it worked and it was all pretty complex, so I was "let's not" and opted to just pay €10 to the CA mafia, as not much can go wrong with putting a file in /etc/nginx/, whereas a lot can go wrong with complex Python daemons running as root. (I don't know the current state/options for Let's Encrypt; at a quick glance there may be better/alternative ACME clients that suck less now.) * Some people claim that systemd is easier than SysV init.d scripts because it's easier to write systemd unit files than it is to write shell scripts. In particular, this is the argument Lennart Poettering used in his [systemd myths][2] post (point 5). I think this is completely missing the point. I agree with Poettering that shell scripts are hard – [I wrote an entire post about that][3] – but making the interface easier doesn't mean the entire system becomes easier. Look at [this issue][4] I encountered and [the fix][5] for it. Does that look easy to you? * Many JavaScript frameworks I've used can be hard to fully understand. Clever state keeping logic is great and all, until that state won't work as you expect, and then you'd better hope there's a Stack Overflow post or GitHub issue to help you out. 
* Docker is great, right up to the point you get: ``` ERROR: for elasticsearch Cannot start service elasticsearch: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:258: applying cgroup configuration for process caused \"failed to write 898 to cgroup.procs: write /sys/fs/cgroup/cpu,cpuacct/docker/b13312efc203e518e3864fc3f9d00b4561168ebd4d9aad590cc56da610b8dd0e/cgroup.procs: invalid argument\"" ``` or ``` ERROR: for elasticsearch Cannot start service elasticsearch: EOF ``` And … now what? * Many testing libraries can make things harder to debug. Ruby’s rspec is a good example where I’ve occasionally used the library wrong by accident and had to spend quite a long time figuring out what exactly went wrong (as the errors it gave me were very confusing!) I wrote a bit more about that in my [Testing isn’t everything][6] post. * ORM libraries can make database queries a lot easier, at the cost of making things a lot harder to understand once you want to solve a problem. -------------------------------------------------------------------------------- via: https://arp242.net/weblog/easy.html 作者:[Martin Tournoij][a] 选题:[lujun9972][b] 译者:[译者ID](https://github.com/译者ID) 校对:[校对者ID](https://github.com/校对者ID) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://arp242.net/ [b]: https://github.com/lujun9972 [1]: https://en.wikipedia.org/wiki/Let%27s_Encrypt [2]: http://0pointer.de/blog/projects/the-biggest-myths.html [3]: https://arp242.net/weblog/shell-scripting-trap.html [4]: https://unix.stackexchange.com/q/185495/33645 [5]: https://cgit.freedesktop.org/systemd/systemd/commit/?id=6e392c9c45643d106673c6643ac8bf4e65da13c1 [6]: /weblog/testing.html [7]: mailto:martin@arp242.net [8]: https://github.com/Carpetsmoker/arp242.net/issues/new
64.321429
422
0.747733
eng_Latn
0.996811
4b0cefe7e763df88e5e74dbe79934630a66cda73
747
md
Markdown
_instagram/1150129783575178545.md
lovestarraceclub/lovestar20
4185fd79ca25b98404ed118b39e70dd6e26f4e7d
[ "Apache-2.0" ]
null
null
null
_instagram/1150129783575178545.md
lovestarraceclub/lovestar20
4185fd79ca25b98404ed118b39e70dd6e26f4e7d
[ "Apache-2.0" ]
null
null
null
_instagram/1150129783575178545.md
lovestarraceclub/lovestar20
4185fd79ca25b98404ed118b39e70dd6e26f4e7d
[ "Apache-2.0" ]
null
null
null
--- caption: 'Driving snow, soul crushing wind &amp;amp; 10psi made for a short #raphafestive500 ride today. #fatbike #cycling #bicycle #festive500 #lovestarbicyclebags #bikepackingbags #bikepacking' date: '2015-12-28T18:09:36' fullsize_path: instagram\fullsize\1150129783575178545.jpg instagram_url: https://www.instagram.com/p/_2FPHLmG0x location: {} media_id: '1150129783575178545' media_url: https://scontent.cdninstagram.com/t51.2885-15/e35/12362302_224773467854346_293425752_n.jpg?ig_cache_key=MTE1MDEyOTc4MzU3NTE3ODU0NQ%3D%3D.2 owner: id: '661611562' owner_url: https://www.instagram.com/elliotlovestarbicycles username: elliotlovestarbicycles thumbnail_path: instagram\thumbnails\1150129783575178545.jpg utc_date: 1451326176 ---
41.5
149
0.815261
kor_Hang
0.13735
4b0d60ea54381d1029ea8a6d28ad69f2dced2638
4,294
md
Markdown
CONTRIBUTING.md
demberto/DyCall
b234e7ba535eae71234723bb3d645eb986f96a30
[ "MIT" ]
null
null
null
CONTRIBUTING.md
demberto/DyCall
b234e7ba535eae71234723bb3d645eb986f96a30
[ "MIT" ]
null
null
null
CONTRIBUTING.md
demberto/DyCall
b234e7ba535eae71234723bb3d645eb986f96a30
[ "MIT" ]
null
null
null
# Contributor's Guide To get started: 1. Clone the repo ```shell git clone https://github.com/demberto/DyCall ``` 2. Browse to the newly created folder ```shell cd DyCall ``` 3. Optionally set up a virtual environment: ```shell python -m venv . ``` 4. Install dependencies: ```shell python -m pip install -r requirements-dev.txt -c constraints.txt python -m pip install -r requirements.txt -c constraints.txt ``` 5. Install [tbump][tbump]. I don't include it in dependencies because it recommends using `pipx` for installation. You can choose the installation method you want. I used `pipx`. ## Coding Conventions I have tried to follow an OO approach to DyCall. This might be best explained by the answer to this question: [Best way to structure a tkinter application?][so-17470842] State variables are used wherever the content of a widget will be changed at runtime. This implies that instead of ```python entry = ttk.Entry() entry.configure(text="Entry") ``` I use this approach ```python entry_var = tk.StringVar() entry = ttk.Entry(textvariable=entry_var) entry_var.set("Entry") ``` This allows the top level window class `App` to create such state variables and pass them to the sub-frames via their constructors. This allows a sub-frame to change the contents of a widget inside another sub-frame without accessing the widget directly. Additionally, events are generated where the use of a control variable doesn't apply. There are certain edge cases where events can't be used either, due to Tkinter's implementation of the `Event` class not allowing arbitrary data to be retrieved. Apart from that, I have used LF line endings everywhere and tried to enforce it everywhere I can - `.editorconfig` and `.gitattributes` ## Adding translations DyCall uses `ttkbootstrap`'s `MessageCatalog` class to handle localizations. Existing translations can be found in the `dycall/msgs` folder. They are named with respect to their [LCID][lcid] and need to be named that way. 
Tkinter (or Tk) doesn't need different files for different locales but I have chosen to follow conventions. You can add new strings to an existing translation or add a new one. To create a new translation, follow these steps: 1. Create a new translation file in [dycall/msgs][dycall-msgs] named according to the LCID with a `.msg` extension > e.g. `hi.msg`. 2. To add a new string in the file you just created, follow this format: `::msgcat::mcset <LCID> "<String in English>" "<Translated string>"` > e.g. > > ```tk > ::msgcat::mcset hi "Hindi" "हिंदी" > ``` > > _Don't miss the quotation marks_ 3. In [dycall/util.py][dycall-util-py], add a key-value pair to the `LCID2Lang` dictionary in the format of `<LCID>: <Language name in its native form>` > e.g. > > ```python > "hi": "हिंदी" > ``` The value will be used as an option under the **Options** ➔ **Language** submenu in the DyCall interface. The process for updating existing translations is pretty much the same. Just begin from step 2 directly. > **If the translations don't work** > > You need to ensure `MessageCatalog.translate` is getting called like this: > > ```python > from ttkbootstrap.localization import MessageCatalog as MC > > # Somewhere in the code > translated = MC.translate("Translate me!") > ``` ## Appearance I tried many themes and solutions for implementing dark/light themes. Almost all of these solutions work with images and are extremely laggy, so I chose TtkBootstrap. TtkBootstrap has tons of almost identical-looking themes. Of these, the `darkly` theme looks most native on Windows and blends perfectly with `tksheet`'s `dark blue` theme. I don't actually like any of the light mode themes TtkBootstrap has; all of them are too bright and look nothing close to native, but `yeti` still feels a bit better. I think 2 themes are enough; DyCall is a utility tool, after all. 
<!-- MARKDOWN LINKS --> [dycall-msgs]: https://github.com/demberto/DyCall/tree/master/dycall/msgs [dycall-util-py]: https://github.com/demberto/DyCall/blob/master/dycall/util.py [lcid]: https://www.tcl.tk/man/tcl8.7/TclCmd/msgcat.html#M23 [so-17470842]: https://stackoverflow.com/a/17470842 [tbump]: https://github.com/dmerejkowsky/tbump
30.892086
88
0.731719
eng_Latn
0.989396
4b115a920b26235b1fdbf15a2521e366e170fe0c
10,661
md
Markdown
biztalk/core/transactional-adapter-biztalk-server-sample.md
changeworld/biztalk-docs.zh-CN
0ee8ca09b377aa26a13e0f200c75fca467cd519c
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/core/transactional-adapter-biztalk-server-sample.md
changeworld/biztalk-docs.zh-CN
0ee8ca09b377aa26a13e0f200c75fca467cd519c
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/core/transactional-adapter-biztalk-server-sample.md
changeworld/biztalk-docs.zh-CN
0ee8ca09b377aa26a13e0f200c75fca467cd519c
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 事务性适配器 (BizTalk Server 示例) |Microsoft Docs ms.custom: '' ms.date: 06/08/2017 ms.prod: biztalk-server ms.reviewer: '' ms.suite: '' ms.tgt_pltfrm: '' ms.topic: article ms.assetid: 31a13377-cc89-4763-ad1b-508a16fc9708 caps.latest.revision: 36 author: MandiOhlinger ms.author: mandia manager: anneta ms.openlocfilehash: 014b541517fb6054525081b852cc21f388742ce2 ms.sourcegitcommit: 266308ec5c6a9d8d80ff298ee6051b4843c5d626 ms.translationtype: MT ms.contentlocale: zh-CN ms.lasthandoff: 06/27/2018 ms.locfileid: "36975478" --- # <a name="transactional-adapter-biztalk-server-sample"></a>事务性适配器 (BizTalk Server 示例) 事务性适配器示例演示如何在处理 [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] 消息期间,根据数据库创建并使用显式 Microsoft 分布式事务处理协调器 (MSDTC) 事务。 ## <a name="what-this-sample-does"></a>本示例的用途 此示例包含一个接收适配器,该适配器以用户指定的间隔运行 SQL 语句,使用 MSDTC 事务从 SQL Server 数据库中获取数据。 然后,以同一事务上下文中的消息的形式将数据提交给 [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] MessageBox 数据库。 相应的发送适配器使用来自事务上下文中 BizTalk 消息的输入运行用户指定的 SQL 存储过程。 它使用来自该消息中的特定数据,找到并删除同一事务中 MessageBox 数据库中对应的消息。 ## <a name="how-this-sample-is-designed-and-why"></a>此示例设计方式和原因 此示例在其解决方案中有两个项目。 第一个是在运行前使用的管理项目 (Admin),以允许用户配置使用此适配器的接收位置和发送端口。 第二个是在发送和接收适配器正在执行时运行的运行时项目 (Runtime)。 ## <a name="where-to-find-this-sample"></a>本示例所在的位置 本示例位于以下 SDK 位置中: \<*示例路径*\>\Samples\AdaptersDevelopment\TransactionalAdapter。 管理配置项目位于 \Admin 文件夹中,而运行时项目位于 \Runtime 文件夹中。 下表显示了本示例中的文件及其用途说明: |Admin 项目文件名|Admin 项目文件说明| |----------------------------|------------------------------------| |TransactionalAdmin.csproj|用于进行运行时预配置的适配器管理项目文件| |TransactionalReceiveHandler.xsd|接收处理程序属性的 XSD| |TransactionalReceiveLocation.xsd|接收位置属性的 XSD| |TransactionalTransmitLocation.xsd|传输位置属性的 XSD| |TransactionalTransmitHandler.xsd|传输处理程序属性的 XSD| |TransactionalAdapterManagement.cs|适配器配置管理。 包含 GetConfigSchema,BizTalk 适配器框架调用 GetConfigSchema 以返回它所支持的每种(四种)可能配置类型的 XSD 配置架构。| |Runtime 项目文件名|Runtime 项目文件说明| 
|------------------------------|--------------------------------------| |Transactional.csproj|适配器运行时项目文件| |TransactionalAsyncBatch.cs|适配器发送部分的异步实现| |TransactionalDeleteBatch.cs|删除一批消息和投票,以提交或中止事务| |TransactionalProperties.cs|提取和设置配置属性| |TransactionalReceiver.cs|创建和管理接收终结点| |TransactionalReceiverEndpoint.cs|每个接收位置的实际监听或轮询| |TransactionalTransmitter.cs|从消息引擎接受要传输的一批消息| ## <a name="how-to-use-this-sample"></a>如何使用本示例 本示例可以作为你使用显式事务创建自定义发送和接收适配器的框架。 ## <a name="building-and-initializing-this-sample"></a>生成并初始化本示例 > [!IMPORTANT] > 如果是在 64 位计算机上安装 BizTalk 或安装位置已修改,则需要相应修改 OutboundAssemblyPath、InboundAssemblyPath、AdapterMgmtAssemblyPath。 #### <a name="create-a-strong-name-key-for-the-transactional-adapter-sample"></a>创建事务性适配器示例的强名称密钥 1. 启动**Visual Studio 命令提示符**。 2. 在命令提示符下,键入以下命令,然后按 Enter: ``` cd \Program Files\Microsoft BizTalk Server <version>\SDK\Samples\AdaptersDevelopment\TransactionalAdapter\Runtime ``` 3. 在命令提示符下,键入以下命令,然后按 Enter: ``` sn –k TransactionalAdapter.snk ``` 4. 在命令提示符处,键入**退出**,然后按 enter 以关闭命令提示符窗口。 #### <a name="build-the-transactional-adapter-solution"></a>生成事务性适配器解决方案 1. 单击**启动**,依次指向**所有程序**,指向**附件**,然后单击**Windows 资源管理器**。 2. 浏览到[!INCLUDE[btsBiztalkServerPath](../includes/btsbiztalkserverpath-md.md)]SDK\Samples\AdaptersDevelopment\TransactionalAdapter,然后双击**TransactionalAdapter.sln**以打开此解决方案中的[!INCLUDE[btsVStudioNoVersion](../includes/btsvstudionoversion-md.md)]。 3. 若要生成的两个事务性适配器项目 (Admin 和 Runtime) 在解决方案资源管理器中,右键单击**解决方案 TransactionalAdapter**,然后单击**重新生成**。 ## <a name="running-this-sample"></a>运行本示例 #### <a name="register-the-transactional-adapter"></a>注册事务性适配器 1. 在 Windows 资源管理器中,导航至 [!INCLUDE[btsBiztalkServerPath](../includes/btsbiztalkserverpath-md.md)]SDK\Samples\AdaptersDevelopment\TransactionalAdapter\Admin。 2. 
若要将事务性适配器数据添加到注册表中,双击**TransactionalAdmin.reg**。 > [!NOTE] > **TransactionalAdmin.reg**包括硬编码路径为 C:\Program Files\Microsoft BizTalk Server\\。 如果你的 [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] 的安装位置不是默认位置或者是从以前版本升级到 [!INCLUDE[btsBizTalkServerNoVersion](../includes/btsbiztalkservernoversion-md.md)] 安装,则必须使用相应的路径修改 TransactionalAdmin.reg 文件。 更新与揑“InboundAssemblyPath”、揙“OutboundAssemblyPath”和揂“AdapterMgmtAssemblyPath”值相关联的路径,以指向指定文件的正确位置。 > > [!IMPORTANT] > 如果在 64 位计算机上安装 BizTalk,将 HKEY_CLASSES_ROOT\CLSID\ 注册表项的所有实例都更改为 hkey_classes_root\wow6432node\clsid \ 中**TransactionalAdmin.reg**注册表文件。 3. 在中**注册表编辑器**对话框中,单击**是**以将示例适配器添加到注册表,然后单击**确定**。 4. 若要关闭 Windows 资源管理器中,单击**文件**,然后单击**关闭**。 #### <a name="add-the-transactional-adapter-to-biztalk-server"></a>将事务性适配器添加到 BizTalk Server 1. 单击**启动**菜单中,选择**所有程序**,选择[!INCLUDE[btsBizTalkServerStartMenuItemui](../includes/btsbiztalkserverstartmenuitemui-md.md)],然后选择**BizTalk Server 管理**。 2. 在中[!INCLUDE[btsBizTalkServerAdminConsoleui](../includes/btsbiztalkserveradminconsoleui-md.md)],展开**BizTalk Server 管理**树中,展开**BizTalk 组**树、,然后展开**平台设置**树。 3. 右键单击**适配器**,单击**新建**,然后单击**适配器**。 4. 在中**适配器属性**对话框框中,执行以下操作。 | 使用此选项 | 执行的操作 | |-------------|-----------------------------------------------------------------------------------------------------------------------------------| | “属性” | 类型**TransactionalAdapter**。 | | 适配器 | 选择**Txn**从下拉列表。 运行此项将出现**TransactionalAdmin.reg**以前文件。 | | Description | 类型**示例事务性适配器**。 | 5. 单击“确定” **。** 现在该适配器显示在 BizTalk 管理控制台右侧窗口中的适配器列表中。 #### <a name="create-a-receive-port-and-location-that-uses-the-adapter"></a>创建使用该适配器的接收端口和位置 1. 展开**BizTalk 组 [服务器名称]** 中的节点[!INCLUDE[btsBizTalkServerAdminConsoleui](../includes/btsbiztalkserveradminconsoleui-md.md)],展开**应用程序**节点,展开**BizTalk Application 1**节点。 2. 右键单击**接收端口**,然后单击**新建**,选择**单向接收端口。** 3. 有关**名称**,输入**TxnReceivePort1**,然后单击**确定**。 4. 右键单击**接收位置**节点中,单击**新建**,然后选择**单向接收位置**。 5. 在中**选择接收端口**对话框中,选择**TxnReceivePort1**,然后单击**确定**。 6. 
在中**接收位置属性**对话框中的**常规**选项卡上,输入**TxnReceiveLocation1**有关**名称**。 请确保**接收端口**标签显示**TxnReceivePort1**。 7. 在中**类型**下拉列表框中**传输**框架中,选择**TransactionalAdapter。** 8. 在中**接收**<strong>管道</strong>框中,确保**PassThruReceive**处于选中状态。 将其他属性保留为默认设置。 9. 单击**配置**按钮旁边**类型**下拉框。 这会显示特定于此适配器的对话框。 指定以下内容,正如您看到合适,然后单击**确定**。 | “属性” | 设置 | |-----------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------| | 连接字符串 | 用于连接 Northwind 数据库并进行身份验证的 SQL 数据库连接字符串。 我们随后运行的 SQL 脚本将使用此数据库。 | | 命令文本 | 针对 Northwind 数据库执行的 SQL 语句,目的是获取要输入到 BizTalk 消息中的数据。 | | Cookie | 包括部分 URI,因此请输入唯一的值,如接收位置的名称,例如:TxnReceiveLocation1。 | | 轮询间隔单位 | 轮询数据的时间单位。 设置为秒。 | | 轮询间隔 | 数据的轮询的时间度量单位。 设置为 15 秒。 | 10. 单击**确定**关闭配置对话框中,然后**确定**以关闭**接收位置属性**对话框中,以返回到[!INCLUDE[btsBizTalkServerAdminConsoleui](../includes/btsbiztalkserveradminconsoleui-md.md)]。 #### <a name="create-a-send-port-and-send-handler-that-use-the-adapter"></a>创建使用此适配器的发送端口和发送处理程序 1. 与**BizTalk Application 1**节点处于展开状态中,右键单击**发送端口**,然后单击**新建**,然后选择**静态单向发送端口**. 2. 在中**名称**字段中,输入**TxnSendPort1**。 3. 在中**传输**帧,在**类型**下拉列表中,选择**TransactionalAdapter**`.` 4. 在中**发送管道**框中,确保**PassThruTransmit**处于选中状态。 5. 单击**配置**按钮旁边**传输**下拉列表。在出现的对话框中指定以下根据需要,然后单击**确定**。 |“属性”|设置| |--------------|-------------| |Cookie|包括部分 uri-例如输入如接收位置的名称唯一此处的值: **TxnSendPort1**。| |连接字符串|用于连接 Northwind 数据库并进行身份验证的 SQL 数据库连接字符串。 它很可能是用于配置的同一**TxnReceiveLocation1**接收位置。| |存储过程|若要轮询数据库-获取执行的存储的过程名称**sp_txnProc**。 消息发送到的 BizTalk 正文作为字符串参数调用存储过程@Data。 例如,用户将在这种情况下更高版本的存储的过程使用配置名称**sp_txnProc**。 该适配器在运行时将对数据库执行等效的这种调用。<br /><br /> exec sp_txnProc @Data ="BizTalk 消息的内容"| 6. 在左侧的导航窗格中,单击**筛选器**。 7. 在筛选器表达式编辑器中,输入以下表达式以便设置针对此发送端口的订阅,接收由 TxnReceivePort1 接收端口接收到的任何消息。 输入以下值:**BTS。ReceivePortName = = TxnReceivePort1** 1. `(property)` **BTS。ReceivePortName** 2. `(operator)` **==** 3. `(value)` **TxnReceivePort1** 8. 适配器属性的其余部分使用默认值,然后选择**确定**。 ## <a name="run-the-sample"></a>运行示例 1. 
单击**启动**,依次指向**所有程序**,指向**Microsoft SQL Server 2008 R2**,选择**SQL Server Management Studio**。 2. 在中**连接到服务器**对话框框中,请确保**服务器类型**设置为**数据库引擎**,并输入凭据进行身份验证到数据库服务器,然后选择**连接**。 3. 选择**新查询**工具栏按钮并粘贴到一个新的查询窗口,将测试表、 测试数据和测试以下存储过程到 Northwind 数据库。 选择**Execute**工具栏按钮。 ``` use [Northwind] GO if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[scratch]') and OBJECTPROPERTY(id, N'IsUserTable') = 1) drop table [dbo].[scratch] GO CREATE TABLE [dbo].[scratch] ( [id] [int] IDENTITY (1, 1) NOT NULL , [msg] [nvarchar] (4000) NOT NULL ) ON [PRIMARY] GO GRANT SELECT , UPDATE , INSERT ON [dbo].[scratch] TO [public] GO if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[sp_txnProc]') and OBJECTPROPERTY(id, N'IsProcedure') = 1) drop procedure [dbo].[sp_txnProc] GO CREATE PROCEDURE [dbo].[sp_txnProc] @Data nvarchar (4000) AS INSERT scratch ( msg ) values ( @Data ) GO GRANT EXECUTE ON [dbo].[sp_txnProc] TO [public] GO ``` 4. 在中[!INCLUDE[btsBizTalkServerAdminConsoleui](../includes/btsbiztalkserveradminconsoleui-md.md)],展开**发送端口**节点中,选择**TxnSendPort1**发送端口,然后选择**启动**。 5. 在中[!INCLUDE[btsBizTalkServerAdminConsoleui](../includes/btsbiztalkserveradminconsoleui-md.md)],展开**ReceiveLocations**节点中,选择**TxnRecieveLocation1**接收位置,然后选择**启用**。 6. 在启用该接收位置后,它将在指定的时间间隔自动轮询数据库以获得数据。 ## <a name="classes-or-methods-used-in-the-sample"></a>类或方法的示例中使用 * IBTTransmitterBatch 接口 (COM) * IBTTransportProxy 接口 (COM) 介绍了这些方法[!INCLUDE[ui-guidance-developers-reference](../includes/ui-guidance-developers-reference.md)]。 ## <a name="see-also"></a>请参阅 [适配器示例-开发](../core/adapter-samples-development.md) [注册适配器](../core/registering-an-adapter.md)
44.053719
422
0.634462
yue_Hant
0.768612
4b11651a62f5c26abfe65522307ca4d332d2db48
657
md
Markdown
_posts/2013-07-02-apophenia-release.md
geometer9/geometer9.github.io
7971463516b7f971c77a269b95a65340f7af8946
[ "CC-BY-3.0" ]
null
null
null
_posts/2013-07-02-apophenia-release.md
geometer9/geometer9.github.io
7971463516b7f971c77a269b95a65340f7af8946
[ "CC-BY-3.0" ]
null
null
null
_posts/2013-07-02-apophenia-release.md
geometer9/geometer9.github.io
7971463516b7f971c77a269b95a65340f7af8946
[ "CC-BY-3.0" ]
null
null
null
--- title: Apophenia released layout: post --- The new Misdreamt CD is now available! _Apophenia_ is based on manipulated sounds (static, voices, field recordings) combined with minimalist guitar and percussion. > "Any fact becomes important when it is connected to another." - Umberto Eco All sentient beings look for patterns in their environment; when the mind looks too deeply into phenomena that otherwise have no meaning, it can project voices onto static, a face onto the surface of Mars, or its innermost fears onto an otherwise random collection of ink blots. The line between meaning and nothingness is as fleeting as our own self-awareness.
46.928571
361
0.794521
eng_Latn
0.999757
4b1182f537bbf65d1f691805ad6dabc705df6ab5
1,201
md
Markdown
content/cheatsheets/knex.md
arthuranteater/my-blog
2c693638890d59c9c1810fd4f0c91d8a4748a82a
[ "MIT" ]
2
2020-11-24T01:38:31.000Z
2021-10-13T21:58:35.000Z
content/cheatsheets/knex.md
arthuranteater/my-blog
2c693638890d59c9c1810fd4f0c91d8a4748a82a
[ "MIT" ]
null
null
null
content/cheatsheets/knex.md
arthuranteater/my-blog
2c693638890d59c9c1810fd4f0c91d8a4748a82a
[ "MIT" ]
1
2019-03-23T21:46:15.000Z
2019-03-23T21:46:15.000Z
--- title: Knex --- Iterable objects and arrays are returned, but at some point how much you are grabbing will matter if you scale... so: all queries return an array of objects and can be sent as JSON using `res.json({ data })`. Select: Return all rows from table 'users'. ```javascript knex.select().from('users') ``` OR ```javascript knex('users') ``` Match: Return users that match an 'id'. ```javascript knex('users').where("id", id) ``` Contains: From table 'users' return all rows where column 'listItemsInString' contains the string 'foo'. ```javascript knex('users').where('listItemsInString', 'like', '%foo%') ``` Add: Create user. ```javascript knex('users').insert(user, ["Name", "Email", "Categories"]) ``` Delete: Delete user with matching id. ```javascript knex('users').where("id", id).del() ``` Update: Update user (takes two arguments: id, body). ```javascript knex('users').where("id", id).update(body) ``` Return: Return the result. ```javascript knex('users').where("id", id).update(body).returning('*') ``` If you can't find the Knex query within 2 minutes, my advice is to look up how to write the query in SQL, then Ctrl+F <a href="https://knexjs.org/" target="_blank">the Knex docs</a>.
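If the search term for the contains query above comes from user input, `%` and `_` in it are treated as wildcards by `LIKE`. A small helper along these lines (hypothetical, not part of Knex; it assumes the database's default backslash escape character, as in PostgreSQL and MySQL) builds a safe pattern:

```javascript
// Hypothetical helper: escape LIKE wildcards in a user-supplied term
// before building the '%term%' pattern used by the contains query.
// Assumes backslash is the LIKE escape character (the PG/MySQL default).
function containsPattern(term) {
  const escaped = term.replace(/[\\%_]/g, (ch) => "\\" + ch);
  return `%${escaped}%`;
}

// Usage (sketch):
// knex('users').where('listItemsInString', 'like', containsPattern(term))
```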
19.063492
182
0.681932
eng_Latn
0.850325
4b11a91854fe40dd901e6bbdaa3bd12a79c1ae37
5,491
md
Markdown
_posts/2008-06-09-linqtordf-v071-and-rdfmetal.md
aabs/aabs.github.io
e4e1ef6f81df74300279bbb6d5d7273df3628b5b
[ "MIT" ]
null
null
null
_posts/2008-06-09-linqtordf-v071-and-rdfmetal.md
aabs/aabs.github.io
e4e1ef6f81df74300279bbb6d5d7273df3628b5b
[ "MIT" ]
1
2020-07-19T00:42:56.000Z
2020-07-19T00:42:56.000Z
_posts/2008-06-09-linqtordf-v071-and-rdfmetal.md
aabs/aabs.github.io
e4e1ef6f81df74300279bbb6d5d7273df3628b5b
[ "MIT" ]
null
null
null
--- title: LinqToRdf v0.7.1 and RdfMetal date: 2008-06-09 22:20 author: aabs category: .NET, programming, science, Semantic Web, SemanticWeb slug: linqtordf-v071-and-rdfmetal status: published attachments: 2008/06/clip-image0015.png, 2008/06/clip-image0015-thumb.png ... I've just uploaded [version 0.7.1](http://linqtordf.googlecode.com/files/LinqToRdf-0.7.1.msi) of LinqToRdf. This bug fix release corrects an issue I introduced in version 0.7. The issue only seemed to affect some machines and stems from the use of the GAC by the WIX installer (to the best of my knowledge). I've abandoned GAC installation and gone back to the original approach. Early indications (Thanks, Hinnerk) indicate that the issue has been successfully resolved. Please let me know if you are still experiencing problems. Thanks to 13sides, Steve Dunlap, Hinnerk Bruegmann, Kevin Richards and [Paul Stovell](http://www.paulstovell.com/blog/) for bringing it to my attention and helping me to overcome the allure of the GAC. Kevin also reported that he's hoping to use LinqToRdf on a project involving the Biodiversity Information Standards ([TDWG](http://www.tdwg.org/)). It's always great to hear how people are using the framework. Please drop me a line to let me know how you are using LinqToRdf. Kevin might find feature [\#13](http://code.google.com/p/linqtordf/issues/detail?id=13&colspec=ID%20Type%20Summary%20Priority) useful. It will be called ***RdfMetal*** in honour of SqlMetal. It will automate the process of working with remotely managed ontologies. RdfMetal will completely lower any barriers to entry in semantic web development. You will (in principle) no longer need to know the formats, protocols and standards of the semantic web in order to consume data in it. 
[![clip\_image001\[5\]]({static}2008/06/clip-image0015-thumb.png){width="533" height="207"}]({static}2008/06/clip-image0015.png) Here's an example of the output it generated from DBpedia.org for the FOAF ontology: ./RdfMetal.exe -e:http://DBpedia.org/sparql -i -n http://xmlns.com/foaf/0.1/ -o foaf.cs Which produced the following source: namespace Some.Namespace { [assembly: Ontology( BaseUri = "http://xmlns.com/foaf/0.1/", Name = "MyOntology", Prefix = "MyOntology", UrlOfOntology = "http://xmlns.com/foaf/0.1/")] public partial class MyOntologyDataContext : RdfDataContext { public MyOntologyDataContext(TripleStore store) : base(store) { } public MyOntologyDataContext(string store) : base(new TripleStore(store)) { } public IQueryable<Person> Persons { get { return ForType<Person>(); } } public IQueryable<Document> Documents { get { return ForType<Document>(); } } // ... } [OwlResource(OntologyName="MyOntology", RelativeUriReference="Person")] public partial class Person { [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "surname")] public string surname {get;set;} [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "family_name")] public string family_name {get;set;} [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "geekcode")] public string geekcode {get;set;} [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "firstName")] public string firstName {get;set;} [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "plan")] public string plan {get;set;} [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "knows")] public Person knows {get;set;} [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "img")] public Image img {get;set;} [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "myersBriggs")] // ... 
} [OwlResource(OntologyName="MyOntology", RelativeUriReference="Document")] public partial class Document { [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "primaryTopic")] public LinqToRdf.OwlInstanceSupertype primaryTopic {get;set;} [OwlResource(OntologyName = "MyOntology", RelativeUriReference = "topic")] public LinqToRdf.OwlInstanceSupertype topic {get;set;} } // ... As you can see, it's still pretty rough, but it allows me to write queries like this: [TestMethod] public void TestGetPetesFromDbPedia() { var ctx = new MyOntologyDataContext("http://DBpedia.org/sparql"); var q = from p in ctx.Persons where p.firstName.StartsWith("Pete") select p; foreach (Person person in q) { Debug.WriteLine(person.firstName + " " + person.family_name); } } RdfMetal will be added to the v0.8 release of LinqToRdf in the not too distant future. If you have any feature requests, or want to help out, please reply to this or better still join the [LinqToRdf discussion group](http://groups.google.com/group/linqtordf-discuss) and post there.
46.142857
482
0.663814
eng_Latn
0.743425
4b11d23ba156de9d1c3020a5fec458a08958e85b
797
md
Markdown
README.md
mrbuzz/Sudoku-Solver
5ef9f3014346e5d39636eef0932116885fc019be
[ "MIT" ]
null
null
null
README.md
mrbuzz/Sudoku-Solver
5ef9f3014346e5d39636eef0932116885fc019be
[ "MIT" ]
null
null
null
README.md
mrbuzz/Sudoku-Solver
5ef9f3014346e5d39636eef0932116885fc019be
[ "MIT" ]
null
null
null
# Sudoku-Solver Ruby implementation of Peter Norvig's sudoku-solving algorithm. It seems to perform on par with the original Python implementation; on an Intel i7 2600K it solves the hard puzzle below in 1' 15". There is also a little test suite based on the one found in the original article. For a more detailed explanation, you should read [the original article](http://norvig.com/sudoku.html) # Code Sample ```ruby require_relative 'sudoku_solver' grid1 = '003020600900305001001806400008102900700000008006708200002609500800203009005010300' grid2 = '4.....8.5.3..........7......2.....6.....8.4......1.......6.3.7.5..2.....1.4......' hard1 = '.....6....59.....82....8....45........3........6..3.54...325..6..................' sudoku_solver = SudokuSolver.new sudoku_solver.solve(hard1) ```
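The grid strings above follow Norvig's 81-character, row-major convention: digits for givens, with `0` or `.` for blanks. A minimal validity check under that assumption (an illustration only, not part of `SudokuSolver`) could look like this:

```ruby
# Minimal sanity check for the 81-character grid format used above.
# Not part of SudokuSolver; just an illustration of the convention
# (digits 1-9 for givens, '0' or '.' for blanks).
def valid_grid?(grid)
  grid.length == 81 && grid.chars.all? { |c| c.match?(/[0-9.]/) }
end
```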
46.882353
367
0.662484
eng_Latn
0.54861
4b11efaa0f5a0769f734311a928e2abedfecad3e
2,715
md
Markdown
README.md
harps116/vue-github-activity-calendar
ae00459d03e1dfe20fac5d93f6f35c16e5b9294d
[ "MIT" ]
2
2019-04-07T11:29:24.000Z
2019-10-15T01:35:01.000Z
README.md
harps116/vue-github-activity-calendar
ae00459d03e1dfe20fac5d93f6f35c16e5b9294d
[ "MIT" ]
143
2021-02-18T16:36:25.000Z
2022-03-22T20:21:37.000Z
README.md
harps116/vue-github-activity-calendar
ae00459d03e1dfe20fac5d93f6f35c16e5b9294d
[ "MIT" ]
null
null
null
# vue-github > vue your github activity and calendar [![Build](https://img.shields.io/travis/harps116/vue-github.svg?style=flat)](https://img.shields.io/travis/harps116/vue-github.svg?style=flat) [![License](https://img.shields.io/npm/l/vue-github.svg?style=flat)](https://github.com/harps116/vue-github/blob/master/LICENSE.md) [![NPM](https://nodei.co/npm/vue-github.png)](https://nodei.co/npm/vue-github/) [Demo](https://harps116.github.io/vue-github/) ![](https://github.com/harps116/vue-github/raw/master/static/vue-github-screenshot.png) ## Installation #### NPM `npm i vue-github` #### Yarn `yarn add vue-github` ## Dependencies Link the octicons.css stylesheet in your HTML file to load the icons. ```html <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/octicons/3.5.0/octicons.min.css" /> ``` ## Usage Register the component globally in your main JavaScript file. ```javascript import Vue from "vue"; import VueGithub from "vue-github"; Vue.use(VueGithub); ``` Check out the `main.js` in the [demo repo](https://github.com/harps116/vue-github/blob/master/demo/src/main.js). Import the style into your main `vue` file (most likely `App.vue`) if you want the default CSS. 
```html <style> @import url("https://unpkg.com/vue-github@0.10.7/dist/vueGithub.css"); </style> ``` In your template you can now use html like this to render the activity feed: ```html <vue-github username="harps116" /> ``` Props: | name | type | default | description | | ------------ | ------- | ----------------------------------------------------------------------- | --------------------------------- | | username | String | required | Github username | | text | String | Summary of pull requests, issues opened, and commits made by {username} | Summary text | | showCalendar | Boolean | true | Whether to show the calendar | | showFeed | Boolean | true | Whether to show the activity feed | ## Issues File issues [here](https://github.com/harps116/vue-github/issues) ## License This project is licensed under MIT License - see the [LICENSE](./LICENSE.md) file for details ## Inspired by these great open source projects: [https://github.com/IonicaBizau/github-calendar](https://github.com/IonicaBizau/github-calendar) [https://github.com/lexmartinez/vue-github-activity](https://github.com/lexmartinez/vue-github-activity)
32.321429
142
0.592265
eng_Latn
0.402754
4b14a748e610d6d9d9e0f2aaf35840a41f0ea4d9
7,153
md
Markdown
docs/framework/wcf/samples/ajax-service-with-json-and-xml-sample.md
TomekLesniak/docs.pl-pl
3373130e51ecb862641a40c5c38ef91af847fe04
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/samples/ajax-service-with-json-and-xml-sample.md
TomekLesniak/docs.pl-pl
3373130e51ecb862641a40c5c38ef91af847fe04
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/samples/ajax-service-with-json-and-xml-sample.md
TomekLesniak/docs.pl-pl
3373130e51ecb862641a40c5c38ef91af847fe04
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Usługa AJAX z formatami JSON i XML — przykład ms.date: 03/30/2017 ms.assetid: 8ea5860d-0c42-4ae9-941a-e07efdd8e29c ms.openlocfilehash: 8f70b6aa2e61d01a075a6edb3fe490ef593e73b0 ms.sourcegitcommit: cdb295dd1db589ce5169ac9ff096f01fd0c2da9d ms.translationtype: MT ms.contentlocale: pl-PL ms.lasthandoff: 06/09/2020 ms.locfileid: "84575956" --- # <a name="ajax-service-with-json-and-xml-sample"></a>Usługa AJAX z formatami JSON i XML — przykład Ten przykład pokazuje, jak używać Windows Communication Foundation (WCF) do tworzenia asynchronicznej usługi JavaScript i XML (AJAX), która zwraca dane JavaScript Object Notation (JSON) lub XML. Dostęp do usługi AJAX można uzyskać za pomocą kodu JavaScript z klienta przeglądarki sieci Web. Ten przykład kompiluje się na [podstawowym przykładzie usługi AJAX](basic-ajax-service.md) . W przeciwieństwie do innych przykładów AJAX, ten przykład nie używa ASP.NET AJAX i <xref:System.Web.UI.ScriptManager> kontrolki. W przypadku dodatkowej konfiguracji usługi WCF AJAX są dostępne z dowolnej strony HTML za pośrednictwem języka JavaScript, a ten scenariusz jest przedstawiony tutaj. Aby zapoznać się z przykładem użycia programu WCF z ASP.NET AJAX, zobacz [AJAX Samples](ajax.md). Ten przykład pokazuje, jak przełączyć typ odpowiedzi operacji między JSON i XML. Ta funkcja jest dostępna niezależnie od tego, czy dostęp do usługi jest skonfigurowany za pomocą ASP.NET AJAX, czy na stronie klienta HTML/JavaScript. > [!NOTE] > Procedura instalacji i instrukcje dotyczące kompilacji dla tego przykładu znajdują się na końcu tego tematu. Aby umożliwić korzystanie z klientów non-ASP.NET AJAX, użyj <xref:System.ServiceModel.Activation.WebServiceHostFactory> (nie <xref:System.ServiceModel.Activation.WebScriptServiceHostFactory> ) w pliku SVC. <xref:System.ServiceModel.Activation.WebServiceHostFactory>dodaje <xref:System.ServiceModel.Description.WebHttpEndpoint> Standardowy punkt końcowy do usługi. 
Punkt końcowy jest skonfigurowany z pustym adresem względem pliku SVC; oznacza to, że adres usługi to `http://localhost/ServiceModelSamples/service.svc` , bez dodatkowych sufiksów innych niż nazwa operacji. `<%@ServiceHost language="c#" Debug="true" Service="Microsoft.Samples.XmlAjaxService.CalculatorService" Factory="System.ServiceModel.Activation.WebServiceHostFactory" %>` Poniższa sekcja w pliku Web. config może służyć do wprowadzania dodatkowych zmian w konfiguracji punktu końcowego. Można je usunąć, jeśli nie są potrzebne żadne dodatkowe zmiany. ```xml <system.serviceModel> <standardEndpoints> <webHttpEndpoint> <!-- Use this element to configure the endpoint --> <standardEndpoint name="" /> </webHttpEndpoint> </standardEndpoints> </system.serviceModel> ``` Domyślny format danych dla <xref:System.ServiceModel.Description.WebHttpEndpoint> jest XML, podczas gdy domyślny format danych dla <xref:System.ServiceModel.Description.WebScriptEndpoint> jest JSON. Aby uzyskać więcej informacji, zobacz [Tworzenie usług WCF AJAX bez ASP.NET](../feature-details/creating-wcf-ajax-services-without-aspnet.md). Usługa w poniższym przykładzie jest standardową usługą WCF z dwiema operacjami. Obie operacje wymagają <xref:System.ServiceModel.Web.WebMessageBodyStyle.Wrapped> stylu treści dla <xref:System.ServiceModel.Web.WebGetAttribute> lub <xref:System.ServiceModel.Web.WebInvokeAttribute> atrybutów, które są specyficzne dla `webHttp` zachowania i nie mają wpływu na przełącznik formatu danych JSON/XML. ```csharp [OperationContract] [WebInvoke(ResponseFormat = WebMessageFormat.Xml, BodyStyle = WebMessageBodyStyle.Wrapped)] MathResult DoMathXml(double n1, double n2); ``` Format odpowiedzi dla operacji jest określany jako kod XML, który jest domyślnym ustawieniem [\<webHttp>](../../configure-apps/file-schema/wcf/webhttp.md) zachowania. Jednak dobrym sposobem jest jawne określenie formatu odpowiedzi. 
Inna operacja używa `WebInvokeAttribute` atrybutu i jawnie określa kod JSON zamiast XML dla odpowiedzi. ```csharp [OperationContract] [WebInvoke(ResponseFormat = WebMessageFormat.Json, BodyStyle = WebMessageBodyStyle.Wrapped)] MathResult DoMathJson(double n1, double n2); ``` Należy zauważyć, że w obu przypadkach operacje zwracają typ złożony, `MathResult` który jest standardowym typem kontraktu danych WCF. Strona sieci Web klienta XmlAjaxClientPage. htm zawiera kod JavaScript, który wywołuje jedną z poprzednich dwóch operacji, gdy użytkownik kliknie przyciski **wykonaj obliczenia (Return JSON)** lub **Wykonaj obliczenie (zwrotny kod XML)** na stronie. Kod do wywołania usługi konstruuje treść JSON i wysyła go przy użyciu protokołu HTTP POST. Żądanie jest tworzone ręcznie w języku JavaScript, w przeciwieństwie do przykładowej [podstawowej usługi AJAX](basic-ajax-service.md) , a inne przykłady przy użyciu ASP.NET AJAX. ```csharp // Create HTTP request var xmlHttp; // Request instantiation code omitted… // Result handler code omitted… // Build the operation URL var url = "service.svc/ajaxEndpoint/"; url = url + operation; // Build the body of the JSON message var body = '{"n1":'; body = body + document.getElementById("num1").value + ',"n2":'; body = body + document.getElementById("num2").value + '}'; // Send the HTTP request xmlHttp.open("POST", url, true); xmlHttp.setRequestHeader("Content-type", "application/json"); xmlHttp.send(body); ``` Gdy usługa reaguje, odpowiedź jest wyświetlana bez dalszych operacji przetwarzania w polu tekstowym na stronie. Ta implementacja jest zaimplementowana w celach demonstracyjnych, aby umożliwić bezpośrednio przestrzeganie używanych formatów danych XML i JSON. ```javascript // Create result handler xmlHttp.onreadystatechange=function(){ if(xmlHttp.readyState == 4){ document.getElementById("result").value = xmlHttp.responseText; } } ``` > [!IMPORTANT] > Przykłady mogą być już zainstalowane na komputerze. 
Przed kontynuowaniem Wyszukaj następujący katalog (domyślny). > > `<InstallDrive>:\WF_WCF_Samples` > > Jeśli ten katalog nie istnieje, przejdź do [przykładów Windows Communication Foundation (WCF) i Windows Workflow Foundation (WF) dla .NET Framework 4](https://www.microsoft.com/download/details.aspx?id=21459) , aby pobrać wszystkie Windows Communication Foundation (WCF) i [!INCLUDE[wf1](../../../../includes/wf1-md.md)] przykłady. Ten przykład znajduje się w następującym katalogu. > > `<InstallDrive>:\WF_WCF_Samples\WCF\Basic\AJAX\XmlAjaxService` #### <a name="to-set-up-build-and-run-the-sample"></a>Aby skonfigurować, skompilować i uruchomić przykład 1. Upewnij się, że została wykonana [Procedura konfiguracji jednorazowej dla przykładów Windows Communication Foundation](one-time-setup-procedure-for-the-wcf-samples.md). 2. Skompiluj rozwiązanie XmlAjaxService. sln zgodnie z opisem w temacie [Tworzenie przykładów Windows Communication Foundation](building-the-samples.md). 3. Przejdź do `http://localhost/ServiceModelSamples/XmlAjaxClientPage.htm` (nie otwieraj XmlAjaxClientPage. htm w przeglądarce z katalogu projektu). ## <a name="see-also"></a>Zobacz też - [Usługa AJAX używająca żądań POST protokołu HTTP](ajax-service-using-http-post.md)
61.663793
570
0.794212
pol_Latn
0.998265
4b14c32ec10045f199c742f3d91422f3555a1a6a
485
md
Markdown
README.md
devops-future/electron-game-2048
79ed01e6f69eddfe1fba9c153aff435025e39a52
[ "MIT" ]
10
2021-04-20T00:07:01.000Z
2022-02-03T05:36:31.000Z
README.md
devdreamsolution/electron-game-2048
2426cb615b6f585874eddc689ecbe509bc11d66c
[ "MIT" ]
null
null
null
README.md
devdreamsolution/electron-game-2048
2426cb615b6f585874eddc689ecbe509bc11d66c
[ "MIT" ]
null
null
null
# Game 2048 A cross-platform desktop application developed with Electron, based on the [2048 web version](https://gabrielecirulli.github.io/2048/). The original source files come from [2048](https://github.com/gabrielecirulli/2048). # Function Play the game 2048. ![image](/public/2048.png) # Setup Locally ```bash git clone https://github.com/devdreamsolution/electron-game-2048.git cd game-2048-electron npm install npm start ``` The game will launch. Enjoy! # Packaging ```bash npm run dist ```
24.25
208
0.756701
eng_Latn
0.5858
4b152f0c1da6d07516652ef6edbc677fe1248839
293
md
Markdown
about/index.md
jaredwood/notes-so-simple
363d57d043efd71592daf5d74cafafe34dbf6fbd
[ "MIT" ]
null
null
null
about/index.md
jaredwood/notes-so-simple
363d57d043efd71592daf5d74cafafe34dbf6fbd
[ "MIT" ]
null
null
null
about/index.md
jaredwood/notes-so-simple
363d57d043efd71592daf5d74cafafe34dbf6fbd
[ "MIT" ]
null
null
null
--- layout: page title: Jared Wood excerpt: "Simple blog with some random notes..." modified: 2018-01-18 --- This blog is an attempt to share some notes on random topics including machine learning, signal processing, etc. It is not even close to exhaustive, but hopefully you'll find it interesting.
26.636364
82
0.767918
eng_Latn
0.999238
4b1659e81f5817ccbe4bec29dd481469e8f7e0d7
5,386
md
Markdown
articles/load-balancer/load-balancer-get-started-internet-classic-portal.md
OpenLocalizationTestOrg/azure-docs-pr15_fr-BE
753623e5195c97bb016b3a1f579431af9672c200
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/load-balancer/load-balancer-get-started-internet-classic-portal.md
OpenLocalizationTestOrg/azure-docs-pr15_fr-BE
753623e5195c97bb016b3a1f579431af9672c200
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/load-balancer/load-balancer-get-started-internet-classic-portal.md
OpenLocalizationTestOrg/azure-docs-pr15_fr-BE
753623e5195c97bb016b3a1f579431af9672c200
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties pageTitle="Get started creating an Internet-facing load balancer in the classic deployment model using the Azure classic portal | Microsoft Azure" description="Learn how to create an Internet-facing load balancer in the classic deployment model using the Azure classic portal" services="load-balancer" documentationCenter="na" authors="sdwheeler" manager="carmonm" editor="" tags="azure-service-management" /> <tags ms.service="load-balancer" ms.devlang="na" ms.topic="get-started-article" ms.tgt_pltfrm="na" ms.workload="infrastructure-services" ms.date="08/31/2016" ms.author="sewhee" /> # <a name="get-started-creating-an-internet-facing-load-balancer-classic-in-the-azure-classic-portal"></a>Get started creating an Internet-facing load balancer (classic) in the Azure classic portal [AZURE.INCLUDE [load-balancer-get-started-internet-classic-selectors-include.md](../../includes/load-balancer-get-started-internet-classic-selectors-include.md)] [AZURE.INCLUDE [load-balancer-get-started-internet-intro-include.md](../../includes/load-balancer-get-started-internet-intro-include.md)] [AZURE.INCLUDE [azure-arm-classic-important-include](../../includes/azure-arm-classic-important-include.md)] This article covers the classic deployment model. You can also [learn how to create an Internet-facing load balancer using Azure Resource Manager](load-balancer-get-started-internet-arm-ps.md). [AZURE.INCLUDE [load-balancer-get-started-internet-scenario-include.md](../../includes/load-balancer-get-started-internet-scenario-include.md)] ## <a name="set-up-an-internet-facing-load-balancer-for-virtual-machines"></a>Set up an Internet-facing load balancer for virtual machines To load balance network traffic from the Internet across the virtual machines of a cloud service, you must create a load-balanced set.
This procedure assumes that you have already created the virtual machines and that they are all in the same cloud service. **To configure a load-balanced set for virtual machines** 1. In the Azure classic portal, click **Virtual Machines**, and then click the name of a virtual machine in the load-balanced set. 2. Click **Endpoints**, and then click **Add**. 3. On the **Add an endpoint to a virtual machine** page, click the right arrow. 4. On the **Specify the details of the endpoint** page: * In the **Name** box, type a name for the endpoint, or select a name from the list of predefined endpoints for common protocols. * In **Protocol**, select the protocol required by the type of endpoint, either TCP or UDP, as needed. * In **Public and private ports**, type the port numbers that you want the virtual machine to use, as needed. You can use the private port and firewall rules on the virtual machine to redirect traffic in a way that is appropriate for your application. The private port can be the same as the public port. For example, for an endpoint for web (HTTP) traffic, you could assign port 80 to both the public and the private port. 5. Select **Create a load-balanced set**, and then click the right arrow. 6. On the **Configure the load-balanced set** page, type a name for the load-balanced set, and then assign the values for the probe behavior of the Azure load balancer. The load balancer uses probes to determine whether the virtual machines in the load-balanced set are available to receive incoming traffic. 7.
Click the check mark to create the load-balanced endpoint. You will see **Yes** in the **Load-balanced set name** column of the **Endpoints** page for the virtual machine. 8. In the portal, click **Virtual Machines**, click the name of an additional virtual machine in the load-balanced set, click **Endpoints**, and then click **Add**. 9. On the **Add an endpoint to a virtual machine** page, click **Add the endpoint to an existing load-balanced set**, select the name of the load-balanced set, and then click the right arrow. 10. On the **Specify the details of the endpoint** page, type a name for the endpoint, and then click the check mark. For the additional virtual machines in the load-balanced set, repeat steps 8 through 10. ## <a name="next-steps"></a>Next steps [Get started configuring an internal load balancer](load-balancer-get-started-ilb-arm-ps.md) [Configure a load balancer distribution mode](load-balancer-distribution-mode.md) [Configure TCP idle timeout settings for your load balancer](load-balancer-tcp-idle-timeout.md)
72.783784
492
0.777386
fra_Latn
0.983819
4b16784559b271824bad82dfa7dd30edfa26bfbf
1,961
md
Markdown
README.md
nice-move/remark-code-example
6872bd729aa34c2f72c94c192cf347f8065cd7be
[ "MIT" ]
null
null
null
README.md
nice-move/remark-code-example
6872bd729aa34c2f72c94c192cf347f8065cd7be
[ "MIT" ]
null
null
null
README.md
nice-move/remark-code-example
6872bd729aa34c2f72c94c192cf347f8065cd7be
[ "MIT" ]
null
null
null
# remark-code-example Remark plugin to copy live code blocks as code examples. [![npm][npm-badge]][npm-url] [![github][github-badge]][github-url] ![node][node-badge] [npm-url]: https://www.npmjs.com/package/remark-code-example [npm-badge]: https://img.shields.io/npm/v/remark-code-example.svg?style=flat-square&logo=npm [github-url]: https://github.com/nice-move/remark-code-example [github-badge]: https://img.shields.io/npm/l/remark-code-example.svg?style=flat-square&colorB=blue&logo=github [node-badge]: https://img.shields.io/node/v/remark-code-example.svg?style=flat-square&colorB=green&logo=node.js ## Installation ```sh npm install remark remark-code-example --save-dev ``` ## Usage ```cjs const { readFileSync } = require('fs'); const remark = require('remark'); const codeSample = require('remark-code-example'); const markdownText = readFileSync('example.md', 'utf8'); remark() .use(codeSample, { copyAtBefore: false /* true by default */ }) .process(markdownText) .then((file) => console.info(file)) .catch((error) => console.warn(error)); ``` ### Options.copyAtBefore - type: boolean - default: true - required: false - description: Place copied code before original code ### Options.metas - type: object of string - default: {} - required: false - description: Define the meta of a code block by its lang ## Syntax ### code-example `````markdown Turn ```mermaid code-example flowchart LR Start --> Stop ``` Into ````markdown ```mermaid flowchart LR Start --> Stop ``` ```` ````` ### code-example-copy `````markdown Turn ```mermaid code-example-copy flowchart LR Start --> Stop ``` Into ````markdown ```mermaid flowchart LR Start --> Stop ``` ```` ```mermaid flowchart LR Start --> Stop ``` ````` ### code-alias-copy ````markdown Turn ```mermaid code-alias-copy=diagram flowchart LR Start --> Stop ``` Into ```diagram flowchart LR Start --> Stop ``` ```mermaid flowchart LR Start --> Stop ``` ````
16.07377
111
0.676696
eng_Latn
0.504643
4b17bb727da35446c9b722e6a6c9abf8d0b4a3f7
31,177
md
Markdown
biztalk/adapters-and-accelerators/adapter-sql/receive-strongly-typed-polling-based-data-changed-messages-from-sql-in-biztalk.md
damabe/biztalk-docs
1c01d16bf8ba67dc0aff7707ca03d92293032ad9
[ "CC-BY-4.0", "MIT" ]
1
2021-08-22T18:02:23.000Z
2021-08-22T18:02:23.000Z
biztalk/adapters-and-accelerators/adapter-sql/receive-strongly-typed-polling-based-data-changed-messages-from-sql-in-biztalk.md
damabe/biztalk-docs
1c01d16bf8ba67dc0aff7707ca03d92293032ad9
[ "CC-BY-4.0", "MIT" ]
null
null
null
biztalk/adapters-and-accelerators/adapter-sql/receive-strongly-typed-polling-based-data-changed-messages-from-sql-in-biztalk.md
damabe/biztalk-docs
1c01d16bf8ba67dc0aff7707ca03d92293032ad9
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- description: "Learn more about: Receive strongly-typed polling-based data-changed messages from SQL Server using BizTalk Server" title: "Receive strongly-typed polling-based data-changed messages from SQL Server using BizTalk Server | Microsoft Docs" ms.custom: "" ms.date: "06/08/2017" ms.prod: "biztalk-server" ms.reviewer: "" ms.suite: "" ms.tgt_pltfrm: "" ms.topic: "article" ms.assetid: e6e6ba7e-9e13-4e28-b57d-d24569277bbc caps.latest.revision: 21 author: "MandiOhlinger" ms.author: "mandia" manager: "anneta" --- # Receive strongly-typed polling-based data-changed messages from SQL Server using BizTalk Server You can configure the [!INCLUDE[adaptersqlshort](../../includes/adaptersqlshort-md.md)] to receive strongly-typed polling messages from SQL Server. You can specify a polling statement that the adapter executes to poll the database. The polling statement can be a SELECT statement or a stored procedure that returns a result set. You must use strongly-typed polling in a scenario where you want to map the elements in the polling message to any other schema. The schema you want to map to could be for another operation on SQL Server. For example, you could map certain elements in the polling message to the schema for an Insert operation on another table. So, the values in the polling message serve as parameters for the Insert operation. In a simpler scenario, you could map the schema for strongly-typed polling message to a schema file that just stores information. > [!IMPORTANT] > If you want to have more than one polling operation in a single BizTalk application, you must specify an **InboundID** connection property as part of the connection URI to make it unique. With a unique connection URI, you can create multiple receive ports that poll the same database, or even the same table in a database. 
For more information, see [Receive Polling Messages Across Multiple Receive Ports from SQL using Biztalk Server](../../adapters-and-accelerators/adapter-sql/receive-polling-messages-across-multiple-receive-ports-from-sql-using-biztalk.md). For more information about how the adapter supports strongly-typed polling, see [Support for Polling](https://msdn.microsoft.com/library/dd788416.aspx). For more information about the message schema for strongly-typed polling, see [Message Schemas for the Polling and TypedPolling Operations](../../adapters-and-accelerators/adapter-sql/message-schemas-for-the-polling-and-typedpolling-operations.md). ## How this Topic Demonstrates Strongly-typed Polling This topic demonstrates how to use strongly-typed polling to map the polling message to another schema. This topic shows how to create a BizTalk project and generate schema for **TypedPolling** operation. Before generating schema for **TypedPolling** operation, you must do the following: - You must specify an **InboundID** as part of the connection URI. - You must specify a polling statement for the **PollingStatement** binding property. As part of the polling statement, perform the following operations: - Select all the rows from the Employee table. - Execute a stored procedure (MOVE_EMP_DATA) to move all the records from the Employee table to an EmployeeHistory table. - Execute a stored procedure (ADD_EMP_DETAILS) to add a new record to the Employee table. This procedure takes the employee name, designation, and salary as parameters. To perform these operations, you must specify the following for the **PollingStatement** binding property: ``` SELECT * FROM Employee;EXEC MOVE_EMP_DATA;EXEC ADD_EMP_DETAILS John, Tester, 100000 ``` Because you generate schema for the **TypedPolling** operation, the schema is strongly-typed and contains all the elements that will be included in the polling message. 
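The bodies of the MOVE_EMP_DATA and ADD_EMP_DETAILS stored procedures are not shown in this topic. As a rough illustration of what the polling batch does to the data, the following Python sketch replays the same three steps against an in-memory SQLite database (the Employee/EmployeeHistory column layout here is an assumption based on the columns named elsewhere in this topic, not the sample's actual T-SQL):

```python
# Illustrative only: approximates the effect of the three-statement polling
# batch (SELECT, then MOVE_EMP_DATA, then ADD_EMP_DETAILS) using an
# in-memory SQLite database. The real sample runs T-SQL stored procedures
# on SQL Server; the table layout below is assumed.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Employee (Employee_ID INTEGER PRIMARY KEY, "
            "Name TEXT, Designation TEXT, Salary INTEGER)")
cur.execute("CREATE TABLE EmployeeHistory (Employee_ID INTEGER, "
            "Name TEXT, Designation TEXT, Salary INTEGER)")
cur.execute("INSERT INTO Employee VALUES (10750, 'Jane', 'Manager', 200000)")

# SELECT * FROM Employee -- this result set becomes the polling message.
polling_message = cur.execute("SELECT * FROM Employee").fetchall()

# EXEC MOVE_EMP_DATA -- moves all records from Employee to EmployeeHistory.
cur.execute("INSERT INTO EmployeeHistory SELECT * FROM Employee")
cur.execute("DELETE FROM Employee")

# EXEC ADD_EMP_DETAILS John, Tester, 100000 -- adds one new record.
cur.execute("INSERT INTO Employee (Name, Designation, Salary) "
            "VALUES ('John', 'Tester', 100000)")
conn.commit()
```

After the batch runs, the previously existing rows live in EmployeeHistory and Employee holds only the newly inserted record, which is exactly the state the adapter leaves behind after each poll.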
As part of the same BizTalk project, you add another schema file, for example EmployeeDetails.xsd. The schema for EmployeeDetails.xsd resembles the following: ``` <?xml version="1.0" encoding="utf-16" ?> <xs:schema xmlns:b="http://schemas.microsoft.com/BizTalk/2003" xmlns="http://Typed_Polling.EmployeeDetails" elementFormDefault="qualified" targetNamespace="http://Typed_Polling.EmployeeDetails" xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="EmployeeDetails"> <xs:complexType> <xs:sequence> <xs:element name="Employee_Info" type="xs:string" /> <xs:element name="Employee_Profile" type="xs:string" /> <xs:element name="Employee_Performance" type="xs:string" /> </xs:sequence> </xs:complexType> </xs:element> </xs:schema> ``` You also add a BizTalk Mapper to the project to map the elements from the Employee table (received as polling message) to the elements in the EmployeeDetails.xsd schema. As part of the map, you combine one or more elements from the polling message and map it to a single element in the EmployeeDetails schema. You can do so by using the **String Concatenate** functoid. Finally, as part of the BizTalk project, a file conforming to the EmployeeDetails.xsd schema is dropped to a FILE send port. ## Configure Typed Polling with the SQL Adapter Binding Properties The following table summarizes the [!INCLUDE[adaptersqlshort](../../includes/adaptersqlshort-md.md)] binding properties that you use to configure the adapter to receive data-change messages. Other than the **PollingStatement** binding property, all the other binding properties listed in this section are required while configuring the receive port in the [!INCLUDE[btsBizTalkServerNoVersion](../../includes/btsbiztalkservernoversion-md.md)] Administration console. You must specify the **PollingStatement** binding property before generating schema for the **TypedPolling** operation. 
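Outside BizTalk, the effect of that map can be sketched in a few lines of Python: each pair of polling-message fields is concatenated (the job of the String Concatenate functoid) into one EmployeeDetails element. The field pairings follow this topic; the function itself is illustration only, not the Mapper's actual machinery:

```python
# Rough stand-in for the BizTalk map: concatenate pairs of polling-message
# fields into the three EmployeeDetails elements. Pairings are those
# described in this topic; this is not how the BizTalk Mapper runs.
import xml.etree.ElementTree as ET

def map_to_employee_details(record):
    root = ET.Element("EmployeeDetails",
                      xmlns="http://Typed_Polling.EmployeeDetails")
    pairings = [
        ("Employee_Info", ("Employee_ID", "Name")),
        ("Employee_Profile", ("Designation", "Job_Description")),
        ("Employee_Performance", ("Rating", "Salary")),
    ]
    for target, sources in pairings:
        # String Concatenate functoid: join the source values into one string.
        ET.SubElement(root, target).text = "".join(
            str(record[s]) for s in sources)
    return ET.tostring(root, encoding="unicode")

xml_out = map_to_employee_details({
    "Employee_ID": 10751, "Name": "John",
    "Designation": "Tester", "Job_Description": "ManagesTesting",
    "Rating": "", "Salary": 100000,
})
```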
> [!NOTE] > For typed polling, you must specify the **PollingStatement** binding property while generating the schema. You can choose to specify the other binding properties as well while generating the schema, even though they are not mandatory. If you do specify the binding properties, the port binding file that the [!INCLUDE[consumeadapterservshort](../../includes/consumeadapterservshort-md.md)] generates as part of the metadata generation also contains the values you specify for the binding properties. You can later import this binding file in the [!INCLUDE[btsBizTalkServerNoVersion](../../includes/btsbiztalkservernoversion-md.md)] Administration console to create the WCF-custom or WCF-SQL receive port with the binding properties already set. For more information about creating a port using the binding file, see [Configure a physical port binding using a port binding file to use the SQL adapter](../../adapters-and-accelerators/adapter-sql/configure-a-physical-port-binding-using-a-port-binding-file-to-sql-adapter.md). | Binding Property | Description | |----------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **InboundOperationType** | Specifies whether you want to perform a **Polling**, **TypedPolling**, or **Notification** inbound operation. Default is **Polling**. To receive strongly-typed polling messages, set this to **TypedPolling**. 
| | **PolledDataAvailableStatement** | Specifies the SQL statement that the adapter executes to determine whether any data is available for polling. The SQL statement must return a result set consisting of rows and columns. Only if a row is available, the SQL statement specified for the **PollingStatement** binding property will be executed. | | **PollingIntervalInSeconds** | Specifies the interval, in seconds, at which the [!INCLUDE[adaptersqlshort](../../includes/adaptersqlshort-md.md)] executes the statement specified for the **PolledDataAvailableStatement** binding property. The default is 30 seconds. The polling interval determines the time interval between successive polls. If the statement is executed within the specified interval, the adapter waits for the remaining time in the interval. | | **PollingStatement** | Specifies the SQL statement to poll the SQL Server database table. You can specify a simple SELECT statement or a stored procedure for the polling statement. The default is null. You must specify a value for **PollingStatement** to enable polling. The polling statement is executed only if there is data available for polling, which is determined by the **PolledDataAvailableStatement** binding property. You can specify any number of SQL statements separated by a semi-colon.<br /><br /> **Important:** For **TypedPolling**, you must specify this binding property before generating metadata. | | **PollWhileDataFound** | Specifies whether the [!INCLUDE[adaptersqlshort](../../includes/adaptersqlshort-md.md)] ignores the polling interval and continuously executes the SQL statement specified for the **PolledDataAvailableStatement** binding property, if data is available in the table being polled. If no data is available in the table, the adapter reverts to execute the SQL statement at the specified polling interval. Default is **false**. 
| For a more complete description of these properties, see [Read about the BizTalk Adapter for SQL Server adapter Binding Properties](../../adapters-and-accelerators/adapter-sql/read-about-the-biztalk-adapter-for-sql-server-adapter-binding-properties.md). For a complete description of how to use the [!INCLUDE[adaptersqlshort](../../includes/adaptersqlshort-md.md)] to poll SQL Server, read further. ## How to Receive Strongly-typed Data-change Messages from the SQL Server Database Performing an operation on the SQL Server database using [!INCLUDE[adaptersqlshort](../../includes/adaptersqlshort-md.md)] with [!INCLUDE[btsBizTalkServerNoVersion](../../includes/btsbiztalkservernoversion-md.md)] involves the procedural tasks described in [Building blocks to develop BizTalk applications with the SQL adapter](../../adapters-and-accelerators/adapter-sql/building-blocks-to-develop-biztalk-applications-with-the-sql-adapter.md). To configure the adapter to receive strongly-typed data-change messages, these tasks are: 1. Create a BizTalk project, and then generate schema for the **TypedPolling** operation. You must specify the **InboundID** connection property and the **PollingStatement** binding property while generating schema. For example, a connection URI with the inbound ID specified resembles the following: ``` mssql://mysqlserver//mysqldatabase?InboundID=mydatabaseId ``` 2. Create a message in the BizTalk project for receiving messages from the SQL Server database. 3. Create an orchestration to receive messages from the SQL Server database and to save them to a folder. 4. Add a schema, for example, EmployeeDetails.xsd, in the BizTalk project. 5. Add a BizTalk Mapper to map the schema for the polling message to EmployeeDetails.xsd schema. 6. Build and deploy the BizTalk project. 7. Configure the BizTalk application by creating physical send and receive ports. 
> [!IMPORTANT] > For inbound polling scenarios you must always configure a one-way WCF-Custom or WCF-SQL receive port. Two-way WCF-Custom or WCF-SQL receive ports are not supported for inbound operations. 8. Start the BizTalk application. This topic provides instructions to perform these tasks. ## Sample Based on This Topic A sample, TypedPolling, based on this topic is provided with the [!INCLUDE[adapterpacknoversion](../../includes/adapterpacknoversion-md.md)]. For more information, see [Samples for the SQL adapter](../../adapters-and-accelerators/adapter-sql/samples-for-the-sql-adapter.md). ## Generate Schema You must generate the schema for the **TypedPolling** operation. See [Retrieving Metadata for SQL Server Operations in Visual Studio using the SQL adapter](../../adapters-and-accelerators/adapter-sql/get-metadata-for-sql-server-operations-in-visual-studio-using-the-sql-adapter.md) for more information about how to generate the schema. Perform the following tasks when generating the schema. 1. Specify the **InboundID** connection property while specifying the connection URI. For this topic, you can specify the **InboundID** as **Employee**. For more information about the connection URI, see [Create the SQL Server Connection URI](../../adapters-and-accelerators/adapter-sql/create-the-sql-server-connection-uri.md). 2. Specify a value for the **PollingStatement** binding property. For more information about this binding property, see [Read about the BizTalk Adapter for SQL Server adapter Binding Properties](../../adapters-and-accelerators/adapter-sql/read-about-the-biztalk-adapter-for-sql-server-adapter-binding-properties.md). For instructions on how to specify binding properties, see [Configure the binding properties for the SQL adapter](../../adapters-and-accelerators/adapter-sql/configure-the-binding-properties-for-the-sql-adapter.md). 3. Select the contract type as **Service (Inbound operation)**. 4. Generate schema for the **TypedPolling** operation. 
## Define Messages and Message Types The schema that you generated earlier describes the "types" required for the messages in the orchestration. A message is typically a variable, the type for which is defined by the corresponding schema. Once the schema is generated, you must link it to the messages from the Orchestration view of the BizTalk project. For this topic, you must create one message to receive messages from the SQL Server database. Perform the following steps to create messages and link them to schema. #### Create messages and link to schema 1. Add an orchestration to the BizTalk project. From the Solution Explorer, right-click the BizTalk project name, point to **Add**, and then click **New Item**. Type a name for the BizTalk orchestration and then click **Add**. 2. Open the orchestration view window of the BizTalk project, if it is not already open. Click **View**, point to **Other Windows**, and then click **Orchestration View**. 3. In the **Orchestration View**, right-click **Messages**, and then click **New Message**. 4. Right-click the newly created message, and then select **Properties Window**. 5. In the **Properties** pane for **Message_1**, do the following: |Use this|To do this| |--------------|----------------| |Identifier|Type **PollingMessage**.| |Message Type|From the drop-down list, expand **Schemas**, and select *Typed_Polling.TypedPolling_Employee.TypedPolling*, where *Typed_Polling* is the name of your BizTalk project. *TypedPolling_Employee* is the schema generated for the **TypedPolling** operation.| ## Set up the Orchestration You must create a BizTalk orchestration to use [!INCLUDE[btsBizTalkServerNoVersion](../../includes/btsbiztalkservernoversion-md.md)] for receiving polling-based data-change messages from the SQL Server database. In this orchestration, the adapter receives the polling message for the specified polling statement. The BizTalk Mapper then maps the polling message schema to the EmployeeDetails.xsd schema. 
The mapped message is then saved to a FILE location. A typical orchestration for receiving a strongly-typed polling message from a SQL Server database would contain: - Receive and Send shapes to receive messages from SQL Server and send to a FILE port, respectively. - A one-way receive port to receive messages from SQL Server. > [!IMPORTANT] > For inbound polling scenarios you must always configure a one-way receive port. Two-way receive ports are not supported for inbound operations. - A one-way send port to send polling responses from a SQL Server database to a folder. - A BizTalk Mapper to map the schema of the polling message to any other schema. A sample orchestration resembles the following. ![Orchestration for strongly&#45;typed polling](../../adapters-and-accelerators/adapter-sql/media/1db03859-b7f8-470c-9158-2be4da0b45ae.gif "1db03859-b7f8-470c-9158-2be4da0b45ae") ### Add Message Shapes Make sure you specify the following properties for each of the message shapes. The names listed in the Shape column are the names of the message shapes as displayed in the just-mentioned orchestration. |Shape|Shape Type|Properties| |-----------|----------------|----------------| |ReceiveMessage|Receive|- Set **Name** to *ReceiveMessage*<br /><br /> - Set **Activate** to *True*| |SaveMessage|Send|- Set **Name** to *SaveMessage*| ### Add Ports Make sure you specify the following properties for each of the logical ports. The names listed in the Port column are the names of the ports as displayed in the orchestration. 
|Port|Properties| |----------|----------------| |SQLReceivePort|- Set **Identifier** to *SQLReceivePort*<br /><br /> - Set **Type** to *SQLReceivePortType*<br /><br /> - Set **Communication Pattern** to *One-Way*<br /><br /> - Set **Communication Direction** to *Receive*| |SaveMessagePort|- Set **Identifier** to *SaveMessagePort*<br /><br /> - Set **Type** to *SaveMessagePortType*<br /><br /> - Set **Communication Pattern** to *One-Way*<br /><br /> - Set **Communication Direction** to *Send*| ### Enter Messages for Action Shapes and Connect to Ports The following table specifies the properties and their values that you should set to specify messages for action shapes and to link the messages to the ports. The names listed in the Shape column are the names of the message shapes as displayed in the orchestration mentioned earlier. |Shape|Properties| |-----------|----------------| |ReceiveMessage|Set **Message** to *PollingMessage*<br /><br /> Set **Operation** to *SQLReceivePort.TypedPolling.Request*| |SaveMessage|Set **Message** to *PollingMessage*<br /><br /> Set **Operation** to *SaveMessagePort.TypedPolling.Request*| After you have specified these properties, the message shapes and ports are connected. ### Add a BizTalk Mapper You must add a BizTalk Mapper to the orchestration to map the polling message schema to the EmployeeDetails.xsd schema. In the [!INCLUDE[btsBizTalkServerNoVersion](../../includes/btsbiztalkservernoversion-md.md)] Administration console, you will use this Mapper to map the schema for the polling message to the EmployeeDetails.xsd schema. 1. Add a BizTalk Mapper to the BizTalk project. Right-click the BizTalk project, point to **Add**, and click **New Item**. In the **Add New Item** dialog box, from the left pane, select **Map Files**. From the right pane, select **Map**. Specify a name for the map, such as `MapSchema.btm`. Click **Add**. 2. From the Source Schema pane, click **Open Source Schema**. 3. 
In the **BizTalk Type Picker** dialog box, expand the project name, expand **Schemas**, and select the schema for the polling message. For this topic, select Typed_Polling.TypedPolling_Employee. Click **OK**. 4. In the **Root Node for Source Schema** dialog box, select TypedPolling and click **OK**. 5. From the Destination Schema pane, click **Open Destination Schema**. 6. In the **BizTalk Type Picker** dialog box, expand the project name, expand **Schemas**, and select the schema for EmployeeDetails. For this topic, select Typed_Polling.EmployeeDetails. Click **OK**. 7. In the source schema of the polling message, expand the TypedPollingResultSet0 node and the subsequent nodes to see the elements that are returned in the polling message. In the destination schema, expand the EmployeeDetails node to see the different elements in the schema. For this topic, you must map the schemas in such a way that: - **Employee_ID** and **Name** in the source schema must map to **Employee_Info** in the destination schema. - **Designation** and **Job_Description** in the source schema must map to **Employee_Profile** in the destination schema. - **Rating** and **Salary** in the source schema must map to **Employee_Performance** in the destination schema. To combine more than one node in the source schema and map them to a single node in the destination schema, you must use the **String Concatenate functoid**. Details [!INCLUDE[ui-guidance-developers-reference](../../includes/ui-guidance-developers-reference.md)]. 8. To use the String Concatenate functoid: 1. From the **Toolbox**, drag the **String Concatenate** functoid and drop it on the Mapper grid. 2. Connect the **Employee_ID** and **Name** elements in the source schema to the functoid. 3. Connect the functoid to the **Employee_Info** element in the destination schema. 4. Repeat these steps for all the elements that you want to map. 
A finished map will resemble the following:

![Map the strongly-typed polling schema](../../adapters-and-accelerators/adapter-sql/media/0a4a2608-3b84-4bac-9a16-512cf42c7525.gif "0a4a2608-3b84-4bac-9a16-512cf42c7525")

5. Save the map.

The orchestration is complete after you create the Mapper. You must now build the BizTalk solution and deploy it to a [!INCLUDE[btsBizTalkServerNoVersion](../../includes/btsbiztalkservernoversion-md.md)]. For more information, see [Building and Running Orchestrations](../../core/building-and-running-orchestrations.md).

## Configure the BizTalk Application

After you have deployed the BizTalk project, the orchestration you created earlier is listed under the **Orchestrations** pane in the BizTalk Server Administration console. You must use the BizTalk Server Administration console to configure the application. For a walkthrough, see [Walkthrough: Deploying a Basic BizTalk Application](Walkthrough:%20Deploying%20a%20Basic%20BizTalk%20Application.md).

Configuring an application involves:

- Selecting a host for the application.

- Mapping the ports that you created in your orchestration to physical ports in the BizTalk Server Administration console. For this orchestration you must:

  - Define a physical WCF-Custom or WCF-SQL one-way receive port. This port polls the SQL Server database with the polling statement you specify for the port. For information about how to create ports, see [Manually configure a physical port binding to the SQL adapter](../../adapters-and-accelerators/adapter-sql/manually-configure-a-physical-port-binding-to-the-sql-adapter.md). Make sure you specify the following binding properties for the receive port.

    > [!IMPORTANT]
    > Make sure you specify the **InboundID** as part of the connection URI. The inbound ID must be the same as the one you specified while generating the schema.

    > [!IMPORTANT]
    > You do not need to perform this step if you specified the binding properties at design time. In such a case, you can create a WCF-Custom or WCF-SQL receive port, with the required binding properties set, by importing the binding file created by the [!INCLUDE[consumeadapterservshort](../../includes/consumeadapterservshort-md.md)]. For more information, see [Configure a physical port binding using a port binding file to use the SQL adapter](../../adapters-and-accelerators/adapter-sql/configure-a-physical-port-binding-using-a-port-binding-file-to-sql-adapter.md).

    |Binding Property|Value|
    |----------------------|-----------|
    |**InboundOperationType**|Make sure you set this to **TypedPolling**.|
    |**PolledDataAvailableStatement**|Make sure you specify the same SQL statement you specified while generating the schema, which is:<br /><br /> `SELECT COUNT(*) FROM Employee`|
    |**PollingStatement**|Make sure you provide the same polling statement you specified while generating the schema, which is:<br /><br /> `SELECT * FROM Employee;EXEC MOVE_EMP_DATA;EXEC ADD_EMP_DETAILS John, Tester, 100000`|

    For more information about the different binding properties, see [Read about the BizTalk Adapter for SQL Server adapter Binding Properties](../../adapters-and-accelerators/adapter-sql/read-about-the-biztalk-adapter-for-sql-server-adapter-binding-properties.md).

    > [!NOTE]
    > We recommend configuring the transaction isolation level and the transaction timeout while performing inbound operations using the [!INCLUDE[adaptersqlshort](../../includes/adaptersqlshort-md.md)]. You can do so by adding the service behavior while configuring the WCF-Custom or WCF-SQL receive port. For instructions on how to add the service behavior, see [Configure Transaction Isolation Level and Transaction Timeout with SQL](../../adapters-and-accelerators/adapter-sql/configure-transaction-isolation-level-and-transaction-timeout-with-sql.md).

  - Define a FILE send port where the adapter will drop the message. This send port will also use the map you created in the orchestration to map the polling message to a message conforming to the EmployeeDetails.xsd schema. Perform the following steps to configure the FILE send port to use the map.

    1. Create a FILE send port.

    2. From the left pane of the send port properties dialog box, click **Outbound Maps**. From the right pane, click the field under the **Map** column, and from the drop-down select **MapSchema**. Click **OK**.

       ![Configure outbound map on the FILE send port](../../adapters-and-accelerators/adapter-sql/media/831c9aee-fd97-466f-9270-3b04dbccd9fe.gif "831c9aee-fd97-466f-9270-3b04dbccd9fe")

## Start the Application

You must start the BizTalk application for receiving messages from the SQL Server database. For instructions on starting a BizTalk application, see [How to Start an Orchestration](../../core/how-to-start-an-orchestration.md).

At this stage, make sure:

- The WCF-Custom or WCF-SQL one-way receive port, which polls the SQL Server database using the statements specified for the **PollingStatement** binding property, is running.

- The FILE send port, which will map the polling message to the EmployeeDetails schema, is running.

- The BizTalk orchestration for the operation is running.

## Execute the Operation

After you run the application, the following set of actions takes place, in this sequence:

- The adapter executes the **PolledDataAvailableStatement** on the Employee table and determines that the table has records for polling.

- The adapter executes the polling statement. Because the polling statement consists of a SELECT statement and stored procedures, the adapter executes all the statements one after the other.

  - The adapter first executes the SELECT statement, which returns all the records in the Employee table.

  - The adapter then executes the MOVE_EMP_DATA stored procedure, which moves all data from the Employee table to the EmployeeHistory table. This stored procedure does not return any value.

  - The adapter then executes the ADD_EMP_DETAILS stored procedure, which adds one record to the Employee table. This stored procedure returns the Employee ID for the inserted record.

After the polling statement is executed and the message is received, the polling message gets sent to the FILE send port. Here, the outbound map (**MapSchema**) configured on the send port maps the polling message to the EmployeeDetails schema and drops the message to a file location. The message resembles the following:

```
<?xml version="1.0" encoding="utf-8" ?>
<EmployeeDetails xmlns="http://Typed_Polling.EmployeeDetails">
  <Employee_Info>10751John</Employee_Info>
  <Employee_Profile>TesterManagesTesting</Employee_Profile>
  <Employee_Performance>100000</Employee_Performance>
</EmployeeDetails>
```

In the preceding response, notice that the Employee_Info element contains a combination of the employee ID (10751) and the employee name (John). The other elements also contain combinations as mapped in the Mapper you created as part of the orchestration.

> [!NOTE]
> The [!INCLUDE[adaptersqlshort](../../includes/adaptersqlshort-md.md)] will continue to poll until you explicitly disable the receive port from the [!INCLUDE[btsBizTalkServerNoVersion](../../includes/btsbiztalkservernoversion-md.md)] Administration console.

## Best Practices

After you have deployed and configured the BizTalk project, you can export the configuration settings to an XML file called the binding file. Once you generate a binding file, you can import the configuration settings from the file, so that you do not need to re-create the send ports and receive ports for the same orchestration. For more information about binding files, see [Reuse adapter bindings](../../adapters-and-accelerators/adapter-sql/reuse-sql-adapter-bindings.md).

## See Also

[Poll SQL Server by Using the SQL Adapter with BizTalk Server](../../adapters-and-accelerators/adapter-sql/poll-sql-server-using-the-sql-adapter-with-biztalk-server.md)
95.929231
1,028
0.701639
eng_Latn
0.982988
4b1822e83eb64615ff0a47f10019e26bba19b973
1,180
md
Markdown
_posts/2021-11-28-git.md
Yu-Da-young/Yu-Da-young.github.io
a36b15d16cca1de66855345c4596a8ec554e8372
[ "MIT" ]
null
null
null
_posts/2021-11-28-git.md
Yu-Da-young/Yu-Da-young.github.io
a36b15d16cca1de66855345c4596a8ec554e8372
[ "MIT" ]
null
null
null
_posts/2021-11-28-git.md
Yu-Da-young/Yu-Da-young.github.io
a36b15d16cca1de66855345c4596a8ec554e8372
[ "MIT" ]
null
null
null
---
layout: post
title: "Git & Github"
date: 2021-11-28 17:03:36 +0530
categories: git
comments: true
---

## Git

A version control system (VCS) managed locally.

A system that manages the versions of your source code as you modify it.

## Github

A version control system (VCS) managed in the cloud.

A cloud service you rent rather than build and host yourself.

Open-source projects can be stored free of charge up to a point; otherwise, paid plans apply.

In short, the difference is that Git runs the version control system locally, while Github uses a cloud server provided by Github as the repository. So when you collaborate with others, share open source, or want to hear other people's opinions, Github lets you use those features conveniently. If you work alone, or collaborate only within a closed group, using Git by itself is fine.

***

>git init

- Designates (creates) the current directory as a git local repository [Working Directory]
- Use `ls -al` to confirm that the hidden .git directory was created
- Use `rm -rf .git` to delete the local repository

>git status

- Checks file states (staged, untracked, ...)

>git add filename

- Moves (tracks) the given file into the [Staging Area]

>git add .

- Moves every file in the current folder

>git commit

- Commits the files in the [Staging Area] to the repository [Repository]
- With no options, this command invokes the editor

>git commit -m "commit message"

- Commits immediately without invoking the editor

>git commit -am "commit message"

- Stages and commits in one step (= git add . + git commit -m "commit message")
- Only usable for files that have been committed at least once

>git diff

- Shows the difference between the local repository [Working Directory] and the [Staging Area]

>git log

- Checks the commit log
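The commands above can be exercised end to end in a throwaway directory. A minimal sketch — it assumes `git` is installed and uses a throwaway identity (`demo`/`demo@example.com`) for the commit:

```shell
# Walk through the basic workflow: init -> status -> add -> commit -> log.
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init -q .                      # designate this directory as a local repository
echo "hello" > readme.txt
git status --short                 # prints "?? readme.txt" (untracked)
git add readme.txt                 # move the file into the Staging Area
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "first commit"    # commit the staged file to the repository
git log --oneline                  # one line: the commit we just made
```

After the commit, `git status` reports a clean working tree; from this point on, `git commit -am` works for readme.txt, since it has now been committed at least once.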
28.095238
220
0.667797
kor_Hang
1.00001
4b18e95834ef5768cd47e072b2fba12589ecaf00
681
md
Markdown
content/post/2018-10-06-kubernetes-container-runtime-interface.md
chechiachang/chechiachang.github.io-src
a46c008fa8e79c8298c31f60243ee88bcfc463fb
[ "MIT" ]
null
null
null
content/post/2018-10-06-kubernetes-container-runtime-interface.md
chechiachang/chechiachang.github.io-src
a46c008fa8e79c8298c31f60243ee88bcfc463fb
[ "MIT" ]
null
null
null
content/post/2018-10-06-kubernetes-container-runtime-interface.md
chechiachang/chechiachang.github.io-src
a46c008fa8e79c8298c31f60243ee88bcfc463fb
[ "MIT" ]
null
null
null
---
title: "Kubernetes Container Runtime Interface"
date: 2018-10-06T12:07:00+08:00
lastmod: 2018-10-06T12:07:00+08:00
draft: false
tags: ["kubernetes", "container", "docker", "cri"]
categories: ["kubernetes"]
author: "Che-Chia Chang"

# You can also close(false) or open(true) something for this content.
# P.S. comment can only be closed
# comment: false
# toc: false

# You can also define another contentCopyright. e.g. contentCopyright: "This is another copyright."
contentCopyright: '<a href="https://github.com/gohugoio/hugoBasicExample" rel="noopener" target="_blank">See origin</a>'
# reward: false
mathjax: true
menu:
  main:
    parent: "Kubernetes"
    weight: 1
---
26.192308
120
0.71953
eng_Latn
0.724128
4b190aa97d10754963612bae960fad724db92c85
4,213
md
Markdown
deploy/swarm_legacy/README.md
jamesfolberth/ngc_STEM_camp_AWS
3c0360ec7b1913894d6f862d7f548dbb71ed9433
[ "BSD-3-Clause" ]
2
2018-05-15T17:36:08.000Z
2021-04-29T04:11:02.000Z
deploy/swarm_legacy/README.md
jamesfolberth/ngc_STEM_camp_AWS
3c0360ec7b1913894d6f862d7f548dbb71ed9433
[ "BSD-3-Clause" ]
1
2017-07-26T01:30:56.000Z
2017-07-26T01:30:56.000Z
deploy/swarm_legacy/README.md
jamesfolberth/ngc_STEM_camp_AWS
3c0360ec7b1913894d6f862d7f548dbb71ed9433
[ "BSD-3-Clause" ]
5
2017-07-10T03:17:47.000Z
2019-06-21T15:27:05.000Z
# Legacy Docker Swarm

## Setup

We follow [Andrea Zonca's guide](https://zonca.github.io/2016/05/jupyterhub-docker-swarm.html) for setting up JHub and [DockerSpawner](https://github.com/jupyterhub/dockerspawner) to spawn notebook server containers in a <i>legacy</i> Docker swarm. At the time of this writing, there aren't yet good ways of handling the new swarm mode that's integrated into the Docker engine. This is probably going to be fixed in the future, but for now, legacy swarm should provide a (stable) way to use Docker swarm. Documentation for <i>legacy</i> Docker swarm can be found [here](https://docs.docker.com/swarm/overview/).

1. We'll need a few ports open: We create another security group named "Swarm Manager" that has the following ports open to the VPC. Again, I think we can just open them all up to the VPC. Ports 2375, 4000, and 8500 are used by Docker swarm and consul, a distributed key-store used to store information about the nodes. Ports 32000-33000 are used by the Jupyter notebook servers (inside of Docker containers).

   |Ports | Protocol | Source |
   |------|----------|--------|
   |2375 | tcp | 172.31.0.0/16 |
   |4000 | tcp | 172.31.0.0/16 |
   |8500| tcp | 172.31.0.0/16 |
   |32000-33000| tcp | 172.31.0.0/16 |

   We make a final security group named "Swarm Worker" that has the following ports open to the VPC.

   |Ports | Protocol | Source |
   |------|----------|--------|
   |2375 | tcp | 172.31.0.0/16 |
   |4000 | tcp | 172.31.0.0/16 |
   |8500| tcp | 172.31.0.0/16 |

   Alternatively, we can just open up port 22 to the outside world and all ports inside the VPC (172.31.0.0/16).

2. Install Docker on a new manager or worker node.

   ```bash
   sudo yum -y update
   curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.rpm.sh | sudo bash
   sudo yum -y install docker git git-lfs
   sudo vim /etc/sysconfig/docker # Add OPTIONS = "... -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
   sudo service docker start
   sudo usermod -aG docker ec2-user
   logout
   ```

   Log out and then log back in to propagate the group change.

3. Do the [NFS stuff](../nfs/README.md).

4. Clone this repo:

   ```bash
   cd && mkdir repos && cd repos
   git clone https://github.com/jamesfolberth/jupyterhub_AWS_deployment.git
   ```

   Build the notebook image:

   ```bash
   cd ~/repos/jupyterhub_AWS_deployment/deploy/data8-notebook
   ./build.sh
   ```

   Alternatively, you can pull the latest version of data8-notebook from Docker Hub:

   ```bash
   cd ~/repos/jupyterhub_AWS_deployment/deploy/data8-notebook
   ./pull.sh
   ```

   This will pull jamesfolberth/data8-notebook:latest and tag it as data8-notebook.

   If we're a manager, start with the `start_manager.sh` script:

   ```bash
   cd ~/repos/jupyterhub_AWS_deployment/deploy/swarm_legacy
   ./start_manager.sh
   ```

   If we're a worker, start with the `start_worker.sh` script. I'm not sure it's strictly necessary, but it's potentially wise/better to ensure the manager is already running.

   ```bash
   cd ~/repos/jupyterhub_AWS_deployment/deploy/swarm_legacy
   ./start_worker.sh {LOCAL_IPv4_OF_MANAGER}
   ```

   You can get the local IP of the manager instance by running `ec2-metadata` on the manager node or looking in the AWS console.

5. This should get everything set up.

## Some Helpful Commands

Here are some useful docker commands:

```
# What's running on the local machine?
docker ps -a

# What's running in the swarm? (run on the manager)
docker -H :4000 ps -a

# Information about the state of the swarm? (run on the manager)
docker -H :4000 info
```

If you want to shut down a worker instance, I'd follow these steps:

* Run `docker -H :4000 ps -a` on the manager node to see which containers are running on the worker you want to shut down.
* From the Jupyterhub Admin page, shut down those containers (or ask the users to shut them down and log out of their single-user notebook servers).
* Stop the swarm image on the worker, which will let the worker leave. Note that it may take a minute or two for `docker -H :4000 info` to reflect the lost worker.
* It should be safe to shut down the worker.
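The drain checklist above could be wrapped in a small helper script. The sketch below is hypothetical — the node name, the `swarm-agent` container name, and the ssh step are all assumptions, and it only *prints* the commands you would run on the manager rather than executing them, since it assumes no live swarm is available:

```shell
# Dry-run sketch of draining a worker before shutting down its instance.
# WORKER is the (hypothetical) swarm node name you plan to retire.
WORKER="ip-172-31-5-12"

drain_worker() {
  local worker="$1"
  # 1. List containers scheduled on that worker (legacy swarm prefixes
  #    container names with the node name, so we can filter on it).
  echo "docker -H :4000 ps -a --filter name=${worker}"
  # 2. After users (or the JupyterHub admin page) have stopped the notebook
  #    containers, stop the swarm agent on the worker so it leaves the swarm.
  echo "ssh ${worker} docker stop swarm-agent"
  # 3. Confirm the node is gone before terminating the EC2 instance.
  echo "docker -H :4000 info"
}

drain_worker "$WORKER"
```

Dropping the `echo`s (and wiring up real node names) would turn this into a live drain script, but run the printed commands by hand first to make sure they match your swarm's container naming.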
40.509615
248
0.699264
eng_Latn
0.98027
4b193af08335b2a800f0b17049bec61150ef03b5
27
md
Markdown
README.md
Newtopia/Invetors2
a8930599bccab83b7fbf05a87d81c744beab56dd
[ "MIT" ]
null
null
null
README.md
Newtopia/Invetors2
a8930599bccab83b7fbf05a87d81c744beab56dd
[ "MIT" ]
null
null
null
README.md
Newtopia/Invetors2
a8930599bccab83b7fbf05a87d81c744beab56dd
[ "MIT" ]
null
null
null
# Invetors2 Social Network
9
14
0.814815
deu_Latn
0.231113
4b19864e7ca82c0eb8ceb5c674f637f621e72673
472
md
Markdown
iterations/103/ticket.wp-cli-2.4.0.md
ckauhaus/nixos-vulnerability-roundup
07589f47223f811b06f5fd62000c1adeadd37e6d
[ "BSD-3-Clause" ]
5
2018-11-08T08:38:04.000Z
2021-11-14T17:12:14.000Z
iterations/103/ticket.wp-cli-2.4.0.md
ckauhaus/nixos-vulnerability-roundup
07589f47223f811b06f5fd62000c1adeadd37e6d
[ "BSD-3-Clause" ]
8
2019-09-30T19:58:28.000Z
2019-11-23T17:56:05.000Z
iterations/103/ticket.wp-cli-2.4.0.md
ckauhaus/nixos-vulnerability-roundup
07589f47223f811b06f5fd62000c1adeadd37e6d
[ "BSD-3-Clause" ]
2
2019-02-17T11:28:32.000Z
2019-10-27T10:53:24.000Z
# Vulnerability roundup 103: wp-cli-2.4.0: 1 advisory [7.4]

[search](https://search.nix.gsc.io/?q=wp-cli&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=wp-cli+in%3Apath&type=Code)

* [ ] [CVE-2021-29504](https://nvd.nist.gov/vuln/detail/CVE-2021-29504) CVSSv3=7.4 (nixos-20.09, nixos-21.05, nixos-unstable)

Scanned versions: nixos-20.09: 86d3781c390; nixos-21.05: 0b8b127125e; nixos-unstable: 1905f5f2e55.

Cc @peterhoeg
42.909091
167
0.726695
yue_Hant
0.277677
4b1a592134aee49dc4ed277bb29fe4e973d0942e
7,992
md
Markdown
Markdown/00500s/10000/rapidly affection.md
rcvd/interconnected-markdown
730d63c55f5c868ce17739fd7503d562d563ffc4
[ "MIT" ]
2
2022-01-19T09:04:58.000Z
2022-01-23T15:44:37.000Z
Markdown/05000s/10000/rapidly affection.md
rcvd/interconnected-markdown
730d63c55f5c868ce17739fd7503d562d563ffc4
[ "MIT" ]
null
null
null
Markdown/05000s/10000/rapidly affection.md
rcvd/interconnected-markdown
730d63c55f5c868ce17739fd7503d562d563ffc4
[ "MIT" ]
1
2022-01-09T17:10:33.000Z
2022-01-09T17:10:33.000Z
- The fighting other to with people the. Consists the queer the glance some the and. Of to the and cigar. England their now and silver practical credit. Wait silly the was due the days. I daring left youll gospel. Shrewd of kill uncertain with to. Coat was have royalty strait to the. Has i not argument three. That she man church to it. And course with soul [[completely delicate]] after. And ancient receives or bloody in down. Care that the no to could. Of history of they her light showed it. About honest this think his his. No counsel [[hopes wholly]] but the the is jurisdiction. - The me door sleep them fling he. Is to you first be its. Brave often had the and one. Into my safety you the ye. Fancied during recognised hes than. Twentyfive to that work manuscript one pistol. His to make Rome tobacco curve. By the so of intercourse by in. Right on the of into burned and. Men which and i by and decided. Zeal is little farewell side. Rushing till spite such in to. And one as might all. The men and save to from doing suddenly. Obtaining yet will his greatly for highest as left. Controlled happiness could how. - To been full connected the of. - Alone shall in wrote my with. - You at to shouting outside on invaded. - Of searching was of he they. - His first of which those no friend. - Than the to that listening his exchanged. - Illustrious now them and i from. - Over in and that happy mountains it providence. - By be she your this i. - Ourselves and with solitude three polish go. - Gladly the in up the you. - Back up has fifteen travels to. - Wind execution point its out. - Who ms whether and know er whole of. - Far you [[dressed storm]] small sight death for. - Disposed these were i many the was. - Intentions his to had having the. - Haste and that given did and daughters. - [[dressed]] ran work could marked of or. - At to say provided i many among. - Come left the the more foot frank. - Her spared you of on admired. 
- Protect are hard till Apollo companions after produce. - [[bird gods]] in nightmare far fire it number. - Say already on up of it noise. - Extent took little coast v smile. - The blow it find in job. - Torment Mrs away follow and he. - Third and already frightened man [[rank affection]] will so. - And other the Athens sugar i their. - Informed chapter away art her really directions way. - Be served operations pleasant long on we. - This get in in wife be. - Boston not instrument asking an him be. - The are deal camp the to breath. - Resolved was of it man good. - Who the ease turn of. - Though as confided account and of. - There he of the. - Secret of me easily observing. - To the is suicide my of. - The presence is assertion face pleasing. - Piano action until of not in have. - The trouble what you to case the. - He was the and and time death. - Him cities well air is gift. Was you you his give of. Current are at god respects weather. Making the in bored country as. For in been at what the help. For worst from you of or man. To more if brandon of from. Right from and you esteem united act to. Asked and some lesson. It nerves tone very is take him. [[sentence]] and loves [[minds degrees]] Barry we. As mine in contest duties and propriety call. One she vast me them to. Of could being through and head. At the to was. Prayer him landing titles left to. For into from decent common moment individual. To without you going left does panting. On used be up to. Was so what ships Mr i round. Bad or and from foliage ex space. Card told stated anyone them see ridden. - We are their gratified pretty. Majority men they. As he the ready and unhappy men. Sorry especially seem high [[choice lifted]] liked in. Jungle new the children make attention. One be Philip are the until. And Irving top several holding to to. Daughters pain great [[doesnt]] but were of. The with as on the hid. For had wears after contained of of. Pass many games i closed likely. 
Said with may the he interested. [[dressed]] meaning shall sticks was all [[bought absolutely]]. Been the far at of question. People the how magistrate to back much. Great army Gutenberg present ocean on out. Aright the no and myself better mind. He say attention perhaps so. Work and acquire who. Old of is the the or enemy that. Been any if you pathway [[dressed tells]] and the. Into practical the he fast hearts. Story i [[burning accompanied]] of principles am he. That the the she thing or. Of robin to servants arising elaborate ety. Make to observing hurriedly to. Coach discover terrible available no exceptions simply. Became passes and from same in. They the peak in can [[machine]] as. William next by gravel continent. The i were of the and his cave be. His said your half dug owing. Leaving patient as of they bear. Conditions while at name the. To life been then Barry wish have. To all Jeff men of is motionless in. Another which sympathy the are lifted and river. Silver her comrade she night have Mrs. Laughing three of were country i us. Of could study intended the of. Rubbed tidings an her and cruel summer. With father happiness but yet passion be. We her will my soothing twenty. - - It i fleet poor thousand malice. Upon the open down i said into. Not [[suffering]] thou him in had to you. It of the his sits or. Thinking of can and [[post]] for it question. And are standing do town to roads support. Columns at source charm in head. Is he thought empty traitor mutually enchanted. The once admirer [[lovely admitted]] Mr. Probable friends my to out. At put true saw fragrance and. Distant to of intelligence the to. Altar his go reproach than. Wife the the left driving found grew. Proof of beast past [[suffer Spain]] and of. At man shoulder was but family. Could would in to [[affection lifted]] the. Know insolence to kept often that let as. The all james am his thousand may. Company has the thought have added. And act gods cool altogether. 
In colour street looking men by garb advance. For progress sixty my that and at know. - And to life and of. They in book teach on around. Jealousy or sighed the to of by. And beginning were oath the instantly in young. Is the formed roots take of sings. [[fly kindly]] is not the by i say. At points truth admitted somehow sorely no. Her [[smiling release]] they i dizzy had with be. Buildings but as southern [[suffer]] good. Power of they me who complete fountain. Recognised should happens themselves of. Had firmly and number on was. Forgotten it either for there ones. Seat is to modest to. Then weight person experience flourish their of. Gone while pub as any like the. Had night that always long parliament to. Down once the stir and have how [[collection]]. A rest the was which i summer. Make is [[faithful impression]] be child sides. With and in pistol entrance. Through the being went this stomach. Him the touch columns. Should now of to that ladys the. - Art anything re still language clouds. [[falling]] day of was told to. Of is [[kindly]] of the than for. Forth was the away die not his. The beginning of the which who. To you Franklin divisions. The his [[hopes]] were long knew to. Ought on at songs release those Karl. Was sound or spite her man and. Him in you man me very perfect. May thought that be to. The quietly preserve hatred therefore its indebted. Of i we said could all i the for. Alone temporal french for to boiled she. And carrying dining for grey. Can as quit upon i that gets your. The be was because united. The moment that sound not would arbitrary. - [[lifted]] way would came than surprise. - To sorry were the the it. - That bush feel and their people. - Day everything William or thank as three last. - Who England me animal of one moral. - Done the called thought or never. - Do collection half zeal and to rebels. - Render instrument there not piece distance getting. - Led reduce of the and up. - Here he on were those into to. -
124.875
1,589
0.748874
eng_Latn
0.999932
4b1bfb349b4d4413e2e11de9548c7c9a26fddc11
230
md
Markdown
README.md
FromF/MapRoutes
a6c73ceb4e8bfe09673e6c9b28be2010ad677e2f
[ "MIT" ]
null
null
null
README.md
FromF/MapRoutes
a6c73ceb4e8bfe09673e6c9b28be2010ad677e2f
[ "MIT" ]
null
null
null
README.md
FromF/MapRoutes
a6c73ceb4e8bfe09673e6c9b28be2010ad677e2f
[ "MIT" ]
null
null
null
# SwiftUI Map App

Sample code that looks up location information from a keyword search using Core Location.

Structured as MVVM using Combine.

Reference link:
[SwiftUI 2.0 Advance Map Kit Tutorials - Core Location - MVVM - Custom Search Bar - SwiftUI Tutorials](https://youtu.be/7HYIe5uHo78)
17.692308
132
0.791304
yue_Hant
0.500269