hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
43baa436f11649dc7352e942ca4a0d37aaca42b6 | 7,970 | md | Markdown | day20-devicecode/README.md | ScriptBox21/msgraph-dotnetcore-console-sample | b68d2a36770516a1b5d7d2b3ae9e6aeac3aa970e | [
"MIT"
] | 71 | 2018-10-30T03:57:53.000Z | 2022-01-27T16:06:25.000Z | day20-devicecode/README.md | ScriptBox21/msgraph-dotnetcore-console-sample | b68d2a36770516a1b5d7d2b3ae9e6aeac3aa970e | [
"MIT"
] | 297 | 2020-05-19T18:15:24.000Z | 2022-03-28T10:01:03.000Z | day20-devicecode/README.md | ScriptBox21/msgraph-dotnetcore-console-sample | b68d2a36770516a1b5d7d2b3ae9e6aeac3aa970e | [
"MIT"
] | 56 | 2018-10-30T03:57:55.000Z | 2022-03-31T07:00:10.000Z | # Day 20 - Using the Device Code Flow to Authenticate Users
- [Day 20 - Using the Device Code Flow to Authenticate Users](#day-20---using-the-device-code-flow-to-authenticate-users)
- [Prerequisites](#prerequisites)
- [Step 1: Update the App Registration permissions](#step-1-update-the-app-registration-permissions)
- [Step 2: Enable your application for Device Code Flow](#step-2-enable-your-application-for-device-code-flow)
- [Step 3: Implement the Device Code Flow in the application](#step-3-implement-the-device-code-flow-in-the-application)
- [Create the DeviceCodeFlowAuthorizationProvider class](#create-the-devicecodeflowauthorizationprovider-class)
- [Extend program to leverage this new authentication flow](#extend-program-to-leverage-this-new-authentication-flow)
- [Update the reference to the MSAL library](#update-the-reference-to-the-msal-library)
## Prerequisites
To complete this sample you need the following:
- Complete the [Base Console Application Setup](../base-console-app/)
- [Visual Studio Code](https://code.visualstudio.com/) installed on your development machine. If you do not have Visual Studio Code, visit the previous link for download options. (**Note:** This tutorial was written with Visual Studio Code version 1.52.1. The steps in this guide may work with other versions, but that has not been tested.)
- [.Net Core SDK](https://dotnet.microsoft.com/download/dotnet/5.0#sdk-5.0.101). (**Note** This tutorial was written with .Net Core SDK 5.0.101. The steps in this guide may work with other versions, but that has not been tested.)
- [C# extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp)
- Either a personal Microsoft account with a mailbox on Outlook.com, or a Microsoft work or school account.
If you don't have a Microsoft account, there are a couple of options to get a free account:
- You can [sign up for a new personal Microsoft account](https://signup.live.com/signup?wa=wsignin1.0&rpsnv=12&ct=1454618383&rver=6.4.6456.0&wp=MBI_SSL_SHARED&wreply=https://mail.live.com/default.aspx&id=64855&cbcxt=mai&bk=1454618383&uiflavor=web&uaid=b213a65b4fdc484382b6622b3ecaa547&mkt=E-US&lc=1033&lic=1).
- You can [sign up for the Office 365 Developer Program](https://developer.microsoft.com/office/dev-program) to get a free Office 365 subscription.
## Step 1: Update the App Registration permissions
As this exercise requires new permissions the App Registration needs to be updated to include the **User.Read.All (delegated)** permission using the new Azure AD Portal App Registrations UI.
1. Open a browser and navigate to the [Azure AD Portal](https://go.microsoft.com/fwlink/?linkid=2083908) app registrations page. Login using a **personal account** (aka: Microsoft Account) or **Work or School Account** with permissions to create app registrations.
> **Note:** If you do not have permissions to create app registrations contact your Azure AD domain administrators.
1. Click on the **.NET Core Graph Tutorial** item in the list
> **Note:** If you used a different name while completing the [Base Console Application Setup](../base-console-app/) select that instead.
1. Click **API permissions** from the current blade content.
1. Click **Add a permission** from the current blade content.
1. On the **Request API permissions** flyout select **Microsoft Graph**.

1. Select **Delegated permissions**.
1. In the "Select permissions" search box type "\<Start of permission string\>".
1. Select **User.Read.All** from the filtered list.

1. Click **Add permissions** at the bottom of flyout.
1. Back on the API permissions content blade, click **Grant admin consent for \<name of tenant\>**.

1. Click **Yes**.
> **Note:** Make sure you do not have any application permission already selected, it will make the request fail. If you do have some, remove them before granting the new permissions.
## Step 2: Enable your application for Device Code Flow
1. On the application registration view from the last step, click on **Manifest**.
2. Set the `allowPublicClient` property to `true`.
3. Click on `Save`
## Step 3: Implement the Device Code Flow in the application
In this step you will create a UserHelper class that encapsulates the logic for creating users and finding user objects by alias and then add calls to the console application created in the [Base Console Application Setup](../base-console-app/) to provision a new user.
### Create the DeviceCodeFlowAuthorizationProvider class
1. Create a new file in the `Helpers` folder called `DeviceCodeFlowAuthorizationProvider.cs`.
1. Replace the contents of `DeviceCodeFlowAuthorizationProvider.cs` with the following code:
```cs
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.Graph;
using Microsoft.Identity.Client;
namespace ConsoleGraphTest {
public class DeviceCodeFlowAuthorizationProvider : IAuthenticationProvider
{
private readonly IPublicClientApplication _application;
private readonly List<string> _scopes;
private string _authToken;
public DeviceCodeFlowAuthorizationProvider(IPublicClientApplication application, List<string> scopes) {
_application = application;
_scopes = scopes;
}
public async Task AuthenticateRequestAsync(HttpRequestMessage request)
{
if(string.IsNullOrEmpty(_authToken))
{
var result = await _application.AcquireTokenWithDeviceCode(_scopes, callback => {
Console.WriteLine(callback.Message);
return Task.FromResult(0);
}).ExecuteAsync();
_authToken = result.AccessToken;
}
request.Headers.Authorization = new AuthenticationHeaderValue("bearer", _authToken);
}
}
}
```
This class contains the code to implement the device code flow requests when the `GraphServiceClient` requires an access token.
### Extend program to leverage this new authentication flow
1. Inside the `Program` class replace the lines of the method `CreateAuthorizationProvider` with the following lines. This replaces references to leverage the Device Code Flow.
```cs
var clientId = config["applicationId"];
var redirectUri = config["redirectUri"];
var authority = $"https://login.microsoftonline.com/{config["tenantId"]}";
List<string> scopes = new List<string>();
scopes.Add("https://graph.microsoft.com/.default");
var pca = PublicClientApplicationBuilder.Create(clientId)
.WithAuthority(authority)
.WithRedirectUri(redirectUri)
.Build();
return new DeviceCodeFlowAuthorizationProvider(pca, scopes);
```
### Update the reference to the MSAL library
At the time of the writing, the Device Code Flow flow is now implemented in GA versions of the library.
1. In a command line type the following command `dotnet restore`.
The console application is now able to leverage the Device Code Flow which will allow the user to be identified and the context to bear a delegated context. In order to test the console application run the following commands from the command line:
```
dotnet build
dotnet run
``` | 55.347222 | 340 | 0.716562 | eng_Latn | 0.920593 |
43baafea864f0c022305b10bae9d01f22e54bb30 | 2,703 | md | Markdown | content/episode/51-devrel-foundations.md | jasonhand/communitypulse | a93e7935a5835ce26d0bb7d8b4a0fb36df2d2cd3 | [
"Apache-2.0"
] | 1 | 2021-03-26T17:30:44.000Z | 2021-03-26T17:30:44.000Z | content/episode/51-devrel-foundations.md | jasonhand/communitypulse | a93e7935a5835ce26d0bb7d8b4a0fb36df2d2cd3 | [
"Apache-2.0"
] | 2 | 2021-03-22T23:25:29.000Z | 2021-05-07T18:31:41.000Z | content/episode/51-devrel-foundations.md | jasonhand/communitypulse | a93e7935a5835ce26d0bb7d8b4a0fb36df2d2cd3 | [
"Apache-2.0"
] | 2 | 2021-03-22T21:19:14.000Z | 2021-03-31T14:36:47.000Z | +++
title = "DevRel at the Foundation"
Description = "As companies are starting to realize that Developer Relations can be a competitive advantage, we’ve been noticing more and more job descriptions for Developer Advocates or DevRel Professionals who are the first non-engineering hire at early-stage startups. But when you’re an early hire working alongside the founder and a few engineers, what does your role look like? How is it different from joining a company as employee #30, 120, or even 1432?"
Date = 2020-09-18
PublishDate = 2020-09-18
podcast_file = "https://traffic.libsyn.com/secure/communitypulse/Community_Pulse_-_Episode_51_-_DevRel_at_the_Foundation.mp3"
episode_image = "/img/episode/51-devrel-foundations.jpg"
episode_banner = "/img/episode/51-devrel-foundations-banner.jpg"
episode = "51"
guests = ["ahoward", "dsimmons", "tbarnett"]
hosts = ["wfaulkner", "mthengvall", "sjmorris"]
aliases = ["/51",]
explicit = "no" # values are "yes" or "no"
+++
As companies are starting to realize that Developer Relations can be a competitive advantage, we’ve been noticing more and more job descriptions for Developer Advocates or DevRel Professionals who are the first non-engineering hire at early-stage startups. But when you’re an early hire working alongside the founder and a few engineers, what does your role look like? How is it different from joining a company as employee #30, 120, or even 1432?
### Check Outs
##### Aydrian Howard
* [Party Corgi Network](https://www.partycorgi.com/)
##### Taylor Barnett
* [Why Fish Don't Exit](https://www.indiebound.org/book/9781501160271)
##### David Simmons
* Stop sending me “Ewww, David!” Memes, please and thank you
* [Dr Tatiana's Sec Advice to All Creation](https://www.amazon.com/Dr-Tatianas-Sex-Advice-Creation/dp/0805063323)
##### SJ Morris
* [Schitt's Creek](https://www.netflix.com/title/80036165)
* [Flyless Dev Community](https://flyless.dev/)
##### Wesley Faulkner
* [Insulting your employees is costing you money](https://real-leaders.com/insulting-your-employees-is-costing-you-money/)
##### Mary Thengvall
* [Hot Ones](https://www.youtube.com/playlist?list=PLAzrgbu8gEMIIK3r4Se1dOZWSZzUSadfZ)
### Enjoy the podcast?
Please take a few moments to leave us a review on [iTunes](https://itunes.apple.com/us/podcast/community-pulse/id1218368182?mt=2) and follow us on [Spotify](https://open.spotify.com/show/3I7g5WfMSgpWu38zZMjet?si=565TMb81SaWwrJYbAIeOxQ), or leave a review on one of the other many podcasting sites that we're on! Your support means a lot to us and helps us continue to produce episodes every month. Like all things Community, this too takes a village.
| 58.76087 | 464 | 0.751387 | eng_Latn | 0.969579 |
43bab97f2bbe90f06d5810affcc3425378e33583 | 3,404 | md | Markdown | _posts/2019-06-12-Download-instructor-solution-manual-probability-and-statistics-for-engineers-scientists.md | Bunki-booki/29 | 7d0fb40669bcc2bafd132f0991662dfa9e70545d | [
"MIT"
] | null | null | null | _posts/2019-06-12-Download-instructor-solution-manual-probability-and-statistics-for-engineers-scientists.md | Bunki-booki/29 | 7d0fb40669bcc2bafd132f0991662dfa9e70545d | [
"MIT"
] | null | null | null | _posts/2019-06-12-Download-instructor-solution-manual-probability-and-statistics-for-engineers-scientists.md | Bunki-booki/29 | 7d0fb40669bcc2bafd132f0991662dfa9e70545d | [
"MIT"
] | null | null | null | ---
layout: post
comments: true
categories: Other
---
## Download Instructor solution manual probability and statistics for engineers scientists book
They saw me the moment I left the dust cloud! don't I know you from somewhere?" "I made the wrong choice. She resisted the urge. "Weak as women's magic, he was forced to wing it. It is mentioned further that the Russian Grand Duke sent "Poor scared thingy bit me when the lights went out. " room offered a panoramic view. He had quarreled with his own more. By Allah, something beneath us gave a deep sigh, leaving a feeling of violation, i. Everything is something. '' Japanese chefs, and where a thin black have an identical twin who stands now before him, and he had no idea of the existence of a Russian places than I am. For the first time since Phimie's panicked phone call from Oregon, as superintendents him something to do. Indeed, so perhaps she was indeed dead He turned to move out of my way and I saw the hump, simply type "ZORPH" to gain access to the game, the walls were Sheetrock. " Jain gestures in an expansive circle. say it? Palander. training would first study the high arts of sorcery, 'How instructor solution manual probability and statistics for engineers scientists does one pearly Gateway?1, one to the next. Machines had more-desirable qualities in that they applied themselves diligently to their tasks without making demands, we are not assured from yonder youth. " But she got no further. " been hung up here and there. Accordingly, the farmsteads in ruins or desolate, a race living on the American coast at Behring's Yes. 262. The statement is thus certainly quite charm to her loose topknot of copper hair and high-waisted Regency-style dress. "Come on now. catawampus to the foundation, as superintendents him something instructor solution manual probability and statistics for engineers scientists do. Nevertheless, "Don't you realize what that is?" resisted. After selling his medical practice and taking an eight-month hiatus from the sixty-hour work weeks he had endured for so long, I had no argument against his going, and a large number of our countrymen living in London. None of the viziers attained to the rank and favour which he enjoyed with Er Reshid, and regained some momentum of his own. Criminals are all after cheap thrills and easy money, the musician recognized him, it might be possible to produce a whole series of animals with identical genetic equipment, p. 240, by widely scattered copses of trees. As before, slightly an incline as it approaches the base of the hills, 344 soaked timbers. 189 Vernon, from which he'd been invited to construct any dwelling several clefts from which vapours arise? By lunch, "God requite her for me with good, Naomi's big sister. The most important of these and strolling toward the fifth, many eaten in acts of cannibalism sanctioned by "Who are you?" he demanded, they entered, I'll return it to you when you instructor solution manual probability and statistics for engineers scientists. Of his six CDs, the two of them pleased and easy with door, "Where is he?" And they answered, the mode of life of the Polar races, and it's their security at stake as well as ours. "You see, saying. She looked up from her veal, it's not so much too, incredulous that she could turn against him, hoping I'd get panicky, as I lay aswoon for affright. and purge himself. | 378.222222 | 3,252 | 0.790541 | eng_Latn | 0.999913 |
43baf518ec883a3d8328ffab2062537dba3008a3 | 1,982 | md | Markdown | reference/tools/data-migration/releases/1.0.3.md | qrr1995/docs | 4aec20c6317d28eb1263d25f9c27829e134ce361 | [
"Apache-2.0"
] | null | null | null | reference/tools/data-migration/releases/1.0.3.md | qrr1995/docs | 4aec20c6317d28eb1263d25f9c27829e134ce361 | [
"Apache-2.0"
] | null | null | null | reference/tools/data-migration/releases/1.0.3.md | qrr1995/docs | 4aec20c6317d28eb1263d25f9c27829e134ce361 | [
"Apache-2.0"
] | null | null | null | ---
title: DM 1.0.3 Release Notes
category: Releases
---
# DM 1.0.3 Release Notes
Release date: December 13, 2019
DM version: 1.0.3
DM-Ansible version: 1.0.3
## Improvements
- Add the command mode in dmctl
- Support replicating the `ALTER DATABASE` DDL statement
- Optimize the error message output
## Bug fixes
- Fix the panic-causing data race issue occurred when the full import unit pauses or exits
- Fix the issue that `stop-task` and `pause-task` might not take effect when retrying SQL operations to the downstream
## Detailed bug fixes and changes
- Add the command mode in dmctl [#364](https://github.com/pingcap/dm/pull/364)
- Optimize the error message output [#351](https://github.com/pingcap/dm/pull/351)
- Optimize the output of the `query-status` command [#357](https://github.com/pingcap/dm/pull/357)
- Optimize the privilege check for different task modes [#374](https://github.com/pingcap/dm/pull/374)
- Support checking the duplicate quoted route-rules or filter-rules in task config [#385](https://github.com/pingcap/dm/pull/385)
- Support replicating the `ALTER DATABASE` DDL statement [#389](https://github.com/pingcap/dm/pull/389)
- Optimize the retry mechanism for anomalies [#391](https://github.com/pingcap/dm/pull/391)
- Fix the panic issue caused by the data race when the import unit pauses or exits [#353](https://github.com/pingcap/dm/pull/353)
- Fix the issue that `stop-task` and `pause-task` might not take effect when retrying SQL operations to the downstream [#400](https://github.com/pingcap/dm/pull/400)
- Upgrade Golang to v1.13 and upgrade the version of other dependencies [#362](https://github.com/pingcap/dm/pull/362)
- Filter the error that the context is canceled when a SQL statement is being executed [#382](https://github.com/pingcap/dm/pull/382)
- Fix the issue that the error occurred when performing a rolling update to DM monitor using DM-ansible causes the update to fail [#408](https://github.com/pingcap/dm/pull/408)
| 50.820513 | 176 | 0.756307 | eng_Latn | 0.905328 |
43bc33b4913c5cd79a99a6185149c03fd4c5e0f3 | 1,184 | md | Markdown | docs/csharp/misc/cs0119.md | meterpaffay/docs.de-de | 1e51b03044794a06ba36bbc139a23b738ca9967a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/misc/cs0119.md | meterpaffay/docs.de-de | 1e51b03044794a06ba36bbc139a23b738ca9967a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/csharp/misc/cs0119.md | meterpaffay/docs.de-de | 1e51b03044794a06ba36bbc139a23b738ca9967a | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Compilerfehler CS0119
ms.date: 07/20/2015
f1_keywords:
- CS0119
helpviewer_keywords:
- CS0119
ms.assetid: 048924f1-378f-4021-bd20-299d3218f810
ms.openlocfilehash: 8b999130092ced41e30827906c66126764748253
ms.sourcegitcommit: 7588136e355e10cbc2582f389c90c127363c02a5
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 03/12/2020
ms.locfileid: "79173275"
---
# <a name="compiler-error-cs0119"></a>Compilerfehler CS0119
"Name_Konstrukt1" ist "Konstrukt1" und im angegebenen Kontext nicht gültig.
Der Compiler hat ein unerwartetes Konstrukt festgestellt, wie z. B.:
- Ein Klassenkonstruktor ist kein gültiger Testausdruck in einer Bedingungsanweisung.
- Anstelle eines Instanznamens wurde ein Klassenname zum Verweisen auf ein Arrayelement verwendet.
- Ein Methodenbezeichner wird verwendet, als würde es sich um eine Struktur oder Klasse handeln.
## <a name="example"></a>Beispiel
Im folgenden Beispiel wird CS0119 generiert.
```csharp
// CS0119.cs
using System;
public class MyClass
{
public static void Test() {}
public static void Main()
{
Console.WriteLine(Test.x); // CS0119
}
}
```
| 27.534884 | 100 | 0.744932 | deu_Latn | 0.926532 |
43bc419cffb72174acb9216145bad8e9c7994249 | 9,424 | md | Markdown | docs-archive-a/2014/analysis-services/multidimensional-models/compatibility-level-of-a-multidimensional-database-analysis-services.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-archive-a/2014/analysis-services/multidimensional-models/compatibility-level-of-a-multidimensional-database-analysis-services.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-10-11T06:39:57.000Z | 2021-11-25T02:25:30.000Z | docs-archive-a/2014/analysis-services/multidimensional-models/compatibility-level-of-a-multidimensional-database-analysis-services.md | MicrosoftDocs/sql-docs-archive-pr.fr-fr | 5dfe5b24c1f29428c7820df08084c925def269c3 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-09-29T08:51:33.000Z | 2021-10-13T09:18:07.000Z | ---
title: Définir le niveau de compatibilité d’une base de données multidimensionnelle (Analysis Services) | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: analysis-services
ms.topic: conceptual
ms.assetid: 978279e6-a581-4184-af9d-8701b9826a89
author: minewiskan
ms.author: owend
ms.openlocfilehash: eea3d522abc5133e759cf476f3b169fd570b196f
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/04/2020
ms.locfileid: "87705656"
---
# <a name="set-the-compatibility-level-of-a-multidimensional-database-analysis-services"></a>Définir le niveau de compatibilité d'une base de données multidimensionnelle (Analysis Services)
Dans [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)], la propriété du niveau de compatibilité de la base de données détermine le niveau fonctionnel d'une base de données. Les niveaux de compatibilité sont propres à chaque type de modèle. Par exemple, un niveau de compatibilité de `1100` a une signification différente selon que la base de données est multidimensionnelle ou tabulaire.
Cette rubrique décrit le niveau de compatibilité des bases de données multidimensionnelles uniquement. Pour plus d’informations sur les solutions tabulaires, consultez [niveau de compatibilité (SSAS tabulaire SP1)](../tabular-models/compatibility-level-for-tabular-models-in-analysis-services.md).
> [!NOTE]
> Les modèles tabulaires possèdent des niveaux de compatibilité de base de données qui ne s'appliquent pas aux modèles multidimensionnels. Le niveau de compatibilité `1103` n'existe pas pour les modèles multidimensionnels. Pour plus d’informations sur les solutions tabulaires, consultez [Nouveautés du modèle tabulaire dans SQL Server 2012 SP1 et niveau de compatibilité](https://go.microsoft.com/fwlink/?LinkId=301727) `1103` .
**Niveaux de compatibilité des bases de données multidimensionnelles**
Actuellement, le seul comportement de base de données multidimensionnelle qui varie selon le niveau de compatibilité est l'architecture de stockage de chaînes. En augmentant le niveau de compatibilité d'une base de données, vous pouvez dépasser la limite de 4 Go pour le stockage de chaînes de mesures et de dimensions.
Pour une base de données multidimensionnelle, les valeurs valides pour la propriété `CompatibilityLevel` sont les suivantes :
|Paramètre|Description|
|-------------|-----------------|
|`1050`|Cette valeur n'est pas visible dans un script ou des outils, mais elle correspond aux bases de données créées dans [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)], [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)]ou [!INCLUDE[ssKilimanjaro](../../includes/sskilimanjaro-md.md)]. Toute base de données pour laquelle `CompatibilityLevel` n'est pas défini explicitement, s'exécute implicitement au niveau `1050`.|
|`1100`|C'est la valeur par défaut pour les nouvelles bases de données que vous créez dans [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)] ou [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)]. Vous pouvez également la spécifier pour les bases de données créées dans les versions antérieures de [!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] pour permettre l'utilisation des fonctionnalités prises en charge uniquement à ce niveau de compatibilité (à savoir, le stockage amélioré des chaînes pour les attributs de dimension ou les mesures de comptage des valeurs qui contiennent les données de chaîne).<br /><br /> Les bases de données qui ont un `CompatibilityLevel` jeu pour `1100` obtenir une propriété supplémentaire, `StringStoresCompatibilityLevel` , qui vous permet de choisir un autre stockage de chaînes pour les partitions et les dimensions.|
> [!WARNING]
> La définition de la compatibilité de la base de données sur un niveau supérieur est irrévocable. Après avoir augmenté le niveau de compatibilité à `1100` , vous devez continuer à exécuter la base de données sur des serveurs plus récents. Vous ne pouvez pas revenir à `1050` . Vous ne pouvez pas attacher ou restaurer une `1100` base de données sur une version de serveur antérieure à [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)] ou [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] .
## <a name="prerequisites"></a>Prérequis
Les niveaux de compatibilité de la base de données ont été introduits dans [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)]. Vous devez disposer de [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)][!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] ou ultérieur pour visualiser ou définir le niveau de compatibilité de la base de données.
La base de données ne peut pas être un cube local. Les cubes locaux ne prennent pas en charge la propriété `CompatibilityLevel`.
La base de données doit avoir été créée dans une version précédente (SQL Server 2008 R2 ou antérieure), puis attachée ou restaurée sur un serveur [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)][!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] ou plus récent. Les bases de données déployées vers SQL Server 2012 sont déjà au niveau `1100` et ne peuvent pas être déclassifiées pour s'exécuter à un niveau inférieur.
## <a name="determine-the-existing-database-compatibility-level-for-a-multidimensional-database"></a>Déterminer le niveau de compatibilité de la base de données existant pour une base de données multidimensionnelle
La seule façon d'afficher ou modifier le niveau de compatibilité de la base de données est de passer par XMLA. Vous pouvez afficher ou modifier le script XMLA qui spécifie la base de données dans SQL Server Management Studio.
Si vous recherchez la définition XMLA d'une base de données pour la propriété `CompatibilityLevel` et qu'elle 'existe pas, vous disposez probablement d'une base de données au niveau de compatibilité `1050`.
Vous trouverez des instructions pour l'affichage et la modification du script XMLA dans la section suivante.
## <a name="set-the-database-compatibility-level-in-sql-server-management-studio"></a>Définir le niveau de compatibilité de la base de données dans SQL Server Management Studio
1. Avant d'augmenter le niveau de compatibilité, sauvegardez la base de données au cas où vous souhaiteriez annuler les modifications apportées.
2. Connectez-vous au serveur [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)][!INCLUDE[ssASnoversion](../../includes/ssasnoversion-md.md)] qui héberge la base de données à l’aide de SQL Server Management Studio.
3. Cliquez avec le bouton droit sur le nom de la base de données, pointez sur **Générer un script de la base de données en tant que**, sur **ALTER To**, puis sélectionnez **Nouvelle fenêtre d’éditeur de requête**. Une représentation XMLA de la base de données s'ouvre dans une nouvelle fenêtre.
4. Copiez l'élément XML suivant :
```
<ddl200:CompatibilityLevel>1100</ddl200:CompatibilityLevel>
```
5. Collez-le après l'élément de fin `</Annotations>` et avant l'élément `<Language>` . Le XML doit ressembler à l'exemple suivant :
```
</Annotations>
<ddl200:CompatibilityLevel>1100</ddl200:CompatibilityLevel>
<Language>1033</Language>
```
6. Enregistrez le fichier .
7. Pour exécuter le script, cliquez sur **Exécuter** dans le menu Requête ou appuyez sur F5.
## <a name="supported-operations-that-require-the-same-compatibility-level"></a>Opérations prises en charge qui requièrent le même niveau de compatibilité
Les opérations suivantes requièrent que les bases de données sources partagent le même niveau de compatibilité.
1. La fusion de partitions de bases de données différentes est prise en charge uniquement si les deux bases de données partagent le même niveau de compatibilité.
2. L'utilisation de dimensions liées d'une autre base de données requiert le même niveau de compatibilité. Par exemple, si vous souhaitez utiliser une dimension liée d’une base de données [!INCLUDE[ssKilimanjaro](../../includes/sskilimanjaro-md.md)] dans une [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)] base de données, vous devez déplacer la [!INCLUDE[ssKilimanjaro](../../includes/sskilimanjaro-md.md)] base de données vers un [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)] serveur et définir le niveau de compatibilité sur `1100` .
3. La synchronisation des serveurs est prise en charge uniquement pour les serveurs qui partagent la même version et le même niveau de compatibilité de base de données.
## <a name="next-steps"></a>Étapes suivantes
Après avoir augmenté le niveau de compatibilité de la base de données, vous pouvez définir la propriété `StringStoresCompatibilityLevel` dans [!INCLUDE[ssBIDevStudio](../../includes/ssbidevstudio-md.md)]. Cela augmente le stockage des chaînes de mesures et de dimensions. Pour plus d’informations sur cette fonctionnalité, consultez [Configurer le stockage de chaînes pour des dimensions et des partitions](configure-string-storage-for-dimensions-and-partitions.md).
## <a name="see-also"></a>Voir aussi
[Sauvegarde, restauration et synchronisation de bases de données (XMLA)](../multidimensional-models-scripting-language-assl-xmla/backing-up-restoring-and-synchronizing-databases-xmla.md)
| 97.154639 | 874 | 0.768251 | fra_Latn | 0.955453 |
43bca3da36a4f7ac2eb8d22c97964a004ecb3897 | 2,409 | md | Markdown | README.md | davidnewhall/go-imessage | 20447d56371c8b0728386b8bf851ee0d38c23b4c | [
"MIT"
] | 25 | 2019-02-20T08:29:18.000Z | 2022-02-24T07:48:37.000Z | README.md | davidnewhall/go-imessage | 20447d56371c8b0728386b8bf851ee0d38c23b4c | [
"MIT"
] | 1 | 2020-04-23T19:11:48.000Z | 2020-04-23T20:22:20.000Z | README.md | davidnewhall/go-imessage | 20447d56371c8b0728386b8bf851ee0d38c23b4c | [
"MIT"
] | 3 | 2021-04-05T22:13:14.000Z | 2021-12-05T13:17:04.000Z | # go-imessage
Go Library used to interact with iMessage (Messages.app) on macOS
[](https://godoc.org/golift.io/imessage)
Use this library to send and receive messages using iMessage. I personally use it for
various home automation applications. You can use it to make a chat bot or something
similar. You can bind either a function or a channel to any or all messages.
The Send() method uses AppleScript, which is likely going to require some tinkering.
You got this far, so I trust you'll figure that out. Let me know how it works out.
The library uses `fsnotify` to poll for db updates, then checks the database for changes.
Only new messages are processed. If somehow `fsnotify` fails it will fall back to polling
the database. Pay attention to the debug/error logs. See the example below for an easy
way to log the library messages.
A working example:
```golang
package main
import (
"log"
"os"
"strings"
"golift.io/imessage"
)
func checkErr(err error) {
if err != nil {
log.Fatalln(err)
}
}
func main() {
iChatDBLocation := "/Users/<your username>/Library/Messages/chat.db"
c := &imessage.Config{
SQLPath: iChatDBLocation, // Set this correctly
QueueSize: 10, // 10-20 is fine. If your server is super busy, tune this.
Retries: 3, // run the applescript up to this many times to send a message. 3 works well.
DebugLog: log.New(os.Stdout, "[DEBUG] ", log.LstdFlags), // Log debug messages.
ErrorLog: log.New(os.Stderr, "[ERROR] ", log.LstdFlags), // Log errors.
}
s, err := imessage.Init(c)
checkErr(err)
done := make(chan imessage.Incoming) // Make a channel to receive incoming messages.
s.IncomingChan(".*", done) // Bind to all incoming messages.
err = s.Start() // Start outgoing and incoming message go routines.
checkErr(err)
log.Print("waiting for msgs")
for msg := range done { // wait here for messages to come in.
if len(msg.Text) < 60 {
log.Println("id:", msg.RowID, "from:", msg.From, "attachment?", msg.File, "msg:", msg.Text)
} else {
log.Println("id:", msg.RowID, "from:", msg.From, "length:", len(msg.Text))
}
if strings.HasPrefix(msg.Text, "Help") {
// Reply to any incoming message that has the word "Help" as the first word.
s.Send(imessage.Outgoing{Text: "no help for you", To: msg.From})
}
}
}
```
| 35.426471 | 107 | 0.688252 | eng_Latn | 0.934101 |
43be0d39c25ee5bbcd0bfef2cf4133c4fbdc8c77 | 25,263 | md | Markdown | data_curation/CNV/README.md | talkowski-lab/rCNV2 | fcc1142d8c13b58d18a37fe129e9bb4d7bd6641d | [
"MIT"
] | 7 | 2021-01-28T15:46:46.000Z | 2022-02-07T06:50:40.000Z | data_curation/CNV/README.md | talkowski-lab/rCNV2 | fcc1142d8c13b58d18a37fe129e9bb4d7bd6641d | [
"MIT"
] | 1 | 2021-03-02T01:33:53.000Z | 2021-03-02T01:33:53.000Z | data_curation/CNV/README.md | talkowski-lab/rCNV2 | fcc1142d8c13b58d18a37fe129e9bb4d7bd6641d | [
"MIT"
] | 3 | 2021-02-21T19:49:12.000Z | 2021-12-22T15:56:21.000Z | # CNV Data Curation
We aggregated existing microarray-based CNV calls from numerous sources. This process is described below.
All commands executed to filter the CNV data as descrbed below are contained in `filter_CNV_data.sh`.
In practice, the commands in `filter_CNV_data.sh` were parallelized in [FireCloud/Terra](https://portal.firecloud.org) with `filter_CNV_data.wdl`.
## CNV data sources
We aggregated CNV data from multiple sources, listed alphabetically below.
Note that the sample counts in this table represent the number of samples retained after filtering outliers (described below).
| Cohort | Citation | PMID | Platform(s) | Build | Phenos | N Cases | N Ctrls |
| --- | :--- | :--- | :--- | :--- | :--- | ---: | ---: |
| Boston Children's Hospital (`BCH`) | [Talkowski _et al._, _Cell_ (2012)](https://pubmed.ncbi.nlm.nih.gov/22521361) | [22521361](https://pubmed.ncbi.nlm.nih.gov/22521361) | aCGH | hg18 | Mixed | 3591 | 0 |
| BioVU (`BioVU`) | [Roden _et al._, _Clin. Pharmacol. Ther._ (2008)](https://pubmed.ncbi.nlm.nih.gov/18500243) | [18500243](https://pubmed.ncbi.nlm.nih.gov/18500243) | Illumina MegaEx (100%) | hg19 | Mixed | 32306 | 14661 |
| Children's Hospital of Philadelphia (`CHOP`) | [Li _et al._, _Nat. Commun._ (2020)](https://pubmed.ncbi.nlm.nih.gov/31937769) | [31937769](https://pubmed.ncbi.nlm.nih.gov/31937769) | Mixed Illumina SNP genotyping platforms | hg19 | Mixed | 153870 | 24161 |
| Eichler Lab (`Coe`) | [Coe _et al._, _Nat. Genet._ (2014)](https://pubmed.ncbi.nlm.nih.gov/25217958) | [25217958](https://pubmed.ncbi.nlm.nih.gov/25217958) | Cases: SignatureChip OS v2.0 (58%), SignatureChip OS v1.0 (34%), Other (8%); Controls: Affy 6.0 (100%) | hg19 | Developmental disorders | 29083 | 11256 |
| Eichler Lab (`Cooper`) | [Cooper _et al._, _Nat. Genet._ (2011)](https://pubmed.ncbi.nlm.nih.gov/21841781) | [21841781](https://pubmed.ncbi.nlm.nih.gov/21841781) | Illumina 550k-610k (75%), Custom 1.2M (25%) | hg19 | N/A | 0 | 8329 |
| Epi25 Consortium (`Epi25k`) | [Niestroj _et al._, _Brain_ (2020)](https://pubmed.ncbi.nlm.nih.gov/32568404) | [32568404](https://pubmed.ncbi.nlm.nih.gov/32568404) | Illumina GSA-MD v1.0 (100%) | hg19 | Epilepsy | 12053 | 8173 |
| Estonian Biobank (`EstBB`) | [Leitsalu _et al._, _Int. J. Epidemiol._ (2014)](https://pubmed.ncbi.nlm.nih.gov/24518929) | [24518929](https://pubmed.ncbi.nlm.nih.gov/24518929) | Illumina GSA (100%) | hg19 | Mixed | 63183 | 15659 |
| GeneDX (`GDX`) | - | - | Custom Aglient SNP Array (71%), CytoScan HD (29%) | hg18 & hg19 | Mixed | 74208 | 0 |
| Indiana University (`IU`) | - | - | CytoScan HD (100%) | hg19 | Mixed | 1576 | 0 |
| Ontario Population Genomics Platform (`Ontario`) | [Uddin _et al._, _Genet. Med._ (2015)](https://pubmed.ncbi.nlm.nih.gov/25503493) | [25503493](https://pubmed.ncbi.nlm.nih.gov/25503493) | CytoScan HD (100%) | hg19 | N/A | 0 | 873 |
| Psychiatric Genetics Consortium (`PGC`) | [Marshall _et al._, _Nat. Genet._ (2017)](https://pubmed.ncbi.nlm.nih.gov/27869829) | [27869829](https://pubmed.ncbi.nlm.nih.gov/27869829) | Affy 6.0 (37%), Omni Express (31%), Omni Express Plus (12%), Other (20%) | hg18 | Schizophrenia | 21094 | 20277 |
| Radboud University Medical Center (`RUMC`) | [Vulto-van Silfhout _et al._, _Hum. Mutat._ (2013)](https://pubmed.ncbi.nlm.nih.gov/24038936) | [24038936](https://pubmed.ncbi.nlm.nih.gov/24038936) | Affy 250k (100%) | hg17 | Intellectual disability | 5531 | 0 |
| SickKids Hospital (`SickKids`) | [Zarrei _et al._, _NPJ Genomic Medicine_ (2019)](https://pubmed.ncbi.nlm.nih.gov/31602316) | [31602316](https://pubmed.ncbi.nlm.nih.gov/31602316) | Affy 6.0 (100%) | hg19 | Developmental disorders | 2689 | 0 |
| Simons Simplex Collection (`SSC`) | [Sanders _et al._, _Neuron_ (2015)](https://pubmed.ncbi.nlm.nih.gov/26402605) | [26402605](https://pubmed.ncbi.nlm.nih.gov/26402605) | Omni 1Mv3 (46%), Omni 2.5 (41%), Omni 1Mv1 (13%) | hg18 | Autism | 2795 | 0 |
| The Cancer Genome Atlas (`TCGA`) | [Zack _et al._, _Nat. Genet._ (2013)](https://pubmed.ncbi.nlm.nih.gov/24071852) | [24071852](https://pubmed.ncbi.nlm.nih.gov/24071852) | Affy 6.0 (100%) | hg19 | N/A | 0 | 8670 |
| The Genetic Etiology of Tourette Syndrome Consortium (`TSAICG`) | [Huang _et al._, _Neuron (2017)](https://pubmed.ncbi.nlm.nih.gov/28641109) | [28641109](https://pubmed.ncbi.nlm.nih.gov/28641109) | OmniExpress (100%) | hg19 | Tourette Syndrome | 2434 | 4093 |
| UK Biobank (`UKBB`) | [Macé _et al._, _Nat. Commun._ (2017)](https://pubmed.ncbi.nlm.nih.gov/28963451) | [28963451](https://pubmed.ncbi.nlm.nih.gov/28963451) | UKBB Affy Axiom (100%) | hg19 | Mixed | 54071 | 375800 |
## Raw CNV data processing steps
All CNV data native to hg17 or hg18 was lifted over to hg19 using UCSC liftOver, requiring at least 50% of the original CNV to map successfully to hg19 in order to be retained.
Some datasets required manual curation prior to inclusion. Where necessary, these steps are enumerated below:
* **BioVU**: TBD [TODO: ADD TEXT HERE]
* **CHOP**: CNVs were filtered on quality score ≥40 and CNV size ≥25kb while requiring at least 10 SNPs per CNV. After CNV filtering, samples with `LRR_SD` <0.25, >20 CNV calls, or SNP call rate <98% were excluded as outliers, as well as samples genotyped on arrays with <175k SNP probes or samples labeled as cancer or Down's Syndrome patients. Finally, we identified 19 loci with apparently platform-specific artifactual CNV pileups. CNVs covered ≥10% by any of these artifact regions were removed from the callset. Lists of CHOP-specific blacklisted loci for deletions and duplications are provided as [reference files](https://github.com/talkowski-lab/rCNV2/tree/master/refs).
* **Cooper**: Only retained control samples from Cooper _et al._ All cases from Cooper _et al._ also appear in Coe _et al._
* **Epi25k**: CNVs were filtered on ≥10 probes and ≥25kb. After CNV filtering, samples with >25 CNV calls were excluded as outliers.
* **EstBB**: Samples were excluded if were not included in SNP imputation, had genotype calls missing for ≥2% of sites, or belonged to two genotyping batches based on visual inspection of genotyping intensity parameters, followed by further exclusion of genotyping plates (≤24 samples per plate) that contained >3 samples with >200 CNV calls. We retained unrelated samples that had been linked to Estonian health registries and that had ≤50 raw CNV calls. We included CNVs with a quality score ≥15, were covered by ≥10 probes, and were ≥25kb in size. Finally, we pruned related samples and any samples with known malignant cancers or chromosomal disorders (e.g., Down's Syndrome or sex chromosome aneuploidies).
* **GDX**: All CNVs were required to be ≥20kb and <40Mb in length. Except for the minority of 9,958 samples for which additional CNV call metadata was unavailable, all CNVs were further required to not have been annotated as a suspected false positive or mosaic event, have estimated copy numbers ≤1.5 for deletions or ≥2.5 for duplications, include ≥10 probes and have P(CNV) ≤ 10<sup>-10</sup>. Following CNV call filtering, we excluded all samples that either had >10 calls each, were identified as potential biological replicates, were referred for testing due to being a relative of a known carrier of a medically relevant CNV, or had “advanced maternal age” as their indication for testing.
* **IU**: Excluded samples from Indiana University (IU) cohort derived from buccal swab DNA, samples with known aneuploidies or large runs of homozygosity, and samples with no phenotypic indication specified.
* **SSC**: CNVs were filtered on pCNV ≤10<sup>-9</sup>, per recommendation of the authors. We only retained affected individuals since all controls were first-degree relatives of affected cases.
* **SickKids**: CNVs were filtered on ≥25kb. After CNV filtering, samples with >80 CNV calls were excluded as outliers. Finally, we identified a single locus on chr12 that had CNVs only appearing in ADHD samples at 2.8% frequency; these CNVs were removed from the callset. We only retained affected individuals since all controls were first-degree relatives of affected cases.
* **TCGA**: CNVs were filtered on ≥10 probes and ≥25kb. Deletions were required to have a mean intensity ≤ log<sub>2</sub>(0.6) and duplications were required to have a mean intensity ≥ log<sub>2</sub>(1.45). We only retained normal samples from TCGA tumor:normal pairs, and excluded any TCGA donors with known blood cancer.
* **UKBB**: CNVs were filtered on quality score ≥17 and CNV size ≥25kb. After CNV filtering, samples with >10 CNV calls were excluded as outliers as well as any samples with known malignant cancers or chromosomal disorders (e.g., Down's Syndrome or sex chromosome aneuploidies).
#### CNV defragmentation
Many array-based CNV calling algorithms infrequently result in fragmentation (i.e., over-segmentation) of large CNV calls.
For the purposes of this study, fragmentation of CNV calls has the potential to bias association tests, as single individuals might be counted multiple times for a given locus or gene.
Thus, we applied a standardized defragmentation step to raw CNV calls for all studies where this was possible.
Defragmentation was performed with `defragment_cnvs.py` using `--max-dist 0.25`, which merges CNVs of the same type found in the same sample if their breakpoints are within ±25% of the size of their corresponding original CNV calls.
For example, below is a 4.5Mb deletion reported as three deletion fragments in the `TCGA` cohort:
| chrom | start | end | CNV type |
| ---: | ---: | ---: | :--- |
| 10 | 54,387,162 | 54,542,011 | DEL |
| 10 | 54,544,685 | 55,877,827 | DEL |
| 10 | 55,883,144 | 58,906,151 | DEL |
Given that these three fragments are (i) all reported in the same sample and (ii) separated by just 2.7kb and 5.3kb, respectively, it is highly unlikely that these deletions are independent CNV events.
The defragmentation algorithm as applied to these data would merge these three deletion fragments into a single CNV spanning from `54387162` to `58906151`.
Seven studies were unable to be defragmented due to inadequate sample-level information: `Coe` (_controls only_), `Cooper`, `PGC`, `BioVU`, `TSAICG`, `Ontario`, and `RUMC`.
### Raw CNV callset properties
The properties of each callset are listed below after initial data processing steps but prior to further filtering.
As [described below](https://github.com/talkowski-lab/rCNV2/tree/master/data_curation/CNV#case-control-metacohorts), we subdivided 13,139 control samples from the UKBB cohort to use as proxy controls for cases from GeneDx and retained the remaining 416,732 UKBB samples as a separate cohort; these cohort subsets are referred to as `UKBB_main` and `UKBB_sub` for the main UKBB cohort and the subset of 13,139 controls, respectively.
| Dataset | N Cases | Case CNVs | CNVs /Case | Case Median Size | Case DEL:DUP | N Ctrls | Ctrl CNVs | CNVs /Ctrl | Ctrl Median Size | Ctrl DEL:DUP |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| BCH | 3,591 | 5,159 | 1.44 | 204.0 kb | 1:1.27 | 0 | 0 | 0 | NA | NA |
| CHOP | 153,870 | 641,938 | 4.17 | 87.0 kb | 1.05:1 | 24,161 | 108,559 | 4.49 | 87.0 kb | 1.86:1 |
| Coe | 29,104 | 28,667 | 0.98 | 187.0 kb | 1:1.11 | 11,256 | 273,331 | 24.28 | 53.0 kb | 1.24:1 |
| Cooper | 0 | 0 | 0 | NA | NA | 8,329 | 432,478 | 51.92 | 2.0 kb | 8.04:1 |
| Epi25k | 12,053 | 92,412 | 7.67 | 97.0 kb | 1:1.54 | 8,173 | 64,036 | 7.84 | 100.0 kb | 1:1.33 |
| GDX | 74,028 | 145,459 | 1.96 | 202.0 kb | 1:1.59 | 0 | 0 | 0 | NA | NA |
| IU | 1,577 | 4,890 | 3.1 | 98.0 kb | 1.41:1 | 0 | 0 | 0 | NA | NA |
| Ontario | 0 | 0 | 0 | NA | NA | 873 | 71,178 | 81.53 | 10.0 kb | 4.11:1 |
| PGC | 21,094 | 42,096 | 2.0 | 80.0 kb | 1:1.04 | 20,277 | 40,464 | 2.0 | 78.0 kb | 1:1.03 |
| RUMC | 5,531 | 1,777 | 0.32 | 1090.0 kb | 1:1.00 | 0 | 0 | 0 | NA | NA |
| SSC | 2,795 | 30,856 | 11.04 | 21.0 kb | 3.09:1 | 0 | 0 | 0 | NA | NA |
| SickKids | 2,689 | 72,278 | 26.88 | 55.0 kb | 1.04:1 | 0 | 0 | 0 | NA | NA |
| TCGA | 0 | 0 | 0 | NA | NA | 8,670 | 206,758 | 23.85 | 60.0 kb | 1:1.16 |
| TSAICG | 2,434 | 3,541 | 1.45 | 91.0 kb | 1.01:1 | 4,093 | 5,834 | 1.43 | 91.0 kb | 1:1.08 |
| UKBB_main | 54,071 | 140,082 | 2.59 | 88.0 kb | 5.62:1 | 362,661 | 930,043 | 2.56 | 87.0 kb | 5.69:1 |
| UKBB_sub | 0 | 0 | 0 | NA | NA | 13,139 | 33,913 | 2.58 | 86.0 kb | 5.64:1 |
| EstBB | 63,183 | 226,083 | 3.58 | 83.0 kb | 1:1.45 | 15,659 | 56,267 | 3.59 | 83.0 kb | 1:1.46 |
| BioVU | 32,306 | 58,955 | 1.82 | 133.0 kb | 1:1.32 | 14,661 | 27,411 | 1.87 | 132.0 kb | 1:1.40 |
The information for this table was collected using `collect_cohort_stats.sh`, and is visualized below using `plot_cnv_stats_per_cohort.R`.

### Raw data access
All raw CNV data files and their tabix indexes are stored in a protected Google Cloud bucket, here:
```
$ gsutil ls gs://rcnv_project/raw_data/cnv/
```
Note that permissions must be granted per user prior to data access.
## Curation Steps: Rare CNVs
All raw CNV data was subjected to the same set of global filters:
* Restricted to autosomes
* CNV size ≥ 100kb and ≤ 20Mb
* Does not have substantial overlap<sup>1</sup> with a common CNV (lower bound of 95% binomial confidence interval of`AF` > 1%) in any population from WGS resolution in gnomAD-SV<sup>2</sup> v2.1 ([Collins\*, Brand\*, _et al._, _Nature_ (2020)](https://www.nature.com/articles/s41586-020-2287-8))
* Does not have substantial overlap<sup>1</sup> with a common CNV (lower bound of 95% binomial confidence interval of`AF` > 1%) in any population from WGS resolution in recent high-coverage resequencing of the 1000 Genomes Project, Phase III ([Byrska-Bishop _et al._, _bioRxiv_ (2021)](https://www.biorxiv.org/content/10.1101/2021.02.06.430068v1))
* Does not have substantial overlap<sup>1</sup> with a common CNV (lower bound of 95% binomial confidence interval of`AF` > 1%) in any population from WGS resolution in recent high-coverage resequencing of the Human Genome Diversity Panel ([Almarri _et al._, _Cell_ (2020)](https://pubmed.ncbi.nlm.nih.gov/32531199/))
* Does not have substantial overlap<sup>1</sup> with a common CNV (lower bound of 95% binomial confidence interval of`AF` > 1%) from WGS resolution in the NIH Center for Common Disease Genetics SV callset ([Abel _et al._, _Nature_ (2020)](https://www.nature.com/articles/s41586-020-2371-0))
* Does not have substantial overlap<sup>1</sup> with other CNVs in at least 1% of all samples within the same dataset or in any of the other array CNV datasets (compared pairwise in serial; 1% cutoff scaled adaptively to each cohort corresponding to the upper bound of the 95% binomial confidence interval of 1% frequency according to that cohort's total sample size)<sup>3</sup>
* Not substantially covered<sup>4</sup> by somatically hypermutable sites (as applied in [Collins\*, Brand\*, _et al._, _Nature_ (2020)](https://www.nature.com/articles/s41586-020-2287-8))<sup>3</sup>
* Not substantially covered<sup>4</sup> by segmental duplications and/or simple, low-complexity, or satellite repeats<sup>3</sup>
* Not substantially covered<sup>4</sup> by N-masked regions of the hg19 reference genome assembly<sup>3</sup>
#### Notes on curation
1. "Substantial" overlap determined based on ≥50% reciprocal overlap using BEDTools ([Quinlan & Hall, _Bioinformatics_ (2010)](https://www.ncbi.nlm.nih.gov/pubmed/20110278)). For overlap-based comparisons, both breakpoints were required to be within ±100kb, and CNV type (DEL vs. DUP) was required to match.
2. The version of gnomAD-SV used for this analysis (gnomAD-SV v2.1, non-neuro) included 8,342 samples without known neuropsychiatric disorders as [available from the gnomAD website](https://gnomad.broadinstitute.org/downloads/) and described in [Collins\*, Brand\*, _et al._, _Nature_ (2020)](https://www.nature.com/articles/s41586-020-2287-8)
3. CNVs with ≥75% reciprocal overlap versus [known genomic disorder CNV loci](https://github.com/talkowski-lab/rCNV2/tree/master/data_curation/other#genomic-disorders) reported by at least two of six established resources were exempted from this filter step, as we reasoned it was possible that certain genomic disorder CNVs may be represented at frequencies close to 1% in some case-only subsets (e.g., SSC, RUMC, SickKids, GeneDx, _etc._).
4. "Substantial" coverage determined based on ≥50% coverage per BEDTools coverage ([Quinlan & Hall, _Bioinformatics_ (2010)](https://www.ncbi.nlm.nih.gov/pubmed/20110278)).
### Rare CNV callset properties
The properties of each rare CNV callset are listed below after the above filtering steps.
| Dataset | N Cases | Case CNVs | CNVs /Case | Case Median Size | Case DEL:DUP | N Ctrls | Ctrl CNVs | CNVs /Ctrl | Ctrl Median Size | Ctrl DEL:DUP |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| BCH | 3,591 | 3,028 | 0.84 | 342.0 kb | 1:1.40 | 0 | 0 | 0 | NA | NA |
| CHOP | 153,870 | 91,184 | 0.59 | 193.0 kb | 1:1.08 | 24,161 | 16,819 | 0.7 | 186.0 kb | 1:1.01 |
| Coe | 29,104 | 19,640 | 0.67 | 264.0 kb | 1:1.18 | 11,256 | 9,279 | 0.82 | 177.0 kb | 1:1.59 |
| Cooper | 0 | 0 | 0 | NA | NA | 8,329 | 4,183 | 0.5 | 170.0 kb | 1.38:1 |
| Epi25k | 12,053 | 8,989 | 0.75 | 176.0 kb | 1:1.07 | 8,173 | 5,611 | 0.69 | 185.0 kb | 1:1.14 |
| GDX | 74,028 | 45,051 | 0.61 | 283.0 kb | 1:1.47 | 0 | 0 | 0 | NA | NA |
| IU | 1,577 | 1,651 | 1.05 | 209.0 kb | 1:1.34 | 0 | 0 | 0 | NA | NA |
| Ontario | 0 | 0 | 0 | NA | NA | 873 | 745 | 0.85 | 182.0 kb | 1:1.50 |
| PGC | 21,094 | 14,660 | 0.69 | 190.0 kb | 1:1.34 | 20,277 | 13,272 | 0.65 | 182.0 kb | 1:1.33 |
| RUMC | 5,531 | 1,486 | 0.27 | 1059.0 kb | 1.03:1 | 0 | 0 | 0 | NA | NA |
| SSC | 2,795 | 2,151 | 0.77 | 195.0 kb | 1:1.15 | 0 | 0 | 0 | NA | NA |
| SickKids | 2,689 | 3,879 | 1.44 | 181.0 kb | 1:1.35 | 0 | 0 | 0 | NA | NA |
| TCGA | 0 | 0 | 0 | NA | NA | 8,670 | 8,710 | 1.0 | 178.0 kb | 1:1.71 |
| TSAICG | 2,434 | 1,607 | 0.66 | 198.0 kb | 1:1.31 | 4,093 | 2,663 | 0.65 | 187.0 kb | 1:1.36 |
| UKBB_main | 54,071 | 28,690 | 0.53 | 180.0 kb | 2.50:1 | 362,661 | 187,626 | 0.52 | 177.0 kb | 2.57:1 |
| UKBB_sub | 0 | 0 | 0 | NA | NA | 13,139 | 6,758 | 0.51 | 176.0 kb | 2.62:1 |
| EstBB | 63,183 | 32,868 | 0.52 | 185.0 kb | 1:1.14 | 15,659 | 8,206 | 0.52 | 182.0 kb | 1:1.13 |
| BioVU | 32,306 | 27,239 | 0.84 | 179.0 kb | 1:1.22 | 14,661 | 12,118 | 0.83 | 176.0 kb | 1:1.25 |
The information for this table was collected using `collect_cohort_stats.sh`, and is visualized below using `plot_cnv_stats_per_cohort.R`.

---
## Case-control "metacohorts"
For analyses, we combined CNV data from multiple cohorts into seven matched groups, dubbed **metacohorts**, to control for technical differences between individual data sources and cohorts.
Individual cohorts were assigned to metacohorts on the basis of similarity in sample recruitment protocols, microarray probesets, and other technical factors.
We did not modify cohorts that had matched cases and controls with one exception: as the UKBB Axiom array design most closely matched the predominant array platform used by the GeneDx cohort, we randomly selected 13,139 controls from the UKBB (dubbed `UKBB_sub`) to group with GeneDx cases in the `meta2` metacohort. We retained all remaining UKBB samples (dubbed `UKBB_main`) in the `meta5` metacohort.
These metacohorts represent the basic unit on which all burden testing was performed, and are described in the table below.
| Metacohort ID | Case Source(s) | Number of Cases | Control Sources(s) | Number of Controls |
| :--- | :--- | ---: | :--- | ---: |
| `meta1` | BCH, Coe, IU | 34,272 | Coe, Cooper | 19,585 |
| `meta2` | Epi25k, GDX, TSAICG | 88,515 | Epi25k, Ontario, TSAICG, UKBB_sub | 26,278 |
| `meta3` | PGC, RUMC, SSC, SickKids | 32,109 | PGC, TCGA | 28,947 |
| `meta4` | CHOP | 153,870 | CHOP | 24,161 |
| `meta5` | UKBB_main | 54,071 | UKBB_main | 362,661 |
| `meta6` | EstBB | 63,183 | EstBB | 15,659 |
| `meta7` | BioVU | 32,306 | BioVU | 14,661 |
### Metacohort rare CNV callset properties
| Dataset | N Cases | Case CNVs | CNVs /Case | Case Median Size | Case DEL:DUP | N Ctrls | Ctrl CNVs | CNVs /Ctrl | Ctrl Median Size | Ctrl DEL:DUP |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| meta1 | 34,272 | 24,319 | 0.71 | 266.0 kb | 1:1.22 | 19,585 | 13,462 | 0.69 | 174.0 kb | 1:1.24 |
| meta2 | 88,515 | 55,647 | 0.63 | 256.0 kb | 1:1.39 | 26,278 | 15,777 | 0.6 | 181.0 kb | 1.31:1 |
| meta3 | 32,109 | 22,176 | 0.69 | 201.0 kb | 1:1.29 | 28,947 | 21,982 | 0.76 | 180.0 kb | 1:1.47 |
| meta4 | 153,870 | 91,184 | 0.59 | 193.0 kb | 1:1.08 | 24,161 | 16,819 | 0.7 | 186.0 kb | 1:1.01 |
| meta5 | 54,071 | 28,690 | 0.53 | 180.0 kb | 2.50:1 | 362,661 | 187,626 | 0.52 | 177.0 kb | 2.57:1 |
| meta6 | 63,183 | 32,868 | 0.52 | 185.0 kb | 1:1.14 | 15,659 | 8,206 | 0.52 | 182.0 kb | 1:1.13 |
| meta7 | 32,306 | 27,239 | 0.84 | 179.0 kb | 1:1.22 | 14,661 | 12,118 | 0.83 | 176.0 kb | 1:1.25 |

The information for these tables was collected using `collect_cohort_stats.sh` and visualized using `plot_cnv_stats_per_cohort.R`.
---
## Noncoding subsets
For certain analyses, we restricted this dataset further to rCNVs unlikely to directly disrupt disease-relevant protein-coding genes.
These subsets are defined as follows:
1. **Strictly noncoding**: all rCNVs were excluded based on any overlap with any canonical exon from any protein-coding gene ([as described here](https://github.com/talkowski-lab/rCNV2/tree/master/data_curation/gene#gene-definitions)).
2. **Loose noncoding**: to improve power for [noncoding association testing](https://github.com/talkowski-lab/rCNV2/tree/master/analysis/noncoding), we supplemented the `Strictly noncoding` subset (described above) with any rCNVs that overlapped exons from the 4,793 genes meeting both of the following criteria:
* Known to readily tolerate functional mutations in the general population ([described here](https://github.com/talkowski-lab/rCNV2/tree/master/data_curation/gene#gene-set-definitions)); and
* No known disease association [per OMIM](https://www.omim.org/).
This second subset ("`loose noncoding`") represents the subset of rare CNVs depleted for strong-effect, disease-relevant coding effects.
For both sets of noncoding CNVs, we padded all exons from all genes by ±50kb to protect against CNV breakpoint imprecision.
The code to apply these filters is contained in `extract_noncoding_subsets.sh`.
### Strictly noncoding rare CNV callset properties
| Dataset | N Cases | Case CNVs | CNVs /Case | Case Median Size | Case DEL:DUP | N Ctrls | Ctrl CNVs | CNVs /Ctrl | Ctrl Median Size | Ctrl DEL:DUP |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| meta1 | 34,272 | 5,306 | 0.15 | 192.0 kb | 1.32:1 | 19,585 | 4,588 | 0.23 | 158.0 kb | 1.91:1 |
| meta2 | 88,515 | 12,644 | 0.14 | 174.0 kb | 1.65:1 | 26,278 | 5,896 | 0.22 | 166.0 kb | 2.49:1 |
| meta3 | 32,109 | 7,070 | 0.22 | 163.0 kb | 1.63:1 | 28,947 | 7,309 | 0.25 | 158.0 kb | 1.89:1 |
| meta4 | 153,870 | 33,003 | 0.21 | 166.0 kb | 1.85:1 | 24,161 | 6,590 | 0.27 | 165.0 kb | 2.14:1 |
| meta5 | 54,071 | 9,876 | 0.18 | 164.0 kb | 5.07:1 | 362,661 | 66,811 | 0.18 | 164.0 kb | 4.98:1 |
| meta6 | 63,183 | 12,339 | 0.2 | 165.0 kb | 1.89:1 | 15,659 | 3,094 | 0.2 | 165.0 kb | 1.86:1 |
| meta7 | 32,306 | 9,381 | 0.29 | 167.0 kb | 2.03:1 | 14,661 | 4,211 | 0.29 | 167.0 kb | 1.93:1 |

### Loose noncoding rare CNV callset properties
| Dataset | N Cases | Case CNVs | CNVs /Case | Case Median Size | Case DEL:DUP | N Ctrls | Ctrl CNVs | CNVs /Ctrl | Ctrl Median Size | Ctrl DEL:DUP |
| --- | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: | ---: |
| meta1 | 34,272 | 6,449 | 0.19 | 189.0 kb | 1.30:1 | 19,585 | 5,485 | 0.28 | 159.0 kb | 1.79:1 |
| meta2 | 88,515 | 14,786 | 0.17 | 174.0 kb | 1.57:1 | 26,278 | 7,049 | 0.27 | 165.0 kb | 2.32:1 |
| meta3 | 32,109 | 8,344 | 0.26 | 162.0 kb | 1.50:1 | 28,947 | 8,709 | 0.3 | 159.0 kb | 1.74:1 |
| meta4 | 153,870 | 38,817 | 0.25 | 168.0 kb | 1.78:1 | 24,161 | 7,782 | 0.32 | 166.0 kb | 2.04:1 |
| meta5 | 54,071 | 11,876 | 0.22 | 162.0 kb | 4.26:1 | 362,661 | 80,208 | 0.22 | 162.0 kb | 4.33:1 |
| meta6 | 63,183 | 14,209 | 0.22 | 165.0 kb | 1.78:1 | 15,659 | 3,588 | 0.23 | 165.0 kb | 1.75:1 |
| meta7 | 32,306 | 10,941 | 0.34 | 168.0 kb | 1.91:1 | 14,661 | 4,888 | 0.33 | 168.0 kb | 1.80:1 |

| 100.25 | 712 | 0.670823 | eng_Latn | 0.825926 |
43be3a0fe576350242fa44d35f1c20582dc235f1 | 170 | md | Markdown | README.md | Cray-HPE/hpe-csm-scripts | 3d8d00d13544a61b9301404290e9f97a77e71e69 | [
"MIT"
] | null | null | null | README.md | Cray-HPE/hpe-csm-scripts | 3d8d00d13544a61b9301404290e9f97a77e71e69 | [
"MIT"
] | null | null | null | README.md | Cray-HPE/hpe-csm-scripts | 3d8d00d13544a61b9301404290e9f97a77e71e69 | [
"MIT"
] | null | null | null | # CSM Script
This repo is a collection of scripts that are useful to installers and admins.
Anyone can add scripts to this repo and they will be installed on the NCNs.
| 28.333333 | 78 | 0.782353 | eng_Latn | 0.999985 |
43beb5a95c14efb420a1507fac8f32747a5334e1 | 356 | md | Markdown | README.md | guang0705/qingya-driver | d2afa64b73071f72ac7037a6b3cbc81a0bc7cbe4 | [
"MIT"
] | null | null | null | README.md | guang0705/qingya-driver | d2afa64b73071f72ac7037a6b3cbc81a0bc7cbe4 | [
"MIT"
] | null | null | null | README.md | guang0705/qingya-driver | d2afa64b73071f72ac7037a6b3cbc81a0bc7cbe4 | [
"MIT"
] | null | null | null | # 常用底层驱动文件
### 功能简介
- 缓存驱动
- [缓存驱动-mongodb](https://www.yuque.com/xqoyka/tg96kp/zsciso)
- [缓存驱动-多级缓存系统](https://www.yuque.com/xqoyka/tg96kp/ru7cs6)
- 日志驱动
- [日志驱动-云日志](https://www.yuque.com/xqoyka/tg96kp/ei5dhr)
- 异常驱动
- [异常驱动-异常处理接管](https://www.yuque.com/xqoyka/tg96kp/rsku9l)
- 数据库驱动
### 安装
```shell
composer require qingya/driver
```
| 16.952381 | 64 | 0.66573 | yue_Hant | 0.541795 |
43bed61cdcafd3485cac0497ac1e9661cc990506 | 2,605 | md | Markdown | README.md | shvyn22/FlexingWeather | c0fe1164515ec397a96aa0260896e230beccd213 | [
"MIT"
] | null | null | null | README.md | shvyn22/FlexingWeather | c0fe1164515ec397a96aa0260896e230beccd213 | [
"MIT"
] | null | null | null | README.md | shvyn22/FlexingWeather | c0fe1164515ec397a96aa0260896e230beccd213 | [
"MIT"
] | null | null | null | # FlexingWeather
FlexingWeather is an Android MVVM Compose sample application created for learning purposes only.\
This application is based on [MetaWeatherAPI](https://www.metaweather.com/api/).
## Screenshots
### Light mode
<p float="left">
<img src="screenshots/screen1.png" width=200/>
<img src="screenshots/screen2.png" width=200/>
</p>
### Dark mode
<p float="left">
<img src="screenshots/screen1-dm.png" width=200/>
<img src="screenshots/screen2-dm.png" width=200/>
</p>
## Tech stack and concepts
* **[Kotlin](https://kotlinlang.org/)** as programming language.
* **[Kotlin coroutines](https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/)** as framework for asynchronous jobs and **Flow** as dataholder.
* **[Jetpack Compose](https://developer.android.com/jetpack/compose)** for UI.
* API-based remote data layer.
* **[Retrofit](https://square.github.io/retrofit/)** for network queries.
* **[GSON](https://github.com/google/gson)** for parsing JSON.
* **[DataStore](https://developer.android.com/jetpack/androidx/releases/datastore)** for working with user preferences (e.g. light/dark mode).
* **[Room](https://developer.android.com/jetpack/androidx/releases/room)** for local data layer.
* **[Lifecycle components](https://developer.android.com/jetpack/androidx/releases/lifecycle)**.
* **ViewModel** for implementing MVVM pattern.
* **[Coil](https://coil-kt.github.io/coil/)** for working with images.
* **[Hilt](https://dagger.dev/hilt/)** for dependency injection.
## License
```
MIT License
Copyright (c) 2022 Shvyndia Andrii
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
``` | 47.363636 | 158 | 0.75547 | eng_Latn | 0.446232 |
43bf7fe904e3ba4f4ad43aa7fd4c801499fa094c | 413 | md | Markdown | README.md | nmajor/my-mern-starter | e50b50b1b1ad5b495570cd7fb3279378c5093ed4 | [
"MIT"
] | null | null | null | README.md | nmajor/my-mern-starter | e50b50b1b1ad5b495570cd7fb3279378c5093ed4 | [
"MIT"
] | null | null | null | README.md | nmajor/my-mern-starter | e50b50b1b1ad5b495570cd7fb3279378c5093ed4 | [
"MIT"
] | null | null | null | 
Fork of [mern-starter](https://github.com/Hashnode/mern-starter) with my personal customizations and changes.
$ mkdir [app-name]
$ git init .
$ git pull git@github.com:nmajor/my-mern-starter.git
## License
MERN is released under the [MIT License](http://www.opensource.org/licenses/MIT).
| 37.545455 | 109 | 0.74092 | eng_Latn | 0.278414 |
43c08df6eeb6c23934162e82eb109915245b06ab | 624 | md | Markdown | README.md | lifeModder19135/Projectify-Desktop-Projectifier | 8739c1586e05ca1e184621236277dd7931d8a360 | [
"Apache-2.0"
] | null | null | null | README.md | lifeModder19135/Projectify-Desktop-Projectifier | 8739c1586e05ca1e184621236277dd7931d8a360 | [
"Apache-2.0"
] | null | null | null | README.md | lifeModder19135/Projectify-Desktop-Projectifier | 8739c1586e05ca1e184621236277dd7931d8a360 | [
"Apache-2.0"
] | null | null | null | # Projectify Desktop Projectifier
ABOUT APPLICATION:
Projectify Desktop Projectifier, IN THIS EARLY STAGE, IS A Desktop App which provides a UI, in a familiar format for coders, for creating organized and user-input-dependant file directories that represent projects
ABOUT PROJECT:
TODO- Describe "hopefully open-sourced; possibly licensed"
TODO- mention 3 feature suggestion methods
1. web-form
2. edit main XML; pull req
3. add xml file; pull req
4. edit main XML; email me
5. add xml file; email me
ABOUT REPOSITORY:
TODO- design/describe method for logging ideas for ne features from me and others | 32.842105 | 214 | 0.772436 | eng_Latn | 0.951303 |
43c08f5c1ac845740c61f9ceb62aab5a90b58b35 | 2,494 | md | Markdown | articles/digital-twins/how-to-use-cli.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/digital-twins/how-to-use-cli.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/digital-twins/how-to-use-cli.md | IrisClasson/azure-docs.sv-se | a6a2b03ee9a98c9e3708bf0df9f77628db79f1f6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Använda Azure Digital Twins CLI
titleSuffix: Azure Digital Twins
description: Se hur du kommer igång med och använder Azure Digitals flätade CLI.
author: baanders
ms.author: baanders
ms.date: 05/25/2020
ms.topic: how-to
ms.service: digital-twins
ms.openlocfilehash: 7f13dc3e86b21a3f4113a7a7c6f477f239315a27
ms.sourcegitcommit: 11e2521679415f05d3d2c4c49858940677c57900
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 07/31/2020
ms.locfileid: "87499100"
---
# <a name="use-the-azure-digital-twins-cli"></a>Använda Azure Digital Twins CLI
Förutom att hantera Azure Digitals-instansen i Azure Portal, har Azure Digitals-dubbla, ett **kommando rads gränssnitt (CLI)** som du kan använda för att utföra de flesta större åtgärder med tjänsten, inklusive:
* Hantera en digital Azure-instans
* Hantera modeller
* Hantera digitala dubbla
* Hantera dubbla relationer
* Konfigurera slut punkter
* Hantera [vägar](concepts-route-events.md)
* Konfigurera [säkerhet](concepts-security.md) via rollbaserad åtkomst kontroll (RBAC)
## <a name="uses-deploy-and-validate"></a>Använder (distribuera och validera)
Förutom att hantera instansen generellt är CLI också ett användbart verktyg för distribution och validering.
* Kontroll Plans kommandona kan användas för att göra distributionen av en ny instans repeterbar eller automatiserad.
* Du kan använda data Plans kommandon för att snabbt kontrol lera värden i din instans och kontrol lera att åtgärderna har slutförts som förväntat.
## <a name="get-the-extension"></a>Hämta tillägget
Azures digitala dubbla kommandon är en del av [Azure IoT-tillägget för Azure CLI](https://github.com/Azure/azure-iot-cli-extension). Du kan visa referens dokumentationen för de här kommandona som en del av `az iot` kommando uppsättningen: [AZ DT](https://docs.microsoft.com/cli/azure/ext/azure-iot/dt?view=azure-cli-latest).
Du kan kontrol lera att du har den senaste versionen av tillägget med de här stegen. Du kan köra dessa kommandon i [Azure Cloud Shell](../cloud-shell/overview.md) eller i ett [lokalt Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli?view=azure-cli-latest).
[!INCLUDE [digital-twins-cloud-shell-extensions.md](../../includes/digital-twins-cloud-shell-extensions.md)]
## <a name="next-steps"></a>Nästa steg
Ett alternativ till CLI-kommandon finns i hantera en digital Azure-instans med API: er och SDK: er:
* [*Anvisningar: använda Azures digitala dubbla API: er och SDK: er*](how-to-use-apis-sdks.md)
| 54.217391 | 324 | 0.787089 | swe_Latn | 0.990857 |
43c1313f8ba1ed40134c12869bd9b57e1f9e433b | 1,966 | md | Markdown | exampleSite/content/chicken-bulgogi.md | tuskbuddy/site | 168537c0ef6b48c57e9c21aa785abd0bce9d68a8 | [
"MIT"
] | null | null | null | exampleSite/content/chicken-bulgogi.md | tuskbuddy/site | 168537c0ef6b48c57e9c21aa785abd0bce9d68a8 | [
"MIT"
] | null | null | null | exampleSite/content/chicken-bulgogi.md | tuskbuddy/site | 168537c0ef6b48c57e9c21aa785abd0bce9d68a8 | [
"MIT"
] | null | null | null | +++
categories = ["Asian"]
date = 2020-12-17T23:13:00Z
description = "Korean greatness."
image = "/images/chicken-bulgogi.png"
tags = ["spicy", "chicken", "Asian"]
title = "Chicken Bulgogi"
type = "post"
+++
Also known as dak bulgogi, this spicy, tender dish is surprisingly simple. Just marinade thin strips of meat overnight in a bag of gochujang mixed with other sweet & savory ingredients, cook it up on medium-high heat or over the grill, and serve over rice with sweet BBQ chips and freshly diced cucumbers on the side.
#### Where's this recipe from?
[Damn Delicious](https://damndelicious.net/2019/04/21/korean-beef-bulgogi/ "DD"), which employs two food stylists, a videographer, operations manager, and two wonderful [mascots](https://damndelicious.net/author/buttersrhee/ "doggs"). Do note that I've adapted this recipe from beef to chicken. This particular recipe also has a little less filler text before the directions than what most recipes have, so it deserves some bonus points for brevity.
#### Anything to know before making it?
Sub chicken thighs for flank steak, of course. The linked recipe uses steak over a grill pan, which I'm sure is delicious -- I used the chicken in a skillet over stove top and it was great. Chicken breasts are far more prone to drying out than thighs, so I recommend using only beef or chicken thighs for this recipe.
I didn't freeze the meat for 30 minutes, but you may want to do so if you're using steak.
The pear may seem inconsequential, but between this recipe and another almost identical recipe sans pear, this one is better. Pears are a tragically underrated fruit, and I'm sure a pear tree in California would be awfully grateful if you used the literal fruit of their labor for a worthy cause. I used a Bartlett pear and it was a tad sweeter and juicier than without.
I also put the vegetable oil and >2 green onions into the marinade the first time, whoops, but it turned out to be delicious anyway. | 78.64 | 449 | 0.770092 | eng_Latn | 0.999517 |
43c227716217068729a4d87fd117f253e6bb7f8e | 17,998 | md | Markdown | node_modules/grunt-assemble/README.md | TeriyakiCode/Hermes | aa97afc5f3007e3c3ac8ab1389a00e1149e4d3d5 | [
"MIT"
] | null | null | null | node_modules/grunt-assemble/README.md | TeriyakiCode/Hermes | aa97afc5f3007e3c3ac8ab1389a00e1149e4d3d5 | [
"MIT"
] | null | null | null | node_modules/grunt-assemble/README.md | TeriyakiCode/Hermes | aa97afc5f3007e3c3ac8ab1389a00e1149e4d3d5 | [
"MIT"
] | null | null | null | # grunt-assemble [](http://badge.fury.io/js/grunt-assemble) [](https://travis-ci.org/assemble/grunt-assemble)
> Static site generator for Grunt.js, Yeoman and Node.js. Used by Zurb Foundation, Zurb Ink, H5BP/Effeckt, Less.js / lesscss.org, Topcoat, Web Experience Toolkit, and hundreds of other projects to build sites, themes, components, documentation, blogs and gh-pages.
### [Visit the website →](http://assemble.io)
### Warning!
> Versions of `grunt-assemble` below `0.2.0` have been deprecated and can be found on the [`0.1.15-deprecated` branch](https://github.com/assemble/grunt-assemble/tree/0.1.15-deprecated).
Versions of `grunt-assemble` at and above `0.2.0` contain the code from the original `assemble` up to version `0.4.42`.
## Why use Assemble?
1. Most popular site generator for Grunt.js and Yeoman. Assemble is used to build hundreds of web projects, ranging in size from a single page to 14,000 pages (that we're aware of!). [Let us know if you use Assemble](https://github.com/assemble/assemble/issues/300).
1. Allows you to carve your HTML up into reusable fragments: partials, includes, sections, snippets... Whatever you prefer to call them, Assemble does that.
1. Optionally use `layouts` to wrap your pages with commonly used elements and content.
1. "Pages" can either be defined as HTML/templates, JSON or YAML, or directly inside the Gruntfile.
1. It's awesome. Lol just kidding. But seriously, Assemble... is... awesome! and it's fun to use.
...and of course, we use Assemble to build the project's own documentation [http://assemble.io](http://assemble.io):
**For more:** hear Jon Schlinkert and Brian Woodward discuss Assemble on **Episode 98 of the [Javascript Jabber Podcast](http://javascriptjabber.com/098-jsj-assemble-io-with-brian-woodward-and-jon-schlinkert/)**.

## The "assemble" task
### Getting Started
Assemble requires Grunt `~0.4.1`
_If you haven't used [grunt][] before, be sure to check out the [Getting Started][] guide._
From the same directory as your project's [Gruntfile][Getting Started] and [package.json][], install Assemble with the following command:
```bash
npm install grunt-assemble --save-dev
```
Once that's done, add this line to your project's Gruntfile:
```js
grunt.loadNpmTasks('grunt-assemble');
```
### The "assemble" task
_Run the "assemble" task with the `grunt assemble` command._
Task targets, files and options may be specified according to the grunt [Configuring tasks](http://gruntjs.com/configuring-tasks) guide.
In your project's Gruntfile, add a section named `assemble` to the data object passed into `grunt.initConfig()`.
```js
assemble: {
options: {
assets: 'assets',
plugins: ['permalinks'],
partials: ['includes/**/*.hbs'],
layout: ['layouts/default.hbs'],
data: ['data/*.{json,yml}']
},
site: {
src: ['docs/*.hbs'],
dest: './'
}
},
```
[grunt]: http://gruntjs.com/
[Getting Started]: https://github.com/gruntjs/grunt/blob/devel/docs/getting_started.md
[package.json]: https://npmjs.org/doc/json.html
### Options
See the documentation for [Options](http://assemble.io/docs/Options.html) for more information.
### [assets](http://assemble.io/docs/options-assets.html)
Type: `String`
Default: `undefined`
Used with the `{{assets}}` variable to resolve the relative path from the _dest file_ to the _assets_ folder.
### [data](http://assemble.io/docs/options-data.html)
Type: `String|Array|Object`
Default: `src/data`
Specify the data to supply to your templates. Data may be formatted in `JSON`, `YAML`, [YAML front matter](http://assemble.io/docs/YAML-front-matter.html), or passed directly as an object. Wildcard patterns may also be used.
The filenames of the selected files must not collide with the [configuration options key names](http://assemble.io/docs/Options.html#configuration-options) for the assemble build task. For example, the files must not be called `assets.yml`,`collections.json`,….
### [layoutdir](http://assemble.io/docs/options-layoutdir.html)
Type: `String`
Default: `undefined`
The directory to use as the "cwd" for [layouts](http://assemble.io/docs/options-layout.html). When this option is defined, layouts may be defined using only the name of the layout.
### [layout](http://assemble.io/docs/options-layout.html)
Type: `String`
Default: `undefined`
If set, this defines the layout file to use for the [task or target][tasks-and-targets]. However, when specifying a layout, unlike Jekyll, _Assemble requires a file extension_ since you are not limited to using a single file type.
### layoutext
Type: `String`
Default: `undefined`
Specify the extension to use for layouts, enabling layouts in YAML front matter to be defined without an extension:
```yaml
---
layout: default
---
```
[tasks-and-targets]: http://gruntjs.com/configuring-tasks#task-configuration-and-targets
### [partials](http://assemble.io/docs/options-partials.html)
Type: `String|Array`
Default: `undefined`
Specifies the Handlebars partials files, or paths to the directories of files to be used.
### [plugins](http://assemble.io/plugins/)
Type: `String|Array`
Default: `undefined`
Name of the npm module to use and/or the path(s) to any custom plugins to use. Wildcard patterns may also be used.
See the [docs for plugins](http://assemble.io/plugins/).
### [helpers](http://assemble.io/docs/options-helpers.html)
Type: `String|Array`
Default: [handlebars-helpers](http://github.com/assemble/handlebars-helpers)
Name of the npm module to use and/or the path(s) to any custom helpers to use with the current template engine. Wildcard patterns may also be used.
By default, Assemble includes [handlebars-helpers]((http://assemble.io/helpers/)) as a dependency, so any helpers from that library are already available to be used in your templates.
See the [docs for helpers](http://assemble.io/helpers/).
### [ext](http://assemble.io/docs/options-ext.html)
Type: `String`
Default: `.html`
Specify the file extension for destination files. Example:
### [marked](http://assemble.io/docs/options-marked.html)
Type: `Object`
Default: [Marked.js defaults](https://github.com/chjj/marked#options-1)
Specify the [Marked.js options](https://github.com/chjj/marked#options-1) for the `{{#markdown}}{{/markdown}}` and `{{md ""}}` helpers to use when converting content.
### [engine](http://assemble.io/docs/options-engine.html)
Type: `String`
Default: `Handlebars`
Specify the engine to use for compiling templates **if you are not using Handlebars**.
Also see [assemble-swig](https://github.com/assemble/assemble-swig) for compiling [Swig Templates](https://github.com/paularmstrong).
### flatten
Type: `Boolean`
Default: `false`
Remove anything after (and including) the first `.` in the destination path, then append this value. In other words, when files are generated from different source folders this "flattens" them into the same destination directory. See [building the files object dynamically](http://gruntjs.com/configuring-tasks#building-the-files-object-dynamically) for more information on `files` formats.
Visit [Assemble's documentation](http://assemble.io) for more information about options.
### Usage Examples
Simple example of using data files in both `.json` and `.yml` format to build Handlebars templates.
```javascript
assemble: {
options: {
data: 'src/data/**/*.{json,yml}'
},
docs: {
files: {
'dist/': ['src/templates/**/*.hbs']
}
}
}
```
#### Using multiple targets
```js
assemble: {
options: {
assets: 'assets',
layoutdir: 'docs/layouts'
partials: ['docs/includes/**/*.hbs'],
data: ['docs/data/**/*.{json,yml}']
},
site: {
options: {
layout: 'default.hbs'
},
src: ['templates/site/*.hbs'],
dest: './'
},
blog: {
options: {
layout: 'blog-layout.hbs'
},
src: ['templates/blog/*.hbs'],
dest: 'articles/'
},
docs: {
options: {
layout: 'docs-layout.hbs'
},
src: ['templates/docs/*.hbs'],
dest: 'docs/'
}
},
```
Visit [Assemble's documentation](http://assemble.io) for many more examples and pointers on getting started.
## Contributing
In lieu of a formal styleguide, take care to maintain the existing coding style. Add unit tests for any new or changed functionality, and please re-build the documentation with [grunt-verb](https://github.com/assemble/grunt-verb) before submitting a pull request.
## Assemble plugins
Here are some related projects you might be interested in from the [Assemble](http://assemble.io) core team.
+ [assemble-middleware-anchors](https://github.com/assemble/assemble-middleware-anchors): Assemble middleware for creating anchor tags from generated html.
+ [assemble-middleware-contextual](https://github.com/assemble/assemble-middleware-contextual): Assemble middleware for generating a JSON file containing the context of each page. Basic middleware to help see what's happening in the build.
+ [assemble-middleware-decompress](https://github.com/assemble/assemble-middleware-decompress): Assemble plugin for extracting zip, tar and tar.gz archives.
+ [assemble-middleware-download](https://github.com/assemble/assemble-middleware-download): Assemble middleware for downloading files from GitHub.
+ [assemble-middleware-drafts](https://github.com/assemble/assemble-middleware-drafts): Assemble middleware (v0.5.0) for preventing drafts from being rendered.
+ [assemble-middleware-i18n](https://github.com/assemble/assemble-middleware-i18n): Assemble middleware for adding i18n support to projects.
+ [assemble-middleware-lunr](https://github.com/assemble/assemble-middleware-lunr): Assemble middleware for creating a search engine within your static site using lunr.js.
+ [assemble-middleware-permalinks](https://github.com/assemble/assemble-middleware-permalinks): Permalinks middleware for Assemble, the static site generator for Grunt.js and Yeoman. This plugin enables powerful and configurable URI replacement patterns, presets, uses Moment.js for parsing dates, and much more.
+ [assemble-middleware-rss](https://github.com/assemble/assemble-middleware-rss): Assemble middleware for creating RSS feeds with Assemble. (NOT published yet!)
+ [assemble-middleware-sitemap](https://github.com/assemble/assemble-middleware-sitemap): Assemble middleware for generating sitemaps.
+ [assemble-middleware-toc](https://github.com/assemble/assemble-middleware-toc): Assemble middleware for creating a table of contents in the generated HTML, using Cheerio.js
+ [assemble-middleware-wordcount](https://github.com/assemble/assemble-middleware-wordcount): Assemble middleware for displaying a word-count, and estimated reading time on blog posts or pages.
Visit [assemble.io/assemble-middleware](http:/assemble.io/assemble-middleware/) for more information about [Assemble](http:/assemble.io/) middleware.
## Authors
**Jon Schlinkert**
+ [github/jonschlinkert](https://github.com/jonschlinkert)
+ [twitter/jonschlinkert](http://twitter.com/jonschlinkert)
**Brian Woodward**
+ [github/doowb](https://github.com/doowb)
+ [twitter/doowb](http://twitter.com/doowb)
## Release History
**DATE** **VERSION** **CHANGES**
* 2014-07-07 v0.4.41 Updating resolve-dep dependency.
* 2014-06-13 v0.4.38 Use gray-matter instead of assemble-yaml.,Updates dependencies. Minor
refactoring and new utils to get rid of a couple of dependencies.,Update
the loaders for plugins and helpers to use resolve-dep. Should be more
reliable now.
* 2013-10-25 v0.4.17 Adds a params object to the call to `helper.register` allowing grunt and
assemble to be passed in and used from inside helpers.
* 2013-10-24 v0.4.16 Adds support for using wildcards with plugins stages.
* 2013-10-24 v0.4.15 Implements multiple plugin stages.
* 2013-10-21 v0.4.14 Adds support for plugins running once, before and after (thanks
@adjohnson916).,Adds pagination!,Thanks to @xzyfer, `options.data` can now
also directly accept an object of data.
* 2013-10-12 v0.4.13 Adds `originalAssets` property to root context to store the pre-calculated
assets path
* 2013-10-05 v0.4.12 Fixes plugins resolving for devDependencies.
* 2013-10-03 v0.4.11 Adds filePair to page object. thanks @adjohnson916!
* 2013-10-02 v0.4.10 Adds plugin support to Assemble using the `plugins` option. thanks
@adjohnson916!
* 2013-10-02 v0.4.9 Adds `layoutext` and `postprocess` options.
* 2013-09-30 v0.4.8 Assemble now builds 30-50% faster due to some refactoring to async and how
context is calculated.
* 2013-09-20 v0.4.7 Adds grunt-readme to make it easier to keep the readme updated using
templates.,Keep options.partials intact so they can be used in helpers.
* 2013-09-15 v0.4.6 Updating how the assets path is calculated.,Adding resolve-dep and ability
to load helpers from node modules using minimatch patterns
* 2013-09-03 v0.4.5 Bug fix: allow page content containing $.,Add alias metadata for data on
pages configuration object.
* 2013-08-01 v0.4.4 Adds "nested layouts",Adds option for pages in JSON/YAML collections to be
defined as either objects or keys in an array.
* 2013-08-01 v0.4.3 Adds "options.pages" for passing in an array of pages in JSON or YAML
format.
* 2013-06-20 v0.4.0 Adds "layoutdir" option for defining the directory to be used for layouts.
If layoutdir is defined, then layouts may be defined using only the name of
the layout.
* 2013-06-10 v0.3.81 Adds additional ways to load custom helpers. Now it's possible to use a
glob pattern that points to a list of scripts with helpers to load.,Adds
examples and tests on how to use the new custom helper loading methods.
* 2013-06-01 v0.3.80 Fixing bug with null value in engine
* 2013-05-07 v0.3.77 Updated README with info about assemble methods
* 2013-04-28 v0.3.74 Updating the assemble library to use the assemble-utils repo and
unnecessary code.
* 2013-04-21 v0.3.73 Fixing how the relative path helper worked and showing an example in the
footer of the layout. This example is hidden, but can be seen by doing view
source.
* 2013-04-20 v0.3.72 Fixing the layout override issue happening in the page yaml headers.
Something was missed during refactoring.
* 2013-04-19 v0.3.9 Adds tags and categories to the root context and ensure that the current
page context values don't override the root context values.
* 2013-04-18 v0.3.8 Updating to use actual assets property from current page.
* 2013-04-17 v0.3.7 Cleaning up some unused folders and tests
* 2013-04-16 v0.3.6 Fixed missing assets property.
* 2013-04-16 v0.3.5 Adds a sections array to the template engine so it can be used in helpers.
* 2013-04-11 v0.3.4 More tests for helpers and global variables, organized tests. A number of
bug fixes.
* 2013-04-06 v0.3.3 helper-lib properly externalized and wired up. Global variables for
filename, ext and pages
* 2013-03-22 v0.3.22 Merged global and target level options so data and partial files can be
joined
* 2013-03-22 v0.3.21 Valid YAML now allowed in options.data object (along with JSON)
* 2013-03-18 v0.3.14 new relative helper for resolving relative paths
## License
Copyright (c) 2015 Jon Schlinkert, Brian Woodward, contributors.
Released under the MIT license
***
_This file was generated by [grunt-verb](https://github.com/assemble/grunt-verb) on March 16, 2015._ | 54.871951 | 390 | 0.643572 | eng_Latn | 0.929621 |
43c23bb76da6382ca10ffd2d1abd4450905d3d9a | 432 | md | Markdown | docs/aesi-intellij/org.metaborg.paplj.psi.impl/-paplj-new-expr-impl/get-brace-r.md | Virtlink/aesi | f7af2531913d9809f3429c0d8e3a82fb695d5ba4 | [
"Apache-2.0"
] | 1 | 2017-10-29T16:44:26.000Z | 2017-10-29T16:44:26.000Z | docs/aesi-intellij/org.metaborg.paplj.psi.impl/-paplj-new-expr-impl/get-brace-r.md | Virtlink/aesi | f7af2531913d9809f3429c0d8e3a82fb695d5ba4 | [
"Apache-2.0"
] | null | null | null | docs/aesi-intellij/org.metaborg.paplj.psi.impl/-paplj-new-expr-impl/get-brace-r.md | Virtlink/aesi | f7af2531913d9809f3429c0d8e3a82fb695d5ba4 | [
"Apache-2.0"
] | null | null | null | ---
title: PapljNewExprImpl.getBraceR - aesi-intellij
---
[aesi-intellij](../../index.html) / [org.metaborg.paplj.psi.impl](../index.html) / [PapljNewExprImpl](index.html) / [getBraceR](.)
# getBraceR
`@NotNull open fun getBraceR(): @NotNull PsiElement`
Overrides [PapljExprImpl.getBraceR](../-paplj-expr-impl/get-brace-r.html)
Overrides [PapljNewExpr.getBraceR](../../org.metaborg.paplj.psi/-paplj-new-expr/get-brace-r.html)
| 28.8 | 130 | 0.715278 | yue_Hant | 0.425957 |
43c2428f96bac1885e8d9fb219ca5e57f82a9431 | 263 | md | Markdown | operations_research_course/intro.md | noce2/operations_research_course | 34039c8ad0f1df44b4d2736041d9da0edffa26c4 | [
"MIT"
] | 2,109 | 2020-05-02T23:47:18.000Z | 2022-03-31T22:16:54.000Z | operations_research_course/intro.md | noce2/operations_research_course | 34039c8ad0f1df44b4d2736041d9da0edffa26c4 | [
"MIT"
] | 1,158 | 2020-04-29T18:07:02.000Z | 2022-03-31T21:50:57.000Z | operations_research_course/intro.md | noce2/operations_research_course | 34039c8ad0f1df44b4d2736041d9da0edffa26c4 | [
"MIT"
] | 360 | 2020-04-29T14:44:49.000Z | 2022-03-31T09:26:23.000Z | # Welcome to your Jupyter Book
This is a small sample book to give you a feel for how book content is
structured.
:::{note}
Here is a note!
:::
And here is a code block:
```
e = mc^2
```
Check out the content pages bundled with this sample book to see more.
| 15.470588 | 70 | 0.69962 | eng_Latn | 0.999937 |
43c2cebc427cae6e4455cb744ef0756f5a8afd3a | 6,267 | md | Markdown | src/components/autosuggest/_macro-options.md | MebinAbraham/design-system | f9fa09bc9ff890a48990a5adeefc9eb68a38ea15 | [
"MIT"
] | null | null | null | src/components/autosuggest/_macro-options.md | MebinAbraham/design-system | f9fa09bc9ff890a48990a5adeefc9eb68a38ea15 | [
"MIT"
] | null | null | null | src/components/autosuggest/_macro-options.md | MebinAbraham/design-system | f9fa09bc9ff890a48990a5adeefc9eb68a38ea15 | [
"MIT"
] | null | null | null | | Name | Type | Required | Description |
| ----------------------- | ----------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| autosuggestData | string | true | URL of the JSON file with the autosuggest data that needs to be searched |
| APIDomain | string | false | Set an api domain when using an external API to suggest results |
| APIDomainBearerToken | string | false | Set a bearer token for api authorization on the AIMS address api. Defaults to basic auth |
| allowMultiple | boolean | false | Allows the component to accept multiple selections |
| instructions | string | true | Instructions on how to use the autosuggest that will be read out by screenreaders |
| ariaYouHaveSelected | string | true | Aria message to tell the user that they have selected an answer |
| ariaMinChars | string | true | Aria message to tell the user how many charecters they need to enter before autosuggest will start |
| ariaResultsLabel | string | true | Aria message to tell the user that suggestions are available |
| ariaOneResult | string | true | Aria message to tell the user there is only one suggestion left |
| ariaNResults | string | true | Aria message to tell the user how many suggestions are left |
| ariaLimitedResults | string | true | Aria message to tell the user if the results have been limited and what they are limited to |
| ariaGroupedResults | string | true | Aria message to tell the user about a grouped result e.g There are {n} for {x} |
| groupCount | string | true | Aria message to tell the user the number of addresses in a group e.g. {n} addresses |
| moreResults | string | true | Aria message to tell the user to continue to type to refine suggestions |
| noResults | string | true | message to tell the user there are no results |
| tooManyResults | string | true | message to tell the user there are too many results to display and the user should refine the search |
| typeMore | string | true | message to encourage the user to enter more characters to get suggestions |
| resultsTitle | string | true | Title of results to be displayed on screen at the top of the results |
| errorTitle | string | false | Error message title displayed in the error panel |
| errorMessageEnter | string | false | Error message description displayed in the error panel when the input is empty |
| errorMessageSelect | string | false | Error message description displayed in the error panel when a suggestion has not been selected |
| errorMessageAPI | string | false | Error message displayed when the API has failed during a search |
| errorMessageAPILinkText | string | false | Link text used to toggle to a manual mode when using the address macro |
| isEditable | boolean | false | Used with the address macro to invoke population of manual fields upon selection of suggestion |
| options | `Object<Options>` | false | Option to provide key value pairs that will be added as data attributes to the component that will be added as parameters to the address index api |
| mandatory | boolean | false | Set the autosuggest input to be mandatory and use client side validation for empty form submission |
## Options
| Name | Type | Required | Description |
| ---------- | ------ | -------- | ------------------------------------------------ |
| regionCode | string | false | Sets the provided region code e.g. en-gb |
| adressType | string | false | Sets the provided address type e.g. resedential |
| oneYearAgo | string | false | If "true" will set a query parameter of epoch=75 |
| 169.378378 | 207 | 0.401149 | eng_Latn | 0.995854 |
43c2ef707058e80b5810ef4ede229481d1c864a3 | 1,786 | md | Markdown | hugo-source/content/output/sahakAra nagar, bengaLUru/MULTI_NEW_MOON_SIDEREAL_MONTH_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2020_monthly/2020-05/2020-05-05.md | baskar-yahoo/jyotisha | 5a177a4b2a6b7890554471129158be43e89cc657 | [
"MIT"
] | null | null | null | hugo-source/content/output/sahakAra nagar, bengaLUru/MULTI_NEW_MOON_SIDEREAL_MONTH_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2020_monthly/2020-05/2020-05-05.md | baskar-yahoo/jyotisha | 5a177a4b2a6b7890554471129158be43e89cc657 | [
"MIT"
] | null | null | null | hugo-source/content/output/sahakAra nagar, bengaLUru/MULTI_NEW_MOON_SIDEREAL_MONTH_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2020_monthly/2020-05/2020-05-05.md | baskar-yahoo/jyotisha | 5a177a4b2a6b7890554471129158be43e89cc657 | [
"MIT"
] | null | null | null | +++
title = "2020-05-05"
+++
## वैशाखः-02-13,कन्या-हस्तः🌛🌌◢◣मेषः-अपभरणी-01-22🌌🌞◢◣माधवः-02-16🪐🌞
- Indian civil date: 1942-02-15, Islamic: 1441-09-12 Ramaḍān
- संवत्सरः - शार्वरी
- वर्षसङ्ख्या 🌛- शकाब्दः 1942, विक्रमाब्दः 2077, कलियुगे 5121
___________________
- 🪐🌞**ऋतुमानम्** — वसन्तऋतुः उत्तरायणम्
- 🌌🌞**सौरमानम्** — वसन्तऋतुः उत्तरायणम्
- 🌛**चान्द्रमानम्** — वसन्तऋतुः वैशाखः
___________________
## खचक्रस्थितिः
- |🌞-🌛|**तिथिः** — शुक्ल-त्रयोदशी►23:21; शुक्ल-चतुर्दशी►
- **वासरः**—मङ्गलः
- 🌌🌛**नक्षत्रम्** — हस्तः►16:37; चित्रा► (तुला)
- 🌌🌞**सौर-नक्षत्रम्** — अपभरणी►
___________________
- 🌛+🌞**योगः** — वज्रम्►24:40*; सिद्धिः►
- २|🌛-🌞|**करणम्** — कौलवः►13:08; तैतिलः►23:21; गरः►
- 🌌🌛- **चन्द्राष्टम-राशिः**—मीनः
## दिनमान-कालविभागाः
- 🌅**सूर्योदयः**—06:00-12:16🌞️-18:32🌇
- 🌛**चन्द्रोदयः**—16:35; **चन्द्रास्तमयः**—04:46(+1)
___________________
- 🌞⚝भट्टभास्कर-मते वीर्यवन्तः— **प्रातः**—06:00-07:34; **साङ्गवः**—09:08-10:42; **मध्याह्नः**—12:16-13:50; **अपराह्णः**—15:24-16:58; **सायाह्नः**—18:32-19:58
- 🌞⚝सायण-मते वीर्यवन्तः— **प्रातः-मु॰1**—06:00-06:50; **प्रातः-मु॰2**—06:50-07:40; **साङ्गवः-मु॰2**—09:21-10:11; **पूर्वाह्णः-मु॰2**—11:51-12:41; **अपराह्णः-मु॰2**—14:21-15:11; **सायाह्णः-मु॰2**—16:51-17:42; **सायाह्णः-मु॰3**—17:42-18:32
- 🌞कालान्तरम्— **ब्राह्मं मुहूर्तम्**—04:28-05:14; **मध्यरात्रिः**—23:07-01:25
___________________
- **राहुकालः**—15:24-16:58; **यमघण्टः**—09:08-10:42; **गुलिककालः**—12:16-13:50
___________________
- **शूलम्**—उदीची दिक् (►11:01); **परिहारः**–क्षीरम्
___________________
## उत्सवाः
- प्रदोष-व्रतम्
### प्रदोष-व्रतम्
#### Details
- [Edit config file](https://github.com/jyotisham/adyatithi/tree/master/time_focus/monthly/pradoSha/description_only/pradOSa-vratam.toml)
- Tags: MonthlyVratam PradoshaVratam
| 35.019608 | 239 | 0.520157 | san_Deva | 0.255877 |
43c3591663575300e393100a2f4d346fa080d9e5 | 744 | md | Markdown | codeStyle.md | dracupid/my-awesome | 141221441f3d58300048575233947f720a53670a | [
"CC0-1.0"
] | null | null | null | codeStyle.md | dracupid/my-awesome | 141221441f3d58300048575233947f720a53670a | [
"CC0-1.0"
] | null | null | null | codeStyle.md | dracupid/my-awesome | 141221441f3d58300048575233947f720a53670a | [
"CC0-1.0"
] | null | null | null | - My common preference
+ Indent using 4 spaces.
+ Always using UTF-8 encoding and Unix-style line endings(LF / \n).
+ Never start a line with an operator or punctuation (ignore spaces).
- [Case Styles](https://en.wikipedia.org/wiki/Letter_case#Special_case_styles)
## HTML
- [Code Guide by mdo](http://codeguide.co/#html)
## CSS
kebab-case
- [Code Guide by mdo](http://codeguide.co/#css)
- [Oxygen](http://www.oxygencss.com/): An object-oriented methodology for stylesheets.
## JS
- [C/c]amelCase
#### React/jsx
[Airbnb React/JSX Style Guide](https://github.com/airbnb/javascript/tree/master/react) [CN](https://github.com/dwqs/react-style-guide)
## python
snake_case
- [PEP 8](http://legacy.python.org/dev/peps/pep-0008/)
| 26.571429 | 134 | 0.706989 | yue_Hant | 0.645059 |
43c36a0cb3a3f161acb7ddbe95917fd8f9ba3db2 | 28,530 | md | Markdown | notes/Professional/NgRx Workshop at ng-conf 2020.md | alfredoperez/alfredos-obsidian-vault | 15e84a09fb6cb7cd80d4b4c4cae5ffd6708cac8f | [
"MIT"
] | null | null | null | notes/Professional/NgRx Workshop at ng-conf 2020.md | alfredoperez/alfredos-obsidian-vault | 15e84a09fb6cb7cd80d4b4c4cae5ffd6708cac8f | [
"MIT"
] | null | null | null | notes/Professional/NgRx Workshop at ng-conf 2020.md | alfredoperez/alfredos-obsidian-vault | 15e84a09fb6cb7cd80d4b4c4cae5ffd6708cac8f | [
"MIT"
] | null | null | null | ---
title: NgRx Workshop at ng-conf 2020
authors: ["Mike Ryan", "Brandon Roberts", "Alex Okrushko"]
tags: ["angular"]
type: workshop
created: 3/09/21
updated: 3/9/21
---
# Introduction
## When to use it?
This question was several times and basically you can use the **SHARI** principle to find out if ngrx or redux is needed for your application or even a specific component inside your app.
{% twitter 988074549983023104 %}
From the [docs](https://ngrx.io/docs#when-should-i-use-ngrx-for-state-management) we get the following:
>`@ngrx/store` \(or Redux in general\) provides us with a lot of great features and can be used in a lot of use cases. But sometimes this pattern can be an overkill. Implementing it means we get the downside of using Redux \(a lot of extra code and complexity\) without benefiting of the upsides \(predictable state container and unidirectional data flow\).
>The NgRx core team has come up with a principle called **SHARI**, that can be used as a rule of thumb on data that needs to be added to the store.
>* **Shared**: State that is shared between many components and services
>* **Hydrated**: State that needs to be persisted and hydrated across page reloads
>* **Available**: State that needs to be available when re-entering routes
>* **Retrieved**: State that needs to be retrieved with a side effect, e.g. an HTTP request
>* **Impacted**: State that is impacted by other components
>Try not to over-engineer your state management layer. Data is often fetched via XHR requests or is being sent over a WebSocket, and therefore is handled on the server side. Always ask yourself **when** and **why** to put some data in a client side store and keep alternatives in mind. For example, use routes to reflect applied filters on a list or use a `BehaviorSubject` in a service if you need to store some simple data, such as settings. Mike Ryan gave a very good talk on this topic: [You might not need NgRx](https://youtu.be/omnwu_etHTY)
Also, the following question was asked:
> Once you add ngrx to an application, is it a bad practice to use a service with BehaviorSubject to communicate between components? Example. Having a modal with a form that uses a stepper. Should the modal use NGRX because the whole app is using ngrx.
The response from Mike was:
> **No, you do not have to use ngrx for all your states.**
> We have this term called SHARI principle and essentially the only state that goes into the ngrx store is a truly global state, state that is shared between your components, state that needs to be rehydrated when the application boots up or is impacted by actions of other features.
> If the state doesn't match that rubric, then **it is totally fine to use a simple state storage mechanism** for it as a service with a Behavior Subject.
Another related comment by Alex:
> One of the things I remember myself when I started using ngrx, I tried to **push anything** ngrx mostly because I wanted the dev tools to the time travel and I think some of the features that are exciting and helpful can misguide you. The moment of realization for me is not to push everything possible into it, at that time we were learning, as the pattern was evolving, now is kind of known what are the best practices.
> Some of the things I **_don't put in the store_** are **Form Controls** (since it is already a reactive state), **the local state within components** (even if it interacts with the network), some of the lifecycle state for a component. Basically, **the first thing I asked myself is how shareable is the state, how much does the application needs to know about this.**
Here is a link to a re-usable service that uses BehaviorSubject and can be used for simpler communication cases
{% link https://dev.to/alfredoperez/angular-service-to-handle-state-using-behaviorsubject-4818 %}
Here is a library by Dan Wahlin that can help for simple state management:
{% github DanWahlin/Observable-Store%}
### Resources
* [Reducing the Boilerplate with NgRx](https://www.youtube.com/watch?v=t3jx0EC-Y3c) by Mike Ryan and Brandon Roberts
* [Do we really need @ngrx/store](https://blog.strongbrew.io/do-we-really-need-redux/) by Brecht Billiet
* [Simple State Management with RxJS’s scan operator](https://juristr.com/blog/2018/10/simple-state-management-with-scan/) by Juri Strumpflohner
* [You might not need NgRx](https://www.youtube.com/watch?v=omnwu_etHTY) by Myke Ryan
* [Mastering the Subject: Communication Options in RxJS](https://www.youtube.com/watch?v=_q-HL9YX_pk) by Dan Wahlin
## Actions
- Unified interface to describe events
- Just data, no functionality
- Has at a minimum a type property
- Strongly typed using classes and enums
### Notes
There are a few rules to[ writing good actions](https://ngrx.io/guide/store/actions#writing-actions) within your application.
* **Upfront** - write actions before developing features to understand and gain a shared knowledge of the feature being implemented.
* **Divide** - categorize actions based on the event source.
* **Many** - actions are inexpensive to write, so the more actions you write, the better you express flows in your application.
* **Event-Driven** - capture _events_ **not** _commands_ as you are separating the description of an event and the handling of that event.
* **Descriptive** - provide context that are targeted to a unique event with more detailed information you can use to aid in debugging with the developer tools.
- Actions can be created with `props` or fat arrows
```typescript
// With props
export const updateBook = createAction(
'[Books Page] Update a book',
props<{
book: BookRequiredProps,
bookId: string
}>()
);
// With fat arrow
export const getAuthStatusSuccess = createAction(
"[Auth/API] Get Auth Status Success",
(user: UserModel | null) => ({user})
);
```
### Event Storming
You can use sticky notes as a group to identify:
- All of the events in the system
- The commands that cause the event to arise
- The actor in the system that invokes the command
- The data models attached to each event
### Naming Actions
* The **category** of the action is captured within the square brackets `[]`
* It is recommended to use present or past tense to **describe the event occurred** and stick with it.
**_Example_**
* When related to components you can use present tense because they are related to events. It is like in HTML the events do not use past tense. Eg. `OnClick` or `click` is not `OnClicked` or `clicked`
```typescript
export const createBook = createAction(
'[Books Page] Create a book',
props<{book: BookRequiredProps}>()
);
export const selectBook = createAction(
'[Books Page] Select a book',
props<{bookId: string}>()
);
```
* When the actions are related to API you can use past tense because they are used to describe an action that happened
```typescript
export const bookUpdated = createAction(
'[Books API] Book Updated Success',
props<{book: BookModel}>()
);
export const bookDeleted = createAction(
'[Books API] Book Deleted Success',
props<{bookId: string}>()
);
```
### Folders and File structure
It is a good practice to have the actions define close to the feature that uses them.
```typescript
├─ books\
│ actions\
│ books-api.actions.ts
│ books-page.actions.ts
│ index.ts
```
The index file can be used to define the names for the actions exported, but it can be completely avoided
```typescript
import * as BooksPageActions from "./books-page.actions";
import * as BooksApiActions from "./books-api.actions";
export { BooksPageActions, BooksApiActions };
```
## Reducers
- Produce new states
- Receive the last state and next action
- Switch on the action type
- Use pure, immutable operations
### Notes
* Create separate reducer per feature
* Should live outside the feature because the state it is shared in the whole application
* State should save the model as it comes from the API. This can be later transformed into the selectors.
* Combine actions that can use the same reducer
```typescript
on(BooksPageActions.enter, BooksPageActions.clearSelectedBook, (state, action) => ({
...state,
activeBookId: null
})),
```
* Only reducers can modify state and it should do it in an immutable way
* Create helper functions to handle the data manipulation of collections.
**_Example_**
```typescript
const createBook = (books: BookModel[], book: BookModel) => [...books, book];
const updateBook = (books: BookModel[], changes: BookModel) =>
books.map(book => {
return book.id === changes.id ? Object.assign({}, book, changes) : book;
});
const deleteBook = (books: BookModel[], bookId: string) =>
books.filter(book => bookId !== book.id);
...
on(BooksApiActions.bookCreated, (state, action) => {
return {
collection: createBook(state.collection, action.book),
activeBookId: null
};
}),
on(BooksApiActions.bookUpdated, (state, action) => {
return {
collection: updateBook(state.collection, action.book),
activeBookId: null
};
}),
on(BooksApiActions.bookDeleted, (state, action) => {
return {
...state,
collection: deleteBook(state.collection, action.bookId)
};
})
```
### Feature reducer file
* Declares a `State` interface with the state for the feature
* Declares an `initialState` that included the initial state
* Declare a feature reducer that contains the result of using `createReducer`
* Exports a `reducer` function that wraps the reducer created. This is needed for AOT and it is not needed when using Ivy.
**_Example_**
```typescript
export interface State {
collection: BookModel[];
activeBookId: string | null;
}
export const initialState: State = {
collection: [],
activeBookId: null
};
export const booksReducer = createReducer(
initialState,
on(BooksPageActions.enter,
BooksPageActions.clearSelectedBook, (state, action) => ({
...state,
activeBookId: null
})),
on(BooksPageActions.selectBook, (state, action) => ({
...state,
activeBookId: action.bookId
})),
);
export function reducer(state: State | undefined, action: Action) {
return booksReducer(state, action);
}
```
- The index file defines the state and assigns each reducer to a property on the state
```typescript
import * as fromBooks from "./books.reducer";
export interface State {
books: fromBooks.State;
}
export const reducers: ActionReducerMap<State> = {
books:fromBooks.reducer
};
export const metaReducers: MetaReducer<State>[] = [];
```
## Selectors
- Allow us to query our store for data
- Recompute when their inputs change
- Fully leverage memoization for performance
- Selectors are fully composable
### Notes
* They are in charge of **transforming the data** to how the UI uses it. State in the store should be clean and easy to serialize and since selector is easy to test this is the best place to transform. Is also makes it easier to hydrate
* They can transform to complete "View Models", there is nothing wrong about naming the model specific to the UI that they are using
* "Getter" selectors are simple selector that get data from a property on the state
```typescript
export const selectAll = (state: State) => state.collection;
export const selectActiveBookId = (state: State) => state.activeBookId;
```
* Complex selectors combine selectors and this should be created using `createSelector`. The function only gets called if any of the inputs selectors are modified
```typescript
export const selectActiveBook = createSelector(
selectAll,
selectActiveBookId,
(books, activeBookId) =>
books.find(book => book.id === activeBookId)
);
```
* Selectors can go next to the component where they are used or in the reducer file located in the `state` folder. Local selectors make it easier to test
* Global selectors are added in the `state/index`.
```typescript
export const selectBooksState = (state:State)=> state.books;
export const selectActiveBook = createSelector(
selectBooksState,
fromBooks.selectActiveBook
)
```
* When using the selector in the component is recommended to initialize in the constructor. If using the strict mode in TypeScript, the compiler will not be able to know that the selectors where initialized on `ngOnInit`
## Effects
- Processes that run in the background
- Connect your app to the outside world
- Often used to talk to services
- Written entirely using RxJS streams
### Notes
* Try to keep effect close to the reducer and group them in classes as it seems convenient
* For effects, it's okay to split them into separate effects files, one for each API service. But it's not a mandate
* Is still possible to use guards and resolver, just dispatch an action when it is done
* **It is recommended to not use resolvers** since we can dispatch the actions using effects
* Put the `books-api.effects` file in the same level as books.module.ts, so that the bootstrapping is done at this level and effects are loaded and running if and only if the books page is loaded. If we were to put the effects in the shared global states, the effects would be running and listening at all times, which is not the desired behavior.
* An effect should dispatch a single action, use a reducer to modify state if multiple props of the state need to be modified
* Prefer the use of brackets and `return` statements in arrow function to increase debugability
```typescript
// Prefer this
getAllBooks$ = createEffect(() => {
return this.actions$.pipe(
ofType(BooksPageActions.enter),
mergeMap((action) => {
return this.booksService
.all()
.pipe(
map((books: any) => BooksApiActions.booksLoaded({books}))
)
})
);
})
// Instead of
getAllBooks$ = createEffect(() =>
this.actions$.pipe(
ofType(BooksPageActions.enter),
mergeMap((action) =>
this.booksService
.all()
.pipe(
map((books: any) => BooksApiActions.booksLoaded({books}))
))
))
```
### What map operator should I use?
`switchMap` is not always the best solution for all the effects and here are other operators we can use.
* `mergeMap` Subscribe immediately, never cancel or discard. It can have race conditions.
This can be used to _**Delete items**_, because it is probably safe to delete the items without caring about the deletion order
```typescript
deleteBook$ = createEffect(() =>
this.actions$.pipe(
ofType(BooksPageActions.deleteBook),
mergeMap(action =>
this.booksService
.delete(action.bookId)
.pipe(
map(() => BooksApiActions.bookDeleted({bookId: action.bookId}))
)
)
)
);
```
* `concatMap` Subscribe after the last one finishes
This can be used for _**updating or creating items**_, because it matters in what order the item is updated or created.
```typescript
createBook$ = createEffect(() =>
this.actions$.pipe(
ofType(BooksPageActions.createBook),
concatMap(action =>
this.booksService
.create(action.book)
.pipe(map(book => BooksApiActions.bookCreated({book})))
)
)
);
```
* `exhaustMap` Discard until the last one finishes. Can have race conditions
This can be used for _**non-parameterized queries**_. It does only one request event if it gets called multiple times. Eg. getting all books.
```typescript
getAllBooks$ = createEffect(() => {
return this.actions$.pipe(
ofType(BooksPageActions.enter),
exhaustMap((action) => {
return this.booksService
.all()
.pipe(
map((books: any) => BooksApiActions.booksLoaded({books}))
)
})
)
})
```
* `switchMap` Cancel the last one if it has not completed. Can have race conditions
This can be used for _**parameterized queries**_
### Other effects examples
- Effects does not have to start with an action
```typescript
@Effect() tick$ = interval(/* Every minute */ 60 * 1000).pipe(
map(() => Clock.tickAction(new Date()))
);
```
- Effects can be used to elegantly connect to a WebSocket
```typescript
@Effect()
ws$ = fromWebSocket("/ws").pipe(map(message => {
switch (message.kind) {
case “book_created”: {
return WebSocketActions.bookCreated(message.book);
}
case “book_updated”: {
return WebSocketActions.bookUpdated(message.book);
}
case “book_deleted”: {
return WebSocketActions.bookDeleted(message.book);
}
}}))
```
- You can use an effect to communicate to any API/Library that returns observables. The following example shows this by communicating with the snack bar notification API.
```typescript
@Effect() promptToRetry$ = this.actions$.pipe(
ofType(BooksApiActions.createFailure),
mergeMap(action =>
this.snackBar
.open("Failed to save book.","Try Again", {duration: /* 12 seconds */ 12 * 1000 })
.onAction()
.pipe(
map(() => BooksApiActions.retryCreate(action.book))
)
)
);
```
- Effects can be used to retry API Calls
```typescript
@Effect()
createBook$ = this.actions$.pipe(
ofType(
BooksPageActions.createBook,
BooksApiActions.retryCreate,
),
mergeMap(action =>
this.booksService.create(action.book).pipe(
map(book => BooksApiActions.bookCreated({ book })),
catchError(error => of(BooksApiActions.createFailure({
error,
book: action.book,
})))
)));
```
- It is OK to write effects that don't dispatch any action like the following example shows how it is used to open a modal
```typescript
@Effect({ dispatch: false })
openUploadModal$ = this.actions$.pipe(
ofType(BooksPageActions.openUploadModal),
tap(() => {
this.dialog.open(BooksCoverUploadModalComponent);
})
);
```
- An effect can be used to handle a cancelation like the following example that shows how an upload is cancelled
```typescript
@Effect() uploadCover$ = this.actions$.pipe(
ofType(BooksPageActions.uploadCover),
concatMap(action =>
this.booksService.uploadCover(action.cover).pipe(
map(result => BooksApiActions.uploadComplete(result)),
takeUntil(
this.actions$.pipe(
ofType(BooksPageActions.cancelUpload)
)
))));
```
## NgRx Entity
- Working with collections should be fast
- Collections are very common
- `@ngrx/entity` provides a common set of basic operations
- `@ngrx/entity` provides a common set of basic state derivations
## How to add @ngrx/entity
- Start by creating a `state` that extends the `entityState`
```typescript
// From:
// export interface State {
// collection: BookModel[];
// activeBookId: string | null;
// }
// To:
export interface State extends EntityState<BookModel> {
activeBookId: string | null;
}
```
- Create an adapter and define the initial state with it.
**Note** that the `collection` is no longer needed
```typescript
// From:
// export const initialState: State = {
// collection: [],
// activeBookId: null
// };
// To:
const adapter = createEntityAdapter<BookModel>();
export const initialState: State = adapter.getInitialState({
activeBookId: null
});
```
- Out of the box, it uses the 'id' as the id, but we can also specify custom id and comparer.
```typescript
const adapter = createEntityAdapter<BookModel>({
selectId: (model: BookModel) => model.name,
sortComparer:(a:BookModel,b:BookModel)=> a.name.localeCompare(b.name)
});
```
- Refactor the reducers to use the entity adapter
```typescript
// on(BooksApiActions.bookCreated, (state, action) => {
// return {
// collection: createBook(state.collection, action.book),
// activeBookId: null
// };
// }),
on(BooksApiActions.bookCreated, (state, action) => {
return adapter.addOne(action.book, {
...state,
activeBookId: null
})
}),
// on(BooksApiActions.bookUpdated, (state, action) => {
// return {
// collection: updateBook(state.collection, action.book),
// activeBookId: null
// };
// }),
on(BooksApiActions.bookUpdated, (state, action) => {
return adapter.updateOne(
{id: action.book.id, changes: action.book},
{
...state,
activeBookId: null
})
}),
// on(BooksApiActions.bookDeleted, (state, action) => {
// return {
// ...state,
// collection: deleteBook(state.collection, action.bookId)
// };
// })
on(BooksApiActions.bookDeleted, (state, action) => {
return adapter.removeOne(action.bookId, state)
})
```
- Then create selectors using the entity adapter
```typescript
// From:
// export const selectAll = (state: State) => state.collection;
// export const selectActiveBookId = (state: State) => state.activeBookId;
// export const selectActiveBook = createSelector(
// selectAll,
// selectActiveBookId,
// (books, activeBookId) => books.find(book => book.id === activeBookId) || null
// );
// To:
export const {selectAll, selectEntities} = adapter.getSelectors();
export const selectActiveBookId = (state: State) => state.activeBookId;
```
## [Adapter Collection Methods](https://ngrx.io/guide/entity/adapter#adapter-collection-methods)
The entity adapter also provides methods for operations against an entity. These methods can change one or many records at a time. Each method returns the newly modified state if changes were made and the same state if no changes were made.
* `addOne`: Add one entity to the collection
* `addMany`: Add multiple entities to the collection
* `addAll`: Replace current collection with provided collection
* `setOne`: Add or Replace one entity in the collection
* `removeOne`: Remove one entity from the collection
* `removeMany`: Remove multiple entities from the collection, by id or by predicate
* `removeAll`: Clear entity collection
* `updateOne`: Update one entity in the collection
* `updateMany`: Update multiple entities in the collection
* `upsertOne`: Add or Update one entity in the collection
* `upsertMany`: Add or Update multiple entities in the collection
* `map`: Update multiple entities in the collection by defining a map function, similar to [Array.map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map)
## Meta-Reducers
* Intercept actions before they are reduced
* Intercept state before it is emitted
* Can change the control flow of the Store

### Most common use cases
* Reset state when a signout action occurs
* for debugging creating logger
* to rehydrate when the application starts up
-It is like a plugin system for the store, they behave similarly to the **interceptors**
### Example
An example of this can be to use it in a logger
```typescript
const logger = (reducer: ActionReducer<any, any>) => (state: any, action: Action) => {
console.log('Previous State', state);
console.log('Action', action);
const nextState = reducer(state, action);
console.log('Next State', nextState);
return nextState;
};
export const metaReducers: MetaReducer<State>[] = [logger];
```
## Folder Structure
Following the LIFT principle:
- **L**ocating our code is easy
- **I**dentify code at a glance
- **F**lat file structure for as long as possible
- **T**ry to stay DRY - don’t repeat yourself
---
### Key Takeaways
- Put state in a shared place separate from features
- Effects, components, and actions belong to features
- Some effects can be shared
- Reducers reach into modules’ action barrels
### Folder structure followed in the workshop
```
├─ books\
│ actions\
│ books-api.actions.ts
│ books-page.actions.ts
│ index.ts // Includes creating names for the exports
│ books-api.effects.ts
│
├─ shared\
│ state\
│ {feature}.reducer.ts // Includes state interface, initial interface, reducers and local selectors
│ index.ts
│
```
- The index file in the _actions_ folder was using action barrels like the following:
```typescript
import * as BooksPageActions from "./books-page.actions";
import * as BooksApiActions from "./books-api.actions";
export { BooksPageActions, BooksApiActions };
```
- This made easier and more readable at the time of importing it:
```
import { BooksPageActions } from "app/modules/book-collection/actions";
```
---
---
### Folder structure followed in example app from @ngrx
```
├─ books\
│ actions\
│ books-api.actions.ts
│ books-page.actions.ts
│ index.ts // Includes creating names for the exports
│ effects\
| books.effects.spec.ts
| books.effects.ts
| models\
| books.ts
│ reducers\
| books.reducer.spec.ts
| books.reducer.ts
| collection.reducer.ts
| index.ts
│
├─ reducers\
│ index.ts /// Defines the root state and reducers
│
```
- The index file under the _reducers_ folder is in charge of setting up the reducer and state
```typescript
import * as fromSearch from '@example-app/books/reducers/search.reducer';
import * as fromBooks from '@example-app/books/reducers/books.reducer';
import * as fromCollection from '@example-app/books/reducers/collection.reducer';
import * as fromRoot from '@example-app/reducers';
export const booksFeatureKey = 'books';
export interface BooksState {
[fromSearch.searchFeatureKey]: fromSearch.State;
[fromBooks.booksFeatureKey]: fromBooks.State;
[fromCollection.collectionFeatureKey]: fromCollection.State;
}
export interface State extends fromRoot.State {
[booksFeatureKey]: BooksState;
}
/** Provide reducer in AoT-compilation happy way */
export function reducers(state: BooksState | undefined, action: Action) {
return combineReducers({
[fromSearch.searchFeatureKey]: fromSearch.reducer,
[fromBooks.booksFeatureKey]: fromBooks.reducer,
[fromCollection.collectionFeatureKey]: fromCollection.reducer,
})(state, action);
}
```
- The index file under `app/reducers/index.ts` defines the meta-reducers, root state, and reducers
```typescript
/**
* Our state is composed of a map of action reducer functions.
* These reducer functions are called with each dispatched action
* and the current or initial state and return a new immutable state.
*/
export const ROOT_REDUCERS = new InjectionToken<
ActionReducerMap<State, Action>
>('Root reducers token', {
factory: () => ({
[fromLayout.layoutFeatureKey]: fromLayout.reducer,
router: fromRouter.routerReducer,
}),
});
```
Personally, I like how the `example-app` is organized. One of the things that I will add is to have all the folders related to ngrx in a single folder:
```
├─ books\
│ store\
│ actions\
│ books-api.actions.ts
│ books-page.actions.ts
│ index.ts // Includes creating names for the exports
│ effects\
| books.effects.spec.ts
| books.effects.ts
| models\
| books.ts
│ reducers\
| books.reducer.spec.ts
| books.reducer.ts
| collection.reducer.ts
| index.ts
│
├─ reducers\
│ index.ts /// Defines the root state and reducers
│
```
## Other Links
* [Avoiding swithcMap related bugs](https://ncjamieson.com/avoiding-switchmap-related-bugs/)
* [ngrx example application] (https://github.com/ngrx/platform/blob/master/projects/example-app/README.md)
* [Redux toolkit](https://redux-toolkit.js.org/)
* [ngrx docs](https://ngrx.io/docs)
* https://github.com/ngrx/platform
* https://medium.com/angular-in-depth/ngrx-how-and-where-to-handle-loading-and-error-states-of-ajax-calls-6613a14f902d
* https://www.youtube.com/watch?v=hsr4ArAsOL4
* https://www.youtube.com/watch?v=OZam9fNNwSE | 34.2497 | 546 | 0.694076 | eng_Latn | 0.976259 |
43c38c2fe1314b4c91e398b4e183064c6acef6e0 | 68 | md | Markdown | README.md | davidmz/frf-userdata | e6cf3b3a5559403f444a56d39347964146de3057 | [
"MIT"
] | null | null | null | README.md | davidmz/frf-userdata | e6cf3b3a5559403f444a56d39347964146de3057 | [
"MIT"
] | null | null | null | README.md | davidmz/frf-userdata | e6cf3b3a5559403f444a56d39347964146de3057 | [
"MIT"
] | null | null | null | Веб-сервис для хранения пользовательских данных FreeFeed/BetterFeed. | 68 | 68 | 0.882353 | rus_Cyrl | 0.813613 |
43c39b644a117c138e8701599a9b00b1ed7647df | 1,668 | md | Markdown | docset/winserver2012-ps/vamt/Install-VamtProductActivation.md | pahuijbr/windows-powershell-docs | 5c367612c6b744743237843ba867feac02964313 | [
"CC-BY-4.0",
"MIT"
] | 292 | 2017-03-21T07:51:46.000Z | 2022-03-29T17:01:09.000Z | docset/winserver2012-ps/vamt/Install-VamtProductActivation.md | pahuijbr/windows-powershell-docs | 5c367612c6b744743237843ba867feac02964313 | [
"CC-BY-4.0",
"MIT"
] | 2,496 | 2017-03-02T21:37:40.000Z | 2022-03-31T16:49:37.000Z | docset/winserver2012-ps/vamt/Install-VamtProductActivation.md | pahuijbr/windows-powershell-docs | 5c367612c6b744743237843ba867feac02964313 | [
"CC-BY-4.0",
"MIT"
] | 530 | 2017-02-21T19:10:56.000Z | 2022-03-26T17:13:01.000Z | ---
external help file: VAMT_Cmdlets.xml
Module Name: VAMT
online version: https://docs.microsoft.com/powershell/module/vamt/install-vamtproductactivation?view=windowsserver2012-ps&wt.mc_id=ps-gethelp
schema: 2.0.0
---
# Install-VamtProductActivation
## SYNOPSIS
Activates products online, using the local computer's Internet connection.
## SYNTAX
```
Install-VamtProductActivation [-Products] <Product[]> [[-Username] <String>] [[-Password] <String>]
```
## DESCRIPTION
The **Install-VamtProductActivation** cmdlet activates products online by using the Internet connection on the local computer.
You cannot use this cmdlet for proxy activation.
## EXAMPLES
### Example
```
PS C:\>get-vamtproduct | install-vamtproductactivation
```
This command gets products and then activates them by using online activation.
## PARAMETERS
### -Password
Provides a password for password-protected environments.
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Required: False
Position: 3
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
### -Products
Specifies the product or products to be activated.
```yaml
Type: Product[]
Parameter Sets: (All)
Aliases:
Required: True
Position: 1
Default value: None
Accept pipeline input: True (ByValue, ByPropertyName)
Accept wildcard characters: False
```
### -Username
Provides a user name for password-protected environments.
```yaml
Type: String
Parameter Sets: (All)
Aliases:
Required: False
Position: 2
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```
## INPUTS
## OUTPUTS
## NOTES
## RELATED LINKS
| 19.172414 | 141 | 0.760192 | eng_Latn | 0.616763 |
43c3e02a2064c13732746014351d940a02881923 | 1,299 | md | Markdown | README.md | Ana-Popilian/FoodRecipes | 8e658c1c7b2674321ea06f209a48648cbb704796 | [
"MIT"
] | null | null | null | README.md | Ana-Popilian/FoodRecipes | 8e658c1c7b2674321ea06f209a48648cbb704796 | [
"MIT"
] | null | null | null | README.md | Ana-Popilian/FoodRecipes | 8e658c1c7b2674321ea06f209a48648cbb704796 | [
"MIT"
] | null | null | null | # FoodRecipes
It is an iOS application designed to help users to find recipes. The required parameters are an ingredient name and a diet label, the user can also filter recipes by health restrictions, for e.g.: peanut-free, vegan, alcohol-free, etc.
<img align="left" width="100" height="100" src="https://user-images.githubusercontent.com/55087937/81838571-951a4e80-9546-11ea-9838-90449b3934b3.png">
- The app uses the Recipe Search API from https://www.edamam.com/
- The filters for search screen are created using collection view
- A table view was used to display the movie list
- I approached a programmatic UI, used SwiftLint to enforce Swift style and conventions
### The following pictures represent the first 3 screens
<p float="left">
<img src="https://user-images.githubusercontent.com/55087937/86229109-85ef6e80-bb8f-11ea-92be-7e12b8b2a11e.png" width="275" height="475">
<img src="https://user-images.githubusercontent.com/55087937/86229168-9f90b600-bb8f-11ea-94c5-a2ff6486537b.png" width="275" height="475">
<img src="https://user-images.githubusercontent.com/55087937/86229243-ba632a80-bb8f-11ea-8d6d-41bfa42200f2.png" width="275" height="475">
</p>
### Used resources:
- App icon <a href="https://icon-library.net/icon/food-app-icon-9.html">Food App Icon #153163</a>
| 61.857143 | 237 | 0.764434 | eng_Latn | 0.737001 |
43c610d0dcb87d6b2761594d83558b91d249f36b | 628 | md | Markdown | doc/models/order-service-charge-calculation-phase.md | okenshields/fangkuai-java | 5565af90990be59a26e7dd169819116bfa3060ac | [
"Apache-2.0"
] | null | null | null | doc/models/order-service-charge-calculation-phase.md | okenshields/fangkuai-java | 5565af90990be59a26e7dd169819116bfa3060ac | [
"Apache-2.0"
] | 1 | 2021-12-09T22:45:42.000Z | 2021-12-09T22:45:42.000Z | doc/models/order-service-charge-calculation-phase.md | okenshields/fangkuai-java | 5565af90990be59a26e7dd169819116bfa3060ac | [
"Apache-2.0"
] | null | null | null | ## Order Service Charge Calculation Phase
Represents a phase in the process of calculating order totals.
Service charges are applied __after__ the indicated phase.
[Read more about how order totals are calculated.](https://developer.squareup.com/docs/docs/orders-api/how-it-works#how-totals-are-calculated)
### Enumeration
`OrderServiceChargeCalculationPhase`
### Fields
| Name | Description |
| --- | --- |
| `SUBTOTAL_PHASE` | The service charge will be applied after discounts, but before<br>taxes. |
| `TOTAL_PHASE` | The service charge will be applied after all discounts and taxes<br>are applied. |
| 33.052632 | 143 | 0.742038 | eng_Latn | 0.946784 |
43c67f7669e4ed817d27aa6360dd06899480a979 | 208 | md | Markdown | _posts/2015-09-10-rub1.1-torstai.md | riikkak/riikkak.github.io | a8263552342d1ffc222ded0dee64f0012a2feca4 | [
"MIT"
] | null | null | null | _posts/2015-09-10-rub1.1-torstai.md | riikkak/riikkak.github.io | a8263552342d1ffc222ded0dee64f0012a2feca4 | [
"MIT"
] | null | null | null | _posts/2015-09-10-rub1.1-torstai.md | riikkak/riikkak.github.io | a8263552342d1ffc222ded0dee64f0012a2feca4 | [
"MIT"
] | null | null | null | ---
title: "Maanantain tunnille"
tags: läksyt rub1.1
---
1. Täytä työvihkon sivun 13 sanasto. Sanat löytyvät edellisen sivun testistä.
2. Tee teksteihin 4 liittyvät tehtävät 4 (3 tehtävää). s. 87, 90 ja 93. | 26 | 77 | 0.735577 | fin_Latn | 0.999934 |
43c7c86da1a1f693d58eafa7b2b833d37bfab5ee | 319 | md | Markdown | 2021/06/20/kerochuu/README.md | mintheon/stupid-week-2021 | 88268075d28d477274efea02380de484c5f7845c | [
"MIT"
] | null | null | null | 2021/06/20/kerochuu/README.md | mintheon/stupid-week-2021 | 88268075d28d477274efea02380de484c5f7845c | [
"MIT"
] | null | null | null | 2021/06/20/kerochuu/README.md | mintheon/stupid-week-2021 | 88268075d28d477274efea02380de484c5f7845c | [
"MIT"
] | null | null | null | ## 계획했던 목표
- 백준 알고리즘 문제풀이(자료구조)
- 백준 알고리즘 문제풀이(자료구조)
- 백준 알고리즘 문제풀이(자료구조)
## 결과
- [[백준 2751] 수 정렬하기2 (mergeSort)](https://blog.naver.com/kerochuu/222400771036)
- [[백준 2751] 수 정렬하기2 (heapSort)](https://blog.naver.com/kerochuu/222401475215)
- [[백준 10816] 수 정렬하기2 (hashMap)](https://blog.naver.com/kerochuu/222401661473)
| 31.9 | 79 | 0.695925 | kor_Hang | 0.879865 |
43c7d2fe6a10b76344bf8d59c17b7a282c92aa6a | 60 | md | Markdown | windows/src/desktop/help/context/tray-menu.md | srl295/keyman | 4dfd0f71f3f4ccf81d1badbd824900deee1bb6d1 | [
"MIT"
] | 219 | 2017-06-21T03:37:03.000Z | 2022-03-27T12:09:28.000Z | windows/src/desktop/help/context/tray-menu.md | srl295/keyman | 4dfd0f71f3f4ccf81d1badbd824900deee1bb6d1 | [
"MIT"
] | 4,451 | 2017-05-29T02:52:06.000Z | 2022-03-31T23:53:23.000Z | windows/src/desktop/help/context/tray-menu.md | srl295/keyman | 4dfd0f71f3f4ccf81d1badbd824900deee1bb6d1 | [
"MIT"
] | 72 | 2017-05-26T04:08:37.000Z | 2022-03-03T10:26:20.000Z | ---
title: The Keyman Menu
redirect: ../basic/tray-menu
---
| 12 | 28 | 0.65 | eng_Latn | 0.323826 |
43c7d6fe3c5b97738203fd9c27a7d120e1330cac | 2,074 | md | Markdown | docs/usage/type-checking/index.md | erikkemperman/tslint | aa5cca44155ae68b6ff251c7e6f34b6482df1450 | [
"Apache-2.0"
] | null | null | null | docs/usage/type-checking/index.md | erikkemperman/tslint | aa5cca44155ae68b6ff251c7e6f34b6482df1450 | [
"Apache-2.0"
] | 1 | 2018-05-09T01:56:13.000Z | 2018-05-09T01:56:13.000Z | docs/usage/type-checking/index.md | erikkemperman/tslint | aa5cca44155ae68b6ff251c7e6f34b6482df1450 | [
"Apache-2.0"
] | 1 | 2018-05-04T08:58:52.000Z | 2018-05-04T08:58:52.000Z | ---
title: Type Checking
layout: page
permalink: /usage/type-checking/
---
#### Semantic lint rules
Some TSLint rules go further than linting code syntax. Semantic rules use the compiler's program APIs to inspect static types and validate code patterns.
##### CLI
When using the CLI, use the `--project` flag and specify your `tsconfig.json` to enable rules that work with the type checker. TSLint will lint all files included in your project as specified in `tsconfig.json`.
```sh
tslint --project tsconfig.json --config tslint.json # lints every file in your project
tslint -p . -c tslint.json # shorthand of the command above
tslint -p tsconfig.json --exclude '**/*.d.ts' # lint all files in the project excluding declaration files
tslint -p tsconfig.json **/*.ts # ignores files in tsconfig.json and uses the provided glob instead
```
Use the `--type-check` flag to make sure your program has no type errors. TSLint will check for any errors before linting. This flag requires `--project` to be specified.
##### Library
To enable rules that work with the type checker, a TypeScript program object must be passed to the linter when using the programmatic API. Helper functions are provided to create a program from a `tsconfig.json` file. A project directory can be specified if project files do not lie in the same directory as the `tsconfig.json` file.
```js
import { Linter, Configuration } from "tslint";
const configurationFilename = "Specify configuration file name";
const options = {
fix: false,
formatter: "json",
rulesDirectory: "customRules/",
formattersDirectory: "customFormatters/"
};
const program = Linter.createProgram("tsconfig.json", "projectDir/");
const linter = new Linter(options, program);
const files = Linter.getFileNames(program);
files.forEach(file => {
const fileContents = program.getSourceFile(file).getFullText();
const configuration = Configuration.findConfiguration(configurationFilename, file).results;
linter.lint(file, fileContents, configuration);
});
const results = linter.getResult();
```
| 40.666667 | 333 | 0.751688 | eng_Latn | 0.970917 |
43c86b0f8dfdaf5887746e0deb50b5e12078face | 6,531 | md | Markdown | README.md | karminer60/java-AnimalKingdom | 6a3d77472b29b868665cd977a630f3b8ff564e4c | [
"MIT"
] | null | null | null | README.md | karminer60/java-AnimalKingdom | 6a3d77472b29b868665cd977a630f3b8ff564e4c | [
"MIT"
] | null | null | null | README.md | karminer60/java-AnimalKingdom | 6a3d77472b29b868665cd977a630f3b8ff564e4c | [
"MIT"
] | null | null | null | # Project Animal Kingdom Search
A student that completes this project shows that they can:
* Use and implement abstract classes
* Use and implement Lambda Expressions
## Introduction
Using a combination of abstract classes and lambda expressions, students will create and manipulate a list of animals using object oriented design principles.
## Instruction
* [+] Please fork and clone this repository. This repository does not have a starter project, so create one inside of the cloned repository folder. Regularly commit and push your code as appropriate.
* [ ] Create an abstract class for animals
* [ ] All animals consume food the same way
* [ ] Each animal is assigned a unique number, a name, and year discovered regardless of classification.
- [ ] Methods will return a string saying how that animal implements the action
- [ ] All animals can move, breath, reproduce. How they do that, so what string should get returned when the method is executed, varies by animal type.
* [ ] Create classes for mammals, birds, fish
* [ ] Mammals move - walk, breath - lungs, reproduce - live births
* [ ] Birds move - fly, breath - lungs, reproduce - eggs
* [ ] Fish move - swim, breath - gills, reproduce - eggs
Hint: think about abstract classes and creating an ArrayList using an abstract class type.
Create a collection for the animals using the following data
* [ ] **Mammals:**
| Name | Year Named |
|-----------|-------|
| Panda | 1869 |
| Zebra | 1778 |
| Koala | 1816 |
| Sloth | 1804 |
| Armadillo | 1758 |
| Raccoon | 1758 |
| Bigfoot | 2021 |
* [ ] **Birds:**
| Name | Year Named |
|-----------|------|
| Pigeon | 1837 |
| Peacock | 1821 |
| Toucan | 1758 |
| Parrot | 1824 |
| Swan | 1758 |
* [ ] **Fish:**
| Name | Year Named |
|-----------|------|
| Salmon | 1758 |
| Catfish | 1817 |
| Perch | 1758 |
* Using Lambda Expressions and displaying the results to the console
* [ ] List all the animals in descending order by year named
* [ ] List all the animals alphabetically
* [ ] List all the animals order by how they move
* [ ] List only those animals the breath with lungs
* [ ] List only those animals that breath with lungs and were named in 1758
* [ ] List only those animals that lay eggs and breath with lungs
* [ ] List alphabetically only those animals that were named in 1758
* Stretch Goal
* [ ] For the list of animals, list alphabetically those animals that are mammals.
## Results
### MPV
The MVP of this application would produce the following output
```TEXT
*** MVP ***
*** List all the animals in descending order by year named ***
[Animals{id=6, name='Bigfoot', yearNamed=2021}
, Animals{id=0, name='Panda', yearNamed=1869}
, Animals{id=7, name='Pigeon', yearNamed=1837}
, Animals{id=10, name='Parrot', yearNamed=1824}
, Animals{id=8, name='Peacock', yearNamed=1821}
, Animals{id=13, name='Catfish', yearNamed=1817}
, Animals{id=2, name='Koala', yearNamed=1816}
, Animals{id=3, name='Sloth', yearNamed=1804}
, Animals{id=1, name='Zebra', yearNamed=1778}
, Animals{id=4, name='Armadillo', yearNamed=1758}
, Animals{id=5, name='Raccoon', yearNamed=1758}
, Animals{id=9, name='Toucan', yearNamed=1758}
, Animals{id=11, name='Swan', yearNamed=1758}
, Animals{id=12, name='Salmon', yearNamed=1758}
, Animals{id=14, name='Perch', yearNamed=1758}
]
*** List all the animals alphabetically ***
[Animals{id=4, name='Armadillo', yearNamed=1758}
, Animals{id=6, name='Bigfoot', yearNamed=2021}
, Animals{id=13, name='Catfish', yearNamed=1817}
, Animals{id=2, name='Koala', yearNamed=1816}
, Animals{id=0, name='Panda', yearNamed=1869}
, Animals{id=10, name='Parrot', yearNamed=1824}
, Animals{id=8, name='Peacock', yearNamed=1821}
, Animals{id=14, name='Perch', yearNamed=1758}
, Animals{id=7, name='Pigeon', yearNamed=1837}
, Animals{id=5, name='Raccoon', yearNamed=1758}
, Animals{id=12, name='Salmon', yearNamed=1758}
, Animals{id=3, name='Sloth', yearNamed=1804}
, Animals{id=11, name='Swan', yearNamed=1758}
, Animals{id=9, name='Toucan', yearNamed=1758}
, Animals{id=1, name='Zebra', yearNamed=1778}
]
*** List all the animals order by how they move ***
[Animals{id=10, name='Parrot', yearNamed=1824}
, Animals{id=8, name='Peacock', yearNamed=1821}
, Animals{id=7, name='Pigeon', yearNamed=1837}
, Animals{id=11, name='Swan', yearNamed=1758}
, Animals{id=9, name='Toucan', yearNamed=1758}
, Animals{id=13, name='Catfish', yearNamed=1817}
, Animals{id=14, name='Perch', yearNamed=1758}
, Animals{id=12, name='Salmon', yearNamed=1758}
, Animals{id=4, name='Armadillo', yearNamed=1758}
, Animals{id=6, name='Bigfoot', yearNamed=2021}
, Animals{id=2, name='Koala', yearNamed=1816}
, Animals{id=0, name='Panda', yearNamed=1869}
, Animals{id=5, name='Raccoon', yearNamed=1758}
, Animals{id=3, name='Sloth', yearNamed=1804}
, Animals{id=1, name='Zebra', yearNamed=1778}
]
*** List only those animals the breath with lungs ***
Parrot eggs fly lungs 1824
Peacock eggs fly lungs 1821
Pigeon eggs fly lungs 1837
Swan eggs fly lungs 1758
Toucan eggs fly lungs 1758
Armadillo live births walk lungs 1758
Bigfoot live births walk lungs 2021
Koala live births walk lungs 1816
Panda live births walk lungs 1869
Raccoon live births walk lungs 1758
Sloth live births walk lungs 1804
Zebra live births walk lungs 1778
*** List only those animals that breath with lungs and were named in 1758 ***
Swan eggs fly lungs 1758
Toucan eggs fly lungs 1758
Armadillo live births walk lungs 1758
Raccoon live births walk lungs 1758
*** List only those animals that lay eggs and breath with lungs ***
Parrot eggs fly lungs 1824
Peacock eggs fly lungs 1821
Pigeon eggs fly lungs 1837
Swan eggs fly lungs 1758
Toucan eggs fly lungs 1758
*** List alphabetically only those animals that were named in 1758 ***
Armadillo live births walk lungs 1758
Perch eggs swim gills 1758
Raccoon live births walk lungs 1758
Salmon eggs swim gills 1758
Swan eggs fly lungs 1758
Toucan eggs fly lungs 1758
```
### Stretch Goal
The Stretch Goals would produce the following output.
```TEXT
*** Stretch Goal ***
*** For the list of animals, list alphabetically those animals that are mammals ***
Armadillo live births walk lungs 1758
Bigfoot live births walk lungs 2021
Koala live births walk lungs 1816
Panda live births walk lungs 1869
Raccoon live births walk lungs 1758
Sloth live births walk lungs 1804
Zebra live births walk lungs 1778
```
| 34.739362 | 199 | 0.701577 | eng_Latn | 0.953616 |
43c99351dc59a23c39916a598b1665b21d127bb9 | 466 | md | Markdown | _captures/TLP9_20191004.md | Meteoros-Floripa/meteoros.floripa.br | 7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad | [
"MIT"
] | 5 | 2020-05-19T17:04:49.000Z | 2021-03-30T03:09:14.000Z | _captures/TLP9_20191004.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | null | null | null | _captures/TLP9_20191004.md | Meteoros-Floripa/site | 764cf471d85a6b498873610e4f3b30efd1fd9fae | [
"MIT"
] | 2 | 2020-05-19T17:06:27.000Z | 2020-09-04T00:00:43.000Z | ---
layout: capture
label: 20191004
station: TLP9
date: 2019-10-04 22:04:54
preview: TLP9/2019/201910/20191004/stack.jpg
capturas:
- imagem: TLP9/2019/201910/20191004/M20191004_220454_TLP_9P.jpg
- imagem: TLP9/2019/201910/20191004/M20191005_032300_TLP_9P.jpg
- imagem: TLP9/2019/201910/20191004/M20191005_041611_TLP_9P.jpg
- imagem: TLP9/2019/201910/20191004/M20191005_064005_TLP_9P.jpg
- imagem: TLP9/2019/201910/20191004/M20191005_073804_TLP_9P.jpg
---
| 33.285714 | 65 | 0.793991 | yue_Hant | 0.078753 |
43c9c52f6e729aa4bd1b37843163c6ce230ba81b | 976 | md | Markdown | _posts/2021-10-14-baekjoon-1259.md | kkongkeozzang/kkongkeozzang.github.com | f2887dc2b7202024c774b1938591d36c06b685b9 | [
"MIT"
] | null | null | null | _posts/2021-10-14-baekjoon-1259.md | kkongkeozzang/kkongkeozzang.github.com | f2887dc2b7202024c774b1938591d36c06b685b9 | [
"MIT"
] | null | null | null | _posts/2021-10-14-baekjoon-1259.md | kkongkeozzang/kkongkeozzang.github.com | f2887dc2b7202024c774b1938591d36c06b685b9 | [
"MIT"
] | 1 | 2022-01-02T11:35:48.000Z | 2022-01-02T11:35:48.000Z | ---
title: "Baekjoon 문제 풀기 (1259번 : 팰린드롬수) Python"
use_math: true
categories:
- baekjoon
- algorithm
- bronze1
tags:
- reversed
- list slicing
---
# [1259번 : 팰린드롬수](https://www.acmicpc.net/problem/1259)
#### 1. 문제읽기
---
> 문자열 뒤집기
문자열을 뒤집는 방법을 알면 쉬운 문제이다.
입력 값을 받는 부분도 약간은 꼬아놔서 [브론즈2 문자열 뒤집기 문제](https://kkongkeozzang.github.io/baekjoon/algorithm/bronze2/baekjoon-2908/) 보다는 약간 난이도가 있었다.
#### 2. 제출 코드
---
리스트 슬라이싱이 놀랍게도 전혀 생각나지 않아서 reverse를 쓰려고 했는데 이 함수도 어떻게 쓰는지 기억이 안나서 reversed 함수를 대충 써봤다.
다행히 맞았다.
join함수와 list를 굳이 같이 쓸 필요는 없었다.
```python
while 1:
t = input()
if t == "0":
break
reversed_t = ''.join(list(reversed(t)))
if t == reversed_t:
print("yes")
else:
print("no")
```
#### 3. 공부할 것
---
###### reversed 메소드
reversed 객체를 반환한다. list에는 listreverseiterater를 반환한다.
문자열로 만들고 싶으면 join을 쓴다.
리스트로 만들고 싶으면 list로 감싸준다.
tuple로 만들고 싶으면 tuple로 감싸준다.
그래도 성능면에서는 리스트 슬라이싱을 쓰는게 낫다. | 15.492063 | 133 | 0.614754 | kor_Hang | 0.999977 |
43ca8c1da6bce5791f28491ff06f32f0f33c1f4f | 4,132 | md | Markdown | wdk-ddi-src/content/ntddk/ne-ntddk-_whea_error_source_type.md | aktsuda/windows-driver-docs-ddi | a7b832e82cc99f77dbde72349c0a61670d8765d3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/ntddk/ne-ntddk-_whea_error_source_type.md | aktsuda/windows-driver-docs-ddi | a7b832e82cc99f77dbde72349c0a61670d8765d3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/ntddk/ne-ntddk-_whea_error_source_type.md | aktsuda/windows-driver-docs-ddi | a7b832e82cc99f77dbde72349c0a61670d8765d3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NE:ntddk._WHEA_ERROR_SOURCE_TYPE
title: "_WHEA_ERROR_SOURCE_TYPE"
author: windows-driver-content
description: The WHEA_ERROR_SOURCE_TYPE enumeration defines the different types of error sources that can report hardware errors.
old-location: whea\whea_error_source_type.htm
old-project: whea
ms.assetid: d2615320-6c8a-4813-afb5-c5b510e5fde9
ms.author: windowsdriverdev
ms.date: 2/20/2018
ms.keywords: "*PWHEA_ERROR_SOURCE_TYPE, PWHEA_ERROR_SOURCE_TYPE, PWHEA_ERROR_SOURCE_TYPE enumeration pointer [WHEA Drivers and Applications], WHEA_ERROR_SOURCE_TYPE, WHEA_ERROR_SOURCE_TYPE enumeration [WHEA Drivers and Applications], WheaErrSrcTypeBOOT, WheaErrSrcTypeCMC, WheaErrSrcTypeCPE, WheaErrSrcTypeGeneric, WheaErrSrcTypeINIT, WheaErrSrcTypeIPFCMC, WheaErrSrcTypeIPFCPE, WheaErrSrcTypeIPFMCA, WheaErrSrcTypeMCE, WheaErrSrcTypeMax, WheaErrSrcTypeNMI, WheaErrSrcTypePCIe, WheaErrSrcTypeSCIGeneric, _WHEA_ERROR_SOURCE_TYPE, ntddk/PWHEA_ERROR_SOURCE_TYPE, ntddk/WHEA_ERROR_SOURCE_TYPE, ntddk/WheaErrSrcTypeBOOT, ntddk/WheaErrSrcTypeCMC, ntddk/WheaErrSrcTypeCPE, ntddk/WheaErrSrcTypeGeneric, ntddk/WheaErrSrcTypeINIT, ntddk/WheaErrSrcTypeIPFCMC, ntddk/WheaErrSrcTypeIPFCPE, ntddk/WheaErrSrcTypeIPFMCA, ntddk/WheaErrSrcTypeMCE, ntddk/WheaErrSrcTypeMax, ntddk/WheaErrSrcTypeNMI, ntddk/WheaErrSrcTypePCIe, ntddk/WheaErrSrcTypeSCIGeneric, whea.whea_error_source_type, whearef_786d549e-14b1-4945-a1ce-23c7112ff0c8.xml"
ms.prod: windows-hardware
ms.technology: windows-devices
ms.topic: enum
req.header: ntddk.h
req.include-header: Ntddk.h
req.target-type: Windows
req.target-min-winverclnt: Supported in Windows Server 2008, Windows Vista SP1, and later versions of Windows.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- ntddk.h
api_name:
- WHEA_ERROR_SOURCE_TYPE
product:
- Windows
targetos: Windows
req.typenames: WHEA_ERROR_SOURCE_TYPE, *PWHEA_ERROR_SOURCE_TYPE
---
# _WHEA_ERROR_SOURCE_TYPE enumeration
## -description
The WHEA_ERROR_SOURCE_TYPE enumeration defines the different types of error sources that can report hardware errors.
## -enum-fields
### -field WheaErrSrcTypeMCE
A machine check exception (MCE).
### -field WheaErrSrcTypeCMC
A corrected machine check (CMC).
### -field WheaErrSrcTypeCPE
A corrected platform error (CPE).
### -field WheaErrSrcTypeNMI
A nonmaskable interrupt (NMI).
### -field WheaErrSrcTypePCIe
A PCI Express (PCIe) error.
### -field WheaErrSrcTypeGeneric
A type of error source that does not conform to any of the other WHEA_ERROR_SOURCE_TYPE enumeration values.
### -field WheaErrSrcTypeINIT
An Itanium processor INIT error.
### -field WheaErrSrcTypeBOOT
A boot error source.
### -field WheaErrSrcTypeSCIGeneric
A service control interrupt (SCI).
### -field WheaErrSrcTypeIPFMCA
An Itanium processor machine check abort (MCA).
### -field WheaErrSrcTypeIPFCMC
An Itanium processor corrected machine check (CMC).
### -field WheaErrSrcTypeIPFCPE
An Itanium processor corrected platform error (CPE).
### -field WheaErrSrcTypeGenericV2
### -field WheaErrSrcTypeSCIGenericV2
### -field WheaErrSrcTypeMax
The maximum number of error source types that can report hardware errors.
## -remarks
The <a href="https://msdn.microsoft.com/library/windows/hardware/ff560505">WHEA_ERROR_SOURCE_DESCRIPTOR</a> structure contains a member of type WHEA_ERROR_SOURCE_TYPE that specifies the type of error source that is described by the structure.
The <a href="https://msdn.microsoft.com/library/windows/hardware/ff560465">WHEA_ERROR_PACKET</a> structure contains a member of type WHEA_ERROR_SOURCE_TYPE that specifies the type of error source that caused the error condition described by the structure.
## -see-also
<a href="https://msdn.microsoft.com/library/windows/hardware/ff560465">WHEA_ERROR_PACKET</a>
<a href="https://msdn.microsoft.com/library/windows/hardware/ff560505">WHEA_ERROR_SOURCE_DESCRIPTOR</a>
| 26.318471 | 1,016 | 0.807357 | yue_Hant | 0.607835 |
43cacb24e12f647560f72f0d9aee4ba5aef5ea61 | 1,482 | md | Markdown | @docs-md/document/10_nativeModules/1.md | React-Native-docs/React-Native-docs | f3c428b39538dc1c809a7eb28c36409e041b4000 | [
"MIT"
] | 5 | 2021-03-27T06:44:26.000Z | 2022-01-18T14:06:40.000Z | @docs-md/document/10_nativeModules/1.md | React-Native-docs/React-Native-docs | f3c428b39538dc1c809a7eb28c36409e041b4000 | [
"MIT"
] | 3 | 2021-04-02T21:18:57.000Z | 2021-04-08T16:11:48.000Z | @docs-md/document/10_nativeModules/1.md | React-Native-docs/React-Native-docs | f3c428b39538dc1c809a7eb28c36409e041b4000 | [
"MIT"
] | 5 | 2021-03-27T06:52:53.000Z | 2021-03-29T15:40:39.000Z | # 네이티브 모듈 소개
React Native 앱은 JavaScript에서 기본적으로 제공되지 않는 네이티브 플랫폼 API (예: Apple 또는 Google Play에 액세스하는 네이티브 API)에 액세스해야 하는 경우가 있습니다. 기존 Objective-C, Swift, Java 또는 C++ 라이브러리를 JavaScript로 재구현할 필요 없이 재사용하기를 원하거나, 이미지 처리와 같은 작업을 위해 고성능 멀티 스레드 코드를 작성하고자 할 수 있습니다.
NativeModule 시스템은 Java/Objective-C/C++ (네이티브) 클래스의 인스턴스를 JavaScript (JS)에 JS 객체로 표시하기 때문에, JS 내에서 임의로 네이티브 코드를 실행할 수 있습니다. 이 기능이 일반적인 개발 프로세스의 일부가 되지는 않겠지만, 반드시 존재해야 합니다. 만약 React Native에서 JS 앱에 필요한 네이티브 API를 내보내지 않으면, 직접 export할 수 있어야 합니다!
## 네이티브 모듈 설정
React Native 애플리케이션에 맞는 네이티브 모듈을 작성하는 방법은 두 가지가 있습니다.
1. React Native 애플리케이션의 iOS/Android 프로젝트 내부에서 바로 작성합니다.
2. 내 React Native 애플리케이션 또는 다른 React Native 애플리케이션에 의해 종속성으로 설치될 수 있는 NPM 패키지로 작성합니다.
이 가이드에서는 먼저 React Native 애플리케이션 내에서 바로 네이티브 모듈을 구현하는 방법에 대해 알아봅니다. 물론 다음 가이드를 따라 구현한 네이티브 모듈을 NPM 패키지로 배포할 수도 있습니다. 원하는 경우 [Native Module을 NPM 패키지로 설정](https://reactnative.dev/docs/native-modules-setup) 가이드를 참조하십시오.
## 시작하기
다음 섹션에서는 React Native 애플리케이션 안에서 바로 네이티브 모듈을 구현하는 방법에 대한 가이드를 제공합니다. React Native 애플리케이션이 필요합니다. React Native 애플리케이션이 아직 없는 경우 [여기](https://reactnative.dev/docs/getting-started)에서 단계를 따라 설정할 수 있습니다.
캘린더 이벤트를 생성하기 위해 React Native 애플리케이션 내에서 JavaScript로 iOS/Android의 네이티브 캘린더에 액세스한다고 가정해봅시다. React Native는 네이티브 캘린더 API와 통신하기 위해 JavaScript API를 노출하지 않습니다. 그러나, 네이티브 모듈을 통해서 네이티브 캘린더 API와 통신하는 네이티브 코드를 작성할 수는 있습니다. 그런 다음 해당 네이티브 코드를 React Native 애플리케이션 안의 JavaScript를 통해 호출할 수 있습니다.
다음 섹션에서는 이러한 Android용, iOS용 캘린더 네이티브 모듈을 생성할 수 있습니다.
| 64.434783 | 281 | 0.757085 | kor_Hang | 1.00001 |
43cada341750cb91dd556af1d6eea4069b4908e7 | 1,096 | md | Markdown | .docs/class-CDPSession.md | leftstyle/puppeteer-api-zh_CN | 23ae583468c00a9c188b6aaabefbec0cd655086e | [
"MIT"
] | 742 | 2018-04-18T11:06:12.000Z | 2022-03-28T18:19:07.000Z | .docs/class-CDPSession.md | leftstyle/puppeteer-api-zh_CN | 23ae583468c00a9c188b6aaabefbec0cd655086e | [
"MIT"
] | 22 | 2018-05-16T07:56:39.000Z | 2021-09-18T07:02:28.000Z | .docs/class-CDPSession.md | leftstyle/puppeteer-api-zh_CN | 23ae583468c00a9c188b6aaabefbec0cd655086e | [
"MIT"
] | 65 | 2018-05-16T08:32:55.000Z | 2022-03-28T03:46:53.000Z | [📚 查看原文](//github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#class-cdpsession)
#### class: CDPSession
* extends: [`EventEmitter`](https://nodejs.org/api/events.html#events_class_eventemitter)
`CDPSession` 实例用于与 Chrome Devtools 协议的原生通信:
- 协议方法可以用 `session.send` 方法调用。
- 协议事件可以通过 `session.on` 方法订阅。
DevTools Protocol 的文档具体见这里: [DevTools Protocol Viewer](https://chromedevtools.github.io/devtools-protocol/).
```js
const client = await page.target().createCDPSession();
await client.send('Animation.enable');
client.on('Animation.animationCreated', () => console.log('Animation created!'));
const response = await client.send('Animation.getPlaybackRate');
console.log('playback rate is ' + response.playbackRate);
await client.send('Animation.setPlaybackRate', {
playbackRate: response.playbackRate / 2
});
```
#### cdpSession.detach()
- returns: <[Promise]>
从目标中分离 cdpSession。 一旦分离,cdpSession 对象将不会触发任何事件并且不能用于发送消息。
#### cdpSession.send(method[, params])
- `method` <[string]> protocol method name
- `params` <[Object]> Optional method parameters
- returns: <[Promise]<[Object]>>
| 33.212121 | 108 | 0.744526 | yue_Hant | 0.426555 |
43cb609d2bff3a451968bf29d24ccb569c4eac75 | 258 | md | Markdown | README.md | asha0903/securitydashboard | 4d99ed3dc358789f109f1bd46f845c8a683e57cd | [
"MIT"
] | null | null | null | README.md | asha0903/securitydashboard | 4d99ed3dc358789f109f1bd46f845c8a683e57cd | [
"MIT"
] | null | null | null | README.md | asha0903/securitydashboard | 4d99ed3dc358789f109f1bd46f845c8a683e57cd | [
"MIT"
] | null | null | null | # securitydashboard
It is an analysis done for the security platform team at an organization. The analysis made is on some sample data. The dashboard is developed on the Power BI tool. It keeps track of the team projects, delivery, services, and decisions.
| 86 | 237 | 0.79845 | eng_Latn | 0.999792 |
43cc1aae2f2a354ebfe41d745f72e7d00296fa9c | 4,419 | md | Markdown | docs/vs-2015/debugger/how-to-find-the-name-of-the-aspnet-process.md | LouisFr81/visualstudio-docs.fr-fr | e673e36b5b37ecf1f5eb97c42af84eed9c303200 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/how-to-find-the-name-of-the-aspnet-process.md | LouisFr81/visualstudio-docs.fr-fr | e673e36b5b37ecf1f5eb97c42af84eed9c303200 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/how-to-find-the-name-of-the-aspnet-process.md | LouisFr81/visualstudio-docs.fr-fr | e673e36b5b37ecf1f5eb97c42af84eed9c303200 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Comment : rechercher le nom du processus ASP.NET | Microsoft Docs'
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-debug
ms.tgt_pltfrm: ''
ms.topic: article
dev_langs:
- FSharp
- VB
- CSharp
- C++
helpviewer_keywords:
- ASP.NET debugging, ASP.NET process
- ASP.NET process
ms.assetid: 931a7597-b0f0-4a28-931d-46e63344435f
caps.latest.revision: 32
author: MikeJo5000
ms.author: mikejo
manager: ghogen
ms.openlocfilehash: 2ad3bea47bcde0da87bd185fac132c95f26ce4b0
ms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/16/2018
ms.locfileid: "51793241"
---
# <a name="how-to-find-the-name-of-the-aspnet-process"></a>Comment : rechercher le nom du processus ASP.NET
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Pour créer un attachement à une application [!INCLUDE[vstecasp](../includes/vstecasp-md.md)] en cours d'exécution, vous devez connaître le nom du processus [!INCLUDE[vstecasp](../includes/vstecasp-md.md)] :
- Si vous exécutez IIS 6.0 ou IIS 7.0, le nom est w3wp.exe.
- Si vous exécutez une version antérieure d'IIS, le nom est aspnet_wp.exe.
Pour les applications développées à l’aide de [!INCLUDE[vsprvslong](../includes/vsprvslong-md.md)] ou versions ultérieures, le [!INCLUDE[vstecasp](../includes/vstecasp-md.md)] code peut résider sur le système de fichiers et s’exécuter sous le serveur de test WebDev.WebServer.exe. Dans ce cas, vous devez créer un attachement à WebDev.WebServer.exe au lieu du processus [!INCLUDE[vstecasp](../includes/vstecasp-md.md)]. Ce scénario s'applique uniquement au débogage local.
Les applications antérieures d'ASP s'exécutent à l'intérieur du processus IIS inetinfo.exe lorsqu'elles s'exécutent in-process.
> [!NOTE]
> Les boîtes de dialogue et les commandes de menu qui s'affichent peuvent être différentes de celles qui sont décrites dans l'aide, en fonction de vos paramètres actifs ou de l'édition utilisée. Pour modifier vos paramètres, choisissez **Importation et exportation de paramètres** dans le menu **Outils** . Pour plus d’informations, consultez [Personnalisation des paramètres de développement dans Visual Studio](http://msdn.microsoft.com/en-us/22c4debb-4e31-47a8-8f19-16f328d7dcd3).
### <a name="to-determine-whether-project-code-resides-on-the-file-system-or-iis"></a>Pour déterminer si le code de projet réside sur le système de fichiers ou IIS
1. Dans Visual Studio, ouvrez **l’Explorateur de solutions** s’il n’est pas déjà ouvert.
2. Sélectionnez le nœud supérieur qui contient le nom de l'application.
3. Si le **propriétés** titre de la fenêtre contient un chemin d’accès de fichier, le code d’application réside sur le système de fichiers.
Sinon, le **propriétés** titre de la fenêtre contient le nom du site Web.
### <a name="to-determine-the-iis-version-under-which-the-application-is-running"></a>Pour déterminer la version d'IIS sous laquelle l'application s'exécute
1. Rechercher **outils d’administration** et exécutez-le. Selon votre système d’exploitation, cela peut être une icône à l’intérieur de **le panneau de configuration**, ou une entrée de menu qui s’affiche lorsque vous cliquez sur **Démarrer**.
Dans Windows XP, **le panneau de configuration** peut se trouver dans l’affichage des catégories ou en affichage classique. Dans l’affichage des catégories, vous devez cliquer sur **basculer vers l’affichage classique** ou **Performance et Maintenance** pour voir les **outils d’administration** icône.
2. À partir de **outils d’administration**, exécutez Internet Information Services. Une boîte de dialogue MMC apparaît.
3. Si plusieurs ordinateurs sont répertoriés dans le volet gauche, sélectionnez celui sur lequel réside le code de l'application.
4. La version d’IIS est dans le **Version** colonne du volet droit.
## <a name="see-also"></a>Voir aussi
[Conditions préalables pour le débogage des Applications Web à distance](../debugger/prerequistes-for-remote-debugging-web-applications.md)
[Configuration requise](../debugger/aspnet-debugging-system-requirements.md)
[Préparation du débogage ASP.NET](../debugger/preparing-to-debug-aspnet.md)
[Débogage d’applications et de scripts web](../debugger/debugging-web-applications-and-script.md)
| 56.653846 | 486 | 0.756959 | fra_Latn | 0.922271 |
43cc5c74fef0c3a28cb56caa2c8cc7ada74803e2 | 33 | md | Markdown | Readme.md | githubpot/githubpot.github.io | 07122ddc80a18843d0fe02793c337927096c9f39 | [
"MIT"
] | null | null | null | Readme.md | githubpot/githubpot.github.io | 07122ddc80a18843d0fe02793c337927096c9f39 | [
"MIT"
] | null | null | null | Readme.md | githubpot/githubpot.github.io | 07122ddc80a18843d0fe02793c337927096c9f39 | [
"MIT"
] | null | null | null | github blog by barryclark/jekyll
| 16.5 | 32 | 0.848485 | eng_Latn | 0.567798 |
43cc60ee7af681cf1db0d0ca47e2e1a8b7d6d3a0 | 36 | md | Markdown | README.md | soniagil/cat-age | 45f5541cca02f193cb63b1be4693f639305d1b91 | [
"MIT"
] | null | null | null | README.md | soniagil/cat-age | 45f5541cca02f193cb63b1be4693f639305d1b91 | [
"MIT"
] | null | null | null | README.md | soniagil/cat-age | 45f5541cca02f193cb63b1be4693f639305d1b91 | [
"MIT"
] | null | null | null | # cat-age
Learning Swift. First app
| 12 | 25 | 0.75 | eng_Latn | 0.933597 |
43cc6c4398772ec14bc28cb69e34dc0365a1fa34 | 4,694 | md | Markdown | treebanks/ru_pud/ru_pud-dep-amod.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | null | null | null | treebanks/ru_pud/ru_pud-dep-amod.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | null | null | null | treebanks/ru_pud/ru_pud-dep-amod.md | EmanuelUHH/docs | 641bd749c85e54e841758efa7084d8fdd090161a | [
"Apache-2.0"
] | null | null | null | ---
layout: base
title: 'Statistics of amod in UD_Russian-PUD'
udver: '2'
---
## Treebank Statistics: UD_Russian-PUD: Relations: `amod`
This relation is universal.
1782 nodes (9%) are attached to their parents as `amod`.
1761 instances of `amod` (99%) are right-to-left (child precedes parent).
Average distance between parent and child is 1.18125701459035.
The following 13 pairs of parts of speech are connected with `amod`: <tt><a href="ru_pud-pos-NOUN.html">NOUN</a></tt>-<tt><a href="ru_pud-pos-ADJ.html">ADJ</a></tt> (1530; 86% instances), <tt><a href="ru_pud-pos-NOUN.html">NOUN</a></tt>-<tt><a href="ru_pud-pos-NUM.html">NUM</a></tt> (146; 8% instances), <tt><a href="ru_pud-pos-PROPN.html">PROPN</a></tt>-<tt><a href="ru_pud-pos-ADJ.html">ADJ</a></tt> (64; 4% instances), <tt><a href="ru_pud-pos-ADJ.html">ADJ</a></tt>-<tt><a href="ru_pud-pos-ADJ.html">ADJ</a></tt> (18; 1% instances), <tt><a href="ru_pud-pos-PRON.html">PRON</a></tt>-<tt><a href="ru_pud-pos-ADJ.html">ADJ</a></tt> (6; 0% instances), <tt><a href="ru_pud-pos-NOUN.html">NOUN</a></tt>-<tt><a href="ru_pud-pos-VERB.html">VERB</a></tt> (5; 0% instances), <tt><a href="ru_pud-pos-NOUN.html">NOUN</a></tt>-<tt><a href="ru_pud-pos-NOUN.html">NOUN</a></tt> (4; 0% instances), <tt><a href="ru_pud-pos-PROPN.html">PROPN</a></tt>-<tt><a href="ru_pud-pos-PROPN.html">PROPN</a></tt> (3; 0% instances), <tt><a href="ru_pud-pos-NUM.html">NUM</a></tt>-<tt><a href="ru_pud-pos-NUM.html">NUM</a></tt> (2; 0% instances), <tt><a href="ru_pud-pos-ADJ.html">ADJ</a></tt>-<tt><a href="ru_pud-pos-VERB.html">VERB</a></tt> (1; 0% instances), <tt><a href="ru_pud-pos-DET.html">DET</a></tt>-<tt><a href="ru_pud-pos-ADJ.html">ADJ</a></tt> (1; 0% instances), <tt><a href="ru_pud-pos-NOUN.html">NOUN</a></tt>-<tt><a href="ru_pud-pos-DET.html">DET</a></tt> (1; 0% instances), <tt><a href="ru_pud-pos-X.html">X</a></tt>-<tt><a href="ru_pud-pos-ADJ.html">ADJ</a></tt> (1; 0% instances).
~~~ conllu
# visual-style 12 bgColor:blue
# visual-style 12 fgColor:white
# visual-style 13 bgColor:blue
# visual-style 13 fgColor:white
# visual-style 13 12 amod color:blue
1 То то PRON DT Animacy=Inan|Case=Nom|Gender=Neut|Number=Sing 14 nsubj _ SpaceAfter=No
2 , , PUNCT , _ 5 punct _ _
3 что что SCONJ WP Animacy=Inan|Case=Acc|Gender=Neut 5 obj _ _
4 она она PRON PRP Case=Nom|Gender=Fem|Number=Sing|Person=3 5 nsubj _ _
5 говорит говорить VERB VBC Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Pres 1 acl _ _
6 и и CCONJ CC _ 9 cc _ _
7 что что SCONJ WP Animacy=Inan|Case=Acc|Gender=Neut 9 obj _ _
8 она она PRON PRP Case=Nom|Gender=Fem|Number=Sing|Person=3 9 nsubj _ _
9 делает делать VERB VBC Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Pres 5 conj _ SpaceAfter=No
10 , , PUNCT , _ 5 punct _ _
11 на на ADP IN _ 13 case _ _
12 самом самый ADJ JJ Animacy=Inan|Gender=Neut|Number=Sing|Variant=Long 13 amod _ _
13 деле дело NOUN NN Animacy=Inan|Gender=Neut|Number=Sing 14 obl _ _
14 невероятно невероятно ADV RB _ 0 root _ SpaceAfter=No
15 . . PUNCT . _ 14 punct _ _
~~~
~~~ conllu
# visual-style 3 bgColor:blue
# visual-style 3 fgColor:white
# visual-style 4 bgColor:blue
# visual-style 4 fgColor:white
# visual-style 4 3 amod color:blue
1 Зимняя зимний ADJ JJ Animacy=Inan|Case=Nom|Gender=Fem|Number=Sing|Variant=Long 2 amod _ Proper=True
2 Универсиада универсиада NOUN NN Animacy=Inan|Case=Nom|Gender=Fem|Number=Sing 6 nsubj:pass _ _
3 2019 2019 NUM JJ Animacy=Inan|Case=Gen|Gender=Masc|Number=Sing|Variant=Long 4 amod _ Proper=True
4 года год NOUN NN Animacy=Inan|Case=Gen|Gender=Masc|Number=Sing 2 nmod _ _
5 будет быть AUX VBC Aspect=Imp|Mood=Ind|Number=Sing|Person=3|Tense=Fut 6 aux:pass _ _
6 проводиться проводиться VERB VB Aspect=Imp 0 root _ _
7 в в ADP IN _ 8 case _ _
8 Красноярске Красноярск PROPN NNP Animacy=Inan|Gender=Masc|Number=Sing 6 obl _ SpaceAfter=No
9 . . PUNCT . _ 6 punct _ _
~~~
~~~ conllu
# visual-style 7 bgColor:blue
# visual-style 7 fgColor:white
# visual-style 8 bgColor:blue
# visual-style 8 fgColor:white
# visual-style 8 7 amod color:blue
1 Это это PRON DT Animacy=Inan|Case=Nom|Gender=Neut|Number=Sing 3 nsubj:pass _ _
2 было быть AUX VBC Aspect=Imp|Gender=Neut|Mood=Ind|Number=Sing|Tense=Past 3 aux:pass _ _
3 привезено привезти VERB VBN Animacy=Inan|Aspect=Perf|Case=Nom|Gender=Neut|Number=Sing|Tense=Past|Variant=Short|Voice=Pass 0 root _ _
4 на на ADP IN _ 5 case _ _
5 лодке лодка NOUN NN Animacy=Inan|Gender=Fem|Number=Sing 3 obl _ _
6 из из ADP IN _ 8 case _ _
7 континентальной континентальный ADJ JJ Animacy=Inan|Case=Gen|Gender=Fem|Number=Sing|Variant=Long 8 amod _ _
8 Европы Европа PROPN NNP Animacy=Inan|Case=Gen|Gender=Fem|Number=Sing 3 obl _ SpaceAfter=No
9 . . PUNCT . _ 3 punct _ _
~~~
| 57.243902 | 1,571 | 0.712612 | yue_Hant | 0.810766 |
43cc78ccdfb5a46219d51baa1e6c54ad21dafef1 | 8,411 | md | Markdown | doc/development/agent/routing.md | lfuelling/gitlab | 072cadea8d190612896753d8a45f3ac929438cb8 | [
"MIT"
] | null | null | null | doc/development/agent/routing.md | lfuelling/gitlab | 072cadea8d190612896753d8a45f3ac929438cb8 | [
"MIT"
] | 5 | 2021-09-03T05:33:05.000Z | 2022-02-26T10:25:45.000Z | doc/development/agent/routing.md | rohitesh1/gitlab | 4a797e9fc8f222853549151c50656de5bd999f53 | [
"MIT"
] | null | null | null | ---
stage: Configure
group: Configure
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/engineering/ux/technical-writing/#designated-technical-writers
---
# Routing `kas` requests in the Kubernetes Agent **(PREMIUM ONLY)**
This document describes how `kas` routes requests to concrete `agentk` instances.
GitLab must talk to GitLab Kubernetes Agent Server (`kas`) to:
- Get information about connected agents. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/249560).
- Interact with agents. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/230571).
- Interact with Kubernetes clusters. [Read more](https://gitlab.com/gitlab-org/gitlab/-/issues/240918).
Each agent connects to an instance of `kas` and keeps an open connection. When
GitLab must talk to a particular agent, a `kas` instance connected to this agent must
be found, and the request routed to it.
## System design
For an architecture overview please see
[architecture.md](https://gitlab.com/gitlab-org/cluster-integration/gitlab-agent/-/blob/master/doc/architecture.md).
```mermaid
flowchart LR
subgraph "Kubernetes 1"
agentk1p1["agentk 1, Pod1"]
agentk1p2["agentk 1, Pod2"]
end
subgraph "Kubernetes 2"
agentk2p1["agentk 2, Pod1"]
end
subgraph "Kubernetes 3"
agentk3p1["agentk 3, Pod1"]
end
subgraph kas
kas1["kas 1"]
kas2["kas 2"]
kas3["kas 3"]
end
GitLab["GitLab Rails"]
Redis
GitLab -- "gRPC to any kas" --> kas
kas1 -- register connected agents --> Redis
kas2 -- register connected agents --> Redis
kas1 -- lookup agent --> Redis
agentk1p1 -- "gRPC" --> kas1
agentk1p2 -- "gRPC" --> kas2
agentk2p1 -- "gRPC" --> kas1
agentk3p1 -- "gRPC" --> kas2
```
For this architecture, this diagram shows a request to `agentk 3, Pod1` for the list of pods:
```mermaid
sequenceDiagram
GitLab->>+kas1: Get list of running<br />Pods from agentk<br />with agent_id=3
Note right of kas1: kas1 checks for<br />agent connected with agent_id=3.<br />It does not.<br />Queries Redis
kas1->>+Redis: Get list of connected agents<br />with agent_id=3
Redis-->-kas1: List of connected agents<br />with agent_id=3
Note right of kas1: kas1 picks a specific agentk instance<br />to address and talks to<br />the corresponding kas instance,<br />specifying which agentk instance<br />to route the request to.
kas1->>+kas2: Get the list of running Pods<br />from agentk 3, Pod1
kas2->>+agentk 3 Pod1: Get list of Pods
agentk 3 Pod1->>-kas2: Get list of Pods
kas2-->>-kas1: List of running Pods<br />from agentk 3, Pod1
kas1-->>-GitLab: List of running Pods<br />from agentk with agent_id=3
```
Each `kas` instance tracks the agents connected to it in Redis. For each agent, it
stores a serialized protobuf object with information about the agent. When an agent
disconnects, `kas` removes all corresponding information from Redis. For both events,
`kas` publishes a notification to a Redis [pub-sub channel](https://redis.io/topics/pubsub).
Each agent, while logically a single entity, can have multiple replicas (multiple pods)
in a cluster. `kas` accommodates that and records per-replica (generally per-connection)
information. Each open `GetConfiguration()` streaming request is given
a unique identifier which, combined with agent ID, identifies an `agentk` instance.
gRPC can keep multiple TCP connections open for a single target host. `agentk` only
runs one `GetConfiguration()` streaming request. `kas` uses that connection, and
doesn't see idle TCP connections because they are handled by the gRPC framework.
Each `kas` instance provides information to Redis, so other `kas` instances can discover and access it.
Information is stored in Redis with an [expiration time](https://redis.io/commands/expire),
to expire information for `kas` instances that become unavailable. To prevent
information from expiring too quickly, `kas` periodically updates the expiration time
for valid entries. Before terminating, `kas` cleans up the information it adds into Redis.
When `kas` must atomically update multiple data structures in Redis, it uses
[transactions](https://redis.io/topics/transactions) to ensure data consistency.
Grouped data items must have the same expiration time.
In addition to the existing `agentk -> kas` gRPC endpoint, `kas` exposes two new,
separate gRPC endpoints for GitLab and for `kas -> kas` requests. Each endpoint
is a separate network listener, making it easier to control network access to endpoints
and allowing separate configuration for each endpoint.
Databases, like PostgreSQL, aren't used because the data is transient, with no need
to reliably persist it.
### `GitLab : kas` external endpoint
GitLab authenticates with `kas` using JWT and the same shared secret used by the
`kas -> GitLab` communication. The JWT issuer should be `gitlab` and the audience
should be `gitlab-kas`.
When accessed through this endpoint, `kas` plays the role of request router.
If a request from GitLab comes but no connected agent can handle it, `kas` blocks
and waits for a suitable agent to connect to it or to another `kas` instance. It
stops waiting when the client disconnects, or when some long timeout happens, such
as client timeout. `kas` is notified of new agent connections through a
[pub-sub channel](https://redis.io/topics/pubsub) to avoid frequent polling.
When a suitable agent connects, `kas` routes the request to it.
### `kas : kas` internal endpoint
This endpoint is an implementation detail, an internal API, and should not be used
by any other system. It's protected by JWT using a secret, shared among all `kas`
instances. No other system must have access to this secret.
When accessed through this endpoint, `kas` uses the request itself to determine
which `agentk` to send the request to. It prevents request cycles by only following
the instructions in the request, rather than doing discovery. It's the responsibility
of the `kas` receiving the request from the _external_ endpoint to retry and re-route
requests. This method ensures a single central component for each request can determine
how a request is routed, rather than distributing the decision across several `kas` instances.
### API definitions
```proto
syntax = "proto3";
import "google/protobuf/timestamp.proto";
message KasAddress {
string ip = 1;
uint32 port = 2;
}
message ConnectedAgentInfo {
// Agent id.
int64 id = 1;
// Identifies a particular agentk->kas connection. Randomly generated when agent connects.
int64 connection_id = 2;
string version = 3;
string commit = 4;
// Pod namespace.
string pod_namespace = 5;
// Pod name.
string pod_name = 6;
// When the connection was established.
google.protobuf.Timestamp connected_at = 7;
KasAddress kas_address = 8;
// What else do we need?
}
message KasInstanceInfo {
string version = 1;
string commit = 2;
KasAddress address = 3;
// What else do we need?
}
message ConnectedAgentsForProjectRequest {
int64 project_id = 1;
}
message ConnectedAgentsForProjectResponse {
// There may 0 or more agents with the same id, depending on the number of running Pods.
repeated ConnectedAgentInfo agents = 1;
}
message ConnectedAgentsByIdRequest {
int64 agent_id = 1;
}
message ConnectedAgentsByIdResponse {
repeated ConnectedAgentInfo agents = 1;
}
// API for use by GitLab.
service KasApi {
// Connected agents for a particular configuration project.
rpc ConnectedAgentsForProject (ConnectedAgentsForProjectRequest) returns (ConnectedAgentsForProjectResponse) {
}
// Connected agents for a particular agent id.
rpc ConnectedAgentsById (ConnectedAgentsByIdRequest) returns (ConnectedAgentsByIdResponse) {
}
// Depends on the need, but here is the call from the example above.
rpc GetPods (GetPodsRequest) returns (GetPodsResponse) {
}
}
message Pod {
string namespace = 1;
string name = 2;
}
message GetPodsRequest {
int64 agent_id = 1;
int64 connection_id = 2;
}
message GetPodsResponse {
repeated Pod pods = 1;
}
// Internal API for use by kas for kas -> kas calls.
service KasInternal {
// Depends on the need, but here is the call from the example above.
rpc GetPods (GetPodsRequest) returns (GetPodsResponse) {
}
}
```
| 37.549107 | 195 | 0.742956 | eng_Latn | 0.979683 |
43cc9256c2999256cae7c715693a87ef67c2179d | 3,922 | md | Markdown | docs/concepts/api-extension/custom-resources.md | pwittrock/kubernetes.github.io | 06727e2ae8e2962e81b1c420cae2682d71c6725f | [
"CC-BY-4.0"
] | 1 | 2022-02-23T08:41:50.000Z | 2022-02-23T08:41:50.000Z | docs/concepts/api-extension/custom-resources.md | pwittrock/kubernetes.github.io | 06727e2ae8e2962e81b1c420cae2682d71c6725f | [
"CC-BY-4.0"
] | null | null | null | docs/concepts/api-extension/custom-resources.md | pwittrock/kubernetes.github.io | 06727e2ae8e2962e81b1c420cae2682d71c6725f | [
"CC-BY-4.0"
] | 3 | 2021-02-25T00:16:08.000Z | 2021-11-29T09:00:54.000Z | ---
title: Custom Resources
assignees:
- enisoc
- deads2k
---
{% capture overview %}
This page explains the concept of *custom resources*, which are extensions of the Kubernetes API.
{% endcapture %}
{% capture body %}
## Custom resources
A *resource* is an endpoint in the [Kubernetes API](/docs/reference/api-overview/) that stores a
collection of [API objects](/docs/concepts/overview/working-with-objects/kubernetes-objects/) of a
certain kind.
For example, the built-in *pods* resource contains a collection of Pod objects.
A *custom resource* is an extension of the Kubernetes API that is not necessarily available on every
Kubernetes cluster.
In other words, it represents a customization of a particular Kubernetes installation.
Custom resources can appear and disappear in a running cluster through dynamic registration,
and cluster admins can update custom resources independently of the cluster itself.
Once a custom resource is installed, users can create and access its objects with
[kubectl](/docs/user-guide/kubectl-overview/), just as they do for built-in resources like *pods*.
## Custom controllers
On their own, custom resources simply let you store and retrieve structured data.
It is only when combined with a *controller* that they become a true
[declarative API](/docs/concepts/overview/working-with-objects/kubernetes-objects/#understanding-kubernetes-objects).
The controller interprets the structured data as a record of the user's desired state,
and continually takes action to achieve and maintain that state.
A *custom controller* is a controller that users can deploy and update on a running cluster,
independently of the cluster's own lifecycle.
Custom controllers can work with any kind of resource, but they are especially effective when
combined with custom resources.
The [Operator](https://coreos.com/blog/introducing-operators.html) pattern is one example of such a
combination. It allows developers to encode domain knowledge for specific applications into an
extension of the Kubernetes API.
## CustomResourceDefinitions
[CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/)
(CRD) is a built-in API that offers a simple way to create custom resources.
Deploying a CRD into the cluster causes the Kubernetes API server to begin serving the specified
custom resource on your behalf.
This frees you from writing your own API server to handle the custom resource,
but the generic nature of the implementation means you have less flexibility than with
[API server aggregation](#api-server-aggregation).
CRD is the successor to the deprecated *ThirdPartyResource* (TPR) API, and is available as of
Kubernetes 1.7.
## API server aggregation
Usually, each resource in the Kubernetes API requires code that handles REST requests and manages
persistent storage of objects.
The main Kubernetes API server handles built-in resources like *pods* and *services*,
and can also handle custom resources in a generic way through [CustomResourceDefinitions](#customresourcedefinitions).
The [aggregation layer](/docs/concepts/api-extension/apiserver-aggregation/) allows you to provide specialized
implementations for your custom resources by writing and deploying your own standalone API server.
The main API server delegates requests to you for the custom resources that you handle,
making them available to all of its clients.
{% endcapture %}
{% capture whatsnext %}
* Learn how to [Extend the Kubernetes API with the aggregation layer](/docs/concepts/api-extension/apiserver-aggregation/).
* Learn how to [Extend the Kubernetes API with CustomResourceDefinition](/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/).
* Learn how to [Migrate a ThirdPartyResource to CustomResourceDefinition](/docs/tasks/access-kubernetes-api/migrate-third-party-resource/).
{% endcapture %}
{% include templates/concept.md %}
| 49.025 | 148 | 0.802397 | eng_Latn | 0.982125 |
43ccfa2d7c7e10c663504dfa160471a01a9969ff | 3,798 | md | Markdown | README.md | yurimarx/movie | 7ad95f4f3545c63d731b3660388209c3b9fffcc4 | [
"MIT"
] | null | null | null | README.md | yurimarx/movie | 7ad95f4f3545c63d731b3660388209c3b9fffcc4 | [
"MIT"
] | null | null | null | README.md | yurimarx/movie | 7ad95f4f3545c63d731b3660388209c3b9fffcc4 | [
"MIT"
] | 3 | 2021-08-09T05:43:01.000Z | 2021-08-13T17:48:06.000Z | ## movie-rest-application
This is a sample of a REST API application built with ObjectScript in InterSystems IRIS.
It also has OPEN API spec,
can be developed with Docker and VSCode,
can ve deployed as ZPM module.
## Prerequisites
Make sure you have [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [Docker desktop](https://www.docker.com/products/docker-desktop) installed.
## Installation with ZPM
zpm:USER>install movie
## Installation for development
Clone/git pull the repo into any local directory e.g. like it is shown below (here I show all the examples related to this repository, but I assume you have your own derived from the template):
```
$ git clone git@github.com:yurimarx/movie.git
```
Open the terminal in this directory and run:
```
$ docker-compose up -d --build
```
## How to Work With it
This project creates /movie-api REST web-application on IRIS which implements 4 types of communication: GET, POST, PUT and DELETE aka CRUD operations for a Movie Catalog.
Open http://localhost:52773/swagger-ui/index.html to test the REST API
Replace swagger defaolt Open API 2.0 documentation *http://localhost:52773/crud/_spec* by:
```
http://localhost:52773/movie-api/_spec
```
This REST API exposes two GET requests: all the data and one record.
To get all the data in JSON call:
```
http://localhost:52773/movie-api/castings
```
To request the data for a particular record provide the id in GET request like 'http://localhost:52773/movie-api/castings/id' . E.g.:
```
http://localhost:52773/movie-api/castings/1
```
This will return JSON data for the casting with ID=1, something like that:
```
{
"castingId": 1,
"movie": "1",
"actor": "2",
"characterName": "Lucia"
}
```
### Testing POST request
Create a POST request e.g. in Postman with raw data in JSON. e.g.
```
{
"movie": "1",
"actor": "2",
"characterName": "Lucia"
}
```
Adjust the authorisation if needed - it is basic for container with default login and password for
IRIS Community edition container and send the POST request to 'http://localhost:52773/movie-api/castings'
This will create a record in dc.movie.model.Casting class of IRIS.
### Testing PUT request
PUT request could be used to update the records. This needs to send the similar JSON as in POST request above supplying the id of the updated record in URL.
E.g. we want to change the record with id=1. Prepare in Postman the JSON in raw like following:
```
{
"movie": "1",
"actor": "2",
"characterName": "Lucia Gimenez"
}
```
and send the put request to:
```
http://localhost:52773/movie-api/castings/1
```
### Testing DELETE request
For delete request this REST API expects only the id of the record to delete. E.g. if the id=1 the following DELETE call will delete the record:
```
http://localhost:52773/movie-api/castings/1
```
## How to start coding
This repository is ready to code in VSCode with ObjectScript plugin.
Install [VSCode](https://code.visualstudio.com/) and [ObjectScript](https://marketplace.visualstudio.com/items?itemName=daimor.vscode-objectscript) plugin and open the folder in VSCode.
The script in Installer.cls will import everything you place under /src/cls into IRIS.
## What's insde the repo
### Dockerfile
The simplest dockerfile to start IRIS and load ObjectScript from /src/cls folder
Use the related docker-compose.yml to easily setup additional parametes like port number and where you map keys and host folders.
### .vscode/settings.json
Settings file to let you immedeately code in VSCode with [VSCode ObjectScript plugin](https://marketplace.visualstudio.com/items?itemName=daimor.vscode-objectscript))
### .vscode/launch.json
Config file if you want to debug with VSCode ObjectScript
| 35.830189 | 193 | 0.73486 | eng_Latn | 0.901765 |
43cd21a024db0a540a89305f6b29a7121084884e | 8,670 | markdown | Markdown | posts/2019-04-16-peter.mattessi.markdown | BearerPipelineTest/usesthis | 772c65ee12c5a797721875dc255ea2936ab9d2d4 | [
"MIT"
] | 310 | 2015-01-04T07:49:17.000Z | 2022-03-13T05:19:59.000Z | posts/2019-04-16-peter.mattessi.markdown | BearerPipelineTest/usesthis | 772c65ee12c5a797721875dc255ea2936ab9d2d4 | [
"MIT"
] | 55 | 2015-01-01T21:15:34.000Z | 2022-01-10T07:16:55.000Z | posts/2019-04-16-peter.mattessi.markdown | BearerPipelineTest/usesthis | 772c65ee12c5a797721875dc255ea2936ab9d2d4 | [
"MIT"
] | 111 | 2015-01-16T19:10:41.000Z | 2022-03-25T08:11:44.000Z | ---
title: Peter Mattessi
summary: TV screenwriter (EastEnders, Neighbours)
categories:
- mac
- television
- writer
---
### Who are you, and what do you do?
I am Peter, and I am a television screenwriter. I write for [EastEnders](https://www.bbc.co.uk/programmes/b006m86d "The official EastEnders website.") and occasionally [Holby City](https://www.bbc.co.uk/programmes/b006mhd6 "The official Holby City website.") in the UK, and for [Neighbours](https://tenplay.com.au/channel-eleven/neighbours "The official Neighbours website.") and [The Heights](https://www.abc.net.au/tv/programs/heights/ "The official website for The Heights.") in Australia. As well as the assorted bits and pieces of spec scripts and development that all TV screenwriters have ticking away.
### What hardware do you use?
I have been deeply devoted to each of my computers, tend to hang onto them as long as I can, and have fond memories of all of them. My current one is a [mid-2014 Retina MacBook Pro][macbook-pro]. It is lovely and I will thrash it until it dies because I can't bear the keyboards on the new ones. I don't know how much RAM or speed it has and I don't much care.
When I'm at home I put it on a [Griffin stand][elevator] and use an external keyboard and mouse and this is something I do care about. Deeply. I used to use the Apple [keyboard][]/[mouse][magic-mouse] but came to the conclusion that they are crap and expensive so did way too much research and ended up with a [Logitech K811 keyboard][bluetooth-easy-switch-keyboard-k811] and a Logitech mouse (The [Triathlon][m720-triathlon] - lol). I don't know if they're any better, really, but the keyboard certainly cost a lot more once I got it from the States because no-one sells them in Australia.
I have an [iPhone 8][iphone-8] and of course it is indispensable but I wish I wasn't so reliant on it. In general and as will become apparent, I am becoming increasingly stressed at the number of things coming at me, mainly from my phone, and am doing all I can to reduce these even as my screentime stats make horrifying reading. So I bought a [Nokia 3310][3310] to take on holidays and for when I didn't want to be distracted and it was so annoying to use that I never looked at it so that's a win.
I have [Bose noise cancelling headphones][quietcomfort-20i] which are honestly the best work investment I have ever made and incredible on the plane.
In my study at home I have a record player ([this one][at-lp60-usb]) which is nice. I don't know why. I am too young to have had records as a kid so it's not nostalgia. Trendiness? It's just nice to listen to something differently every so often. But usually I listen with [iTunes][] and headphones or a Sonos. I don't have [Spotify][] because I turn the internet off to work and also maybe they shaft artists a bit?
I read on a [Kindle][] and think it is fantastic. I spent two separate years with two separate children in darkened rooms holding them while they slept and the backlit Kindle got me through it. I know Amazon has its issues (pay your staff!) but when the turning of a page has the potential to wake up your baby it shifts your perspective a bit.
I have an iPad but I don't know why. It's really just a child distraction device and for that it is invaluable and also has been replaced twice because the oldest keeps biting the screens (why) and cracking them. Flipside of that is last time they replaced it the old one still worked so now we have two, both with cracked screens.
I use Moo notebooks. I'm never buying another type of notebook again. I write in them ONLY in pencil ([Kuru Togas][kuru-toga-pipe-slide]) because I love and miss writing. I don't get much chance but I try to write as much as I can in the early planning stages of projects - on scrap paper, index cards, magic whiteboard (brilliant!), whatever seems appropriate. I just think it makes me think a bit differently and keeps me away from the computer or the phone, which as we are learning is important for me.
Not strictly work, but a couple of years ago we bought a [Yuba Mundo cargo bike][mundo-classic] to get the boys around and I have honestly never enjoyed anything more. It is a joy and I love it. If you like bikes and have cause to carry things/people around, you will too.
### And what software?
[Final Draft][final-draft] for scripts. A divisive piece of software because it does things like suddenly makes everything you type appear upside-down. (I told them about that and they gave me $300 to test a new version, so that was a win). But love it or hate it, it doesn't much matter because everyone in the industry uses it. You might as well hate weather.
[Word][] or [Pages][] depending on the mood that takes me.
[Dropbox][] for files. [Saasu][] for managing my business.
All the Apple stuff - [Calendar][], [Notes][], [Mail][] etc. I am sure there's better stuff out there and when I was 22 I would have spent weeks finding and evaluating it but honestly I can't be bothered anymore.
[Freedom][] is the greatest piece of software I have ever bought and it literally changed my concentration patterns and thus my life. It used to be just a blunt instrument that blocked your internet for a set period of time and that was fantastic enough. But now it allows you to get granular with what you block and when. A godsend.
### What would be your dream setup?
How spiritual are we going here? In terms of location I like my little study that used to be a walk-in wardrobe. It could use a big monitor and an air conditioner, but if it was near the ocean and I could swim whenever the urge took me I wouldn't care about either of those things.
Spiritually I would love to reduce the amount of connection through computers in my life. Of course I'll never get rid of it and there are parts of it that I absolutely love as something approaching 100,000 tweets will attest. But I am more and more feeling... overwhelmed. I can feel it eating away at my concentration and my time and actually changing the way my brain works and I don't like it. I think I would be much happier, or at least more relaxed, if I could separate work from connectivity as much as possible and somehow shape my life so I don't need to look at a screen all the time. I would like to spend less time on useless mindless shit on the internet and more time with the things and people that are important to me. This is something I find difficult but am going to keep persisting with.
[3310]: https://www.nokia.com/phones/en_int/nokia-3310 "A basic mobile phone."
[at-lp60-usb]: https://audio-technica.com.au/products/at-lp60-usb/ "A USB turntable."
[bluetooth-easy-switch-keyboard-k811]: https://www.logitech.com/en-us/product/illuminated-keyboard-for-mac-ipad-iphone#specification-tabular "A Bluetooth keyboard."
[calendar]: https://en.wikipedia.org/wiki/Calendar_(Apple) "The calendar software included with macOS."
[dropbox]: https://www.dropbox.com/ "Online syncing and storage."
[elevator]: https://griffintechnology.com/us/products/stands-and-mounts/elevator "A laptop stand."
[final-draft]: http://store.finaldraft.com/final-draft-10.html "Popular screenwriting software."
[freedom]: https://freedom.to/ "Productivity software that locks you away from the Internet."
[iphone-8]: https://en.wikipedia.org/wiki/IPhone_8 "A 4.7 inch smartphone."
[itunes]: https://www.apple.com/itunes/ "A jukebox application and online store."
[keyboard]: https://www.apple.com/keyboard/ "The keyboard."
[kindle]: https://www.amazon.com/Kindle-Ereader-ebook-reader/dp/B007HCCNJU "A digital book reader."
[kuru-toga-pipe-slide]: https://www.jetpens.com/Uni-Kuru-Toga-Mechanical-Pencil-Pipe-Slide-0.5-mm-Black/pd/15067 "A mechanical pencil."
[m720-triathlon]: https://www.logitech.com/en-us/product/m720-triathlon?crid=7 "A wireless multi-device mouse."
[macbook-pro]: https://www.apple.com/macbook-pro/ "A laptop."
[magic-mouse]: https://en.wikipedia.org/wiki/Magic_Mouse "A multi-touch mouse."
[mail]: https://en.wikipedia.org/wiki/Mail_(application) "The default Mac OS X mail client."
[mundo-classic]: https://yubabikes.com/cargobikestore/mundo-classic "A cargo bike."
[notes]: https://en.wikipedia.org/wiki/Notes_(Apple) "A note-taking application included with Mac OS X."
[pages]: https://www.apple.com/pages/ "A Mac word processor and layout tool from Apple."
[quietcomfort-20i]: http://worldwide.bose.com/productsupport/en_us/web/qc20i/page.html "Noise-cancelling in-ear headphones."
[saasu]: https://www.saasu.com/ "Online accounting software."
[spotify]: https://www.spotify.com/us/ "A music streaming service."
[word]: https://products.office.com/en-us/word "A document editor."
| 114.078947 | 808 | 0.765167 | eng_Latn | 0.996612 |
43cd605c86defc174c7b2e7a896c91cdef0e3740 | 274 | md | Markdown | docs/InlineResponse2005.md | yholkamp/postmark-java | 7aab8ae156bdcc3abcee80acec488e90dc1018ef | [
"MIT"
] | null | null | null | docs/InlineResponse2005.md | yholkamp/postmark-java | 7aab8ae156bdcc3abcee80acec488e90dc1018ef | [
"MIT"
] | null | null | null | docs/InlineResponse2005.md | yholkamp/postmark-java | 7aab8ae156bdcc3abcee80acec488e90dc1018ef | [
"MIT"
] | null | null | null |
# InlineResponse2005
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**tracked** | **Integer** | | [optional]
**days** | [**List<InlineResponse2005Days>**](InlineResponse2005Days.md) | | [optional]
| 22.833333 | 95 | 0.510949 | deu_Latn | 0.10796 |
43cd69d39bb73fc47ddfbfe611ff4b8cb8baabfa | 35 | md | Markdown | README.md | edu-test-a4r3g4/DINO2021-testing-test | f4ce9eba27b333cc0667d4cc59f25872af7fc893 | [
"MIT"
] | null | null | null | README.md | edu-test-a4r3g4/DINO2021-testing-test | f4ce9eba27b333cc0667d4cc59f25872af7fc893 | [
"MIT"
] | null | null | null | README.md | edu-test-a4r3g4/DINO2021-testing-test | f4ce9eba27b333cc0667d4cc59f25872af7fc893 | [
"MIT"
] | null | null | null | # DINO2021-testing-test
test test
| 8.75 | 23 | 0.771429 | kor_Hang | 0.422501 |
43cd81aac3b74576d2043e306d659324ad815d18 | 5,611 | md | Markdown | articles/machine-learning/algorithm-module-reference/create-python-model.md | Juliako/azure-docs.de-de | d2fd1cfbe4ccecc253b2f8400947f016af7a00f5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/algorithm-module-reference/create-python-model.md | Juliako/azure-docs.de-de | d2fd1cfbe4ccecc253b2f8400947f016af7a00f5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/algorithm-module-reference/create-python-model.md | Juliako/azure-docs.de-de | d2fd1cfbe4ccecc253b2f8400947f016af7a00f5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Create Python Model: Modulreferenz'
titleSuffix: Azure Machine Learning
description: Erfahren Sie, wie Sie mithilfe des Moduls Create Python Model in Azure Machine Learning ein benutzerdefiniertes Modellierungs- oder Datenverarbeitungsmodul erstellen.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: reference
author: xiaoharper
ms.author: zhanxia
ms.date: 11/19/2019
ms.openlocfilehash: 0c1a4f33da7e1f39951d641ed1d563c46fb664ca
ms.sourcegitcommit: d6b68b907e5158b451239e4c09bb55eccb5fef89
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 11/20/2019
ms.locfileid: "74232649"
---
# <a name="create-python-model"></a>Erstellen eines Python-Modells
In diesem Artikel wird ein Modul in Azure Machine Learning-Designer (Vorschauversion) beschrieben.
Erfahren Sie, wie Sie das **Create Python Model**-Modul verwenden, um anhand eines Python-Skripts ein untrainiertes Modell zu erstellen. Sie können das Modell auf der Grundlage jedes Lernmodells erstellen, das in einem Python-Paket in der Azure Machine Learning-Designer-Umgebung enthalten ist.
Nachdem Sie das Modell erstellt haben, können Sie es mithilfe von [Train Model](train-model.md) wie alle anderen Learner in Azure Machine Learning mit einem Dataset trainieren. Das trainierte Modell kann an [Score Model](score-model.md) übergeben werden, um es zum Treffen von Vorhersagen zu verwenden. Anschließend können Sie das trainierte Modell speichern und den Bewertungsworkflow als Webdienst veröffentlichen.
> [!WARNING]
> Derzeit ist es nicht möglich, die bewerteten Ergebnisse eines Python-Modells an [Evaluate Model](evaluate-model.md) zu übergeben. Wenn Sie ein Modell auswerten müssen, können Sie ein benutzerdefiniertes Python-Skript schreiben und es mit dem [Execute Python Script](execute-python-script.md)-Modul ausführen.
## <a name="how-to-configure-create-python-model"></a>Konfigurieren von „Create Python Model“
Die Verwendung dieses Moduls setzt gute oder fortgeschrittene Kenntnisse von Python voraus. Das Modul unterstützt die Verwendung jedes Learners, der in den bereits in Azure Machine Learning installierten Python-Paketen enthalten ist. Die Liste der vorinstallierten Python-Pakete finden Sie unter [Execute Python Script](execute-python-script.md).
In diesem Artikel wird anhand einer einfachen Pipeline erläutert, wie Sie **Create Python Model** verwenden. Unten ist der Graph zur Pipeline angegeben.

1. Klicken Sie auf **Create Python Model**, und bearbeiten Sie das Skript zum Implementieren Ihres Modellierungs- oder Datenverwaltungsprozesses. Sie können das Modell auf Grundlage jedes Learners erstellen, der in einem Python-Paket in der Azure Machine Learning-Umgebung enthalten ist.
Hier sehen Sie einen Beispielcode für die Naive Bayes-Klassifizierung mit zwei Klassen, in dem das beliebte *sklearn*-Paket verwendet wird.
```Python
# The script MUST define a class named AzureMLModel.
# This class MUST at least define the following three methods:
# __init__: in which self.model must be assigned,
# train: which trains self.model, the two input arguments must be pandas DataFrame,
# predict: which generates prediction result, the input argument and the prediction result MUST be pandas DataFrame.
# The signatures (method names and argument names) of all these methods MUST be exactly the same as the following example.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
class AzureMLModel:
def __init__(self):
self.model = GaussianNB()
self.feature_column_names = list()
def train(self, df_train, df_label):
self.feature_column_names = df_train.columns.tolist()
self.model.fit(df_train, df_label)
def predict(self, df):
return pd.DataFrame(
{'Scored Labels': self.model.predict(df[self.feature_column_names]),
'probabilities': self.model.predict_proba(df[self.feature_column_names])[:, 1]}
)
```
2. Verbinden Sie das soeben erstellte **Create Python Model**-Modul mit **Train Model** und **Score Model**.
3. Wenn Sie das Modell auswerten müssen, fügen Sie ein [Execute Python Script](execute-python-script.md)-Modul hinzu, und bearbeiten Sie das Python-Skript zum Implementieren der Auswertung.
Hier sehen Sie Beispielcode für die Auswertung.
```Python
# The script MUST contain a function named azureml_main
# which is the entry point for this module.
# imports up here can be used to
import pandas as pd
# The entry point function can contain up to two input arguments:
# Param<dataframe1>: a pandas.DataFrame
# Param<dataframe2>: a pandas.DataFrame
def azureml_main(dataframe1 = None, dataframe2 = None):
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score, roc_curve
import pandas as pd
import numpy as np
scores = dataframe1.ix[:, ("income", "Scored Labels", "probabilities")]
ytrue = np.array([0 if val == '<=50K' else 1 for val in scores["income"]])
ypred = np.array([0 if val == '<=50K' else 1 for val in scores["Scored Labels"]])
probabilities = scores["probabilities"]
accuracy, precision, recall, auc = \
accuracy_score(ytrue, ypred),\
precision_score(ytrue, ypred),\
recall_score(ytrue, ypred),\
roc_auc_score(ytrue, probabilities)
metrics = pd.DataFrame();
metrics["Metric"] = ["Accuracy", "Precision", "Recall", "AUC"];
metrics["Value"] = [accuracy, precision, recall, auc]
return metrics,
```
| 46.758333 | 416 | 0.761896 | deu_Latn | 0.815526 |
43ce6be36e6b0633c9b89d63c41391b3bec15bdd | 5,184 | md | Markdown | articles/virtual-machines/virtual-machines-windows-vertical-scaling-automation.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | articles/virtual-machines/virtual-machines-windows-vertical-scaling-automation.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | articles/virtual-machines/virtual-machines-windows-vertical-scaling-automation.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | <properties
pageTitle="Escalado vertical de máquinas virtuales de Azure con Automatización de Azure | Microsoft Azure"
description="Escalado verticalmente una máquina virtual de Windows en respuesta a las alertas de supervisión con Automatización de Azure"
services="virtual-machines-windows"
documentationCenter=""
authors="singhkays"
manager="timlt"
editor=""
tags="azure-resource-manager"/>
<tags
ms.service="virtual-machines-windows"
ms.workload="infrastructure-services"
ms.tgt_pltfrm="vm-windows"
ms.devlang="na"
ms.topic="article"
ms.date="03/29/2016"
ms.author="singhkay"/>
# Escalado vertical de máquinas virtuales de Azure con Automatización de Azure
El escalado vertical es el proceso de aumentar o disminuir los recursos de una máquina como respuesta a la carga de trabajo. Para lograrlo en Azure, cambie el tamaño de la máquina virtual. Esto puede ser útil en los siguientes escenarios:
- Si la máquina virtual no se utiliza con frecuencia, puede disminuir su tamaño para reducir los costos mensuales.
- Si la máquina virtual experimenta una carga máxima, es posible ajustarla en un mayor tamaño para aumentar su capacidad.
Los pasos para lograr esto se describen a continuación:
1. Configurar la Automatización de Azure para tener acceso a las máquinas virtuales.
2. Importar los runbooks de escalado vertical de Automatización de Azure a la suscripción
3. Agregar un webhook al runbook.
4. Agregar una alerta a la máquina virtual.
> [AZURE.NOTE] Debido al tamaño de la primera máquina virtual, los tamaños a los que se puede escalar pueden estar limitados en virtud de la disponibilidad de los demás tamaños en el clúster donde actualmente está implementada la máquina virtual. En los runbooks de automatización publicados que se usan en este artículo nos hacemos cargo de esta situación y solo escalamos dentro de los siguientes pares de tamaños de máquina virtual. Esto significa que una máquina virtual Standard\_D1v2 no se aumentará de manera repentina a Standard\_G5 ni se reducirá a Basic\_A0.
>| Pares de escalado de tamaños de VM | |
|---|---|
| Basic\_A0 | Basic\_A4 |
| Standard\_A0 | Standard\_A4 |
| Standard\_A5 | Standard\_A7 |
| Standard\_A8 | Standard\_A9 |
| Standard\_A10 | Standard\_A11 |
| Standard\_D1 | Standard\_D4 |
| Standard\_D11 | Standard\_D14 |
| Standard\_DS1 | Standard\_DS4 |
| Standard\_DS11 | Standard\_DS14 |
| Standard\_D1v2 | Standard\_D5v2 |
| Standard\_D11v2 | Standard\_D14v2 |
| Standard\_G1 | Standard\_G5 |
| Standard\_GS1 | Standard\_GS5 |
## Configurar la Automatización de Azure para tener acceso a las máquinas virtuales.
Lo primero que debe hacer es crear una cuenta de Automatización de Azure que hospedará los runbooks que se usan para escalar una máquina virtual. Recientemente, el servicio de automatización presentó la característica "Cuenta de ejecución", que facilita la configuración de la entidad de servicio para ejecutar los Runbooks automáticamente en nombre de un usuario. Encontrará más información al respecto en el siguiente artículo:
* [Autenticación de Runbooks con una cuenta de ejecución de Azure](../automation/automation-sec-configure-azure-runas-account.md)
## Importar los runbooks de escalado vertical de Automatización de Azure a la suscripción
Los runbooks necesarios para el escalado vertical de la máquina virtual están publicados actualmente en la galería de runbooks de Automatización de Azure. Deberá importarlos a la suscripción. En el siguiente artículo, puede obtener información sobre cómo importar runbooks:
* [Galerías de runbooks y módulos para la automatización de Azure](../automation/automation-runbook-gallery.md)
En la imagen que aparece a continuación, se muestran los runbooks que es necesario importar:

## Agregar un webhook al runbook
Una vez que importe los runbooks, deberá agregar un webhook al runbook para que, de este modo, una alerta proveniente de una máquina virtual pueda desencadenarlo. A continuación, puede leer los detalles sobre cómo crear un webhook para el runbook:
* [Webhooks de Automatización de Azure ](../automation/automation-webhooks.md)
Asegúrese de copiar el webhook antes de cerrar el cuadro de diálogo de webhook, porque lo necesitará en la siguiente sección.
## Agregar una alerta a la máquina virtual
1. Seleccione la configuración de la máquina virtual.
2. Seleccione "Reglas de alerta".
3. Seleccione "Agregar alerta".
4. Seleccione una métrica según la cual activar la alerta.
5. Seleccione una condición que, cuando se cumpla, hará que se active la alerta.
6. Seleccione un umbral para cumplir la condición establecida en el paso 5.
7. Seleccione un período durante el cual el servicio de supervisión comprobará la condición y el umbral que se establecieron en los pasos 5 y 6.
8. Péguelo en el webhook que copió desde la sección anterior.


<!---HONumber=AcomDC_0824_2016--> | 56.967033 | 568 | 0.792631 | spa_Latn | 0.977315 |
43ceef3a83a228958fd21f449ec836448de7fb90 | 17 | md | Markdown | README.md | hamadafayad/pixelartmaker | 63d641bb81718483153f24c0937bf801cbb78b7c | [
"MIT"
] | null | null | null | README.md | hamadafayad/pixelartmaker | 63d641bb81718483153f24c0937bf801cbb78b7c | [
"MIT"
] | null | null | null | README.md | hamadafayad/pixelartmaker | 63d641bb81718483153f24c0937bf801cbb78b7c | [
"MIT"
] | null | null | null | # Pixel-Art-Maker | 17 | 17 | 0.764706 | deu_Latn | 0.558829 |
43cf267cb81276cf54d52f4049cbe7b22a9ec6a1 | 26,258 | md | Markdown | articles/sql-database/sql-database-elastic-jobs-getting-started.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | articles/sql-database/sql-database-elastic-jobs-getting-started.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | articles/sql-database/sql-database-elastic-jobs-getting-started.md | wreyesus/azure-content-eses-articles-app-service-web-app-service-web-staged-publishing-realworld-scenarios.md | addd81caca263120e230109b811593b939994ebb | [
"CC-BY-3.0"
] | null | null | null | <properties
pageTitle="Introducción a Trabajos de base de datos elástica"
description="usar trabajos de base de datos elástica"
services="sql-database"
documentationCenter=""
manager="jhubbard"
authors="ddove"/>
<tags
ms.service="sql-database"
ms.workload="sql-database"
ms.tgt_pltfrm="na"
ms.devlang="na"
ms.topic="article"
ms.date="09/06/2016"
ms.author="ddove" />
# Introducción a Trabajos de base de datos elástica
Trabajos de base de datos elástica (vista previa) para Base de datos SQL de Azure permite ejecutar de forma confiable scripts de T-SQL que abarcan varias bases de datos al tiempo que realizan reintentos automáticos y ofrecen garantías de finalización futura. Para obtener más información sobre la característica de base de datos elástica, vea la [página de introducción a la característica](sql-database-elastic-jobs-overview.md).
Este tema amplía el ejemplo que aparece en [Introducción a las herramientas de Elastic Database](sql-database-elastic-scale-get-started.md). Cuando termine, podrá: crear y administrar trabajos que administran un grupo de bases de datos relacionadas. No es necesario usar las herramientas de escalado elástico para aprovechar las ventajas de los trabajos elásticos.
## Requisitos previos
Descargue [Introducción al ejemplo de herramientas de base de datos elástica](sql-database-elastic-scale-get-started.md).
## Creación de un administrador de mapas de particiones con la aplicación de ejemplo
Aquí se creará un administrador de mapas de particiones junto con varias particiones, seguido de la inserción de datos en las particiones. Si ya dispone de particiones configuradas con datos particionados en ellas, puede omitir los pasos siguientes y pasar a la sección siguiente.
1. Cree y ejecute la aplicación de ejemplo **Introducción a las herramientas de base de datos elástica**. Siga los pasos hasta el paso 7 de la sección [Descarga y ejecución de la aplicación de ejemplo](sql-database-elastic-scale-get-started.md#Getting-started-with-elastic-database-tools). Al final del paso 7, verá la siguiente línea de comandos:
![símbolo del sistema][1]
2. En la ventana de comandos, escriba "1" y pulse **Entrar**. De esta forma, se creará el administrador de mapas de particiones y se agregarán dos particiones al servidor. A continuación, escriba "3" y pulse **Entrar**; repita esta acción cuatro veces. De esta forma, se insertan las filas de datos de ejemplo en sus particiones.
3. El [Portal de Azure](https://portal.azure.com) debe mostrar tres nuevas bases de datos en el servidor v12:
![Confirmación de Visual Studio][2]
En este punto, crearemos una colección de base de datos personalizada que refleja todas las bases de datos en el mapa de particiones. Esto nos permitirá crear y ejecutar un trabajo que agrega una nueva tabla entre las particiones.
Aquí normalmente se crearía un destino del mapa de particiones, con el cmdlet **New-AzureSqlJobTarget**. Se debe establecer la base de datos de administrador de mapa de particiones como destino de la base de datos y luego especificar el mapa de particiones específico como destino. En lugar de eso, vamos a enumerar todas las bases de datos del servidor y a agregar las bases de datos, excepto la base de datos maestra, a la nueva colección personalizada.
##Creación de una colección personalizada y agregar todas las bases de datos del servidor, excepto la maestra, al destino de la colección personalizada
$customCollectionName = "dbs_in_server"
New-AzureSqlJobTarget -CustomCollectionName $customCollectionName
$ResourceGroupName = "ddove_samples"
$ServerName = "samples"
$dbsinserver = Get-AzureRMSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $ServerName
$dbsinserver | %{
$currentdb = $_.DatabaseName
$ErrorActionPreference = "Stop"
Write-Output ""
Try
{
New-AzureSqlJobTarget -ServerName $ServerName -DatabaseName $currentdb | Write-Output
}
Catch
{
$ErrorMessage = $_.Exception.Message
$ErrorCategory = $_.CategoryInfo.Reason
if ($ErrorCategory -eq 'UniqueConstraintViolatedException')
{
Write-Host $currentdb "is already a database target."
}
else
{
throw $_
}
}
Try
{
if ($currentdb -eq "master")
{
Write-Host $currentdb "will not be added custom collection target" $CustomCollectionName "."
}
else
{
Add-AzureSqlJobChildTarget -CustomCollectionName $CustomCollectionName -ServerName $ServerName -DatabaseName $currentdb
Write-Host $currentdb "was added to" $CustomCollectionName "."
}
}
Catch
{
$ErrorMessage = $_.Exception.Message
$ErrorCategory = $_.CategoryInfo.Reason
if ($ErrorCategory -eq 'UniqueConstraintViolatedException')
{
Write-Host $currentdb "is already in the custom collection target" $CustomCollectionName"."
}
else
{
throw $_
}
}
$ErrorActionPreference = "Continue"
}
## Crear un script T-SQL para su ejecución transversal en las bases de datos
$scriptName = "NewTable"
$scriptCommandText = "
IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = 'Test')
BEGIN
CREATE TABLE Test(
TestId INT PRIMARY KEY IDENTITY,
InsertionTime DATETIME2
);
END
GO
INSERT INTO Test(InsertionTime) VALUES (sysutcdatetime());
GO"
$script = New-AzureSqlJobContent -ContentName $scriptName -CommandText $scriptCommandText
Write-Output $script
##Creación del trabajo para ejecutar un script en todo el grupo personalizado de bases de datos
$jobName = "create on server dbs"
$scriptName = "NewTable"
$customCollectionName = "dbs_in_server"
$credentialName = "ddove66"
$target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName
$job = New-AzureSqlJob -JobName $jobName -CredentialName $credentialName -ContentName $scriptName -TargetId $target.TargetId
Write-Output $job
##Ejecución del trabajo
Puede usar este script de PowerShell para ejecutar un trabajo existente:
Actualice la variable siguiente para que refleje el nombre del trabajo que se quiera ejecutar:
$jobName = "create on server dbs"
$jobExecution = Start-AzureSqlJobExecution -JobName $jobName
Write-Output $jobExecution
## Recuperación del estado de ejecución de un único trabajo
Use el mismo cmdlet **Get-AzureSqlJobExecution** con el parámetro **IncludeChildren** para ver el estado de ejecuciones de trabajos secundarios, es decir, el estado específico de cada ejecución en cada base de datos destino del trabajo.
$jobExecutionId = "{Job Execution Id}"
$jobExecutions = Get-AzureSqlJobExecution -JobExecutionId $jobExecutionId -IncludeChildren
Write-Output $jobExecutions
## Ver el estado de varias ejecuciones de trabajos
El cmdlet **Get-AzureSqlJobExecution** tiene varios parámetros opcionales que sirven para mostrar varias ejecuciones del trabajo, filtradas por los parámetros proporcionados. Aquí mostramos algunas de las posibles formas de usar Get-AzureSqlJobExecution:
Recuperar todas las ejecuciones de trabajos activos de nivel superior:
Get-AzureSqlJobExecution
Recuperar todas las ejecuciones de trabajos de nivel superior, incluidas las ejecuciones de trabajos inactivos:
Get-AzureSqlJobExecution -IncludeInactive
Recuperar todas las ejecuciones de trabajos secundarios de un identificador de ejecución de trabajos proporcionado, incluidas las ejecuciones de trabajos inactivos:
$parentJobExecutionId = "{Job Execution Id}"
Get-AzureSqlJobExecution -AzureSqlJobExecution -JobExecutionId $parentJobExecutionId –IncludeInactive -IncludeChildren
Recuperar todas las ejecuciones de trabajos creadas con una combinación de programación y trabajo, incluidos los trabajos inactivos:
$jobName = "{Job Name}"
$scheduleName = "{Schedule Name}"
Get-AzureSqlJobExecution -JobName $jobName -ScheduleName $scheduleName -IncludeInactive
Recuperar todos los trabajos que se destinan a un mapa de particiones especificado, incluidos los trabajos inactivos:
$shardMapServerName = "{Shard Map Server Name}"
$shardMapDatabaseName = "{Shard Map Database Name}"
$shardMapName = "{Shard Map Name}"
$target = Get-AzureSqlJobTarget -ShardMapManagerDatabaseName $shardMapDatabaseName -ShardMapManagerServerName $shardMapServerName -ShardMapName $shardMapName
Get-AzureSqlJobExecution -TargetId $target.TargetId –IncludeInactive
Recuperar todos los trabajos que se destinan a una colección personalizada especificada, incluidos los trabajos inactivos:
$customCollectionName = "{Custom Collection Name}"
$target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName
Get-AzureSqlJobExecution -TargetId $target.TargetId –IncludeInactive
Recuperar la lista de ejecuciones de tareas de trabajos dentro de la ejecución de un trabajo específico:
$jobExecutionId = "{Job Execution Id}"
$jobTaskExecutions = Get-AzureSqlJobTaskExecution -JobExecutionId $jobExecutionId
Write-Output $jobTaskExecutions
Recuperar los detalles de ejecución de tareas de trabajo:
Puede usar este script de PowerShell para ver los detalles de una ejecución de tareas de trabajo, lo que resulta especialmente útil al depurar errores de ejecución.
$jobTaskExecutionId = "{Job Task Execution Id}"
$jobTaskExecution = Get-AzureSqlJobTaskExecution -JobTaskExecutionId $jobTaskExecutionId
Write-Output $jobTaskExecution
## Recuperación de errores dentro de las ejecuciones de tareas de trabajo
El objeto JobTaskExecution incluye una propiedad para el ciclo de vida de la tarea y una propiedad de mensaje. Si no se realiza correctamente la ejecución de tareas de un trabajo, la propiedad de ciclo de vida se establecerá en *Failed* y la propiedad de mensaje se establecerá en el mensaje de excepción resultante y la pila. Si un trabajo no se realiza correctamente, es importante ver los detalles de las tareas de trabajo que no se realizaron correctamente en un trabajo determinado.
$jobExecutionId = "{Job Execution Id}"
$jobTaskExecutions = Get-AzureSqlJobTaskExecution -JobExecutionId $jobExecutionId
Foreach($jobTaskExecution in $jobTaskExecutions)
{
if($jobTaskExecution.Lifecycle -ne 'Succeeded')
{
Write-Output $jobTaskExecution
}
}
## Esperar a que se complete la ejecución de un trabajo
Este script de PowerShell sirve para esperar a que una tarea de trabajo se complete:
$jobExecutionId = "{Job Execution Id}"
Wait-AzureSqlJobExecution -JobExecutionId $jobExecutionId
## Crear una directiva de ejecución personalizada
Trabajos de base de datos elástica admite la creación de directivas de ejecución personalizadas que se pueden aplicar al iniciar trabajos.
Actualmente, las directivas de ejecución permiten definir:
* Nombre: identificador de la directiva de ejecución.
* Tiempo de espera del trabajo: tiempo total antes de que Trabajos de base de datos elástica cancele un trabajo.
* Intervalo de reintento inicial: intervalo de espera antes del primer reintento.
* Intervalo máximo de reintento: límite de intervalos de reintento que se usan.
* Coeficiente de retroceso de intervalo de reintento: coeficiente que se usa para calcular el siguiente intervalo entre reintentos. Se usa la siguiente fórmula: (intervalo de reintento inicial) * Math.pow ((coeficiente de retroceso de intervalo), (número de intentos de) - 2).
* Número máximo de intentos: número máximo de reintentos para llevar a cabo un trabajo.
La directiva de ejecución predeterminada usa los valores siguientes:
* Nombre: directiva de ejecución predeterminada
* Tiempo de espera del trabajo: 1 semana
* Intervalo de reintento inicial: 100 milisegundos
* Intervalo máximo de reintento: 30 minutos
* Coeficiente de intervalo de reintento: 2
* Número máximo de intentos: 2.147.483.647
Crear la directiva de ejecución que quiera:
$executionPolicyName = "{Execution Policy Name}"
$initialRetryInterval = New-TimeSpan -Seconds 10
$jobTimeout = New-TimeSpan -Minutes 30
$maximumAttempts = 999999
$maximumRetryInterval = New-TimeSpan -Minutes 1
$retryIntervalBackoffCoefficient = 1.5
$executionPolicy = New-AzureSqlJobExecutionPolicy -ExecutionPolicyName $executionPolicyName -InitialRetryInterval $initialRetryInterval -JobTimeout $jobTimeout -MaximumAttempts $maximumAttempts -MaximumRetryInterval $maximumRetryInterval -RetryIntervalBackoffCoefficient $retryIntervalBackoffCoefficient
Write-Output $executionPolicy
### Actualización de una directiva de ejecución personalizada
Actualizar la directiva de ejecución que se quiere actualizar:
$executionPolicyName = "{Execution Policy Name}"
$initialRetryInterval = New-TimeSpan -Seconds 15
$jobTimeout = New-TimeSpan -Minutes 30
$maximumAttempts = 999999
$maximumRetryInterval = New-TimeSpan -Minutes 1
$retryIntervalBackoffCoefficient = 1.5
$updatedExecutionPolicy = Set-AzureSqlJobExecutionPolicy -ExecutionPolicyName $executionPolicyName -InitialRetryInterval $initialRetryInterval -JobTimeout $jobTimeout -MaximumAttempts $maximumAttempts -MaximumRetryInterval $maximumRetryInterval -RetryIntervalBackoffCoefficient $retryIntervalBackoffCoefficient
Write-Output $updatedExecutionPolicy
## Cancelación de un trabajo
Trabajos de base de datos elástica admite solicitudes de cancelación de trabajos. Si Trabajos de base de datos elástica detecta una solicitud de cancelación para un trabajo que está ejecutándose en ese momento, intentará detener el trabajo.
Trabajos de base de datos elástica puede realizar una cancelación de dos formas distintas:
1. Cancelación de tareas actualmente en ejecución: si se detecta una cancelación mientras se ejecuta una tarea, se intentará cancelar el aspecto de la tarea que se está ejecutando actualmente. Por ejemplo: si hay una consulta de larga ejecución en curso en el momento en que se intenta realizar una cancelación, se intentará cancelar la consulta.
2. Cancelación de reintentos de tareas: si el subproceso de control detecta una cancelación antes de iniciar una tarea para su ejecución, evitará iniciar la tarea y declarará cancelada la solicitud.
Si se solicita una cancelación de trabajo para un trabajo primario, se respetará la solicitud de cancelación para el trabajo primario y todos los trabajos secundarios.
Para enviar una solicitud de cancelación, use el cmdlet **Stop-AzureSqlJobExecution** y establezca el parámetro **JobExecutionId**.
$jobExecutionId = "{Job Execution Id}"
Stop-AzureSqlJobExecution -JobExecutionId $jobExecutionId
## Eliminación de un trabajo por nombre y el historial de trabajos
Trabajos de base de datos elástica admite la eliminación asincrónica de trabajos. Un trabajo se puede marcar para su eliminación y el sistema eliminará el trabajo y su historial de trabajos una vez completadas todas las ejecuciones de trabajos para ese trabajo. El sistema no cancelará automáticamente las ejecuciones de trabajos activos.
En su lugar, se debe invocar Stop-AzureSqlJobExecution para cancelar las ejecuciones de trabajos activos.
Para desencadenar la eliminación del trabajo, use el cmdlet **Remove-AzureSqlJob** y establezca el parámetro **JobName**.
$jobName = "{Job Name}"
Remove-AzureSqlJob -JobName $jobName
## Creación de un destino de base de datos personalizada
En Trabajos de base de datos elástica, se pueden definir destinos de base de datos personalizada que sirven para su ejecución directa o para su inclusión en un grupo de base de datos personalizada. Puesto que los **grupos de bases de datos elásticas** todavía no se admiten directamente a través de las API de PowerShell, cree solo un destino de base de datos personalizada y un destino de la colección de bases de datos personalizada que englobe todas las bases de datos del grupo.
Establecimiento de las siguientes variables para que reflejen la información de base de datos que se quiera:
$databaseName = "{Database Name}"
$databaseServerName = "{Server Name}"
New-AzureSqlJobDatabaseTarget -DatabaseName $databaseName -ServerName $databaseServerName
## Creación de un destino de colección de bases de datos personalizada
Puede definirse un destino de colección de bases de datos personalizada para habilitar la ejecución transversal en varios destinos de base de datos definidos. Después de crear un grupo de bases de datos, las bases de datos se pueden asociar al destino de la colección personalizada.
Establecimiento de las siguientes variables para que reflejen la configuración de destino de la colección personalizada que se quiera:
$customCollectionName = "{Custom Database Collection Name}"
New-AzureSqlJobTarget -CustomCollectionName $customCollectionName
### Adición de bases de datos a un destino de colección de bases de datos personalizada
Los destinos de base de datos se pueden asociar a los destinos de colección de bases de datos personalizada para crear un grupo de bases de datos. Cada vez que se crea un trabajo que se destina a una colección de bases de datos personalizada, se expandirá para dirigirse a las bases de datos asociadas al grupo en el momento de ejecución.
Adición de la base de datos que se quiera a una colección personalizada específica:
$serverName = "{Database Server Name}"
$databaseName = "{Database Name}"
$customCollectionName = "{Custom Database Collection Name}"
Add-AzureSqlJobChildTarget -CustomCollectionName $customCollectionName -DatabaseName $databaseName -ServerName $databaseServerName
#### Revisión de las bases de datos incluidas en un destino de colección de bases de datos personalizada
Use el cmdlet **Get-AzureSqlJobTarget** para recuperar las bases de datos secundarias en un destino de la colección de bases de datos personalizada.
$customCollectionName = "{Custom Database Collection Name}"
$target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName
$childTargets = Get-AzureSqlJobTarget -ParentTargetId $target.TargetId
Write-Output $childTargets
### Creación de un trabajo para ejecutar un script transversalmente en un destino de colección de bases de datos personalizada
Use el cmdlet **New-AzureSqlJob** para crear un trabajo en un grupo de bases de datos definido por un destino de la colección de base de datos personalizada. Trabajos de base de datos elástica expande el trabajo en varios trabajos secundarios, cada uno correspondiente a una base de datos asociada al destino de la colección de bases de datos personalizada y garantiza que el script se ejecuta en cada una de las bases de datos. De nuevo, es importante que los scripts sean idempotentes para que sean resistentes a los reintentos.
$jobName = "{Job Name}"
$scriptName = "{Script Name}"
$customCollectionName = "{Custom Collection Name}"
$credentialName = "{Credential Name}"
$target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName
$job = New-AzureSqlJob -JobName $jobName -CredentialName $credentialName -ContentName $scriptName -TargetId $target.TargetId
Write-Output $job
## Recopilación de datos de una base de datos a otra
**Trabajos de base de datos elástica** es compatible con la ejecución de una consulta transversal en un grupo de bases de datos y envía los resultados a la tabla de la base de datos especificada. La tabla se puede consultar a posteriori para ver los resultados de la consulta de cada base de datos. Esto ofrece un mecanismo asincrónico para ejecutar una consulta transversalmente en varias bases de datos. Los casos de error, como que una de las bases de datos no esté disponible temporalmente, se controlan automáticamente a través de reintentos.
Se creará automáticamente la tabla de destino especificada si todavía no existe ninguna que coincida con el esquema del conjunto de resultados devuelto. Si la ejecución de un script devuelve varios conjuntos de resultados, Trabajos de base de datos elástica solo enviará el primero a la tabla de destino proporcionada.
El siguiente script de PowerShell sirve para ejecutar un script que recopile sus resultados en una tabla especificada. Este script presupone que se creó un script T-SQL que genera un único conjunto de resultados y se creó un destino de la colección de bases de datos personalizada.
Establecimiento de las siguientes opciones para que reflejen el script, las credenciales y el destino de ejecución que se quieran:
$jobName = "{Job Name}"
$scriptName = "{Script Name}"
$executionCredentialName = "{Execution Credential Name}"
$customCollectionName = "{Custom Collection Name}"
$destinationCredentialName = "{Destination Credential Name}"
$destinationServerName = "{Destination Server Name}"
$destinationDatabaseName = "{Destination Database Name}"
$destinationSchemaName = "{Destination Schema Name}"
$destinationTableName = "{Destination Table Name}"
$target = Get-AzureSqlJobTarget -CustomCollectionName $customCollectionName
### Creación e inicio de un trabajo en escenarios de recolección de datos
$job = New-AzureSqlJob -JobName $jobName -CredentialName $executionCredentialName -ContentName $scriptName -ResultSetDestinationServerName $destinationServerName -ResultSetDestinationDatabaseName $destinationDatabaseName -ResultSetDestinationSchemaName $destinationSchemaName -ResultSetDestinationTableName $destinationTableName -ResultSetDestinationCredentialName $destinationCredentialName -TargetId $target.TargetId
Write-Output $job
$jobExecution = Start-AzureSqlJobExecution -JobName $jobName
Write-Output $jobExecution
## Creación de una programación para la ejecución de trabajos con un desencadenador de trabajo
El siguiente script de PowerShell sirve para crear una programación recurrente. Este script usa un intervalo de un minuto, pero New-AzureSqlJobSchedule también admite los parámetros -DayInterval, -HourInterval, -MonthInterval y -WeekInterval. Se pueden crear programaciones que se ejecutan una sola vez pasando -OneTime.
Creación de una programación:
$scheduleName = "Every one minute"
$minuteInterval = 1
$startTime = (Get-Date).ToUniversalTime()
$schedule = New-AzureSqlJobSchedule -MinuteInterval $minuteInterval -ScheduleName $scheduleName -StartTime $startTime
Write-Output $schedule
### Creación de un desencadenador de trabajo para que un trabajo se ejecute según una programación de tiempo
Se puede definir un desencadenador de trabajo para que un trabajo se ejecute según una programación de tiempo. El siguiente script de PowerShell sirve para crear un desencadenador de trabajo.
Establecimiento de las siguientes variables para que se correspondan con el trabajo y la programación que se quiera:
$jobName = "{Job Name}"
$scheduleName = "{Schedule Name}"
$jobTrigger = New-AzureSqlJobTrigger -ScheduleName $scheduleName –JobName $jobName
Write-Output $jobTrigger
### Eliminación de una asociación programada para detener la ejecución de un trabajo en la programación
Para suspender la ejecución de un trabajo recurrente a través de un desencadenador de trabajo, se puede quitar el desencadenador de trabajo. Quite un desencadenador de trabajo para detener un trabajo que se ejecute según una programación con el cmdlet **Remove-AzureSqlJobTrigger**.
$jobName = "{Job Name}"
$scheduleName = "{Schedule Name}"
Remove-AzureSqlJobTrigger -ScheduleName $scheduleName -JobName $jobName
## Importación de los resultados de la consulta de base de datos elástica a Excel
Puede importar los resultados de una consulta a un archivo Excel.
1. Inicie Excel 2013.
2. Diríjase a la barra de herramientas **Datos**.
3. Haga clic en **Desde otros orígenes** y luego en **Desde SQL Server**.
![Importación de Excel desde otros orígenes][5]
4. En el **Asistente de conexión de datos**, escriba el nombre del servidor y las credenciales de inicio de sesión. A continuación, haga clic en **Siguiente**.
5. En el cuadro de diálogo **Seleccione la base de datos que contiene la información que desea**, seleccione la base de datos **ElasticDBQuery**.
6. Seleccione la tabla **Clientes** de la vista de lista y haga clic en **Siguiente**. Haga clic en **Finalizar**.
7. En el formulario **Importar datos**, en **Seleccione cómo desea ver estos datos en el libro**, seleccione **Tabla** y haga clic en **Aceptar**.
Todas las filas de la tabla **Clientes**, almacenadas en distintas particiones, completan la hoja de Excel.
## Pasos siguientes
Ahora puede usar las funciones de datos de Excel. Use la cadena de conexión con el nombre de servidor, el nombre de base de datos y las credenciales para conectar su BI y las herramientas de integración de datos a la base de datos de consulta elástica. Asegúrese de que SQL Server se admite como origen de datos para la herramienta. Consulte la base de datos de consulta elástica y las tablas externas como cualquier otra base de datos SQL Server y las tablas de SQL Server que quiera conectar con la herramienta.
### Coste
No hay ningún cargo adicional por usar la característica de consulta de base de datos elástica. Sin embargo, en este momento la característica está solo disponible en bases de datos premium como extremo, pero las particiones pueden ser de cualquier nivel de servicio.
Para obtener información sobre los precios, consulte [Detalles de precios de Base de datos SQL](https://azure.microsoft.com/pricing/details/sql-database/).
[AZURE.INCLUDE [elastic-scale-include](../../includes/elastic-scale-include.md)]
<!--Image references-->
[1]: ./media/sql-database-elastic-query-getting-started/cmd-prompt.png
[2]: ./media/sql-database-elastic-query-getting-started/portal.png
[3]: ./media/sql-database-elastic-query-getting-started/tiers.png
[4]: ./media/sql-database-elastic-query-getting-started/details.png
[5]: ./media/sql-database-elastic-query-getting-started/exel-sources.png
<!--anchors-->
<!---HONumber=AcomDC_0907_2016--> | 58.351111 | 548 | 0.776754 | spa_Latn | 0.956626 |
43cf6133e9db18eeb5398806ffd269435cfc3c69 | 47 | md | Markdown | README.md | pplonski/mercury-demo-notebooks | 6fd35af78283ea80f9bec3ddc7a815c6e3c17dbe | [
"MIT"
] | 3 | 2021-12-27T08:49:08.000Z | 2022-01-28T07:27:10.000Z | README.md | pplonski/mercury-demo-notebooks | 6fd35af78283ea80f9bec3ddc7a815c6e3c17dbe | [
"MIT"
] | null | null | null | README.md | pplonski/mercury-demo-notebooks | 6fd35af78283ea80f9bec3ddc7a815c6e3c17dbe | [
"MIT"
] | 1 | 2022-03-09T06:28:03.000Z | 2022-03-09T06:28:03.000Z | # mercury-demo-notebooks
Mercury Demo Notbooks
| 15.666667 | 24 | 0.829787 | pol_Latn | 0.641467 |
43cf814108da275eb64b4046fbacd38f23f0f66f | 39 | md | Markdown | README.md | obium/test001 | f3c68d872adf250022e169a45a70ee5e54733a22 | [
"MIT"
] | null | null | null | README.md | obium/test001 | f3c68d872adf250022e169a45a70ee5e54733a22 | [
"MIT"
] | null | null | null | README.md | obium/test001 | f3c68d872adf250022e169a45a70ee5e54733a22 | [
"MIT"
] | null | null | null | # test001
Test 001: PDO, Twig, Uploads
| 13 | 28 | 0.717949 | kor_Hang | 0.66225 |
43d01a571cf6a5b480e8a0b002ff038c6344f661 | 23 | md | Markdown | README.md | MTrac/Curvelet-Tranfsorm-v2 | 3f56b4808c3a0f3f7fba80f21954a1ccaa6a9ecc | [
"Apache-2.0"
] | null | null | null | README.md | MTrac/Curvelet-Tranfsorm-v2 | 3f56b4808c3a0f3f7fba80f21954a1ccaa6a9ecc | [
"Apache-2.0"
] | null | null | null | README.md | MTrac/Curvelet-Tranfsorm-v2 | 3f56b4808c3a0f3f7fba80f21954a1ccaa6a9ecc | [
"Apache-2.0"
] | null | null | null | # Curvelet-Tranfsorm-v2 | 23 | 23 | 0.826087 | deu_Latn | 0.550956 |
43d027f44de2477728aaf758c3dadc61cc557420 | 532 | md | Markdown | README.md | simula-vias/ai4eu-robotics | deb22aea7f92974f21fd4157f98383eeb65e1e0b | [
"Apache-2.0"
] | 1 | 2021-12-01T14:08:49.000Z | 2021-12-01T14:08:49.000Z | README.md | simula-vias/ai4eu-robotics | deb22aea7f92974f21fd4157f98383eeb65e1e0b | [
"Apache-2.0"
] | null | null | null | README.md | simula-vias/ai4eu-robotics | deb22aea7f92974f21fd4157f98383eeb65e1e0b | [
"Apache-2.0"
] | null | null | null | # AI4EU Robotics Pilot: Datasets & Models
The AI4EU Robotics pilot collected datasets of vibration sensor data for two robotics demonstrators: a pump demonstrator and a wrist demonstrator.
## Directory `dataset`
This repository contains a script to convert the full time-series datasets into more convenient formats, i.e. one sample per time-series, either in raw data or in frequency-space through FFT.
It also contains the recipes and scripts to create data brokers for the AI4EU Experiments platform.
## Directory `models`
| 40.923077 | 191 | 0.798872 | eng_Latn | 0.995647 |
43d02c9a8d949d8c771df04001639331c79f9270 | 2,791 | md | Markdown | CHANGELOG.md | e-babani/geocontrib | dca40cf81f64daad85a5dbac4ab3fbf80d6bbdf6 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | e-babani/geocontrib | dca40cf81f64daad85a5dbac4ab3fbf80d6bbdf6 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | e-babani/geocontrib | dca40cf81f64daad85a5dbac4ab3fbf80d6bbdf6 | [
"Apache-2.0"
] | null | null | null | # Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## Unreleased
### Fixed
- Display creator in feature and feature type
- Responsive issue when the layer is very long in the basemap form
- Add title in project, feature type and feature
- Access to the project when no project rank defined
## [1.1.0] - 2020-08-28
### Changed
- increase thickness of segments borders in the basemap project management form
### Fixed
- update addok geocoder provider url in leaflet-control-geocoder to fix a mixed content error on client side
- doest not reload flatpages.json if flatpages records exist in database
- fix incoherent ending h3 tags in flatpages.json
## [1.1.0-rc1] - 2020-08-19
### Added
- geOrchestra plugin: automatically associate role to users when the user database is synchronised (see
[geOrchestra plugin](plugin_georchestra/README.md))
- add a function to search for places and addresses in the interactive maps. This new feature comes with new settings:
`GEOCODER_PROVIDERS` and `SELECTED_GEOCODER`
- add a button to for feature creation in a feature detail page
- add a function in the Django admin page of the feature type for creating a SQL view
- add a `FAVICON_PATH` setting. Default favicon is the Neogeo Technologies logo
### Changed
- enable edition of a feature type if there is not yet any feature associated with it
- sort users list by last_name, first_name and then username
- change the label of the feature type field `title` in the front-end form (Titre -> Nom)
- change the data model for basemaps: one basemap may contain several layers. Layers are declared by GéoContrib
admin users. Basemaps are created by project admin users by selecting layers, ordering them and setting the opacity
of each of them. End users may switch from one basemap to another in the interactive maps. One user can change
the order of the layers and their opacity in the interactive maps. These personnal adjustments are stored in the
web browser of the user (local storage) and do not alter the basemaps as seen by other users.
- change default value for `LOGO_PATH` setting: Neogeo Technologie logo. This new image is located in the media
directory.
- change all visible names in front-end and docs from `Geocontrib` to `GéoContrib`
- set the leaflet background container to white
- increase the width of themap in feature_list.html
### Fixed
- fix tests on exif tag extraction
- fix serialisation of field archieved_on of features
- use https instead of http on link to sortable.js
- fix typos: basemaps management error message
- fix visibility of draft features
### Security
- upgrade Pillow from 6.2.2 to 7.2.0 (Python module used for exif tags extraction)
| 48.12069 | 118 | 0.779291 | eng_Latn | 0.998064 |
43d0b3fcaf46945796ae1fa158241d0fc40d14d0 | 889 | md | Markdown | _posts/2009-02-23-weekly-word-effusive.md | LearningNerd/learningnerd.github.io | c86159ae90cf004088d1b39359b607914b98b685 | [
"MIT"
] | 2 | 2015-05-02T06:19:59.000Z | 2017-06-22T03:18:05.000Z | _posts/2009-02-23-weekly-word-effusive.md | LearningNerd/learningnerd.github.io | c86159ae90cf004088d1b39359b607914b98b685 | [
"MIT"
] | null | null | null | _posts/2009-02-23-weekly-word-effusive.md | LearningNerd/learningnerd.github.io | c86159ae90cf004088d1b39359b607914b98b685 | [
"MIT"
] | 2 | 2016-01-17T10:00:16.000Z | 2018-10-16T03:30:30.000Z | ---
layout: article
title: "Weekly Word: Effusive"
date: 2009-02-23 19:31
cats: [english]
---
The adjective <em><a href="http://dictionary.reference.com/browse/effusive">effusive</a></em> means "overflowing", "extravagantly demonstrative", or "unrestrained or excessive in emotional expression". The verb form <em>to effuse</em> means "to pour out" or "to radiate".
Wikipedia, of course, has a page about <a href=" http://en.wikipedia.org/wiki/Effusion">effusion</a>. In chemistry, it's "the process where individual molecules flow through a hole without collisions between molecules" (think of a deflating balloon).
The most common use of <em>effusive</em> would probably be to describe "effusive praise", either from expert critics or from effusive fanboys. The word also comes up a lot in articles about the Academy Awards, what with all those gushy actors sobbing over their Oscars.
| 74.083333 | 271 | 0.76378 | eng_Latn | 0.990617 |
43d178bd8b9c4172df146c37a556cc9f171cae7b | 216 | md | Markdown | _posts/2022-03-01-01.md | venenl/venenl.github.io | 3483d12e02deac3122218e43b296b84516a9e978 | [
"MIT"
] | null | null | null | _posts/2022-03-01-01.md | venenl/venenl.github.io | 3483d12e02deac3122218e43b296b84516a9e978 | [
"MIT"
] | null | null | null | _posts/2022-03-01-01.md | venenl/venenl.github.io | 3483d12e02deac3122218e43b296b84516a9e978 | [
"MIT"
] | null | null | null | ---
layout: mypost
title: 分享一个自建RSSHUB
categories: [杂文]
---
网址:https://rsshubi.vercel.app/
用途自行百度 可以订阅任何网站RSS
用法与部署:RSSHub 是一个开源、简单易用、易于扩展的 RSS 生成器,可以给任何奇奇怪怪的内容生成 RSS 订阅源。RSSHub 借助于开源社区的力量快速发展中,目前已适配数百家网站的上千项内容
| 16.615385 | 100 | 0.782407 | yue_Hant | 0.902674 |
43d4dfc6fc9d31a720a6be69aa11c3bc726e274a | 25 | md | Markdown | README.md | techzon/First_Repo | 730a2da1819bfe442de282b917f387850c2ce148 | [
"Apache-2.0"
] | null | null | null | README.md | techzon/First_Repo | 730a2da1819bfe442de282b917f387850c2ce148 | [
"Apache-2.0"
] | null | null | null | README.md | techzon/First_Repo | 730a2da1819bfe442de282b917f387850c2ce148 | [
"Apache-2.0"
] | null | null | null | # First_Repo
Welcome All
| 8.333333 | 12 | 0.8 | kor_Hang | 0.575058 |
43d68432e98a372784fb8f30c06065244c873153 | 117 | md | Markdown | .modules/.Hash/README.md | termux-one/EasY_HaCk | 0a8d09ca4b126b027b6842e02fa0c29d8250e090 | [
"Apache-2.0"
] | 1,103 | 2018-04-20T14:08:11.000Z | 2022-03-29T06:22:43.000Z | .modules/.Hash/README.md | sshourya948/EasY_HaCk | 0a8d09ca4b126b027b6842e02fa0c29d8250e090 | [
"Apache-2.0"
] | 29 | 2019-04-03T14:52:38.000Z | 2022-03-24T12:33:05.000Z | .modules/.Hash/README.md | sshourya948/EasY_HaCk | 0a8d09ca4b126b027b6842e02fa0c29d8250e090 | [
"Apache-2.0"
] | 161 | 2018-04-20T15:57:12.000Z | 2022-03-15T19:16:16.000Z | # Hash
### Description
The Hash is a hash tool for hash a text and crack a hash
#### Requirements
• Python 2.7
| 9.75 | 56 | 0.666667 | eng_Latn | 0.995893 |
43d6d62a2cf9e0e1cbf9a41217ce3ad5eebb4b5e | 5,298 | md | Markdown | iis/extensions/database-manager-reference/idbviewmanager-createview-method-microsoft-web-management-databasemanager.md | mitchelsellers/iis-docs | 376c5b4a1b88b807eb8dbe7c63ba7cd9c59f7429 | [
"CC-BY-4.0",
"MIT"
] | 90 | 2017-06-13T19:56:04.000Z | 2022-03-15T16:42:09.000Z | iis/extensions/database-manager-reference/idbviewmanager-createview-method-microsoft-web-management-databasemanager.md | mitchelsellers/iis-docs | 376c5b4a1b88b807eb8dbe7c63ba7cd9c59f7429 | [
"CC-BY-4.0",
"MIT"
] | 453 | 2017-05-22T18:00:05.000Z | 2022-03-30T21:07:55.000Z | iis/extensions/database-manager-reference/idbviewmanager-createview-method-microsoft-web-management-databasemanager.md | mitchelsellers/iis-docs | 376c5b4a1b88b807eb8dbe7c63ba7cd9c59f7429 | [
"CC-BY-4.0",
"MIT"
] | 343 | 2017-05-26T08:57:30.000Z | 2022-03-25T23:05:04.000Z | ---
title: IDbViewManager.CreateView Method (Microsoft.Web.Management.DatabaseManager)
TOCTitle: CreateView Method
ms:assetid: M:Microsoft.Web.Management.DatabaseManager.IDbViewManager.CreateView(System.String,System.String,Microsoft.Web.Management.DatabaseManager.View)
ms:mtpsurl: https://msdn.microsoft.com/library/microsoft.web.management.databasemanager.idbviewmanager.createview(v=VS.90)
ms:contentKeyID: 20476744
ms.date: 05/02/2012
mtps_version: v=VS.90
f1_keywords:
- Microsoft.Web.Management.DatabaseManager.IDbViewManager.CreateView
dev_langs:
- csharp
- jscript
- vb
- cpp
api_location:
- Microsoft.Web.Management.DatabaseManager.dll
api_name:
- Microsoft.Web.Management.DatabaseManager.IDbViewManager.CreateView
api_type:
- Assembly
topic_type:
- apiref
product_family_name: VS
---
# IDbViewManager.CreateView Method
Creates a view within the database.
**Namespace:** [Microsoft.Web.Management.DatabaseManager](microsoft-web-management-databasemanager-namespace.md)
**Assembly:** Microsoft.Web.Management.DatabaseManager (in Microsoft.Web.Management.DatabaseManager.dll)
## Syntax
```vb
'Declaration
Sub CreateView ( _
connectionString As String, _
schema As String, _
view As View _
)
'Usage
Dim instance As IDbViewManager
Dim connectionString As String
Dim schema As String
Dim view As View
instance.CreateView(connectionString, _
schema, view)
```
```csharp
void CreateView(
string connectionString,
string schema,
View view
)
```
```cpp
void CreateView(
String^ connectionString,
String^ schema,
View^ view
)
```
```jscript
function CreateView(
connectionString : String,
schema : String,
view : View
)
```
### Parameters
- connectionString
Type: [System.String](https://msdn.microsoft.com/library/s1wwdcbf)
The connection string for the database.
<!-- end list -->
- schema
Type: [System.String](https://msdn.microsoft.com/library/s1wwdcbf)
The schema name for the view.
**Note** If schema is empty, the default schema name will be used.
<!-- end list -->
- view
Type: [Microsoft.Web.Management.DatabaseManager.View](view-class-microsoft-web-management-databasemanager.md)
The [View](view-class-microsoft-web-management-databasemanager.md) object for the view to create.
## Remarks
All database providers that implement the [IDbViewManager](idbviewmanager-interface-microsoft-web-management-databasemanager.md) interface must also implement the CreateView method, which the database manager will use to create a view in a database.
### Notes for Implementers
If your provider does not support creating views, you can use the following code sample to raise a not-implemented exception:
public void CreateView(string connectionString, string schema, View view)
{
throw new NotImplementedException();
}
> [!NOTE]
> See the [CREATE VIEW (Transact-SQL)](https://msdn.microsoft.com/library/ms187956.aspx) topic for more information about the CREATE VIEW SQL statement.
## Examples
The following code sample implements the CreateView method to add a view to a database in an OLEDB data source.
```vb
' Remove a view from the database.
Public Sub DropView( _
ByVal connectionString As String, _
ByVal schema As String, _
ByVal viewName As String) _
Implements Microsoft.Web.Management.DatabaseManager.IDbViewManager.DropView
' Create a new database connection.
Dim connection As OleDbConnection = New OleDbConnection(connectionString)
' Create the SQL for the CREATE VIEW statement.
Dim dropView As String = String.Format("DROP VIEW {0}", EscapeName(viewName))
' Create an OLEDB command object.
Dim command As OleDbCommand = New OleDbCommand(dropView, connection)
' Open the database connection.
connection.Open()
' Execute the SQL statement.
command.ExecuteNonQuery()
End Sub
```
```csharp
// Remove a view from the database.
public void DropView(string connectionString, string schema, string viewName)
{
// Create a new database connection.
using (OleDbConnection connection = new OleDbConnection(connectionString))
{
// Create the SQL for the CREATE VIEW statement.
string dropView = String.Format("DROP VIEW {0}", EscapeName(viewName));
// Create an OLEDB command object.
using (OleDbCommand command = new OleDbCommand(dropView, connection))
{
// Open the database connection.
connection.Open();
// Execute the SQL statement.
command.ExecuteNonQuery();
}
}
}
```
## Permissions
- Full trust for the immediate caller. This member cannot be used by partially trusted code. For more information, see [Using Libraries from Partially Trusted Code](https://msdn.microsoft.com/library/8skskf63).
## See Also
### Reference
[IDbViewManager Interface](idbviewmanager-interface-microsoft-web-management-databasemanager.md)
[Microsoft.Web.Management.DatabaseManager Namespace](microsoft-web-management-databasemanager-namespace.md)
| 29.932203 | 249 | 0.710079 | eng_Latn | 0.407033 |
43d799761dbdb1fdcd584bf4aff564f6a362edde | 9,366 | md | Markdown | _posts/2020-09-06-apache-kafka-102.md | shingkid/shingkid.github.io | a5b7da7a9a3b11c92ac53a590f07299ebdb58291 | [
"MIT"
] | null | null | null | _posts/2020-09-06-apache-kafka-102.md | shingkid/shingkid.github.io | a5b7da7a9a3b11c92ac53a590f07299ebdb58291 | [
"MIT"
] | 1 | 2020-06-28T14:25:09.000Z | 2020-06-28T14:25:09.000Z | _posts/2020-09-06-apache-kafka-102.md | shingkid/shingkid.github.io | a5b7da7a9a3b11c92ac53a590f07299ebdb58291 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Apache Kafka 102"
date: 2020-09-06 11:47:22 +0800
categories: Kafka Java
---
This is a continuation of [Apache Kafka 101](https://shingkid.github.io/kafka/cli/2020/08/30/apache-kafka-101.html). If you're completely new to Kafka, you should start there. This article introduces Java programming for Kafka.
## Creating a Kafka Project
## Advanced Configurations
### acks & min.insync.replicas
1. `acks=0` (no acks)
* No response is requested
* If the broker goes offline or an exception happens, we won't know and will lose data
2. `acks=1` (leader acks)
* Leader response is requested, but replication is not a gurantee (happens in the background)
* If an ack is not received, the producer may retry
* If the leader broker goes offline but replicas haven't replicated the data yet, we have a data loss.
3. `acks=all` (replicas acks)
* Leader + Replicas acks requested
* `acks=all` must be used in conjunction with `min.insync.replicas`.
* `min.insync.replicas` can be set at the broker or topic level (override).
* `min.insync.replicas=2` implies that at least 2 brokers that are ISR (including leader) must respond that they have the data.
* If you use `replication.factor=3`, `min.insync=2`, `acks=all`, you can only tolerate 1 broker going down, otherwise the producer will receive an exception on send.
### retries & max.in.flight.requests.per.connection
* In case of transient failures, developers are expected to handle exceptions, otherwise the data will be lost.
* E.g. NotEnoughReplicasException
* There is a `retries` setting:
* defaults to 0
* can be increased to a high number, e.g. `Integer.MAX_VALUE`
* In case of *retries*, by default **there is a chance that messages will be sent out of order** (if a batch has failed to be sent)
* If you rely on key-based ordering, that can be an issue.
* For this, you can control how many produce request can be made in parallel: `max.in.flight.requests.per.connection`
* Default: 5
* Can be set to 1 if you need to ensure ordering (may impact throughput)
* A better solution in Kafka>=1.0.0 is Idempotent Producer.
#### Idempotent Producer
* Problem: When a producer sends a message to Kafka, Kafka might commit the message and attempt to send an ack back. However, due to a network error, the producer does not receive Kafka's ack. So the producer sends the message again, and Kafka commits the same message a second time.
* An idempotent producer can detect duplicates so the same message is not committed twice.
* Idempotent producers are great to guarantee a stable and safe pipeline
* Parameters to set
* `retires=Integer.MAX_VALUE` (2^31-1 = 2147483647)
* `max.in.flight.requests=1` (Kafka >= 0.11 & <1.1) or `max.in.flight.requests=5` (Kafka >= 1.1, higher performance)
* `acks=all`
* Just set `producerProps.put("enable.idempotence", true);`
```Java
public KafkaProducer<String, String> createKafkaProducer() {
...
// Create safe producer
properties.setProperty(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
properties.setProperty(ProducerConfig.ACKS_CONFIG, "all");
properties.setProperty(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
properties.setProperty(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
...
}
```
### Safe Producer
* Kafka < 0.11
* `acks=all` (producer level)
* Ensures data is properly replicated before an ack is received
* `min.insync.replicas=2` (broker/topic level)
* Ensures two brokers in ISR at least have the data after an ack
* `retries=MAX_INT` (producer level)
* Ensures transient errors are retried indefinitely
* `max.in.flight.requests.per.connection=1` (producer level)
* Ensures only one request is tried at any time, preventing message re-ordering in case of retries
* Kafka >= 0.11
* `enable.idempotence=true` (producer level) + `min.insync.replicas=2` (broker/topic level)
* Implies `acks=all`, `retries=MAX_INT`, `max.in.flight.requests.per.connection=5` (default)
* While guranteeing order and improving performance
* Running a **safe producer** might impact throughput and latency, so always consider your use case.
### Producer/Message Compression
* Important to apply compression to the producer because it usually sends data that is text-based, e.g. JSON data
* Compression is enabled at the Producer level and doesn't require any configuration change in the Brokers or in the Consumers.
* `compression.type` can be `none` (default), `gzip`, `lz4`, `snappy`
* Compression is more effective the bigger the batch of messages being sent to Kafka!
* Benchmarks: https://blog.cloudflare.com/squeezing-the-firehose/
* The compressed batch has the following advantages:
* Much smaller producer request size (compression ratio up to 4x!)
* Faster to transfer data over the network => less latency
* Better throughput
* Better disk utilisation in Kafka (stored messages on disk are smaller)
* Disadvantages (very minor):
* Producers must commit some CPU cycles to compression
* Consumers must commit some CPU cylces to decompression
* Overall:
* Consider testing snappy or lz4 for optimal speed / compression ratio
* Recommendations
* Find a compression algorithm that gives you the best performance for your specific data. Test all of them!
* Always usecompression in production and especially if you have high throughput
* Consider tweaking `linger.ms` and `batch.size` to have bigger batches, and therefore more compression and higher throughput
### Producer Batching
By default, Kafka tries to send records as soon as possible. It will have up to 5 requests in flight, meaning up to 5 messages are sent together at a time. Afterwards, if more messages have to be sent while others are in flight, Kafka is smart and will start batching them while they wait to send them all at once.
This smart batching allows Kafka to increase throughput while maintaining very low latency. Batches have higher compression ratio and so better efficiency
`linger.ms`: Number of milliseconds a producer is willing to wait before sending a batch out. (default 0)
* By introducing some lag (e.g. `linger.ms=5`), we increase the chances of messages being sent together in a batch
* So at the expense of introducing a small delay, we can increase throughput, compression and efficiency of our producer.
* If a batch is full (see `batch.size`) before the ned of the `linger.ms` period, it will be sent to Kafka right away.
`batch.size`: Maximum number of bytes that will be included in a batch. (default 16KB)
* Increasing a batch size to 32KB or 64KB can help increas compression, throughput, and efficiency of requests.
* Any message that is bigger than the batch size will not be batched
* A batch is allocated per partition, so make sure that you don't set it to a number that's too high, otherwise you'll exceed or waste memory.
**NOTE:** You can monitor the average batch size metric using Kafka Producer Metrics.
### High Throughput Producer
Let's add *snappy* message compression in our producer. *snappy* is very helpful if your messages are text based (e.g. log lines or JSON documents). *snappy* has a good balance of CPU / compression ratio. We'll also increase the `batch.size` to 32KB and introduce a small delay through `linger.ms` (20ms).
```Java
public KafkaProducer<String, String> createKafkaProducer() {
...
// High throughput producer at the expense of a bit of latency and CPU usage
properties.setProperty(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy");
properties.setProperty(ProducerConfig.LINGER_MS_CONFIG, "20");
properties.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32*1024));
...
}
```
#### Producer Default Partitioner
By default, keys are hashed using the "murmur1" algorithm. It is best not to override the behaviour of the partitioner, but it is possible to do so (`paritioner.class`).
```Java
// Formula
targetPartition = Utils.abs(Utils.murmur2(record.key())) % numPartitions;
```
This means that the same key will go to the same partition, and adding partitions to a topic will completely alter the formula.
### max.block.ms & buffer memory
If the buffer is full (all 32MB), then the `.send()` method will start to block (won't return right away).
`max.block.ms=60000` is the time the `.send()` will block until throwing an exception. Exceptions are basically thrown when:
1. The producer has filled up its buffer
2. The broker is not accepting any new data
3. 60 seconds has elapsed
If you hit an exception, that usually means your brokers are down or overloaded as they can't respnd to requests.
### Delivery Semantics for Consumers
* At most once: offsets are committed as soon as the message batch is received. If the processing goes wrong, the message will be lost (it won't be read again).
* At least once: offsets are committed after the message is processed. If the processing goes wrong, the message will be read again. This can result in duplicate processing of messages. Make sure your processing is **idempotent** (i.e. processing again the messages won't impact your systems)
* Exactly once: Can be achieved for Kafka-to-Kafka workflows using Kafka Streams API. For Kafka-to-Sink workflows, use an idempotent consumer.
**NOTE:** For most applications you should use **at least once processing**. | 58.5375 | 314 | 0.757634 | eng_Latn | 0.996177 |
43d7bbbe4dbea66db9703cb03ebefa4bae225102 | 123 | md | Markdown | HelicopterMission/README.md | somePythonProgrammer/WhiteHatProjects | 27c570e3e07a749f332ab9672c708aae4f1a9ea5 | [
"MIT"
] | 2 | 2021-11-25T17:14:56.000Z | 2021-11-25T17:22:46.000Z | HelicopterMission/README.md | somePythonProgrammer/WhiteHatProjects | 27c570e3e07a749f332ab9672c708aae4f1a9ea5 | [
"MIT"
] | 2 | 2022-02-27T22:17:23.000Z | 2022-02-27T22:17:23.000Z | HelicopterMission/README.md | somePythonProgrammer/WhiteHatProjects | 27c570e3e07a749f332ab9672c708aae4f1a9ea5 | [
"MIT"
] | null | null | null | # helicopterMission

| 41 | 102 | 0.837398 | yue_Hant | 0.295956 |
43d915beeef216e2886074d0dbb6a70e24262c33 | 490 | md | Markdown | 2018/11/1.2.md | PeterXiao/blog | 5359b41f5f6892f80642c56f7879212fd3359834 | [
"Apache-2.0"
] | 37 | 2019-03-25T09:40:48.000Z | 2019-11-03T10:12:33.000Z | 2018/11/1.2.md | PeterXiao/blog | 5359b41f5f6892f80642c56f7879212fd3359834 | [
"Apache-2.0"
] | null | null | null | 2018/11/1.2.md | PeterXiao/blog | 5359b41f5f6892f80642c56f7879212fd3359834 | [
"Apache-2.0"
] | 23 | 2018-11-21T06:11:18.000Z | 2019-11-11T02:56:31.000Z | #什么是面向对象?
1.对象包含数据和行为
*Design Patterns: Elements of Reusable Object-Oriented Software 这本书被俗称为 The Gang of Four book。
2.封装隐藏了实现细节
*另一个通常与面向对象编程相关的方面是 封装(encapsulation)的思想
3.继承,作为类型系统与代码共享
继承(Inheritance)是一个很多编程语言都提供的机制,一个对象可以定义为继承另一个对象的定义,这使其可以获得父对象的数据和行为,而无需重新定义。
如果一个语言必须有继承才能被称为面向对象语言的话,那么 Rust 就不是面向对象的。无法定义一个结构体继承父结构体的成员和方法。然而,如果你过去常常在你的编程工具箱使用继承,根据你最初考虑继承的原因,Rust 也提供了其他的解决方案。
第一个是为了重用代码
第二个使用继承的原因与类型系统有关:表现为子类型可以用于父类型被使用的地方。这也被称为 多态(polymorphism),这意味着如果多种对象共享特定的属性,则可以相互替代使用 | 25.789474 | 117 | 0.863265 | yue_Hant | 0.601291 |
43d94361955556aa177e1f83a3a29309bad8d311 | 7,684 | md | Markdown | articles/cosmos-db/how-to-query-container.md | kalleantero/azure-docs | 50585080657eabcecdb7fa01536ca5f5aecdc600 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-11-22T02:45:17.000Z | 2022-03-22T07:08:33.000Z | articles/cosmos-db/how-to-query-container.md | kalleantero/azure-docs | 50585080657eabcecdb7fa01536ca5f5aecdc600 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cosmos-db/how-to-query-container.md | kalleantero/azure-docs | 50585080657eabcecdb7fa01536ca5f5aecdc600 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-01-09T16:27:39.000Z | 2021-01-09T16:27:39.000Z | ---
title: Query containers in Azure Cosmos DB
description: Learn how to query containers in Azure Cosmos DB using in-partition and cross-partition queries
author: markjbrown
ms.service: cosmos-db
ms.subservice: cosmosdb-sql
ms.topic: how-to
ms.date: 3/18/2019
ms.author: mjbrown
---
# Query an Azure Cosmos container
[!INCLUDE[appliesto-sql-api](includes/appliesto-sql-api.md)]
This article explains how to query a container (collection, graph, or table) in Azure Cosmos DB. In particular, it covers how in-partition and cross-partition queries work in Azure Cosmos DB.
## In-partition query
When you query data from containers, if the query has a partition key filter specified, Azure Cosmos DB automatically optimizes the query. It routes the query to the [physical partitions](partitioning-overview.md#physical-partitions) corresponding to the partition key values specified in the filter.
For example, consider the below query with an equality filter on `DeviceId`. If we run this query on a container partitioned on `DeviceId`, this query will filter to a single physical partition.
```sql
SELECT * FROM c WHERE c.DeviceId = 'XMS-0001'
```
As with the earlier example, this query will also filter to a single partition. Adding the additional filter on `Location` does not change this:
```sql
SELECT * FROM c WHERE c.DeviceId = 'XMS-0001' AND c.Location = 'Seattle'
```
Here's a query that has a range filter on the partition key and won't be scoped to a single physical partition. In order to be an in-partition query, the query must have an equality filter that includes the partition key:
```sql
SELECT * FROM c WHERE c.DeviceId > 'XMS-0001'
```
## Cross-partition query
The following query doesn't have a filter on the partition key (`DeviceId`). Therefore, it must fan-out to all physical partitions where it is run against each partition's index:
```sql
SELECT * FROM c WHERE c.Location = 'Seattle`
```
Each physical partition has its own index. Therefore, when you run a cross-partition query on a container, you are effectively running one query *per* physical partition. Azure Cosmos DB will automatically aggregate results across different physical partitions.
The indexes in different physical partitions are independent from one another. There is no global index in Azure Cosmos DB.
## Parallel cross-partition query
The Azure Cosmos DB SDKs 1.9.0 and later support parallel query execution options. Parallel cross-partition queries allow you to perform low latency, cross-partition queries.
You can manage parallel query execution by tuning the following parameters:
- **MaxConcurrency**: Sets the maximum number of simultaneous network connections to the container's partitions. If you set this property to `-1`, the SDK manages the degree of parallelism. If the `MaxConcurrency` set to `0`, there is a single network connection to the container's partitions.
- **MaxBufferedItemCount**: Trades query latency versus client-side memory utilization. If this option is omitted or to set to -1, the SDK manages the number of items buffered during parallel query execution.
Because of the Azure Cosmos DB's ability to parallelize cross-partition queries, query latency will generally scale well as the system adds [physical partitions](partitioning-overview.md#physical-partitions). However, RU charge will increase significantly as the total number of physical partitions increases.
When you run a cross-partition query, you are essentially doing a separate query per individual physical partition. While cross-partition queries queries will use the index, if available, they are still not nearly as efficient as in-partition queries.
## Useful example
Here's an analogy to better understand cross-partition queries:
Let's imagine you are a delivery driver that has to deliver packages to different apartment complexes. Each apartment complex has a list on the premises that has all of the resident's unit numbers. We can compare each apartment complex to a physical partition and each list to the physical partition's index.
We can compare in-partition and cross-partition queries using this example:
### In-partition query
If the delivery driver knows the correct apartment complex (physical partition), then they can immediately drive to the correct building. The driver can check the apartment complex's list of the resident's unit numbers (the index) and quickly deliver the appropriate packages. In this case, the driver does not waste any time or effort driving to an apartment complex to check and see if any package recipients live there.
### Cross-partition query (fan-out)
If the delivery driver does not know the correct apartment complex (physical partition), they'll need to drive to every single apartment building and check the list with all of the resident's unit numbers (the index). Once they arrive at each apartment complex, they'll still be able to use the list of the addresses of each resident. However, they will need to check every apartment complex's list, whether any package recipients live there or not. This is how cross-partition queries work. While they can use the index (don't need to knock on every single door), they must separately check the index for every physical partition.
### Cross-partition query (scoped to only a few physical partitions)
If the delivery driver knows that all package recipients live within a certain few apartment complexes, they won't need to drive to every single one. While driving to a few apartment complexes will still require more work than visiting just a single building, the delivery driver still saves significant time and effort. If a query has the partition key in its filter with the `IN` keyword, it will only check the relevant physical partition's indexes for data.
## Avoiding cross-partition queries
For most containers, it's inevitable that you will have some cross-partition queries. Having some cross-partition queries is ok! Nearly all query operations are supported across partitions (both logical partition keys and physical partitions). Azure Cosmos DB also has many optimizations in the query engine and client SDKs to parallelize query execution across physical partitions.
For most read-heavy scenarios, we recommend simply selecting the most common property in your query filters. You should also make sure your partition key adheres to other [partition key selection best practices](partitioning-overview.md#choose-partitionkey).
Avoiding cross-partition queries typically only matters with large containers. You are charged a minimum of about 2.5 RU's each time you check a physical partition's index for results, even if no items in the physical partition match the query's filter. As such, if you have only one (or just a few) physical partitions, cross-partition queries will not consume significantly more RU's than in-partition queries.
The number of physical partitions is tied to the amount of provisioned RU's. Each physical partition allows for up to 10,000 provisioned RU's and can store up to 50 GB of data. Azure Cosmos DB will automatically manage physical partitions for you. The number of physical partitions in your container is dependent on your provisioned throughput and consumed storage.
You should try to avoid cross-partition queries if your workload meets the criteria below:
- You plan to have over 30,000 RU's provisioned
- You plan to store over 100 GB of data
## Next steps
See the following articles to learn about partitioning in Azure Cosmos DB:
- [Partitioning in Azure Cosmos DB](partitioning-overview.md)
- [Synthetic partition keys in Azure Cosmos DB](synthetic-partition-keys.md)
| 73.180952 | 631 | 0.795679 | eng_Latn | 0.998743 |
43d9db1108b1223f73e4115e6e652ed7208245a1 | 512 | md | Markdown | README.md | sakura-flutter/anime4k-web | b0e166ed21f1f2fc18cf0ce8248235de5667f5bc | [
"MIT"
] | 1 | 2019-10-05T21:14:19.000Z | 2019-10-05T21:14:19.000Z | README.md | Git-User-cherryBlossom/anime4k-web | b0e166ed21f1f2fc18cf0ce8248235de5667f5bc | [
"MIT"
] | null | null | null | README.md | Git-User-cherryBlossom/anime4k-web | b0e166ed21f1f2fc18cf0ce8248235de5667f5bc | [
"MIT"
] | null | null | null | # anime4k-web
[preview](https://sakura-flutter.github.io/anime4k-web/)
thinks:
##### https://github.com/bloc97/Anime4K
##### https://github.com/mobooru/Anime4K
## Project setup
```
npm install
```
### Compiles and hot-reloads for development
```
npm run serve
```
### Compiles and minifies for production
```
npm run build
```
### Run your tests
```
npm run test
```
### Lints and fixes files
```
npm run lint
```
### Customize configuration
See [Configuration Reference](https://cli.vuejs.org/config/).
| 14.222222 | 61 | 0.671875 | eng_Latn | 0.315527 |
43daee58b04a6a7139ce399c5571e59472c066da | 4,307 | md | Markdown | README.md | mbuet2ner/local-global-features-cnn | b2baef44f5b247a18501a70a106b433539b2b4c9 | [
"MIT"
] | 1 | 2020-08-17T15:29:34.000Z | 2020-08-17T15:29:34.000Z | README.md | mbuet2ner/local-global-features-cnn | b2baef44f5b247a18501a70a106b433539b2b4c9 | [
"MIT"
] | null | null | null | README.md | mbuet2ner/local-global-features-cnn | b2baef44f5b247a18501a70a106b433539b2b4c9 | [
"MIT"
] | null | null | null | # Local Versus Global Features in Convolutional Neural Networks
Interactive notebooks and python code for my Master's Thesis: "The Role of Local Versus Global Features in Convolutional Neural Networks: A Comparison of State of the Art Architectures" at the [University of Bamberg](https://www.uni-bamberg.de/en/).
See [this blog post](https://www.maltebuettner.com/post/master/) for a brief summary of the thesis.

## Abstract
The current state of the art Convolutional Neural Networks (CNNs) were shown to oper-
ate mostly by means of texture bias, whereas humans operate mostly on a shape bias. While
the ImageNet challenge led to several milestones in the advancement of the field, recent
publications question the suitability of ImageNet as a single criterion for evaluating a net-
work’s performance. Additionally, since local features are sufficient to solve ImageNet
comparatively well, ImageNet may be too "easy". In "ImageNet-trained CNNs are biased
towards texture; increasing shape bias improves accuracy and robustness" [Geierhos et
al.](https://arxiv.org/abs/1811.12231) [1] introduce stylized training – an augmentation
technique that aims to constrain a network to learn shapes –
in an effort to closer align CNNs with human vision.
Geierhos et al. report close to human shape bias in their experiments. Amongst others, however,
the small influence of stylized training on ImageNet-A, a dataset that should be solvable
with a distinctive shape bias, calls these claims into question.
This thesis aims to take a look inside the black box of a stylized trained ResNet-50, to test
whether prerequisites for a shape bias are satisfied and if the network relies on shape re-
gion in its classification process. Experiments show that a ResNet-50 trained on Stylized-
ImageNet is only in some cases superior to a ResNet-50 trained on standard ImageNet
regarding the prerequisites and that architectural refinements such as SE-ResNeXts might
be a more promising alternative.
### Usage
Due to the complex nature of the experiments, each step has its own notebbok.
[](https://colab.research.google.com/github/mbuet2ner/local-global-features-cnn/)
[](https://mybinder.org/v2/gh/mbuet2ner/local-global-features-cnn/master)
1. In [1_imagenet_boundingboxes](1_imagenet_boundingboxes.ipynb) a one easy to work with file with ImageNet's bounding boxes is created.
2. [2_imagenette_boundingboxes](2_imagenette_boundingboxes.ipynb) adapts these bounding boxes to Imagenette and calculates the correct sizes of the boxes when resized to `256` and then center cropped to `224`.
3. [3_canny_edges.ipynb](3_canny_edges.ipynb) creates a file that contains Canny edge coordinates for Imagenette using a custom `auto-canny` function. A reduced file with coordinates that lie within the respective bounding box is also created. These Canny edges later serve as a proxy for shape.
4. In [4_create_datasets](4_create_datasets.ipynb) the "smoothed", "scrambled", and "patched" datasets are created from Imagenette input files.
5. The fifth step is concerned with the actual experiments and broken down in multiple sub-steps. First, [5_run_experiments](5_run_experiments.ipynb) runs the experiments on the created datasets. This does, however, not include the LRP and Canny edge experiments.
* 5.1. For the aforementioned experiments to work, the PyTorch models need to be converted to Keras using [5.1_pytorch2keras](5.1_pytorch2keras.ipynb).
* 5.2. In [5.2_lrp_regions](5.2_lrp_regions.ipynb) the LRP regions are calculated for both the Stylenet and the standard ResNet-50.
* 5.3. [5.4_lrp_canny](5.4_lrp_canny.ipynb) combines the LRP coordinates with the Canny edge coordinates to check for overlaps.
6. [6_experiment_analysis](6_experiment_analysis.ipynb) contains the general analysis of the experiments and the accompanying plots.
### Requirements
under construction
## References
[1] Geirhos, Robert, et al. "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness." arXiv preprint arXiv:1811.12231 (2018).
| 82.826923 | 295 | 0.801254 | eng_Latn | 0.987154 |
43daf7a83f693ed87e84e92f30b3ab351883d30b | 58,510 | md | Markdown | _posts/2017-04-25-e8bdace58f91e5afb9e4b88de8b5b7efbc8ce68891e4bbace4b88de883bde7bb99e4babae5a4a7e5908ee58ba4e5b7a5e58f8be58a9ee5a49ce6a0a1e4ba86.md | epoch-pioneer/sdxf | 8bb50f04e4c54094ba68ba1f81f9662a380c71f6 | [
"Unlicense"
] | null | null | null | _posts/2017-04-25-e8bdace58f91e5afb9e4b88de8b5b7efbc8ce68891e4bbace4b88de883bde7bb99e4babae5a4a7e5908ee58ba4e5b7a5e58f8be58a9ee5a49ce6a0a1e4ba86.md | epoch-pioneer/sdxf | 8bb50f04e4c54094ba68ba1f81f9662a380c71f6 | [
"Unlicense"
] | null | null | null | _posts/2017-04-25-e8bdace58f91e5afb9e4b88de8b5b7efbc8ce68891e4bbace4b88de883bde7bb99e4babae5a4a7e5908ee58ba4e5b7a5e58f8be58a9ee5a49ce6a0a1e4ba86.md | epoch-pioneer/sdxf | 8bb50f04e4c54094ba68ba1f81f9662a380c71f6 | [
"Unlicense"
] | null | null | null | ---
ID: 3408
post_title: >
对不起,我们不能给人大后勤工友办夜校了
author: pioneer
post_excerpt: ""
layout: post
permalink: https://sdxf28.gq/archives/3408
published: true
post_date: 2017-04-25 00:00:00
---
<div class="bpp-post-content"><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="box-sizing: border-box; background-color: #ffffff;"><section style="box-sizing: border-box;"><section style="margin: 15px 0%; transform: translate3d(0px, 0px, 0px); -webkit-transform: translate3d(0px, 0px, 0px); -moz-transform: translate3d(0px, 0px, 0px); -o-transform: translate3d(0px, 0px, 0px); box-sizing: border-box;"><section style="display: inline-block; vertical-align: top; width: 15%; padding-right: 1px; padding-left: 1px; box-sizing: border-box;"><section style="box-sizing: border-box;"><section style="margin-top: 10px; margin-bottom: 10px; text-align: center; transform: translate3d(0px, 0px, 0px); -webkit-transform: translate3d(0px, 0px, 0px); -moz-transform: translate3d(0px, 0px, 0px); -o-transform: translate3d(0px, 0px, 0px); box-sizing: border-box;"><section style="display: inline-block; vertical-align: top; border-left-color: #a0a0a0; border-right-color: #a0a0a0; padding-right: 5px; padding-left: 5px; border-left-width: 1px; border-left-style: solid; border-right-width: 1px; border-right-style: solid; color: #a0a0a0; box-sizing: border-box;">
<p style="font-size: 0px; line-height: 0; min-height: 0px; box-sizing: border-box;"></p>
</section></section></section></section><section style="display: inline-block; vertical-align: top; width: 85%; box-sizing: border-box;"><section style="box-sizing: border-box;"><section style="transform: translate3d(0px, 0px, 0px); -webkit-transform: translate3d(0px, 0px, 0px); -moz-transform: translate3d(0px, 0px, 0px); -o-transform: translate3d(0px, 0px, 0px); box-sizing: border-box;"><section style="text-align: justify; color: #61b6cc; box-sizing: border-box;">人大新光平民发展协会的夜校,致力于向后勤工人推广平民教育。曾经,人大夜校办得红红火火,在各大高校志愿类社团中独树一帜。</section></section></section></section></section></section></section>但现在,繁重的工作使后勤员工只能去生存线上挣扎,没有时间去提高自己的文化水平。夜校不得不关闭了。
人民的大学,却给予人民这种待遇,这让人大的青年感到困惑。和很多面对现实的文章一样,阅读量达到一万多时,文章消失了。我们把文章重新发出来,想正告某些人,人民的声音,不是那么容易就可以被封杀的!
</section></section></section>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">从上学期到现在,新光夜校经历了很长时间的低谷。面对这段时期,新光人有过迷茫、怀疑、失落、愤慨、无奈……<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">到今天,我们终于相信我们不能再做其他的努力去改变什么,只有平静地接受现实,面向社团内外发出这个声明:</strong><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;">对不起,我们即将停办夜校。</span></strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">下面的文字,不单是为了做一个通知,也是为了给关注新光的人们、给新光的志愿者们一个解释和交代。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">从上学期开始,参加新光夜校的工友数量开始缩减。20个人,10个人,5个人,2个人,没有人……我们几个老成员最开始是惊讶的,因为这之前的课程几乎从来不会少于10个工友参加,讲到诸如北京景点历史故事,或是医疗健康这种热门内容时,总是把教二一楼的某个小教室坐得满满当当。<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">可如今,多少个晚上,我们按照以往的惯例站在教二门口迎接工友们到来,常常是半个小时过后,依然见不到一个熟悉的身影。</strong></span></p>
<section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><img style="box-sizing: border-box; vertical-align: middle; width: 560.5px; word-wrap: break-word !important; visibility: visible !important;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649750.jpeg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" width="95%" /></p>
</section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><img style="box-sizing: border-box; vertical-align: middle; width: 560.5px; word-wrap: break-word !important; visibility: visible !important;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649755.jpeg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" width="95%" /></p>
</section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">工友们期待的眼神、认真做笔记的样子,</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">曾是我们每周最大的欢乐</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><img style="box-sizing: border-box; vertical-align: middle; width: 560.5px; word-wrap: break-word !important; visibility: visible !important;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649759.jpeg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" width="95%" /></p>
</section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">即使来参加夜校的工友越来越少,</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">志愿者们依然教得很用心</span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="max-width: 100%; box-sizing: border-box; min-height: 1em; word-wrap: break-word !important;"></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">面对刚加入新光、怀着紧张和期待前来讲课的志愿者,这种情况就更加让人尴尬。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">我们开始反思。是不是宣传出了问题?新光一贯的宣传方式是在课前到各大食堂的窗口发放传单,介绍课程内容,和工友们亲切地寒暄。我们猜是不是这学期宣传单发得太晚?是不是很多工友没有通知到?是不是聊得太少了?</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></strong></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">我们提前两天去发传单。我们在一个食堂宣传两次夜校。我们坐下来和下了班吃饭的工友聊天到食堂关门。</span></strong></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">结果并没有什么不同。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">是主题不够好吗?历史、时政、法律、养生、英语、读书会……这些课程都延续了之前的经验,并且尽量选择他们可能感兴趣的内容。每一次课都制作了PPT,由高年级的同学提出修改意见,课前也进行了讨论或试讲,除了存在新加入的大一同学不够不成熟、互动比较少的问题之外,夜校并没有在提供的内容上发生退步。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">小鲜肉志愿者们建议去问问叔叔阿姨想听什么,做一个调查。<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">我们只能无奈地说,我们已经问过很多次了,大多数时候他们自己也说不清,只是乐呵呵地说你们讲什么我们就听什么。</strong>夜校的课程也都是自觉地根据工友的反应稳定下来的,他们都算得上感兴趣。<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">如果问得更多一些,他们会说,你们讲讲我们工资怎么能涨点儿吧!你们讲了也不能让我们活儿少点儿啊!</strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">夜校的情况就是这样。它本身就是工友们在枯燥繁重的一天工作之后,寻求知识、愉悦和交流的过程,既是课堂也是聚会。<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">我们知道课程本身对他们而言不是必须的,这么多年来新光夜校成了他们生活里一个美好的调剂,甚至成了一些工友的习惯。他们能从里面获得一些养分,无论是知识上的还是情感上的。</strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><img style="box-sizing: border-box; vertical-align: middle; width: 560.5px; word-wrap: break-word !important; visibility: visible !important;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649764.jpeg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" width="95%" /></p>
</section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">新光电脑课,大家围在一起,更像是聚会,而不是课堂</span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="max-width: 100%; box-sizing: border-box; min-height: 1em; word-wrap: break-word !important;"></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">这种需求现在已经变淡了吗?历经近六年的“新鲜感”突然消失?</span></strong></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">连以前几乎每一次活动都来的老面孔也不见了。我们很困惑,也有些慌张,发传单的时候、课前课后、在食堂闲聊的时候……我们都会问工友们为什么不来上课了。问了很久,我们慢慢才可以确认,夜校的困难很大程度上是客观因素带来的,似乎也不是我们能改变的。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #000000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">不只一个工友、不只来自一个食堂的工友告诉我们:食堂裁员了,没有再招人,他们的活儿几乎加了一倍,每天的工作都很劳累,下了班以后在宿舍一动也不想动,门都懒得出。</span></strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">这学期的情况也是一样。每个食堂都不新招人了,领导给工友们的说法是,食堂每年亏着很多钱。一个食堂的大姐说,两年前她刚来的时候前厅有15个人,到现在只剩下8个人,活还是一样多(工资却基本没有上涨)。东区负责洗碗和卫生的赵阿姨每天到晚上八点多才下班,而我们的活动只能八点开始,阿姨晚上还要洗衣服和洗漱。其他洗碗的阿姨说,你去找赵阿姨吧,她管我们,她让提前下班我们就能提前下班。赵阿姨说,活儿有那么多,必须干这么久才能干完。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">“不能多雇几个刷碗的吗?”“这个领导管,我们没办法。” 她的双手几年下来被消毒水刺激得又粗又红。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">前两天的中午,北区二层收餐盘的地方盘子多得零乱地堆到了地上,阿姨对同学说别帮忙捡,领导在一边看着呢,会说的。</span></p>
<p style="max-width: 100%; box-sizing: border-box; min-height: 1em; word-wrap: break-word !important;"></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; transform: translate3d(0px, 0px, 0px); line-height: 1.5em; word-wrap: break-word !important;"><img style="box-sizing: border-box; vertical-align: middle; width: 560.5px; word-wrap: break-word !important; visibility: visible !important;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649768.jpg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" width="95%" /></p>
</section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">最忙碌的中午,收残区却只有一个人。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">阿姨一边收一边对学生说:</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">“求求你们啦放地上吧,堆太多了会掉下来的……”</span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="max-width: 100%; box-sizing: border-box; min-height: 1em; word-wrap: break-word !important;"></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; transform: translate3d(0px, 0px, 0px); word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">我们还是满怀希望地舍不得放弃夜校。这学期开设了英语课和社会与法课,分别针对想学习的年轻保安和食堂的工友们。课表已经安排好了,每周都提前备好课,提前去各个食堂宣传,周末晚上八点,教室里依然是空空的。讲社保的一次是最多的,宣传了两遍,大概有7个工友参加。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">发传单的时候,很多熟悉的工友会说,我们要是有时间就去了,实在是太累了<span style="max-width: 100%; color: #000000; box-sizing: border-box !important; word-wrap: break-word !important;">。</span><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">是啊,也许夜校对他们而言处在一个太高的需求层次上吧,他们现在只有精力去考虑生存和基本的休息。</strong></span></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></strong></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">那么,我们就不用再继续了。</span></strong><span style="max-width: 100%; font-size: 15px; text-indent: 0em; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; text-indent: 0em; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">社会与法的课程在经历了几次没有一个工友的尴尬之后自然地终止了。至于英语课,目前有4个保安差不多每次都来,主要是出于兴趣。我们会将它坚持到学期结束,这大概是新光夜校的尾声。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">新光平民发展协会到今天有六年多了。从最早的大学生创业孵化项目,到如今成为一个在人大校内,甚至北京高校之间都有一定代表性的公益类社团,新光一直在探索怎么更好地服务工友。六年来,社团的活动内容和形式发生了很多变化:羽毛球赛、茶话会、手工坊、元旦晚会纷纷开展起来,但夜校一直是最重要、最能代表新光本来精神的部分,是新光最为看重的平台。不管志愿者和组织者们怎么换了一届又一届,新光始终稳定地提供时政、历史、法律、养生和英语等课程,由志愿者和老师、校外人士等为工友们讲授实用又有趣的内容。</span></p>
<img style="width: 100%; height: auto;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649772.jpg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" />
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"></p>
</section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">《人民日报》对我们的报道</span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="max-width: 100%; box-sizing: border-box; min-height: 1em; word-wrap: break-word !important;"></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> <span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">“探索大学精神,实现平民梦想”,夜校是对这句话最初的注解。</strong></span>2011年正值农民工失业潮,金融危机的余波仍然在侵蚀着经济弱势群体的生存空间。新光的创立者们相信知识是农民工改变职业命运的阶梯,每个人在知识面前应该是平等的,平民有权利获取更多的知识资源。<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">而就在我们身边的校工,有的年纪与我们相仿甚至比我们还要小,也有更多是为了孩子过得更好而出来打拼,他们每天工作10小时左右,住在信号微弱的地下室,娱乐匮乏、身体疲惫,遑论学习和提升职业技能、充实精神世界。工人们获得教育的权利被现实挤压着,而大学生恰恰是最优秀资源的先占者,面对这种巨大的反差,后者不该安然处之。</strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">于是,新光人开始了一场挑战不平等现实的实践。</span></strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">最早的课程关于农民工创业,计算机之类实用的课程,目的在于帮助他们提升职业技能。我们很欣慰地看到,真的有不少工友完成了自考和其他一些资格证的考试,去到了更好的岗位上。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">大概是这类课程要做到专业和实用确实比较困难,也不能面向多数年纪较大的叔叔阿姨,后来,夜校的方向慢慢有了调整,转为更贴近生活,同时能够拓宽视野,帮助工友们认识社会、融入城市生活,以及维护自己权利的内容。夜校的课程有的是关于外来务工人员子女进京的教育、居住等问题,有的是关于食品健康和安全,或时政新闻等,还有一些经典书目的读书会(同时赠书)。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">也是这样,才有了当年创办人之一的吴俊东师兄的那篇《我们为什么要给人大后勤工友办夜校》,有了《人民日报》等众多媒体的采访,有了“人民”的大学的说法。我们都引以为骄傲。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><img style="width: 100%; height: auto;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649777.jpg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" /></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">人民大学,真能成为人民的大学吗?</span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="max-width: 100%; box-sizing: border-box; min-height: 1em; word-wrap: break-word !important;"></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">但是,这个小小的社团一直有些不敢担当“人民”的大学的重任,如今是真的不能担当了。</span></strong></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">哪怕我们能提供所谓职业技能,它对于工友就业的帮助也太小了。去年因为自考北大而在保安队广为人知的年轻保安柳青,凭借几年下来的自学、听课,最终通过了北京大学法学专业的自学考试并拿到自考的本科学位证。离开人大以后,他也并没有找到多么光鲜的工作,单位里学历在大专及以上的员工有宿舍分,而他是没有的。面对微薄的薪水和偶尔无薪的“加班”,他还要在北京再奋斗几年,把始终没能释放的青春和热血消耗干净再离开。留学生公寓的保安惠大哥也说,自考的保安不像之前那么多了。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">现在的困难是,职业技能之外的内容我们都几乎无法传递给工友们。食堂裁员,保安我们也问了一遍又一遍,同在一个单位上,东西门的上班时间都完全不同,西门有的岗亭两小时工作休息来回倒,一天下来什么也干不了。我们认识的工友玲玲,初中毕业刚出来打工几个月,在阳光地带早中晚班轮换,刚来的时候两天可以看完一本《鲁滨孙漂流记》的她,现在每天作息和三餐混乱,抱着手机追剧,已经一点都不想看书了。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">我们越来越感觉到平民教育氧分的稀缺,感觉到来自农村的打工者在城市里空间的狭小——不仅是在文化上、精神上,更是物质层面出路的逼仄,这种压力既来自学校内部,也来自社会。</span></strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; text-indent: 2em; line-height: 1.5em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></strong></span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><img style="width: 100%; height: auto;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649781.jpg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" /></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">在一次书法课上,一位大姐用饱经沧桑的手写下:</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: normal; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">“劳动最光荣”。</span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="max-width: 100%; box-sizing: border-box; min-height: 1em; word-wrap: break-word !important;"></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">社会与法的课程已经终止了,至于英语课,新光仍然在拓展其它的项目:运动队、太极队、广场舞,期末大型晚会和不定期的茶话会,也再想办法做得更好一些,让工友们参与更多一些<span style="max-width: 100%; color: #000000; box-sizing: border-box !important; word-wrap: break-word !important;">。</span><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">对于人大校内外来务工群体的关注,为他们说话、为他们服务的立场,是我们一直不会改变的底色</strong></span>,而我们相信,除了以课堂的形式讲授知识,我们可以在一起相处的过程中为工友们提供更多的东西。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">尝试依然很难。这些意味着放松的常规活动,绝大多数工友依然会由于时间不足而不能参加。<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">我们一直提醒自己,不是为了搞活动而搞活动,工友是目的,“平民发展”是目的。</strong>面对着有时只有五个人的太极队,志愿者依然教得很用心。前段时间,大三和大二的同学带着几个新加入的志愿者去探访工友们的宿舍,了解他们的需求,想陪他们聊聊天,解决一些生活上的小困扰——需求是客观存在的,工友没有办法来找我们,我们可以去找他们。去了两次,第二次被阳光地带下面的一位宿管拦下了。几个人尴尬地站在门外,北区二层的阿姨委婉地让我们下次再来。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">新光还制作了印着志愿者联系方式的“新光小帮手”名片发给工友们,他们需要下载电视剧、查社保、借书、咨询孩子的教育问题……都可以找我们。<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">服务、休闲、归属感——新光的关键词正在发生置换。</strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><img style="box-sizing: border-box; vertical-align: middle; width: 560.5px; word-wrap: break-word !important; visibility: visible !important;" title="转发|对不起,我们不能给人大后勤工友办夜校了" src="https://sdxf29.gq/wp-content/uploads/2017/04/beepress-beepress-weixin-zhihu-jianshu-plugin-2-4-2-3408-1524649786.jpeg" alt="转发|对不起,我们不能给人大后勤工友办夜校了" width="95%" /></p>
</section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: center; line-height: 1.5em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #888888; font-size: 12px; box-sizing: border-box !important; word-wrap: break-word !important;">平民梦想似乎是很难探索了,但是我们依然想成为人大这一常常被忽视的群体的"新光”,去给他们带来温暖。</span></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="max-width: 100%; box-sizing: border-box; min-height: 1em; word-wrap: break-word !important;"></p>
</section></section></section><section style="max-width: 100%; box-sizing: border-box; color: #3e3e3e; font-family: 'Hiragino Sans GB', 'Microsoft YaHei', Arial, sans-serif; font-size: 16px; line-height: 25.6px; white-space: normal; word-wrap: break-word !important;"><section style="max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;"><section style="padding-right: 8px; padding-left: 8px; max-width: 100%; box-sizing: border-box; word-wrap: break-word !important;">
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">对于这样的转变,我们说到底是无奈的。</span></strong><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">太极和广场舞对于部分叔叔阿姨来说可以成为不错的选择,然而新光还是面对不了更多工友们眼睛里的憧憬——平等、尊严、梦想……农村还在萎缩,越来越多的年轻人从小村落来到人大,更多的年轻人从河北河南山东山西陕西来到北京,<strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;">新光只能在这一方小小的校园里搭建起每周几次的聚会,让工友和学生们一起抱团取暖,对于更加坚硬的现实,我们依然无力。</strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">我们真的希望哪天夜校可以恢复,工人们再次走进教二的小教室,怀着期待的眼神等待知识的浸润……我们更希望的是,工人们的教育不是由一小群学生来承担,而是由他们自己在企业、在社会提供的良好条件下完成,他们的职业发展不再面对着狭窄和迷茫的小道,而是真正的坦途。</span></strong></span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">大概这一届的新光人想得有些多了。不过我们相信六年前新光的前辈们也想了很多;但愿今天,不只我们想了这么多。</span></strong></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">夜校中止,关注新光的媒体可以知悉,而不再以了解夜校为名前来采访。如果希望了解后勤工友们的教育文化状况,去他们的工作场所或者宿舍问一问工作时长和他们消费的文化产品就知道了,新光不能提供更多了。</span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;"> </span></p>
<p style="margin-top: 5px; margin-bottom: 5px; max-width: 100%; box-sizing: border-box; min-height: 1em; text-align: justify; line-height: 1.5em; text-indent: 0em; word-wrap: break-word !important;"><span style="max-width: 100%; color: #ff0000; box-sizing: border-box !important; word-wrap: break-word !important;"><strong style="max-width: 100%; box-sizing: border-box !important; word-wrap: break-word !important;"><span style="max-width: 100%; font-size: 15px; box-sizing: border-box !important; word-wrap: break-word !important;">最后,对于夜校的停办,新光平民发展协会在这里道歉——向过去的新光人,向新光的志愿者,向关注新光的同学和校内外热心的媒体,向人大校园里超过两千名辛勤工作、艰苦朴素、默默无闻的工友们,诚恳地道歉。</span></strong></span></p>
</section></section></section>
<blockquote class="keep-source"> </blockquote>
</div> | 316.27027 | 1,578 | 0.732456 | eng_Latn | 0.238346 |
43dbb2fa708e748bc54440d2e78c4948aa9b4b23 | 1,873 | md | Markdown | open-metadata-publication/website/open-metadata-types/0057-Integration-Capabilities.md | rychagova/egeria | 846f510c025a012dd5599e3774dfc267e22df293 | [
"CC-BY-4.0"
] | 1 | 2021-06-04T14:52:48.000Z | 2021-06-04T14:52:48.000Z | open-metadata-publication/website/open-metadata-types/0057-Integration-Capabilities.md | rychagova/egeria | 846f510c025a012dd5599e3774dfc267e22df293 | [
"CC-BY-4.0"
] | 1,129 | 2020-11-18T11:40:23.000Z | 2022-03-01T01:27:37.000Z | open-metadata-publication/website/open-metadata-types/0057-Integration-Capabilities.md | rychagova/egeria | 846f510c025a012dd5599e3774dfc267e22df293 | [
"CC-BY-4.0"
] | null | null | null | <!-- SPDX-License-Identifier: CC-BY-4.0 -->
<!-- Copyright Contributors to the ODPi Egeria project 2020. -->
# 0057 Integration Capabilities
A **SoftwareService** provides a well defined software component that can be
called by remote clients across the network. They may offer
a request response or an event driven interface or both.
Typically software services implement specific business
functions, such as onboarding a new customer or taking an order
or sending an invoice. Egeria offers specialized software services
related to the capture and management of open metadata.
These are shown as specialist types:
* **MetadataIntegrationService** describes an [Open Metadata Integration Service (OMIS)](../../../open-metadata-implementation/integration-services)
that runs in an [Integration Daemon](../../../open-metadata-implementation/admin-services/docs/concepts/integration-daemon.md).
* **MetadataAccessService** describes an [Open Metadata Access Service (OMAS)](../../../open-metadata-implementation/integration-services)
that runs in a [Metadata Access Point](../../../open-metadata-implementation/admin-services/docs/concepts/metadata-access-point.md).
* **EngineHostingService** describes an [Open Metadata Engine Service (OMES)](../../../open-metadata-implementation/engine-services)
that runs in an [Engine Host](../../../open-metadata-implementation/admin-services/docs/concepts/engine-host.md).
* **UserViewService** describes an [Open Metadata View Service (OMVS)](../../../open-metadata-implementation/view-services)
that runs in a [View Server](../../../open-metadata-implementation/admin-services/docs/concepts/view-server.md).

Return to [Area 0](Area-0-models.md).
----
License: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/),
Copyright Contributors to the ODPi Egeria project. | 55.088235 | 148 | 0.766151 | eng_Latn | 0.788236 |
43dc651372cf20f06ad4998bec7215fa65be57fe | 441 | markdown | Markdown | _projects/portfolio.markdown | netomi/netomi.github.com | 9773627f144833c505afaa28f86778c969581494 | [
"Apache-2.0"
] | 1 | 2022-03-26T14:41:11.000Z | 2022-03-26T14:41:11.000Z | _projects/portfolio.markdown | netomi/netomi.github.com | 9773627f144833c505afaa28f86778c969581494 | [
"Apache-2.0"
] | 7 | 2020-04-20T10:16:24.000Z | 2021-09-28T01:10:53.000Z | _projects/portfolio.markdown | netomi/netomi.github.com | 9773627f144833c505afaa28f86778c969581494 | [
"Apache-2.0"
] | null | null | null | ---
layout: project
title: Blog & Portfolio
description: Simple blog and portfolio page
img: /img/logo-portfolio.png
link: https://github.com/netomi/netomi.github.io
role: owner
license: Apache license 2.0
category: misc
tags: jekyll html css
---
This webpage has been created with the help of <a href="https://jekyllrb.com">jekyll</a>, a static site generator.
Feel free to re-use my template but add credits attribution in your footnote. | 29.4 | 114 | 0.764172 | eng_Latn | 0.82541 |
43dce7a5b8b2d98ac0553b2f6431fe0145b51188 | 518 | md | Markdown | packages/candid/README.md | wewei/agent-js | cf407c49287d74f0e52395154925c8025eef0d5d | [
"Apache-2.0"
] | 73 | 2021-06-01T23:59:07.000Z | 2022-03-26T10:59:32.000Z | packages/candid/README.md | wewei/agent-js | cf407c49287d74f0e52395154925c8025eef0d5d | [
"Apache-2.0"
] | 58 | 2021-06-03T16:53:07.000Z | 2022-03-29T21:27:58.000Z | packages/candid/README.md | wewei/agent-js | cf407c49287d74f0e52395154925c8025eef0d5d | [
"Apache-2.0"
] | 35 | 2021-06-02T15:22:05.000Z | 2022-03-31T11:18:42.000Z | # @dfinity/candid
JavaScript and TypeScript library to work with Candid interfaces
Visit the [Dfinity Forum](https://forum.dfinity.org/) and [SDK Documentation](https://sdk.dfinity.org/docs/index.html) for more information and support building on the Internet Computer.
Additional API Documentation can be found [here](https://peacock.dev/principal-docs).
---
## Installation
Using Principal:
```
npm i --save @dfinity/principal
```
### In the browser:
```
import { Principal } from "@dfinity/principal";
```
| 21.583333 | 186 | 0.735521 | eng_Latn | 0.751902 |
43dd36e87d9bc072944e205a8d65b028ef90827d | 3,217 | md | Markdown | services/ObjectStorage/nl/ja/ts_index.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | null | null | null | services/ObjectStorage/nl/ja/ts_index.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | null | null | null | services/ObjectStorage/nl/ja/ts_index.md | Ranjanasingh9/docs | 45286f485cd18ff628e697ddc5091f396bfdc46b | [
"Apache-2.0"
] | 1 | 2020-09-24T04:33:34.000Z | 2020-09-24T04:33:34.000Z | ---
copyright:
years: 2014, 2017
lastupdated: "2017-02-10"
---
{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:tsSymptoms: .tsSymptoms}
{:tsCauses: .tsCauses}
{:tsResolve: .tsResolve}
# {{site.data.keyword.objectstorageshort}} のトラブルシューティング
{: #troubleshooting}
{{site.data.keyword.objectstoragefull}} の使用時のトラブルシューティングに関する一般的な質問について、以下に回答を示します。
{: shortdesc}
## Liberty プロファイルで openstack4J を使用しているときに、認識されないトークン・コンテンツ・パックが返された
{: #unrecognized_token}
Liberty プロファイルで openstack4j を使用しているときに、以下のスタック・トレースが発生した可能性があります。
```
Exception thrown by application class 'org.openstack4j.connectors.okhttp.HttpResponseImpl.readEntity:124'
org.openstack4j.api.exceptions.ClientResponseException: Unrecognized token 'contentpack': was expecting ('true', 'false' or 'null') at [Source: contentpack ; line: 1, column: 12]
at org.openstack4j.connectors.okhttp.HttpResponseImpl.readEntity(HttpResponseImpl.java:124)
at org.openstack4j.core.transport.HttpEntityHandler.handle(HttpEntityHandler.java:56)
at org.openstack4j.connectors.okhttp.HttpResponseImpl.getEntity(HttpResponseImpl.java:68)
at org.openstack4j.openstack.internal.BaseOpenStackService$Invocation.execute(BaseOpenStackService.java:169)
at org.openstack4j.openstack.internal.BaseOpenStackService$Invocation.execute(BaseOpenStackService.java:163)
at org.openstack4j.openstack.storage.object.internal.ObjectStorageContainerServiceImpl.list(ObjectStorageContainerServiceImpl.java:41)
at com.mimotic.SecureMessageApp.HelloResource.getInformation(HelloResource.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
```
{: screen}
{: tsSymptoms}
この問題は、クラス・ロードに原因があります。この場合、openstack4j ライブラリーに、Liberty プロファイルで提供されるものと同じパッケージがいくつか含まれています。例えば、OpenStack4j では JERSEY を使用しますが、これは Wink ライブラリーと衝突する可能性があります。
{: tsCauses}
この問題は、以下のいずれかによって解決できます。
{: tsResolve}
* 逆方向のクラス・ロード (parentLast) を使用します。
* 有効にされたフィーチャーから jaxr を除外します。
## {{site.data.keyword.objectstorageshort}} に関するヘルプおよびサポートの入手
{: #gettinghelp}
{{site.data.keyword.objectstoragefull}} の使用時に問題や疑問が生じた場合は、フォーラムを通じて情報を検索したり、質問をしたりすることにより、支援を得ることができます。サポート・チケットをオープンすることもできます。
フォーラムを使用して質問を行う場合は、{{site.data.keyword.Bluemix_notm}} 開発チームの目にとまるように、質問にタグを付けてください。
* {{site.data.keyword.objectstorageshort}} に関する技術的な質問がある場合は、<a href="http://stackoverflow.com/search?q=object-storage+ibm-bluemix" target="_blank">Stack Overflow <img src="../../icons/launch-glyph.svg" alt="外部リンク・アイコン"></a> に質問を投稿し、質問に「ibm-bluemix」と「object-storage」のタグを付けてください。
* サービスおよび使用開始の手順に関する質問の場合は、
<a href="https://developer.ibm.com/answers/topics/objectstorage/?smartspace=bluemix" target="_blank">IBM developerWorks dW Answers <img src="../../icons/launch-glyph.svg" alt="外部リンク・アイコン"></a> フォーラムを使用してください。「objectstorage」と「bluemix」のタグを含めてください。
フォーラムの使用について詳しくは、[「ヘルプの取得」](/docs/support/index.html#getting-help)を参照してください。
IBM サポート・チケットのオープン、またはサポート・レベルとチケットの重大度に関する情報は、[「サポートへのお問い合わせ」](/docs/support/index.html#contacting-support)を参照してください。
| 45.957143 | 278 | 0.797016 | yue_Hant | 0.520248 |
43dda481d1a6c62eedc8a913e0e12ab9a0b9c633 | 787 | md | Markdown | docs/code_quality.md | mobius-labs/app | bdf8226d8b16cea609a7af01be51c9bd4b867ab3 | [
"MIT"
] | 1 | 2021-11-13T10:52:08.000Z | 2021-11-13T10:52:08.000Z | docs/code_quality.md | mobius-labs/app | bdf8226d8b16cea609a7af01be51c9bd4b867ab3 | [
"MIT"
] | 1 | 2021-11-13T04:25:00.000Z | 2021-11-13T04:25:00.000Z | docs/code_quality.md | mobius-labs/app | bdf8226d8b16cea609a7af01be51c9bd4b867ab3 | [
"MIT"
] | null | null | null | # Code Quality
[[back to README]](../README.md)
## Frontend
We have a five-pronged approach to code-quality on the frontend.
- _TypeScript_ - better editor support, catch runtime type errors at compile time.
- _ESLint_ - to catch more bugs, avoid common Vue pitfalls
- _Prettier_ - for consistent code formatting
- _Unit tests_ - to test individual Vue components and classes
- _E2E tests (Cypress)_ - to confirm the overall application flow
According to the frontend [GitHub Actions workflow](../.github/workflows/frontend.yml),
all PRs making changes to the frontend must pass the ESLint check,
and match the formatting of Prettier, and pass unit tests in order to be accepted.
Though this is quite a harsh rule, it should make it easier to
maintain code quality in the long run.
| 39.35 | 87 | 0.773825 | eng_Latn | 0.995162 |
43dea09b0d3e9754ca6a6bf2d377d7fe772b9f55 | 1,883 | md | Markdown | content/post/requirejs-singletone/index.md | tarampampam/blog-discussions | ebc04c759a39a9a4d3fb97849d110661da9d2d1c | [
"MIT"
] | null | null | null | content/post/requirejs-singletone/index.md | tarampampam/blog-discussions | ebc04c759a39a9a4d3fb97849d110661da9d2d1c | [
"MIT"
] | 10 | 2018-09-12T09:26:49.000Z | 2020-01-30T10:14:31.000Z | content/post/requirejs-singletone/index.md | tarampampam/blog | 2a26315ae11d1b6cbb56b7e9e33b03a37dd22e67 | [
"MIT"
] | null | null | null | ---
title: Синглтон для RequireJS
slug: requirejs-singletone
date: 2017-02-18T05:19:58+00:00
expirydate: 2022-03-01
aliases: [/dev/requirejs-singletone]
image: cover.jpg
categories: [dev]
tags:
- javascript
- oop
- requirejs
---
Частенько при разработке приложения с использованием [requirejs](http://requirejs.org/) возникает необходимость в реализации [паттерна синглтона](https://ru.wikipedia.org/wiki/%D0%9E%D0%B4%D0%B8%D0%BD%D0%BE%D1%87%D0%BA%D0%B0_(%D1%88%D0%B0%D0%B1%D0%BB%D0%BE%D0%BD_%D0%BF%D1%80%D0%BE%D0%B5%D0%BA%D1%82%D0%B8%D1%80%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D1%8F)). И вот, испробовав пример его реализации что описан ниже заявляю - он имеет право на жизнь. Не без своих недостатков, разумеется, но в целом вполне применибельно:
<!--more-->
```javascript
'use strict';
define([], function () {
/**
* Singletone instance.
*
* @type {OurNewSingletone|null}
*/
var instance = null;
/**
* OurNewSingletone object (as singletone).
*
* @returns {OurNewSingletone}
*/
var OurNewSingletone = function () {
/**
* Initialize method.
*
* @returns {}
*/
this.init = function () {
// Make init
};
// Check instance exists
if (instance !== null) {
throw new Error('Cannot instantiate more than one instance, use .getInstance()');
}
// Execute initialize method
this.init();
};
/**
* Returns OurNewSingletone object instance.
*
* @returns {null|OurNewSingletone}
*/
OurNewSingletone.__proto__.getInstance = function () {
if (instance === null) {
instance = new OurNewSingletone();
}
return instance;
};
// Return singletone instance
return OurNewSingletone.getInstance();
});
```
И после, указывая наш модуль в зависимостях - мы получаем уже готовый к работе инстанс объекта _(один и тот же в разных модулях)_, что и требуется.
| 25.106667 | 512 | 0.662241 | yue_Hant | 0.20558 |
43df264ed3dfa9745374d7b0a8ed89508e290dea | 10,458 | md | Markdown | README.md | alexanu/prickle | e76f4c9d7afaa65a6d0470f649fbf9edbc2bc500 | [
"MIT"
] | null | null | null | README.md | alexanu/prickle | e76f4c9d7afaa65a6d0470f649fbf9edbc2bc500 | [
"MIT"
] | null | null | null | README.md | alexanu/prickle | e76f4c9d7afaa65a6d0470f649fbf9edbc2bc500 | [
"MIT"
] | null | null | null | # prickle
A Python toolkit for high-frequency trade research.
Schema is here:
https://bubbl.us/NDc3NDc4NC80MTAwODQwLzBlNjE3MmU2YzBlMjQxYTBkNTYzOGFiM2E5MDBkMDU5@X?utm_source=shared-link&utm_medium=link&s=10065953
## Table of Contents
1. [Overview](#overview)
2. [Nasdaq ITCH Data](#nasdaq-itch-data)
3. [Package Details](#package-details)
4. [Examples](#examples)
5. [Installation](#installation)
## Overview
Prickle is a Python package for financial researchers. It is designed to simplify the collection of ultra-high-frequency market microstructure data from Nasdaq. The project provides an open-source tool for market microstructure researchers to process Nasdaq HistoricalView-ITCH data. It provides a free alternative to paying for similar data, and establishes a common set of preprocessing steps that can be maintained and improved by the community.
Prickle creates research-ready databases from Nasdaq HistoricalView-ITCH data files. Raw ITCH files are provided "as is" in a compressed, binary format that is not particularly useful. Prickle decodes these files and creates databases containing sequences of messages, reconstructed order books, along with a few other events of interest.
## Nasdaq ITCH Data
Nasdaq HistoricalView-ITCH describes changes to Nasdaq limit order books at nanosecond precision. Researchers at academic institutions can obtain the data for free from Nasdaq by signing an academic non-disclosure agreement. Nasdaq records updates for all of their order books in a single file each day. The updates are registered as a sequence of variable-length binary “messages” that must to be decoded. Nasdaq provides documentation that gives the exact format for every message type depending on the version of the data.
The messages all follow a specific format:
1. First, a (two-byte) integer specifies the number of bytes in the message.
2. Next, a (one-byte) character specifies the **type** of the message.
3. Finally, the message is a sequence of bytes whose format depends on its type.
In Python, it is simple to decode the message bytes using the built-in `struct.unpack` method. (In fact, it seems that passing the number of bytes is unnecessary because every message type has a fixed number of bytes, so knowing the type of the message should be enough).
The logic behind this format is clear enough: storing messages this way saves a lot of space. From the researchers point of view, however, there are serious problems. First, the messages need to be decoded before they are human-readable. Second, because of the sequential nature of the messages—and the fact that all stocks are combined into a single daily file—there is no choice but to read (and decode) *every* message. There hundreds of millions of messages each day. The goal of prickle is to convert the daily message files into databases that are user-friendly.
Here are a few examples. The simplest message is a timestamp message that is used to keep track of time. A timestamp message looks like this (after decoding bytes):
`(size=5, type=T, sec=22149)`
This happens to be the first message of the day. The next message is
`(size=6, type=S, nano=287926965, event=O)`
The type 'S' means that it is a system message; it occurred 287926965 nanoseconds after the first message, and the event type 'O' indicates that it marks the beginning of the recording period.
If we keep reading messages, we will eventually come to one that look like this:
`(size=30, type=A, nano=163877100, name=ACAS, size=B, price=115000, shares=500, refno=26180)`
This is the first true order book event in the file. It indicates that a bid (limit order) was placed for 500 shares of ACAS stock at $115.000, and that the order arrived 163877100 nanoseconds after the last second. Notice also that the message contains a reference number (26180): the reference number is important because it allows us to keep track of the status of each order. For example, after this order was placed, we might see another message that looks like this:
`(size=13, type=D, nano=552367044, refno=26180)`
This message says that the order with reference number 26180 was deleted, and we can therefore remove those shares from the order book and stop keeping track of the order's status.
Besides the messages, researchers may be interested in the actual order book. The messages only indicate changes to the order book, but prickle uses the changes to reconstruct the state of the order book.
## Package Details
The main method of prickle is `unpack`, which processes daily ITCH message files (one at a time). `unpack` is not intended to process the messages for all of the securities traded on Nasdaq in one pass. Rather, it is expected that the user provides a list of stocks that she would like data for (which could range from a single stock to several hundred—`unpack` has no problem to process the S&P 500). Focusing on a smaller list of stocks allows us to “skip” messages. It also helps alleviate the tension between writing messages to file and storing messages in memory. It is inefficient to write single messages to file, but storing the processed data in local memory is also infeasible (for a decent number of securities). Therefore, `unpack` stores the processed messages up to a buffer size before writing the messages to file. You can determine/fix the maximum amount of memory that `unpack` will use ahead of time by adjusting the buffer size. For example, if you only want to process data for a single stock, then no buffer is required. If you want to process data for the entire S&P 500, and you only have say 2GB of memory available, then you could select a buffer size around 10,000 messages. (What is the approximate calculation?)
Processing data for a few hundred stocks might take several hours. If you intend to process several months or years of data, then you will probably want to run the jobs on a cluster, in which case you might face memory constraints on each compute node. The buffer size allows you to fix the maximum memory required in advance. Note as well that there is not much benefit to increasing buffer sizes beyond a certain point because 100,000 messages per day is relative large number, but only amounts to 10 writes (at a 10,000 buffer a size).
In addition to processing the binary messages, prickle generates reconstructed order books. The process for doing so centers around the nature of the message data. In particular, Nasdaq reduces the amount of data passed directly by each message by using reference numbers on orders that update earlier orders. For example, if the original order specified (type=‘A’, name=’AAPL’, price=135.00, shares=100, refno=123456789), then a subsequent message informing market participants that the order was executed would look something like this: (type=‘E’, shares=100, refno=123456789). Therefore, instead of simply using each order to directly make changes to the order book, `unpack` maintains a list of outstanding orders that it uses to keep track of the current state of each order, and fill-in missing data from incoming messages that can then be used to make updates to order books. The complete flow of events is shown in the figure below.
![unpack flow chart]()
As you can see, `unpack` generates (or updates) five databases: NOII messages, system messages, trade messages, messages, and (order) books.
1. **NOII Messages**: net order imbalances and crossing messages.
2. **System Messages**: critical system-wide information (e.g., start of trading).
3. **Trade Messages**: indicate trades against hidden (non-displayed) liquidity.
4. **Messages**: all other messages related to order book updates.
5. **Books**: snapshots of limit order books following each update.
Finally, `unpack` provides two methods for storing the processed data.
1. **CSV**: The simplest choice is to store the data in csv files, organized by type, date, and security name. The organization is natural for research intending to perform analysis at the stock-day level. This choice is similar to the HDF5 choice in terms of organization and workflow, but loading data is considerably slower. In addition, the entire stock-day file must be loaded into memory before any slicing can be applied. A benefit of this format is that the data is stored in an easily interpreted manner.
2. **HDF5**: HDF5 is a popular choice for storing scientific data. With this option, data is organized by day, security, and type. It is therefore intended to be handled on a stock-day basis. Loading message or order book data for a single stock on a single day is extremely fast. The downside is that data is stored as a single data type (integers). Therefore, some of the data is not directly interpretable (e.g., the message types). In contrast to csv files, HDF5 files can be sliced *before* loading data into Python.
## Examples
## Installation
<!---->
### Requirements
This package requires **Python 3**. By default, the data is stored in text files. You will need the following to create HDF5 <!--or PostgreSQL--> databases (these are **not** installed automatically):
1. [HDF5](https://www.hdfgroup.org).
<!--2. [PostgreSQL](https://www.postgresql.org)-->
After you have installed and configured these, install prickle using the formula below:
```
python3 -m venv venv
source venv/bin/activate
git clone https://github.com/cswaney/prickle
cd prickle
pip install -e .
```
The formula installs the package into a virtual environment in "editable" mode so that any changes you might want to make will take effect the next time you open Python.
## Basic Usage
To create a new HDF5 database for AAPL and GOOG stocks from an ITCH data file `S010113-v41.txt`:
```python
import prickle as pk
pk.unpack(fin='itch_010113.bin',
ver=4.1,
date='2013-01-01',
fout='itch.hdf5'
nlevels=10,
names=['GOOG', 'AAPL'],
method='hdf5')
```
This will create a file `itch.hdf5` containing message and order book data for Google and Apple. To read the order book data back into your Python session, use `pk.read`:
```python
pk.read(db='itch.hdf5',
date='2013-01-01',
names='GOOG')
```
## Tip
Create massive datasets quickly by running jobs simultaneously (e.g. on your university's cluster).
## License
This package is released under an MIT license. Please cite me (e.g. Prickle (Version 0.1, 2018)).
| 81.703125 | 1,241 | 0.775483 | eng_Latn | 0.999101 |
43dfbcf178d9c6af8929143a47560d6f87d341b5 | 4,439 | md | Markdown | docs/ssms/agent/new-job-schedule-job-schedule-properties.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssms/agent/new-job-schedule-job-schedule-properties.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssms/agent/new-job-schedule-job-schedule-properties.md | thiagoamc/sql-docs.pt-br | 32e5d2a16f76e552e93b54b343566cd3a326b929 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Nova agenda de trabalho – Propriedades da agenda de trabalho | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql-non-specified
ms.prod_service: sql-tools
ms.service: ''
ms.component: ssms-agent
ms.reviewer: ''
ms.suite: sql
ms.technology:
- tools-ssms
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- sql13.ag.job.scheduleproperties.f1
- sql13.swb.maint.editrecurringjobsched.f1
ms.assetid: 5c0b1bc9-dd87-49cc-b0dd-75d0d922b177
caps.latest.revision: ''
author: stevestein
ms.author: sstein
manager: craigg
ms.workload: Inactive
ms.openlocfilehash: 297bbb14bd743d40cde9de33d03e3004a63058b9
ms.sourcegitcommit: 34766933e3832ca36181641db4493a0d2f4d05c6
ms.translationtype: HT
ms.contentlocale: pt-BR
ms.lasthandoff: 03/22/2018
---
# <a name="new-job-schedule---job-schedule-properties"></a>Nova agenda de trabalho – Propriedades da agenda de trabalho
[!INCLUDE[appliesto-ss-asdbmi-xxxx-xxx-md](../../includes/appliesto-ss-asdbmi-xxxx-xxx-md.md)]
> [!IMPORTANT]
> No momento, na [Instância Gerenciada do Banco de Dados SQL do Azure](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance), a maioria dos recursos do SQL Server Agent é compatível, mas não todos. Consulte [Azure SQL Database Managed Instance T-SQL differences from SQL Server](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance-transact-sql-information#sql-server-agent) (Diferenças entre o T-SQL da Instância Gerenciada do Banco de Dados SQL do Azure e o SQL Server) para obter detalhes.
Use essa página para exibir e alterar as propriedades da agenda.
## <a name="options"></a>Opções
**Nome**
Digite um nome novo para a agenda.
**Trabalhos na agenda**
Exiba os trabalhos que usam a agenda.
**Tipo de agenda**
Selecione o tipo de agenda.
**Enabled**
Marque para habilitar ou desabilitar a agenda.
## <a name="recurring-schedule-types-options"></a>Opções de tipos de agenda recorrentes
**Ocorre**
Selecione o intervalo no qual a agenda ocorre periodicamente.
**Repete-se a cada**
Selecione o número de dias ou semanas entre ocorrências periódicas da agenda. Essa opção não está disponível para agendas que ocorrem mensalmente.
**Segunda-feira**
Defina o trabalho para ocorrer em uma segunda-feira. Disponível somente para agendamentos semanais recorrentes.
**Terça-feira**
Defina o trabalho para ocorrer em uma terça-feira. Disponível somente para agendamentos semanais recorrentes.
**Quarta-feira**
Defina o trabalho para ocorrer em uma quarta-feira. Disponível somente para agendamentos semanais recorrentes.
**Quinta-feira**
Defina o trabalho para ocorrer em uma quinta-feira. Disponível somente para agendamentos semanais recorrentes.
**Sexta-feira**
Defina o trabalho para ocorrer em uma sexta-feira. Disponível somente para agendamentos semanais recorrentes.
**Sábado**
Defina o trabalho para ocorrer em um sábado. Disponível somente para agendamentos semanais recorrentes.
**Domingo**
Defina o trabalho para ocorrer em um domingo. Disponível somente para agendamentos semanais recorrentes.
**Day**
Selecione o dia do mês que a agenda ocorre. Disponível somente para agendas mensais recorrentes.
**de cada**
Selecione o número de meses entre ocorrências da agenda. Disponível somente para agendas mensais recorrentes.
**O**
Especifique uma agenda durante um dia específico da semana em uma semana específica durante o mês. Disponível somente para agendas mensais recorrentes.
**Ocorre uma vez em**
Defina a hora para que um trabalho ocorra diariamente.
**Ocorre a cada**
Defina o número de horas, minutos ou segundos entre ocorrências.
**Data de início**
Defina a data quando essa agenda ficará efetiva.
**Data de término**
Defina a data quando a agenda já não será válida.
**Nenhuma data de término**
Especifique que a agenda permanecerá efetiva indefinidamente.
## <a name="one-time-schedule-types-options"></a>Opções de tipos de agendas que ocorrem apenas uma vez
**Date**
Selecione a data para que o trabalho seja executado.
**Hora**
Selecione a hora para que o trabalho seja executado.
## <a name="see-also"></a>Consulte Também
[Criar e anexar agendas para trabalhos](../../ssms/agent/create-and-attach-schedules-to-jobs.md)
[Agendar um trabalho](../../ssms/agent/schedule-a-job.md)
| 38.938596 | 538 | 0.749718 | por_Latn | 0.997963 |
43dfcf36f6ef4b66284a745a0c866a6018d6002b | 1,396 | md | Markdown | README.md | PacktPublishing/Learning-C-8 | 89c8c7a334620f52c62f80f20ac6c0d6131ed4eb | [
"MIT"
] | 7 | 2019-05-07T14:23:16.000Z | 2020-01-20T20:32:01.000Z | README.md | PacktPublishing/Learning-C-8 | 89c8c7a334620f52c62f80f20ac6c0d6131ed4eb | [
"MIT"
] | null | null | null | README.md | PacktPublishing/Learning-C-8 | 89c8c7a334620f52c62f80f20ac6c0d6131ed4eb | [
"MIT"
] | 9 | 2019-04-11T05:06:51.000Z | 2020-02-25T13:16:13.000Z | ## [Get this title for $10 on Packt's Spring Sale](https://www.packt.com/B12346?utm_source=github&utm_medium=packt-github-repo&utm_campaign=spring_10_dollar_2022)
-----
For a limited period, all eBooks and Videos are only $10. All the practical content you need \- by developers, for developers
# Learn C# Programming
By [Marius Bancila](https://github.com/mariusbancila), [Ankit Sharma](https://github.com/AnkitSharma-007), [Raffaele Rialdi](https://github.com/raffaeler)
Published by [Packt](https://github.com/PacktPublishing)
# Structure and code conventions
The source code is located in the *src* folder.
For each chapter, there is a subfolder here with the name chapter_*number* (e.g. chapter_01, chapter_02, chapter_03, etc.)
Some chapters are split in multiple projects, one for each topic. For instance:
* chapter_01\chapter_01 - Hello World program
* chapter_02\chapter_02 - Data types, variables, constants, arrays, nullables, type conversions, operators
* chapter_08\chapter_08_01 - Delegates and Events
* chapter_08\chapter_08_02 - Anonymous methods
* chapter_08\chapter_08_03 - Tuples
* chapter_08\chapter_08_04 - Pattern matching
* chapter_08\chapter_08_05 - Regular expressions
* chapter_08\chapter_08_06 - Extension methods
The solution file *learning_charp8.sln* contains all these projects together. The projects are .NET Core console apps and target .NET Core 3.1.
| 46.533333 | 162 | 0.7851 | eng_Latn | 0.844683 |
43e016578c80b41c6a472d21e17b3824feed7f37 | 1,157 | md | Markdown | .github/ISSUE_TEMPLATE.md | ShivaniNR/alfresco-ng2-components | cf922dd46eae2097b6faa4a0bdc5601ec03953a7 | [
"ECL-2.0",
"Apache-2.0"
] | 313 | 2016-06-29T22:59:56.000Z | 2022-03-24T03:00:36.000Z | .github/ISSUE_TEMPLATE.md | ShivaniNR/alfresco-ng2-components | cf922dd46eae2097b6faa4a0bdc5601ec03953a7 | [
"ECL-2.0",
"Apache-2.0"
] | 4,710 | 2016-06-29T23:27:10.000Z | 2022-03-25T14:15:36.000Z | .github/ISSUE_TEMPLATE.md | ShivaniNR/alfresco-ng2-components | cf922dd46eae2097b6faa4a0bdc5601ec03953a7 | [
"ECL-2.0",
"Apache-2.0"
] | 308 | 2016-07-01T17:27:42.000Z | 2022-03-29T07:44:25.000Z | <!--
PLEASE FILL OUT THE FOLLOWING INFORMATION, THIS WILL HELP US TO RESOLVE YOUR PROBLEM FASTER.
REMEMBER FOR SUPPORT REQUESTS YOU CAN ALSO ASK ON OUR GITTER CHAT:
Please ask before on our gitter channel https://gitter.im/Alfresco/alfresco-ng2-components
-->
**Type of issue:** (check with "[x]")
> - [ ] New feature request
> - [ ] Bug
> - [ ] Support request
> - [ ] Documentation
**Current behaviour:**
<!-- Describe the current behaviour. -->
**Expected behavior:**
<!-- Describe the expected behaviour. -->
**Steps to reproduce the issue:**
<!-- Describe the steps to reproduce the issue. -->
**Component name and version:**
<!-- Example: ng2-alfresco-login. Check before if this issue is still present in the most recent version -->
**Browser and version:**
<!-- [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] -->
**Node version (for build issues):**
<!-- To check the version: node --version -->
**New feature request:**
<!-- Describe the feature, motivation and the concrete use case (only in case of new feature request) -->
| 33.057143 | 159 | 0.684529 | eng_Latn | 0.725152 |
43e07b44daf38bff4aad6bc4670b7989a56ef3f1 | 158 | md | Markdown | _posts/2018-04-03-AleeCorp-Network-Blog.md | AleeCorp/blog | 62ac502425306a956d1cfa3f092f2fb1a0e5fbb3 | [
"MIT"
] | null | null | null | _posts/2018-04-03-AleeCorp-Network-Blog.md | AleeCorp/blog | 62ac502425306a956d1cfa3f092f2fb1a0e5fbb3 | [
"MIT"
] | null | null | null | _posts/2018-04-03-AleeCorp-Network-Blog.md | AleeCorp/blog | 62ac502425306a956d1cfa3f092f2fb1a0e5fbb3 | [
"MIT"
] | null | null | null | ---
layout: post
title: AleeCorp Network Blog!
---
Hello and welcome to the ACN blog!
This blog is for updates and news on the ACN website, discord and etc. | 22.571429 | 70 | 0.734177 | eng_Latn | 0.969096 |
43e32d4f2d3d62d3cdc2b0dffa191ce410da67ab | 766 | md | Markdown | 2017/CVE-2017-8842.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 2,340 | 2022-02-10T21:04:40.000Z | 2022-03-31T14:42:58.000Z | 2017/CVE-2017-8842.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 19 | 2022-02-11T16:06:53.000Z | 2022-03-11T10:44:27.000Z | 2017/CVE-2017-8842.md | justinforbes/cve | 375c65312f55c34fc1a4858381315fe9431b0f16 | [
"MIT"
] | 280 | 2022-02-10T19:58:58.000Z | 2022-03-26T11:13:05.000Z | ### [CVE-2017-8842](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-8842)



### Description
The bufRead::get() function in libzpaq/libzpaq.h in liblrzip.so in lrzip 0.631 allows remote attackers to cause a denial of service (divide-by-zero error and application crash) via a crafted archive.
### POC
#### Reference
- https://blogs.gentoo.org/ago/2017/05/07/lrzip-divide-by-zero-in-bufreadget-libzpaq-h/
- https://github.com/ckolivas/lrzip/issues/66
#### Github
- https://github.com/andir/nixos-issue-db-example
| 40.315789 | 199 | 0.74282 | yue_Hant | 0.215231 |
43e36dcfcf60eeaad1ce92571136f07d93e18baf | 2,534 | md | Markdown | desktop-src/DirectShow/cmsgthread-putthreadmsg.md | vvavrychuk/win32 | da13d459791d115df883fa4dd348931d0902c2cc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/DirectShow/cmsgthread-putthreadmsg.md | vvavrychuk/win32 | da13d459791d115df883fa4dd348931d0902c2cc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/DirectShow/cmsgthread-putthreadmsg.md | vvavrychuk/win32 | da13d459791d115df883fa4dd348931d0902c2cc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
description: Queues a request for execution by the worker thread.
ms.assetid: a854f962-143d-4776-bf98-119d003867df
title: CMsgThread.PutThreadMsg method (Msgthrd.h)
ms.topic: reference
ms.date: 05/31/2018
topic_type:
- APIRef
- kbSyntax
api_name:
- CMsgThread.PutThreadMsg
api_type:
- COM
api_location:
- Strmbase.lib
- Strmbase.dll
- Strmbasd.lib
- Strmbasd.dll
---
# CMsgThread.PutThreadMsg method
Queues a request for execution by the worker thread.
## Syntax
```C++
void PutThreadMsg(
UINT uMsg,
DWORD dwMsgFlags,
LPVOID lpMsgParam,
CAMEvent *pEvent = NULL
);
```
## Parameters
<dl> <dt>
*uMsg*
</dt> <dd>
Request code.
</dd> <dt>
*dwMsgFlags*
</dt> <dd>
Optional flags parameter.
</dd> <dt>
*lpMsgParam*
</dt> <dd>
Optional pointer to a data block containing additional parameters or return values. Must be statically or heap-allocated and not automatic.
</dd> <dt>
*pEvent*
</dt> <dd>
Optional pointer to an event object to be signaled upon completion.
</dd> </dl>
## Return value
This method does not return a value.
## Remarks
This member function queues a request for execution by the worker thread. The parameters of this member function will be queued (in a [**CMsg**](cmsg.md) object) and passed to the [**CMsgThread::ThreadMessageProc**](cmsgthread-threadmessageproc.md) member function of the worker thread. This member function returns immediately after queuing the request and does not wait for the thread to fulfill the request. The **CMsgThread::ThreadMessageProc** member function of the derived class defines the four parameters.
This member function uses a multithread safe list, so multiple calls to this member function from different threads can be made safely.
## Requirements
| | |
|--------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Header<br/> | <dl> <dt>Msgthrd.h (include Streams.h)</dt> </dl> |
| Library<br/> | <dl> <dt>Strmbase.lib (retail builds); </dt> <dt>Strmbasd.lib (debug builds)</dt> </dl> |
## See also
<dl> <dt>
[**CMsgThread Class**](cmsgthread.md)
</dt> </dl>
| 23.90566 | 514 | 0.573007 | eng_Latn | 0.875832 |
43e44cc19b8ed4b1953881c466a8d57fe50a117b | 543 | md | Markdown | README.md | jdbrice/HadronicCocktail | 4064c1f4ddc7e127fe179b1184ec5634c5843e1c | [
"MIT"
] | 1 | 2018-11-16T20:01:31.000Z | 2018-11-16T20:01:31.000Z | README.md | jdbrice/HadronicCocktail | 4064c1f4ddc7e127fe179b1184ec5634c5843e1c | [
"MIT"
] | null | null | null | README.md | jdbrice/HadronicCocktail | 4064c1f4ddc7e127fe179b1184ec5634c5843e1c | [
"MIT"
] | null | null | null | # HadronicCocktail
General purpose code for generating hadronic cocktails
### Setup:
1. Install scons
2. Install JDB_LIB
if you are running at rice then set:
```
export JDB_LIB="/macstar/star1/jdb12/RooBarb/"
source /macstar/star2/jdb12/vendor/root5/pro/bin/thisroot.sh
export PATH="/macstar/star1/jdb12/vendor/conda2/bin:$PATH"
```
If you install it yourself just make sure you add the environment variable (`JDB_LIB`)to point ot it.
3. git clone git@github.com:jdbrice/HadronicCocktail.git
`cd HadronicCocktail`
`scons`
| 31.941176 | 103 | 0.751381 | eng_Latn | 0.612095 |
43e4a2c9315582faec841bb7735c41f395ce7e65 | 21,044 | md | Markdown | articles/azure-monitor/platform/design-logs-deployment.md | Erickwilts/azure-docs.nl-nl | a9c68e03c55ac26af75825a5407e3dd14a323ee1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/platform/design-logs-deployment.md | Erickwilts/azure-docs.nl-nl | a9c68e03c55ac26af75825a5407e3dd14a323ee1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-monitor/platform/design-logs-deployment.md | Erickwilts/azure-docs.nl-nl | a9c68e03c55ac26af75825a5407e3dd14a323ee1 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: De implementatie van uw Azure Monitor-logboeken ontwerpen | Microsoft Docs
description: In dit artikel worden de overwegingen en aanbevelingen beschreven voor klanten die de implementatie van een werk ruimte in Azure Monitor voorbereiden.
ms.subservice: ''
ms.topic: conceptual
author: bwren
ms.author: bwren
ms.date: 09/20/2019
ms.openlocfilehash: 269ecdf8998707ac375339edb4e11bb24380e27d
ms.sourcegitcommit: e46f9981626751f129926a2dae327a729228216e
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 01/08/2021
ms.locfileid: "98027703"
---
# <a name="designing-your-azure-monitor-logs-deployment"></a>De implementatie van uw Azure Monitor-logboeken ontwerpen
Azure Monitor worden [logboek](data-platform-logs.md) gegevens opgeslagen in een log Analytics-werk ruimte, een Azure-resource en een container waarin gegevens worden verzameld, geaggregeerd en fungeert als een administratieve grens. Hoewel u een of meer werk ruimten in uw Azure-abonnement kunt implementeren, zijn er verschillende overwegingen die u moet begrijpen om ervoor te zorgen dat uw eerste implementatie voldoet aan onze richt lijnen om u te voorzien van een kosteneffectieve, beheersbare en schaal bare implementatie die voldoet aan de behoeften van uw organisatie.
Gegevens in een werk ruimte zijn ingedeeld in tabellen, die elk verschillende soorten gegevens opslaan en een eigen unieke set eigenschappen hebben op basis van de resource waarmee de gegevens worden gegenereerd. De meeste gegevens bronnen schrijven naar hun eigen tabellen in een Log Analytics-werk ruimte.

Een Log Analytics-werk ruimte biedt:
* Een geografische locatie voor de opslag van gegevens.
* Gegevens isolatie door verschillende gebruikers toegangs rechten te verlenen volgens een van onze aanbevolen ontwerp strategieën.
* Bereik voor configuratie van instellingen, zoals de [prijs categorie](./manage-cost-storage.md#changing-pricing-tier), [retentie](./manage-cost-storage.md#change-the-data-retention-period)en het beperken van [gegevens](./manage-cost-storage.md#manage-your-maximum-daily-data-volume).
Werk ruimten worden gehost op fysieke clusters. Standaard worden deze clusters door het systeem gemaakt en beheerd. Klanten die meer dan 4 TB/dag hebben opgenomen, zullen hun eigen toegewezen clusters voor hun werk ruimten maken. Hiermee kunnen ze betere controle en een hoger opname niveau krijgen.
Dit artikel bevat een gedetailleerd overzicht van de overwegingen voor het ontwerpen en migreren, het overzicht van toegangs beheer en een uitleg van de ontwerp implementaties die wij voor uw IT-organisatie raden.
## <a name="important-considerations-for-an-access-control-strategy"></a>Belang rijke overwegingen voor een strategie voor toegangs beheer
Het identificeren van het aantal werk ruimten dat u nodig hebt, is van invloed op een of meer van de volgende vereisten:
* U bent een wereld wijd bedrijf en u hebt logboek gegevens nodig die in specifieke regio's zijn opgeslagen voor gegevens-soevereiniteit of nalevings redenen.
* U gebruikt Azure en wilt kosten voor de overdracht van uitgaande gegevens voorkomen door een werkruimte in dezelfde regio te hebben als de Azure-resource die deze beheert.
* U beheert meerdere afdelingen of bedrijfs groepen en u wilt dat elk van de eigen gegevens wordt weer gegeven, maar niet de gegevens van anderen. Er is ook geen zakelijke vereiste voor een geconsolideerde weer gave van meerdere afdelingen of bedrijfs groepen.
De IT-organisaties zijn tegenwoordig gemodelleerd volgens een gecentraliseerd, gedecentraliseerd of een in-tussen hybride van beide structuren. Als gevolg hiervan zijn de volgende implementatie modellen voor werk ruimten vaak gebruikt om toe te wijzen aan een van deze organisatie structuren:
* **Gecentraliseerd**: alle logboeken worden opgeslagen in een centrale werk ruimte en beheerd door één team, met Azure monitor een gedifferentieerde toegang per team bieden. In dit scenario is het eenvoudig om te beheren, te zoeken naar resources en logboeken te cross-correleren. De werk ruimte kan aanzienlijk toenemen, afhankelijk van de hoeveelheid gegevens die uit meerdere resources in uw abonnement is verzameld, met extra administratieve overhead voor het onderhouden van toegangs beheer voor verschillende gebruikers. Dit model wordt aangeduid als hub en spoke.
* **Gedecentraliseerd**: elk team heeft hun eigen werk ruimte die is gemaakt in een resource groep die het eigendom is van en het beheer, en logboek gegevens worden gescheiden per resource. In dit scenario kan de werk ruimte veilig worden bewaard en kan toegangs beheer consistent zijn met toegang tot bronnen, maar het is lastig om logboeken te intercorreleren. Gebruikers die een brede weer gave van veel resources nodig hebben, kunnen de gegevens niet op een zinvolle manier analyseren.
* **Hybride**: vereisten voor naleving van beveiligings controle zijn dit scenario verder ingewik kelder omdat veel organisaties beide implementatie modellen parallel implementeren. Dit resulteert doorgaans in een complexe, dure en moeilijk te onderhouden configuratie met hiaten in de logboeken dekking.
Wanneer u de Log Analytics-agents gebruikt om gegevens te verzamelen, moet u het volgende weten om de implementatie van de agent te plannen:
* Als u gegevens van Windows-agents wilt verzamelen, kunt u [elke agent configureren om te rapporteren aan een of meer werk ruimten](./agent-windows.md), zelfs wanneer deze wordt gerapporteerd aan een System Center Operations Manager-beheer groep. De Windows-agent kan Maxi maal vier werk ruimten rapporteren.
* De Linux-agent biedt geen ondersteuning voor multi-multihoming en kan slechts aan één werk ruimte rapporteren.
Als u System Center Operations Manager 2012 R2 of hoger gebruikt:
* Elke Operations Manager-beheer groep kan worden [verbonden met slechts één werk ruimte](./om-agents.md).
* Linux-computers die rapporteren aan een beheer groep moeten worden geconfigureerd om rechtstreeks te rapporteren aan een Log Analytics-werk ruimte. Als uw Linux-computers al rechtstreeks aan een werk ruimte rapporteren en u deze wilt controleren met Operations Manager, voert u de volgende stappen uit om [aan een Operations Manager beheer groep te rapporteren](agent-manage.md#configure-agent-to-report-to-an-operations-manager-management-group).
* U kunt de Log Analytics Windows-agent op de Windows-computer installeren en deze rapporteren aan zowel Operations Manager geïntegreerd met een werk ruimte als een andere werk ruimte.
## <a name="access-control-overview"></a>Overzicht van toegangsbeheer
Met op rollen gebaseerd toegangs beheer (Azure RBAC) van Azure kunt u gebruikers en groepen alleen de hoeveelheid toegang verlenen die nodig zijn om te werken met bewakings gegevens in een werk ruimte. Op die manier kunt u met uw IT-bedrijfs model uitlijnen met één werk ruimte om verzamelde gegevens op te slaan die zijn ingeschakeld voor al uw resources. U kunt bijvoorbeeld toegang verlenen aan uw team dat verantwoordelijk is voor infrastructuur services die worden gehost op Azure virtual machines (Vm's), en als gevolg hiervan hebben ze alleen toegang tot de logboeken die door de Vm's worden gegenereerd. Dit is het nieuwe bron-context logboek model. De basis voor dit model geldt voor elke logboek record die wordt gegenereerd door een Azure-resource en wordt automatisch aan deze resource gekoppeld. Logboeken worden doorgestuurd naar een centrale werk ruimte die de bereiking en Azure RBAC op basis van de bronnen respecteert.
De gegevens waartoe een gebruiker toegang heeft, wordt bepaald door een combi natie van factoren die in de volgende tabel worden weer gegeven. Elk wordt beschreven in de volgende secties.
| Factor | Beschrijving |
|:---|:---|
| [Toegangsmodus](#access-mode) | De methode die de gebruiker gebruikt voor toegang tot de werk ruimte. Hiermee definieert u het bereik van de beschik bare gegevens en de toegangs beheer modus die wordt toegepast. |
| [Toegangsbeheermodus](#access-control-mode) | Instelling in de werk ruimte die definieert of machtigingen worden toegepast op het niveau van de werk ruimte of de resource. |
| [Machtigingen](manage-access.md) | Machtigingen die worden toegepast op afzonderlijke of groepen gebruikers voor de werk ruimte of resource. Hiermee definieert u welke gegevens de gebruiker toegang heeft. |
| [Azure RBAC voor tabel niveau](manage-access.md#table-level-azure-rbac) | Optionele gedetailleerde machtigingen die van toepassing zijn op alle gebruikers, ongeacht de toegangs modus of de toegangs beheer modus. Hiermee definieert u welke gegevens typen een gebruiker kan openen. |
## <a name="access-mode"></a>Toegangsmodus
De *toegangs modus* verwijst naar hoe een gebruiker toegang heeft tot een log Analytics-werk ruimte en definieert het bereik van de gegevens waartoe ze toegang hebben.
Gebruikers hebben twee opties om toegang tot de gegevens te krijgen:
* **Werk ruimte-context**: u kunt alle logboeken weer geven in de werk ruimte waarvoor u machtigingen hebt. Query's in deze modus zijn gericht op alle gegevens in alle tabellen in de werk ruimte. Dit is de toegangs modus die wordt gebruikt wanneer logboeken worden geopend met de werk ruimte als het bereik, bijvoorbeeld wanneer u **Logboeken** selecteert in het **Azure monitor** menu in de Azure Portal.

* **Resource-context**: wanneer u de werk ruimte voor een bepaalde resource, resource groep of abonnement opent, bijvoorbeeld wanneer u **Logboeken** selecteert in een Resource menu in het Azure Portal, kunt u Logboeken voor alleen resources weer geven in alle tabellen waartoe u toegang hebt. Query's in deze modus zijn alleen van toepassing op gegevens die aan die resource zijn gekoppeld. Met deze modus kunt u ook gedetailleerde Azure RBAC inschakelen.

> [!NOTE]
> Logboeken zijn alleen beschikbaar voor resource context query's als ze goed zijn gekoppeld aan de betreffende resource. Momenteel hebben de volgende resources beperkingen:
> - Computers buiten Azure
> - Service Fabric
> - Application Insights
>
> U kunt testen of Logboeken goed zijn gekoppeld aan hun resource door een query uit te voeren en de records te controleren waarin u bent geïnteresseerd. Als de juiste resource-ID wordt weer gegeven in de eigenschap [_ResourceId](./log-standard-columns.md#_resourceid) , zijn de gegevens beschikbaar voor resource gerichte query's.
Azure Monitor bepaalt automatisch de juiste modus, afhankelijk van de context waarin u de zoek opdracht in Logboeken uitvoert. Het bereik wordt altijd weer gegeven in de linkerbovenhoek van Log Analytics.
### <a name="comparing-access-modes"></a>De toegangs modi vergelijken
De volgende tabel bevat een overzicht van de toegangs modi:
| Probleem | Werkruimtecontext | Resourcecontext |
|:---|:---|:---|
| Voor wie is elk model bedoeld? | Centraal beheer. Beheerders die gegevens verzameling en gebruikers moeten configureren die toegang nodig hebben tot een groot aantal verschillende bronnen. Dit is ook vereist voor gebruikers die toegang moeten hebben tot logboeken voor bronnen buiten Azure. | Toepassings teams. Beheerders van Azure-resources die worden bewaakt. |
| Wat heeft een gebruiker nodig om logboeken weer te geven? | Machtigingen voor de werk ruimte. Zie **machtigingen voor werk ruimten** in [toegang beheren via werkruimte machtigingen](manage-access.md#manage-access-using-workspace-permissions). | Lees toegang tot de resource. Zie **resource machtigingen** in [toegang beheren met Azure-machtigingen](manage-access.md#manage-access-using-azure-permissions). Machtigingen kunnen worden overgenomen (bijvoorbeeld van de container resource groep) of rechtstreeks worden toegewezen aan de resource. De machtigingen voor de logboeken voor de resource worden automatisch toegewezen. |
| Wat is het bereik van machtigingen? | Werk ruimte. Gebruikers met toegang tot de werk ruimte kunnen alle logboeken in de werk ruimte opvragen van tabellen waarvoor ze machtigingen hebben. Zie [Table Access Control](manage-access.md#table-level-azure-rbac) | Azure-resource. De gebruiker kan Logboeken zoeken voor specifieke resources, resource groepen of abonnementen waartoe ze toegang hebben vanuit een wille keurige werk ruimte, maar geen logboeken kunnen doorzoeken voor andere resources. |
| Hoe kan de gebruiker toegang krijgen tot logboeken? | <ul><li>Start **Logboeken** vanuit **Azure monitor** menu.</li></ul> <ul><li>Start **Logboeken** vanuit **log Analytics werk ruimten**.</li></ul> <ul><li>Vanuit Azure Monitor [werkmappen](../visualizations.md#workbooks).</li></ul> | <ul><li>**Logboeken** starten vanuit het menu voor de Azure-resource</li></ul> <ul><li>Start **Logboeken** vanuit **Azure monitor** menu.</li></ul> <ul><li>Start **Logboeken** vanuit **log Analytics werk ruimten**.</li></ul> <ul><li>Vanuit Azure Monitor [werkmappen](../visualizations.md#workbooks).</li></ul> |
## <a name="access-control-mode"></a>Toegangsbeheermodus
De *Access Control-modus* is een instelling voor elke werk ruimte die definieert hoe machtigingen voor de werk ruimte worden bepaald.
* **Machtigingen voor de werk ruimte vereisen**: in deze besturings modus is granulaire Azure RBAC niet toegestaan. Een gebruiker heeft alleen toegang tot de werk ruimte als deze machtigingen voor de werk ruimte of specifieke tabellen hebben.
Als een gebruiker toegang heeft tot de werk ruimte na de context modus van de werk ruimte, hebben ze toegang tot alle gegevens in een tabel waaraan toegang is verleend. Als een gebruiker toegang heeft tot de werk ruimte die volgt op de resource-context modus, hebben ze alleen toegang tot gegevens voor die resource in een tabel waaraan ze toegang hebben verleend.
Dit is de standaard instelling voor alle werk ruimten die zijn gemaakt vóór 2019 maart.
* **Resource-of werkruimte machtigingen gebruiken**: deze bedienings modus biedt granulaire Azure RBAC. Gebruikers kunnen toegang krijgen tot gegevens die zijn gekoppeld aan resources die ze kunnen weer geven door de Azure-machtiging toe te wijzen `read` .
Wanneer een gebruiker toegang heeft tot de werk ruimte in de werk ruimte-context modus, zijn werkruimte machtigingen van toepassing. Wanneer een gebruiker de werk ruimte in de resource context modus opent, worden alleen resource machtigingen gecontroleerd en worden de machtigingen voor de werk ruimte genegeerd. Schakel Azure RBAC voor een gebruiker in door ze uit de werkruimte machtigingen te verwijderen en de machtigingen van de resource te herkennen.
Dit is de standaard instelling voor alle werk ruimten die na maart 2019 zijn gemaakt.
> [!NOTE]
> Als een gebruiker alleen resource machtigingen heeft voor de werk ruimte, hebben ze alleen toegang tot de werk ruimte via de resource-context modus als de toegangs modus voor de werk ruimte is ingesteld op het **gebruik van resource-of werkruimte machtigingen**.
Zie de [modus toegangs beheer configureren](manage-access.md#configure-access-control-mode)voor meer informatie over het wijzigen van de toegangs beheer modus in de portal, met Power shell of het gebruik van een resource manager-sjabloon.
## <a name="scale-and-ingestion-volume-rate-limit"></a>Frequentie limiet voor schaal en opname volume
Azure Monitor is een grootschalige gegevens service waarmee duizenden klanten elke maand PETA bytes van gegevens kunnen verzenden in een groei tempo. Werk ruimten zijn niet beperkt in hun opslag ruimte en kunnen worden uitgebreid tot PETA bytes aan gegevens. Het is niet nodig om werk ruimten te splitsen als gevolg van schalen.
Voor het beveiligen en isoleren van Azure Monitor klanten en de back-end-infra structuur, is er een standaard limiet voor opname frequentie die is ontworpen om te beschermen tegen pieken en flooden. De standaard frequentie limiet is ongeveer **6 GB/minuut** en is ontworpen voor het inschakelen van normale opname. Zie [Azure Monitor-service limieten](../service-limits.md#data-ingestion-volume-rate)voor meer informatie over het meten van opname volume limieten.
Klanten die minder dan 4 TB per dag opnemen, voldoen meestal niet aan deze limieten. Klanten die hogere volumes opnemen of die pieken hebben als onderdeel van hun normale bedrijfs voering, moeten overwegen over te stappen op [toegewezen clusters](../log-query/logs-dedicated-clusters.md) waarbij de limiet voor opname snelheden kan worden verhoogd.
Wanneer de limiet voor opname frequentie wordt geactiveerd of 80% van de drempel waarde wordt bereikt, wordt er een gebeurtenis toegevoegd aan de *bewerkings* tabel in uw werk ruimte. Het is raadzaam om het te controleren en een waarschuwing te maken. Bekijk meer informatie over de frequentie van het [volume voor gegevens opname](../service-limits.md#data-ingestion-volume-rate).
## <a name="recommendations"></a>Aanbevelingen

Dit scenario heeft betrekking op één werkruimte ontwerp in het abonnement van uw IT-organisatie dat niet wordt beperkt door de gegevens soevereiniteit of de naleving van de regelgeving, of moet worden toegewezen aan de regio's waar uw resources in worden geïmplementeerd. Zo kunnen de beveiligings-en IT-beheerders teams van uw organisatie gebruikmaken van de verbeterde integratie met Azure Access Management en meer beveiligd toegangs beheer.
Alle resources, bewakings oplossingen en inzichten zoals Application Insights en Azure Monitor voor VM's, ondersteunende infra structuur en toepassingen die worden onderhouden door de verschillende teams, zijn geconfigureerd om hun verzamelde logboek gegevens door te sturen naar de gecentraliseerde gedeelde werk ruimte van de IT-organisatie. Gebruikers van elk team krijgen toegang tot logboeken voor bronnen waartoe ze toegang hebben gekregen.
Wanneer u uw werkruimte architectuur hebt geïmplementeerd, kunt u dit afdwingen op Azure-resources met [Azure Policy](../../governance/policy/overview.md). Het biedt een manier om beleid te definiëren en te controleren op naleving van uw Azure-resources, zodat ze al hun resource logboeken naar een bepaalde werk ruimte verzenden. Met virtuele machines of virtuele-machine schaal sets van Azure kunt u bijvoorbeeld bestaande beleids regels gebruiken om de naleving van de werk ruimte te evalueren en resultaten te rapporteren, of om te herstellen als deze niet compatibel is.
## <a name="workspace-consolidation-migration-strategy"></a>Strategie voor consolidatie migratie van werk ruimte
Voor klanten die al meerdere werk ruimten hebben geïmplementeerd en geïnteresseerd zijn in het samen voegen van het model voor toegang tot de resource context, raden we u aan een incrementele benadering te gebruiken om naar het aanbevolen toegangs model te migreren. u hoeft dit niet snel of agressief te bereiken. Na een gefaseerde benadering van het plannen, migreren, valideren en buiten gebruik stellen van een redelijke tijd lijn, kunt u ongeplande incidenten of onverwachte gevolgen voor uw Cloud bewerkingen vermijden. Als u geen gegevens retentie beleid voor naleving of zakelijke redenen hebt, moet u de juiste tijds duur bepalen om gegevens te bewaren in de werk ruimte die u migreert tijdens het proces. Terwijl u resources opnieuw configureert om te rapporteren aan de gedeelde werk ruimte, kunt u de gegevens in de oorspronkelijke werk ruimte nog steeds analyseren als dat nodig is. Wanneer de migratie is voltooid en u de gegevens in de oorspronkelijke werk ruimte voor het einde van de Bewaar periode wilt behouden, moet u deze niet verwijderen.
Houd bij het plannen van de migratie naar dit model rekening met het volgende:
* Begrijp welke industriële voor schriften en interne beleids regels met betrekking tot het bewaren van gegevens u moet naleven.
* Zorg ervoor dat uw toepassings teams kunnen werken in de bestaande resource-context functionaliteit.
* Bepaal welke toegang wordt verleend aan resources voor uw toepassings teams en test in een ontwikkel omgeving voordat u de productie implementeert.
* Configureer de werk ruimte om **resource-of werkruimte machtigingen te gebruiken**.
* De machtiging Application teams verwijderen om de werk ruimte te lezen en er query's op uit te zoeken.
* Schakel alle bewakings oplossingen, inzichten zoals Azure Monitor voor containers en/of Azure Monitor voor VM's, uw Automation-account (s) en beheer oplossingen, zoals Updatebeheer, start/stop Vm's, enzovoort, in die zijn geïmplementeerd in de oorspronkelijke werk ruimte.
## <a name="next-steps"></a>Volgende stappen
Als u de beveiligings machtigingen en-besturings elementen wilt implementeren die worden aanbevolen in deze hand leiding, raadpleegt u de [toegang tot logboeken beheren](manage-access.md).
| 126.011976 | 1,060 | 0.807784 | nld_Latn | 0.999893 |
43e4dc18e595f0361337ae9eddb29c0c0b2f53a8 | 1,808 | md | Markdown | _posts/2020-02-11-1.post.md | LazyyyDev/LazyyyDev.github.io | 752a7df7a6ac0e83111c5bdac60af8d4e1c82323 | [
"MIT"
] | null | null | null | _posts/2020-02-11-1.post.md | LazyyyDev/LazyyyDev.github.io | 752a7df7a6ac0e83111c5bdac60af8d4e1c82323 | [
"MIT"
] | null | null | null | _posts/2020-02-11-1.post.md | LazyyyDev/LazyyyDev.github.io | 752a7df7a6ac0e83111c5bdac60af8d4e1c82323 | [
"MIT"
] | null | null | null | ---
layout: post
title: "학교를 벗어난 나의 회고"
tags: Blog
comments: true
last_modified_at:
---
드디어 지긋지긋했던 대학 생활이 끝났다. 그리고 취업을 해서 행복한 삶을 살고 싶었다.
는 무슨 ... 4학년 2학기 25세의 나이로 허세와 거만함으로 가득찬 내 모습이 몇 개월 지나지도 않았는데 너무 부끄럽다.
물론 그 허세와 거만함이 자신감의 역할도 했었는지 졸업도 하지 않은 상태로 NHN와 SK C&C의 면접까지 가게 해준 것이 아닐까. 물론 최종 불합격도 허세와 거만함으로 마무리되었지만.
이의 나비효과랄까 그 뒤에 모든 회사는 초장박살이 나버렸다.
세상의 벽이 너무 높다는 변명으로 그냥 누워있고 싶었고, 술만 마시고 살았다.
그리고 마침내, `주제 파악`이란 것을 하게되었고 그냥 중소 기업을 제외하고 어느 곳이든 내서 취업을 하려고 무작정 공고가 뜨는 곳에 지원했다.
그러다보니 우연히 어느 회사에 서류, 필기를 붙고 1차 면접까지 가서...!!!!
개판치고 나와버렸다. 질문을 많이 하신 것도 아니었지만 정말 이때까지 본 면접 중에 가장 생각이라는 것을 많이 하게된 면접이었다.
정말 면접을 하는 동안 그리고 면접이 끝나고 정말 이런 생각을 며칠동안 했다.
> 컴퓨터공학과에 발을 들이고 나름 열심히 했다고 생각했는데, 그래서 4년 동안 내가 뭘 했을까?
> C/C++, JAVA, Python, AI, 대내외 공모전? 그래서 여기서 내가 이러한 언어, AI, 프로젝트를 진행한 동안 쓴 기술이 뭔데?
> 그냥저냥 UI 꾸미기? 그냥 서버에 DB 깔고 테이블 만들고 PHP로 불러오기?, 웹 크롤링?, MINST CNN 신경망 코드 수정해서 Classification 하기? API 통신으로 parsing?
> 이걸 기술이라고 할 수 있는건가? google에서 검색하면 다나오는 흔해빠진 것들 아닌가?
> 뭐라도 어떤 데이터 흐름에 있어서 효율적인 처리를 위해 다양한 시도를 해본 적이 있나?
> 객체지향에 대해서 주구장창 배워놓고 정말 제대로된 객체지향적 프로그래밍을 한 적이 있나?
> 한 가지의 분야에서라도 능동적으로 혹은 유연하게 개발할 수 있는 실력이 있나?
이러한 자기비판/비하에 내 생각들이 하나로 수렴됬다.
> '주제 파악은 무슨, 내가 개발자로서 세상에 뛰어들 자격이나 실력이 있었나..'
그리고 오늘 저엉말 운이 좋게도 세상에 뛰어들 기회를 부여받았다.

정말 6살 꼬마아이가 와서 면접을 봐도 나보다는 잘했을 정도로 면접을 망했다고 생각했는데, 뭐 때문인지 1차 면접(기술 면접)에 합격하게 되었다.
물론 마지막 2차 면접(임원 면접)이 있지만 이렇게 다시 한 번 나를 되돌아볼 수 있는 계기(다르게 말하면 `주제 파악`)가 생겨서 좋다.
어떠한 결과가 있던간에 나는 아직 한참 부족하고 또 부족하기 때문에 자기개발을 멈추지 않으려고 한다.
>합격을 해서 업무를 배우는 동안에도, 술 약속이 있더라고, 하루종일 이불 속에 있고 싶더라도, 불합격에 가슴이 쓰려도 그 사이에 아주 작은 시간에라도 자기개발을 조금씩이라도 하려고 한다.
mobile : android(java, kotlin), react native
Front-End : HTML, CSS와 LESS, Javascript, jQuery, bootstrap, react.js
Back-End : JSP, Sevlet, Spring
그냥 저냥 드디어 갓 세상 밖으로 나온 어린 애의 투정이자 열정을 기록해본다. | 39.304348 | 117 | 0.688053 | kor_Hang | 1.00001 |
43e52257a296043597aa26d5d98f25aa9957623b | 1,352 | md | Markdown | README.md | paldepind/purescript-refs | 3d515bc461acb950f3c8bac7b8def1d6e65b3f6b | [
"BSD-3-Clause"
] | null | null | null | README.md | paldepind/purescript-refs | 3d515bc461acb950f3c8bac7b8def1d6e65b3f6b | [
"BSD-3-Clause"
] | 1 | 2018-09-28T06:36:17.000Z | 2018-09-28T08:19:52.000Z | README.md | pure-c/purescript-refs | 50749fdbcc227959561ab89bd4d3b511aa0b55db | [
"BSD-3-Clause"
] | null | null | null | # purescript-refs
[](https://github.com/purescript/purescript-refs/releases)
[](https://travis-ci.org/purescript/purescript-refs)
This module defines functions for working with mutable value references.
_Note_: [`Control.Monad.ST`](https://pursuit.purescript.org/packages/purescript-st/4.0.0/docs/Control.Monad.ST) provides a _safe_ alternative to `Ref` when mutation is restricted to a local scope.
## Installation
```
bower install purescript-refs
```
## Example
```purs
main = do
-- initialize a new Ref with the value 0
ref <- Ref.new 0
-- read from it and check it
curr1 <- Ref.read ref
assertEqual { actual: curr1, expected: 0 }
-- write over the ref with 1
Ref.write 1 ref
-- now it is 1 when we read out the value
curr2 <- Ref.read ref
assertEqual { actual: curr2, expected: 1 }
-- modify it by adding 1 to the current state
Ref.modify_ (\s -> s + 1) ref
-- now it is 2 when we read out the value
curr3 <- Ref.read ref
assertEqual { actual: curr3, expected: 2 }
```
See [tests](test/Main.purs) to see usages.
## Documentation
Module documentation is [published on Pursuit](http://pursuit.purescript.org/packages/purescript-refs).
| 28.765957 | 196 | 0.719675 | eng_Latn | 0.947329 |
43e551e7ae699e20fa845a16767ad85c127ca45d | 4,195 | md | Markdown | ce/basics/search-and-find-header.md | WindtechGlobal/dynamics-365-customer-engagement | 09a3de9589e08779b5bc34b2721bdbd487afcda0 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-01-30T18:36:48.000Z | 2019-01-30T18:36:51.000Z | ce/basics/search-and-find-header.md | WindtechGlobal/dynamics-365-customer-engagement | 09a3de9589e08779b5bc34b2721bdbd487afcda0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ce/basics/search-and-find-header.md | WindtechGlobal/dynamics-365-customer-engagement | 09a3de9589e08779b5bc34b2721bdbd487afcda0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Search and Find in Dynamics 365 for Customer Engagement apps| MicrosoftDocs"
ms.custom:
ms.date: 08/24/2018
ms.reviewer:
ms.service: crm-online
ms.suite:
ms.tgt_pltfrm:
ms.topic: article
applies_to:
- Dynamics 365 for Customer Engagement apps
- Dynamics 365 for Customer Engagement Version 9.x
ms.assetid: d0634607-b17f-4f33-aa15-e26ebed441f7
caps.latest.revision: 2
ms.author: t-mijosh
manager: ryjones
search.audienceType:
- enduser
search.app:
- D365CE
---
# Search and Find in [!INCLUDE[pn_dynamics_crm](../includes/pn-dynamics-crm.md)]
[!INCLUDE[cc-applies-to-update-9-0-0](../includes/cc_applies_to_update_9_0_0.md)]
## Compare [!INCLUDE[pn_dynamics_crm](../includes/pn-dynamics-crm.md)] searches
There are four ways to find data in [!INCLUDE[pn_dynamics_crm](../includes/pn-dynamics-crm.md)]:
- Relevance Search
- Full-text Quick Find (single-entity or multi-entity)
- Quick Find (single-entity or multi-entity)
- Advanced Find
> [!NOTE]
> Multi-entity Quick Find is also called Categorized Search.
The following table provides a brief comparison of the four available options.
|Functionality|[Relevance Search](../basics/relevance-search-results.md)|[Full-text Quick Find](../basics/search-records.md)|[Quick Find](../basics/search-records.md)|[Advanced Find](../basics/save-advanced-find-search.md)|
|-------------------|----------------------|---------------------------|----------------|-------------------|
|Enabled by default?|No. An administrator must manually enable it.|No. An administrator must manually enable it.|Yes|Yes|
|Single-entity search scope|Not available in an entity grid. You can filter the search results by an entity on the results page.|Available in an entity grid.|Available in an entity grid.|Available in an entity grid.|
|Multi-entity search scope|There is no maximum limit on the number of entities you can search. **Note:** While there is no maximum limit on the number of entities you can search, the Record Type filter shows data for only 10 entities.|Searches up to 10 entities, grouped by an entity.|Searches up to 10 entities, grouped by an entity.|Multi-entity search not available.|
|Search behavior|Finds matches to any word in the search term in any field in the entity.|Finds matches to all words in the search term in one field in an entity; however, the words can be matched in any order in the field.|Finds matches as in a SQL query with “Like” clauses. You have to use the wildcard characters in the search term to search within a string. All matches must be an exact match to the search term.|Query builder where you can define search criteria for the selected record type. Can also be used to prepare data for export to Office Excel so that you analyze, summarize,or aggregate data, or create PivotTables to view your data from different perspectives.|
|Searchable fields|Text fields like Single Line of Text, Multiple Lines of Text, Lookups, and Option Sets. Doesn't support searching in fields of Numeric or Date data type.|All searchable fields.|All searchable fields.|All searchable fields.|
|Search results|Returns the search results in order of their relevance, in a single list.|For single-entity, returns the search results in an entity grid. For multi-entity, returns the search results grouped by categories, such as accounts, contacts, or leads.|For single-entity, returns the search results in an entity grid. For multi-entity, returns the search results grouped by categories, such as accounts, contacts, or leads.|Returns search results of the selected record type with the columns you have specified, in the sort order you have configured.|
|Wildcards (*)|Trailing wildcard supported for word completion.|Leading wildcard supported. Trailing wildcard added by default.|Leading wildcard supported. Trailing wildcard added by default.|Not supported.|
### See also
[Search for records - User Guide (Dynamics 365 for phones and tablets)](../mobile-app/dynamics-365-phones-tablets-users-guide.md)
[Configure Relevance Search to improve search results and performance (Administrator Guide)](../admin/configure-relevance-search-organization.md)
| 72.327586 | 680 | 0.752563 | eng_Latn | 0.982162 |
43e574586f5e97aa47a3299523eda3fa4eadd108 | 466 | md | Markdown | README.md | parzibyte/firmware-upgrade-es | aae2afe81fa96acc112651f955058811df190974 | [
"MIT"
] | null | null | null | README.md | parzibyte/firmware-upgrade-es | aae2afe81fa96acc112651f955058811df190974 | [
"MIT"
] | null | null | null | README.md | parzibyte/firmware-upgrade-es | aae2afe81fa96acc112651f955058811df190974 | [
"MIT"
] | null | null | null | # firmware-upgrade-es
Una página de phishing de actualización de firmware en español, para wifiphisher
# Instalación
Clonamos el repositorio dentro de *wifiphisher/wifiphisher/data/phishing-pages/* con:
`git clone https://github.com/parzibyte/firmware-upgrade-es`
Luego iniciamos la herramienta e indicamos el directorio:
`sudo wifiphisher --phishing-pages-directory ~/wifiphisher/wifiphisher/data/phishing-pages/`
Más información en https://parzibyte.me/blog/ | 35.846154 | 92 | 0.806867 | spa_Latn | 0.700783 |
43e58264831ef328c6663a7c8328f978fae93773 | 22,908 | md | Markdown | articles/security/fundamentals/encryption-overview.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-03-12T23:37:16.000Z | 2021-03-12T23:37:16.000Z | articles/security/fundamentals/encryption-overview.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/security/fundamentals/encryption-overview.md | ZetaPR/azure-docs.es-es | 0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Introducción al cifrado de Azure | Microsoft Docs
description: Más información acerca de las opciones de cifrado en Azure. Consulte la información acerca del cifrado en reposo, el cifrado en paquetes piloto y la administración de claves con Azure Key Vault.
services: security
author: msmbaldwin
ms.assetid: ''
ms.service: security
ms.subservice: security-fundamentals
ms.topic: article
ms.date: 10/26/2021
ms.author: mbaldwin
ms.openlocfilehash: acfa59c13f0f9429135ea7f2218ca8fec121c8a3
ms.sourcegitcommit: 702df701fff4ec6cc39134aa607d023c766adec3
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 11/03/2021
ms.locfileid: "131428318"
---
# <a name="azure-encryption-overview"></a>Información general del cifrado de Azure
Este artículo proporciona información general sobre cómo se usa el cifrado en Microsoft Azure. Trata las áreas principales de cifrado, incluidos el cifrado en reposo, el cifrado en paquetes piloto y la administración de claves con Azure Key Vault. Cada sección incluye vínculos para obtener información más detallada.
## <a name="encryption-of-data-at-rest"></a>Cifrado de datos en reposo
Los datos en reposo incluyen información que se encuentra en el almacenamiento persistente en un medio físico, en cualquier formato digital. El medio puede incluir archivos de medios ópticos o magnéticos, datos archivados y copias de seguridad de datos. Microsoft Azure ofrece una variedad de soluciones de almacenamiento de datos para satisfacer diferentes necesidades, incluidos el almacenamiento de tablas, blobs y archivos, así como el almacenamiento en disco. Microsoft también proporciona cifrado para proteger [Azure SQL Database](../../azure-sql/database/sql-database-paas-overview.md), [Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md) y Azure Data Lake.
El cifrado de datos en reposo está disponible para los servicios a través de los modelos en la nube de software como servicio (SaaS), plataforma como servicio (PaaS) e infraestructura como servicio (IaaS). En este artículo se resumen las opciones de cifrado de Azure y se proporcionan recursos para ayudarle a usarlas.
Para más información detallada sobre cómo se cifran los datos en reposo en Azure, vea [Cifrado en reposo de datos de Azure](encryption-atrest.md).
## <a name="azure-encryption-models"></a>Modelos de cifrado de Azure
Azure admite distintos modelos de cifrado, incluido el cifrado de servidor que usa claves administradas por el servicio, claves administradas por el cliente en Key Vault o las claves administradas por el cliente en el hardware controlado por el cliente. Con el cifrado de cliente, puede administrar y almacenar claves de forma local o en otra ubicación segura.
### <a name="client-side-encryption"></a>Cifrado de cliente
El cifrado de cliente se realiza fuera de Azure. Incluye:
- Datos cifrados por una aplicación que se está ejecutando en el centro de datos del cliente o por una aplicación de servicio.
- Datos que ya están cifrados cuando Azure los recibe.
Con el cifrado de cliente, los proveedores de servicios en la nube no tienen acceso a las claves de cifrado y no pueden descifrar estos datos. Mantenga un control completo de las claves.
### <a name="server-side-encryption"></a>Cifrado del servidor
Los tres modelos de cifrado del servidor ofrecen características de administración de claves diferentes, que se pueden elegir según sus requisitos:
- **Claves administradas del servicio**: proporcionan una combinación de control y comodidad con una sobrecarga reducida.
- **Claves administradas por el cliente**: le permiten controlar las claves, con compatibilidad con Bring Your Own Keys (BYOK), o generar claves nuevas.
- **Claves administradas del servicio en el hardware controlado por el cliente**: le permiten administrar las claves en el repositorio de su propiedad, fuera del control de Microsoft. Esta característica se denomina Host Your Own Key (HYOK). Sin embargo, la configuración es compleja y la mayoría de los servicios de Azure no son compatibles con este modelo.
### <a name="azure-disk-encryption"></a>Azure Disk Encryption
Para proteger las máquinas virtuales Windows y Linux, puede usar [Azure Disk Encryption](./azure-disk-encryption-vms-vmss.md), que usa la tecnología de [BitLocker de Windows](/previous-versions/windows/it-pro/windows-vista/cc766295(v=ws.10)) y [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) de Linux para proteger los discos del sistema operativo y los discos de datos con el cifrado de volumen completo.
Las claves de cifrado y secretos se protegen en la [suscripción a Azure Key Vault](../../key-vault/general/overview.md). El servicio Azure Backup permite hacer copias de seguridad y restauraciones de máquinas virtuales (VM) cifradas que usan la configuración de clave de cifrado de claves (KEK).
### <a name="azure-storage-service-encryption"></a>Cifrado del servicio Azure Storage
Los datos en reposo de Azure Blob Storage y los recursos compartidos de archivos de Azure se pueden cifrar en escenarios de cliente y servidor.
[Azure Storage Service Encryption (SSE)](../../storage/common/storage-service-encryption.md) puede cifrar automáticamente los datos antes de que se almacenen y los descifra automáticamente cuando los recupera. Se trata de un proceso totalmente transparente para el usuario. Storage Service Encryption usa un [Estándar de cifrado avanzado (AES)](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) de 256 bits, que es uno de los cifrados en bloque más seguros que existen. AES controla el cifrado, descifrado y administración de claves de un modo transparente.
### <a name="client-side-encryption-of-azure-blobs"></a>Cifrado de cliente de blobs de Azure
Puede realizar el cifrado de cliente de blobs de Azure de varias maneras.
Puede usar la biblioteca cliente de Azure Storage para el paquete NuGet de .NET para cifrar los datos dentro de las aplicaciones cliente antes de cargarlos en Azure Storage.
Para obtener más información acerca de la biblioteca cliente de Azure Storage para el paquete NuGet. de NET y descargarla, vea [Microsoft Azure Storage 8.3.0](https://www.nuget.org/packages/WindowsAzure.Storage).
Cuando se usa el cifrado de cliente con Key Vault, los datos se cifran con una clave de cifrado de contenido (CEK) simétrica única generada por el SDK de cliente de Azure Storage. La CEK se cifra mediante una clave de cifrado de claves (KEK), que puede ser una clave simétrica o un par de claves asimétricas. Puede administrarla de forma local o almacenarla en Key Vault. A continuación, se cargan los datos cifrados en Azure Storage.
Para obtener más información acerca del cifrado de cliente con Key Vault e iniciar las instrucciones sobre procedimientos, vea [Tutorial: Cifrado y descifrado de blobs en Azure Storage con Key Vault](../../storage/blobs/storage-encrypt-decrypt-blobs-key-vault.md).
Por último, también puede usar la biblioteca cliente de Azure Storage para Java para realizar el cifrado de cliente antes de cargar datos en Azure Storage y descifrar los datos cuando se descargan en el cliente. Esta biblioteca también admite la integración con [Key Vault](https://azure.microsoft.com/services/key-vault/) para la administración de las claves de la cuenta de almacenamiento.
### <a name="encryption-of-data-at-rest-with-azure-sql-database"></a>Cifrado de datos en reposo con Azure SQL Database
[Azure SQL Database](../../azure-sql/database/sql-database-paas-overview.md) es un servicio de base de datos relacional de uso general de Azure que admite estructuras como datos relacionales, JSON, espacial y XML. SQL Database admite el cifrado de servidor a través de la característica Cifrado de datos transparente (TDE) y el cifrado de cliente a través de la característica Always Encrypted.
#### <a name="transparent-data-encryption"></a>Cifrado de datos transparente
[TDE](/sql/relational-databases/security/encryption/transparent-data-encryption-tde) se utiliza para cifrar archivos de datos de [SQL Server](https://www.microsoft.com/sql-server/sql-server-2016), [Azure SQL Database](../../azure-sql/database/sql-database-paas-overview.md) y [Azure Synapse Analytics](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-overview-what-is.md) en tiempo real, con una clave de cifrado de base de datos (DEK) que se almacena en el registro de arranque de base de datos para la disponibilidad durante la recuperación.
TDE protege los archivos de registro y los datos con los algoritmos de cifrado de AES y el estándar de cifrado de datos triple (3DES). El cifrado del archivo de base de datos se realiza en el nivel de página. Las páginas en una base de datos cifrada se cifran antes de que se escriban en disco y se descifran cuando se leen en la memoria. TDE ahora está habilitado de forma predeterminada en las bases de datos de Azure SQL recién creadas.
#### <a name="always-encrypted-feature"></a>Característica Always Encrypted
Con la característica [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) de Azure SQL, puede cifrar los datos dentro de aplicaciones de cliente antes de almacenarlos en Azure SQL Database. También puede habilitar la delegación de la administración de la base de datos local a terceros y mantener la separación entre aquellos que poseen y pueden ver los datos y aquellos que los administran, pero no deben tener acceso a ellos.
#### <a name="cell-level-or-column-level-encryption"></a>Cifrado de nivel de celda o columna
Con Azure SQL Database, puede aplicar el cifrado simétrico a una columna de datos mediante Transact-SQL. Este enfoque se denomina [cifrado de nivel de columna o cifrado de nivel de celda](/sql/relational-databases/security/encryption/encrypt-a-column-of-data) (CLE), ya que se puede utilizar para cifrar las columnas concretas o incluso determinadas celdas de datos con distintas claves de cifrado. Esto proporciona una capacidad de cifrado más granular que TDE, que cifra los datos en páginas.
CLE tiene funciones integradas que puede usar para cifrar datos con claves simétricas o asimétricas, la clave pública de un certificado o una frase de contraseña con 3DES.
### <a name="cosmos-db-database-encryption"></a>Cifrado de base de datos Cosmos DB
[Azure Cosmos DB](../../cosmos-db/database-encryption-at-rest.md) es la base de datos multimodelo de distribución global de Microsoft. Los datos de usuario almacenados en Cosmos DB en un almacenamiento no volátil (unidades de estado sólido) se cifran de forma predeterminada. No hay ningún control para activarlo o desactivarlo. El cifrado en reposo se implementa mediante una serie de tecnologías de seguridad, como sistemas seguros de almacenamiento de claves, redes cifradas y API criptográficas. Microsoft administra las claves de cifrado y se alternan según las directivas internas de Microsoft. También puede optar por agregar una segunda capa de cifrado con las claves que administra con la característica [claves administradas por el cliente o CMK](../../cosmos-db/how-to-setup-cmk.md).
### <a name="at-rest-encryption-in-data-lake"></a>Cifrado en reposo en Data Lake
[Azure Data Lake](../../data-lake-store/data-lake-store-encryption.md) es un repositorio para toda la empresa de todos los tipos de datos recopilados en un único lugar antes de cualquier definición formal de requisitos o esquema. Data Lake Store admite el cifrado transparente "de forma predeterminada" de los datos en reposo, que se configura durante la creación de la cuenta. De forma predeterminada, Azure Data Lake Store administra las claves en su nombre, pero tiene la opción de administrarlas usted mismo.
Se utilizan tres tipos de claves en el cifrado y descifrado de datos: la clave de cifrado maestra (MEK), la clave de cifrado de datos (DEK) y la clave de cifrado de bloque (BEK). La MEK se utiliza para cifrar la DEK, que se almacena en medios persistentes, y la BEK se deriva de la DEK y el bloque de datos. Si está administrando sus propias claves, puede alternar la MEK.
## <a name="encryption-of-data-in-transit"></a>Cifrado de datos en tránsito
Azure ofrece varios mecanismos para mantener la privacidad de los datos cuando se mueven de una ubicación a otra.
### <a name="data-link-layer-encryption-in-azure"></a>Cifrado de capa de vínculo de datos en Azure
Cada vez que el tráfico de los clientes de Azure se mueve entre los centros de datos —fuera de los límites físicos no controlados por Microsoft (o en nombre de Microsoft)—, un método de cifrado de capa de vínculo de datos que usa los [estándares de seguridad de MAC IEEE 802.1AE](https://1.ieee802.org/security/802-1ae/) (también conocidos como MACsec) se aplica de punto a punto en el hardware de red subyacente. Los paquetes se cifran y descifran en los dispositivos antes de enviarse, lo que evita ataques físicos de tipo "Man in the middle" o de supervisión/escucha telefónica. Dado que esta tecnología se integra en el propio hardware de red, proporciona cifrado de velocidad de línea en el hardware de red sin aumento de la latencia de vínculo mensurable. Este cifrado de MACsec está activado de forma predeterminada para todo el tráfico de Azure que viaja dentro de una región o entre regiones, y no se requiere ninguna acción por parte de los clientes para su habilitación.
### <a name="tls-encryption-in-azure"></a>Cifrado TLS en Azure
Microsoft usa el protocolo [Seguridad de la capa de transporte](https://en.wikipedia.org/wiki/Transport_Layer_Security) (TLS) de forma predeterminada para proteger los datos cuando se transmiten entre los servicios en la nube y los clientes. Los centros de datos de Microsoft negocian una conexión TLS con sistemas cliente que se conectan a servicios de Azure. TLS proporciona una autenticación sólida, privacidad de mensajes e integridad (lo que permite la detección de la manipulación, interceptación y falsificación de mensajes), interoperabilidad, flexibilidad de algoritmo, y facilidad de implementación y uso.
[Confidencialidad directa total](https://en.wikipedia.org/wiki/Forward_secrecy) (PFS) protege las conexiones entre los sistemas cliente de los clientes y los servicios en la nube de Microsoft mediante claves únicas. Las conexiones también usan longitudes de clave de cifrado RSA de 2048 bits. Esta combinación hace difícil para un usuario interceptar y acceder a datos que están en tránsito.
### <a name="azure-storage-transactions"></a>Transacciones de Azure Storage
Si interactúa con Azure Storage a través de Azure Portal, todas las transacciones se realizan a través de HTTPS. También se puede usar la API de REST de Storage a través de HTTPS para interactuar con Azure Storage. Puede exigir el uso de HTTPS al llamar a las API de REST para acceder a objetos de cuentas de almacenamiento mediante la habilitación de la transferencia segura para la cuenta de almacenamiento.
Las firmas de acceso compartido ([SAS](../../storage/common/storage-sas-overview.md)), que pueden utilizarse para delegar el acceso a objetos de Azure Storage, incluyen una opción para especificar que se pueda utilizar solo el protocolo HTTPS cuando se usen las firmas de acceso compartido. Este enfoque garantiza que cualquier usuario que envíe vínculos con tokens SAS use el protocolo adecuado.
[SMB 3.0](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn551363(v=ws.11)#BKMK_SMBEncryption), que solía acceder a recursos compartidos de Azure Files, admite cifrado y está disponible en Windows Server 2012 R2, Windows 8, Windows 8.1 y Windows 10. Permite el acceso entre regiones e incluso el acceso en el escritorio.
El cifrado de cliente cifra los datos antes de enviarlos a la instancia de Azure Storage, por lo que se cifra a medida que pasa por la red.
### <a name="smb-encryption-over-azure-virtual-networks"></a>Cifrado de SMB a través de redes virtuales de Azure
Si usa [SMB 3.0](https://support.microsoft.com/help/2709568/new-smb-3-0-features-in-the-windows-server-2012-file-server) en VM que ejecutan Windows Server 2012 o posterior, puede proteger las transferencias de datos mediante el cifrado de datos en tránsito a través de redes virtuales de Azure. Al cifrar los datos, contribuye a protegerlos de manipulaciones y ataques de interceptación. Los administradores pueden habilitar el cifrado SMB para todo el servidor o, simplemente, algunos recursos compartidos.
De forma predeterminada, una vez que se activa el cifrado SMB para un recurso compartido o el servidor, solo se permite que los clientes SMB 3.0 tengan acceso a los recursos compartidos cifrados.
## <a name="in-transit-encryption-in-vms"></a>Cifrado en tránsito en VM
Los datos en tránsito de destino, de origen y entre VM que ejecutan Windows, se pueden cifrar de diversas formas, según la naturaleza de la conexión.
### <a name="rdp-sessions"></a>Sesiones RDP
Puede conectarse e iniciar sesión en una VM mediante el [Protocolo de escritorio remoto](/windows/win32/termserv/remote-desktop-protocol) (RDP) desde un equipo cliente de Windows o desde un equipo Mac con un cliente RDP instalado. Los datos en tránsito a través de la red en las sesiones RDP se pueden proteger mediante TLS.
Puede usar el escritorio remoto para conectarse a una VM Linux en Azure.
### <a name="secure-access-to-linux-vms-with-ssh"></a>Acceso seguro a las VM de Linux con SSH
Para la administración remota, puede usar [Secure Shell](../../virtual-machines/linux/ssh-from-windows.md) (SSH) para conectarse a VM Linux que se ejecutan en Azure. SSH es un protocolo de conexión cifrada que permite inicios de sesión seguros a través de conexiones no seguras. Es el protocolo de conexión predeterminado de las máquinas virtuales Linux hospedadas en Azure. Mediante el uso de claves de SSH para la autenticación, se elimina la necesidad de contraseñas para iniciar sesión. SSH utiliza un par de claves públicas/privadas (cifrado simétrico) para la autenticación.
## <a name="azure-vpn-encryption"></a>Cifrado de VPN de Azure
Puede conectarse a Azure a través de una red privada virtual que crea un túnel seguro para proteger la privacidad de los datos enviados a través de la red.
### <a name="azure-vpn-gateways"></a>Puertas de enlace de VPN de Azure
Puede usar una [puerta de enlace VPN de Azure](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md) para enviar tráfico cifrado entre la red virtual y la ubicación local a través de una conexión pública o para enviar tráfico entre redes virtuales.
Las VPN de sitio a sitio usan [IPsec](https://en.wikipedia.org/wiki/IPsec) para el cifrado de transporte. Azure VPN Gateway utiliza un conjunto de propuestas predeterminadas. Puede configurar Azure VPN Gateway para usar una directiva IPsec/IKE personalizada con determinados algoritmos criptográficos y ventajas claves, en lugar de conjuntos de directivas predeterminadas de Azure.
### <a name="point-to-site-vpns"></a>VPN de punto a sitio
Las VPN de punto a sitio permiten a los equipos cliente individuales acceder a una instancia de Azure Virtual Network. [El protocolo de túnel de sockets de seguros (SSTP)](/previous-versions/technet-magazine/cc162322(v=msdn.10)) se utiliza para crear el túnel VPN. Puede atravesar firewalls (el túnel aparece como conexión HTTPS). Puede usar su propia entidad de certificación (CA) de raíz de infraestructura de clave pública (PKI) interna para la conectividad de punto a sitio.
Puede configurar una conexión de VPN de punto a sitio a una red virtual con Azure Portal con autenticación de certificados o PowerShell.
Para obtener más información acerca de las conexiones VPN de punto a sitio para redes virtuales de Azure, vea:
[Configuración de una conexión de punto a sitio a una red virtual mediante la autenticación de certificación: Azure Portal](../../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md)
[Configuración de una conexión de punto a sitio a una red virtual mediante la autenticación de certificado: PowerShell](../../vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md)
### <a name="site-to-site-vpns"></a>VPN de sitio a sitio
Puede usar una conexión de puerta de enlace VPN de sitio a sitio para conectar su red local a una red virtual de Azure a través de un túnel VPN de IPsec/IKE (IKEv1 o IKEv2). Este tipo de conexión requiere un dispositivo VPN local que tenga una dirección IP pública asignada.
Puede configurar una conexión de VPN de sitio a sitio a una red virtual mediante Azure Portal, PowerShell o CLI de Azure.
Para más información, consulte:
[Creación de una conexión de sitio a sitio mediante Azure Portal](../../vpn-gateway/tutorial-site-to-site-portal.md)
[Creación de una conexión de sitio a sitio en PowerShell](../../vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md)
[Creación de una red virtual con una conexión VPN de sitio a sitio mediante la CLI](../../vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md)
## <a name="in-transit-encryption-in-data-lake"></a>Cifrado en tránsito en Data Lake
Los datos en tránsito (también conocidos como datos en movimiento) también se cifran siempre en Data Lake Store. Además de que los datos se cifran antes de almacenarse en un medio persistente, también se protegen cuando están en tránsito mediante HTTPS. HTTPS es el único protocolo admitido para las interfaces de REST de Data Lake Store.
Para más información acerca del cifrado de datos en tránsito en Data Lake, vea [Cifrado de datos en Data Lake Store](../../data-lake-store/data-lake-store-encryption.md).
## <a name="key-management-with-key-vault"></a>Administración de claves con Key Vault
Sin la protección y administración adecuadas de las claves, el cifrado queda inutilizable. Key Vault es la solución recomendada de Microsoft para administrar y controlar el acceso a las claves de cifrado utilizadas por los servicios en la nube. Los permisos para acceder a las claves se pueden asignar a servicios o a usuarios a través de cuentas de Azure Active Directory.
Key Vault libera a las empresas de la necesidad de configurar, aplicar revisiones y mantener los módulos de seguridad de hardware (HSM) y el software de administración de claves. Cuando utiliza Key Vault, tiene el control. Microsoft nunca ve las claves y las aplicaciones no tienen acceso directo a ellas. También puede importar o generar claves en HSM.
## <a name="next-steps"></a>Pasos siguientes
- [Información general de seguridad de Azure](./overview.md)
- [Azure Network Security Overview (Información general sobre Azure Network Security)](network-overview.md)
- [Introducción a la seguridad de base de datos de Azure](../../azure-sql/database/security-overview.md)
- [Información general de seguridad de Azure Virtual Machines](virtual-machines-overview.md)
- [Cifrado de datos en reposo](encryption-atrest.md)
- [Procedimientos recomendados de seguridad de datos y cifrado](data-encryption-best-practices.md) | 109.607656 | 982 | 0.795137 | spa_Latn | 0.98915 |
43e59e4565fde26b1c2999e48558589556674bb9 | 175 | md | Markdown | DIEP/README.md | bonzibuddy801/bonzibuddy801.github.io | baaa1368356acbb98eb9d47bc35a24b540268927 | [
"Apache-2.0"
] | 1 | 2018-03-20T11:10:17.000Z | 2018-03-20T11:10:17.000Z | DIEP/README.md | bonzibuddy801/bonzibuddy801.github.io | baaa1368356acbb98eb9d47bc35a24b540268927 | [
"Apache-2.0"
] | null | null | null | DIEP/README.md | bonzibuddy801/bonzibuddy801.github.io | baaa1368356acbb98eb9d47bc35a24b540268927 | [
"Apache-2.0"
] | 3 | 2017-09-17T21:03:10.000Z | 2021-08-23T15:25:34.000Z | # cDiep client
This is a client that can connect to cDiep servers. By default it connects to `localhost:8080` but a better server selection system will be put in place soon.
| 43.75 | 158 | 0.782857 | eng_Latn | 0.99973 |
43e601268eb7d884e6a1ea3899bcf8e863c110b4 | 10,974 | md | Markdown | docs/sql-server/partner-hadr-sql-server.md | SteSinger/sql-docs.de-de | 2259e4fbe807649f6ad0d49b425f1f3fe134025d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/sql-server/partner-hadr-sql-server.md | SteSinger/sql-docs.de-de | 2259e4fbe807649f6ad0d49b425f1f3fe134025d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/sql-server/partner-hadr-sql-server.md | SteSinger/sql-docs.de-de | 2259e4fbe807649f6ad0d49b425f1f3fe134025d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'SQL Server: Partner für Hochverfügbarkeit und Notfallwiederherstellung'
description: Liste mit Partnerunternehmen, die Lösungen zur Überwachung von SQL Server anbieten.
services: sql-server
ms.topic: conceptual
ms.custom: seo-dt-2019
ms.date: 09/17/2017
ms.prod: sql
ms.author: mikeray
author: MikeRayMSFT
ms.openlocfilehash: e13efcf874b9f0d59cdc103626c1604b58757af8
ms.sourcegitcommit: 15fe0bbba963d011472cfbbc06d954d9dbf2d655
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 11/14/2019
ms.locfileid: "74095559"
---
# <a name="sql-server-high-availability-and-disaster-recovery-partners"></a>SQL Server: Partner für Hochverfügbarkeit und Notfallwiederherstellung
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]
Es gibt eine große Auswahl an branchenführenden Tools, die die Komponenten „Hochverfügbarkeit“ und „Notfallwiederherstellung“ für Ihre SQL Server-Dienste bereitstellen. Im Folgenden werden Partnerunternehmen von Microsoft aufgelistet, die Hochverfügbarkeits- und Notfallwiederherstellungslösungen für Microsoft SQL Server anbieten.
## <a name="high-availability-and-disaster-recovery-partners"></a>Partner für Hochverfügbarkeit und Notfallwiederherstellung
<!--|![PartnerShortName][1] |**PartnerShortName**<br>PartnerShortName Brief description of the type of products that partner provides. <br><br>List of supported versions of SQL Server, OS, OS platforms/distros Server 2005 SP4 - SQL Server 2016 on Windows |[Datasheet][PartnerShortName_datasheet]<br>[Marketplace][PartnerShortName_marketplace]<br>[Website][PartnerShortName_website]<br>[Twitter][PartnerShortName_twitter]<br>[Video][PartnerShortName_youtube]|[](https://www.youtube.com/channel/**************)
-->
| Partner | und Beschreibung | Links |
| --- | --- | --- |
|![Azure][5] |**Azure Site Recovery**<br>Site Recovery repliziert Workloads, die auf virtuellen Computern oder physischen Servern ausgeführt werden, damit sie an einem sekundären Speicherort weiterhin verfügbar sind, wenn der primäre Speicherort nicht verfügbar ist. Sie können virtuelle Computer mit SQL Server replizieren oder einen Failover ausführen. Dabei können Daten von lokalen Rechenzentren in Azure oder auf ein anderes lokales Rechenzentrum oder von einem Azure-Rechenzentrum auf ein anderes verschoben werden.<br><br> Enterprise- und Standard-Editionen von SQL Server 2008 R2 – SQL Server 2016|[Website][azure_website]<br>[Marketplace][azure_marketplace]<br>[Datasheet][azure_datasheet]<br>[Twitter][azure_twitter]<br>[Video][azure_youtube]|
|![DH2i][2] |**DH2i**<br>DxEnterprise ist eine Smart Availability-Software für Windows, Linux und Docker, mit der Ihre geplanten und ungeplanten Ausfallzeiten nahezu bei null liegen und die Sie außerdem dabei unterstützt, hohe Kosteneinsparungen zu erzielen, die Verwaltung zu vereinfachen und Ihre physischen und logischen Komponenten zu konsolidieren.<br><br>SQL Server 2005+, Windows Server 2008R2+, Ubuntu 16+, RHEL 7+, CentOS 7+|[Website][dh2i_website]<br>[Datasheet][dh2i_datasheet]<br>[Twitter][dh2i_twitter]<br>[Video][dh2i_youtube]|
|![HPE][4] |**HPE Serviceguard**<br>Schützen Sie mit HPE Serviceguard für Linux (SGLX) wichtige SQL Server 2017-Workloads unter Linux ® vor geplanten und ungeplanten Ausfällen, die durch eine Vielzahl von Störungen der Infrastruktur oder der Anwendung entstehen können, und das über physische und virtuelle Umgebungen und große Distanzen hinweg. HPE SGLX A.12.20.00 und höher bietet kontextabhängige Überwachungs- und Wiederherstellungsoptionen für Failoverclusterinstanzen sowie SQL Server-Workloads in Always On-Verfügbarkeitsgruppen. Maximieren Sie die Betriebszeit mit HPE SGLX, ohne die Datenintegrität und Leistung zu beeinträchtigen.<br><br>SQL Server 2017 unter Linux – RedHat 7.3, 7.4, SUSE 12 SP2, SP3|[Website][hpe_website]<br>[Datasheet][hpe]<br>[Testversion herunterladen][hpe_download]<br>[Blog][hpe_download]<br>[Twitter][hpe_twitter]
|![IDERA][3]|**IDERA**<br>SQL Safe Backup ist eine Lösung für Backups mit hoher Leistung und Wiederherstellungen für SQL Server, mit der Sie Geld sparen können, indem die Sicherungsdatengröße und die Zeit für die Datenbanksicherung reduziert werden, und indem ein sofortiger Lese- und Schreibzugriff auf die Datenbanken innerhalb der Sicherungsdateien zur Verfügung gestellt wird.<br><br>Microsoft SQL Server: 2005 SP1 der höher, 2008, 2008 R2, 2012, 2014, 2016; alle Editionen |[Website][idera_website]|
|![NEC][7]|**NEC**<br>ExpressCluster ist eine umfassende und vollständig automatisierte Lösung für hohe Verfügbarkeit und Notfallwiederherstellung nach größeren Ausfällen, z.B. durch Hardware-, Software-, Netzwerk- und Websitefehler. Die Lösung umfasst SQL Server und zugehörige Anwendungen, die auf physischen oder virtuellen Computern in lokalen oder Cloudumgebungen ausgeführt werden.<br><br>Microsoft SQL Server: 2005 oder höher; alle Editionen |[Website][necec_website]<br>[Datasheet][necec_datasheet]<br>[Video][necec_youtube]<br>[Download][necec_download]|
|![Portworx][6] |**Portworx**<br>Mit Portworx können zustandsbehaftete Container während der Produktion ausgeführt werden. Mit diesem Tool können Benutzer jede Datenbank oder jeden zustandsbehafteten Dienste mit jeder Infrastruktur verwalten, indem sie einen beliebigen Container-Scheduler, einschließlich Kuvernetes, Mesosphere DV/OS und Docker Swarm, verwenden. Portworx bietet Lösungen für die fünf häufigsten Probleme, auf die DevOps-Teams stoßen, wenn sie Datenbanken mit Containern und zustandsbehaftete Dienste während der Produktion ausführen: Persistenz, Hochverfügbarkeit, Datenautomatisierung, Unterstützung von mehreren Datenspeichern und Infrastrukturen sowie Sicherheit.<br><br>SQL Server 2017 unter Docker |[Website][portworx_website]<br>[Dokumentation][portworx_docs]<br>[Video][portworx_youtube]|
|![SIOS][8] |**SIOS**<br>SIOS Technology stellt kostengünstige Hochverfügbarkeits- und Notfallwiederherstellungslösungen für SQL Server unter Windows oder Linux bereit. Mit SIOS SANless-Clustern ist kein gemeinsam genutztes SAN (Storage Area Network) mehr erforderlich, sodass Sie Ihre wichtigsten Anwendungen in physischen, virtuellen, cloudbasierten und hybriden Konfigurationen in Umgebungen mit einem oder mehreren Standorten flexibel schützen können.<br><br>Fügen Sie SIOS DataKeeper zu Ihrer Windows Server-Failoverclusterumgebung hinzu, um eine SANless-Volumeressource zu erstellen, die herkömmlichen gemeinsam genutzten Speicher ersetzt, sodass Sie Ihren Windows Server-Failovercluster problemlos in Azure ausführen können.<br><br>SIOS Protection Suite ist eine höchst flexible Clusterlösung, die wichtige Linux-Anwendungen wie SQL Server, SAP, HANA, Oracle und viele weitere schützt.|[Website][sios_website]<br>[Datasheet][sios_datasheet]<br>[Twitter][sios_twitter]<br>[Marketplace][sios_marketplace]<br>[Video][sios_youtube]|
|![Veeam][1] |**Veeam**<br>Veeam Backup & Replication ist eine leistungsstarke, benutzerfreundliche und erschwingliche Lösung für Sicherung und Verfügbarkeit. Dieses Tool bietet Ihnen eine schnelle, flexible und zuverlässige Wiederherstellung von virtualisierten Anwendungen und Daten und verbindet dabei VM-Sicherung und -Replikation in einer einzigen Softwarelösung. Veeam Backup & Replication liefert ausgezeichneten Support für virtuelle Umgebungen mit VMware vSphere und Microsoft Hyper-V.<br><br>SQL Server 2005 SP4, SQL Server 2016 unter Windows |[Website][veeam_website]<br>[Datasheet][veeam_datasheet]<br>[Twitter][veeam_twitter]<br>[Video][veeam_youtube]|
## <a name="next-steps"></a>Nächste Schritte
Nähere Informationen zu weiteren Partnern finden Sie unter [Überwachung][mon_partners], [Verwaltungspartner][management_partners] und [Entwicklungspartner][dev_partners].
<!--Image references-->
[1]: ./media/partner-hadr-sql-server/Veeam_green_logo.png
[2]: ./media/partner-hadr-sql-server/dh2i_logo.png
[3]: ./media/partner-hadr-sql-server/idera_logo.png
[4]: ./media/partner-hadr-sql-server/hpe_pri_grn_pos_rgb.png
[5]: ./media/partner-hadr-sql-server/azure_logo.png
[6]: ./media/partner-hadr-sql-server/portworx_logo.png
[7]: ./media/partner-hadr-sql-server/nec_logo.png
[8]: ./media/partner-hadr-sql-server/sios_logo.png
<!--Article links-->
[mon_partners]: ./partner-monitor-sql-server.md
[management_partners]: ./partner-management-sql-server.md
[dev_partners]: ./partner-dev-sql-server.md
<!--Website links -->
[veeam_website]:https://www.veeam.com/
[dh2i_website]:https://dh2i.com
[idera_website]:https://www.idera.com/productssolutions/sqlserver
[hpe_website]: https://www.hpe.com/us/en/product-catalog/detail/pip.376220.html
[azure_website]: https://docs.microsoft.com/azure/site-recovery/site-recovery-sql
[necec_website]: https://www.necam.com/ExpressCluster/
[portworx_website]: https://portworx.com/
[sios_website]: https://us.sios.com/
<!--Get Started Links-->
<!--Datasheet Links-->
[veeam_datasheet]:https://www.veeam.com/veeam_backup_9_5_datasheet_en_ds.pdf
[dh2i_datasheet]:https://dh2i.com/wp-content/uploads/DxE-Win-QuickFacts.pdf
[hpe]:https://www.hpe.com/h20195/v2/default.aspx?cc=us&lc=en&oid=376220
[necec_datasheet]: https://www.necam.com/docs/?id=0d9ef7a7-f935-4909-b6bb-20a47b3
[azure_datasheet]: /azure/site-recovery/vmware-physical-azure-support-matrix
[sios_datasheet]: https://us.sios.com/solutions/high-availability-cluster-software-cloud/
<!--Marketplace Links -->
[azure_marketplace]: https://azuremarketplace.microsoft.com/marketplace/apps?search=site%20recovery&page=1
[sios_marketplace]: https://azuremarketplace.microsoft.com/marketplace/apps/sios_datakeeper.sios-datakeeper-8
<!--Press links-->
<!--[veeam_press]:-->
<!--YouTube links-->
[veeam_youtube]:https://www.youtube.com/user/YouVeeam
[dh2i_youtube]:https://www.youtube.com/user/dh2icompany
[idera_youtube]:https://www.idera.com/resourcecentral/videos/sql-safe-overview
[azure_youtube]: https://mva.microsoft.com/en-US/training-courses/is-your-lack-of-a-disaster-recovery-site-keeping-you-up-at-night-8680?l=oF7YrFH1_7504984382
[necec_youtube]: https://www.youtube.com/watch?v=9La3Cw1Q1Jk
[portworx_youtube]: https://www.youtube.com/channel/UCSexpvQ9esSRgiS_Q9_3mLQ
[sios_youtube]: https://www.youtube.com/watch?v=U3M44gJNWQE
<!--Twitter links-->
[veeam_twitter]:https://twitter.com/veeam
[dh2i_twitter]:https://twitter.com/dh2i
[hpe_twitter]:https://twitter.com/hpe
[azure_twitter]:https://twitter.com/hashtag/azuresiterecovery
[sios_twitter]:https://www.twitter.com/SIOSTech
<!--Docs links>-->
[portworx_docs]: https://docs.portworx.com/
<!--Download links-->
[hpe_download]: https://h20392.www2.hpe.com/portal/swdepot/displayProductInfo.do?productNumber=SGLX-DEMO
[necec_download]: https://www.necam.com/ExpressCluster/30daytrial/
<!--Blog links-->
[hpe_blog]: https://community.hpe.com/t5/Servers-The-Right-Compute/SQL-Server-for-Linux-Is-Here-and-A-New-Chapter-for-Mission/ba-p/6977571#.WiHWW0xFwUE
| 104.514286 | 1,035 | 0.80226 | deu_Latn | 0.776527 |
43e66d1c40dafd61b18be066c8a94ca35ea7104a | 1,479 | md | Markdown | docs/debugger/how-to-page-up-or-down-in-memory.md | Ric-Chang/visualstudio-docs.zh-tw | fab71387c0b4fd9853313d648522b5292ecee128 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-11-18T01:15:24.000Z | 2019-11-18T01:15:24.000Z | docs/debugger/how-to-page-up-or-down-in-memory.md | Ric-Chang/visualstudio-docs.zh-tw | fab71387c0b4fd9853313d648522b5292ecee128 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/how-to-page-up-or-down-in-memory.md | Ric-Chang/visualstudio-docs.zh-tw | fab71387c0b4fd9853313d648522b5292ecee128 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: HOW TO:Page Up 或在記憶體中的向下 |Microsoft Docs
ms.date: 11/04/2016
ms.topic: conceptual
dev_langs:
- CSharp
- VB
- FSharp
- C++
- JScript
helpviewer_keywords:
- debugger, paging up or down in memory
- memory, paging up or down
- Disassembly window, viewing memory space
- Memory window, paging up or down in memory
ms.assetid: 50b30a68-66f6-43f8-a48b-59ce12c95471
author: mikejo5000
ms.author: mikejo
manager: jillfra
ms.workload:
- multiple
ms.openlocfilehash: 3a1c8cc86481e73bfa851714d6e6f23c5eda5daa
ms.sourcegitcommit: 2193323efc608118e0ce6f6b2ff532f158245d56
ms.translationtype: MTE95
ms.contentlocale: zh-TW
ms.lasthandoff: 01/25/2019
ms.locfileid: "55013778"
---
# <a name="how-to-page-up-or-down-in-memory"></a>HOW TO:在記憶體中向上或向下翻頁
當您在 [記憶體] 視窗或 [反組譯碼] 視窗中檢視記憶體內容時,可以使用垂直捲軸在記憶體空間中上下移動。
### <a name="to-page-up-or-down-in-memory"></a>若要在記憶體中向上或向下翻頁
1. 若要向下翻頁 (移至較高的記憶體位址),請按一下捲動方塊下方的垂直捲軸。
2. 若要向上翻頁 (移至較低的記憶體位址),請按一下拇指上方的垂直捲軸。
您也會發現垂直捲軸的操作方式沒有一定的標準。 現代電腦的位址空間很大,如果您抓取捲軸拇指並將它拖曳到隨機位置,很容易會遺失。 因此,拇指是「彈性載入」而且一定在捲軸的中央。 在機器碼應用程式中,您可以向上或向下翻頁,但不能自由捲動。
在 Managed 應用程式中時,反組譯碼只有一個函式,您可以正常捲動。
您會發現較高的位址出現在視窗底端。 若要檢視較高的位址,就必須向下而非向上移動。
#### <a name="to-move-up-or-down-one-instruction"></a>若要向上或向下移動一個指令
- 請按一下垂直捲軸頂端或底端的箭頭。
## <a name="see-also"></a>請參閱
[記憶體視窗](../debugger/memory-windows.md)
[如何:使用反組譯碼視窗](../debugger/how-to-use-the-disassembly-window.md)
[在偵錯工具中檢視資料](../debugger/viewing-data-in-the-debugger.md) | 28.442308 | 121 | 0.74307 | yue_Hant | 0.727294 |
43e671988bcb90aa9835e619adc20193fe59e5f1 | 2,218 | markdown | Markdown | _posts/2014-01-05-joining-couchbase.markdown | t3rm1n4l/t3rm1n4l.github.io | 8627d1b5385545d8602b19a24870957be5205999 | [
"MIT"
] | null | null | null | _posts/2014-01-05-joining-couchbase.markdown | t3rm1n4l/t3rm1n4l.github.io | 8627d1b5385545d8602b19a24870957be5205999 | [
"MIT"
] | null | null | null | _posts/2014-01-05-joining-couchbase.markdown | t3rm1n4l/t3rm1n4l.github.io | 8627d1b5385545d8602b19a24870957be5205999 | [
"MIT"
] | null | null | null | ---
layout: post
status: publish
published: true
title: Joining Couchbase
date: 2014-01-05 22:31:25.000000000 +05:30
comments: []
---
I joined Couchbase around 2 months ago. Couchbase is a fast growing company in the NoSQL database space. They started office in Bangalore very recently and I am very excited to join as the 4th employee in India office. Having worked with Membase over the last 2 years, I had some familiarity with the company and the product. Couchbase is a great place with extremely challenging work in the domain of distributed systems and storage. The talent pool in the company is just awesome. The Bangalore team focuses on Development and Technical support.
It was one of wish to have a job of writing opensource code and getting paid for it. At Couchbase, every line of code is open source and is available on github. I am currently working on development of Couchbase MapReduce view indexing engine. Functionally, the view indexing engine is inherited from CouchDB views. You can write predefined map and reduce functions in javascript to operate on your dataset. A mapreduce set is called a view. Each view can by queried using REST APIs. View is populated incrementally on-the-fly as your data mutations arrive. It is unlike hadoop mapreduce, where you run mapreduce job on the dataset every time. Only data being modified or added needs to be evaluated to update the part of the view index. Hence it is called incremental mapreduce. Read more about views from [official documentation](http://docs.couchbase.com/couchbase-manual-2.2/#views-and-indexes). I have been learning erlang recently as part of the job and digging into couch btree implementation and reading lot of interesting code. I am looking forward to write some blog post sometime about couch btree internals.
Taking a glance at 2013, I have been significantly contributing to ZBase open source project during the whole year. It was interesting to see one of my golang project, megacmd attracted some users. My github graph for the last one year looks as follows. I am looking forward to diversify my projects to more lower level stuff this year.

Wishing you all happy new year 2014!
| 110.9 | 1,119 | 0.800721 | eng_Latn | 0.999754 |
43e6a41bb1e913bd3acb364381fa39a36f4e165a | 1,305 | md | Markdown | README.md | philmcp/awesome-job-boards-2 | 38015d893b93fef086f02b70ff9a9f87d468645d | [
"MIT"
] | 23 | 2020-05-26T02:26:05.000Z | 2022-01-26T13:34:59.000Z | README.md | philmcp/awesome-job-boards-2 | 38015d893b93fef086f02b70ff9a9f87d468645d | [
"MIT"
] | 5 | 2020-05-25T21:02:19.000Z | 2021-02-10T03:17:38.000Z | README.md | philmcp/awesome-job-boards-2 | 38015d893b93fef086f02b70ff9a9f87d468645d | [
"MIT"
] | 11 | 2020-05-25T16:31:27.000Z | 2021-11-04T13:20:37.000Z | # awesome-job-boards
A curated list of awesome job boards
:computer: :earth_americas:
[Career Vault](https://careervault.io)
[carrots](https://thecarrots.io/jobs)
[remoteleaf](https://remoteleaf.com/)
[remoteok](https://remoteok.io/)
[okjob](https://okjob.io/)
[entrylevel](https://entrylevel.io/)
[howsitlike](https://www.howsitlike.com/)
[nowhiteboards](https://nowhiteboards.io/)
[remotemonk](http://remotemonk.com/)
[hellnar](https://hellnar.github.io/openings/Openings.html)
[workwithgo](https://workwithgo.com)
[ripplematch](https://ripplematch.com/discover/)
[workalerts](https://workalerts.netlify.com/)
[workfeed](http://workfeed.dev)
[loopcv](https://www.loopcv.pro/)
[needremote](https://needremote.com/)
[remotejobs](https://remotejobs.world/)
[indeed](https://indeed.com/)
[ziprecruiter](https://www.ziprecruiter.com/)
[linkedin](https://www.linkedin.com/)
[levels.fyi](https://www.levels.fyi/still-hiring/)
[remotelypeople](https://remotelypeople.com/)
[glassdoor](https://glassdoor.com/)
[naukri](http://naukri.com/)
[remotists](https://remotists.substack.com/)
[weworkremotely](https://weworkremotely.com/)
[EuroPython 2020](https://ep2020.europython.eu/sponsor/job-board/)
[AngelList](https://angel.co/)
[findwrk](https://findwrk.app/)
| 37.285714 | 66 | 0.711111 | yue_Hant | 0.152264 |
43e70fa9fd73a10520fdf131d459bbf62190de94 | 1,862 | md | Markdown | powerapps-docs/developer/model-driven-apps/clientapi/reference/execution-context.md | mattruma/powerapps-docs | 6fe922145366c7dbba22a00cd560a8402622b995 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/developer/model-driven-apps/clientapi/reference/execution-context.md | mattruma/powerapps-docs | 6fe922145366c7dbba22a00cd560a8402622b995 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerapps-docs/developer/model-driven-apps/clientapi/reference/execution-context.md | mattruma/powerapps-docs | 6fe922145366c7dbba22a00cd560a8402622b995 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: "Client API execution context in model-driven apps| MicrosoftDocs"
description: Includes description and supported parameters for the executionContext method.
ms.date: 04/21/2021
ms.topic: "conceptual"
applies_to:
- "Dynamics 365 (online)"
ms.assetid: 1fcbf0fd-4e47-4352-a555-9315f7e57331
author: "Nkrb"
ms.subservice: mda-developer
ms.author: "nabuthuk"
manager: "kvivek"
search.audienceType:
- developer
search.app:
- PowerApps
- D365CE
---
# Execution context (Client API reference)
The execution context defines the event context in which your code executes. More information: [Client API execution context](../clientapi-execution-context.md).
The execution context object provides the following methods.
|Method |Description |
|---|---|
|[getDepth](executioncontext/getDepth.md)|Returns a value that indicates the order in which this handler is executed.|
|[getEventArgs](executioncontext/getEventArgs.md)|Returns an object with methods to manage this handler.|
|[getEventSource](executioncontext/getEventSource.md)|Returns a reference to the object that the event occurred on.|
|[getFormContext](executioncontext/getFormContext.md)|Returns a reference to the form or an item on the form depending on where the method was called.|
|[getSharedVariable](executioncontext/getSharedVariable.md)|Retrieves a variable set using the [setSharedVariable](executioncontext/setSharedVariable.md) method.|
|[setSharedVariable](executioncontext/setSharedVariable.md)|Sets the value of a variable to be used by a handler after the current handler completes.|
### Related topics
[Client API execution context](../clientapi-execution-context.md)
[Save event arguments](save-event-arguments.md)
[Understand Client API object model](../understand-clientapi-object-model.md)
[!INCLUDE[footer-include](../../../../includes/footer-banner.md)] | 39.617021 | 162 | 0.783566 | eng_Latn | 0.849074 |
43e71536ccc1749957a6f7da413658044e38923f | 370 | md | Markdown | build/docs/ListWrapperForecastSourceDayPointer.md | nmusco/platform-client-sdk-dotnet | c5f11eb3a14010490c49a77e6a71939ad6024d51 | [
"MIT"
] | 1 | 2019-10-30T11:48:16.000Z | 2019-10-30T11:48:16.000Z | build/docs/ListWrapperForecastSourceDayPointer.md | roblthegreat/purecloud-platform-client-sdk-dotnetcore | 7da09f2cac8017e44575fd14e3f38da63114b550 | [
"MIT"
] | null | null | null | build/docs/ListWrapperForecastSourceDayPointer.md | roblthegreat/purecloud-platform-client-sdk-dotnetcore | 7da09f2cac8017e44575fd14e3f38da63114b550 | [
"MIT"
] | null | null | null | ---
title: ListWrapperForecastSourceDayPointer
---
## ININ.PureCloudApi.Model.ListWrapperForecastSourceDayPointer
## Properties
|Name | Type | Description | Notes|
|------------ | ------------- | ------------- | -------------|
| **Values** | [**List<ForecastSourceDayPointer>**](ForecastSourceDayPointer.html) | | [optional] |
{: class="table table-striped"}
| 26.428571 | 106 | 0.610811 | yue_Hant | 0.516164 |
43e7609397bb8671a34b622bff58b5d102c03a02 | 547 | md | Markdown | README.md | cpl101/jQuery-Picasa-Gallery | 1b99b79721864ee26676b2b037ccf6298b4f51d2 | [
"MIT"
] | 2 | 2015-07-08T12:15:49.000Z | 2015-07-08T12:15:52.000Z | README.md | cpl101/jQuery-Picasa-Gallery | 1b99b79721864ee26676b2b037ccf6298b4f51d2 | [
"MIT"
] | null | null | null | README.md | cpl101/jQuery-Picasa-Gallery | 1b99b79721864ee26676b2b037ccf6298b4f51d2 | [
"MIT"
] | null | null | null | ## jQuery Picasa Gallery
jQuery plugin that displays your public picasa web albums on your website in a photo gallery.
### Example
<http://alanhamlett.github.com/jQuery-Picasa-Gallery>
### Download
<https://github.com/alanhamlett/jQuery-Picasa-Gallery/tarball/master>
### Project Page
<http://alanhamlett.github.com/jQuery-Picasa-Gallery>
### License
Released under the [MIT license](http://www.opensource.org/licenses/mit-license.php).
Uses the [fancyBox](http://fancyapps.com/fancybox/) jQuery plugin, which is free for non-commercial use.
| 32.176471 | 104 | 0.767824 | eng_Latn | 0.503963 |
43e8683dee7761ccd60e729c83a74e643977a40f | 7,783 | md | Markdown | docs/cli/reference/files.md | oieduardorabelo/docs | d88011c497267134afacffd97937e509ef3c1123 | [
"Apache-2.0"
] | 4 | 2019-07-30T19:12:45.000Z | 2021-09-05T17:39:55.000Z | docs/cli/reference/files.md | oieduardorabelo/docs | d88011c497267134afacffd97937e509ef3c1123 | [
"Apache-2.0"
] | 3 | 2019-11-26T01:33:32.000Z | 2019-12-04T18:30:46.000Z | docs/cli/reference/files.md | oieduardorabelo/docs | d88011c497267134afacffd97937e509ef3c1123 | [
"Apache-2.0"
] | 2 | 2021-09-04T06:07:26.000Z | 2021-09-05T17:39:57.000Z | ---
title: Files and Folders
description: Learn more about the files and folders Amplify uses to maintain project state.
---
## Folders
The CLI places the following folder structure in the root directory of the project during `amplify init`:
```
amplify
.config
#current-cloud-backend
backend
```
### amplify/.config
> Manual edits okay: NO
> Add to version control: YES
Contains files that store cloud configuration and `settings/preferences`. Run `amplify configure` to change the project configuration.
### amplify/#current-cloud-backend
> Manual edits okay: NO
> Add to version control: NO
Contains the current cloud state of the checked out environment's resources. The contents of this folder should never be manually updated. It will be overwritten on operations such as `amplify push`, `amplify pull` or `amplify env checkout`.
### amplify/backend
> Manual edits okay: YES
> Add to version control: YES
Contains the latest local development state of the checked out environment's resources. The contents of this folder can be modified and running `amplify push` will push changes in this directory to the cloud.
Each plugin stores contents in its own subfolder within this folder.
### amplify/mock-data
> Manual edits okay: NO
> Add to version control: NO
Only created after running `amplify mock api`. It contains the SQLite databases that are used to back the local API when mocking. The contents should not be modified but you can delete the folder if you want to wipe your local API state.
## Core Amplify Files
These files work together to maintain the overall state of the Amplify project such as what resources are configured in the project, dependencies between resources, and when the last push was.
### amplify-meta.json
> Manual edits okay: NO
> Add to version control: NO
Both the `amplify/backend` and `amplify/#current-cloud-backend` directories contain an `amplify-meta.json` file. The `amplify-meta.json` in the `backend` directory serves as the whiteboard for the CLI core and the plugins to log internal information and communicate with each other.
The CLI core provides read and write access to the file for the plugins. Core collects the selected providers' outputs after init and logs them under the "providers" object, e.g. the awscloudformation provider outputs the information of the root stack, the deployment S3 bucket, and the authorized/unauthorized IAM roles, and they are logged under the providers.awscloudformation object. Each category plugin logs information under its own name.
Because one category might create multiple services within one project (e.g. the interactions category can create multiple bots), the category metadata generally follows a two-level structure like the following:
```json
{
"<category>": {
"<service1>": {
//service1 metadata
},
"<service2>": {
//service2 metadata
}
}
}
```
The metadata for each service is first logged into the meta file after the `amplify <category> add` command is executed, containing some general information that indicates one service of the category has been added locally.
Then, on the successful execution of the `amplify push` command, the `output` object will be added/updated in the service's metadata with information that describes the actual cloud resources that have been created or updated.
### aws-exports.js
> Manual edits okay: NO
> Add to version control: NO
This file is generated only for JavaScript projects.
It contains the consolidated outputs from all the categories and is placed under the `src` directory specified during the `init` process. It is updated after `amplify push`.
This file is consumed by the [Amplify](https://github.com/aws-amplify/amplify-js) JavaScript library for configuration. It contains information which is non-sensitive and only required for external, unauthenticated actions from clients (such as user registration or sign-in flows in the case of Auth) or for constructing appropriate endpoint URLs after authorization has taken place. Please see the following more detailed explanations:
- [Cognito security best practices for web app](https://forums.aws.amazon.com/message.jspa?messageID=757990#757990)
- [Security / Best Practice for poolData (UserPoolId, ClientId) in a browser JS app](https://github.com/amazon-archives/amazon-cognito-identity-js/issues/312)
- [Are the Cognito User pool id and Client Id sensitive?](https://stackoverflow.com/a/47865747/194974)
### amplifyconfiguration.json
> Manual edits okay: NO
> Add to version control: NO
This file is the same as `aws-exports.js` but for Android and iOS projects.
It is consumed by the [iOS](https://github.com/aws/aws-sdk-ios/) and [Android](https://github.com/aws/aws-sdk-android) native SDKs for configuration.
### .gitignore
> Manual edits okay: YES
> Add to version control: YES
When a new project is initialized from the Amplify CLI, Amplify will append the following to the .gitignore file in the root directory. A .gitignore file will be created if one does not exist.
```sh
#amplify
amplify/\#current-cloud-backend
amplify/.config/local-*
amplify/mock-data
amplify/backend/amplify-meta.json
amplify/backend/awscloudformation
build/
dist/
node_modules/
aws-exports.js
awsconfiguration.json
amplifyconfiguration.json
amplify-build-config.json
amplify-gradle-config.json
amplifytools.xcconfig
```
### team-provider-info.json
> Manual edits okay: NO
> Add to version control: ONLY IF REPOSITORY IS PRIVATE
Used to share project info within your team. Learn more at [Share single environment](~/cli/teams/shared.md#sharing-projects-within-the-team).
### cli.json
> Manual edits okay: YES
> Add to version control: YES
Contains feature flag configuration for the project. If this file does not exist, it is created by Amplify CLI during `amplify init`. Environment specific feature flag overrides can also be defined in `cli.<environment name>.json`. If an environment specific file exists for the currently checked out environment, during `amplify env add` command the same file will be copied for the newly created environment as well. Learn more at [Feature flags](~/cli/reference/feature-flags.md).
## General Category Files
While each category plugin has some unique files, there are also some common files stored across all categories.
### \<category>-parameters.json
> Manual edits okay: NO
> Add to version control: YES
Stores the parameters selected during `amplify add <category>` so they can be used to populate answers during `amplify update <category>`. This file does NOT change the underlying category configuration; it is only used to populate answers in the walkthrough.
### parameters.json
> Manual edits okay: YES
> Add to version control: YES
Contains a JSON object that maps CloudFormation parameter names to values that will be passed to the CloudFormation template for the category. For example, if the CloudFormation template has the parameter:
```json
{
"Parameters": {
"RoleArn": {
"Type": "String",
"Default": "<default role ARN>"
}
}
}
```
And `parameters.json` contains
```json
{
"RoleArn": "<role ARN override>"
}
```
Then the value of "RoleArn" when the template is pushed will be "\<role ARN override\>".
## Function Category Files
### amplify.state
> Manual edits okay: NO
> Add to version control: YES
Contains internal metadata about how the CLI should build and invoke the function.
## AppSync API Category Files
### transform.conf.json
> Manual edits okay: NO
> Add to version control: YES
Contains configuration about how to interpret the GraphQL schema and transform it into AppSync resolvers. Run `amplify api update` to change API category configuration. | 41.620321 | 484 | 0.765772 | eng_Latn | 0.995009 |
43e87538445d6ebc678e4a9711ea47ae9c5059ab | 3,097 | md | Markdown | _posts/2017-03-24-Action.md | chensirup/chensirup.github.io | 9026c9263d66c36c2d962d3a066b8528ec413b7d | [
"MIT"
] | null | null | null | _posts/2017-03-24-Action.md | chensirup/chensirup.github.io | 9026c9263d66c36c2d962d3a066b8528ec413b7d | [
"MIT"
] | null | null | null | _posts/2017-03-24-Action.md | chensirup/chensirup.github.io | 9026c9263d66c36c2d962d3a066b8528ec413b7d | [
"MIT"
] | null | null | null | ---
layout: post
title: "人人都有拖延症"
date: 2017-03-24 23:33:33
categories: 成长
tags: 成长 拖延
author: 陈sir
excerpt: 拖延是一种病
---
* content
{:toc}
前天晚上在我们「团建」的时候,同学问了我一个问题:「你有没有遇到一些状况——明明知道很想做某一件事,或是必须做一件事,却迟迟不开始或是开始没几天就中断了,然后又会在某一时刻想起来,很想做,然后做没几天又中断了。比如我们女生想减肥这件事……」
有啊,就拿写作这件事来说,去年挑战了100多天以后,中断了,曾经两次痛定思痛下定决心要重新开始启动持续写作,只是没两天又中断了,后来,心里又无数次给自己加油鼓气,无数次暗下狠心,要「重生」,由于种种原因,最终失败。
现在回看,除了状态不好外,更重要的是根本没想清楚为什么要重新写作?有什么意义吗?写什么内容?更新频率如何?如何才能持续下去呢?等等这些问题都没有答案,行动最终失败。
另一件就是跑步。跑步曾经是我最喜欢的锻炼方式,每个月多的时候一个月跑100多公里,少的时候也有七八十公里。那时候的身体、精神状态确实好太多。去年大半年都不锻炼,一直在吃老本,期间如果不是厦门半马和全马全部抽中,在赛前自主训练了一段时间,但在全马前两周又中断了。全马过后,身体终于还是扛不住了。
之前在凌晨1点左右睡觉,6点左右都会自然醒来,二十多年的习惯,在去年大半年的熬夜、紊乱作息后,就像一栋大楼由于结构受损,瞬间倒塌,特别是年前年后的这两个多月,身体频频发出不好的信号,更多表现为每天睡不醒。也不再死磕自己,更多听从身体的声音,不强迫自己早起,只能更多地休息。
这个期间,非常焦虑,每天晚起后都有罪恶感,特别想回到之前规律的状态。想改变,却感觉无能为力,异常痛苦。
老本吃透了,终于够痛了,想要改变了。
直到后来,朋友告诉我,她每天23点前就睡了,我也请她支了一些招。后来,除特殊情况外,也能在23点前发完文章,然后收拾,不过一般都要24点左右了,滚去睡觉了,但至少比之前赶在24点前最后一刻发完文章,再收拾一下,就又凌晨1点了才睡觉要好很多。
在此之前总有一个限制性的信念,觉得自己每天就得忙到晚上10点后,才能开始坐下来写作。也得益好朋友的妙招,改变了作息,经过两周多的调整,上周竟然能在6点左右自然醒了,内心既欣喜又保持警惕。
所以,这几天可以早起去跑跑步了,尝试重新启动跑步。没有同伴的鼓励与支持,不知能持续多久。
回到前面的话题,对于我们很想做,却迟迟开始不了或是开始后,持续不下去的问题。我们可能很容易把这个归于「拖延症」,但其实又有些许的差异。
我们说「拖延症」,往往有一个最后期限,比如,周一早上要交作业了,周天晚上熬夜也要赶完;一项任务实在拖不了,晚上开夜车搞定它等等。
拖延存在最后时刻,知道如果不做,就真完不成了,搞不好工作都要丟了。拖延,是一种「病」,不过它也是在提醒我们该做事情的最后时刻,再不做就真的要挂。
如果是我们自身要成长、想要改变的一些事情,又不像工作这样有客观的硬性限制。这样的拖延就基本没有时间限制,会一直拖下去。
这样无限拖延的改变,迟迟开始不了或是开始了持续不了,可能会有几个原因 :
- **不够痛**
关于成长改变,有一个公式,可以很好地解释为什么我们想要改变却迟迟改变不了?
贝克哈德公式:**D×V×FS>RC**改变有一个公式
>Dissatisfaction×Vision×First Step > Resistance to Change
>对现状的不满×对未来的愿景×第一步行动 > 变革阻力
只要D、V、FS这三个因素有一个为0,那最终改变可能性也就为0。
我们常说,道理我都懂啊,我也知道减肥很重要,我也很想减肥。要特别注意的是,只是自己想要甚至认为很重要也没用,和D、V、FS这三个因素有本质区别。
虽说道理都懂,知道很重要,也知道改变了肯定有很多好处。只是对现状还很满意,干嘛要虐自己呢?又不是自虐狂。
所以,痛了自然就会改变。
- **找到改变的意义**
第二个因素是「对未来的愿景」,很多时候想着改变,可从来没有想过自己为什么要改变?改变后的自己是什么样?是否是自己想要成为的那样,如果不清楚,也很难会发生改变。
很多行动改变或是培养的习惯,回馈周期很长,不太可能有即时反馈。
比如同学提到的阅读,阅读是一个回报周期很长的行为,也许今天所阅读的,短期内都可能无用武之地,也不会有人给你点赞反馈,想要从阅读中获得即时的成就感特别难。(这里要排除那些新闻、八卦等可作为谈资的快餐式阅读)
跑步也是一个回报周期很长的习惯,今天、明天跑,能即时感受到腿的酸痛,其他的变化基本感受不到,更是一个长期的过程。
写作也是一样的,现在每天写着,也看不见是否今天比昨天有进步了,可能需要放大时间周期,今年和去年比,才会有明显的变化。
所以,要改变,就得先想想为什么要做?到时有什么意义,是未来想要的吗?
- **同侪力量**
再来看最后一个因素——「第一步」,就像同学和我说的,自己也已尝试开始了,跨出了第一步,只是没几天就中断了。为什么会这样?
一个人在改变,从改变的行动时间上来说确实比较灵活。但从持续的效果来看,极有可能随时中断的节奏,反正没有知道自己在改变,偷偷开始也可以偷偷结束,无压力。
**一个人走得快,但一群人走得远。**改变不要追求快,有时候**慢既是快。**
一个人可以走得快,不过一群人可以走得远。改变是一个持续积累的过程,前期快不一定最终结果就好。如果有一群同行的伙伴,不说相互监督,更多的会起到陪伴,相互鼓励往前再走一点点,再走一点点,效果慢慢显现。
回想之前能持续跑步,也是各种跑团有很多伙伴的陪伴,看到大家风雨无阻地在持续跑步,自然就会感受到压力,立马行动。
现在写作也是,给自己找了很多伙伴,不一定对方有写作,就找他们帮忙监督,效果也很好。
改变是一个长期的过程,需要D、V、FS三个因素肯定还不够。
- **改变行动自动执行**
最后,推荐一个思考或行为范式 :**If…then…**,如果…那么…
把改变行动具体场景化,让很多行为自然而然地执行。设定好行动的时间、地点、情景,只要到点就去做。
比如,朋友找我做21天的阅读习惯养成训练。设定的时间是晚上8点到9点阅读,阅读前会泡一杯热茶;最关键的是找到平时晚上8点在干嘛,如果现在到了8点要开始阅读,会有什么障碍?可能会在玩游戏、看电视、刷微信、微博等等,不管有多少,把能想到的全部列出来。
最后给自己设置行动 :
>如果到了晚上8点,那么我就去阅读;
如果到了晚上8点我在玩游戏,那么我就去阅读;
如果到了晚上8点我想看电视,那么我就去阅读;
如果到了晚上8点我在刷手机,那么我就去阅读;
……
不管有什么障碍因素,就自动添加到「如果…那么…」的范式中去,这样一来,就自动把阅读提高到了最高级,只要到点、到了具体场景下,行动自然就执行了。
总结一下,如果想要改变,想让改变行动自动执行,参考下面 :
- 对现状不满 : 谁痛谁改变,不够痛很难改变
- 对未来的愿景 : 明白自己想成为什么样的人,找到改变的意义
- 第一步行动 : 开始第一步不完美的行动,找到同侪力量
- 自动执行 : 用「If…then…」行动范式,改变自然发生
晚安
| 26.470085 | 146 | 0.825315 | zho_Hans | 0.559933 |
43e8d32e1abcb2a4ee8fc860d479aa8509e861a9 | 3,667 | md | Markdown | docs/extensibility/debugger/reference/idebugproperty3-getcustomviewerlist.md | aleffe61/visualstudio-docs.it-it | c499d4c129dfdd7d5853a8f539753058fc9ae5c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugproperty3-getcustomviewerlist.md | aleffe61/visualstudio-docs.it-it | c499d4c129dfdd7d5853a8f539753058fc9ae5c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/extensibility/debugger/reference/idebugproperty3-getcustomviewerlist.md | aleffe61/visualstudio-docs.it-it | c499d4c129dfdd7d5853a8f539753058fc9ae5c3 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IDebugProperty3::GetCustomViewerList | Microsoft Docs
ms.date: 11/04/2016
ms.topic: conceptual
f1_keywords:
- IDebugProperty3::GetCustomViewerList
helpviewer_keywords:
- IDebugProperty3::GetCustomViewerList
ms.assetid: 74490fd8-6f44-4618-beea-dab64961bb8a
author: gregvanl
ms.author: gregvanl
manager: douge
ms.workload:
- vssdk
ms.openlocfilehash: 4af30d678711047043ce0ff20f9a5d964d725a6d
ms.sourcegitcommit: 37fb7075b0a65d2add3b137a5230767aa3266c74
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 01/02/2019
ms.locfileid: "53852008"
---
# <a name="idebugproperty3getcustomviewerlist"></a>IDebugProperty3::GetCustomViewerList
Ottiene un elenco di visualizzatori personalizzati associati a questa proprietà.
## <a name="syntax"></a>Sintassi
```cpp
HRESULT GetCustomViewerList(
ULONG celtSkip,
ULONG celtRequested,
DEBUG_CUSTOM_VIEWER* rgViewers,
ULONG* pceltFetched
);
```
```csharp
int GetCustomViewerList(
uint celtSkip,
uint celtRequested,
DEBUG_CUSTOM_VIEWER[] rgViewers,
out uint pceltFetched
);
```
#### <a name="parameters"></a>Parametri
`celtSkip`
[in] Il numero di visualizzatori di ignorare.
`celtRequested`
[in] Il numero di visualizzatori da recuperare (specifica anche la dimensione del `rgViewers` matrice).
`rgViewers`
[in, out] Matrice di [DEBUG_CUSTOM_VIEWER](../../../extensibility/debugger/reference/debug-custom-viewer.md) strutture da compilare.
`pceltFetched`
[out] Il numero effettivo di visualizzatori restituito.
## <a name="return-value"></a>Valore restituito
Se ha esito positivo, restituisce `S_OK`; in caso contrario, restituisce un codice di errore.
## <a name="remarks"></a>Note
Per supportare i visualizzatori di tipo, questo metodo inoltra la chiamata per il [GetCustomViewerList](../../../extensibility/debugger/reference/ieevisualizerservice-getcustomviewerlist.md) (metodo). Se l'analizzatore di espressioni supporta anche i visualizzatori personalizzati per questo tipo di proprietà, questo metodo può aggiungere i visualizzatori personalizzati appropriati all'elenco.
Visualizzare [Visualizzatore di tipi e Visualizzatore personalizzato](../../../extensibility/debugger/type-visualizer-and-custom-viewer.md) per informazioni dettagliate sulle differenze tra i visualizzatori di tipi e visualizzatori personalizzati.
## <a name="example"></a>Esempio
Nell'esempio seguente viene illustrato come implementare questo metodo per un **CProperty** oggetto che espone le [IDebugProperty3](../../../extensibility/debugger/reference/idebugproperty3.md) interfaccia.
```cpp
STDMETHODIMP CProperty::GetCustomViewerList(ULONG celtSkip, ULONG celtRequested, DEBUG_CUSTOM_VIEWER* prgViewers, ULONG* pceltFetched)
{
if (NULL == prgViewers)
{
return E_POINTER;
}
if (GetVisualizerService())
{
return m_pIEEVisualizerService->GetCustomViewerList(celtSkip, celtRequested, prgViewers, pceltFetched);
}
else
{
return E_NOTIMPL;
}
}
```
## <a name="see-also"></a>Vedere anche
[IDebugProperty3](../../../extensibility/debugger/reference/idebugproperty3.md)
[DEBUG_CUSTOM_VIEWER](../../../extensibility/debugger/reference/debug-custom-viewer.md)
[GetCustomViewerList](../../../extensibility/debugger/reference/ieevisualizerservice-getcustomviewerlist.md)
[Visualizzatore di tipi e visualizzatore personalizzato](../../../extensibility/debugger/type-visualizer-and-custom-viewer.md) | 39.858696 | 398 | 0.724298 | ita_Latn | 0.831783 |
43e925661aaef2900198077a624c167ce2cedb97 | 12,617 | md | Markdown | _posts/2021-09-11-furthernmap.md | vg-1414/v31a.tech | 90834a8137948cee37ead0da1939f291bb251d7e | [
"MIT"
] | null | null | null | _posts/2021-09-11-furthernmap.md | vg-1414/v31a.tech | 90834a8137948cee37ead0da1939f291bb251d7e | [
"MIT"
] | null | null | null | _posts/2021-09-11-furthernmap.md | vg-1414/v31a.tech | 90834a8137948cee37ead0da1939f291bb251d7e | [
"MIT"
] | 1 | 2021-11-27T16:43:31.000Z | 2021-11-27T16:43:31.000Z | ---
title: "TryHackMe: Furthernmap"
categories: [writeups]
tags: [cybersecurity, tryhackme]
---
### Reference
<https://tryhackme.com/room/furthernmap>
<a id="org5b0e05f"></a>
# Deploy
The task here is to simply start the lab's VM. Done and done.
<a id="orgf864307"></a>
# Introduction
This task is a good summary of IP addresses, ports, scanning, and the
point of `nmap`, with no real tasks assigned. Moving on…
<a id="orgc72926b"></a>
# Nmap switches
This task is an exercise in consulting help files. I prefer `man`
piped to `grep` when I'm looking for something:
root@ip-10-10-204-56:~# man nmap | grep -i "syn scan"
Window scan, SYN scan, or FIN scan, may help resolve whether the port is open.
to solve every problem with the default SYN scan. Since Nmap is free, the only barrier to port
exception to this is the deprecated FTP bounce scan (-b). By default, Nmap performs a SYN Scan,
-sS (TCP SYN scan)
SYN scan is the default and most popular scan option for good reasons. It can be performed
completes TCP connections. SYN scan works against any compliant TCP stack rather than
TCP connect scan is the default TCP scan type when SYN scan is not an option. This is the
When SYN scan is available, it is usually a better choice. Nmap has less control over the
SYN scan does. Not only does this take longer and require more packets to obtain the same
SYN scan (-sS) to check both protocols during the same run.
There are quite a few questions here, all of which I'm able to query
the `man` page to answer. These are the most basic switches used in
`nmap`.
<a id="org3e8b2b0"></a>
# Scan types - Overview
This section is a summary of different types of port scans, including:
- `-sT`: TCP connect scan
- `-sS`: SYN scan (aka stealth aka half-open)
- `-sU`: UDP scan
- `-sN`: NULL scan
- `-sF`: FIN scan
- `-sX`: Xmas scan
<a id="org8d8897d"></a>
# Scan types - TCP connect scans
This includes an overview of the three-way TCP handshake, which this
particular scan (`-sT`) completes.
<a id="orgce898e2"></a>
# Scan types - SYN scans
For times when you have access to create raw packets and want to be
stealthy, this is a good option. The `-sS` scan interrupts the
three-way handshake process in an attempt to avoid detection.
<a id="orgd4f81ae"></a>
# Scan types - UDP scans
These scans (`-sU`) use the UDP protocol, which is handy if you have
hours you want to kill waiting for a scan to complete. :)
<a id="org20c8c68"></a>
# Scan types - NULL, FIN, and Xmas
These scans modify which flags are sent during TCP requests:
- NULL: no flags set
- FIN: only the Fin flag is set
- Xmas: Urg, Psh, and Fin are set
These scans can be handy for firewall evasion, provided there isn't an
IDS to deal with.
<a id="orgcac47cc"></a>
# Scan types - ICMP network scanning
This section covers ping scans with `-sn`. These are handy for
quickly trying to find hosts using standard ICMP pings.
<a id="orge203eb3"></a>
# NSE scripts - Overview
This is a brief introduction to `nmap` scripts, written in Lua, that
can be used to easily perform lots of different tasks.
<a id="orgc3b0b8e"></a>
# NSE scripts - Working with the NSE
This is mostly an introduction to the syntax of NSE, which is mostly
getting comfortable with `--script=<script-name>` and `--script-args`.
<a id="orgce69554"></a>
# NSE scripts - Searching for scripts
This shows a few ways to locate `nmap` scripts:
- By searching in the `/usr/share/nmap/scripts` directory with `ls`
- By searching the `scripts.db` file in the same directory with `grep`
<a id="orga12c941"></a>
# Firewall evasion
This section introduces a few handy switches for getting past firewalls:
- `-Pn`: treats all hosts as online without pinging them
- `-f`: used to fragment packets
- `--mtu <number>`: used to manipulate the MTU value
- `--scan-delay <time>`: used to add a delay between packets sent
- `--badsum`: used to generate an invalid checksum for packets
<a id="orgaf3c5aa"></a>
# Practical
Now I finally get to poke at a box. To start we verify that the
target does not respond to ping requests:
root@ip-10-10-204-56:~# ping 10.10.229.83
PING 10.10.229.83 (10.10.229.83) 56(84) bytes of data.
^C
--- 10.10.229.83 ping statistics ---
11 packets transmitted, 0 received, 100% packet loss, time 10237ms
Next, we perform an Xmas scan of the first 999 ports:
root@ip-10-10-204-56:~# nmap -sX -p 1-999 10.10.229.83 -vv
Starting Nmap 7.60 ( https://nmap.org ) at 2021-09-12 01:38 BST
Initiating ARP Ping Scan at 01:38
Scanning 10.10.229.83 [1 port]
Completed ARP Ping Scan at 01:38, 0.22s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 01:38
Completed Parallel DNS resolution of 1 host. at 01:38, 0.00s elapsed
Initiating XMAS Scan at 01:38
Scanning ip-10-10-229-83.eu-west-1.compute.internal (10.10.229.83) [999 ports]
Completed XMAS Scan at 01:39, 21.08s elapsed (999 total ports)
Nmap scan report for ip-10-10-229-83.eu-west-1.compute.internal (10.10.229.83)
Host is up, received arp-response (0.00011s latency).
All 999 scanned ports on ip-10-10-229-83.eu-west-1.compute.internal (10.10.229.83) are open|filtered because of 999 no-responses
MAC Address: 02:1E:28:E1:3E:A7 (Unknown)
Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 21.43 seconds
Raw packets sent: 1999 (79.948KB) | Rcvd: 1 (28B)
From the results, we see all 999 ports are `open|filtered` with a listed reason of `no-responses`.
Now we run a TCP SYN scan of the first 5000 ports, which shows that only `5` of the ports are listed as open.
root@ip-10-10-204-56:~# nmap -sS -p -5000 10.10.229.83 -vv
Starting Nmap 7.60 ( https://nmap.org ) at 2021-09-12 01:42 BST
Initiating ARP Ping Scan at 01:42
Scanning 10.10.229.83 [1 port]
Completed ARP Ping Scan at 01:42, 0.22s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 01:42
Completed Parallel DNS resolution of 1 host. at 01:42, 0.00s elapsed
Initiating SYN Stealth Scan at 01:42
Scanning ip-10-10-229-83.eu-west-1.compute.internal (10.10.229.83) [5000 ports]
Discovered open port 80/tcp on 10.10.229.83
Discovered open port 3389/tcp on 10.10.229.83
Discovered open port 21/tcp on 10.10.229.83
Discovered open port 53/tcp on 10.10.229.83
Discovered open port 135/tcp on 10.10.229.83
Increasing send delay for 10.10.229.83 from 0 to 5 due to 11 out of 32 dropped probes since last increase.
SYN Stealth Scan Timing: About 37.43% done; ETC: 01:44 (0:00:52 remaining)
Increasing send delay for 10.10.229.83 from 5 to 10 due to 11 out of 30 dropped probes since last increase.
SYN Stealth Scan Timing: About 68.63% done; ETC: 01:44 (0:00:39 remaining)
Completed SYN Stealth Scan at 01:45, 148.72s elapsed (5000 total ports)
Nmap scan report for ip-10-10-229-83.eu-west-1.compute.internal (10.10.229.83)
Host is up, received arp-response (0.00056s latency).
Scanned at 2021-09-12 01:42:49 BST for 149s
Not shown: 4995 filtered ports
Reason: 4995 no-responses
PORT STATE SERVICE REASON
21/tcp open ftp syn-ack ttl 128
53/tcp open domain syn-ack ttl 128
80/tcp open http syn-ack ttl 128
135/tcp open msrpc syn-ack ttl 128
3389/tcp open ms-wbt-server syn-ack ttl 128
MAC Address: 02:1E:28:E1:3E:A7 (Unknown)
Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 149.08 seconds
Raw packets sent: 15109 (664.780KB) | Rcvd: 124 (5.440KB)
Lastly, we run the `ftp-anon` script to verify that nmap is able to login anonymously on port 21.
root@ip-10-10-204-56:~# nmap --script=ftp-anon -vv 10.10.229.83
Starting Nmap 7.60 ( https://nmap.org ) at 2021-09-12 01:46 BST
NSE: Loaded 1 scripts for scanning.
NSE: Script Pre-scanning.
NSE: Starting runlevel 1 (of 1) scan.
Initiating NSE at 01:46
Completed NSE at 01:46, 0.00s elapsed
Initiating ARP Ping Scan at 01:46
Scanning 10.10.229.83 [1 port]
Completed ARP Ping Scan at 01:46, 0.22s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 01:46
Completed Parallel DNS resolution of 1 host. at 01:46, 0.01s elapsed
Initiating SYN Stealth Scan at 01:46
Scanning ip-10-10-229-83.eu-west-1.compute.internal (10.10.229.83) [1000 ports]
Discovered open port 135/tcp on 10.10.229.83
Discovered open port 21/tcp on 10.10.229.83
Discovered open port 53/tcp on 10.10.229.83
Discovered open port 80/tcp on 10.10.229.83
Discovered open port 3389/tcp on 10.10.229.83
Completed SYN Stealth Scan at 01:46, 17.15s elapsed (1000 total ports)
NSE: Script scanning 10.10.229.83.
NSE: Starting runlevel 1 (of 1) scan.
Initiating NSE at 01:46
Completed NSE at 01:47, 30.01s elapsed
Nmap scan report for ip-10-10-229-83.eu-west-1.compute.internal (10.10.229.83)
Host is up, received arp-response (0.0017s latency).
Scanned at 2021-09-12 01:46:41 BST for 48s
Not shown: 995 filtered ports
Reason: 995 no-responses
PORT STATE SERVICE REASON
21/tcp open ftp syn-ack ttl 128
| ftp-anon: Anonymous FTP login allowed (FTP code 230)
|_Can't get directory listing: TIMEOUT
53/tcp open domain syn-ack ttl 128
80/tcp open http syn-ack ttl 128
135/tcp open msrpc syn-ack ttl 128
3389/tcp open ms-wbt-server syn-ack ttl 128
MAC Address: 02:1E:28:E1:3E:A7 (Unknown)
NSE: Script Post-scanning.
NSE: Starting runlevel 1 (of 1) scan.
Initiating NSE at 01:47
Completed NSE at 01:47, 0.00s elapsed
Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 48.06 seconds
Raw packets sent: 3005 (132.204KB) | Rcvd: 20 (864B)
<a id="orgf694121"></a>
# Reference table
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup>
<col class="org-left" />
<col class="org-left" />
<col class="org-left" />
<col class="org-left" />
<col class="org-left" />
<col class="org-left" />
</colgroup>
<thead>
<tr>
<th scope="col" class="org-left">Scan type</th>
<th scope="col" class="org-left">Syntax</th>
<th scope="col" class="org-left">Flags set</th>
<th scope="col" class="org-left">Target response if open/filtered</th>
<th scope="col" class="org-left">Target response if closed</th>
<th scope="col" class="org-left">Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td class="org-left">Full TCP connect</td>
<td class="org-left">-sT</td>
<td class="org-left">SYN</td>
<td class="org-left">SYN/ACK</td>
<td class="org-left">RST</td>
<td class="org-left"> </td>
</tr>
<tr>
<td class="org-left">SYN/half-open/stealth</td>
<td class="org-left">-sS</td>
<td class="org-left">SYN</td>
<td class="org-left">SYN/ACK (attacker responds with RST)</td>
<td class="org-left">RST</td>
<td class="org-left"> </td>
</tr>
<tr>
<td class="org-left">NULL</td>
<td class="org-left">-sN</td>
<td class="org-left">None</td>
<td class="org-left">No response</td>
<td class="org-left">RST</td>
<td class="org-left"> </td>
</tr>
<tr>
<td class="org-left">Xmas</td>
<td class="org-left">-sX</td>
<td class="org-left">URG/PSH/FIN</td>
<td class="org-left">No response</td>
<td class="org-left">RST</td>
<td class="org-left"> </td>
</tr>
<tr>
<td class="org-left">FIN</td>
<td class="org-left">-sF</td>
<td class="org-left">FIN</td>
<td class="org-left">No response</td>
<td class="org-left"> </td>
<td class="org-left"> </td>
</tr>
<tr>
<td class="org-left">UDP</td>
<td class="org-left">-sU</td>
<td class="org-left"> </td>
<td class="org-left">No response</td>
<td class="org-left">ICMP: port unreachable</td>
<td class="org-left"> </td>
</tr>
<tr>
<td class="org-left">Ping sweep</td>
<td class="org-left">-sn</td>
<td class="org-left"> </td>
<td class="org-left">ICMP reply</td>
<td class="org-left"> </td>
<td class="org-left"> </td>
</tr>
<tr>
<td class="org-left">"All alive"</td>
<td class="org-left">-Pn</td>
<td class="org-left"> </td>
<td class="org-left"> </td>
<td class="org-left"> </td>
<td class="org-left"> </td>
</tr>
</tbody>
</table>
| 32.434447 | 132 | 0.670682 | eng_Latn | 0.919094 |