hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
eeeb8c8f70d83488bd7f4ec6189410d56de56dff | 665 | md | Markdown | README.md | NCU-nwlab/training-plan | 52be9924d3a7ef0e6a4ee5021b82052e927e22cb | [
"MIT"
] | null | null | null | README.md | NCU-nwlab/training-plan | 52be9924d3a7ef0e6a4ee5021b82052e927e22cb | [
"MIT"
] | null | null | null | README.md | NCU-nwlab/training-plan | 52be9924d3a7ef0e6a4ee5021b82052e927e22cb | [
"MIT"
] | null | null | null | # training-plan
NWlab newcomer task area (新手任務區)
Topics we currently plan to write about
## Common Tools
### Makefile
- [Write a Makefile with Me (跟我一起寫Makefile)](https://seisman.github.io/how-to-write-makefile/)
### CMake
- [CMake Hands-on Introduction (CMake 入門實戰)](https://www.hahack.com/codes/cmake/)
- CMake examples
- [CMake examples on GitHub](https://github.com/ttroy50/cmake-examples)
- [CMake examples e-book](https://sfumecjf.github.io/cmake-examples-Chinese/)
- [An Introduction to Modern CMake](https://cliutils.gitlab.io/modern-cmake/)
## Languages
Collecting a few introductory projects is enough here; full language tutorials would be far too much material.
- c
- golang
- c++ 14/17
## Network Stack
- *linux network stack*: a complete walkthrough of how packets are sent and received
- sk_buff
- linux socket usage & implementation
- linux tcp/udp protocol implementation
- SNAT/DNAT
- eBPF with network stack
- conntrack
eeec8bf6ab8a1f469ea860cc60cb81dabf92f835 | 1,276 | md | Markdown | problems/intersection-of-two-arrays/README.md | ecgan/leetcode | cd77308f4ab60bc6a0e9ec6796c075bf616d7cf8 | [
"MIT"
] | 9 | 2019-08-15T07:52:20.000Z | 2022-03-26T07:50:01.000Z | problems/intersection-of-two-arrays/README.md | ecgan/leetcode | cd77308f4ab60bc6a0e9ec6796c075bf616d7cf8 | [
"MIT"
] | 9 | 2019-12-29T16:50:39.000Z | 2021-05-29T12:23:46.000Z | problems/intersection-of-two-arrays/README.md | ecgan/leetcode | cd77308f4ab60bc6a0e9ec6796c075bf616d7cf8 | [
"MIT"
] | 2 | 2021-08-11T19:59:02.000Z | 2021-10-17T02:43:23.000Z | # Intersection of Two Arrays
[Link to LeetCode page](https://leetcode.com/problems/intersection-of-two-arrays/)
Difficulty: Easy
Topics: Hash table, two pointers, binary search, sort, set.
## Solution Explanation
We convert the arrays into two [Set](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set) objects: a short set and a long set. This removes duplicate values from each array.
Then we iterate through the short set and check whether each value exists in the long set. Since it is a Set object, each lookup runs in constant O(1) time.
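As a sketch, the approach above can be written in JavaScript like this (the `intersection` function name and the swap step are our own choices):

```javascript
// Intersection of two arrays using two Sets, as described above.
// Converting to Sets removes duplicates; iterating the shorter set
// and probing the longer one keeps each lookup at O(1).
function intersection(nums1, nums2) {
  let shortSet = new Set(nums1);
  let longSet = new Set(nums2);
  // Swap so we always iterate the smaller set.
  if (shortSet.size > longSet.size) {
    [shortSet, longSet] = [longSet, shortSet];
  }
  const result = [];
  for (const value of shortSet) {
    if (longSet.has(value)) {
      result.push(value);
    }
  }
  return result;
}
```

For example, `intersection([4, 9, 5], [9, 4, 9, 8, 4])` returns the unique common values 4 and 9, in either order.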
## Complexity Analysis
Assume m is the length of nums1 and n is the length of nums2.
Time complexity: O(m + n) because we go through every element in both arrays once during the set creation.
Space complexity: O(m + n) because in the worst-case scenario we create two sets of length m and n when both arrays contain entirely unique values.
## Tests
In the problem description on the LeetCode page, it is noted that:
- Each element in the result must be unique.
- The result can be in any order.
Because of this, the assertions in our tests have to take the following form:
```javascript
expect(result).toHaveLength(2)
expect(result).toContain(9)
expect(result).toContain(4)
```
eeec8efcbb818bd59f5eba24d331938b71cd9951 | 3,278 | md | Markdown | docs/outlook/mapi/iablogon-getlasterror.md | ManuSquall/office-developer-client-docs.fr-FR | 5c3e7961c204833485b8fe857dc7744c12658ec5 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-08-15T11:25:43.000Z | 2021-08-15T11:25:43.000Z | docs/outlook/mapi/iablogon-getlasterror.md | ManuSquall/office-developer-client-docs.fr-FR | 5c3e7961c204833485b8fe857dc7744c12658ec5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/outlook/mapi/iablogon-getlasterror.md | ManuSquall/office-developer-client-docs.fr-FR | 5c3e7961c204833485b8fe857dc7744c12658ec5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: IABLogonGetLastError
manager: soliver
ms.date: 11/16/2014
ms.audience: Developer
ms.topic: reference
ms.prod: office-online-server
localization_priority: Normal
api_name:
- IABLogon.GetLastError
api_type:
- COM
ms.assetid: d157e29e-7731-4e47-b4a7-e8622b223001
description: 'Last modified: Saturday, July 23, 2011'
ms.openlocfilehash: 311299b00143667b3f2fb22bd7be6c3a52c7141d
ms.sourcegitcommit: 8657170d071f9bcf680aba50b9c07f2a4fb82283
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 04/28/2019
ms.locfileid: "33434248"
---
# <a name="iablogongetlasterror"></a>IABLogon::GetLastError
**Applies to**: Outlook 2013 | Outlook 2016
Returns a [MAPIERROR](mapierror.md) structure that contains information about the previous address book provider error.
```cpp
HRESULT GetLastError(
HRESULT hResult,
ULONG ulFlags,
LPMAPIERROR FAR * lppMAPIError
);
```
## <a name="parameters"></a>Parameters
_hResult_
> [in] Handle to the error value generated in the previous method call.
_ulFlags_
> [in] A bitmask of flags that controls the type of strings returned. The following flag can be set:
MAPI_UNICODE
> The strings in the **MAPIERROR** structure returned in the _lppMAPIError_ parameter are in Unicode format. If MAPI_UNICODE is not set, the strings are in ANSI format.
_lppMAPIError_
> [out] Pointer to a pointer to a **MAPIERROR** structure that contains the version, component, and context information for the error. The _lppMAPIError_ parameter can be set to NULL if the provider cannot supply a **MAPIERROR** structure with the appropriate information.
## <a name="return-value"></a>Return value
S_OK
> The call succeeded and returned the expected value or values.
MAPI_E_BAD_CHARWIDTH
> The MAPI_UNICODE flag was set and the address book provider does not support Unicode, or MAPI_UNICODE was not set and the address book provider supports only Unicode.
## <a name="remarks"></a>Remarks
Address book providers implement the **GetLastError** method to supply information about an earlier method call that failed. Callers can give their users detailed information about the error by including the data from the **MAPIERROR** structure in a dialog box.
## <a name="notes-to-callers"></a>Notes to callers
You can use the **MAPIERROR** structure pointed to by the _lppMAPIError_ parameter only if the address book provider supplies the structure and only if **GetLastError** returns S_OK. Sometimes the address book provider cannot determine what the last error was, or has nothing more to report about it. In that case, the provider returns a pointer to NULL in _lppMAPIError_ instead.
For more information about the **GetLastError** method, see [MAPI Extended Errors](mapi-extended-errors.md).
## <a name="see-also"></a>See also
[MAPIERROR](mapierror.md)
[MAPIFreeBuffer](mapifreebuffer.md)
[IABLogon : IUnknown](iablogoniunknown.md)
eeec8f35882dd3868d6702f338b68d18499e558f | 12,688 | md | Markdown | articles/search/search-monitor-usage.md | YulelogPagoda/azure-docs | 467b6399197f039391e4091036a468bdbebf64d6 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-10-24T13:15:24.000Z | 2019-10-24T13:15:24.000Z | articles/search/search-monitor-usage.md | YulelogPagoda/azure-docs | 467b6399197f039391e4091036a468bdbebf64d6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/search/search-monitor-usage.md | YulelogPagoda/azure-docs | 467b6399197f039391e4091036a468bdbebf64d6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Monitor resource usage and query metrics for a search service - Azure Search
description: Enable logging, get query activity metrics, resource usage, and other system data from an Azure Search service.
author: HeidiSteen
manager: nitinme
tags: azure-portal
services: search
ms.service: search
ms.topic: conceptual
ms.date: 05/16/2019
ms.author: heidist
---
# Monitor resource consumption and query activity in Azure Search
In the Overview page of your Azure Search service, you can view system data about resource usage, query metrics, and how much quota is available to create more indexes, indexers, and data sources. You can also use the portal to configure log analytics or another resource used for persistent data collection.
Setting up logs is useful for self-diagnostics and preserving operational history. Internally, logs exist on the backend for a short period of time, sufficient for investigation and analysis if you file a support ticket. If you want control over and access to log information, you should set up one of the solutions described in this article.
In this article, learn about your monitoring options, how to enable logging and log storage, and how to view log contents.
## Metrics at a glance
**Usage** and **Monitoring** sections built into the Overview page report out on resource consumption and query execution metrics. This information becomes available as soon as you start using the service, with no configuration required. This page is refreshed every few minutes. If you are finalizing decisions about [which tier to use for production workloads](search-sku-tier.md), or whether to [adjust the number of active replicas and partitions](search-capacity-planning.md), these metrics can help you with those decisions by showing you how quickly resources are consumed and how well the current configuration handles the existing load.
The **Usage** tab shows you resource availability relative to current [limits](search-limits-quotas-capacity.md). The following illustration is for the free service, which is capped at 3 objects of each type and 50 MB of storage. A Basic or Standard service has higher limits, and if you increase the partition counts, maximum storage goes up proportionally.

## Queries per second (QPS) and other metrics
The **Monitoring** tab shows moving averages for metrics like search *Queries Per Second* (QPS), aggregated per minute.
*Search latency* is the amount of time the search service needed to process search queries, aggregated per minute. *Throttled search queries percentage* (not shown) is the percentage of search queries that were throttled, also aggregated per minute.
These numbers are approximate and are intended to give you a general idea of how well your system is servicing requests. Actual QPS may be higher or lower than the number reported in the portal.

## Activity logs
The **Activity log** collects information from Azure Resource Manager. Examples of information found in the Activity log include creating or deleting a service, updating a resource group, checking for name availability, or getting a service access key to handle a request.
You can access the **Activity log** from the left-navigation pane, or from Notifications in the top window command bar, or from the **Diagnose and solve problems** page.
For in-service tasks like creating an index or deleting a data source, you'll see generic notifications like "Get Admin Key" for each request, but not the specific action itself. For this level of information, you must enable an add-on monitoring solution.
## Add-on monitoring solutions
Azure Search does not store any data beyond the objects it manages, which means log data has to be stored externally. You can configure any of the resources below if you want to persist log data.
The following table compares options for storing logs and adding in-depth monitoring of service operations and query workloads through Application Insights.
| Resource | Used for |
|----------|----------|
| [Azure Monitor logs](https://docs.microsoft.com/azure/azure-monitor/log-query/log-query-overview) | Logged events and query metrics, based on the schemas below. Events are logged to a Log Analytics workspace. You can run queries against a workspace to return detailed information from the log. For more information, see [Get started with Azure Monitor logs](https://docs.microsoft.com/azure/azure-monitor/learn/tutorial-viewdata) |
| [Blob storage](https://docs.microsoft.com/azure/storage/blobs/storage-blobs-overview) | Logged events and query metrics, based on the schemas below. Events are logged to a Blob container and stored in JSON files. Use a JSON editor to view file contents.|
| [Event Hub](https://docs.microsoft.com/azure/event-hubs/) | Logged events and query metrics, based on the schemas documented in this article. Choose this as an alternative data collection service for very large logs. |
Both Azure Monitor logs and Blob storage are available as a free service so that you can try it out at no charge for the lifetime of your Azure subscription. Application Insights is free to sign up and use as long as application data size is under certain limits (see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/) for details).
The next section walks you through the steps of enabling and using Azure Blob storage to collect and access log data created by Azure Search operations.
## Enable logging
Logging for indexing and query workloads is off by default and depends on add-on solutions for both logging infrastructure and long-term external storage. By itself, the only persisted data in Azure Search are the objects it creates and manages, so logs must be stored elsewhere.
In this section, you'll learn how to use Blob storage to store logged events and metrics data.
1. [Create a storage account](https://docs.microsoft.com/azure/storage/common/storage-quickstart-create-account) if you don't already have one. You can place it in the same resource group as Azure Search to simplify clean up later if you want to delete all resources used in this exercise.
Your storage account must exist in the same region as Azure Search.
2. Open your search service Overview page. In the left-navigation pane, scroll down to **Monitoring** and click **Enable Monitoring**.

3. Choose the data you want to export: Logs, Metrics or both. You can copy it to a storage account, send it to an event hub or export it to Azure Monitor logs.
For archival to Blob storage, only the storage account must exist. Containers and blobs will be created as-needed when log data is exported.

4. Save the profile.
5. Test logging by creating or deleting objects (creates log events) and by submitting queries (generates metrics).
Logging is enabled once you save the profile. Containers are only created when there is an activity to log or measure. When the data is copied to a storage account, the data is formatted as JSON and placed in two containers:
* insights-logs-operationlogs: for search traffic logs
* insights-metrics-pt1m: for metrics
**It takes one hour before the containers appear in Blob storage. There is one blob per hour, per container.**
You can use [Visual Studio Code](#download-and-open-in-visual-studio-code) or another JSON editor to view the files.
### Example path
```
resourceId=/subscriptions/<subscriptionID>/resourcegroups/<resourceGroupName>/providers/microsoft.search/searchservices/<searchServiceName>/y=2018/m=12/d=25/h=01/m=00/name=PT1H.json
```
## Log schema
Blobs containing your search service traffic logs are structured as described in this section. Each blob has one root object called **records** containing an array of log objects. Each blob contains records for all the operations that took place during the same hour.
| Name | Type | Example | Notes |
| --- | --- | --- | --- |
| time |datetime |"2018-12-07T00:00:43.6872559Z" |Timestamp of the operation |
| resourceId |string |"/SUBSCRIPTIONS/11111111-1111-1111-1111-111111111111/<br/>RESOURCEGROUPS/DEFAULT/PROVIDERS/<br/> MICROSOFT.SEARCH/SEARCHSERVICES/SEARCHSERVICE" |Your ResourceId |
| operationName |string |"Query.Search" |The name of the operation |
| operationVersion |string |"2019-05-06" |The api-version used |
| category |string |"OperationLogs" |constant |
| resultType |string |"Success" |Possible values: Success or Failure |
| resultSignature |int |200 |HTTP result code |
| durationMS |int |50 |Duration of the operation in milliseconds |
| properties |object |see the following table |Object containing operation-specific data |
**Properties schema**
| Name | Type | Example | Notes |
| --- | --- | --- | --- |
| Description |string |"GET /indexes('content')/docs" |The operation's endpoint |
| Query |string |"?search=AzureSearch&$count=true&api-version=2019-05-06" |The query parameters |
| Documents |int |42 |Number of documents processed |
| IndexName |string |"testindex" |Name of the index associated with the operation |
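Assembled from the example values in the two tables above, a single blob in insights-logs-operationlogs looks roughly like the following (the **records** array wraps one object per operation; the values here are illustrative):

```json
{
  "records": [
    {
      "time": "2018-12-07T00:00:43.6872559Z",
      "resourceId": "/SUBSCRIPTIONS/11111111-1111-1111-1111-111111111111/RESOURCEGROUPS/DEFAULT/PROVIDERS/MICROSOFT.SEARCH/SEARCHSERVICES/SEARCHSERVICE",
      "operationName": "Query.Search",
      "operationVersion": "2019-05-06",
      "category": "OperationLogs",
      "resultType": "Success",
      "resultSignature": 200,
      "durationMS": 50,
      "properties": {
        "Description": "GET /indexes('content')/docs",
        "Query": "?search=AzureSearch&$count=true&api-version=2019-05-06",
        "Documents": 42,
        "IndexName": "testindex"
      }
    }
  ]
}
```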
## Metrics schema
Metrics are captured for query requests.
| Name | Type | Example | Notes |
| --- | --- | --- | --- |
| resourceId |string |"/SUBSCRIPTIONS/11111111-1111-1111-1111-111111111111/<br/>RESOURCEGROUPS/DEFAULT/PROVIDERS/<br/>MICROSOFT.SEARCH/SEARCHSERVICES/SEARCHSERVICE" |your resource ID |
| metricName |string |"Latency" |the name of the metric |
| time |datetime |"2018-12-07T00:00:43.6872559Z" |the operation's timestamp |
| average |int |64 |The average value of the raw samples in the metric time interval |
| minimum |int |37 |The minimum value of the raw samples in the metric time interval |
| maximum |int |78 |The maximum value of the raw samples in the metric time interval |
| total |int |258 |The total value of the raw samples in the metric time interval |
| count |int |4 |The number of raw samples used to generate the metric |
| timegrain |string |"PT1M" |The time grain of the metric in ISO 8601 |
All metrics are reported in one-minute intervals. Every metric exposes minimum, maximum and average values per minute.
For the SearchQueriesPerSecond metric, minimum is the lowest value for search queries per second that was registered during that minute. The same applies to the maximum value. Average is the aggregate across the entire minute.
Consider this scenario during one minute: one second of high load that is the maximum for SearchQueriesPerSecond, followed by 58 seconds of average load, and finally one second with only one query, which is the minimum.
For ThrottledSearchQueriesPercentage, minimum, maximum, average and total, all have the same value: the percentage of search queries that were throttled, from the total number of search queries during one minute.
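To make the aggregation concrete, here is a small JavaScript sketch (our own illustration, not Azure Monitor code) of how the per-minute fields relate to 60 one-second samples. It mirrors the scenario above, assuming the "average load" seconds run at 10 queries per second:

```javascript
// Illustrative only: how minimum/maximum/average/total/count relate
// for one minute of per-second SearchQueriesPerSecond samples.
function aggregateMinute(samples) {
  const total = samples.reduce((sum, v) => sum + v, 0);
  return {
    minimum: Math.min(...samples),
    maximum: Math.max(...samples),
    average: total / samples.length,
    total: total,
    count: samples.length,
    timegrain: "PT1M"
  };
}

// The scenario from the text: 1 second of peak load (100 QPS is an
// assumed value), 58 seconds at 10 QPS, and 1 second with one query.
const minute = [100, ...Array(58).fill(10), 1];
const metric = aggregateMinute(minute);
// metric.maximum === 100, metric.minimum === 1
```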
## Download and open in Visual Studio Code
You can use any JSON editor to view the log file. If you don't have one, we recommend [Visual Studio Code](https://code.visualstudio.com/download).
1. In Azure portal, open your Storage account.
2. In the left-navigation pane, click **Blobs**. You should see **insights-logs-operationlogs** and **insights-metrics-pt1m**. These containers are created by Azure Search when the log data is exported to Blob storage.
3. Click down the folder hierarchy until you reach the .json file. Use the context-menu to download the file.
Once the file is downloaded, open it in a JSON editor to view the contents.
## Use system APIs
Both the Azure Search REST API and the .NET SDK provide programmatic access to service metrics, index and indexer information, and document counts.
* [Get Services Statistics](/rest/api/searchservice/get-service-statistics)
* [Get Index Statistics](/rest/api/searchservice/get-index-statistics)
* [Count Documents](/rest/api/searchservice/count-documents)
* [Get Indexer Status](/rest/api/searchservice/get-indexer-status)
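As a sketch of calling one of these from code, the request below targets the service statistics endpoint; the service name and admin key are placeholders, and the endpoint shape follows the Get Service Statistics reference linked above:

```javascript
// Build a "Get Service Statistics" REST request for an Azure Search
// service. An admin api-key is expected; "<service-name>" is yours.
function buildServiceStatisticsRequest(serviceName, apiKey) {
  return {
    method: "GET",
    url: `https://${serviceName}.search.windows.net/servicestats?api-version=2019-05-06`,
    headers: {
      "api-key": apiKey,
      "Content-Type": "application/json"
    }
  };
}

// e.g. hand the result to fetch() or any HTTP client:
// const req = buildServiceStatisticsRequest("my-service", "<admin-key>");
// const res = await fetch(req.url, { method: req.method, headers: req.headers });
```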
To enable using PowerShell or the Azure CLI, see the documentation [here](https://docs.microsoft.com/azure/azure-monitor/platform/diagnostic-logs-overview).
## Next steps
[Manage your Search service on Microsoft Azure](search-manage.md) for more information on service administration and [Performance and optimization](search-performance-optimization.md) for tuning guidance.
eeece856070127c1ca55627a9f874bd1d72b2d63 | 789 | md | Markdown | docs/debugger/debug-interface-access/idialoadcallback2-restrictdbgaccess.md | icnocop/visualstudio-docs | 61ee799c65dc6ccd0559e7872e168ab75387ed96 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-02-12T08:46:10.000Z | 2021-02-12T08:46:10.000Z | docs/debugger/debug-interface-access/idialoadcallback2-restrictdbgaccess.md | icnocop/visualstudio-docs | 61ee799c65dc6ccd0559e7872e168ab75387ed96 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/debugger/debug-interface-access/idialoadcallback2-restrictdbgaccess.md | icnocop/visualstudio-docs | 61ee799c65dc6ccd0559e7872e168ab75387ed96 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-02-21T21:24:15.000Z | 2021-02-21T21:24:15.000Z | ---
title: "IDiaLoadCallback2::RestrictDBGAccess | Microsoft Docs"
ms.date: "11/04/2016"
ms.topic: "reference"
dev_langs:
- "C++"
helpviewer_keywords:
- "IDiaLoadCallback2::RestrictDBGAccess method"
ms.assetid: 63b67a93-2910-4fff-aa70-6b2eaa08e5c8
author: "mikejo5000"
ms.author: "mikejo"
manager: jmartens
ms.workload:
- "multiple"
---
# IDiaLoadCallback2::RestrictDBGAccess
Determines whether looking for debug information from .dbg files is allowed.
## Syntax
```C++
HRESULT RestrictDBGAccess();
```
## Return Value
If successful, returns `S_OK`; otherwise, returns an error code.
## Remarks
Any return value other than `S_OK` to prevent looking for debug information from .dbg files.
## See also
- [IDiaLoadCallback2](../../debugger/debug-interface-access/idialoadcallback2.md) | 24.65625 | 93 | 0.750317 | yue_Hant | 0.35753 |
eeed22c01392ebbbd0b64b02997a5a4b82a1e925 | 3,251 | md | Markdown | docs/content/releases/4.5.1.md | marekhanus/postgres-operator | b066320243b55d340b5af55313cdc62857ae887e | [
"Apache-2.0"
] | null | null | null | docs/content/releases/4.5.1.md | marekhanus/postgres-operator | b066320243b55d340b5af55313cdc62857ae887e | [
"Apache-2.0"
] | null | null | null | docs/content/releases/4.5.1.md | marekhanus/postgres-operator | b066320243b55d340b5af55313cdc62857ae887e | [
"Apache-2.0"
] | 1 | 2021-04-25T01:33:02.000Z | 2021-04-25T01:33:02.000Z | ---
title: "4.5.1"
date:
draft: false
weight: 69
---
Crunchy Data announces the release of the PostgreSQL Operator 4.5.1 on November 13, 2020.
The PostgreSQL Operator is released in conjunction with the [Crunchy Container Suite](https://github.com/CrunchyData/crunchy-containers/).
The PostgreSQL Operator 4.5.1 release includes the following software version upgrades:
- [PostgreSQL](https://www.postgresql.org) is now at versions 13.1, 12.5, 11.10, 10.15, 9.6.20, and 9.5.24.
- [Patroni](https://patroni.readthedocs.io/) is now at version 2.0.1.
- PL/Perl can now be used in the PostGIS-enabled containers.
## Changes
- Simplified creation of a PostgreSQL cluster from a `pgcluster` resource. A user no longer has to provide a pgBackRest repository Secret: the Postgres Operator will now automatically generate this.
- The exposed ports for Services associated with a cluster is now available from the `pgo show cluster` command.
- If the `pgo-config` ConfigMap is not created during the installation of the Postgres Operator, the Postgres Operator will generate one when it initializes.
- Providing a value for `pgo_admin_password` in the installer is now optional. If no value is provided, the password for the initial administrative user is randomly generated.
- Added an example for how to create a PostgreSQL cluster that uses S3 for pgBackRest backups via a custom resource.
## Fixes
- Fix readiness check for a standby leader. Previously, the standby leader would not report as ready, even though it was. Reported by Alec Rooney (@alrooney).
- Proper determination of whether a `pgcluster` custom resource creation has been processed by its corresponding Postgres Operator controller. This prevents the custom resource from being run through the creation logic multiple times.
- Prevent `initdb` (cluster reinitialization) from occurring if the PostgreSQL container cannot initialize while bootstrapping from an existing PGDATA directory.
- Fix issue with UBI 8 / CentOS 8 when running a pgBackRest bootstrap or restore job, where duplicate "repo types" could be set. Specifically, the ensures the name of the repo type is set via the `PGBACKREST_REPO1_TYPE` environmental variable. Reported by Alec Rooney (@alrooney).
- Ensure external WAL and Tablespace PVCs are fully recreated during a restore. Reported by (@aurelien43).
- Ensure `pgo show backup` will work regardless of state of any of the PostgreSQL clusters. This pulls the information directly from the pgBackRest Pod itself. Reported by (@saltenhub).
- Ensure that sidecars (e.g. metrics collection, pgAdmin 4, pgBouncer) are deployable when using the PostGIS-enabled PostgreSQL image. Reported by Jean-Denis Giguère (@jdenisgiguere).
- Allow for special characters in pgBackRest environmental variables. Reported by (@SockenSalat).
- Ensure password for the `pgbouncer` administrative user stays synchronized between an existing Kubernetes Secret and PostgreSQL should the pgBouncer be recreated.
- When uninstalling an instance of the Postgres Operator in a Kubernetes cluster that has multiple instances of the Postgres Operator, ensure that only the requested instance to be uninstalled is the one that's uninstalled.
- The logger no longer defaults to using a log level of `DEBUG`.
eeed757c19c6236c3cdaf5ec99364b52a4bdbcb4 | 29,680 | md | Markdown | etc/api/browser-ui.api.md | devmotiramani/jsplumb | ba0f68b1ff5aa86ef4dbd0c0390e9754ad557685 | [
"MIT"
] | 1 | 2021-12-16T02:25:26.000Z | 2021-12-16T02:25:26.000Z | etc/api/browser-ui.api.md | devmotiramani/jsplumb | ba0f68b1ff5aa86ef4dbd0c0390e9754ad557685 | [
"MIT"
] | null | null | null | etc/api/browser-ui.api.md | devmotiramani/jsplumb | ba0f68b1ff5aa86ef4dbd0c0390e9754ad557685 | [
"MIT"
] | null | null | null | ## API Report File for "@jsplumb/browser-ui"
> Do not edit this file. It is a report generated by [API Extractor](https://api-extractor.com/).
```ts
import { AbstractConnector } from '@jsplumb/core';
import { BehaviouralTypeDescriptor } from '@jsplumb/core';
import { BoundingBox } from '@jsplumb/util';
import { Component } from '@jsplumb/core';
import { Connection } from '@jsplumb/core';
import { DeleteConnectionOptions } from '@jsplumb/core';
import { Endpoint } from '@jsplumb/core';
import { Extents } from '@jsplumb/util';
import { Grid } from '@jsplumb/util';
import { JsPlumbDefaults } from '@jsplumb/core';
import { jsPlumbElement } from '@jsplumb/core';
import { JsPlumbInstance } from '@jsplumb/core';
import { LabelOverlay } from '@jsplumb/core';
import { Overlay } from '@jsplumb/core';
import { PaintStyle } from '@jsplumb/common';
import { PointXY } from '@jsplumb/util';
import { RedrawResult } from '@jsplumb/core';
import { Size } from '@jsplumb/util';
import { SourceSelector } from '@jsplumb/core';
import { TypeDescriptor } from '@jsplumb/core';
import { UIGroup } from '@jsplumb/core';
// @public (undocumented)
export function addClass(el: Element | NodeListOf<Element>, clazz: string): void;
// @public (undocumented)
export const ATTRIBUTE_CONTAINER = "data-jtk-container";
// @public (undocumented)
export const ATTRIBUTE_GROUP_CONTENT = "data-jtk-group-content";
// @public (undocumented)
export const ATTRIBUTE_JTK_ENABLED = "data-jtk-enabled";
// @public (undocumented)
export const ATTRIBUTE_JTK_SCOPE = "data-jtk-scope";
// @public (undocumented)
export interface BeforeStartEventParams extends DragStartEventParams {
}
// @public (undocumented)
export interface BrowserJsPlumbDefaults extends JsPlumbDefaults<Element> {
dragOptions?: DragOptions;
elementsDraggable?: boolean;
// (undocumented)
managedElementsSelector?: string;
}
// @public
export class BrowserJsPlumbInstance extends JsPlumbInstance<ElementType> {
constructor(_instanceIndex: number, defaults?: BrowserJsPlumbDefaults);
addClass(el: Element | NodeListOf<Element>, clazz: string): void;
// @internal (undocumented)
addConnectorClass(connector: AbstractConnector, clazz: string): void;
addDragFilter(filter: Function | string, exclude?: boolean): void;
// @internal (undocumented)
addEndpointClass(ep: Endpoint, c: string): void;
// (undocumented)
addOverlayClass(o: Overlay, clazz: string): void;
// (undocumented)
addSourceSelector(selector: string, params?: BehaviouralTypeDescriptor, exclude?: boolean): SourceSelector;
addToDragGroup(spec: DragGroupSpec, ...els: Array<Element>): void;
addToDragSelection(...el: Array<Element>): void;
// @internal (undocumented)
_appendElement(el: Element, parent: Element): void;
// @internal (undocumented)
applyConnectorType(connector: AbstractConnector, t: TypeDescriptor): void;
// @internal (undocumented)
applyEndpointType<C>(ep: Endpoint, t: TypeDescriptor): void;
clearDragSelection(): void;
// (undocumented)
_connectorClick: Function;
// (undocumented)
_connectorContextmenu: Function;
// (undocumented)
_connectorDblClick: Function;
// (undocumented)
_connectorDblTap: Function;
// (undocumented)
_connectorMousedown: Function;
// (undocumented)
_connectorMouseout: Function;
// (undocumented)
_connectorMouseover: Function;
// (undocumented)
_connectorMouseup: Function;
// (undocumented)
_connectorTap: Function;
consume(e: Event, doNotPreventDefault?: boolean): void;
// @internal (undocumented)
deleteConnection(connection: Connection, params?: DeleteConnectionOptions): boolean;
destroy(): void;
// @internal (undocumented)
destroyConnector(connection: Connection): void;
// @internal (undocumented)
destroyEndpoint(ep: Endpoint): void;
// (undocumented)
destroyOverlay(o: Overlay): void;
// (undocumented)
draggingClass: string;
// Warning: (ae-forgotten-export) The symbol "DragManager" needs to be exported by the entry point index.d.ts
//
// (undocumented)
dragManager: DragManager;
// (undocumented)
dragSelectClass: string;
// (undocumented)
drawOverlay(o: Overlay, component: any, paintStyle: PaintStyle, absolutePosition?: PointXY): any;
// (undocumented)
_elementClick: Function;
// (undocumented)
_elementContextmenu: Function;
// (undocumented)
_elementDblTap: Function;
// (undocumented)
elementDraggingClass: string;
// (undocumented)
_elementMousedown: Function;
// (undocumented)
_elementMouseenter: Function;
// (undocumented)
_elementMouseexit: Function;
// (undocumented)
_elementMousemove: Function;
// (undocumented)
_elementMouseup: Function;
elementsDraggable: boolean;
// (undocumented)
_elementTap: Function;
// (undocumented)
_endpointClick: Function;
// (undocumented)
_endpointDblClick: Function;
// (undocumented)
_endpointMousedown: Function;
// (undocumented)
_endpointMouseout: Function;
// (undocumented)
_endpointMouseover: Function;
// (undocumented)
_endpointMouseup: Function;
// (undocumented)
eventManager: EventManager;
// @internal (undocumented)
_getAssociatedElements(el: Element): Array<Element>;
getAttribute(el: Element, name: string): string;
getClass(el: Element): string;
// @internal (undocumented)
getConnectorClass(connector: AbstractConnector): string;
// @internal (undocumented)
getEndpointClass(ep: Endpoint): string;
// @internal
getGroupContentArea(group: UIGroup<any>): ElementType["E"];
// @internal
getOffset(el: Element): PointXY;
// @internal
getOffsetRelativeToRoot(el: Element): PointXY;
// @internal
getSelector(ctx: string | Element, spec?: string): ArrayLike<jsPlumbDOMElement>;
// @internal
getSize(el: Element): Size;
// @internal
getStyle(el: Element, prop: string): any;
hasClass(el: Element, clazz: string): boolean;
// (undocumented)
hoverClass: string;
// (undocumented)
hoverSourceClass: string;
// (undocumented)
hoverTargetClass: string;
// (undocumented)
_instanceIndex: number;
isDraggable(el: Element): boolean;
// (undocumented)
managedElementsSelector: string;
off(el: Document | Element | NodeListOf<Element>, event: string, callback: Function): this;
on(el: Document | Element | NodeListOf<Element>, event: string, callbackOrSelector: Function | string, callback?: Function): this;
// (undocumented)
_overlayClick: Function;
// (undocumented)
_overlayDblClick: Function;
// (undocumented)
_overlayDblTap: Function;
// (undocumented)
_overlayMouseout: Function;
// (undocumented)
_overlayMouseover: Function;
// (undocumented)
_overlayTap: Function;
// (undocumented)
paintConnector(connector: AbstractConnector, paintStyle: PaintStyle, extents?: Extents): void;
// (undocumented)
paintOverlay(o: Overlay, params: any, extents: any): void;
// (undocumented)
reattachOverlay(o: Overlay, c: Component): void;
removeAttribute(el: Element, attName: string): void;
removeClass(el: Element | NodeListOf<Element>, clazz: string): void;
// @internal (undocumented)
removeConnectorClass(connector: AbstractConnector, clazz: string): void;
removeDragFilter(filter: Function | string): void;
// @internal (undocumented)
_removeElement(element: Element): void;
// @internal (undocumented)
removeEndpointClass(ep: Endpoint, c: string): void;
removeFromDragGroup(...els: Array<Element>): void;
removeFromDragSelection(...el: Array<Element>): void;
// (undocumented)
removeOverlayClass(o: Overlay, clazz: string): void;
// (undocumented)
removeSourceSelector(selector: SourceSelector): void;
// @internal (undocumented)
renderEndpoint(ep: Endpoint, paintStyle: PaintStyle): void;
reset(): void;
rotate(element: Element, rotation: number, doNotRepaint?: boolean): RedrawResult;
setAttribute(el: Element, name: string, value: string): void;
setAttributes(el: Element, atts: Record<string, string>): void;
// (undocumented)
setConnectorHover(connector: AbstractConnector, hover: boolean, doNotCascade?: boolean): void;
// @internal (undocumented)
setConnectorVisible(connector: AbstractConnector, v: boolean): void;
setContainer(newContainer: Element): void;
setDraggable(element: Element, draggable: boolean): void;
setDragGrid(grid: Grid): void;
setDragGroupState(state: boolean, ...els: Array<Element>): void;
// @internal (undocumented)
setEndpointHover(endpoint: Endpoint, hover: boolean, doNotCascade?: boolean): void;
// @internal (undocumented)
setEndpointVisible(ep: Endpoint, v: boolean): void;
// @internal (undocumented)
setGroupVisible(group: UIGroup<Element>, state: boolean): void;
// (undocumented)
setHover(component: Component, hover: boolean): void;
// (undocumented)
setOverlayHover(o: Overlay, hover: boolean): void;
// (undocumented)
setOverlayVisible(o: Overlay, visible: boolean): void;
// @internal
setPosition(el: Element, p: PointXY): void;
// (undocumented)
shouldFireEvent(event: string, value: any, originalEvent?: Event): boolean;
// (undocumented)
sourceElementDraggingClass: string;
// (undocumented)
svg: {
node: (name: string, attributes?: Record<string, string | number>) => SVGElement;
attr: (node: SVGElement, attributes: Record<string, string | number>) => void;
pos: (d: [number, number]) => string;
};
// (undocumented)
targetElementDraggingClass: string;
toggleClass(el: Element | NodeListOf<Element>, clazz: string): void;
// (undocumented)
toggleDraggable(el: Element): boolean;
toggleDragSelection(...el: Array<Element>): void;
trigger(el: Document | Element, event: string, originalEvent?: Event, payload?: any, detail?: number): void;
unmanage(el: Element, removeElement?: boolean): void;
// (undocumented)
updateLabel(o: LabelOverlay): void;
}
// @public (undocumented)
export class Collicat implements jsPlumbDragManager {
constructor(options?: CollicatOptions);
// (undocumented)
css: Record<string, string>;
// (undocumented)
destroyDraggable(el: jsPlumbDOMElement): void;
// (undocumented)
draggable(el: jsPlumbDOMElement, params: DragParams): Drag;
// (undocumented)
eventManager: EventManager;
getInputFilterSelector(): string;
// (undocumented)
getZoom(): number;
// (undocumented)
inputFilterSelector: string;
setInputFilterSelector(selector: string): this;
// (undocumented)
setZoom(z: number): void;
}
// @public (undocumented)
export interface CollicatOptions {
// (undocumented)
css?: Record<string, string>;
// (undocumented)
inputFilterSelector?: string;
// (undocumented)
zoom?: number;
}
// @public (undocumented)
export function compoundEvent(stem: string, event: string, subevent?: string): string;
// @public (undocumented)
export const CONNECTION = "connection";
// @public (undocumented)
export type ConstrainFunction = (desiredLoc: PointXY, dragEl: HTMLElement, constrainRect: Size, size: Size) => PointXY;
// @public (undocumented)
export function consume(e: Event, doNotPreventDefault?: boolean): void;
// @public (undocumented)
export enum ContainmentType {
// (undocumented)
notNegative = "notNegative",
// (undocumented)
parent = "parent",
// (undocumented)
parentEnclosed = "parentEnclosed"
}
// @public (undocumented)
export function createElement(tag: string, style?: Record<string, any>, clazz?: string, atts?: Record<string, string>): jsPlumbDOMElement;
// @public (undocumented)
export function createElementNS(ns: string, tag: string, style?: Record<string, any>, clazz?: string, atts?: Record<string, string | number>): jsPlumbDOMElement;
// Warning: (ae-forgotten-export) The symbol "Base" needs to be exported by the entry point index.d.ts
//
// @public (undocumented)
export class Drag extends Base {
constructor(el: jsPlumbDOMElement, params: DragParams, k: Collicat);
// (undocumented)
abort(): void;
// (undocumented)
_activeSelectorParams: DragParams;
// (undocumented)
addFilter(f: Function | string, _exclude?: boolean): void;
// (undocumented)
addSelector(params: DragHandlerOptions, atStart?: boolean): void;
// (undocumented)
_availableSelectors: Array<DragParams>;
// (undocumented)
_canDrag: Function;
// (undocumented)
_class: string;
// (undocumented)
clearAllFilters(): void;
// (undocumented)
clone: boolean;
// (undocumented)
_constrainRect: {
w: number;
h: number;
};
// (undocumented)
consumeStartEvent: boolean;
// (undocumented)
destroy(): void;
// (undocumented)
downListener: (e: MouseEvent) => void;
// (undocumented)
_elementToDrag: jsPlumbDOMElement;
// (undocumented)
_filters: Record<string, [Function, boolean]>;
// (undocumented)
getDragElement(retrieveOriginalElement?: boolean): jsPlumbDOMElement;
// (undocumented)
_ghostProxyFunction: GhostProxyGenerator;
// (undocumented)
_ghostProxyParent: jsPlumbDOMElement;
// (undocumented)
_isConstrained: boolean;
// (undocumented)
listeners: Record<string, Array<Function>>;
// (undocumented)
moveBy(dx: number, dy: number, e?: MouseEvent): void;
// (undocumented)
moveListener: (e: MouseEvent) => void;
// (undocumented)
off(evt: string, fn: Function): void;
// (undocumented)
on(evt: string, fn: Function): void;
// (undocumented)
removeFilter(f: Function | string): void;
// (undocumented)
rightButtonCanDrag: boolean;
// (undocumented)
scroll: boolean;
// (undocumented)
setUseGhostProxy(val: boolean): void;
// (undocumented)
_size: Size;
// (undocumented)
stop(e?: MouseEvent, force?: boolean): void;
// (undocumented)
_testFilter(e: any): boolean;
// (undocumented)
trackScroll: boolean;
// (undocumented)
upListener: (e?: MouseEvent) => void;
// (undocumented)
_useGhostProxy: Function;
}
// @public (undocumented)
export interface DragEventParams extends DragStartEventParams {
// (undocumented)
originalPos: PointXY;
}
// @public (undocumented)
export type DraggedElement = {
el: jsPlumbDOMElement;
id: string;
pos: PointXY;
originalPos: PointXY;
originalGroup: UIGroup;
redrawResult: RedrawResult;
reverted: boolean;
dropGroup: UIGroup;
};
// @public (undocumented)
export type DragGroupSpec = string | {
id: string;
active: boolean;
};
// @public (undocumented)
export interface DragHandlerOptions {
// (undocumented)
beforeStart?: (beforeStartParams: BeforeStartEventParams) => void;
// (undocumented)
constrainFunction?: ConstrainFunction | boolean;
// (undocumented)
containment?: ContainmentType;
// (undocumented)
containmentPadding?: number;
// (undocumented)
drag?: (p: DragEventParams) => any;
// (undocumented)
dragAbort?: (el: Element) => any;
// (undocumented)
dragInit?: (el: Element) => any;
// (undocumented)
filter?: string;
// (undocumented)
filterExclude?: boolean;
// (undocumented)
ghostProxy?: GhostProxyGenerator | boolean;
// (undocumented)
ghostProxyParent?: Element;
// (undocumented)
grid?: Grid;
// (undocumented)
makeGhostProxy?: GhostProxyGenerator;
// (undocumented)
revertFunction?: RevertFunction;
// (undocumented)
selector?: string;
// (undocumented)
snapThreshold?: number;
// (undocumented)
start?: (p: DragStartEventParams) => any;
// (undocumented)
stop?: (p: DragStopEventParams) => any;
// (undocumented)
useGhostProxy?: (container: any, dragEl: jsPlumbDOMElement) => boolean;
}
// @public
export interface DragMovePayload extends DragPayload {
}
// @public (undocumented)
export interface DragOptions {
// (undocumented)
beforeStart?: (params: BeforeStartEventParams) => void;
// (undocumented)
containment?: ContainmentType;
// (undocumented)
cursor?: string;
// (undocumented)
drag?: (params: DragEventParams) => void;
// (undocumented)
filter?: string;
// (undocumented)
grid?: Grid;
// (undocumented)
start?: (params: DragStartEventParams) => void;
// (undocumented)
stop?: (params: DragStopEventParams) => void;
// (undocumented)
trackScroll?: boolean;
// (undocumented)
zIndex?: number;
}
// @public (undocumented)
export interface DragParams extends DragHandlerOptions {
// (undocumented)
canDrag?: Function;
// (undocumented)
clone?: boolean;
// (undocumented)
consumeFilteredEvents?: boolean;
// (undocumented)
consumeStartEvent?: boolean;
// (undocumented)
events?: Record<string, Function>;
// (undocumented)
ignoreZoom?: boolean;
// (undocumented)
multipleDrop?: boolean;
// (undocumented)
parent?: any;
// (undocumented)
rightButtonCanDrag?: boolean;
// (undocumented)
scope?: string;
// (undocumented)
scroll?: boolean;
// (undocumented)
trackScroll?: boolean;
}
// @public
export interface DragPayload {
// (undocumented)
e: Event;
// (undocumented)
el: Element;
// (undocumented)
originalPosition: PointXY;
// (undocumented)
payload?: Record<string, any>;
// (undocumented)
pos: PointXY;
}
// @public (undocumented)
export interface DragStartEventParams {
// (undocumented)
drag: Drag;
// (undocumented)
e: MouseEvent;
// (undocumented)
el: jsPlumbDOMElement;
// (undocumented)
pos: PointXY;
// (undocumented)
size: Size;
}
// @public
export interface DragStartPayload extends DragPayload {
}
// @public (undocumented)
export interface DragStopEventParams extends DragEventParams {
// (undocumented)
finalPos: PointXY;
// (undocumented)
selection: Array<[jsPlumbDOMElement, PointXY, Drag, Size]>;
}
// @public
export interface DragStopPayload {
// (undocumented)
e: Event;
// (undocumented)
el: Element;
// (undocumented)
elements: Array<DraggedElement>;
// (undocumented)
payload?: Record<string, any>;
}
// @public (undocumented)
export const ELEMENT = "element";
// @public (undocumented)
export const ELEMENT_DIV = "div";
// Warning: (ae-forgotten-export) The symbol "DragHandler" needs to be exported by the entry point index.d.ts
//
// @public (undocumented)
export class ElementDragHandler implements DragHandler {
constructor(instance: BrowserJsPlumbInstance, _dragSelection: DragSelection);
// (undocumented)
addToDragGroup(spec: DragGroupSpec, ...els: Array<Element>): void;
// (undocumented)
protected drag: Drag;
// Warning: (ae-forgotten-export) The symbol "DragSelection" needs to be exported by the entry point index.d.ts
//
// (undocumented)
protected _dragSelection: DragSelection;
// (undocumented)
protected getDropGroup(): IntersectingGroup | null;
// (undocumented)
init(drag: Drag): void;
// (undocumented)
protected instance: BrowserJsPlumbInstance;
// (undocumented)
protected _intersectingGroups: Array<IntersectingGroup>;
// (undocumented)
onDrag(params: DragEventParams): void;
// (undocumented)
onDragAbort(el: Element): void;
// (undocumented)
onDragInit(el: Element): Element;
// (undocumented)
onStart(params: {
e: MouseEvent;
el: jsPlumbDOMElement;
pos: PointXY;
drag: Drag;
}): boolean;
// (undocumented)
onStop(params: DragStopEventParams): void;
// (undocumented)
originalPosition: PointXY;
// (undocumented)
removeFromDragGroup(...els: Array<Element>): void;
// (undocumented)
reset(): void;
// (undocumented)
selector: string;
// (undocumented)
setDragGroupState(state: boolean, ...els: Array<Element>): void;
}
// @public (undocumented)
export type ElementType = {
E: Element;
};
// @public (undocumented)
export const ENDPOINT = "endpoint";
// @public (undocumented)
export type EndpointHelperFunctions<E> = {
makeNode: (ep: E, paintStyle: PaintStyle) => void;
updateNode: (ep: E, node: SVGElement) => void;
};
// @public (undocumented)
export const EVENT_BEFORE_START = "beforeStart";
// @public (undocumented)
export const EVENT_CLICK = "click";
// @public (undocumented)
export const EVENT_CONNECTION_ABORT = "connection:abort";
// @public (undocumented)
export const EVENT_CONNECTION_CLICK: string;
// @public (undocumented)
export const EVENT_CONNECTION_CONTEXTMENU: string;
// @public (undocumented)
export const EVENT_CONNECTION_DBL_CLICK: string;
// @public (undocumented)
export const EVENT_CONNECTION_DBL_TAP: string;
// @public (undocumented)
export const EVENT_CONNECTION_DRAG = "connection:drag";
// @public (undocumented)
export const EVENT_CONNECTION_MOUSEDOWN: string;
// @public (undocumented)
export const EVENT_CONNECTION_MOUSEOUT: string;
// @public (undocumented)
export const EVENT_CONNECTION_MOUSEOVER: string;
// @public (undocumented)
export const EVENT_CONNECTION_MOUSEUP: string;
// @public (undocumented)
export const EVENT_CONNECTION_TAP: string;
// @public (undocumented)
export const EVENT_CONTEXTMENU = "contextmenu";
// @public (undocumented)
export const EVENT_DBL_CLICK = "dblclick";
// @public (undocumented)
export const EVENT_DBL_TAP = "dbltap";
// @public (undocumented)
export const EVENT_DRAG = "drag";
// @public (undocumented)
export const EVENT_DRAG_MOVE = "drag:move";
// @public (undocumented)
export const EVENT_DRAG_START = "drag:start";
// @public (undocumented)
export const EVENT_DRAG_STOP = "drag:stop";
// @public (undocumented)
export const EVENT_DROP = "drop";
// @public (undocumented)
export const EVENT_ELEMENT_CLICK: string;
// @public (undocumented)
export const EVENT_ELEMENT_CONTEXTMENU: string;
// @public (undocumented)
export const EVENT_ELEMENT_DBL_CLICK: string;
// @public (undocumented)
export const EVENT_ELEMENT_DBL_TAP: string;
// @public (undocumented)
export const EVENT_ELEMENT_MOUSE_DOWN: string;
// @public (undocumented)
export const EVENT_ELEMENT_MOUSE_MOVE: string;
// @public (undocumented)
export const EVENT_ELEMENT_MOUSE_OUT: string;
// @public (undocumented)
export const EVENT_ELEMENT_MOUSE_OVER: string;
// @public (undocumented)
export const EVENT_ELEMENT_MOUSE_UP: string;
// @public (undocumented)
export const EVENT_ELEMENT_TAP: string;
// @public (undocumented)
export const EVENT_ENDPOINT_CLICK: string;
// @public (undocumented)
export const EVENT_ENDPOINT_DBL_CLICK: string;
// @public (undocumented)
export const EVENT_ENDPOINT_DBL_TAP: string;
// @public (undocumented)
export const EVENT_ENDPOINT_MOUSEDOWN: string;
// @public (undocumented)
export const EVENT_ENDPOINT_MOUSEOUT: string;
// @public (undocumented)
export const EVENT_ENDPOINT_MOUSEOVER: string;
// @public (undocumented)
export const EVENT_ENDPOINT_MOUSEUP: string;
// @public (undocumented)
export const EVENT_ENDPOINT_TAP: string;
// @public (undocumented)
export const EVENT_FOCUS = "focus";
// @public (undocumented)
export const EVENT_MOUSEDOWN = "mousedown";
// @public (undocumented)
export const EVENT_MOUSEENTER = "mouseenter";
// @public (undocumented)
export const EVENT_MOUSEEXIT = "mouseexit";
// @public (undocumented)
export const EVENT_MOUSEMOVE = "mousemove";
// @public (undocumented)
export const EVENT_MOUSEOUT = "mouseout";
// @public (undocumented)
export const EVENT_MOUSEOVER = "mouseover";
// @public (undocumented)
export const EVENT_MOUSEUP = "mouseup";
// @public (undocumented)
export const EVENT_OUT = "out";
// @public (undocumented)
export const EVENT_OVER = "over";
// @public (undocumented)
export const EVENT_REVERT = "revert";
// @public (undocumented)
export const EVENT_START = "start";
// @public (undocumented)
export const EVENT_STOP = "stop";
// @public (undocumented)
export const EVENT_TAP = "tap";
// @public (undocumented)
export class EventManager {
// Warning: (ae-forgotten-export) The symbol "EventManagerOptions" needs to be exported by the entry point index.d.ts
constructor(params?: EventManagerOptions);
// (undocumented)
clickThreshold: number;
// (undocumented)
dblClickThreshold: number;
// (undocumented)
off(el: any, event: string, fn: any): this;
// (undocumented)
on(el: any, event: string, children?: string | Function, fn?: Function, options?: {
passive?: boolean;
capture?: boolean;
once?: boolean;
}): this;
// (undocumented)
trigger(el: any, event: string, originalEvent: any, payload?: any, detail?: number): this;
}
// @public (undocumented)
export function findParent(el: jsPlumbDOMElement, selector: string, container: HTMLElement, matchOnElementAlso: boolean): jsPlumbDOMElement;
// @public (undocumented)
export function getClass(el: Element): string;
// @public (undocumented)
export function getEventSource(e: Event): jsPlumbDOMElement;
// @public (undocumented)
export function getPositionOnElement(evt: Event, el: Element, zoom: number): PointXY;
// @public (undocumented)
export function getTouch(touches: TouchList, idx: number): Touch;
// @public (undocumented)
export type GhostProxyGenerator = (el: Element) => Element;
// @public (undocumented)
export function groupDragConstrain(desiredLoc: PointXY, dragEl: jsPlumbDOMElement, constrainRect: BoundingBox, size: Size): PointXY;
// @public (undocumented)
export type GroupLocation = {
el: Element;
r: BoundingBox;
group: UIGroup<Element>;
};
// @public (undocumented)
export function hasClass(el: Element, clazz: string): boolean;
// @public (undocumented)
export type IntersectingGroup = {
groupLoc: GroupLocation;
d: number;
intersectingElement: Element;
};
// @public (undocumented)
export function isArrayLike(el: any): el is ArrayLike<Element>;
// @public (undocumented)
export function isInsideParent(instance: BrowserJsPlumbInstance, _el: HTMLElement, pos: PointXY): boolean;
// @public (undocumented)
export function isNodeList(el: any): el is NodeListOf<Element>;
// @public (undocumented)
export interface jsPlumbDOMElement extends HTMLElement, jsPlumbElement<Element> {
// (undocumented)
cloneNode: (deep?: boolean) => jsPlumbDOMElement;
// (undocumented)
_isJsPlumbGroup: boolean;
// (undocumented)
_jsPlumbOrphanedEndpoints: Array<Endpoint>;
// (undocumented)
_jsPlumbScrollHandler?: Function;
// (undocumented)
jtk: jsPlumbDOMInformation;
// (undocumented)
_katavorioDrag?: Drag;
// (undocumented)
offsetParent: jsPlumbDOMElement;
// (undocumented)
parentNode: jsPlumbDOMElement;
}
// @public (undocumented)
export interface jsPlumbDOMInformation {
// (undocumented)
connector?: AbstractConnector;
// (undocumented)
endpoint?: Endpoint;
// (undocumented)
overlay?: Overlay;
}
// @public (undocumented)
export interface jsPlumbDragManager {
// (undocumented)
destroyDraggable(el: jsPlumbDOMElement): void;
// (undocumented)
draggable(el: jsPlumbDOMElement, params: DragParams): Drag;
// (undocumented)
getInputFilterSelector(): string;
// (undocumented)
getZoom(): number;
// (undocumented)
setInputFilterSelector(selector: string): void;
// (undocumented)
setZoom(z: number): void;
}
// @public (undocumented)
export function matchesSelector(el: jsPlumbDOMElement, selector: string, ctx?: Element): boolean;
// @public (undocumented)
export function newInstance(defaults?: BrowserJsPlumbDefaults): BrowserJsPlumbInstance;
// @public (undocumented)
export function offsetRelativeToRoot(el: Element): PointXY;
// @public (undocumented)
export function pageLocation(e: Event): PointXY;
// @public (undocumented)
export const PROPERTY_POSITION = "position";
// @public (undocumented)
export function ready(f: Function): void;
// @public (undocumented)
export function registerEndpointRenderer<C>(name: string, fns: EndpointHelperFunctions<C>): void;
// @public (undocumented)
export function removeClass(el: Element | NodeListOf<Element>, clazz: string): void;
// @public (undocumented)
export type RevertEventParams = jsPlumbDOMElement;
// @public (undocumented)
export type RevertFunction = (dragEl: HTMLElement, pos: PointXY) => boolean;
// @public (undocumented)
export const SELECTOR_CONNECTOR: string;
// @public (undocumented)
export const SELECTOR_ENDPOINT: string;
// @public (undocumented)
export const SELECTOR_GROUP: string;
// @public (undocumented)
export const SELECTOR_GROUP_CONTAINER: string;
// @public (undocumented)
export const SELECTOR_OVERLAY: string;
// @public (undocumented)
export function size(el: Element): Size;
// @public (undocumented)
export function toggleClass(el: Element | NodeListOf<Element>, clazz: string): void;
// @public (undocumented)
export function touchCount(e: Event): number;
// @public (undocumented)
export function touches(e: any): TouchList;
// @public (undocumented)
export interface UIComponent {
// (undocumented)
canvas: HTMLElement;
// (undocumented)
svg: SVGElement;
}
```
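The `compoundEvent` helper declared in the report above composes namespaced event names; the constants in the same surface (for example `EVENT_CONNECTION_DRAG = "connection:drag"`) suggest the shape. Below is a hedged re-implementation sketch — the real export in `@jsplumb/browser-ui` may differ:

```typescript
// Hypothetical re-implementation for illustration only; the shipped
// compoundEvent may behave differently.
function compoundEvent(stem: string, event: string, subevent?: string): string {
  return subevent == null
    ? `${stem}:${event}`             // e.g. ("connection", "drag") -> "connection:drag"
    : `${stem}:${event}:${subevent}`; // e.g. ("connection", "drag", "start") -> "connection:drag:start"
}

console.log(compoundEvent("connection", "drag")); // "connection:drag"
```

This matches the naming pattern of the exported event constants such as `EVENT_CONNECTION_DRAG` and `EVENT_DRAG_MOVE`.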
# tarantool [](https://travis-ci.org/tarantool/tarantool)
[](https://gitter.im/tarantool/tarantool?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
http://tarantool.org
Tarantool is an in-memory database and application server.
Key features of the application server:
* 100% compatible drop-in replacement for Lua 5.1,
based on LuaJIT 2.0.
Simply use #!/usr/bin/tarantool instead of
#!/usr/bin/lua in your script.
* full support for Lua modules and a rich set of
its own modules, including cooperative multitasking,
non-blocking I/O, access to external databases, etc.
Key features of the database:
* MsgPack data format and MsgPack based
client-server protocol
* two data engines: 100% in-memory with
optional persistence and a 2-level disk-based
B-tree, to use with large data sets
* multiple index types: HASH, TREE, BITSET
* asynchronous master-master replication
* authentication and access control
* the database is just a C extension to the
app server and can be turned off
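The MsgPack wire format mentioned above is compact and easy to inspect. As a rough illustration of why, here is a minimal stdlib-only Python sketch of two MsgPack encodings (positive fixint and fixstr); use a real msgpack library in practice:

```python
def pack_small(value):
    """Toy MsgPack encoder for small non-negative ints and short strings.

    Illustration only -- covers just two MsgPack type families.
    """
    if isinstance(value, int) and 0 <= value <= 127:
        # positive fixint: the value itself is the single encoded byte
        return bytes([value])
    if isinstance(value, str):
        data = value.encode("utf-8")
        if len(data) < 32:
            # fixstr: 0xa0 | length, followed by the UTF-8 bytes
            return bytes([0xA0 | len(data)]) + data
    raise ValueError("only small ints and short strings in this sketch")

print(pack_small(5).hex())      # "05"
print(pack_small("key").hex())  # "a36b6579"
```

A small integer costs one byte on the wire, which is part of what makes the protocol cheap to parse.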
Supported platforms are Linux/x86, FreeBSD/x86, and Mac OS X.
Tarantool is ideal for data-enriched components of
scalable Web architecture: queue servers, caches,
stateful Web applications.
## Compilation and install
Tarantool is written in C and C++.
To build, you will need the GCC or Apple Clang compiler.
CMake is used for configuration management.
Three standard CMake build types are supported:
* Debug -- used by project maintainers
* RelWithDebInfo -- the most common release configuration,
which also provides debugging capabilities
* Release -- use only if the highest performance is required
The build depends on the following external libraries:
- libreadline and libreadline-dev
- GNU bfd (part of GNU binutils).
Please follow these steps to compile Tarantool:
# If compiling from git
tarantool $ git submodule update --init --recursive
tarantool $ cmake .
tarantool $ make
To use a different release type, say, RelWithDebInfo, use:
tarantool $ cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo
Additional build options can be set similarly:
tarantool $ cmake . -DCMAKE_BUILD_TYPE=RelWithDebInfo -DENABLE_DOC=true # builds the docs
'make' creates the 'tarantool' executable in the src/ directory.
There is a 'make install' goal. One can also run the Tarantool executable
without installation.
To start the server, try:
tarantool $ ./src/tarantool
This will start Tarantool in interactive mode.
To run Tarantool regression tests (test/test-run.py),
a few additional Python modules are necessary:
* daemon
* pyyaml
* msgpack-python
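Before running the suite, you can check whether these modules are importable. Note that the import names below differ from the package names above; this mapping is my assumption and may vary by distribution:

```python
import importlib.util

# Import names assumed to correspond to the packages listed above.
for mod in ("daemon", "yaml", "msgpack"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'ok' if found else 'missing'}")
```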
Simply type 'make test' to fire off the test suite.
Please report bugs at http://github.com/tarantool/tarantool/issues
We also warmly welcome your feedback in the discussion mailing
list, tarantool@googlegroups.com.
Thank you for your interest in Tarantool!
# Change Log
## [v2] - 2015-08-27
### Bugfixes and improvements for VS CRT source location
- Fixed potential crash when using links in source view
- Better support for VS CRT source detection (added VS2015 and Universal CRT)
- Upgraded Qt to 5.7.0
## [v1] - 2015-05-02
### First release!
- Allows to profile 32-bit or 64-bit executables
- Run new process or attach existing one
- Automatic download of pdb files for system dll files
- Shows flat view - who was function taking most time
- Shows call graph - which function calls which one
- Search or filter by function name
- View source code with profiling stats per line
- Navigate profile information in source code view (click on red percent numbers)
- Open files in explorer, in default editor or in Visual Studio
- Saving collected data to file for analyzing later
[v2]: https://github.com/mmozeiko/CxxProfiler/releases/tag/v2
[v1]: https://github.com/mmozeiko/CxxProfiler/releases/tag/v1
# Cerebro
Cerebro is an open-source (MIT License) Elasticsearch web admin tool built using Scala, Play Framework, AngularJS and Bootstrap.
## Introduction
This chart deploys Cerebro to your cluster via a Deployment and Service.
Optionally, you can also enable ingress.
Optionally, you can use Cerebro's built-in auth by providing a Secret with the needed environment variables (don't forget to set `AUTH_TYPE`).
## Prerequisites
- Kubernetes 1.9+
## Installing the Chart
To install the chart with the release name `my-release`, run:
```bash
$ helm install --name my-release wiremind/cerebro
```
After a few seconds, you should see service statuses being written to the configured output.
> **Tip**: List all releases using `helm list`
## Uninstalling the Chart
To uninstall/delete the `my-release` deployment:
```bash
$ helm delete my-release
```
The command removes all the Kubernetes components associated with the chart and deletes the release.
## Configuration
Please refer to values.yaml to see parameters and their default values.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,
```bash
$ helm install --name my-release \
  --set key=value \
  wiremind/cerebro
```
Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,
```bash
$ helm install --name my-release -f values.yaml wiremind/cerebro
```
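A minimal values file for the command above might look like the following. The keys shown are illustrative assumptions on my part; consult the chart's bundled `values.yaml` for the authoritative parameter names and defaults:

```yaml
# Hypothetical example values -- verify key names against the chart's values.yaml
replicaCount: 1
ingress:
  enabled: false
```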
## Backend connection with basic auth
You can create your own Secret; make sure the key name is `application.conf`, and simply pass the Secret's name via `configFromSecretRef`.
> **Tip**: You can use the default [values.yaml](values.yaml)
<properties
pageTitle="Authenticate your app with Active Directory Authentication Library Single Sign-On (Windows Store) | Mobile Dev Center"
description="Learn how to authenticate users for single sign-on with ADAL in your Windows Store application."
documentationCenter="windows"
authors="wesmc7777"
manager="dwrede"
editor=""
services="mobile-services"/>
<tags
ms.service="mobile-services"
ms.workload="mobile"
ms.tgt_pltfrm=""
ms.devlang="dotnet"
ms.topic="article"
ms.date="02/23/2015"
ms.author="wesmc"/>
# Authenticate your app with Active Directory Authentication Library Single Sign-On
[AZURE.INCLUDE [mobile-services-selector-adal-sso](../includes/mobile-services-selector-adal-sso.md)]
##Overview
In this tutorial, you add authentication to the quickstart project using the Active Directory Authentication Library to support [client-directed login operations](http://msdn.microsoft.com/library/azure/jj710106.aspx) with Azure Active Directory. To support [service-directed login operations](http://msdn.microsoft.com/library/azure/dn283952.aspx) with Azure Active Directory, start with the [Add authentication to your Mobile Services app](mobile-services-dotnet-backend-windows-store-dotnet-get-started-users.md) tutorial.
To be able to authenticate users, you must register your application with Azure Active Directory (AAD). This is done in two steps. First, you must register your mobile service and expose permissions on it. Second, you must register your Windows Store app and grant it access to those permissions.
>[AZURE.NOTE] This tutorial is intended to help you better understand how Mobile Services enables you to do single sign-on Azure Active Directory authentication for Windows Store apps using a [client-directed login operation](http://msdn.microsoft.com/library/azure/jj710106.aspx). If this is your first experience with Mobile Services, complete the tutorial [Get started with Mobile Services].
##Prerequisites
This tutorial requires the following:
* Visual Studio 2013 running on Windows 8.1.
* Completion of the [Get started with Mobile Services] or [Get Started with Data] tutorial.
* Microsoft Azure Mobile Services SDK NuGet package
* Active Directory Authentication Library NuGet package
[AZURE.INCLUDE [mobile-services-dotnet-adal-register-service](../includes/mobile-services-dotnet-adal-register-service.md)]
##Register your app with the Azure Active Directory
To register the app with Azure Active Directory, you must associate it to the Windows Store and have a package security identifier (SID) for the app. The package SID gets registered with the native application settings in the Azure Active Directory.
###Associate the app with a new store app name
1. In Visual Studio, right click the client app project and click **Store** and **Associate App with the Store**
![][1]
2. Sign into your Dev Center account.
3. Enter the app name you want to reserve for the app and click **Reserve**.
![][2]
4. Select the new app name and click **Next**.
5. Click **Associate** to associate the app with the store name.
###Retrieve the package SID for your app.
Now you need to retrieve your package SID which will be configured with the native app settings.
1. Log into your [Windows Dev Center Dashboard] and click **Edit** on the app.
![][3]
2. Then click **Services**
![][4]
3. Then click **Live Services Site**.
![][5]
4. Copy your package SID from the top of the page.
![][6]
###Create the native app registration
1. Navigate to **Active Directory** in the [Azure Management Portal], then click your directory.
![][7]
2. Click the **Applications** tab at the top, then click to **ADD** an app.
![][8]
3. Click **Add an application my organization is developing**.
4. In the Add Application Wizard, enter a **Name** for your application and click the **Native Client Application** type. Then click to continue.
![][9]
5. In the **Redirect URI** box, paste the App package SID you copied earlier then click to complete the native app registration.
![][10]
6. Click the **Configure** tab for the native application and copy the **Client ID**. You will need this later.
![][11]
7. Scroll the page down to the **permissions to other applications** section and grant full access to the mobile service application that you registered earlier. Then click **Save**
![][12]
Your mobile service is now configured in AAD to receive single sign-on logins from your app.
##Configure the mobile service to require authentication
[AZURE.INCLUDE [mobile-services-restrict-permissions-dotnet-backend](../includes/mobile-services-restrict-permissions-dotnet-backend.md)]
##Add authentication code to the client app
1. Open your Windows store client app project in Visual Studio.
[AZURE.INCLUDE [mobile-services-dotnet-adal-install-nuget](../includes/mobile-services-dotnet-adal-install-nuget.md)]
4. In the Solution Explorer window of Visual Studio, open the MainPage.xaml.cs file and add the following using statements.
using Windows.UI.Popups;
using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Newtonsoft.Json.Linq;
5. Add the following code to the MainPage class which declares the `AuthenticateAsync` method.
private MobileServiceUser user;
private async Task AuthenticateAsync()
{
string authority = "<INSERT-AUTHORITY-HERE>";
string resourceURI = "<INSERT-RESOURCE-URI-HERE>";
string clientID = "<INSERT-CLIENT-ID-HERE>";
while (user == null)
{
string message;
try
{
AuthenticationContext ac = new AuthenticationContext(authority);
AuthenticationResult ar = await ac.AcquireTokenAsync(resourceURI, clientID, (Uri) null);
JObject payload = new JObject();
payload["access_token"] = ar.AccessToken;
user = await App.MobileService.LoginAsync(MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory, payload);
message = string.Format("You are now logged in - {0}", user.UserId);
}
catch (InvalidOperationException)
{
message = "You must log in. Login Required";
}
var dialog = new MessageDialog(message);
dialog.Commands.Add(new UICommand("OK"));
await dialog.ShowAsync();
}
}
6. In the code for the `AuthenticateAsync` method above, replace **INSERT-AUTHORITY-HERE** with the name of the tenant in which you provisioned your application, the format should be https://login.windows.net/tenant-name.onmicrosoft.com. This value can be copied out of the Domain tab in your Azure Active Directory in the [Azure Management Portal].
7. In the code for the `AuthenticateAsync` method above, replace **INSERT-RESOURCE-URI-HERE** with the **App ID URI** for your mobile service. If you followed the [How to Register with the Azure Active Directory] topic your App ID URI should be similar to https://todolist.azure-mobile.net/login/aad.
8. In the code for the `AuthenticateAsync` method above, replace **INSERT-CLIENT-ID-HERE** with the client ID you copied from the native client application.
9. In the Solution Explorer window for Visual Studio, open the Package.appxmanifest file in the client project. Click the **Capabilities** tab and enable **Enterprise Application** and **Private Networks (Client & Server)**. Save the file.
![][14]
10. In the MainPage.cs file, update the `OnNavigatedTo` event handler to call the `AuthenticateAsync` method as follows.
protected override async void OnNavigatedTo(NavigationEventArgs e)
{
await AuthenticateAsync();
await RefreshTodoItems();
}
##Test the client using authentication
1. In Visual Studio,run the client app.
2. You will receive a prompt to login against your Azure Active Directory.
3. The app authenticates and returns the todo items.
![][15]
<!-- Images -->
[0]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-aad-app-manage-manifest.png
[1]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-vs-associate-app.png
[2]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-vs-reserve-store-appname.png
[3]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-store-app-edit.png
[4]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-store-app-services.png
[5]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-live-services-site.png
[6]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-store-app-package-sid.png
[7]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-select-aad.png
[8]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-aad-applications-tab.png
[9]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-native-selection.png
[10]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-native-sid-redirect-uri.png
[11]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-native-client-id.png
[12]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-native-add-permissions.png
[14]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-package-appxmanifest.png
[15]: ./media/mobile-services-windows-store-dotnet-adal-sso-authenticate/mobile-services-app-run.png
<!-- URLs. -->
[How to Register with the Azure Active Directory]: mobile-services-how-to-register-active-directory-authentication.md
[Azure Management Portal]: https://manage.windowsazure.com/
[Get started with data]: mobile-services-dotnet-backend-windows-store-dotnet-get-started-data.md
[Get started with Mobile Services]: mobile-services-dotnet-backend-windows-store-dotnet-get-started.md
[Windows Dev Center Dashboard]: http://go.microsoft.com/fwlink/p/?LinkID=266734 | 47.59447 | 525 | 0.737219 | eng_Latn | 0.848995 |
eef36a44df9db2bf39309c2c3bf3b5c2cad5d027 | 3,086 | md | Markdown | docs/t-sql/functions/upper-transact-sql.md | L3onard80/sql-docs.it-it | f73e3d20b5b2f15f839ff784096254478c045bbb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/functions/upper-transact-sql.md | L3onard80/sql-docs.it-it | f73e3d20b5b2f15f839ff784096254478c045bbb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/functions/upper-transact-sql.md | L3onard80/sql-docs.it-it | f73e3d20b5b2f15f839ff784096254478c045bbb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: UPPER (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 03/13/2017
ms.prod: sql
ms.prod_service: database-engine, sql-database, sql-data-warehouse, pdw
ms.reviewer: ''
ms.technology: t-sql
ms.topic: language-reference
f1_keywords:
- UPPER_TSQL
- UPPER
dev_langs:
- TSQL
helpviewer_keywords:
- UPPER function
- characters [SQL Server], lowercase
- converting lowercase to uppercase
- uppercase characters [SQL Server]
- characters [SQL Server], uppercase
- lowercase characters
ms.assetid: 5ced55f7-ac89-4cf2-9465-f63f4dc480db
author: MikeRayMSFT
ms.author: mikeray
monikerRange: '>=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current'
ms.openlocfilehash: d91870e53e5976ba5d52b83f086a57fa552ad1ae
ms.sourcegitcommit: 58158eda0aa0d7f87f9d958ae349a14c0ba8a209
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 03/30/2020
ms.locfileid: "67927620"
---
# <a name="upper-transact-sql"></a>UPPER (Transact-SQL)
[!INCLUDE[tsql-appliesto-ss2008-all-md](../../includes/tsql-appliesto-ss2008-all-md.md)]
Restituisce un'espressione di caratteri con dati di tipo carattere minuscoli convertiti in maiuscolo.
 [Convenzioni della sintassi Transact-SQL](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md)
## <a name="syntax"></a>Sintassi
```
UPPER ( character_expression )
```
## <a name="arguments"></a>Argomenti
*character_expression*
[Espressione](../../t-sql/language-elements/expressions-transact-sql.md) di dati di tipo carattere. *character_expression* può essere una costante, una variabile o una colonna di dati di tipo carattere o binario.
*character_expression* deve essere di un tipo di dati che può essere convertito in modo implicito in **varchar**. In caso contrario usare [CAST](../../t-sql/functions/cast-and-convert-transact-sql.md) per convertire in modo esplicito *character_expression*.
## <a name="return-types"></a>Tipi restituiti
**varchar** o **nvarchar**
## <a name="examples"></a>Esempi
Nell'esempio seguente vengono usate le funzioni `UPPER` e `RTRIM` per restituire il cognome delle persone incluse nella tabella `dbo.DimEmployee` in modo che sia concatenato con il nome, in maiuscolo e in formato ridotto.
```
-- Uses AdventureWorks
SELECT UPPER(RTRIM(LastName)) + ', ' + FirstName AS Name
FROM dbo.DimEmployee
ORDER BY LastName;
```
Set di risultati parziale:
```
Name
------------------------------
ABBAS, Syed
ABERCROMBIE, Kim
ABOLROUS, Hazem
```
## <a name="see-also"></a>Vedere anche
[Tipi di dati (Transact-SQL)](../../t-sql/data-types/data-types-transact-sql.md)
[Funzioni per i valori stringa (Transact-SQL)](../../t-sql/functions/string-functions-transact-sql.md)
[LOWER (Transact-SQL)](../../t-sql/functions/lower-transact-sql.md)
| 37.180723 | 264 | 0.725859 | ita_Latn | 0.414371 |
eef50d8483e67f0905626370a0b749a2c012cca1 | 1,033 | md | Markdown | README.md | ndukh/liver-segmentation | f46697719b78d11f8871048c7c3aaaf8d8f3777f | [
"MIT"
] | 2 | 2019-12-06T13:36:47.000Z | 2020-10-08T13:57:02.000Z | README.md | ndukh/liver-segmentation | f46697719b78d11f8871048c7c3aaaf8d8f3777f | [
"MIT"
] | null | null | null | README.md | ndukh/liver-segmentation | f46697719b78d11f8871048c7c3aaaf8d8f3777f | [
"MIT"
] | null | null | null | # Liver segmentation ([CHAOS](https://chaos.grand-challenge.org/Combined_Healthy_Abdominal_Organ_Segmentation/) challenge).
### An implementation of a CNN model, trained to segment liver on CT-scans.
##### Using:
```shell
$ python segment.py [-h] [-i INPUT_DIR] [-o OUTPUT_DIR]
optional arguments:
-h, --help show this help message and exit
-i INPUT_DIR, --input INPUT_DIR
path to the folder with the CT-scans,
default: samples/input
-o OUTPUT_DIR, --output OUTPUT_DIR
path to the folder where segmented masks should be saved,
default: samples/output
```
##### Clarification:
The exact structure of nested in input folders will be cloned in output.
The program looks for .dcm files in the input folder, repeats their folder
paths in the output folder and saves segmentation masks in .png format,
following the structure of input folder content.
The work progress is showed by a progress bar.
| 36.892857 | 123 | 0.666989 | eng_Latn | 0.975862 |
eef5757cd55c8dad0820ccb782952bdde04d7b38 | 3,973 | md | Markdown | _posts/2020-12-07-Shallow-Neural-Networks.md | evfox9/minimal-mistakes | 8e340c76c9d97549ec4bc40e59760b5fde584b6e | [
"MIT"
] | null | null | null | _posts/2020-12-07-Shallow-Neural-Networks.md | evfox9/minimal-mistakes | 8e340c76c9d97549ec4bc40e59760b5fde584b6e | [
"MIT"
] | null | null | null | _posts/2020-12-07-Shallow-Neural-Networks.md | evfox9/minimal-mistakes | 8e340c76c9d97549ec4bc40e59760b5fde584b6e | [
"MIT"
] | null | null | null | ---
title: 1-3. Shallow Neural Networks
tags: AI Deep_Learning Coursera Deep_Learning_Specialization
---
## Shallow Neural Network
### Neural Network Representation

We call the layers with input features $x_1, x_2, x_3$ as **input layer**. Layer in the middle are called
**hidden layer**. Layer on the right with only one node is called **output layer**. We don't count the input layer, so the neural
network above is a 2-layer NN.
### Computing a Neural Network's Output

$z^{[1]} = \begin{bmatrix} {w_1}^{[1]T} \\\ {w_2}^{[1]T} \\\ {w_3}^{[1]T} \\\ {w_4}^{[1]T} \end{bmatrix}
\begin{bmatrix} x_1 \\\ x_2 \\\ x_3 \end{bmatrix} + \begin{bmatrix} {b_1}^{[1]} \\\ {b_2}^{[1]} \\\ {b_3}^{[1]} \\\ {b_4}^{[1]} \end{bmatrix}
= \begin{bmatrix} {w_1}^{[1]T} x + {b_1}^{[1]} \\\ {w_2}^{[1]T} x + {b_2}^{[1]} \\\ {w_3}^{[1]T} x + {b_3}^{[1]} \\\ {w_4}^{[1]T} x + {b_4}^{[1]} \end{bmatrix}
= \begin{bmatrix} {z_1}^{[1]} \\\ {z_2}^{[1]} \\\ {z_3}^{[1]} \\\ {z_4}^{[1]} \end{bmatrix}$
In short, $z^{[1]} = W^{[1]} x + b^{[1]},\ a^{[1]} = \sigma(z^{[1]})$.
For $z^{[i]}$ which $i \geq 2$, $z^{[i]} = W^{[i]} a^{[i-1]}+ b^{[i]},\ a^{[i]} = \sigma(z^{[i]})$.
### Activation Functions
**Activation function** is the function that defines the output of that node. Here are some examples of activation functions.
#### Sigmoid

$$a = \frac{1}{1 + e^{-z}}$$
#### Hyperbolic tangent (tanh)

$$a = \tanh{z} = \frac{e^z - e^{-z}}{e^z + e^{-z}}$$
#### Rectified Linear Unit (ReLU)

$$a = \max (0,z)$$
#### Leaky ReLU

$$a = \max (0.01z,z)$$
* can replace 0.01 to other number
#### Why non-linear activation functions?
Functions above are all non-linear functions. Activation functions should be non-linear because no matter how much you compose
linear functions it will still be linear functions, so having many hidden layers won't have any meanings. Having non-linear
function as activation function makes the model more expressive.
### Derivative of Activation functions
#### Sigmoid
$$g'(z) = g(z) (1 - g(z))$$
$$0 < g'(z) \leq \frac{1}{4}$$
#### Hyperbolic tangent (tanh)
$$g'(z) = 1 - {\tanh{z}}^2$$
$$0 < g'(z) < 1$$
#### Rectified Linear Unit (ReLU)
$$g'(z) = \begin{cases} 0 \ \text{if} \ z < 0 \\ 1 \ \text{if} \ z > 0 \end{cases} $$
#### Leaky ReLU
$$g'(z) = \begin{cases} 0.01 \ \text{if} \ z < 0 \\ 1 \ \text{if} \ z > 0 \end{cases} $$
### Gradient descent for Neural Networks
#### Forward Propagation
$z^{[1]} = W^{[1]} x + b^{[1]}$
$a^{[1]} = \sigma(z^{[1]})$.
$z^{[2]} = W^{[2]} a^{[1]}+ b^{[2]}$
$a^{[2]} = \sigma(z^{[2]})$.
#### Backward Propagation
$d z^{[2]} = A^{[2]} - Y$
$d w^{[2]} = \frac{1}{m} d z^{[2]} A^{[1]T}$
$d b^{[2]} = \frac{1}{m}$ `np.sum`($d z^{[2]}$, axis=1, keepdims=True)
$d z^{[1]} = w^{[2]T} d z^{[2]} \times g^{[1]'} z^{[1]}$
$d W^{[1]} = \frac{1}{m} d Z^{[1]} X^{[T]}$
$d b^{[1]} = \frac{1}{m}$ `np.sum`($d z^{[1]}$, axis=1, keepdims=True)
To keep the dimension of the matrix after sum operation, you should set the `keepdims` parameter to true.
## Programming Assignment
[Planar_data_classification_with_onehidden_layer](https://github.com/evfox9/Coursera/blob/master/Deep_Learning/Neural_Networks_and_Deep_Learning/Planar_data_classification_with_onehidden_layer.ipynb)
---
## References
[Neural Networks and Deep Learning](https://www.coursera.org/learn/neural-networks-deep-learning)
| 31.784 | 200 | 0.593254 | eng_Latn | 0.495849 |
eef590b9337585327a6fe3f55f9eacd2b7691bce | 233 | md | Markdown | README.md | SayHello-Creator/2020-Miz-Game-Jamz | 973fecfce1ee0bf703b0fc935910320914cf185d | [
"MIT"
] | null | null | null | README.md | SayHello-Creator/2020-Miz-Game-Jamz | 973fecfce1ee0bf703b0fc935910320914cf185d | [
"MIT"
] | null | null | null | README.md | SayHello-Creator/2020-Miz-Game-Jamz | 973fecfce1ee0bf703b0fc935910320914cf185d | [
"MIT"
] | null | null | null | # 2020-Miz-Game-Jamz
Game for 2020 Miz Game Jam
Features:
- Pathfinding
- Turn based movement
- Inventory System
This game wasn't finished, however still has some mechanics and ideas I enjoyed. Will revisit in the future.
| 25.888889 | 110 | 0.746781 | eng_Latn | 0.989524 |
eef5c2a53287057b192b9ab035b214a41f7923fa | 1,284 | md | Markdown | .github/ISSUE_TEMPLATE/support-request.md | leon-anavi/transitions | ab2366accc5b54f70b13ec29193b4b2e429bde04 | [
"MIT"
] | 3,277 | 2017-06-09T15:03:15.000Z | 2022-03-31T15:46:01.000Z | .github/ISSUE_TEMPLATE/support-request.md | leon-anavi/transitions | ab2366accc5b54f70b13ec29193b4b2e429bde04 | [
"MIT"
] | 347 | 2017-06-13T22:50:35.000Z | 2022-03-31T11:37:34.000Z | .github/ISSUE_TEMPLATE/support-request.md | leon-anavi/transitions | ab2366accc5b54f70b13ec29193b4b2e429bde04 | [
"MIT"
] | 389 | 2017-06-16T00:54:31.000Z | 2022-03-23T06:35:10.000Z | ---
name: Support request
about: Ask for help
title: ''
labels: ''
assignees: ''
---
👋 Hello! If you have a question like "How do I do X with `transitions`" or "I have the following problem... Can `transitions` help me with that?", please consider [Stack Overflow](https://stackoverflow.com/questions/tagged/pytransitions) first.
Your question gains higher visibility since most developers look for help there.
The targeted community is larger; Some people will even help you to formulate a good question.
People get 'rewarded' with 'reputation' to help you. You also gain reputation in case this questions pops up more frequently. It's a win-win situation. Tag your question with `[pytransitions]` to make sure, that users of transitions will receive a notification. If the SO community cannot answer you question within a week, you can notify us by opening an issue here. Make sure to link your Stack Overflow post. We'd rather answer questions there.
**Checklist**
- [ ] I checked the [README](https://github.com/pytransitions/transitions/blob/master/README.md) and the [issue tracker](https://github.com/pytransitions/transitions/issues) but did not find an answer.
- [ ] I posted my question on Stack Overflow [here](enter link here) but received no answer after a week.
| 64.2 | 448 | 0.764798 | eng_Latn | 0.99682 |
eef5c2ed62b61a2432ed7794680242e8b33c7cea | 748 | md | Markdown | _publications/2014-01-01-SonicData-Broadcasting-Data-via-Sound-for-Smartphones.md | hcilab/hcilab.github.io | 0e005266e317429cca60407163bed4d17250b8fb | [
"MIT"
] | 3 | 2016-05-09T16:08:32.000Z | 2019-05-09T14:47:00.000Z | _publications/2014-01-01-SonicData-Broadcasting-Data-via-Sound-for-Smartphones.md | hcilab/hcilab.github.io | 0e005266e317429cca60407163bed4d17250b8fb | [
"MIT"
] | 3 | 2021-05-17T23:23:40.000Z | 2022-02-26T01:23:33.000Z | _publications/2014-01-01-SonicData-Broadcasting-Data-via-Sound-for-Smartphones.md | hcilab/hcilab.github.io | 0e005266e317429cca60407163bed4d17250b8fb | [
"MIT"
] | 3 | 2019-01-22T17:37:21.000Z | 2020-05-08T13:33:50.000Z | ---
title: "SonicData: Broadcasting Data via Sound for Smartphones"
collection: publications
permalink: /publication/2014-01-01-SonicData-Broadcasting-Data-via-Sound-for-Smartphones
date: 2014-01-01
venue: 'University of Calgary, Department of Computer Science Technical Report'
citation: ' Aditya Nittala, Xing-Dong Yang, Ehud Sharlin, Scott Bateman, Saul Greenberg, "SonicData: Broadcasting Data via Sound for Smartphones." University of Calgary, Department of Computer Science Technical Report, 2014.'
authors: 'Aditya Nittala, Xing-Dong Yang, Ehud Sharlin, Scott Bateman, Saul Greenberg'
---
See on [Google Scholar](https://scholar.google.com/scholar?q=SonicData:+Broadcasting+Data+via+Sound+for+Smartphones){:target="_blank"} | 74.8 | 239 | 0.78877 | yue_Hant | 0.292645 |
eef62d02bc6cd0d983c91b7609d1bb2bf185d1b1 | 2,226 | md | Markdown | windows.networking.proximity/peerrole.md | TerryWarwick/winrt-api | 8067d355063938408e4d070241b1959dd62a295f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows.networking.proximity/peerrole.md | TerryWarwick/winrt-api | 8067d355063938408e4d070241b1959dd62a295f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows.networking.proximity/peerrole.md | TerryWarwick/winrt-api | 8067d355063938408e4d070241b1959dd62a295f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
-api-id: T:Windows.Networking.Proximity.PeerRole
-api-type: winrt enum
-api-device-family-note: xbox
---
<!-- Enumeration syntax
public enum Windows.Networking.Proximity.PeerRole : int
-->
# PeerRole
## -description
Describes the role of the peer app when connected to multiple peers.
## -enum-fields
### -field Peer:0
The app is part of a two-peer connection.
### -field Host:1
The app is the host peer app in a multi-peer connection.
### -field Client:2
The app is a client peer app in a multi-peer connection.
## -remarks
The [Role](peerfinder_role.md) property is used in multi-peer app connections to identify whether the peer app is the **Host** or **Client**, or if the peer app is participating in a two-peer connection as a **Peer**. For multi-peer app connections, you must set the [Role](peerfinder_role.md) property before calling the [Start](peerfinder_start_119778276.md) method. If the Role property is not set, the default is **Peer**.
In a multi-peer app scenario, the Role identifies the capability of the apps to connect. A **Host** app can connect to up to four **Client** apps. **Host** apps can only discover apps that advertise as **Client** apps. **Client** apps can only discover apps that advertise as **Host** apps. The **Peer** role identifies a two-app scenario. Therefore, **Peer** apps can only discover other **Peer** apps. The same rules apply for peer apps connected using a tap gesture. For example, apps advertising as **Host** apps can only tap to connect with apps advertising as **Client** apps.
## -examples
[!code-csharp[PeerRole_CS](../windows.networking.proximity/code/Proximity_FindAllPeersAsync1/csharp/PeerRole.xaml.cs#SnippetPeerRole_CS)]
[!code-js[PeerRole](../windows.networking.proximity/code/Proximity_FindAllPeersAsync1/js/peerrole.js#SnippetPeerRole)]
## -see-also
[PeerFinder](peerfinder.md), [Proximity and Tapping (JavaScript)](/previous-versions/windows/apps/hh465229(v=win.10)), [Proximity and Tapping (C#/VB/C++)](/previous-versions/windows/apps/hh465221(v=win.10)), [Proximity sample](https://github.com/microsoftarchive/msdn-code-gallery-microsoft/tree/master/Official%20Windows%20Platform%20Sample/Proximity%20sample)
## -capabilities
proximity
| 50.590909 | 582 | 0.756963 | eng_Latn | 0.966115 |
eef6eeaa367188837f9aca65a58b02f8637221da | 839 | md | Markdown | docs/manual-CN/0.about-this-manual.md | luobeichen/nebula-docs-cn | e7f11f24986e69a80683ba54af9c8e51879c8095 | [
"Apache-2.0"
] | null | null | null | docs/manual-CN/0.about-this-manual.md | luobeichen/nebula-docs-cn | e7f11f24986e69a80683ba54af9c8e51879c8095 | [
"Apache-2.0"
] | null | null | null | docs/manual-CN/0.about-this-manual.md | luobeichen/nebula-docs-cn | e7f11f24986e69a80683ba54af9c8e51879c8095 | [
"Apache-2.0"
] | 1 | 2021-10-08T08:21:19.000Z | 2021-10-08T08:21:19.000Z | # 关于本手册
此手册为 Nebula Graph 的用户手册,版本为 1.2。详细版本更新信息参见 [Release Notes](https://github.com/vesoft-inc/nebula/releases)。
## 面向的读者
本手册适用于 `算法工程师`、`数据科学家`、`软件开发人员`和 `DBA`,以及所有对`图数据库`感兴趣的人群。
如果在使用 Nebula Graph 的过程中有任何问题,欢迎在 [Nebula Graph Community Slack](https://join.slack.com/t/nebulagraph/shared_invite/enQtNjIzMjQ5MzE2OTQ2LTM0MjY0MWFlODg3ZTNjMjg3YWU5ZGY2NDM5MDhmOGU2OWI5ZWZjZDUwNTExMGIxZTk2ZmQxY2Q2MzM1OWJhMmY#") 或[官方论坛](https://discuss.nebula-graph.com.cn/)提问。
如果对本手册有任何建议或疑问,请在 [GitHub](https://github.com/vesoft-inc/nebula/issues) 给我们留言。
## 格式约定
Nebula Graph 尚在持续开发中,本手册也将持续更新。
本手册使用如下语法惯例:
- `等宽字体`
等宽字体用于表示**命令**,**需要用户输入的命令**及**接口**。
- **加粗字体**
用于命令以及其他需要用户逐字输入的文字。
- `UPPERCASE fixed width`
在查询语言中,`保留关键字` 和 `非保留关键字` 均使用大写等宽字体表示。
## 文件格式
本手册所有文件均采用 Markdown 编写,HTML 网站使用 [mkdocs](https://www.mkdocs.org/) 自动生成。
| 26.21875 | 274 | 0.753278 | yue_Hant | 0.751236 |
eef713c8bdf13bb5563e432a672d1637a4872a53 | 2,445 | md | Markdown | docs/framework/winforms/advanced/how-to-use-antialiasing-with-text.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/winforms/advanced/how-to-use-antialiasing-with-text.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/winforms/advanced/how-to-use-antialiasing-with-text.md | olifantix/docs.de-de | a31a14cdc3967b64f434a2055f7de6bf1bb3cda8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Gewusst wie: Verwenden der Bildkantenglättung mit Text'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
helpviewer_keywords:
- strings [Windows Forms], smoothing drawn
- antialiasing [Windows Forms], using with text
- text [Windows Forms], smoothing
- text [Windows Forms], antialiasing
- strings [Windows Forms], antialiasing when drawing
ms.assetid: 48fc34f3-f236-4b01-a0cb-f0752e6d22ae
ms.openlocfilehash: 842c1fb0b73533fd2e87474b9e8cfa282eba2a70
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 05/04/2018
---
# <a name="how-to-use-antialiasing-with-text"></a>Gewusst wie: Verwenden der Bildkantenglättung mit Text
*Antialiasing* bezieht sich auf das Glätten Flatterrändern von gezeichneten Grafiken und Text, um ihre Darstellung oder Lesbarkeit zu verbessern. Mit der verwalteten [!INCLUDE[ndptecgdiplus](../../../../includes/ndptecgdiplus-md.md)] Klassen, können Sie qualitativ hochwertigen geglätteten Text als auch Text von geringer Qualität rendern. In der Regel akzeptiert höherer Qualität rendern zeitaufwändiger als niedrigere Qualität rendern. Legen Sie zum Festlegen der Qualitätsstufe Text der <xref:System.Drawing.Graphics.TextRenderingHint%2A> Eigenschaft eine <xref:System.Drawing.Graphics> in eines der Elemente des der <xref:System.Drawing.Text.TextRenderingHint> Enumeration
## <a name="example"></a>Beispiel
Im folgenden Codebeispiel wird zeichnet Text mit zwei verschiedenen Einstellungen.
Die folgende Abbildung zeigt die Ausgabe des Beispielcodes Beispielcode.

[!code-csharp[System.Drawing.FontsAndText#21](../../../../samples/snippets/csharp/VS_Snippets_Winforms/System.Drawing.FontsAndText/CS/Class1.cs#21)]
[!code-vb[System.Drawing.FontsAndText#21](../../../../samples/snippets/visualbasic/VS_Snippets_Winforms/System.Drawing.FontsAndText/VB/Class1.vb#21)]
## <a name="compiling-the-code"></a>Kompilieren des Codes
Im vorangehenden Codebeispiel ist für die Verwendung mit Windows Forms konzipiert und erfordert <xref:System.Windows.Forms.PaintEventArgs> `e`, einen Parameter des <xref:System.Windows.Forms.PaintEventHandler>.
## <a name="see-also"></a>Siehe auch
[Verwenden von Schriftarten und Text](../../../../docs/framework/winforms/advanced/using-fonts-and-text.md)
| 64.342105 | 678 | 0.778323 | deu_Latn | 0.837557 |
eef720f468a4558e38421dc195112cb41cd327ac | 36 | md | Markdown | README.md | Ragnarokr45/LibraryWebApi | 0feb4e80ac845cde5fb3bf6ec1d283098f7438ab | [
"MIT"
] | null | null | null | README.md | Ragnarokr45/LibraryWebApi | 0feb4e80ac845cde5fb3bf6ec1d283098f7438ab | [
"MIT"
] | 1 | 2020-05-17T14:12:20.000Z | 2020-05-17T14:12:20.000Z | README.md | Ragnarokr45/LibraryWebApi | 0feb4e80ac845cde5fb3bf6ec1d283098f7438ab | [
"MIT"
] | null | null | null | # LibraryWebApi
Web API for Library
| 12 | 19 | 0.805556 | kor_Hang | 0.322886 |
eef7427081a237d4d405c6fa9256bf0c6f19b875 | 1,095 | md | Markdown | README.md | isek27/LocalizationClientHeroku | ae1ff2d885d7e261aa1218cefd4d72cf9943bb45 | [
"MIT"
] | null | null | null | README.md | isek27/LocalizationClientHeroku | ae1ff2d885d7e261aa1218cefd4d72cf9943bb45 | [
"MIT"
] | null | null | null | README.md | isek27/LocalizationClientHeroku | ae1ff2d885d7e261aa1218cefd4d72cf9943bb45 | [
"MIT"
] | null | null | null | # Localization App made with React (for [Heroku](https://www.heroku.com/) deployment)
This app was orinally made to help patients with hearing implants 'localize' sound. Works best with a left and right speakers.
Can also be used with headphones.
https://localization-client.herokuapp.com/
### INSTALL (Local/Development)
* `npm install`
* `npm start`
* visit `http://localhost:8080/`
### DEPLOYING TO HEROKU
This app is set up for deployment to Heroku!
Heroku will follow the `postinstall` command in your `package.json` and compile assets with `webpack.prod.config.js`. It runs the Express web server in `server.js`. You'll notice there's a special section set up for running in development.
If you've never deployed a Node app to Heroku (or just need a refresher), they have a really great walkthrough [here](https://devcenter.heroku.com/articles/getting-started-with-nodejs#introduction).
### To Test:
- To test, you must have 2 speakers, one labeled "red" and the other "blue"
- Also, allow site access to camera and microphone
- Login with "user" : "password"
- Work in progress***
| 43.8 | 239 | 0.752511 | eng_Latn | 0.982508 |
eef7b83462d2645cef91d3beb1e7699a23d94838 | 907 | md | Markdown | doc/help/system-console/Team-Statistics.md | alexgaribay/platform | 178e30ecededae61eb727f30919b30358590e7f5 | [
"Apache-2.0"
] | 1 | 2021-03-16T14:06:32.000Z | 2021-03-16T14:06:32.000Z | doc/help/system-console/Team-Statistics.md | alexgaribay/platform | 178e30ecededae61eb727f30919b30358590e7f5 | [
"Apache-2.0"
] | null | null | null | doc/help/system-console/Team-Statistics.md | alexgaribay/platform | 178e30ecededae61eb727f30919b30358590e7f5 | [
"Apache-2.0"
] | null | null | null | # Team Statistics
Statistics on users, posts and channels are tracked for each team are viewable under **System Console** > **Teams** > **Statistics**.
## Total Users
The total number of accounts created, including both active and inactive accounts.
## Total Posts
The total number of posts made in a team, including deleted posts and posts made using automation.
## Public Groups
The number of public channels created by your team, including channels that may have been archived.
## Private Group
The number of private groups created by your team, including groups that may have been archived.
## Active Users With Posts
Users who logged in and made a post on a certain day.
## Recently Active Users
Users that have logged in and had recent browser activity in Mattermost.
## Newly Created Users
Users that have recently completed the sign-up process to create a Mattermost account on the team.
| 36.28 | 134 | 0.776185 | eng_Latn | 0.99992 |
eef84bcc15d36c465360577a94173a66968a6024 | 2,716 | md | Markdown | content/en/user-manual/optimization/hardware-instancing.md | Viktor20012002/developcanvas.com | 30f04880962e752fed5b4faab782c517e9e2fbd5 | [
"MIT"
] | null | null | null | content/en/user-manual/optimization/hardware-instancing.md | Viktor20012002/developcanvas.com | 30f04880962e752fed5b4faab782c517e9e2fbd5 | [
"MIT"
] | null | null | null | content/en/user-manual/optimization/hardware-instancing.md | Viktor20012002/developcanvas.com | 30f04880962e752fed5b4faab782c517e9e2fbd5 | [
"MIT"
] | null | null | null | ---
title: Hardware Instancing
template: usermanual-page.tmpl.html
position: 5
---
Hardware instancing is a rendering technique that allows the GPU to render multiple identical meshes in a small number of draw calls. Each instance of the mesh can have a limited amount of per-instance state (for example, a different position or color). It is a technique suitable for drawing objects such as trees or bullets.
To check whether a device supports instancing, inspect `pc.GraphicsDevice.supportsInstancing`. In general, instancing is supported on all WebGL2 devices and also on the majority of WebGL1 devices through the ANGLE_instanced_arrays extension.
Note that all instances are submitted for rendering by the GPU with no camera frustum culling taking place.
## How to use instancing
Enable instancing on a StandardMaterial that you use for rendering:
```javascript
var material = new pc.StandardMaterial();
material.onUpdateShader = function(options) {
options.useInstancing = true;
return options;
};
material.update();
```
Populate a vertex buffer with per instance matrices to provide their world matrices for rendering.
```javascript
// store matrices for individual instances into array
var matrices = new Float32Array(instanceCount * 16);
var matrix = new pc.Mat4();
var matrixIndex = 0;
for (var i = 0; i < instanceCount; i++) {
    // example per-instance position; substitute your own placement logic
    var pos = new pc.Vec3(i * 2, 0, 0);
    matrix.setTRS(pos, pc.Quat.IDENTITY, pc.Vec3.ONE);
// copy matrix elements into array of floats
for (var m = 0; m < 16; m++)
matrices[matrixIndex++] = matrix.data[m];
}
```
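The packing loop above relies on the PlayCanvas `pc` namespace, but the underlying layout is plain data: 16 consecutive floats per instance, column-major, with the translation stored in elements 12 to 14. The engine-independent sketch below illustrates just that layout; the helper name `packTranslations` is our own and not part of the PlayCanvas API:

```javascript
// Pack one 4x4 matrix per instance into a flat Float32Array.
// Each matrix is an identity with the instance position written
// into the column-major translation slots (elements 12..14).
function packTranslations(positions) {
    var out = new Float32Array(positions.length * 16);
    for (var i = 0; i < positions.length; i++) {
        var base = i * 16;
        out[base + 0] = 1;  // x axis
        out[base + 5] = 1;  // y axis
        out[base + 10] = 1; // z axis
        out[base + 15] = 1; // homogeneous w
        out[base + 12] = positions[i][0]; // translation x
        out[base + 13] = positions[i][1]; // translation y
        out[base + 14] = positions[i][2]; // translation z
    }
    return out;
}
```

The resulting array has exactly the shape expected by the instancing vertex buffer described next.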
Create a VertexBuffer which stores per-instance state and initialize it with the matrices. In the following example, we use `pc.VertexFormat.defaultInstancingFormat` which allows us to store a per-instance Mat4 matrix. Then we enable instancing on a MeshInstance, which contains the mesh geometry we want to instance.
```javascript
var instanceCount = 10;
var vertexBuffer = new pc.VertexBuffer(this.app.graphicsDevice, pc.VertexFormat.defaultInstancingFormat,
instanceCount, pc.BUFFER_STATIC, matrices);
meshInst.setInstancing(vertexBuffer);
```
Note that you can create a dynamic vertex buffer using `pc.BUFFER_DYNAMIC` and update its contents each frame like this:
```javascript
vertexBuffer.setData(matrices);
```
## Custom shader
When you write a custom shader that uses instancing, you need to read the per-instance state from vertex attributes.
In the following example, we read a mat4 using vertex attributes.
```
attribute vec4 instance_line1;
attribute vec4 instance_line2;
attribute vec4 instance_line3;
attribute vec4 instance_line4;
mat4 getModelMatrix() {
return mat4(instance_line1, instance_line2, instance_line3, instance_line4);
}
```
eef891be4d52c58593e0c1db5b9aa82764d3a933 | 1,808 | md | Markdown | website/docs/intro.md | asmengistu/flutter-facebook-auth | 4bf13cd4bcad4c5e7156b33ead546451a3406e5a | ["MIT"]

<!--  -->
<p align="center">
<a href="https://pub.dev/packages/flutter_facebook_auth"><img alt="pub version" src="https://img.shields.io/pub/v/flutter_facebook_auth?color=%2300b0ff&label=flutter_facebook_auth&style=flat-square"/></a>
<img alt="last commit" src="https://img.shields.io/github/last-commit/the-meedu-app/flutter-facebook-auth?color=%23ffa000&style=flat-square"/>
<a href="https://codecov.io/gh/darwin-morocho/flutter-facebook-auth">
<img src="https://codecov.io/gh/darwin-morocho/flutter-facebook-auth/branch/master/graph/badge.svg?token=XEXUNVP0UK"/>
</a>
<img alt="license" src="https://img.shields.io/github/license/the-meedu-app/flutter-facebook-auth?style=flat-square"/>
<img alt="stars" src="https://img.shields.io/github/stars/the-meedu-app/flutter-facebook-auth?style=social"/>
</p>
<p>The easiest way to add Facebook login to your Flutter app, get user information, profile picture and more. Web support included.</p>
## Features
- Login on iOS, Android and Web.
- Express login on Android.
- Granted and declined permissions.
- User information, picture profile and more.
- Provide an access token to make request to the Graph API.
## Install
Add the following to your `pubspec.yaml`
```yaml
dependencies:
flutter_facebook_auth: ^4.0.0
```
:::danger IMPORTANT
When you install this plugin, you need to configure it on Android before running the project again. If you skip this step, you will get a **No implementation found** error because the Facebook SDK on Android throws an exception when the configuration is not yet defined, and this blocks the other plugins in your project. If you don't need the plugin yet, remove it or comment it out.
:::
eef8fe8453df013768b3e6cbc2459be449d004c6 | 22,416 | md | Markdown | docs/reference/sql_reference/create-table.md | jiayuasu/snappydata-versus-tabula | 8c619dae370336334a0a6fed85b5404fbca20583 | ["BSD-3-Clause-Open-MPI", "PSF-2.0", "Apache-2.0", "BSD-2-Clause", "MIT", "MIT-0", "BSD-3-Clause-No-Nuclear-License-2014", "BSD-3-Clause-Clear", "PostgreSQL", "BSD-3-Clause"] | 1 | 2021-08-10T01:55:55.000Z | 2021-08-10T01:55:55.000Z

# CREATE TABLE
**To Create Row/Column Table:**
```pre
CREATE TABLE [IF NOT EXISTS] table_name
( column-definition [ , column-definition ] * )
USING [row | column] // If not specified, a row table is created.
OPTIONS (
COLOCATE_WITH 'table-name', // Default none
PARTITION_BY 'column-name', // If not specified, replicated table for row tables, and partitioned internally for column tables.
BUCKETS 'num-partitions', // Default 128. Must be an integer.
REDUNDANCY 'num-of-copies' , // Must be an integer
EVICTION_BY 'LRUMEMSIZE integer-constant | LRUCOUNT interger-constant | LRUHEAPPERCENT',
    PERSISTENCE 'ASYNCHRONOUS | ASYNC | SYNCHRONOUS | SYNC | NONE',
DISKSTORE 'DISKSTORE_NAME', //empty string maps to default diskstore
    OVERFLOW 'true | false', // specifies the action to be executed upon eviction; 'false' is allowed only when EVICTION_BY is not set.
EXPIRE 'time_to_live_in_seconds',
COLUMN_BATCH_SIZE 'column-batch-size-in-bytes', // Must be an integer. Only for column table.
KEY_COLUMNS 'column_name,..', // Only for column table if putInto support is required
COLUMN_MAX_DELTA_ROWS 'number-of-rows-in-each-bucket', // Must be an integer > 0 and < 2GB. Only for column table.
)
[AS select_statement];
```
Refer to these sections for more information on [Creating Sample Table](create-sample-table.md), [Creating External Table](create-external-table.md), [Creating Temporary Table](create-temporary-table.md), [Creating Stream Table](create-stream-table.md).
The column definition defines the name of a column and its data type.
<a id="column-definition"></a>
`column-definition` (for Column Table)
```pre
column-definition: column-name column-data-type [NOT NULL]
column-name: 'unique column name'
```
<a id="row-definition"></a>
`column-definition` (for Row Table)
```pre
column-definition: column-definition-for-row-table | table-constraint
column-definition-for-row-table: column-name column-data-type [ column-constraint ] *
[ [ WITH ] DEFAULT { constant-expression | NULL }
| [ GENERATED { ALWAYS | BY DEFAULT } AS IDENTITY
[ ( START WITH start-value-as-integer [, INCREMENT BY step-value-as-integer ] ) ] ] ]
[ column-constraint ] *
```
Refer to the [identity](#id-columns) section for more information on GENERATED.</br>
Refer to the [constraint](#constraint) section for more information on table-constraint and column-constraint.
`column-data-type`
```pre
column-data-type:
BIGINT |
BINARY |
BLOB |
BOOLEAN |
BYTE |
CLOB |
DATE |
DECIMAL |
DOUBLE |
FLOAT |
INT |
INTEGER |
LONG |
NUMERIC |
REAL |
SHORT |
SMALLINT |
STRING |
TIMESTAMP |
TINYINT |
VARBINARY |
VARCHAR |
```
Column tables can also use ARRAY, MAP and STRUCT types.</br>
Decimal and numeric have a default precision of 38 and scale of 18.</br>
In this release, LONG is supported only for column tables. It is recommended to use BIGINT for row tables instead.
If no option is specified, default values are provided.
<a id="ddl"></a>
<a id="colocate-with"></a>
`COLOCATE_WITH`</br>
The COLOCATE_WITH clause specifies a partitioned table with which the new partitioned table must be colocated.
<a id="partition-by"></a>
`PARTITION_BY`</br>
Use the PARTITION_BY {COLUMN} clause to provide a set of column names that determine the partitioning. </br>
If not specified, a row table is created as a replicated table (the behavior for column tables is described below).</br>
Column and row tables support hash partitioning on one or more columns. These are specified as comma-separated column names in the PARTITION_BY option of the CREATE TABLE DDL or createTable API. The hashing scheme follows the Spark Catalyst Hash Partitioning to minimize shuffles in joins. If no PARTITION_BY option is specified for a column table, then, the table is still partitioned internally.</br> The default number of storage partitions (BUCKETS) is 128 in cluster mode for column and row tables, and 11 in local mode for column and partitioned row tables. This can be changed using the BUCKETS option in CREATE TABLE DDL or createTable API.
<a id="buckets"></a>
`BUCKETS` </br>
The optional BUCKETS attribute specifies the fixed number of "buckets" to use for partitioned row or column tables. Each data server JVM manages one or more buckets. A bucket is a container of data and is the smallest unit of partitioning and migration in the system. For instance, in a cluster of 5 nodes, a bucket count of 25 results in 5 buckets on each node. But if you configure the reverse, 25 nodes and a bucket count of 5, only 5 data servers host all the data for this table. If not specified, the number of buckets defaults to 128. See [best practices](../../best_practices/optimizing_query_latency.md#partition-scheme) for more information.
For row tables, `BUCKETS` must be created with the `PARTITION_BY` clause, else an error is reported.
<a id="redundancy"></a>
`REDUNDANCY`</br>
Use the REDUNDANCY clause to specify the number of redundant copies that should be maintained for each partition, to ensure that the partitioned table is highly available even if members fail. It is important to note that a redundancy of '1' implies two physical copies of data. By default, REDUNDANCY is set to 0 (zero). See [best practices](../../best_practices/optimizing_query_latency.md#redundancy) for more information.
<a id="eviction-by"></a>
`EVICTION_BY`</br>
Use the EVICTION_BY clause to evict rows automatically from the in-memory table based on different criteria. You can use this clause to create an overflow table where evicted rows are written to a local SnappyStore disk store. It is important to note that all tables (expected to host larger data sets) overflow to disk, by default. See [best practices](../../best_practices/optimizing_query_latency.md#overflow) for more information. The value for this parameter is set in MB.
For column tables, the default eviction setting is `LRUHEAPPERCENT` and the default action is to overflow to disk. You can also specify the `OVERFLOW` parameter along with the `EVICTION_BY` clause.
!!! Note
- EVICTION_BY is not supported for replicated tables.
- For column tables, you cannot use the LRUMEMSIZE or LRUCOUNT eviction settings. For row tables, no such defaults are set. Row tables allow all the eviction settings.
<a id="persistence"></a>
`PERSISTENCE`</br>
When you specify the PERSISTENCE keyword, SnappyData persists the in-memory table data to a local SnappyData disk store configuration. SnappyStore automatically restores the persisted table data to memory when you restart the member.
!!! Note
* By default, both row and column tables are persistent.
* The option `PERSISTENT` has been deprecated as of SnappyData 0.9. Although it does work, it is recommended to use `PERSISTENCE` instead.
<a id="diskstore"></a>
`DISKSTORE`</br>
The disk directories where you want to persist the table data. By default, SnappyData creates a "default" disk store on each member node. You can use this option to control the location where data is stored. For instance, you may decide to use a network file system or specify multiple disk mount points to uniformly scatter the data across disks. For more information, refer to [CREATE DISKSTORE](create-diskstore.md).
<a id="overflow"></a>
`OVERFLOW`</br>
Use the OVERFLOW clause to specify the action to be taken upon an eviction event. For persistent tables, setting this to 'true' overflows the evicted rows to disk based on the EVICTION_BY criteria. Setting this to 'false' is not allowed except when EVICTION_BY is not set; in that case, eviction itself is disabled.</br>
When you configure an overflow table, only the evicted rows are written to disk. If you restart or shut down a member that hosts the overflow table, the table data that was in memory is not restored unless you explicitly configure persistence (or you configure one or more replicas with a partitioned table).
!!! Note
The tables are evicted to disk by default, which means table data overflows to a local SnappyStore disk store.
<a id="expire"></a>
`EXPIRE`</br>
Use the EXPIRE clause with tables to control SnappyStore memory usage. Rows expire after the configured `time_to_live_in_seconds`.
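For example, the following sketch creates a row table whose rows expire one hour after insertion; the table and column names are illustrative only:

```pre
CREATE TABLE SESSIONS (SESSION_ID INT NOT NULL, USER_ID INT)
    USING ROW OPTIONS (PARTITION_BY 'SESSION_ID', EXPIRE '3600');
```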
<a id="column-batch-size"></a>
`COLUMN_BATCH_SIZE`</br>
The default size of blocks to use for storage in the SnappyData column store. When inserting data into the column storage this is the unit (in bytes) that is used to split the data into chunks for efficient storage and retrieval. The default value is 25165824 (24M).
<a id="column-max-delta-rows"></a>
`COLUMN_MAX_DELTA_ROWS`</br>
The maximum number of rows that can be in the delta buffer of a column table for each bucket, before it is flushed into the column store. Although the size of column batches is limited by COLUMN_BATCH_SIZE (and thus limits the size of row buffer for each bucket as well), this property allows a lower limit on the number of rows for better scan performance. The value should be > 0 and < 2GB. The default value is 10000.
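As an illustration, a column table tuned for frequent small ingests might lower both thresholds; the table name and values below are examples, not recommendations:

```pre
CREATE TABLE EVENTS (EVENT_ID BIGINT NOT NULL, PAYLOAD STRING)
    USING COLUMN OPTIONS (PARTITION_BY 'EVENT_ID',
        COLUMN_BATCH_SIZE '8388608',
        COLUMN_MAX_DELTA_ROWS '5000');
```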
!!! Note
The following corresponding SQLConf properties for `COLUMN_BATCH_SIZE` and `COLUMN_MAX_DELTA_ROWS` are set if the table creation is done in that session (and the properties have not been explicitly specified in the DDL):
* `snappydata.column.batchSize` - Explicit batch size for this session for bulk insert operations. If a table is created in the session without any explicit `COLUMN_BATCH_SIZE` specification, then this is inherited for that table property.
* `snappydata.column.maxDeltaRows` - The maximum limit on rows in the delta buffer for each bucket of column table in this session. If a table is created in the session without any explicit COLUMN_MAX_DELTA_ROWS specification, then this is inherited for that table property.
Tables created using the standard SQL syntax without any of SnappyData specific extensions are created as row-oriented replicated tables. Thus, each data server node in the cluster hosts a consistent replica of the table. All tables are also registered in the Spark catalog and hence visible as DataFrames.
For example, `create table if not exists Table1 (a int)` is equivalent to `create table if not exists Table1 (a int) using row`.
## Examples
### Example: Column Table Partitioned on a Single Column
```pre
snappy>CREATE TABLE CUSTOMER (
C_CUSTKEY INTEGER NOT NULL,
C_NAME VARCHAR(25) NOT NULL,
C_ADDRESS VARCHAR(40) NOT NULL,
C_NATIONKEY INTEGER NOT NULL,
C_PHONE VARCHAR(15) NOT NULL,
C_ACCTBAL DECIMAL(15,2) NOT NULL,
C_MKTSEGMENT VARCHAR(10) NOT NULL,
C_COMMENT VARCHAR(117) NOT NULL)
USING COLUMN OPTIONS (BUCKETS '10', PARTITION_BY 'C_CUSTKEY');
```
### Example: Column Table Partitioned with 10 Buckets and Persistence Enabled
```pre
snappy>CREATE TABLE CUSTOMER (
C_CUSTKEY INTEGER NOT NULL,
C_NAME VARCHAR(25) NOT NULL,
C_ADDRESS VARCHAR(40) NOT NULL,
C_NATIONKEY INTEGER NOT NULL,
C_PHONE VARCHAR(15) NOT NULL,
C_ACCTBAL DECIMAL(15,2) NOT NULL,
C_MKTSEGMENT VARCHAR(10) NOT NULL,
C_COMMENT VARCHAR(117) NOT NULL)
USING COLUMN OPTIONS (BUCKETS '10', PARTITION_BY 'C_CUSTKEY', PERSISTENCE 'SYNCHRONOUS');
```
### Example: Replicated, Persistent Row Table
```pre
snappy>CREATE TABLE SUPPLIER (
S_SUPPKEY INTEGER NOT NULL PRIMARY KEY,
S_NAME STRING NOT NULL,
S_ADDRESS STRING NOT NULL,
S_NATIONKEY INTEGER NOT NULL,
S_PHONE STRING NOT NULL,
S_ACCTBAL DECIMAL(15, 2) NOT NULL,
S_COMMENT STRING NOT NULL)
USING ROW OPTIONS (PARTITION_BY 'S_SUPPKEY', BUCKETS '10', PERSISTENCE 'ASYNCHRONOUS');
```
### Example: Row Table Partitioned with 10 Buckets and Overflow Enabled
```pre
snappy>CREATE TABLE SUPPLIER (
S_SUPPKEY INTEGER NOT NULL PRIMARY KEY,
S_NAME STRING NOT NULL,
S_ADDRESS STRING NOT NULL,
S_NATIONKEY INTEGER NOT NULL,
S_PHONE STRING NOT NULL,
S_ACCTBAL DECIMAL(15, 2) NOT NULL,
S_COMMENT STRING NOT NULL)
USING ROW OPTIONS (BUCKETS '10',
PARTITION_BY 'S_SUPPKEY',
PERSISTENCE 'ASYNCHRONOUS',
EVICTION_BY 'LRUCOUNT 3',
OVERFLOW 'true');
```
### Example: Create Table using Select Query
```pre
CREATE TABLE CUSTOMER_STAGING USING COLUMN OPTIONS (PARTITION_BY 'C_CUSTKEY') AS SELECT * FROM CUSTOMER ;
```
With this alternate form of the CREATE TABLE statement, you specify the column names and/or the column data types with a query. The columns in the query result are used as a model for creating the columns in the new table.
If no column names are specified for the new table, then all the columns in the result of the query expression are used to create same-named columns in the new table, of the corresponding data type(s). If one or more column names are specified for the new table, the same number of columns must be present in the result of the query expression; the data types of those columns are used for the corresponding columns of the new table.
Note that only the column names and datatypes from the queried table are used when creating the new table. Additional settings in the queried table, such as partitioning, replication, and persistence, are not duplicated. You can optionally specify partitioning, replication, and persistence configuration settings for the new table and those settings need not match the settings of the queried table.
### Example: Create Table using Spark DataFrame API
For information on using the Apache Spark API, refer to [Using the Spark DataFrame API](../../sde/running_queries.md#using-the-spark-dataframe-api).
### Example: Create Column Table with PUT INTO
```pre
snappy> CREATE TABLE COL_TABLE (
PRSN_EVNT_ID BIGINT NOT NULL,
VER bigint NOT NULL,
CLIENT_ID BIGINT NOT NULL,
SRC_TYP_ID BIGINT NOT NULL)
USING COLUMN OPTIONS(PARTITION_BY 'PRSN_EVNT_ID,CLIENT_ID', BUCKETS '64', KEY_COLUMNS 'PRSN_EVNT_ID, CLIENT_ID');
```
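With `KEY_COLUMNS` declared as above, `PUT INTO` acts as an upsert on those key columns: an existing row with the same key is updated, otherwise a new row is inserted. A hypothetical statement with illustrative values:

```pre
snappy> PUT INTO COL_TABLE VALUES (1, 1, 1000, 5);
```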
### Example: Create Table with Eviction Settings
Use eviction settings to keep your table within a specified limit, either by removing evicted data completely or by creating an overflow table that persists the evicted data to a disk store.
1. Decide whether to evict based on:
- Entry count (useful if table row sizes are relatively uniform).
- Total bytes used.
- Percentage of JVM heap used. This uses the SnappyData resource manager. When the manager determines that eviction is required, the manager orders the eviction controller to start evicting from all tables where the eviction criterion is set to LRUHEAPPERCENT.
2. Decide what action to take when the limit is reached:
- Locally destroy the row (partitioned tables only).
- Overflow the row data to disk.
3. If you want to overflow data to disk (or persist the entire table to disk), configure a named disk store to use for the overflow data. If you do not specify a disk store when creating an overflow table, SnappyData stores the overflow data in the default disk store.
4. Create the table with the required eviction configuration.
For example, to evict using LRU entry count and overflow evicted rows to a disk store (OverflowDiskStore):

```pre
CREATE TABLE Orders(OrderId INT NOT NULL,ItemId INT) USING row OPTIONS (EVICTION_BY 'LRUCOUNT 2', OVERFLOW 'true', DISKSTORE 'OverflowDiskStore', PERSISTENCE 'async');
```
To create a table that simply removes evicted data from memory without persisting the evicted data, use the `DESTROY` eviction action. For example:

```pre
CREATE TABLE Orders(OrderId INT NOT NULL,ItemId INT) USING row OPTIONS (PARTITION_BY 'OrderId', EVICTION_BY 'LRUMEMSIZE 1000');
```

By default, SnappyData uses synchronous persistence, with `OVERFLOW` set to `true` and `EVICTION_BY` set to `LRUHEAPPERCENT`.
<a id="constraint"></a>
### Constraint (only for Row Tables)
A CONSTRAINT clause is an optional part of a CREATE TABLE statement that defines a rule to which table data must conform.
There are two types of constraints:</br>
**Column-level constraints**: Refer to a single column in the table and do not specify a column name (except check constraints). They refer to the column that they follow.</br>
**Table-level constraints**: Refer to one or more columns in the table. Table-level constraints specify the names of the columns to which they apply. Table-level CHECK constraints can refer to 0 or more columns in the table.
Column and table constraints include:
* NOT NULL— Specifies that a column cannot hold NULL values (constraints of this type are not nameable).
* PRIMARY KEY— Specifies a column (or multiple columns if specified in a table constraint) that uniquely identifies a row in the table. The identified columns must be defined as NOT NULL.
* UNIQUE— Specifies that values in the column must be unique. NULL values are not allowed.
* FOREIGN KEY— Specifies that the values in the columns must correspond to values in referenced primary key or unique columns or that they are NULL. </br>If the foreign key consists of multiple columns and any column is NULL, then the whole key is considered NULL. SnappyData permits the insert no matter what is in the non-null columns.
* CHECK— Specifies rules for values in a column, or specifies a wide range of rules for values when included as a table constraint. The CHECK constraint has the same format and restrictions for column and table constraints.
Column constraints and table constraints have the same function; the difference is where you specify them. Table constraints allow you to specify more than one column in a PRIMARY KEY, UNIQUE, CHECK, or FOREIGN KEY constraint definition.
Column-level constraints (except for check constraints) refer to only one column.
If you do not specify a name for a column or table constraint, then SnappyData generates a unique name.
**Example**: The following example demonstrates how to create a table with `FOREIGN KEY`: </br>
```pre
snappy> create table trading.customers (cid int not null, cust_name varchar(100), since date, addr varchar(100), tid int, primary key (cid));
snappy> create table trading.networth (cid int not null, cash decimal (30, 20), securities decimal (30, 20), loanlimit int, availloan decimal (30, 20), tid int, constraint netw_pk primary key (cid), constraint cust_newt_fk foreign key (cid) references trading.customers (cid));
snappy> show importedkeys in trading;
PKTABLE_NAME |PKCOLUMN_NAME |PK_NAME |FKTABLE_SCHEM |FKTABLE_NAME |FKCOLUMN_NAME |FK_NAME |KEY_SEQ
----------------------------------------------------------------------------------------------------------------------------------------------------------
CUSTOMERS |CID |SQL180403162038220 |TRADING |NETWORTH |CID |CUST_NEWT_FK |1
```
<a id="id-columns"></a>
### Identity Columns (only for Row Tables)
<a id="generate"></a>
SnappyData supports both GENERATED ALWAYS and GENERATED BY DEFAULT identity columns only for BIGINT and INTEGER data types. The START WITH and INCREMENT BY clauses are supported only for GENERATED BY DEFAULT identity columns.
For a GENERATED ALWAYS identity column, SnappyData increments the default value on every insertion, and stores the incremented value in the column. You cannot insert a value directly into a GENERATED ALWAYS identity column, and you cannot update a value in a GENERATED ALWAYS identity column. Instead, you must either specify the DEFAULT keyword when inserting data into the table or you must leave the identity column out of the insertion column list.
Consider a table with the following column definition:
```pre
create table greetings (i int generated always as identity, ch char(50)) using row;
```
You can insert rows into the table using either the DEFAULT keyword or by omitting the identity column from the INSERT statement:
```pre
insert into greetings values (DEFAULT, 'hello');
```
```pre
insert into greetings(ch) values ('hi');
```
The values that SnappyData automatically generates for a GENERATED ALWAYS identity column are unique.
For a GENERATED BY DEFAULT identity column, SnappyData increments and uses a default value for an INSERT only when no explicit value is given. To use the generated default value, either specify the DEFAULT keyword when inserting into the identity column or leave the identity column out of the INSERT column list.
In contrast to GENERATED ALWAYS identity columns, with a GENERATED BY DEFAULT column you can specify an identity value to use instead of the generated default value. To specify a value, include it in the INSERT statement.
For example, consider a table created using the statement:
```pre
create table greetings (i int generated by default as identity, ch char(50));
```
The following statement specifies the value “1” for the identity column:
```pre
insert into greetings values (1, 'hi');
```
These statements both use generated default values:
```pre
insert into greetings values (DEFAULT, 'hello');
insert into greetings(ch) values ('bye');
```
Although the automatically-generated values in a GENERATED BY DEFAULT identity column are unique, a GENERATED BY DEFAULT column does not guarantee unique identity values for all rows in the table. For example, in the above statements, the rows containing “hi” and “hello” both have an identity value of “1.” This occurs because the generated column starts at “1” and the user-specified value was also “1.”
To avoid duplicating identity values (for example, during an import operation), you can use the START WITH clause to specify the first identity value that SnappyData should assign and increment. Or, you can use a primary key or a unique constraint on the GENERATED BY DEFAULT identity column to check for and disallow duplicates.
By default, the initial value of a GENERATED BY DEFAULT identity column is 1, and the value is incremented by 1 for each INSERT. Use the optional START WITH clause to specify a new initial value. Use the optional INCREMENT BY clause to change the increment value used during each INSERT.
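The following sketch combines both clauses; the table is illustrative, and the generated values follow from START WITH 10 and INCREMENT BY 5:

```pre
create table orders (id bigint generated by default as identity (start with 10, increment by 5), item varchar(50));
insert into orders(item) values ('pen'); -- generated id 10
insert into orders(item) values ('book'); -- generated id 15
```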
**Related Topics**</br>
* [DROP TABLE](drop-table.md)
* [DELETE TABLE](delete.md)
* [SHOW TABLES](../interactive_commands/show.md#tables)
* [TRUNCATE TABLE](truncate-table.md)
eef95ed8a9cfbb31a364fe5be42c41835113ce19 | 1,970 | md | Markdown | _posts/11/2021-04-06-jasmin-brown.md | chito365/ukdat | 382c0628a4a8bed0f504f6414496281daf78f2d8 | ["MIT"]

---
id: 5589
title: Jasmin Brown
date: 2021-04-06T20:36:37+00:00
author: Laima
layout: post
guid: https://ukdataservers.com/jasmin-brown/
permalink: /04/06/jasmin-brown
tags:
- claims
- lawyer
- doctor
- house
- multi family
- online
- poll
- business
category: Guides
---
* some text
{:toc}
## Who is Jasmin Brown
Comedian, actress, and host known for the #NoFilter podcast. She has also worked as a host and correspondent for BET’s 50 Central.
## Prior to Popularity
She previously ran a YouTube channel called ItsJazzysWorld before taking a hiatus in March of 2014.
## Random data
She was nominated for a 2018 Social Hustle Award.
## Family & Everyday Life of Jasmin Brown
She was born the youngest of four in Takoma Park, Maryland, and raised in West Palm Beach, Florida. Her father was Jamaican; her mother was Trinidadian.
## People Related With Jasmin Brown
She and Terrence J are both former 106 & Park hosts.
eef9ca23c3ea61f46095d46f2dd9952448bbe914 | 142 | md | Markdown | README.md | davidsyntex/adventofcode2016csharp | 449ce24964296d41e1a2f72e0b9e9ad809503f26 | ["MIT"]

# My C#-solutions for Advent of Code 2016
Focus has been on finding a fast solution.
Refactoring for performance and beauty is in the future.
eef9f03f3fbdf99ea2b539bf3bffdc3b56cbff6f | 2,119 | md | Markdown | source/webapp/README.md | chriscoombs/amazon-rekognition-shot-detection-demo-using-segment-api | 3bb5bb074436bbac7f29d46b1921c9220a2fb156 | ["MIT-0"]

# Webapp Component
The web application is written in ES6 and uses the jQuery and Bootstrap libraries.
___
# Limitations
The solution is designed to demonstrate how you can use Amazon Rekognition Segment Detection to extract shot boundaries from a given video. While the solution is fully functional, it is **not** meant to be a production-ready solution.
There are limitations with the web application:
* The webapp uses a 3-minute timer to periodically poll the state machine execution status through the RESTful API, so the status is not updated in real time. For real-time updates, we highly recommend using a Pub/Sub service such as [AWS AppSync](https://aws.amazon.com/appsync/), [Amazon MQ](https://aws.amazon.com/amazon-mq/?amazon-mq.sort-by=item.additionalFields.postDateTime&amazon-mq.sort-order=desc), or [AWS IoT Core](https://aws.amazon.com/iot-core/).
* The temporary security credential issued by Amazon Cognito expires every hour. The webapp doesn't automatically refresh the security credential. If you experience a timeout error, **reload** the page.
___
# Security
When you build systems on AWS infrastructure, security responsibilities are shared between you and AWS. This shared model can reduce your operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the services operate. For more information about security on AWS, visit the [AWS Security Center](https://aws.amazon.com/security).
## Subresource Integrity (SRI)
Web application assets are secured using Subresource Integrity (SRI). Input and output encoding are performed to prevent Cross-Site Scripting (XSS) attacks.
The sign-in flow uses the [Amazon Cognito](https://aws.amazon.com/cognito/) service to authenticate users.
HTTPS requests require [AWS Signature V4](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
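For reference, the Signature V4 signing-key derivation that underpins those signed requests can be sketched with the Python standard library alone. This is an illustration of the published SigV4 scheme, not code from this webapp, and the key, date, region, and service values you pass in are placeholders:

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive the AWS Signature V4 signing key (per the published SigV4 scheme)."""
    def _sign(key, msg):
        # Each step HMAC-SHA256-signs the next scope component with the prior key.
        return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

    k_date = _sign(('AWS4' + secret_key).encode('utf-8'), date_stamp)
    k_region = _sign(k_date, region)
    k_service = _sign(k_region, service)
    return _sign(k_service, 'aws4_request')
```

The resulting 32-byte key is then used to sign the request's string-to-sign; in practice the AWS SDKs (or Amplify, in a browser app) handle this for you.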
___
Next to [Custom Resources Component](../custom-resources/README.md) | Back to [Shot Detection State Machine](../step/README.md) | Return to [README](../../README.md)
# shax
my repository
# GRUN
Grun is a social network for users to review NYC restaurants based on their sustainability efforts and social responsibility. Users can easily look up their favorite restaurants, leave reviews, and find the top-rated restaurants in their borough.
# Contributing to dephell
Thank you for deciding to contribute to DepHell! This guide is to assist you with contributing code. If you have a question that isn't answered here, please [open an issue][open issue].
## The basics
So you want to contribute some code? Great! Here are the basic steps:
1. Find an [issue][issues] that you want to work on. Good places to start are [good first issues] or [help wanted]. You could also [open an issue][open issue] if there is something specific you want to contribute. Wait for a response before you start coding though, as the thing you want might already exist somewhere!
1. Fork DepHell.
1. Clone your fork.
1. Create a branch to work against.
1. Run tests to make sure they work for your system.
1. Write some tests.
1. Write some code.
1. Run tests to make sure it works.
1. Run flake8 checks.
1. Write some docs.
1. Push your branch (to your fork).
1. Create a pull request to dephell/master.
1. Wait for the checks to run and fix anything that fails.
## Testing
Any new code that you contribute should ideally be covered by an automated test. To run the existing tests:
```bash
dephell venv create --env pytest
dephell deps install --env pytest
dephell venv run --env pytest
```
To write new tests using [pytest], place them in the `tests` directory. This directory should roughly follow the same file structure as the source directory (`dephell`) except that every file/module is prepended with `test_`. For example, the file containing tests for `dephell/commands/deps_convert.py` is `tests/test_commands/test_deps_convert.py`.
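As a sketch of that layout, a new test file would look like the following. The `normalize_format` helper here is purely illustrative — it stands in for whatever the real `dephell/commands/deps_convert.py` exposes — so only the placement and naming conventions are the point:

```python
# tests/test_commands/test_deps_convert.py
# Hypothetical example: `normalize_format` is a toy stand-in so the
# file is self-contained and runnable; it is not dephell's real API.

def normalize_format(name):
    # Strip whitespace, lowercase, and drop underscores from a format name.
    return name.strip().lower().replace('_', '')

def test_normalize_format_strips_and_lowercases():
    assert normalize_format('  Setup_Py ') == 'setuppy'

def test_normalize_format_is_idempotent():
    once = normalize_format('Poetry')
    assert normalize_format(once) == once
```

pytest discovers any `test_*` function inside `test_*.py` files automatically, so no registration is needed beyond putting the file in the right place.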
## Style
All the code you contribute must follow the same style as the rest of dephell:
- Follow [PEP8]
- Use `'single quotes'` for strings, not `"double quotes"`
Run flake8 to see how you're doing:
```bash
dephell venv create --env flake8
dephell deps install --env flake8
dephell venv run --env flake8
```
Sort imports before pushing:
```bash
dephell venv create --env isort
dephell deps install --env isort
dephell venv run --env isort
```
The main things you contribute are ideas and implementation. So, if you struggle with the flake8 checks, don't worry, just ask the maintainers for help in the comments on your Pull Request. If your code passes CI, merging of your Pull Request can't be rejected or delayed because of style. No [bikeshedding](https://en.wikipedia.org/wiki/Law_of_triviality) and meaningless discussions.
## Using an IDE
If you want to use an IDE to edit / test dephell code, you'll have to point that IDE to the virtual environment dephell created. You can either get this path using `dephell inspect venv` or create the venv in a directory your IDE will find (e.g. `dephell venv create --venv .venv`). Some tests currently assume they are being run from the root of the project. If your IDE likes to run tests from other directories, you may need to update some existing tests to use relative paths.
[issues]: https://github.com/dephell/dephell/issues?utf8=✓&q=is%3Aissue+is%3Aopen+
[open issue]: https://github.com/dephell/dephell/issues/new
[help wanted]: https://github.com/dephell/dephell/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22
[good first issues]: https://github.com/dephell/dephell/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22
[pytest]: https://docs.pytest.org/en/latest/
[PEP8]: https://www.python.org/dev/peps/pep-0008/
# UOFDBot - User of the day [bot]
<p align="center">
<img width="100px" src="https://user-images.githubusercontent.com/2866780/72239870-6a010c00-35f3-11ea-9d8f-9d499762e1bb.png">
</p>
A fun Telegram bot. It helps determine who is today's 'pidor' and who is today's 'hero'. Each game is started separately. The statistics can be reset if needed.
---
title: "Apache Beam"
description: "Unified batch and streaming programming model to define portable data processing pipelines and execute these using a range of different engines. Originating from the Google Dataflow model, focuses on unifying both styles of processing by treating static data sets as streams (which happen to have a beginning and an end), while achieving data correctness and the ability to handle late-arriving data through a set of abstractions and concepts that give users control over estimated quality of arrived data (completeness), duration to wait for results (latency) and how much speculative/redundant computation to do (cost). Allows business logic, data characteristics and trade-off strategies to be defined via different programming languages through pluggable language SDKs (with out of the box support for Java and Python). Supports a range of pluggable runtime platforms through pipeline runners, with support for a direct runner (for development and testing pipelines in a non-distributed environment), Apache Apex, Flink, Spark, and (under development) Gearpump runners, and a Google Cloud Dataflow runner. Also supports a growing set of connectors that allow pipelines to read and write data to various data storage systems (IOs). An Apache project, opened sourced by Google in January 2016, graduated in January 2017, with a first stable release (2.0) in May 2017. Written in Java and Python and under active development with a large number of contributors including Google, data Artisans, Talend and PayPal."
alt-titles: [Beam]
vendors: [Apache]
type: "Commercial Open Source"
date: 2017-08-22 07:30
last_updated: 2019-08-07
version: "2.14"
---
## Release History
| version | release date | release links | release comment |
| --- | --- | --- | --- |
| 2.1 | 2017-08-23 | [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12340528) |
| 2.2 | 2017-12-02 | [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12341044) |
| 2.3 | 2018-02-19 | [blog post](https://beam.apache.org/blog/2018/02/19/beam-2.3.0.html); [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12341608) |
| 2.4 | 2018-03-20 | [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12341608)
| 2.5 | 2018-06-26 | [blog post](https://beam.apache.org/blog/2018/06/26/beam-2.5.0.html); [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12342847)
| 2.6 | 2018-08-08 | [blog post](https://beam.apache.org/blog/2018/08/10/beam-2.6.0.html); [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12343392)
| 2.7 | 2018-10-02 | [blog post](https://beam.apache.org/blog/2018/10/03/beam-2.7.0.html); [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12343654)
| 2.8 | 2018-10-31 | [blog post](https://beam.apache.org/blog/2018/10/29/beam-2.8.0.html); [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12343985)
| 2.9 | 2018-12-19 | [blog post](https://beam.apache.org/blog/2018/12/13/beam-2.9.0.html); [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12344258)
| 2.10 | 2019-02-01 | [blog post](https://beam.apache.org/blog/2019/02/15/beam-2.10.0.html); [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12344540)
| 2.11 | 2019-03-05 | [blog post](https://beam.apache.org/blog/2019/03/05/beam-2.11.0.html); [release notes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12344775)
| 2.12 | 2019-04-25 | [blog post](https://beam.apache.org/blog/2019/04/25/beam-2.12.0.html); [release notes](https://jira.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12344944)
| 2.13 | 2019-05-22 | [blog post](https://beam.apache.org/blog/2019/05/22/beam-2.13.0.html); [release notes](https://jira.apache.org/jira/secure/ReleaseNote.jspa?projectId=12319527&version=12345166)
| 2.14 | 2019-08-07 | [blog post](https://beam.apache.org/blog/2019/07/31/beam-2.14.0.html)
## Links
* <https://beam.apache.org> - product home page
* <https://beam.apache.org/documentation/> - documentation
* <https://beam.apache.org/documentation/runners/capability-matrix/> - defines capabilities of individual runners
* <https://cloud.google.com/blog/big-data/2016/05/why-apache-beam-a-google-perspective> - motivation behind Beam
## News
* <https://beam.apache.org/blog/> - blog
* <https://beam.apache.org/get-started/downloads/> - details of releases
## Description
This project takes a fully functioning Google Books API search engine built with a RESTful API and refactors it to be a GraphQL API built with Apollo Server. The app was built using the MERN stack, with a React front end, MongoDB database, and Node.js/Express.js server and API. It's already set up to allow users to save book searches to the back end.
<br />
This project required:
* Setting up an Apollo Server to use GraphQL queries and mutations to fetch and modify data.
* Modifying the existing authentication middleware so that it works in the context of a GraphQL API.
* Creating an Apollo Provider so requests can communicate with an Apollo Server.
* Deploying the application to Heroku with a MongoDB database using MongoDB Atlas.
## Project Criteria
GIVEN a book search engine<br/>
WHEN I load the search engine<br/>
THEN I am presented with a menu with the options Search for Books and Login/Signup and an input field to search for books and a submit button<br/>
WHEN I click on the Search for Books menu option<br/>
THEN I am presented with an input field to search for books and a submit button<br/>
WHEN I am not logged in and enter a search term in the input field and click the submit button<br/>
THEN I am presented with several search results, each featuring a book’s title, author, description, image, and a link to that book on the Google Books site<br/>
WHEN I click on the Login/Signup menu option<br/>
THEN a modal appears on the screen with a toggle between the option to log in or sign up<br/>
WHEN the toggle is set to Signup<br/>
THEN I am presented with three inputs for a username, an email address, and a password, and a signup button<br/>
WHEN the toggle is set to Login<br/>
THEN I am presented with two inputs for an email address and a password and login button<br/>
WHEN I enter a valid email address and create a password and click on the signup button<br/>
THEN my user account is created and I am logged in to the site<br/>
WHEN I enter my account’s email address and password and click on the login button<br/>
THEN the modal closes and I am logged in to the site<br/>
WHEN I am logged in to the site<br/>
THEN the menu options change to Search for Books, an option to see my saved books, and Logout<br/>
WHEN I am logged in and enter a search term in the input field and click the submit button<br/>
THEN I am presented with several search results, each featuring a book’s title, author, description, image, and a link to that book on the Google Books site and a button to save a book to my account<br/>
WHEN I click on the Save button on a book<br/>
THEN that book’s information is saved to my account<br/>
WHEN I click on the option to see my saved books<br/>
THEN I am presented with all of the books I have saved to my account, each featuring the book’s title, author, description, image, and a link to that book on the Google Books site and a button to remove a book from my account<br/>
WHEN I click on the Remove button on a book<br/>
THEN that book is deleted from my saved books list<br/>
WHEN I click on the Logout button<br/>
THEN I am logged out of the site and presented with a menu with the options Search for Books and Login/Signup and an input field to search for books and a submit button <br/>
## Screenshots
<p align="center"><img src="./assets/images/Google-Book-Search.gif"></p> <br />
## Deployed application link
https://immense-sierra-46849.herokuapp.com<br />
## Technologies
- [Apollo-server-express](https://www.apollographql.com/docs/react/essentials/setup.html)
- [Jwt-decode](https://www.npmjs.com/package/jwt-decode)
- [Bcrypt](https://www.npmjs.com/package/bcrypt)
- [React](https://reactjs.org/)
- [React-Router-dom](https://reacttraining.com/react-router/web/guides/quick-start)
- [Stripe](https://stripe.com/)
- [Express](https://expressjs.com/)
- [Node.js](https://nodejs.org/)
- [MongoDB](https://www.mongodb.com/)
- [Mongoose](https://mongoosejs.com/)
- [Graphql](https://graphql.org/)
- [Google Fonts](https://fonts.google.com/)
- [Google Analytics](https://analytics.google.com/)
- [jsonwebtoken](https://www.npmjs.com/package/jsonwebtoken)
- [Heroku](https://www.heroku.com/)
- [Git](https://git-scm.com/)
## License
MIT <br />
# HelperiOS
helper files for ios
---
title: >
Dig Dug
layout: post
permalink: /view/3287
votes: 2
preview: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACUAAAAgCAIAAAAaMSbnAAAABnRSTlMA/wD/AP5AXyvrAAAAfklEQVRIiWP8//8fAww8f/6CARuQlJTAKo4GiNHORIxBVAT0to8FmSPV8IyBgYFhlglD2hlkxv+ZRIUnMWAwhCfEc8gM2tpHS4ASf88apGhuH2P6OYL2Iav5P9MIlxQx2gdDehm1b9S+UftG7aMGYBxtv4zaN2rfqH2j9pEFANyWJvMsIvNAAAAAAElFTkSuQmCC"
---
<dl class="side-by-side">
<dt>Preview</dt>
<dd>
<img class="preview" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACUAAAAgCAIAAAAaMSbnAAAABnRSTlMA/wD/AP5AXyvrAAAAfklEQVRIiWP8//8fAww8f/6CARuQlJTAKo4GiNHORIxBVAT0to8FmSPV8IyBgYFhlglD2hlkxv+ZRIUnMWAwhCfEc8gM2tpHS4ASf88apGhuH2P6OYL2Iav5P9MIlxQx2gdDehm1b9S+UftG7aMGYBxtv4zaN2rfqH2j9pEFANyWJvMsIvNAAAAAAElFTkSuQmCC">
</dd>
<dt>Original</dt>
<dd>
<img class="preview" src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEAAAAAgCAYAAACinX6EAAAAdElEQVR42u3WwQmAMAwF0EzrtM6RNfTqoVIkKKa8wIcek0dJGzGpzDwqie4FAAAAAAAALAxQHbA9EAAAl2a2/T4R4/MgAJYEeJDWO+DL4S3BP/4Tqg2/DjK7hQAAAGi1BKcg1WcZAAAAAAAAAAAAAAAAozoBSwFVuoLH4KsAAAAASUVORK5CYII=">
</dd>
<dt>Title</dt>
<dd>Dig Dug</dd>
<dt>Description</dt>
<dd>This is a Minecraft adaptation of the player character from Dig Dug.</dd>
<dt>Added By</dt>
<dd>TheName</dd>
<dt>Added On</dt>
<dd>2010-09-22</dd>
<dt>Votes</dt>
<dd>2</dd>
</dl>
> [Master table of contents for Big Shuang's introductory Python exercises](https://www.cnblogs.com/BigShuang/p/15664677.html)
# ember-factory-for-polyfill
This addon provides a best effort polyfill for the `ember-factory-for` feature added in Ember 2.12.
Please review [emberjs/rfcs#150](https://github.com/emberjs/rfcs/blob/master/text/0150-factory-for.md) for more details.
## Installation
```sh
ember install ember-factory-for-polyfill
```
## Usage
```javascript
import Ember from 'ember';
export default Ember.Service.extend({
someMethod() {
let owner = Ember.getOwner(this);
let ValidatorFactory = owner.factoryFor('validator:post');
let validator = ValidatorFactory.create();
}
});
```
## Migration
### Applications
After you upgrade your application to Ember 2.12, you should remove `ember-factory-for-polyfill` from
your `package.json`.
### Addons
Addons generally support many different Ember versions, so leaving `ember-factory-for-polyfill` in
place for consumers of your addon is perfectly normal. When the addon no longer supports Ember
versions older than 2.12, we recommend removing `ember-factory-for-polyfill` from your `package.json`
and doing a major version bump.
## Compatibility
This addon is tested against quite a few past Ember versions. Check `config/ember-try.js` for the current list, but
the list of supported Ember versions at the time of authoring was:
* 2.3
* 2.4
* 2.8
* 2.12
* 2.16 (canary at the time)
For compatibility with older Ember versions prior to 2.3, please use [ember-getowner-polyfill](https://github.com/rwjblue/ember-getowner-polyfill) instead.
## Addon Maintenance
### Installation
* `git clone <repository-url>` this repository
* `cd ember-factory-for-polyfill`
* `npm install`
### Running
* `ember serve`
* Visit your app at [http://localhost:4200](http://localhost:4200).
### Running Tests
* `npm test` (Runs `ember try:each` to test your addon against multiple Ember versions)
* `ember test`
* `ember test --server`
### Building
* `ember build`
For more information on using ember-cli, visit [https://ember-cli.com/](https://ember-cli.com/).
eeff5d7d31b6f33d4345ba23389291cf2c842b82 | 6,187 | md | Markdown | articles/finance/fixed-assets/acquire-assets-procurement.md | MicrosoftDocs/Dynamics-365-Operations.ja-jp | 821ba731df05b9c1d0b0947b8d7e66ae9a34c0b4 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2020-05-18T17:14:25.000Z | 2021-11-13T07:27:21.000Z | articles/finance/fixed-assets/acquire-assets-procurement.md | MicrosoftDocs/Dynamics-365-Operations.ja-jp | 821ba731df05b9c1d0b0947b8d7e66ae9a34c0b4 | [
"CC-BY-4.0",
"MIT"
] | 37 | 2017-12-13T17:53:18.000Z | 2021-03-16T19:04:28.000Z | articles/finance/fixed-assets/acquire-assets-procurement.md | MicrosoftDocs/Dynamics-365-Operations.ja-jp | 821ba731df05b9c1d0b0947b8d7e66ae9a34c0b4 | [
"CC-BY-4.0",
"MIT"
] | 8 | 2017-11-06T03:10:26.000Z | 2020-03-21T18:08:51.000Z | ---
title: 調達によって取得される資産の取得
description: このトピックでは、固定資産と買掛金の統合を設定して、発注書または仕入先請求書から固定資産を自動作成する方法、また固定資産の取得および取得原価調整トランザクションの自動転記を実行する方法を説明します。
author: ShylaThompson
ms.date: 03/05/2019
ms.topic: article
ms.prod: ''
ms.technology: ''
ms.search.form: AssetParameters
audience: Application User
ms.reviewer: roschlom
ms.custom: 3481
ms.assetid: d4e73a3f-633b-48b2-b8db-7a4a59a4d7ec
ms.search.region: Global
ms.author: saraschi
ms.search.validFrom: 2016-02-28
ms.dyn365.ops.version: AX 7.0.0
ms.openlocfilehash: 4b1834f0087931760d223a018c93decdea1ddfddca219fdca57c97181d37084c
ms.sourcegitcommit: 42fe9790ddf0bdad911544deaa82123a396712fb
ms.translationtype: HT
ms.contentlocale: ja-JP
ms.lasthandoff: 08/05/2021
ms.locfileid: "6728010"
---
# <a name="acquire-assets-through-procurement"></a>調達によって取得される資産の取得
[!include [banner](../includes/banner.md)]
このトピックでは、固定資産と買掛金の統合を設定して、発注書または仕入先請求書から固定資産を自動作成する方法、また固定資産の取得および取得原価調整トランザクションの自動転記を実行する方法を説明します。 購買明細行の数量に関係なく、1 つの購買明細行が作成する資産は 1 つです。 複数の固定資産を作成する必要がある場合は、複数の購買明細行を作成する必要があります。
固定資産と買掛金を統合するために使用できる次の方法があります。すべての固定資産で同一の方法を使用する必要があります。:
- 発注書または仕入先請求書の明細行に固定資産番号を追加する前に、固定資産を手動で作成します。 仕入先請求書の転記時に、資産取得トランザクションが資産に自動的に転記されます。 既定では、この方法が選択されます。
- 発注書または仕入先請求書の明細行に固定資産番号を追加する前に、固定資産を手動で作成します。 仕入先請求書の転記時に、資産取得トランザクションは資産に転記されません。
- [新しい固定資産の作成] チェック ボックスがオンになっている製品受領書または仕入先請求書の転記時に、固定資産が自動的に作成されます。 仕入先請求書の転記時に、資産取得トランザクションが資産に自動的に転記されます。
- [新しい固定資産の作成] チェック ボックスがオンになっている製品受領書または仕入先請求書の転記時に、固定資産が自動的に作成されます。 仕入先請求書の転記時に、資産取得トランザクションは資産に転記されません。
固定資産を手動で作成する場合は、前半の 2 種類の方法のいずれかを選択し、発注書または仕入先請求書に固定資産番号を割り当てます。 より柔軟なアプローチを使用する場合は、後半の 2 種類の方法のうち 1 を選択します。たとえば、ある時は固定資産を手動で作成することも、別な時には明細行品目情報に基づいて自動的に固定資産を作成することもできます。
固定資産を手動で作成する場合でも、より柔軟なアプローチを使用する場合でも、取得トランザクションを固定資産だけに転記できるか、仕入先請求書の転記時に転記できるかを決定する必要があります。 一部の組織は、手動による仕訳帳入力または提案の使用によって、ユーザーが固定資産に取得および取得トランザクションを手動で作成することを選択しています。
このトピックでは、それぞれの方法について、詳しく説明します。
## <a name="methods-for-manually-creating-fixed-assets"></a>固定資産を手動で作成する方法
明細行に入力された固定資産番号が含まれている仕入先請求書を転記するときに、固定資産パラメーター ページで [購買からの資産の取得を許可する] オプションが選択されている場合は、取得が自動的に転記され、資産のステータスが [未解決] に変更されます。
取得を転記できない場合は、[固定資産] に取得トランザクションを手動で入力するか、固定資産仕訳帳で取得提案を使用して、複数の取得トランザクションをまとめて作成できます。
> [!NOTE]
> 固定資産が取得トランザクションの転記を特定のユーザー グループに制限するように設定されている場合、請求書から取得トランザクションを転記するには、そのユーザー グループのメンバである必要があります。
## <a name="methods-for-automatically-creating-fixed-assets"></a>固定資産を自動で作成する方法
明細行の [新しい固定資産の作成] オプションが選択された製品受領書を転記すると、新しい固定資産が、[未取得] ステータスで作成されます。 この後、新しい固定資産が含まれている仕入先請求書を転記するときに、固定資産が買掛金からの資産の取得を許可するように設定され、ユーザーが取得トランザクションを転記できるユーザー グループのメンバーであれば、その新しい資産の取得トランザクションが転記され、資産のステータスが [未解決] に変更されます。
製品受領書を転記するときに購買注文明細行で [新しい固定資産?] オプションは選択されていなかったが、仕入先請求書を転記するときには選択されていた場合は、固定資産が資産の作成と取得を許可するように設定されていれば、新しい固定資産が作成され、ステータス [未解決] で取得されます。 製品受領書の転記時に資産が既に作成されている場合は、仕入先請求書の転記時に別の資産が作成されることはありません。
### <a name="capitalization-threshold"></a>資本化のしきい値
資産の作成と取得が自動的に行われる方法を使用する場合は、固定資産の購入金額が、資産を減価償却するために指定された資本化のしきい値に適合していることを検証するようにシステムを設定できます。 これを行う場合は、買掛金を作成するときに、資産の帳簿で [減価償却] オプションが選択されます。
資本化のしきい値は、資産が指定された金額に適合する場合に、減価償却を行うかどうかを決定する金額です。 たとえば、資産を購入したが、その購入金額が資本化のしきい値よりも小さい場合は、資産が減価償却の対象になりません。購入金額がしきい値以上の場合は、資産が減価償却されるように指定されます。
資本化のしきい値は、固定資産グループ ページで設定できます。
## <a name="scenario"></a>シナリオ
次のシナリオは、固定資産と買掛金の統合のすべてのサイクルを説明しています。 設定例を示し、取得提案の使用方法についても説明します。
このシナリオでは、システムは次のように設定されています。
- 資産は、製品受領書または仕入先請求書の転記時に自動的に作成されますが、固定資産は、買掛金からの取得トランザクションの転記を禁止するように設定されています。
- 勘定は品目グループ ページの [固定資産受入] と [固定資産払出] 勘定タイプに対して [勘定タイプ] フィールドで指定されます。
- コンピュータ グループ (COMP) に対する資本化のしきい値は 1,500 です。
- 実行する作業は、従業員が使用する新しいラップトップ コンピュータの発注書の入力、その発注書の転記、入庫担当が製品受領書を転記したことの確認、仕入先請求書の転記、および経理担当がラップトップ資産のステータスを [未解決] に更新したことの確認です。
始めに、発注書ページを使用して、ラップトップの詳細を入力します (価格は 1,600 です)。 発注明細行の [固定資産] クイック タブの [新しい固定資産?] オプションを選択し、固定資産グループとして [COMP] を選択した後に、この発注書を保存します。
ラップトップが入荷すると、入庫担当が製品受領書の入力と転記を行って、ラップトップの受入を記録します。 このラップトップ資産は、ステータス [未取得] で作成されます。 金額が資本化のしきい値を超えています。 したがって、ラップトップ資産の帳簿の [減価償却] オプションが選択されます。 次のトランザクションが発生します。
| 説明 | 勘定 | 借方 | 貸方 |
|-------------------------------------------|---------------------|----------|----------|
| 購買、製品受領書の購買 | 未請求入庫 | 1,600.00 | |
| 購買、製品受領書仕入相殺 | 未払購買 | | 1,600.00 |
次に、ラップトップの仕入先請求書を転記します。 固定資産が、仕入先請求書の転記時の資産取得トランザクションの転記を禁止するように設定されているため、ラップトップのステータスは変更されません。 仕入先請求書を転記するときに、[新しい固定資産の作成] オプションが選択されました。 したがって、[固定資産] 入庫勘定が使用されました。 取得の転記がなかったので、[固定資産払出] 勘定は使用されません。この勘定は、組織の経理担当が取得提案を使用して [固定資産] の取得トランザクションを転記するときに使用されます。
次のトランザクションが発生します。
| 説明 | 勘定 | 借方 | 貸方 |
|-------------------------------------------|---------------------|----------|----------|
| 購買、製品受領書仕入相殺 | 未払購買 | 1,600.00 | |
| 仕入先残高 | 買掛金勘定 | | 1,600.00 |
| 購買、固定資産受入 | コンピュータ費 | 1,600.00 | |
| 購買、製品受領書の購買 | 未請求入庫 | | 1,600.00 |
最後に、経理担当は [未取得] ステータスを持つすべての固定資産を確認します。 したがって、新しいラップトップの資産も見直されます。 経理担当は、[固定資産] 仕訳帳明細行から取得提案ページを開き、請求書は存在するが、ステータスが依然として [未取得] であるすべての資産の取得トランザクションを作成します。 仕訳帳を転記すると、ラップトップ資産のステータスが[未解決] に変更されます。 固定資産払出勘定は貸方に転記され、資産取得勘定は借方に転記されます。
次のシナリオは、上のシナリオのバリエーションです。
- 固定資産が仕入先請求書の転記時に資産取得トランザクションの転記を許可するように設定されている場合、取得トランザクションが既に作成されているため、経理担当は固定資産の取得提案を使用する必要はありません。 また、さまざまな勘定は、仕入先請求書の転記時に更新されます。 コンピュータ費の代わりに、[固定資産受入] 在庫勘定が借方に転記され、2 種類の追加トランザクションが発生します。つまり、資産取得勘定が借方に転記され、[固定資産払出] 在庫勘定が貸方に転記されます。
- 製品受領書の転記時に [新しい固定資産の作成] オプションを選択しない場合、資産はその時点で作成されません。 仕入先請求書を転記する前に [新しい固定資産の作成] オプションを選択すると、[未取得] ステータスの資産が生成されるか、仕入先請求書の転記時に取得トランザクションを転記する場合、ステータスが [未解決] になります。
- ラップトップの価格が 1,600 の代わりに 1,400である場合、資本化のしきい値には達しません。 このため、資産が作成され、[減価償却] オプションがオフになっています。
- 仕入帳を使用している場合、仕入帳の転記後に請求書承認仕訳帳ページを使用して伝票を取得し、発注書を仕入先と関連付け、[新しい固定資産の作成] オプションを選択し、仕入先請求書を転記します。 取得トランザクションを作成できるユーザー グループのメンバーは、取得が作成され、資産に [未解決] ステータスがあります。
- 数量の一部だけが入荷された場合は、ユーザー グループの制限により、最初の仕入先請求書では資産の取得が作成されません。 発注数量を満たす 2 番目の仕入先請求書に対して取得を転記できるのは、最初の仕入先請求書で取得トランザクションが入力されており、ユーザーが、取得を転記することができるユーザー グループのメンバである場合に限られます。
詳細については、「[固定資産の統合](fixed-asset-integration.md)」を参照してください。
[!INCLUDE[footer-include](../../includes/footer-banner.md)] | 55.738739 | 257 | 0.769517 | jpn_Jpan | 0.876333 |
eeffb275c7c9edf1291a5b31528e98e6a81f9f38 | 246 | md | Markdown | README.md | wf539/VB2005ExpressStarterKit | 48d8853692bc6fda5f9afd612388984ab1f08116 | [
"MIT"
] | null | null | null | README.md | wf539/VB2005ExpressStarterKit | 48d8853692bc6fda5f9afd612388984ab1f08116 | [
"MIT"
] | null | null | null | README.md | wf539/VB2005ExpressStarterKit | 48d8853692bc6fda5f9afd612388984ab1f08116 | [
"MIT"
] | null | null | null | # Example code for book: Wrox's Visual Basic 2005 Express Edition Starter Kit
Visual Basic 2005 Express Edition Starter Kit
Andrew Parsons
Copyright (c) 2006 by Wiley Publishing, Inc. Indianapolis IN 46256; Andrew Parsons
ISBN: 978-0764595738 | 27.333333 | 82 | 0.800813 | kor_Hang | 0.360636 |
eefff4c075d7aba5e05de032ec3c3d73d253ac65 | 3,220 | md | Markdown | docs/tutorials/train/detection.md | ZHUI/PaddleX | 27f21b7a1e4a209cf7cd9bb558410278e49e2806 | [
"Apache-2.0"
] | 3 | 2020-05-12T03:09:13.000Z | 2020-06-18T02:50:34.000Z | docs/tutorials/train/detection.md | wyc880622/PaddleX | f001960b7359f3a88b7dd96e1f34500b90566ceb | [
"Apache-2.0"
] | null | null | null | docs/tutorials/train/detection.md | wyc880622/PaddleX | f001960b7359f3a88b7dd96e1f34500b90566ceb | [
"Apache-2.0"
] | 1 | 2020-05-18T07:06:28.000Z | 2020-05-18T07:06:28.000Z | # 训练目标检测模型
------
更多检测模型在VOC数据集或COCO数据集上的训练代码可参考[代码tutorials/train/detection/faster_rcnn_r50_fpn.py](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/detection/faster_rcnn_r50_fpn.py)、[代码tutorials/train/detection/yolov3_darknet53.py](https://github.com/PaddlePaddle/PaddleX/blob/develop/tutorials/train/detection/yolov3_darknet53.py)。
**1.下载并解压训练所需的数据集**
> 使用1张显卡训练并指定使用0号卡。
```python
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import paddlex as pdx
```
> 这里使用昆虫数据集,训练集、验证集和测试集共包含217个样本,6个类别。
```python
insect_dataset = 'https://bj.bcebos.com/paddlex/datasets/insect_det.tar.gz'
pdx.utils.download_and_decompress(insect_dataset, path='./')
```
**2.定义训练和验证过程中的数据处理和增强操作**
> 在训练过程中使用`RandomHorizontalFlip`进行数据增强,由于接下来选择的模型是带FPN结构的Faster RCNN,所以使用`Padding`将输入图像的尺寸补齐到32的倍数,以保证FPN中两个需做相加操作的特征层的尺寸完全相同。transforms的使用见[paddlex.det.transforms](../../apis/transforms/det_transforms.md)
```python
from paddlex.det import transforms
train_transforms = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.Normalize(),
transforms.ResizeByShort(short_size=800, max_size=1333),
transforms.Padding(coarsest_stride=32)
])
eval_transforms = transforms.Compose([
transforms.Normalize(),
transforms.ResizeByShort(short_size=800, max_size=1333),
transforms.Padding(coarsest_stride=32),
])
```
**3.创建数据集读取器,并绑定相应的数据预处理流程**
> 数据集读取器的介绍见文档[paddlex.datasets](../../apis/datasets.md)
```python
train_dataset = pdx.datasets.VOCDetection(
data_dir='insect_det',
file_list='insect_det/train_list.txt',
label_list='insect_det/labels.txt',
transforms=train_transforms,
shuffle=True)
eval_dataset = pdx.datasets.VOCDetection(
data_dir='insect_det',
file_list='insect_det/val_list.txt',
label_list='insect_det/labels.txt',
transforms=eval_transforms)
```
**4.创建Faster RCNN模型,并进行训练**
**4. Create and train the Faster RCNN model**
```python
num_classes = len(train_dataset.labels) + 1
model = pdx.det.FasterRCNN(num_classes=num_classes)
```
> 模型训练默认下载并使用在ImageNet数据集上训练得到的Backone,用户也可自行指定`pretrain_weights`参数来设置预训练权重。训练过程每间隔`save_interval_epochs`会在`save_dir`保存一次模型,与此同时也会在验证数据集上计算指标。检测模型的接口可见文档[paddlex.cv.models](../../apis/models.md#fasterrcnn)
```python
model.train(
num_epochs=12,
train_dataset=train_dataset,
train_batch_size=2,
eval_dataset=eval_dataset,
learning_rate=0.0025,
lr_decay_epochs=[8, 11],
save_dir='output/faster_rcnn_r50_fpn',
use_vdl=True)
```
> With `use_vdl` set to `True`, training metrics can be viewed with VisualDL. Start VisualDL as shown below, then open https://0.0.0.0:8001 in a browser. Here 0.0.0.0 is for local access; for a remote service, replace it with the machine's IP.
```shell
visualdl --logdir output/faster_rcnn_r50_fpn/vdl_log --port 8001
```
**5. Validate or test**
> After training, the model can be evaluated on the validation set.
```python
eval_metrics = model.evaluate(eval_dataset, batch_size=2)
print("eval_metrics:", eval_metrics)
```
> Output:
```python
eval_metrics: {'bbox_map': 76.085371}
```
> After training, the model can be used to run inference on an image.
```python
predict_result = model.predict('./insect_det/JPEGImages/1968.jpg')
```
> Visualize the prediction:
```python
pdx.det.visualize('./insect_det/JPEGImages/1968.jpg', predict_result, threshold=0.5, save_dir='./output/faster_rcnn_r50_fpn')
```

| 26.833333 | 336 | 0.762422 | yue_Hant | 0.128457 |
eefff5de95d70d8e59652e7d6a79cc3eade82c5b | 2,276 | md | Markdown | _kb/how-to-add-drop-map-location.md | Projectx-hub/projectx-live | 5d06ab212ead987935a5c7a6ee3bb0ea1bcf95e0 | [
"MIT"
] | null | null | null | _kb/how-to-add-drop-map-location.md | Projectx-hub/projectx-live | 5d06ab212ead987935a5c7a6ee3bb0ea1bcf95e0 | [
"MIT"
] | 1 | 2021-04-04T07:31:10.000Z | 2021-04-04T07:31:10.000Z | _kb/how-to-add-drop-map-location.md | Projectx-hub/projectx-live | 5d06ab212ead987935a5c7a6ee3bb0ea1bcf95e0 | [
"MIT"
] | 1 | 2020-12-26T11:37:37.000Z | 2020-12-26T11:37:37.000Z | ---
layout: kb
collection: kb
type: how-to
title: "Adding a Drop Map Location"
date: 2019-11-12
description: "How-to guide for accessing CloudCannon's visual editor to add a Google Maps location to the Drop Map page (standalone page)."
author: marklchaves
---
As of 15 November 2019, up to five map locations can be entered.
## Steps
### Find the Drop Locations Name and URL Fields
1. [Log into CloudCannon.](https://app.cloudcannon.com/users/sign_in)
2. Navigate to the **ProjectX** website from the _Projects_ or _Sites_ dashboard from the left menu bar.
3. From **Projects > ProjectX Live > Sites > ProjectX on master**
4. From **Sites > ProjectX on master**
5. Click on **Explore**. You'll land on **Pages > Standalone** by default.
6. Click on **Drop Map** in the thumbnail grid (should be near the top row). Type **Drop Map** in the search bar if you don't see it right away. The visual editor will open.
7. The page properties are on right hand side. We only need to change the **Locations Name** and **URL**. Please try **not** to edit the **Layout** or **Title**.
8. Enter a name for the location in the **Name** field.
### Add the New URL
1. Go to the new location using **Google Maps**.
2. Click on **Share**.
3. Click on **Embed a map**. Keep the default **Medium** size.
4. Click **COPY HTML**. Then, `cmd C` for Mac or `ctrl C` for Windows.
5. Paste the embed code into a new text editor page.
6. Copy only the link in the `src` attribute. For example, everything after `src="` and before the closing `"` below. 
7. Paste the link into the **URL** field in the **Visual Editor**.
8. Click the _floppy disk_ icon on the upper right to save. Once saved, the site will rebuild and your changes will take effect in a few seconds.
---
### Watch the Video
**Note:** Support for multiple map locations was added after this video was recorded. The concept is exactly the same. But, now more than one location can be entered.
<iframe width="560" height="315" src="https://www.youtube.com/embed/NkI6kBczZM0" frameborder="0" allowfullscreen></iframe>
[Watch on YouTube](https://youtu.be/NkI6kBczZM0 "Adding a New Drop Map Location Screencast") | 51.727273 | 239 | 0.725395 | eng_Latn | 0.966023 |
e10062f7c139a56da8edb1e10fe297cad3041c21 | 2,276 | md | Markdown | README.md | YuriKovalchuk/Coworkee | 1ad3505cd65565faae6fe16c6fd3b6b7a2951796 | [
"MIT"
] | null | null | null | README.md | YuriKovalchuk/Coworkee | 1ad3505cd65565faae6fe16c6fd3b6b7a2951796 | [
"MIT"
] | null | null | null | README.md | YuriKovalchuk/Coworkee | 1ad3505cd65565faae6fe16c6fd3b6b7a2951796 | [
"MIT"
] | null | null | null | # Ext JS Employee Directory
Ext JS Sample Application - Employee Directory (Coworkee)
## Getting started
### Prerequisite
- Install [Node.js](https://nodejs.org/) (^6.9.2)
- Install [Sencha Cmd](https://www.sencha.com/products/sencha-cmd) (^6.5.1)
- Download [Sencha Ext JS](https://www.sencha.com/products/extjs) (^6.5.1). We recommend
extracting Ext JS into a `"sencha-sdks"` folder in your home directory.
On Windows the "~" part of the path will be replaced by something like "C:\Users\Me\".
### Install the server
Install the server node.js dependencies:
$ cd server
$ npm install
### Build the client
Install the Ext JS framework for the application:
$ cd client
$ sencha app install ~/sencha-sdks
or
$ sencha app upgrade ~/sencha-sdks/ext-<version of the sdk>
Note: If you use `sencha app install ~/sencha-sdks` here, the version of the SDK inside ~/sencha-sdks will
have to match the version specified in `workspace.json`.
Development build:
$ sencha app build --development
Production build:
$ sencha app build --production
### Run the app
$ cd server
$ npm start
Note: by default, `npm start` will use the **development** build. To run the production
build, use the following command instead:
$ npm start -- --client-environment=production
Open your browser on http://localhost:3000
#### Network access
By default, the server is set up to expose the Ext.Direct API through `localhost`. This
address can be changed via the [`direct.server`](server/config.json#L16) option (e.g.
`192.168.1.2`), in which case the client must be launched using the same address (e.g.
`https://192.168.1.2:3000`). If the client needs to be accessed with a different address,
you first need to enable CORS using [`cors.enabled: true`](server/config.json#L3).
#### Cordova / PhoneGap
If the app is run inside
[Cordova (or PhoneGap)](https://docs.sencha.com/cmd/guides/cordova_phonegap.html), the
following configs must be changed:
- replace the Ext.Direct API endpoint in the client app ([`app.json#js`](client/app.json#L254)) with an absolute URL
- replace the server IP/hostname ([`direct.server` option](server/config.json#L16)) with an accessible endpoint
- enable CORS ([`cors.enabled: true`](server/config.json#L3))
| 34.484848 | 114 | 0.721441 | eng_Latn | 0.930478 |
e101287ac78f1a694ce2d8ff977bd8cc2ffcea43 | 7,438 | md | Markdown | nodejs/pnpm.md | BillowsTao/Front-End-Knowledge-Base | 5590c055b86d38ebfa85674f07d56d7db580f16a | [
"MIT"
] | null | null | null | nodejs/pnpm.md | BillowsTao/Front-End-Knowledge-Base | 5590c055b86d38ebfa85674f07d56d7db580f16a | [
"MIT"
] | 5 | 2020-02-04T05:13:52.000Z | 2020-02-06T11:36:23.000Z | nodejs/pnpm.md | BillowsTao/Front-End-Knowledge-Base | 5590c055b86d38ebfa85674f07d56d7db580f16a | [
"MIT"
] | null | null | null | # pnpm - 快速的,节省磁盘空间的包管理工具
> Node.js 的包管理工具经历了从 npm, yarn 到 pnpm 的发展过程。截止目前 (2021.12),pnpm 为最新一代的包管理工具。pnpm 相较于其他包管理工具的优势有哪些,为何要选择 pnpm,本文为大家逐一介绍。
## 为什么不是 npm, Yarn
pnpm 是一个 Node.js 的另一种包管理工具,它是 npm, Yarn 的替代,但是更快、更高效。下面概述一下 npm, Yarn 的原理和问题,以及 Yarn 相对于 npm 的改进。
### pnpm 不是扁平化 `node_modules`
在 npm v3 之前的版本,`node_modules` 的结构是可预测的和干净的,每个依赖在 `node_modules` 目录中,并且有它自己的 `node_modules` 目录,其中所有的依赖都在 `package.json` 中指定。
```bash
node_modules
└─ foo
├─ index.js
├─ package.json
└─ node_modules
└─ bar
├─ index.js
└─ package.json
```
This approach has two serious problems:
- packages often created dependency trees so deep that they hit the long-path limit on Windows
- a package needed by several dependents was copied again and again
To solve these problems, npm rethought the `node_modules` structure and introduced flattening. With `npm@3`, `node_modules` now looks like this:
```bash
node_modules
├─ foo
| ├─ index.js
| └─ package.json
└─ bar
├─ index.js
└─ package.json
```
For more on npm dependency resolution, see:
- [npm install - Algorithm](https://docs.npmjs.com/cli/v8/commands/npm-install#algorithm)
- [npm folders - Cycles, Conflicts, and Folder Parsimony](https://docs.npmjs.com/cli/v8/configuring-npm/folders#cycles-conflicts-and-folder-parsimony)
Yarn is only a small improvement over npm. Although it made installs faster and added some nice new features, it uses the same flat `node_modules` structure as npm (since v3).
The flattened dependency tree brings a series of problems:
- modules can access packages they do not declare as dependencies
- the flattening algorithm is quite complex
- some packages must be copied into a project's `node_modules` directory (rather than linked from a cache)
There are also problems Yarn never set out to solve, such as disk-space usage. pnpm was created to address all of the issues above, and it has been a great success. pnpm has all of the extra features Yarn added on top of npm:
- **Security.** Like Yarn, pnpm keeps a special file with the checksums of all installed packages, used to verify the integrity of every installed package before its code is executed.
- **Offline mode.** pnpm keeps the downloaded tarball of every package in a local registry mirror. When a package is available locally, it never issues a request. With the `--offline` flag, HTTP requests can be disabled entirely.
- **Speed.** pnpm is not only faster than npm, it is also faster than Yarn, with a cold cache as well as a hot one. Yarn copies files out of its cache, while pnpm only links them from a global store.
### Differences in how packages are installed
pnpm does not allow installing packages without saving them to `package.json`. If no flags are passed to `pnpm add`, the package is saved as a regular dependency. As with npm, `--save-dev` and `--save-optional` install a package as a `dev` or `optional` dependency.
Because of this restriction, a project using pnpm never accumulates extra packages unless it removes a dependency and orphans it. That is why pnpm's [`prune` command](https://pnpm.io/cli/prune) does not let you specify which packages to prune: it always removes all extraneous and orphaned packages.
### Strictness
Compared with npm and Yarn, which let code access any package in `node_modules`, pnpm only lets a package access the dependencies specified in its `package.json`.
## pnpm overview

pnpm stands for performant npm; website: [http://pnpm.io](http://pnpm.io)
Advantages:
- Fast: pnpm is up to 2x faster than the alternatives
- Efficient: files in `node_modules` are linked from a single content-addressable store
- Supports monorepos: pnpm has built-in support for multiple packages in one repository
- Strict: pnpm creates a non-flat `node_modules`, so code cannot access arbitrary packages
## Fast
From the official [JavaScript package-manager benchmarks](https://pnpm.io/benchmarks):
| action | cache | lockfile | `node_modules` | npm | pnpm | Yarn | Yarn PnP |
| ------- | ----- | -------- | -------------- | ----- | ----- | ----- | -------- |
| install | | | | 8.6s | 16.3s | 22.1s | 27.5s |
| install | ✔ | ✔ | ✔ | 2.1s | 1.4s | 2.6s | n/a |
| install | ✔ | ✔ | | 13.5s | 4.1s | 8.6s | 1.9s |
| install | ✔ | | | 19.8s | 7.6s | 14.2s | 7.4s |
| install | | ✔ | | 31.8s | 13.4s | 15.3s | 21.1s |
| install | ✔ | | ✔ | 2.7s | 1.8s | 8.3s | n/a |
| install | | ✔ | ✔ | 2.1s | 1.3s | 9.4s | n/a |
| install | | | ✔ | 2.7s | 5.9s | 15s | n/a |
| update | n/a | n/a | n/a | 2.2s | 11.8s | 18.7s | 32.4s |

## Efficient

Content-addressable storage:
With npm or Yarn, if you have 100 projects that use some dependency, 100 copies of that dependency end up on disk. With pnpm, dependencies are stored in a content-addressable store, so:
1. If you depend on different versions of a package, only the files that differ are added to the store. For example, if a package has 100 files and a new version changes only one of them, `pnpm update` adds just that one new file to the store instead of copying the whole package again for the new version.
2. All files are stored in a single place on disk. When a package is installed, its files are hard-linked from that place instead of taking up additional disk space. This lets you share the same version of a dependency across projects.
As a result, you save a lot of disk space, proportional to the number of projects and dependencies, and installs are much faster!
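The hard-link mechanism described above can be sketched with a small, self-contained Python illustration (this demonstrates the filesystem behavior pnpm relies on, not pnpm's actual code; all file and directory names are made up):

```python
import os
import tempfile

def demo_hard_link(store_dir: str, project_dir: str) -> tuple:
    """Write a file into a 'store' directory, then hard-link it into a
    'project' directory, mimicking how pnpm links packages from its store.
    Returns the inode numbers of both paths."""
    os.makedirs(store_dir, exist_ok=True)
    os.makedirs(project_dir, exist_ok=True)
    store_file = os.path.join(store_dir, "index.js")
    with open(store_file, "w") as f:
        f.write("module.exports = 42;\n")
    linked_file = os.path.join(project_dir, "index.js")
    os.link(store_file, linked_file)  # hard link: a second name, no second copy
    return os.stat(store_file).st_ino, os.stat(linked_file).st_ino

if __name__ == "__main__":
    root = tempfile.mkdtemp()
    ino_store, ino_project = demo_hard_link(
        os.path.join(root, "store"), os.path.join(root, "project"))
    # Both directory entries point at the same inode, i.e. the same bytes:
    print(ino_store == ino_project)
```

Because both names refer to one inode, installing the same dependency into many projects costs essentially no extra disk space, which is exactly the effect described above.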
### How is this done?

As mentioned earlier, pnpm does not flatten the dependency tree. Because of that, the algorithm pnpm uses can be much simpler! That is why, early on, a single pnpm developer could keep pace with the dozens of contributors to Yarn.
So, if not by flattening, how does pnpm structure the `node_modules` directory?
Unlike npm@3, pnpm tries to solve the problems npm@2 had without flattening the dependency tree. In a `node_modules` folder created by pnpm, all packages are grouped with their own dependencies, but the directory tree never gets as deep as with npm@2. pnpm keeps all dependencies flat on disk but uses symbolic links to group them together.
pnpm's `node_modules` layout uses symbolic links to create the nested structure of dependencies.
Every file of every package in `node_modules` is a hard link from the content-addressable store. Suppose you install `foo@1.0.0`, which depends on `bar@1.0.0`. pnpm hard-links both packages into `node_modules` as follows:
```bash
# -> - symbolic link (or a junction on Windows)
node_modules
└── .pnpm
├── bar@1.0.0
│ └── node_modules
│ └── bar -> <store>/bar
│ ├── index.js
│ └── package.json
└── foo@1.0.0
└── node_modules
└── foo -> <store>/foo
├── index.js
└── package.json
```
These are the only "real" files in `node_modules`. Once all packages are hard-linked into `node_modules`, symbolic links are created to build the nested dependency-graph structure.
You may have noticed that both packages are hard-linked into a subfolder inside a `node_modules` folder (`foo@1.0.0/node_modules/foo`). This is necessary in order to:
1. Allow packages to import themselves. `foo` should be able to `require('foo/package.json')` or `import * as package from "foo/package.json"`.
2. Avoid circular symlinks. A package and the dependencies it needs are placed in one folder. To Node.js, it makes no difference whether a dependency lives in the package's own `node_modules` or in the `node_modules` of any parent directory.
The next stage of installation is symlinking the dependencies. `bar` is symlinked into the `foo@1.0.0/node_modules` folder:
```shell
node_modules
└── .pnpm
├── bar@1.0.0
│ └── node_modules
│ └── bar -> <store>/bar
└── foo@1.0.0
└── node_modules
├── foo -> <store>/foo
└── bar -> ../../bar@1.0.0/node_modules/bar
```
Next, the direct dependencies are handled. `foo` is symlinked into the root `node_modules` folder, because `foo` is a dependency of the project:
```shell
node_modules
├── foo -> ./.pnpm/foo@1.0.0/node_modules/foo
└── .pnpm
├── bar@1.0.0
│ └── node_modules
│ └── bar -> <store>/bar
└── foo@1.0.0
└── node_modules
├── foo -> <store>/foo
└── bar -> ../../bar@1.0.0/node_modules/bar
```
This is a very simple example, but the layout keeps this structure regardless of the number of dependencies or the depth of the dependency graph.
Let's add `qar@2.0.0` as a dependency of both `bar` and `foo`. Here is what the new structure looks like:
```shell
node_modules
├── foo -> ./.pnpm/foo@1.0.0/node_modules/foo
└── .pnpm
├── bar@1.0.0
│ └── node_modules
│ ├── bar -> <store>/bar
│ └── qar -> ../../qar@2.0.0/node_modules/qar
├── foo@1.0.0
│ └── node_modules
│ ├── foo -> <store>/foo
│ ├── bar -> ../../bar@1.0.0/node_modules/bar
│ └── qar -> ../../qar@2.0.0/node_modules/qar
└── qar@2.0.0
└── node_modules
└── qar -> <store>/qar
```
As you can see, even though the graph is now deeper (`foo > bar > qar`), the directory depth stays the same.
This layout may look strange at first glance, but it is fully compatible with Node's module-resolution algorithm! When resolving modules, Node ignores symlinks, so when `foo@1.0.0/node_modules/foo/index.js` requires `bar`, Node does not use the `bar` at `foo@1.0.0/node_modules/bar`; instead, `bar` is resolved to its actual location (`bar@1.0.0/node_modules/bar`). As a result, `bar` can also resolve its own dependencies from `bar@1.0.0/node_modules`.
A big benefit of this layout is that only packages that are genuinely in the dependencies are accessible. With a flat `node_modules` structure, all hoisted packages are accessible. To read more about why this is an advantage, see [pnpm's strictness helps to avoid silly bugs](https://www.kochan.io/nodejs/pnpms-strictness-helps-to-avoid-silly-bugs.html).
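The resolution rule Node relies on here (a symlinked module is resolved to its real location before its own lookups continue) can be illustrated with a short Python sketch; this is only an analogy for the mechanism, not Node's actual resolver, and the paths are hypothetical:

```python
import os
import tempfile

def resolve_like_node(path: str) -> str:
    """Return the real location of a path with all symlinks resolved,
    analogous to how Node (by default) resolves a required module to its
    actual directory before resolving that module's own dependencies."""
    return os.path.realpath(path)

if __name__ == "__main__":
    root = tempfile.mkdtemp()
    real_pkg = os.path.join(root, ".pnpm", "bar@1.0.0", "node_modules", "bar")
    os.makedirs(real_pkg)
    # Symlink the package into a dependent's node_modules, pnpm-style:
    link = os.path.join(root, "bar_link")
    os.symlink(real_pkg, link)
    # Lookups that start from the symlink end up at the real folder:
    print(resolve_like_node(link) == os.path.realpath(real_pkg))
```

Because resolution lands in the real `bar@1.0.0/node_modules/bar` folder, `bar` sees only its own declared dependencies, which is what makes the layout strict.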
## Other features
- [Filtering](https://pnpm.io/filtering): restrict a command to a specified subset of packages;
- [pnpm link](https://pnpm.io/cli/link): make a local package accessible system-wide or from other locations;
- [pnpm exec](https://pnpm.io/cli/exec): run a command provided by the project's dependencies in the project scope;
- [pnpm env <cmd>](https://pnpm.io/cli/env): manage the Node.js environment;
- [Workspace](https://pnpm.io/workspaces): monorepo support.
## References
- [pnpm - Motivation](https://pnpm.io/motivation)
- [Flat node_modules is not the only way](https://pnpm.io/blog/2020/05/27/flat-node-modules-is-not-the-only-way)
- [Why should we use pnpm?](https://www.kochan.io/nodejs/why-should-we-use-pnpm.html)
- [pnpm vs npm](https://pnpm.io/pnpm-vs-npm)
| 33.35426 | 250 | 0.628798 | yue_Hant | 0.642318 |
e1014a516797ad111748eeafaa065769777e3cf4 | 443 | md | Markdown | _posts/2016-7-4-Warden-blog-has-just-started.md | warden-stack/warden-stack.github.io | 57004bd53af4c0d4cb70d515be453ecda452c278 | [
"MIT"
] | 1 | 2016-07-01T06:05:24.000Z | 2016-07-01T06:05:24.000Z | _posts/2016-7-4-Warden-blog-has-just-started.md | warden-stack/warden-stack.github.io | 57004bd53af4c0d4cb70d515be453ecda452c278 | [
"MIT"
] | null | null | null | _posts/2016-7-4-Warden-blog-has-just-started.md | warden-stack/warden-stack.github.io | 57004bd53af4c0d4cb70d515be453ecda452c278 | [
"MIT"
] | null | null | null | ---
layout: post
title: Warden blog has just started!
---
Let me introduce a blog about [Warden](https://getwarden.net), where the most important information and updates will be posted in order to keep you informed about what's going on with the project.

Just make sure you check out the [repository](https://github.com/warden-stack) on GitHub, as there are new features and platforms coming.
| 40.272727 | 193 | 0.760722 | eng_Latn | 0.989725 |
e1014cbf73533ea854246247679abef6a97e6774 | 3,808 | md | Markdown | README.md | canhspokeo/rtask | 28ef0f05731874b63e2ef08e2c3bd080e76089c0 | [
"MIT"
] | null | null | null | README.md | canhspokeo/rtask | 28ef0f05731874b63e2ef08e2c3bd080e76089c0 | [
"MIT"
] | null | null | null | README.md | canhspokeo/rtask | 28ef0f05731874b63e2ef08e2c3bd080e76089c0 | [
"MIT"
] | null | null | null | # RTask
RTask mimics the Task class in .NET to make writing asynchronous code in Ruby easier.
## Installation
Add this line to your application's Gemfile:
```ruby
gem 'rtask'
```
And then execute:
$ bundle
Or install it yourself as:
$ gem install rtask
## Usage
Run a task
```ruby
# this is a non-blocking call
task = RTask.run do
# expensive code to run asynchronously
'result of the task'
end
# other code
task.result # This is a blocking call. Returns 'result of the task'
task.status # 'completed'
```
Create and run a new task manually
```ruby
task = RTask::Task.new do
# expensive code to run asynchronously
'result of the task'
end
# A callback can be registered to be called when the task completes successfully.
# The task is passed to the callback block automatically.
task.oncomplete do |t|
# code here.
# if the callback is registered after the task completes, it is executed immediately.
end
# A callback can be registered to be called when the task completes with an error.
# The task is passed to the callback block automatically.
task.onfault do |t|
# code here.
# if the callback is registered after the task completes, it is executed immediately.
end
# Tasks can be chained one after another.
# The antecedent task is passed to the next task block as a parameter.
# task2 will be executed right after task completes, regardless of error.
task2 = task.continue_with do |t|
# code to run asynchronously
t.result + ' result of the task2'
end
task.start # Start running the task. This is a non-blocking call
task.result # 'result of the task'
task2.result # 'result of the task result of the task2'
```
Cancel a task
```ruby
task = RTask.run do
sleep 10
end
# other work
task.cancel
task.status # 'canceled'
```
Get the exception thrown by the task
```ruby
task = RTask.run do
raise 'error'
end
task.result # nil
task.status # 'faulted'
task.exception # the exception thrown by the task
```
Wait for task(s) to finish
```ruby
tasks = []
tasks << RTask.run do
sleep 2
'task 1'
end
tasks << RTask.run do
sleep 1
'task 2'
end
task = RTask.wait_any(tasks) # wait for any task to finish; returns the first finished task.
task.result # 'taks 2'
RTask.wait_all(tasks) # wait for all tasks to finish.
```
Run each item in the array with a task
```ruby
# without index
tasks = RTask.run_each([1, 2, 3]) do |item|
# code to execute on item
end
# with index
tasks = RTask.run_each_with_index([1, 2, 3]) do |item, index|
# code to execute on item
end
RTask.wait_all(tasks)
```
Create a completed task
```ruby
task = RTask.from_result('completed')
task = RTask.from_exception(StandardError.new('error message'))
task = RTask.from_canceled
```
Get and set the parallel level
```ruby
pl = RTask.parallel_level # gets the number of tasks that can run at the same time.
                          # Defaults to the number of processors available.
RTask.parallel_level = 10 # sets the number of tasks that can run at the same time.
```
## Development
After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/canhspokeo/rtask.
## License
The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
| 23.949686 | 324 | 0.707983 | eng_Latn | 0.987067 |
e101d97500c919af214673e1095c6212959a7ab5 | 152 | md | Markdown | content/events/2017-riga/speakers.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 6 | 2016-11-14T14:08:29.000Z | 2018-05-09T18:57:06.000Z | content/events/2017-riga/speakers.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 461 | 2016-11-11T19:23:06.000Z | 2019-07-21T16:10:04.000Z | content/events/2017-riga/speakers.md | docent-net/devopsdays-web | 8056b7937e293bd63b43d98bd8dca1844eee8a88 | [
"Apache-2.0",
"MIT"
] | 15 | 2016-11-11T15:07:53.000Z | 2019-01-18T04:55:24.000Z | +++
date = "2017-05-23T00:00:00+02:00"
tags = ["riga","riga-2017"]
type = "speakers"
title = "Speakers"
heading = "DevOpsDays Riga 2017 - Speakers"
+++
| 19 | 43 | 0.638158 | eng_Latn | 0.276983 |
e102bcc09f038d1bef1da3633e06795912f7fda2 | 22 | md | Markdown | README.md | tutac/myplaybook | ab828f18233a32d8a3b558dbdff9af3d2b5fd9d3 | [
"Apache-2.0"
] | null | null | null | README.md | tutac/myplaybook | ab828f18233a32d8a3b558dbdff9af3d2b5fd9d3 | [
"Apache-2.0"
] | null | null | null | README.md | tutac/myplaybook | ab828f18233a32d8a3b558dbdff9af3d2b5fd9d3 | [
"Apache-2.0"
] | null | null | null | # myplaybook
playbook
| 7.333333 | 12 | 0.818182 | eng_Latn | 0.901663 |
e102bd28468501ee2f4cd1c6abbd43c80728a225 | 1,349 | md | Markdown | docs/generate-keys.md | joeabbey-anchor/flow-cli | ff9081f288eb544ff218b1f3ee77191c4e0a58fb | [
"Apache-2.0"
] | null | null | null | docs/generate-keys.md | joeabbey-anchor/flow-cli | ff9081f288eb544ff218b1f3ee77191c4e0a58fb | [
"Apache-2.0"
] | null | null | null | docs/generate-keys.md | joeabbey-anchor/flow-cli | ff9081f288eb544ff218b1f3ee77191c4e0a58fb | [
"Apache-2.0"
] | null | null | null | ---
title: Generate Keys with the Flow CLI
sidebar_title: Generate a Key
description: How to generate a Flow account key-pair from the command line
---
The Flow CLI provides a command to generate ECDSA key pairs
that can be [attached to new or existing Flow accounts](https://docs.onflow.org/concepts/accounts-and-keys).
`flow keys generate`
## Example Usage
```shell
> flow keys generate
Generating key pair with signature algorithm: ECDSA_P256
...
🔐 Private key (do not share with anyone): xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
🕊 Encoded public key (share freely): a69c6986e69fa1eadcd3bcb4aa51ee8aed74fc9430004af6b96f9e7d0e4891e84cfb99171846ba6d0354d195571397f5904cd319c3e01e96375d5777f1a47010
```
## Options
### Signature Algorithm
- Flag: `--algo,-a`
- Valid inputs: `"ECDSA_P256", "ECDSA_secp256k1"`
- Default: `"ECDSA_P256"`
Specify the ECDSA signature algorithm for the key pair.
Flow supports the secp256k1 and P-256 curves.
### Seed
- Flag: `--seed,-s`
- Valid inputs: any string with length >= 32
Specify a UTF-8 seed string that will be used to generate the key pair.
Key generation is deterministic, so the same seed will always
result in the same key.
If no seed is specified, the key pair will be generated using
a random 32 byte seed.
| 29.326087 | 189 | 0.744255 | eng_Latn | 0.951452 |
e10355bfd9a19e22804dabc2e26f4e6bdab5b73a | 256 | md | Markdown | examples/README.md | integer32llc/tower-http | 53a312746f1467483f46741921e150d9aa639605 | [
"MIT"
] | 221 | 2018-01-22T22:52:19.000Z | 2022-03-29T08:26:29.000Z | examples/README.md | integer32llc/tower-http | 53a312746f1467483f46741921e150d9aa639605 | [
"MIT"
] | 148 | 2019-03-04T20:39:12.000Z | 2022-03-30T17:37:53.000Z | examples/README.md | integer32llc/tower-http | 53a312746f1467483f46741921e150d9aa639605 | [
"MIT"
] | 76 | 2018-04-07T17:29:17.000Z | 2022-03-15T04:27:05.000Z | # tower-http examples
This folder contains various examples of how to use tower-http.
- `warp-key-value-store`: A key/value store with an HTTP API built with warp.
- `tonic-key-value-store`: A key/value store with a gRPC API and client built with tonic.
| 36.571429 | 89 | 0.753906 | eng_Latn | 0.99287 |
e10395969dfb0e5c61398471503d75f93e23a17c | 5,096 | md | Markdown | A08.Dubbo/B01.DubboCoreTechnology/doc/sourcecode/A10_Server.md | huaxueyihao/NoteOfStudy | 061e62c97f4fa04fa417fd08ecf1dab361c20b87 | [
"Apache-2.0"
] | null | null | null | A08.Dubbo/B01.DubboCoreTechnology/doc/sourcecode/A10_Server.md | huaxueyihao/NoteOfStudy | 061e62c97f4fa04fa417fd08ecf1dab361c20b87 | [
"Apache-2.0"
] | 2 | 2020-05-12T02:05:50.000Z | 2022-01-12T23:04:55.000Z | A08.Dubbo/B01.DubboCoreTechnology/doc/sourcecode/A10_Server.md | huaxueyihao/NoteOfStudy | 061e62c97f4fa04fa417fd08ecf1dab361c20b87 | [
"Apache-2.0"
] | null | null | null | ### Server
#### 1 Overview
> The server: the server that is started when Dubbo exposes a service for remote consumption.
```
public interface Server extends Endpoint, Resetable, IdleSensible {
/**
* is bound.
*
* @return bound
*/
boolean isBound();
/**
* get channels.
*
* @return channels
*/
Collection<Channel> getChannels();
/**
* get channel.
*
* @param remoteAddress
* @return channel
*/
    // Get the channel for the given remote address
Channel getChannel(InetSocketAddress remoteAddress);
@Deprecated
void reset(org.apache.dubbo.common.Parameters parameters);
}
```
#### 2 Class hierarchy
> By default, the NettyServer from the netty4 package is used
```
ExchangeServer (org.apache.dubbo.remoting.exchange)
ExchangePeer (org.apache.dubbo.remoting.p2p.exchange)
HeaderExchangeServer (org.apache.dubbo.remoting.exchange.support.header)
ExchangeServerDelegate (org.apache.dubbo.remoting.exchange.support)
Peer (org.apache.dubbo.remoting.p2p)
ExchangePeer (org.apache.dubbo.remoting.p2p.exchange)
ServerPeer (org.apache.dubbo.remoting.p2p.support)
ServerDelegate (org.apache.dubbo.remoting.transport)
AbstractServer (org.apache.dubbo.remoting.transport)
GrizzlyServer (org.apache.dubbo.remoting.transport.grizzly)
NettyServer (org.apache.dubbo.remoting.transport.netty)
        NettyServer (org.apache.dubbo.remoting.transport.netty4) // this is the default
MinaServer (org.apache.dubbo.remoting.transport.mina)
```
#### 3 NettyServer(netty4)
```
public NettyServer(URL url, ChannelHandler handler) throws RemotingException {
        // This goes through the parent class AbstractServer
super(url, ChannelHandlers.wrap(handler, ExecutorUtil.setThreadName(url, SERVER_THREAD_POOL_NAME)));
}
    // This is all Netty material, so we will not dig deeper here. The main job is to provide a server: start the boss thread group and the worker thread group. Behind Netty are NIO and multithreading
protected void doOpen() throws Throwable {
bootstrap = new ServerBootstrap();
bossGroup = new NioEventLoopGroup(1, new DefaultThreadFactory("NettyServerBoss", true));
workerGroup = new NioEventLoopGroup(getUrl().getPositiveParameter(Constants.IO_THREADS_KEY, Constants.DEFAULT_IO_THREADS),
new DefaultThreadFactory("NettyServerWorker", true));
final NettyServerHandler nettyServerHandler = new NettyServerHandler(getUrl(), this);
channels = nettyServerHandler.getChannels();
bootstrap.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
.childOption(ChannelOption.TCP_NODELAY, Boolean.TRUE)
.childOption(ChannelOption.SO_REUSEADDR, Boolean.TRUE)
.childOption(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
.childHandler(new ChannelInitializer<NioSocketChannel>() {
@Override
protected void initChannel(NioSocketChannel ch) throws Exception {
// FIXME: should we use getTimeout()?
int idleTimeout = UrlUtils.getIdleTimeout(getUrl());
NettyCodecAdapter adapter = new NettyCodecAdapter(getCodec(), getUrl(), NettyServer.this);
ch.pipeline()//.addLast("logging",new LoggingHandler(LogLevel.INFO))//for debug
.addLast("decoder", adapter.getDecoder())
.addLast("encoder", adapter.getEncoder())
.addLast("server-idle-handler", new IdleStateHandler(0, 0, idleTimeout, MILLISECONDS))
.addLast("handler", nettyServerHandler);
}
});
// bind
ChannelFuture channelFuture = bootstrap.bind(getBindAddress());
channelFuture.syncUninterruptibly();
channel = channelFuture.channel();
}
```
#### 4 AbstractServer
```
public AbstractServer(URL url, ChannelHandler handler) throws RemotingException {
super(url, handler);
localAddress = getUrl().toInetSocketAddress();
String bindIp = getUrl().getParameter(Constants.BIND_IP_KEY, getUrl().getHost());
int bindPort = getUrl().getParameter(Constants.BIND_PORT_KEY, getUrl().getPort());
if (url.getParameter(Constants.ANYHOST_KEY, false) || NetUtils.isInvalidLocalHost(bindIp)) {
bindIp = Constants.ANYHOST_VALUE;
}
bindAddress = new InetSocketAddress(bindIp, bindPort);
this.accepts = url.getParameter(Constants.ACCEPTS_KEY, Constants.DEFAULT_ACCEPTS);
this.idleTimeout = url.getParameter(Constants.IDLE_TIMEOUT_KEY, Constants.DEFAULT_IDLE_TIMEOUT);
try {
            // Still invokes the subclass's doOpen() method
doOpen();
if (logger.isInfoEnabled()) {
logger.info("Start " + getClass().getSimpleName() + " bind " + getBindAddress() + ", export " + getLocalAddress());
}
} catch (Throwable t) {
throw new RemotingException(url.toInetSocketAddress(), null, "Failed to bind " + getClass().getSimpleName()
+ " on " + getLocalAddress() + ", cause: " + t.getMessage(), t);
}
//fixme replace this with better method
DataStore dataStore = ExtensionLoader.getExtensionLoader(DataStore.class).getDefaultExtension();
executor = (ExecutorService) dataStore.get(Constants.EXECUTOR_SERVICE_COMPONENT_KEY, Integer.toString(url.getPort()));
}
```
| 30.698795 | 127 | 0.680141 | yue_Hant | 0.387646 |
e1042e9ffd41e4ec23c52581f58f5687274e8373 | 2,980 | md | Markdown | src/docs/external-data/actors/Frankenstein.md | swimlane/attck | dfff8c484c601633a47ad06a9da686571c21776f | [
"MIT"
] | 3 | 2020-05-16T05:22:34.000Z | 2020-06-30T16:59:10.000Z | src/docs/external-data/actors/Frankenstein.md | swimlane/attck | dfff8c484c601633a47ad06a9da686571c21776f | [
"MIT"
] | null | null | null | src/docs/external-data/actors/Frankenstein.md | swimlane/attck | dfff8c484c601633a47ad06a9da686571c21776f | [
"MIT"
] | 4 | 2020-05-06T02:08:05.000Z | 2021-06-21T18:10:05.000Z |
# Frankenstein
```
_____ __ __ .__
_/ ____\___________ ____ | | __ ____ ____ _______/ |_ ____ |__|
\ __\\_ __ \__ \ / \| |/ // __ \ / \ / ___/\ __\/ __ \| |
| | | | \// __ \| | \ <\ ___/| | \\___ \ | | \ ___/| |
|__| |__| (____ /___| /__|_ \\___ >___| /____ > |__| \___ >__|
\/ \/ \/ \/ \/ \/ \/
____
/ \
| | \
|___| /
\/
```
## Description
### MITRE Description
> [Frankenstein](https://attack.mitre.org/groups/G0101) is a campaign carried out between January and April 2019 by unknown threat actors. The campaign name comes from the actors' ability to piece together several unrelated components.(Citation: Talos Frankenstein June 2019)
### External Description
>
## Aliases
```
Frankenstein
```
## Known Tools
```
```
## Operations
```
```
## Targets
```
```
## Attribution Links
```
```
## Country
```
```
## Comments
```
```
# Techniques
* [Spearphishing Attachment](../techniques/Spearphishing-Attachment.md)
* [PowerShell](../techniques/PowerShell.md)
* [Windows Command Shell](../techniques/Windows-Command-Shell.md)
* [Malicious File](../techniques/Malicious-File.md)
* [Exploitation for Client Execution](../techniques/Exploitation-for-Client-Execution.md)
* [Scheduled Task](../techniques/Scheduled-Task.md)
* [Template Injection](../techniques/Template-Injection.md)
* [Visual Basic](../techniques/Visual-Basic.md)
* [Obfuscated Files or Information](../techniques/Obfuscated-Files-or-Information.md)
* [MSBuild](../techniques/MSBuild.md)
* [Windows Management Instrumentation](../techniques/Windows-Management-Instrumentation.md)
* [System Checks](../techniques/System-Checks.md)
* [Security Software Discovery](../techniques/Security-Software-Discovery.md)
* [OS Credential Dumping](../techniques/OS-Credential-Dumping.md)
* [System Network Configuration Discovery](../techniques/System-Network-Configuration-Discovery.md)
* [Ingress Tool Transfer](../techniques/Ingress-Tool-Transfer.md)
* [Exfiltration Over C2 Channel](../techniques/Exfiltration-Over-C2-Channel.md)
* [Automated Exfiltration](../techniques/Automated-Exfiltration.md)
* [Process Discovery](../techniques/Process-Discovery.md)
* [System Information Discovery](../techniques/System-Information-Discovery.md)
* [Symmetric Cryptography](../techniques/Symmetric-Cryptography.md)
* [System Owner/User Discovery](../techniques/System-Owner-User-Discovery.md)
* [Data from Local System](../techniques/Data-from-Local-System.md)
* [Automated Collection](../techniques/Automated-Collection.md)
* [Deobfuscate/Decode Files or Information](../techniques/Deobfuscate-Decode-Files-or-Information.md)
# Malwares
None
# Tools
* [Empire](../tools/Empire.md)
| 22.074074 | 277 | 0.625168 | yue_Hant | 0.537745 |
e10452bc93fa446d1558fbc258ecc282568a6107 | 1,537 | md | Markdown | _posts/2016-08-23-first.md | subdiff/subdiff.github.io | e5b683e434c2e260c52fe0c5ba83f08ce8f87eb7 | [
"MIT"
] | null | null | null | _posts/2016-08-23-first.md | subdiff/subdiff.github.io | e5b683e434c2e260c52fe0c5ba83f08ce8f87eb7 | [
"MIT"
] | null | null | null | _posts/2016-08-23-first.md | subdiff/subdiff.github.io | e5b683e434c2e260c52fe0c5ba83f08ce8f87eb7 | [
"MIT"
] | null | null | null | ---
layout: post
title: "First!"
date: 2016-08-23 19:43:00
tags:
- internal affairs
banner_image: 2016-08-23-first.jpg
comments: true
---
So, here we have one more programmer's blog. It's mainly for documenting my successes and defeats in the quest to become better at [Linux][flip], [KDE][kde-dies], [Qt][cute] development.
But sometimes I'll probably talk also about other stuff: Maybe [politics][trumpkiss], maybe [religion][popekiss], maybe [your mum][yourmumkiss]. But most certainly [mathematics][math]. Enjoy!
By the way: The blog is made for now with [Jekyll][jekyll] (like it seems to me most programmer's blogs are nowadays) and I used the [Gaya theme][gaya]. Thank you [Gayan][gayan]!
[flip]: http://cdn.arstechnica.net/wp-content/uploads/2012/06/torvaldsnvidia-640x424.jpg
[kde-dies]: https://ask.slashdot.org/story/16/08/21/0327239/ask-slashdot-is-kde-dying
[cute]: https://i.ytimg.com/vi/g4xLVP_eFec/maxresdefault.jpg
[trumpkiss]: http://g1.dcdn.lt/images/pix/donald-trump-vladimir-putin-kissing-mindaugas-bonanu-keule-ruke-3-71265714.jpg
[popekiss]: http://i.cbc.ca/1.2040100.1381646201!/httpImage/image.jpg_gen/derivatives/16x9_1180/hi-pope-kiss.jpg
[yourmumkiss]: https://upload.wikimedia.org/wikipedia/commons/thumb/9/97/CENSORED.svg/2000px-CENSORED.svg.png
[math]: http://rlv.zcache.com/grumpy_cat_poster_i_love_math_it_makes_people_cry_poster-r91cc8357f25340ff8fef39c8418b3bf8_wvt_8byvr_512.jpg
[jekyll]: http://jekyllrb.com
[gaya]: https://github.com/gayanvirajith/gaya
[gayan]: http://gayan.me/
| 59.115385 | 191 | 0.766428 | eng_Latn | 0.272457 |
e104ac3af515fe6d5d70ae63c732a907a2c39524 | 1,879 | md | Markdown | _posts/2018-05-01-Edited_Media--2017-18.md | TheGuppyDude/TheGuppyDude.github.io | 396d37563e14ca4985a06b436927944766e9cd4f | [
"MIT"
] | null | null | null | _posts/2018-05-01-Edited_Media--2017-18.md | TheGuppyDude/TheGuppyDude.github.io | 396d37563e14ca4985a06b436927944766e9cd4f | [
"MIT"
] | null | null | null | _posts/2018-05-01-Edited_Media--2017-18.md | TheGuppyDude/TheGuppyDude.github.io | 396d37563e14ca4985a06b436927944766e9cd4f | [
"MIT"
] | null | null | null | ---
title: Edited_Media
layout: post
author: noah.mccall
permalink: /Edited_Media--2017-18/
source-id: 1-00iu-8QXgAphLsw_1hXoqPLNL3dn7wxqJiId4xD8c0
published: true
---
<table>
<tr>
<td>Title</td>
<td>Looking at things that are often edited in media</td>
<td>Date</td>
<td>01/05/18</td>
</tr>
</table>
<table>
<tr>
<td>Starting point:</td>
<td>To look at examples of things that are photoshopped for adverts.</td>
</tr>
<tr>
<td>Target for this lesson?</td>
<td>To point out features that show that the object is photoshopped.</td>
</tr>
<tr>
<td>Did I reach my target? </td>
<td>Yes, I noticed a few features that were clearly photoshopped.</td>
</tr>
</table>
<table>
<tr>
<td>How did you use your learning habits this week?</td>
<td></td>
</tr>
<tr>
<td>Persevering</td>
<td></td>
</tr>
<tr>
<td>Questioning?</td>
<td>When trying to spot problems with the fake version that had been edited, I would question if it actually had been edited, or if it just looked that way.</td>
</tr>
<tr>
<td>Independence</td>
<td></td>
</tr>
<tr>
<td>Reflecting</td>
<td></td>
</tr>
<tr>
<td>Engagement</td>
<td>I engaged well with the lesson, and managed to spot a couple of things that were clearly photoshopped.</td>
</tr>
<tr>
<td>What could have gone better in your learning?</td>
<td></td>
</tr>
<tr>
<td>I could perhaps have tried to find harder examples of photoshopping in the images, instead of just pointing out the obvious ones.</td>
<td></td>
</tr>
<tr>
<td>What changes do you need to make to improve your learning next time?</td>
<td></td>
</tr>
<tr>
<td>I maybe need to push myself when trying to find examples of things - instead of just looking for the easy ones.</td>
<td></td>
</tr>
</table>
| 23.78481 | 165 | 0.628526 | eng_Latn | 0.982001 |
# common role
Performs basic server configuration.
* Update packages
* Install utility packages
  * wget
* Adjust user directories
* Configure ssh
---
uid: visual-studio/overview/2017/index
title: ASP.NET and Visual Studio 2017
author: rick-anderson
description: Visual Studio 2017
ms.author: riande
ms.date: 08/25/2018
msc.type: chapter
ms.openlocfilehash: 46871b95709ae56c418dc8dd1a4466442da3bf3a
ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/06/2020
ms.locfileid: "78557737"
---
# <a name="aspnet-and-visual-studio-2017"></a>ASP.NET and Visual Studio 2017
[Optimize build performance for the solution](xref:visual-studio/overview/2017/optimize-build-perf)
e1054a0f8e66d582efe1de95a1a8388c06c017b6 | 935 | md | Markdown | content/post/isca2020-dsagen/index.md | MaxwellWjj/homepage-academic | 27db23286fe9db6e0740f0ff039f33829f422940 | [
"MIT"
] | null | null | null | content/post/isca2020-dsagen/index.md | MaxwellWjj/homepage-academic | 27db23286fe9db6e0740f0ff039f33829f422940 | [
"MIT"
] | null | null | null | content/post/isca2020-dsagen/index.md | MaxwellWjj/homepage-academic | 27db23286fe9db6e0740f0ff039f33829f422940 | [
"MIT"
] | null | null | null | ---
title: ISCA 2020, DSAGEN - Synthesizing Programmable Spatial Accelerators
subtitle: To broaden the potential of acceleration, this work develops an approach and framework, DSAGEN, for programmable accelerator synthesis.
# Summary for listings and search engines
summary: To broaden the potential of acceleration, this work develops an approach and framework, DSAGEN, for programmable accelerator synthesis.
# Link this post with a project
projects: []
# Date published
date: "2021-03-03T00:00:00Z"
# Date updated
lastmod: "2021-03-04T00:00:00Z"
# Is this an unpublished draft?
draft: false
# Show this page in the Featured widget?
featured: false
# Featured image
# Place an image named `featured.jpg/png` in this page's folder and customize its options here.
image:
focal_point: ""
placement: 2
preview_only: false
authors:
- admin
tags:
categories:
- Top-tier conference
- ISCA
---
## This page is to be updated
---
title: UKCloud for OpenShift FAQs
description: Frequently asked questions for UKCloud for OpenShift
services: openshift
author: Matt Warner
reviewer:
lastreviewed: 08/07/2019
toc_rootlink: FAQs
toc_sub1:
toc_sub2:
toc_sub3:
toc_sub4:
toc_title: UKCloud for OpenShift FAQs
toc_fullpath: FAQs/oshift-faq.md
toc_mdlink: oshift-faq.md
---
# UKCloud for OpenShift FAQs
### What is UKCloud for OpenShift?
UKCloud for OpenShift is a Kubernetes-based Platform as a Service (PaaS) solution providing container management and orchestration using Red Hat OpenShift to deliver a flexible, scalable cloud application platform. Unlike traditional managed PaaS offerings, it provides a modern application platform that accelerates end-to-end development, deployment and operation of digital applications, while raising overall application reliability and availability.
### Which version of OpenShift is the service based upon?
UKCloud for OpenShift is built using Red Hat's OpenShift Container Platform v3.11, which is the Enterprise-hardened version of OKD (previously OpenShift Origin) v3.11, which is based upon Kubernetes v1.11.
### Why deliver UKCloud for OpenShift as a cloud service?
Although UKCloud for OpenShift is a simple, benefits-rich service to consume, it is a complex platform of inter-dependent servers and services, whose deployment, configuration and maintenance requires time and expertise.
By offering this service, we take on all that complexity so that customers can immediately realise the value of UKCloud for OpenShift by simply consuming it.
### Is UKCloud for OpenShift a single-tenant or multi-tenant solution?
UKCloud for OpenShift is built as an isolated single-tenant environment on top of UKCloud's secure, assured multi-tenant UKCloud for OpenStack IaaS platform, helping to deliver the benefits of single-tenant isolation with the economics and flexibility of multi-tenant infrastructure.
### How is UKCloud for OpenShift billed?
This service comprises two main chargeable elements:
- **Foundation Pack -** providing an initial footprint of 32GiB of RAM, billed by the month with a one-month minimum commitment
- **Runtime Pack -** billed by the month with a one-month minimum commitment based upon the amount of additional RAM allocated
### Does UKCloud offer a free trial?
Free trials are currently available for UKCloud for OpenShift. Please get in touch with your Account Manager to raise this request.
### Where is the service hosted?
The service is delivered by UKCloud, a UK company, from two UK data centres separated by more than 100km, which are securely connected by high-bandwidth, low-latency dedicated connectivity.
### Does my data leave the UK?
As the service is delivered from UK data centres by a UK company, your data doesn't leave the UK when at rest.
### How is UKCloud for OpenShift supported?
UKCloud manages and supports UKCloud for OpenShift using our dedicated support team based in the UK. Support is available via helpdesk ticket or phone.
### What constitutes UKCloud for OpenShift?
We monitor, maintain and support our controlled UKCloud for OpenShift infrastructure and services, including:
- UKCloud-controlled components, such as the virtual infrastructure, storage, power and physical firewalls and routers
- UKCloud-maintained OpenShift services (for example, router service, DEAs, health manager, cloud controller, Master Nodes, Worker Nodes, Routing Layer).
### Can I use UKCloud for OpenShift in the UKCloud Elevated domain?
UKCloud for OpenShift services are available in our Assured security domain and, upon request, in our Elevated security domain.
### Is the service Pan Government Accredited?
UKCloud's existing PGA continues to apply to the infrastructure underpinning our services. But since the move to the Government Security Classification Policy (GSCP), we can no longer seek PGA for newer services, such as UKCloud for OpenShift.
We are now required to self-assert our services, with customers taking responsibility for assessing and selecting the most appropriate cloud services to meet their individual security requirements.
We provide confidence that our OpenShift service still meets the highest level of information assurance, which is why we continue to have our platform independently tested and validated, and have the findings made available to customers and partners. This enables SIROs to make an informed decision about any service they choose to consume.
### Can I use UKCloud for OpenShift over closed networks such as PSN and HSCN?
UKCloud for OpenShift has been certified for use over the PSN network.
Connectivity to the HSCN network is available for customers and partners serving the healthcare community.
### Does UKCloud offer any scheduled automated backups for UKCloud for OpenShift?
As standard, localised component failures are tolerated within the infrastructure through elimination of single points of failure (including physical server failure or disk failure).
Although UKCloud for OpenShift is designed to deploy and manage stateless apps (applications that can be killed and re-instantiated without risk of data loss), customers should ensure they maintain a master copy or backup copy of any persistent or dynamic data hosted on this service (such as MySQL DB) by using, for example, our Cloud Storage.
### What languages and frameworks are compatible with UKCloud for OpenShift?
The service supports many popular development frameworks and languages such as:
- Java
- Spring
- Ruby
- Sinatra
- Node.js
- Python
- PHP
- GoLang
For the full list please visit https://access.redhat.com/articles/2176281?hsLang=en-us
### Does UKCloud for OpenShift support any data services?
Our OpenShift service provides popular open source data service packages deployable within the platform, all supported by the global open source community, including:
- MySQL, an open source relational database
- Postgres, a relational database based on PostgreSQL
- MongoDB, a scalable, open, document-based database
- RabbitMQ, for reliable, scalable and portable messaging for applications
Note that these services are offered 'as is' with no management, support or availability commitment from UKCloud. We strongly suggest customers ensure they maintain a master copy or backup copy of any persistent or dynamic data hosted on this service (such as MySQL DB) by using, for example, a data service provided by a managed service provider on our UKCloud for VMware platform.
### How scalable is UKCloud for OpenShift?
As a true cloud platform, UKCloud for OpenShift provides full elasticity and scalability. However, in order to protect the integrity of the platform and manage customer spend, soft limits on the number and size of application instances will be in place. These limits may be extended upon request.
### Which ports are open to the platform from the internet by default?
By default, ports 80 and 443 are open for customer application traffic. Further ports can be opened on request either at time of deployment or post-deployment by raising a Service Request via the [My Calls](https://portal.skyscapecloud.com/support/ivanti) section of the UKCloud Portal.
### How do I add users?
In order to add new users, you will need to raise a Service Request via the [My Calls](https://portal.skyscapecloud.com/support/ivanti) section of the UKCloud Portal.
### What monitoring of the services is provided by default in a trial?
By default, no specific monitoring service is integrated. However, we recommend external monitoring services such as Datadog or Coscale for production-grade OpenShift hosted applications. Alternatively, you can implement your own simple monitoring solution as described in [How to monitor your OpenShift cluster](oshift-how-monitor-cluster.md).
### What monitoring of the services is provided on a billable service?
By default, no specific monitoring service is integrated. It is our expectation that customers may want to use a third party or their own monitoring service to ensure cluster availability. There is a metrics service deployed into the cluster that provides utilisation stats of the cluster, and a wide range of metrics about both the cluster itself and the containers running within it can be extracted via the API to this service.
### Can I integrate external monitoring SaaS providers or my own monitoring agents to the service?
We will happily work with customers during a trial period to integrate an external monitoring service that enables customer monitoring of the cluster and applications where infrastructure level configuration is required.
### How do I add extra capacity to my cluster?
To add extra capacity to your cluster, you will need to raise a Service Request via the [My Calls](https://portal.skyscapecloud.com/support/ivanti) section of the UKCloud Portal. We hope to provide portal integration to enable customers to be in control of this in the future.
### How many persistent volumes can I claim/attach to each worker node?
In line with current restrictions on the OpenStack service underpinning OpenShift, you can claim/attach up to 25 additional persistent volume claims (PVCs) to each worker node.
### Can I have integrated container logging deployed with the platform?
Yes, this can be requested at time of deployment, or added post-deployment. The services can be run on either the master nodes or the worker nodes in the cluster. By default, we would place them on the master nodes, but you may wish to change this placement to be more suitable for the specific cluster performance you desire.
### How much control do I have over the policies and configuration of the platform once it has been deployed?
Customers have full administrative rights over the cluster configuration via the UI and API. Due to the varied nature of configurations that customers may want, such as the ability to merge projects and add service accounts for applications, we provide customers with the ability to self-serve on cluster administration from the beginning. Only infrastructure level tasks are controlled by UKCloud, such as adding users, scaling environments and patching.
### Can I run a privileged container?
Yes, it is possible to run a privileged container. However, this is not recommended as it goes against security best practices.
### What is the underlying architecture of the starter deployment?

## Feedback
If you find an issue with this article, click **Improve this Doc** to suggest a change. If you have an idea for how we could improve any of our services, visit the [Ideas](https://community.ukcloud.com/ideas) section of the [UKCloud Community](https://community.ukcloud.com).
# TEAM MEMBERS
Steve Frazee - stevefrazee123@gmail.com
## Waypoint Updater (partial)
The goal of the first part of the waypoint updater is to publish a list of waypoints directly in front of the car for the car to follow. We are first given a list of waypoints that wraps around the entire track. From here we need to find the waypoint that is closest to the car and in front of the car. Once we have that, we can publish a list starting from that waypoint and include N waypoints after that. This task was broken down into 2 steps: first find the closest waypoint and second determine whether or not that waypoint is in front of or behind the car.
### Find the closest waypoint
Given the [x,y] coordinates of our car's location, we need to find the closest [x,y] waypoint. One way to do this would be a brute-force search of the list of waypoints we are given. The advantage of this is that the list of waypoints is already given to us, so we won't need to construct a new data structure. The disadvantage is that the list contains 10,902 waypoints; since a brute-force search scales in O(N) time, we would have to perform 10,000+ distance comparisons on every lookup. Another option would be to create a KDTree from the list of waypoints. The advantage of this is that a nearest-neighbor lookup scales in O(log N) time. The disadvantage is that we have to construct the KDTree, which takes O(N log N) time. Since we only have to construct the KDTree once and we have to perform lookups many times per second, it makes much more sense to use a KDTree and take the penalty of construction while reaping the benefits of a much faster nearest-neighbor search time.
Using the scipy.spatial library the KDTree is easily constructed once in the waypoints_cb function:
```
def waypoints_cb(self, lane):
    ## called once on startup
    # save base waypoints as lane object
    self.base_waypoints = lane
    # create list of [x,y] waypoint positions to initialize kd_tree
    if not self.kd_tree:
        kd_tree_pts = [[waypoint.pose.pose.position.x, waypoint.pose.pose.position.y] for waypoint in lane.waypoints]
        self.kd_tree = KDTree(kd_tree_pts)
```
Now that we have a KDTree constructed we can find the closest waypoint to the car's current location by querying the tree:
```
closest_waypoint_idx = self.kd_tree.query(self.current_pose)[1]
```
### Checking if the closest waypoint is in front or behind the car
These waypoints are eventually going to be sent to the drive by wire node to create control commands so the car can follow the waypoints. We don't want the car to try and follow a waypoint that is behind the car, so we now need to make sure that the closest waypoint we found is actually in front of the car. The method I chose to accomplish this is to compare the distance between the nearest neighbor and the previous waypoint and the distance between the car and the previous waypoint. If the car is closer to the previous waypoint than the nearest neighbor, then the nearest neighbor must be in front of the car. If the nearest neighbor is closer, then it is behind the car. In this case we can just take the next waypoint in the list to get the waypoint that is ahead of the car.
Using the euclidean function from the scipy.spatial.distance library, the distances between the car and previous waypoint and the nearest neighbor and previous waypoint are compared:
```
def get_closest_waypoint_idx(self):
    # get index of waypoint closest to the car
    closest_waypoint_idx = self.kd_tree.query(self.current_pose)[1]
    # check if point is behind or ahead of vehicle by comparing distance to previous point
    previous_waypoint = [self.base_waypoints.waypoints[closest_waypoint_idx - 1].pose.pose.position.x, self.base_waypoints.waypoints[closest_waypoint_idx - 1].pose.pose.position.y]
    current_waypoint = [self.base_waypoints.waypoints[closest_waypoint_idx].pose.pose.position.x, self.base_waypoints.waypoints[closest_waypoint_idx].pose.pose.position.y]
    car_dist = euclidean(self.current_pose, previous_waypoint)
    waypoint_dist = euclidean(current_waypoint, previous_waypoint)
    # if the car is further away from the previous waypoint than the closest waypoint, then the closest waypoint is behind the car and we should take the next waypoint in the list
    if car_dist > waypoint_dist:
        closest_waypoint_idx = (closest_waypoint_idx + 1) % len(self.base_waypoints.waypoints)
    return closest_waypoint_idx
```
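The two snippets above depend on the surrounding ROS node. The same logic can be exercised on its own as a small self-contained sketch; the circular track and the car poses below are synthetic test data, not part of the project.

```python
import numpy as np
from scipy.spatial import KDTree
from scipy.spatial.distance import euclidean

# Synthetic track: 100 waypoints evenly spaced on a 50 m radius circle.
waypoints = [[50 * np.cos(t), 50 * np.sin(t)]
             for t in np.linspace(0, 2 * np.pi, 100, endpoint=False)]
tree = KDTree(waypoints)  # built once, O(N log N)

def closest_waypoint_ahead(car_xy):
    """Index of the nearest waypoint that lies ahead of the car."""
    idx = tree.query(car_xy)[1]   # O(log N) nearest-neighbour lookup
    prev_wp = waypoints[idx - 1]  # waypoint just before the nearest one
    # If the car is farther from prev_wp than the nearest waypoint is,
    # the nearest waypoint sits behind the car, so advance to the next index.
    if euclidean(car_xy, prev_wp) > euclidean(waypoints[idx], prev_wp):
        idx = (idx + 1) % len(waypoints)
    return idx

# Car just past waypoint 10 (angle 0.65 rad): the nearest waypoint is 10,
# but it is behind the car, so the function advances to 11.
print(closest_waypoint_ahead([50 * np.cos(0.65), 50 * np.sin(0.65)]))  # 11
```

Running it shows both branches: a car just past a waypoint skips ahead to the next index, while a car approaching a waypoint keeps the nearest index unchanged.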
---
title: include file
description: include file
services: vpn-gateway
author: cherylmc
ms.service: vpn-gateway
ms.topic: include
ms.date: 10/22/2020
ms.author: cherylmc
ms.custom: include file
ms.openlocfilehash: 5358bbbca716f5152a943c90cb7a5f735ae12047
ms.sourcegitcommit: 910a1a38711966cb171050db245fc3b22abc8c5f
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/19/2021
ms.locfileid: "92479607"
---
1. In the [Azure portal](https://portal.azure.com), type **local network gateway** in **Search resources, services, and docs (G+/)**. Locate **Local network gateway** under **Marketplace** in the search results and select it. This opens the **Create local network gateway** page.
1. On the **Create local network gateway** page, specify the values for your local network gateway.

    :::image type="content" source="./media/vpn-gateway-add-local-network-gateway-portal-include/create-ip.png" alt-text="Create a local network gateway with an IP address":::

    * **Name:** Specify a name for your local network gateway object.
    * **Endpoint:** Select the endpoint type for the on-premises VPN device: **IP address** or **FQDN** (fully qualified domain name).
       * **IP address**: If you have a static public IP address allocated from your Internet service provider for your VPN device, select the IP address option and fill in the IP address as shown in the example. This is the public IP address of the VPN device that you want your Azure VPN gateway to connect to. If you don't have the IP address right now, you can use the values shown in the example, but you'll need to go back and replace your placeholder IP address with the public IP address of your VPN device. Otherwise, Azure can't connect.
       * **FQDN**: If you have a dynamic public IP address that could change after a certain period of time, often determined by your Internet service provider, you can use a constant DNS name with a dynamic DNS service to point to your current public IP address of your VPN device. Your Azure VPN gateway will resolve the FQDN to determine the public IP address to connect to.
    * **Address Space** refers to the address ranges for the network that this local network represents. You can add multiple address space ranges. Make sure that the ranges you specify here don't overlap with ranges of other networks that you want to connect to. Azure will route the address range that you specify to the on-premises VPN device IP address. *Use your own values here if you want to connect to your on-premises site, not the values shown in the example*.
    * **Configure BGP settings:** Use only when configuring BGP. Otherwise, don't select this option.
    * **Subscription:** Verify that the correct subscription is showing.
    * **Resource Group:** Select the resource group that you want to use. You can either create a new resource group, or select one that you have already created.
    * **Location:** The location is the same as the **Region** in other settings. Select the location in which this object will be created. You may want to select the same location that your VNet resides in, but you are not required to do so.

   > [!NOTE]
   >
   > * Azure VPN supports only one IPv4 address for each FQDN. If the domain name resolves to multiple IP addresses, Azure VPN Gateway uses the first IP address returned by the DNS servers. To eliminate the uncertainty, we recommend that your FQDN always resolve to a single IPv4 address. IPv6 is not supported.
   > * Azure VPN Gateway maintains a DNS cache that is refreshed every 5 minutes. The gateway tries to resolve the FQDNs for disconnected tunnels only. Resetting the gateway also triggers FQDN resolution.
   >
1. When you have finished specifying the values, select the **Create** button at the bottom of the page to create the local network gateway.
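If you prefer scripting over the portal, the same object can also be created with Azure PowerShell. This is a sketch under assumptions: the resource group, location, gateway IP address, and address prefixes below are placeholder values to replace with your own.

```powershell
# Assumes the Az.Network module is installed and you are signed in (Connect-AzAccount).
New-AzLocalNetworkGateway -Name "Site1" `
    -ResourceGroupName "TestRG1" `
    -Location "UK South" `
    -GatewayIpAddress "203.0.113.99" `
    -AddressPrefix @("10.1.0.0/24", "10.2.0.0/24")
```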
# xWindowsUpdate Module
master: [](https://ci.appveyor.com/project/PowerShell/xwindowsupdate/branch/master)
[](https://codecov.io/gh/PowerShell/xWindowsUpdate)
dev: [](https://ci.appveyor.com/project/PowerShell/xwindowsupdate/branch/dev)
[](https://codecov.io/gh/PowerShell/xWindowsUpdate)
The **xWindowsUpdate** module contains the **xHotfix** and
**xWindowsUpdateAgent** DSC resources. **xHotfix** installs a
Windows Update (or hotfix) from a given path.
**xWindowsUpdateAgent** will configure the source download settings for the machine,
update notifications on the system, and can automatically initiate installation of the updates.
For more information on Windows Update and Hotfix, please refer to
[this TechNet article](http://technet.microsoft.com/en-us/library/cc750077.aspx).
**xMicrosoftUpdate** enables or disables Microsoft Update.
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any
additional questions or comments.
## Contributing
Please check out common DSC Resources
[contributing guidelines](https://github.com/PowerShell/DscResource.Kit/blob/master/CONTRIBUTING.md).
## Resources
### xHotfix
* **Path**: The path from where the hotfix should be installed
* **Log**: The name of the log where installation/uninstallation details
are stored.
If no log is used, a temporary log name is created by the resource.
* **Id**: The hotfix ID of the Windows update that uniquely identifies
the hotfix.
* **Ensure**: Ensures that the hotfix is **Present** or **Absent**.
### xWindowsUpdateAgent
* **UpdateNow**: Indicates if the resource should trigger an update during
consistency (including initial.)
* **Category**: Indicates the categories (one or more) of updates the resource
should update for. 'Security', 'Important', 'Optional'.
Default: 'Security' (please note that security is not mutually
exclusive with Important and Optional, so selecting Important may
install some security updates, etc.)
* **Notifications**: Sets the windows update agent notification setting.
Supported options are 'disabled' and 'ScheduledInstallation'.
[Documentation from Windows Update](https://msdn.microsoft.com/en-us/library/windows/desktop/aa385806%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396)
* **Source**: Sets the service windows update agent will use for searching
for updates. Supported options are 'MicrosoftUpdate' and 'WindowsUpdate'.
Note that 'WSUS' is currently reserved for future use.
* **IsSingleInstance**: Should always be yes. Ensures you can only have
one instance of this resource in a configuration.
### xMicrosoftUpdate
**Note:** `xMicrosoftUpdate` is deprecated. Please use `xWindowsUpdateAgent`.
* **Ensure**: Determines whether the Microsoft Update service should be
enabled (ensure) or disabled (absent) in Windows Update.
## Versions
### Unreleased
### 2.8.0.0
* xWindowsUpdateAgent: Fixed verbose statement returning incorrect variable
* Tests no longer fail on `Assert-VerifiableMocks`, these are now renamed
to `Assert-VerifiableMock` (breaking change in Pester v4).
* README.md has been updated with correct description of the resources
([issue #58](https://github.com/PowerShell/xWindowsUpdate/issues/58)).
* Updated appveyor.yml to use the correct parameters to call the test framework.
* Update appveyor.yml to use the default template.
* Added default template files .gitattributes, and .gitignore, and
.vscode folder.
### 2.7.0.0
* xWindowsUpdateAgent: Fix Get-TargetResource returning incorrect key
### 2.6.0.0
* Converted appveyor.yml to install Pester from PSGallery instead of from
Chocolatey.
* Fixed PSScriptAnalyzer issues.
* Fixed common test breaks (markdown style, and example style).
* Added CodeCov.io reporting
* Deprecated xMicrosoftUpdate as its functionality is replaced by xWindowsUpdateAgent
### 2.5.0.0
* Added xWindowsUpdateAgent
### 2.4.0.0
* Fixed PSScriptAnalyzer error in examples
### 2.3.0.0
* MSFT_xWindowsUpdate: Fixed an issue in the Get-TargetResource function,
resulting in the Get-DscConfiguration cmdlet now working appropriately
when the resource is applied.
* MSFT_xWindowsUpdate: Fixed an issue in the Set-TargetResource function
that was causing the function to fail when the installation of a hotfix
did not provide an exit code.
### 2.2.0.0
* Minor fixes
### 2.1.0.0
* Added xMicrosoftUpdate DSC resource which can be used to enable/disable
Microsoft Update in the Windows Update Settings.
### 1.0.0.0
* Initial release with the xHotfix resource
## Examples
### xHotfix Example 1
This configuration will install the hotfix from the given *.msu* file.
If a hotfix with the required hotfix ID is already present on the system,
the installation is skipped.
```powershell
Configuration UpdateWindowsWithPath
{
Node 'NodeName'
{
xHotfix HotfixInstall
{
Ensure = "Present"
Path = "c:/temp/Windows8.1-KB2908279-v2-x86.msu"
Id = "KB2908279"
}
}
}
```
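If you want to confirm the result outside of DSC, the built-in `Get-HotFix` cmdlet reports whether a given KB is installed. This is a quick manual check, not part of the module itself:

```powershell
# Check whether KB2908279 is already installed.
# Returns the hotfix record if present, or nothing if it is absent.
Get-HotFix -Id 'KB2908279' -ErrorAction SilentlyContinue
```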
### Install a hotfix from a given URI
This configuration will install the hotfix from a URI, identifying it by the
given hotfix ID.
```powershell
Configuration UpdateWindowsWithURI
{
Node 'NodeName'
{
xHotfix HotfixInstall
{
Ensure = "Present"
Path = "http://hotfixv4.microsoft.com/Microsoft%20Office%20SharePoint%20Server%202007/sp2/officekb956056fullfilex64glb/12.0000.6327.5000/free/358323_intl_x64_zip.exe"
Id = "KB2937982"
}
}
}
```
### Enable Microsoft Update
This configuration will enable the Microsoft Update setting (checkbox) in
the Windows Update settings.
```powershell
Configuration MSUpdate
{
Import-DscResource -Module xWindowsUpdate
xMicrosoftUpdate "EnableMSUpdate"
{
Ensure = "Present"
}
}
```
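To apply any of these configurations, compile and push them with the standard DSC cmdlets. The sketch below uses the `MSUpdate` configuration from the example above; the output path is an assumption, not something the module requires:

```powershell
# Compile the configuration to a MOF file (the output path is hypothetical)
MSUpdate -OutputPath 'C:\DSC\MSUpdate'

# Apply the compiled configuration and wait for it to finish
Start-DscConfiguration -Path 'C:\DSC\MSUpdate' -Wait -Verbose
```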
### xWindowsUpdateAgent Sample 1
Sets the Windows Update Agent to use Microsoft Update, disables notifications
of future updates, and installs all Security and Important updates from
Microsoft Update during the configuration (`UpdateNow = $true`).
```PowerShell
Configuration MuSecurityImportant
{
Import-DscResource -ModuleName xWindowsUpdate
xWindowsUpdateAgent MuSecurityImportant
{
IsSingleInstance = 'Yes'
UpdateNow = $true
Category = @('Security','Important')
Source = 'MicrosoftUpdate'
Notifications = 'Disabled'
}
}
```
### xWindowsUpdateAgent Sample 2
Sets the Windows Update Agent to use the Windows Update service
(vs Microsoft Update or WSUS) and sets the notifications to scheduled
installation (no notifications; updates are installed automatically). Does not
install updates during the configuration (`UpdateNow = $false`).
```PowerShell
Configuration WuScheduleInstall
{
Import-DscResource -ModuleName xWindowsUpdate
    xWindowsUpdateAgent WuScheduleInstall
{
IsSingleInstance = 'Yes'
UpdateNow = $false
Source = 'WindowsUpdate'
Notifications = 'ScheduledInstallation'
}
}
```
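After a configuration has been applied, the built-in DSC cmdlets can confirm the resulting state. These are generic DSC commands, not specific to this module:

```powershell
# Show the current state of all applied resources on this node
Get-DscConfiguration

# Test whether the node has drifted from the applied configuration,
# with per-resource detail
Test-DscConfiguration -Detailed
```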
# TrafficEstimatorService
TrafficEstimatorService provides functionality for estimating the traffic you can obtain under specified conditions, such as targets and keywords.
#### WSDL
| environment | url |
|---|---|
| production | https://ss.yahooapis.jp/services/V6.0/TrafficEstimatorService?wsdl|
| sandbox | https://sandbox.ss.yahooapis.jp/services/V6.0/TrafficEstimatorService?wsdl|
#### Namespace
http://ss.yahooapis.jp/V6
#### Service overview
Estimates the traffic you can obtain under specified conditions, such as targets and keywords.
#### Operations
The following describes the operations provided by TrafficEstimatorService.
## get
Gets information about ads.
### Request
| Parameter | Required | Data type | Description |
|---|---|---|---|
| selector | ○ | [TrafficEstimatorSelector](../data/TrafficEstimatorSelector.md) | Specifies the conditions for estimating traffic. |
##### <Request sample>
```xml
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="http://ss.yahooapis.jp/V6">
<SOAP-ENV:Header>
<ns1:RequestHeader>
<ns1:license>xxxx-xxxx-xxxx-xxxx</ns1:license>
<ns1:apiAccountId>xxxx-xxxx-xxxx-xxxx</ns1:apiAccountId>
<ns1:apiAccountPassword>passwd</ns1:apiAccountPassword>
</ns1:RequestHeader>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
<ns1:get>
<ns1:selector>
<ns1:estimateRequest>
<ns1:target>
<ns1:network>YAHOO_SEARCH</ns1:network>
<ns1:mobileCarrier>DOCOMO</ns1:mobileCarrier>
<ns1:province>JP-23</ns1:province>
<ns1:province>JP-02</ns1:province>
<ns1:province>JP-21</ns1:province>
</ns1:target>
<ns1:keyword>
<ns1:type>KEYWORD</ns1:type>
<ns1:text>夏</ns1:text>
<ns1:matchType>PHRASE</ns1:matchType>
</ns1:keyword>
<ns1:bid>100</ns1:bid>
<ns1:platform>
<ns1:platformName>SMART_PHONE</ns1:platformName>
<ns1:bidMultiplier>3.0</ns1:bidMultiplier>
</ns1:platform>
<ns1:wap>WAP_ONLY</ns1:wap>
</ns1:estimateRequest>
<ns1:estimateRequest>
<ns1:target>
<ns1:platform>DESKTOP</ns1:platform>
<ns1:network>YAHOO_SEARCH</ns1:network>
<ns1:province>JP-25</ns1:province>
<ns1:province>JP-01</ns1:province>
<ns1:province>JP-26</ns1:province>
</ns1:target>
<ns1:keyword>
<ns1:type>KEYWORD</ns1:type>
<ns1:text>秋</ns1:text>
<ns1:matchType>PHRASE</ns1:matchType>
</ns1:keyword>
<ns1:bid>200</ns1:bid>
<ns1:platform>
<ns1:platformName>SMART_PHONE</ns1:platformName>
<ns1:bidMultiplier>3.0</ns1:bidMultiplier>
</ns1:platform>
<ns1:wap>WAP_INCLUDED</ns1:wap>
</ns1:estimateRequest>
<ns1:month>8</ns1:month>
</ns1:selector>
</ns1:get>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```
### Response
Response fields on success:
| Field | Data type | Description |
|---|---|---|
| rval | [TrafficEstimatorPage](../data/TrafficEstimatorPage.md) | The entries for the retrieved estimates. |
##### <Response sample>
```xml
<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:ns1="http://ss.yahooapis.jp/V6">
<SOAP-ENV:Header>
<ns1:ResponseHeader>
<ns1:service>TrafficEstimatorService</ns1:service>
<ns1:remainingQuota>109988</ns1:remainingQuota>
<ns1:quotaUsedForThisRequest>1</ns1:quotaUsedForThisRequest>
<ns1:timeTakenMillis>0.0564</ns1:timeTakenMillis>
</ns1:ResponseHeader>
</SOAP-ENV:Header>
<SOAP-ENV:Body>
<ns1:getResponse>
<ns1:rval>
<ns1:totalNumEntries>2</ns1:totalNumEntries>
<ns1:Page.Type>TrafficEstimatorPage</ns1:Page.Type>
<ns1:values>
<ns1:operationSucceeded>true</ns1:operationSucceeded>
<ns1:data xsi:type="ns1:TotalEstimateResult">
<ns1:type>TOTAL</ns1:type>
<ns1:keyword>夏</ns1:keyword>
<ns1:matchType>PHRASE</ns1:matchType>
<ns1:carrier>ALL</ns1:carrier>
<ns1:bid>100</ns1:bid>
<ns1:impressions>10000.12345678</ns1:impressions>
<ns1:clicks>99.12345678</ns1:clicks>
<ns1:rank>5.12345678</ns1:rank>
<ns1:cpc>12.12345678</ns1:cpc>
</ns1:data>
<ns1:data xsi:type="ns1:DesktopEstimateResult">
<ns1:type>DESKTOP</ns1:type>
<ns1:keyword>夏</ns1:keyword>
<ns1:matchType>PHRASE</ns1:matchType>
<ns1:bid>100</ns1:bid>
<ns1:impressions>10000.12345678</ns1:impressions>
<ns1:clicks>99.12345678</ns1:clicks>
<ns1:rank>5.12345678</ns1:rank>
<ns1:cpc>12.12345678</ns1:cpc>
</ns1:data>
<ns1:data xsi:type="ns1:SmartPhoneEstimateResult">
<ns1:type>SMART_PHONE</ns1:type>
<ns1:keyword>夏</ns1:keyword>
<ns1:matchType>PHRASE</ns1:matchType>
<ns1:bid>100</ns1:bid>
<ns1:impressions>10000.12345678</ns1:impressions>
<ns1:clicks>99.12345678</ns1:clicks>
<ns1:rank>5.12345678</ns1:rank>
<ns1:cpc>12.12345678</ns1:cpc>
</ns1:data>
<ns1:data xsi:type="ns1:WapMobileEstimateResult">
<ns1:type>WAP_MOBILE</ns1:type>
<ns1:keyword>夏</ns1:keyword>
<ns1:matchType>PHRASE</ns1:matchType>
<ns1:carrier>ALL</ns1:carrier>
<ns1:bid>100</ns1:bid>
<ns1:impressions>10000.12345678</ns1:impressions>
<ns1:clicks>99.12345678</ns1:clicks>
<ns1:rank>5.12345678</ns1:rank>
<ns1:cpc>12.12345678</ns1:cpc>
</ns1:data>
</ns1:values>
<ns1:values>
<ns1:operationSucceeded>true</ns1:operationSucceeded>
<ns1:data xsi:type="ns1:TotalEstimateResult">
<ns1:type>TOTAL</ns1:type>
<ns1:keyword>秋</ns1:keyword>
<ns1:matchType>PHRASE</ns1:matchType>
<ns1:carrier>ALL</ns1:carrier>
<ns1:bid>100</ns1:bid>
<ns1:impressions>10000.12345678</ns1:impressions>
<ns1:clicks>99.12345678</ns1:clicks>
<ns1:rank>5.12345678</ns1:rank>
<ns1:cpc>12.12345678</ns1:cpc>
</ns1:data>
<ns1:data xsi:type="ns1:DesktopEstimateResult">
<ns1:type>DESKTOP</ns1:type>
<ns1:keyword>秋</ns1:keyword>
<ns1:matchType>PHRASE</ns1:matchType>
<ns1:bid>100</ns1:bid>
<ns1:impressions>10000.12345678</ns1:impressions>
<ns1:clicks>99.12345678</ns1:clicks>
<ns1:rank>5.12345678</ns1:rank>
<ns1:cpc>12.12345678</ns1:cpc>
</ns1:data>
<ns1:data xsi:type="ns1:SmartPhoneEstimateResult">
<ns1:type>SMART_PHONE</ns1:type>
<ns1:keyword>秋</ns1:keyword>
<ns1:matchType>PHRASE</ns1:matchType>
<ns1:bid>100</ns1:bid>
<ns1:impressions>10000.12345678</ns1:impressions>
<ns1:clicks>99.12345678</ns1:clicks>
<ns1:rank>5.12345678</ns1:rank>
<ns1:cpc>12.12345678</ns1:cpc>
</ns1:data>
<ns1:data xsi:type="ns1:WapMobileEstimateResult">
<ns1:type>WAP_MOBILE</ns1:type>
<ns1:keyword>秋</ns1:keyword>
<ns1:matchType>PHRASE</ns1:matchType>
<ns1:carrier>ALL</ns1:carrier>
<ns1:bid>100</ns1:bid>
<ns1:impressions>10000.12345678</ns1:impressions>
<ns1:clicks>99.12345678</ns1:clicks>
<ns1:rank>5.12345678</ns1:rank>
<ns1:cpc>12.12345678</ns1:cpc>
</ns1:data>
</ns1:values>
</ns1:rval>
</ns1:getResponse>
</SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```
<a rel="license" href="http://creativecommons.org/licenses/by-nd/2.1/jp/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nd/2.1/jp/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nd/2.1/jp/">Creative Commons Attribution-NoDerivs 2.1 Japan License</a>.
Simple Diceware Password Generators
===================================
Tools for generating [diceware passwords](http://world.std.com/~reinhold/diceware.html "Diceware homepage").
Because we don't always have dice by our side (nor do we want to roll them 30+
times).
Just download and execute.
If you don't want to spend the handful of minutes coding this yourself, I've
done it for you in a way that you should be able to run out of box for most
platforms. No compiling, no exotic toolchain required. Download and run these.
You could even just copy the source code into your own file and run. No need to
even fire up git. If figuring out how to run these takes longer than a couple
of seconds, I've probably failed.
What's Included
---------------
+ A Python version
$ python diceware.py
slat zloty chris gripe coyote pond
$ python diceware.py 9
zig nancy beer aaa lob cope noose defy pen
$ python diceware.py -h
usage: diceware.py [-h] [N]
Generate a diceware password.
positional arguments:
N The desired number of diceware words for your password
optional arguments:
-h, --help show this help message and exit
+ A Bash version (uses `/dev/urandom`).
$ chmod u+x diceware.sh
$ ./diceware.sh
slog mammal ra den k's qb
$ ./diceware.sh 9
crank breeze tiny bu camp moron decca bulge hertz
$ ./diceware.sh -h
usage: diceware.sh [-h] [N]
Generate a diceware password.
    positional arguments:
N The desired number of diceware words for your password
optional arguments:
-h, --help show this help message and exit
---
layout: all
headline: "Sabahattin Ali Yarışmaları"
title: "Sabahattin Ali Yarışmaları"
key: "sabahattin ali"
description: "Ünlü Türk yazar Sabahattin Ali adına düzenlenen edebiyat yarışmalarıdır"
permalink: "sabahattin-ali-yarismalari/"
---
---
type: page
title: about
date: 2017-01-13 23:08:56
---
About!
---
title: Nature can help!
author: Meaghan
release: March 16 2020
text: Hey friends – These are unusual times. While our day-to-day routines have changed pretty dramatically, it’s important to remember that we can still seek joy, peace, and health.
img: walkingHealty.jpg
alt: two people happily walking
highlighted: true
permalink: "blogs/nature-can-help/"
---
Hey friends –
These are unusual times. While our day-to-day routines have changed pretty dramatically, it’s important to remember that we can still seek joy, peace, and health.
__We might need to keep some physical distance from each other for a while, but we don’t necessarily need to shut ourselves in.__ The outdoors is still open for business for most of us (just remember to check with local authorities and be sure to practice social distancing!). AllTrails is here to help you find local trails and open spaces for mental, physical and emotional catharsis. We invite you to move your body, breathe the fresh air, and strengthen your spirit in the safe haven that’s always been there – Nature.
Just a heads up though, while the trails may be open, many facilities are now closed or closing including public restrooms, visitor centers, and public campgrounds. Make sure you plan accordingly.
As a community committed to celebrating and protecting Mother Nature, let’s not forget that this relationship is symbiotic – we can count on her to help take care of us, too.
Elbow bump!
The Hikeventure Team
*source:*
[Original blog](https://fieldnotes.alltrails.com/blog/2020/03/16/nature-can-help/)
# shape_pie
Returns shape points of a pie (circular sector) shape. They can be used with xxx_extrude modules of dotSCAD. The shape points can be also used with the built-in polygon module.
## Parameters
- `radius` : The radius of the circle.
- `angle` : A single value or a 2 element vector which defines the central angle. The first element of the vector is the beginning angle in degrees, and the second element is the ending angle.
- `$fa`, `$fs`, `$fn` : Check [the circle module](https://en.wikibooks.org/wiki/OpenSCAD_User_Manual/Using_the_2D_Subsystem#circle) for more details.
## Examples
use <shape_pie.scad>;
shape_pts = shape_pie(10, [45, 315], $fn = 24);
polygon(shape_pts);

use <shape_pie.scad>;
use <helix_extrude.scad>;
shape_pts = shape_pie(10, [45, 315], $fn = 8);
helix_extrude(shape_pts,
radius = 40,
levels = 5,
level_dist = 20
);

PHP Extlib library
===========
## Installation using Composer
{
"require": {
"lciolecki/php-library": "dev-master"
}
}
---
title: Znajdź i Zamień tekst i wybieranie wielu karetki
ms.date: 08/14/2018
ms.prod: visual-studio-dev15
ms.technology: vs-ide-general
ms.topic: conceptual
f1_keywords:
- vs.find
- vs.findreplacecontrol
- vs.findreplace.findsymbol
- vs.findreplace.symbol
- findresultswindow
- vs.findreplace.quickreplace
- vs.findsymbol
- vs.findinfiles
- vs.findresults1
- vs,findsymbolwindow
- vs.findreplace.quickfind
- vs.lookin
- vs.replace
helpviewer_keywords:
- text searches
- Replace in Files dialog box
- Find in Files dialog box
- text searches, finding and replacing text
- text, finding and replacing
- find and replace
- find text
- replace text
- multi-caret selection
author: gewarren
ms.author: gewarren
manager: douge
ms.workload:
- multiple
ms.openlocfilehash: 3f6359585f13a4086a332d8a4dbcc3c435aeaa26
ms.sourcegitcommit: 4708f0ba09b540424efcc344f8438f25432e3d51
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 09/11/2018
ms.locfileid: "44384243"
---
# <a name="find-and-replace-text"></a>Znajdowanie i zastępowanie tekstu
Możesz znaleźć i zamienić tekst w edytorze programu Visual Studio przy użyciu [Znajdź i Zamień](#find-and-replace-control) lub [Znajdź/Zamień w plikach](#find-in-files-and-replace-in-files). Nowość w programie Visual Studio 2017 w wersji 15.8 można znaleźć i zamienić *niektóre* wystąpień wzorca za pomocą *[zaznaczenie wielu karetki](#multi-caret-selection)*.
> [!TIP]
> Jeśli zmieniasz kodu symbole, takie jak zmienne i metody, lepiej jest *[zrefaktoryzuj](../ide/reference/rename.md)* ich niż korzystać Znajdź i Zamień. Refaktoryzacja jest inteligentne i określa zakres, Znajdź i Zamień bezrefleksyjne zastępuje wszystkie wystąpienia.
Funkcja Znajdź i zamień jest dostępna w edytorze, a w niektórych oparte na tekście windows takich jak **Find Results** systemu windows, w oknach projektantów, takich jak projektant XAML i Projektant formularzy Windows i w oknach narzędzi.
Możesz ograniczyć wyszukiwanie do bieżącego dokumentu, bieżącego rozwiązania lub niestandardowego zestawu folderów. Można również określić zestaw rozszerzeń nazw plików dla wyszukiwania wieloplikowego. Dostosować składnię wyszukiwania przy użyciu platformy .NET [wyrażeń regularnych](../ide/using-regular-expressions-in-visual-studio.md).
> [!TIP]
> [Find/Command](../ide/find-command-box.md) pole jest dostępne jako formant paska narzędzi, ale nie jest wyświetlany domyślnie. Do wyświetlenia **Find/Command** wybierz opcję **apletu Dodaj lub usuń przyciski** na **standardowa** narzędzi, a następnie wybierz pozycję **znaleźć**.
## <a name="find-and-replace-control"></a>Formant Znajdź i Zamień
**Znajdź i Zamień** formant jest widoczny w prawym górnym rogu okna edytora kodu. **Znajdź i Zamień** kontroli natychmiast wyróżnia każde wystąpienie wyszukiwanego ciągu w bieżącym dokumencie. Możesz przejść z jednego wystąpienia do innego, wybierając **Znajdź następny** przycisk lub **Find Previous** przycisku na kontrolce wyszukiwania.

Dostęp do opcji zastępowania, wybierając przycisk obok **znaleźć** pola tekstowego. Aby wykonywać jedną zamianę naraz, wybierz opcję **Zamień następny** znajdujący się obok **Zastąp** pola tekstowego. Aby zamienić wszystkie dopasowania, wybierz opcję **Zamień wszystkie** przycisku.
Aby zmienić kolor podświetlenia dopasowań, wybierz **narzędzia** menu, wybierz opcję **opcje**, a następnie wybierz **środowiska**i wybierz **czcionki i kolory** . W **Pokaż ustawienia dla** listy wybierz **edytora tekstów**, a następnie w polu **wyświetlania elementów** listy wybierz **Znajdź zaznaczone (rozszerzenie)**.
### <a name="search-tool-windows"></a>Wyszukiwanie okien
Możesz użyć **znaleźć** kontrolować w oknach kodu lub tekstu, takie jak **dane wyjściowe** systemu windows i **Find Results** systemu windows, wybierając **Edytuj** > **Znajdź i Zamień** lub naciskając **Ctrl + F**.
Wersja **znaleźć** kontroli jest również dostępna w niektórych oknach narzędzi. Na przykład można filtrować listę formantów w **przybornika** okna, wprowadzając tekst w polu wyszukiwania. Innymi oknami narzędzi, które umożliwiają wyszukiwanie ich zawartość zawierają **Eksploratora rozwiązań**, **właściwości** oknie i **Team Explorer**.
## <a name="find-in-files-and-replace-in-files"></a>Znajdź w plikach i Zamień w plikach
**Znajdź/Zamień w plikach** działa jak **Znajdź i Zamień** kontrolować, z tą różnicą, że można zdefiniować zakres wyszukiwania. Nie tylko można przeszukiwać bieżący plik otwarty w edytorze, ale również wszystkie otwierać dokumenty, całe rozwiązanie, bieżący projekt i wybrane foldery zestawów. Możesz również wyszukiwać według rozszerzenia nazwy pliku. Aby uzyskać dostęp do **Znajdź/Zamień w plikach** okno dialogowe, wybierz opcję **Znajdź i Zamień** na **Edytuj** menu lub naciśnij klawisz **Ctrl + Shift + F**.

### <a name="find-results"></a>Znajdź wyniki
Po wybraniu **Znajdź wszystkie**, **Find Results** okna otwiera i wyświetla listę wyników wyszukiwania. Wybranie wyniku na liście Wyświetla skojarzony plik i wyróżnienie dopasowania. Jeśli plik nie jest już otwarty do edycji, jest otwierany w karcie podglądu po prawej stronie na karcie dobrze. Możesz użyć **znaleźć** formantu, aby przeszukiwać **Find Results** listy.
### <a name="create-custom-search-folder-sets"></a>Tworzenie zestawów folderu wyszukiwania niestandardowego
Można zdefiniować zakres wyszukiwania, wybierając **Choose Search Folders** przycisku (wygląda jak **...** ) obok pozycji **przeszukania** pole. W **Choose Search Folders** okno dialogowe, można określić zbiór folderów wyszukiwania i zapisać specyfikację dzięki czemu użytkownik może użyć go ponownie później.
> [!TIP]
> Jeśli komputer zdalny dysk zamapowany na komputer lokalny, możesz określić folderów do wyszukiwania na komputerze zdalnym.
### <a name="create-custom-component-sets"></a>Tworzenie zestawów składników niestandardowych
Można zdefiniować zestawy składników jako zakres wyszukiwania, wybierając **Edit Custom Component Set** znajdujący się obok **przeszukania** pole. Można określić zainstalowane składniki .NET lub COM, projekty programu Visual Studio, które znajdują się w rozwiązaniu lub wszystkie zestawy lub typy biblioteki (*.dll*, *.tlb*, *.olb*, *.exe*, lub *.ocx*). Aby przeszukać odwołania, zaznacz **Szukaj w odwołaniach** pole.
## <a name="multi-caret-selection"></a>Wybieranie wielu karetki
**Nowość w programie Visual Studio 2017 w wersji 15.8**
Użyj *zaznaczenie wielu karetki* się tego samego edycji w dwóch lub więcej miejsc, w tym samym czasie. Na przykład można wstawić ten sam tekst lub zmodyfikować istniejący tekst w wielu lokalizacjach, w tym samym czasie.
Poniższy zrzut ekranu `-0000` wybrane w trzech miejscach; gdy użytkownik naciśnie **Usuń**, zostaną usunięte wszystkie trzy opcje:

Aby wybrać wiele daszka, kliknij przycisk lub utworzyć pierwszy wybór tekstu w zwykły sposób, a następnie naciśnij **Alt** podczas kliknij lub wybierz tekst w każdej lokalizacji dodatkowej. Można również automatycznie dodać pasujący tekst jako dodatkowe opcje lub zaznacz pole tekstowe do edycji identycznie w każdym wierszu.
> [!TIP]
> Jeśli wybrano **Alt** jako klucz modyfikujący kliknięcie myszą, przejdź do definicji w **narzędzia** > **opcje**, wybierz wielu daszka jest wyłączona.
### <a name="commands"></a>Polecenia
Dla zachowania wyboru wielu karetki, należy użyć następujących kluczy i akcji:
|Skrót|Akcja|
|-|-|
|**CTRL**+**Alt** + kliknięcie|Dodawanie dodatkowej karetki|
|**CTRL**+**Alt** i kliknij dwukrotnie ikonę|Dodaj wybrane elementy dodatkowej programu word|
|**CTRL**+**Alt** kliknij i przeciągnij|Dodaj pomocniczy zaznaczenie|
|**SHIFT**+**Alt**+**.**|Dodaj następny szukanego tekstu jako zaznaczenia|
|**CTRL**+**Shift**+**Alt**+**,**|Dodaj wszystkie dopasowania tekstu jako zaznaczenia|
|**SHIFT**+**Alt**+**,**|Usuń ostatni zaznaczone wystąpienie|
|**CTRL**+**Shift**+**Alt**+**.**|Pomiń kolejne wystąpienie dopasowania|
|**ALT** + kliknięcie|Dodaj pole wyboru|
|**ESC** lub kliknij przycisk|Wyczyść wszystkie zaznaczenia|
Niektóre polecenia są również dostępne na **Edytuj** menu, w obszarze **wielu daszka**:

## <a name="see-also"></a>Zobacz także
- [Używanie wyrażeń regularnych w programie Visual Studio](../ide/using-regular-expressions-in-visual-studio.md)
- [Refaktoryzacja kodu w programie Visual Studio](../ide/refactoring-in-visual-studio.md) | 65.856061 | 514 | 0.777867 | pol_Latn | 0.999772 |
e10db90e46621a7ba92276e1cb62e83e58c41ef5 | 2,609 | md | Markdown | docs/mkdocs/docs/api/macros/json_throw_user.md | sthagen/nlohmann-json | 2b2d8b81ea3717385ac408e23aeda97326e506b5 | [
"MIT"
] | null | null | null | docs/mkdocs/docs/api/macros/json_throw_user.md | sthagen/nlohmann-json | 2b2d8b81ea3717385ac408e23aeda97326e506b5 | [
"MIT"
] | null | null | null | docs/mkdocs/docs/api/macros/json_throw_user.md | sthagen/nlohmann-json | 2b2d8b81ea3717385ac408e23aeda97326e506b5 | [
"MIT"
] | null | null | null | # JSON_CATCH_USER, JSON_THROW_USER, JSON_TRY_USER
```cpp
// (1)
#define JSON_CATCH_USER(exception) /* value */
// (2)
#define JSON_THROW_USER(exception) /* value */
// (3)
#define JSON_TRY_USER /* value */
```
Controls how exceptions are handled by the library.
1. This macro overrides [`#!cpp catch`](https://en.cppreference.com/w/cpp/language/try_catch) calls inside the library.
The argument is the type of the exception to catch. As of version 3.8.0, the library only catches `std::out_of_range`
exceptions internally to rethrow them as [`json::out_of_range`](../../home/exceptions.md#out-of-range) exceptions.
The macro is always followed by a scope.
2. This macro overrides `#!cpp throw` calls inside the library. The argument is the exception to be thrown. Note that
`JSON_THROW_USER` should leave the current scope (e.g., by throwing or aborting), as continuing after it may yield
undefined behavior.
3. This macro overrides `#!cpp try` calls inside the library. It has no arguments and is always followed by a scope.
## Parameters
`exception` (in)
: an exception type
## Default definition
By default, the macros map to their respective C++ keywords:
```cpp
#define JSON_CATCH_USER(exception) catch(exception)
#define JSON_THROW_USER(exception) throw exception
#define JSON_TRY_USER try
```
When exceptions are switched off, the `#!cpp try` block is executed unconditionally, and throwing exceptions is
replaced by calling [`std::abort`](https://en.cppreference.com/w/cpp/utility/program/abort) to make reaching the
`#!cpp throw` branch abort the process.
```cpp
#define JSON_THROW_USER(exception) std::abort()
#define JSON_TRY_USER if (true)
#define JSON_CATCH_USER(exception) if (false)
```
## Examples
??? example
The code below switches off exceptions and creates a log entry with a detailed error message in case of errors.
```cpp
#include <iostream>
#define JSON_TRY_USER if(true)
#define JSON_CATCH_USER(exception) if(false)
#define JSON_THROW_USER(exception) \
{std::clog << "Error in " << __FILE__ << ":" << __LINE__ \
<< " (function " << __FUNCTION__ << ") - " \
<< (exception).what() << std::endl; \
std::abort();}
#include <nlohmann/json.hpp>
```
## See also
- [Switch off exceptions](../../home/exceptions.md#switch-off-exceptions) for more information how to switch off exceptions
- [JSON_NOEXCEPTION](JSON_NOEXCEPTION) - switch off exceptions
## Version history
- Added in version 3.1.0.
| 34.328947 | 123 | 0.68877 | eng_Latn | 0.910436 |
e10e8f5a129ea8a87fc0577d5f96ea8eddfd67cf | 3,267 | md | Markdown | docs/extensibility/debugger/reference/bp-passcount-style.md | MicrosoftDocs/visualstudio-docs.pt-br | b1882dc108a37c4caebbf5a80c274c440b9bbd4a | [
"CC-BY-4.0",
"MIT"
] | 5 | 2019-02-19T20:22:40.000Z | 2022-02-19T14:55:39.000Z | docs/extensibility/debugger/reference/bp-passcount-style.md | MicrosoftDocs/visualstudio-docs.pt-br | b1882dc108a37c4caebbf5a80c274c440b9bbd4a | [
"CC-BY-4.0",
"MIT"
] | 32 | 2018-08-24T19:12:03.000Z | 2021-03-03T01:30:48.000Z | docs/extensibility/debugger/reference/bp-passcount-style.md | MicrosoftDocs/visualstudio-docs.pt-br | b1882dc108a37c4caebbf5a80c274c440b9bbd4a | [
"CC-BY-4.0",
"MIT"
] | 25 | 2017-11-02T16:03:15.000Z | 2021-10-02T02:18:00.000Z | ---
description: Especifica a condição associada à contagem de passagem do ponto de interrupção que faz com que o ponto de interrupção seja a incêndio.
title: BP_PASSCOUNT_STYLE | Microsoft Docs
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- BP_PASSCOUNT_STYLE
helpviewer_keywords:
- BP_PASSCOUNT_STYLE structure
ms.assetid: 0a647047-e2d5-4724-a0b8-68108425ecad
author: leslierichardson95
ms.author: lerich
manager: jmartens
ms.technology: vs-ide-debug
ms.workload:
- vssdk
dev_langs:
- CPP
- CSharp
ms.openlocfilehash: 02aae6a4ef4939660639004602b539b0f68c4fa2
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 08/13/2021
ms.locfileid: "122145907"
---
# <a name="bp_passcount_style"></a>BP_PASSCOUNT_STYLE
Especifica a condição associada à contagem de passagem do ponto de interrupção que faz com que o ponto de interrupção seja a incêndio.
## <a name="syntax"></a>Syntax
```cpp
enum enum_BP_PASSCOUNT_STYLE {
BP_PASSCOUNT_NONE = 0x0000,
BP_PASSCOUNT_EQUAL = 0x0001,
BP_PASSCOUNT_EQUAL_OR_GREATER = 0x0002,
BP_PASSCOUNT_MOD = 0x0003
};
typedef DWORD BP_PASSCOUNT_STYLE;
```
```csharp
public enum enum_BP_PASSCOUNT_STYLE {
BP_PASSCOUNT_NONE = 0x0000,
BP_PASSCOUNT_EQUAL = 0x0001,
BP_PASSCOUNT_EQUAL_OR_GREATER = 0x0002,
BP_PASSCOUNT_MOD = 0x0003
};
```
## <a name="fields"></a>Campos
`BP_PASSCOUNT_NONE`\
Não especifica nenhum estilo de contagem de passagem de ponto de interrupção.
`BP_PASSCOUNT_EQUAL`\
Define o estilo de contagem de passagem do ponto de interrupção como igual. O ponto de interrupção é a incêndio quando o número de vezes que o ponto de interrupção é atingido é igual à contagem de passagem.
`BP_PASSCOUNT_EQUAL_OR_GREATER`\
Define o estilo de contagem de passagem do ponto de interrupção como igual ou maior. O ponto de interrupção é a disparo quando o número de vezes que o ponto de interrupção é atingido é igual ou maior que a contagem de passagem.
`BP_PASSCOUNT_MOD`\
Especifica uma contagem de passagem de módulo. Por exemplo, se a contagem de aprovação for do tipo e o valor da contagem de aprovação for 4, o ponto de interrupção será a fire sempre que a contagem de acertos for um múltiplo `BP_PASSCOUNT_MOD` de 4.
## <a name="remarks"></a>Comentários
Usado para o membro da estrutura BP_PASSCOUNT que, por sua vez, é um membro das estruturas `stylePassCount` [BP_REQUEST_INFO](../../../extensibility/debugger/reference/bp-request-info.md) e [](../../../extensibility/debugger/reference/bp-passcount.md) [BP_REQUEST_INFO2](../../../extensibility/debugger/reference/bp-request-info2.md) estruturas.
## <a name="requirements"></a>Requirements
Header: msdbg.h
Namespace: Microsoft.VisualStudio.Debugger.Interop
Assembly: Microsoft.VisualStudio.Debugger.Interop.dll
## <a name="see-also"></a>See also
- [Enumerations](../../../extensibility/debugger/reference/enumerations-visual-studio-debugging.md)
- [BP_PASSCOUNT](../../../extensibility/debugger/reference/bp-passcount.md)
- [BP_REQUEST_INFO](../../../extensibility/debugger/reference/bp-request-info.md)
- [BP_REQUEST_INFO2](../../../extensibility/debugger/reference/bp-request-info2.md)
| 41.35443 | 345 | 0.763085 | por_Latn | 0.942371 |
e10f25200c7181a991249c4a06f2ab2f1e653867 | 87 | md | Markdown | docs/pages/components/modal/fragments/import.md | imadilkhalil/rsuite | 2d7580d22a367a6b4e3a36989e59ee4bbaddf646 | [
"MIT"
] | 3 | 2021-01-12T01:39:44.000Z | 2021-01-12T01:39:48.000Z | docs/pages/components/modal/fragments/import.md | song-ran/rsuite | dd674d0b8a931387cef42c973213b05604b04e17 | [
"MIT"
] | 14 | 2022-01-11T19:37:32.000Z | 2022-03-31T11:32:01.000Z | docs/pages/components/modal/fragments/import.md | song-ran/rsuite | dd674d0b8a931387cef42c973213b05604b04e17 | [
"MIT"
] | null | null | null | ```js
import { Modal } from 'rsuite';
// or
import Modal from 'rsuite/lib/Modal';
```
| 12.428571 | 37 | 0.62069 | eng_Latn | 0.864523 |
e10fa518f8b1830607ce9cfd15f1413f0ad3df45 | 291 | md | Markdown | README.md | luke2m/BigSurBar | 93c2bb4d61e2d9cda2e48b43ac7c4474c215e228 | [
"MIT"
] | null | null | null | README.md | luke2m/BigSurBar | 93c2bb4d61e2d9cda2e48b43ac7c4474c215e228 | [
"MIT"
] | null | null | null | README.md | luke2m/BigSurBar | 93c2bb4d61e2d9cda2e48b43ac7c4474c215e228 | [
"MIT"
] | null | null | null | # MacOS Big Sur StatusBar
## For iPadOS
## Requires the latest version of XenHTML from this repo: https://xenpublic.incendo.ws/
# Installing:
- Install it (from the repo), then add it as a background widget from XenHTML
# Manual Install
- Download the deb from the release page, then install it with Filza
| 29.1 | 86 | 0.766323 | eng_Latn | 0.935308 |
e10ffc23b6c82f3e5de1d175b6524283fdeef2c1 | 153 | md | Markdown | CHANGELOG.md | walid-ashik/dartTwitterAPI | 51eb638eb7f0fffe902ba54c80a44c0e44427225 | [
"BSD-2-Clause"
] | 16 | 2020-02-03T22:56:01.000Z | 2021-12-23T10:02:28.000Z | CHANGELOG.md | kwe-k-u/toot | e73fec222f911a3cc9076ba702036d3e6901fb3b | [
"BSD-2-Clause"
] | 2 | 2020-02-02T20:05:21.000Z | 2021-08-22T03:14:42.000Z | CHANGELOG.md | kwe-k-u/toot | e73fec222f911a3cc9076ba702036d3e6901fb3b | [
"BSD-2-Clause"
] | 5 | 2020-02-07T16:36:37.000Z | 2022-01-07T10:43:14.000Z | # 0.1.2
* Fixing the changelog version titles
# 0.1.1
* Added an example and applied a couple of suggested fixes
# 0.1.0
* Initial Version of package uploaded
| 17 | 50 | 0.732026 | eng_Latn | 0.997419 |
e110362b9ff63f69c603c6d62704c2a6f8932c08 | 422 | md | Markdown | README.md | polco-us/jupyter-notebook-heroku | 2990bfa8e9bbf22229d98315e80f3249e481e2ea | [
"MIT"
] | 4 | 2019-10-07T10:36:47.000Z | 2021-05-01T06:18:34.000Z | README.md | Bpowers4/jupyter-notebook-heroku | 3532cedc183aa7b04d0b7aed6a3d9278db2b93f7 | [
"MIT"
] | 2 | 2020-03-24T17:43:40.000Z | 2020-07-26T11:16:09.000Z | README.md | Bpowers4/jupyter-notebook-heroku | 3532cedc183aa7b04d0b7aed6a3d9278db2b93f7 | [
"MIT"
] | 22 | 2019-10-23T03:39:00.000Z | 2021-06-08T14:38:14.000Z | # jupyter-notebook-heroku
## Deployment
1. Generate a secure password with [prepare_password.py](https://github.com/msimav/jupyter-notebook-heroku/blob/master/prepare_password.py)
```bash
curl -s https://raw.githubusercontent.com/msimav/jupyter-notebook-heroku/master/prepare_password.py | python
```
2. Click Deploy to Heroku Button
[](https://heroku.com/deploy)
| 32.461538 | 139 | 0.774882 | kor_Hang | 0.099288 |
e11069ccedacc23be6562c7ece2df26ecbce870e | 2,044 | md | Markdown | docs/c-language/overview-of-functions.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/c-language/overview-of-functions.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/c-language/overview-of-functions.md | Mdlglobal-atlassian-net/cpp-docs.it-it | c8edd4e9238d24b047d2b59a86e2a540f371bd93 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-28T15:54:57.000Z | 2020-05-28T15:54:57.000Z | ---
title: Overview of functions
ms.date: 11/04/2016
helpviewer_keywords:
- functions [C++]
- control flow, function calls
ms.assetid: b6f4637f-02b9-49d8-8601-1f886bd2cfb9
ms.openlocfilehash: 1c54dcdeec1bad1ffbd335d411e39c77be0ad961
ms.sourcegitcommit: 0ab61bc3d2b6cfbd52a16c6ab2b97a8ea1864f12
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "62232286"
---
# <a name="overview-of-functions"></a>Overview of functions
Functions must have a definition and should have a declaration, although a definition can serve as a declaration if the definition appears before the function is called. The function definition includes the function body (the code that executes when the function is called).
A function declaration establishes the name, return type, and attributes of a function that is defined elsewhere in the program. A function declaration must precede the call to the function. This is why the header files containing the declarations for the run-time functions are included in your code before a call to a run-time function. If the declaration has information about the types and number of parameters, the declaration is a prototype. For more information, see [Function Prototypes](../c-language/function-prototypes.md).
The compiler uses the prototype to compare the types of arguments in subsequent calls to the function with the function's parameters, and to convert the types of the arguments to the types of the parameters whenever necessary.
A function call passes execution control from the calling function to the called function. The arguments, if any, are passed by value to the called function. Execution of a `return` statement in the called function returns control, and possibly a value, to the calling function.
## <a name="see-also"></a>See also
[Functions](../c-language/functions-c.md)
| 73 | 583 | 0.819472 | ita_Latn | 0.999466 |
e110c1f39f364b5b29bef05155495af48484896f | 1,293 | md | Markdown | AlchemyInsights/blur-your-background-in-a-teams-meeting.md | isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO | e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-05-19T19:07:15.000Z | 2021-03-06T00:34:53.000Z | AlchemyInsights/blur-your-background-in-a-teams-meeting.md | isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO | e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:25:08.000Z | 2022-02-09T06:52:49.000Z | AlchemyInsights/blur-your-background-in-a-teams-meeting.md | isabella232/OfficeDocs-AlchemyInsights-pr.nb-NO | e72dad0e24e02cdcb7eeb3dd8c4fc4cf5ec56554 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-10-09T20:30:02.000Z | 2020-06-02T23:24:46.000Z | ---
title: Blur your background in a Teams meeting
ms.author: pebaum
author: pebaum
manager: scotv
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Priority
ms.collection: Adm_O365
ms.custom:
- "3815"
- "9001720"
ms.openlocfilehash: 7b553d5d0342382af3cbabb8e9c42583c404f88b28f3eea33642baef2863dcd7
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: nb-NO
ms.lasthandoff: 08/05/2021
ms.locfileid: "53931350"
---
# <a name="blur-your-background-in-a-teams-meeting"></a>Blur your background in a Teams meeting
You can only blur your background for scheduled meetings.
- To join a meeting with a blurred background, move the blur slider (the one to the right of the video slider) to the right on the Choose your audio and video settings screen when joining the meeting.
- To turn on background blur during a meeting, click **More options** > **Blur my background**.
For more information, see [Blur your background in a Teams meeting](https://support.office.com/article/Blur-your-background-in-a-Teams-meeting-f77a2381-443a-499d-825e-509a140f4780). | 43.1 | 213 | 0.803558 | nob_Latn | 0.915228 |
e11175a95b6e9c34e546a0145522567ac19f1a2b | 38 | md | Markdown | README.md | wang-zihao/SmartButler | d74d614bc4ade587a647df1bce01189959516a8b | [
"Apache-2.0"
] | null | null | null | README.md | wang-zihao/SmartButler | d74d614bc4ade587a647df1bce01189959516a8b | [
"Apache-2.0"
] | 1 | 2018-05-18T07:47:37.000Z | 2018-05-18T07:47:37.000Z | README.md | wang-zihao/SmartButler | d74d614bc4ade587a647df1bce01189959516a8b | [
"Apache-2.0"
] | null | null | null | # SmartButler
Smart voice life butler
| 12.666667 | 23 | 0.815789 | kor_Hang | 0.875281 |
e11251adcb3914bd4d46d51fe93f7e94cc5d67fc | 1,615 | md | Markdown | README.md | evancharlton/spelling-bee-grid | 0518c04b7bddcfac1329ee4b8397e40d05432f90 | [
"MIT"
] | null | null | null | README.md | evancharlton/spelling-bee-grid | 0518c04b7bddcfac1329ee4b8397e40d05432f90 | [
"MIT"
] | null | null | null | README.md | evancharlton/spelling-bee-grid | 0518c04b7bddcfac1329ee4b8397e40d05432f90 | [
"MIT"
] | null | null | null | # Spelling Bee Grid
A simple Chrome extension to augment the [New York Times'][nyt] fantastic [Spelling Bee] game.
## Usage
When you reach the "Genius" level, a new button for "Today's Grid" will appear in the toolbar.

Clicking this button will bring up the grid for today's answers.

This might just help you reach Queen Bee status on those extra-difficult days!
## Installation
This is published to the [Chrome Web Store][cws] as the [Spelling Bee Grid][sbg] extension.
It can be installed for free with a few clicks.
## Development
To install this locally, start by cloning this repository:
```sh
git clone https://github.com/evancharlton/spelling-bee-grid
```
Then perform the following actions:
1. Clone the repo
1. Navigate to the extensions management page in [Chrome][chrome] (`chrome://extensions`)
1. If you use [Edge][edge], use `edge://extensions`
1. Enable developer mode by flipping the toggle switch for "Developer mode" to the **enabled** position.
1. Load the unpacked extension:
1. Click the button titled **Load unpacked**
1. Choose the `crx/` folder within the `spelling-bee-grid` checkout
When you navigate to the [spelling bee], the extension should be automatically loaded.
PRs welcome!
[nyt]: https://nytimes.com
[spelling bee]: https://www.nytimes.com/puzzles/spelling-bee
[sbg]: https://chrome.google.com/webstore/detail/gfipmgpiamgpdnfcconjobelbkkfphkp
[cws]: https://chrome.google.com/webstore
[chrome]: https://chrome.google.com
[edge]: https://www.microsoft.com/en-us/edge
| 32.3 | 104 | 0.749845 | eng_Latn | 0.882966 |
e1128a14bf706831c38456beb18a160e067fbefe | 17 | md | Markdown | _includes/01-name.md | nicolorossetti/markdown-portfolio | ee8a4209a189474ec2a8505fe7345f6c071ffc10 | [
"MIT"
] | null | null | null | _includes/01-name.md | nicolorossetti/markdown-portfolio | ee8a4209a189474ec2a8505fe7345f6c071ffc10 | [
"MIT"
] | 5 | 2021-06-25T13:54:35.000Z | 2021-06-25T14:49:44.000Z | _includes/01-name.md | nicolorossetti/markdown-portfolio | ee8a4209a189474ec2a8505fe7345f6c071ffc10 | [
"MIT"
] | null | null | null | ## My name
NRoss
| 5.666667 | 10 | 0.647059 | eng_Latn | 0.986017 |
e113c70d900e4b8370f1cd0c34419c22ab2dbe3a | 655 | md | Markdown | README.md | Morcki/android_raw2rinex | 6e53bef09b609f99af47786fce1879673191788f | [
"MIT"
] | 5 | 2020-07-11T02:42:47.000Z | 2021-01-11T17:51:42.000Z | README.md | Morcki/android_raw2rinex | 6e53bef09b609f99af47786fce1879673191788f | [
"MIT"
] | 1 | 2021-01-11T18:02:51.000Z | 2022-03-16T14:10:04.000Z | README.md | Morcki/android_raw2rinex | 6e53bef09b609f99af47786fce1879673191788f | [
"MIT"
] | 3 | 2020-10-24T10:27:41.000Z | 2021-01-11T17:51:46.000Z | # Usage
## function
Converts raw/fix data generated by GnssLogger into the standard observation format (RINEX version 3.x)
## android_raw2rinex
`raw2rinex.py -c <configure file>`
## configure file
- dir_path : directory of raw file
- raw_path : raw file name
- type : FIX / RAW
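The exact configure-file syntax is defined by `raw2rinex.py`; purely as an illustrative sketch of the three fields (the paths and file names below are made up), it could look like:

```text
# illustrative values only -- check raw2rinex.py for the real syntax
dir_path : /path/to/logs/
raw_path : gnss_log.txt
type     : RAW
```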
## mode
- FIX:
Gets positioning results from Android's internal positioning algorithm.
Outputs a `*.out` file with the format `YYYY/mm/dd HH:MM:SS X Y Z`
- RAW:
Converts raw data to RINEX 3.x format.
Outputs a `*.o` file.
## reference
Reference: [Android GNSS raw measurements](https://gnss-compare.readthedocs.io/en/latest/user_manual/android_gnssMeasurements.html?tdsourcetag=s_pctim_aiomsg)
| 18.714286 | 159 | 0.732824 | eng_Latn | 0.512425 |
e11483a21bb5c93b945990e5a283f96706f16584 | 1,900 | md | Markdown | _posts/2015-03-28-how-to-add-custom-file-attributes.md | albertattard/blog | b451056bfc83edd206e371bdaedb0ded677ea213 | [
"Apache-2.0"
] | null | null | null | _posts/2015-03-28-how-to-add-custom-file-attributes.md | albertattard/blog | b451056bfc83edd206e371bdaedb0ded677ea213 | [
"Apache-2.0"
] | null | null | null | _posts/2015-03-28-how-to-add-custom-file-attributes.md | albertattard/blog | b451056bfc83edd206e371bdaedb0ded677ea213 | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: How to Add Custom File Attributes
description: The Files class, added in Java 7, provides an easy way to add and retrieve custom file attributes
date: 2015-03-28 08:00:00 +0200
categories: IO
permalink: how-to-add-custom-file-attributes
author: Albert Attard
published: true
---
The `Files` class ([Java Doc](http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html)), added in Java 7 ([Java Doc](http://docs.oracle.com/javase/7/docs/api/)), provides an easy way to add and retrieve custom file attributes, as shown in the following example.
```java
package com.javacreed.examples.io;
import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.UserDefinedFileAttributeView;
public class Example {
public static void main(final String[] args) throws Exception {
final Path file = Paths.get(Example.class.getResource("/samples/example.txt").toURI()).toAbsolutePath();
final UserDefinedFileAttributeView view = Files.getFileAttributeView(file, UserDefinedFileAttributeView.class);
/* The file attribute */
final String name = "com.javacreed.attr.1";
final String value = "Custom Value 1";
/* Write the properties */
final byte[] bytes = value.getBytes("UTF-8");
final ByteBuffer writeBuffer = ByteBuffer.allocate(bytes.length);
writeBuffer.put(bytes);
writeBuffer.flip();
view.write(name, writeBuffer);
/* Read the property */
final ByteBuffer readBuffer = ByteBuffer.allocate(view.size(name));
view.read(name, readBuffer);
readBuffer.flip();
final String valueFromAttributes = new String(readBuffer.array(), "UTF-8");
System.out.println("File Attribute: " + valueFromAttributes);
}
}
```
The above code can be downloaded from [Git Hub](https://github.com/javacreed/how-to-add-custom-file-attributes).
| 36.538462 | 272 | 0.735789 | eng_Latn | 0.608624 |
e114942aa8343ee6cb80a4eca5efd8a35d7e58b5 | 1,959 | md | Markdown | Vorlesungsbeispiele/MRT2_VL-8_OPCUA_FirstSteps_client/README.md | zmanjiyani/PLT_MRT_ARM-RPi2 | da77ab8ddf652a12ae6a6647c993daa2f94b39fb | [
"MIT"
] | 4 | 2015-11-11T14:02:17.000Z | 2021-11-29T16:07:38.000Z | Vorlesungsbeispiele/MRT2_VL-8_OPCUA_FirstSteps_client/README.md | zmanjiyani/PLT_MRT_ARM-RPi2 | da77ab8ddf652a12ae6a6647c993daa2f94b39fb | [
"MIT"
] | 6 | 2019-04-20T14:48:58.000Z | 2020-08-26T15:13:02.000Z | Vorlesungsbeispiele/MRT2_VL-8_OPCUA_FirstSteps_client/README.md | zmanjiyani/PLT_MRT_ARM-RPi2 | da77ab8ddf652a12ae6a6647c993daa2f94b39fb | [
"MIT"
] | 9 | 2016-01-10T15:14:28.000Z | 2021-10-13T22:45:19.000Z | # First OPC UA Client
This program creates a new client using the open62541 stack and connects to our server on the Pi (the server may be the traffic light or the first-steps example).
The client does nothing except stay connected to the stack. It serves as a first step for further interactions once we are connected.
The client can run either locally or on the Pi.
**NOTE:** This project only contains the used API header and sources. For the complete project, please check out the `Ampel_mit_IoT`-project.
# Project settings for ARM/Raspberry Pi
The following settings should be made by right-clicking the project and opening the "Properties" menu item.
### Toolchain
1. Project Properties > C/C++ Build > Tool Chain Editor > (DropDown) Cross GCC
### Dialects
1. Project Properties > C/C++ Build > Settings > GCC C++ Compiler > Dialect > ISO C++11
1. Project Properties > C/C++ Build > Settings > GCC C Compiler > Dialect > ISO C99
### Includes
1. Project Properties > C/C++ Build > Settings > GCC C++ Compiler > Includes > `${workspce_loc:}${ProjName}/include`
1. Project Properties > C/C++ Build > Settings > GCC C++ Compiler > Includes > `${workspce_loc:}${ProjName}/src`
1. Project Properties > C/C++ Build > Settings > GCC C Compiler > Includes > `${workspce_loc:}${ProjName}/include`
1. Project Properties > C/C++ Build > Settings > GCC C Compiler > Includes > `${workspce_loc:}${ProjName}/src`
### Libraries
1. Project Properties > C/C++ Build > Settings > GCC C++ Linker > Libraries > Libraries (-l)> `pthread`
1. Project Properties > C/C++ Build > Settings > GCC C++ Linker > Libraries > Libraries (-l)> `bcm2835`
1. Project Properties > C/C++ Build > Settings > GCC C++ Linker > Libraries > Libraries (-l)> `rt`
1. Project Properties > C/C++ Build > Settings > GCC C++ Linker > Libraries > Library Search Path (-L) > `${workspce_loc:}${ProjName}/lib`
| 55.971429 | 182 | 0.718224 | deu_Latn | 0.608708 |
e1157034e8ea8b721e697079da9a7bfed39b9216 | 2,029 | md | Markdown | README.md | iserveradmi/sendgrid-webhook-lambda | 22c6ff28e442cbbad4b22833cd5ec08e467a07bd | [
"MIT"
] | 8 | 2019-04-04T18:30:16.000Z | 2020-03-04T01:03:51.000Z | README.md | iserveradmin/sendgrid-webhook-lambda | 22c6ff28e442cbbad4b22833cd5ec08e467a07bd | [
"MIT"
] | null | null | null | README.md | iserveradmin/sendgrid-webhook-lambda | 22c6ff28e442cbbad4b22833cd5ec08e467a07bd | [
"MIT"
] | 5 | 2020-03-31T07:28:46.000Z | 2021-01-21T20:50:28.000Z | # sendgrid-webhook-lambda
A handler that saves Sendgrid's webhook events into DynamoDB
Requires
- SendGrid
- AWS: API Gateway, Lambda, DynamoDB
## SendGrid
In your SendGrid account, head to https://app.sendgrid.com/settings/mail_settings. Then find Event Notification. This section is where you will tell SendGrid where to POST. (If you don't mind consuming everything, you don't have to pay any attention to those boxes. The lambda handler will handle all of that for you.)
<img width="600px" src="https://i.imgur.com/UpTb5y8.png">
## AWS: API Gateway
Add a POST method endpoint in AWS API Gateway. Make sure you choose only the POST method as well as Request Body Validation. While we won't be validating that the body has fields for every type of event, we'll make sure to cover the basics. You can use this JSON Schema to setup a model.
```
{
"title": "Event",
"type": "object",
"properties": {
"timestamp": {
"type": "integer"
},
"event": {
"type": "string"
},
"email": {
"type": "string"
},
"smtp-id": {
"type": "string"
},
"sg_event_id": {
"type": "string"
},
"sg_message_id": {
"type": "string"
}
},
"required": ["event", "timestamp", "email", "smtp-id", "sg_event_id", "sg_message_id"]
}
```
## AWS: Lambda
Import the files in this repo for the handler and the two imports.
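The files in this repo do the real work; purely as a hedged sketch of the transform such a handler performs (the table name and attribute handling below are assumptions for illustration, not the repo's actual code):

```javascript
// Hypothetical sketch, NOT the handler shipped in this repo: turns the JSON
// array SendGrid POSTs into DynamoDB put-item parameter objects.
const TABLE_NAME = "sendgrid_events"; // assumed table name

function toPutItems(events) {
  return events.map((e) => ({
    TableName: TABLE_NAME,
    Item: {
      sg_event_id: { S: e.sg_event_id },     // partition key
      timestamp: { N: String(e.timestamp) }, // sort key
      event: { S: e.event },
      email: { S: e.email },
      "smtp-id": { S: e["smtp-id"] },
      sg_message_id: { S: e.sg_message_id },
    },
  }));
}

// A Lambda entry point would roughly do:
//   exports.handler = async (event) => {
//     const items = toPutItems(JSON.parse(event.body));
//     // write each item with the AWS SDK's DynamoDB putItem
//     return { statusCode: 200 };
//   };
```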
## AWS: DynamoDB
The only thing that I did to set up a DB was choose `sg_event_id` as the primary key and `timestamp` as the sort key.
### Testing
Once you're all set up, grab yourself an API key in SendGrid, the URL that you made in API Gateway, and test it with cURL.
```
curl --request POST --url https://api.sendgrid.com/v3/user/webhooks/event/test --header 'authorization: your_API_KEY_here' --data '{"url": your_URL_here}' -v
```
Then (if you have CloudWatch Logs turned on, you'll notice logs coming in) you'll see items in Dynamo!
| 34.982759 | 318 | 0.647117 | eng_Latn | 0.950923 |
e1158312dd6a1a3b4b3be3fd67507c799cf9881f | 670 | md | Markdown | docs/release-notes/3.0.0.md | Ingridamilsina/Waffle | e93d80233394d1feae2acfdbf6f4c06abbf8ee86 | [
"MIT"
] | 750 | 2018-08-16T03:38:27.000Z | 2022-03-01T12:54:31.000Z | docs/release-notes/3.0.0.md | Ingridamilsina/Waffle | e93d80233394d1feae2acfdbf6f4c06abbf8ee86 | [
"MIT"
] | 394 | 2018-08-24T10:59:52.000Z | 2022-03-01T22:44:26.000Z | docs/release-notes/3.0.0.md | Ingridamilsina/Waffle | e93d80233394d1feae2acfdbf6f4c06abbf8ee86 | [
"MIT"
] | 149 | 2018-08-29T14:50:31.000Z | 2022-02-28T17:43:22.000Z | ## Changes:
* Introduced better ENS support.
* Updated EthersJS version to ^5.0.0.
* Removed deprecated APIs from the provider.
* Swapped arguments for Fixture.
```ts
function createFixtureLoader(wallets: Wallet[], provider?: MockProvider);
```
* Added automatic recognition of the waffle.json config without a CLI argument.
```json
{
"scripts": {
"build": "waffle"
}
}
```
* Introduced MockProviderOptions
```ts
const provider = new MockProvider({
ganacheOptions: {
accounts: [{balance: '100', secretKey: privateKey}]
}
});
```
* Dropped support for contract interface
* Improved documentation
* Added migration guides for different Waffle versions
| 16.341463 | 73 | 0.710448 | eng_Latn | 0.883513 |
e11597c4f138334305d1373a2876cfc92243a455 | 279 | md | Markdown | sharepoint-farm-solution-with-config/README.md | karamem0/samples | 0110c318390a99822ffc4997d548d065eafec77e | [
"MIT"
] | 2 | 2020-01-03T03:42:44.000Z | 2021-01-22T08:57:05.000Z | sharepoint-farm-solution-with-config/README.md | karamem0/samples | 0110c318390a99822ffc4997d548d065eafec77e | [
"MIT"
] | 5 | 2021-11-11T08:20:38.000Z | 2022-03-03T07:50:42.000Z | sharepoint-farm-solution-with-config/README.md | karamem0/samples | 0110c318390a99822ffc4997d548d065eafec77e | [
"MIT"
] | 1 | 2021-09-09T07:47:36.000Z | 2021-09-09T07:47:36.000Z | # sharepoint-farm-solution-with-config


[Using Web.config in SharePoint 2013 application pages](https://blog.karamem0.dev/entry/2011/08/30/000000)
| 39.857143 | 100 | 0.752688 | yue_Hant | 0.658334 |
e116c97f37cce5e15efbf1ff822bc1f829101a67 | 819 | md | Markdown | products/network/advanced/getting-started.md | jb4free/docs | c1d122454c6c4b53e85b02d03221b0976fddae3e | [
"MIT"
] | null | null | null | products/network/advanced/getting-started.md | jb4free/docs | c1d122454c6c4b53e85b02d03221b0976fddae3e | [
"MIT"
] | 1 | 2021-06-25T17:45:00.000Z | 2021-06-25T17:45:00.000Z | products/network/advanced/getting-started.md | jb4free/docs | c1d122454c6c4b53e85b02d03221b0976fddae3e | [
"MIT"
] | 3 | 2020-10-29T13:28:41.000Z | 2021-06-10T18:56:34.000Z | <!-- <meta>
{
"title":"Advanced Network",
"description":"Using advanced network features",
"tag":["Layer2", "Native VLAN", "BGP"],
"seo-title": "Bare Metal Cloud Network - Packet Developer Docs",
"seo-description": "Using advanced network features",
"og-title": "Overview",
"og-description": "Using advanced network features",
"og-image": "/images/packet-product-docs.png"
}
</meta> -->
Network is the one feature you have to buy from a cloud provider if you plan to do anything useful! As such, we've invested heavily in bringing a developer experience to advanced networking features.
If you have trouble taking advantage of our network, we'd be happy to sit down and review your use case, our topology, and the specifics of each feature. Just schedule a chat via support@packet.com
| 48.176471 | 199 | 0.716728 | eng_Latn | 0.977067 |
e11706f0216057d9a79c5296e193f7cfadfbce3f | 1,828 | md | Markdown | wdk-ddi-src/content/usbscan/ns-usbscan-_channel_info.md | jazzdelightsme/windows-driver-docs-ddi | 793b0c96e117b1658144ba8b3939fdc31a49f6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/usbscan/ns-usbscan-_channel_info.md | jazzdelightsme/windows-driver-docs-ddi | 793b0c96e117b1658144ba8b3939fdc31a49f6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/usbscan/ns-usbscan-_channel_info.md | jazzdelightsme/windows-driver-docs-ddi | 793b0c96e117b1658144ba8b3939fdc31a49f6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:usbscan._CHANNEL_INFO
title: _CHANNEL_INFO (usbscan.h)
description: The CHANNEL_INFO structure is used as a parameter to DeviceIoControl, when the specified I/O control code is IOCTL_GET_CHANNEL_ALIGN_RQST.
old-location: image\channel_info.htm
tech.root: image
ms.assetid: 1f1cb952-9a63-461f-b70f-4cc41b8d88f8
ms.date: 05/03/2018
ms.keywords: "*PCHANNEL_INFO, CHANNEL_INFO, CHANNEL_INFO structure [Imaging Devices], PCHANNEL_INFO, PCHANNEL_INFO structure pointer [Imaging Devices], _CHANNEL_INFO, image.channel_info, stifnc_f0aea91c-5d41-43e5-bb8b-139bfb7c3198.xml, usbscan/CHANNEL_INFO, usbscan/PCHANNEL_INFO"
ms.topic: struct
req.header: usbscan.h
req.include-header: Usbscan.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- usbscan.h
api_name:
- CHANNEL_INFO
product:
- Windows
targetos: Windows
req.typenames: CHANNEL_INFO, *PCHANNEL_INFO
---
# _CHANNEL_INFO structure
## -description
The CHANNEL_INFO structure is used as a parameter to <a href="https://docs.microsoft.com/windows/desktop/api/ioapiset/nf-ioapiset-deviceiocontrol">DeviceIoControl</a>, when the specified I/O control code is <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/content/usbscan/ni-usbscan-ioctl_get_channel_align_rqst">IOCTL_GET_CHANNEL_ALIGN_RQST</a>.
## -struct-fields
### -field EventChannelSize
Maximum packet size for the interrupt transfer pipe.
### -field uReadDataAlignment
Maximum packet size for the bulk IN transfer pipe.
### -field uWriteDataAlignment
Maximum packet size for the bulk OUT transfer pipe.
| 25.746479 | 362 | 0.789387 | yue_Hant | 0.407509 |
e1174dc6e3c27690475e4e279fd6c1a6124a1a21 | 1,409 | md | Markdown | AlchemyInsights/restore-a-deleted-onedrive.md | isabella232/OfficeDocs-AlchemyInsights-pr.lv-LV | adf4768355ef570e9932eecb193599d2930398fb | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-05-19T19:07:08.000Z | 2020-05-19T19:07:08.000Z | AlchemyInsights/restore-a-deleted-onedrive.md | MicrosoftDocs/OfficeDocs-AlchemyInsights-pr.lv-LV | 96359061cbdf14f4c39b06fe9d9d29d8721dfcfb | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-06-02T23:28:33.000Z | 2022-02-09T06:51:14.000Z | AlchemyInsights/restore-a-deleted-onedrive.md | isabella232/OfficeDocs-AlchemyInsights-pr.lv-LV | adf4768355ef570e9932eecb193599d2930398fb | [
"CC-BY-4.0",
"MIT"
] | 3 | 2019-10-11T18:38:02.000Z | 2021-10-09T10:41:24.000Z | ---
title: Restore a deleted OneDrive
ms.author: pebaum
author: pebaum
manager: scotv
ms.date: 04/21/2020
ms.audience: Admin
ms.topic: article
ms.service: o365-administration
ROBOTS: NOINDEX, NOFOLLOW
localization_priority: Normal
ms.collection: Adm_O365
ms.custom: ''
ms.assetid: 5298f192-326b-4820-b007-7e1a1c3c2b13
ms.openlocfilehash: 6310e3e225392a911bd1f5ae18dc3d49c6b50f0a32f603ceb60816657d5b3fc6
ms.sourcegitcommit: b5f7da89a650d2915dc652449623c78be6247175
ms.translationtype: MT
ms.contentlocale: lv-LV
ms.lasthandoff: 08/05/2021
ms.locfileid: "53958916"
---
# <a name="restore-a-deleted-onedrive"></a>Restore a deleted OneDrive
After a user is deleted, you can access the user's OneDrive from the Microsoft 365 admin center for 30 days. Other users can continue to access content shared from the OneDrive for as long as you have set in the OneDrive admin center. (To learn how to set this, see [Set the default file retention for deleted OneDrive users](https://go.microsoft.com/fwlink/?linkid=874267).) After that, the OneDrive is moved to the recycle bin for 93 days, and then it is deleted.
After the initial 30 days, when the deleted user no longer appears in the Microsoft 365 admin center, you can access the user's OneDrive by using PowerShell. For more information, see [Restore a deleted OneDrive](https://go.microsoft.com/fwlink/?linkid=874269).
| 48.586207 | 496 | 0.825408 | lvs_Latn | 0.993068 |
e117895766eac791619d7400701e6e27aedf56d1 | 5,488 | md | Markdown | articles/azure-signalr/signalr-concept-azure-functions.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-signalr/signalr-concept-azure-functions.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-signalr/signalr-concept-azure-functions.md | mtaheij/azure-docs.nl-nl | 6447611648064a057aae926a62fe8b6d854e3ea6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Build real-time apps with Azure Functions & Azure SignalR Service
description: Learn how to develop a real-time serverless web application with Azure SignalR Service by following the example below.
author: sffamily
ms.service: signalr
ms.topic: conceptual
ms.date: 11/13/2019
ms.author: zhshang
ms.openlocfilehash: cbb1fcf320a78f11045bf9627ffcc438af3e388a
ms.sourcegitcommit: 877491bd46921c11dd478bd25fc718ceee2dcc08
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 07/02/2020
ms.locfileid: "74157616"
---
# <a name="build-real-time-apps-with-azure-functions-and-azure-signalr-service"></a>Build real-time apps with Azure Functions and Azure SignalR Service
Because Azure SignalR Service and Azure Functions are both fully managed, highly scalable services that let you focus on building applications instead of managing infrastructure, the two services are often used together to provide real-time communication in a [serverless](https://azure.microsoft.com/solutions/serverless/) environment.
> [!NOTE]
> Learn more about using SignalR Service and Azure Functions together in the interactive tutorial [Enable automatic updates in a web application using Azure Functions and SignalR Service](https://docs.microsoft.com/learn/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr).
## <a name="integrate-real-time-communications-with-azure-services"></a>Integrate real-time communications with Azure services
Azure Functions lets you write code in [several languages](../azure-functions/supported-languages.md), including JavaScript, Python, C#, and Java, that triggers whenever events occur in the cloud. Examples of these events include:
* HTTP and webhook requests
* Periodic timers
* Events from Azure services, such as:
    - Event Grid
    - Event Hubs
    - Service Bus
    - Cosmos DB change feed
    - Storage - blobs and queues
    - Logic Apps connectors, such as Salesforce and SQL Server
Using Azure Functions to integrate these events with Azure SignalR Service gives you the ability to notify thousands of clients whenever events occur.
Some common scenarios for real-time serverless messaging that you can implement with Azure Functions and SignalR Service include:
* Visualize IoT device telemetry on a real-time dashboard or map
* Update data in an application when documents update in Cosmos DB
* Send in-app notifications when new orders are created in Salesforce
## <a name="signalr-service-bindings-for-azure-functions"></a>SignalR Service bindings for Azure Functions
The SignalR Service bindings for Azure Functions allow an Azure Function app to publish messages to clients connected to SignalR Service. Clients can connect to the service using a SignalR client SDK that is available in .NET, JavaScript, and Java, with more languages coming soon.
### <a name="an-example-scenario"></a>An example scenario
One example of using the SignalR Service bindings is to use Azure Functions to integrate with Azure Cosmos DB and SignalR Service to send real-time messages when events occur in a Cosmos DB change feed.

1. A change is made in a Cosmos DB collection
2. The change event is propagated to the Cosmos DB change feed
3. An Azure Function is triggered by the change feed using the Cosmos DB trigger
4. The SignalR Service output binding publishes a message to SignalR Service
5. SignalR Service publishes the message to all connected clients
### <a name="authentication-and-users"></a>Authentication and users
SignalR Service allows you to broadcast messages to all clients, or only to a subset of clients, such as those belonging to a single user. The SignalR Service bindings for Azure Functions can be combined with App Service Authentication to authenticate users with providers such as Azure Active Directory, Facebook, and Twitter. You can then send messages directly to these authenticated users.
## <a name="next-steps"></a>Next steps
In this article, you got an overview of how to use Azure Functions with SignalR Service to enable a wide variety of serverless real-time messaging scenarios.
For full details on how to use Azure Functions and SignalR Service together, see the following resources:
* [Azure Functions development and configuration with SignalR Service](signalr-concept-serverless-development-config.md)
* [Enable automatic updates in a web application using Azure Functions and SignalR Service](https://docs.microsoft.com/learn/modules/automatic-update-of-a-webapp-using-azure-functions-and-signalr)
Follow one of these quickstarts to learn more.
* [Azure SignalR Service serverless quickstart - C#](signalr-quickstart-azure-functions-csharp.md)
* [Azure SignalR Service serverless quickstart - JavaScript](signalr-quickstart-azure-functions-javascript.md)
| 70.358974 | 444 | 0.816691 | nld_Latn | 0.998578 |
e117e1652d8f8fdd36e75820f377c987660c3935 | 515 | md | Markdown | README.md | nwtgck/trans-server-http-knocking-docker-compose | eac5c295afd419628ff20438c2333bed711830de | [
"MIT"
] | null | null | null | README.md | nwtgck/trans-server-http-knocking-docker-compose | eac5c295afd419628ff20438c2333bed711830de | [
"MIT"
] | null | null | null | README.md | nwtgck/trans-server-http-knocking-docker-compose | eac5c295afd419628ff20438c2333bed711830de | [
"MIT"
] | null | null | null | # trans-server-http-knocking-docker-compose
[Trans server](https://github.com/nwtgck/trans-server-akka) on [http-knocking](https://github.com/nwtgck/http-knocking)
## Run the server
```bash
cd <this repo>
docker-compose up
```
## Knocking procedure
Visit the following order to open the server.
1. Access to http://localhost:8080/alpha
1. Access to http://localhost:8080/foxtrot
1. Access to http://localhost:8080/lima
Then, you will have Trans server.
(You can close the site by the reverse order.)
| 23.409091 | 119 | 0.733981 | eng_Latn | 0.589902 |
e1182211f47dacb74a73c5fd9caa1df94ba2d3e9 | 154 | markdown | Markdown | _projects/Ghaya_Bin_Mesmar.markdown | SpaceTypeContinuum/generative-typography-gallery | f6acab2fed47d98ccf79c96a4427d8703d1d7968 | [
"CC-BY-4.0"
] | 3 | 2020-12-06T22:09:12.000Z | 2021-06-21T21:21:35.000Z | _projects/Ghaya_Bin_Mesmar.markdown | SpaceTypeContinuum/generative-typography-gallery | f6acab2fed47d98ccf79c96a4427d8703d1d7968 | [
"CC-BY-4.0"
] | null | null | null | _projects/Ghaya_Bin_Mesmar.markdown | SpaceTypeContinuum/generative-typography-gallery | f6acab2fed47d98ccf79c96a4427d8703d1d7968 | [
"CC-BY-4.0"
] | 1 | 2021-11-03T01:54:02.000Z | 2021-11-03T01:54:02.000Z | ---
layout: project
title: Ghaya Bin Mesmar
teaser: assets/img/Ghaya_Bin_Mesmar/00.png
link: https://editor.p5js.org/GhayaBinMesmar/present/Gwg_-qqtz
---
| 22 | 62 | 0.772727 | kor_Hang | 0.155691 |
e11b063ec7f71504216c83d310fb5f1760a00828 | 11,054 | md | Markdown | docs/linux/sql-server-linux-performance-best-practices.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/linux/sql-server-linux-performance-best-practices.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/linux/sql-server-linux-performance-best-practices.md | in4matica/sql-docs.de-de | b5a6c26b66f347686c4943dc8307b3b1deedbe7e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Performance best practices for SQL Server on Linux
description: This article provides performance best practices and guidelines for running SQL Server on Linux.
author: tejasaks
ms.author: tejasaks
ms.reviewer: vanto
ms.date: 09/14/2017
ms.topic: conceptual
ms.prod: sql
ms.technology: linux
ms.openlocfilehash: 548ab73e97b9bccb6a64a95b7294d3d5ca63493d
ms.sourcegitcommit: ff1bd69a8335ad656b220e78acb37dbef86bc78a
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 03/05/2020
ms.locfileid: "78339800"
---
# <a name="performance-best-practices-and-configuration-guidelines-for-sql-server-on-linux"></a>Performance best practices and configuration guidelines for SQL Server on Linux
[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md-linuxonly](../includes/appliesto-ss-xxxx-xxxx-xxx-md-linuxonly.md)]
This article provides best practices and recommendations to maximize performance for database applications that connect to SQL Server on Linux. These recommendations are specific to running on the Linux platform. All normal SQL Server recommendations, such as index design, still apply.
The following guidelines contain recommendations for configuring both SQL Server and the Linux operating system.
## <a name="sql-server-configuration"></a>SQL Server configuration
We recommend performing the following configuration tasks after you install SQL Server on Linux to achieve the best performance for your application.
### <a name="best-practices"></a>Best practices
- **Use PROCESS AFFINITY for nodes and/or CPUs**

   We recommend using `ALTER SERVER CONFIGURATION` to set `PROCESS AFFINITY` for all the **NUMANODE**s and/or CPUs you are using for SQL Server (typically all nodes and CPUs) on Linux. Processor affinity helps maintain efficient Linux and SQL scheduling behavior. Using the **NUMANODE** option is the simplest method. Note that you should use **PROCESS AFFINITY** even if you have only a single NUMA node on your machine. See the [ALTER SERVER CONFIGURATION](../t-sql/statements/alter-server-configuration-transact-sql.md) documentation for more details on how to set **PROCESS AFFINITY**.
- **Configure multiple tempdb data files**

   Because SQL Server on Linux setup does not offer an option to configure multiple tempdb files, we recommend that you create multiple tempdb data files after setup. For more information, see the article [Recommendations to reduce allocation contention in SQL Server tempdb database](https://support.microsoft.com/help/2154845/recommendations-to-reduce-allocation-contention-in-sql-server-tempdb-d).
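As a hedged illustration of the two practices above, the T-SQL could look like the following. The node range, file names, paths, and sizes are placeholders for illustration only, not recommendations for any particular system:

```sql
-- Affinitize SQL Server to all NUMA nodes; this example assumes a
-- two-node machine. Adjust the range to match your hardware.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 0 TO 1;

-- Add extra tempdb data files after setup. One file per core, up to
-- eight, is a common starting point; names and sizes are illustrative.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = '/var/opt/mssql/data/tempdb2.ndf', SIZE = 8MB, FILEGROWTH = 64MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = '/var/opt/mssql/data/tempdb3.ndf', SIZE = 8MB, FILEGROWTH = 64MB);
```

Keep all tempdb files the same size so the proportional-fill algorithm spreads allocations evenly across them.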
### <a name="advanced-configuration"></a>Advanced configuration
The following recommendations are optional configuration settings that you may choose to perform after installation of SQL Server on Linux. These choices are based on the requirements of your workload and the configuration of your Linux operating system.
- **Set a memory limit with mssql-conf**

   In order to ensure there is enough free physical memory for the Linux operating system, the SQL Server process uses only 80% of the physical RAM by default. For systems with a large amount of RAM, 20% can be a significant number; for example, on a system with 1 TB of RAM, the default setting leaves roughly 200 GB of RAM unused. In this situation, you might want to configure the memory limit to a higher value. See the documentation on the **mssql-conf** tool and the [memory.memorylimitmb](sql-server-linux-configure-mssql-conf.md#memorylimit) setting, which controls the memory visible to SQL Server (in units of MB).

   When changing this setting, be careful not to set the value too high. If you do not leave enough free memory, you could experience problems with the Linux operating system and other Linux applications.
## <a name="linux-os-configuration"></a>Linux OS configuration
Consider using the following Linux operating system configuration settings to get the best performance for a SQL Server installation.
### <a name="kernel-settings-for-high-performance"></a>Kernel settings for high performance
These are the recommended Linux operating system settings for high performance and throughput for a SQL Server installation. See your Linux operating system documentation for the process to configure these settings.
> [!Note]
> For Red Hat Enterprise Linux (RHEL) users, the [tuned](https://tuned-project.org) throughput-performance profile configures these settings automatically (except for C-States). Starting with RHEL 8.0, a built-in mssql tuned profile (/usr/lib/tuned), developed in collaboration with Red Hat, provides finer-grained Linux performance optimizations for SQL Server workloads. This profile includes the RHEL throughput-performance profile. The definitions of this profile are shown below for reference against other Linux distributions and RHEL releases that do not ship with it.
| Einstellung | value | Weitere Informationen |
|---|---|---|
| CPU frequency governor (Kontrolle der CPU-Häufigkeit) | Leistung | Dokumentation zum Befehl **cpupower** |
| ENERGY_PERF_BIAS | Leistung | Dokumentation zum Befehl **x86_energy_perf_policy** |
| min_perf_pct | 100 | Dokumentation zum Status „intel p“ |
| C-States (C-Status) | C1 only (Nur C1) | In der Linux- oder Systemdokumentation erhalten Sie Informationen dazu, wie Sie sicherstellen können, dass die Einstellung „C-States“ (C-Status) auf „C1 only“ (Nur C1) festgelegt ist. |
In der folgenden Tabelle finden Sie Empfehlungen für die Datenträgereinstellungen:
| Einstellung | value | Weitere Informationen |
|---|---|---|
| disk readahead | 4096 | Dokumentation zum Befehl **blockdev** |
| sysctl-Einstellungen | kernel.sched_min_granularity_ns = 10.000.000<br/>kernel.sched_wakeup_granularity_ns = 15.000.000<br/>vm.dirty_ratio = 40<br/>vm.dirty_background_ratio = 10<br/>vm.swappiness = 10 | Dokumentation zum Befehl **sysctl** |
### <a name="kernel-setting-auto-numa-balancing-for-multi-node-numa-systems"></a>Kernel setting auto numa balancing for multi-node NUMA systems
If you install SQL Server on a multi-node **NUMA** system, the following **kernel.numa_balancing** kernel setting is enabled by default. To allow SQL Server to operate at maximum efficiency on a multi-node **NUMA** system, disable auto numa balancing:
```bash
sysctl -w kernel.numa_balancing=0
```
### <a name="kernel-settings-for-virtual-address-space"></a>Kernel settings for virtual address space
The default setting of **vm.max_map_count** (65536) may not be high enough for a SQL Server installation. Change this value, which is an upper limit, to 262144.
```bash
sysctl -w vm.max_map_count=262144
```
### <a name="proposed-linux-settings-using-a-tuned-mssql-profile"></a>Proposed Linux settings using a tuned mssql profile
```bash
#
# A tuned configuration for SQL Server on Linux
#
[main]
summary=Optimize for Microsoft SQL Server
include=throughput-performance
[cpu]
force_latency=5
[sysctl]
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.transparent_hugepages=always
# For multi-instance SQL Server deployments under memory pressure, use
# vm.transparent_hugepages=madvise
vm.max_map_count=1600000
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.numa_balancing=0
kernel.sched_latency_ns = 60000000
kernel.sched_migration_cost_ns = 500000
kernel.sched_min_granularity_ns = 15000000
kernel.sched_wakeup_granularity_ns = 2000000
```
To enable this tuned profile, save these definitions in a file named **tuned.conf** under the /usr/lib/tuned/mssql folder, and enable the profile using the following commands.
```bash
chmod +x /usr/lib/tuned/mssql/tuned.conf
tuned-adm profile mssql
```
Verify that the profile has been activated with one of the following commands:
```bash
tuned-adm active
```
or
```bash
tuned-adm list
```
### <a name="disable-last-accessed-datetime-on-file-systems-for-sql-server-data-and-log-files"></a>Disable last accessed date/time on file systems for SQL Server data and log files
Use the **noatime** attribute with any file system that is used to store SQL Server data and log files. Refer to your Linux documentation on how to set this attribute.
### <a name="leave-transparent-huge-pages-thp-enabled"></a>Leave Transparent Huge Pages (THP) enabled
Most Linux installations should have this option enabled by default. We recommend leaving it enabled for the most consistent performance. However, in SQL Server deployments with high memory paging activity across multiple instances (for example, SQL Server running alongside other memory-demanding applications on the server), consider testing the performance of your applications after running the following command:
```bash
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
```
Alternativ können Sie das optimierte MSSQL-Profil mit der folgenden Zeile versehen,
```bash
vm.transparent_hugepages=madvise
```
and then re-enable the profile after the change.
```bash
tuned-adm off
tuned-adm profile mssql
```
### <a name="swapfile"></a>Swap file
Make sure you have a properly configured swap file to avoid any out-of-memory issues. Consult your Linux documentation for how to create and properly size a swap file.
### <a name="virtual-machines-and-dynamic-memory"></a>Virtual machines and dynamic memory
If you are running SQL Server on Linux in a virtual machine, make sure to select options that fix the amount of memory reserved for the virtual machine. Do not use features such as Hyper-V Dynamic Memory.
## <a name="next-steps"></a>Next steps
For more information about SQL Server features that improve performance, see [Get started with performance features](sql-server-linux-performance-get-started.md).
For more information about SQL Server on Linux, see the [Overview of SQL Server on Linux](sql-server-linux-overview.md).
| 61.071823 | 719 | 0.804505 | deu_Latn | 0.976148 |
e11ba56f9a63c77a706b56c003a2e56d84220af5 | 247 | md | Markdown | documentary/index.md | kumarikandam/control-style | 9d97b5e2055a2a6892d8a1d3db270e7550b5cba9 | [
"MIT"
] | 1 | 2019-09-16T21:19:55.000Z | 2019-09-16T21:19:55.000Z | documentary/index.md | kumarikandam/control-style | 9d97b5e2055a2a6892d8a1d3db270e7550b5cba9 | [
"MIT"
] | 1 | 2019-11-16T07:01:12.000Z | 2019-11-16T07:01:12.000Z | documentary/index.md | kumarikandam/control-style | 9d97b5e2055a2a6892d8a1d3db270e7550b5cba9 | [
"MIT"
] | null | null | null | # @lemuria/control-style
%NPM: @lemuria/control-style%
`@lemuria/control-style` extracts CSS properties from a component's properties map and returns the composed style.
```sh
yarn add @lemuria/control-style
```
## Table Of Contents
%TOC%
%~% | 16.466667 | 113 | 0.732794 | eng_Latn | 0.621995 |
e11cbdd404779d928630b26c9967c529c900617e | 639 | md | Markdown | README.md | a24ma/msg2eml | dc5d23339cd231991918fc6956a94a30308b72d5 | [
"MIT"
] | 1 | 2020-10-11T14:21:30.000Z | 2020-10-11T14:21:30.000Z | README.md | a24ma/msg2eml | dc5d23339cd231991918fc6956a94a30308b72d5 | [
"MIT"
] | null | null | null | README.md | a24ma/msg2eml | dc5d23339cd231991918fc6956a94a30308b72d5 | [
"MIT"
] | null | null | null | # msg2eml
To keep files compatible across multiple environments, this tool extracts only the minimum information needed for exchanging messages from Outlook .msg files and converts it into .eml files.
* Requires a Microsoft Outlook environment.
* Supports Japan Standard Time (JST) only.
* File names are generated automatically from the in-directory ordinal, date, sender, and subject.
    * Edit the source code if you need to change the naming scheme.
    * (A formatter may be implemented later, time permitting.)
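The actual fields live in the source, but a minimal, illustrative sketch of this kind of filename generation (all names below are assumptions, not the tool's real code) could look like:

```python
import re
from datetime import datetime

def make_eml_name(index: int, date: datetime, sender: str, subject: str) -> str:
    """Build a filesystem-safe .eml name from message metadata."""
    def sanitize(text: str, limit: int = 40) -> str:
        # Replace characters that are invalid in Windows file names.
        return re.sub(r'[\\/:*?"<>|]', "_", text)[:limit].strip()

    stamp = date.strftime("%Y%m%d_%H%M")  # JST is assumed upstream
    return f"{index:03d}_{stamp}_{sanitize(sender)}_{sanitize(subject)}.eml"
```

For example, `make_eml_name(1, some_date, "alice@example.com", "Re: plan?")` yields a name like `001_<stamp>_alice@example.com_Re__plan_.eml`, with unsafe characters replaced.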
# Installation
```
pip install pywin32 pathlib
pip install --upgrade PySimpleGUIQt
```
# Usage
Run the tool in one of the following ways.
* Run `python main.py` (starts in GUI mode).
    * Drag and drop *.msg files onto the white area that appears.
* Run `python main.py <email.msg>` (runs in CUI mode).
    * Only one file can be processed at a time.
* (On Windows) Drag and drop .msg files onto msg2eml.bat.
    * Multiple files can be processed at once.
    * Note that paths containing spaces will cause an error.
Note that Outlook will prompt you to approve access to its data when the tool runs.
| 19.363636 | 52 | 0.752739 | yue_Hant | 0.619184 |
e11d3826ccfbc7c8c755996199b62d7200f41547 | 181 | md | Markdown | readme.md | DesignPond/daemon | 5796f7d0a914b6e6bd339582ab76d7cb8306af33 | [
"MIT"
] | null | null | null | readme.md | DesignPond/daemon | 5796f7d0a914b6e6bd339582ab76d7cb8306af33 | [
"MIT"
] | null | null | null | readme.md | DesignPond/daemon | 5796f7d0a914b6e6bd339582ab76d7cb8306af33 | [
"MIT"
] | null | null | null | ## Site de documentation
Auteur: @designpond 2015
### License
The Laravel framework is open-sourced software licensed under the [MIT license](http://opensource.org/licenses/MIT)
| 22.625 | 115 | 0.773481 | eng_Latn | 0.902342 |
e11e51885c43e0f97b3285d6c87281098e1e3222 | 296 | md | Markdown | README.md | Stanleynista/Dataviz-for-BI | 094528ba870831bdbbfafd9e1e79053d66679893 | [
"MIT"
] | null | null | null | README.md | Stanleynista/Dataviz-for-BI | 094528ba870831bdbbfafd9e1e79053d66679893 | [
"MIT"
] | null | null | null | README.md | Stanleynista/Dataviz-for-BI | 094528ba870831bdbbfafd9e1e79053d66679893 | [
"MIT"
] | null | null | null | 
Status: Developing ⚠️
<h3> I'm going to use this repo for data visualization with Power BI and Tableau, for training and presentations </h3>
<h4> Technologies Used: </h4>
* Power BI
* Tableau
| 22.769231 | 113 | 0.756757 | eng_Latn | 0.380767 |
e11e9350428f3da6be5ed263cfb1dbdb949ca4ef | 4,212 | md | Markdown | _drafts/2021-04-06-watch-all-four-hunger-games-movies-free-on-our-favorite-netflix-rival.md | sergioafanou/smart-cv | dc522a19e38f5b55f9e4cda3e695a1a8dc087b4d | [
"MIT"
] | null | null | null | _drafts/2021-04-06-watch-all-four-hunger-games-movies-free-on-our-favorite-netflix-rival.md | sergioafanou/smart-cv | dc522a19e38f5b55f9e4cda3e695a1a8dc087b4d | [
"MIT"
] | 5 | 2020-01-09T10:46:58.000Z | 2021-11-03T15:13:42.000Z | _drafts/2021-04-06-watch-all-four-hunger-games-movies-free-on-our-favorite-netflix-rival.md | sergioafanou/smart-cv | 516b8ede13f74951c64f7390fa08c4f7a2344dd6 | [
"MIT"
] | null | null | null | ---
title : "Watch all four ‘Hunger Games’ movies free on our favorite Netflix rival"
layout: post
tags: tutorial labnol
post_inspiration: https://bgr.com/2021/04/01/free-movies-tubi-hunger-games-streaming/
image: "https://sergio.afanou.com/assets/images/image-midres-30.jpg"
---
<center><a href="https://bgr.com/2021/04/01/free-movies-tubi-hunger-games-streaming/" class="bgr-rss-featured-image bgr-rss-test-class"><img loading="lazy" width="610" height="358" src="https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?quality=70&strip=all&w=610" class="attachment-feed_normal size-feed_normal wp-post-image" alt="Free movies" loading="lazy" srcset="https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg 919w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=150,88 150w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=300,176 300w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=768,451 768w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=610,358 610w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=664,390 664w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=400,234 400w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=782,459 782w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=827,486 827w, https://bgr.com/wp-content/uploads/2021/04/The-Hunger-Games.jpg?resize=800,470 800w" sizes="(max-width: 610px) 100vw, 610px" title="Free movies" /></a></center><p>Earlier today, <em>Bloomberg</em> reported that Comcast subsidiary NBCUniversal is considering pulling its movies from HBO Max and Netflix to make them all exclusive to Peacock in the coming months. Now that every major brand has its own streaming platform, it is going to be harder and harder to find any cross-pollination between services, which is why it's always worth pointing out when a major film franchise pops up on one of them.</p>
<p>On Thursday afternoon, Fox sent out an email alert to let us know that The Hunger Games and all three of its sequels will be streaming free on the company's recently-acquired streaming service Tubi for the entire month of April and into May. If you want to watch Jennifer Lawrence fight for survival in a post-apocalyptic society for eight to nine hours, all you have to do is visit the site and start streaming -- no account required.</p>
<p><a href="https://bgr.com/2021/04/01/free-movies-tubi-hunger-games-streaming/" class="more-link"><em>Continue reading...</em></a></p>
<p><strong>Today's Top Deals</strong></p>
<ol>
<li><a href="https://bgr.com/2021/04/01/drone-with-camera-on-amazon-prime-coupon-lowest-price/?utm_source=rss&utm_campaign=topdeals">Amazon coupon gets you a 2K camera drone that folds up as small as a smartphone for $60</a></li>
<li><a href="https://bgr.com/2021/04/01/amazon-echo-deals-lowest-price-echo-flex-alexa-speaker/?utm_source=rss&utm_campaign=topdeals">Crazy Amazon sale gets you an Alexa smart speaker for just $17</a></li>
<li><a href="https://bgr.com/2021/04/01/viral-tiktok-reveals-a-23-amazon-find-that-will-blow-your-mind/?utm_source=rss&utm_campaign=topdeals">Viral TikTok reveals a $23 Amazon find that will blow your mind</a></li>
</ol>
<p><strong>Trending Right Now:</strong></p>
<ol>
<li><a href="https://bgr.com/2021/04/01/new-stimulus-check-coming-tax-refund-for-unemployment-benefits/">New stimulus check for the unemployed might be coming in May</a></li>
<li><a href="https://bgr.com/2021/04/01/mars-rock-perseverance-mystery/">Perseverance found a weird rock on Mars, and scientists can’t identify it</a></li>
<li><a href="https://bgr.com/2021/04/01/gmail-account-trick-to-figure-out-whos-spamming-you-selling-your-data/">If your Gmail account is flooded with spam, this trick is for you</a></li>
</ol>
<p><a href="https://bgr.com/2021/04/01/free-movies-tubi-hunger-games-streaming/">Watch all four ‘Hunger Games’ movies free on our favorite Netflix rival</a> originally appeared on <a href="http://bgr.com">BGR.com</a> on Thu, 1 Apr 2021 at 21:24:08 EDT. Please see our terms for use of feeds.</p> | 156 | 1,742 | 0.75831 | eng_Latn | 0.471267 |
e11ec73292533d566be89644f6c380a6de693a5b | 2,760 | md | Markdown | windows-driver-docs-pr/debugger/-wmitrace-logsave.md | Ryooooooga/windows-driver-docs.ja-jp | c7526f4e7d66ff01ae965b5670d19fd4be158f04 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/debugger/-wmitrace-logsave.md | Ryooooooga/windows-driver-docs.ja-jp | c7526f4e7d66ff01ae965b5670d19fd4be158f04 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | windows-driver-docs-pr/debugger/-wmitrace-logsave.md | Ryooooooga/windows-driver-docs.ja-jp | c7526f4e7d66ff01ae965b5670d19fd4be158f04 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: wmitrace.logsave
description: The wmitrace.logsave extension writes the current contents of a trace session's trace buffers to a file.
ms.assetid: 713fea09-d405-4142-b2e8-29c813a4c3b6
keywords:
- debugging wmitrace.logsave Windows
ms.date: 05/23/2017
topic_type:
- apiref
api_name:
- wmitrace.logsave
api_type:
- NA
ms.localizationpriority: medium
ms.openlocfilehash: 161bd2db2b65095be689769cfa5e2733024b2653
ms.sourcegitcommit: 0cc5051945559a242d941a6f2799d161d8eba2a7
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/23/2019
ms.locfileid: "63327358"
---
# <a name="wmitracelogsave"></a>!wmitrace.logsave
The **!wmitrace.logsave** extension writes the current contents of a trace session's trace buffers to a file.
```dbgcmd
!wmitrace.logsave {LoggerID|LoggerName} Filename
```
## <a name="span-idddkwmitracelogsavedbgspanspan-idddkwmitracelogsavedbgspanparameters"></a><span id="ddk__wmitrace_logsave_dbg"></span><span id="DDK__WMITRACE_LOGSAVE_DBG"></span>パラメーター
<span id="_______LoggerID______"></span><span id="_______loggerid______"></span><span id="_______LOGGERID______"></span> *LoggerID*
トレース セッションを指定します。 *LoggerID*は序数をコンピューター上の各トレース セッションに割り当てられます。
<span id="_______LoggerName______"></span><span id="_______loggername______"></span><span id="_______LOGGERNAME______"></span> *LoggerName*
トレース セッションを指定します。 *ロガー*テキスト名を指定は、トレース セッションが開始されたとき。
<span id="_______Filename______"></span><span id="_______filename______"></span><span id="_______FILENAME______"></span> *ファイル名*
パス (省略可能) と出力ファイルのファイル名を指定します。
### <a name="span-iddllspanspan-iddllspandll"></a><span id="DLL"></span><span id="dll"></span>DLL
This extension is exported by Wmitrace.dll.
This extension is available in Windows 2000 and later versions of Windows. To use this extension with Windows 2000, you must first copy the Wmitrace.dll file from the winxp subdirectory of the Debugging Tools for Windows installation directory to the w2kfre subdirectory.
### <a name="span-idadditionalinformationspanspan-idadditionalinformationspanspan-idadditionalinformationspanadditional-information"></a><span id="Additional_Information"></span><span id="additional_information"></span><span id="ADDITIONAL_INFORMATION"></span>追加情報
For a conceptual overview of event tracing, see the Microsoft Windows SDK. For more information about trace logs, see "Trace Logs" in the Windows Driver Kit (WDK).
<a name="remarks"></a>Remarks
-------
This extension displays only the traces that are in memory at the time it runs. It does not display trace messages that have already been flushed from the buffers and delivered to an event trace log file or to a trace consumer.
Trace session buffers store trace messages until they are flushed to a log file or to a trace consumer for real-time display. This extension saves the contents of the buffers that are in physical memory to the specified file.
The output is written in binary format. Typically, these files use the .etl (event trace log) file name extension.
If you use a Tracelog command to start a circular buffer trace session (-buffering), you can use this extension to save the current contents of the buffer.
To find the Logger ID of a trace session, use the [**!wmitrace.strdump**](-wmitrace-strdump.md) extension, or use the Tracelog command **tracelog -l** to list the trace sessions and their basic properties, including the Logger ID.
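For example, first list the sessions and then save the buffers of one of them. The Logger ID and output path below are illustrative, not real debugger output:

```dbgcmd
kd> !wmitrace.strdump
(lists the active trace sessions and their Logger IDs)

kd> !wmitrace.logsave 2 c:\traces\session2.etl
```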
| 37.297297 | 264 | 0.801812 | yue_Hant | 0.906488 |