hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
21cd1002fe47bd75bff2b89ff000e409d47a8630 | 10,362 | md | Markdown | wdk-ddi-src/content/ndis/nc-ndis-miniport_add_device.md | yulyugin/windows-driver-docs-ddi | 7b0e5dc2ca43e3a67c0155f8f65c7a8f607e5ba5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/ndis/nc-ndis-miniport_add_device.md | yulyugin/windows-driver-docs-ddi | 7b0e5dc2ca43e3a67c0155f8f65c7a8f607e5ba5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/ndis/nc-ndis-miniport_add_device.md | yulyugin/windows-driver-docs-ddi | 7b0e5dc2ca43e3a67c0155f8f65c7a8f607e5ba5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NC:ndis.MINIPORT_ADD_DEVICE
title: MINIPORT_ADD_DEVICE (ndis.h)
description: The MiniportAddDevice function enables a miniport driver to establish a context area for an added device.
old-location: netvista\miniportadddevice.htm
tech.root: netvista
ms.assetid: 50e04b5a-e430-484c-aabb-cc7b9ecb53b0
ms.date: 05/02/2018
keywords: ["MINIPORT_ADD_DEVICE callback function"]
ms.keywords: MINIPORT_ADD_DEVICE, MINIPORT_ADD_DEVICE callback, MiniportAddDevice, MiniportAddDevice callback function [Network Drivers Starting with Windows Vista], ndis/MiniportAddDevice, ndis_msix_ref_60df66e2-2e17-4bd1-8793-8310326d883d.xml, netvista.miniportadddevice
req.header: ndis.h
req.include-header: Ndis.h
req.target-type: Windows
req.target-min-winverclnt: Supported in NDIS 6.0 and later.
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql: PASSIVE_LEVEL
targetos: Windows
req.typenames:
f1_keywords:
- MINIPORT_ADD_DEVICE
- ndis/MINIPORT_ADD_DEVICE
topic_type:
- APIRef
- kbSyntax
api_type:
- UserDefined
api_location:
- Ndis.h
api_name:
- MiniportAddDevice
---
# MINIPORT_ADD_DEVICE callback function
## -description
The
<i>MiniportAddDevice</i> function enables a miniport driver to establish a context area
for an added device.
<div class="alert"><b>Note</b> You must declare the function by using the <b>MINIPORT_ADD_DEVICE</b> type. For more
information, see the following Examples section.</div><div> </div>
## -parameters
### -param NdisMiniportHandle
[in]
An NDIS handle that identifies the miniport adapter that the Plug and Play (PnP) manager is
adding. NDIS also passes this handle to the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_initialize">
MiniportInitializeEx</a> function.
### -param MiniportDriverContext
[in]
A handle to a driver-allocated context area where the driver maintains state and configuration
information. The miniport driver passed this context area to the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nf-ndis-ndismregisterminiportdriver">
NdisMRegisterMiniportDriver</a> function.
## -returns
<i>MiniportAddDevice</i> returns one of the following values:
<table>
<tr>
<th>Return code</th>
<th>Description</th>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>NDIS_STATUS_SUCCESS</b></dt>
</dl>
</td>
<td width="60%">
The miniport driver successfully allocated the resources that it requires to add the
device.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>NDIS_STATUS_RESOURCES</b></dt>
</dl>
</td>
<td width="60%">
The miniport driver failed to allocate the required resources.
</td>
</tr>
<tr>
<td width="40%">
<dl>
<dt><b>NDIS_STATUS_FAILURE</b></dt>
</dl>
</td>
<td width="60%">
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_add_device">MiniportAddDevice</a> failed for reasons other than insufficient
resources.
</td>
</tr>
</table>
If
<i>MiniportAddDevice</i> fails, NDIS will not call the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_initialize">MiniportInitializeEx</a> function
to initialize the miniport adapter.
## -remarks
The
<i>MiniportAddDevice</i> function is an optional function. Miniport drivers that
support MSI-X should specify an entry point for this function in the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_miniport_pnp_characteristics">
NDIS_MINIPORT_PNP_CHARACTERISTICS</a> structure.
<i>MiniportAddDevice</i> can allocate a context area for handling
<a href="https://docs.microsoft.com/windows-hardware/drivers/kernel/irp-mn-filter-resource-requirements">
IRP_MN_FILTER_RESOURCE_REQUIREMENTS</a> I/O request packets (IRPs) that the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_pnp_irp">
MiniportFilterResourceRequirements</a> function handles. Miniport drivers specify the context area by
initializing an
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_miniport_add_device_registration_attributes">
NDIS_MINIPORT_ADD_DEVICE_REGISTRATION_ATTRIBUTES</a> structure and then calling the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nf-ndis-ndismsetminiportattributes">
NdisMSetMiniportAttributes</a> function. NDIS later provides this context handle to the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_remove_device">MiniportRemoveDevice</a>,
<i>
MiniportFilterResourceRequirements</i>,
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_pnp_irp">MiniportStartDevice</a>, and
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_initialize">MiniportInitializeEx</a> functions.
For
<i>MiniportInitializeEx</i>, the context handle is passed in the
<b>MiniportAddDeviceContext</b> member of the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_miniport_init_parameters">
NDIS_MINIPORT_INIT_PARAMETERS</a> structure that the
<i>MiniportInitParameters</i> parameter points to.
If the miniport driver fails the
<i>MiniportAddDevice</i> call after it allocated the context area, the driver must
free the context area before returning from
<i>MiniportAddDevice</i>.
Miniport drivers should use a different context area for the
<b>MiniportAddDeviceContext</b> member of the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_miniport_add_device_registration_attributes">NDIS_MINIPORT_ADD_DEVICE_REGISTRATION_ATTRIBUTES</a> structure
and the
<b>MiniportAdapterContext</b> member of the <a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_miniport_init_parameters">NDIS_MINIPORT_INIT_PARAMETERS</a> structure. Separate context
areas will ensure that information in the context area is not reinitialized, which might occur in the
<i>MiniportInitializeEx</i> function if the miniport adapter is halted and
reinitialized.
When the PnP manager requests that NDIS remove the device, NDIS calls the
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_remove_device">MiniportRemoveDevice</a> function to
undo the operations that
<i>MiniportAddDevice</i> performed.
NDIS calls
<i>MiniportAddDevice</i> at IRQL = PASSIVE_LEVEL.
<h3><a id="Examples"></a><a id="examples"></a><a id="EXAMPLES"></a>Examples</h3>
To define a <i>MiniportAddDevice</i> function, you must first provide a function declaration that identifies the type of function you're defining. Windows provides a set of function types for drivers. Declaring a function using the function types helps <a href="https://docs.microsoft.com/windows-hardware/drivers/devtest/code-analysis-for-drivers">Code Analysis for Drivers</a>, <a href="https://docs.microsoft.com/windows-hardware/drivers/devtest/static-driver-verifier">Static Driver Verifier</a> (SDV), and other verification tools find errors, and it's a requirement for writing drivers for the Windows operating system.
For example, to define a <i>MiniportAddDevice</i> function that is named "MyAddDevice", use the <b>MINIPORT_ADD_DEVICE</b> type as shown in this code example:
```
MINIPORT_ADD_DEVICE MyAddDevice;
```
Then, implement your function as follows:
```
_Use_decl_annotations_
NDIS_STATUS
MyAddDevice(
NDIS_HANDLE NdisMiniportHandle,
NDIS_HANDLE MiniportDriverContext
)
{...}
```
The <b>MINIPORT_ADD_DEVICE</b> function type is defined in the Ndis.h header file. To more accurately identify errors when you run the code analysis tools, be sure to add the _Use_decl_annotations_ annotation to your function definition. The _Use_decl_annotations_ annotation ensures that the annotations that are applied to the <b>MINIPORT_ADD_DEVICE</b> function type in the header file are used. For more information about the requirements for function declarations, see <a href="https://docs.microsoft.com/windows-hardware/drivers/devtest/declaring-functions-by-using-function-role-types-for-ndis-drivers">Declaring Functions by Using Function Role Types for NDIS Drivers</a>.
For information about _Use_decl_annotations_, see <a href="https://go.microsoft.com/fwlink/p/?linkid=286697">Annotating Function Behavior</a>.
## -see-also
<a href="https://docs.microsoft.com/windows-hardware/drivers/kernel/irp-mn-filter-resource-requirements">
IRP_MN_FILTER_RESOURCE_REQUIREMENTS</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_pnp_irp">
MiniportFilterResourceRequirements</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_initialize">MiniportInitializeEx</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_remove_device">MiniportRemoveDevice</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nc-ndis-miniport_pnp_irp">MiniportStartDevice</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_miniport_add_device_registration_attributes">
NDIS_MINIPORT_ADD_DEVICE_REGISTRATION_ATTRIBUTES</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_miniport_init_parameters">NDIS_MINIPORT_INIT_PARAMETERS</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/ns-ndis-_ndis_miniport_pnp_characteristics">
NDIS_MINIPORT_PNP_CHARACTERISTICS</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nf-ndis-ndismregisterminiportdriver">NdisMRegisterMiniportDriver</a>
<a href="https://docs.microsoft.com/windows-hardware/drivers/ddi/ndis/nf-ndis-ndismsetminiportattributes">NdisMSetMiniportAttributes</a>
| 41.951417 | 683 | 0.757093 | eng_Latn | 0.429005 |
21cd3915c3467614fb98b2744d4d31a12ff228e3 | 1,103 | md | Markdown | content/blog/14-15/codenights-2015-praktische-en-nuttige-zaken.md | SchrodingersCat00/zeus.ugent.be | 207ec1f9edb1c939090e79a7970c04f5b571b3a7 | [
"MIT"
] | 9 | 2016-06-09T16:02:10.000Z | 2019-11-13T12:13:08.000Z | content/blog/14-15/codenights-2015-praktische-en-nuttige-zaken.md | SchrodingersCat00/zeus.ugent.be | 207ec1f9edb1c939090e79a7970c04f5b571b3a7 | [
"MIT"
] | 353 | 2016-06-09T18:29:23.000Z | 2022-03-24T22:29:58.000Z | content/blog/14-15/codenights-2015-praktische-en-nuttige-zaken.md | SchrodingersCat00/zeus.ugent.be | 207ec1f9edb1c939090e79a7970c04f5b571b3a7 | [
"MIT"
] | 20 | 2016-09-27T20:22:11.000Z | 2020-10-17T12:15:39.000Z | ---
title: 'Codenights 2015: Praktische en nuttige zaken'
created_at: 05-07-2015
---
Eerst en vooral wensen we jullie allen een prettige vakantie toe! We willen vlug eens toelichten hoe onze codenights in het algemeen door gaan.
De codenights verlopen op dezelfde wijze als die van vorige jaren. Voor degenen die hier niet mee bekend zijn: Codenights zijn avonden waarbij we tot in de late uurtjes aan allerlei projecten werken en al doende leute hebben. Deze zijn open voor **iedereen**, dus spring zeker eens binnen!
De codenights gaan wekelijks door op **dinsdagen**, behalve tijdens de Gentse Feesten. Telkens gaat een codenight officieel van start rond **17u**. Elke codenight wordt aangekondigd op onze [**facebook**](https://www.facebook.com/zeus.wpi "facebook") en [**twitter**](https://twitter.com/zeuswpi "twitter") paginas.
Als je het ziet zitten, nodigen we jullie uit om te helpen aan een van onze [**projecten**](https://github.com/ZeusWPI "github"). Er zal altijd iemand aanwezig zijn om je te begeleiden.
We zien jullie graag volgende dinsdag voor de tweede codenight! Tot dan!
| 73.533333 | 315 | 0.772439 | nld_Latn | 0.999955 |
21cd4055be79b6c8bfdab4312ed9e2ee189911eb | 1,061 | md | Markdown | source/index.html.md | flow-ai/flowai-api-docs | ad2889db7f9a15580b959c4f87cccd8c8c174df8 | [
"Apache-2.0"
] | null | null | null | source/index.html.md | flow-ai/flowai-api-docs | ad2889db7f9a15580b959c4f87cccd8c8c174df8 | [
"Apache-2.0"
] | null | null | null | source/index.html.md | flow-ai/flowai-api-docs | ad2889db7f9a15580b959c4f87cccd8c8c174df8 | [
"Apache-2.0"
] | null | null | null | ---
title: API Reference
language_tabs: # must be one of https://git.io/vQNgJ
- http
- javascript: Node.js
toc_footers:
- <a href='https://app.flow.ai/signup'>Sign Up for a free account</a>
- <a href='https://github.com/lord/slate'>Documentation Powered by Slate</a>
includes:
- rtm
- messaging
- management
search: true
---
# Introduction
The Flow.ai API is organized around [REST](http://en.wikipedia.org/wiki/Representational_State_Transfer). Our API has predictable, resource-oriented URLs, and uses HTTP response codes to indicate API errors. We use built-in HTTP features, like HTTP authentication and HTTP verbs, which are understood by off-the-shelf HTTP clients.
Our messaging API supports [cross-origin resource sharing](http://en.wikipedia.org/wiki/Cross-origin_resource_sharing), allowing you to interact securely with our API from a client-side web application (though you should never expose your secret API key in any public client-side code). [JSON](http://www.json.org/) is returned by all API responses, including errors.
| 42.44 | 367 | 0.758718 | eng_Latn | 0.900881 |
21cdf0ee8e2f42da45ed0e10dc695c3cfade6218 | 1,417 | md | Markdown | source/_news/Tupoho Sports Scholarships - Applications Due 27 October.blade.md | salihzeki12000/Website | 733c8f7ea86f6ee4967cbbbb83cfc10d12cf7c12 | [
"MIT"
] | null | null | null | source/_news/Tupoho Sports Scholarships - Applications Due 27 October.blade.md | salihzeki12000/Website | 733c8f7ea86f6ee4967cbbbb83cfc10d12cf7c12 | [
"MIT"
] | 10 | 2019-08-13T10:55:15.000Z | 2022-02-26T10:21:10.000Z | source/_news/Tupoho Sports Scholarships - Applications Due 27 October.blade.md | wing5wong/artisan-whs-static | cb36480c54adcb5cbe1fd25c1a741bb525112a72 | [
"MIT"
] | 1 | 2020-10-30T12:55:04.000Z | 2020-10-30T12:55:04.000Z | ---
title: "Tupoho Sports Scholarships - Applications Due 27 October"
date:
description: "The Tupoho Whānau Trust is pleased to announce the opening of nominations toward the 2017/2018 Tupoho Sports Scholarships. Applications Due 27 October 2017."
image: http://c1940652.r52.cf0.rackcdn.com/59e3c792b8d39a463b0001da/Tupoho-Whanau-Trust-emblem.jpg
excerpt: "The Tupoho Whānau Trust is pleased to announce the opening of nominations toward the 2017/2018 Tupoho Sports Scholarships. Applications Due 27 October 2017."
image_gallery:
---
<p><span>The Tupoho Whānau Trust is pleased to announce the opening of nominations toward the 2017/2018 Tupoho Sports Scholarships. </span></p>
<p><a href="http://c1940652.r52.cf0.rackcdn.com/59e3c992b8d39a463b0001df/Tupoho-Sport-Scholarship-Application-2017-2018.pdf">Download the</a><strong><a href="http://c1940652.r52.cf0.rackcdn.com/59e3c992b8d39a463b0001df/Tupoho-Sport-Scholarship-Application-2017-2018.pdf">Tupoho Sport Scholarship Application form 2017-2018</a></strong> <br /><strong>It closes 27 October 2017. </strong></p>
<p><span>The inaugural Tupoho Sports Scholarships were created to provide support to rangatahi in the pursuit of their national and international sporting aspirations and careers. </span> </p>
<p><em><br />Nicole Dryden<br />Sport Whanganui<br />Senior Advisor - Communities & Iwi</em></p>
| 70.85 | 400 | 0.775582 | eng_Latn | 0.671165 |
21cdf2060ca50a76163d24feac40e8cf6236f555 | 2,621 | md | Markdown | README.md | junagao/readable-frontend | 324f6ec741e389ce1b517781fe3c712f3a12a038 | [
"MIT"
] | null | null | null | README.md | junagao/readable-frontend | 324f6ec741e389ce1b517781fe3c712f3a12a038 | [
"MIT"
] | 3 | 2021-03-09T18:47:53.000Z | 2022-02-26T18:14:23.000Z | README.md | junagao/readable-frontend | 324f6ec741e389ce1b517781fe3c712f3a12a038 | [
"MIT"
] | null | null | null | # Readable
Readable is a HackerNews/reddit inspired thread app. It's a basic blogging system that has posts and comments. Users can post content to predefined categories, comment on their posts and other users' posts, and vote on posts and comments. Users will also be able to edit and delete posts and comments.
This is my second project for Udacity's React Nanodegree Program.
I chose to build this project from scratch using React, Webpack, Babel, ESlint, instead of using plain Create React App.
## 👀 Preview

## 🚀 Quick start
1. Download the project and cd into it:
```
git clone https://github.com/junagao/readable-frontend.git
cd readable-frontend
```
2. Install dependencies and run the application:
```
yarn install
yarn start
```
3. Open your browser and navigate:
http://localhost:8080
### Backend Server
- Readable API Server: This application consumes data from an API provided by Udacity, specifically for this project, which can be found at https://github.com/junagao/readable-backend and you can have more information about the endpoints here: [README.md](https://github.com/junagao/readable-backend/tree/master/api-server)
Users will be able to post content to predefined categories, comment on their posts and other users' posts, and vote on posts and comments. Users will also be able to edit and delete posts and comments.
1. Download, install and start the API server:
```
git clone https://github.com/junagao/readable-backend
cd readable-backend
yarn install
node server
```
## 🧐 What's inside?
### Features (Requirements)
- List post by categories:
- All
- React
- Redux
- Udacity
- List posts:
- Post title
- Author
- Number of comments
- Current post vote score
- Show post item details:
- Post title
- Post body
- Author
- Number of comments
- Current post vote score
- Comments:
- Current comment vote score
- Create, edit and delete posts
- Add, edit and delete comment
- Vote on posts and comments
### Additional Features
- Sort posts by date, title and vote: ascending or descending
- Redux-form to manage forms with field validation
- Confirm modal dialog to delete posts or comments
- Google OAuth User Authentication
## 📚 Tech Stack
- Webpack
- Babel
- ESLint
- React
- react-router
- react-spinners-kit
- react-fontawesome
- Redux
- Redux-Thunk
- Redux-Form
- axios
- moment
- lodash
- uuid
## Author
Juliane Nagao - [GitHub](https://github.com/junagao) - [LinkedIn](https://www.linkedin.com/in/junagao/)
## License
This project is licensed under the MIT License.
| 24.961905 | 323 | 0.743609 | eng_Latn | 0.975383 |
21ce4aa7145a100b24df7cf9a9d92e223526dbb8 | 1,474 | md | Markdown | README.md | CaydendW/MidiByConsole | b77fea33c7fe1ec1dbfd150043947c67a1f4d284 | [
"MIT"
] | 1 | 2022-03-08T01:18:28.000Z | 2022-03-08T01:18:28.000Z | README.md | CaydendW/MidiByConsole | b77fea33c7fe1ec1dbfd150043947c67a1f4d284 | [
"MIT"
] | null | null | null | README.md | CaydendW/MidiByConsole | b77fea33c7fe1ec1dbfd150043947c67a1f4d284 | [
"MIT"
] | null | null | null | # MidiByConsole
A console app that plays music
The app uses Console.Beep to play notes. It was original just meant to be inside my WhiteDOS project but I thought it was worthy of it's own project.
The app accept arguements in the following way:
* Each letter plays it's own note. E.G: a, plays a; b, plays b; etc.
* Capitalizing the letter makes it go 1 octave up. E.G: a is low a. A is high a.
* Wrapping the lowercase letter in square brackets makes it even lower octave. So a is low A ad \[a\] is very low A.
* After every letter it must get a - with a number on the end to denote time in miliseconds. E.G: A-200 plays a for 200 miliseconds.
* _ is a rest command. It too has to be followed by a - to denote time in miliseconds.
* Wraping the letter in brackets () makes it have a special note. Cases and length still apply. Low notes have the round brackets on the outside: \(\[g\]\) Special notes are:
1. C# (c)
2. Bb (b)
3. G# (g)
4. F# (f)
5. Eb (e)
* Adding end; to the end of music breaks from the loop. This allows for comments to be put the end of the text file.
Those are all the commands.
Here is an example: (F)-1000 play F sharp high for 1000 miliseconds (1 second)
A simple song
Here is mary had a little lamb in MidiByConsole:
b-500 a-500 g-500 a-500 b-500 b-500 b-1000 a-500 a-500 a-1000 b-500 b-500 b-1000 b-500 a-500 g-500 a-500 b-500 b-500 b-500 b-500 a-500 a-500 b-500 a-500 g-2000
More complex songs can be found in the samples file.
| 49.133333 | 174 | 0.717775 | eng_Latn | 0.999533 |
21cf01be45c6231e64537fe7c25f23805fbe5a07 | 2,108 | md | Markdown | README.md | lucasfigueiredoweb/webfeira | 8bf6f1c4f38a636564d3570958c6908d034ed0f9 | [
"MIT"
] | 1 | 2021-08-30T00:38:20.000Z | 2021-08-30T00:38:20.000Z | README.md | lucasfigueiredoweb/webfeira | 8bf6f1c4f38a636564d3570958c6908d034ed0f9 | [
"MIT"
] | 3 | 2021-08-30T12:12:14.000Z | 2021-08-30T14:30:53.000Z | README.md | lucasfigueiredoweb/webfeira | 8bf6f1c4f38a636564d3570958c6908d034ed0f9 | [
"MIT"
] | null | null | null | # WebFeira
API de Gerenciamento do Ciclo de vida de criação de Feiras Online.
# Tecnologias
>Java 16.0.2
>Spring Boot 2.5.4
>Maven 4.0.0
>PostgresSQL 13.4.1
>H2 Database 1.4.2
>Docker 19.03.12
>docker-compose 1.17.1
>GNU Make 4.1
>Junit 5.7.2
>Kubernetes 1.22
# Inicialização
Os dados do arquivo “DEINFO_AB_FEIRASLIVRES_2014.csv” contidos no repositorio do projeto, na pasta data, são carregados automaticamente pelo sistema durante sua inicialização no banco de dados alvo.
# Execução
Para execução do projeto de maneira agnostica e sem se preocupar com dependencias como banco de dados Postgres, basta ter o Make instalado e executar os comandos descritos nele, exemplo:
### Limpeza dos arquivos do projeto:
`make clean`
___
### Build do Projeto:
`make build`
___
### Testes do Projeto:
`make test`
___
### Build da Imagem do Docker da aplicação:
`make docker_build`
___
### Execução dos containers da aplicação
`make docker_compose_run`
___
### Limpeza dos containers de aplicação e de banco:
`make docker_compose_clean`
___
OBS: Após a limpeza dos containers os arquivos de dados do Banco ainda poderão ser acessados em:
**/var/lib/postgresql/data/**
___
- Caso queira utilizar o processo de CI/CD via Jenkins, só criar 1 nova pipeline (testado via modo freestyle) no Jenkins e adicionar como arquivo de build o arquivo `Jenkinsfile` contido na raiz do projeto
- Caso queira utilizar o Kubernetes, utilizar o arquivo `deploy.yml` na raiz do projeto, alterando seu host para domain desejado, utilizando Traefik como Ingress Controller
# Documentação
Todos os testes realizados na aplicação foram realizados via ferramenta POSTMAN para testes de API.
https://documenter.getpostman.com/view/954685/U16bvU2e
## Logs da Aplicação
Arquivo na raiz do projeto: `ApplicationLogs.log`
## Melhorias Futuras:,
- Adicionar mais testes e tipos de testes.
- Incrementar Makefile com K8s deploy.
- Implantar em algum cloudProvider ou PaaS para utilização sem necessidade de baixar o projeto.
### Issues
Por favor verificar a seção de Issues do Github para mais informações
### Licença
MIT
| 25.095238 | 205 | 0.774194 | por_Latn | 0.997444 |
21cf824b53ecdfa8f224c2a8a410e2ed9f71a92a | 538 | md | Markdown | cassandra/docs/counter.cassandra.ClientRequest.Read.Unavailables.Count.md | Mattlk13/integrations-1 | d8be6f2349ab55c4c9cddb27609a18ecb3d00361 | [
"Apache-2.0"
] | null | null | null | cassandra/docs/counter.cassandra.ClientRequest.Read.Unavailables.Count.md | Mattlk13/integrations-1 | d8be6f2349ab55c4c9cddb27609a18ecb3d00361 | [
"Apache-2.0"
] | 1 | 2020-11-05T20:14:54.000Z | 2020-11-05T20:14:54.000Z | cassandra/docs/counter.cassandra.ClientRequest.Read.Unavailables.Count.md | Mattlk13/integrations-1 | d8be6f2349ab55c4c9cddb27609a18ecb3d00361 | [
"Apache-2.0"
] | 1 | 2017-02-08T09:58:51.000Z | 2017-02-08T09:58:51.000Z |
---
title: counter.cassandra.ClientRequest.Read.Unavailables.Count
brief: Count of read unavailables since server start
metric_type: cumulative
custom: false
---
### counter.cassandra.ClientRequest.Read.Unavailables.Count
Count of read unavailables since server start. A non-zero value means
that insufficient replicas were available to fulfil a read request at
the requested consistency level. This typically means that one or more
nodes are down. To fix this condition, any down nodes must be
restarted, or removed from the cluster.
| 33.625 | 70 | 0.810409 | eng_Latn | 0.995964 |
21cf90441f5eb4f84d8b0565f82343049452d10d | 2,109 | md | Markdown | README.md | CPE-1040-Spring-2020/assignment-5-transistor-circuits-Destineg1 | dd7d7d62c9c66b39e05c24f61c0b3efebe569b5e | [
"CECILL-B"
] | null | null | null | README.md | CPE-1040-Spring-2020/assignment-5-transistor-circuits-Destineg1 | dd7d7d62c9c66b39e05c24f61c0b3efebe569b5e | [
"CECILL-B"
] | null | null | null | README.md | CPE-1040-Spring-2020/assignment-5-transistor-circuits-Destineg1 | dd7d7d62c9c66b39e05c24f61c0b3efebe569b5e | [
"CECILL-B"
] | null | null | null | # CPE 1040 - Spring 2020
## Assignment 5: Transistors
Author: Ivo Georgiev, PhD
Last updated: 2020-02-22
Code: 98ffb5e9c5964e27028001933faec10caa0e4709
---
_**NOTE:** This assignment [README](README.md) is _intentionally_ blank. It is part of the assignment to fill it. Refer to the [submission template](submission-template.md) for expectations and guidance. Read the [requirements](requirements.md) and [criteria](criteria.md) for the assignment proper._
```
_ _ _ _ _ _ _
| | | | | \ | | | | | | | |
| | __ _| |__ | \| | ___ | |_ ___| |__ ___ ___ | | __
| | / _` | '_ \ | . ` |/ _ \| __/ _ \ '_ \ / _ \ / _ \| |/ /
| |___| (_| | |_) | | |\ | (_) | || __/ |_) | (_) | (_) | <
|______\__,_|_.__/ |_| \_|\___/ \__\___|_.__/ \___/ \___/|_|\_\
```
Art acknowledgement: [taag](http://patorjk.com/software/taag/)
2N3904 Transmitter
1. With the switch off, measure the voltages:
* Accross the resistor RC. 0.13MV
* At the collector (C). 2.69
* At the base (B). 0.03
* At the emitter (E). 0.02
* Record these voltages in the README.
2. With the switch off, measure the currents:
* The collector current IC. 0.1V
* The emitter current IE. 2.7mv
* The base current IB. If this current is not zero, you are doing something wrong :) 5.7 ma
* Record these currents in the README. Does any current flow?
2N3906 transmitter
1. With the switch off, measure the voltages:
* Accross the resistor RC. 0.7V
* At the collector (C). 3.43V
* At the base (B). 4.27V
* At the emitter (E). 4.97V
* Record these voltages in the README.
2. With the switch off, measure the currents:
* The collector current IC. 2.17 mA
* The emitter current IE. 2.59 mA
* The base current IB. If this current is not zero, you are doing something wrong :) 0.42mA
* Record these currents in the README. Does any current flow?
| 39.055556 | 300 | 0.567093 | eng_Latn | 0.97427 |
21cff51e8613d596b37d65147f6398e72bf44c5e | 159 | md | Markdown | CHANGELOG.md | nollieheel/cookbook-updater | 40a61dc9ce9040753573f5f0a15e5fb78a496270 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | nollieheel/cookbook-updater | 40a61dc9ce9040753573f5f0a15e5fb78a496270 | [
"Apache-2.0"
] | null | null | null | CHANGELOG.md | nollieheel/cookbook-updater | 40a61dc9ce9040753573f5f0a15e5fb78a496270 | [
"Apache-2.0"
] | null | null | null | # 0.2.0
Add logging as part of every update action
# 0.1.1
Change apt-get action from dist-upgrade to upgrade
# 0.1.0
Initial release of cookbook-updater
| 13.25 | 50 | 0.735849 | eng_Latn | 0.952853 |
21d12d03206e1e8c7d115bd0aada09be319fa58c | 437 | md | Markdown | README.md | ewcole/baltimore_catechism | 1ba3e2ed44108d66adfbc622c9ef220d9f186a85 | [
"Unlicense"
] | null | null | null | README.md | ewcole/baltimore_catechism | 1ba3e2ed44108d66adfbc622c9ef220d9f186a85 | [
"Unlicense"
] | null | null | null | README.md | ewcole/baltimore_catechism | 1ba3e2ed44108d66adfbc622c9ef220d9f186a85 | [
"Unlicense"
] | null | null | null | The Baltimore Catechism was the official American catholic catechism from long before my parents were young. It was a clear and thorough introduction to the Faith.
This project takes the full text of the Project Gutenberg edition of the catechism found at [archive.org](http://archive.org/stream/baltimorecatechi14551gut/14551.txt), and adds an ANTLR 4 grammar to parse it, so that it can be converted into a machine-readable format. | 145.666667 | 269 | 0.80778 | eng_Latn | 0.998541 |
21d12f8ccd8ea4a2d0ed9b4fd776da2f670ec6f4 | 1,367 | md | Markdown | data/issues/ZF-12340.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 40 | 2016-06-23T17:52:49.000Z | 2021-03-27T20:02:40.000Z | data/issues/ZF-12340.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 80 | 2016-06-24T13:39:11.000Z | 2019-08-08T06:37:19.000Z | data/issues/ZF-12340.md | zendframework/zf3-web | 5852ab5bfd47285e6b46f9e7b13250629b3e372e | [
"BSD-3-Clause"
] | 52 | 2016-06-24T22:21:49.000Z | 2022-02-24T18:14:03.000Z | ---
layout: issue
title: "Zend_Pdf_Canvas is not fully implemented"
id: ZF-12340
---
ZF-12340: Zend\_Pdf\_Canvas is not fully implemented
----------------------------------------------------
Issue Type: Bug Created: 2012-07-18T14:13:15.000+0000 Last Updated: 2012-07-18T14:14:28.000+0000 Status: Open Fix version(s):
Reporter: Arnaud Lemercier (arnolem) Assignee: Dolf Schimmel (Freeaqingme) (freak) Tags: - Zend\_Pdf
Related issues:
Attachments:
### Description
I tried to create a canvas to place an image but nothing happens. I looked at the internals of the Zend\_Pdf\_Canvas\_Abstract::drawCanvas() method and it seems that the variable $contentsToDraw is not used.
Here is the code of my example :
<pre class="highlight">
$pdf = new Zend_Pdf();
$page = new Zend_Pdf_Page(Zend_Pdf_Page::SIZE_A4);
$canvas = New Zend_Pdf_Canvas($page->getWidth(), $page->getHeight());
$canvas->setFillColor(new Zend_Pdf_Color_Html('#000000'));
$canvas->drawRectangle(0, $canvas->getHeight(), $canvas->getWidth(), 0, Zend_Pdf_Page::SHAPE_DRAW_FILL_AND_STROKE);
$page->drawCanvas($canvas, 0, $page->getHeight());
$pdf->pages[] = $page;
header("Content-Disposition: inline; filename=canvas.pdf");
header("Content-type: application/pdf");
echo $pdf->render();
### Comments
No comments to display | 31.068182 | 207 | 0.673738 | eng_Latn | 0.530763 |
21d1611e1fdef0acedf9bee1e84670c4c2246c90 | 10,434 | md | Markdown | guides/mp-point/how-to-integrate.es.md | mreverter/devsite-docs | e8efa3d57da1b71d014155ba720548b402ad9ee5 | [
"MIT"
] | null | null | null | guides/mp-point/how-to-integrate.es.md | mreverter/devsite-docs | e8efa3d57da1b71d014155ba720548b402ad9ee5 | [
"MIT"
] | null | null | null | guides/mp-point/how-to-integrate.es.md | mreverter/devsite-docs | e8efa3d57da1b71d014155ba720548b402ad9ee5 | [
"MIT"
] | null | null | null | ---
sites_supported:
- mla
- mlb
- mlm
- global
---
# Cómo integrar Mercado Pago Point
Para poder cobrar de manera integrada con nuestro dispositivo Point es necesario descargar la aplicación de Mercado Pago disponible en los marketplaces de iOS y Android.
Existen dos grandes mundos a la hora de integrar Point:
1) Cuando la aplicación se puede utilizar desde el mismo dispositivo (celular o tablet) donde esta instalada la aplicación de Mercado Pago. Esto se puede lograr con una integración de deep linking o intent-based mediante
2) Cuando la aplicación no se puede utilizar desde el mismo dispositivo (celular o tablet) donde esta instalada la aplicación de Mercado Pago. Esto se puede lograr con una integración vía API.
> WARNING
>
> Pre-requisitos
>
> * Contar con la aplicación de Mercado Pago (Desde la versión 2.34 para Android y 2.32 para iOS).
> * Contar con un dispositivo Point.
> * El usuario debe estar logueado con su cuenta de Mercado Pago en la aplicación de Mercado Pago.
> * Disponible para Android versión 2.8.0 o superior, iOS versión 1.7.0 o superior y solo cuando corren en iOS 9 o superior.
## Integración vía Deep Linking
Una de las formas de integrarse con Mercado Pago Point es mediante un deep linking. Cuando se llama a dicho _link_, el mismo va a ser interceptado como un _Point-handled address_ por parte de la aplicación de Mercado Pago.
En la llamada a este _link_ se pueden enviar diferentes parámetros que serán levantados por la aplicación de Mercado Pago e impactados en el pago. Una vez que se hace el llamado a este link, el usuario será redireccionado a la pantalla de la aplicación de Mercado Pago para pasar la tarjeta del cliente y así realizar el cobro.
Una vez que el pago es procesado, el usuario será redireccionado a la `success_url` o `fail_url`, dependiendo del estado del pago. Esto deberá ser interceptado para retornar al usuario al flujo de la aplicación.
### Diagrama de Flujo

### Creación del Deep Linking
La URL a ser interceptada es la siguiente: `https://www.mercadopago.com/point/integrations`
Los parámetros que se pueden incluir son:
* `amount`: El monto que se le va a cobrar al cliente (\*).
* `description`: Una descripción de la operación (Máx.: 20 carácteres) (\*).
* `external_reference`: El código de referencia de tu sistema, el mismo es el que permitirá conciliar tu orden de compra con el pago.
* `notification_url`: Es la URL donde vas a recibir la notificación de ese pago.
* `payer_email`: Es el email del pagador.
* `success_url`: Es la URL donde será redireccionado el usuario luego de un pago aprobado.
* `fail_url`: Es la URL donde será redireccionado el usuario luego de un pago rechazado.
> WARNING
>
> * Los campos marcados con (\*) son campos obligatorios.
En el artículo de [GitHub](https://github.com/mercadopago/point-android_integration#deep-linking) podes obtener más información y el ejemplo correspondiente.
## Integración vía Intent-Based
> WARNING
>
> * Esta integración sólo esta disponible para Android versión 2.8.0 o superior.
Otra forma de integrarse con la aplicación de Mercado Pago es mediante código nativo de Android, mediante el concepto de _Intent-Based_.
Debes utilizar el método “startActivityForResult” para iniciar directamente el proceso de pago. El resultado del pago va a retornar como “activityResult”.
Es muy importante que el código maneje la posibilidad de que el usuario no tenga instalada la aplicación de Mercado Pago en su dispositivo, en este caso, recomendamos redireccionar al usuario al Play Store para descargar la misma.
Como referencia puedes utilizar el código de ejemplo y documentación que tiene el formato para poder enviar la información del pago y manejar el objeto de retorno.
En el artículo de [GitHub](https://github.com/mercadopago/point-android_integration#intent) podes obtener más información y el ejemplo correspondiente.
## Integración vía API
> WARNING
>
> * Esta integración sólo esta disponible para Android versión 2.8.0 o superior.
> * No esta disponible para iOS.
> * Para poder utilizar esta integración es necesario que te comuniques con [integracioneshispanos@mercadopago.com](mailto:integracioneshispanos@mercadopago.com) para que te habiliten las opciones de integraciones en la app de Mercado Pago.
La otra forma de integrarse con la aplicación de Mercado Pago para cobrar con nuestro Point es mediante nuestras API.
Para esta integración, primero es necesario configurar desde la aplicación de Mercado Pago el `device_name` . El mismo sirve para identificar tu celular o tablet y relacionarlo con tu cuenta de Mercado Pago. De esta manera, sabrás a que dispositivo enviar la orden de pago.
El siguiente paso, consiste en generar una orden de pago y enviarsela vía API al device desde donde queres cobrarla. El usuario verá que en la pantalla de ese dispositivo se levanta la aplicación de Mercado Pago lista para pasar la tarjeta y avanzar con el flujo de pago utilizando el Point.
Una vez que el pago es procesado, el usario verá el resultado del pago en la aplicación de Mercado Pago. Finalmente, la orden generada se cerrará y se creará el pago correspondiente.
### Creación de la orden de pago
El POST en esta API genera una orden, que es la que recibe la aplicación de Mercado Pago para cobrar con el Point, con un identificador, que se puede utilizar para saber el estado de la misma.
```curl
curl -X POST \
-H "Content-Type: application/json" \
-d '{"amount":100,"description":"taza","device_name":"dispositivo","cc_type":"credit_card"}' \
'https://mobile.mercadopago.com/point/services/integrations/v1?access_token=ACCESS_TOKEN'
```
Los parámetros que se pueden incluir son:
* `amount`: El monto que se le va a cobrar al cliente (\*).
* `description`: Una descripción de la operación (Máx.: 20 carácteres) (\*).
* `device_name`: Es el nombre del dispositivo que queres que levante la orden de pago (\*).
* `cc_type`: El tipo de tarjeta. Puede ser credit_card o debit_card (\*).
* `external_reference`: El código de referencia de tu sistema, el mismo es el que permitirá conciliar tu orden de compra con el pago.
* `disable_back_button`: True o False. Para que al clickear en el back button la orden de pago se cierre.
* `notification_url`: Es la URL donde vas a recibir la notificación de ese pago.
* `payer_email`: Es el email del pagador.
> WARNING
>
> * Los campos marcados con (\*) son campos obligatorios.
La respuesta tendra el siguiente formato:
```json
{
"status":"OPEN",
"id":<order_id>
}
```
**Response status code: 200 OK**
### Obtener la orden de pago
El GET en esta API, junto al `order_id`, es la que posibilita recuperar el estado de la misma para saber si se generó o no un pago.
```curl
curl -X GET \
-H "Content-Type: application/json" \
'https://mobile.mercadopago.com/point/services/integrations/v1/:ID?access_token=ACCESS_TOKEN'
```
Si el status de la orden es `OPEN` quiere decir que la misma todavía no se pago. Si el status es `CLOSED` quiere decir que la orden ya se pago y por lo tanto obtendrás el `payment_id` con el resto de la información. La respuesta tendra el siguiente formato.
```json
{
"status":"CLOSED",
"id":<order_id>,
"payment_id":<payment_id>,
"payment_status":"<payment_status>",
"external_reference": "<external_reference>",
"payer_email": "<email_payer>"
}
```
**Response status code: 200 OK**
### Eliminar la orden de pago
El DELETE en esta API es la que posibilita eliminar una orden. Existen dos formas de eliminar la orden.
Podes eliminar la orden por `device_name`:
```curl
curl -X DELETE \
-H "Content-Type: application/json" \
'https://mobile.mercadopago.com/point/services/integrations/v1/attempt/device/:DEVICE_NAME?access_token=ACCESS_TOKEN'
```
La respuesta tendra el siguiente formato.
```json
{
"status":"OK"
}
```
**Response status code: 200 OK**
O podes eliminar la orden por `order_id`:
```curl
curl -X DELETE \
-H "Content-Type: application/json" \
'https://mobile.mercadopago.com/point/services/integrations/v1/:ID?access_token=ACCESS_TOKEN'
```
**Response status code: 204 OK**
### Obtener todos los devices de una cuenta
El GET en esta API es la que posibilita obtener todos los devices configurados y sincronizados para tu cuenta de Mercado Pago
```curl
curl -X GET \
-H "Content-Type: application/json" \
'https://mobile.mercadopago.com/point/services/integrations/v1/devices?access_token=ACCESS_TOKEN'
```
Si el status del device es `FREE` quiere decir que el dispositivo puede recibir una orden nueva. Si el status es `BUSY` quiere decir que el dispostivo ya tiene asignado una orden. La respuesta tendra el siguiente formato.
```json
{
"devices"[{
"name": "<device_name>",
"caller": <id_interno>,
"id": <id_interno>,
”status”:{
status”:”FREE”,
"payment_attempt": <order_id>
}
}]
}
```
**Response status code: 200 OK**
### Eliminar un device de una cuenta
El DELETE en esta API es la que posibilita borrar alguno de los devices configurados y sincronizados para tu cuenta de Mercado Pago
```curl
curl -X DELETE \
-H "Content-Type: application/json" \
'https://mobile.mercadopago.com/point/services/integrations/v1/devices/:DEVICE_NAME?access_token=ACCESS_TOKEN'
```
La respuesta tendra el siguiente formato.
```json
{
"status":"OK"
}
```
**Response status code: 200 OK**
## Notificaciones de Pago
Es necesario que envíes tu `notification_url`, donde recibirás aviso de todos los nuevos pagos y actualizaciones de estados que se generen.
En el artículo de [notificaciones](https://www.mercadopago.com.ar/developers/es/guides/notifications/webhooks) podes obtener más información.
## Pagos de Point
Los pagos de Point se pueden buscar en la API de Payments. Podes encontrar más información en el artículo de [API's](https://www.mercadopago.com.ar/developers/es/reference/payments/_payments_id/get/)
A su vez, existe una API exclusiva de Point que cuenta con alguna información adicional del pago: `https://api.mercadolibre.com/point/services/payment/<payment_id>?access_token=<access_token>`
La respuesta tendra el siguiente formato:
```json
{
"payment_id": 12345,
"caller_id": 44444,
"poi": "BBPOS-123123123",
"poi_type": "BBPOS",
"operator_id": 555555,
"buyer_info": {
"email": "email@email.com"
}
}
```
> El campo "poi" es el identificador físico del dispositivo Point.
| 37.804348 | 327 | 0.759824 | spa_Latn | 0.986511 |
21d2033e9bff790c53e50931eeb3f023e1bc23f8 | 4,411 | md | Markdown | README.md | MonikaFJ/CarND-PID-Control-Project | 9016a6fc6a9e76b8c48f57d931ff4c8631cb889b | [
"MIT"
] | null | null | null | README.md | MonikaFJ/CarND-PID-Control-Project | 9016a6fc6a9e76b8c48f57d931ff4c8631cb889b | [
"MIT"
] | null | null | null | README.md | MonikaFJ/CarND-PID-Control-Project | 9016a6fc6a9e76b8c48f57d931ff4c8631cb889b | [
"MIT"
] | null | null | null | # CarND-Controls-PID
Self-Driving Car Engineer Nanodegree Program
---
## Dependencies
* cmake >= 3.5
* All OSes: [click here for installation instructions](https://cmake.org/install/)
* make >= 4.1(mac, linux), 3.81(Windows)
* Linux: make is installed by default on most Linux distros
* Mac: [install Xcode command line tools to get make](https://developer.apple.com/xcode/features/)
* Windows: [Click here for installation instructions](http://gnuwin32.sourceforge.net/packages/make.htm)
* gcc/g++ >= 5.4
* Linux: gcc / g++ is installed by default on most Linux distros
* Mac: same deal as make - [install Xcode command line tools]((https://developer.apple.com/xcode/features/)
* Windows: recommend using [MinGW](http://www.mingw.org/)
* [uWebSockets](https://github.com/uWebSockets/uWebSockets)
* Run either `./install-mac.sh` or `./install-ubuntu.sh`.
* If you install from source, checkout to commit `e94b6e1`, i.e.
```
git clone https://github.com/uWebSockets/uWebSockets
cd uWebSockets
git checkout e94b6e1
```
Some function signatures have changed in v0.14.x. See [this PR](https://github.com/udacity/CarND-MPC-Project/pull/3) for more details.
* Simulator. You can download these from the [project intro page](https://github.com/udacity/self-driving-car-sim/releases) in the classroom.
Fellow students have put together a guide to Windows set-up for the project [here](https://s3-us-west-1.amazonaws.com/udacity-selfdrivingcar/files/Kidnapped_Vehicle_Windows_Setup.pdf) if the environment you have set up for the Sensor Fusion projects does not work for this project. There's also an experimental patch for windows in this [PR](https://github.com/udacity/CarND-PID-Control-Project/pull/3).
## Basic Build Instructions
1. Clone this repo.
2. Make a build directory: `mkdir build && cd build`
3. Compile: `cmake .. && make`
4. Run it: `./pid`.
Tips for setting up your environment can be found [here](https://classroom.udacity.com/nanodegrees/nd013/parts/40f38239-66b6-46ec-ae68-03afd8a601c8/modules/0949fca6-b379-42af-a919-ee50aa304e6a/lessons/f758c44c-5e40-4e01-93b5-1a82aa4e044f/concepts/23d376c7-0195-4276-bdf0-e02f1f3c665d)
## Algorithm and tuning
The goal of this project is to control the steering angle of a simulated car so it will follow the road. It is also possible to control the throttle, which found very useful.
### Controlling the steering angle
For controlling the car basic PID controller is used. The input the the PID was the distance of the car from the center of the line.
This type of controller has three basic terms:
* Proportional (P) - proportional to the current value - directly reacts to changes in control value (if used alone overshoots and leads to oscillations)
* Derivative (D) - proportional to error between current and previous value - tries to estimate future trend (used to dampen or remove the oscillation)
* Integral (I) - proportional to sum of errors - prevents residual error
The biggest challenge of the project was to tune the PID parameters to get the smooth and safe trajectory. I've started to estimate the biggest P parameter when the car is still behaving in a stable way (with I and D set to 0). I found out that this value is around 0.5. Next step was to find a D parameter that will compensate for the oscillations in the system. During tests with different values I realized that the car was not turning enough when approaching the corners, which leads to overshooting. To tackle this problem I did two things: increase the P value and introduce the heuristic to control the throttle (described later).
After multiple tests I found out that parameters P = 0.13 I = 0.0003 and D = 1.4, yield a good results. The car is still oscillating a bit, but it is not driving off the road.
### Controlling the throttle
During the PID tuning I realized that it's hard to follow the road when driving with the full speed. That's why I introduced the heuristic that limits the throttle depending on the current control value. The ranges that I used are based on experiments, and could potentially be improved. Another improvement might be to use the PID controller instead of the heuristics.
Used throttle values:
| CTE | throttle |
| ------------- |:-------------:|
| < 0.5 | 0.25|
|0.5< CTE <0.8 | 0.15|
|0.8<CTE<2 | 0.1|
|>2 (if speed <20)|0.05|
|>2 (if speed >20)|-0.05|
| 63.927536 | 638 | 0.746316 | eng_Latn | 0.991875 |
21d3bff4fe4da8bd0e3c914df8b75ac939ff4155 | 21,456 | md | Markdown | content/docs/0.16.x/integrations/custom_integration/index.md | keptn/keptn.sh | d0fb5c8cc8018030db304bd919d3d25d9b23977b | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | content/docs/0.16.x/integrations/custom_integration/index.md | keptn/keptn.sh | d0fb5c8cc8018030db304bd919d3d25d9b23977b | [
"Apache-2.0",
"CC-BY-4.0"
] | 2 | 2022-03-24T08:35:15.000Z | 2022-03-31T08:44:35.000Z | content/docs/0.16.x/integrations/custom_integration/index.md | agardnerIT/keptn.github.io | d78096278c75955df6c3a6158dae5a8bad3abcbb | [
"Apache-2.0",
"CC-BY-4.0"
] | null | null | null | ---
title: Write a Keptn-service
description: Implement your Keptn-service that listens to Keptn events and extends your Keptn with certain functionality.
weight: 2
keywords: [0.16.x-integration]
---
Here you learn how to add additional functionality to your Keptn installation with a custom [*Keptn-service*](#write-your-keptn-service), [*SLI-provider*](../sli_provider), or [*Action-provider*](../action_provider).
* A *Keptn-service* is responsible for implementing a continuous delivery or operations task.
* An *SLI-provider* is used to query Service-Level Indicators (SLI) from an external source like a monitoring or testing solution.
* An *Action-provider* is used to extend a task sequence for remediation with an additional action step.
## Template repository
A template for writing a new *Keptn-service* is provided here: [keptn-service-template-go](https://github.com/keptn-sandbox/keptn-service-template-go).
Please note that the master branch of this repository might represent a development state. Check out the [releases page](https://github.com/keptn-sandbox/keptn-service-template-go/releases) and download the code for a release that's compatible with the Keptn version for which you are developing.
Since a *Keptn-service* is a Kubernetes service with a deployment and service template, the deployment manifest in the template repository can be re-used; see [deploy/service.yaml](https://github.com/keptn-sandbox/keptn-service-template-go/blob/master/deploy/service.yaml).
This deployment manifest contains:
* Kubernetes **Deployment**, with two containers:
* *keptn-service-template-go*: Replace the image of this container with the image of your implementation.
* *distributor*: This container integrates with your Keptn and does the event handling. *Do not remove it.*
* Kubernetes **Service**
## Write your Keptn-service
A Keptn-service has the following characteristics:
* has a subscription to a **triggered event** that occurs during the execution of a task sequence (e.g., for continuous delivery or operations)
* sends a **started event** to inform Keptn about receiving the event and acting on it
* processes functionality and can therefore leverage additional tools, e.g., through their REST interface
* sends a **finished event** to inform Keptn about its execution status and the result
### Subscription to a triggered event
Your Keptn-service must have a subscription to at least one [Keptn CloudEvent](https://github.com/keptn/spec/blob/0.2.2/cloudevents.md). The event type to subscribe to looks like:
- `sh.keptn.event.[task].triggered`
In this example, `[task]` works as a placeholder for tasks such as: `deployment`, `test`, `evaluation`, `remediation`, etc. The task defines the topic the Keptn-service is interested in. Assuming you are writing a Keptn-service for testing, the event type would be: `sh.keptn.event.test.triggered`.
**Distributor:**
* To subscribe your Keptn-service to the `sh.keptn.event.[task].triggered` event, a distributor with `PUBSUB_TOPIC` set to the specific event type is required, see example below. Alternatively, a default distributor listening to all events (e.g., `PUBSUB_TOPIC: sh.keptn.>`) is provided in the deployment manifest of the keptn-service-template-go template (see [deploy/service.yaml](https://github.com/keptn-sandbox/keptn-service-template-go/blob/master/deploy/service.yaml)).
The `PUBSUB_TOPIC` variable sets the initial subscription for your service. If a subscription has been modified through the Bridge, Keptn prioritizes this information and it discards the value of `PUBSUB_TOPIC`. Please, look at the Bridge paragraph later on this page for more information on the subject.
```yaml
spec:
containers:
- name: distributor
image: keptn/distributor:0.16.0
ports:
- containerPort: 8080
resources:
requests:
memory: "16Mi"
cpu: "25m"
limits:
memory: "128Mi"
cpu: "250m"
env:
- name: PUBSUB_URL
value: 'nats://keptn-nats-cluster'
- name: PUBSUB_TOPIC
value: 'sh.keptn.event.test.triggered'
- name: PUBSUB_RECIPIENT
value: '127.0.0.1'
```
In addition to forwarding received events for the subscribed topic to the Keptn-service, the distributor also provides the feature to act as a proxy to the Keptn API.
Using this feature, the following Keptn API services are reachable for the Keptn-service via the following URLs, if the **distributor runs within the same K8s pod as the Keptn-service**:
- Mongodb-datastore:
- `http://localhost:8081/mongodb-datastore`
- Configuration-service:
- `http://localhost:8081/configuration-service`
- Shipyard-controller:
- `http://localhost:8081/controlPlane`
To configure this distributor for your *Keptn-service*, the following environment variables can be adapted. However, in most scenarios only a subset of them needs to be configured. The full list of environment variables is as follows:
| Environment variable | Description | Default Value |
|-----------------------|:-----------------------------------------------------------------------------------------------------------------------------------------|:----------------------------|
| KEPTN_API_ENDPOINT | Keptn API Endpoint - needed when the distributor runs outside of the Keptn cluster | `""` |
| KEPTN_API_TOKEN | Keptn API Token - needed when the distributor runs outside of the Keptn cluster | `""` |
| API_PROXY_PORT | Port on which the distributor will listen for incoming Keptn API requests by its execution plane service | `8081`. |
| API_PROXY_PATH | Path on which the distributor will listen for incoming Keptn API requests by its execution plane service | `/`. |
| HTTP_POLLING_INTERVAL | Interval (in seconds) in which the distributor will check for new triggered events on the Keptn API | `10` |
| EVENT_FORWARDING_PATH | Path on which the distributor will listen for incoming events from its execution plane service | `/event` |
| HTTP_SSL_VERIFY | Determines whether the distributor should check the validity of SSL certificates when sending requests to a Keptn API endpoint via HTTPS | `true` |
| PUBSUB_URL | The URL of the nats cluster the distributor should connect to when the distributor is running within the Keptn cluster | `nats://keptn-nats-cluster` |
| PUBSUB_TOPIC | Comma separated list of topics (i.e. event types) the distributor should listen to | `""` |
| PUBSUB_RECIPIENT | Hostname of the execution plane service the distributor should forward incoming CloudEvents to | `http://127.0.0.1` |
| PUBSUB_RECIPIENT_PORT | Port of the execution plane service the distributor should forward incoming CloudEvents to | `8080` |
| PUBSUB_RECIPIENT_PATH | Path of the execution plane service the distributor should forward incoming CloudEvents to | `/` |
| DISABLE_REGISTRATION | Disables automatic registration of the Keptn integration to the control plane. | `false` |
| REGISTRATION_INTERVAL | Time duration between trying to re-register to the Keptn control plane. |`10s` |
| LOCATION | Location the distributor is running on, e.g. "executionPlane-A". | `""` |
| DISTRIBUTOR_VERSION | The software version of the distributor. | `""` |
| VERSION | The version of the Keptn integration. | `""` |
| K8S_DEPLOYMENT_NAME | Kubernetes deployment name of the Keptn integration. | `""` |
| K8S_POD_NAME | Kubernetes deployment name of the Keptn integration. | `""` |
| K8S_NAMESPACE | Kubernetes namespace of the Keptn integration. | `""` |
| K8S_NODE_NAME | Kubernetes node name the Keptn integration is running on. | `""` |
| PROJECT_FILTER | Filter events for a specific project, supports a comma-separated list of projects. | `""` |
| STAGE_FILTER | Filter events for a specific stage, supports a comma-separated list of stages. | `""` |
| SERVICE_FILTER | Filter events for a specific service, supports a comma-separated list of services. | `""` |
The above list of environment variables is pretty long, but in most scenarios only a few of them have to be set. The following examples show how to set the environment variables properly, depending on where the distributor and it's accompanying execution plane service should run:
<details><summary>*Distributor for Keptn-service that is running* **inside the Keptn control plane**: </summary>
<p>
| Environment variable | Setting |
|----------------------- |:-------- |
| PUBSUB_RECIPIENT | Host name of the Keptn-service. |
| PUBSUB_RECIPIENT_PORT | Service port to receive the event (default: `8080`) |
| PUBSUB_RECIPIENT_PATH | Service endpoint to receive the event (default: `/`) |
| PUBSUB_TOPIC | Event(s) the Keptn-service is subscribed to. To subscribe to multiple events, declare a comma-separated list, e.g.: `sh.keptn.event.test.triggered, sh.keptn.event.evaluation.triggered` |
If your Keptn-service is running in the same pod as the distributor (which we recommend), and receives events at the port `8080` and the path `/`, you will only need to set the `PUBSUB_TOPIC` environment variable.
</p>
</details>
<details><summary>*Distributor for Keptn-service that is running* **outside the Keptn control plane (in execution plane)**: </summary>
<p>
| Environment variable | Setting |
|----------------------- |:-------- |
| KEPTN_API_ENDPOINT | The endpoint of the Keptn API, e.g. `https://my-keptn.dev/api` |
| KEPTN_API_TOKEN | Keptn API token |
| HTTP_POLLING_INTERVAL | Polling interval in seconds |
| PUBSUB_RECIPIENT | Host name of the Keptn-service |
| PUBSUB_RECIPIENT_PORT | Service port to receive the event (default: `8080`) |
| PUBSUB_RECIPIENT_PATH | Service endpoint to receive the event (default: `/`) |
| PUBSUB_TOPIC | Event(s) the Keptn-service is subscribed to. To subscribe to multiple events, declare a comma-separated list, e.g.: `sh.keptn.event.test.triggered, sh.keptn.event.evaluation.triggered` |
If your Keptn-service is running in the same pod as the distributor (which we recommend), and receives events at the port `8080` and the path `/`, you will only need to set the `PUBSUB_TOPIC` environment variable.
</p>
</details>
**Bridge:**
After the application is deployed, its subscription can be changed directly in the Bridge.
For this, open Keptn Bridge, select a project, and go to the *Uniform* page.
Then select your service, and click the `Add subscription` button.
{{< popup_image
link="./assets/uniform_api.png"
caption="Add task subscription for your integration"
width="700px">}}
In this form, you can provide the information for the task subscription:
* *Task*: The task on which the integration should be fired (e.g., `test` or `deployment`)
* *Task suffix*: The state of the task when the integration should be fired; select one of: `triggered`, `started`, of `finished`
* *Filter*: To restrict your integration to certain stages and services you can specify those using filters.
*Note:* multiple subscriptions can be added for the same service.
Keptn stores this subscriptions information even if the integration is not running.
By default, Keptn keeps this information for 48h after the last contact with the integration.
This value can be configured using the [Advanced Install](../../operate/advanced_install_options/) option `shipyardController.config.uniformIntegrationTTL`.
### Send a started event
After receiving a `triggered` event for a particular task, your *Keptn-service* must inform Keptn by sending an event of the type:
- `sh.keptn.event.[task].started`
The request body needs to follow the [CloudEvent specification](https://github.com/keptn/spec/blob/0.2.2/cloudevents.md) and the HTTP header attribute `Content-Type` has to be set to `application/cloudevents+json`.
**Send the event:**
The event can be sent to Keptn in either of the following ways:
1. Post it on the `v1/event` endpoint of Keptn
1. If the distributor is running as sidecar, post the event on `127.0.0.1:8081`
### Execute the functionality
The functionality of your *Keptn-service* depends on the capability you want to add to the continuous delivery or operational workflow. In many cases, the event payload -- containing meta-data such as the project, stage, or service name -- is first processed and then used to call the REST API of another tool.
### Send a finished event
After your *Keptn-service* has completed its functionality, it has to inform Keptn by sending an event of the type:
- `sh.keptn.event.[task].finished`
The request body must follow the [CloudEvent specification](https://github.com/keptn/spec/blob/0.2.2/cloudevents.md) and the HTTP header attribute `Content-Type` must be set to `application/cloudevents+json`.
**Add property to event header:**
Add to the *header* of the event:
* `triggeredid`: The value of this property is the `id` of the `sh.keptn.event.[task].triggered` event.
**Add data to event payload:**
You can send data back to Keptn by adding it to the data block in the event payload. In more details, the data block has a reserved space depending on the event type. If, for example, your Keptn service has a subscription to a `sh.keptn.event.test.finished` event, the reserved space is `data.test`. Your Keptn-service is allowed to add data there, but must provide at least a value for `status` and `result`:
* `status`: [succeeded, errored, unknown] - The status of the task execution.
* `result`: [pass, failed] - The result of a successful task execution.
```json
{
"type": "sh.keptn.event.test.finished",
"specversion": "1.0",
"source": "https://github.com/keptn/keptn/jmeter-service",
"id": "ggb878d3-03c0-4e8f-bc3f-454bc1b3d888",
"time": "2019-06-07T07:02:15.64489Z",
"contenttype": "application/json",
"shkeptncontext": "08735340-6f9e-4b32-97ff-3b6c292bc509",
"triggeredid" : "f2b878d3-03c0-4e8f-bc3f-454bc1b3d79d",
"data": {
"test": {
"status": "succeeded",
"result": "pass"
},
"project": "sockshop",
"stage": "staging",
"service": "carts",
"labels": {
"testId": "4711",
"buildId": "build-17",
"owner": "JohnDoe"
}
}
}
```
**Send the event:**
The event can be sent to Keptn, in either of the following ways:
1. Post it on the `v1/event` endpoint of Keptn
1. If the distributor is running as sidecar, post the event on `127.0.0.1:8081`
## Deploy Keptn-service with distributor
### Subscribe service to Keptn event
**Distributor:** To subscribe your service to a Keptn event, a distributor is required. A distributor is part of the above mentioned deployment manifest and shown by the example below:
```yaml
spec:
selector:
matchLabels:
run: distributor
replicas: 1
template:
metadata:
labels:
run: distributor
spec:
containers:
- name: distributor
image: keptn/distributor:0.16.0
ports:
- containerPort: 8080
resources:
requests:
memory: "16Mi"
cpu: "25m"
limits:
memory: "128Mi"
cpu: "250m"
env:
- name: PUBSUB_URL
value: 'nats://keptn-nats-cluster'
- name: PUBSUB_TOPIC
value: 'sh.keptn.event.deployment.finished'
- name: PUBSUB_RECIPIENT
value: '127.0.0.1'
- name: VERSION
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: 'metadata.labels[''app.kubernetes.io/version'']'
- name: K8S_DEPLOYMENT_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: 'metadata.labels[''app.kubernetes.io/name'']'
- name: K8S_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
```
To configure this distributor for your *Keptn-service*, two environment variables need to be adapted:
* `PUBSUB_RECIPIENT`: Defines the service address as specified in the Kubernetes service manifest (e.g., 127.0.0.1 or jmeter-service)
* `PUBSUB_TOPIC`: Defines the event type your *Keptn-service* is listening to (e.g. `sh.keptn.event.test.triggered` or `sh.keptn.event.>`).
### Deploy Keptn-service and distributor
With a service and deployment manifest for your custom *Keptn-service* (`service.yaml`), you are ready to deploy both components in the K8s cluster where Keptn is installed:
```console
kubectl apply -f service.yaml -n keptn
```
## CloudEvents
CloudEvents must be sent with the HTTP header `Content-Type` set to `application/cloudevents+json`. For a detailed look into CloudEvents, please go the Keptn [CloudEvent specification](https://github.com/keptn/spec/blob/0.2.3/cloudevents.md).
## Error Logging
By default, the distributor automatically extracts error logs from received `sh.keptn.<task>.finished` events with `data.status=errored` and/or `data.result=fail`, that have been sent by your service. These error messages are then forwarded to Keptn's Log Ingestion API.
Additionally, for easier debugging of errors that occur either during the execution of a task of a sequence, or while performing any other operation, Keptn integration services can send error log events to the Keptn API via the distributor.
Examples for those events are listed below.
### Logging an error related task sequence execution:
If the error log event is associated with the execution of a specific task that has been triggered by a `sh.keptn.event.<task>.triggered` event, the following properties must be set in order to correlate them to the correct task sequence execution:
- `shkeptncontext`: The context of the task sequence execution. Can be adapted from the received `sh.keptn.event.<task>.triggered` event
- `triggeredid`: The `id` of the received `sh.keptn.event.<task>.triggered` event
- `data.task`: The name of the executed task.
- `data.message`: The message you would like to log
**Example event payload**
```json
{
"specversion": "1.0",
"id": "c4d3a334-6cb9-4e8c-a372-7e0b45942f53",
"source": "source-service",
"type": "sh.keptn.log.error",
"datacontenttype": "application/json",
"data": {
"message": "an unexpected error occurred during the execution of my task",
"task": "deployment"
},
"triggeredid": "3f9640b6-1d2a-4f11-95f5-23259f1d82d6",
"shkeptncontext": "a3e5f16d-8888-4720-82c7-6995062905c1",
"shkeptnspecversion": "0.2.3"
}
```
### Logging an error that is not related to the execution of a task
If the error log event is not associated with an execution of a specific task, the properties `shkeptncontext`, `triggeredid`, and `data.task` are not required. In this case, an example payload would look as follows:
```json
{
"specversion": "1.0",
"id": "c4d3a334-6cb9-4e8c-a372-7e0b45942f53",
"source": "source-service",
"type": "sh.keptn.log.error",
"datacontenttype": "application/json",
"shkeptnspecversion": "0.2.3",
"data": {
"message": "an unexpected error occurred during the execution of my task",
"task": "deployment"
}
}
```
| 55.298969 | 476 | 0.644389 | eng_Latn | 0.982575 |
21d3c1955c253571244151306d60d21230adbb30 | 6,031 | md | Markdown | _posts/2019-10-31-understanding-pytorch-tensors.md | asheeshcric/blogs | 630956be9844e2a46efe03bcb6800ea0cc2dcef1 | [
"MIT"
] | 23 | 2019-02-15T11:49:21.000Z | 2019-07-15T10:58:16.000Z | _posts/2019-10-31-understanding-pytorch-tensors.md | asheeshcric/blogs | 630956be9844e2a46efe03bcb6800ea0cc2dcef1 | [
"MIT"
] | null | null | null | _posts/2019-10-31-understanding-pytorch-tensors.md | asheeshcric/blogs | 630956be9844e2a46efe03bcb6800ea0cc2dcef1 | [
"MIT"
] | 1 | 2021-12-30T12:25:00.000Z | 2021-12-30T12:25:00.000Z | ---
title: Understanding PyTorch Tensors
date: 2019-10-31 11:23:00 +0515
categories: [Tutorial]
tags: [machine_learning]
description: Understanding how tensor operations are carried out in PyTorch with examples.
published: false
seo:
date_modified: 2020-04-13 14:47:03 -0500
---
This post is just intended to present example codes to see how tensors are operated and manipulated in
PyTorch. There are no explanations or descriptions for any of the code. This post is just for reference purposes and will soon be removed.
```python
import torch
```
###### Creating a tensor in PyTorch
```python
a = torch.tensor([1, 2, 3, 4])
```
```python
a
```
tensor([1, 2, 3, 4])
###### Checking the tensor type
```python
a.type()
```
'torch.LongTensor'
###### Type of data stored in the tensor
```python
a.dtype
```
torch.int64
###### Create a tensor of specific type
```python
b = torch.FloatTensor([1, 2, 3, 4])
b.type()
```
'torch.FloatTensor'
###### Size and dimension of a tensor
```python
print(a.size())
print(a.ndimension())
```
torch.Size([4])
1
###### Adding a new dimension to a tensor
```python
a_col = a.view(4, 1) # 4 rows and 1 column
# OR
a_col = a.view(-1, 1) # In case you don't know the number of rows, both do the same thing
a_col.size()
```
torch.Size([4, 1])
###### Numpy array and tensors
```python
# Numpy array and tensors
import numpy as np
np_array = np.array([1, 2, 3, 4])
# From numpy array to a torch tensor
torch_tensor = torch.from_numpy(np_array)
# Back to torch tensor from a numpy array
back_to_np_array = torch_tensor.numpy()
"""
Here, the tensors and np_arrays carry reference of the ones they were assigned from
torch_tensor points back to np_array, and
back_to_np_array points back to torch_tensor
So, if you change the variable np_array, both torch_tensor and back_to_np_array change
"""
print(torch_tensor)
print(back_to_np_array)
```
tensor([1, 2, 3, 4])
[1 2 3 4]
```python
# Tensor to list
this_tensor = torch.tensor([1, 2, 3, 4])
this_tensor.tolist()
```
[1, 2, 3, 4]
### Vector Addition and Subtraction
```python
u = torch.tensor([1.0, 2.1, 4.2])
v = torch.tensor([3.0, 1.9, -0.2])
z = u + v
z
```
tensor([4.0000, 4.0000, 4.0000])
###### Multiplication and dot product of two tensors
```python
# Multiplying two tensors
m = u * v
# Dot product of two tensors
d = torch.dot(u, v)
print(m)
print(d)
```
tensor([ 3.0000, 3.9900, -0.8400])
tensor(6.1500)
###### Adding scalar to a tensor (Broadcasting)
```python
# Adding a scalar to a tensor is similar to broadcasting in numpy
m+1
```
tensor([4.0000, 4.9900, 0.1600])
### Universal Function in PyTorch
```python
m.mean()
```
tensor(2.0500)
```python
m.max()
```
tensor(3.9900)
### Getting evenly spaced numbers from one point (lower) to another (upper)
```python
torch.linspace(-2, 2, steps=9).type(torch.IntTensor)
```
tensor([-2, -1, -1, 0, 0, 0, 1, 1, 2], dtype=torch.int32)
###### Plotting a sin(x) function
```python
x = torch.linspace(0, 2*np.pi, 100)
y = torch.sin(x)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(x.numpy(), y.numpy())
```
[<matplotlib.lines.Line2D at 0x7f9855de8898>]

### Derivatives in PyTorch
```python
x = torch.tensor(2.0, requires_grad=True)
y = x**2
y.backward()
x.grad
```
tensor(4.)

```python
x = torch.linspace(-10, 10, 10, requires_grad=True)
Y = x**2
y = torch.sum(x**2)
y.backward()
plt.plot(x.detach().numpy(), Y.detach().numpy(), label='function')
plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label='derivative')
plt.legend()
```
<matplotlib.legend.Legend at 0x7f9852d27780>


## Dataset Class in PyTorch
```python
from torch.utils.data import Dataset
```
###### The class below inherits the Dataset class and is used to represent data in a table (x, y)
```python
class ToySet(Dataset):
def __init__(self, length=100, transform=None):
self.x = 2*torch.ones(length, 2)
self.y = torch.ones(length, 1)
self.len = length
self.transform = transform
def __getitem__(self, index):
sample = self.x[index], self.y[index]
if self.transform:
sample = self.transform(sample)
return sample
def __len__(self):
return self.len
```
```python
dataset = ToySet()
for i in range(3):
x, y = dataset[i]
print(i, 'x: {}, y: {}'.format(x, y))
```
0 x: tensor([2., 2.]), y: tensor([1.])
1 x: tensor([2., 2.]), y: tensor([1.])
2 x: tensor([2., 2.]), y: tensor([1.])
###### Making a Transform class for our dataset that transforms our samples
```python
class add_mult(object):
def __init__(self, addx=1, muly=1):
self.addx = addx
self.muly = muly
def __call__(self, sample):
x, y = sample
x += self.addx
y *= self.muly
return x, y
```
```python
a_m = add_mult()
dataset = ToySet(transform=a_m)
dataset[0]
```
(tensor([3., 3.]), tensor([1.]))
```python
# Another transform class to modify our data samples
class mult(object):
def __init__(self, mul=100):
self.mul = mul
def __call__(self, sample):
x, y = sample
x *= self.mul
y *= self.mul
return x, y
```
###### Now, if we need to use multiple transforms to our data samples
```python
from torchvision import transforms
```
```python
# This applies the two transforms one by one to the data samples
data_transform = transforms.Compose([add_mult(), mult()])
data_set = ToySet(transform=data_transform)
data_set[0]
```
(tensor([300., 300.]), tensor([100.]))
| 14.257683 | 138 | 0.625104 | eng_Latn | 0.827552 |
21d3e39fa5b2a91ac47e3c353d5a54dea2fbbc06 | 261 | md | Markdown | _posts/2021-01-08-my-first-blog-post.md | dox1495/github-pages-with-jekyll | e4551e91fc456becf47e21960e0b51843701afc6 | [
"MIT"
] | null | null | null | _posts/2021-01-08-my-first-blog-post.md | dox1495/github-pages-with-jekyll | e4551e91fc456becf47e21960e0b51843701afc6 | [
"MIT"
] | 4 | 2021-01-08T08:50:26.000Z | 2021-01-08T10:10:08.000Z | _posts/2021-01-08-my-first-blog-post.md | dox1495/github-pages-with-jekyll | e4551e91fc456becf47e21960e0b51843701afc6 | [
"MIT"
] | null | null | null | ---
title: "PotatoProgramming"
date: 2021-01-08
---
Draft:
Main Idea:
Make the blog very sleek and minimal with small links for all the topics I wish to discuss
Idea 2:
Have some media to showcase all the important topics and or most relevant topics
| 18.642857 | 92 | 0.735632 | eng_Latn | 0.998464 |
21d40c9d0cf8b3586f653051ad47b3397d6bc9da | 6,002 | md | Markdown | pages/products/flexberry-aspnet/controls/groupedit/fa_age-events.en.md | kn1k/flexberry.github.io | d528f2a30b26ed9d6c4e9dbbda92e4f5938e92de | [
"MIT"
] | null | null | null | pages/products/flexberry-aspnet/controls/groupedit/fa_age-events.en.md | kn1k/flexberry.github.io | d528f2a30b26ed9d6c4e9dbbda92e4f5938e92de | [
"MIT"
] | null | null | null | pages/products/flexberry-aspnet/controls/groupedit/fa_age-events.en.md | kn1k/flexberry.github.io | d528f2a30b26ed9d6c4e9dbbda92e4f5938e92de | [
"MIT"
] | null | null | null | ---
title: События AjaxGroupEdit
sidebar: flexberry-aspnet_sidebar
keywords: CASE Plugins, Flexberry ASP-NET, Web UI (Контролы)
toc: true
permalink: en/fa_age-events.html
lang: en
---
## Описание событий
Все обработчики событий имеют одинаковый тип:
```csharp
/// <summary>
/// Делегат обработчиков событий WGE
/// <summary>
/// <param name="sender">WGE, которому принадлежат события>/param>
/// <param name="args">Параметры события</param>
public delegate void WGEEventHandler(AjaxGroupEdit sender, WGEEventArgs args);
/// <summary>
/// Тип аргумента для событий WGE
/// </summary>
public class WGEEventArgs : CancelEventArgs
{
/// <summary>
/// DataObject
/// </summary>
public DataObject DataObj { get; set; }
/// <summary>
/// Исключение
/// </summary>
public Exception Exception { get; set; }
}
```
Любое событие можно отменить, установив свойство у аргументов `Cancel = true`, т.к. все аргументы наследуются от `CancelEventArgs`.
## Обработка событий AGE
### Событие rowdeleting
Возникает при удалении строки в [AGE](fa_ajax-group-edit.html).
Вызов триггера при удалении строки в [AGE](fa_ajax-group-edit.html):
```xml
<asp:Content ID="Content2" ContentPlaceHolderID="TestContentPlaceHolder" runat="server">
...
<div style="clear: left">
<ac:AjaxGroupEdit ID="ctrlКвартира" runat="server" ReadOnly="false" />
</div>
...
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="TestScriptsPlaceHolder" runat="server">
<script type="text/javascript">
$(function () {
$('#<%= ctrlКвартира.ClientID %>').on('rowdeleting.ajaxgroupedit', function (e, d) {
alert('Удаляем строку.');
});
});
</script>
</asp:Content>
```
### Событие rowdeleted
Возникает после удалении строки в AGE.
Вызов триггера после удаления строки в [AGE](fa_ajax-group-edit.html):
```xml
<asp:Content ID="Content2" ContentPlaceHolderID="TestContentPlaceHolder" runat="server">
...
<div style="clear: left">
<ac:AjaxGroupEdit ID="ctrlКвартира" runat="server" ReadOnly="false" />
</div>
...
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="TestScriptsPlaceHolder" runat="server">
<script type="text/javascript">
$(function () {
$('#<%= ctrlКвартира.ClientID %>').on('rowdeleted.ajaxgroupedit', function (e, d) {
alert('Строка удалена.');
});
});
</script>
</asp:Content>
```
### Событие rowadded
Возникает в момент добавления новой строки в AGE. Может использоваться для операций со строками AGE в момент их добавления в список [детейлов](fd_d-view.html).
Например, для наложения ограничения на строки.
```javascript
/**
* Если добавлена новая строка в АГЕ, сразу назначим limit function.
* @param {Element} row Элемент добавленой строки в DOM.
*/
$('#<%=ctrlCompanyEmployee.ClientID%>').on('rowadded.ajaxgroupedit', function(row) {
$('[id$=ctrlCompany]', row).icsMasterEditorAjaxLookup('updateOptions', { lookup: { LFName: lfName } });
});
```
## Методы
### Удаление всех строк в AGE - `deleteAllRows`.
```xml
<asp:Content ID="Content2" ContentPlaceHolderID="TestContentPlaceHolder" runat="server">
...
<span id="delAllRows" style="cursor: pointer">Удалить все записи</span>
<div style="clear: left">
<ac:AjaxGroupEdit ID="ctrlКвартира" runat="server" ReadOnly="false" />
</div>
...
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="TestScriptsPlaceHolder" runat="server">
<script type="text/javascript">
$(document).ready(function () {
$('span#delAllRows').click(function () {
$('#<%= ctrlКвартира.ClientID %>').ajaxgroupedit('deleteAllRows');
});
});
</script>
</asp:Content>
```
### Получение количества видимых строк в списке - getDataRows
```xml
<asp:Content ID="Content2" ContentPlaceHolderID="TestContentPlaceHolder" runat="server">
...
<div style="clear: left">
<ac:AjaxGroupEdit ID="ctrlПодзадача" runat="server" ReadOnly="false" />
</div>
<button id="getDataRows" onclick="getRows(); return false;">getDataRows</button>
...
</asp:Content>
<asp:Content ContentPlaceHolderID="TestScriptsPlaceHolder" runat="server" >
<script type="text/javascript">
function getRows() {
var data = $('#<%=ctrlПодзадача.ClientID%>').ajaxgroupedit('getDataRows');
if (data.length != 0) {
var result = '';
$.each(data, function(index, value) {
result += value.innerHTML;
});
alert('Записей в списке: ' + data.length + '\n' + result);
} else {
alert('В списке нет записей.');
}
};
</script>
</asp:Content>
```
### Настройка LinkedLookUp в AGE - addDependedLookups
```xml
<asp:Content ID="Content2" ContentPlaceHolderID="TestContentPlaceHolder" runat="server">
...
<div style="clear: left">
<ac:AjaxGroupEdit ID="ctrlTestLookUpD" runat="server" ReadOnly="false" />
</div>
...
</asp:Content>
<asp:Content ID="Content3" ContentPlaceHolderID="TestScriptsPlaceHolder" runat="server">
<script type="text/javascript">
$(function () {
$('#<%=ctrlTestLookUpD.ClientID%>').ajaxgroupedit('addDependedLookups', {
master: '<%=ICSSoft.STORMNET.Information.ExtractPropertyName<WebFormsTestStand.TestLookUpD>(x=>x.TestLookUpA1)%>',
depended: '<%=ICSSoft.STORMNET.Information.ExtractPropertyName<WebFormsTestStand.TestLookUpD>(x=>x.TestLookUpA2)%>',
url: '~/Forms/Controls/AjaxGroupEdit/JavaScriptApiTests/TestLinkedLookUpInAGE.aspx',
method: 'GetPageMethod'
});
});
</script>
</asp:Content>
```
Подробнее об AjaxGroupEdit можно прочитать в данной [статье AjaxGroupEdit](fa_ajax-group-edit.html).
| 31.589474 | 159 | 0.638787 | yue_Hant | 0.428744 |
21d47b07f2a19c60999b26e240770b2ac5cc8472 | 4,811 | md | Markdown | desktop-src/properties/props-system-totalfilesize.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 4 | 2021-07-26T16:18:49.000Z | 2022-02-19T02:00:21.000Z | desktop-src/properties/props-system-totalfilesize.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-04-09T17:00:51.000Z | 2020-04-09T18:30:01.000Z | desktop-src/properties/props-system-totalfilesize.md | dianmsft/win32 | f07b550595a83e44dd2fb6e217525edd10a0341b | [
"CC-BY-4.0",
"MIT"
] | 2 | 2020-07-19T02:58:48.000Z | 2021-03-06T21:09:47.000Z | ---
Description: .
ms.assetid: 10e1f806-e408-48ab-8fe7-1d7a4d41f320
title: System.TotalFileSize
ms.topic: article
ms.date: 05/31/2018
---
# System.TotalFileSize
## Windows 10, version 1703, Windows 10, version 1607, Windows 10, version 1511, Windows 10, version 1507, Windows 8.1, Windows 8, Windows 7
```
propertyDescription
name = System.TotalFileSize
shellPKey = PKEY_TotalFileSize
formatID = 28636AA6-953D-11D2-B5D6-00C04FD918D0
propID = 14
SearchInfo
InInvertedIndex = false
IsColumn = false
typeInfo
type = UInt64
IsInnate = true
EnumeratedList
UseValueForDefault = True
enumRange
name = Empty
minValue = 0
setValue = 0
text = Zero
defineToken = TOTALFILESIZE_EMPTY
mnemonics = empty
enumRange
name = Tiny
minValue = 1
setValue = 1
text = Tiny (0 - 10 KB)
defineToken = TOTALFILESIZE_TINY
mnemonics = tiny
enumRange
name = Small
minValue = 10241
setValue = 10241
text = Small (10 - 100 KB)
defineToken = TOTALFILESIZE_SMALL
mnemonics = small
enumRange
name = Medium
minValue = 102401
setValue = 102401
text = Medium (100 KB - 1 MB)
defineToken = TOTALFILESIZE_MEDIUM
mnemonics = medium
enumRange
name = Large
minValue = 1048577
setValue = 1048577
text = Large (1 - 16 MB)
defineToken = TOTALFILESIZE_LARGE
mnemonics = large
enumRange
name = Huge
minValue = 16777217
setValue = 16777217
text = Huge (16 - 128 MB)
defineToken = TOTALFILESIZE_HUGE
mnemonics = huge
enumRange
name = Gigantic
minValue = 134217729
setValue = 134217729
text = Gigantic (>129 MB)
defineToken = TOTALFILESIZE_GIGANTIC
mnemonics = gigantic
```
## Windows Vista
```
propertyDescription
name = System.TotalFileSize
shellPKey = PKEY_TotalFileSize
formatID = 28636AA6-953D-11D2-B5D6-00C04FD918D0
propID = 14
SearchInfo
InInvertedIndex = false
IsColumn = false
typeInfo
type = UInt64
IsInnate = true
EnumeratedList
UseValueForDefault = True
enumRange
minValue = 0
setValue = 0
text = Zero
mnemonics = empty
enumRange
minValue = 1
setValue = 1
text = Tiny (0 - 10 KB)
mnemonics = tiny
enumRange
minValue = 10241
setValue = 10241
text = Small (10 - 100 KB)
mnemonics = small
enumRange
minValue = 102401
setValue = 102401
text = Medium (100 KB - 1 MB)
mnemonics = medium
enumRange
minValue = 1048577
setValue = 1048577
text = Large (1 - 16 MB)
mnemonics = large
enumRange
minValue = 16777217
setValue = 16777217
text = Huge (16 - 128 MB)
mnemonics = huge
enumRange
minValue = 134217729
setValue = 134217729
text = Gigantic (>129 MB)
mnemonics = gigantic
```
## Remarks
PKEY values are defined in Propkey.h.
## Related topics
<dl> <dt>
[propertyDescription](https://msdn.microsoft.com/library/Bb773880(v=VS.85).aspx)
</dt> <dt>
[searchInfo](https://msdn.microsoft.com/library/Bb773885(v=VS.85).aspx)
</dt> <dt>
[labelInfo](https://msdn.microsoft.com/library/Bb773876(v=VS.85).aspx)
</dt> <dt>
[typeInfo](https://msdn.microsoft.com/library/Bb773889(v=VS.85).aspx)
</dt> <dt>
[displayInfo](https://msdn.microsoft.com/library/Bb773865(v=VS.85).aspx)
</dt> <dt>
[stringFormat](https://msdn.microsoft.com/library/Bb773886(v=VS.85).aspx)
</dt> <dt>
[booleanFormat](https://msdn.microsoft.com/library/Bb773862(v=VS.85).aspx)
</dt> <dt>
[numberFormat](https://msdn.microsoft.com/library/Bb773877(v=VS.85).aspx)
</dt> <dt>
[dateTimeFormat](https://msdn.microsoft.com/library/Bb773863(v=VS.85).aspx)
</dt> <dt>
[enumeratedList](https://msdn.microsoft.com/library/Bb773871(v=VS.85).aspx)
</dt> <dt>
[drawControl](https://msdn.microsoft.com/library/Bb773866(v=VS.85).aspx)
</dt> <dt>
[editControl](https://msdn.microsoft.com/library/Bb773868(v=VS.85).aspx)
</dt> <dt>
[filterControl](https://msdn.microsoft.com/library/Bb773874(v=VS.85).aspx)
</dt> <dt>
[queryControl](https://msdn.microsoft.com/library/Bb773883(v=VS.85).aspx)
</dt> </dl>
| 25.727273 | 140 | 0.577427 | yue_Hant | 0.618456 |
21d4b963b8ed5abb49ebd86ce076b1dfcdbe8615 | 2,318 | md | Markdown | docker/plugins/README.md | dersteppenwolf/GeoHealthCheck | a1f4792dfca8f3774b4af9acd61de66a44103249 | [
"MIT"
] | 58 | 2015-01-16T22:30:03.000Z | 2022-03-17T11:43:34.000Z | docker/plugins/README.md | justb4/GeoHealthCheck | 3e643bdb3fc3732efab8f4bc035f34e421e222b6 | [
"MIT"
] | 335 | 2015-01-21T20:37:13.000Z | 2022-03-28T15:09:37.000Z | docker/plugins/README.md | justb4/GeoHealthCheck | 3e643bdb3fc3732efab8f4bc035f34e421e222b6 | [
"MIT"
] | 67 | 2015-01-20T14:10:19.000Z | 2022-03-22T11:57:18.000Z | # User Plugins
GHC User Plugins can add new Probes and Checks or extend existing ones.
## Via Docker
When using Docker, Plugins need to be available under `/GeoHealthCheck/GeoHealthCheck/plugins` within the
GHC Docker Image or Container.
You can choose to add your Plugins to the GHC Docker Image at build-time
or to the GHC Docker Container at run-time. The latter method is preferred as you can use the standard
GHC Docker Image from Docker Hub. In both cases your
Plugin-modules and classes need to be configured in the GHC `GHC_USER_PLUGINS` Environment variable.
## At Image build-time
The following steps:
- place your Plugins within the sub-directory `user`
- do regular Docker build `docker build -t geopython/geohealthcheck .`
During the build Docker will `ADD` (copy) this dir to `/plugins` within the GHC Docker Image.
The [install.sh](../install.sh) script will then move `/plugins`
to the app-dir `/GeoHealthCheck/GeoHealthCheck/plugins`.
## At Container run-time (preferred)
The following steps:
- place your Plugins within this directory (or any other dir on your host)
- make a Docker Volume mapping from this dir to the Container internal dir `/plugins`
- specify your Plugins via Container Environment as `GHC_USER_PLUGINS: (comma-separated string of modules and/or classes)`
- within `GHC_USER_PLUGINS` the Python package `GeoHealthCheck.plugins` is needed as prefix
Example via [docker-compose.yml](../compose/docker-compose.yml):
```
services:
geohealthcheck:
image: geopython/geohealthcheck:latest
ports:
- 8083:80
# Override settings to enable email notifications
environment:
GHC_USER_PLUGINS: 'GeoHealthCheck.plugins.user.myplugins'
.
.
volumes:
- ghc_sqlitedb:/GeoHealthCheck/DB
- Path on the host, relative to the Compose file
- ./../plugins:/plugins:ro
```
Or if you run the Image via `docker run` :
```
docker run -d --name GeoHealthCheck -p 8083:80 \
-v ghc_sqlitedb:/GeoHealthCheck/DB \
-v ./plugins:/plugins:ro \
-e 'GHC_USER_PLUGINS=GeoHealthCheck.plugins.user.myplugins'
geopython/geohealthcheck:latest
```
When the Container starts it will copy all content under
`/plugins` to the internal dir `/GeoHealthCheck/GeoHealthCheck/plugins`.
| 34.088235 | 122 | 0.731234 | eng_Latn | 0.880586 |
21d4ba5cd7a5c22c16b272ea116e749ad5b2a56f | 3,882 | md | Markdown | README.md | qq575792372/lime-util | caa10348c021d8ff5faa8a220e399470e13cfe7f | [
"MIT"
] | null | null | null | README.md | qq575792372/lime-util | caa10348c021d8ff5faa8a220e399470e13cfe7f | [
"MIT"
] | null | null | null | README.md | qq575792372/lime-util | caa10348c021d8ff5faa8a220e399470e13cfe7f | [
"MIT"
] | null | null | null | # lime-util 前端模块化 JavaScript 工具库
[](https://github.com/qq575792372/lime-util)
[](LICENSE)
🔥 **lime-util** 🔥 是一个 前端模块化 **JavaScript** 工具库,目前共有 **245+** ⚡️ 个 Api 方法,包含了开发中经常用到的一些模块方法集合,如`字符串`,`数组`,`浏览器缓存`,`浏览器Cookie`,`Dom处理`,`日期工具`,`数学计算`,`文件处理`,`正则校验`,`微信小程序工具库`等等。
### 📦 安装
#### 1. 使用 npm 安装
```bash
npm i @lime-util/util --save
```
#### 2. 或在浏览器中直接使用
```html
<!-- 将工具包下面 dist/index.js 文件拷贝出来引入路径 -->
<script src="dist/index.js"></script>
<!-- 使用 -->
<script>
LimeUtil.loadedTest();
</script>
```
### 🔨 构建
工具库架构使用 `pnpm` 做为包管理工具,在 `packages` 目录中分包为多个工程模块来构建。如果需要 `fork` 二次开发,需要本地安装 `pnpm`,直接在 `packages` 根目录下面加入新的模块目录即可。
```bash
# 构建整合工具的包
pnpm build
# 构建核心工具的包
pnpm build:core
# 构建日期工具的包
pnpm build:date
```
### 🎨 使用
#### 1. es6 方式
```javascript
// 全部引入
import LimeUtil from "@lime-util/util";
LimeUtil.loadedTest();
// 按需引入
import { loadedTest } from "@lime-util/util";
loadedTest();
```
#### 2. require 方式
```javascript
// 全部引入
const LimeUtil = require("@lime-util/util");
LimeUtil.loadedTest();
// 按需引入
const { loadedTest } = require("@lime-util/util");
loadedTest();
```
### 📚 模块分包
`lime-util` 分为`lime-util 整合工具库`,`lime-core 核心工具库` 或 `lime-date 日期工具库`,`整合工具库`包含所有的内容,如果你想轻量级使用,可以按需只使用`核心库`和`日期库`。
1. [lime-util 整合工具库(传送阵)](https://github.com/qq575792372/lime-util)
2. [lime-core 核心工具库(传送阵)](https://github.com/qq575792372/lime-util/tree/master/packages/core)
3. [lime-date 日期工具库(传送阵)](https://github.com/qq575792372/lime-util/tree/master/packages/date)
### 📝API 文档
1. [常量集合 Constant](https://github.com/qq575792372/lime-util/blob/master/doc/constant.md)
2. [字符串 String](https://github.com/qq575792372/lime-util/blob/master/doc/string.md)
3. [数字 Number](https://github.com/qq575792372/lime-util/blob/master/doc/number.md)
4. [数组 Array](https://github.com/qq575792372/lime-util/blob/master/doc/array.md)
5. [对象 Object](https://github.com/qq575792372/lime-util/blob/master/doc/object.md)
6. [函数 Function](https://github.com/qq575792372/lime-util/blob/master/doc/function.md)
7. [日期 Date](https://github.com/qq575792372/lime-util/blob/master/doc/date.md)
8. [正则 Regexp](https://github.com/qq575792372/lime-util/blob/master/doc/regexp.md)
9. [数学 Math](https://github.com/qq575792372/lime-util/blob/master/doc/math.md)
10. [随机数 Random](https://github.com/qq575792372/lime-util/blob/master/doc/random.md)
11. [文件 File](https://github.com/qq575792372/lime-util/blob/master/doc/file.md)
12. [颜色 Color](https://github.com/qq575792372/lime-util/blob/master/doc/color.md)
13. [校验 Validate](https://github.com/qq575792372/lime-util/blob/master/doc/validate.md)
14. [键盘 Keycode](https://github.com/qq575792372/lime-util/blob/master/doc/keycode.md)
15. [浏览器 Url](https://github.com/qq575792372/lime-util/blob/master/doc/browser-url.md)
16. [浏览器 Storage](https://github.com/qq575792372/lime-util/blob/master/doc/browser-storage.md)
17. [浏览器 Cookie](https://github.com/qq575792372/lime-util/blob/master/doc/browser-cookie.md)
18. [浏览器 Dom](https://github.com/qq575792372/lime-util/blob/master/doc/browser-dom.md)
19. [浏览器 Device](https://github.com/qq575792372/lime-util/blob/master/doc/browser-device.md)
20. [微信小程序工具库 Weapp](https://github.com/qq575792372/lime-util/blob/master/doc/weapp.md)
### 🔖Git 提交规范
#### 😝 主要
`fix`: 修复 bug
`add`: 增加功能
`del`: 删除功能
`update`: 更新功能
#### 😉 次要
`docs`: 文档更新
`merge`: 合并分支
`style`: 颜色、字体大小等变动(不影响代码运行)
`build`: 构造工具或相关依赖变更
`refactor`: 代码重构
`revert`: 撤销,版本回退
#### 😳 一般
`test`: 添加或修改测试
`perf`: 提高性能的改动
`chore`: 构建过程或辅助工具的变更
`ci`: CI 配置,脚本文件等改动
```bash
# <type>后面英文冒号,并且后跟一个空格
git commit -m <type>(<scope>): <description>
# 举个栗子
git commit -m 'fix: 修复了xxx问题'
git commit -m 'fix(string): 修复了string工具类的xxx问题'
git commit -m 'docs: 更新了字符串模块文档'
```
| 26.958333 | 174 | 0.702731 | yue_Hant | 0.569641 |
21d4cdd206cf0dab4120feb4a9f20f3693b7af12 | 10,765 | md | Markdown | articles/ansible/ansible-create-configure-vmss.md | ZexDC/azure-docs | edf98652ceca4fbbd9c3ac156d87c7b985a4587d | [
"CC-BY-4.0"
] | null | null | null | articles/ansible/ansible-create-configure-vmss.md | ZexDC/azure-docs | edf98652ceca4fbbd9c3ac156d87c7b985a4587d | [
"CC-BY-4.0"
] | null | null | null | articles/ansible/ansible-create-configure-vmss.md | ZexDC/azure-docs | edf98652ceca4fbbd9c3ac156d87c7b985a4587d | [
"CC-BY-4.0"
] | null | null | null | ---
title: Create virtual machine scale sets in Azure using Ansible
description: Learn how to use Ansible to create and configure a virtual machine scale set in Azure
ms.service: ansible
keywords: ansible, azure, devops, bash, playbook, virtual machine, virtual machine scale set, vmss
author: tomarchermsft
manager: jeconnoc
ms.author: tarcher
ms.topic: tutorial
ms.date: 08/24/2018
---
# Create virtual machine scale sets in Azure using Ansible
Ansible allows you to automate the deployment and configuration of resources in your environment. You can use Ansible to manage your virtual machine scale set (VMSS) in Azure, the same as you would manage any other Azure resource. This article shows you how to use Ansible to create and scale out a virtual machine scale set.
## Prerequisites
- **Azure subscription** - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
- [!INCLUDE [ansible-prereqs-for-cloudshell-use-or-vm-creation1.md](../../includes/ansible-prereqs-for-cloudshell-use-or-vm-creation1.md)] [!INCLUDE [ansible-prereqs-for-cloudshell-use-or-vm-creation2.md](../../includes/ansible-prereqs-for-cloudshell-use-or-vm-creation2.md)]
> [!Note]
> Ansible 2.6 is required to run the following the sample playbooks in this tutorial.
## Create a VMSS
This section presents a sample Ansible playbook that defines the following resources:
- **Resource group** into which all of your resources will be deployed
- **Virtual network** in the 10.0.0.0/16 address space
- **Subnet** within the virtual network
- **Public IP address** that allows you to access resources across the Internet
- **Network security group** that controls the flow of network traffic in and out of your virtual machine scale set
- **Load balancer** that distributes traffic across a set of defined VMs using load balancer rules
- **Virtual machine scale set** that uses all the created resources
Enter your own password for the *admin_password* value.
```yml
- hosts: localhost
vars:
resource_group: myResourceGroup
vmss_name: myVMSS
vmss_lb_name: myVMSSlb
location: eastus
admin_username: azureuser
admin_password: "your_password"
tasks:
- name: Create a resource group
azure_rm_resourcegroup:
name: "{{ resource_group }}"
location: "{{ location }}"
- name: Create virtual network
azure_rm_virtualnetwork:
resource_group: "{{ resource_group }}"
name: "{{ vmss_name }}"
address_prefixes: "10.0.0.0/16"
- name: Add subnet
azure_rm_subnet:
resource_group: "{{ resource_group }}"
name: "{{ vmss_name }}"
address_prefix: "10.0.1.0/24"
virtual_network: "{{ vmss_name }}"
- name: Create public IP address
azure_rm_publicipaddress:
resource_group: "{{ resource_group }}"
allocation_method: Static
name: "{{ vmss_name }}"
- name: Create Network Security Group that allows SSH
azure_rm_securitygroup:
resource_group: "{{ resource_group }}"
name: "{{ vmss_name }}"
rules:
- name: SSH
protocol: Tcp
destination_port_range: 22
access: Allow
priority: 1001
direction: Inbound
- name: Create a load balancer
azure_rm_loadbalancer:
name: "{{ vmss_lb_name }}"
location: "{{ location }}"
resource_group: "{{ resource_group }}"
public_ip: "{{ vmss_name }}"
probe_protocol: Tcp
probe_port: 8080
probe_interval: 10
probe_fail_count: 3
protocol: Tcp
load_distribution: Default
frontend_port: 80
backend_port: 8080
idle_timeout: 4
natpool_frontend_port_start: 50000
natpool_frontend_port_end: 50040
natpool_backend_port: 22
natpool_protocol: Tcp
- name: Create VMSS
azure_rm_virtualmachine_scaleset:
resource_group: "{{ resource_group }}"
name: "{{ vmss_name }}"
vm_size: Standard_DS1_v2
admin_username: "{{ admin_username }}"
admin_password: "{{ admin_password }}"
ssh_password_enabled: true
capacity: 2
virtual_network_name: "{{ vmss_name }}"
subnet_name: "{{ vmss_name }}"
upgrade_policy: Manual
tier: Standard
managed_disk_type: Standard_LRS
os_disk_caching: ReadWrite
image:
offer: UbuntuServer
publisher: Canonical
sku: 16.04-LTS
version: latest
load_balancer: "{{ vmss_lb_name }}"
data_disks:
- lun: 0
disk_size_gb: 20
managed_disk_type: Standard_LRS
caching: ReadOnly
- lun: 1
disk_size_gb: 30
managed_disk_type: Standard_LRS
caching: ReadOnly
```
To create the virtual machine scale set environment with Ansible, save the preceding playbook as `vmss-create.yml`, or [download the sample Ansible playbook](https://github.com/Azure-Samples/ansible-playbooks/blob/master/vmss/vmss-create.yml).
To run the Ansible playbook, use the **ansible-playbook** command as follows:
```bash
ansible-playbook vmss-create.yml
```
After running the playbook, output similar to the following example shows that the virtual machine scale set has been successfully created:
```Output
PLAY [localhost] ***********************************************************
TASK [Gathering Facts] *****************************************************
ok: [localhost]
TASK [Create a resource group] ****************************************************************************
changed: [localhost]
TASK [Create virtual network] ****************************************************************************
changed: [localhost]
TASK [Add subnet] **********************************************************
changed: [localhost]
TASK [Create public IP address] ****************************************************************************
changed: [localhost]
TASK [Create Network Security Group that allows SSH] ****************************************************************************
changed: [localhost]
TASK [Create a load balancer] ****************************************************************************
changed: [localhost]
TASK [Create VMSS] *********************************************************
changed: [localhost]
PLAY RECAP *****************************************************************
localhost : ok=8 changed=7 unreachable=0 failed=0
```
## Scale out a VMSS
The created virtual machine scale set has two instances. If you navigate to the virtual machine scale set in the Azure portal, you see **Standard_DS1_v2 (2 instances)**. You can also use the [Azure Cloud Shell](https://shell.azure.com/) by running the following command within the Cloud Shell:
```azurecli-interactive
az vmss show -n myVMSS -g myResourceGroup --query '{"capacity":sku.capacity}'
```
You see results similar to the following output:
```bash
{
"capacity": 2,
}
```
Now, let's scale from two instances to three instances. The following Ansible playbook code retrieves information about the virtual machine scale, and changes its capacity from two to three.
```yml
- hosts: localhost
vars:
resource_group: myResourceGroup
vmss_name: myVMSS
tasks:
- name: Get scaleset info
azure_rm_virtualmachine_scaleset_facts:
resource_group: "{{ resource_group }}"
name: "{{ vmss_name }}"
format: curated
register: output_scaleset
- name: Dump scaleset info
debug:
var: output_scaleset
- name: Modify scaleset (change the capacity to 3)
set_fact:
body: "{{ output_scaleset.ansible_facts.azure_vmss[0] | combine({'capacity': 3}, recursive=True) }}"
- name: Update something in that VMSS
azure_rm_virtualmachine_scaleset: "{{ body }}"
```
To scale out the virtual machine scale set you created, save the preceding playbook as `vmss-scale-out.yml` or [download the sample Ansible playbook](https://github.com/Azure-Samples/ansible-playbooks/blob/master/vmss/vmss-scale-out.yml)).
The following command will run the playbook:
```bash
ansible-playbook vmss-scale-out.yml
```
The output from running the Ansible playbook shows that the virtual machine scale set has been successfully scaled out:
```Output
PLAY [localhost] **********************************************************
TASK [Gathering Facts] ****************************************************
ok: [localhost]
TASK [Get scaleset info] ***************************************************************************
ok: [localhost]
TASK [Dump scaleset info] ***************************************************************************
ok: [localhost] => {
"output_scaleset": {
"ansible_facts": {
"azure_vmss": [
{
......
}
]
},
"changed": false,
"failed": false
}
}
TASK [Modify scaleset (set upgradePolicy to Automatic and capacity to 3)] ***************************************************************************
ok: [localhost]
TASK [Update something in that VMSS] ***************************************************************************
changed: [localhost]
PLAY RECAP ****************************************************************
localhost : ok=5 changed=1 unreachable=0 failed=0
```
If navigate to the virtual machine scale set you configured in the Azure portal, you see **Standard_DS1_v2 (3 instances)**. You can also verify the change with the [Azure Cloud Shell](https://shell.azure.com/) by running the following command:
```azurecli-interactive
az vmss show -n myVMSS -g myResourceGroup --query '{"capacity":sku.capacity}'
```
The result of running the command in Cloud Shell shows that three instances now exist.
```bash
{
"capacity": 3,
}
```
## Next steps
> [!div class="nextstepaction"]
> [Deploy applications to virtual machine scale sets using Ansible](https://docs.microsoft.com/azure/ansible/ansible-deploy-app-vmss)
> [Automatically scale a virtual machine scale set using Ansible](https://docs.microsoft.com/azure/ansible/ansible-auto-scale-vmss) | 39.145455 | 326 | 0.589224 | eng_Latn | 0.701572 |
21d4ec09f987fb7e04f0d98279e8db5b8ae4dcfd | 4,989 | md | Markdown | _posts/2018-03-01-github-ddos-attack.md | kylec32/postmortemstories | fb2e1174640923a7e304fe93fafec1cedf5c6ba3 | [
"MIT"
] | null | null | null | _posts/2018-03-01-github-ddos-attack.md | kylec32/postmortemstories | fb2e1174640923a7e304fe93fafec1cedf5c6ba3 | [
"MIT"
] | null | null | null | _posts/2018-03-01-github-ddos-attack.md | kylec32/postmortemstories | fb2e1174640923a7e304fe93fafec1cedf5c6ba3 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Github DDoS Attack"
description: "Explanation of What Happened During the March 1st 2018 Github DDoS Memcachd attack."
tags: [github, cloudflare, ddos, memcached]
comments: false
author: Sam Kottler
author_link: https://github.com/skottler
---
_[Originally Posted On the GitHub Engineering Blog](https://githubengineering.com/ddos-incident-report/)_
On Wednesday, February 28, 2018 GitHub.com was unavailable from 17:21 to 17:26 UTC and intermittently unavailable from 17:26 to 17:30 UTC due to a distributed denial-of-service (DDoS) attack. We understand how much you rely on GitHub and we know the availability of our service is of critical importance to our users. To note, at no point was the confidentiality or integrity of your data at risk. We are sorry for the impact of this incident and would like to describe the event, the efforts we’ve taken to drive availability, and how we aim to improve response and mitigation moving forward.
## Background
Cloudflare described an amplification vector using memcached over UDP in their blog post this week, [“Memcrashed - Major amplification attacks from UDP port 11211”](https://blog.cloudflare.com/memcrashed-major-amplification-attacks-from-port-11211/). The attack works by abusing memcached instances that are inadvertently accessible on the public internet with UDP support enabled. Spoofing of IP addresses allows memcached’s responses to be targeted against another address, like ones used to serve GitHub.com, and send more data toward the target than needs to be sent by the unspoofed source. The vulnerability via misconfiguration described in the post is somewhat unique amongst that class of attacks because the amplification factor is up to 51,000, meaning that for each byte sent by the attacker, up to 51KB is sent toward the target.
Over the past year we have deployed additional transit to our facilities. We’ve more than doubled our transit capacity during that time, which has allowed us to withstand certain volumetric attacks without impact to users. We’re continuing to deploy additional transit capacity and [develop robust peering relationships across a diverse set of exchanges](https://githubengineering.com/transit-and-peering-how-your-requests-reach-github/). Even still, attacks like this sometimes require the help of partners with larger transit networks to provide blocking and filtering.
The incident
Between 17:21 and 17:30 UTC on February 28th we identified and mitigated a significant volumetric DDoS attack. The attack originated from over a thousand different autonomous systems (ASNs) across tens of thousands of unique endpoints. It was an amplification attack using the memcached-based approach described above that peaked at 1.35Tbps via 126.9 million packets per second.
At 17:21 UTC our network monitoring system detected an anomaly in the ratio of ingress to egress traffic and notified the on-call engineer and others in our chat system. This graph shows inbound versus outbound throughput over transit links:

Given the increase in inbound transit bandwidth to over 100Gbps in one of our facilities, the decision was made to move traffic to Akamai, who could help provide additional edge network capacity. At 17:26 UTC the command was initiated via our ChatOps tooling to withdraw BGP announcements over transit providers and announce AS36459 exclusively over our links to Akamai. Routes reconverged in the next few minutes and access control lists mitigated the attack at their border. Monitoring of transit bandwidth levels and load balancer response codes indicated a full recovery at 17:30 UTC. At 17:34 UTC routes to internet exchanges were withdrawn as a follow-up to shift an additional 40Gbps away from our edge.

The first portion of the attack peaked at 1.35Tbps and there was a second 400Gbps spike a little after 18:00 UTC. This graph provided by Akamai shows inbound traffic in bits per second that reached their edge:

## Next steps
Making GitHub’s edge infrastructure more resilient to current and future conditions of the internet and less dependent upon human involvement requires better automated intervention. We’re investigating the use of our monitoring infrastructure to automate enabling DDoS mitigation providers and will continue to measure our response times to incidents like this with a goal of reducing mean time to recovery (MTTR).
We’re going to continue to expand our edge network and strive to identify and mitigate new attack vectors before they affect your workflow on GitHub.com.
We know how much you rely on GitHub for your projects and businesses to succeed. We will continue to analyze this and other events that impact our availability, build better detection systems, and streamline response. | 118.785714 | 842 | 0.813389 | eng_Latn | 0.998863 |
21d51500f10eda0204a9446178ee09d28f49279f | 192 | md | Markdown | README.md | fizzo999/serverless-api | 1b10ae906b84751c0ea6be5e6b00e064aef463a7 | [
"MIT"
] | null | null | null | README.md | fizzo999/serverless-api | 1b10ae906b84751c0ea6be5e6b00e064aef463a7 | [
"MIT"
] | null | null | null | README.md | fizzo999/serverless-api | 1b10ae906b84751c0ea6be5e6b00e064aef463a7 | [
"MIT"
] | null | null | null | # serverless-api
AWS API gateway connected to AWS Lambda function and AWS dynamoDb using Dynamoose
{
"body": "{\"\_id\": \"999\", \"name\": \"test_user999\", \"phone\": \"123 456 7890\"}"
}
| 24 | 86 | 0.640625 | eng_Latn | 0.576984 |
21d57b7ca3c3f8cdbcacb479af41aefc9ab4b388 | 1,735 | md | Markdown | docs/framework/unmanaged-api/hosting/iclrsyncmanager-interface.md | sheng-jie/docs.zh-cn-1 | e825f92bb3665ff8e05a0d627bb65a9243b39992 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/hosting/iclrsyncmanager-interface.md | sheng-jie/docs.zh-cn-1 | e825f92bb3665ff8e05a0d627bb65a9243b39992 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/hosting/iclrsyncmanager-interface.md | sheng-jie/docs.zh-cn-1 | e825f92bb3665ff8e05a0d627bb65a9243b39992 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: ICLRSyncManager 接口
ms.date: 03/30/2017
api_name:
- ICLRSyncManager
api_location:
- mscoree.dll
api_type:
- COM
f1_keywords:
- ICLRSyncManager
helpviewer_keywords:
- ICLRSyncManager interface [.NET Framework hosting]
ms.assetid: a49f9d80-1c76-4ddd-8c49-34f913a5c596
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: 1b87ccc3d6c3e957d0384499048032e35247093a
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/04/2018
ms.locfileid: "33436476"
---
# <a name="iclrsyncmanager-interface"></a>ICLRSyncManager 接口
定义允许主机以获取有关请求的任务的信息并在其同步实现检测死锁的方法。
## <a name="methods"></a>方法
|方法|描述|
|------------|-----------------|
|[CreateRWLockOwnerIterator 方法](iclrsyncmanager-createrwlockowneriterator-method.md)|公共语言运行时 (CLR) 创建主机可用于确定的读取器 / 编写器锁等待的任务集的迭代器的请求。|
|[DeleteRWLockOwnerIterator 方法](iclrsyncmanager-deleterwlockowneriterator-method.md)|请求 CLR 销毁由调用的迭代器`CreateRWLockOwnerIterator`。|
|[GetMonitorOwner 方法](iclrsyncmanager-getmonitorowner-method.md)|获取拥有指定的监视器的任务。|
|[GetRWLockOwnerNext 方法](iclrsyncmanager-getrwlockownernext-method.md)|获取在当前的读取器 / 编写器锁等待下一个任务。|
## <a name="requirements"></a>要求
**平台:** 请参阅[系统要求](../../get-started/system-requirements.md)。
**标头:** MSCorEE.h
**库:** 作为 MSCorEE.dll 中的资源
**.NET framework 版本:** [!INCLUDE[net_current_v20plus](../../../../includes/net-current-v20plus-md.md)]
## <a name="see-also"></a>请参阅
<xref:System.Threading.Thread>
[IHostSyncManager 接口](ihostsyncmanager-interface.md)
[托管和非托管线程处理](https://msdn.microsoft.com/library/db425c20-4b2f-4433-bf96-76071c7881e5(v=vs.100))
[承载接口](hosting-interfaces.md)
| 33.365385 | 136 | 0.73487 | yue_Hant | 0.289444 |
21d57ffc42fb868da9f5147276bb20a454b5ffe9 | 34 | md | Markdown | README.md | JunedQureshi/LuaProgram | 1738aeb3285e0292e8dbbf98a8e1218be3b5e6a4 | [
"MIT"
] | null | null | null | README.md | JunedQureshi/LuaProgram | 1738aeb3285e0292e8dbbf98a8e1218be3b5e6a4 | [
"MIT"
] | null | null | null | README.md | JunedQureshi/LuaProgram | 1738aeb3285e0292e8dbbf98a8e1218be3b5e6a4 | [
"MIT"
] | null | null | null | # LuaProgram
Lua Program to learn
| 11.333333 | 20 | 0.794118 | eng_Latn | 0.958363 |
21d61759f93cbd299f584fb880b48db36c22bebe | 2,751 | md | Markdown | Initial_Access/PasswordFinder/PasswordFinder_Mindmap.md | CyberThulhu22/Python-Pentest-TTPs | 483547400555cbd1ff502cb2860cba155a394f34 | [
"MIT"
] | 2 | 2022-02-14T18:54:01.000Z | 2022-02-23T03:53:13.000Z | Initial_Access/PasswordFinder/PasswordFinder_Mindmap.md | CyberThulhu22/Python-Pentest-TTPs | 483547400555cbd1ff502cb2860cba155a394f34 | [
"MIT"
] | null | null | null | Initial_Access/PasswordFinder/PasswordFinder_Mindmap.md | CyberThulhu22/Python-Pentest-TTPs | 483547400555cbd1ff502cb2860cba155a394f34 | [
"MIT"
] | 1 | 2022-01-05T04:19:05.000Z | 2022-01-05T04:19:05.000Z | # **Project 3 Mind Map**
## Brute Force!
### Tasking at Hand
- [x] Beginner Task: Single User / Multi Passwords
- [x] Intermediate Task: Single Password / Multi Users
- [ ] Traditional Brute Force: "AAA", "AAB", "AAC"
- [x] Expert Task: Add Time Limits / Login Attempts ( 5 Attempts per 60 Seconds )
### Initial Assumptions
Need a Brute Force Component
Need a Passowrd Spray Component
Need a Time Component
Python version of Hydra or Other Password Tools
### Basic Ideas
- Test Single User / Single Password
- Test Single User / Multiple Passwords
- Optional: Limit?
- Optional: True Bruteforce?
- Optional: List of Passwords?
- Test Multiple Users / Single Password
- Test Multiple Users / Multiple Passwords
- Optional: Limit?
- Optional: True Bruteforce? (This would be more like a Hail Mary... Not Ideal.. Maybe dont Implement )
- Optional: List of Users
- Optional: List of Passowords
- Optional: Multiple Protocols? (Not an Original Tasking; SSH, FTP, SMB, HTTP, etc)
- Optional: Auto Mangle? (Not an Original Tasking)
- Optional: Automatic Shell Option? (Would need to Dig into how this could work; May need to break up code into different modules)
### Question to Assumptions
- [ ] How do I Interact with SSH using Python with built-ins? (Sockets?)
- [ ] When interacting with diff SSH, how do I account for ea ver., custom prompts, etc? ( Expect? -> Is Expect built-in)
- [ ] What do I make into an object? if at all? ( The connection to SSH itself? )
- [ ] How do I implement threading? Should I?
### Pseudo Coding - What will it look like?
``` py
# Imports
import sys, time, socket, argparse, ipaddress
from threading import Thread
from datetime import datetime
try: import paramiko except ImportError: exit
# Initial Variables
# Instantiate Argparse
parser = argparse.ArgumentParser()
pgroup = parser.add_mutually_exclusive_group("What Mode?")
pgroup.add_arg("-bf") # Brute Force Mode
pgroup.add_arg("-ps") # Pass Spray Mode
parser.add_arg("-u") # Username
parser.add_arg("-c") # Password
parser.add_arg("-i") # IP Address
parser.add_arg("-p") # port
parser.add_arg("-t") # Timeout
parser.add_arg("-q") # Quit on Success
parser.add_arg("-o") # Output File
parser.add_arg("-v") # Verbose
def verbosity_checker(input_string) -> str:
class SSHConnection:
def __init__(self, ipaddr, username, password, timeout)
def check_ipaddress(self) -> bool:
def test_ip_connection(self) -> bool:
def test_port_open(self) -> bool:
def ssh_connection(self) -> None:
def open_file(filename) -> list:
def output_results(results, filename) -> str:
def main() -> None:
check_ipaddress()
test_ip_connection()
test_port_open()
ssh_connection()
if __name__ == "__main__":
try:
main()
exit
except:
exit
``` | 29.902174 | 130 | 0.719375 | eng_Latn | 0.623005 |
21d663cc67ba4783bfe979eeb4e0e4fd1e850d0d | 64 | md | Markdown | repository/Swank-Client.package/SwankServer.class/README.md | charcodelimit/swankclient-smalltalk | 5128b0438c86e822ad0efc24ff5e673810d9c0b7 | [
"Apache-2.0"
] | 2 | 2019-01-04T02:44:25.000Z | 2019-05-19T18:52:02.000Z | repository/Swank-Client.package/SwankServer.class/README.md | charcodelimit/swankclient-smalltalk | 5128b0438c86e822ad0efc24ff5e673810d9c0b7 | [
"Apache-2.0"
] | null | null | null | repository/Swank-Client.package/SwankServer.class/README.md | charcodelimit/swankclient-smalltalk | 5128b0438c86e822ad0efc24ff5e673810d9c0b7 | [
"Apache-2.0"
] | null | null | null | Contains all information necessary to connect to a Swank server
| 32 | 63 | 0.84375 | eng_Latn | 0.999712 |
21d69da1e97fe38baef01ab98653f9570e2d0f71 | 53,366 | md | Markdown | docs/t-sql/statements/alter-database-transact-sql-file-and-filegroup-options.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/statements/alter-database-transact-sql-file-and-filegroup-options.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/t-sql/statements/alter-database-transact-sql-file-and-filegroup-options.md | IrvinDominin/sql-docs.it-it | 4b82830a24c29e5486f950728a69ddb46cb4c874 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Opzioni per file e filegroup ALTER DATABASE (Transact-SQL) | Microsoft Docs
ms.custom: ''
ms.date: 07/03/2018
ms.prod: sql
ms.prod_service: sql-database
ms.reviewer: ''
ms.suite: sql
ms.technology: t-sql
ms.tgt_pltfrm: ''
ms.topic: language-reference
f1_keywords:
- ADD FILE
- ADD_FILE_TSQL
dev_langs:
- TSQL
helpviewer_keywords:
- deleting files
- removing files
- deleting filegroups
- file modifications [SQL Server]
- databases [SQL Server], size
- relocating files
- databases [SQL Server], modifying
- file additions [SQL Server], ALTER DATABASE
- file moving [SQL Server]
- default filegroups
- ALTER DATABASE statement, files and filegroups
- initializing files [SQL Server]
- database files [SQL Server]
- moving files
- filegroups [SQL Server], deleting
- adding filegroups
- adding files
- database filegroups [SQL Server]
- adding log files
- modifying files
- filegroups [SQL Server], adding
- file initialization [SQL Server]
- files [SQL Server], deleting
- files [SQL Server], adding
- databases [SQL Server], moving
ms.assetid: 1f635762-f7aa-4241-9b7a-b51b22292b07
caps.latest.revision: 61
author: CarlRabeler
ms.author: carlrab
manager: craigg
monikerRange: =azuresqldb-mi-current||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017
ms.openlocfilehash: 2cc06595f2827704009f96b4a7f7c047e5c27c28
ms.sourcegitcommit: bab5f52b76ac53d0885683b7c39a808a41d93cfe
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 09/07/2018
ms.locfileid: "44090001"
---
# <a name="alter-database-transact-sql-file-and-filegroup-options"></a>Opzioni per file e filegroup ALTER DATABASE (Transact-SQL)
Modifica i file e i filegroup associati al database. Consente inoltre di aggiungere file e filegroup a un database, di rimuoverli da esso e di modificare gli attributi di un database o i relativi file e filegroup. Per altre opzioni di ALTER DATABASE, vedere [ALTER DATABASE](../../t-sql/statements/alter-database-transact-sql.md).
Per altre informazioni sulle convenzioni di sintassi, vedere [Convenzioni della sintassi Transact-SQL](../../t-sql/language-elements/transact-sql-syntax-conventions-transact-sql.md).
## <a name="click-a-product"></a>Fare clic su un prodotto.
Nella riga seguente fare clic su qualsiasi nome di prodotto. Viene visualizzato contenuto diverso in questa pagina Web, appropriato per il prodotto su cui si fa clic.
::: moniker range=">=sql-server-2016||>=sql-server-linux-2017||=sqlallproducts-allversions"
> [!div class="mx-tdCol2BreakAll"]
> |||
> |-|-|-|
> |**_\* SQL Server \*_**<br /> |[Database SQL<br />Istanza gestita](alter-database-transact-sql-file-and-filegroup-options.md?view=azuresqldb-mi-current)|
# <a name="sql-server"></a>SQL Server
## <a name="syntax"></a>Sintassi
```
ALTER DATABASE database_name
{
<add_or_modify_files>
| <add_or_modify_filegroups>
}
[;]
<add_or_modify_files>::=
{
ADD FILE <filespec> [ ,...n ]
[ TO FILEGROUP { filegroup_name } ]
| ADD LOG FILE <filespec> [ ,...n ]
| REMOVE FILE logical_file_name
| MODIFY FILE <filespec>
}
<filespec>::=
(
NAME = logical_file_name
[ , NEWNAME = new_logical_name ]
[ , FILENAME = {'os_file_name' | 'filestream_path' | 'memory_optimized_data_path' } ]
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB| % ] ]
[ , OFFLINE ]
)
<add_or_modify_filegroups>::=
{
| ADD FILEGROUP filegroup_name
[ CONTAINS FILESTREAM | CONTAINS MEMORY_OPTIMIZED_DATA ]
| REMOVE FILEGROUP filegroup_name
| MODIFY FILEGROUP filegroup_name
{ <filegroup_updatability_option>
| DEFAULT
| NAME = new_filegroup_name
| { AUTOGROW_SINGLE_FILE | AUTOGROW_ALL_FILES }
}
}
<filegroup_updatability_option>::=
{
{ READONLY | READWRITE }
| { READ_ONLY | READ_WRITE }
}
```
## <a name="arguments"></a>Argomenti
**\<add_or_modify_files>::=**
Specifica i file da aggiungere, rimuovere o modificare.
*database_name*
Nome del database da modificare.
ADD FILE
Aggiunge un file al database.
TO FILEGROUP { *filegroup_name* }
Specifica il filegroup in cui aggiungere il file specificato. Per visualizzare i filegroup correnti e determinare il filegroup attualmente predefinito, usare la vista del catalogo [sys.filegroups](../../relational-databases/system-catalog-views/sys-filegroups-transact-sql.md).
ADD LOG FILE
Aggiunge un file di log al database specificato.
REMOVE FILE *logical_file_name*
Rimuove la descrizione del file logico da un'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] ed elimina il file fisico. Il file può essere rimosso solo se è vuoto.
*logical_file_name*
Nome logico utilizzato in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] per fare riferimento al file.
> [!WARNING]
> Sarà possibile rimuovere un file di database con backup FILE_SNAPSHOT associati, ma non saranno eliminati tutti gli snapshot associati per evitare di invalidare i backup che fanno riferimento al file di database. Il file verrà troncato, ma non sarà eliminato fisicamente in modo da mantenere inalterati i backup FILE_SNAPSHOT. Per altre informazioni, vedere [Backup e ripristino di SQL Server con il servizio di archiviazione BLOB di Microsoft Azure](../../relational-databases/backup-restore/sql-server-backup-and-restore-with-microsoft-azure-blob-storage-service.md). **Si applica a**: [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] (da[!INCLUDE[ssSQL15](../../includes/sssql15-md.md)] a [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)]).
MODIFY FILE
Specifica il file da modificare. È possibile modificare una sola proprietà \<filespec> alla volta. L'opzione NAME deve essere sempre specificata in \<filespec> per identificare il file da modificare. Se si specifica l'opzione SIZE, le nuove dimensioni del file devono essere superiori a quelle correnti.
Per modificare il nome logico di un file di dati o di un file di log, specificare il nome del file logico da rinominare nella clausola `NAME` e specificare il nuovo nome logico per il file nella clausola `NEWNAME`. Ad esempio
```sql
MODIFY FILE ( NAME = logical_file_name, NEWNAME = new_logical_name )
```
Per spostare un file di dati o un file di log in una nuova posizione, specificare il nome di file logico corrente nella clausola `NAME` e specificare il nuovo percorso e il nome del file nel sistema operativo nella clausola `FILENAME`. Ad esempio
```sql
MODIFY FILE ( NAME = logical_file_name, FILENAME = ' new_path/os_file_name ' )
```
Per lo spostamento di un catalogo full-text, specificare solo il nuovo percorso nella clausola FILENAME, senza indicare il nome del file nel sistema operativo.
Per altre informazioni, vedere [Spostare file del database](../../relational-databases/databases/move-database-files.md).
Per un filegroup FILESTREAM, NAME può essere modificato online. Sebbene FILENAME possa essere modificato online, la modifica non diventa effettiva fino a quando il contenitore non viene rilocato fisicamente e il server non viene arrestato e successivamente riavviato.
È possibile impostare un file FILESTREAM su OFFLINE. Quando un file FILESTREAM è offline, il filegroup padre sarà contrassegnato internamente come offline. Di conseguenza, ogni accesso a dati FILESTREAM all'interno del filegroup specifico avrà esito negativo.
> [!NOTE]
> Le opzioni \<add_or_modify_files> non sono disponibili in un database indipendente.
**\<filespec>::=**
Controlla le proprietà del file.
NAME *logical_file_name*
Specifica il nome logico del file.
*logical_file_name*
Nome logico utilizzato in un'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] per fare riferimento al file.
NEWNAME *new_logical_file_name*
Specifica un nuovo nome logico per il file.
*new_logical_file_name*
Nome con cui sostituire il nome di file logico esistente. Il nome deve essere univoco all'interno del database e conforme alle regole per gli [identificatori](../../relational-databases/databases/database-identifiers.md). Il nome può essere costituito da una costante per valori di carattere o Unicode, da un identificatore regolare o da un identificatore delimitato.
FILENAME { **'***os_file_name***'** | **'***filestream_path***'** | **'***memory_optimized_data_path***'**}
Specifica il nome del file (fisico) del sistema operativo.
' *os_file_name* '
Per un filegroup standard (ROWS), questo è il percorso e il nome di file utilizzato dal sistema operativo al momento della creazione del file. Il file deve trovarsi nel server in cui è installato [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Il percorso specificato deve essere esistente prima di eseguire l'istruzione ALTER DATABASE.
Non è possibile impostare i parametri SIZE, MAXSIZE e FILEGROWTH se è specificato un percorso UNC per il file.
> [!NOTE]
> I database di sistema non possono trovarsi in directory di condivisione UNC.
I file di dati non dovrebbero essere memorizzati in file system compressi, a meno che tali file non siano file secondari di sola lettura o il database non sia di sola lettura. I file di log non devono mai essere archiviati in file system compressi.
Se il file si trova in una partizione non formattata, nell'argomento *os_file_name* è necessario specificare solo la lettera dell'unità di una partizione non formattata esistente. In ogni partizione non formattata è possibile inserire un solo file.
**'** *filestream_path* **'**
Per un filegroup FILESTREAM, FILENAME si riferisce a un percorso in cui verrà archiviato FILESTREAM. È necessario che il percorso fino all'ultima cartella esista già, mentre l'ultima cartella non deve essere presente. Se, ad esempio, si specifica il percorso C:\MyFiles\MyFilestreamData, C:\MyFiles deve esistere già prima di eseguire ALTER DATABASE, mentre la cartella MyFilestreamData non deve essere presente.
Le proprietà SIZE e FILEGROWTH non si applicano a un filegroup FILESTREAM.
**'** *memory_optimized_data_path* **'**
Per un filegroup ottimizzato per la memoria, FILENAME fa riferimento a un percorso in cui verranno archiviati dati ottimizzati per la memoria. È necessario che il percorso fino all'ultima cartella esista già, mentre l'ultima cartella non deve essere presente. Se, ad esempio, si specifica il percorso C:\MyFiles\MyData, C:\MyFiles deve esistere già prima di eseguire ALTER DATABASE, mentre la cartella MyData non deve essere presente.
Il filegroup e il file (`<filespec>`) devono essere creati nella stessa istruzione.
Le proprietà SIZE, MAXSIZE e FILEGROWTH non si applicano a un filegroup ottimizzato per la memoria.
SIZE *size*
Specifica le dimensioni del file. SIZE non si applica a filegroup FILESTREAM.
*size*
Dimensioni del file.
Quando si specifica con ADD FILE, *size* corrisponde alle dimensioni iniziali del file. Se si specifica con MODIFY FILE, *size* corrisponde alle nuove dimensioni del file, che devono essere superiori a quelle correnti.
Se non si specifica *size* per il file primario, il [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] usa le dimensioni del file primario del database **model**. Se si specifica un file di dati o un file di log secondario senza specificare *size*, il [!INCLUDE[ssDE](../../includes/ssde-md.md)] crea un file di 1 MB.
È possibile utilizzare i suffissi KB, MB, GB e TB per indicare kilobyte, megabyte, gigabyte e terabyte. Il valore predefinito è MB. Specificare un numero intero, ovvero non includere decimali. Se si desidera specificare una frazione di megabyte, convertire il valore in kilobyte moltiplicando il numero per 1024. Ad esempio, specificare 1536 KB anziché 1,5 MB (1,5 x 1024 = 1536).
MAXSIZE { *max_size*| UNLIMITED }
Specifica le dimensioni massime consentite per l'aumento di dimensioni del file.
*max_size*
Dimensioni massime del file. È possibile utilizzare i suffissi KB, MB, GB e TB per indicare kilobyte, megabyte, gigabyte e terabyte. Il valore predefinito è MB. Specificare un numero intero, ovvero non includere decimali. Se non si specifica *max_size*, le dimensioni del file aumenteranno fino a quando il disco risulta pieno.
UNLIMITED
Specifica che le dimensioni del file aumentano fino a quando il disco risulta pieno. In [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], un file di log specificato con aumento delle dimensioni illimitato può raggiungere una dimensione massima di 2 TB, mentre un file di dati può raggiungere una dimensione massima di 16 TB. Non vi sono dimensioni massime se questa opzione viene specificata per un contenitore FILESTREAM, il quale continua a crescere finché il disco non è pieno.
FILEGROWTH *growth_increment*
Specifica l'incremento automatico per l'aumento delle dimensioni del file. Il valore impostato per il parametro FILEGROWTH di un file non può essere superiore al valore del parametro MAXSIZE. FILEGROWTH non si applica a filegroup FILESTREAM.
*growth_increment*
Quantità di spazio aggiunta al file ogni volta che è necessario spazio aggiuntivo.
È possibile specificare il valore in megabyte (MB), kilobyte (KB), gigabyte (GB) o terabyte (TB) oppure in forma di percentuale (%). Se si specifica un valore senza il suffisso MB, KB o %, il suffisso predefinito è MB. Se si utilizza il suffisso %, l'incremento corrisponde alla percentuale delle dimensioni del file specificata quando si verifica l'incremento. Le dimensioni specificate vengono arrotondate al blocco di 64 KB più prossimo.
Il valore 0 indica che l'aumento automatico delle dimensioni è disattivato e non è consentita l'allocazione di spazio aggiuntivo.
Se FILEGROWTH viene omesso, i valori predefiniti sono i seguenti:
|Versione|Valori predefiniti|
|-------------|--------------------|
|A partire da [!INCLUDE[ssSQL15](../../includes/sssql15-md.md)]|Dati 64 MB. File di log 64 MB.|
|A partire da [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)]|Dati 1 MB. File di log 10%.|
|Prima di [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)]|Dati 10%. File di log 10%.|
OFFLINE
Imposta il file offline e rende inaccessibili tutti gli oggetti nel filegroup.
> [!CAUTION]
> Utilizzare questa opzione solo quando il file è danneggiato e non è possibile ripristinarlo. Un file impostato su OFFLINE può essere riportato online solo tramite il ripristino del file dal backup. Per altre informazioni sul ripristino di un singolo file, vedere [RESTORE (Transact-SQL)](../../t-sql/statements/restore-statements-transact-sql.md).
> [!NOTE]
> Le opzioni \<filespec> non sono disponibili in un database indipendente.
**\<add_or_modify_filegroups>::=**
Aggiunge, modifica o rimuove un filegroup nel database.
ADD FILEGROUP *filegroup_name*
Aggiunge un filegroup nel database.
CONTAINS FILESTREAM
Specifica che tramite il filegroup vengono archiviati oggetti binari di grandi dimensioni (BLOB) nel file system.
CONTAINS MEMORY_OPTIMIZED_DATA
**Si applica a**: [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] (da [!INCLUDE[ssSQL14](../../includes/sssql14-md.md)] a [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)])
Specifica che il filegroup archivia i dati con ottimizzazione per la memoria nel file system. Per altre informazioni, vedere [OLTP in memoria (ottimizzazione in memoria)](../../relational-databases/in-memory-oltp/in-memory-oltp-in-memory-optimization.md). È ammesso un solo filegroup MEMORY_OPTIMIZED_DATA per database. Per la creazione di tabelle con ottimizzazione per la memoria, il filegroup non può essere vuoto. Deve essere presente almeno un file. *filegroup_name* fa riferimento a un percorso. È necessario che il percorso fino all'ultima cartella esista già, mentre l'ultima cartella non deve essere presente.
Nell'esempio seguente viene creato un filegroup che viene aggiunto a un database denominato xtp_db, quindi viene aggiunto un file al filegroup. Nel filegroup vengono archiviati i dati memory_optimized.
```sql
ALTER DATABASE xtp_db ADD FILEGROUP xtp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE xtp_db ADD FILE (NAME='xtp_mod', FILENAME='d:\data\xtp_mod') TO FILEGROUP xtp_fg;
```
REMOVE FILEGROUP *filegroup_name*
Rimuove un filegroup dal database. Il filegroup può essere rimosso solo se è vuoto. Rimuove tutti i file a partire dal filegroup. Per altre informazioni, vedere "REMOVE FILE *logical_file_name*" più indietro in questo argomento.
> [!NOTE]
> A meno che il Garbage Collector per FILESTREAM non abbia rimosso tutti i file da un contenitore FILESTREAM, l'operazione ALTER DATABASE REMOVE FILE per rimuovere un contenitore FILESTREAM avrà esito negativo e verrà restituito un errore. Vedere la sezione relativa alla rimozione del contenitore FILESTREAM in "Osservazioni" di seguito in questo argomento.
MODIFY FILEGROUP *filegroup_name* { \<filegroup_updatability_option> | DEFAULT | NAME **=***new_filegroup_name* } Modifica il filegroup impostando lo stato su READ_ONLY o READ_WRITE, impostando il filegroup come predefinito per il database o modificando il nome del filegroup.
\<filegroup_updatability_option>
Imposta la proprietà di sola lettura o di lettura/scrittura per il filegroup.
DEFAULT
Modifica il filegroup predefinito del database impostandolo su *filegroup_name*. In un database può esistere un solo filegroup predefinito. Per altre informazioni, vedere [Database Files and Filegroups](../../relational-databases/databases/database-files-and-filegroups.md).
NAME = *new_filegroup_name*
Modifica il nome del filegroup impostandolo su *new_filegroup_name*.
AUTOGROW_SINGLE_FILE
**Si applica a**: [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] (da [!INCLUDE[ssSQL15](../../includes/sssql15-md.md)] a [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)])
Quando un file del filegroup raggiunge la soglia dell'aumento automatico delle dimensioni, vengono aumentate le dimensioni solo di quel file. Impostazione predefinita.
AUTOGROW_ALL_FILES
**Si applica a**: [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] (da [!INCLUDE[ssSQL15](../../includes/sssql15-md.md)] a [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)])
Quando un file del filegroup raggiunge la soglia dell'aumento automatico delle dimensioni, vengono aumentate le dimensioni di tutti i file del filegroup.
> [!NOTE]
> Questo è il valore predefinito per TempDB.
**\<filegroup_updatability_option>::=**
Imposta la proprietà di sola lettura o di lettura/scrittura per il filegroup.
READ_ONLY | READONLY
Specifica che il filegroup è di sola lettura. Non sono consentiti aggiornamenti degli oggetti nel filegroup. Non è possibile rendere di sola lettura il filegroup primario. Per modificare questo stato, è necessario disporre dell'accesso esclusivo al database. Per altre informazioni, vedere la clausola SINGLE_USER.
I database di sola lettura non consentono modifiche dei dati e pertanto:
- Non viene eseguito il recupero automatico all'avvio del sistema.
- La compattazione del database non è possibile.
- Non vengono attivati blocchi nei database di sola lettura e ciò può portare a migliori prestazioni di esecuzione delle query.
> [!NOTE]
> La parola chiave READONLY verrà rimossa a partire da una delle prossime versioni di [!INCLUDE[msCoName](../../includes/msconame-md.md)][!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Evitarne l'utilizzo in un nuovo progetto di sviluppo e prevedere interventi di modifica nelle applicazioni in cui è attualmente implementata. Utilizzare READ_ONLY in alternativa.
READ_WRITE | READWRITE
Specifica che il filegroup è di lettura/scrittura. Sono consentiti aggiornamenti degli oggetti contenuti nel filegroup. Per modificare questo stato, è necessario disporre dell'accesso esclusivo al database. Per altre informazioni, vedere la clausola SINGLE_USER.
> [!NOTE]
> La parola chiave `READWRITE` verrà rimossa a partire da una delle prossime versioni di [!INCLUDE[msCoName](../../includes/msconame-md.md)][!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Evitare l'uso di `READWRITE` in un nuovo progetto di sviluppo e prevedere interventi di modifica nelle applicazioni in cui è attualmente usata la parola chiave `READWRITE`. Usare al suo posto la parola chiave `READ_WRITE`.
Per determinare lo stato di queste opzioni, è possibile esaminare la colonna **is_read_only** nella vista del catalogo **sys.databases** oppure la proprietà **Updateability** della funzione `DATABASEPROPERTYEX`.
## <a name="remarks"></a>Remarks
Per ridurre le dimensioni di un database, usare [DBCC SHRINKDATABASE](../../t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql.md).
Non è possibile aggiungere o rimuovere un file durante l'esecuzione di un'istruzione `BACKUP`.
Per ogni database è possibile specificare un massimo di 32.767 file e 32.767 filegroup.
A partire da [!INCLUDE[ssVersion2005](../../includes/ssversion2005-md.md)] lo stato di un file di database, ad esempio online o offline, viene mantenuto indipendentemente dallo stato del database. Per altre informazioni, vedere [Stati del file](../../relational-databases/databases/file-states.md).
- Lo stato dei file all'interno di un filegroup determina la disponibilità dell'intero filegroup. Un filegroup è disponibile se tutti i file in esso inclusi sono online.
- Se un filegroup è offline, qualsiasi tentativo di accesso al filegroup tramite un'istruzione SQL avrà esito negativo e verrà generato un errore. Per la compilazione di piani delle query per istruzioni `SELECT`, Query Optimizer evita gli indici non cluster e le viste indicizzate presenti in filegroup offline. Ciò consente la corretta esecuzione di tali istruzioni. Se tuttavia il filegroup offline contiene l'indice cluster o heap della tabella di destinazione, l'istruzione `SELECT` avrà esito negativo, così come tutte le istruzioni `INSERT`, `UPDATE` o `DELETE` che implicano la modifica di una tabella tramite un indice incluso in un filegroup offline.
## <a name="moving-files"></a>Spostamento di file
È possibile spostare file di dati e di log, di sistema o definiti dall'utente, specificando la nuova posizione in FILENAME. Questa procedura può risultare utile nei casi seguenti:
- Recupero da errore. ad esempio quando il database è in modalità sospetta o viene chiuso a causa di un errore hardware.
- Rilocazione pianificata.
- Rilocazione per una manutenzione pianificata del disco.
Per altre informazioni, vedere [Spostare file del database](../../relational-databases/databases/move-database-files.md).
## <a name="initializing-files"></a>Inizializzazione dei file
Per impostazione predefinita, i file di dati e di log vengono inizializzati tramite il riempimento con zeri quando si esegue una delle operazioni seguenti:
- Creazione di un database.
- Aggiunta di file a un database esistente.
- Aumento delle dimensioni di un file esistente.
- Ripristino di un database o un filegroup.
I file di dati possono essere inizializzati immediatamente. Ciò consente l'esecuzione rapida di queste operazioni sui file. Per ulteriori informazioni, vedere [Inizializzazione di file di database](../../relational-databases/databases/database-instant-file-initialization.md).
## <a name="removing-a-filestream-container"></a>Rimozione di un contenitore FILESTREAM
Anche se il contenitore FILESTREAM potrebbe essere stato svuotato mediante l'operazione "DBCC SHRINKFILE", potrebbe essere ancora necessario mantenere nel database i riferimenti ai file eliminati per vari motivi di manutenzione del sistema. [sp_filestream_force_garbage_collection (Transact-SQL)](../../relational-databases/system-stored-procedures/filestream-and-filetable-sp-filestream-force-garbage-collection.md) eseguirà il Garbage Collector per FILESTREAM per rimuovere questi file al momento opportuno affinché sia sicuro. A meno che il Garbage Collector per FILESTREAM non abbia rimosso tutti i file da un contenitore FILESTREAM, l'operazione ALTER DATABASE REMOVE FILE avrà esito negativo per la rimozione di un contenitore FILESTREAM e verrà restituito un errore. È consigliabile utilizzare il processo seguente per rimuovere un contenitore FILESTREAM.
1. Eseguire [DBCC SHRINKFILE (Transact-SQL)](../../t-sql/database-console-commands/dbcc-shrinkfile-transact-sql.md) con l'opzione EMPTYFILE per spostare il contenuto attivo del contenitore in altri contenitori.
2. Assicurarsi che siano stati eseguiti i backup del log, nel modello di recupero FULL o BULK_LOGGED.
3. Assicurarsi che sia stato eseguito il processo di lettura log repliche, se rilevante.
4. Eseguire [sp_filestream_force_garbage_collection (Transact-SQL)](../../relational-databases/system-stored-procedures/filestream-and-filetable-sp-filestream-force-garbage-collection.md) per forzare l'eliminazione tramite Garbage Collector di qualsiasi file non più necessario in questo contenitore.
5. Eseguire ALTER DATABASE con l'opzione REMOVE FILE per rimuovere questo contenitore.
6. Ripetere i passaggi da 2 a 4 ancora una volta per completare la Garbage Collection.
7. Utilizzare ALTER Database...REMOVE FILE per rimuovere questo contenitore.
## <a name="examples"></a>Esempi
### <a name="a-adding-a-file-to-a-database"></a>A. Aggiunta di un file a un database
Nell'esempio seguente viene aggiunto un file di dati da 5 MB al database [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)].
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = Test1dat2,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat2.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO
```
### <a name="b-adding-a-filegroup-with-two-files-to-a-database"></a>B. Aggiunta di un filegroup con due file a un database
Nell'esempio seguente viene creato il filegroup `Test1FG1` nel database [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)] e vengono aggiunti due file da 5 MB al filegroup.
```sql
USE master
GO
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = test1dat3,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat3.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1dat4,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\t1dat4.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO
```
### <a name="c-adding-two-log-files-to-a-database"></a>C. Aggiunta di due file di log a un database
Nell'esempio seguente vengono aggiunti due file di log da 5 MB al database [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)].
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD LOG FILE
(
NAME = test1log2,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\test2log.ldf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1log3,
FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\test3log.ldf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO
```
### <a name="d-removing-a-file-from-a-database"></a>D. Rimozione di un file da un database
Nell'esempio seguente viene rimosso uno dei file aggiunti nell'esempio B.
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
REMOVE FILE test1dat4;
GO
```
### <a name="e-modifying-a-file"></a>E. Modifica di un file
Nell'esempio seguente vengono aumentate le dimensioni di uno dei file aggiunti nell'esempio B.
L'istruzione ALTER DATABASE con il comando MODIFY FILE può soltanto incrementare le dimensioni di un file. Se si vogliono invece ridurre le dimensioni di un file, è necessario usare DBCC SHRINKFILE.
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(NAME = test1dat3,
SIZE = 200MB);
GO
```
In questo esempio le dimensioni di un file di dati vengono ridotte a 100 MB e ne vengono specificate le dimensioni.
```sql
USE AdventureWorks2012;
GO
DBCC SHRINKFILE (AdventureWorks2012_data, 100);
GO
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(NAME = test1dat3,
SIZE = 200MB);
GO
```
### <a name="f-moving-a-file-to-a-new-location"></a>F. Spostamento di un file in un diverso percorso
Nell'esempio seguente il file `Test1dat2` creato nell'esempio A viene spostato in una nuova directory.
> [!NOTE]
> È necessario spostare fisicamente il file nella nuova directory prima di eseguire l'esempio. In seguito, arrestare e avviare l'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] oppure impostare il database [!INCLUDE[ssSampleDBobject](../../includes/sssampledbobject-md.md)] su OFFLINE e quindi su ONLINE per implementare la modifica.
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(
NAME = Test1dat2,
FILENAME = N'c:\t1dat2.ndf'
);
GO
```
### <a name="g-moving-tempdb-to-a-new-location"></a>G. Spostamento del database tempdb in un diverso percorso
Nell'esempio seguente viene spostato il database `tempdb` dal percorso corrente nel disco a un'altro percorso nel disco. Poiché `tempdb` viene ricreato a ogni avvio del servizio MSSQLSERVER, non è necessario spostare fisicamente i dati e i file di log. I file vengono creati quando il servizio viene riavviato nel passaggio 3. Fino a quando il servizio non viene riavviato, `tempdb` continua a funzionare nel percorso esistente.
1. Determinare i nomi di file logici del database `tempdb` e il rispettivo percorso corrente su disco.
```sql
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
GO
```
2. Modificare il percorso di ogni file tramite `ALTER DATABASE`.
```sql
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\SQLData\templog.ldf');
GO
```
3. Arrestare e riavviare l'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)].
4. Verificare la modifica ai file.
```sql
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
```
5. Eliminare i file tempdb.mdf e templog.ldf dai percorsi originali.
### <a name="h-making-a-filegroup-the-default"></a>H. Impostazione di un filegroup come predefinito
Nell'esempio seguente il filegroup `Test1FG1` creato nell'esempio B viene impostato come filegroup predefinito. Il filegroup `PRIMARY` viene quindi reimpostato come filegroup predefinito. Si noti che il nome `PRIMARY` deve essere delimitato da parentesi quadre o virgolette.
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP Test1FG1 DEFAULT;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP [PRIMARY] DEFAULT;
GO
```
### <a name="i-adding-a-filegroup-using-alter-database"></a>I. Aggiunta di un filegroup utilizzando ALTER DATABASE
Nell'esempio seguente viene aggiunto un `FILEGROUP` che contiene la clausola `FILESTREAM` al database `FileStreamPhotoDB`.
```sql
--Create and add a FILEGROUP that CONTAINS the FILESTREAM clause to
--the FileStreamPhotoDB database.
ALTER DATABASE FileStreamPhotoDB
ADD FILEGROUP TodaysPhotoShoot
CONTAINS FILESTREAM;
GO
--Add a file for storing database photos to FILEGROUP
ALTER DATABASE FileStreamPhotoDB
ADD FILE
(
NAME= 'PhotoShoot1',
FILENAME = 'C:\Users\Administrator\Pictures\TodaysPhotoShoot.ndf'
)
TO FILEGROUP TodaysPhotoShoot;
GO
```
### <a name="j-change-filegroup-so-that-when-a-file-in-the-filegroup-meets-the-autogrow-threshold-all-files-in-the-filegroup-grow"></a>J. Modificare il filegroup in modo che, quando un file del filegroup raggiunge la soglia dell'aumento automatico delle dimensioni, vengono aumentate le dimensioni di tutti i file del filegroup
Nell'esempio seguente vengono generate le istruzioni `ALTER DATABASE` necessarie per modificare il filegroup di lettura/scrittura con l'impostazione di `AUTOGROW_ALL_FILES`.
```sql
--Generate ALTER DATABASE ... MODIFY FILEGROUP statements
--so that all read-write filegroups grow at the same time.
SET NOCOUNT ON;
DROP TABLE IF EXISTS #tmpdbs
CREATE TABLE #tmpdbs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, isdone bit);
DROP TABLE IF EXISTS #tmpfgs
CREATE TABLE #tmpfgs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, fgname sysname, isdone bit);
INSERT INTO #tmpdbs ([dbid], [dbname], [isdone])
SELECT database_id, name, 0 FROM master.sys.databases (NOLOCK) WHERE is_read_only = 0 AND state = 0;
DECLARE @dbid int, @query VARCHAR(1000), @dbname sysname, @fgname sysname
WHILE (SELECT COUNT(id) FROM #tmpdbs WHERE isdone = 0) > 0
BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid] FROM #tmpdbs WHERE isdone = 0
SET @query = 'SELECT ' + CAST(@dbid AS NVARCHAR) + ', ''' + @dbname + ''', [name], 0 FROM [' + @dbname + '].sys.filegroups WHERE [type] = ''FG'' AND is_read_only = 0;'
INSERT INTO #tmpfgs
EXEC (@query)
UPDATE #tmpdbs
SET isdone = 1
WHERE [dbid] = @dbid
END;
IF (SELECT COUNT(ID) FROM #tmpfgs) > 0
BEGIN
WHILE (SELECT COUNT(id) FROM #tmpfgs WHERE isdone = 0) > 0
BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid], @fgname = fgname FROM #tmpfgs WHERE isdone = 0
SET @query = 'ALTER DATABASE [' + @dbname + '] MODIFY FILEGROUP [' + @fgname + '] AUTOGROW_ALL_FILES;'
PRINT @query
UPDATE #tmpfgs
SET isdone = 1
WHERE [dbid] = @dbid AND fgname = @fgname
END
END;
GO
```
## <a name="see-also"></a>Vedere anche
[CREATE DATABASE](../../t-sql/statements/create-database-transact-sql.md?&tabs=sqlserver)
[DATABASEPROPERTYEX](../../t-sql/functions/databasepropertyex-transact-sql.md)
[DROP DATABASE](../../t-sql/statements/drop-database-transact-sql.md)
[sp_spaceused](../../relational-databases/system-stored-procedures/sp-spaceused-transact-sql.md)
[sys.databases](../../relational-databases/system-catalog-views/sys-databases-transact-sql.md)
[sys.database_files](../../relational-databases/system-catalog-views/sys-database-files-transact-sql.md)
[sys.data_spaces](../../relational-databases/system-catalog-views/sys-data-spaces-transact-sql.md)
[sys.filegroups](../../relational-databases/system-catalog-views/sys-filegroups-transact-sql.md)
[sys.master_files](../../relational-databases/system-catalog-views/sys-master-files-transact-sql.md)
[Dati BLOB (Binary Large Object)](../../relational-databases/blob/binary-large-object-blob-data-sql-server.md)
[DBCC SHRINKFIL](../../t-sql/database-console-commands/dbcc-shrinkfile-transact-sql.md)
[sp_filestream_force_garbage_collection](../../relational-databases/system-stored-procedures/filestream-and-filetable-sp-filestream-force-garbage-collection.md)
[Inizializzazione di file di database](../../relational-databases/databases/database-instant-file-initialization.md)
::: moniker-end
::: moniker range="=azuresqldb-mi-current||=sqlallproducts-allversions"
> [!div class="mx-tdCol2BreakAll"]
> <table>
> <tr>
> <th> </th>
> <th> </th>
> </tr>
> <tr>
> <th><a href="alter-database-transact-sql-file-and-filegroup-options.md?view=sql-server-2016">SQL Server</a></th>
> <th><strong><em>* Database SQL<br />Istanza gestita *</em></strong></th>
> </tr>
> </table>
# <a name="azure-sql-database-managed-instance"></a>Istanza gestita di database SQL di Azure
Usare questa istruzione con un database nell'Istanza gestita di database SQL di Azure.
## <a name="syntax-for-databases-in-a-managed-instance"></a>Sintassi per i database in un'istanza gestita
```
ALTER DATABASE database_name
{
<add_or_modify_files>
| <add_or_modify_filegroups>
}
[;]
<add_or_modify_files>::=
{
ADD FILE <filespec> [ ,...n ]
[ TO FILEGROUP { filegroup_name } ]
| REMOVE FILE logical_file_name
| MODIFY FILE <filespec>
}
<filespec>::=
(
NAME = logical_file_name
[ , SIZE = size [ KB | MB | GB | TB ] ]
[ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
[ , FILEGROWTH = growth_increment [ KB | MB | GB | TB| % ] ]
)
<add_or_modify_filegroups>::=
{
| ADD FILEGROUP filegroup_name
| REMOVE FILEGROUP filegroup_name
| MODIFY FILEGROUP filegroup_name
{ <filegroup_updatability_option>
| DEFAULT
| NAME = new_filegroup_name
| { AUTOGROW_SINGLE_FILE | AUTOGROW_ALL_FILES }
}
}
<filegroup_updatability_option>::=
{
{ READONLY | READWRITE }
| { READ_ONLY | READ_WRITE }
}
```
## <a name="arguments"></a>Argomenti
**\<add_or_modify_files>::=**
Specifica i file da aggiungere, rimuovere o modificare.
*database_name*
Nome del database da modificare.
ADD FILE
Aggiunge un file al database.
TO FILEGROUP { *filegroup_name* }
Specifica il filegroup in cui aggiungere il file specificato. Per visualizzare i filegroup correnti e determinare il filegroup attualmente predefinito, usare la vista del catalogo [sys.filegroups](../../relational-databases/system-catalog-views/sys-filegroups-transact-sql.md).
REMOVE FILE *logical_file_name*
Rimuove la descrizione del file logico da un'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] ed elimina il file fisico. Il file può essere rimosso solo se è vuoto.
*logical_file_name*
Nome logico utilizzato in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] per fare riferimento al file.
MODIFY FILE
Specifica il file da modificare. È possibile modificare una sola proprietà \<filespec> alla volta. L'opzione NAME deve essere sempre specificata in \<filespec> per identificare il file da modificare. Se si specifica l'opzione SIZE, le nuove dimensioni del file devono essere superiori a quelle correnti.
**\<filespec>::=**
Controlla le proprietà del file.
NAME *logical_file_name*
Specifica il nome logico del file.
*logical_file_name*
Nome logico utilizzato in un'istanza di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] per fare riferimento al file.
NEWNAME *new_logical_file_name*
Specifica un nuovo nome logico per il file.
*new_logical_file_name*
Nome con cui sostituire il nome di file logico esistente. Il nome deve essere univoco all'interno del database e conforme alle regole per gli [identificatori](../../relational-databases/databases/database-identifiers.md). Il nome può essere costituito da una costante per valori di carattere o Unicode, da un identificatore regolare o da un identificatore delimitato.
SIZE *size*
Specifica le dimensioni del file.
*size*
Dimensioni del file.
Quando si specifica con ADD FILE, *size* corrisponde alle dimensioni iniziali del file. Se si specifica con MODIFY FILE, *size* corrisponde alle nuove dimensioni del file, che devono essere superiori a quelle correnti.
Se non si specifica *size* per il file primario, il [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] usa le dimensioni del file primario del database **model**. Se si specifica un file di dati o un file di log secondario senza specificare *size*, il [!INCLUDE[ssDE](../../includes/ssde-md.md)] crea un file di 1 MB.
È possibile utilizzare i suffissi KB, MB, GB e TB per indicare kilobyte, megabyte, gigabyte e terabyte. Il valore predefinito è MB. Specificare un numero intero, ovvero non includere decimali. Se si desidera specificare una frazione di megabyte, convertire il valore in kilobyte moltiplicando il numero per 1024. Ad esempio, specificare 1536 KB anziché 1,5 MB (1,5 x 1024 = 1536).
MAXSIZE { *max_size*| UNLIMITED }
Specifica le dimensioni massime consentite per l'aumento di dimensioni del file.
*max_size*
Dimensioni massime del file. È possibile utilizzare i suffissi KB, MB, GB e TB per indicare kilobyte, megabyte, gigabyte e terabyte. Il valore predefinito è MB. Specificare un numero intero, ovvero non includere decimali. Se non si specifica *max_size*, le dimensioni del file aumenteranno fino a quando il disco risulta pieno.
UNLIMITED
Specifica che le dimensioni del file aumentano fino a quando il disco risulta pieno. In [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], un file di log specificato con aumento delle dimensioni illimitato può raggiungere una dimensione massima di 2 TB, mentre un file di dati può raggiungere una dimensione massima di 16 TB.
FILEGROWTH *growth_increment*
Specifica l'incremento automatico per l'aumento delle dimensioni del file. Il valore impostato per il parametro FILEGROWTH di un file non può essere superiore al valore del parametro MAXSIZE.
*growth_increment*
Quantità di spazio aggiunta al file ogni volta che è necessario spazio aggiuntivo.
È possibile specificare il valore in megabyte (MB), kilobyte (KB), gigabyte (GB) o terabyte (TB) oppure in forma di percentuale (%). Se si specifica un valore senza il suffisso MB, KB o %, il suffisso predefinito è MB. Se si utilizza il suffisso %, l'incremento corrisponde alla percentuale delle dimensioni del file specificata quando si verifica l'incremento. Le dimensioni specificate vengono arrotondate al blocco di 64 KB più prossimo.
Il valore 0 indica che l'aumento automatico delle dimensioni è disattivato e non è consentita l'allocazione di spazio aggiuntivo.
Se FILEGROWTH viene omesso, i valori predefiniti sono i seguenti:
- Dati 64 MB
- File di log 64 MB
**\<add_or_modify_filegroups>::=**
Aggiunge, modifica o rimuove un filegroup nel database.
ADD FILEGROUP *filegroup_name*
Aggiunge un filegroup nel database.
Nell'esempio seguente viene creato un filegroup che viene aggiunto a un database denominato sql_db_mi, quindi viene aggiunto un file al filegroup.
```sql
ALTER DATABASE sql_db_mi ADD FILEGROUP sql_db_mi_fg;
GO
ALTER DATABASE sql_db_mi ADD FILE (NAME='sql_db_mi_mod') TO FILEGROUP sql_db_mi_fg;
```
REMOVE FILEGROUP *filegroup_name*
Rimuove un filegroup dal database. Il filegroup può essere rimosso solo se è vuoto. Rimuove tutti i file a partire dal filegroup. Per altre informazioni, vedere "REMOVE FILE *logical_file_name*" più indietro in questo argomento.
MODIFY FILEGROUP *filegroup_name* { \<filegroup_updatability_option> | DEFAULT | NAME **=***new_filegroup_name* } Modifica il filegroup impostando lo stato su READ_ONLY o READ_WRITE, impostando il filegroup come predefinito per il database o modificando il nome del filegroup.
\<filegroup_updatability_option>
Imposta la proprietà di sola lettura o di lettura/scrittura per il filegroup.
DEFAULT
Modifica il filegroup predefinito del database impostandolo su *filegroup_name*. In un database può esistere un solo filegroup predefinito. Per altre informazioni, vedere [Database Files and Filegroups](../../relational-databases/databases/database-files-and-filegroups.md).
NAME = *new_filegroup_name*
Modifica il nome del filegroup impostandolo su *new_filegroup_name*.
AUTOGROW_SINGLE_FILE
Quando un file del filegroup raggiunge la soglia dell'aumento automatico delle dimensioni, vengono aumentate le dimensioni solo di quel file. Impostazione predefinita.
AUTOGROW_ALL_FILES
Quando un file del filegroup raggiunge la soglia dell'aumento automatico delle dimensioni, vengono aumentate le dimensioni di tutti i file del filegroup.
**\<filegroup_updatability_option>::=**
Imposta la proprietà di sola lettura o di lettura/scrittura per il filegroup.
READ_ONLY | READONLY
Specifica che il filegroup è di sola lettura. Non sono consentiti aggiornamenti degli oggetti nel filegroup. Non è possibile rendere di sola lettura il filegroup primario. Per modificare questo stato, è necessario disporre dell'accesso esclusivo al database. Per altre informazioni, vedere la clausola SINGLE_USER.
I database di sola lettura non consentono modifiche dei dati e pertanto:
- Non viene eseguito il recupero automatico all'avvio del sistema.
- La compattazione del database non è possibile.
- Non vengono attivati blocchi nei database di sola lettura e ciò può portare a migliori prestazioni di esecuzione delle query.
> [!NOTE]
> La parola chiave READONLY verrà rimossa a partire da una delle prossime versioni di [!INCLUDE[msCoName](../../includes/msconame-md.md)][!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Evitarne l'utilizzo in un nuovo progetto di sviluppo e prevedere interventi di modifica nelle applicazioni in cui è attualmente implementata. Utilizzare READ_ONLY in alternativa.
READ_WRITE | READWRITE
Specifica che il filegroup è di lettura/scrittura. Sono consentiti aggiornamenti degli oggetti contenuti nel filegroup. Per modificare questo stato, è necessario disporre dell'accesso esclusivo al database. Per altre informazioni, vedere la clausola SINGLE_USER.
> [!NOTE]
> La parola chiave `READWRITE` verrà rimossa a partire da una delle prossime versioni di [!INCLUDE[msCoName](../../includes/msconame-md.md)][!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. Evitare l'uso di `READWRITE` in un nuovo progetto di sviluppo e prevedere interventi di modifica nelle applicazioni in cui è attualmente usata la parola chiave `READWRITE`. Usare al suo posto la parola chiave `READ_WRITE`.
Per determinare lo stato di queste opzioni, è possibile esaminare la colonna **is_read_only** nella vista del catalogo **sys.databases** oppure la proprietà **Updateability** della funzione `DATABASEPROPERTYEX`.
## <a name="remarks"></a>Remarks
Per ridurre le dimensioni di un database, usare [DBCC SHRINKDATABASE](../../t-sql/database-console-commands/dbcc-shrinkdatabase-transact-sql.md).
Non è possibile aggiungere o rimuovere un file durante l'esecuzione di un'istruzione `BACKUP`.
Per ogni database è possibile specificare un massimo di 32.767 file e 32.767 filegroup.
## <a name="examples"></a>Esempi
### <a name="a-adding-a-file-to-a-database"></a>A. Aggiunta di un file a un database
Nell'esempio seguente viene aggiunto un file di dati da 5 MB al database [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)].
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = Test1dat2,
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
);
GO
```
### <a name="b-adding-a-filegroup-with-two-files-to-a-database"></a>B. Aggiunta di un filegroup con due file a un database
Nell'esempio seguente viene creato il filegroup `Test1FG1` nel database [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal-md.md)] e vengono aggiunti due file da 5 MB al filegroup.
```sql
USE master
GO
ALTER DATABASE AdventureWorks2012
ADD FILEGROUP Test1FG1;
GO
ALTER DATABASE AdventureWorks2012
ADD FILE
(
NAME = test1dat3,
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
),
(
NAME = test1dat4,
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP Test1FG1;
GO
```
### <a name="c-removing-a-file-from-a-database"></a>C. Rimozione di un file da un database
Nell'esempio seguente viene rimosso uno dei file aggiunti nell'esempio B.
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
REMOVE FILE test1dat4;
GO
```
### <a name="d-modifying-a-file"></a>D. Modifica di un file
Nell'esempio seguente vengono aumentate le dimensioni di uno dei file aggiunti nell'esempio B.
L'istruzione ALTER DATABASE con il comando MODIFY FILE può soltanto incrementare le dimensioni di un file. Se si vogliono invece ridurre le dimensioni di un file, è necessario usare DBCC SHRINKFILE.
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(NAME = test1dat3,
SIZE = 200MB);
GO
```
In questo esempio le dimensioni di un file di dati vengono ridotte a 100 MB e ne vengono specificate le dimensioni.
```sql
USE AdventureWorks2012;
GO
DBCC SHRINKFILE (AdventureWorks2012_data, 100);
GO
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILE
(NAME = test1dat3,
SIZE = 200MB);
GO
```
### <a name="e-making-a-filegroup-the-default"></a>E. Impostazione di un filegroup come predefinito
Nell'esempio seguente il filegroup `Test1FG1` creato nell'esempio B viene impostato come filegroup predefinito. Il filegroup `PRIMARY` viene quindi reimpostato come filegroup predefinito. Si noti che il nome `PRIMARY` deve essere delimitato da parentesi quadre o virgolette.
```sql
USE master;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP Test1FG1 DEFAULT;
GO
ALTER DATABASE AdventureWorks2012
MODIFY FILEGROUP [PRIMARY] DEFAULT;
GO
```
### <a name="f-adding-a-filegroup-using-alter-database"></a>F. Aggiunta di un filegroup utilizzando ALTER DATABASE
Nell'esempio seguente viene aggiunto un `FILEGROUP` al database `MyDB`.
```sql
--Create and add a FILEGROUP.
ALTER DATABASE MyDB
ADD FILEGROUP NewFG;
GO
--Add a file to FILEGROUP
ALTER DATABASE MyDB
ADD FILE
(
NAME= 'MyFile',
)
TO FILEGROUP NewFG;
GO
```
### <a name="g-change-filegroup-so-that-when-a-file-in-the-filegroup-meets-the-autogrow-threshold-all-files-in-the-filegroup-grow"></a>G. Modificare il filegroup in modo che, quando un file del filegroup raggiunge la soglia dell'aumento automatico delle dimensioni, vengono aumentate le dimensioni di tutti i file del filegroup
Nell'esempio seguente vengono generate le istruzioni `ALTER DATABASE` necessarie per modificare il filegroup di lettura/scrittura con l'impostazione di `AUTOGROW_ALL_FILES`.
```sql
--Generate ALTER DATABASE ... MODIFY FILEGROUP statements
--so that all read-write filegroups grow at the same time.
SET NOCOUNT ON;
DROP TABLE IF EXISTS #tmpdbs
CREATE TABLE #tmpdbs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, isdone bit);
DROP TABLE IF EXISTS #tmpfgs
CREATE TABLE #tmpfgs (id int IDENTITY(1,1), [dbid] int, [dbname] sysname, fgname sysname, isdone bit);
INSERT INTO #tmpdbs ([dbid], [dbname], [isdone])
SELECT database_id, name, 0 FROM master.sys.databases (NOLOCK) WHERE is_read_only = 0 AND state = 0;
DECLARE @dbid int, @query VARCHAR(1000), @dbname sysname, @fgname sysname
WHILE (SELECT COUNT(id) FROM #tmpdbs WHERE isdone = 0) > 0
BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid] FROM #tmpdbs WHERE isdone = 0
SET @query = 'SELECT ' + CAST(@dbid AS NVARCHAR) + ', ''' + @dbname + ''', [name], 0 FROM [' + @dbname + '].sys.filegroups WHERE [type] = ''FG'' AND is_read_only = 0;'
INSERT INTO #tmpfgs
EXEC (@query)
UPDATE #tmpdbs
SET isdone = 1
WHERE [dbid] = @dbid
END;
IF (SELECT COUNT(ID) FROM #tmpfgs) > 0
BEGIN
WHILE (SELECT COUNT(id) FROM #tmpfgs WHERE isdone = 0) > 0
BEGIN
SELECT TOP 1 @dbname = [dbname], @dbid = [dbid], @fgname = fgname FROM #tmpfgs WHERE isdone = 0
SET @query = 'ALTER DATABASE [' + @dbname + '] MODIFY FILEGROUP [' + @fgname + '] AUTOGROW_ALL_FILES;'
PRINT @query
UPDATE #tmpfgs
SET isdone = 1
WHERE [dbid] = @dbid AND fgname = @fgname
END
END;
GO
```
## <a name="see-also"></a>Vedere anche
[CREATE DATABASE](../../t-sql/statements/create-database-transact-sql.md?&tabs=sqldbmi)
[DATABASEPROPERTYEX](../../t-sql/functions/databasepropertyex-transact-sql.md)
[DROP DATABASE](../../t-sql/statements/drop-database-transact-sql.md)
[sp_spaceused](../../relational-databases/system-stored-procedures/sp-spaceused-transact-sql.md)
[sys.databases](../../relational-databases/system-catalog-views/sys-databases-transact-sql.md)
[sys.database_files](../../relational-databases/system-catalog-views/sys-database-files-transact-sql.md)
[sys.data_spaces](../../relational-databases/system-catalog-views/sys-data-spaces-transact-sql.md)
[sys.filegroups](../../relational-databases/system-catalog-views/sys-filegroups-transact-sql.md)
[sys.master_files](../../relational-databases/system-catalog-views/sys-master-files-transact-sql.md)
[DBCC SHRINKFIL](../../t-sql/database-console-commands/dbcc-shrinkfile-transact-sql.md)
::: moniker-end
| 49.828198 | 872 | 0.739029 | ita_Latn | 0.979127 |
21d6c410f159d76463b7903695f709ab8eb4346e | 2,323 | md | Markdown | README.md | edemagbenyo/Javascript30 | a4ca0812a22744a7740e46e852cc49e5d5f7a9e0 | [
"RSA-MD",
"Linux-OpenIB",
"CECILL-B"
] | null | null | null | README.md | edemagbenyo/Javascript30 | a4ca0812a22744a7740e46e852cc49e5d5f7a9e0 | [
"RSA-MD",
"Linux-OpenIB",
"CECILL-B"
] | null | null | null | README.md | edemagbenyo/Javascript30 | a4ca0812a22744a7740e46e852cc49e5d5f7a9e0 | [
"RSA-MD",
"Linux-OpenIB",
"CECILL-B"
] | null | null | null | #Javascript 30 course
<!--- These are examples. See https://shields.io for others or to customize this set of shields. You might want to include dependencies, project status and licence info here --->
Javascript30 is a serie of courses on Javascript from javascript30.com
Each course covers a topic on JS and CSS
## Here are the links to the live version of all 30 courses
1.
2.
3. [Playing with CSS and JS](https://rawcdn.githack.com/edemagbenyo/Javascript30/f95cbd55276c5f163dfd1befa639e723f34cb3c0/03_playing_with_css_and_js.html)
## Prerequisites
Before you begin, ensure you have met the following requirements:
<!--- These are just example requirements. Add, duplicate or remove as required --->
* You have installed the latest version of `node`
* You have a `<Windows/Linux/Mac>` machine. State which OS is supported/which is not.
* You have read `<guide/link/documentation_related_to_project>`.
## Installing Javascript30
install react-webpack-starter, follow these steps:
Linux and macOS:
```
npm install
```
Windows:
```
npm install
```
## Using Javascript 30
To use react-webpack-starter, follow these steps:
```
npm run start
```
Add run commands and examples you think users will find useful. Provide an options reference for bonus points!
## Contributing to Javascript30
<!--- If your README is long or you have some specific process or steps you want contributors to follow, consider creating a separate CONTRIBUTING.md file--->
To contribute to <project_name>, follow these steps:
1. Fork this repository.
2. Create a branch: `git checkout -b <branch_name>`.
3. Make your changes and commit them: `git commit -m '<commit_message>'`
4. Push to the original branch: `git push origin <project_name>/<location>`
5. Create the pull request.
Alternatively see the GitHub documentation on [creating a pull request](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request).
## Contributors
Thanks to the following people who have contributed to this project:
* [@edemagbenyo](https://github.com/edemagbenyo) 📖
## Contact
If you want to contact me you can reach me at <mail@edemagbenyo.com>.
## License
<!--- If you're not sure which open license to use see https://choosealicense.com/--->
This project uses the following license: [MIT]().
| 32.71831 | 178 | 0.758071 | eng_Latn | 0.993617 |
21d6f58dae1a417726d54b8be243cabd26315c19 | 5,834 | md | Markdown | _posts/2020/2020-02-25-virtual-bishoujo-meeting.md | mzyy94/mzyy94.github.io | 6f9d7fe0ffe5ec2e69fb4cd265401c57700ded89 | [
"MIT"
] | 3 | 2018-09-02T02:47:39.000Z | 2021-12-25T09:50:08.000Z | _posts/2020/2020-02-25-virtual-bishoujo-meeting.md | mzyy94/mzyy94.github.io | 6f9d7fe0ffe5ec2e69fb4cd265401c57700ded89 | [
"MIT"
] | null | null | null | _posts/2020/2020-02-25-virtual-bishoujo-meeting.md | mzyy94/mzyy94.github.io | 6f9d7fe0ffe5ec2e69fb4cd265401c57700ded89 | [
"MIT"
] | 2 | 2018-03-10T06:30:01.000Z | 2020-12-24T22:23:10.000Z | ---
title: "リモートワークでバ美肉ビデオ会議"
date: 2020-02-25 22:30:00 +0900
published: true
toc: true
category: XR
tags: oss ios arkit live2d
image:
path: /assets/images/2020/02/25/babiniku.png
thumbnail: /assets/images/2020/02/25/babiniku.png
---
みなさん体調は大丈夫ですか?私は咳が止まらない毎日を送っています。医師の診断によるとただの風邪らしいです。
巷では新型肺炎が流行しているようで、情勢を鑑み出社を禁止する会社もちらほら出てきている現状です。
弊社も例に漏れず、状況によってリモートワークが推奨されることとなりました。
処方された薬を飲み続けていて身体はすこぶる元気なものの、咳がまだ止まらないので最近はリモートワークの毎日です。
ズボラな私は、家から出ないと決めた日は寝起きのまま、髪はボサボサ、目は半開き、家庭用の微妙なデザインのメガネ。
リモートワークなのだからソレくらいがちょうどいいんですが、社会はそんなに甘くありませんでした。
ビデオ会議の存在です。
<!-- more -->
{% include toc %}
## ビデオ会議
ビデオ会議ではカメラを通して自分自身をメンバーに晒す必要があります。
ボサボサの髪・開ききらない目・クソダサメガネを備えた姿を、たとえ低画質のカメラを通したとしても人に見せるわけにはいきません。
そのためには、現代のテクノロジーでなんとかできる方法を探るしかありません。
バーチャル美少女受肉、通称 **バ美肉** です。
## バ美肉
[https://ja.wikipedia.org/wiki/バ美肉](https://ja.wikipedia.org/wiki/%E3%83%90%E7%BE%8E%E8%82%89)
バ美肉がなんたるかはWikipediaで確認してもらうとして、バ美肉をどのようにするかを考えなければなりません。
リモートワークでバ美肉の必要性を感じる人はちらほらいるようで、以下の記事でも実践していました。
[これからのリモートワークの話をしよう|schemelisp|note](https://note.com/garbageable/n/nd50b9abdcb28)
### FaceRig
- [Steam:FaceRig](https://store.steampowered.com/app/274920/FaceRig/?l=japanese)
- [Steam:FaceRig Live2D Module](https://store.steampowered.com/app/420680/FaceRig_Live2D_Module/?l=japanese)
先ほどの記事にはFaceRigとLive2D Moduleを用いているとの記載があります。
これで受肉できる!と、考えたのも束の間、利用環境が全くもって合致しませんでした。
業務で用いているのはMacBook Proで動かしているOSはmacOS Catalina。そもそもWindows向けに作られているFaceRigの起動すらできない環境です。
そしてそこそこの計算資源が必要とあり、障壁はなかなかに高いものでした。
さて、この解決策は使えないと知り、振り出しに戻ったかと思ったものの、FaceRig Live2D Moduleの説明に既視感のある文字列を見つけました。
**Live2D Cubism**です。
## ARKit-Live2D 2.0
iOS 11でARKitのAPIが提供され始めた時のこと、Face Trackingして[Live2D Cubism](https://www.live2d.com/)のパラメーターを操作するデモアプリを作った過去があります。
<blockquote class="twitter-tweet"><p lang="ja" dir="ltr">iPhone X+ARKit+Live2Dがどんな感じに動くかのサンプル<br>(音声はPCで再生してるやつを録音したもの)<br>まだ表情のパラメーターは弄りがいがある感じ <a href="https://t.co/xNQZcQhsTY">pic.twitter.com/xNQZcQhsTY</a></p>— 劇場版ハイスクール・フリート4DX (@mzyy94) <a href="https://twitter.com/mzyy94/status/931228533166260225?ref_src=twsrc%5Etfw">November 16, 2017</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
そうだこれを使えばいいんだ。FaceRigから遠回りしたものの、一つの解決策にたどり着きました。
これが今でもそのまま動けばよかったものの、最後の更新から2年以上の月日が経っていたり、利用しているLive2D Cubism 2.0の配布が終了していたりで今のiPhoneで動かすことはできませんでした。
この三連休は咳が酷くてずっと家にいたため、これを手直し、最新のLive2D Cubism 4.0への対応と新しいEye Trackingにも対応しました。バージョン2.0です。
[mzyy94/ARKit-Live2D - GitHub](https://github.com/mzyy94/ARKit-Live2D)
誰でも簡単に使えるようになっているので、READMEを読んで動かしてみてください。
こんな感じで動きます
<blockquote class="twitter-tweet"><p lang="ja" dir="ltr">ビデオ会議風景 <a href="https://t.co/9jIyNgwBki">pic.twitter.com/9jIyNgwBki</a></p>— 劇場版ハイスクール・フリート4DX (@mzyy94) <a href="https://twitter.com/mzyy94/status/1232523165131108354?ref_src=twsrc%5Etfw">February 26, 2020</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
ちょっとアップデートして背景色と位置の変更ができるようになりました。
## macOSでiPhoneの画面表示
さて、これでiPhoneでのバ美肉はできるようになりました。しかし、これだけではビデオ会議にこの映像を使うことはできません。
MacでiPhoneに表示されてる映像を使えるようにする必要があります。
ここで考えられる方法は、以下の2つがあります。
### QuickTimeでビデオ録画
USBで接続したiPhoneの画面をQuickTime Playerでプレビューする方法が、最も手軽にiPhoneの画面をmacOSに表示する手段です。これを画面共有するなどして、ビデオ会議に用いる方法がまず浮かびました。
[QuickTime Player を使う方法 - Apple サポート](https://support.apple.com/ja-jp/HT201066#record)
ただ、遅延が大きいのと、計算資源を大きく消費することが気になります。
### CoreMediaIO DAL
macOSに備わるフレームワークを用いてメディアデバイスを仮想的に作り上げる方法があります。CoreMediaIOです。
QuickTimeの画面録画もこれを用いています。
- [CoreMediaIO - Apple Developer Documentation Archive](https://developer.apple.com/library/archive/samplecode/CoreMediaIO/Introduction/Intro.html)
- [508_camera_capture_manual_controls.pdf](http://devstreaming.apple.com/videos/wwdc/2014/508xxfvaehrll14/508/508_camera_capture_manual_controls.pdf)
まず、このCoreMediaIOのDevice Abstraction Layer (DAL) を通して、仮想カメラをmacOSに作成し、iOSからの映像を受け取る算段を思いつきました。
ARKit-Live2Dは[ReplayKit](https://developer.apple.com/documentation/replaykit)に対応しているため、RTMP配信対応アプリを通して、以下のような流れが実現できるはずでした。
```
iOS -> ReplayKit - RTMP -> macOS -> CoreMediaIO DAL -> Camera
```
ただ、CoreMediaIO DALのサンプルコードが古いのと、Kernel Extensionのデバッグに難航してしまったので、仮想カメラを作るのは諦めました。
### OBS + OBS Virtual Camera
ゲーム実況やYouTubeでの配信に用いられる、OBSという配信ソフトウェアがあります。
[Open Broadcaster Software®️ \| OBS](https://obsproject.com/ja)
OBSでは映像や音声、テキストなど、複数の入力ソースを組み合わせて一つの動画を作り上げることができます。
それらの入力ソースの中に、映像キャプチャデバイスがあり、これによりiPhoneの画面出力をLightningケーブルで取り込むことができます。
これはCoreMediaIOを用いて実現しています。

そしてOBSはプラグインシステムで拡張でき、[obs-mac-virtualcam](https://github.com/johnboiles/obs-mac-virtualcam)というプラグインを用いることで、作った動画を**仮想Webカメラ**として、他のアプリケーションに取り込むことができるのです。
[johnboiles/obs-mac-virtualcam: Creates a virtual webcam device from the output of OBS. Especially useful for streaming smooth, composited video into Zoom, Hangouts, Jitsi etc. Like CatxFish/obs-virtualcam but for macOS.](https://github.com/johnboiles/obs-mac-virtualcam)
```
iOS -> CoreMediaIO -> OBS -> CoreMediaIO DAL -> Camera
```
## Zoomビデオ会議
弊社では、ビデオ会議には[Zoom](https://zoom.us/)を用いています。
Zoomは少し前まで仮想Webカメラからの映像入力ができないようになっていました。
Zoomバージョン5.0.4からホワイトリストで一部の仮想Webカメラが使えるようになり、バージョン5.1.1から全ての仮想Webカメラが使えるようになりました。
[New updates for macOS – Zoom Help Center](https://support.zoom.us/hc/en-us/articles/201361963)

OBSとobs-mac-virtualcamをインストール、そして、OBSで映像キャプチャデバイスを設定して動画を準備しツールメニューから"Start Virtual Camera"を選択すると、Zoomのカメラ一覧に"OBS Virtual Camera"が現れ、映像入力として使えるようになります。

あとはこれでミーティングに参加すれば、バ美肉ビデオ会議が叶うこととなりました☺️

声がおじさんのまま?よしなに頑張ってください。
## まとめ
WindowsでFaceRig+Live2D Module+OBSを用いてやるよりも、低負荷で実現できていそうな気がします。
リモートワークでLet'sバ美肉。
| 35.357576 | 452 | 0.817964 | yue_Hant | 0.713 |
21d8c4774680c29bf6773695e6edec8ee2d40eea | 5,020 | md | Markdown | INSTALL.md | burimshala/nakadi-ui | 84a6b7b3cda5748f5932de30326383e76a733843 | [
"MIT"
] | 45 | 2018-07-06T20:36:58.000Z | 2019-01-03T07:07:29.000Z | INSTALL.md | burimshala/nakadi-ui | 84a6b7b3cda5748f5932de30326383e76a733843 | [
"MIT"
] | 101 | 2019-01-23T13:52:07.000Z | 2021-12-20T10:46:50.000Z | INSTALL.md | burimshala/nakadi-ui | 84a6b7b3cda5748f5932de30326383e76a733843 | [
"MIT"
] | 6 | 2019-02-06T15:35:02.000Z | 2021-12-28T01:50:52.000Z | # Install
## Prerequisites
To run this web server, you need to have [Node.js](https://nodejs.org/) version 6.4 or newer
installed.
Also it is better to have [Elm](http://elm-lang.org/) installed globally for an easy use later.
```bash
npm install elm@latest-0.19.1
```
## Get the source code
Download source code using [Git](https://git-scm.com/)
```bash
git clone git@github.bus.zalan.do:zalando-nakadi/nakadi-ui.git
```
or as [zip archive](https://github.com/zalando-nakadi/nakadi-ui/archive/master.zip)
## Build
Go to this project root and install all dependencies.
```bash
cd nakadi-ui
npm install
```
Install one of the [Passport.js](http://passportjs.org/) auth strategies you plan to use
(see AUTH_STRATEGY configuration variable).
For example [Google OAuth 2.0 API](https://github.com/jaredhanson/passport-google-oauth2).
```bash
npm install passport-google-oauth20
```
To build production ready app you need to run.
```bash
npm run build
```
After the build, the resulting client files will be stored in `dist` folder.
## Configuration
All the configuration parameters are in the environment variables.
To make it simpler, you can put them in the `.env` file in the project root folder.
The real environment variable if set will override values from this file.
You can copy the example configuration and edit it.
```bash
cp .env.example .env
editor .env
```
Follow the comments in `.env.example`
*WARNING*: You MUST change `COOKIE_SECRET`! Use this command to generate new secret: `openssl rand -base64 100`
```bash
# Bind and listen this TCP port.
# Required
HTTP_PORT="3000"
# Base URL of the app visible to the end user.
# Required
BASE_URL="https://localhost:3000"
# Nakadi API URL.
# Required
NAKADI_API_URL="https://nakadi-staging.exmaple.com"
# Base link to apps info register. ie https://yourturn.stups.example.com/application/detail/{some_app_name}
# Optional
APPS_INFO_URL="https://yourturn.stups.example.com/application/detail/"
# Base link to user info register. e.g https://people.example.com/details/{uid}
# Optional
USERS_INFO_URL="https://people.example.com/details/"
# Link to company internal documentation related to Nakadi service
# Optional
DOCS_URL=https://nakadi-faq.docs.example.com/
# Link to company internal Nakadi support (chat room, support request form etc.)
# Optional
SUPPORT_URL=https://hipchat.example.com/chat/room/12345
# URL of Scalyr API. Used to extract list of producers and consumers from the access logs
# Required
SCALYR_URL="https://www.scalyr.com/api/"
SCALYR_KEY="YOUR-SCALYR-API-KEY-3c9f8h3e8b3d07e-"
SCALYR_BASE_FILTER=($serverHost=="nakadi") and ($logfile=="/var/log/application.log") and
# Feature flags.
# Optional, default "No"
ALLOW_DELETE_EVENT_TYPE=yes
# Link to the description of why the event type deletion is disabled
# Only needed if ALLOW_DELETE_EVENT_TYPE is false
# Optional
FORBID_DELETE_URL=https://nakadi-faq.docs.example.com/#how-to-delete-et
# Module name and configuration of Passport.js auth plugin (see /server/auth.js).
# Required
AUTH_STRATEGY="passport-google-oauth20"
# Zalando employees can use OAuth for easiness of setup
#AUTH_STRATEGY="../tests/mocks/devPassportStrategy.js"
AUTH_OPTIONS_clientID="YOUR client id.apps.googleusercontent.com"
AUTH_OPTIONS_clientSecret="YOUR client secret"
AUTH_OPTIONS_scope="profile email"
AUTH_OPTIONS_callbackURL="https://localhost:3000/auth/callback"
# Module name and configuration of custom authorization plugin (see /server/nakadiApi.js).
# Optional
AUTHORIZE_STRATEGY="myGoogleAdapter"
AUTHORIZE_OPTIONS=myscope
# Module name and configuration of custom analytics plugin (see /server/App.js#analytics)
# Can be used for collecting user statistic (KPI, AUDIT etc)
# Optional
#ANALYTICS_STRATEGY="./nakadi-ui-analytics-plugin"
#ANALYTICS_OPTIONS_url="https://nakadi-staging.example.com"
#ANALYTICS_OPTIONS_name="example-team.nakadi-ui.access-log"
# Nakadi SQL Support
# If "yes" shows create SQL Query menu item and the SQL Query tab
# for query output event types
SHOW_NAKADI_SQL=no
NAKADI_SQL_API_URL="http://nakadi-sql.example.com"
QUERY_MONITORING_URL="https://zmon.example.com/grafana/dashboard/db/nakadi-et/?var-stack=live&var-$queryId={query}"
# The key used to encode/decode user session.
# Required
COOKIE_SECRET="!!! CHANGE THIS!!! ksdgi98NNliuHHy^fdjy6b!khl_ig6%^#vGdsljhgl Bfdes&8yh3e"
# Run as HTTPS server. If disabled then run as HTTP server.
# Optional, default "No"
HTTPS_ENABLE=1
HTTPS_PRIVATE_KEY_FILE="deploy/certs/privkey.pem"
HTTPS_PUBLIC_KEY_FILE="deploy/certs/cert.pem"
```
*HINT*: Use openssl `openssl req -x509 -newkey rsa:4096 -keyout privkey.pem -out cert.pem -days 365 -subj '/CN=localhost' -nodes`
Copy SSL keys `privkey.pem` and `cert.pem` to `deploy/certs` folder.
## Run
You can now run server in the production mode.
```bash
npm run start:prod
```
or in development mode with watch and Hot Module Replacement.
```bash
npm run start
```
Now you can go to `https://localhost:3000` and look at the Nakadi UI interface.
| 30.240964 | 129 | 0.767729 | eng_Latn | 0.568679 |
21d932f5fe096d7a4a8a77c8c07a9598ad28b7e8 | 692 | md | Markdown | README.md | edsonmfreire/react-native-animated-drawer | dfcdf56e60e060d40c6e9eab9c70394ad7bf89bf | [
"MIT"
] | null | null | null | README.md | edsonmfreire/react-native-animated-drawer | dfcdf56e60e060d40c6e9eab9c70394ad7bf89bf | [
"MIT"
] | null | null | null | README.md | edsonmfreire/react-native-animated-drawer | dfcdf56e60e060d40c6e9eab9c70394ad7bf89bf | [
"MIT"
] | null | null | null | ## About the Project
React-Native example project of Drawer Animated.
<img src="./assets/record.gif" width="200" />
## Installation
1. Run: `yarn` for instalation of dependencies
2. For each platform (iOS/Android) you plan to use, follow one of the options for the corresponding platform.
#### Android
`yarn android`
#### IOS
`yarn ios`
## Libraries & Versions
react-native 0.63.3
react-native-vector-icons 7.1.0
@react-navigation/native 5.8.9
## License
This project is licenced under the [MIT License](http://opensource.org/licenses/mit-license.html).
Any bundled fonts are copyright to their respective authors and mostly under MIT or [SIL OFL](http://scripts.sil.org/OFL).
| 20.969697 | 122 | 0.735549 | eng_Latn | 0.938671 |
21d9bb7d3ad211de879b888b10401ed64edf1b0f | 643 | md | Markdown | README.md | LaBretelle/DataJVC | 9aaab4034ec4f69eae85709b0ad922d3ce4e1b47 | [
"CC-BY-4.0"
] | null | null | null | README.md | LaBretelle/DataJVC | 9aaab4034ec4f69eae85709b0ad922d3ce4e1b47 | [
"CC-BY-4.0"
] | null | null | null | README.md | LaBretelle/DataJVC | 9aaab4034ec4f69eae85709b0ad922d3ce4e1b47 | [
"CC-BY-4.0"
] | null | null | null | # Data JVC #
C'est un set de données relevée sur le site jeuxvideo.com à partir de tous les tests effectués.
## Récupération des données ##
Tout est dans le jupyter notebook "Getting_URL.ipynb". La liste des URL relevées grâce au script sont dans "list_url.csv" et toutes les données relevées sur dans "datajvctest.csv"
## Quelques analyses ##
Quelques analyses ont été produites avec R dans le fichier "jvcdata.R"
## Reconstruire l'étude ##
Créer un environnement virtuel à la racine avec la commande virtualenv -p python3 venv
Installez les packages avec la commande pip install -r requirements.txt
Puis lancez un jupyter notebook.
| 32.15 | 179 | 0.77605 | fra_Latn | 0.985865 |
21db8a15c15dc5e0ca09c159f12d40b265af7790 | 314 | md | Markdown | docs/en/tools/parse-app-package.md | 15563988825/flutter_distributor | c8c7460e2031ac1c7ed547585dabd45b81a08727 | [
"MIT"
] | null | null | null | docs/en/tools/parse-app-package.md | 15563988825/flutter_distributor | c8c7460e2031ac1c7ed547585dabd45b81a08727 | [
"MIT"
] | null | null | null | docs/en/tools/parse-app-package.md | 15563988825/flutter_distributor | c8c7460e2031ac1c7ed547585dabd45b81a08727 | [
"MIT"
] | null | null | null | # Parse App Package
## Installation
```
dart pub global activate parse_app_package
```
## Usage
```
parse_app_package hello_world-1.0.0+1-android.apk
```
Output:
```
{
"platform": "android",
"identifier": "com.example.hello_world",
"name": "hello_world",
"version": "1.0.0",
"buildNumber": 1
}
```
| 12.076923 | 49 | 0.640127 | kor_Hang | 0.180062 |
21db9a4c7268052f81fd687a3c4d544742823b10 | 471 | md | Markdown | README.md | kfarr/www.3d.st | de6433f0d0b16adb144d96b41ef6981e7ce0c696 | [
"CC-BY-3.0"
] | null | null | null | README.md | kfarr/www.3d.st | de6433f0d0b16adb144d96b41ef6981e7ce0c696 | [
"CC-BY-3.0"
] | null | null | null | README.md | kfarr/www.3d.st | de6433f0d0b16adb144d96b41ef6981e7ce0c696 | [
"CC-BY-3.0"
] | null | null | null | This is the website for www.3d.st
****
Info about the HTML template:
Hyperspace by HTML5 UP
html5up.net | @ajlkn
Free for personal and commercial use under the CCA 3.0 license (html5up.net/license)
AJ
aj@lkn.io | @ajlkn
Credits:
Demo Images:
Unsplash (unsplash.com)
Icons:
Font Awesome (fontawesome.io)
Other:
jQuery (jquery.com)
Scrollex (github.com/ajlkn/jquery.scrollex)
Responsive Tools (github.com/ajlkn/responsive-tools)
| 18.84 | 85 | 0.70276 | eng_Latn | 0.46984 |
21dd919bbaa8532c010b2f2eb52b69336b4c4899 | 2,933 | md | Markdown | articles/azure-app-configuration/scripts/cli-import.md | salem84/azure-docs.it-it | 3ec6a13aebb82936591c7fc479f084be9bb8776d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-app-configuration/scripts/cli-import.md | salem84/azure-docs.it-it | 3ec6a13aebb82936591c7fc479f084be9bb8776d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-app-configuration/scripts/cli-import.md | salem84/azure-docs.it-it | 3ec6a13aebb82936591c7fc479f084be9bb8776d | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Esempio di script dell'interfaccia della riga di comando di Azure - Eseguire l'importazione in un archivio di Configurazione app di Azure | Microsoft Docs
description: Questo articolo fornisce informazioni e script di esempio per l'importazione in un archivio di Configurazione app di Azure
services: azure-app-configuration
documentationcenter: ''
author: yegu-ms
manager: balans
editor: ''
ms.service: azure-app-configuration
ms.devlang: azurecli
ms.topic: sample
ms.tgt_pltfrm: na
ms.workload: azure-app-configuration
ms.date: 02/24/2019
ms.author: yegu
ms.custom: mvc
ms.openlocfilehash: cd1e54fc6cfbf254da010c03dfaa859a0ee8213c
ms.sourcegitcommit: 11265f4ff9f8e727a0cbf2af20a8057f5923ccda
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 10/08/2019
ms.locfileid: "72029824"
---
# <a name="import-to-an-azure-app-configuration-store"></a>Importare in un archivio di Configurazione app di Azure
Questo script di esempio importa coppie chiave-valore in un archivio di Configurazione app di Azure.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
Se si sceglie di installare e usare l'interfaccia della riga di comando in locale, per questo articolo è necessario eseguire la versione 2.0 o successiva dell'interfaccia della riga di comando di Azure. Eseguire `az --version` per trovare la versione. Se è necessario eseguire l'installazione o l'aggiornamento, vedere [Installare l'interfaccia della riga di comando di Azure](/cli/azure/install-azure-cli).
È prima necessario installare l'estensione dell'interfaccia della riga di comando di Configurazione app di Azure eseguendo il comando seguente:
az extension add -n appconfig
## <a name="sample-script"></a>Script di esempio
```azurecli-interactive
#!/bin/bash
# Import key-values from a file
az appconfig kv import --name myTestAppConfigStore --source file --path ~/Import.json
```
[!INCLUDE [cli-script-cleanup](../../../includes/cli-script-clean-up.md)]
## <a name="script-explanation"></a>Spiegazione dello script
Questo script usa i comandi seguenti per importare un archivio di configurazione di app. Ogni comando della tabella include collegamenti alla documentazione specifica del comando.
| Comando | Note |
|---|---|
| [az appconfig import](/cli/azure/ext/appconfig/appconfig/kv#ext-appconfig-az-appconfig-kv-import) | Esegue l'importazione in una risorsa dell'archivio di configurazione di app. |
## <a name="next-steps"></a>Passaggi successivi
Per altre informazioni sull'interfaccia della riga di comando di Azure, vedere la [documentazione sull'interfaccia della riga di comando di Azure](/cli/azure).
Altri esempi di script dell'interfaccia della riga di comando di configurazione di app sono disponibili nella [documentazione relativa al servizio Configurazione app di Azure](../cli-samples.md).
| 47.306452 | 407 | 0.783157 | ita_Latn | 0.978592 |
21dd9d7a006c618eceff1285cd9f85d4c82dfda8 | 4,631 | md | Markdown | packages/@secretlint/quick-start/CHANGELOG.md | cbsi-dto/secretlint | bfff6db593ff86e87ca4f9145d025f70036f148d | [
"MIT"
] | null | null | null | packages/@secretlint/quick-start/CHANGELOG.md | cbsi-dto/secretlint | bfff6db593ff86e87ca4f9145d025f70036f148d | [
"MIT"
] | null | null | null | packages/@secretlint/quick-start/CHANGELOG.md | cbsi-dto/secretlint | bfff6db593ff86e87ca4f9145d025f70036f148d | [
"MIT"
] | null | null | null | # Change Log
All notable changes to this project will be documented in this file.
See [Conventional Commits](https://conventionalcommits.org) for commit guidelines.
## [2.1.1](https://github.com/secretlint/secretlint/compare/v2.1.0...v2.1.1) (2020-11-04)
**Note:** Version bump only for package @secretlint/quick-start
# [2.1.0](https://github.com/secretlint/secretlint/compare/v2.0.0...v2.1.0) (2020-06-16)
**Note:** Version bump only for package @secretlint/quick-start
# [2.0.0](https://github.com/secretlint/secretlint/compare/v1.1.0...v2.0.0) (2020-04-27)
**Note:** Version bump only for package @secretlint/quick-start
# [1.1.0](https://github.com/secretlint/secretlint/compare/v1.0.5...v1.1.0) (2020-04-04)
**Note:** Version bump only for package @secretlint/quick-start
## [1.0.5](https://github.com/secretlint/secretlint/compare/v1.0.4...v1.0.5) (2020-04-03)
**Note:** Version bump only for package @secretlint/quick-start
## [1.0.4](https://github.com/secretlint/secretlint/compare/v1.0.3...v1.0.4) (2020-03-31)
**Note:** Version bump only for package @secretlint/quick-start
## [1.0.3](https://github.com/secretlint/secretlint/compare/v1.0.2...v1.0.3) (2020-03-30)
**Note:** Version bump only for package @secretlint/quick-start
## [1.0.1](https://github.com/secretlint/secretlint/compare/v1.0.0...v1.0.1) (2020-03-29)
**Note:** Version bump only for package @secretlint/quick-start
# [1.0.0](https://github.com/secretlint/secretlint/compare/v0.10.1...v1.0.0) (2020-03-18)
**Note:** Version bump only for package @secretlint/quick-start
## [0.10.1](https://github.com/secretlint/secretlint/compare/v0.10.0...v0.10.1) (2020-03-18)
**Note:** Version bump only for package @secretlint/quick-start
# [0.10.0](https://github.com/secretlint/secretlint/compare/v0.9.2...v0.10.0) (2020-03-18)
**Note:** Version bump only for package @secretlint/quick-start
## [0.9.2](https://github.com/secretlint/secretlint/compare/v0.9.1...v0.9.2) (2020-03-16)
**Note:** Version bump only for package @secretlint/quick-start
## [0.9.1](https://github.com/secretlint/secretlint/compare/v0.9.0...v0.9.1) (2020-03-16)
**Note:** Version bump only for package @secretlint/quick-start
# [0.9.0](https://github.com/secretlint/secretlint/compare/v0.7.3...v0.9.0) (2020-03-16)
**Note:** Version bump only for package @secretlint/quick-start
## [0.7.3](https://github.com/secretlint/secretlint/compare/v0.7.2...v0.7.3) (2020-03-01)
**Note:** Version bump only for package @secretlint/quick-start
## [0.7.2](https://github.com/secretlint/secretlint/compare/v0.7.1...v0.7.2) (2020-03-01)
**Note:** Version bump only for package @secretlint/quick-start
## [0.7.1](https://github.com/secretlint/secretlint/compare/v0.7.0...v0.7.1) (2020-03-01)
### Bug Fixes
- **quick-start:** add config/ as files ([15488ec](https://github.com/secretlint/secretlint/commit/15488ecedd6ce72f8593cb3b4c5186201e7926cb))
# [0.7.0](https://github.com/secretlint/secretlint/compare/v0.6.0...v0.7.0) (2020-03-01)
### Bug Fixes
- **quick-start:** fix bin script name ([#70](https://github.com/secretlint/secretlint/issues/70)) ([8887af1](https://github.com/secretlint/secretlint/commit/8887af1adb411ba8dacce0e3e5a497f0bb822c85))
- **quick-start:** fix env type ([9a797ac](https://github.com/secretlint/secretlint/commit/9a797ace78bf17141a89c11dbae740fdb9b233e7))
### Features
- **quick-start:** add @secretlint/quick-start module ([8c7c298](https://github.com/secretlint/secretlint/commit/8c7c298a0aa2cff6c03278006aacbf2468e232b1))
# @secretlint/quick-start
## 0.6.2
### Patch Changes
- 8887af1: Fix to work `secretlint/quick-start`:
```
npx @secretlint/quick-start "**/*"
```
Refer to `@secretlint/quick-start`'s `quick-start` script by default.
## 0.6.1
### Patch Changes
- c679a7d: # Quick Start
We have introduce `@secretlint/quick-start` package.
It will be used in "Quick Start" section on README.
```markdown
## Quick Start
You can try to use Secretlint on your project at one command.
If you already have installed Docker:
docker run -v `pwd`:`pwd` -w `pwd` -it secretlint/secretlint secretlint "**/*"
If you already have installed Node.js:
npx @secretlint/quick-start "**/*"
After running,
If you got empty result and exit status is `0`, your project is secure.
Otherwise you got some error report, your project includes credential as plain format.
You want to get continuous security, Please see following installation guide and setup pre-commit hook and CI.
```
| 24.897849 | 202 | 0.695962 | eng_Latn | 0.356608 |
21de61c651638445a22d88a0c929d240e405bc2b | 747 | md | Markdown | ReadMe.md | Triosus/SnakeGame | 0019126ea85c2663862eaad6b0b19a78da48f5af | [
"MIT"
] | null | null | null | ReadMe.md | Triosus/SnakeGame | 0019126ea85c2663862eaad6b0b19a78da48f5af | [
"MIT"
] | null | null | null | ReadMe.md | Triosus/SnakeGame | 0019126ea85c2663862eaad6b0b19a78da48f5af | [
"MIT"
] | null | null | null | # SnakeGame


The written code is a primitive attempt by the author to code a Snake Game. The game has been limited to select features for initial development goals. The code might be edited later on for increased features, but as its stands the features in the game are limited only to:
* Snake movement
* Food aquisition
* Change in length of snake
* 60 fps smooth movement
## Future features
The features that are in mind of the author that might be added later are:
* Sound
* Online logging of highscores
* An online multiplayer mode
* GUI for displaying of score in-game
### Contact
If you have any suggestions for improvement, feel free to contact me! | 29.88 | 273 | 0.768407 | eng_Latn | 0.998764 |
21de6d2120483f6d08aa8586e09897d11c686127 | 2,437 | md | Markdown | docs/framework/unmanaged-api/debugging/icorpublishprocess-enumappdomains-method.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/icorpublishprocess-enumappdomains-method.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/unmanaged-api/debugging/icorpublishprocess-enumappdomains-method.md | lbragaglia/docs.it-it | 2dc596db6f16ffa0e123c2ad225ce4348546fdb2 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Metodo ICorPublishProcess::EnumAppDomains
ms.date: 03/30/2017
api_name:
- ICorPublishProcess.EnumAppDomains
api_location:
- mscordbi.dll
api_type:
- COM
f1_keywords:
- ICorPublishProcess::EnumAppDomains
helpviewer_keywords:
- EnumAppDomains method [.NET Framework debugging]
- ICorPublishProcess::EnumAppDomains method [.NET Framework debugging]
ms.assetid: 7da621fc-e7d0-4c00-9439-5c93619d7414
topic_type:
- apiref
author: rpetrusha
ms.author: ronpet
ms.openlocfilehash: c614afee18824e1672b378dd468cb11c9c173d9f
ms.sourcegitcommit: 7f616512044ab7795e32806578e8dc0c6a0e038f
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 07/10/2019
ms.locfileid: "67764952"
---
# <a name="icorpublishprocessenumappdomains-method"></a>Metodo ICorPublishProcess::EnumAppDomains
Ottiene un enumeratore per i domini applicazione nel processo di cui viene fatto riferimento da questo [ICorPublishProcess](../../../../docs/framework/unmanaged-api/debugging/icorpublishprocess-interface.md).
## <a name="syntax"></a>Sintassi
```cpp
HRESULT EnumAppDomains (
[out] ICorPublishAppDomainEnum **ppEnum
);
```
## <a name="parameters"></a>Parametri
`ppEnum`
[out] Un puntatore all'indirizzo di un [ICorPublishAppDomainEnum](../../../../docs/framework/unmanaged-api/debugging/icorpublishappdomainenum-interface.md) istanza che consente l'iterazione nella raccolta di domini applicazione in questo processo.
## <a name="remarks"></a>Note
L'elenco di domini applicazione si basa su uno snapshot dei domini applicazione che esiste quando il `EnumAppDomains` viene chiamato il metodo. Questo metodo può essere chiamato più volte per creare un nuovo elenco aggiornato. Elenchi esistenti non essere interessati dalle chiamate successive di questo metodo.
Se il processo è stato terminato, `EnumAppDomains` avrà esito negativo con un valore HRESULT di CORDBG_E_PROCESS_TERMINATED.
## <a name="requirements"></a>Requisiti
**Piattaforme:** Vedere [Requisiti di sistema](../../../../docs/framework/get-started/system-requirements.md).
**Intestazione:** CorPub.idl, CorPub.h
**Libreria:** CorGuids.lib
**Versioni di .NET Framework:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]
## <a name="see-also"></a>Vedere anche
- [Interfaccia ICorPublishProcess](../../../../docs/framework/unmanaged-api/debugging/icorpublishprocess-interface.md)
| 41.305085 | 314 | 0.762413 | ita_Latn | 0.651679 |
21dea788b2270cbd1156f23f3f1d5c1df199d06c | 306 | md | Markdown | _posts/2004/2004-06-19-dodgeball.md | thingles/thingles.github.io | 46b8d98959bcae7ddd6e89d13070fe3e7644d40b | [
"MIT"
] | 2 | 2016-08-23T03:14:22.000Z | 2017-05-04T03:48:16.000Z | _posts/2004/2004-06-19-dodgeball.md | jthingelstad/thingles.github.io | 46b8d98959bcae7ddd6e89d13070fe3e7644d40b | [
"MIT"
] | 29 | 2017-05-09T01:10:31.000Z | 2017-11-04T20:29:56.000Z | _posts/2004/2004-06-19-dodgeball.md | jthingelstad/thingles.github.io | 46b8d98959bcae7ddd6e89d13070fe3e7644d40b | [
"MIT"
] | null | null | null | ---
title: Dodgeball!
tags:
---
We went and saw [Dodgeball](http://www.imdb.com/title/tt0364725/) tonight and I have to admit that while it's a pretty stupid movie it is really funny. Don't go expected _any_ type of intellectual experience, this is chewing gum for the brain, but fine chewing gum it is.
| 43.714286 | 271 | 0.748366 | eng_Latn | 0.997268 |
21decaae7daaecce13e8f5ead2f7d4949027caa3 | 36 | md | Markdown | _includes/03-links.md | m2paulc/markdown-portfolio | 0bf2ec622b27e5428915304023fa83eaf7a07e4a | [
"MIT"
] | null | null | null | _includes/03-links.md | m2paulc/markdown-portfolio | 0bf2ec622b27e5428915304023fa83eaf7a07e4a | [
"MIT"
] | 5 | 2019-12-10T10:21:16.000Z | 2019-12-10T11:18:11.000Z | _includes/03-links.md | m2paulc/markdown-portfolio | 0bf2ec622b27e5428915304023fa83eaf7a07e4a | [
"MIT"
] | null | null | null | [Github](https://github.com/m2paulc) | 36 | 36 | 0.75 | zul_Latn | 0.511679 |
21df5a9d492fa599c88b9364954dd4989e5653fc | 1,800 | md | Markdown | frontend/README.md | Palando/the-buzzword-bingo | 516b2d99862a07374ed0de89e66236a2fd96f3dd | [
"MIT"
] | 2 | 2020-01-21T04:47:28.000Z | 2021-12-17T13:34:16.000Z | frontend/README.md | Palando/the-buzzword-bingo | 516b2d99862a07374ed0de89e66236a2fd96f3dd | [
"MIT"
] | null | null | null | frontend/README.md | Palando/the-buzzword-bingo | 516b2d99862a07374ed0de89e66236a2fd96f3dd | [
"MIT"
] | 2 | 2021-12-17T18:25:28.000Z | 2022-01-21T16:44:57.000Z | # Frontend
The Frontend service is the user-facing portion of the app. It serves the user a bingo grid with a predetermined set of buzzwords in a random layout. It also handles showing the current list of connected users as well as bingo winners.
This service was originally designed with web sockets to facilitate instanteous updates of the connected users/winners lists, but that functionality was stripped out in favour of regular old polling to enable deployment on GCP Cloud Run (in essence, serverless containers) for demo purposes.
**Note:** Just for context, this app was originally written in 2018, so you're not going to find any newer React features like hooks in here.
## Tech Stack
- **Language**: JavaScript
- **Framework**: React
## Code Structure
```
├── Dockerfile # Docker configuration
├── package.json # NPM dependencies
├── package-lock.json # NPM dependencies lock file
├── public/ # Public files auto-generated by create-react-app
├── scripts/ # Other scripts that enable running the service
├── server/ # The Node server that serves the Fronted service
└── src/ # Source files
├── api/ # Wrapper around the calls to the Backend service
├── App.css # App-level stylesheet
├── App.js # Main application setup and components
├── components/ # React components that make up the app
├── config.js # App-level configuration
├── index.js # Entrypoint to the app that renders the root App component
├── serviceWorker.js # Auto-generated file (not used)
└── utils/ # Various other app-level utilities
```
| 54.545455 | 291 | 0.639444 | eng_Latn | 0.995525 |
21df65afa9b2ee190bd1beec45889980c7ff2d36 | 2,596 | md | Markdown | docs/vs-2015/debugger/edit-and-continue.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/edit-and-continue.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/debugger/edit-and-continue.md | seferciogluecce/visualstudio-docs.tr-tr | 222704fc7d0e32183a44e7e0c94f11ea4cf54a33 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Düzenle ve devam et | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-debug
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- vs.debug.enc
dev_langs:
- FSharp
- VB
- CSharp
- C++
- VB
- C++
helpviewer_keywords:
- Edit and Continue
- debugger, Edit and Continue
- debugging [Visual Studio], Edit and Continue
- debugger, tools for debugging
ms.assetid: 2cdd4531-7117-4221-a809-8447812de2a1
caps.latest.revision: 27
author: mikejo5000
ms.author: mikejo
manager: ghogen
ms.openlocfilehash: df27a9f2eaa9c6a923a17c640f19ab94c608d6df
ms.sourcegitcommit: 9ceaf69568d61023868ced59108ae4dd46f720ab
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 10/12/2018
ms.locfileid: "49175727"
---
# <a name="edit-and-continue"></a>Düzenle ve Devam Et
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Düzenle ve devam programınız kesme modundayken, kaynak kodunuzda değişiklikler yapmanızı sağlayan bir zamandan tasarruf özelliktir. Ne zaman, sürdürme programın yürütülmesini gibi bir yürütme komutu seçerek **devam** veya **adım**, Düzenle ve devam otomatik olarak kod değişiklikleri ile ilgili bazı sınırlamalar uygular. Bu, durdurmak, tüm programınızı yeniden derleyin ve hata ayıklama oturumunu yeniden zorunda kalmak yerine bir hata ayıklama oturumu sırasında kodunuzda değişiklikler yapmanızı sağlar.
Bu bölüm şu konuları içerir:
[Nasıl yapılır: kod değişikliklerini açıkça uygulama](http://msdn.microsoft.com/en-us/89c4fce9-a3ef-432d-a840-67840b1c4be8)
[Nasıl Yapılır: Düzenle ve Devam Et'i Etkinleştirme veya Devre Dışı Bırakma](../debugger/how-to-enable-and-disable-edit-and-continue.md)
[Nasıl Yapılır: Kod Değişikliklerini Durdurma](../debugger/how-to-stop-code-changes.md)
[Yürütme noktası](http://msdn.microsoft.com/en-us/dd9855a7-b536-4e76-821f-27017829b996)
[Düzenle ve Devam Et (Visual C++)](../debugger/edit-and-continue-visual-cpp.md)
[Düzenle ve Devam Et (Visual C#)](../debugger/edit-and-continue-visual-csharp.md)
[Düzenle ve Devam Et (Visual Basic)](../debugger/edit-and-continue-visual-basic.md)
[F# için Düzenle ve Devam Et Desteklenmez](../debugger/edit-and-continue-not-supported-for-f-hash.md)
## <a name="see-also"></a>Ayrıca Bkz.
[Hata ayıklayıcısı güvenliği](../debugger/debugger-security.md)
[Düzenle ve devam et, hata ayıklama, Seçenekler iletişim kutusu](http://msdn.microsoft.com/library/009d225f-ef65-463f-a146-e4c518f86103)
[Hata Ayıklayıcısı Temel Bilgileri](../debugger/debugger-basics.md)
| 38.176471 | 507 | 0.761941 | tur_Latn | 0.989814 |
21df980c71f19f225644c9d48ed944954aaef7bf | 1,145 | md | Markdown | Resources/doc/resource_owners/yahoo.md | rmzamora/hwi-oauth-bundle | 0663fe332cc2c026c4737bc27385ef0c64075569 | [
"MIT"
] | 1 | 2018-08-14T13:47:42.000Z | 2018-08-14T13:47:42.000Z | Resources/doc/resource_owners/yahoo.md | rmzamora/hwi-oauth-bundle | 0663fe332cc2c026c4737bc27385ef0c64075569 | [
"MIT"
] | null | null | null | Resources/doc/resource_owners/yahoo.md | rmzamora/hwi-oauth-bundle | 0663fe332cc2c026c4737bc27385ef0c64075569 | [
"MIT"
] | null | null | null | Step 2x: Setup Yahoo
=======================
First you will have to register your application on Yahoo ("Create a Project"). Check out the
documentation for more information: http://developer.yahoo.com/oauth/.
The Yahoo Resource Owner uses the Yahoo Profile API to get user information, so when setting up your Yahoo Project
you must ensure that you have enabled access to the "Social Directory" service, with at least "Read Public" access
under the "Social Directory (Profiles)" section.
Next configure a resource owner of type `yahoo` with appropriate `client_id`,
`client_secret`.
```yaml
# app/config.yml
hwi_oauth:
resource_owners:
any_name:
type: yahoo
client_id: <client_id>
client_secret: <client_secret>
```
When you're done. Continue by configuring the security layer or go back to
setup more resource owners.
- [Step 2: Configuring resource owners (Facebook, GitHub, Google, Windows Live and others](../2-configuring_resource_owners.md)
- [Step 3: Configuring the security layer](../3-configuring_the_security_layer.md).
| 39.482759 | 128 | 0.692576 | eng_Latn | 0.921585 |
21dfd43280271bcb62d1315dbc6212afc4598985 | 8,187 | md | Markdown | oasc-mims-mim-5-fair-artificial-intelligence-and-algoritmes/README.md | HarriHonko/oasc-mims | c3dbdf1d58a4f24376008dd53a70459e45deeec5 | [
"CC-BY-4.0"
] | null | null | null | oasc-mims-mim-5-fair-artificial-intelligence-and-algoritmes/README.md | HarriHonko/oasc-mims | c3dbdf1d58a4f24376008dd53a70459e45deeec5 | [
"CC-BY-4.0"
] | 2 | 2021-09-12T22:46:06.000Z | 2021-09-29T11:35:29.000Z | oasc-mims-mim-5-fair-artificial-intelligence-and-algoritmes/README.md | HarriHonko/oasc-mims | c3dbdf1d58a4f24376008dd53a70459e45deeec5 | [
"CC-BY-4.0"
] | 3 | 2021-07-13T10:28:19.000Z | 2022-01-19T10:58:46.000Z | # MIM5: Fair Artificial Intelligence
Proposed by the City of Amsterdam and adopted as work item for 2020
## Why fair AI? <a id="MIM5:FairArtificialIntelligenceandAlgoritmes-WhyfairAI?"></a>
Governments are increasingly seeking to capture the opportunities offered by AI to improve their services. However, governments and the general public also have justified concerns over bias, privacy, accountability, transparency of artificial intelligence. New examples are becoming apparent of negative consequences from the use of \(‘black box’\) algorithms. Recently, UN Human Rights Rapporteur Alston warned the Dutch Government for the bias of fraud risk analytics system SYRI.
## Why through contracts and procurement conditions? <a id="MIM5:FairArtificialIntelligenceandAlgoritmes-Whythroughcontractsandprocurementconditions?"></a>
Without clear conditions, governments fail to install democratic oversight of algorithmic decision-making, and may inadvertently create new risks or harm to their citizens. Governments often rely on the expertise of technology providers and often lack the necessary technical or judicial skills to decide how to implement algorithmic decision making. ‘Vendor lock-ins’ are looming: The more training an algorithm gets, the more valuable and useful it usually becomes \(Assuming problems of ‘overfitting’ models that decrease in performance are adequately dealt with while training the algorithm\). This could cause market inefficiencies, as switching to another vendor involves substantial costs, and monopolization or winner take all effects are risking. The barrier to choosing an AI implementation for government officials is becoming higher, wary for not overseeing the political and material risks of AI. Technology providers understand these challenges and aim to create clarity and predictability about risks related to AI implementations in their service provisioning. While companies are generally wary of stricter guidelines for government procurement, common-sense frameworks can help governments to procure complex new technological solutions and actually open new markets for companies. Transparent guidelines will permit both established companies and new entrants to the AI space to compete on a level playing field for government contracts. \(Source: World Economic Forum \). However, guidelines and principles are not enough in practice. When contracts are negotiated decisions are being made.
Therefore, we propose to create standardised contract conditions for AI.
Summarising, current uncertainty in the market leads to higher transaction costs for governments and technology providers and a growing barrier for AI implementations. If we can harmonise the trust relationship between government and tech providers we will reduce these market inefficiencies, and use the European procurement framework to strengthen the European market for AI solutions.
1. **Background**
Internationally, we have based the draft city procurement conditions on
* the AI Ethical Guidelines of the European Commission,
* the AI Procurement Guidelines the British Government made in collaboration with World Economic Forum,
* “a governance framework for algorithmic accountability and transparency” of the European Parliament and
* the recommendations for regulation by the European Council.
At national level,
* responding to a motion in 2018, the Dutch Minister for Judicial Protection Sander Dekker developed guidelines to safeguard against the risk of data analysis by the government. These guidelines were the starting point for the
* draft procurement contract conditions we are creating for cities in the Netherlands in Amsterdam.
* Finally, a report about the oversight on algorithms by the Dutch government provided an overview of the Dutch situation, that we will use as a reference case for European AI contract conditions.
At global city level,
* New York City has installed a “ task force for Algorithmic Decision Systems”, aiming to increase accountability and transparency of algorithms. However, there has been a lotof critique of the results of this task force. As we have learned from the New York example of an algorithmic oversight commission, there is too little skills and too much risk aversion in government and the private sector to be transparent about algorithms.
In other words, the mechanisms for algorithmic transparency and fairness in government and companies are not optimal. It is our aim to define these mechanisms here.
**2. Initial scoping of \* technical \* minimal capabilities**
As a minimum interoperable definition, we have to start with defining what we mean with “algorithms”. We hereby reference the terminology used by the Dutch Ministry of Justice. Also, it allows for a technical scope ranging from complex decision tree rules till deep learning models, while excluding traditional automated decision making as referenced in GDPR.
Since we are proposing minimal technical capabilities, we are not taken into account here the processes in which AI applications are being bought, procured, and/or organisational aspects to AI control frameworks like risk assessments .
We are defining here ‘draft minimal capabilities for algorithms’ to be used in contract conditions, consisting of the following 6 minimal capabilities. For a full overview of our draft contract conditions, we refer to this document.
**Procedural Transparency**
* Full disclosure of the choices made, parties involved, risks and mitigation actions in the process of creating an algorithmic model
**Technical Transparency meaning**
* full disclosure for the buyer of the source code and model to be able to explain the model to citizens or other stakeholders
* a license to use the data to maximize multiple use cases in one government
* Access to the learnings of the model to prevent vendor lock-ins
* There should be a general understanding of the process by which an algorithmic system makes decisions in an overall system, ie. the goals and outcomes of an algorithm.
**Technical Explainability**
* Ability to explain on an individual level how a model creates certain outcomes.
* When using the algorithmic contract conditions it should always be specified to whom the information will be classified: public servants, other experts, etc.
**Fairness**
* Fairness is discussed through the lens of social justice, highlighting the potential for algorithmic systems to systematically disadvantage, or even discriminate against, different social groups and demographics. Defining fairness as a MIM: ”Fairness reflects the appreciation of a situation based on a set of social values, such as promoting equality in society”.
**Context**
* The assessment of fairness depends on facts, events, and goals, and therefore has to be understood as situation or task-specific and necessarily addressed within the scope of practice.
* Mechanisms for behavioural transparency may need to be designed into systems, and typically require the participation of the developers or operators of systems
**Accountability**
* Accountability for the supplier to create algorithms respecting human digital rights, and is compliant with federal, state, and local anti-discrimination laws.
* Agencies should not procure algorithms that are shielded from an independent validation and public review because of trade-secret or confidentiality claims
It should be noted that these capabilities should be applied differently to different systems depending on the nature, context and goals of the algorithmic system.
Technically, these capabilities can be translated into a metadata API that every vendor would provide, when supplying high impact algorithms to cities. Technically, there are also various fairness standards and frameworks to quantify the legal terms used above. For example, for fairness, there are many definitions of fairness that have been proposed in the literature. However, most of them are based on the following six, and they can be quantified.
* Unawareness
* Demographic Parity
* Equalized Odds
* Predictive Rate Parity
* Individual Fairness
* Counterfactual fairness equal probability of being labelled positively
| 94.103448 | 1,610 | 0.820813 | eng_Latn | 0.999546 |
21dfe7051d3c4c79bdbb53ef84660de0da18f495 | 6,036 | md | Markdown | _drafts/technical-work516-autosave-v1-Technical-work.md | jamescrowley/jamescrowley.net | 2cbd491229e2d655935a928cfcbab1e274a0a0fb | [
"MIT"
] | null | null | null | _drafts/technical-work516-autosave-v1-Technical-work.md | jamescrowley/jamescrowley.net | 2cbd491229e2d655935a928cfcbab1e274a0a0fb | [
"MIT"
] | null | null | null | _drafts/technical-work516-autosave-v1-Technical-work.md | jamescrowley/jamescrowley.net | 2cbd491229e2d655935a928cfcbab1e274a0a0fb | [
"MIT"
] | null | null | null | ---
id: 520
title: Technical work
date: 2018-08-03T21:58:43+00:00
author: james.crowley
layout: revision
guid: https://www.jamescrowley.net/2018/06/03/516-autosave-v1/
permalink: /2018/08/03/516-autosave-v1/
---
I’ve worked on all sorts of technical challenges and projects over the years – some my own, with others, and for others. Here’s a few highlights.
### The fintech challenges… (WIP)
#### Domain specific languages <em style="font-size: 70%; font-weight: normal;">FundApps –</em>
Powerful DSL to express regulation.
#### Security <em style="font-size: 70%; font-weight: normal;">FundApps –</em>
### The start-up challenges… (WIP)
#### In-line text ad placement <em style="font-size: 70%; font-weight: normal;">TechClicks – 2010</em>
Placing a standard ad-tag on the page, we’d intelligently identify article content and place our AdWords style text ads to naturally appear within the the flow of the core content. Scaling to XX M impressions.
#### Automated SEO & Social Media <em style="font-size: 70%; font-weight: normal;">TechEye – 2010</em>
#### Social graph crawling & authority detection <em style="font-size: 70%; font-weight: normal;">BlipNetwork – 2009</em>
Identifying though leaders by combining social profiles using Google’s Social Graph API, extracting blog and article content and NLP to identify appropriate categories.
#### Automated code language conversion <em style="font-size: 70%; font-weight: normal;">Developer Fusion – 2009</em>
### The agency projects…
#### AJAX infinite panning <em style="font-size: 70%; font-weight: normal;">BigWhiteWall @ Anorak digital – 2007</em>
Working with the BigWhiteWall founders from initial concept for their [digital community for mental health,](http://www.bigwhitewall.com) they envisagioned a virtual wall of artwork from their users. I devised an AJAX based infinitely pannable, and zoomable ‘wall’ with ‘bricks’ of user generated artwork loading on demand, inspired by Google Maps.
#### Accessible font embedding <em style="font-size: 70%; font-weight: normal;">Anorak digital – 2006</em>
Our designers loved using non-web fonts, and our team hated cutting header images out of photoshop. I devised a system that would replace headers with the desired fonts whilst preserving accessibility. Simply by adding a JavaScript tag to the page, the required CSS and images would be automatically generated, embedded and cached.
#### Continuous integration, automated deployments, SOA <em style="font-size: 70%; font-weight: normal;">Anorak digital – 2006</em>
I introduced highly available infrastructure at Rackspace to enable us to host worldwide campaigns for customers. CruiseControl and Web Deploy were used for continuous integration and automated deployments, practices that were rare in the digital agency space at the time. I architected and built out a set of core set of services for features repeatedly needed for customer campaigns – full text search with Lucene, competition mechanics, user registration, email delivery.
#### Continuous integration, automated deployments, SOA <em style="font-size: 70%; font-weight: normal;">REDBURN – 2006</em>
### The pre-graduation years…
#### ‘CAPTIVE PORTAL’ for NETWORK ACCESS – 200_5_
Working with the IT office at college and a fellow student, we created a ‘captive portal’ to welcome and register students and conference attendees on the college network when they first connected a device, validating their identity with their student ID and mac address, and integrating with firewall configuration to then whitelist those devices.
#### Content management – vCommune – 200_5_
As my [third year project](https://www.jamescrowley.net/wp-content/uploads/Content-Management-System-v0.3.pdf) at university, I wrote about the challenges building and running a large scale CMS, which I’d been doing since 1999 on Developer Fusion. This included storing hierarchical data in a relational database, finite state machines to model content management workflow, hashing passwords and SQL injection risks, n-tier architecture, rich content editing, CAPTCHA’s to block spam, desktop applications consuming the same public APIs, and tackling concurrency issues with multiple authors editing the same content.
#### e-Commerce platform – LondonPass – 2002 <em style="font-size: 50%;">(@ mission communications)</em>
I built a new e-commerce platform for [LondonPass](http://www.londonpass.com) that at the time processed several £M a year in transactions, integrating with WorldPay and SecPay. The platform included sales analysis for both boardroom to front-office, and I supported its implementation with training & troubleshooting.
#### Commercial web crawling library – WebZinc – 2002
I created the C# version of [WebZinc](http://www.webzinc.com) for WhiteCliff, a commercial component that allows developers to extract content from web sites, parse content reliably and automate form filling tasks. Tackled some fun browser integration, intelligent text parsing, documentation packaging (remember CHM files?!), ilmerge-ing the dependencies to create a simpler package, packaging with MSI, and learning the challenges of supporting and versioning an API that other engineers depended on.
#### Freeware code editor – Developers Pad – 1999
One of my first significant personal projects, I created a free customizable development editor written in Visual Basic, which was reviewed in leading magazine, PC Pro. I was pushing the limits of what was possible with VB at the time – using low-level Windows API calls to build UI features like syntax highlighting, dockable windows (there wasn’t a standard way of doing this back VB days…), and extending the standard ‘common dialog’ windows. | 77.384615 | 629 | 0.770046 | eng_Latn | 0.984209 |
21e03418902a02a4f1185ed85b6e2f7035d02807 | 2,966 | md | Markdown | README.md | wernerstrydom/okta | 931e04e2dca9fbefe723b0561739829e32e53b3b | [
"MIT"
] | null | null | null | README.md | wernerstrydom/okta | 931e04e2dca9fbefe723b0561739829e32e53b3b | [
"MIT"
] | null | null | null | README.md | wernerstrydom/okta | 931e04e2dca9fbefe723b0561739829e32e53b3b | [
"MIT"
] | null | null | null | # Okta
Suppose you're an "DevOps Engineer" and the company you work for, aquired
Okta and asks you to configure it. You have some applications that needs
configuring, and people that currently work for the organization. Where
do you start?
One of the biggest challenges for organizations are onboarding and
offboarding employees. One approach when an employee joins is to submit
requests for them to access. As the organization scales up, submitting
and managing requests becomes it's own bottleneck.
Another approach would be decide which applications based the employee's
department, job title or employment type. For example, everyone in
Engineering should have access to JIRA, and all software engineers should
have access to GitHub.
We can achieve that level of sophistication using Okta.
This repository contains the necessary Terraform code to create
* an Okta group and rule for each department (see [departments.tf](departments.tf))
* an Okta group and rule for each job role (see [roles.tf](roles.tf))
* an Okta group and rule for each user type (see [employees.tf](employees.tf) or [contractors.tf](contractors.tf))
* an Okta application and corresponding application roles for each application (see [app1.tf](app1.tf))
* an Okta rule to map departments, jobs and user types to application groups
So when a new user is created in Okta, if the "department" field of their
profile is "Engineering" then they'll be added to the "Engineering" group
using a group rule. Then another group rule will kick in and assign them
to an application specific role, which in turn gives them specific access
to an application.
The respository also contains a [Jenkinsfile](Jenkinsfile) which allows you
to execute the automation via Jenkins. This elminates the need for an Okta
admin to make changes, and gives anyone in the organization the ability
to request changes. Innersourcing the project is particulary useful, since
it's rare the Okta admin knows the access everyone needs.
## Jenkins Setup
Other than adding the repo to GitHub, you'll also need to register a few
credentials in Jenkins.
Name | Type | Description
---------|----------|---------
okta-org-name | Secret text | `dev-123456789` when you access `dev-123456789.okta.com`
okta-base-url | Secret text | Typically `okta.com`
okta-api-token | Secret text | The API token
aws-secret-key-id | Secret text | The AWS secret key to store state
aws-secret-access-key | Secret text | The AWS access key to store state
terraform-backend-s3-bucket | Secret text | The S3 bucket for state
terraform-backend-s3-region | Secret text | The region of the S3 bucket
terraform-backend-s3-dynamodb-table | Secret text | The dynamodb table used for locking
The Jenkinsfile will execute [setup.sh](setup.sh) which downloads Terraform,
and makes it executable. It only works on Linux 64-bit. It then generates
`backend.tf` with the S3 configuration extracted from the credentials stored
in Jenkins.
| 49.433333 | 114 | 0.77849 | eng_Latn | 0.998212 |
21e0e02c6ef2deaa6977fb587288416aaf94005a | 2,046 | md | Markdown | help/export/home.md | AdobeDocs/analytics.de-DE | a2220088544976c1bfa8474791494dd70bca663e | [
"MIT"
] | null | null | null | help/export/home.md | AdobeDocs/analytics.de-DE | a2220088544976c1bfa8474791494dd70bca663e | [
"MIT"
] | 18 | 2022-01-11T20:29:36.000Z | 2022-02-08T10:21:53.000Z | help/export/home.md | AdobeDocs/analytics.de-DE | a2220088544976c1bfa8474791494dd70bca663e | [
"MIT"
] | null | null | null | ---
title: Exportleitfaden für Analytics
description: In diesem Handbuch werden Möglichkeiten erläutert, wie Daten mithilfe von Daten-Feeds und Data Warehouse aus Adobe Analytics abgerufen werden können.
exl-id: 0e4014a7-3354-4111-ab74-64d9fa37b9cc
source-git-commit: 38fb7ec39495b2b8cde4955bd1b3c1d3487632c3
workflow-type: ht
source-wordcount: '161'
ht-degree: 100%
---
# Exportleitfaden für Analytics

In diesem Handbuch wird beschrieben, wie Sie Daten aus Adobe Analytics abrufen können. Zu diesem Service gehören:
* **Daten-Feeds:** Hiermit erhalten Sie einen stündlichen oder täglichen Export von Rohdaten. Jede Zeile ist ein separater Treffer und jede Spalte ist eine Variable. Daten-Feeds werden normalerweise an FTP-Sites gesendet.
* **Data Warehouse:** Verwenden Sie einen Anfrageassistenten, um Daten in Form einer Tabellenausgabe abzurufen. Data Warehouse verwendet eine andere Verarbeitungsarchitektur, um eine beliebige Anzahl von Zeilen und von eindeutigen Werten zu ermöglichen.
Im Folgenden finden Sie eine Videoübersicht zu Adobe Analytics:
>[!VIDEO](https://video.tv.adobe.com/v/27429/?quality=12)
## Wichtige Artikel über den Export in Analytics
* [Referenz zur Daten-Feed-Spalte](/help/export/analytics-data-feed/c-df-contents/datafeeds-reference.md)
* [Data Warehouse](data-warehouse/data-warehouse.md)
* [Exportieren nach FTP](ftp-and-sftp/ftp-overview.md)
## Weitere Benutzerhandbücher für Analytics
[Analytics-Benutzerhandbücher](https://experienceleague.adobe.com/docs/analytics.html?lang=de)
## Wichtige Analytics-Ressourcen
* [Kundenunterstützung kontaktieren](https://helpx.adobe.com/de/contact/enterprise-support.ec.html)
* [Analytics-Forum](https://forums.adobe.com/community/experience-cloud/analytics-cloud/analytics)
* [Adobe Analytics-Ressourcen](https://experienceleaguecommunities.adobe.com/t5/adobe-analytics-discussions/adobe-analytics-resources/m-p/276666?profile.language=de)
* [Experience League](https://experienceleague.adobe.com/?lang=de#home)
| 49.902439 | 253 | 0.804985 | deu_Latn | 0.910995 |
21e1b99671432f630fcda126642731b22f140284 | 3,805 | md | Markdown | website/trunk/src/site/markdown/tutorial_messaging.md | kanakb/helix | 804ba783a4cf9c08604a41ae51e05aa133fd45cc | [
"Apache-2.0"
] | 1 | 2015-04-23T06:08:45.000Z | 2015-04-23T06:08:45.000Z | website/trunk/src/site/markdown/tutorial_messaging.md | kanakb/helix | 804ba783a4cf9c08604a41ae51e05aa133fd45cc | [
"Apache-2.0"
] | null | null | null | website/trunk/src/site/markdown/tutorial_messaging.md | kanakb/helix | 804ba783a4cf9c08604a41ae51e05aa133fd45cc | [
"Apache-2.0"
] | null | null | null | <!---
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<head>
<title>Tutorial - Messaging</title>
</head>
## [Helix Tutorial](./Tutorial.html): Messaging
In this chapter, we\'ll learn about messaging, a convenient feature in Helix for sending messages between nodes of a cluster. This is an interesting feature that is quite useful in practice. It is common that nodes in a distributed system require a mechanism to interact with each other.
### Example: Bootstrapping a Replica
Consider a search system where the index replica starts up and it does not have an index. A typical solution is to get the index from a common location, or to copy the index from another replica.
Helix provides a messaging API for intra-cluster communication between nodes in the system. This API provides a mechanism to specify the message recipient in terms of resource, partition, and state rather than specifying hostnames. Helix ensures that the message is delivered to all of the required recipients. In this particular use case, the instance can specify the recipient criteria as all replicas of the desired partition to bootstrap.
Since Helix is aware of the global state of the system, it can send the message to the appropriate nodes. Once the nodes respond, Helix provides the bootstrapping replica with all the responses.
This is a very generic API and can also be used to schedule various periodic tasks in the cluster, such as data backups, log cleanup, etc.
System Admins can also perform ad-hoc tasks, such as on-demand backups or a system command (such as rm -rf ;) across all nodes of the cluster
```
ClusterMessagingService messagingService = manager.getMessagingService();
// Construct the Message
Message requestBackupUriRequest = new Message(
MessageType.USER_DEFINE_MSG, UUID.randomUUID().toString());
requestBackupUriRequest
.setMsgSubType(BootstrapProcess.REQUEST_BOOTSTRAP_URL);
requestBackupUriRequest.setMsgState(MessageState.NEW);
// Set the Recipient criteria: all nodes that satisfy the criteria will receive the message
Criteria recipientCriteria = new Criteria();
recipientCriteria.setInstanceName("%");
recipientCriteria.setRecipientInstanceType(InstanceType.PARTICIPANT);
recipientCriteria.setResource("MyDB");
recipientCriteria.setPartition("");
// Should be processed only by process(es) that are active at the time of sending the message
// This means if the recipient is restarted after message is sent, it will not be processe.
recipientCriteria.setSessionSpecific(true);
// wait for 30 seconds
int timeout = 30000;
// the handler that will be invoked when any recipient responds to the message.
BootstrapReplyHandler responseHandler = new BootstrapReplyHandler();
// this will return only after all recipients respond or after timeout
int sentMessageCount = messagingService.sendAndWait(recipientCriteria,
requestBackupUriRequest, responseHandler, timeout);
```
See HelixManager.DefaultMessagingService in the [Javadocs](http://helix.apache.org/apidocs/reference/org/apache/helix/messaging/DefaultMessagingService.html) for more information.
| 53.591549 | 444 | 0.802102 | eng_Latn | 0.990644 |
21e22637f977a0d9848bb3d03739f15c63386b11 | 917 | md | Markdown | content/projekte/2016-03-14-be-woistmarkt.md | offenesdresden/codefor.de | 1c8f0f12ed4e614be8bab94ebe97f6923b4cffe3 | [
"MIT"
] | null | null | null | content/projekte/2016-03-14-be-woistmarkt.md | offenesdresden/codefor.de | 1c8f0f12ed4e614be8bab94ebe97f6923b4cffe3 | [
"MIT"
] | null | null | null | content/projekte/2016-03-14-be-woistmarkt.md | offenesdresden/codefor.de | 1c8f0f12ed4e614be8bab94ebe97f6923b4cffe3 | [
"MIT"
] | null | null | null | ---
layout: project
lab: [berlin] #needed for Aggregation on Lab-Page
imgname: berlin/wo-ist-markt.png
title: Wo ist Markt?
showcase: true
status: Finished
links:
- url: http://wo-ist-markt.github.io
name: Wo ist Markt?
- url: https://github.com/wo-ist-markt/wo-ist-markt.github.io
name: Code auf Github
collaborators:
- name: Torf
links:
- url: https://github.com/torfsen
name: GitHub
- name: Tobias Preuss
links:
- url: https://twitter.com/tbsprs
name: Twitter
tags:
- Umwelt
---
Dieses Projekt visualisiert auf einer Karte wo gerade Wochenmarkt ist.
Man kann für den aktuellen Tag filtern oder alle anzeigen. Die gerade stattfindenden
Märkte werden grün gekennzeichnet.<br />
<br />
Nachdem das Projekt von Karlsruhe gestartet wurde, ist es nun auch mit Daten
in Berlin verfügbar. Bei der Datenaufbereitung gibt es noch Verbesserungspotential -
einfach mal nachfragen oder reinschauen.
| 22.925 | 84 | 0.747001 | deu_Latn | 0.982875 |
21e2b3897b9185bee0f0b4f17c94c8ce88e0ff29 | 19,152 | md | Markdown | README.md | britishcoffee/BSImp | ee5137fe6fa989dd18b7ed185a3a6f232107200e | [
"MIT"
] | null | null | null | README.md | britishcoffee/BSImp | ee5137fe6fa989dd18b7ed185a3a6f232107200e | [
"MIT"
] | null | null | null | README.md | britishcoffee/BSImp | ee5137fe6fa989dd18b7ed185a3a6f232107200e | [
"MIT"
] | null | null | null | # BSImp:
:mega: Imputation recovers partially observed methylation patterns for the analysis of methylation heterogeneity at a large proportion of regions genomewide and also estimates methylation levels accurately.
  
<!-- <p align="center"><img src="https://github.com/britishcoffee/Methylationhet/blob/main/READMEimages/MeHscr.png?raw=true" width="300"></p> -->
### Publication
Ya-Ting Sabrina Chang, Ming-Ren Yen, Pao-Yang Chen (2022) BSImp: imputing partially observed methylation patterns for evaluating methylation heterogeneity Frontiers in Bioinformatics, Research Topic in Computational Methods for Analysis of DNA Methylation Data https://doi.org/10.3389/fbinf.2022.815289
## Pipeline
<!-- <p align="center"><img src="./READMEimages/BSImp.png"></p> -->
<p align="center"><img src="https://github.com/britishcoffee/BSImp/blob/main/READMEimages/BSImp.png?raw=true" width="600"></p>
<!-- ### Documentation
MeH users guide is available as a [PDF file](./Manual.pdf), containing the detail of each step. For questions please open an issue on [GitHub](https://github.com/britishcoffee/MeHscr/issues) or [contact me](#contact). -->
## Table of Contents
* [System requirements](#system-requirements)
<!-- * [Installation](#Installation) -->
* [Genome screening of imputation and methylation profiling](#methylation-heterogeneity-profiling)
* [Usages](#usages)
* [Examples](#examples)
<!-- * [Subsequent analysis](#subsequent-analysis)
* [Example](#example)
-->
## System requirements
* python 2.7 +
* pandas package 0.24 +
* pysam package 0.16.0.1 +
* joblib package
### Can be fulfilled by running one of the following lines
```js
pip install MeHscr
pip3 install MeHscr
```
<!--
```js
pip install MeHscr
pip3 install MeHscr
```
or
```js
sudo pip install MeHscr
sudo pip3 install MeHscr
```
-->
## Genome screening of imputation and methylation profiling
### 1. Download example folder or script bsimp.py
```js
git clone https://github.com/britishcoffee/BSImp.git
cd BSImp
wget https://raw.githubusercontent.com/britishcoffee/BSImp/main/bsimp.py
```
### (Optional) 2. Open a folder named "MeHdata" or anything you like under the same directory
```js
mkdir MeHdata
mkdir myinputfiles
```
### 3. Place .bam and .bam.bai files of all samples you wish to obtain methylation heterogeneity profiles into folder MeHdata/ or myinputfiles/
```js
scp [directory_to_bamfiles_of_all_samples].bam* ./MeHdata
# or within MeHdata/
ln -s [directory_to_bamfiles_of_all_samples].bam* ./
```
### 4. Also place .fa and .fa.fai of the reference genome into the folder
```js
scp [directory_to_reference_genome].fa* ./MeHdata
# or within MeHdata/
ln -s [directory_to_reference_genome].fa* ./
```
### 5. Run the program bsimp.py (see examples below)
<!--
### 6. Download DHR.R for subsequent analysis
#### Load required packages and functions
```R
install.packages("roperators")
library(roperators)
install.packages("dplyr")
library(dplyr)
install.packages("foreach")
library(foreach)
MeH.t = function(vector,conditions,compare) {
ind1<-which(conditions == compare[1])+3 # +3 for chrom,bin and strand columns
ind2<-which(conditions == compare[2])+3
#l=length(vector)
vector=as.data.frame(vector)
mean2=mean(as.numeric(vector[ind2]),na.rm=TRUE)
mean1=mean(as.numeric(vector[ind1]),na.rm=TRUE)
diff=mean2-mean1
if(sd(vector[ind1])<1e-5 && sd(vector[ind2])<1e-5)
return(data.frame(chrom=vector[1],pos=vector[2],delta=diff,pvalue=NaN,mean2=mean2,mean1=mean1))
else {
out=t.test(vector[ind1],vector[ind2])
return(data.frame(chrom=vector[1],pos=vector[2],delta=out$est[2]-out$est[1],pvalue=as.numeric(out$p.value),mean2=out$est[2],mean1=out$est[1]))
}
}
findgene = function(position) {
chr=as.character(position[1])
#message(chr)
BP=as.numeric(position[2])
#message(BP)
St=as.character(position[3])
Gene=geneloc$gene[which((geneloc$TSS<=BP)*(geneloc$TES>=BP)*(as.character(geneloc$chrom)==chr)*(as.character(geneloc$strand)==as.character(St))==1)][1]
if (St=='f') {
promoter=geneloc$gene[which((geneloc$TSS-1000<=BP)*(geneloc$TSS+1000>=BP)*(as.character(geneloc$chrom)==chr)*(geneloc$strand=="f")==1)][1]
}
if (St=='r') {
promoter=geneloc$gene[which((geneloc$TES-1000<=BP)*(geneloc$TES+1000>=BP)*(as.character(geneloc$chrom)==chr)*(geneloc$strand=="r")==1)][1]
}
return(list(chrom=chr,bin=BP,Gene=Gene,Promoter=promoter,strand=St))
}
```
#### Load files for analysis by first setting the work directory to where your files are located
```R
setwd("~/MeHdata")
CG <- read.table('CG_Results.csv',header=TRUE,sep=",")
CHG <- read.table('CHG_Results.csv',header=TRUE,sep=",")
CHH <- read.table('CHH_Results.csv',header=TRUE,sep=",")
```
<img src="https://github.com/britishcoffee/BSImp/blob/main/READMEimages/image1.png?raw=true" width="600">
#### Define conditions of all samples; i.e., A and B for 2 conditions, each with two replicates, samples 1 and 2 are replicates of A and samples 3 and 4 are replicates for B. This is for comparisons to be carried out later on
```R
conditions <- c("A","A","B","B")
```
#### Calculate t-statistics and p-values for all bins between user specified conditions; An example is for A vs B here
```R
library(doParallel)
registerDoParallel(cores=4)
# Compare condition B with A
Comp1<-data.frame(foreach(i = 1:dim(CG)[1],.combine = rbind) %dopar%
MeH.t(CG[i,],conditions=conditions,c("A","B")))
Comp1$padj=p.adjust(Comp1$pvalue)
```
#### Select differential heterogeneous regions based on user specified conditions; i.e., p-value of 0.05 and delta of 1.4 (positive or negative)
```R
Comp1$DHR <- (Comp1$padj<0.05)*(abs(Comp1$delta)>1.4)
Comp1$DHR <- (Comp1$pvalue<0.05)*(abs(Comp1$delta)>1.4)
Comp1$DHR.up <- (Comp1$pvalue<0.05)*(Comp1$delta>1.4)
Comp1$DHR.down <- (Comp1$pvalue<0.05)*(Comp1$delta<(-1.4))
```
<img src="https://github.com/britishcoffee/Methylationhet/blob/main/READMEimages/image6.png?raw=true" width="450">
#### DHG analysis if bed file is given as .txt with each row representing a gene and consists of gene name, chromosome, TSS, TES and strand as 'f' (forward) or 'r' (reverse)
```R
geneloc<-read.table('genelist.txt',header=TRUE)
colnames(geneloc)<-c("gene","chrom","strand","TSS","TES")
geneloc$strand[as.character(geneloc$strand)=="+"]<-"f"
geneloc$strand[as.character(geneloc$strand)=="-"]<-"r"
```
<img src="https://github.com/britishcoffee/Methylationhet/blob/main/READMEimages/image7.png?raw=true" width="300">
```R
genelist<-foreach(i = 1:dim(Comp1)[1],.combine = rbind) %dopar% findgene(Comp1[i,c("chrom","bin","strand")])
```
## Installation
MeH can be installed for Linux, macOS, or Windows by either compiling from source which has the advantage that it will be optimized to the specific system:
```bash
git clone https://github.com/britishcoffee/MeHscr.git
cd MeHscr
```
## Methylation heterogeneity profiling
Use the scrpit **MeHscr.py** to calculated the methylation heterogeneity.
> :grey_exclamation:used as command-line in your terminal.
##### Input
* Run all the files under folder "**MeHdata**", including:
* .bam and .bam.bai files
* .fa and .fa.fai of the reference genome
-->
##### Usage
```ruby
$ python bsimp.py -h
usage: bsimp.py [-h] [-w WINDOWSIZE] [-c CORES] [--CG] [--CHG] [--CHH]
[-mC MINDEPTH] [-f FOLDERNAME] [--opt]
optional arguments:
-h, --help show this help message and exit
-w WINDOWSIZE, --windowsize WINDOWSIZE
number of CGs
-c CORES, --cores CORES
number of cores
--CG Include genomic context CG
--CHG Include genomic context CHG
--CHH Include genomic context CHH
-mC MINDEPTH, --mindepth MINDEPTH
Minimum depth per cytosine
-f FOLDERNAME, --foldername FOLDERNAME
Folder name of the location of input files
--opt Output original count of patterns
-mML MINML, --minML MINML
minimum methylation level for the consideration of examination of windows for CHG and CHH contexts
```
##### Examples
```ruby
# 'CG' only with window size of 4 cytosines and 4 cores parallel processing (default minimum depth for output is 4 reads at a cytosine)
python bsimp.py -w 4 -c 4 --CG
```
```ruby
# 'CG', 'CHG' and 'CHH' with window size of 4 cytosines and minimum depth for output of 8 reads
# between methylation patterns and 8 cores parallel processing
python bsimp.py -w 4 -c 8 --CG --CHG --CHH -mC 8 -f MeHdata
```
```ruby
# 'CG', 'CHG' and 'CHH' with window size of 4 cytosines and minimum depth for output of 8 reads
# between methylation patterns and 8 cores parallel processing, minimum methylation levels for CHG/CHH
# outputs and output original counts of methylation patterns (prior to imputation)
python bsimp.py -w 4 -c 8 --CG --CHG --CHH -mC 8 -f MeHdata -mML 0.05 --o
```
> The programme is running at folder "/MeHdata"
##### One of the output file
<p align="center"><img src="https://github.com/britishcoffee/BSImp/blob/main/READMEimages/image1.png?raw=true" width="500"></p>
> Format desctiptions:
> (1) chrom: chromosome
> (2) pos: (starting cytosine) position for methylation patterns and position for read copy number
> (3)-(18) pxx: copy number of methylation pattern
* p01: '0000' - UUUU - copy number of methylation pattern: all unmethylated
* p02: '1000' - MUUU
* p03: '0100' - UMUU
* p04: '1100' - MMUU
* p05: '0010' - UUMU
* p06: '1010' - MUMU
* p07: '0110' - UMMU
* p08: '1110' - MMMU
* p09: '0001' - UUUM
* p10: '1001' - MUUM
* p11: '0101' - UMUM
* p12: '1101' - MMUM
* p13: '0011' - UUMM
* p14: '1011' - MUMM
* p15: '0111' - UMMM
* p16: '1111' - MMMM - copy number of methylation pattern: all methylated
> (19) M: # of methylated C/G
> (20) UM: # of unmethylated C/G (T/A)
> (21) strand: f(orward)/r(everse)
<!--
##### Output
* MeHscreening.log
```
Sample AT31test has coverage 5240 for context CG out of data coverage 192834
Sample AT33test has coverage 5236 for context CG out of data coverage 193431
Sample AT35test has coverage 5203 for context CG out of data coverage 192548
Sample AT37test has coverage 5233 for context CG out of data coverage 192694
```
* /MeHdata/sample.0.csv files for each sample
```bash
## CG_AT31test_0.csv in the example
chrom,pos,MeH,dis,strand
1,511,1.41421,139,f
1,791,2.7161,114,r
1,810,3.69631,102,r
1,840,4.11599,109,r
```
> Format desctiptions:
>
> (1) chromsome
> (2) position
> (3) Methlyation heterogeneity
> (4) distance between methylation patterns
> (5) strand as 'f' for forward or 'r' for reverse
* /MeHdata/Results.csv files for summary results
```bash
## CG_Results.csv in the example
chrom,bin,strand,AT31test,AT33test,AT37test,AT35test
1,600,f,1.41421,4.42434,1.97092,2.219035
1,600,r,2.7161,2.59751,3.62414,2.79942
1,1000,r,3.90615,4.90306,6.5213,4.0907849999999994
1,2600,r,0.0,0.707105,0.0,0.0
```
> Format desctiptions:
>
> (1) chromsome
> (2) bin size
> (3) strand
> (4)-(6) Methlyation heterogeneity for each sample
## Subsequent analysis
Use the function of scrpit **DHR.R** to find differentailly heterogeneity regions.
> :grey_exclamation: under R envrionment.
##### Required packages
```R
# install.packages("roperators")
library(roperators)
# install.packages("dplyr")
library(dplyr)
# install.packages("foreach")
library(foreach)
# install.packages("doParallel")
library(doParallel)
```
##### Required Functions
```R
MeH.t=function(vector,conditions,compare) {
ind1<-which(conditions == compare[1])+3
ind2<-which(conditions == compare[2])+3
vector=as.data.frame(vector)
mean2=mean(as.numeric(vector[ind2]),na.rm=TRUE)
mean1=mean(as.numeric(vector[ind1]),na.rm=TRUE)
diff=mean2-mean1
if(sd(vector[ind1])<1e-5 && sd(vector[ind2])<1e-5)
return(data.frame(chrom=vector[1],pos=vector[2],strand=vector[3],delta=diff,pvalue=NaN,mean2=mean2,mean1=mean1))
else {
out=t.test(vector[ind1],vector[ind2])
return(data.frame(chrom=vector[1],pos=vector[2],strand=vector[3],delta=out$est[2]-out$est[1],pvalue=as.numeric(out$p.value),mean2=out$est[2],mean1=out$est[1]))
}
}
findgene = function(position) {
chr=as.character(position[,1])
#message(chr)
BP=as.numeric(position[,2])
#message(BP)
St=as.character(position[,3])
Gene=geneloc$gene[which((geneloc$TSS<=BP)*(geneloc$TES>=BP)*(as.character(geneloc$chrom)==chr)*(as.character(geneloc$strand)==as.character(St))==1)][1]
#user can define theie own promoter region [default: 1000]
if (St=='f') {
promoter=geneloc$gene[which((geneloc$TSS-1000<=BP)*(geneloc$TSS+1000>=BP)*(as.character(geneloc$chrom)==chr)*(geneloc$strand=="f")==1)][1]
}
if (St=='r') {
promoter=geneloc$gene[which((geneloc$TES-1000<=BP)*(geneloc$TES+1000>=BP)*(as.character(geneloc$chrom)==chr)*(geneloc$strand=="r")==1)][1]
}
return(list(chrom=chr,bin=BP,Gene=Gene,Promoter=promoter,strand=St))
}
```
##### Input
* Results.csv files for summary results
* genelist.txt
> genelist.txt can be modified based on gene.gff file consists of gene, chromosome, TSS, TES, and strand.
##### Example
1. Load files for analysis by first setting the work directory to where your files are located
```R
CG <- read.csv('MeHdata/CG_Results_test.csv',header=TRUE)
CG=CG[which(apply(CG,1,function(x) sum(is.na(x)))==0),]
```
```R
> head(CG)
chrom bin strand AT31test AT33test AT37test AT35test
1 1 600 f 1.4142100 4.6827400 11.79846 12.17126
2 1 600 r 2.6795800 2.1208600 13.73091 12.77923
3 1 1000 r 3.8819800 4.9631450 16.54558 14.10241
4 1 2600 r 0.0000000 0.7071050 10.00000 10.00000
5 1 3800 f 0.3304952 0.2571291 10.00000 10.18446
6 1 4200 f 0.0000000 0.0000000 10.00000 10.00000
```
2. Define conditions of all samples
```R
# An example is for A vs B here
conditions <- c("A","B","B","A")
```
3. Calculate t-statistics and p-values for all bins between user specified conditions
```R
registerDoParallel(cores=4)
# Compare condition B with A
Comp1<-data.frame(foreach(i = 1:dim(CG)[1],.combine = rbind) %dopar%
MeH.t(CG[i,],conditions=conditions,c("A","B")))
Comp1$padj=p.adjust(Comp1$pvalue)
stopImplicitCluster()
```
4. Select differential heterogeneous regions based on user specified conditions
```R
# i.e., p-value of 0.05 and delta of 1.4 (positive or negative)
Comp1$DHR <- (Comp1$padj<0.05)*(abs(Comp1$delta)>1.4)
Comp1$DHR <- (Comp1$pvalue<0.05)*(abs(Comp1$delta)>1.4)
Comp1$DHR.up <- (Comp1$pvalue<0.05)*(Comp1$delta>1.4)
Comp1$DHR.down <- (Comp1$pvalue<0.05)*(Comp1$delta<(-1.4))
```
```R
> head(Comp1)
chrom bin strand delta pvalue mean2 mean1 padj DHR DHR.up DHR.down
1 1 600 f 1.3810075 0.4527029 3.1976300 1.8166225 1 0 0 0
2 1 600 r 0.3530650 0.6162005 3.1108250 2.7577600 1 0 0 0
3 1 1000 r 1.7137125 0.2774109 5.7121800 3.9984675 1 0 0 0
4 1 2600 r 0.3535525 0.5000000 0.3535525 0.0000000 1 0 0 0
5 1 3800 f -0.1289142 0.4951501 0.1285645 0.2574787 1 0 0 0
6 1 4200 f 0.0000000 NaN 0.0000000 0.0000000 NaN NA NA NA
```
5. DHG analysis if bed file is given as .txt with each row representing a gene and consists of gene name, chromosome, TSS, TES and strand
```R
geneloc <- read.table('MeHdata/genelist.txt',header=T)
colnames(geneloc) <- c("gene","chrom","TSS","TES","strand")
geneloc$strand<-as.character(geneloc$strand)
#geneloc$strand[as.character(geneloc$strand)=="+"] <- "f"
#geneloc$strand[as.character(geneloc$strand)=="-"] <- "r"
geneloc$gene<-as.character(geneloc$gene)
```
```R
> head(geneloc)
gene chrom strand TSS TES
17 CHI3L1 1 r 6500 7000
20 ATP1A1 1 f 55000 59200
33 CPSF3L 1 r 1246964 1260067
34 GBP5 1 r 89724633 89738544
36 GBP4 1 r 92000 100200
38 FCRL3 1 r 157647977 157670647
```
6. Match the gene from provided gene lists to the regions.
```R
genelist <- foreach(i = 1:dim(Comp1)[1],.combine = rbind) %dopar% findgene(Comp1[i,c("chrom","bin","strand")])
```
```R
> genelist[20:25,]
chrom bin Gene Promoter strand
result.20 "1" 13800 "DDX11L1" "NA" "f"
result.21 "1" 20200 "NA" "NA" "f"
result.22 "1" 21000 "NA" "NA" "f"
result.23 "1" 21000 "WASH7P" "NA" "r"
result.24 "1" 21400 "NA" "NA" "f"
result.25 "1" 21400 "WASH7P" "NA" "r"
```
```R
Result_whole<-merge(Comp1,genelist,c("chrom","bin","strand"))
```
```R
> head(Result_whole)
chrom bin strand delta pvalue mean2 mean1 padj DHR DHR.up DHR.down Gene Promoter
1 1 1000 r 1.71371250 0.27741094 5.7121800 3.9984675 1 0 0 0 NA NA
2 1 12200 f -0.30304500 0.50000000 0.0000000 0.3030450 1 0 0 0 DDX11L1 DDX11L1
3 1 12200 r -0.28284200 0.53267809 0.3142689 0.5971109 1 0 0 0 NA NA
4 1 12600 f 0.24748675 0.09033447 0.3889077 0.1414210 1 0 0 0 DDX11L1 DDX11L1
5 1 12600 r -0.02142742 0.90030415 0.6285378 0.6499652 1 0 0 0 NA NA
6 1 13000 f 0.00000000 NaN 0.0000000 0.0000000 NaN NA NA NA DDX11L1 NA
```
7. Get the up/down regulted DHG gene/promoter lists
```R
DHG_Genebodys_up<-unique(unlist(genelist[which(Comp1$DHR.up==1),"Gene"])[!is.na(unlist(genelist[which(Comp1$DHR.up==1),"Gene"]))])
DHG_Genebodys_down<-unique(unlist(genelist[which(Comp1$DHR.down==1),"Gene"])[!is.na(unlist(genelist[which(Comp1$DHR.down==1),"Gene"]))])
DHG_Promoter_up<-unique(unlist(genelist[which(Comp1$DHR.up==1),"Promoter"])[!is.na(unlist(genelist[which(Comp1$DHR.up==1),"Promoter"]))])
DHG_Promoter_down<-unique(unlist(genelist[which(Comp1$DHR.down==1),"Promoter"])[!is.na(unlist(genelist[which(Comp1$DHR.down==1),"Promoter"]))])
```
```R
result <- file("MeHdata/DHG.txt")
writeLines(paste("DHG Genebodys up: ",paste(DHG_Genebodys_up,collapse= ', ')), result)
close(result)
write(paste("DHG Genebodys down: ",paste(DHG_Genebodys_down,collapse= ', ')),"MeHdata/DHG.txt",append=TRUE)
write(paste("DHG Promoter up: ", paste(DHG_Promoter_up,collapse= ', ')),"MeHdata/DHG.txt",append=TRUE)
write(paste("DHG Promoter down: ",paste(DHG_Promoter_down,collapse= ', ')),"MeHdata/DHG.txt",append=TRUE)
```
##### Output
* DEG.txt
```R
DHG Genebodys up:
DHG Genebodys down: CHI3L1
DHG Promoter up:
DHG Promoter down: CHI3L1, ATP1A1
```
-->
## Contact
[<img src="https://avatars.githubusercontent.com/u/30218118?v=4" width="100">](ytchang.sabrina@gmail.com)
**Sabrina**- [:email: ytchang.sabrina@gmail.com](ytchang.sabrina@gmail.com)
| 35.270718 | 311 | 0.675021 | eng_Latn | 0.531994 |
21e306e384703335d005389e46f89230efebb6f5 | 1,678 | md | Markdown | packages/docker/src/kill.coffee.md | chibanemourad/node-nikita | 35c4b249da7f3fe3550f1679989d7795969fe6a6 | [
"MIT"
] | 1 | 2020-05-04T08:50:45.000Z | 2020-05-04T08:50:45.000Z | packages/docker/src/kill.coffee.md | DanielJohnHarty/node-nikita | 0d83b5b6f568912026044a2c6f5fd66e0afb91ba | [
"MIT"
] | 1 | 2020-05-21T08:45:42.000Z | 2020-05-21T08:45:42.000Z | packages/docker/src/kill.coffee.md | DanielJohnHarty/node-nikita | 0d83b5b6f568912026044a2c6f5fd66e0afb91ba | [
"MIT"
] | null | null | null |
# `nikita.docker.kill`
Send signal to containers using SIGKILL or a specified signal.
Note if container is not running , SIGKILL is not executed and
return status is UNMODIFIED. If container does not exist nor is running
SIGNAL is not sent.
## Options
* `boot2docker` (boolean)
Whether to use boot2docker or not, default to false.
* `container` (string)
Name/ID of the container, required.
* `machine` (string)
Name of the docker-machine, required if using docker-machine.
* `signal` (int|string)
Use a specified signal. SIGKILL by default.
## Callback parameters
* `err`
Error object if any.
* `status`
True if container was killed.
## Example
```javascript
require('nikita')
.docker.kill({
container: 'toto',
signal: 9
}, function(err, status){
console.log( err ? err.message : 'Container killed: ' + status);
})
```
## Source Code
module.exports = ({options}) ->
@log message: "Entering Docker kill", level: 'DEBUG', module: 'nikita/lib/docker/kill'
# Global options
options.docker ?= {}
options[k] ?= v for k, v of options.docker
# Validate parameters
return callback Error 'Missing container parameter' unless options.container?
cmd = 'kill'
cmd += " -s #{options.signal}" if options.signal?
cmd += " #{options.container}"
@system.execute
cmd: docker.wrap options, "ps | grep '#{options.container}' | grep 'Up'"
code_skipped: 1
, docker.callback
@system.execute
if: -> @status -1
cmd: docker.wrap options, cmd
, docker.callback
## Modules Dependencies
docker = require '@nikitajs/core/lib/misc/docker'
| 26.634921 | 92 | 0.653754 | eng_Latn | 0.808412 |
21e3aef3b3a4e91777d2c574948e468b7df6eecd | 926 | md | Markdown | python-numpy/how-to-use-multiple-conditions-in-where.md | Wolfsrudel/onelinerhub | aecf43546941bf8a3038e88db52e273142613ea3 | [
"MIT"
] | 274 | 2021-10-10T16:36:15.000Z | 2022-03-31T12:03:04.000Z | python-numpy/how-to-use-multiple-conditions-in-where.md | Wolfsrudel/onelinerhub | aecf43546941bf8a3038e88db52e273142613ea3 | [
"MIT"
] | 1,143 | 2021-10-08T11:15:08.000Z | 2022-03-31T08:24:49.000Z | python-numpy/how-to-use-multiple-conditions-in-where.md | Wolfsrudel/onelinerhub | aecf43546941bf8a3038e88db52e273142613ea3 | [
"MIT"
] | 54 | 2021-10-10T13:28:16.000Z | 2022-03-31T12:25:35.000Z | # How to use multiple conditions in where()
```python
import numpy as np
array = np.arange(0, 100).reshape(10,10)
condition1 = array > 15
condition2 = array < 25
filtered = array[np.where(condition1 & condition2)]
```
- `import numpy as np` - load [lib:Numpy module](/python-numpy/how-to-install-python-numpy-lib) for Python
- `.arange(` - generates array based on specified range
- `.reshape(` - changes shape of the array to the specified dimensions
- `condition1` - first condition to use
- `condition2` - second condition to use
- `condition1 & condition2` - use both conditions in `where` method
- `where(` - returns filtered elements from array based on condition
group: where
## Example:
```python
import numpy as np
array = np.arange(0, 100).reshape(10,10)
condition1 = array > 15
condition2 = array < 25
print(array[np.where(condition1 & condition2)])
```
```
[16 17 18 19 20 21 22 23 24]
```
| 24.368421 | 106 | 0.697624 | eng_Latn | 0.984817 |
21e407990567bf3f78111e6a339f4d175b077ece | 40 | md | Markdown | README.md | jet1x/vvdigitalmed | 2e0f6a61b71e756fddf197cb9c40384a3e43a110 | [
"Apache-2.0"
] | null | null | null | README.md | jet1x/vvdigitalmed | 2e0f6a61b71e756fddf197cb9c40384a3e43a110 | [
"Apache-2.0"
] | null | null | null | README.md | jet1x/vvdigitalmed | 2e0f6a61b71e756fddf197cb9c40384a3e43a110 | [
"Apache-2.0"
] | null | null | null | # vvdigitalmed https://vvdigitalmed.gr/
| 20 | 39 | 0.775 | hun_Latn | 0.338512 |
21e447e8f6562f4f4c25a9d6e85294f07d4062f0 | 1,310 | md | Markdown | readme.md | mrbean-bremen/ExCSS_Core | 11d7500935cfbc32c67e9808b7997150ee054b25 | [
"MIT"
] | null | null | null | readme.md | mrbean-bremen/ExCSS_Core | 11d7500935cfbc32c67e9808b7997150ee054b25 | [
"MIT"
] | null | null | null | readme.md | mrbean-bremen/ExCSS_Core | 11d7500935cfbc32c67e9808b7997150ee054b25 | [
"MIT"
] | null | null | null | # ExCSS With Updates for .NET Core
[ExCSS v3](https://github.com/TylerBrinks/ExCSS) could be great (as v2 was), but needed some changes from the current master to make it useable (just a couple of bit's made public and moved to .NET Standard 2.0).
I've made the changes here as the pull requests don't seem to be getting incorporated at the moment.
If the changes are made to the main repro (I've asked to be a contributor), this will be deleted.
## Example usage (simple)
```C#
// Could read in a file here...
string css = "html{ background-color: #5a5eed; color: #FFFFFF; margin: 5px; } h2{ background-color: red }";
var stylesheet = new ExCSS.StylesheetParser().Parse(css);
// Get the info out - New way
var info = stylesheet.StyleRules.First() as ExCSS.StyleRule;
var selector = info.SelectorText;
var backgroundColor = info.Style.BackgroundColor;
var foregroundColor = info.Style.Color;
var margin = info.Style.Margin;
//// Create a new stylesheet
var newParser = new ExCSS.StylesheetParser();
ExCSS.StyleRule r = new ExCSS.StyleRule(newParser);
r.SelectorText = "h1";
r.Style.BackgroundColor = "red";
ExCSS.StyleRule r2 = new ExCSS.StyleRule(newParser);
r2.SelectorText = "h2";
r2.Style.BackgroundColor = "green";
var newstylesheet = r.ToCss() + System.Environment.NewLine + r2.ToCss();
``` | 33.589744 | 212 | 0.736641 | eng_Latn | 0.509417 |
21e493bc6fd97162c549043daa63edc99f823ece | 22 | md | Markdown | README.md | rao4/ColumnControl | 45edbcc8614ad50f92f6bbde5ae3227daf5946d0 | [
"MIT"
] | null | null | null | README.md | rao4/ColumnControl | 45edbcc8614ad50f92f6bbde5ae3227daf5946d0 | [
"MIT"
] | null | null | null | README.md | rao4/ColumnControl | 45edbcc8614ad50f92f6bbde5ae3227daf5946d0 | [
"MIT"
] | null | null | null | AEM 6.2 Column control | 22 | 22 | 0.818182 | eng_Latn | 0.829924 |
21e49ecb0f8fd67d6517f3c7216b7d8e58c04d23 | 624 | md | Markdown | avatax/errors/InvalidExemptionOperation.md | ted-spence-avalara/developer-dot | d03e12144f10becc87482018f5a341cafa834441 | [
"MIT"
] | null | null | null | avatax/errors/InvalidExemptionOperation.md | ted-spence-avalara/developer-dot | d03e12144f10becc87482018f5a341cafa834441 | [
"MIT"
] | 1 | 2017-04-18T20:45:42.000Z | 2017-04-18T20:45:42.000Z | avatax/errors/InvalidExemptionOperation.md | ted-spence-avalara/developer-dot | d03e12144f10becc87482018f5a341cafa834441 | [
"MIT"
] | null | null | null | ---
layout: page
title: InvalidExemptionOperation
number: 1212
categories: [AvaTax Error Codes]
disqus: 1
---
## Summary
TBD
## Example
```json
{
"code": "InvalidExemptionOperation",
"target": "Unknown",
"details": [
{
"code": "InvalidExemptionOperation",
"number": 1212,
"message": "Filtering operation is not supported",
"description": "The API -0- does not currently support the -1- filter command.",
"faultCode": "Client",
"helpLink": "http://developer.avalara.com/avatax/errors/InvalidExemptionOperation",
"severity": "Error"
}
]
}
```
## Explanation
TBD
| 17.333333 | 89 | 0.639423 | eng_Latn | 0.452966 |
21e4cdad2026534f50841a9eadc33bc31b5a5976 | 1,710 | md | Markdown | README.md | sssstrange/ODriveHardware | f54255d36404be034ce68c2cafb1b5820048b3c3 | [
"MIT"
] | 686 | 2016-06-02T03:15:44.000Z | 2022-03-30T09:52:02.000Z | README.md | Sgw32/ODriveHardware | 2aac0a8d4340a6d6a4d658efcdacaaa6ca9bb02a | [
"MIT"
] | 11 | 2016-06-13T20:04:54.000Z | 2022-02-28T09:40:29.000Z | README.md | Sgw32/ODriveHardware | 2aac0a8d4340a6d6a4d658efcdacaaa6ca9bb02a | [
"MIT"
] | 344 | 2016-06-12T01:40:20.000Z | 2022-03-25T05:45:01.000Z | # ODriveHardware
This project is all about accuratly driving brushless motors, for cheap. The aim is to make it possible to use inexpensive brushless motors in high performance robotics projects.
Like this (click for video):
[](https://www.youtube.com/watch?v=WT4E5nb3KtY)
If you want to get your hands on a board, check out [this post](https://hackaday.io/project/11583-odrive-high-performance-motor-control/log/40702-boards-and-development).
This repository contains the circuit board design of ODrive. The other related repositories are:
* [ODriveFirmware](https://github.com/madcowswe/ODriveFirmware): Firmware that runs on the board.
* [ODrive](https://github.com/madcowswe/ODrive): Configuration and analysis scripts that runs on a PC.
There is also [ODriveFPGA](https://github.com/madcowswe/ODriveFPGA), which contains the FPGA logic and software that runs on the FPGA based ODrive. This is not currently in development, but may be resumed at some later date.
## Odrive v3 board
The pinout from the microcontroller to the board is documented [here](https://docs.google.com/spreadsheets/d/1QXDCs1IRtUyG__M_9WruWOheywb-GhOwFtfPcHuN2Fg/edit?usp=sharing).
### Pictures!


### ODrive v3.1 errata
* The silkscreen labels for the encoders (M0, M1) are reversed.
* The output impedence of the current amplifiers were not considered when designing the post-amplifier filters. Hence the response is about 5x slower than designed. Max allowed modulation index is therefore about 50%.
| 71.25 | 224 | 0.790643 | eng_Latn | 0.96955 |
21e4fdfc50df695887089fdf9fec859259f1e2d9 | 151 | md | Markdown | content/home/index.md | nadirweibel/hxi-website | 53a31fed357c9727b159bd94ee5fb73da8043a78 | [
"MIT"
] | 2 | 2021-09-28T04:17:32.000Z | 2021-10-01T22:31:18.000Z | content/home/index.md | nadirweibel/hxi-website | 53a31fed357c9727b159bd94ee5fb73da8043a78 | [
"MIT"
] | 3 | 2021-09-24T17:04:21.000Z | 2021-09-28T18:47:39.000Z | content/home/index.md | nadirweibel/hxi-website | 53a31fed357c9727b159bd94ee5fb73da8043a78 | [
"MIT"
] | 2 | 2021-09-27T10:37:05.000Z | 2021-09-28T18:23:20.000Z | ---
# Files in this folder represent a Widget Page (homepage)
type: widget_page
# Homepage is headless, other widget pages are not.
headless: true
--- | 21.571429 | 57 | 0.741722 | eng_Latn | 0.99862 |
21e5d22ccec06abbbf0964dfb0094257a33e1eae | 653 | md | Markdown | vendor/loveorigami/yii2-notification-wrapper/docs/Igrowl.md | chanpan/Siriraj | 9390d8a52024073345285ff99acd84d75feae880 | [
"BSD-3-Clause"
] | 93 | 2016-03-21T18:55:30.000Z | 2022-03-20T23:19:16.000Z | vendor/loveorigami/yii2-notification-wrapper/docs/Igrowl.md | chanpan/Siriraj | 9390d8a52024073345285ff99acd84d75feae880 | [
"BSD-3-Clause"
] | 24 | 2016-03-22T13:34:33.000Z | 2020-05-16T14:46:06.000Z | vendor/loveorigami/yii2-notification-wrapper/docs/Igrowl.md | chanpan/Siriraj | 9390d8a52024073345285ff99acd84d75feae880 | [
"BSD-3-Clause"
] | 29 | 2016-03-22T12:21:19.000Z | 2021-10-08T18:47:11.000Z | # iGrowl
Important - this layer is not stable, because have conflict with bootstrap
Installation
--------
```bash
"loveorigami/yii2-notification-wrapper": "*",
"bower-asset/igrowl": "*"
```
to the ```require``` section of your `composer.json` file.
Usage
-----
```php
use lo\modules\noty\Wrapper;
echo Wrapper::widget([
'layerClass' => 'lo\modules\noty\layers\Igrowl',
// default options
'options' => [
'placement' => [
'x'=>'right',
'y'=>'top',
],
'animation' => true,
'delay' => 3000,
// and more for this library here https://github.com/catc/iGrowl
],
]);
```
| 17.184211 | 74 | 0.563553 | eng_Latn | 0.723003 |
21e5f9bf51882bb20e579e21e8d227e97a8eace8 | 47 | md | Markdown | docs/README-MAIL.md | bjoern-hempel/friends-of-bash | 98295b349749cb8228d5afc2e5ec917b9aed1569 | [
"MIT"
] | 2 | 2017-02-27T00:31:59.000Z | 2020-12-19T17:53:46.000Z | docs/README-MAIL.md | bjoern-hempel/friends-of-bash | 98295b349749cb8228d5afc2e5ec917b9aed1569 | [
"MIT"
] | 8 | 2017-06-03T21:07:07.000Z | 2017-07-19T00:46:12.000Z | docs/README-MAIL.md | bjoern-hempel/friends-of-bash | 98295b349749cb8228d5afc2e5ec917b9aed1569 | [
"MIT"
] | 2 | 2018-04-13T02:27:21.000Z | 2018-07-14T12:42:50.000Z | # friends-of-bash
Describes the mail library
| 9.4 | 26 | 0.765957 | eng_Latn | 0.959346 |
21e69f093d2531dcbe2ffb1071a3ec7c34b4f382 | 2,092 | md | Markdown | _posts/wordpress/2006-09-08-the-future-of-hosted-software.md | olympum/olympum.github.io | 4436dfff125aecd805fce4efd67f3de328646d1a | [
"Apache-2.0"
] | null | null | null | _posts/wordpress/2006-09-08-the-future-of-hosted-software.md | olympum/olympum.github.io | 4436dfff125aecd805fce4efd67f3de328646d1a | [
"Apache-2.0"
] | null | null | null | _posts/wordpress/2006-09-08-the-future-of-hosted-software.md | olympum/olympum.github.io | 4436dfff125aecd805fce4efd67f3de328646d1a | [
"Apache-2.0"
] | null | null | null | ---
layout: post
title: The future of hosted software
date: 2006-09-08 07:47:00.000000000 +01:00
categories:
- cloud
tags: []
status: publish
type: post
published: true
meta:
tmac_last_id: '531409286329425920'
author:
login: admin
email: brunofr@olympum.com
display_name: Bruno Fernandez-Ruiz
first_name: Bruno
last_name: Fernandez-Ruiz
---
As we discussed originally while looking at <a href="http://offthespot.blogspot.com/2006/07/pricing-models-for-web-20-software-as.html">Web 2.0 software-as-a-service business models</a>, we saw how hosted software is not a competitive offering for mid- to large companies over 500 employees. <a href="http://www.itweek.co.uk/2163776">New research by Quocirca and Forrester</a> now comes to a similar conclusion, and they add that there is a grey zone between 250 and 500 employees where it's not clear the value in hosted services. Quocirca concludes saying that hosted services are rarely cheaper than in-house services, overseeing the 5.7 million businesses in the US under 500 employees. I am still to read both research reports to understand the full details.
<p>Additionally, these studies seem to use current pricing models, such as the one from salesforce.com, but miss some of the points raised by <a href="http://www.vertabase.com/blog/the-feel-of-web-20-software/">Mark</a>: the utility of hosted software goes well beyond cost (the focus of Quocirca and Forrester research) and includes less tangible things such as training time, added security, and productivity (as already commented by Steve Garnett, of salesforce.com, when asked about these two research reports). Also, as we discussed while <a href="http://offthespot.blogspot.com/2006/08/web-20-new-old.html">comparing the Web 2.0 to the "old economy"</a>, it will become necessary for hosted software providers that want to remain in the market to start providing free services, adding premium services, leveraging information lock-in and enhancing the value of the intangible benefits such as usability, ease of upgrade, automatic security/patch management, etc.</p>
| 87.166667 | 972 | 0.785373 | eng_Latn | 0.996098 |
21e72993fb1a6a534a02c3b08328e7fd9d09b961 | 802 | md | Markdown | server/README.md | nandbyte/fridgefoodies | fad73c02dd283d7c3eebcd58fbfd30431712061b | [
"MIT"
] | 2 | 2021-11-07T05:30:46.000Z | 2021-11-07T06:27:24.000Z | server/README.md | nandbyte/fridgefoodies | fad73c02dd283d7c3eebcd58fbfd30431712061b | [
"MIT"
] | 7 | 2021-11-01T12:08:06.000Z | 2021-11-01T12:27:24.000Z | server/README.md | nandbyte/fridgefoodies | fad73c02dd283d7c3eebcd58fbfd30431712061b | [
"MIT"
] | 1 | 2022-02-10T12:16:08.000Z | 2022-02-10T12:16:08.000Z | <h1 align="center">Fridgefoodies Backend</h1>
<h3 align="center">Postgres, Express, Node, Typescript</h3>
<br>
<br>
<h3>How to Run</h3>
<ul>
<li>Install the packages:
```bash
cd server
npm install
```
</li>
<li>Then, create an environment file `.env` at the root of the directory and set up environment variables:
```bsh
PORT=3000
DB_USER=postgres
DB_PASSWORD=root
DB_HOST=localhost
DB_PORT=6666
DB_DATABASE=fridgefo
```
</li>
<li>Copy the database configuration commands found in `server/src/database/config/database.sql` into a the respective PSQL terminal.</li>
<li>Finally, run the server with `npm run start`</li>
</ul>
<br>
<h3>Commands:</h3>
<ul>
<li>`npm run start`: Test server live.</li>
<li>`npm run build`: Create server production files.</li>
</ul>
| 17.06383 | 141 | 0.689526 | eng_Latn | 0.650027 |
21e785b7845b655955173bcd8166d6715260bd54 | 466 | md | Markdown | _posts/2020-02-28-test-markdown.md | Gissela-R/Gissela-R.github.io | c4d34c5ca9897a18c9931c3840345788a5031e36 | [
"MIT"
] | null | null | null | _posts/2020-02-28-test-markdown.md | Gissela-R/Gissela-R.github.io | c4d34c5ca9897a18c9931c3840345788a5031e36 | [
"MIT"
] | null | null | null | _posts/2020-02-28-test-markdown.md | Gissela-R/Gissela-R.github.io | c4d34c5ca9897a18c9931c3840345788a5031e36 | [
"MIT"
] | null | null | null | ---
layout: post
title: MIS GUSTOS
subtitle: ASI ES MI VIDA
gh-repo: daattali/beautiful-jekyll
gh-badge: [star, fork, follow]
tags: [test]
comments: true
---
Les voy a presentar mis gustos
**MI COMIDA FAVORITA**
## YAGUARLOCRO

## BUÑUELOS

## MIS LUGARES FAVORITOS :
Baños de Agua Santa,
Loja,
Quito,
Macas,
| 15.533333 | 84 | 0.729614 | yue_Hant | 0.433604 |
21e7e613221e4b8b629c2084c5cd473aa48bac48 | 4,542 | md | Markdown | README.md | iclr2020/MUSAE | 2d47a852fd1135840af27dd3c9d6561739b768b8 | [
"MIT"
] | 3 | 2020-01-21T06:59:46.000Z | 2020-04-21T02:56:23.000Z | README.md | iclr2020/MUSAE | 2d47a852fd1135840af27dd3c9d6561739b768b8 | [
"MIT"
] | null | null | null | README.md | iclr2020/MUSAE | 2d47a852fd1135840af27dd3c9d6561739b768b8 | [
"MIT"
] | 1 | 2020-01-21T06:59:55.000Z | 2020-01-21T06:59:55.000Z | MUSAE
============================================
The reference implementation of "Multi-scale Attributed Node Embedding".
<p align="center">
<img width="800" src="musae.jpg">
</p>
### Abstract
<p align="justify">
We present network embedding algorithms that capture information about a node from the local distribution over node attributes around it, as observed over random walks following an approach similar to Skip-gram. Observations from neighborhoods of different sizes are either pooled (AE) or encoded distinctly in a multi-scale approach (MUSAE). Capturing attribute-neighborhood relationships over multiple scales is useful for a diverse range of applications, including latent feature identification across disconnected networks with similar attributes. We prove theoretically that matrices of node-feature pointwise mutual information are implicitly factorized by the embeddings. Experiments show that our algorithms are robust, computationally efficient and outperform comparable models on social, web and citation network datasets.</p>
The second-order random walk sampling method was taken from the reference implementation of [Node2Vec](https://github.com/aditya-grover/node2vec).
### Table of Contents
1. [Requirements](#requirements)
2. [Datasets](#datasets)
3. [Logging](#logging)
4. [Options](#options)
5. [Examples](#examples)
### Requirements
The codebase is implemented in Python 3.5.2. package versions used for development are just below.
```
networkx 1.11
tqdm 4.28.1
numpy 1.15.4
pandas 0.23.4
texttable 1.5.0
scipy 1.1.0
argparse 1.1.0
gensim 3.6.0
```
### Datasets
### Logging
The models are defined in a way that parameter settings and runtimes are logged. Specifically we log the followings:
```
1. Hyperparameter settings. We save each hyperparameter used in the experiment.
2. Optimization runtime. We measure the time needed for optimization - measured by seconds.
3. Sampling runtime. We measure the time needed for sampling - measured by seconds.
```
### Options
Learning of the embedding is handled by the `src/main.py` script which provides the following command line arguments.
#### Input and output options
```
--graph-input STR Input edge list csv. Default is `input/edges/chameleon_edges.csv`.
--features-input STR Input features json. Default is `input/features/chameleon_features.json`.
--output STR Embedding output path. Default is `output/chameleon_embedding.csv`.
--log STR Log output path. Default is `logs/chameleon.json`.
```
#### Random walk options
```
--sampling STR Random walker order (first/second). Default is `first`.
--P FLOAT Return hyperparameter for second-order walk. Default is 1.0
--Q FLOAT In-out hyperparameter for second-order walk. Default is 1.0.
--walk-number INT Walks per source node. Default is 5.
--walk-length INT Truncated random walk length. Default is 80.
```
#### Model options
```
--model STR Pooled or multi-scla model (AE/MUSAE). Default is `musae`.
--base-model STR Use of Doc2Vec base model. Default is `null`.
--approximation-order INT Matrix powers approximated. Default is 3.
--dimensions INT Number of dimensions. Default is 32.
--down-sampling FLOAT Length of random walk per source. Default is 0.001.
--exponent FLOAT Downsampling exponent of frequency. Default is 0.75.
--alpha FLOAT Initial learning rate. Default is 0.05.
--min-alpha FLOAT Final learning rate. Default is 0.025.
--min-count INT Minimal occurence of features. Default is 1.
--negative-samples INT Number of negative samples per node. Default is 5.
--workers INT Number of cores used for optimization. Default is 4.
--epochs INT Gradient descent epochs. Default is 5.
```
### Examples
Training a MUSAE model for a 10 epochs.
```
python src/main.py --epochs 10
```
Changing the dimension size.
```
python src/main.py --dimensions 32
```
| 47.3125 | 836 | 0.638485 | eng_Latn | 0.979487 |
21e9bd6b3fc982514806d1d53bddd71e24ccf1b7 | 2,445 | md | Markdown | docs/modules/EMS_Fronius.md | itsJS/TWCManager | 88327c641155ba6141b1bba67b17f954c80c901a | [
"Unlicense"
] | 1 | 2022-03-11T06:27:43.000Z | 2022-03-11T06:27:43.000Z | docs/modules/EMS_Fronius.md | itsJS/TWCManager | 88327c641155ba6141b1bba67b17f954c80c901a | [
"Unlicense"
] | null | null | null | docs/modules/EMS_Fronius.md | itsJS/TWCManager | 88327c641155ba6141b1bba67b17f954c80c901a | [
"Unlicense"
] | null | null | null | # Fronius Inverter EMS Module
## Introduction
Fronius Inverters provide a solar.web interface locally on the inverter itself, which allows querying of Solar Generation information. If you have a Fronius Meter installed, the solar.web interface also provides Consumption information.
Fronius Inverters connect via wifi. The serverIP IP address is the IP that the Fronius Inverter is provided via DHCP after connecting to the Wifi network.
### Note
In many Fronius installations, the installation will involve a Fronius Meter mounted within the electricity meter box. If you have one of these installed, it will be between 2-4 DIN slots wide, with an LCD screen showing metering information, and will have a model number similar to 63A-1 or 63A-3.
If you have such a meter installed, you are able to obtain Consumption information via the Fronius interface, and it is likely that the TWC's power draw is being metered. If this is the case, the TWC's load will show as Consumption via the Fronius EMS module. If this is the case, please ensure the following configuration setting is enabled in your ```config.json``` file:
```
{
"config": {
"subtractChargerLoad": true
}
}
```
### Status
| Detail | Value |
| --------------- | ------------------------------ |
| **Module Name** | Fronius |
| **Module Type** | Energy Management System (EMS) |
| **Features** | Consumption, Generation |
| **Status** | Implemented, Mature, Tested |
## Configuration
The following table shows the available configuration parameters for the Fronius EMS module.
| Parameter | Value |
| ----------- | ------------- |
| enabled | *required* Boolean value, ```true``` or ```false```. Determines whether we will poll the Fronius Inverter. |
| serverIP | *required* The IP address of the Fronius Inverter. We will poll this device's HTTP API. Multiple Fronius inverters can be specified, please see examples below. |
| serverPort | *optional* Web Server port. This is the port that we should connect to. This is almost always 80 (HTTP). |
### JSON Configuration Example
Single inverter configuration:
```
"Fronius": {
"enabled": true,
"serverIP": "192.168.1.2",
"serverPort": 80
}
```
Multiple inverter confgiuration:
```
"Fronius": {
"enabled": true,
"serverIP": [ "192.168.1.2", "192.168.1.3", "192.168.1.4" ],
"serverPort": 80
}
```
| 38.809524 | 373 | 0.678119 | eng_Latn | 0.991994 |
21e9e2991c61d0e593ce7edbdd0f39a441af16e4 | 4,749 | md | Markdown | events/2015/WindFarm0-Harvard-2015.md | n8fr8/WindFarm | 362322c909a28b8f587a365e3884a0b08510c695 | [
"CC0-1.0"
] | 24 | 2015-04-22T09:49:08.000Z | 2021-08-28T03:30:30.000Z | events/2015/WindFarm0-Harvard-2015.md | n8fr8/WindFarm | 362322c909a28b8f587a365e3884a0b08510c695 | [
"CC0-1.0"
] | 2 | 2015-04-22T09:50:42.000Z | 2020-01-28T06:56:28.000Z | events/2015/WindFarm0-Harvard-2015.md | n8fr8/WindFarm | 362322c909a28b8f587a365e3884a0b08510c695 | [
"CC0-1.0"
] | 6 | 2015-03-25T17:51:02.000Z | 2020-01-03T09:31:16.000Z | #WIND FARM 0: MAY 15-16, 2015
##OVERVIEW
Wind is the untapped energy that is all around us, spontaneous and often unexpected. It can manifest as a slight breeze, or a powerful gale. It can move in many directions, or as a focused force. It can gently spread the seeds of life, or a fearful gale that moves the sea. Wind can carry a message for miles, even around the world. Wind can be harnessed and turned into energy, and it is that energy which inspires possibilities. In this case, it inspires a metaphor to describe a new way to connect and share digitally, that is not the Internet, and not the Web, but some place new, one that is right in the air around us.

Wind Farm Zero is an opportunity to facilitate a basic vision and metaphor for many groups to all work within. It is a starting point, not a standard, an intervention to create a dialogue, shared terminology and a shared narrative of how, who, and what we expect people to do when they have a super-computer in their pocket, but no signal. We will hear first-hand stories from people who have experienced network-less situations (hurricanes, earthquakes, protests), and share new, promising projects, protocols and products that can address these times. Most importantly, we'll have time to be hands-on with a large group of people moving around in the real world, to see what works and does not work today.
This Wind Farm event builds on an "Internet Blackout Simulation Event" held in 2013 in New York at Eyebeam. You can learn more about that event at the links below:
- http://eyebeam.org/events/eyebeam-square-an-internet-blackout-simulation-event
- http://www.theverge.com/2014/6/10/5794406/what-do-you-do-when-the-internet-turns-off
##INFORMATION
- *Info:* http://windfarm0.link
- *When:* May 15 9am-5pm and March 16 10am-4pm
- *Where:* Berkman Center for Internet & Society, Harvard University Law School, 23 Everett Street, Cambridge, MA, USA
- *Organizer:* Nathan Freitas, Berkman Fellow; Founder of the Guardian Project, and Tech Director at Rights Action Lab
- *Who:* A variety of invited participants working in non/post-Internet communication systems for applications ranging from humanitarian, health, human rights, productivity, and general happiness
* Remote participation can be supported on Friday through video conference or dial-in, but in-person participation is preferred for obvious reasons! The [Chatham House Rule](http://www.chathamhouse.org/about/chatham-house-rule) will be in effect, to encourage more open discussion and honest sharing. Remote participants will be connected via private video conference.
##AGENDA
###DAY ONE: Discussions, Demos and Dreams
Day one will be a semi-private session of invited participants and members of the Berkman community. We will try to find common ground between our various efforts in non-Internet, nearby, and "mesh" communication systems, while having room to share our divergent views on what the future of device-to-device communications will look like. We will hear from everyday people, activists, aid workers, and others who have direct experience being and working in places where all traditional communications are down, or are otherwise not viable. We will have the chance to share with each other our coolest, cutting edge demos and/or our actually shipping, production ready products. Finally, we'll attempt to come away with some basic ideas for collaboration moving forward, which might be helped by sharing a few beers and dinner.
##DAY TWO: Have Fun and Break Things!
Day two will be open to a larger community, and will include a more diverse set of participants, including students from the local colleges and high schools. The aim is to expand our theoretical discussions from day one into a hands-on, "live action" game situation, where we can see how different apps, tools, prototypes, and services fare when put into the hands of real people. Our hope is to be able to test released solutions at some sort of scale, meaning a crowd of 30 to 50 people. We will be using the Harvard campus as our game space, and variations on capture the flag and scavenger hunts as the basis of the games, with all communication occurring via the apps, services and hardware provided.

["Gansu.Guazhou.windturbine farm.sunset" by Popolon. CC BY-SA 3.0 via Wikimedia Commons](http://commons.wikimedia.org/wiki/File:Gansu.Guazhou.windturbine_farm.sunset.jpg#/media/File:Gansu.Guazhou.windturbine_farm.sunset.jpg)
| 135.685714 | 826 | 0.793851 | eng_Latn | 0.9987 |
21ea90b78a35bcb23120daded8ff81d77634a0fa | 1,666 | md | Markdown | SE-threat-model/attacks/Link-spamming.md | tymyrddin/amygdala-umbrella | 1841e77b4b80a5f9908e4f618d9dcf673a290973 | [
"CC0-1.0"
] | null | null | null | SE-threat-model/attacks/Link-spamming.md | tymyrddin/amygdala-umbrella | 1841e77b4b80a5f9908e4f618d9dcf673a290973 | [
"CC0-1.0"
] | null | null | null | SE-threat-model/attacks/Link-spamming.md | tymyrddin/amygdala-umbrella | 1841e77b4b80a5f9908e4f618d9dcf673a290973 | [
"CC0-1.0"
] | 2 | 2020-11-14T10:21:40.000Z | 2020-11-15T09:01:10.000Z | # Link spamming
The use of link analysis by internet search engines to determine relevance resulted in an effort to manipulate such link analysis systems.
* Putting a [link farms](../attack-vectors/Link-farms.md) link at the bottom of every page in a site. The links in the farm point to every other page in that site, or to another site controlled by the author. Link farms are easy to spot, hence pseudo web-rings and random linkage within a member group appeared. In February 2011, Google launched Google Panda, intended to improve its spam-detection algorithm.
* [Doorway pages](../attack-vectors/Doorway-pages.md) consist entirely of links and are not intended to be viewed by humans, but are made just to be discovered by search engines. These pages can have thousands of links, including multiple links to the same object.
* Link farms and doorway pages are effective when the link analysis is sensitive to the absolute number of links. Ranking techniques using quality of links are not that vulnerable to these spam techniques.
* A common form of link spam is the use of link-building software to automate SEO processes.
* [Private blog networks](../attack-vectors/Blog-networks.md) are groups of authoritative websites used as a source of contextual links that point to a particular site to achieve higher search engine ranking for it. This strategy buys up expired domains or auction domains that have backlinks from high-authority websites. Search engines responded with re-indexing campaigns.
* [Indirect link spamming](../attack-vectors/Indirect-link-spamming.md) practices increased after the campaigns against link farms and private blog networks.
| 138.833333 | 409 | 0.802521 | eng_Latn | 0.999207 |
21eb7d6308b88c24b5a16a1ed2042211651e92e9 | 911 | md | Markdown | _software/deep-learning-from-scratch/0-deep-learning-from-scratch-index.md | SCUTEEE/SCUTEEE.github.io | 143b4f7c03142849be04fb17a3be2952a78bff64 | [
"MIT"
] | null | null | null | _software/deep-learning-from-scratch/0-deep-learning-from-scratch-index.md | SCUTEEE/SCUTEEE.github.io | 143b4f7c03142849be04fb17a3be2952a78bff64 | [
"MIT"
] | null | null | null | _software/deep-learning-from-scratch/0-deep-learning-from-scratch-index.md | SCUTEEE/SCUTEEE.github.io | 143b4f7c03142849be04fb17a3be2952a78bff64 | [
"MIT"
] | null | null | null | ---
layout: article
title: 深度学习入门-基于Python
permalink: /software/deep-learning-from-scratch/index
mathjax: false
mermaid: false
chart: false
mathjax_autoNumber: false
mode: immersive
tags: 深度学习
key: deep-learning-from-scratch-index
show_edit_on_github: false
show_date: false
nav_key: software
sidebar:
nav: deep-learning-from-scratch
aside:
toc: true
header:
theme: dark
article_header:
type: overlay
theme: dark
background_color: '#ffffff'
background_image:
src: https://www.oreilly.co.jp/books/images/picture_large978-4-87311-758-4.jpeg
gradient: 'linear-gradient(0deg, rgba(0, 0, 0 , .4), rgba(0, 0, 0, .4))'
---
<!--more-->
<!-- more -->
这是《深度学习入门——基于Python的理论与实现》的学习笔记&代码。这本书使用 numpy 库从 0 编写了简单神经网络,适合初学者看。本笔记原本写在 Jupyter Notebook 上,你可以在 [https://scuteee.com/software/deep-learning-from-scratch/code/](https://scuteee.com/software/deep-learning-from-scratch/code/) 找到各章对应的 ipynb 文件。 | 27.606061 | 245 | 0.748628 | eng_Latn | 0.223516 |
21eb7e28fc39193b4e7424e58a92149ece827a54 | 1,564 | md | Markdown | doc/web/content/comics/SmackJeeves_ERRORERROR_edit.md | yasen-m/dosage | 81fe088621ad335cac2a53fcbc7b9b37f49ddce2 | [
"MIT"
] | null | null | null | doc/web/content/comics/SmackJeeves_ERRORERROR_edit.md | yasen-m/dosage | 81fe088621ad335cac2a53fcbc7b9b37f49ddce2 | [
"MIT"
] | null | null | null | doc/web/content/comics/SmackJeeves_ERRORERROR_edit.md | yasen-m/dosage | 81fe088621ad335cac2a53fcbc7b9b37f49ddce2 | [
"MIT"
] | null | null | null | title: Edit SmackJeeves/ERRORERROR
url: "/comics/SmackJeeves_ERRORERROR_edit.html"
---
Edit info for comic SmackJeeves/ERRORERROR
<form name="comic" action="http://gaepostmail.appspot.com/comic/" method="post">
<table class="comicinfo">
<tr>
<th>Description</th><td><textarea name="description" cols="40" rows="3">A finnish teenage boy Tomi is a normal nerd. He plays videogames and gets average grades, but also is terribly bored with his life. But one day he accidentally gets inside of his computer and meets there a girl, who calls herself princess Jooda. And there is a profecy about Tomi, sword and Viruses. Warnings: If you are under 13 years old, I do not recommend this comic to you. Contains bad language, blood and somewhat sexual themes. ps. This started out as a test for new tablet, so the start looks like that too. Also, sorry about the style changes, I practice. Updates 2-4 pages a week.</textarea></td>
</tr>
<tr>
<th>Website</th><td><input type="text" name="url" value="http://errorerror.smackjeeves.com/comics/" size="40"/></td>
</tr>
<tr>
<th>Genre</th><td><input type="text" name="genre" value="Other" size="40"/></td>
</tr>
<tr>
<th>Language</th><td><input type="text" name="language" value="English" size="40"/></td>
</tr>
<tr>
<th>Adult content</th><td><input type="checkbox" name="adult" value="adult" /></td>
</tr>
<tr>
<th></th><td>
<input type="hidden" name="comic" value="SmackJeeves_ERRORERROR" />
<input type="submit" name="submit" value="Submit" />
</td>
</tr>
</table>
</form>
Back to the [comic](SmackJeeves_ERRORERROR.html).
| 47.393939 | 679 | 0.71867 | eng_Latn | 0.88721 |
21ebe7588f603eabc6815143e7d71f99c4e43080 | 6,504 | md | Markdown | vultr/idc/frankfurt/20180622-vultr-idc-frankfurt.md | qijianjun/topvps | 97b01dd8c9176fa420b3c906da29d3d64d212bf4 | [
"MIT"
] | 10 | 2018-09-06T23:59:08.000Z | 2021-05-04T02:31:53.000Z | vultr/idc/frankfurt/20180622-vultr-idc-frankfurt.md | qijianjun/topvps | 97b01dd8c9176fa420b3c906da29d3d64d212bf4 | [
"MIT"
] | null | null | null | vultr/idc/frankfurt/20180622-vultr-idc-frankfurt.md | qijianjun/topvps | 97b01dd8c9176fa420b3c906da29d3d64d212bf4 | [
"MIT"
] | null | null | null | # 多地到Vultr法兰克福[Frankfurt]机房的Ping测试(20180622)
## 导读
本文是[TopVPS主机网络测试报告系列](https://vps123.top/pingtest)的一部分,主要提供PING测试的响应速度、丢包率等数据,关于PING指令以及响应速度、丢包率的指标意义,请看[这里](https://vps123.top/what-is-ping.html)。
![多地到Vultr法兰克福\[Frankfurt\]机房的Ping测试(20180622)](/images/thumbnails/to_vultr_Frankfurt.png)
本篇的数据视角为全国各地以及海外运营商的网络环境下,到[Vultr](https://vps123.top/go/vultr)的[德国-法兰克福机房](https://vps123.top/vultr-facilities.html#frankfurt)的网络连接;若你有[主要面向大陆用户的站点](https://vps123.top/website-for-mainland-users.html)或[外贸站](https://vps123.top/website-for-internation-trade.html)运行在VPS提供商[Vultr](https://vps123.top/go/vultr)的德国-法兰克福机房,可以从这些数据中了解用户访问你的站点的响应速度和体验。如果你认为当前使用的主机在主要用户所在地的访问体验不佳,可以查看[Vultr全部机房](/vultr/isp/china/20180622-vultr-isp-china.md)的数据或其他VPS提供商的机房数据(文末有链接),发现可用性更好的机房。
**数据地图:**[我应该关注哪类VPS测速数据?](https://vps123.top/find-pingtest-data-you-need.html)
## 概要
大陆32个省市的925个有效测试样本中,共有超时点79个;响应均值为269毫秒,最快的三地区为天津、内蒙古、甘肃,最慢的三地区为山东、西藏、香港;江苏、广东、浙江、福建、辽宁等21处有响应超时情况;西藏、河北、辽宁丢包率较高。海外17个国家地区的76个有效测试样本中,共有超时点3个;响应均值为174毫秒,最快的三地区为法国、巴西、德国,最慢的三地区为日本、台湾、澳大利亚;香港、美国有响应超时情况;澳大利亚丢包率较高。
[注册 Vultr](https://vps123.top/go/vultr/_btn1)
## 大陆测速点
![大陆各省份到VPS提供商Vultr位于法兰克福\[Frankfurt\]的机房的ping测试数据统计图,包含响应值的柱状图以及丢包率的散点图,数据日期为20180622](/images/pingtests/vultr_20180622/plot_idc_vultr_germany-frankfurt_20180622_mainland.png)
**大陆各省份到Vultr德国-法兰克福机房的测速数据 [20180622]**
省份 | 测速点 | 超时点 | 响应时间 | 丢包率
---|---|---|---|---
天津 | 3个 | 0 | 226ms | 0
内蒙古 | 32个 | 0 | 242ms | 1.7%
甘肃 | 21个 | 0 | 244ms | 0
宁夏 | 13个 | 0 | 245ms | 0
青海 | 3个 | 0 | 248ms | 0
北京 | 10个 | 1个 | 248ms | 0.4%
广西 | 44个 | 2个 | 249ms | 1.2%
广东 | 80个 | 13个 | 256ms | 0.1%
安徽 | 36个 | 3个 | 256ms | 2.1%
湖北 | 36个 | 3个 | 258ms | 0.1%
陕西 | 32个 | 2个 | 261ms | 0.1%
海南 | 12个 | 0 | 261ms | 0
云南 | 14个 | 0 | 262ms | 0
上海 | 7个 | 1个 | 264ms | 0
重庆 | 12个 | 2个 | 265ms | 0
江西 | 23个 | 1个 | 267ms | 0.3%
均值 | 925个 | 79个 | 269ms | 1.8%
河南 | 51个 | 2个 | 270ms | 0.2%
山西 | 37个 | 3个 | 270ms | 0
江苏 | 70个 | 14个 | 272ms | 0.7%
浙江 | 52个 | 10个 | 273ms | 2.5%
湖南 | 39个 | 3个 | 273ms | 0.9%
福建 | 32个 | 5个 | 276ms | 1.1%
四川 | 40个 | 3个 | 277ms | 0.6%
贵州 | 25个 | 1个 | 278ms | 0.3%
河北 | 28个 | 0 | 281ms | 18.7%
黑龙江 | 32个 | 0 | 283ms | 0.1%
辽宁 | 29个 | 4个 | 286ms | 14.5%
新疆 | 27个 | 1个 | 288ms | 2.3%
吉林 | 19个 | 0 | 295ms | 0.2%
山东 | 45个 | 3个 | 300ms | 0.3%
西藏 | 1个 | 0 | 328ms | 58.0%
香港 | 20个 | 2个 | - | -
**简评:** 如果你考虑在Vultr的法兰克福[Frankfurt]机房部署[主要面向大陆访客的网站](website-for-mainland-users.html),本表可以作为一个很好的参考,它提供了有关响应速度以及丢包率的详细数据,反映了大陆各省份的测速点到位于法兰克福[Frankfurt]的机房的连通状况。到此机房的ping监测点共有925个,其中超时点79个,平均响应时间为269毫秒,丢包率为1%,本站评价等级为 **较差** ,不太推荐在位于这里的机房建站。
以下是就各省份数据的统计概要:
平均响应时间的分布:在50毫秒以内的1个,在200-300毫秒间的30个,超过300毫秒的1个;
超时点较多的省份:广东、上海、重庆、江苏、浙江、福建、辽宁;
丢包率较高的省份:河北、辽宁、西藏;
## 海外测速点
![海外各国家地区到VPS提供商Vultr位于法兰克福\[Frankfurt\]的机房的ping测试数据统计图,包含响应值的柱状图以及丢包率的散点图,数据日期为20180622](/images/pingtests/vultr_20180622/plot_idc_vultr_germany-frankfurt_20180622_overseas.png)
**海外线路到Vultr德国-法兰克福机房的测速数据 [20180622]**
国家地区 | 测速点 | 超时点 | 响应时间 | 丢包率
---|---|---|---|---
法国 | 1个 | 0 | 0 | 0
巴西 | 1个 | 0 | 15ms | 0
德国 | 1个 | 0 | 17ms | -
意大利 | 1个 | 0 | 19ms | -
加拿大 | 2个 | 0 | 114ms | 0
美国 | 20个 | 1个 | 142ms | 0
新加坡 | 5个 | 0 | 169ms | 0
均值 | 76个 | 3个 | 174ms | 0.6%
南非 | 2个 | 0 | 179ms | 0
英国 | 2个 | 0 | 182ms | 0
香港 | 20个 | 2个 | 223ms | 0
菲律宾 | 1个 | 0 | 247ms | 0
印度尼西亚 | 1个 | 0 | 248ms | 0
越南 | 2个 | 0 | 266ms | 0.5%
韩国 | 12个 | 0 | 266ms | 0
日本 | 2个 | 0 | 277ms | 2.0%
台湾 | 2个 | 0 | 300ms | 0
澳大利亚 | 1个 | 0 | 308ms | 6.0%
**简评:** 如果你考虑在Vultr的法兰克福[Frankfurt]机房部署[主要面向海外访客的网站](https://vps123.top/website-for-internation-trade.html)(例如外贸站),本表可以作为一个很好的参考,它提供了有关响应速度以及丢包率的详细数据,反映了海外各国家/地区的测速点到位于法兰克福[Frankfurt]的机房的连通状况。到此机房的ping监测点共有76个,其中超时点3个,平均响应时间为174毫秒,丢包率为0%,本站评价等级为 **一般** ,可以考虑在位于这里的机房建站。
以下是就各国家/地区数据的统计概要:
平均响应时间的分布:在50毫秒以内的4个,在100-200毫秒间的5个,在200-300毫秒间的7个,超过300毫秒的1个;
[注册 Vultr](https://vps123.top/go/vultr/_btn2)
## 相关测速报告
* 本期测速报告归档 (20180622)
* [全部](https://vps123.top/pingtests/20180622 "本期各VPS提供商全部测速报告")
* [Vultr](https://vps123.top/pingtests/idc-vultr/20180622 "本期Vultr的全部测速报告")
* [各地到Vultr某机房](https://vps123.top/pingtests/idc-vultr/isp-global/20180622 "以Vultr某机房为关注对象的视角,横向比较大陆各省份、海外各国家地区")
* [某地到Vultr各机房](https://vps123.top/pingtests/idc-vultr/facility-all/20180622 "以大陆某省份为关注对象的视角,横向比较Vultr各机房")
* 本期到Vultr _其他机房_ 的报告:
* [阿姆斯特丹](/vultr/idc/amsterdam/20180622-vultr-idc-amsterdam.md "多地到Vultr阿姆斯特丹机房的Ping测试 20180622")
* [亚特兰大](/vultr/idc/atlanta/20180622-vultr-idc-atlanta.md "多地到Vultr亚特兰大机房的Ping测试 20180622")
* [芝加哥](/vultr/idc/chicago/20180622-vultr-idc-chicago.md "多地到Vultr芝加哥机房的Ping测试 20180622")
* [达拉斯](/vultr/idc/dallas/20180622-vultr-idc-dallas.md "多地到Vultr达拉斯机房的Ping测试 20180622")
* [伦敦](/vultr/idc/london/20180622-vultr-idc-london.md "多地到Vultr伦敦机房的Ping测试 20180622")
* [洛杉矶](/vultr/idc/losangeles/20180622-vultr-idc-losangeles.md "多地到Vultr洛杉矶机房的Ping测试 20180622")
* [迈阿密](/vultr/idc/miami/20180622-vultr-idc-miami.md "多地到Vultr迈阿密机房的Ping测试 20180622")
* [新泽西](/vultr/idc/newjersey/20180622-vultr-idc-newjersey.md "多地到Vultr新泽西机房的Ping测试 20180622")
* [巴黎](/vultr/idc/paris/20180622-vultr-idc-paris.md "多地到Vultr巴黎机房的Ping测试 20180622")
* [西雅图](/vultr/idc/seattle/20180622-vultr-idc-seattle.md "多地到Vultr西雅图机房的Ping测试 20180622")
* [硅谷](/vultr/idc/siliconvalley/20180622-vultr-idc-siliconvalley.md "多地到Vultr硅谷机房的Ping测试 20180622")
* [新加坡](/vultr/idc/singapore/20180622-vultr-idc-singapore.md "多地到Vultr新加坡机房的Ping测试 20180622")
* [悉尼](/vultr/idc/sydney/20180622-vultr-idc-sydney.md "多地到Vultr悉尼机房的Ping测试 20180622")
* [东京](/vultr/idc/tokyo/20180622-vultr-idc-tokyo.md "多地到Vultr东京机房的Ping测试 20180622")
* 到Vultr法兰克福机房的 _其他近期报告_ :
* [20180514](/vultr/idc/frankfurt/20180514-vultr-idc-frankfurt.md "多地到Vultr法兰克福机房的Ping测试 20180514")
* [20180527](/vultr/idc/frankfurt/20180527-vultr-idc-frankfurt.md "多地到Vultr法兰克福机房的Ping测试 20180527")
* [20180804](/vultr/idc/frankfurt/20180804-vultr-idc-frankfurt.md "多地到Vultr法兰克福机房的Ping测试 20180804")
* [20180918](/vultr/idc/frankfurt/20180918-vultr-idc-frankfurt.md "多地到Vultr法兰克福机房的Ping测试 20180918")
* 本期报告:在法兰克福部署机房的 _其他VPS提供商_ :
* [Digital Ocean](do/idc/frankfurt/20180622-do-idc-frankfurt.md "多地到Digital Ocean法兰克福机房的Ping测试 20180622")
* [Linode](/linode/idc/frankfurt/20180622-linode-idc-frankfurt.md "多地到Linode法兰克福机房的Ping测试 20180622")
本文最初发表于[多地到Vultr法兰克福[Frankfurt]机房的Ping测试(20180622)](https://vps123.top/pingtest/20180622-vultr-idc-frankfurt.html)
| 47.823529 | 470 | 0.701107 | yue_Hant | 0.66756 |
21ecb42b73159cd645df3d144aef9c168acb2031 | 582 | md | Markdown | _posts/2013-02-14-mobiscroll.md | hiepnt93/tool | b0096b80ee14f14cc7856202ae4e630551e27ab6 | [
"Apache-2.0"
] | 2 | 2016-08-13T14:38:55.000Z | 2017-08-10T03:36:42.000Z | _posts/2013-02-14-mobiscroll.md | hiepnt93/tool | b0096b80ee14f14cc7856202ae4e630551e27ab6 | [
"Apache-2.0"
] | 5 | 2016-08-21T05:01:32.000Z | 2022-02-26T04:19:22.000Z | _posts/2013-02-14-mobiscroll.md | hiepnt93/tool | b0096b80ee14f14cc7856202ae4e630551e27ab6 | [
"Apache-2.0"
] | 9 | 2016-08-05T01:44:08.000Z | 2021-08-05T11:13:19.000Z | ---
date: 2013-02-14 12:04:32.091470 +00:00
email: hello@mobiscroll.com
title: Mobiscroll
status: publish
type: tool
thumb: http://phonegap.com/uploads/tool/2013-02/2013-02-14-mobiscroll.png
permalink: /tool/mobiscroll
developer: Acid Media
link: http://mobiscroll.com
---
Mobiscroll is a library of touch interfaces ranging from Date & Time Picker, Rating & Grading, Select Scroller to Temperature Picker. Built on solid ground with open technologies. Works with jQuery, jQuery Mobile, Zepto.JS and jqMobi. All components are themable, and they come with a theme library as well.
| 41.571429 | 307 | 0.780069 | eng_Latn | 0.739733 |
21ecccb8fb867930acbb7fe3f2fe9f5ba91bfc67 | 7,025 | md | Markdown | curriculum/challenges/spanish/03-front-end-libraries/react/create-a-controlled-form.spanish.md | acenturyandabit/freeCodeCamp | d33f5f82a72cb795380a10fdc4d30427dc17488c | [
"BSD-3-Clause"
] | 2 | 2020-10-24T04:36:33.000Z | 2020-11-07T02:35:03.000Z | curriculum/challenges/spanish/03-front-end-libraries/react/create-a-controlled-form.spanish.md | acenturyandabit/freeCodeCamp | d33f5f82a72cb795380a10fdc4d30427dc17488c | [
"BSD-3-Clause"
] | 58 | 2020-08-05T19:52:25.000Z | 2022-02-27T09:34:35.000Z | curriculum/challenges/spanish/03-front-end-libraries/react/create-a-controlled-form.spanish.md | acenturyandabit/freeCodeCamp | d33f5f82a72cb795380a10fdc4d30427dc17488c | [
"BSD-3-Clause"
] | 1 | 2021-07-14T17:02:39.000Z | 2021-07-14T17:02:39.000Z | ---
id: 5a24c314108439a4d4036179
title: Create a Controlled Form
challengeType: 6
videoUrl: ''
localeTitle: Crear un formulario controlado
---
## Description
<section id="description"> El último desafío demostró que React puede controlar el estado interno de ciertos elementos como <code>input</code> y <code>textarea</code> , lo que los convierte en componentes controlados. Esto se aplica también a otros elementos de formulario, incluido el elemento de <code>form</code> HTML regular. </section>
## Instructions
<section id="instructions"> El componente <code>MyForm</code> se configura con un <code>form</code> vacío con un controlador de envío. Se llamará al controlador de envío cuando se envíe el formulario. Hemos añadido un botón que envía el formulario. Puede ver que tiene el <code>type</code> conjunto para <code>submit</code> indica que es el botón que controla el formulario. Agregue el elemento de <code>input</code> en el <code>form</code> y establezca su <code>value</code> y los <code>onChange()</code> como el último desafío. Luego debe completar el método <code>handleSubmit</code> para que establezca la propiedad de estado del componente <code>submit</code> al valor de entrada actual en el <code>state</code> local. <strong>Nota:</strong> También debe llamar a <code>event.preventDefault()</code> en el controlador de envío, para evitar el comportamiento de envío de formulario predeterminado que actualizará la página web. Finalmente, cree una etiqueta <code>h1</code> después del <code>form</code> que representa el valor de <code>submit</code> del <code>state</code> del componente. Luego puede escribir en el formulario y hacer clic en el botón (o presionar intro), y debería ver su entrada renderizada en la página. </section>
## Tests
<section id='tests'>
```yml
tests:
- text: <code>MyForm</code> debería devolver un elemento <code>div</code> que contenga un <code>form</code> y una etiqueta <code>h1</code> . El formulario debe incluir una <code>input</code> y un <code>button</code> .
testString: 'assert((() => { const mockedComponent = Enzyme.mount(React.createElement(MyForm)); return (mockedComponent.find("div").children().find("form").length === 1 && mockedComponent.find("div").children().find("h1").length === 1 && mockedComponent.find("form").children().find("input").length === 1 && mockedComponent.find("form").children().find("button").length === 1) })(), "<code>MyForm</code> should return a <code>div</code> element which contains a <code>form</code> and an <code>h1</code> tag. The form should include an <code>input</code> and a <code>button</code>.");'
- text: 'El estado de <code>MyForm</code> debe inicializarse con las propiedades de <code>input</code> y <code>submit</code> , ambas configuradas como cadenas vacías.'
testString: 'assert(Enzyme.mount(React.createElement(MyForm)).state("input") === "" && Enzyme.mount(React.createElement(MyForm)).state("submit") === "", "The state of <code>MyForm</code> should initialize with <code>input</code> and <code>submit</code> properties, both set to empty strings.");'
- text: Escribir el elemento de <code>input</code> debe actualizar la propiedad de <code>input</code> del estado del componente.
testString: 'async () => { const waitForIt = (fn) => new Promise((resolve, reject) => setTimeout(() => resolve(fn()), 250)); const mockedComponent = Enzyme.mount(React.createElement(MyForm)); const _1 = () => { mockedComponent.setState({ input: "" }); return waitForIt(() => mockedComponent.state("input"))}; const _2 = () => { mockedComponent.find("input").simulate("change", { target: { value: "TestInput" }}); return waitForIt(() => ({ state: mockedComponent.state("input"), inputVal: mockedComponent.find("input").props().value }))}; const before = await _1(); const after = await _2(); assert(before === "" && after.state === "TestInput" && after.inputVal === "TestInput", "Typing in the <code>input</code> element should update the <code>input</code> property of the component's state."); }; '
- text: El envío del formulario debe ejecutar <code>handleSubmit</code> que debe establecer la propiedad de <code>submit</code> en un estado igual a la entrada actual.
testString: 'async () => { const waitForIt = (fn) => new Promise((resolve, reject) => setTimeout(() => resolve(fn()), 250)); const mockedComponent = Enzyme.mount(React.createElement(MyForm)); const _1 = () => { mockedComponent.setState({ input: "" }); mockedComponent.setState({submit: ""}); mockedComponent.find("input").simulate("change", {target: {value: "SubmitInput"}}); return waitForIt(() => mockedComponent.state("submit"))}; const _2 = () => { mockedComponent.find("form").simulate("submit"); return waitForIt(() => mockedComponent.state("submit"))}; const before = await _1(); const after = await _2(); assert(before === "" && after === "SubmitInput", "Submitting the form should run <code>handleSubmit</code> which should set the <code>submit</code> property in state equal to the current input."); };'
- text: El encabezado <code>h1</code> debe representar el valor del campo de <code>submit</code> desde el estado del componente.
testString: 'async () => { const waitForIt = (fn) => new Promise((resolve, reject) => setTimeout(() => resolve(fn()), 250)); const mockedComponent = Enzyme.mount(React.createElement(MyForm)); const _1 = () => { mockedComponent.setState({ input: "" }); mockedComponent.setState({submit: ""}); mockedComponent.find("input").simulate("change", {target: {value: "TestInput"}}); return waitForIt(() => mockedComponent.find("h1").text())}; const _2 = () => { mockedComponent.find("form").simulate("submit"); return waitForIt(() => mockedComponent.find("h1").text())}; const before = await _1(); const after = await _2(); assert(before === "" && after === "TestInput", "The <code>h1</code> header should render the value of the <code>submit</code> field from the component's state."); }; '
```
</section>
## Challenge Seed
<section id='challengeSeed'>
<div id='jsx-seed'>
```jsx
class MyForm extends React.Component {
constructor(props) {
super(props);
this.state = {
input: ",
submit: "
};
this.handleChange = this.handleChange.bind(this);
this.handleSubmit = this.handleSubmit.bind(this);
}
handleChange(event) {
this.setState({
input: event.target.value
});
}
handleSubmit(event) {
// change code below this line
// change code above this line
}
render() {
return (
<div>
<form onSubmit={this.handleSubmit}>
{ /* change code below this line */ }
{ /* change code above this line */ }
<button type='submit'>Submit!</button>
</form>
{ /* change code below this line */ }
{ /* change code above this line */ }
</div>
);
}
};
```
</div>
### After Test
<div id='jsx-teardown'>
```js
console.info('after the test');
```
</div>
</section>
## Solution
<section id='solution'>
```js
// solution required
```
</section>
| 69.554455 | 1,239 | 0.690534 | spa_Latn | 0.312013 |
21ed269e5c7d7a0bcff8870ec7ad5a0861618728 | 5,003 | md | Markdown | business-central/bi-how-to-set-up-and-publish-kpi-web-services-based-on-account-schedules.md | MicrosoftDocs/dynamics365smb-docs-pr.es-es | 5ed72f3c5f65f0b0287b214eb20430a86d85d169 | [
"CC-BY-4.0",
"MIT"
] | 4 | 2017-08-28T10:41:43.000Z | 2021-12-09T14:44:50.000Z | business-central/bi-how-to-set-up-and-publish-kpi-web-services-based-on-account-schedules.md | MicrosoftDocs/dynamics365smb-docs-pr.es-es | 5ed72f3c5f65f0b0287b214eb20430a86d85d169 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | business-central/bi-how-to-set-up-and-publish-kpi-web-services-based-on-account-schedules.md | MicrosoftDocs/dynamics365smb-docs-pr.es-es | 5ed72f3c5f65f0b0287b214eb20430a86d85d169 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2019-10-12T19:49:50.000Z | 2019-10-12T19:49:50.000Z | ---
title: Configurar y publicar servicios web KPI para esquemas de cuentas
description: Este tema describe cómo mostrar los datos de KPI del esquema de cuentas según esquemas de cuentas específicos.
author: bholtorf
ms.service: dynamics365-business-central
ms.topic: conceptual
ms.devlang: na
ms.tgt_pltfrm: na
ms.workload: na
ms.search.keywords: ''
ms.date: 06/15/2021
ms.author: bholtorf
ms.openlocfilehash: b224ea5e9bb2d0f53ce41c2dac66a1686dd5a90b
ms.sourcegitcommit: a7cb0be8eae6ece95f5259d7de7a48b385c9cfeb
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/08/2021
ms.locfileid: "6437006"
---
# <a name="set-up-and-publish-kpi-web-services-based-on-account-schedules"></a>Configurar y publicar un servicio web KPI que se basa en esquemas de cuentas
En la página **Configuración de servicio Web KPI de esquema de cuentas**, se configura cómo mostrar los datos KPI del esquema de cuentas y en qué esquemas de cuentas específicos se deben basar los KPI. Cuando elige el botón **Publicar servicio Web**, los datos especificados KPI de esquema de cuentas se agregan a la lista de servicios Web publicados en la página **Servicios Web**.
> [!NOTE]
> Cuando utiliza este servicio web, las fechas de cierre no se incluyen en su conjunto de datos. Esto le permite usar filtros en Power BI para analizar varios períodos de tiempo.
## <a name="to-set-up-and-publish-a-kpi-web-service-that-is-based-on-account-schedules"></a>Para configurar y publicar un servicio web KPI que se basa en esquemas de cuentas
1. Elija el icono  , escriba **Configuración de servicio web KPI de esquema de cuentas** y, a continuación, elija el vínculo relacionado.
2. En la ficha desplegable **General**, rellene los campos tal como se describe en la tabla siguiente.
|Campo|Descripción|
|---------------------------------|---------------------------------------|
|**Inicio de valores previstos**|Especifique en qué momento se muestran los valores previstos en el gráfico KPI del esquema de cuentas.<br /><br /> Los valores previstos se recuperan del presupuesto de contabilidad general que seleccione en el campo **Nombre presupuesto contable**. **Nota**: Para obtener KPI que muestren cifras previstas tras una fecha determinada y cifras reales antes de la fecha, puede cambiar el campo **Permitir registro desde** en la página **Configuración de contabilidad**. Para obtener más información, consulte Permitir registrar desde.|
|**Nombres pptos. contabilidad<**|Especifique el nombre del presupuesto de contabilidad que proporciona los valores previstos en el servicio web KPI de esquema de cuentas.|
|**Periodo**|Especifique el periodo en el que se basa el servicio web KPI de esquema de cuentas.|
|**Ver por**|Especifiqué en qué intervalo de tiempo se muestra el KPI de esquema de cuentas.|
|**Nombre del servicio web**|Especifique el nombre del servicio web KPI de esquema de cuentas.<br /><br /> Este nombre aparecerá en el campo **Nombre servicio** de la página **Servicios web**.|
Especifique uno o varios esquemas de cuentas que desee publicar como servicio web de KPI según la configuración que ha realizado en la tabla anterior.
3. En la ficha desplegable **Esquemas de cuentas**, rellene los campos tal como se describe en la tabla siguiente.
|Campo|Descripción|
|---------------------------------|---------------------------------------|
|**Nombre esq. cuentas**|Especifique el esquema de cuentas en el que se basa el servicio web KPI.|
|**Descripción esq. cuentas**|Especifique la descripción del esquema de cuentas en la que se basa el servicio web KPI.|
4. Repita el paso 3 para todos los esquemas de cuentas en los que desea basar el servicio web KPI de esquema de cuentas.
5. Para ver o editar el esquema de cuentas seleccionado, en la ficha desplegable **Esquema de cuentas**, elija **Editar esquema cuentas**.
6. Para ver los datos KPI del esquema de cuentas que ha configurado, elija la acción **Servicio Web KPI de esquema de cuentas**.
7. Para publicar el servicio Web KPI de esquema de cuentas, elija la acción **Publicar servicio Web**. El servicio web se agrega a la lista de servicios web publicados en la página **Servicios web**.
> [!NOTE]
> También puede publicar el servicio web KPI seleccionando el objeto de la página **Configuración de servicio web KPI de esquema de cuentas** de la página **Servicios Web**. Para obtener más información, consulte [Publicar un servicio web](across-how-publish-web-service.md).
## <a name="see-also"></a>Consulte también
[Inteligencia empresarial](bi.md)
[Finanzas](finance.md)
[Configurar las finanzas](finance-setup-finance.md)
[Libro mayor y plan de cuentas](finance-general-ledger.md)
[Trabajar con [!INCLUDE[prod_short](includes/prod_short.md)]](ui-work-product.md)
[!INCLUDE[footer-include](includes/footer-banner.md)] | 79.412698 | 572 | 0.739756 | spa_Latn | 0.976127 |
21ed2a23045c8fa65de14692ffd8cc94c6519d9e | 7,297 | markdown | Markdown | _posts/2020-01-20-Zookeeper-All-you-need-to-know.markdown | jalispran/jalispran | e822fe926e4f763a47f33591cfeb9257621af08f | [
"MIT"
] | null | null | null | _posts/2020-01-20-Zookeeper-All-you-need-to-know.markdown | jalispran/jalispran | e822fe926e4f763a47f33591cfeb9257621af08f | [
"MIT"
] | null | null | null | _posts/2020-01-20-Zookeeper-All-you-need-to-know.markdown | jalispran/jalispran | e822fe926e4f763a47f33591cfeb9257621af08f | [
"MIT"
] | null | null | null | ---
title: "Zookeeper - All you need to know"
layout: post
date: 2020-01-20
tag:
- Zookeeper
- Distributed Systems
- Coordination Service
- Znodes
- Quorum
description: "This blog covers the basic of Zookeeper"
image: https://upload.wikimedia.org/wikipedia/en/thumb/8/81/Apache_ZooKeeper_Logo.svg/1024px-Apache_ZooKeeper_Logo.svg.png
headerImage: https://upload.wikimedia.org/wikipedia/en/thumb/8/81/Apache_ZooKeeper_Logo.svg/1024px-Apache_ZooKeeper_Logo.svg.png
category: blog
author: pranjal
---
Historically, each application was a single program running on a single computer with a single CPU. Today, things have changed. In the Big Data and Cloud Computing world, applications are made up of many indepndant programs running on an ever-changing set of computers.
Coordinating the actions of these independant programs is far more difficult than writing a single program to run on a single computer.
**Zookeeper** was designed to be a robust service that enables application developers to mainly focus on their application logic rather than coordination. It exposes a simple API, inspired by the filesystem API, that allows developers to implement common coordination tasks.
### Where is Zookeeper used?
- Apache HBase
- Apache Kafka
- Apache Solr
- Yahoo! Fetching Services
- Facebook Messenger
## CAP Theorem
It is impossible for a distributed data store to simultaneously provide more than two out of the following three gurantees.
- _Consistency_: Every read receives the most recent write or an error
- _Availability_: Every request receives a (non-error) response - without the gurantee that it contains the most recent write
- _Partition tolerance_: the system continues to operate despite an arbitrary number of messages being dropped (or delayed) by the network between nodes
It implies that in the presence of a network partition, one has to choose between _Consistency_ and _Availability_
Zookeeper is designed with mostly Consistency and Availability in mind. It also provides read-only capability in the presence of network partition. What that means is, the clients connected to a znode, which was part of an ensamble/cluster, can keep communicating with the znode (even after network partition) and will keep receiving stale, read-only data.
_Paxos_ and _Virtual Synchrony_ algorithms have been perticularly influential in the design of Zookeeper
> Zookeeper is a CP system
## Definitions
#### znode
Every node in a Zookeeper tree is referred to as a _znode_
#### watch
A watch is a one-shot operation, which means that it triggers one notification
#### quorum
Zookeeper quorum is the minimum number of servers that have to be running and available in order for Zookeeper to work
#### zxid
A _zxid_ is a long (64 bit) integer split into two parts: the _epoch_ and the _counter_. Each part has 32 bits. Simply put, _zxid_ is the timestamp of the last change
#### leader
The _leader_ is the central point for handling all requests that change the Zookeeper system. It acts as a sequencer and establishes the order of updates to the Zookeeper state.
#### follower
A _follower_ receive and vote on the updates proposed by the leader to guarantee that updates to the state survive crashes
#### split-brain
Two subsets of servers making progress simultaneously. In this scenario, a single cluster can have multiple Leaders. Zookeeper strongly advises **against** _split-brain_. Needless to say, a _split-brain_ points to a faulty configuration
> Myth-buster: Zookeeper is not for bulk storage
## znode in detail
#### Persistent znode
A persistent znode _/path_ can be deleted only through a call to _delete/remove_ (there is a slight difference between the two)
By default, all znodes are persistent znodes unless otherwise stated
#### Ephemeral znode
An _ephemeral znode_ is deleted if the client that created it crashes or simply closes the connection to Zookeeper.
#### Sequential znode
A _sequential znode_ is assigned a unique, monotonically increasing integer
There are four options for the mode of a znode:
`persistent, ephemeral, persistent_sequential and ephemeral_sequential `
## watch in detail
To replace client polling, Zookeeper has opted for a mechanism based on notifications. Clients register with Zookeeper to receive notifications of changes ro znodes. Registering to receive a notification for a given znode consists of setting a _watch_.
_Watches_ are one time triggers and are sent asynchronously to the watchers/client.
_Watches_ are set while reading data and triggered while writing data.
> Notifications are delivered to a client before any other change is made to the same node
_Watches_ only tell that something has changed, it does not talk about what has changed.
A client can set a _watch_ for:
- changes to the data of a znode
- changes to the children of a znode
- znode being created or deleted
## quorum in detail
This number is the minimum number of servers that have to store client's data before telling the client that its data is safely stored.
We should always shoot for an odd number of servers. Typically, `(2F + 1)` servers. Where `F` is the number of server failures the cluster can tolerate
## Versions
A couple of operations in the Zookeeper API can be executed conditionally: `setData` and `delete`. Both calls take a _version_ as an input parameter and the operation succeeds only if the _version_ passed by the client matches the current _version_ on the server.
The use of _version_ is important when multiple clients might be trying to perform operations over the same znode.
## Sessions
Before executing any request against a Zookeeper ensemble, a client must establish a session with the Zookeeper service.
As soon as a client is connected to a server and a session established, the session is replicated with the leader.
Sessions offer order grantees, which means, requests in a session are executed in `FIFO` order. However, a client can have multiple concurrent sessions, in which case `FIFO` ordering is not preserved across sessions.
Sessions may be moved to a different server of the client has not heard from its current server for some time.
> Moving the session to a different server, transparently, is handled by the client library
All operations a client submits to Zookeeper are associated to a session. When a session ends for any reason, the ephemeral nodes created dusring that session disappear.
The Zookeeper ensamble is the one responsible for declaring session expired, not the client. The client may choose to close the session, however.
#### Session handling on the client side
On the client side, if it has heard nothing from the server at the end of `1/3rd of t`, it sends a _heartbeat_ message to the server. At `2/3rd of t` the client starts looking for another server, and it has another `1/3rd of t` to find one.
## Stay tuned for season 2
There is so much more to Zookeeper than this. Lets not overwhelm ourselves with information. Take time to process all this and in the meanwhile, allow me to write another blog in the continuation.
I will cover the following topics in next post:
- Types of servers
- Leader election
- How to configure Zookeeper in standalone mode and in ensemble mode
- Notification messages
- Logs
See you soon! | 53.262774 | 356 | 0.79334 | eng_Latn | 0.999447 |
21edb3a0a09fc07d0b9048688b00dc4c15fbf6a0 | 11,398 | md | Markdown | articles/cloud-services/cloud-services-vs-ci.md | DarryStonem/azure-docs.es-es | aa59a5fa09188f4cd2ae772e7818b708e064b1c0 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2017-05-20T17:31:12.000Z | 2017-05-20T17:31:12.000Z | articles/cloud-services/cloud-services-vs-ci.md | DarryStonem/azure-docs.es-es | aa59a5fa09188f4cd2ae772e7818b708e064b1c0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cloud-services/cloud-services-vs-ci.md | DarryStonem/azure-docs.es-es | aa59a5fa09188f4cd2ae772e7818b708e064b1c0 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Entrega continua para Cloud Services con Visual Studio Online | Microsoft Docs
description: "Obtenga información acerca de cómo configurar la entrega continua para aplicaciones en la nube de Azure sin guardar la clave de almacenamiento de la información de diagnóstico en los archivos de configuración de servicio"
services: cloud-services
documentationcenter:
author: cawa
manager: paulyuk
editor:
ms.assetid: 148b2959-c5db-4e4a-a7e9-fccb252e7e8a
ms.service: cloud-services
ms.workload: tbd
ms.tgt_pltfrm: na
ms.devlang: dotnet
ms.topic: article
ms.date: 11/02/2016
ms.author: cawa
translationtype: Human Translation
ms.sourcegitcommit: 165ab7363efaf90eaab41098f71e2f1b846c346e
ms.openlocfilehash: 7e70f92d4d1333ca6cbac5876e5ccbc763bd915c
---
# <a name="securely-save-cloud-services-diagnostics-storage-key-and-setup-continuous-integration-and-deployment-to-azure-using-visual-studio-online"></a>Guarde de forma segura la clave de almacenamiento de la información de diagnóstico de Cloud Services y configure la integración e implementación continua en Azure mediante Visual Studio Online
Es una práctica común para abrir proyectos de código fuente hoy en día. Guardar los secretos de aplicación en archivos de configuración ya no es una práctica segura, ya que se producen vulnerabilidades de seguridad procedentes de los secretos que se filtran desde los controles de código fuente públicos. Almacenar el secreto como texto no cifrado en un archivo de una canalización de integración continua no es seguro ya que los servidores de compilación podrían convertirse en recursos compartidos en el entorno en la nube. Este artículo explica cómo Visual Studio y Visual Studio Online mitiga los problemas de seguridad durante el desarrollo y el proceso de integración continua.
## <a name="remove-diagnostics-storage-key-secret-in-project-configuration-file"></a>Eliminación del secreto de la clave de almacenamiento de la información de diagnóstico del archivo de configuración de proyecto
La extensión de diagnósticos de Cloud Services requiere Azure Storage para guardar los resultados de diagnóstico. Anteriormente, la cadena de conexión de almacenamiento se especificaba en los archivos de configuración (.cscfg) de Cloud Services y podría incorporarse al control de código fuente. En la versión más reciente de Azure SDK se ha cambiado el comportamiento de forma que solo se almacena una cadena de conexión parcial y la clave se sustituye por un token. Los siguientes pasos describen el funcionamiento de las nuevas herramientas de Cloud Services:
### <a name="1-open-the-role-designer"></a>1. Apertura del diseñador de roles
* Haga doble clic o haga clic con el botón secundario en un rol de Cloud Services para abrir el diseñador de roles
![Abra el diseñador de roles][0]
### <a name="2-under-diagnostics-section-a-new-check-box-dont-remove-storage-key-secret-from-project-is-added"></a>2. En la sección de diagnósticos, se ha agregado una nueva casilla de verificación "Don’t remove storage key secret from project" (No quitar secreto de la clave de almacenamiento del proyecto)
* Si está utilizando el emulador de almacenamiento local, esta casilla está deshabilitada porque no hay ningún secreto que administrar para la cadena de conexión local, que es UseDevelopmentStorage=true.
![La cadena de conexión del emulador de almacenamiento local no es un secreto][1]
* Si va a crear un nuevo proyecto, esta casilla de verificación estará desactivada de forma predeterminada. Esto da como resultado que la sección de clave de almacenamiento de la cadena de conexión de almacenamiento seleccionada se sustituye por un token. El valor del token se encuentra en la carpeta AppData Roaming actual del usuario, por ejemplo: C:\Users\contosouser\AppData\Roaming\Microsoft\CloudService
> Tenga en cuenta que la carpeta user\AppData tiene un acceso controlado mediante el inicio de sesión de usuario y se considera un lugar seguro para almacenar secretos de desarrollo.
>
>
![La clave de almacenamiento se guarda en la carpeta del perfil de usuario][2]
### <a name="3-select-a-diagnostics-storage-account"></a>3. Selección de la cuenta de almacenamiento de información de diagnóstico
* Seleccione una cuenta de almacenamiento en el cuadro de diálogo que se inicia haciendo clic en el botón "..." . Observe cómo la cadena de conexión de almacenamiento generada no tendrá la clave de cuenta de almacenamiento.
* Por ejemplo: DefaultEndpointsProtocol=https;AccountName=contosostorage;AccountKey=$(*clouddiagstrg.key*)
### <a name="4-debugging-the-project"></a>4. Depuración del proyecto
* Presione F5 para empezar la depuración en Visual Studio. Todo debería funcionar de la misma manera que antes.
![Inicie la depuración localmente][3]
### <a name="5-publish-project-from-visual-studio"></a>5. Publicación del proyecto desde Visual Studio
* Inicie el cuadro de diálogo Publicar y continúe con las instrucciones de inicio de sesión para publicar la aplicación en Azure.
### <a name="6-additional-information"></a>6. Información adicional
> Nota: El panel de configuración del diseñador de roles permanecerá igual que está ahora. Si desea utilizar la característica de administración de secretos para diagnósticos, vaya a la pestaña de configuraciones.
>
>
![Incorporación de la configuración][4]
> Nota: Si está habilitada, se almacenará la clave de Application Insights como texto sin formato. La clave solo se utiliza para los contenidos de carga, de modo que no se ve comprometida la seguridad de ningún dato confidencial.
>
>
## <a name="build-and-publish-a-cloud-services-project-using-visual-studio-online-task-templates"></a>Compilación y publicación de un proyecto de Cloud Services mediante plantillas de tareas en Visual Studio Online
* Los pasos siguientes muestran cómo configurar la integración continua para proyectos de Cloud Services con tareas de Visual Studio Online:
### <a name="1-obtain-a-vso-account"></a>1. Obtenga una cuenta de VSO
* [Creación de una cuenta de Visual Studio Online][Creación de una cuenta de Visual Studio Online] si aún no tiene una
* [Creación de un proyecto de equipo][Creación de un proyecto de equipo] en su cuenta de Visual Studio Online
### <a name="2-setup-source-control-in-visual-studio"></a>2. Configuración del control de código fuente en Visual Studio
* Conéctese a un proyecto de equipo
![Conéctese a un proyecto de equipo][5]
![Seleccione el proyecto de equipo al que conectarse][6]
* Agregue el proyecto al control de código fuente
![Agregue el proyecto al control de código fuente][7]
![Asigne el proyecto a una carpeta de control de código fuente][8]
* Compruebe el proyecto desde Team Explorer
![Proteja un proyecto en el control de código fuente][9]
### <a name="3-configure-build-process"></a>3. Configuración del proceso de compilación
* Vaya al proyecto de equipo y agregue un nuevo proceso de compilación, Templates
![Agregue una nueva compilación][10]
* Seleccione una tarea de compilación
![Agregue una tarea de compilación][11]
![Seleccione la plantilla de tarea de compilación de Visual Studio][12]
* Edite la entrada de la tarea de compilación. Personalice los parámetros de compilación según sus necesidades
![Configure la tarea de compilación][13]
`/t:Publish /p:TargetProfile=$(targetProfile) /p:DebugType=None /p:SkipInvalidConfigurations=true /p:OutputPath=bin\ /p:PublishDir="$(build.artifactstagingdirectory)\\"`
* Configure las variables de compilación
![Configure las variables de compilación][14]
* Agregue una tarea para cargar el destino de compilación
![Elija publicar tarea de destino de compilación][15]
![Configure la tarea de publicación de destino de compilación][16]
* Ejecute la compilación
![Ponga en cola la nueva compilación][17]
![Vea el resumen de compilación][18]
* Si la compilación se ha realizado correctamente, verá un resultado parecido al siguiente
![Resultado de la compilación][19]
### <a name="4-configure-release-process"></a>4. Configuración del proceso de lanzamiento
* Cree una nueva versión
![Cree una nueva versión][20]
* Seleccione la tarea de implementación de Azure Cloud Services
![Seleccione la tarea de implementación de Azure Cloud Services][21]
* Como la clave de la cuenta de almacenamiento no está activada en el control de código fuente, es preciso especificar la clave secreta para establecer las extensiones de diagnóstico. Expanda la sección **Opciones avanzadas para Creación de un nuevo servicio** y edite la entrada de parámetro de **claves de cuenta de almacenamiento de información de diagnósticos**. Esta entrada ocupa varias líneas de par clave-valor con el formato **[RoleName]:$(StorageAccountKey)**
> Nota: Si la cuenta de almacenamiento de información de diagnóstico pertenece a la misma suscripción en la que publicará la aplicación de Cloud Services, no es necesario que introduzca la clave de la entrada de la tarea de implementación. La implementación obtendrá mediante programación la información de almacenamiento de la suscripción
>
>
![Configuración de la tarea de implementación de Azure Cloud Services][22]
* Use variables de compilación de secretos para guardar las claves de almacenamiento. Para enmascarar una variable como secreto, haga clic en el icono de bloqueo en el lado derecho de la entrada Variables
![Guardar las claves de almacenamiento en variables de compilación de secreto][23]
* Cree una versión e implemente el proyecto en Azure
![Cree una nueva versión][24]
## <a name="next-steps"></a>Pasos siguientes
Para más información sobre el establecimiento de extensiones de diagnóstico para Azure Cloud Services, consulte [Habilitar el diagnóstico en Azure Cloud Services mediante PowerShell][Habilitar el diagnóstico en Azure Cloud Services mediante PowerShell]
[Creación de una cuenta de Visual Studio Online]:https://www.visualstudio.com/team-services/
[Creación de un proyecto de equipo]: https://www.visualstudio.com/it-it/docs/setup-admin/team-services/connect-to-visual-studio-team-services
[Habilitar el diagnóstico en Azure Cloud Services mediante PowerShell]:https://azure.microsoft.com/en-us/documentation/articles/cloud-services-diagnostics-powershell/
[0]: ./media/cloud-services-vs-ci/vs-01.png
[1]: ./media/cloud-services-vs-ci/vs-02.png
[2]: ./media/cloud-services-vs-ci/file-01.png
[3]: ./media/cloud-services-vs-ci/vs-03.png
[4]: ./media/cloud-services-vs-ci/vs-04.png
[5]: ./media/cloud-services-vs-ci/vs-05.png
[6]: ./media/cloud-services-vs-ci/vs-06.png
[7]: ./media/cloud-services-vs-ci/vs-07.png
[8]: ./media/cloud-services-vs-ci/vs-08.png
[9]: ./media/cloud-services-vs-ci/vs-09.png
[10]: ./media/cloud-services-vs-ci/vso-01.png
[11]: ./media/cloud-services-vs-ci/vso-02.png
[12]: ./media/cloud-services-vs-ci/vso-03.png
[13]: ./media/cloud-services-vs-ci/vso-04.png
[14]: ./media/cloud-services-vs-ci/vso-05.png
[15]: ./media/cloud-services-vs-ci/vso-06.png
[16]: ./media/cloud-services-vs-ci/vso-07.png
[17]: ./media/cloud-services-vs-ci/vso-08.png
[18]: ./media/cloud-services-vs-ci/vso-09.png
[19]: ./media/cloud-services-vs-ci/vso-10.png
[20]: ./media/cloud-services-vs-ci/vso-11.png
[21]: ./media/cloud-services-vs-ci/vso-12.png
[22]: ./media/cloud-services-vs-ci/vso-13.png
[23]: ./media/cloud-services-vs-ci/vso-14.png
[24]: ./media/cloud-services-vs-ci/vso-15.png
<!--HONumber=Nov16_HO3-->
| 59.675393 | 684 | 0.784085 | spa_Latn | 0.966294 |
21ede0c630e6546c2cda6642b04e12e3a1520d71 | 1,269 | md | Markdown | _posts/2021-03-15-these-are-some-software-development-trends-that-everyone-should-pay-attention-to-in-twentytwentyone.md | ahyconsulting/stories | c422a474fce6732a9af83216746b0ad6db7a62f7 | [
"MIT"
] | null | null | null | _posts/2021-03-15-these-are-some-software-development-trends-that-everyone-should-pay-attention-to-in-twentytwentyone.md | ahyconsulting/stories | c422a474fce6732a9af83216746b0ad6db7a62f7 | [
"MIT"
] | null | null | null | _posts/2021-03-15-these-are-some-software-development-trends-that-everyone-should-pay-attention-to-in-twentytwentyone.md | ahyconsulting/stories | c422a474fce6732a9af83216746b0ad6db7a62f7 | [
"MIT"
] | null | null | null | ---
layout: post
title: "These are some Software Development Trends That Everyone Should Pay Attention to in 2021"
author: Anand
categories: [ tech, web ]
tags: [ outsource, webdesign ]
image: assets/images/2021/03/pexels-fauxels-3182752.jpg
featured: false
hidden: false
---
### 1. The Cloud Factor
I recall the first wave of the Covid-19, when the majority of businesses, outlets, colleges, universities, and shopping malls all over the world were forced to close. This inspired people all over the world to look for other ways to work from home. Covid-19 dispelled any lingering suspicions and apprehensions about cloud computing.
### 2. Artificial Intelligence
Artificial intelligence (AI) is a kind of software that is smart. Companies that work with machine learning applications are at the top of the heap. Today, more companies are implementing the AI concept.
### 3. Business Automation
We understand that companies are increasingly investing in artificial intelligence and machine learning training for their employees in order to prepare them for a future dominated by automation. Take, for example, the supply and demand chain industry, which has arguably incorporated AI and machine learning best practices to streamline workflows and procedures.
| 52.875 | 363 | 0.795902 | eng_Latn | 0.998652 |
21ee1674e3c0de5363302942c3aca1fa37367697 | 2,568 | md | Markdown | components/bee-tree/dist/v2.0.5/api.md | JetXing/mtl-api | e03adc9bfb24093872297ca42f9ecc659474fed0 | [
"MIT"
] | 7 | 2017-01-05T00:36:19.000Z | 2020-01-05T10:47:19.000Z | components/bee-tree/dist/v2.0.5/api.md | JetXing/mtl-tools | 4c7cb2bbc77d59926ddd284b1c27fd999327e06c | [
"MIT"
] | 5 | 2017-08-29T11:24:18.000Z | 2018-05-13T09:04:38.000Z | components/bee-tree/dist/v2.0.5/api.md | JetXing/mtl-tools | 4c7cb2bbc77d59926ddd284b1c27fd999327e06c | [
"MIT"
] | 4 | 2018-08-29T06:07:02.000Z | 2020-01-02T08:36:15.000Z | # Tree
## 代码演示
## API
## Tree
|参数|说明|类型|默认值|
|:---|:-----|:----|:------|
|multiple|是否允许选择多个树节点|bool|false
|checkable|是否支持添加在树节点前添加Checkbox|bool|false
|defaultExpandAll|默认是否展开所有节点|bool|false
|defaultExpandedKeys|默认展开指定的节点|String[]|[]
|expandedKeys|指定展开的节点(controlled)|String[]|[]
|autoExpandParent|是否自定展开父节点|bool|true
|defaultCheckedKeys|指定默认被选中的节点key|String[]|[]
|checkedKeys|指定被选中的节点(controlled)(PS:当指定的是父节点,所有的子节点也会被指定;当指定的是子节点,父节点也会被选中。当checkable和checkStrictly都为true,子节点与父节点的选择情况都不会影响到对方|String[]/{checked:Array,halfChecked:Array}|[]
|checkStrictly|checkable状态下节点选择完全受控(父子节点选中状态不再关联)|bool|false
|defaultSelectedKeys|指定选中的节点key|String[]|[]
|selectedKeys|指定选中的节点keys(controlled)|String[]|-
|cancelUnSelect|选中的节点第二次点击时还是选中,不自动取消选中|bool|false
|showLine|是否显示连接线|bool|false
|openIcon|自定义展开节点图标的名称[参考这里](http://bee.tinper.org/bee-icon)String[]|-
|closeIcon|自定义关闭节点图标的名称[参考这里](http://bee.tinper.org/bee-icon)String[]|-
|onExpand|当打开或关闭树节点触发的方法|function(expandedKeys, {expanded: bool, node})|-
|onCheck|当选择事件发生触发的方法|function(checkedKeys, e:{checked: bool, checkedNodes, node, event})|-
|onSelect|当用户选择树节点触发的回调函数|function(selectedKeys, e:{selected: bool, selectedNodes, node, event})|-
|filterTreeNode|过滤树节点的方法(highlight),当返回true,相关联的节点会高亮|function(node)|-
|loadData|异步加载数据|function(node)|-
|onRightClick|当用户点击右键触发的回调函数|function({event,node})|-
|draggable|树是否可拖拽(IE>8| bool|false
|onDragStart|当树节点刚开始拖拽所触发的放方法|function({event,node})|-
|onDragEnter|当拖拽进入触发的方法|function({event,node,expandedKeys})|-
|onDragOver|当拖拽经过触发的方法|function({event,node})|-
|onDragLeave|当拖拽离开触发的方法|function({event,node})|-
|onDragEnd|当拖拽结束触发的方法|function({event,node})|-
|onDrop|当节点放下触发方法|function({event, node, dragNode, dragNodesKeys})|-
|onDoubleClick|当双击发生触发的方法|function(checkedKeys, e:{node, event})|-
|focusable|是否开启快捷键功能,使用Tab 键导航获取焦点↓↑选中下、上一个节点,→←展开或者收起一个节点,enter键为节点双击事件|bool|-
|tabIndexValue|节点获取焦点时,自定义tabIndex的值|Number|0
|Children|必填,TreeNode组件|node|-
## TreeNode
|参数|说明|类型|默认值|
|:---|:-----|:----|:------|
|disabled|节点是否不可用|bool|false
|disableCheckbox|节点的checkbox是否不可用|bool|false
|title|名称标题|String/element |--
|key|节点key,和(default)ExpandedKeys / (default)CheckedKeys / (default)SelectedKeys一起用,必须是唯一的|String|-
|isLeaf|是否是叶子节点|bool|false
|titleClass|名称类名|String|-
|titleStyle|名称样式|Object|-
|switcherClass|switcher类名|String|-
|switcherStyle|switcher样式|Object|-
|Children|TreeNode组件/无|node|-
## 快捷键API
| 快捷键 | 类型 |快捷键说明 |
| --- | :---: | --- |
| focusable | function | 是否开启快捷键 |
| tab | - | tab 进入焦点,且选中第一行。|
| ↑、↓ | - | ↑(上箭)、↓(下箭) 选中上一行、选中下一行。 |
| ←、→ | - | ←(左箭)、→(右箭) 收起、展开。 | | 38.909091 | 173 | 0.749221 | yue_Hant | 0.570525 |
21eeb9bb0ad76872614e5ff9fda679f8808e8299 | 1,476 | md | Markdown | docs/vs-2015/extensibility/customizing-editor-controls-and-menus-by-using-the-legacy-api.md | drvoss/visualstudio-docs.ko-kr | 739317082fa0a67453bad9d3073fbebcb0aaa7fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/customizing-editor-controls-and-menus-by-using-the-legacy-api.md | drvoss/visualstudio-docs.ko-kr | 739317082fa0a67453bad9d3073fbebcb0aaa7fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/extensibility/customizing-editor-controls-and-menus-by-using-the-legacy-api.md | drvoss/visualstudio-docs.ko-kr | 739317082fa0a67453bad9d3073fbebcb0aaa7fb | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 레거시 API를 사용 하 여 편집기 컨트롤 및 메뉴를 사용자 지정 | Microsoft Docs
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-ide-sdk
ms.tgt_pltfrm: ''
ms.topic: article
helpviewer_keywords:
- editors [Visual Studio SDK], legacy - controls and menus
ms.assetid: 1ce1f55b-6825-4654-a60a-7831af2ab44f
caps.latest.revision: 18
ms.author: gregvanl
manager: ghogen
ms.openlocfilehash: ad505c9f2d6f63d275a0f4aea49dc38e0be1bb4d
ms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 11/16/2018
ms.locfileid: "51798770"
---
# <a name="customizing-editor-controls-and-menus-by-using-the-legacy-api"></a>레거시 API를 사용 하 여 편집기 컨트롤 및 메뉴를 사용자 지정
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
언어 서비스 또는 텍스트 뷰 상황에 맞는 메뉴 및 편집기에서 컨트롤을 제어할 수 있습니다. 이 섹션의 페이지에는 이러한 기능을 사용 하는 방법을 자세히 설명 합니다.
## <a name="in-this-section"></a>섹션 내용
[드롭다운 표시줄](../extensibility/drop-down-bar.md)
드롭다운 메뉴 모음에 설명 하 고 구현에 대 한 지침을 제공 합니다.
[명령 처리](../extensibility/command-handling.md)
편집기에서 처리 하는 세부 정보 명령입니다.
[상황에 맞는 메뉴](../extensibility/context-menus.md)
편집기 상황에 맞는 메뉴를 설명합니다.
[방법: 상태 표시줄 업데이트](../extensibility/how-to-update-the-status-bar.md)
업데이트에 대 한 지침을 제공 합니다 **상태 표시줄**합니다.
## <a name="related-sections"></a>관련 단원
[편집기 및 언어 서비스 확장](../extensibility/editor-and-language-service-extensions.md)
다양 한 Visual Studio에서 사용할 수는 편집기를 소개 합니다.
| 31.404255 | 114 | 0.718157 | kor_Hang | 0.999713 |
21efdcac9d72afc6abbf31fb591a4d7527a3eea4 | 938 | md | Markdown | README.md | dune-copasi/dune-copasi | 61a40c67c94798db182f1ecf06ff44f34e295494 | [
"BSD-2-Clause"
] | 2 | 2020-10-03T22:37:01.000Z | 2021-07-15T07:55:44.000Z | README.md | dune-copasi/dune-copasi | 61a40c67c94798db182f1ecf06ff44f34e295494 | [
"BSD-2-Clause"
] | 12 | 2021-02-08T21:56:01.000Z | 2022-02-27T09:51:07.000Z | README.md | dune-copasi/dune-copasi | 61a40c67c94798db182f1ecf06ff44f34e295494 | [
"BSD-2-Clause"
] | 2 | 2021-02-03T17:02:32.000Z | 2021-07-14T12:19:38.000Z | [](https://gitlab.dune-project.org/copasi/dune-copasi/pipelines)
[](https://github.com/dune-copasi/dune-copasi/actions?query=branch%3Amaster+)
[](https://app.netlify.com/sites/dune-copasi/deploys)
# DuneCopasi
Solver for reaction-diffusion systems in multiple compartments
* Solve a different reaction-diffusion system for each comartment
* Custom fluxes between compartments
* Easy to modify configuration file
* Initial conditions can be a TIFF file and/or math expressions
* Using the finite element/finite volume method
* Output in the VTK format
Get started [here](https://dune-copasi.netlify.app/docs//install_use).
| 55.176471 | 182 | 0.787846 | eng_Latn | 0.2045 |
21f009d415d32adc3e7f22adec75b50866bffd46 | 525 | md | Markdown | CHANGELOG.md | cwilper/debug-observables | 233c7e50e0637b72e0e606d1294bb842e9329913 | [
"MIT"
] | null | null | null | CHANGELOG.md | cwilper/debug-observables | 233c7e50e0637b72e0e606d1294bb842e9329913 | [
"MIT"
] | 38 | 2021-05-12T06:23:00.000Z | 2022-03-31T10:10:42.000Z | CHANGELOG.md | cwilper/debug-observables | 233c7e50e0637b72e0e606d1294bb842e9329913 | [
"MIT"
] | null | null | null | <!-- Keep a Changelog guide -> https://keepachangelog.com -->
# Debug Observables Changelog
## [Unreleased]
## [1.0.2]
### Fixed
- [#10 - Occasional NPE when checking if Observable is selected](https://github.com/cwilper/debug-observables/issues/10)
## 1.0.1
### Updated
- Test and marked as compatible with latest version of platform
## 1.0.0
### Added
- Initial scaffold created from [IntelliJ Platform Plugin Template](https://github.com/JetBrains/intellij-platform-plugin-template)
- Initial implementation completed
| 29.166667 | 131 | 0.737143 | eng_Latn | 0.64671 |
21f05e3fd581e32f1afc50bc88a0eca0d9cfcb0f | 962 | md | Markdown | _posts/2021-07-21-17533066.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | _posts/2021-07-21-17533066.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | _posts/2021-07-21-17533066.md | bookmana/bookmana.github.io | 2ed7b023b0851c0c18ad8e7831ece910d9108852 | [
"MIT"
] | null | null | null | ---
title: "Art Fundamentals and CC CD-ROM V3.0 (MP) (Paperback/ 10th Ed.)"
date: 2021-07-21 14:29:15
categories: [외국도서, 해외주문원서]
image: https://bimage.interpark.com/goods_image/3/0/6/6/17533066s.jpg
description: ● ● The original text that set the standard for introduction to art courses across the country, Art Fundamentals has guided generations of students through th
---
## **정보**
- **ISBN : 0072878711**
- **출판사 : McGraw-Hill College**
- **출판일 : 20050805**
- **저자 : Ocvirk, Otto G./ Stinson, Robert E./ Wigg, Philip R./ Bone, Robert O./ Cayton, David L.**
------
## **요약**
● ● The original text that set the standard for introduction to art courses across the country, Art Fundamentals has guided generations of students through the essential elements of art as well as the rich and varied history of their uses. The tenth edition expands the wealth...
------
------
Art Fundamentals and CC CD-ROM V3.0 (MP) (Paperback/ 10th Ed.)
------
| 26.722222 | 282 | 0.691268 | eng_Latn | 0.834639 |
21f07d97d3ba251a23d4186c9bfba877b486727c | 103 | md | Markdown | _posts/2018-12-17-test-post-v2.md | bemeadows/stayathomewifelife | 2695398d73dbedb49d09d77306317718015c8199 | [
"MIT"
] | 1 | 2018-12-18T15:02:17.000Z | 2018-12-18T15:02:17.000Z | _posts/2018-12-17-test-post-v2.md | bemeadows/stayathomewifelife | 2695398d73dbedb49d09d77306317718015c8199 | [
"MIT"
] | null | null | null | _posts/2018-12-17-test-post-v2.md | bemeadows/stayathomewifelife | 2695398d73dbedb49d09d77306317718015c8199 | [
"MIT"
] | null | null | null | ---
layout: post
title: Test Post V2
keywords: New Keyword 3
description: Test3
---
Wenis wenis wenis.
| 12.875 | 23 | 0.728155 | arn_Latn | 0.181735 |
21f09f80defb504f29dcf9830ce4cf737ce45631 | 1,194 | md | Markdown | _announcements/2013 and earlier/2008-01-07.md | frtib/tsp-redesign | 7d733182a4fa0900799f0f10bb096259113e7110 | [
"CC0-1.0"
] | 3 | 2018-07-06T20:46:03.000Z | 2021-02-04T13:38:42.000Z | _announcements/2013 and earlier/2008-01-07.md | frtib/tsp-redesign | 7d733182a4fa0900799f0f10bb096259113e7110 | [
"CC0-1.0"
] | 128 | 2018-07-25T21:16:13.000Z | 2021-07-29T00:26:11.000Z | _announcements/2013 and earlier/2008-01-07.md | frtib/tsp-redesign | 7d733182a4fa0900799f0f10bb096259113e7110 | [
"CC0-1.0"
] | 3 | 2019-02-07T22:23:01.000Z | 2021-03-30T21:14:53.000Z | ---
permalink: /agency-service-reps/announcements/2008-01-07/
---
The Federal Retirement Thrift Investment Board has issued interim regulations that allow the Executive Director to adopt a policy of setting limits on the number of interfund transfers requests. In the near term, this interim regulation allows the Executive Director to immediately address and, if necessary, restrict the activity of frequent traders, who have disrupted management of the TSP Funds and whose activity has resulted in increased costs to all participants. This interim rule is effective January 7, 2008. For more information about frequent trading, see the Questions and Answers on the TSP Web site Home page.
A new chart, "Elective Deferral and Catch-Up Contribution Limits for 1987-Present,"is now available under [Historical information]({{ site.baseurl }}/agency-service-reps/historical-information/).
The elective deferral limit for 2008 remains $15,500. The limit for 2007 was $15,500. See TSP Bulletin 07-4.
Catch-Up Contributions - The limit on catch-up contributions for 2008 is $5,000. Eligible employees must make new elections for catch-up contributions each calendar year. See TSP Bulletin 07-4.
| 99.5 | 624 | 0.80402 | eng_Latn | 0.992922 |
21f11bb3671fee8fbc6e11804c2f5c283083ce45 | 4,166 | markdown | Markdown | _posts/2012-02-01-peluncuran-hpku-teman-belajarku-di-kediri.markdown | hixio-mh/website-4 | 943b8ecdb1f7e507abbb404051afbd40d0a855d3 | [
"MIT"
] | 4 | 2018-03-23T08:55:53.000Z | 2018-03-24T15:10:22.000Z | _posts/2012-02-01-peluncuran-hpku-teman-belajarku-di-kediri.markdown | hixio-mh/website-4 | 943b8ecdb1f7e507abbb404051afbd40d0a855d3 | [
"MIT"
] | 3 | 2021-12-20T17:56:32.000Z | 2021-12-20T17:59:59.000Z | _posts/2012-02-01-peluncuran-hpku-teman-belajarku-di-kediri.markdown | hixio-mh/website-4 | 943b8ecdb1f7e507abbb404051afbd40d0a855d3 | [
"MIT"
] | 5 | 2020-01-01T09:54:05.000Z | 2021-11-23T15:49:11.000Z | ---
title: Peluncuran HPku Teman Belajarku di Kediri
date: 2012-02-01 13:22:00 +07:00
categories:
- CMB
- Acara
tags:
- hp
- Media
- hpku teman belajarku
author: admincmb
comments: true
img: "/uploads/Peluncuran-HPku-Teman-Belajarku.jpg"
---
{: .img-responsive .center-block }
Pada tanggal 12 – 14 Januari 2011, Tim Cipta Media Bersama bertandang ke Kediri untuk melakukan peluncuran inisiatif [HPku Teman Belajarku](http://www.ciptamedia.org/wiki/Hpku-Teman_Belajarku).
Perjalanan dari Bandara Juanda, Surabaya ke Kediri memakan waktu kurang lebih tiga jam, tergantung seberapa padat lalu lintas. Jalanan dua jalur yang sempit hanya cukup untuk dilalui satu mobil, namun kecepatan dan kepadatan mobilnya cukup rapat.
Kediri adalah kota Pabrik Gudang Garam, informasi dari pengemudi memaparkan bahwa dari segala penjuru Jawa Timur buruh pabrik rokok bersiap untuk masuk kerja sekitar jam empat pagi, ada yang menggunakan sepeda, kereta api, hingga angkutan umum, sekitar 6.000 orang akan tiba secara bersamaan. Karena tidak ingin anak sekolah dan orang kantoran terganggu dengan lautan pekerja buruh, karena itu Walikota Kediri menetapkan jam tiba buruh pabrik lebih pagi. Masih menurut pengemudi, pengaturan waktu tersebut menyebabkan Kota Kediri telah “hidup” dan “ramai” sejak jam enam pagi. Tentu saja disini peraturan “Dilarang Merokok” sangat jarang ditemukan.
Setibanya di Hotel Merdeka di Kediri Tim Cipta Media disambut oleh [Bapak Ali Sahbana](http://www.ciptamedia.org/wiki/Ali_Sahbana) sebagai pelopor proyek. Tim Cipta Media diberikan jadwal yang cukup ketat. Pada sore hari kita mengadakan ulasan tentang pelatihan wiki dan mengapa anggaran harus transparan. Ada empat peserta yang hadir yaitu Pak Ali sendiri, Pak Nanang sebagai koordinator 1000 guru wilayah kota yang menganani 11 sekolah, Ibu Zulfa koordinator 1000 guru wilayah kabupaten menangani 20 SLTA, dan Ibu Zamzam sebagai bendahara proyek.
{: .img-responsive .center-block }
Pada hari Jum’at pagi bertempat di Ruang Joyoboyo Pemkot Kediri dihadiri oleh 60 orang peserta Lokakarya 1000 guru dilaksanakan [peluncuran proyek](http://www.ciptamedia.org/wiki/Hpku-Teman_Belajarku/Laporan_aktivitas#13_Januari_2012). Pak Ali bercerita tentang upaya yang ingin ia lakukan dan mengulas betapa tegangnya detik detik pengumuman penerima hibah. Turut memberikan sambutan adalah Kepala Dinas Pendidikan Kota Kediri Bapak Drs. Wahid Ansori yang memaparkan bahwa guru membutuhkan banyak pelatihan agar dapat melakukan peningkatan kapasitas. Selepas shalat Jum’at, acara dilakukan dengan [pelatihan wiki](http://www.ciptamedia.org/wiki/Hpku-Teman_Belajarku/Laporan_aktivitas#13_Januari_2012_2) selama lima jam non-stop di teras Hotel Merdeka. Seluruh mesin pemindai dan komputer diangkut, diiringi oleh alunan hujan deras.
Hari terakhir ditutup dengan kunjungan pada Madrasah Aliyah Negeri 2 Kediri. Karena hampir seluruh gurunya sedang “dipinjam” untuk melakukan lokakarya singkat Hpku Teman Belajarku, maka siswa diminta untuk berkumpul pada aula untuk menghadiri Acara seminar (siswa). Sebagai pendahuluan Tim Cipta Media sowan dulu pada Bapak Kepala Sekolah yang mengungkapkan betapa bangganya salah satu guru di Kediri masuk pertimbangan dan kemudian lolos selesi. Sebelum acara dilakukan diadakan pembacaan do’a dan pengumuman pemenang lomba futsal. Lalu acara dilanjutkan dengan ulasan singkat tentang Cipta Media. Setiap kali foto Pak Ali yang merupakan Kepala Lab Komputer dan Guru Teknologi Informasi dan Komputer (TIK) ditayangkan di layar, para murid bersahut-sahutan dan bertepuk tangan.
{: .img-responsive .center-block }
Selepas acara di sekolah, tim Cipta Media langsung bertolak ke Jakarta. Sebelumnya mampir dulu membeli tahu pong dan tahu takwa, dua tahu yang terkenal dan beberapa bungkus kerupuk pasir. Selamat tinggal Kediri, sampai bertemu lagi dan selamat bekerja keras untuk Pak Ali dan HPKu Teman Belajarku! | 122.529412 | 832 | 0.819491 | ind_Latn | 0.957049 |
21f1699c72ec4d3f29c8a63f59aa92078ba97e43 | 1,490 | md | Markdown | Provisioning/GroupEnrollmentFunction/Readme.md | ReneHezser/Azure-IoT-Python-Samples | 7dc0538f19f8c202404b1f6352e73798552573cb | [
"MIT"
] | null | null | null | Provisioning/GroupEnrollmentFunction/Readme.md | ReneHezser/Azure-IoT-Python-Samples | 7dc0538f19f8c202404b1f6352e73798552573cb | [
"MIT"
] | null | null | null | Provisioning/GroupEnrollmentFunction/Readme.md | ReneHezser/Azure-IoT-Python-Samples | 7dc0538f19f8c202404b1f6352e73798552573cb | [
"MIT"
] | null | null | null | # Introduction
This sample code of an Azure Function will take a registration id, and use the primary key of a group enrollment in Azure Device Provisioning Service to derive a key, that can then be used to connect a device to the enrollment group.
Ideally the device keys are derived and installed in the factory. This method guarantees the group key is never included in any software deployed to the device.
Well, this is not always possible. The Azure Function is a way to create a derived key later, without the need to place the DPS key on the device(s). You can pass a custom value as device id (e.g. the MAC address) and the Function will return a derived key that you can use to register the device in DPS in the next step.

## Configuration
Two environment variables are used:
- DpsConnectionString
- DpsEnrollmentGroupName
## Usage
After deployment, the function can be triggered with GET or POST. Just pass the function key and an individual id as ```deviceid``` parameter.
The function will return the derived key for the deviceid in the body.
## Links
- [DPS - Group Enrollments](https://docs.microsoft.com/en-us/azure/iot-dps/concepts-symmetric-key-attestation#group-enrollments)
- [DPS - Derive a device key](https://docs.microsoft.com/en-us/azure/iot-edge/how-to-auto-provision-symmetric-keys?view=iotedge-2018-06#derive-a-device-key)
- [Developing inside a container](https://code.visualstudio.com/docs/remote/containers) | 64.782609 | 321 | 0.781879 | eng_Latn | 0.992325 |
21f30b8cf1d1dfa1f882116c899a8e9a80956bec | 4,941 | md | Markdown | content/GTD Weekly Review April Week 2 2021.md | ransurf/quartz | 174c514401f4265b360fb0e22449adeb462cc152 | [
"MIT"
] | null | null | null | content/GTD Weekly Review April Week 2 2021.md | ransurf/quartz | 174c514401f4265b360fb0e22449adeb462cc152 | [
"MIT"
] | null | null | null | content/GTD Weekly Review April Week 2 2021.md | ransurf/quartz | 174c514401f4265b360fb0e22449adeb462cc152 | [
"MIT"
] | null | null | null | Status:
Tags:
Links: [[My Weekly Reviews]]
___
# GTD Weekly Review
## April Week 2 (April 13 - 18)
**High-impact tasks completed this week:**
- Beginning to use obsidian and recording my first week experiences
- Completely revolutionized the way I take notes, and am looking forward to further optimizing it and becoming more efficient with it!
- Gives me an incentive to capture the things I learn, which is important since I learn a lot xd
- Won't be surprised if I use it for the rest of my life
- Finished ELA soliloquy, ELA recap, and comparative essay
- Managed to maintain a 100 going into the final, hopefully I don't drop it too much :p
- It's gonna be weird no longer having to do any essays or writing until university where I'll only have one class left for it
- Finished reading "how to take smart notes"
- Perfect timing considering I just started using obsidian
- Submitted math handin, got a 5 on both ap practice exams given
- I'm just gonna tone it down a bit and focus on other things
- Finished unit 1 of nand2tetris
- It was fun learning about the different computer gates and actually building them
- the progression from a simple nand gate to dmux8ways and other complex chips was satisfying to build
- was a nice change from coding without knowing the internals of how a pc works
- It'd be neat if I can finish a unit a week
- Managed to create a decent point systems for the app
- Only problem is I'm starting to get errors for code that I copy-pasted, so troubleshooting is a pain since I don't know half the concepts lmao
**Other notable tasks completed this week:**
- Finished jujutsu kaisen :p
- Enjoyed the anime a lot, kind of want to watch more action/super power animes now xd
**Measuring key metrics:**
- Productive hours:
- Mon- 11
- Tues- 10.75
- Wed- 10.25
- Thurs- 9.5
- Fri- 11
- Sat- 10
- Sun- 10
- Total Hours: 72.5
- Hours average: 10.35 hours/day
- *Too lazy to subtract the time spent on 5 minute pomodoro breaks, and I don't always take them so idek*
**Assessment for important categories:**
- Was mainly productive during my time spent, so I was able to get lots done
- Only parts of friction were reading articles earlier in the week as well as app development
#### Reflection
**How do you feel about the last week?**
- Probably my most productive week yet, I don't know if I should take a break or if I should try and make this the norm hmm
**What setbacks did I face this week, and what can I do to deal with it next week?**
- I genuinely cannot focus while talking to other people on discord
- Don't join discord calls when I'm working, and mitigate my conversations during deep work sessions
**What are possible areas for improvement to maximize success?**
- I just need to practice focus and awareness so I can achieve extreme levels of productivity
- Have a pre-determined plan for time blocks
- Plan the day before using to-do lists or obsidian or something
- Implementing my water reminder and using the reminder for other things like checking my posture, ensuring muscle relaxation, increasing bloodflow, clearing my mind, etc
- Need to practice skimming through articles to see if any information is new and important
#### Future Plan
**What lead measures (skills) do I want to work on in the week ahead?**
- 0.5/d wellness research/learning
- little daily hacks, diet, skincare, meditation
- 0.5/d article reading
- Gotta make the most of my medium free trial :p
- Networking and preparing for BC and university transition
- I want to feel at home when I'm there, and I want to have support groups and get to know people who are taking similar classes
- I have to accept SFU by april 30th if ubc still hasn't reviewed my application
- Can be done during breaks
- Consider more intricate ways to reflect during my daily and weekly reviews
- Experiment more I guess
**What lag measures (products) do I want to complete in the week ahead?**
- Review essay one more time
- Mentally prepare for math final and new quarter transition
- Possibly chat with teachers prior to learn how it's gonna work out
- Pray that we don't go back to irl learning
- 1/d Finish reading "4 hour body"
- 1/d finish obsidian experience review
- Script, audio recording, visuals
- Think of ways to be more engaging
- 1.5/d math practice
- Download ap digital testing app
- Do some practice MC in place of FRQ possibly
- Otherwise, just skim through FRQs
- 1/d Unit 2 nand2tetris
- probably won't finish but i'll try to
- 1/d App development
- Gotta learn how to use stack overflow and fix problems I don't understand
- 1/d convert book notes to obsidian
- Keep expanding my second brain :)
- 0.5/d daily review and journalling
- Find new anime to watch or play new VR puzzle game
- Come up with an awareness routine
- Breathing exercises, mindset/meditation, something pleasing and relaxing yet motivating
___
References:
| 49.909091 | 170 | 0.754503 | eng_Latn | 0.999282 |
21f3e7b8b286441783e1e2c6245b598706718ba4 | 8,936 | md | Markdown | docs/readme.md | TypedProject/logger | 7d693f073789796d380d97e62f2776a3722525f3 | [
"MIT"
] | 20 | 2020-05-15T09:36:49.000Z | 2021-04-07T10:01:35.000Z | docs/readme.md | tsedio/logger | f48e6e808d49764ec14c92b018fd8da30002cb8b | [
"MIT"
] | 68 | 2021-05-13T08:18:23.000Z | 2022-03-30T05:50:50.000Z | docs/readme.md | TypedProject/logger | 7d693f073789796d380d97e62f2776a3722525f3 | [
"MIT"
] | 7 | 2020-07-26T22:36:39.000Z | 2021-03-18T07:58:59.000Z | ---
layout: Home
sidebar: false
meta:
- name: description
content: A Node.js and TypeScript multi channel logger.
- name: keywords
content: Ts.Logger nodejs typescript logger javascript decorators
gettingStartedText: Getting started
gettingStartedUrl: /getting-started.html
messengerText: Discussions
messengerIcon: bxl-slack
messengerUrl: https://api.tsed.io/rest/slack/tsedio/tsed
features:
- title: Colored
icon: bxs-color-fill
details: <a href="/appenders/console.html">Colored</a> console logging to <a href="/appenders/stdout">stdout</a> or <a href="/appenders/stderr">stderr</a>.
- title: Multi channel
icon: bxs-select-multiple
details: <a href="/appenders/file.htm">File appender</a>, with configurable log rolling based on file size or <a href="/appenders/file-date.html">date</a>.
- title: Extensible
icon: bx-extension
details: Use decorators to declare your own <a href="/appenders">appenders</a> and <a href="/layouts">layouts</a> logger.
contributors:
classes: bg-gray-lighter
title: Our awesome <b>contributors</b>
cta:
label: Become contributor
url: /contributing.html
badge:
width: 45
bgColor: white
backers:
cta:
label: Become backer
url: https://opencollective.com/tsed#backers
sponsors:
classes:
title: Support us
description: Ts.ED is under MIT-license and is an open-source project. Many thanks to our sponsors, partners and backers who contribute to promote and support our project!
cta:
label: Become sponsor
url: /support.html
items:
- title: Premium sponsors
class: w-1/2 sm:w-1/6 px-5 py-3
style:
maxHeight: 150px
items:
- title: Medayo
href: https://www.medayo.com
src: https://images.opencollective.com/medayo/1ef2d6b/logo/256.png
- title: They use it
class: w-1/3 sm:w-1/6 px-5 py-3
style:
maxHeight: 80px
items:
- title: Artips
href: https://artips.fr
src: /they-use-it/artips.png
- title: Yumi.us
src: https://yumi.us/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F6bc09fed-4612-4aa0-9192-225a0b3c7a30%2FYumi-logo-circle.png?table=block&id=1a875820-287a-4a97-aa40-ba3c8f3de9ae&width=250&userId=&cache=v2
href: https://yumi.us/
showContent: false
frameworks:
- title: TypeScript
href: https://www.typescriptlang.org/
src: /typescript.png
- title: Mocha
href: https://mochajs.org/
src: /mochajs.svg
- title: Vuepress
href: https://vuepress.vuejs.org
src: https://vuepress.vuejs.org/hero.png
- title: Lerna
href: https://lerna.js.org
src: https://lerna.js.org/images/lerna-hero.svg
- title: Yarn
href: https://yarnpkg.com/
src: data:image/svg+xml;base64,PHN2ZyBpZD0iTGF5ZXJfMSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMTE1NC44IDUxOCI+PHN0eWxlPi5zdDB7ZmlsbDojMmM4ZWJifS5zdDF7ZmlsbDojZmZmfTwvc3R5bGU+PHBhdGggY2xhc3M9InN0MCIgZD0iTTcxOC42IDI1Ny44Yy04IDI3LjYtMjAuOCA0Ny42LTM1LjIgNjMuNlYxODFjMC05LjYtOC40LTE3LjYtMjEuNi0xNy42LTUuNiAwLTEwLjQgMi44LTEwLjQgNi44IDAgMi44IDEuNiA1LjIgMS42IDEyLjh2NjQuNGMtNC44IDI4LTE2LjggNTQtMzIuOCA1NC0xMS42IDAtMTguNC0xMS42LTE4LjQtMzMuMiAwLTMzLjYgNC40LTUxLjIgMTEuNi04MC44IDEuNi02IDEzLjItMjItNi40LTIyLTIxLjIgMC0xOC40IDgtMjEuMiAxNC44IDAgMC0xMy40IDQ3LjYtMTMuNCA5MCAwIDM0LjggMTQuNiA1Ny42IDQxLjQgNTcuNiAxNy4yIDAgMjkuNi0xMS42IDM5LjItMjcuNlYzNTFjLTI2LjQgMjMuMi00OS42IDQzLjYtNDkuNiA4NCAwIDI1LjYgMTYgNDYgMzguNCA0NiAyMC40IDAgNDEuNi0xNC44IDQxLjYtNTYuOFYzNTVjMjEuNi0xOC44IDQ0LjgtNDIuNCA1OC40LTg4LjguNC0xLjYuNC0zLjYuNC00IDAtNy42LTcuNi0xNi40LTE0LTE2LjQtNCAwLTcuMiAzLjYtOS42IDEyem0tNzYuOCAxOThjLTYuNCAwLTEwLjQtOS42LTEwLjQtMjIgMC0yNCA4LjgtMzkuMiAyMS42LTUyLjR2NDIuOGMwIDcuNiAxLjYgMzEuNi0xMS4yIDMxLjZ6Ii8+PHBhdGggY2xhc3M9InN0MCIgZD0iTTgzMy40IDMwMWMtOS42IDAtMTMuNi05LjYtMTMuNi0xOC40di02NmMwLTkuNi04LjQtMTcuNi0yMS42LTE3LjYtNS42IDAtMTAuNCAyLjgtMTAuNCA2LjggMCAyLjggMS42IDUuMiAxLjYgMTIuOHY2MS42Qzc4NSAyOTEuNCA3NzcuOCAzMDEgNzY3IDMwMWMtMTQgMC0yMi44LTEyLTIyLjgtMzIuOCAwLTU3LjYgMzUuNi04My42IDY2LTgzLjYgNCAwIDggLjggMTEuNi44IDQgMCA1LjItMi40IDUuMi05LjIgMC0xMC40LTcuNi0xNi44LTE4LjQtMTYuOC00OC44IDAtOTUuMiA0MC44LTk1LjIgMTA3LjYgMCAzNCAxNi40IDYwLjQgNDcuNiA2MC40IDE1LjIgMCAyNi40LTcuMiAzNC40LTE2LjQgNiA5LjYgMTYuOCAxNi40IDMwLjggMTYuNCAzNC40IDAgNTAuNC0zNiA1Ny4yLTYyLjQuNC0xLjYuNC0yLjQuNC0yLjggMC03LjYtNy42LTE2LjQtMTQtMTYuNC00IDAtOCAzLjYtOS42IDEyLTMuNiAxNy42LTEwLjggNDMuMi0yNi44IDQzLjJ6Ii8+PHBhdGggY2xhc3M9InN0MCIgZD0iTTk0OSAzMjcuNGMzNC40IDAgNTAtMzYgNTcuMi02Mi40IDAtLjguNC0xLjYuNC0yLjggMC03LjYtNy42LTE2LjQtMTQtMTYuNC00IDAtOCAzLjYtOS42IDEyLTMuNiAxNy42LTEwLjQgNDMuMi0yOC44IDQzLjItMTAuOCAwLTE2LTEwLjQtMTYtMjEuNiAwLTQwIDE4LTg3LjIgMTgtOTIgMS42LTkuMi0xNC40LTIyLjQtMTkuMi0yMi40aC0yMC44Yy00IDAtOCAwLTIxLjItMS42LTQuNC0xNi40LTE1LjYtMjEuMi0yNS4yLTIxLjItMTAuNCAwLTIwIDcuMi0yMCAxOC40IDAgMTEuNiA3LjIgMjAgMTcuMiAyNS42LS40IDIwLjQtMiA1My42LTYuNCA2OS42LTMuNiAxMy42IDE3LjIgMjggMjIuNCAxMS4yIDcuMi0yMy4yIDkuNi01OCAxMC03My42aDM0LjhjLTEyLjggMzQuNC0yMCA2Mi44LTIwIDg4LjQgMCAzNS4yIDIyLjQgNDUuNiA0MS4yIDQ1LjZ6Ii8+PHBhdGggY2xhc3M9InN0MCIgZD0iTTk4NC42IDMwOS44YzAgMTQuOCAxMS4yIDE3LjYgMTkuMiAxNy42IDExLjYgMCAxMS4yLTkuNiAxMS4yLTE3LjJ2LTU4LjRjMi44LTMxLjYgMjcuNi02NiAzOS4yLTY2IDcuNiAwIDguNCAxMC40IDguNCAyMi44djgxLjJjMCAyMC40IDEyLjQgMzcuNiAzMy42IDM3LjYgMzQuNCAwIDUxLjQtMzYgNTguMi02Mi40LjQtMS42LjQtMi40LjQtMi44IDAtNy42LTcuNi0xNi40LTE0LTE2LjQtNCAwLTggMy42LTkuNiAxMi0zLjYgMTcuNi0xMS44IDQzLjItMjcuOCA0My4yLTEwLjQgMC0xMC40LTE0LjgtMTAuNC0xOC40di04Mi44YzAtMTguNC02LjQtNDAuNC0zMy4yLTQwLjQtMTkuNiAwLTM0IDE3LjItNDQuOCAzOS42di0xOGMwLTkuNi04LjQtMTcuNi0yMS42LTE3LjYtNS42IDAtMTAuNCAyLjgtMTAuNCA2LjggMCAyLjggMS42IDUuMiAxLjYgMTIuOHYxMjYuOHpNMjU5IDBjMTQzIDAgMjU5IDExNiAyNTkgMjU5UzQwMiA1MTggMjU5IDUxOCAwIDQwMiAwIDI1OSAxMTYgMCAyNTkgMHoiLz48cGF0aCBjbGFzcz0ic3QxIiBkPSJNNDM1LjIgMzM3LjVjLTEuOC0xNC4yLTEzLjgtMjQtMjkuMi0yMy44LTIzIC4zLTQyLjMgMTIuMi01NS4xIDIwLjEtNSAzLjEtOS4zIDUuNC0xMyA3LjEuOC0xMS42LjEtMjYuOC01LjktNDMuNS03LjMtMjAtMTcuMS0zMi4zLTI0LjEtMzkuNCA4LjEtMTEuOCAxOS4yLTI5IDI0LjQtNTUuNiA0LjUtMjIuNyAzLjEtNTgtNy4yLTc3LjgtMi4xLTQtNS42LTYuOS0xMC04LjEtMS44LS41LTUuMi0xLjUtMTEuOS40QzI5My4xIDk2IDI4OS42IDkzLjggMjg2LjkgOTJjLTUuNi0zLjYtMTIuMi00LjQtMTguNC0yLjEtOC4zIDMtMTUuNCAxMS0yMi4xIDI1LjItMSAyLjEtMS45IDQuMS0yLjcgNi4xLTEyLjcuOS0zMi43IDUuNS00OS42IDIzLjgtMi4xIDIuMy02LjIgNC0xMC41IDUuNmguMWMtOC44IDMuMS0xMi44IDEwLjMtMTcuNyAyMy4zLTYuOCAxOC4yLjIgMzYuMSA3LjEgNDcuNy05LjQgOC40LTIxLjkgMjEuOC0yOC41IDM3LjUtOC4yIDE5LjQtOS4xIDM4LjQtOC44IDQ4LjctNyA3LjQtMTcuOCAyMS4zLTE5IDM2LjktMS42IDIxLjggNi4zIDM2LjYgOS44IDQyIDEgMS42IDIuMSAyLjkgMy4zIDQuMi0uNCAyLjctLjUgNS42LjEgOC42IDEuMyA3IDUuNyAxMi43IDEyLjQgMTYuMyAxMy4yIDcgMzEuNiAxMCA0NS44IDIuOSA1LjEgNS40IDE0LjQgMTAuNiAzMS4zIDEwLjZoMWM0LjMgMCA1OC45LTIuOSA3NC44LTYuOCA3LjEtMS43IDEyLTQuNyAxNS4yLTcuNCAxMC4yLTMuMiAzOC40LTEyLjggNjUtMzAgMTguOC0xMi4yIDI1LjMtMTQuOCAzOS4zLTE4LjIgMTMuNi0zLjMgMjIuMS0xNS43IDIwLjQtMjkuNHptLTIzLjggMTQuN2MtMTYgMy44LTI0LjEgNy4zLTQzLjkgMjAuMi0zMC45IDIwLTY0LjcgMjkuMy02NC43IDI5LjNzLTIuOCA0LjItMTAuOSA2LjFjLTE0IDMuNC02Ni43IDYuMy03MS41IDYuNC0xMi45LjEtMjAuOC0zLjMtMjMtOC42LTYuNy0xNiA5LjYtMjMgOS42LTIzcy0zLjYtMi4yLTUuNy00LjJjLTEuOS0xLjktMy45LTUuNy00LjUtNC4zLTIuNSA2LjEtMy44IDIxLTEwLjUgMjcuNy05LjIgOS4zLTI2LjYgNi4yLTM2LjkuOC0xMS4zLTYgLjgtMjAuMS44LTIwLjFzLTYuMSAzLjYtMTEtMy44Yy00LjQtNi44LTguNS0xOC40LTcuNC0zMi43IDEuMi0xNi4zIDE5LjQtMzIuMSAxOS40LTMyLjFzLTMuMi0yNC4xIDcuMy00OC44YzkuNS0yMi41IDM1LjEtNDAuNiAzNS4xLTQwLjZzLTIxLjUtMjMuOC0xMy41LTQ1LjJjNS4yLTE0IDcuMy0xMy45IDktMTQuNSA2LTIuMyAxMS44LTQuOCAxNi4xLTkuNSAyMS41LTIzLjIgNDguOS0xOC44IDQ4LjktMTguOHMxMy0zOS41IDI1LTMxLjhjMy43IDIuNCAxNyAzMiAxNyAzMnMxNC4yLTguMyAxNS44LTUuMmM4LjYgMTYuNyA5LjYgNDguNiA1LjggNjgtNi40IDMyLTIyLjQgNDkuMi0yOC44IDYwLTEuNSAyLjUgMTcuMiAxMC40IDI5IDQzLjEgMTAuOSAyOS45IDEuMiA1NSAyLjkgNTcuOC4zLjUuNC43LjQuN3MxMi41IDEgMzcuNi0xNC41YzEzLjQtOC4zIDI5LjMtMTcuNiA0Ny40LTE3LjggMTcuNS0uMyAxOC40IDIwLjIgNS4yIDIzLjR6Ii8+PC9zdmc+
- title: Seq
href: /appenders/seq.md
src: https://blog.datalust.co/content/images/2018/09/Seq-380px-1.png
- title: LogEntries
href: /appenders/logentries.md
src: /logentries.svg
- title: Insight
href: /appenders/insight.md
src: /rapid7.svg
- title: RabbitMQ
href: /appenders/rabbitmq.md
src: /rabbitmq.svg
- title: Loggly
href: /appenders/loggly.md
src: /loggly.svg
- title: LogStash
href: /appenders/logstash-http.md
src: /elastic-logstash.svg
- title: Slack
href: /appenders/slack.md
src: /slack.svg
---
::: slot hero-brand
<div class="mb-5 sm:mb-0">
<span class="block sm:inline mb-3 sm:mb-0 sm:text-bold text-7xl sm:text-5xl font-medium"><span class="text-blue">Ts</span>.ED</span> Logger
</div>
<small>A <a class="text-darker-gray">Node.js</a> and <a class="text-darker-gray">TypeScript</a> multi channel logger</small>
:::
::: slot hero-slogan
Manage your logs of your **application.** <WordsSlider>#Colored, #Console, #Configurable, #Extensible</WordsSlider>
:::
::: slot hero-content
<CLI />
:::
::: slot testimonial-title
What is it ?
:::
::: slot testimonial-content
Ts.Logger is a Node.js and TypeScript logger with multi channel support and configurable.
:::
<HomeBody /> | 70.920635 | 5,041 | 0.86974 | yue_Hant | 0.672611 |
21f46c8374f7c38b55ea35876e65a6e3502b76e3 | 8,404 | md | Markdown | docs/framework/wcf/extending/how-to-create-a-custom-client-identity-verifier.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/extending/how-to-create-a-custom-client-identity-verifier.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/wcf/extending/how-to-create-a-custom-client-identity-verifier.md | mtorreao/docs.pt-br | e080cd3335f777fcb1349fb28bf527e379c81e17 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Como: criar um verificador de identidade de cliente personalizado'
ms.date: 03/30/2017
dev_langs:
- csharp
- vb
ms.assetid: f2d34e43-fa8b-46d2-91cf-d2960e13e16b
ms.openlocfilehash: 84982aca06bacb5718855602872fe4dab2376a9d
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 11/26/2020
ms.locfileid: "96256060"
---
# <a name="how-to-create-a-custom-client-identity-verifier"></a>Como: criar um verificador de identidade de cliente personalizado
O recurso de *identidade* do Windows Communication Foundation (WCF) permite que um cliente especifique com antecedência a identidade esperada do serviço. Sempre que um servidor se autentica no cliente, a identidade é verificada em relação à identidade esperada. (Para obter uma explicação de identidade e como ela funciona, consulte [identidade e autenticação de serviço](../feature-details/service-identity-and-authentication.md).)
Se necessário, a verificação pode ser personalizada usando um verificador de identidade personalizado. Por exemplo, você pode executar verificações de verificação de identidade de serviço adicionais. Neste exemplo, o verificador de identidade personalizado verifica declarações adicionais no certificado X. 509 retornado do servidor. Para um aplicativo de exemplo, consulte [exemplo de identidade de serviço](../samples/service-identity-sample.md).
### <a name="to-extend-the-endpointidentity-class"></a>Para estender a classe EndpointIdentity
1. Defina uma nova classe derivada da <xref:System.ServiceModel.EndpointIdentity> classe. Este exemplo nomeia a extensão `OrgEndpointIdentity` .
2. Adicione membros privados junto com as propriedades que serão usadas pela classe estendida <xref:System.ServiceModel.Security.IdentityVerifier> para executar a verificação de identidade em declarações no token de segurança retornado do serviço. Este exemplo define uma propriedade: a `OrganizationClaim` propriedade.
[!code-csharp[c_HowToSetCustomClientIdentity#6](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_howtosetcustomclientidentity/cs/source.cs#6)]
[!code-vb[c_HowToSetCustomClientIdentity#6](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_howtosetcustomclientidentity/vb/source.vb#6)]
### <a name="to-extend-the-identityverifier-class"></a>Para estender a classe IdentityVerifier
1. Defina uma nova classe derivada de <xref:System.ServiceModel.Security.IdentityVerifier> . Este exemplo nomeia a extensão `CustomIdentityVerifier` .
[!code-csharp[c_HowToSetCustomClientIdentity#7](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_howtosetcustomclientidentity/cs/source.cs#7)]
[!code-vb[c_HowToSetCustomClientIdentity#7](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_howtosetcustomclientidentity/vb/source.vb#7)]
2. Substitua o método <xref:System.ServiceModel.Security.IdentityVerifier.CheckAccess%2A>. O método determina se a verificação de identidade foi bem-sucedida ou falhou.
3. O `CheckAccess` método tem dois parâmetros. A primeira é uma instância da <xref:System.ServiceModel.EndpointIdentity> classe. A segunda é uma instância da <xref:System.IdentityModel.Policy.AuthorizationContext> classe.
Na implementação do método, examine a coleção de declarações retornada pela <xref:System.IdentityModel.Policy.AuthorizationContext.ClaimSets%2A> propriedade da <xref:System.IdentityModel.Policy.AuthorizationContext> classe e execute as verificações de autenticação conforme necessário. Este exemplo começa encontrando qualquer declaração que seja do tipo "nome distinto" e, em seguida, compara o nome com a extensão do <xref:System.ServiceModel.EndpointIdentity> ( `OrgEndpointIdentity` ).
[!code-csharp[c_HowToSetCustomClientIdentity#1](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_howtosetcustomclientidentity/cs/source.cs#1)]
[!code-vb[c_HowToSetCustomClientIdentity#1](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_howtosetcustomclientidentity/vb/source.vb#1)]
### <a name="to-implement-the-trygetidentity-method"></a>Para implementar o método TryGetIdentity
1. Implemente o <xref:System.ServiceModel.Security.IdentityVerifier.TryGetIdentity%2A> método, que determina se uma instância da <xref:System.ServiceModel.EndpointIdentity> classe pode ser retornada pelo cliente. A infraestrutura do WCF chama a implementação do `TryGetIdentity` método primeiro para recuperar a identidade do serviço da mensagem. Em seguida, a infraestrutura chama a `CheckAccess` implementação com o retornado `EndpointIdentity` e <xref:System.IdentityModel.Policy.AuthorizationContext> .
2. No `TryGetIdentity` método, coloque o seguinte código:
[!code-csharp[c_HowToSetCustomClientIdentity#2](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_howtosetcustomclientidentity/cs/source.cs#2)]
[!code-vb[c_HowToSetCustomClientIdentity#2](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_howtosetcustomclientidentity/vb/source.vb#2)]
### <a name="to-implement-a-custom-binding-and-set-the-custom-identityverifier"></a>Para implementar uma associação personalizada e definir o IdentityVerifier personalizado
1. Crie um método que retorne um <xref:System.ServiceModel.Channels.Binding> objeto. Este exemplo começa a criar uma instância da <xref:System.ServiceModel.WSHttpBinding> classe e define seu modo de segurança como <xref:System.ServiceModel.SecurityMode.Message> , e seu <xref:System.ServiceModel.MessageSecurityOverHttp.ClientCredentialType%2A> para <xref:System.ServiceModel.MessageCredentialType.None> .
2. Crie um <xref:System.ServiceModel.Channels.BindingElementCollection> usando o <xref:System.ServiceModel.WSHttpBinding.CreateBindingElements%2A> método.
3. Retorne o <xref:System.ServiceModel.Channels.SecurityBindingElement> da coleção e converta-o em uma <xref:System.ServiceModel.Channels.SymmetricSecurityBindingElement> variável.
4. Defina a <xref:System.ServiceModel.Channels.LocalClientSecuritySettings.IdentityVerifier%2A> propriedade da <xref:System.ServiceModel.Channels.LocalClientSecuritySettings> classe como uma nova instância da `CustomIdentityVerifier` classe criada anteriormente.
[!code-csharp[c_HowToSetCustomClientIdentity#3](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_howtosetcustomclientidentity/cs/source.cs#3)]
[!code-vb[c_HowToSetCustomClientIdentity#3](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_howtosetcustomclientidentity/vb/source.vb#3)]
5. A associação personalizada retornada é usada para criar uma instância do cliente e da classe. O cliente pode, então, executar uma verificação de verificação de identidade personalizada do serviço, conforme mostrado no código a seguir.
[!code-csharp[c_HowToSetCustomClientIdentity#4](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_howtosetcustomclientidentity/cs/source.cs#4)]
[!code-vb[c_HowToSetCustomClientIdentity#4](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_howtosetcustomclientidentity/vb/source.vb#4)]
## <a name="example"></a>Exemplo
O exemplo a seguir mostra uma implementação completa da <xref:System.ServiceModel.Security.IdentityVerifier> classe.
[!code-csharp[c_HowToSetCustomClientIdentity#5](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_howtosetcustomclientidentity/cs/source.cs#5)]
[!code-vb[c_HowToSetCustomClientIdentity#5](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_howtosetcustomclientidentity/vb/source.vb#5)]
## <a name="example"></a>Exemplo
O exemplo a seguir mostra uma implementação completa da <xref:System.ServiceModel.EndpointIdentity> classe.
[!code-csharp[c_HowToSetCustomClientIdentity#6](../../../../samples/snippets/csharp/VS_Snippets_CFX/c_howtosetcustomclientidentity/cs/source.cs#6)]
[!code-vb[c_HowToSetCustomClientIdentity#6](../../../../samples/snippets/visualbasic/VS_Snippets_CFX/c_howtosetcustomclientidentity/vb/source.vb#6)]
## <a name="see-also"></a>Veja também
- <xref:System.ServiceModel.ServiceAuthorizationManager>
- <xref:System.ServiceModel.EndpointIdentity>
- <xref:System.ServiceModel.Security.IdentityVerifier>
- [Exemplo de identidade de serviço](../samples/service-identity-sample.md)
- [Política de autorização](../samples/authorization-policy.md)
| 89.404255 | 508 | 0.793789 | por_Latn | 0.874217 |
21f4d9f93fc3717149ce6ee637b2116541606f2b | 2,170 | md | Markdown | README.md | CrowStadt/tailwindcss_reactjs_helloworld | 0218d897143308aa94d95a247f1f6389fa9c8b30 | [
"MIT"
] | null | null | null | README.md | CrowStadt/tailwindcss_reactjs_helloworld | 0218d897143308aa94d95a247f1f6389fa9c8b30 | [
"MIT"
] | null | null | null | README.md | CrowStadt/tailwindcss_reactjs_helloworld | 0218d897143308aa94d95a247f1f6389fa9c8b30 | [
"MIT"
] | null | null | null | # Tailwindcss and ReactJS Hello World
Hello World test project for trying out [Tailwindcss](https://tailwindcss.com)
This is just a sample application from Create React App (CRA) and attempt at an optimised Tailwind css output.
## Steps
1. Scaffold a new app with create-react-app
```
npx create-react-app .
```
2. npm install tailwindcss and dependencies
CRA does not currently support PostCSS 8, so install the [compat](https://tailwindcss.com/docs/installation#post-css-7-compatibility-build) build for Tailwind.
```
npm install --save-dev tailwindcss@npm:@tailwindcss/postcss7-compat postcss@^7 autoprefixer@^9
```
3. Generate configuration file
```
npx tailwindcss-cli@latest init
```
4. Install postcss-cli and npm-run-all
```
npm install --save-dev postcss-cli npm-run-all
```
5. Add postcss.config.js settings
6. Add `src/assets/styles/tailwind.css` with:
```
@tailwind base;
@tailwind components;
@tailwind utilities;
```
7. Add npm scripts to build tailwind:
```
"watch:css": "postcss src/assets/styles/tailwind.css -o src/assets/styles/index.css -w",
"start:js": "react-scripts start",
"start": "npm-run-all -p watch:css start:js",
"prebuild": "NODE_ENV=production postcss src/assets/styles/tailwind.css -o src/assets/styles/index.css",
```
8. Import styles into index
```
import './assets/styles/index.css';
```
9. Test...and it should all work....
The example components also show the tailwind [JIT](https://tailwindcss.com/docs/just-in-time-mode#enabling-jit-mode) syntax.
## After production build...
```
File sizes after gzip:
41.35 KB build\static\js\2.4d80b542.chunk.js
1.96 KB build\static\css\main.8528a75d.chunk.css
1.41 KB build\static\js\3.e8261a18.chunk.js
1.18 KB build\static\js\runtime-main.c67641de.js
818 B build\static\js\main.b2149777.chunk.js
```
## Optimisations
By default Tailwind runs purgecss to remove unnecessary styles. Any further optimisation do not add significant value.
For other CSS assets, `cssnano` has been added. This does not reduce tailwind sizes.
For further reductions to suite your use case - refer the tailwind [docs](https://tailwindcss.com/docs/optimizing-for-production)
| 29.324324 | 159 | 0.7447 | eng_Latn | 0.745623 |
21f5a11ea0116d804f96b660fc87fa511305b180 | 6,147 | md | Markdown | outlook/Concepts/Categories-and-Conversations/managing-outlook-items-as-conversations.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 283 | 2018-07-06T07:44:11.000Z | 2022-03-31T14:09:36.000Z | outlook/Concepts/Categories-and-Conversations/managing-outlook-items-as-conversations.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 1,457 | 2018-05-11T17:48:58.000Z | 2022-03-25T22:03:38.000Z | outlook/Concepts/Categories-and-Conversations/managing-outlook-items-as-conversations.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 469 | 2018-06-14T12:50:12.000Z | 2022-03-27T08:17:02.000Z | ---
title: Managing Outlook Items as Conversations
ms.prod: outlook
ms.assetid: d91959d7-07b2-7952-8e6d-a39422d355e0
ms.date: 06/08/2019
ms.localizationpriority: medium
---
# Managing Outlook Items as Conversations
In Microsoft Outlook, a conversation groups messages that share the same subject and belong to the same thread. In the Outlook user interface, you can expand a conversation in Conversation view to provide a visual relationship between messages, including any responses and related messages from other folders. A conversation can also include branches, such as when a message gets two or more responses and discussions grow independently from each. Since Outlook, Conversation view relates all items in the same conversation across folders and stores.
From the programmatic perspective, items in the same conversation can be heterogeneous, belonging to one or more item types. For example, a conversation can contain **[MailItem](../../../api/Outlook.MailItem.md)** and **[TaskItem](../../../api/Outlook.TaskItem.md)** objects. Before Outlook, support for items that belong to the same conversation was limited to the **ConversationIndex** and **ConversationTopic** properties (for all item types except the **[NoteItem](../../../api/Outlook.NoteItem.md)** object). Clearing the **ConversationIndex** was limited to the **[MailItem](../../../api/Outlook.MailItem.md)**, **[PostItem](../../../api/Outlook.PostItem.md)**, and **[SharingItem](../../../api/Outlook.SharingItem.md)** objects. Since Outlook, Outlook supports the **[Conversation](../../../api/Outlook.Conversation.md)** object, which relates all items in the same conversation across folders and across stores by using the **ConversationID** property on the **Conversation** object as well as on each item of the conversation. Outlook provides a **GetConversation** method for most item types to enable you to obtain a **Conversation** object based on the item.
Conversation view is supported by stores that are POP, IMAP, PST, or Microsoft Exchange Server (at least Microsoft Exchange Server 2010, or Microsoft Exchange Server 2007 if Outlook is running in cached mode). You can call the **[IsConversationEnabled](../../../api/Outlook.Store.IsConversationEnabled.md)** property of the **[Store](../../../api/Outlook.Store.md)** object to verify whether the store supports Conversation view. You can call the **GetConversation** method to get a **Conversation** object based on an item in the conversation only if the store in which the item resides supports Conversation view.
To navigate a conversation hierarchy, you can call the **[GetChildren](../../../api/Outlook.Conversation.GetChildren.md)**, **[GetParent](../../../api/Outlook.Conversation.GetParent.md)**, and **[GetRootItems](../../../api/Outlook.Conversation.GetRootItems.md)** methods of the **Conversation** object. The **[SimpleItems](../../../api/Outlook.SimpleItems.md)** collection exists to provide easy access to items of the conversation. The order of items in the **SimpleItems** collection is the same as the order of items in the conversation. The collection is ordered by the MAPI **PidTagCreationTime** property of each item in ascending order.
To enumerate items in a conversation, you can use the **[Table](../../../api/Outlook.Table.md)** object. The rows of the table represent items of the conversation, and the columns of the table, which you can customize, represent properties for each item. To obtain conversation items by using a **Table** object, use the following procedure:
1. Obtain the object of any item in the conversation.
2. To verify that the store supports Conversation view, use the **IsConversationEnabled** property of the **Store** object that represents the store in which the item resides. You can obtain a **Conversation** object based on an item only if the item resides in a store that supports Conversation view.
3. If the store supports Conversation view, call the **GetConversation** method of that item to get the **Conversation** object.
4. Call the **[GetTable](../../../api/Outlook.Conversation.GetTable.md)** method of that **Conversation** object to get a **Table**.
5. You can now use methods that the **Table** object supports to enumerate rows that represent conversation items, and use the default columns to access default item properties (or customize columns to access other properties of the items).
You can use the **[SetAlwaysDelete](../../../api/Outlook.Conversation.SetAlwaysDelete.md)** and **[SetAlwaysMoveToFolder](../../../api/Outlook.Conversation.SetAlwaysMoveToFolder.md)** methods to always move existing conversation items, and future items that arrive in a specific conversation, to the Deleted Items folder or another folder. The moving of items is supported in the specific store where the item resides, unless the store is a non-delivery store such as a PST store. You can use the **[GetAlwaysDelete](../../../api/Outlook.Conversation.GetAlwaysDelete.md)** and **[GetAlwaysMoveToFolder](../../../api/Outlook.Conversation.GetAlwaysMoveToFolder.md)** methods to get these folders, and the **[StopAlwaysDelete](../../../api/Outlook.Conversation.StopAlwaysDelete.md)** and **[StopAlwaysMoveToFolder](../../../api/Outlook.Conversation.StopAlwaysMoveToFolder.md)** methods to stop moving existing and future conversation items to such folders.
In addition, you can apply actions to all existing and future items of a conversation.
- Call the **[SetAlwaysAssignCategories](../../../api/Outlook.Conversation.SetAlwaysAssignCategories.md)** and **[GetAlwaysAssignCategories](../../../api/Outlook.Conversation.GetAlwaysAssignCategories.md)** methods to set and get categories, respectively, for existing and future conversation items.
- Call the **[MarkAsRead](../../../api/Outlook.Conversation.MarkAsRead.md)** and **[MarkAsUnread](../../../api/Outlook.Conversation.MarkAsUnread.md)** methods to mark items as read or unread, respectively.
## See also
[How to: Obtain and Enumerate Selected Conversations](obtain-and-enumerate-selected-conversations.md)
[!include[Support and feedback](~/includes/feedback-boilerplate.md)] | 146.357143 | 1,171 | 0.764113 | eng_Latn | 0.969666 |
21f5c3ffffad33447b067606ecba75c5cc749e81 | 1,050 | md | Markdown | _posts/2013-02-28-stop-procrastination.md | dedenf/dedenf.github.io | 95a1418f72be434f33ab51c0042a53ca2991535e | [
"MIT"
] | null | null | null | _posts/2013-02-28-stop-procrastination.md | dedenf/dedenf.github.io | 95a1418f72be434f33ab51c0042a53ca2991535e | [
"MIT"
] | 1 | 2019-11-06T16:11:55.000Z | 2019-11-12T09:21:24.000Z | _posts/2013-02-28-stop-procrastination.md | dedenf/dedenf.github.io | 95a1418f72be434f33ab51c0042a53ca2991535e | [
"MIT"
] | null | null | null | ---
layout: post
title: How to stop procrastination
tags: [life, developer, rants]
category: blog
---
did you always stuck in the morning reading news, hundred of rss feed, Flipboarding(?), and last but not least, Twitter or Facebook. they can suck all of your productive time.
but there's a way to satisfy your need to tweets or update you status. you can use the tool that [hipster(?)] developer already knows, GIT.
see, i have a record **7.777** tweets right now, and i hate to break this number. and guess what, yes, i've been ranting/tweeting/etc via my commit message. This way i can get both satisfaction, by writing not-all-nonsense so-called tweet and i can get productive by making change on my code, if i want to 'tweet', i have to change/write codes!
P.S: i can also wrote a 'blog' [like this one](https://github.com/dedenf/vagrant/commit/7467f6c1708795016818718e2dd66fda82645a93), besides for my technical stuff i've been blogging on [coderwall.com](https://coderwall.com/dedenf)

| 55.263158 | 344 | 0.757143 | eng_Latn | 0.996959 |
21f6b3da12c0faa62af80fdfb341657df186b614 | 856 | md | Markdown | controls/radrichtextbox/import-export/xaml/xaml.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | controls/radrichtextbox/import-export/xaml/xaml.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | controls/radrichtextbox/import-export/xaml/xaml.md | kylemurdoch/xaml-docs | 724c2772d5b1bf5c3fe254fdc0653c24d51824fc | [
"MIT",
"Unlicense"
] | null | null | null | ---
title: XAML
page_title: XAML
description: Check our "XAML" documentation article for the RadRichTextBox {{ site.framework_name }} control.
slug: radrichtextbox-import-export-xaml
tags: XAML
published: True
position: 0
---
# XAML
The __[eXtensible Application Markup Language(XAML)](https://en.wikipedia.org/wiki/Extensible_Application_Markup_Language)__ is declaritive XML-based markup language develop by Microsoft for creating a rich graphical user interface(GUI). XAML language is the native __RadRichTextBox__ document format.

__XamlFormatProvider__ is compliant with the XAML markup language.
## See Also
* [Getting Started]({%slug radrichtextbox-getting-started%})
* [Using XamlFormatProvider]({%slug radrichtextbox-import-export-txt-txtformatprovider%}) | 38.909091 | 301 | 0.807243 | eng_Latn | 0.618146 |
21f71bee01ece661618aac8d3994b957e8a58154 | 390 | md | Markdown | src/posts/2018-02-22-diary.md | makenowjust/diary | 83a09435b6dec23eadf876c016ec65be55117215 | [
"MIT"
] | 8 | 2017-10-25T12:40:39.000Z | 2021-11-24T00:11:53.000Z | src/posts/2018-02-22-diary.md | makenowjust/diary | 83a09435b6dec23eadf876c016ec65be55117215 | [
"MIT"
] | 24 | 2017-11-06T09:59:14.000Z | 2021-08-25T04:52:23.000Z | src/posts/2018-02-22-diary.md | makenowjust/diary | 83a09435b6dec23eadf876c016ec65be55117215 | [
"MIT"
] | 3 | 2017-11-22T10:14:00.000Z | 2018-04-01T15:53:28.000Z | ---
title: DARKER THAN BLACK -流星の双子-を観た
---
# やったこと
## 日本橋に行った
ポスターの作り方とか3ミリくらいしか分からないのだけど、どうにか枠だけ作ることができた。
枠が出来ればあとは中身を満たせばいいだけなので簡単、といけばいいんだけど。
しかし非常に忙しいスケジュールになってることが分かった。これは死ぬかもしれない。生きるかもしれない。イキってはならない。
## DARKER THAN BLACK -流星の双子-を観た
面白かった。最終話付近を除いて普通に面白かった。ラストは分からん。しかしSPECの劇場版よりは分かる。そんな感じ。
蘇芳はかわいい。ビジュアルは魔法使いの嫁で見たような気持ちになるけど、キャラクターとしてすごく好き。
とりあえず黒の契約者も観よう。いきなり二期から観るのはよくない。
| 17.727273 | 60 | 0.833333 | jpn_Jpan | 0.955241 |
21f78162b4be21684cd9702a68e7cca6a6f927f1 | 2,479 | md | Markdown | src/uk/2016-04/05/07.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/uk/2016-04/05/07.md | PrJared/sabbath-school-lessons | 94a27f5bcba987a11a698e5e0d4279b81a68bc9a | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/uk/2016-04/05/07.md | OsArts/Bible-study | cfcefde42e21795e217d192a8b7a703ebb7a6c01 | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: ДЛЯ ПОДАЛЬШОГО ДОСЛІДЖЕННЯ
date: 28/10/2016
---
### ДЛЯ ПОДАЛЬШОГО ДОСЛІДЖЕННЯ
«У століття, як ніколи яскраво освітлене наукою і розумом, Добра вістка християнства все більше втрачає свою метафізичну переконливість, переставши бути тією міцною безпечною підставою, на якій можна будувати своє життя, та потроху втрачаючи колишню психологічну мотивацію. З болісною виразністю відкрилася повна неймовірність усього нерозривного сплетіння подій: хіба можливо, аби безмежний вічний Бог несподівано втілився в образ якоїсь однієї людини в конкретний історичний час та в конкретному географічному місці тільки для того, щоб зазнати ганебної страти? Щоб одне-єдине коротке життя, прожите два тисячоліття тому людиною з безвісного первісного народу на планеті, котра, як стало тепер відомо, являє собою порівняно невелике кулясте скупчення речовини і котра обертається навколо однієї зірки з мільярдів їй подібних у неймовірно безмежному й безликому Всесвіті, - хіба можливо, щоб настільки непримітна подія мала особливе космічне й вічне значення? Віра в це втратила будь-яку переконливу силу в очах серйозних людей. Неймовірно, щоб цілий Усесвіт був якось особливо стурбований саме цією крихітною частиною своїх незмірно величезних просторів, - якщо він взагалі може бути чимось «стурбований». Тепер, коли сучасне мислення нагально потребувало знайти соціальне, емпіричне, наукове доведення всіх тверджень віри, чари християнства помітно слабшали» (Річард Тарнас. Історія західного мислення, с. 305).
Що випустив з уваги автор? Що ця цитата говорить нам про обмежені можливості «науки й розуму» в питанні розуміння Бога та Його любові до нас? Що це говорить про необхідність об'явлення істини, яку «наука й розум» не можуть осягнути самі по собі?
### Питання для обговорення:
- ```Як ви, будучи християнином, відповіли б на запитання: «що таке людина?» Чим ваша відповідь відрізнятиметься від відповіді людей, котрі не вірять у Бога Біблії?```
- ```«Мертвим смерть не страшна, - писав Кормак Маккарті. - Смерть - предмет тривоги для живих». Як наше розуміння стану мертвих може допомогти пережити втрату близьких людей? Яким чином нас може втішити думка, що вони спочивають у могилах і життєві турботи й труднощі не турбують їх?```
- ```Чому, на ваш погляд, навіть у найжахливішій ситуації більшість людей чіпляється за життя, яким би поганим воно не здавалося?```
- ```Обговоріть, що Голгофський хрест відкриває про цінність життя однієї людини й усього людства.```
| 137.722222 | 1,414 | 0.806374 | ukr_Cyrl | 1.000003 |
21f83c3fafc3b0006ef105ab6468a49d74ef9c7e | 10,371 | md | Markdown | _posts/2017-04-06-wps-49.md | pinstinct/pinstinct.github.com | 931d534b4150e055c23abdf1bbfc39c66faeea60 | [
"MIT"
] | 1 | 2022-02-11T04:35:02.000Z | 2022-02-11T04:35:02.000Z | _posts/2017-04-06-wps-49.md | pinstinct/pinstinct.github.com | 931d534b4150e055c23abdf1bbfc39c66faeea60 | [
"MIT"
] | null | null | null | _posts/2017-04-06-wps-49.md | pinstinct/pinstinct.github.com | 931d534b4150e055c23abdf1bbfc39c66faeea60 | [
"MIT"
] | 1 | 2017-01-23T11:56:42.000Z | 2017-01-23T11:56:42.000Z | ---
layout: post
title: Extra1 - docker-nginx-node-proxy
categories: web
tags: [fastcampus, git, code]
---
# Front 코드 배포
프론트에서 `vue.js` 작업한 내용을 웹 서버에 올리면, 검색엔진이 긁어갈 때 껍데기만 가져가게 된다. 이런 문제를 극복하기 위해 `node`를 사용해 ServerSideRendering(SSR)을 한다.
우리는 SSR을 제외하고 하나의 서버에서 `backend` 프로젝트와 `forntend` 프로젝트를 배포하는 법을 실습한다.
## Django 프로젝트 설정
```shell
$ mkdir deploy
$ cd deploy
$ mkdir docker-nginx-node-proxy
$ cd docker-nginx-node-proxy
$ pyenv virtualenv 3.5.2 docker-nginx-node-proxy
$ pyenv local docker-nginx-node-proxy
$ git init
$ cp ~/fastcampus/projects/django/instagram-api/.gitignore .
$ vi .gitignore
# https://www.gitignore.io/ 에서 node 내용 추가
$ pip install django
$ pip install --upgrade pip
$ pip freeze > requirements.txt
$ django-admin startproject mysite
$ mv mysite django_app
$ py .
# 인터프리터 설정
# config로 이름 변경
```
```shell
# settings.py
ALLOWED_HOSTS = [
"*",
]
```
## Dockerfile 및 .conf 설정
기본 프로젝트에서 복사
## 스크립트
도커에서 이미지를 생성하기위해 시간이 오래걸리므로 스크립트를 작성해 자동화한다.
### 1. Dockerfile 생성을 위한 텍스트 설정파일 생성
#### .conf/docker/00_template.docker
```
FROM {from_image}
MAINTAINER {maintainer}
{base}
{extra}
```
#### .conf/docker/01_template.docker
```
COPY . /srv/app
RUN apt-get -y update && \
apt-get -y install python3 && \
apt-get -y install python3-pip && \
apt-get -y install nginx && \
apt-get -y install supervisor
WORKDIR /srv/app
RUN pip3 install -r requirements.txt && \
pip3 install uwsgi
```
#### .conf/docker/02_extra.docker
```
COPY .conf/uwsgi-app.ini /etc/uwsgi/sites/app.ini
COPY .conf/nginx-app.conf /etc/nginx/sites-available/app
COPY .conf/nginx.conf /etc/nginx/nginx.conf
COPY .conf/supervisor-app.conf /etc/supervisor/conf.d/
RUN ln -s /etc/nginx/sites-available/app /etc/nginx/sites-enabled/app
WORKDIR /srv/app/django_app
EXPOSE 4567
CMD ["supervisord", "-n"]
```
### 2. dockerfile을 생성하는 스크립트
#### build.py
```python
import argparse
import os
# Const
import sys
MODE_BASE = 'base'
MODE_DEBUG = 'debug'
MODE_PRODUCTION = 'production'
IMAGE_BASE = 'front-base'
IMAGE_DEBUG = 'front-debug'
IMAGE_PRODUCTION = 'front'
MAINTAINER = 'dev@abc.com'
DOCKERFILE_BASE = 'Dockerfile.base'
DOCKERFILE_DEBUG = 'Dockerfile.debug'
DOCKERFILE_DOCKERHUB = 'Dockerfile'
# ArgumentParser
parser = argparse.ArgumentParser(description='Build command')
parser.add_argument('-m', '--mode', type=str, default=MODE_DEBUG)
args = parser.parse_args()
# Paths
ROOT_DIR = os.path.dirname(__file__)
CONF_DIR = os.path.join(ROOT_DIR, '.conf')
CONF_DOCKER_DIR = os.path.join(CONF_DIR, 'docker')
# Docker conf file
dockerfile_template = open(os.path.join(CONF_DOCKER_DIR, '00_template.docker')).read()
dockerfile_base = open(os.path.join(CONF_DOCKER_DIR, '01_base.docker')).read()
dockerfile_extra = open(os.path.join(CONF_DOCKER_DIR, '02_extra.docker')).read()
if args.mode == MODE_BASE:
dockerfile = dockerfile_template.format(
from_image='ubuntu:16.04',
maintainer=MAINTAINER,
base=dockerfile_base,
extra=''
)
filename = DOCKERFILE_BASE
elif args.mode == MODE_DEBUG:
dockerfile = dockerfile_template.format(
from_image=IMAGE_BASE,
maintainer=MAINTAINER,
base='',
extra=dockerfile_extra
)
filename = DOCKERFILE_DEBUG
elif args.mode == MODE_PRODUCTION:
dockerfile = dockerfile_template.format(
from_image='ubuntu:16.04',
maintainer=MAINTAINER,
base=dockerfile_base,
extra=dockerfile_extra
)
filename = DOCKERFILE_PRODUCTION
else:
sys.exit('Mode invalid')
with open(os.path.join(ROOT_DIR, filename), 'wt') as f:
f.write(dockerfile)
```
#### 실행
아래와 같이 실행하면 `Dockerfile`, `Dockerfile.base`, `Dockerfile.debug` 파일이 생성된다.
```shell
$ python build.py -m=base
$ python build.py -m=debug
$ python build.py -m=production
# 정상적으로 파일이 생성된 것을 확인 후,
# Dockerfile.* 을 gitignore에 추가한다.
$ vi .gitignore
```
### 3. docker build 추가
#### build.py
```python
import subprocess
# Docker conf file
# ...
if args.mode == MODE_BASE:
dockerfile = dockerfile_template.format(
# ...
)
filename = DOCKERFILE_BASE
imagename = IMAGE_BASE
elif args.mode == MODE_DEBUG:
dockerfile = dockerfile_template.format(
# ...
)
filename = DOCKERFILE_DEBUG
imagename = IMAGE_DEBUG
elif args.mode == MODE_PRODUCTION:
dockerfile = dockerfile_template.format(
# ...
)
filename = DOCKERFILE_PRODUCTION
imagename = IMAGE_PRODUCTION
else:
sys.exit('Mode invalid')
with open(os.path.join(ROOT_DIR, filename), 'wt') as f:
f.write(dockerfile)
build_command = 'docker build . -t {imagename} -f {filename}'.format(
imagename=imagename,
filename=filename
)
print('Docker build command: {}'.format(build_command))
subprocess.run(build_command, shell=True)
```
#### 실행
```shell
$ python build.py -m=base
# docker 이미지 생성 확인 후
$ docker images
front-base latest a36e8597fc34 48 minutes ago 577 MB
$ python build.py -m=debug
$ docker images
front-debug latest 7562cffe5f54 47 minutes ago 577 MB
$ docker run --rm -it -p 8080:4567 front-debug
```
http://localhost:8080/ 접속해서 Django 페이지가 작동하는지 확인한다.
## Vue.js
### 설치
[Vue.js 설치 가이드](https://kr.vuejs.org/v2/guide/installation.html#NPM)
```shell
$ cd ..
$ l
total 0
drwxr-xr-x 4 limhm staff 136B 4 6 16:53 .
drwxr-xr-x@ 14 limhm staff 476B 4 6 15:21 ..
drwxr-xr-x 15 limhm staff 510B 4 6 18:06 docker-nginx-node-proxy
$ npm install --global vue-cli
$ vue init nuxt/starter mysite
$ mv mysite front-site
$ l
total 0
drwxr-xr-x 4 limhm staff 136B 4 6 16:53 .
drwxr-xr-x@ 14 limhm staff 476B 4 6 15:21 ..
drwxr-xr-x 15 limhm staff 510B 4 6 18:06 docker-nginx-node-proxy
drwxr-xr-x 19 limhm staff 646B 4 6 17:04 front-site
$ cd front-site
# 자동으로 생성된 파일이 많은데 .gitignore 파일도 있다.
# https://www.gitignore.io/ 에서 macOS 추가
$ vi .gitignore
$ git add -A
$ git commit -m "first commit"
# remote repository에 연결
$ git push -u origin master
# https://github.com/pinstinct/front-deploy 접속하면
# 빌드 방법이 나온다.
$ npm install
$ npm run dev
# http://localhost:3000/ 에서 작동 확인
# 서버 실행
$ npm run build
$ npm start
# frontend에서 npm start까지 다 된다는 가정하에서 작업한다.
```
# nginx reverse proxy

### front 코드를 프로젝트에 추가
```shell
$ cd docker-nginx-node-proxy
$ git clone https://github.com/pinstinct/front-deploy.git front
$ vi .gitignore
# front/ 추가
$ git st
# .gitignore가 잘됐나 확인
```
### ngnix 설정에 front server 추가
#### .conf/nginx-app.conf
```conf
server {
listen 4567;
server_name front.localhost;
charset utf-8;
client_max_body_size 128M;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://127.0.0.1:3000/;
proxy_redirect off;
}
gzip on;
gzip_comp_level 2;
gzip_proxied any;
gzip_min_length 1000;
gzip_disable "MSIE [1-6]\." gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
}
```
### Dockerfile에 nodejs 설정 추가
#### .conf/docker/01_base2.docker
```
RUN apt-get -y install curl
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get -y install nodejs
```
#### .conf/docker/02_extra.docker
```
COPY . /srv/app
WORKDIR /srv/app/front
RUN npm install
RUN npm run build
```
#### .conf/supervisor-app.conf
```
[program:node]
command = npm start --prefix /srv/app/front
```
#### build.py
```python
MODE_BASE2 = 'base2'
IMAGE_BASE2 = 'front-base2'
DOCKERFILE_BASE2 = 'Dockerfile.base2'
# Docker conf files
dockerfile_base2 = open(os.path.join(CONF_DOCKER_DIR, '01_base2.docker')).read()
# ...
elif args.mode == MODE_BASE2:
dockerfile = dockerfile_template.format(
from_image=IMAGE_BASE,
maintainer=MAINTAINER,
base=dockerfile_base2,
extra=''
)
filename = DOCKERFILE_BASE2
imagename = IMAGE_BASE2
elif args.mode == MODE_DEBUG:
dockerfile = dockerfile_template.format(
from_image=IMAGE_BASE2,
maintainer=MAINTAINER,
base='',
extra=dockerfile_extra
)
filename = DOCKERFILE_DEBUG
imagename = IMAGE_DEBUG
```
### 실행
```shell
# base는 빌드했고, 변경사항이 없으므로 다시 빌드하지 않는다.
$ python build.py -m=base2
$ python build.py -m=debug
$ docker run --rm -it -p 8080:4567 front-debug
```
http://localhost:8080 에서는 Django 페이지, http://front.localhost:8080/ 에서는 Vue 페이지가 정상적으로 작동하는지 확인한다.
## DockerHub
이대로 진행하면 설치시간이 너무 길기 때문에 `eb deploy`시, 타임아웃이 발생한다. (npm 이 설치과정이 길다.)
[DockerHub 홈페이지](https://hub.docker.com/)로 이동해 가입한다. 가입 후 진행한다.
```shell
$ docker login
# 원격 저장소 형식 username/저장소이름
$ docker tag front-debug hm07/front:latest
# Dockerhub에 푸시
$ docker push hm07/front
```
배포가 완료되면 [https://hub.docker.com/r/hm07/front/tags/](https://hub.docker.com/r/hm07/front/tags/)에서 이미지를 확인 할 수 있다.
### Dockerhub에 올린 이미지도 스크립트에 추가
#### build.py
프로덕션빌드하는 건 너무 오래걸리므로 Dockerhub을 이용하는 것을 기본 사양으로 한다. 그러면 나중에 `eb deploy` 할 때에도 Dockerfile에는 Dockerhub서버에 있는 것을 받게 되어 있다.
```python
MODE_DOCKERHUB = 'dockerhub'
IMAGE_PRODUCTION = 'front-production'
IMAGE_DOCKERHUB = 'front'
DOCKERFILE_PRODUCTION = 'Dockerfile.production'
DOCKERFILE_DOCKERHUB = 'Dockerfile'
dockerfile_extra_dockerhub = open(os.path.join(CONF_DOCKER_DIR, '03_extra_dockerhub.docker')).read()
# ...
elif args.mode == MODE_DOCKERHUB:
dockerfile = dockerfile_template.format(
from_image='hm07/front',
maintainer=MAINTAINER,
base='',
extra=dockerfile_extra_dockerhub,
)
filename = DOCKERFILE_DOCKERHUB
imagename = IMAGE_DOCKERHUB
```
#### .conf/docker/03_extra_dockerhub.docker
```
WORKDIR /srv/app/django_app
EXPOSE 4567
CMD ["supervisord", "-n"]
```
### 실행
```shell
$ python build.py -m=dockerhub
$ docker run -p 8080:4567 front
```
http://localhost:8080 에서는 Django 페이지, http://front.localhost:8080/ 에서는 Vue 페이지가 정상적으로 작동하는지 확인한다.
| 21.879747 | 166 | 0.668692 | kor_Hang | 0.847965 |
21f85c024b167781b48ccfe1a7f2cfb0ad9a10eb | 3,205 | md | Markdown | docs/cpp/override-specifier.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/cpp/override-specifier.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/cpp/override-specifier.md | ANKerD/cpp-docs.pt-br | 6910dc17c79db2fee3f3616206806c5f466b3f00 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Especificador de substituição | Microsoft Docs
ms.custom: ''
ms.date: 11/04/2016
ms.technology:
- cpp-language
ms.topic: language-reference
dev_langs:
- C++
helpviewer_keywords:
- override Identifier
ms.assetid: b286fb46-9374-4ad8-b2e7-4607119b6133
author: mikeblome
ms.author: mblome
ms.workload:
- cplusplus
ms.openlocfilehash: 596580bdd4cb7c5610e7cb68341fba4a3da981ac
ms.sourcegitcommit: 913c3bf23937b64b90ac05181fdff3df947d9f1c
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 09/18/2018
ms.locfileid: "46106136"
---
# <a name="override-specifier"></a>substituir especificador
Você pode usar o **substituir** palavra-chave para designar as funções que substituem uma função virtual em uma classe base membro.
## <a name="syntax"></a>Sintaxe
```
function-declaration override;
```
## <a name="remarks"></a>Comentários
**substituir** é contextual e tem um significado somente quando especial é usada após uma declaração de função de membro; caso contrário, ele não é uma palavra-chave reservada.
## <a name="example"></a>Exemplo
Use **substituir** para ajudar a evitar o comportamento de herança inadequados em seu código. O exemplo a seguir mostra onde, sem usar **substituir**, o comportamento da função de membro da classe derivada pode não ser o esperado. O compilador não emite erros para esse código.
```cpp
class BaseClass
{
virtual void funcA();
virtual void funcB() const;
virtual void funcC(int = 0);
void funcD();
};
class DerivedClass: public BaseClass
{
virtual void funcA(); // ok, works as intended
virtual void funcB(); // DerivedClass::funcB() is non-const, so it does not
// override BaseClass::funcB() const and it is a new member function
virtual void funcC(double = 0.0); // DerivedClass::funcC(double) has a different
// parameter type than BaseClass::funcC(int), so
// DerivedClass::funcC(double) is a new member function
};
```
Quando você usa **substituir**, o compilador gera erros em vez de criar silenciosamente novo membro de funções.
```cpp
class BaseClass
{
virtual void funcA();
virtual void funcB() const;
virtual void funcC(int = 0);
void funcD();
};
class DerivedClass: public BaseClass
{
virtual void funcA() override; // ok
virtual void funcB() override; // compiler error: DerivedClass::funcB() does not
// override BaseClass::funcB() const
virtual void funcC( double = 0.0 ) override; // compiler error:
// DerivedClass::funcC(double) does not
// override BaseClass::funcC(int)
void funcD() override; // compiler error: DerivedClass::funcD() does not
// override the non-virtual BaseClass::funcD()
};
```
Para especificar que as funções não podem ser substituídas e que as classes não podem ser herdadas, use o [final](../cpp/final-specifier.md) palavra-chave.
## <a name="see-also"></a>Consulte também
[Especificador final](../cpp/final-specifier.md)<br/>
[Palavras-chave](../cpp/keywords-cpp.md) | 33.385417 | 277 | 0.674883 | por_Latn | 0.944998 |
21f889ea10d0bce77b2e662dea520597198a2b36 | 5,493 | md | Markdown | wow/jenkins_ci/README.md | Nordix/metal3-dev-tools | 63d18fd7f713e0e68b26ca567357e8e698372718 | [
"Apache-2.0"
] | null | null | null | wow/jenkins_ci/README.md | Nordix/metal3-dev-tools | 63d18fd7f713e0e68b26ca567357e8e698372718 | [
"Apache-2.0"
] | 14 | 2022-02-07T12:49:34.000Z | 2022-03-28T05:43:32.000Z | wow/jenkins_ci/README.md | Nordix/metal3-dev-tools | 63d18fd7f713e0e68b26ca567357e8e698372718 | [
"Apache-2.0"
] | null | null | null |
### Access to Nordix Jenkins
You can access to Nordix [Jenkins](https://jenkins.nordix.org/view/Metal3/)
with your Google account.
### Daily steps to ensure successfullness of Jenkins CI
**Note: you can run 'jenkins-check-ci.sh' script to check CI job statuses**
* jenkins-jobs-to-scan.txt lists string patterns used for checking
* running from your local repo
* running from cli by remotely accessing github, no need for local repo
```
wget -q https://raw.githubusercontent.com/Nordix/metal3-dev-tools/master/wow/jenkins_ci/jenkins-jobs-to-scan.txt | sleep 1 | bash <(curl -Ls https://raw.githubusercontent.com/Nordix/metal3-dev-tools/master/wow/jenkins_ci/jenkins-check-ci.sh) && rm jenkins-jobs-to-scan.txt*
```
* a customized jenkins-jobs-to-scan.txt can be used by running (wget run once to fetch jenkins-jobs-to-scan.txt)
```
bash <(curl -Ls https://raw.githubusercontent.com/Nordix/metal3-dev-tools/master/wow/jenkins_ci/jenkins-check-ci.sh)
```
Check global status of jobs which are built on a daily basis:
* [metal3_master_v1a4_integration_test_centos](https://jenkins.nordix.org/view/Metal3/job/metal3_master_v1a4_integration_test_centos/) *(v1alpha4 integration test)*
* [metal3_master_v1a4_integration_test_ubuntu](https://jenkins.nordix.org/view/Metal3/job/metal3_master_v1a4_integration_test_ubuntu/) *(v1alpha4 integration test)*
* [metal3_master_integration_tests_cleanup](https://jenkins.nordix.org/view/Metal3/job/metal3_master_integration_tests_cleanup/) *(capi baremetal integration tests cleanup)*
* [metal3_openstack_image_building](https://jenkins.nordix.org/view/Metal3/job/metal3_openstack_image_building/)
* [metal3_docker_image_building](https://jenkins.nordix.org/view/Metal3/job/metal3_docker_image_building/)
* [metal3_nordix_dev_tools_repos](https://jenkins.nordix.org/view/Metal3/job/metal3_nordix_dev_tools_repos/)
Jobs with prefix 'metal3_master_' run metal3-dev-env v1alpha3 and v1alpha4 integration tests. You
should see **blue** colour that indicates that run was successfull.
In case of job **FAILURE**,
- first check the job's logs (```console output```) to get more info.
- check what are the PRs that were merged last time, which could potentially
cause CI failure.
### Some of common causes of failure that you might encounter
* ``` E: Could not get lock /var/lib/dpkg/lock-frontend - open (11: Resource temporarily unavailable) ```
* ```"Unable to connect to the server: dial tcp 192.168.39.150:8443: connect: no route to host```
* ```"Error from server: error when creating "STDIN": etcdserver: request timed out"```
* ```Failed due to timeout``` - sometimes job might fail because of the timeout
which could happen due to slowleness of CityCloud infrastructure.
### How to resolve the above failures
Just trigger the CI manually because these causes aren't results of any pull
request but rather system instability.
### Possible sources of notifications on CI failures
* First place to notice the CI failure is [Nordix Jenkins UI](https://jenkins.nordix.org/view/Metal3/)
* Metal3 [Slack channel](https://kubernetes.slack.com/messages/CHD49TLE7)
#cluster-api-baremetal
* Metal3 [Mailing list](https://groups.google.com/forum/#!forum/metal3-dev)
### Workflow of Nordix Jenkins CI
First based on a trigger-phrase from an open PR or on a daily basis, Jenkins Job
Builder (JJB) builds a Jenkins CI job, which will execute the steps given in
the corresponding pipeline (example: [capi_bm_integration_tests.pipeline](https://github.com/Nordix/metal3-dev-tools/blob/master/ci/jobs/capi_bm_integration_tests.pipeline))
**Note:** You will find JJB files in [Nordix Gerrit](https://gerrit.nordix.org/admin/repos/infra/cicd)
and Jenkins pipelines in [metal3-dev-tools](https://github.com/Nordix/metal3-dev-tools/tree/master/ci/jobs).
When a job is triggered either by trigger-phrase or on a daily basis, it is executed in
one of the Jenkins slave VM (example:```metal3-static0-workers-*```), which is
running on **Default_Project_37137** of [City Cloud](https://citycontrolpanel.com/landing?m=login_required).
In order to access the VM where the job is running:
1. Find the name of the Jenkins slave (from [Jenkins](https://jenkins.nordix.org/view/Metal3/))
that is executing the job.
2. Go the [City Cloud](https://citycontrolpanel.com/landing?m=login_required)
console and get the floating IP of the corresponding Jenkins slave VM.
3. Find the IP of the VM from Jenkins job's **console_output**, which is
created for running actual integration test. See example screenshot:

4. SSH into the Jenkins slave VM with its floating IP that you found in step2
and from there SSH into the actual tester VM with its own IP that you found
in step3.
**Note:** to SSH the Jenkins slave VM you need an metal3ci SSH key
* if needed run 'ssh-add -l' to check that the key is loaded to ssh-agent
* ssh metal3ci@'Jenkins-slave-VM-IP' -A
* ssh 'tester-VM-IP'
5. Once you are inside the tester VM you can debug it and get more info.
### How to clean up leftover VMs from CityCloud
There is a Jenkins [master job](https://jenkins.nordix.org/view/Metal3/job/metal3_master_integration_tests_cleanup/) that every 6 hours cleans up all the leftover VMs, which failed to be deleted at the end of v1alphaX integration test.
**Note:** If you want to trigger cleaning job manually, you can use the `/test-clean` phrase within an open pull request under [metal3-io/project-infra](https://github.com/metal3-io/project-infra) repo.
| 57.21875 | 273 | 0.773348 | eng_Latn | 0.883659 |
21f94fa3214149e93d504c84480f8556f4db5ae8 | 1,072 | md | Markdown | inventories/virtualbox/README.md | trombik/ansible-project-dumbhub | 645f7683e6e1fe17ae6bf21ce860c6a26021704b | [
"0BSD"
] | null | null | null | inventories/virtualbox/README.md | trombik/ansible-project-dumbhub | 645f7683e6e1fe17ae6bf21ce860c6a26021704b | [
"0BSD"
] | 4 | 2021-05-20T12:15:17.000Z | 2022-02-26T06:17:32.000Z | inventories/virtualbox/README.md | trombik/ansible-project-dumbhub | 645f7683e6e1fe17ae6bf21ce860c6a26021704b | [
"0BSD"
] | null | null | null | ## `virtualbox` environment
This is an environment for development on local machine. The environment is
created by `virtualbox`.
### Instances
#### `dumbhub`
This is the target machine to create.
It has four network interfaces.
* One for the default network, through which the machine access the Internet.
This interface is required in `virtualbox` environment, but not in
production.
* Two interfaces for bridging. One is connected to `peer`, and another to
`router`. Packets to/from `router` are forwarded via a bridge interface
whose member interfaces are these two physical interfaces.
* One for the internal network, used for the machine to provide services to
the internal clients. `ansible` manages the host using this interface.
#### `router`
This is an instance to mock a router that performs NAT for internal clients,
and acts as the default gateway of the internal network.
One network interface is connected to `dumbhub`.
#### `peer`
This is an instance to mock a peer device of your ISP.
One network interface is connected to `dumbhub`.
| 30.628571 | 77 | 0.76306 | eng_Latn | 0.999795 |