---
layout: post
title: "Troubleshooting: Asp.Net returns empty response on a production server"
description: "Last week I got to solve a problem that only happened on a customer's server"
tags: [asp.net, response, async, stream, perfview]
comments: true
published: true
categories:
- Blog
image: https://www.thuannguy.com/images/blog-image.jpg
---
# Troubleshooting: Asp.Net returns empty response on a production server
Last week I got to solve a problem that only happened on a customer's server: one of the endpoints always returned a blank page. Checking the network log showed me that the response had no content. Since it didn't happen on our development boxes, debugging needed to be done using log files and all kinds of traces I could collect.
## Debugging using Perfview
[Perfview](https://github.com/Microsoft/perfview) is a great tool for collecting traces of what a .NET process just did, because it is portable yet powerful and has a simple GUI.
Examining a log file pointed us to a suspect method, which was probably the last piece of my code that ran:

Briefly speaking, that method copied content from an HttpResponseMessage object to HttpResponse.OutputStream. Another important note is that it is an ActionResult.ExecuteResult method, which is called by Asp.Net itself.
A colleague of mine helped me use Perfview to collect a trace of what happened internally when the application served that blank response. Although all stack traces showed that my code ran as expected without any error, the exception view revealed a hidden NullReferenceException happening deep down in Asp.Net framework code:

Up to this point, my guess was that the exception happened while content was being written to HttpResponse.OutputStream. However, I needed more proof in order to find a good fix, because I wouldn't be able to test it on my development box.
I opened the code of CopyToAsync found on GitHub: 
After checking the type of the stream object, I was able to confirm that the code ran into the "else" branch that called SerializeToStreamAsync:

This is where SerializeToStreamAsync called WriteToStreamAsync:

Finally, the code of WriteToAnyStreamAsync is:

This suggested that either the buffer object, the destination object, or something inside the destination object must be null. However, because the buffer had just been created locally, it couldn't be null. That left me with the destination object being null. Tracing the code backward showed me that the destination object is the OutputStream of HttpResponse. All those clues were enough for me to conclude that the error happened because the stream-copying code was executed asynchronously, but by the time it ran, Asp.Net had already closed the response. Interestingly, my guess for why it only happened on that server is that the server had limited resources, so Asp.Net had to close response streams much sooner than when we tested on our development boxes.
By the way, I fixed it by changing ConfigureAwait to Wait. You might scream at me because they are two different things and calling Wait is evil, but I can assure you that for this piece of code, and for all the constraints my web application has to fulfill, using Wait is the only viable solution. I wish the Asp.Net team would change their mind about [adding support for ExecuteResultAsync](https://github.com/aspnet/AspNetWebStack/issues/113) though.
- https://github.com/microsoft/vscode/issues/68901
- https://github.com/Microsoft/vscode/issues/11770
---
layout: post
title: Open Tabs No 1
subtitle: This week in Open Tabs.
category: opinion
tags: [custdev, culture, cto]
author: holger_reinhardt
author_email: holger.reinhardt@haufe-lexware.com
header-img: "images/bg-post.alt.jpg"
---
[This week in Open Tabs](http://dev.haufe.com/meta/category/opinion/) is my new weekly column to share the list of links I have (or had) open in my browser tabs to read this week.
I was looking for a format to easily share these with the larger developer and business community here at Haufe, but it might also be of interest to others. My goal is not to be comprehensive or most up to date, but to provide a cross section of the topics I find interesting and worth investing my time in.
Depending on the week you might find some interesting links or nothing at all. And that is ok. I might even include some stories from the trenches, or decide that they warrant a separate column. That is ok too. I might even include links I collected in Evernote as reference material for the future. I am sure you don't mind that, do you? The only constraint I would like to set myself is that it should cover only the material of a single week. Not more and not less.
So without further ado, here is the first **Open Tabs** edition for the week of August 29th.
##### Lean
* <http://www.allaboutagile.com/7-key-principles-of-lean-software-development-2/>
* <https://social-biz.org/2013/12/27/goldratt-the-theory-of-constraints/>
* <http://leanmagazine.net/lean/cost-of-delay-don-reinertsen/>
* <https://blog.leanstack.com/expose-your-constraints-before-chasing-additional-resources-cc17929cfac4>
##### Product
* <http://www.romanpichler.com/blog/product-roadmap-vs-release-plan/>
* <https://hbr.org/2016/08/what-airbnb-understands-about-customers-jobs-to-be-done>
* <https://hbr.org/2016/09/know-your-customers-jobs-to-be-done>
##### Business
* <http://disruptorshandbook.com/disruptors-handbooks/>
* <http://blog.gardeviance.org/2016/08/on-being-lost.html>
* <http://blog.gardeviance.org/2016/08/finding-path.html>
* <http://blog.gardeviance.org/2016/08/exploring-map.html>
* <http://blog.gardeviance.org/2016/08/doctrine.html>
* <http://blog.gardeviance.org/2016/08/the-play-and-decision-to-act.html>
* <http://blog.gardeviance.org/2016/08/getting-started-yourself.html>
##### Project
* [OpenCredo: Evolving Project Management from the Sin to the Virtue](https://www.youtube.com/watch?v=BpwjDcl8Ae8)
* <http://www.romanpichler.com/blog/product-roadmap-vs-release-plan/>
##### Culture
* <http://www.strategy-business.com/feature/10-Principles-of-Organizational-Culture>
* <https://github.com/blog/2238-octotales-mailchimp>
* <http://www.oreilly.com/webops-perf/free/files/release-engineering.pdf>
##### Technology
* [IT-Trends 2016 for the insurance industry](https://www.munichre.com/en/reinsurance/magazine/topics-online/2016/04/it-trends-2016/index.html)
* <http://raconteur.net/technology/blockchain-is-more-than-the-second-coming-of-the-internet>
* <http://blog.getjaco.com/jaco-labs-nodejs-docker-missing-manual/>
* <http://techblog.netflix.com/2016/08/vizceral-open-source.html>
* <https://readthedocs.org>
* <http://learning.blogs.nytimes.com/2015/11/12/skills-and-strategies-annotating-to-engage-analyze-connect-and-create/>
Now I just need to find the time to read them. :)
# Música sem Toque
A project for a music-player controller driven by head gestures.
Project created for the course PCS3559/PCS3859 - Technologies for Interactive
Applications, 2020.
## Documentation
* Raspberry Pi: <https://www.raspberrypi.org>
* Gyroscope and accelerometer: <https://invensense.tdk.com/wp-content/uploads/2015/02/MPU-6000-Datasheet1.pdf>
* Sensor reading tutorial: <https://makersportal.com/blog/2019/11/11/raspberry-pi-python-accelerometer-gyroscope-magnetometer>
## Setup
The Raspberry Pi's I2C interface must be enabled in its settings.
Pin connections:
| MPU6050 | Raspberry Pi |
| ----- | ----- |
| VCC | pin 1 |
| GND | pin 6 |
| SCL | pin 5 |
| SDA | pin 3 |
| XDA | NC |
| XCL | NC |
| ADD | NC |
| INT | NC |
Command to check the address of the connection:
```
$ sudo i2cdetect -y 1
```
| 25.212121 | 133 | 0.711538 | por_Latn | 0.935395 |
---
title: Autofocus Attribute
localeTitle: Атрибут автофокусировки
---
## Autofocus Attribute | HTML5
The **autofocus** attribute is a boolean attribute.
When present, it specifies that the element should automatically receive input focus when the page loads.
Only one form element in a document may have the **autofocus** attribute. It cannot be applied to `<input type="hidden">`.
### Applies to
| Element | Attribute |
| :-- | :-- |
| `<button>` | autofocus |
| `<input>` | autofocus |
| `<select>` | autofocus |
| `<textarea>` | autofocus |
### Example
```html
<form>
<input type="text" name="fname" autofocus>
<input type="text" name="lname">
</form>
```
### Compatibility
This is an HTML5 attribute.
#### More information:
[HTML autofocus attribute](https://www.w3schools.com/tags/att_autofocus.asp) on w3schools.com
[`<input>` autofocus attribute](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input) on MDN Web Docs
# API diff: SkiaSharp.dll
## SkiaSharp.dll
### Namespace SkiaSharp
#### Type Changed: SkiaSharp.GRBackendRenderTargetDesc
Added properties:
```csharp
public SKRectI Rect { get; }
public SKSizeI Size { get; }
```
#### Type Changed: SkiaSharp.SKAutoCanvasRestore
Added constructor:
```csharp
public SKAutoCanvasRestore (SKCanvas canvas);
```
#### Type Changed: SkiaSharp.SKBitmap
Added property:
```csharp
public bool ReadyToDraw { get; }
```
Added methods:
```csharp
public bool CopyPixelsTo (IntPtr dst, int dstSize, int dstRowBytes, bool preserveDstPad);
public static SKBitmap Decode (SKCodec codec, SKImageInfo bitmapInfo);
public static SKBitmap Decode (SKData data, SKImageInfo bitmapInfo);
public static SKBitmap Decode (SKStream stream, SKImageInfo bitmapInfo);
public static SKBitmap Decode (byte[] buffer, SKImageInfo bitmapInfo);
public static SKBitmap Decode (string filename, SKImageInfo bitmapInfo);
public IntPtr GetPixels ();
public bool InstallPixels (SKImageInfo info, IntPtr pixels);
public bool InstallPixels (SKImageInfo info, IntPtr pixels, int rowBytes);
public bool InstallPixels (SKImageInfo info, IntPtr pixels, int rowBytes, SKColorTable ctable);
public bool InstallPixels (SKImageInfo info, IntPtr pixels, int rowBytes, SKColorTable ctable, SKBitmapReleaseDelegate releaseProc, object context);
public void SetColorTable (SKColorTable ct);
public void SetPixels (IntPtr pixels);
public void SetPixels (IntPtr pixels, SKColorTable ct);
```
#### Type Changed: SkiaSharp.SKCanvas
Obsoleted methods:
```diff
[Obsolete ("Use DrawPositionedText instead.")]
public void DrawText (string text, SKPoint[] points, SKPaint paint);
[Obsolete ("Use DrawPositionedText instead.")]
public void DrawText (IntPtr buffer, int length, SKPoint[] points, SKPaint paint);
```
Added methods:
```csharp
protected override void Dispose (bool disposing);
public void DrawPositionedText (string text, SKPoint[] points, SKPaint paint);
public void DrawPositionedText (IntPtr buffer, int length, SKPoint[] points, SKPaint paint);
public void DrawRegion (SKRegion region, SKPaint paint);
public bool QuickReject (SKPath path);
public bool QuickReject (SKRect rect);
```
#### Type Changed: SkiaSharp.SKCodec
Added methods:
```csharp
public SKCodecResult IncrementalDecode ();
public SKCodecResult IncrementalDecode (out int rowsDecoded);
public SKCodecResult StartIncrementalDecode (SKImageInfo info, IntPtr pixels, int rowBytes);
public SKCodecResult StartIncrementalDecode (SKImageInfo info, IntPtr pixels, int rowBytes, SKCodecOptions options);
public SKCodecResult StartIncrementalDecode (SKImageInfo info, IntPtr pixels, int rowBytes, SKCodecOptions options, SKColorTable colorTable, ref int colorTableCount);
public SKCodecResult StartIncrementalDecode (SKImageInfo info, IntPtr pixels, int rowBytes, SKCodecOptions options, IntPtr colorTable, ref int colorTableCount);
```
#### Type Changed: SkiaSharp.SKColor
Added constructor:
```csharp
public SKColor (uint value);
```
#### Type Changed: SkiaSharp.SKColorFilter
Added method:
```csharp
public static SKColorFilter CreateBlendMode (SKColor c, SKBlendMode mode);
```
#### Type Changed: SkiaSharp.SKColorTable
Added property:
```csharp
public SKColor Item { get; }
```
#### Type Changed: SkiaSharp.SKData
Added property:
```csharp
public bool IsEmpty { get; }
```
Added methods:
```csharp
public System.IO.Stream AsStream (bool streamDisposesData);
public byte[] ToArray ();
```
#### Type Changed: SkiaSharp.SKPath
Added properties:
```csharp
public bool IsEmpty { get; }
public int VerbCount { get; }
```
#### New Type: SkiaSharp.SKAutoLockPixels
```csharp
public class SKAutoLockPixels : System.IDisposable {
// constructors
public SKAutoLockPixels (SKBitmap bitmap);
public SKAutoLockPixels (SKBitmap bitmap, bool doLock);
// methods
public virtual void Dispose ();
public void Unlock ();
}
```
#### New Type: SkiaSharp.SKBitmapReleaseDelegate
```csharp
public sealed delegate SKBitmapReleaseDelegate : System.MulticastDelegate, System.ICloneable, System.Runtime.Serialization.ISerializable {
// constructors
public SKBitmapReleaseDelegate (object object, IntPtr method);
// methods
public virtual System.IAsyncResult BeginInvoke (IntPtr address, object context, System.AsyncCallback callback, object object);
public virtual void EndInvoke (System.IAsyncResult result);
public virtual void Invoke (IntPtr address, object context);
}
```
# vim-minify
A small vim plugin to (un)minify your javascript and css code directly and fast in vim.
This plugin uses the API from [javascript-minifier.com](https://javascript-minifier.com)
and [cssminifier.com](https://cssminifier.com/) by [Andrew Chilton](https://chilts.org/).
## Features
* Minifying complete css or javascript files or line-based minifying
* writes a file.min.extension file in the directory of the file (e.g. site.min.js in
folder where site.js is)
* functions occurs only in JavaScript, CSS, HTML and PHP files
* uses well-working minifier API
## Requirements
* curl
* internet connection
## Usage
### In JavaScript files
* <code>:MinifyJS</code> minifies your javascript code to filename.min.js
* <code>:UnMinifyJS</code> reformat to be human readable
### In CSS files
* <code>:MinifyCSS</code> minifies your css code to filename.min.css
* <code>:UnMinifyCSS</code> reformat to be human readable
### In JavaScript, CSS, HTML and PHP files
* <code>:1,5Minjs</code> minifies (embedded) javascript code (line 1 to 5)
* <code>:1,5Mincss</code> minifies (embedded) css code (line 1 to 5)
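Under the hood, the plugin sends your code to the minifier APIs with curl. The sketch below shows roughly what such a request looks like; the `build_minify_cmd` helper and the exact flags are illustrative assumptions, not taken from the plugin's source, and the `/raw` endpoints are the ones the API sites document:

```shell
# Illustrative sketch: build the curl invocation for the
# javascript-minifier.com / cssminifier.com HTTP APIs.
build_minify_cmd() {
    kind="$1"   # "js" or "css"
    file="$2"   # path to the source file
    case "$kind" in
        js)  url="https://javascript-minifier.com/raw" ;;
        css) url="https://cssminifier.com/raw" ;;
        *)   echo "unsupported type: $kind" >&2; return 1 ;;
    esac
    # POST the file contents as the "input" form field
    printf 'curl -s -X POST --data-urlencode input@%s %s\n' "$file" "$url"
}

build_minify_cmd js site.js
# curl -s -X POST --data-urlencode input@site.js https://javascript-minifier.com/raw
```

Running the printed command writes the minified source to stdout, which the plugin can then save as `filename.min.js` or `filename.min.css`.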
## Installation
E.g. install plugin with vim-plug:
* In .vimrc: <code>Plug 'Shadowsith/vim-minify'</code>
* Open vim and execute <code>:PlugInstall</code>
Or put it in your .vim/bundle folder if you are using pathogen or vundle
## Examples
### Javascript
<img src="https://shadowsith.de/github/vim-minify/vim_js_minify.gif">
### CSS
<img src="https://shadowsith.de/github/vim-minify/vim_css_minify.gif">
## See also
* [XML-fast.vim](https://github.com/joeky888/XML-fast.vim), vim-xml-minifier
| 32.34 | 89 | 0.735931 | eng_Latn | 0.819722 |
---
layout: post
title: "GH5BG"
description: ""
category: genes
tags: [sheath, seedling, leaf, jasmonate, salt, shoot, submergence, salt stress]
---
* **Information**
+ Symbol: GH5BG
+ MSU: [LOC_Os10g22520](http://rice.plantbiology.msu.edu/cgi-bin/ORF_infopage.cgi?orf=LOC_Os10g22520)
+ RAPdb: [Os10g0370500](http://rapdb.dna.affrc.go.jp/viewer/gbrowse_details/irgsp1?name=Os10g0370500)
* **Publication**
+ [A stress-induced rice Oryza sativa L. beta-glucosidase represents a new subfamily of glycosyl hydrolase family 5 containing a fascin-like domain](http://www.ncbi.nlm.nih.gov/pubmed?term=A stress-induced rice Oryza sativa L. beta-glucosidase represents a new subfamily of glycosyl hydrolase family 5 containing a fascin-like domain%5BTitle%5D), 2007, Biochem J.
* **Genbank accession number**
* **Key message**
+ The GH5BG mRNA is highly expressed in the shoot during germination and in leaf sheaths of mature plants
+ The GH5BG was up-regulated in response to salt stress, submergence stress, methyl jasmonate and abscisic acid in rice seedlings
* **Connection**
[//]: # * **Key figures**
| 40.964286 | 367 | 0.72973 | eng_Latn | 0.831811 |
[[TOC]]
## Overview
An ellipsis component that displays content dynamically according to the width of its parent element. Overflowing text is collapsed into an ellipsis, the full content is shown in a tooltip, and one-click copying is supported.
<br>
<br>
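The core idea — cut the text to the available budget and append an ellipsis — can be sketched in plain JavaScript. This is only an illustration, not the component's actual implementation: the real component truncates by rendered pixel width, while this sketch counts characters.

```javascript
// Illustrative sketch: character-based ellipsizing. NvEllipsis itself
// measures rendered width, but the truncation idea is the same.
function ellipsize(text, maxChars) {
  if (typeof text !== 'string' || text.length <= maxChars) {
    return text;
  }
  // Reserve one slot for the ellipsis character itself.
  return text.slice(0, Math.max(0, maxChars - 1)) + '…';
}

console.log(ellipsize('NvEllipsis shows the overflow in a tooltip', 10));
// → "NvEllipsi…"
```

The component then shows the untruncated string in the tooltip so no information is lost.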
## Usage Examples
<br>
<br>
:::demo Basic example
```html
<template>
<div style="width: 100px">
<NvEllipsis content="我是nv-ellipsis,我的内容特别多,用户只能看到部分信息,通过悬浮窗可以查看到完成内容,并支持一键复制" />
</div>
</template>
```
:::
:::demo Clicking this text automatically opens the phone-dialing application
```html
<template>
<NvEllipsis content="10086" content-type="tel"/>
</template>
```
:::
:::demo Clicking this text automatically opens the email application
```html
<template>
<NvEllipsis content="NoahV@baidu.com" content-type="email"/>
</template>
```
:::
:::demo Clicking this text navigates to the configured link
```html
<template>
<NvEllipsis
:content="{
'href': 'https://www.baidu.com',
            'target': '_blank',
'text': '百度一下,你就知道'
}"
tool-tip="我是nv-ellipsis,这是一个链接"
content-type="link" />
</template>
```
:::
:::demo Custom HTML content is supported
```html
<template>
<NvEllipsis
:isHTML="true"
content="<h1>我是nv-ellipsis,我支持html,这是h1</h1>"
tool-tip="<h2>我是nv-ellipsis,我支持html,这是h2</h2>" />
</template>
```
:::
<span style="color:red">Note: custom content may break the ellipsis (dot-dot-dot) styling; you will need to supply additional CSS yourself.</span>
:::demo Usage inside a table
```html
<template>
<nvTable
:columns="columns"
:tdata="tableData"
:pagination="false"
style="width: 300px"></nvTable>
</template>
<script>
export default {
data() {
return {
tableData: [
{
name: 'a',
age: 1,
addr: 'xxx'
},
{
name: 'b',
age: 2,
addr: 'xxx'
},
{
name: 'c',
age: 3,
addr: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
},
{
name: 'd',
age: 4,
addr: ''
},
{
name: 'e',
age: 5,
addr: 'xx'
}
]
}
},
computed: {
columns() {
return [
{
title: '名字',
key: 'name'
},
{
title: '年龄',
key: 'age'
},
{
title: '地址',
render: (h, {row}) => h('NvEllipsis', {
props: {
content: row.addr
}
})
}
];
}
}
}
</script>
```
:::
## API
### props
| Property | Description | Type | Default |
| ---------- | ----------------------------------------| ---------------- | ----------- |
| copy | whether the copy feature is enabled | Boolean | true |
| content | the text to display | [String, Object] | none |
| contentType | content type; tel, email, link, and plain text are supported, and the first three have default behaviors | String | plain text |
| placement | position of the tooltip; one of top, top-start, top-end, bottom, bottom-start, bottom-end, left, left-start, left-end, right, right-start, right-end | String | bottom |
| toolTip | tooltip text; may be set to content different from `content`, and defaults to the `content` value | String | the `content` value |
| maxWidth | maximum width; text wraps automatically once it is exceeded | Number | 300 |
| copySuccessText | toast text shown when copying succeeds | String | 复制成功 |
| copyErrorText | toast text shown when copying fails | String | light |
| theme | theme tone; both dark and light tones are supported | Number | 1 |
| isHTML | whether the content is HTML; `content` and `toolTip` then accept custom markup | Boolean | false |
<br>
### events
| Name | Description | Parameters |
| ---------- | ---------------------------------------| -------------- |
| copy-success | callback fired when copying succeeds | none |
| copy-error | callback fired when copying fails | none |
---
author: mmacy
ms.service: active-directory-b2c
ms.topic: include
ms.date: 02/05/2020
ms.author: v-junlch
ms.openlocfilehash: a461021d4343fd697d9d0f2412e07bfd85f5f20f
ms.sourcegitcommit: 23dc63b6fea451f6a2bd4e8d0fbd7ed082ba0740
ms.translationtype: HT
ms.contentlocale: zh-CN
ms.lasthandoff: 02/05/2020
ms.locfileid: "77028070"
---
> [!NOTE]
> 在 Azure Active Directory B2C 中,`custom policies` 主要用于解决复杂方案。 大多数情况下,建议使用内置的[用户流](../articles/active-directory-b2c/user-flow-overview.md)。
| 28.529412 | 139 | 0.795876 | yue_Hant | 0.336372 |
# Change Log
## [0.6.4](https://github.com/alexei-led/pumba/tree/0.6.4) (2019-05-02)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.6.3...0.6.4)
**Fixed bugs:**
- Unable to specify target network, only single host IP [\#126](https://github.com/alexei-led/pumba/issues/126)
**Closed issues:**
- How to add more containers to kill [\#124](https://github.com/alexei-led/pumba/issues/124)
- Using re2 in pumba command in kubernetes template [\#123](https://github.com/alexei-led/pumba/issues/123)
- Fedora - command 'tc' not found [\#121](https://github.com/alexei-led/pumba/issues/121)
- log-level behaviour [\#120](https://github.com/alexei-led/pumba/issues/120)
**Merged pull requests:**
- Add support for specifying target networks \(CIDR notation\) \#126 [\#127](https://github.com/alexei-led/pumba/pull/127) ([gmpify](https://github.com/gmpify))
## [0.6.3](https://github.com/alexei-led/pumba/tree/0.6.3) (2019-03-12)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.6.2...0.6.3)
**Fixed bugs:**
- Regex issue with interface name [\#109](https://github.com/alexei-led/pumba/issues/109)
**Closed issues:**
- Slack hook certificate error [\#122](https://github.com/alexei-led/pumba/issues/122)
**Merged pull requests:**
- Fix spelling error [\#116](https://github.com/alexei-led/pumba/pull/116) ([CatEars](https://github.com/CatEars))
- Update interface regexp [\#108](https://github.com/alexei-led/pumba/pull/108) ([ddliu](https://github.com/ddliu))
## [0.6.2](https://github.com/alexei-led/pumba/tree/0.6.2) (2018-12-12)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.6.1...0.6.2)
**Implemented enhancements:**
- Deployment failed on Kubernetes 1.7 [\#42](https://github.com/alexei-led/pumba/issues/42)
**Fixed bugs:**
- Multiple matching container `netem` commands executed in sequentially when triggered with `docker run` [\#112](https://github.com/alexei-led/pumba/issues/112)
**Closed issues:**
- Pumba daemonset pods are crashing [\#110](https://github.com/alexei-led/pumba/issues/110)
**Merged pull requests:**
- run netem in parallel on multiple containers. fix \#112 [\#113](https://github.com/alexei-led/pumba/pull/113) ([alexei-led](https://github.com/alexei-led))
- Change command to args and support entrypoint [\#111](https://github.com/alexei-led/pumba/pull/111) ([yaron-idan](https://github.com/yaron-idan))
## [0.6.1](https://github.com/alexei-led/pumba/tree/0.6.1) (2018-11-15)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.6.0...0.6.1)
**Implemented enhancements:**
- Better killing and respawing options [\#46](https://github.com/alexei-led/pumba/issues/46)
- No such image: gaiadocker/iproute2 [\#40](https://github.com/alexei-led/pumba/issues/40)
**Fixed bugs:**
- reuse tc container [\#97](https://github.com/alexei-led/pumba/issues/97)
**Closed issues:**
- when using --tc-image flag the sidekick image is not deleted after the command execution finishes [\#106](https://github.com/alexei-led/pumba/issues/106)
- Cannot connect to the Docker daemon [\#105](https://github.com/alexei-led/pumba/issues/105)
- Strange latency spikes on 4.15.0-36 kernel [\#103](https://github.com/alexei-led/pumba/issues/103)
**Merged pull requests:**
- Fixing tc image and more [\#107](https://github.com/alexei-led/pumba/pull/107) ([alexei-led](https://github.com/alexei-led))
## [0.6.0](https://github.com/alexei-led/pumba/tree/0.6.0) (2018-10-08)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.5.2...0.6.0)
**Closed issues:**
- Any Plan about Network Partitioning Simulation? [\#96](https://github.com/alexei-led/pumba/issues/96)
- Add SVG version of Pumba logo [\#94](https://github.com/alexei-led/pumba/issues/94)
**Merged pull requests:**
- use SCRATCH image for base image [\#101](https://github.com/alexei-led/pumba/pull/101) ([alexei-led](https://github.com/alexei-led))
- Better support for CI tool and Codecov [\#100](https://github.com/alexei-led/pumba/pull/100) ([alexei-led](https://github.com/alexei-led))
- Refactor: Initialize CLI Commands in a separate func. [\#99](https://github.com/alexei-led/pumba/pull/99) ([nawazish-github](https://github.com/nawazish-github))
## [0.5.2](https://github.com/alexei-led/pumba/tree/0.5.2) (2018-09-03)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.5.0...0.5.2)
**Implemented enhancements:**
- Pumba is not an importable package [\#60](https://github.com/alexei-led/pumba/issues/60)
- Add Start command. [\#59](https://github.com/alexei-led/pumba/issues/59)
**Fixed bugs:**
- Got permission denied after using pumba to delay network [\#83](https://github.com/alexei-led/pumba/issues/83)
- docker\_entrypoint.sh changes ownership of parent socket [\#38](https://github.com/alexei-led/pumba/issues/38)
**Closed issues:**
- Pumba attack - visualize the execution steps in command terminal [\#91](https://github.com/alexei-led/pumba/issues/91)
- Pumba run time startup issues [\#88](https://github.com/alexei-led/pumba/issues/88)
- cat: can't open 'VERSION': No such file or directory [\#87](https://github.com/alexei-led/pumba/issues/87)
- netem delay loses the first 3 packets [\#72](https://github.com/alexei-led/pumba/issues/72)
- Pumba container exiting without any error [\#70](https://github.com/alexei-led/pumba/issues/70)
**Merged pull requests:**
- Add corrupt and duplicate netem commands [\#95](https://github.com/alexei-led/pumba/pull/95) ([philipgloyne](https://github.com/philipgloyne))
## [0.5.0](https://github.com/alexei-led/pumba/tree/0.5.0) (2018-05-21)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.8...0.5.0)
**Closed issues:**
- Slack hooks fail due to no ca [\#84](https://github.com/alexei-led/pumba/issues/84)
**Merged pull requests:**
- Code refactoring [\#85](https://github.com/alexei-led/pumba/pull/85) ([alexei-led](https://github.com/alexei-led))
- implement 'contains' in a cheaper, simpler way [\#82](https://github.com/alexei-led/pumba/pull/82) ([Dieterbe](https://github.com/Dieterbe))
- Spring cleanup [\#80](https://github.com/alexei-led/pumba/pull/80) ([alexei-led](https://github.com/alexei-led))
## [0.4.8](https://github.com/alexei-led/pumba/tree/0.4.8) (2018-03-12)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.7...0.4.8)
**Implemented enhancements:**
- Fix `netem` when destination IP filter is defined [\#52](https://github.com/alexei-led/pumba/issues/52)
**Fixed bugs:**
- Fix `netem` when destination IP filter is defined [\#52](https://github.com/alexei-led/pumba/issues/52)
- netem command fails on images where user != root [\#43](https://github.com/alexei-led/pumba/issues/43)
**Closed issues:**
- use dumb-init [\#69](https://github.com/alexei-led/pumba/issues/69)
- use su-exec instead of gosu [\#68](https://github.com/alexei-led/pumba/issues/68)
- kubernetes command should be an array and not a string [\#63](https://github.com/alexei-led/pumba/issues/63)
- custom built container kills itself [\#62](https://github.com/alexei-led/pumba/issues/62)
- suggest kubernetes limits and requests [\#61](https://github.com/alexei-led/pumba/issues/61)
- allow targetting multiple specific ip's [\#57](https://github.com/alexei-led/pumba/issues/57)
**Merged pull requests:**
- moving git repo to alexei-led [\#78](https://github.com/alexei-led/pumba/pull/78) ([alexei-led](https://github.com/alexei-led))
- Limit the number of container to kill \#46 [\#77](https://github.com/alexei-led/pumba/pull/77) ([camilocot](https://github.com/camilocot))
- Add Start command. \#59 [\#76](https://github.com/alexei-led/pumba/pull/76) ([camilocot](https://github.com/camilocot))
- very minor min corrections [\#74](https://github.com/alexei-led/pumba/pull/74) ([lazerion](https://github.com/lazerion))
- use dumb-init and su-exec [\#71](https://github.com/alexei-led/pumba/pull/71) ([grosser](https://github.com/grosser))
- add requests/limits so container does not be come too greedy [\#67](https://github.com/alexei-led/pumba/pull/67) ([grosser](https://github.com/grosser))
- avoid self-killing on kubernetes [\#66](https://github.com/alexei-led/pumba/pull/66) ([grosser](https://github.com/grosser))
- prefer regular nodes by default [\#65](https://github.com/alexei-led/pumba/pull/65) ([grosser](https://github.com/grosser))
- do not spam extra shell / make killing soft by default [\#64](https://github.com/alexei-led/pumba/pull/64) ([grosser](https://github.com/grosser))
- support specifying multiple target IP's [\#58](https://github.com/alexei-led/pumba/pull/58) ([Dieterbe](https://github.com/Dieterbe))
- fix logging of configs [\#56](https://github.com/alexei-led/pumba/pull/56) ([Dieterbe](https://github.com/Dieterbe))
## [0.4.7](https://github.com/alexei-led/pumba/tree/0.4.7) (2017-11-14)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.6...0.4.7)
**Fixed bugs:**
- Pumba does not seem to work in my environment [\#33](https://github.com/alexei-led/pumba/issues/33)
**Merged pull requests:**
- Fixes [\#55](https://github.com/alexei-led/pumba/pull/55) ([Dieterbe](https://github.com/Dieterbe))
- fix typo's [\#54](https://github.com/alexei-led/pumba/pull/54) ([Dieterbe](https://github.com/Dieterbe))
## [0.4.6](https://github.com/alexei-led/pumba/tree/0.4.6) (2017-10-26)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.5...0.4.6)
**Implemented enhancements:**
- Pumba interact with all containers inside docker [\#41](https://github.com/alexei-led/pumba/issues/41)
**Fixed bugs:**
- Target IP filter blocking all traffic [\#39](https://github.com/alexei-led/pumba/issues/39)
**Closed issues:**
- Regex not working [\#47](https://github.com/alexei-led/pumba/issues/47)
- Building Error - "golang:1.8-alpine AS builder" [\#45](https://github.com/alexei-led/pumba/issues/45)
**Merged pull requests:**
- Add a Gitter chat badge to README.md [\#49](https://github.com/alexei-led/pumba/pull/49) ([gitter-badger](https://github.com/gitter-badger))
- Creates a deploy file for OpenShift [\#48](https://github.com/alexei-led/pumba/pull/48) ([lordofthejars](https://github.com/lordofthejars))
## [0.4.5](https://github.com/alexei-led/pumba/tree/0.4.5) (2017-09-06)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.4...0.4.5)
**Fixed bugs:**
- not work in k8s ver 1.3 [\#19](https://github.com/alexei-led/pumba/issues/19)
## [0.4.4](https://github.com/alexei-led/pumba/tree/0.4.4) (2017-07-08)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.3...0.4.4)
## [0.4.3](https://github.com/alexei-led/pumba/tree/0.4.3) (2017-07-07)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.2...0.4.3)
**Implemented enhancements:**
- tc command check [\#35](https://github.com/alexei-led/pumba/issues/35)
**Fixed bugs:**
- tc command check [\#35](https://github.com/alexei-led/pumba/issues/35)
- Cannot remove running container [\#31](https://github.com/alexei-led/pumba/issues/31)
- "pumba rm" without "--force" flag is useless [\#30](https://github.com/alexei-led/pumba/issues/30)
**Closed issues:**
- Replace `samalba/dockerclient` library [\#14](https://github.com/alexei-led/pumba/issues/14)
## [0.4.2](https://github.com/alexei-led/pumba/tree/0.4.2) (2017-03-16)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.1...0.4.2)
**Merged pull requests:**
- Added basic e2e tests [\#37](https://github.com/alexei-led/pumba/pull/37) ([slnowak](https://github.com/slnowak))
- Pumba is now able to remove container [\#34](https://github.com/alexei-led/pumba/pull/34) ([slnowak](https://github.com/slnowak))
## [0.4.1](https://github.com/alexei-led/pumba/tree/0.4.1) (2017-02-01)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.4.0-2-gdf5e4a3...0.4.1)
## [0.4.0-2-gdf5e4a3](https://github.com/alexei-led/pumba/tree/0.4.0-2-gdf5e4a3) (2017-01-29)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.3.2...0.4.0-2-gdf5e4a3)
**Merged pull requests:**
- Get rid of samalba client [\#32](https://github.com/alexei-led/pumba/pull/32) ([slnowak](https://github.com/slnowak))
## [0.3.2](https://github.com/alexei-led/pumba/tree/0.3.2) (2017-01-17)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.3.1...0.3.2)
## [0.3.1](https://github.com/alexei-led/pumba/tree/0.3.1) (2016-12-13)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.3.0...0.3.1)
**Implemented enhancements:**
- Implement `rate` bandwidth limit [\#25](https://github.com/alexei-led/pumba/issues/25)
**Closed issues:**
- Debug messages problem [\#28](https://github.com/alexei-led/pumba/issues/28)
**Merged pull requests:**
- Implement rate bandwidth limit [\#29](https://github.com/alexei-led/pumba/pull/29) ([meqif](https://github.com/meqif))
## [0.3.0](https://github.com/alexei-led/pumba/tree/0.3.0) (2016-11-24)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.9-4257dcf...0.3.0)
**Closed issues:**
- Unable to start the pumba container [\#27](https://github.com/alexei-led/pumba/issues/27)
## [0.2.9-4257dcf](https://github.com/alexei-led/pumba/tree/0.2.9-4257dcf) (2016-10-28)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.9...0.2.9-4257dcf)
## [0.2.9](https://github.com/alexei-led/pumba/tree/0.2.9) (2016-10-28)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.8...0.2.9)
## [0.2.8](https://github.com/alexei-led/pumba/tree/0.2.8) (2016-10-28)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.7...0.2.8)
## [0.2.7](https://github.com/alexei-led/pumba/tree/0.2.7) (2016-10-27)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.6-3-g705f13b...0.2.7)
## [0.2.6-3-g705f13b](https://github.com/alexei-led/pumba/tree/0.2.6-3-g705f13b) (2016-10-25)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.6...0.2.6-3-g705f13b)
**Implemented enhancements:**
- One time run w/o interval [\#20](https://github.com/alexei-led/pumba/issues/20)
- Run first action before interval [\#17](https://github.com/alexei-led/pumba/issues/17)
- Can't rely on the Docker restart policy [\#11](https://github.com/alexei-led/pumba/issues/11)
**Fixed bugs:**
- netem: add check for `iptools2` install [\#21](https://github.com/alexei-led/pumba/issues/21)
**Closed issues:**
- Chaos state [\#18](https://github.com/alexei-led/pumba/issues/18)
**Merged pull requests:**
- Fix typo: dealy -\> delay [\#26](https://github.com/alexei-led/pumba/pull/26) ([kane-c](https://github.com/kane-c))
## [0.2.6](https://github.com/alexei-led/pumba/tree/0.2.6) (2016-09-25)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.5...0.2.6)
## [0.2.5](https://github.com/alexei-led/pumba/tree/0.2.5) (2016-09-08)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.4...0.2.5)
## [0.2.4](https://github.com/alexei-led/pumba/tree/0.2.4) (2016-08-10)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.3...0.2.4)
## [0.2.3](https://github.com/alexei-led/pumba/tree/0.2.3) (2016-08-07)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.2...0.2.3)
## [0.2.2](https://github.com/alexei-led/pumba/tree/0.2.2) (2016-08-06)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.1...0.2.2)
**Implemented enhancements:**
- Disconnect container from Docker network [\#13](https://github.com/alexei-led/pumba/issues/13)
- Pause running container [\#12](https://github.com/alexei-led/pumba/issues/12)
**Closed issues:**
- Support recovery "validation" scripts [\#5](https://github.com/alexei-led/pumba/issues/5)
- Support additional Docker commands [\#4](https://github.com/alexei-led/pumba/issues/4)
## [0.2.1](https://github.com/alexei-led/pumba/tree/0.2.1) (2016-07-28)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.2.0...0.2.1)
## [0.2.0](https://github.com/alexei-led/pumba/tree/0.2.0) (2016-07-27)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.11...0.2.0)
**Merged pull requests:**
- Add basic capability to disrupt container network [\#16](https://github.com/alexei-led/pumba/pull/16) ([inbarshani](https://github.com/inbarshani))
## [0.1.11](https://github.com/alexei-led/pumba/tree/0.1.11) (2016-07-16)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.10...0.1.11)
**Closed issues:**
- Replace Gox [\#10](https://github.com/alexei-led/pumba/issues/10)
- Add a pkg installer for Mac OS X [\#9](https://github.com/alexei-led/pumba/issues/9)
- Collect container "lifecycle" activities from Docker host, Pumba is running on [\#3](https://github.com/alexei-led/pumba/issues/3)
## [0.1.10](https://github.com/alexei-led/pumba/tree/0.1.10) (2016-06-05)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.9...0.1.10)
## [0.1.9](https://github.com/alexei-led/pumba/tree/0.1.9) (2016-05-22)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.8...0.1.9)
## [0.1.8](https://github.com/alexei-led/pumba/tree/0.1.8) (2016-05-22)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.7...0.1.8)
## [0.1.7](https://github.com/alexei-led/pumba/tree/0.1.7) (2016-05-21)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.6...0.1.7)
**Closed issues:**
- Add label to skip Pumba eyes [\#8](https://github.com/alexei-led/pumba/issues/8)
- Post to Slack does not work [\#7](https://github.com/alexei-led/pumba/issues/7)
## [0.1.6](https://github.com/alexei-led/pumba/tree/0.1.6) (2016-04-25)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.5...0.1.6)
**Closed issues:**
- Are you planning to support Kubernetes or OpenSHift ? [\#6](https://github.com/alexei-led/pumba/issues/6)
- Log Pumba "kill" activities with more details about affected containers [\#2](https://github.com/alexei-led/pumba/issues/2)
## [0.1.5](https://github.com/alexei-led/pumba/tree/0.1.5) (2016-04-13)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.4...0.1.5)
**Merged pull requests:**
- Add a Bitdeli Badge to README [\#1](https://github.com/alexei-led/pumba/pull/1) ([bitdeli-chef](https://github.com/bitdeli-chef))
## [0.1.4](https://github.com/alexei-led/pumba/tree/0.1.4) (2016-04-08)
[Full Changelog](https://github.com/alexei-led/pumba/compare/0.1.3...0.1.4)
## [0.1.3](https://github.com/alexei-led/pumba/tree/0.1.3) (2016-04-04)
\* *This Change Log was automatically generated by [github_changelog_generator](https://github.com/skywinder/Github-Changelog-Generator)*
---
title: math.cos() function
description: The math.cos() function returns the cosine of the radian argument `x`.
aliases:
- /influxdb/v2.0/reference/flux/functions/math/cos/
- /influxdb/v2.0/reference/flux/stdlib/math/cos/
- /influxdb/cloud/reference/flux/stdlib/math/cos/
menu:
flux_0_x_ref:
name: math.cos
parent: math
weight: 301
introduced: 0.22.0
---
The `math.cos()` function returns the cosine of the radian argument `x`.
_**Output data type:** Float_
```js
import "math"
math.cos(x: 3.14)
// Returns -0.9999987317275396
```
## Parameters
### x {data-type="float"}
The value used in the operation.
## Special cases
```js
math.cos(±Inf) // Returns NaN
math.cos(NaN) // Returns NaN
```
---
layout: post
title: 'Keep those zombies out of your business'
author: [abyshake]
tags: ['Active Customers', 'User Onboarding', 'Early Stage Growth']
image: img/zombie.png
date: '2021-06-06'
draft: false
excerpt:
---
It is the first week of the month, and every month this is the week I get notifications for a bunch of subscription charges on my credit card. Some of those charges I vaguely recognise. These are products I signed up for, at some point, but haven't been using in a really long time - if I ever used them at all in the first place.
Why don't I just go ahead and cancel my subscription then? Well, I intend to. It is just that I make a mental note of canceling those subscriptions, put a pin in it, and before I know it, it is the next month showing me the next notification. (Yes sir, I am lazy. You got that right.)
I cancelled a few of these subscriptions early this year, and most probably will cancel a few more this week (fingers crossed), but these active running zombie subscriptions aren't what we are talking about here. The subject of this discussion is me - the faux customer.
As a customer, I am completely and utterly useless to these businesses. Yes, I do send a few dollars every month to their revenue pool, but revenue is not the biggest value your customer generates for your business. It is his usage behavior. How is he using your product, which sections and features is he most engaging in, which features is he struggling with, what does he do first every single time he logs in.
There can be many more activity patterns just like these, and obviously they could vary based on your product. But, you measure these on a macro level, and you start getting a much better understanding of what your product means to your customers. And that is something you must want to know, because what matters more than how you describe your product is how your customers would do so. It helps you give a clear direction to the way your product and your business should be moving, and it shapes up your entire sales, marketing and even pricing strategies.
A zombie customer just corrupts your understanding. The more zombies there are, the higher that MRR number grows, the more it would be able to distract you from the fact that you are losing out on invaluable customer usage pattern and insights.
So instead of plainly tracking MRR or number of customers, keep an eye out on your actual active users. Not the typical vanity metric definition of your DAUs and MAUs, actual active users. People who are actively engaging with and using your product.
# But isn't MRR important
To the health of your business, yes! It is imperative. It helps you know if your business is going to be afloat or not. It helps you plan out your team's expansion, earmark a dollar figure to marketing and promotional activities, and a bunch of other things. And if you are an early stage startup, MRR is critical for survival.
But as far as the future of your business goes, it would be the active customers who will help save the day.
Let's look at the MRR figure again, with a special emphasis on your customer type this time.
Say you have two customers - one, a zombie, and two, someone who has been using your product for a few weeks/months. Both of them are churning out now. What can you do at this stage?
The customer who had been active for weeks/months, you can talk to them, figure out the reasons behind them churning out. There are many possible reasons, and every reason could give you an opportunity to stop him from churning out:
1. It is the product capabilities. The customer has some needs that your product just doesn't stand up to. At the very least, in this scenario, you will understand your customers' expectations, and at best, you may be able to get creative and help solve their need even with the current limitations your product may have. It is also a good idea to understand the motivation behind them sticking around for all this time - irrespective of whether or not you are able to retain them today.
2. The pricing. Maybe they aren't quite happy with the prices you have. This is trickier, but still important. You need to understand where this sudden discontent is coming from. Is it a competitor, or are they going to be using the product less frequently, so the price tag no longer feels justified to them? All of that is valuable information. For example, if they are going to use the product less frequently, and you keep on running into such customers again and again, it may be time to introduce a metered pricing strategy. (Also, a personal recommendation - avoid negotiating on price, no good comes out of it.)
>Personally, I am not a big fan of a pricing war. Whether fighting it out with a competitor, or just with the expectations of a customer, lowering my prices is a strategy I rarely use or recommend. Why? Because there are no winners. Not even the customer who would have just saved a bunch of money having won the pricing negotiation. If I am a business offering my services at less than ideal prices, it seriously eats into my ability to expand, dedicate time, money and resources to the product, and offer quality and quick customer support. It affects my performance across the board. In many cases, it could pose a threat to my survival. And in almost all of those cases, the customer suffers. So believe me when I tell you, there are no survivors in a pricing war.
But if it is a zombie customer, what can you even do? As a customer they already feel they signed up for a product they weren't going to use, they have already spent hundreds of dollars on your product, and they are even today probably as new to your product as someone new signing up. So, chances are, there is very little you can say that will make them stay.
And you know what this means? That MRR figure we were so reliant on, the zombie managed to corrupt it as well. I can depend on the revenue my active customer sends my way. Because even when he wishes to stop, I have a fair shot at retaining him. The zombie revenue is pure wild card; it can go up in smoke any time. I can't depend on that, and neither should you.
# So what do you do?
Two things:
###### #1. Focus on your active customers. Listen to them, treat them well, pamper them, worship them.
See how you can make the product even more meaningful to them. Stop letting the noise of everything else distract you.
###### #2. Wake up the zombies
Have a methodology to ensure more and more customers who signed up actually start using the product. Yes, user onboarding. And then keep an eye out to make sure they keep on using the product. Anytime there is an anomalous behavior, see what you can do to fix it, instead of just waiting it out, cashing in those dollar bills rolling in every month.
Keep the percentage of zombies as low as possible. Your business will thank you for it.
That's it for today, see you tomorrow.
Cheers
Abhishek
# Javascript
Solutions to challenges in [javascript track](https://exercism.io/tracks/javascript).
# YDCamera
## YDCamera is a camera control built on the AVFoundation framework. Its main features are as follows:<br>
### 1.1 Flash state control (off, auto, on). <br>
### 1.2 Switching between the front and rear cameras <br>
### 1.3 Touch-based control of camera focus and focal length (zoom) <br>
### 1.4 Control of the camera output aspect ratio (1:1, 9:16, 3:4) <br>
### 1.5 Camera orientation follows the device orientation <br>
### 1.6 Photo capture; the final output photo adapts to the device orientation, the front/rear camera, and the output aspect ratio <br>
### 1.7 Video recording; the final output video adapts to the device orientation and the front/rear camera <br>
# TS Mixin Generator
## Base rules and principles for mixins
### General principles of mixins
- Mixin functions are regular JS functions that return a class expression.
- The class expressions returned by mixin functions should have names to make debugging easier.
- As of now, mixin class expressions cannot use decorators due to TS limitations.
- This may change in the future as the JS Decorators proposal is implemented in TS.
- For the time being it means that these mixin class expressions cannot use features that require a decorator to work, e.g. the @Input decorators of Angular.
- A class extending a class expression returned by a mixin function (a mixin class) should be able to use both the instance, prototype and static sides of the mixin classes it extends.
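The principles above can be sketched as follows. This is a minimal illustration, not part of any library; the `Ctor` helper type and the `SerializableMixin`/`Model` names are invented for the example:

```typescript
// A generic constructor type that any mixin function can accept.
type Ctor<T = {}> = new (...args: any[]) => T;

// A mixin function: a plain function returning a *named* class expression.
function SerializableMixin<TBase extends Ctor>(Base: TBase) {
  // Naming the class expression ("Serializable") keeps stack traces readable.
  return class Serializable extends Base {
    // Static side: also usable by classes extending the mixin result.
    static readonly format = "json";

    // Prototype side: a regular instance method.
    serialize(): string {
      return JSON.stringify(this);
    }
  };
}

// A derived class can use the instance, prototype and static sides.
class Model extends SerializableMixin(class {}) {
  constructor(public id = 1) {
    super();
  }
}

const m = new Model();
console.log(m.serialize()); // {"id":1}  -- instance/prototype side
console.log(Model.format);  // json      -- static side
```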
### Design principles of mixins
- Mixin functions should always be prepared to receive a base class constructor as their first parameter.
- This ensures that mixin functions can be composed together.
- This does not mean that a mixin function cannot accept multiple parameters for configuration or other purposes.
- A mixin function should do as little as possible both in terms of JS and TS.
- This means that a mixin function should return a class expression without performing any side effects.
- And also that a mixin function should do as little type casting and TS type wizardry as possible.
- A main design goal is to have the TS compiler and tooling do the heavy lifting of type inference.
- This way we stay as close to the default behavior of the compiler as possible.
- Mixin class expressions should extend the base class constructor received by the mixin function or an empty anonymous class.
- Allowing the returned class expression to extend other classes or mixins would essentially defeat the whole purpose of using mixins in the first place.
- Extending a regular class here would mean that the mixin could not extend any other mixin's returned class expression that it may have received as its first parameter.
- Extending another mixin function here would mean that even though we have the logic broken up into reusable pieces, we choose to glue them together as if they were one singular entity in a single-inheritance scenario.
- Frequently used combinations of mixin functions can be composed into dedicated alias functions
- Mixin chain aliases (or meta mixins)
- Name collisions between multiple mixin class expressions at the usage site of the client code should be handled by the TS compiler.
- This refers back to the previous principle about the compiler doing most of the heavy lifting of type inferences.
- There are some cases where a name collision should not result in an error, neither related to TS types nor to JS runtime functionality.
- Multiple mixin class expressions having a prototype member with the same name should NOT result in an error.
- These members are living on the prototype objects of the dynamically created class expressions. i.e.: instance methods, getters and setters.
- In such cases these members shadow each other, but due to how the JS property resolution algorithm works, the runtime will find the correct member highest up the prototype chain (closest to the derived class), just as it would in the case of regular single-inheritance classes.
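A hypothetical sketch of these design principles — composition through the first parameter, a mixin chain alias ("meta mixin"), and harmless member shadowing. The `TimestampedMixin`, `TaggedMixin`, and `AuditedMixin` names are invented for illustration:

```typescript
type Ctor<T = {}> = new (...args: any[]) => T;

function TimestampedMixin<TBase extends Ctor>(Base: TBase) {
  return class Timestamped extends Base {
    createdAt = new Date(0);

    describe(): string {
      return `created at ${this.createdAt.toISOString()}`;
    }
  };
}

function TaggedMixin<TBase extends Ctor>(Base: TBase) {
  return class Tagged extends Base {
    tags: string[] = [];

    // Shadows Timestamped.describe when applied on top of it -- no error;
    // normal JS property resolution picks the member closest to the subclass.
    describe(): string {
      return `tags: ${this.tags.join(",")}`;
    }
  };
}

// A "meta mixin": an alias for a frequently used mixin chain.
function AuditedMixin<TBase extends Ctor>(Base: TBase) {
  return TaggedMixin(TimestampedMixin(Base));
}

class Article extends AuditedMixin(class {}) {}

const article = new Article();
article.tags.push("draft");
console.log(article.describe());               // tags: draft
console.log(article.createdAt instanceof Date); // true -- Timestamped state is still there
```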
### Mixin function best practices
- Mixin functions should have a PascalCase name (like regular JS classes) with some suffix (preferably Mixin) to make them easily distinguishable from other functions.
- Files containing mixin functions should have a suffix (preferably .mixin) before their file extension part to make them easily distinguishable from regular JS classes and functions.
- Mixin functions and the mixin class expressions returned by them should be self-contained units.
- Ideally they shouldn't have to make any assumptions about where and how they are going to be used.
- In cases where they do have to make such assumptions, we should try to keep them to a minimum and mostly rely on globally available values and APIs.
I represent metadata of a query result as a collection of first-class properties (kind of ClyProperty).
I allow annotating a query result with summary information about its items.
For example, the SUnit plugin annotates method queries with information about tests.
Another example: ClyAsyncQueryResult includes the metadata ClyBackgroundProcessingTag, which indicates that the result is still being processed.
To access properties, use the following methods:
- addProperty: aProperty
- getProperty: aPropertyClass
- hasProperty: aPropertyClass
Metadata is collected lazily when the user asks the result for it:
aQueryResult metadata
The actual logic to collect metadata is implemented by environment plugins, but the concrete dispatch method is chosen by the query which built the given result:
metadata := ClyQueryResultMetadata new.
environment pluginsDo: [:each |
buildingQuery collectMetadataOf: self by: each ]
The query sends a typed message depending on the items which it retrieves. For example:
ClyMethodQuery>>collectMetadataOf: aQueryResult by: anEnvironmentPlugin
anEnvironmentPlugin collectMetadataOfMethods: aQueryResult
So metadata is a property of the query result. But when you open a browser, metadata is passed to the cursor instances. This is an important optimization for the remote scenario where the result is a remote proxy. In that case the cursor is transferred to the client by value together with its metadata, so all properties are available to the local cursor user.
Internal Representation and Key Implementation Points.
Instance Variables
properties: <OrderedCollection of: <ClyProperty>>
# xrnd
Tests and benchmarks around xtensor
# Summary
[Introduction](./introduction.md)
- [About Gooey](./about/gooey.md)
- [Concepts](./about/concepts.md)
- [Creating a Widget](./about/creating-widgets.md)
- [Frontends](./frontends/supported.md)
- [Browser](./frontends/browser/web-sys.md)
- [Native (Rasterized)](./frontends/rasterizer/native.md)
- [Widgets](./widgets/provided.md)
- [Button](./widgets/button.md)
- [Component](./widgets/component.md)
- [Container](./widgets/container.md)
- [Label](./widgets/label.md)
- [Layout](./widgets/layout.md)
- [Tutorial: Build a Database-driven App](./tutorials/bonsaidb-counter/introduction.md)
] | null | null | null | # Facepunch.Steamworks for Unity
[Another fucking c# Steamworks implementation](https://wiki.facepunch.com/steamworks/)
## Installation:
1. Adding plugin
**Easy way:**
In Unity click Window/Package Manager then:

and copy/paste: `https://github.com/kamyker/Facepunch.Steamworks.Package.git`
**Git way:**
Clone repo as submodule to `<your_project>/Packages/Facepunch.Steamworks`
2. Set the proper Scripting Define Symbols in Unity's Player settings:
Win x64:
`PLATFORM_WIN64;PLATFORM_WIN;PLATFORM_64`
Win x32:
`PLATFORM_WIN32;PLATFORM_WIN;PLATFORM_32`
Linux or Mac x32:
`PLATFORM_POSIX32;PLATFORM_POSIX;PLATFORM_32`
Linux or Mac x64:
`PLATFORM_POSIX64;PLATFORM_POSIX;PLATFORM_64`
If you are using Mac or Linux and build isn't working check libraries unity settings in `Packages\Facepunch.Steamworks\UnityPlugin\redistributable_bin`
## More
More in main repo: https://github.com/Facepunch/Facepunch.Steamworks
| 25.175 | 151 | 0.792453 | kor_Hang | 0.460892 |
2fa75bc3a9b48b9e4a4a598b8353ba6fb68d98dd | 1,405 | md | Markdown | README.md | SmartManQ8/app-ios | be75344fc731bc5a5d5447636a2da482dcf339aa | [
"MIT"
] | null | null | null | README.md | SmartManQ8/app-ios | be75344fc731bc5a5d5447636a2da482dcf339aa | [
"MIT"
] | null | null | null | README.md | SmartManQ8/app-ios | be75344fc731bc5a5d5447636a2da482dcf339aa | [
"MIT"
] | null | null | null | # CoEpi for iOS
This is the repository for the iOS implementation of CoEpi. See [CoEpi.org/vision](https://www.coepi.org/vision.html) for background, and see the rest of our CoEpi repositories [here](https://github.com/Co-Epi).
## Build Status
Develop branch: [](https://appcenter.ms/users/scottleibrand/apps/CoEpi-iOS/build/branches/develop)
## Setup
- Install [CocoaPods](https://cocoapods.org/)*
- In the root of your project, run
```ruby
pod install
```
- Open the project using the xcworkspace file (generated by CocoaPods).
\* This project primarily uses SPM (Swift Package Manager) as its dependency manager. CocoaPods is used temporarily as a [fallback for some dependencies](https://github.com/Co-Epi/app-ios/wiki/Architecture)
## Contribute
1. Read the [code guidelines](https://github.com/Co-Epi/app-ios/wiki/Code-guidelines).
2. Create a branch
- If you belong to the organization:
Create a branch
- If you don't belong to the organization:
Fork and create a branch in the fork
3. Commit changes to the branch
4. Push your code and make a pull request
### Internationalization
Any text visible to the user should be translated into the phone's preferred language.
See the [Internationalization wiki page](https://github.com/Co-Epi/app-ios/wiki/Internationalization) for details on how to do that.
| 37.972973 | 212 | 0.765125 | eng_Latn | 0.879194 |
2fa7ad32e570f2f686ba98457def1d6c9eb9244e | 2,389 | markdown | Markdown | _posts/2019-02-22-3_things_you_should_know_before_creating_an_e-commerce_application.markdown | Cheng0315/Cheng0315.github.io | 3217fbb8cfe243089a2d41666ecb793f87f9f2db | [
"MIT"
] | null | null | null | _posts/2019-02-22-3_things_you_should_know_before_creating_an_e-commerce_application.markdown | Cheng0315/Cheng0315.github.io | 3217fbb8cfe243089a2d41666ecb793f87f9f2db | [
"MIT"
] | null | null | null | _posts/2019-02-22-3_things_you_should_know_before_creating_an_e-commerce_application.markdown | Cheng0315/Cheng0315.github.io | 3217fbb8cfe243089a2d41666ecb793f87f9f2db | [
"MIT"
] | null | null | null | ---
layout: post
title: "3 Things You Should Know Before Building an E-Commerce Application"
date: 2019-02-22 12:25:52 -0500
permalink: 3_things_you_should_know_before_creating_an_e-commerce_application
---
After spending over 2 weeks working on an e-commerce application and making tons of mistakes, I've learned a lot about building e-commerce applications and have gathered three important things I think you should know before you start building your own, as they will help you save time and avoid headaches.
1. Store your shopping cart's id - not the cart object or item object - in the session.
When I first started working on my e-commerce application, I thought it would be a good idea to store objects in the session, as it would make it easier for me to retrieve data. However, as I got further into the project and started adding more objects to the session, the session became full and stopped accepting the data I passed to it. Because of that, I had to make many changes to my application to get it working without storing objects in the session.
2. Know how you want your shopping cart to behave.
For me, I want to create a shopping cart where users can add items to the cart without having to log in. To accomplish this, I stored two kinds of data in my session: 1. the id of the user's cart, and 2. an array of items' ids if the user is not logged in. If the user is not logged in and adds an item to the cart, I would add the item's id to the array in the session. If the user is logged in and adds an item to the cart, I would create the association between the item and the user's cart and save them to the database. If the user adds an item to the cart without logging in and then decides to log in afterward, I would iterate through the array of items' ids in the session, find the items, and associate them with the user's current cart.
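A minimal Ruby sketch of that flow, with plain objects standing in for the Rails session and ActiveRecord models (all names here are illustrative, not from the actual app):

```ruby
# Minimal stand-ins for the real session hash and Cart model.
Cart = Struct.new(:id, :item_ids) do
  def add_item(item_id)
    item_ids << item_id
  end
end

def add_to_cart(session, item_id, current_cart = nil)
  if current_cart
    current_cart.add_item(item_id)                # logged in: save the association
  else
    (session[:guest_item_ids] ||= []) << item_id  # guest: stash only the item's id
  end
end

def merge_guest_items!(session, cart)
  # Called after login: move any ids stashed in the session into the user's cart.
  (session[:guest_item_ids] || []).each { |item_id| cart.add_item(item_id) }
  session.delete(:guest_item_ids)
  session[:cart_id] = cart.id # store the cart's id, never the cart object itself
end
```

After login, only `session[:cart_id]` remains in the session, which keeps it small.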
3. Know when to use your Google-fu.
Even with good planning, you are going to come across things and issues that you were not expecting or not sure how to solve. Instead of spending over 30 minutes to an hour trying to figure it out, google it. This will saves you a lot of time and let you move on to the next part of your project.
I hope you find one or two of these tips helpful. If you would like to checkout my e-commerce application, you can do so [here](https://github.com/Cheng0315/swift-kart).
| 113.761905 | 724 | 0.776057 | eng_Latn | 0.999833 |
2fa7e65000cc03389e6fbb7df1c82c390070f8f7 | 15,869 | md | Markdown | Projet-Tangram/Library/PackageCache/com.unity.addressables@1.8.3/Documentation~/AddressableAssetsGettingStarted.md | UMI3D/UMI3D-Tangram | 355137e2131ccaaa62010953d72e0f560245465c | [
"Apache-2.0"
] | null | null | null | Projet-Tangram/Library/PackageCache/com.unity.addressables@1.8.3/Documentation~/AddressableAssetsGettingStarted.md | UMI3D/UMI3D-Tangram | 355137e2131ccaaa62010953d72e0f560245465c | [
"Apache-2.0"
] | null | null | null | Projet-Tangram/Library/PackageCache/com.unity.addressables@1.8.3/Documentation~/AddressableAssetsGettingStarted.md | UMI3D/UMI3D-Tangram | 355137e2131ccaaa62010953d72e0f560245465c | [
"Apache-2.0"
] | 4 | 2021-05-17T17:39:30.000Z | 2021-05-24T02:05:54.000Z | # Getting started
## Installing the Addressable Assets package
**Important**: The Addressable Asset System requires Unity version 2018.3 or later.
To install this package, follow the instructions in the [Package Manager documentation](https://docs.unity3d.com/Packages/com.unity.package-manager-ui@1.7/manual/index.html).
## Preparing Addressable Assets
### Marking assets as Addressable
There are two ways to mark an asset as Addressable in the Unity Editor:
* In the object's Inspector.
* In the **Addressables Groups** window.
#### Using the Inspector
In your **Project** window, select the desired asset to view its Inspector. In the Inspector, click the **Addressable** checkbox and enter a name by which to identify the asset.
</br>
_Marking an asset as Addressable in the **Inspector** window._
#### Using the Addressables window
Select **Window** > **Asset Management** > **Addressables** > **Groups** to open the **Addressables Groups** window. Next, drag the desired asset from your **Project** window into one of the asset groups in the **Addressables Groups** window.
</br>
_Marking an asset as Addressable in the **Addressables Groups** window._
### Specifying an address
The default address for your asset is the path to the asset in your Project (for example, _Assets/images/myImage.png_). To change the asset's address from the **Addressables Groups** window, right-click the asset and select **Change Address**.
When you first start using Addressable Assets, the system saves some edit-time and runtime data assets for your Project in the _Assets/AddressableAssetsData_ file, which should be added to your version control check-in.
### Building your Addressable content
The Addressables Asset System needs to build your content into files that can be consumed by the running game before you build the application. This step is not automatic. You can build this content via the Editor or API:
* To build content in the Editor, open the **Addressables Groups** window, then select **Build** > **New Build** > **Default Build Script**.
* To build content using the API, use [`AddressableAssetSettings.BuildPlayerContent()`](../api/UnityEditor.AddressableAssets.Settings.AddressableAssetSettings.html#UnityEditor_AddressableAssets_Settings_AddressableAssetSettings_BuildPlayerContent).
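For example, the API call above can be wired to an editor menu item (a minimal sketch; the menu path is arbitrary):

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEditor.AddressableAssets.Settings;

public static class AddressablesBuildMenu
{
    // Arbitrary menu path chosen for this example.
    [MenuItem("Tools/Build Addressables Content")]
    public static void BuildContent()
    {
        // Runs the default build script, same as Build > New Build in the Groups window.
        AddressableAssetSettings.BuildPlayerContent();
    }
}
#endif
```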
## Using Addressable Assets
### Loading or instantiating by address
You can load or instantiate an Addressable Asset at runtime. Loading an asset loads all dependencies into memory (including the asset's bundle data if applicable), allowing you to use the asset when you need to. This does not actually put the desired asset into your Scene. To add the asset to your Scene, you must instantiate it. Using the Addressables instantiation interfaces loads the asset and then immediately adds it to your Scene.
To access an asset from your game script using a string address, declare the [`UnityEngine.AddressableAssets`](../api/UnityEngine.AddressableAssets.html) namespace, then call the following methods:
```
Addressables.LoadAssetAsync<GameObject>("AssetAddress");
```
This loads the asset with the specified address.
```
Addressables.InstantiateAsync("AssetAddress");
```
This instantiates the asset with the specified address into your Scene.
**Note**: [`LoadAssetAsync`](../api/UnityEngine.AddressableAssets.Addressables.html#UnityEngine_AddressableAssets_Addressables_LoadAssetAsync__1_System_Object_) and [`InstantiateAsync`](../api/UnityEngine.AddressableAssets.Addressables.html#UnityEngine_AddressableAssets_Addressables_InstantiateAsync_System_Object_Transform_System_Boolean_System_Boolean_) are asynchronous operations. You may provide a callback to work with the asset when it finishes loading (see documentation on [**Async operation handling**](AddressableAssetsAsyncOperationHandle) for more information).
```
using System.Collections;
using System.Collections.Generic;
using UnityEngine.AddressableAssets;
using UnityEngine;
public class AddressablesExample : MonoBehaviour {
GameObject myGameObject;
    void Start() {
Addressables.LoadAssetAsync<GameObject>("AssetAddress").Completed += OnLoadDone;
}
private void OnLoadDone(UnityEngine.ResourceManagement.AsyncOperations.AsyncOperationHandle<GameObject> obj)
{
// In a production environment, you should add exception handling to catch scenarios such as a null result.
myGameObject = obj.Result;
}
}
```
#### Sub-assets and components
Sub-assets and components are special cases for asset loading.
##### Components
You cannot load a GameObject's component directly through Addressables. You must load or instantiate the GameObject, then retrieve the component reference from it. To see how you could extend Addressables to support component loading, see [our ComponentReference sample](https://github.com/Unity-Technologies/Addressables-Sample/tree/master/Basic/ComponentReference).
##### Sub-assets
The system supports loading sub-assets, but requires special syntax. Examples of potential sub-assets include sprites in a sprite sheet, or animation clips in an FBX file. For examples of loading sprites directly, see [our sprite loading sample](https://github.com/Unity-Technologies/Addressables-Sample/tree/master/Basic/Sprite%20Land)
To load all sub-objects in an asset, you can use the following example syntax:
`Addressables.LoadAssetAsync<IList<Sprite>>("MySpriteSheetAddress");`
To load a single sub-object in an asset, you could do this:
`Addressables.LoadAssetAsync<Sprite>("MySpriteSheetAddress[MySpriteName]");`
The names available within an asset are visible in the main Addressables group editor window.
In addition, you can use an [`AssetReference`](../api/UnityEngine.AddressableAssets.AssetReference.html) to access the sub-object of an asset. See notes in the below section.
### Using the AssetReference class
The [`AssetReference`](../api/UnityEngine.AddressableAssets.AssetReference.html) class provides a way to access Addressable Assets without needing to know their addresses. To access an Addressable Asset using the `AssetReference` class:
1. Select a GameObject from your Scene hierarchy or **Project** window.
2. In the Inspector, click the **Add Component** button, then select the component type. Any serializable component can support an `AssetReference` variable (for example, a game script, ScriptableObject, or other serializable class).
3. Add a public `AssetReference` variable in the component (for example, `public AssetReference explosion;`).
4. In the Inspector, select which Addressable Asset to link to the object, by either dragging the asset from the **Project** window into the exposed `AssetReference` field, or choosing from the dropdown of previously defined Addressable Assets in your Project (shown below).
</br>
_Referencing an Addressable Asset via script component._
To load or instantiate an [`AssetReference`](../api/UnityEngine.AddressableAssets.AssetReference.html) asset, call its corresponding method. For example:
```
AssetRefMember.LoadAssetAsync<GameObject>();
```
or
```
AssetRefMember.InstantiateAsync(pos, rot);
```
**Note**: As with normal Addressable Assets, [`LoadAssetAsync`](../api/UnityEngine.AddressableAssets.AssetReference.html#UnityEngine_AddressableAssets_AssetReference_LoadAssetAsync__1) and [`InstantiateAsync`](../api/UnityEngine.AddressableAssets.AssetReference.html#UnityEngine_AddressableAssets_AssetReference_InstantiateAsync_Transform_System_Boolean_) are asynchronous operations. You may provide a callback to work with the asset when it finishes loading (see documentation on [**Async operation handling**](AddressableAssetsAsyncOperationHandle) for more information).
##### Sub-assets
If an asset that contains sub-assets (such as a SpriteAtlas or FBX) is added to an AssetReference, you are given the option to reference the asset itself, or a sub-asset. The single dropdown you are used to seeing becomes two. The first selects the asset itself, and the second selects the sub-asset. If you select "<none>" in the second dropdown, that will be treated as a reference to the main asset.
## Build considerations
### Local data in StreamingAssets
The Addressable Asset System needs some files at runtime to know what to load and how to load it. Those files are generated when you build Addressables data and wind up in the _StreamingAssets_ folder, which is a special folder in Unity that includes all its files in the build. When you build Addressables content, the system stages those files in the Library. Then, when you build the application, the system copies the required files over to _StreamingAssets_, builds, and deletes them from the folder. This way, you can build data for multiple platforms while only having the relevant data included in each build.
In addition to the Addressables-specific data, any groups that build their data for local use will also use the Library platform-specific staging location. To verify that this works, set your build path and load paths to [profile variables](./AddressableAssetsProfiles.md) starting with `[UnityEngine.AddressableAssets.Addressables.BuildPath]` and `{UnityEngine.AddressableAssets.Addressables.RuntimePath}` respectively. You can specify these settings in the `AddressableAssetSettings` Inspector (by default, this object is located in your Project's _Assets/AddressableAssetsData_ directory).
### Downloading in advance
Calling the [`Addressables.DownloadDependenciesAsync()`](../api/UnityEngine.AddressableAssets.Addressables.html#UnityEngine_AddressableAssets_Addressables_DownloadDependenciesAsync_System_Object_System_Boolean_) method loads the dependencies for the address or label that you pass in. Typically, this is the asset bundle.
The [`AsyncOperationHandle`](AddressableAssetsAsyncOperationHandle.md) struct returned by this call includes a `PercentComplete` attribute that you can use to monitor and display download progress. You can also have the app wait until the content has loaded.
If you wish to ask the user for consent prior to download, use [`Addressables.GetDownloadSize()`](../api/UnityEngine.AddressableAssets.Addressables.html#UnityEngine_AddressableAssets_Addressables_GetDownloadSize_System_Object_) to return how much space is needed to download the content from a given address or label. Note that this takes into account any previously downloaded bundles that are still in Unity's asset bundle cache.
While it can be advantageous to download assets for your app in advance, there are instances where you might choose not to do so. For example:
* If your app has a large amount of online content, and you generally expect users to only ever interact with a portion of it.
* You have an app that must be connected online to function. If all your app's content is in small bundles, you might choose to download content as needed.
Rather than using the percent complete value to wait until the content is loaded, you can use the preload functionality to show that the download has started, then continue on. This implementation would require a loading or waiting screen to handle instances where the asset has not finished loading by the time it's needed.
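A minimal coroutine sketch of the consent-then-preload flow described above ("preload" is an assumed label you would assign to your remote groups; the consent prompt and progress UI are left as comments):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement.AsyncOperations;

public class Preloader : MonoBehaviour
{
    const string PreloadLabel = "preload"; // assumed label on your remote groups

    IEnumerator Start()
    {
        AsyncOperationHandle<long> sizeHandle = Addressables.GetDownloadSize(PreloadLabel);
        yield return sizeHandle;

        if (sizeHandle.Result > 0)
        {
            // Show sizeHandle.Result to the user and ask for consent here.
            AsyncOperationHandle downloadHandle = Addressables.DownloadDependenciesAsync(PreloadLabel);
            while (!downloadHandle.IsDone)
            {
                // Drive a progress bar or loading screen from PercentComplete.
                yield return null;
            }
            Addressables.Release(downloadHandle);
        }
        Addressables.Release(sizeHandle);
    }
}
```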
### Building for multiple platforms
The Addressable Asset System generates asset bundles containing your Addressable Assets when building application content. Asset bundles are platform-dependant, and thus must be rebuilt for every unique platform you intend to support.
By default, when building Addressables app data, data for your given platform is stored in platform-specific subdirectories of the Addressables build path(s). The runtime path accounts for these platform folders, and points to the applicable app data.
**Note**: If you use the Addressables [`BuildScriptPackedPlayMode`](../api/UnityEditor.AddressableAssets.Build.DataBuilders.BuildScriptPackedPlayMode.html) script in the Editor Play mode, Addressables will attempt to load data for your current active build target. As such, issues may arise if your current build target data isn't compatible with your current Editor platform. For more information, see documentation on [Play mode scripts](AddressableAssetsDevelopmentCycle.md#play-mode-scripts).
### Grouping assets
It is a good practice to logically collect assets into multiple groups rather than put them all in one large group. The key benefit of this method is to avoid conflicts in version control systems (VCS) when multiple contributors make edits to the same file. Having one large asset group might result in the VCS's inability to cleanly merge these various changes.
### Building scenes that are packed together
After running a build where you have multiple Scenes in an Addressable Assets group, those Scenes will become interdependent if:
* Under _Packed Assets_ in the Project window, the group's Bundle Mode is set to **Pack Together**.
* The Scenes in that group all have the same asset label, and the Bundle Mode is set to **Pack Together By Label**.
If you modify even one of these grouped Scenes then perform a [content update build](AddressableAssetsDevelopmentCycle.md#building-for-content-updates), all the interdependent Scenes will move together into a new Content Update group.
### Loading Content Catalogs
Content Catalogs are the data stores Addressables uses to look up an asset's physical location based on the key(s) provided to the system. By default, Addressables builds the local content catalog for local Addressable Groups. If the Build Remote Catalogs option is turned on under the AddressableAssetSettings, then one additional catalog is built to store locations for remote Addressable Groups. Ultimately Addressables only uses one of these catalogs. If a remote catalog is built and it has a different hash than the local catalog, it is downloaded, cached, and used in place of the built-in local catalog.
It is possible, however, to specify additional Content Catalogs to be loaded. There are different reasons you might decide loading additional catalogs is right for your project, such as building an art-only project that you want to use across different projects.
Should you find that loading additional catalogs is right for you, there is a method that can assist in this regard, `LoadContentCatalogAsync`.
For `LoadContentCatalogAsync`, all that is required is for you to supply the location of the catalog you wish to load. However, this alone does not use catalog caching, so be careful if you're loading a catalog from a remote location. You will incur that WebRequest every time you need to load that catalog.
To help prevent you from needing to download a remote catalog every time, if you provide a `.hash` file with the hash of the catalog alongside the catalog you're loading, we can use this to properly cache your Content Catalog. **Please Note:** The hash file does need to be in the same location and have the same name as your catalog. The only difference to the path should be the extension.
One additional note: You'll notice this method comes with a parameter `autoReleaseHandle`. In order for the system to download a new remote catalog, any prior calls to `LoadContentCatalogAsync` that point to the catalog you're attempting to load need to be released. Otherwise, the system picks up the Content Catalog load operation from our operation cache. If the cached operation is picked up, the new remote catalog is not downloaded. If set to true, the parameter `autoReleaseHandle` can ensure that the operation doesn't stick around in our operation cache after completing. | 90.68 | 618 | 0.802949 | eng_Latn | 0.994873 |
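As a sketch, loading an additional catalog built by another project might look like this (the URL is a placeholder, and a matching `.hash` file is assumed to sit next to the catalog so it can be cached):

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.AddressableAssets;
using UnityEngine.AddressableAssets.ResourceLocators;
using UnityEngine.ResourceManagement.AsyncOperations;

public class CatalogLoader : MonoBehaviour
{
    // Placeholder location; catalog_art.hash is assumed to live alongside it.
    const string CatalogUrl = "http://example.com/art/catalog_art.json";

    IEnumerator Start()
    {
        // autoReleaseHandle: true keeps the completed operation out of the
        // operation cache, so a newer remote catalog can still be fetched later.
        AsyncOperationHandle<IResourceLocator> handle =
            Addressables.LoadContentCatalogAsync(CatalogUrl, true);
        yield return handle;
        // Keys from the external catalog can now be loaded as usual.
    }
}
```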
2fa7edc3b6e107a5839a4b3a3cd0567321beaff1 | 7,417 | md | Markdown | README.md | jtpio/jupyterlab-github | fe08169c4806347dd8ceaae3661adb47538c2265 | [
"BSD-3-Clause"
] | null | null | null | README.md | jtpio/jupyterlab-github | fe08169c4806347dd8ceaae3661adb47538c2265 | [
"BSD-3-Clause"
] | null | null | null | README.md | jtpio/jupyterlab-github | fe08169c4806347dd8ceaae3661adb47538c2265 | [
"BSD-3-Clause"
] | null | null | null | # JupyterLab GitHub
A JupyterLab extension for accessing GitHub repositories.
### What this extension is
When you install this extension, an additional filebrowser tab will be added
to the left area of JupyterLab. This filebrowser allows you to select GitHub
organizations and users, browse their repositories, and open the files in those
repositories. If those files are notebooks, you can run them just as you would
any other notebook. You can also attach a kernel to text files and run those.
Basically, you should be able to open any file in a repository that JupyterLab can handle.
Here is a screenshot of the plugin opening this very file on GitHub:

### What this extension is not
This is not an extension that provides full GitHub access, such as
saving files, making commits, forking repositories, etc.
For it to be so, it would need to more-or-less reinvent the GitHub website,
which represents a huge increase in complexity for the extension.
### A note on rate-limiting
This extension has both a client-side component (that is, JavaScript that is bundled
with JupyterLab), and a server-side component (that is, Python code that is added
to the Jupyter server). This extension _will_ work with out the server extension,
with a major caveat: when making unauthenticated requests to GitHub
(as we must do to get repository data), GitHub imposes fairly strict rate-limits
on how many requests we can make. As such, you are likely to hit that limit
within a few minutes of work. You will then have to wait up to an hour to regain access.
For that reason, we recommend that you take the time and effort to set up the server
extension as well as the lab extension, which will allow you to access higher rate-limits.
This process is described in the [installation](#Installation) section.
## Prerequisites
- JupyterLab 3.0
- A GitHub account for the server extension
## Installation
As discussed above, this extension has both a server extension and a lab extension.
Both extensions will be installed by default when installing from PyPI, but you may
have only lab extension installed if you used the Extension Manager in JupyterLab 3.x.
We recommend completing the steps described below as to not be rate-limited.
The purpose of the server extension is to add GitHub credentials that you will need to acquire
from https://github.com/settings/developers, and then to proxy your request to GitHub.
For JupyterLab version older than 3 please see the instructions on the
[2.x branch](https://github.com/jupyterlab/jupyterlab-github/tree/2.x).
### 1. Installing both server and prebuilt lab extension
To install the both the server extension and (prebuilt) lab extension, enter the following in your terminal:
```bash
pip install jupyterlab-github
```
After restarting JupyterLab, the extension should work, and you can experience
the joys of being rate-limited first-hand!
### 2. Getting your credentials from GitHub
There are two approaches to getting credentials from GitHub:
(1) you can get an access token, (2) you can register an OAuth app.
The second approach is not recommended, and will be removed in a future release.
#### Getting an access token (**recommended**)
You can get an access token by following these steps:
1. [Verify](https://help.github.com/articles/verifying-your-email-address) your email address with GitHub.
1. Go to your account settings on GitHub and select "Developer Settings" from the left panel.
1. On the left, select "Personal access tokens"
1. Click the "Generate new token" button, and enter your password.
1. Give the token a description, and check the "**repo**" scope box.
1. Click "Generate token"
1. You should be given a string which will be your access token.
Remember that this token is effectively a password for your GitHub account.
_Do not_ share it online or check the token into version control,
as people can use it to access all of your data on GitHub.
#### Setting up an OAuth application (**deprecated**)
This approach to authenticating with GitHub is deprecated, and will be removed in a future release.
New users should use the access token approach.
You can register an OAuth application with GitHub by following these steps:
1. Log into your GitHub account.
1. Go to https://github.com/settings/developers and select the "OAuth Apps" tab on the left.
1. Click the "New OAuth App" button.
1. Fill out a name, homepage URL, description, and callback URL in the form.
This extension does not actually use OAuth, so these values actually _do not matter much_,
you just need to enter them to register the application.
1. Click the "Register application" button.
1. You should be taken to a new page with the new application information.
If you see fields showing "Client ID" and "Client Secret", congratulations!
These are the strings we need, and you have successfuly set up the application.
It is important to note that the "Client Secret" string is, as the name suggests, a secret.
_Do not_ share this value online, as people may be able to use it to impersonate you on GitHub.
### 3. Enabling and configuring the server extension
The server extension will be enabled by default on new JupyterLab installations
if you installed it with pip. If you used Extension Manager in JupyterLab 3.x,
please uninstall the extension and install it again with the instructions from point (1).
Confirm that the server extension is installed and enabled with:
```bash
jupyter server extension list
```
you should see the following:
```
- Validating jupyterlab_github...
jupyterlab_github 3.0.0 OK
```
On some older installations (e.g. old JupyterHub versions) which use jupyter
`notebook` server instead of the new `jupyter-server`, the extension needs to
show up on the legacy `serverextensions` list (note: no space between _server_ and _extension_):
```bash
jupyter serverextension list
```
If the extension is not enabled run:
```bash
jupyter server extension enable jupyterlab_github
```
or if using the legacy `notebook` server:
```bash
jupyter serverextension enable jupyterlab_github
```
You now need to add the credentials you got from GitHub
to your server configuration file. Instructions for generating a configuration
file can be found [here](https://jupyter-server.readthedocs.io/en/stable/users/configuration.html#configuring-a-jupyter-server).
Once you have identified this file, add the following lines to it:
```python
c.GitHubConfig.access_token = '< YOUR_ACCESS_TOKEN >'
```
where "`< YOUR_ACCESS_TOKEN >`" is the string value you obtained above.
If you generated an OAuth app, instead enter the following:
```python
c.GitHubConfig.client_id = '< YOUR_CLIENT_ID >'
c.GitHubConfig.client_secret = '< YOUR_CLIENT_SECRET >'
```
where "`< YOUR_CLIENT_ID >`" and "`< YOUR_CLIENT_SECRET >`" are the app values you obtained above.
With this, you should be done! Launch JupyterLab and look for the GitHub tab on the left!
## Customization
You can set the plugin to start showing a particular repository at launch time.
Open the "Advanced Settings" editor in the Settings menu,
and under the GitHub settings add
```json
{
"defaultRepo": "owner/repository"
}
```
where `owner` is the GitHub user/org,
and `repository` is the name of the repository you want to open.
| 40.530055 | 128 | 0.776325 | eng_Latn | 0.999028 |
2fa8294f1aacabfa30775bfca46ef2e5c2f67f9a | 491 | md | Markdown | README.md | bkerler/dump_avb_signature | 4b5778bcb4044f758d4f01081a9b59a7c184c292 | [
"MIT"
] | 44 | 2017-12-30T22:25:52.000Z | 2022-03-14T12:26:03.000Z | README.md | bkerler/dump_avb_signature | 4b5778bcb4044f758d4f01081a9b59a7c184c292 | [
"MIT"
] | 5 | 2018-01-15T09:59:36.000Z | 2022-03-18T23:31:12.000Z | README.md | bkerler/dump_avb_signature | 4b5778bcb4044f758d4f01081a9b59a7c184c292 | [
"MIT"
] | 15 | 2018-01-15T09:53:37.000Z | 2021-11-13T13:44:16.000Z | # Dump/Verify Android Verified Boot Signature Hash v1.5 (c) B.Kerler 2017-2019
Why
===
- For researching Android Verified Boot issues
- To exploit TZ image verification :)
Installation
=============
1. Get python 3.6 64-Bit
2. python -m pip install pycryptodome
Run
===
For AVB v1:
```
python verify_signature.py --file boot.img
```
For AVB v2:
```
python verify_signature.py --file boot.img --vbmeta vbmeta.img
```
Published under MIT license
Enjoy !
| 16.931034 | 79 | 0.657841 | eng_Latn | 0.469099 |
2fa8b5f43ec9e45303f96f4b6c1a3ac541bae4bf | 863 | md | Markdown | doc/RPS_DESIGN.md | ericccarlson0/cellular_automata | 00522f246e2553e7e713063f2b83fd9767360798 | [
"MIT"
] | null | null | null | doc/RPS_DESIGN.md | ericccarlson0/cellular_automata | 00522f246e2553e7e713063f2b83fd9767360798 | [
"MIT"
] | null | null | null | doc/RPS_DESIGN.md | ericccarlson0/cellular_automata | 00522f246e2553e7e713063f2b83fd9767360798 | [
"MIT"
] | null | null | null | Charles Papandreou - cnp20
Turner Jordan - tgj5
Eric Carlson - ecc45
Player
* int score
* destructor currSelection
* Collaborators:
* simulation holds this object type
* this object type uses game
Game
* Methods:
* Constructor
* populates data structure to hold interactions by destructor type
* populates data structure to hold allowed destructor types
* compare types
* takes in two destructors from simulation and then decides which wins, returning corresponding result
* Collaborators:
* player and simulation
Simulation
* creates game
* creates 2 players
* allows game interaction calling player methods and game methods
* has player select type of destructor
* based on type of destructors selected, calls game method to compare
* based on winner, increments score of correct player object
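The compare step described above can be sketched as a small lookup table. The course project itself would be Java, but the rule is language-agnostic; Python is used here for brevity, and all names below are illustrative rather than taken from the codebase:

```python
# Hypothetical sketch of the Game "compare types" method from the design above.
# The BEATS table maps each destructor type to the type it defeats.

BEATS = {
    "rock": "scissors",     # rock crushes scissors
    "scissors": "paper",    # scissors cut paper
    "paper": "rock",        # paper covers rock
}

def compare_types(selection_a, selection_b):
    """Return 0 on a tie, 1 if player A wins, 2 if player B wins."""
    if selection_a == selection_b:
        return 0
    return 1 if BEATS[selection_a] == selection_b else 2

def play_round(scores, selection_a, selection_b):
    """Increment the winning player's score, mirroring the simulation step."""
    result = compare_types(selection_a, selection_b)
    if result:
        scores[result - 1] += 1
    return scores
```

A real implementation could populate the interaction table in the `Game` constructor (as the design suggests), so new destructor types can be added without code changes.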
| 27.83871 | 110 | 0.741599 | eng_Latn | 0.998264 |
2fa9155ba29d7efdae567726da8400973c5297a8 | 1,542 | md | Markdown | docs/vs-2015/code-quality/c6515.md | Birgos/visualstudio-docs.de-de | 64595418a3cea245bd45cd3a39645f6e90cfacc9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/c6515.md | Birgos/visualstudio-docs.de-de | 64595418a3cea245bd45cd3a39645f6e90cfacc9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/c6515.md | Birgos/visualstudio-docs.de-de | 64595418a3cea245bd45cd3a39645f6e90cfacc9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: C6515 | Microsoft-Dokumentation
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-devops-test
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- C6515
helpviewer_keywords:
- C6515
ms.assetid: e0f21858-0fea-427b-965a-a7eff62e1371
caps.latest.revision: 20
author: mikeblome
ms.author: mblome
manager: ghogen
ms.openlocfilehash: 0c11a8f86c9dc572b64a2e7317e033d8c26768d1
ms.sourcegitcommit: af428c7ccd007e668ec0dd8697c88fc5d8bca1e2
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 11/16/2018
ms.locfileid: "51775276"
---
# <a name="c6515"></a>C6515
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
Warning C6515: Invalid annotation: the \<name> property may only be used on values of pointer type
This warning indicates that a property intended for use on pointers was applied to a non-pointer type. For a list of annotation properties, see [Annotation Properties](http://msdn.microsoft.com/en-us/f77b4370-6bda-4294-bd2a-e7d0df182a3d).
## <a name="example"></a>Example
The following code generates this warning:
```cpp
#include <sal.h>
void f(_Readable_bytes_(c) char pc, size_t c)
{
// code ...
}
```
To correct this warning, use the following code:
```cpp
#include <sal.h>
void f(_Readable_bytes_(c) char * pc, size_t c)
{
// code ...
}
```
## <a name="see-also"></a>See also
[C6516](../code-quality/c6516.md)
| 24.47619 | 273 | 0.720493 | deu_Latn | 0.579316 |
2fa9a955352344cc9ef6ca52afd9e00656d84801 | 4,963 | md | Markdown | README.template.md | matbrady/example-wordpress-composer | 17bc7dadc1cb9644232e4e438417f88b3d09c520 | [
"MIT"
] | 1 | 2021-01-22T15:25:14.000Z | 2021-01-22T15:25:14.000Z | README.template.md | matbrady/example-wordpress-composer | 17bc7dadc1cb9644232e4e438417f88b3d09c520 | [
"MIT"
] | 5 | 2020-07-14T15:02:53.000Z | 2021-03-10T14:24:34.000Z | README.template.md | matbrady/example-wordpress-composer | 17bc7dadc1cb9644232e4e438417f88b3d09c520 | [
"MIT"
] | 1 | 2020-07-10T13:06:49.000Z | 2020-07-10T13:06:49.000Z | ## Outline
- [Pantheon Environments](#pantheon-environments)
- [Local Development](#local-development)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
  - [Boot WordPress Application](#boot-wordpress-application)
- [Boot Theme (Webpack) Server](#boot-theme-webpack-server)
- [Pull Files and Database from Pantheon](#pull-files-and-database-from-pantheon)
- [Deploying](#deploying)
- [Deployment Using Terminus](#deployment-using-terminus)
- [Test Environment](#test-environment)
- [Live Environment](#live-environment)
- [Troubleshooting](/docs/troubleshooting.md)
## Pantheon Environments
- **Live** - http://live-fix-me.pantheonsite.io/
- **Test** - http://test-fix-me.pantheonsite.io/
- **Dev** - http://dev-fix-me.pantheonsite.io/
## Local Development
In order to more easily recreate the production environment locally, [Lando](https://lando.dev/) is used for local development. We also use Pantheon’s CLI, Terminus, to sync files and databases.
- **PHP Server** - http://fix-me.lndo.site/
- **Webpack Server** - https://localhost:3000
### Prerequisites
Install all the required local dependencies:
- [Git Version Control](https://git-scm.com/downloads)
- [Docker](https://www.docker.com/products/docker-desktop)
- Lando v3.0.6 ([Windows](https://docs.devwithlando.io/installation/windows.html), [macOS](https://docs.devwithlando.io/installation/macos.html))
- [Terminus](https://pantheon.io/docs/terminus/install/) 2.3.0, Pantheon’s CLI tool
- [Composer](https://getcomposer.org/doc/00-intro.md)
- [Node](https://nodejs.org/en/) 10.21.0
Note: The sage theme dependencies do not support version of Node greater than 10. We recommend [asdf](https://github.com/asdf-vm/asdf) for managing multiple versions of Node
- Yarn
([Windows](https://yarnpkg.com/en/docs/install#windows-stable), [macOS](https://yarnpkg.com/en/docs/install#mac-stable))
You'll also need write access to this repo and be a member of the [Pantheon Project](https://dashboard.pantheon.io/sites/FIXME#dev/code).
### Installation
1. Clone the Repo
`$ git clone https://github.com/Threespot/fix-me.git`
1. Install Application Composer Dependencies
`$ composer install`
1. Install Theme Composer Dependencies
- Navigate to the sage theme directory
`$ cd web/wp-content/themes/sage`
  - Install Composer deps
`$ composer install`
1. Install Theme Node Dependencies
- Navigate to the sage theme directory
`$ cd web/wp-content/themes/sage`
- Install Node deps
`$ yarn install` from the theme directory
### Boot WordPress Application
With all the dependencies installed, from the root project directory run:
```
lando start
```
If this is the first time running this command, Lando will build the necessary Docker containers.
To stop the server run:
```
lando stop
```
Other Lando CLI command can be read here in the [Lando docs](https://docs.lando.dev/basics/usage.html)
### Boot Theme (Webpack) Server
Making CSS or JS updates requires running Webpack to recompile and inject the CSS and JS.
1. Navigate to the theme folder
`$ cd /web/wp-content/themes/sage`
1. Install npm dependencies using Yarn
`$ yarn install`
1. Start Webpack
`$ yarn start`
1. You should now be able to view the site locally at https://localhost:3000
1. To stop the server, press <kbd>Control</kbd> + <kbd>C</kbd>
### Pull Files and Database from Pantheon
Lando is used to pull uploads and data from Pantheon. [See docs here](https://docs.lando.dev/config/pantheon.html#importing-your-database-and-files).
```
lando pull --database=test --files=test --code=none
```
## Deploying
Code committed to the remote `master` branch is automatically deployed to the `dev` environment on Pantheon. After a local branch is pushed, [CircleCI](https://circleci.com/gh/Threespot/fix-me) will build and deploy the files to Pantheon’s [dev environment](https://dashboard.pantheon.io/sites/5118c78c-b29d-467c-b178-2728fe3f293c#dev/code). You can tell CircleCI not to run by adding `[skip ci]` to the commit message.
Code that exists on `dev` can be promoted to the `test` environment, and `test` can be promoted to the `live` environment. Details about the application lifecycle can be read [here](https://pantheon.io/agencies/development-workflow/dev-test-live-workflow).
Feature branches with a corresponding pull request will create a multi-dev environment used for testing individual features. Docs are available [here](https://pantheon.io/docs/multidev)
### Deployment Using Terminus
#### Test Environment
Code will be promoted from `dev` to `test`
```shell
lando composer run-script deploy:test
```
**NOTE:** this composer script will also purge Pantheon's cache.
or
```shell
lando terminus env:deploy fix-me.test
```
#### Live Environment
Code will be promoted from `test` to `live`
```shell
lando composer run-script deploy:live
```
or
```shell
lando terminus env:deploy fix-me.live
``` | 35.198582 | 420 | 0.735039 | eng_Latn | 0.717551 |
2fa9b88c42544133a1d477cf83baa7a0c02c1815 | 885 | md | Markdown | cookbook-html/templates/reader/r1/readme.md | jsdelivrbot/jaanga.github.io | 7773c123ac30f6644f05f4b45e2fefbf0e4ed4a5 | [
"CC0-1.0"
] | null | null | null | cookbook-html/templates/reader/r1/readme.md | jsdelivrbot/jaanga.github.io | 7773c123ac30f6644f05f4b45e2fefbf0e4ed4a5 | [
"CC0-1.0"
] | null | null | null | cookbook-html/templates/reader/r1/readme.md | jsdelivrbot/jaanga.github.io | 7773c123ac30f6644f05f4b45e2fefbf0e4ed4a5 | [
"CC0-1.0"
] | null | null | null | [Jaange]( http://jaanga.github.io ) » [Cookbook HTML]( http://jaanga.github.io/cookbook-html/ ) » [Templates]( http://jaanga.github.io/cookbook-html/templates/ ) »
Reader Read Me
====
<span style=display:none; >[You are now in GitHub source code view - Click here to view Read Me file as a web page]( http://exploratoria.github.io/lib/reader/index.html "View file as a web page." ) </span>
<input type=button value='You are now in GitHub web page view - Click here to view Read Me file as source code' onclick=window.location.href='https://github.com/exploratoria/exploratoria.github.io/tree/master/lib/reader/'; />
By default, loads a file named readme.md, converts the Markdown to HTML, and displays the result.
Loads and displays any other Markdown file when its filename is included in the location hash.
An alternate version also loads MathJax configured to display LaTeX.
| 55.3125 | 225 | 0.751412 | eng_Latn | 0.674604 |
2faa00a7a30e8c266c59fca06a4d8e798effea4d | 405 | md | Markdown | _posts/2011-03-21-21-marzo-harlingen-atlantic-yachts.md | sy-polaris/sy-polaris.github.io | 5059cf4d339d4beebace8b7ed9e1a04160be39a4 | [
"MIT"
] | null | null | null | _posts/2011-03-21-21-marzo-harlingen-atlantic-yachts.md | sy-polaris/sy-polaris.github.io | 5059cf4d339d4beebace8b7ed9e1a04160be39a4 | [
"MIT"
] | null | null | null | _posts/2011-03-21-21-marzo-harlingen-atlantic-yachts.md | sy-polaris/sy-polaris.github.io | 5059cf4d339d4beebace8b7ed9e1a04160be39a4 | [
"MIT"
] | null | null | null | ---
id: 448
title: '21 marzo: Harlingen, Atlantic Yachts'
date: 2011-03-21T18:27:33+01:00
author: polaris
layout: post
categories:
- Senza categoria
---
Tomorrow: propeller cleaning, plus washing and inspection of the mast. Bulbs removed from the mast: masthead 10 W; bayonet-fitting steaming light, torpedo bulb, 10 W; deck light 10 W, small bayonet fitting.
We were shown the new Atlantic 46 Pilot House, now ready: beautiful!
| 33.75 | 184 | 0.762963 | ita_Latn | 0.868909 |
2fabc399f27cd6b2d9ca99b0ae33871f5575cc82 | 1,464 | md | Markdown | wiki-docs/v2/schemas/Trending-TrendingEntryDestinyRitual.md | kramrm/BungieNetPlatform | a96f233d133d7d61c0c7a8d80fbb054b33a2343e | [
"MIT"
] | 107 | 2016-03-04T16:07:52.000Z | 2022-02-20T04:27:08.000Z | wiki-docs/v2/schemas/Trending-TrendingEntryDestinyRitual.md | kramrm/BungieNetPlatform | a96f233d133d7d61c0c7a8d80fbb054b33a2343e | [
"MIT"
] | 20 | 2016-07-11T09:03:58.000Z | 2021-11-04T17:05:37.000Z | wiki-docs/v2/schemas/Trending-TrendingEntryDestinyRitual.md | kramrm/BungieNetPlatform | a96f233d133d7d61c0c7a8d80fbb054b33a2343e | [
"MIT"
] | 18 | 2016-03-12T18:58:40.000Z | 2021-11-04T23:03:05.000Z | <span class="wiki-builder">This page was generated with Wiki Builder. Do not change the format!</span>
## Info
## Schema
* **Schema Type:** Class
* **Type:** object
## Properties
Name | Type | Description
---- | ---- | -----------
image | string |
icon | string |
title | string |
subtitle | string |
dateStart | string:date-time:nullable |
dateEnd | string:date-time:nullable |
milestoneDetails | [[DestinyPublicMilestone|Destiny-Milestones-DestinyPublicMilestone]] | A destiny event does not necessarily have a related Milestone, but if it does the details will be returned here.
eventContent | [[DestinyMilestoneContent|Destiny-Milestones-DestinyMilestoneContent]] | A destiny event will not necessarily have milestone "custom content", but if it does the details will be here.
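A minimal consumer-side sketch of handling this schema's nullable date-time fields (Python, purely illustrative — this is not an official SDK helper, and the normalization choices are assumptions):

```python
from datetime import datetime
from typing import Optional

def parse_bungie_datetime(value: str) -> Optional[datetime]:
    """Bungie.net serializes date-time fields as ISO-8601 strings; a
    nullable field may arrive as an empty string or be absent entirely."""
    if not value:
        return None
    # Accept a trailing 'Z' (UTC) by converting it to an explicit offset.
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def parse_trending_ritual(payload: dict) -> dict:
    """Normalize a TrendingEntryDestinyRitual payload (hypothetical helper)."""
    return {
        "title": payload.get("title", ""),
        "subtitle": payload.get("subtitle", ""),
        "dateStart": parse_bungie_datetime(payload.get("dateStart", "")),
        "dateEnd": parse_bungie_datetime(payload.get("dateEnd", "")),
        "milestoneDetails": payload.get("milestoneDetails") or {},
        "eventContent": payload.get("eventContent") or {},
    }
```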
## Example
```javascript
{
// Type: string
"image": "",
// Type: string
"icon": "",
// Type: string
"title": "",
// Type: string
"subtitle": "",
// Type: string:date-time:nullable
"dateStart": "",
// Type: string:date-time:nullable
"dateEnd": "",
// Type: [[DestinyPublicMilestone|Destiny-Milestones-DestinyPublicMilestone]]
"milestoneDetails": {},
// Type: [[DestinyMilestoneContent|Destiny-Milestones-DestinyMilestoneContent]]
"eventContent": {}
}
```
## References
1. https://bungie-net.github.io/multi/schema_Trending-TrendingEntryDestinyRitual.html#schema_Trending-TrendingEntryDestinyRitual
| 31.826087 | 208 | 0.696038 | eng_Latn | 0.523998 |
2fabf94fec0966a804528ae77318adeb0f2b20f8 | 6,482 | md | Markdown | README.md | N1nja0p/aiocv | 69e986f2b3f9e42e72deff04050135736fd5ea42 | [
"MIT"
] | 2 | 2021-07-23T21:25:15.000Z | 2021-11-01T04:16:14.000Z | README.md | N1nja0p/aiocv | 69e986f2b3f9e42e72deff04050135736fd5ea42 | [
"MIT"
] | null | null | null | README.md | N1nja0p/aiocv | 69e986f2b3f9e42e72deff04050135736fd5ea42 | [
"MIT"
] | null | null | null | # AIOCV
aiocv Is A Python Library Used To Track Hands, Track Pose, Detect Face, Detect Contours (Shapes), Detect Cars, Detect Number Plate, Detect Smile, Detect Eyes, Control Volume Using Gesture, Read QR Codes And Create Face Mesh On Image/Video.
## Installation
Use the package manager [pip](https://pypi.org/project/aiocv/) to install aiocv.
```bash
pip install aiocv
```
## Usage
#### Hand Tracking
```python
import aiocv
import cv2
img = cv2.imread("hands.png")
# Make An Object
hands = aiocv.HandTrack()
# Use findHands() Method To Track Hands On Image/Video
hands.findHands(img,draw=True)
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findHands() Method :
```python
findHands(self,img=None,draw=True)
```
#### If You Are Not Getting Desired Results, Consider Changing detectionConfidence = 1 and trackConfidence = 1
#### Output :

#### Pose Detector
```python
import aiocv
import cv2
img = cv2.imread("man.png")
# Make An Object
pose = aiocv.PoseDetector()
# Use findPose() Method To Detect Pose On Image/Video
pose.findPose(img,draw=True)
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findPose() Method :
```python
findPose(self,img=None,draw=True)
```
#### If You Are Not Getting Desired Results, Consider Changing detectionConfidence = 1 and trackConfidence = 1
#### Output :

#### Face Detection
```python
import aiocv
import cv2
img = cv2.imread("elon_musk.png")
# Make An Object
face = aiocv.FaceDetector()
# Use findFace() Method To Detect Face On Image/Video
face.findFace(img,draw=True)
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findFace() Method :
```python
findFace(self,img=None,draw=True)
```
#### If You Are Not Getting Desired Results, Consider Changing detectionConfidence = 1
#### Output :

#### Face Mesh
```python
import aiocv
import cv2
img = cv2.imread("elon_musk.png")
# Make An Object
mesh = aiocv.FaceMesh()
# Use findFaceMesh() Method To Detect Face And Draw Mesh On Image/Video
mesh.findFaceMesh(img,draw=True)
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findFaceMesh() Method :
```python
findFaceMesh(self,img=None,draw=True)
```
#### If You Are Not Getting Desired Results, Consider Changing detectionConfidence = 1 and trackConfidence = 1
#### Output :

#### Contour (Shape) Detection
```python
import aiocv
import cv2
img = cv2.imread("shapes.png")
# Make An Object
shape = aiocv.ContourDetector(img)
# Use findContours() Method To Detect Shapes On Image/Video
shape.findContours(img,draw=True)
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Output :

#### Car Detection
```python
import aiocv
import cv2
img = cv2.imread("car.png")
# Make An Object
car = aiocv.CarDetector(img)
# Use findCars() Method To Detect Cars On Image/Video
car.findCars()
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findCars() Method :
```python
findCars(self,color=(255,0,0),thickness=2)
```
#### Output :

#### Number Plate Detection
```python
import aiocv
import cv2
img = cv2.imread("car.png")
# Make An Object
car = aiocv.NumberPlateDetector(img)
# Use findNumberPlate() Method To Detect Number Plate On Image/Video
car.findNumberPlate()
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findNumberPlate() Method :
```python
findNumberPlate(self,color=(255,0,0),thickness=2)
```
#### Output :

#### Smile Detection
```python
import aiocv
import cv2
img = cv2.imread("person.png")
# Make An Object
smile = aiocv.SmileDetector(img)
# Use findSmile() Method To Detect Smile On Image/Video
smile.findSmile()
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findSmile() Method :
```python
findSmile(self,color=(255,0,0),thickness=2)
```
#### Output :

#### Eyes Detection
```python
import aiocv
import cv2
img = cv2.imread("person.png")
# Make An Object
eyes = aiocv.EyesDetector(img)
# Use findEyes() Method To Detect Eyes On Image/Video
eyes.findEyes()
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findEyes() Method :
```python
findEyes(self,color=(255,0,0),thickness=2)
```
#### Output :

#### Control Volume Using Gesture
```python
import aiocv
# Make An Object
gvc = aiocv.GestureVolumeControl()
# Use controlVolume() Method To Control Volume
gvc.controlVolume()
```
#### Params For controlVolume() Method :
```python
controlVolume(self,color=(255,0,0),thickness=2)
```
#### Params For GestureVolumeControl Class :
```python
gvc = aiocv.GestureVolumeControl(webcamIndex = 0)
# If You Want To Control From Other Camera, Set The webcamIndex Accordingly.
```
#### Output :

#### Read QR Code
```python
import aiocv
import cv2
img = cv2.imread("qr.png")
# Make An Object
qr = aiocv.QRCodeReader(img)
# Use findQRCode() Method To Detect QR Code On Image/Video
text=qr.findQRCode()
cv2.imshow("Image",img)
cv2.waitKey(0)
```
#### Params For findQRCode() Method :
```python
findQRCode(self,color=(255,0,0),thickness=3)
```
#### To Print The Extracted Text :
```python
print(text)
```
#### Output :

## Contributing
Pull Requests Are Welcome. For Major Changes, Please Open An Issue First To Discuss What You Would Like To Change.
Please Make Sure To Update Tests As Appropriate.
## License
[MIT](https://github.com/N1nja0p/aiocv/blob/main/LICENCE.txt)
| 28.9375 | 240 | 0.704104 | kor_Hang | 0.333117 |
2fac0077d45e47b9e719a1736b3c57b8ef5cb63b | 126 | md | Markdown | about.md | Ledmington/filippobarbari.github.io | d2bd13f3bc1e380b49ed6ac47c29ec36c76b8133 | [
"MIT"
] | null | null | null | about.md | Ledmington/filippobarbari.github.io | d2bd13f3bc1e380b49ed6ac47c29ec36c76b8133 | [
"MIT"
] | null | null | null | about.md | Ledmington/filippobarbari.github.io | d2bd13f3bc1e380b49ed6ac47c29ec36c76b8133 | [
"MIT"
] | null | null | null | ---
title: About
menus: header
layout: about-me
permalink: /about/
summary: "About Filippo Barbari: developer and student"
--- | 18 | 55 | 0.738095 | eng_Latn | 0.872299 |
2fac07d2c700d64ce3e484f2ccce9908005cf103 | 60 | md | Markdown | oeis/2/30/3522/1066590/604935042/551609685150/737740947722562/1360427147514751710/3308161927353377294082/10256718523496425979562270/A000187.md | hloeffler/oeis-dirs | 1a43b8b9d09cd6304a5d04789c133dcccb57e532 | [
"BSD-2-Clause"
] | null | null | null | oeis/2/30/3522/1066590/604935042/551609685150/737740947722562/1360427147514751710/3308161927353377294082/10256718523496425979562270/A000187.md | hloeffler/oeis-dirs | 1a43b8b9d09cd6304a5d04789c133dcccb57e532 | [
"BSD-2-Clause"
] | null | null | null | oeis/2/30/3522/1066590/604935042/551609685150/737740947722562/1360427147514751710/3308161927353377294082/10256718523496425979562270/A000187.md | hloeffler/oeis-dirs | 1a43b8b9d09cd6304a5d04789c133dcccb57e532 | [
"BSD-2-Clause"
] | null | null | null | http://oeis.org/A000187
Generalized Euler numbers, c(5,n).
| 15 | 34 | 0.733333 | yue_Hant | 0.347793 |
2facd4bfe4c063d650bc466aa98205bdd481c11b | 123 | md | Markdown | README.md | celerstudio/celerstudio.github.io | 5ebd880ca307aafb61dabb8788e8231b8ca37c39 | [
"MIT"
] | null | null | null | README.md | celerstudio/celerstudio.github.io | 5ebd880ca307aafb61dabb8788e8231b8ca37c39 | [
"MIT"
] | null | null | null | README.md | celerstudio/celerstudio.github.io | 5ebd880ca307aafb61dabb8788e8231b8ca37c39 | [
"MIT"
] | null | null | null | Celerstudio portfolio site inspired by [Responsive Full-page Parallax Slider](https://codepen.io/takaneichinose/pen/grKQJx) | 123 | 123 | 0.837398 | eng_Latn | 0.567379 |
2fad5b273a761c225591b74f01ba692be09726b5 | 68 | md | Markdown | README.md | mariocesar/webhoooks.py | 970cd90aee3835dcf2b61164981d024b84ba3140 | [
"MIT"
] | null | null | null | README.md | mariocesar/webhoooks.py | 970cd90aee3835dcf2b61164981d024b84ba3140 | [
"MIT"
] | null | null | null | README.md | mariocesar/webhoooks.py | 970cd90aee3835dcf2b61164981d024b84ba3140 | [
"MIT"
] | null | null | null | # webhoooks.py
Runs commands using a web app, just by configuration
| 22.666667 | 52 | 0.794118 | eng_Latn | 0.99723 |
2fae1fcebab4951fca39155b91937d62e6864660 | 1,118 | md | Markdown | README.md | janioalbuquerque/Projeto_07 | cea0173a03c6c0e073beadde9babc671c3a7850a | [
"MIT"
] | null | null | null | README.md | janioalbuquerque/Projeto_07 | cea0173a03c6c0e073beadde9babc671c3a7850a | [
"MIT"
] | null | null | null | README.md | janioalbuquerque/Projeto_07 | cea0173a03c6c0e073beadde9babc671c3a7850a | [
"MIT"
] | null | null | null | # Site Danki Code
This project is a website developed as a blog, where people can browse posts about places, assorted messages, and so on. It was built during DankiCode's WebMaster course, where I learned HTML/CSS/JS and was able to put that knowledge into practice with this site.
Screenshots below give a better view of the project:

****************************************

****************************************

****************************************

****************************************

| 62.111111 | 313 | 0.694097 | por_Latn | 0.864665 |
2fae6296bf2366381cb9ef6ef8b1555477b5b395 | 3,847 | md | Markdown | articles/application-gateway/ingress-controller-private-ip.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/application-gateway/ingress-controller-private-ip.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/application-gateway/ingress-controller-private-ip.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Verwenden der privaten IP-Adresse für das interne Routing für einen Eingangsendpunkt
description: Dieser Artikel enthält Informationen dazu, wie Sie private IP-Adressen für das interne Routing verwenden und so den Eingangsendpunkt in einem Cluster für den Rest des VNETs zu Verfügung stellen.
services: application-gateway
author: caya
ms.service: application-gateway
ms.topic: how-to
ms.date: 11/4/2019
ms.author: caya
ms.openlocfilehash: 33b70ba8ab7ffef90c42f53e58a2d27e619862f0
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/09/2020
ms.locfileid: "84806796"
---
# <a name="use-private-ip-for-internal-routing-for-an-ingress-endpoint"></a>Use private IP for internal routing for an ingress endpoint
This feature allows the ingress endpoint to be exposed within the `Virtual Network` using a private IP.
## <a name="pre-requisites"></a>Prerequisites
Create an Application Gateway with a [private IP configuration](https://docs.microsoft.com/azure/application-gateway/configure-application-gateway-with-private-frontend-ip)
There are two ways to configure the controller to use a private IP for ingress traffic:
## <a name="assign-to-a-particular-ingress"></a>Assign to a particular ingress
To expose a particular ingress over a private IP, use the annotation [`appgw.ingress.kubernetes.io/use-private-ip`](./ingress-controller-annotations.md#use-private-ip) in the ingress resource.
### <a name="usage"></a>Usage
```yaml
appgw.ingress.kubernetes.io/use-private-ip: "true"
```
For Application Gateways without a private IP, ingresses annotated with `appgw.ingress.kubernetes.io/use-private-ip: "true"` are ignored. This is indicated in the ingress event and in the AGIC pod log.
* Error as indicated in the ingress event
```bash
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning NoPrivateIP 2m (x17 over 2m) azure/application-gateway, prod-ingress-azure-5c9b6fcd4-bctcb Ingress default/hello-world-ingress requires Application Gateway
applicationgateway3026 has a private IP address
```
* Error as indicated in the AGIC logs
```bash
E0730 18:57:37.914749 1 prune.go:65] Ingress default/hello-world-ingress requires Application Gateway applicationgateway3026 has a private IP address
```
## <a name="assign-globally"></a>Assign globally
If the requirement is to restrict all ingresses to be served over a private IP, use `appgw.usePrivateIP: true` in the `helm` configuration.
### <a name="usage"></a>Usage
```yaml
appgw:
subscriptionId: <subscriptionId>
resourceGroup: <resourceGroupName>
name: <applicationGatewayName>
usePrivateIP: true
```
This makes the ingress controller filter its IP address configurations down to a private IP when configuring the frontend listeners on the Application Gateway.
AGIC will panic and crash if `usePrivateIP: true` is set and no private IP has been assigned.
> [!NOTE]
> The Application Gateway v2 SKU requires a public IP. If Application Gateway needs to be private, attach a [`Network Security Group`](https://docs.microsoft.com/azure/virtual-network/security-overview) to the Application Gateway's subnet to restrict traffic.
| 54.957143 | 303 | 0.750715 | deu_Latn | 0.906265 |
2fafb2e1e8d62885b6c207edc7358d7adb22b72c | 3,250 | md | Markdown | README.md | jipaman/thymeleaf-fragments-example | 6bde22e49ddfafe326bdacc7489f821e2c713c08 | [
"Apache-2.0"
] | 9 | 2017-08-07T11:37:28.000Z | 2019-05-03T20:32:49.000Z | README.md | jipaman/thymeleaf-fragments-example | 6bde22e49ddfafe326bdacc7489f821e2c713c08 | [
"Apache-2.0"
] | null | null | null | README.md | jipaman/thymeleaf-fragments-example | 6bde22e49ddfafe326bdacc7489f821e2c713c08 | [
"Apache-2.0"
] | 11 | 2017-10-04T18:38:19.000Z | 2021-03-23T12:26:01.000Z | # Thymeleaf Fragments Example
[](https://travis-ci.org/juliuskrah/thymeleaf-fragments-example)
This is to illustrate layouts example in thymeleaf using the Thymeleaf Standard Layout System. For more
information on using the standard template check out [this][layouts-blog] article.
The template used in this example was downloaded from [Start Boostrap][sb-admin 2].
This example uses [Thymeleaf 3.0.0.RELEASE][Thymeleaf 3 announcement] to render the templates.
Some minor configuration changes are required to use Thymeleaf 3 with [Spring 4][Spring Framework].
## Pre-requisites
- Maven 3.3+
- Java 8+
- Jetty 9+ or Tomcat 8+ (Optional)
- Spring Framework 4.2.x
- Thymeleaf 3.0.x
## Configuration
The first thing required is to declare your dependencies in your `pom.xml`:
```xml
<dependency>
<groupId>org.thymeleaf</groupId>
<artifactId>thymeleaf</artifactId>
<version>3.0.0.RELEASE</version>
</dependency>
<dependency>
<groupId>org.thymeleaf</groupId>
<artifactId>thymeleaf-spring4</artifactId>
<version>3.0.0.RELEASE</version>
</dependency>
```
The second thing to do is your Spring configuration:
```java
@Configuration
@EnableWebMvc
@ComponentScan(basePackageClasses = { Controllers.class })
public class WebAppConfig extends WebMvcConfigurerAdapter implements ApplicationContextAware {
private ApplicationContext applicationContext;
private static final String VIEWS = "classpath:templates/";
...
@Bean
public ViewResolver viewResolver() {
ThymeleafViewResolver resolver = new ThymeleafViewResolver();
resolver.setTemplateEngine(templateEngine());
resolver.setCharacterEncoding("UTF-8");
return resolver;
}
private TemplateEngine templateEngine() {
SpringTemplateEngine engine = new SpringTemplateEngine();
engine.setTemplateResolver(templateResolver());
return engine;
}
private ITemplateResolver templateResolver() {
SpringResourceTemplateResolver resolver = new SpringResourceTemplateResolver();
resolver.setApplicationContext(applicationContext);
resolver.setPrefix(VIEWS);
resolver.setSuffix(".html");
resolver.setTemplateMode(TemplateMode.HTML);
return resolver;
}
}
```
The first difference with the Thymeleaf 3 configuration is that now the recommended template resolver for Spring applications is
`SpringResourceTemplateResolver`. It needs a reference to the Spring `ApplicationContext` so the configuration bean has to implement
the `ApplicationContextAware` interface.
The second difference is that the template mode has a value of `TemplateMode.HTML`. Template modes are not strings anymore and the
possible values are a bit different from Thymeleaf 2.
## Running the Application
You can run the application with the Maven goal `mvn jetty:run`, which will start an embedded Jetty instance.
Visit your new application on [http://127.0.0.1:8080/](http://127.0.0.1:8080/).
[layouts-blog]: http://www.thymeleaf.org/doc/articles/layouts.html
[sb-admin 2]: http://startbootstrap.com/template-overviews/sb-admin-2/
[Thymeleaf 3 announcement]: http://forum.thymeleaf.org/Thymeleaf-3-0-is-here-td4029676.html
[Spring Framework]: http://projects.spring.io/spring-framework/
| 36.111111 | 159 | 0.783692 | eng_Latn | 0.736334 |
2fafb83856649c33e43de8ec6aa421e47df1de85 | 2,236 | md | Markdown | articles/cognitive-services/Speech-Service/includes/quickstarts/platform/cpp-macos.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Speech-Service/includes/quickstarts/platform/cpp-macos.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/cognitive-services/Speech-Service/includes/quickstarts/platform/cpp-macos.md | BielinskiLukasz/azure-docs.pl-pl | 952ecca251b3e6bdc66e84e0559bbad860a886b9 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Szybki Start: Konfiguracja platformy SDK C++ (macOS) — usługa mowy'
titleSuffix: Azure Cognitive Services
description: Skorzystaj z tego przewodnika, aby skonfigurować platformę dla języka C++ w systemie macOS za pomocą zestawu Speech Service SDK.
services: cognitive-services
author: markamos
manager: nitinme
ms.service: cognitive-services
ms.subservice: speech-service
ms.topic: include
ms.date: 10/15/2020
ms.author: erhopf
ms.openlocfilehash: 3ad8eb9564c4d8343a0763cc2f6f5061ee602b72
ms.sourcegitcommit: 93329b2fcdb9b4091dbd632ee031801f74beb05b
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 10/15/2020
ms.locfileid: "92096984"
---
This guide shows how to install the [Speech SDK](~/articles/cognitive-services/speech-service/speech-sdk.md) for C++ on macOS 10.13 and later.
[!INCLUDE [License Notice](~/includes/cognitive-services-speech-service-license-notice.md)]
## <a name="system-requirements"></a>System requirements
macOS 10.13 and later
## <a name="install-speech-sdk"></a>Install the Speech SDK
1. Choose a directory where the Speech SDK files should be extracted, and set the `SPEECHSDK_ROOT` environment variable to point to that directory. This variable makes it easy to refer to the directory in future commands. For example, if you want to use the directory `speechsdk` in your home directory, use a command similar to the following:
```sh
export SPEECHSDK_ROOT="$HOME/speechsdk"
```
1. Create the directory if it doesn't exist yet.
```sh
mkdir -p "$SPEECHSDK_ROOT"
```
1. Download and extract the `.zip` archive containing the Speech SDK framework:
```sh
wget -O SpeechSDK-macOS.zip https://aka.ms/csspeech/macosbinary
unzip SpeechSDK-macOS.zip -d "$SPEECHSDK_ROOT"
```
1. Validate the contents of the top-level directory of the extracted package:
```sh
ls -l "$SPEECHSDK_ROOT"
```
The directory listing should contain the third-party notice and license files, as well as a `MicrosoftCognitiveServicesSpeech.framework` directory.
Now you can proceed to the [next steps](#next-steps) below.
## <a name="next-steps"></a>Next steps
[!INCLUDE [windows](../quickstart-list.md)]
2fb0884690c569e95c24d2c5a39a4218f32cb9da | 675 | md | Markdown | README.md | KatVHarris/MovieBot | 4fa775aa838b5010ba1014175289b4bbaea909ad | [
"MIT"
] | null | null | null | README.md | KatVHarris/MovieBot | 4fa775aa838b5010ba1014175289b4bbaea909ad | [
"MIT"
] | null | null | null | README.md | KatVHarris/MovieBot | 4fa775aa838b5010ba1014175289b4bbaea909ad | [
"MIT"
] | null | null | null | # NewsBOT
## Summary
NewsBot is a bot built with the Microsoft Bot Framework. It uses the Bing Text-to-Speech API to read off headlines from an RSS feed.
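The headline-extraction step can be sketched independently of the Bot Framework. The snippet below is a hypothetical illustration (not code from this repo), written in JavaScript rather than the project's C# for brevity:

```javascript
// Minimal sketch: pull <title> elements out of an RSS 2.0 document string.
// A real implementation would fetch the feed over HTTP and use a proper XML
// parser; this regex version is only for illustration.
function extractTitles(rssXml) {
  const titles = [];
  const re = /<title>([^<]*)<\/title>/g;
  let match;
  while ((match = re.exec(rssXml)) !== null) {
    titles.push(match[1].trim());
  }
  // The first <title> is the channel title, not a headline.
  return titles.slice(1);
}

const sampleFeed =
  "<rss><channel><title>Demo Feed</title>" +
  "<item><title>Headline one</title></item>" +
  "<item><title>Headline two</title></item>" +
  "</channel></rss>";

const headlines = extractTitles(sampleFeed);
console.log(headlines); // → [ 'Headline one', 'Headline two' ]
```

Each extracted headline would then be handed to the text-to-speech call in the message controller.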
## Build
To build, simply hit Run. The BTTS (Bing Text-to-Speech) calls are in the Message Controller.
## Contributors
* KatVHarris
* AJLange
* ShahedC
## Todos
* Segment out the calls into Dialogs
* Add audio output for multiple channels
## Issues
#### Models
The Models are generated from the raw JSON after the Newtonsoft library converts RSS XML to JSON. The problem with this is that the feeds have different formats.
#### Audio
When using the BTTS API, audio will only play if the connector/IDE that you are using has an audio player.
2fb0d918260443041446f42433d786f5db351d06 | 666 | md | Markdown | README.md | CarbonAlabel/rip-eve-igb | ffafedaab7f4ac2c5f6bb8e7b7455471355cd989 | [
"MIT"
] | null | null | null | README.md | CarbonAlabel/rip-eve-igb | ffafedaab7f4ac2c5f6bb8e7b7455471355cd989 | [
"MIT"
] | null | null | null | README.md | CarbonAlabel/rip-eve-igb | ffafedaab7f4ac2c5f6bb8e7b7455471355cd989 | [
"MIT"
] | null | null | null | # rip-eve-igb
Gently tell the visitors of your website that they shouldn't be using the EVE Online in-game browser anymore.
This nginx config snippet will ensure that all visitors to your site get a reminder that they really shouldn't be using the IGB.
## Instructions
Move the [no_igb.html](no_igb.html) file to `/usr/share/nginx` and the [no_igb.conf](no_igb.conf) file to `/etc/nginx`.
Include the configuration file in your server block. Example:
````nginx
server{
server_name example.com;
listen 443 ssl http2;
listen [::]:443 ssl http2;
root /usr/share/nginx/example;
#Rest of your configuration goes here...
include /etc/nginx/no_igb.conf;
}
````
| 31.714286 | 128 | 0.746246 | eng_Latn | 0.978739 |
2fb0ee437a80bff012b84ed83faaa62bec62728e | 130 | md | Markdown | frontend/react/developOnLocalDevEnv/readme.md | sohrabsaran/softwareSkillsDemos | 1e607e4f3647b62b752a563455510bed8ff50542 | [
"MIT"
] | null | null | null | frontend/react/developOnLocalDevEnv/readme.md | sohrabsaran/softwareSkillsDemos | 1e607e4f3647b62b752a563455510bed8ff50542 | [
"MIT"
] | null | null | null | frontend/react/developOnLocalDevEnv/readme.md | sohrabsaran/softwareSkillsDemos | 1e607e4f3647b62b752a563455510bed8ff50542 | [
"MIT"
] | 6 | 2020-04-20T04:02:37.000Z | 2021-04-09T15:13:41.000Z | index.html is deployed <a href="https://sohrabsaran.github.io/softwareSkillsDemos/frontend/react/developOnLocalDevEnv/">here</a>.
| 65 | 129 | 0.807692 | yue_Hant | 0.276515 |
2fb1543a13424c6ec88ed421879ef6152b894e4b | 4,393 | md | Markdown | articles/remoteapp/remoteapp-o365user.md | OpenLocalizationTestOrg/azure-docs-pr15_pt-BR | 95dabd136ee50edd2caa1216e745b9f13ff7a1f2 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | 1 | 2018-08-29T17:03:44.000Z | 2018-08-29T17:03:44.000Z | articles/remoteapp/remoteapp-o365user.md | OpenLocalizationTestOrg/azure-docs-pr15_pt-BR | 95dabd136ee50edd2caa1216e745b9f13ff7a1f2 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/remoteapp/remoteapp-o365user.md | OpenLocalizationTestOrg/azure-docs-pr15_pt-BR | 95dabd136ee50edd2caa1216e745b9f13ff7a1f2 | [
"CC-BY-3.0",
"CC-BY-4.0",
"MIT"
] | null | null | null |
<properties
pageTitle="How to use Azure RemoteApp with Office 365 user accounts | Microsoft Azure"
description="Learn how to use Azure RemoteApp with your Office 365 user accounts"
services="remoteapp"
documentationCenter=""
authors="piotrci"
manager="mbaldwin" />
<tags
ms.service="remoteapp"
ms.workload="compute"
ms.tgt_pltfrm="na"
ms.devlang="na"
ms.topic="article"
ms.date="08/15/2016"
ms.author="elizapo" />
# <a name="how-to-use-azure-remoteapp-with-office-365-user-accounts"></a>How to use Azure RemoteApp with Office 365 user accounts
> [AZURE.IMPORTANT]
> Azure RemoteApp is being discontinued. Read the [announcement](https://go.microsoft.com/fwlink/?linkid=821148) for details.
If you have an Office 365 subscription, you have an Azure Active Directory (Azure AD) that stores the user names and passwords used to access Office 365 services. For example, when users activate Office 365 ProPlus, they authenticate against Azure AD to check for licenses. Most customers would like to use that same directory with Azure RemoteApp.
If you are deploying Azure RemoteApp, you are probably using an Azure subscription that is associated with a different Azure AD. To be able to use your Office 365 directory, you need to move the subscription to that directory.
For information about deploying Office 365 client applications, see [How to use your Office 365 subscription with Azure RemoteApp](remoteapp-officesubscription.md).
## <a name="phase-1-register-your-free-office-365-azure-active-directory-subscription"></a>Phase 1: Register your free Office 365 Azure Active Directory subscription
If you are using the classic Azure portal, use the steps in [Register your free Azure Active Directory subscription](https://technet.microsoft.com/library/dn832618.aspx) to get administrative access to your Azure AD through the Azure Management Portal. As a result of this process, you should be able to sign in to the Azure portal and see your directory there. At this point you won't see much more, since the full Azure subscription that you are using with Azure RemoteApp is in a different directory.
Remember the name and password of the administrator account you created in this step; you will need them in Phase 2.
If you are using the Azure portal, see [How to register and activate a free Azure Active Directory using the Office 365 portal](http://azureblogger.com/2016/01/how-to-register-and-activate-a-free-azure-active-directory-using-office-365-portal/).
## <a name="phase-2-change-the-azure-ad-associated-with-your-azure-subscription"></a>Phase 2: Change the Azure AD associated with your Azure subscription
Let's change your Azure subscription from its current directory to the Office 365 directory that we set up in Phase 1.
Follow the instructions described in [Change the Azure Active Directory tenant of Azure RemoteApp](remoteapp-changetenant.md). Pay special attention to the following steps:
- Step 1: If you have deployed Azure RemoteApp (ARA) in this subscription, make sure you remove all Azure AD user accounts from any ARA collections before trying anything else. Alternatively, consider deleting any existing collections.
- Step 2: This is a key step. You need to use a Microsoft account (for example, an @outlook.com account) as the service administrator of the subscription. This is because there must be no existing Azure AD user accounts attached to the subscription; if there are, it cannot be moved to a different Azure AD.
- Step 4: When adding an existing directory, the system prompts you to sign in with the administrator account for that directory. Make sure you use the administrator account from Phase 1.
- Step 5: Change the parent directory of the subscription to your Office 365 directory. The end result should be that under Settings -> Subscriptions, your subscription is listed in the Office 365 directory.

At this point, your Azure RemoteApp subscription is associated with the Office 365 Azure AD; you can use your existing Office 365 user accounts with Azure RemoteApp!
2fb1df4a472152be9bbf32f261f395cc01a978ab | 3,567 | md | Markdown | README.md | zhangjunhao0/data-512-a2 | 14edeffb92495d5fc870d96c8717bf1b60b5b7f3 | [
"MIT"
] | null | null | null | README.md | zhangjunhao0/data-512-a2 | 14edeffb92495d5fc870d96c8717bf1b60b5b7f3 | [
"MIT"
] | null | null | null | README.md | zhangjunhao0/data-512-a2 | 14edeffb92495d5fc870d96c8717bf1b60b5b7f3 | [
"MIT"
] | null | null | null | # A2: Bias in Data
## Overview
The goal of this project is to explore potential bias in Wikipedia articles. In particular, we are interested in Wikipedia articles on political figures of each country. We will use Wikipedia article data, population data, and a machine learning service called ORES to examine the coverage and quality of articles across different countries and regions.
## Data
For this project we have three sources to retrieve our data.
* Wikipedia politicians articles by country dataset: https://figshare.com/articles/dataset/Untitled_Item/5513449. This dataset is posted on figshare and licensed under the CC-BY-SA 4.0: https://creativecommons.org/licenses/by-sa/4.0/. Terms of use: https://wikimediafoundation.org/wiki/Terms_of_Use/en. The data is included in this repository in the file page_data.csv. It contains the following attributes:
* page: name of Wikipedia article
* country: name of the country
* rev_id: revision id of the article. This will be used to retrieve quality score from an API later.
* Population dataset: https://www.prb.org/international/indicator/population/table/ by Population Reference Bureau. No license information found. This dataset is in this repository in the file WPDS_2020_data.csv. It contains the following attributes:
* FIPS: abbreviation of the country/geographical region
* Name: name of the country/geographical region
* Type: indicate the type of region: world, sub-region, or country
* TimeFrame: 2019 for this dataset
* Data (M): population of the country in millions
* Population: population of country
* Wikipedia article score. This data is available through the machine learning API ORES (Objective Revision Evaluation Service). Description: https://www.mediawiki.org/wiki/ORES. API: https://ores.wikimedia.org/v3/#!/scoring/get_v3_scores_context_revid_model. It rates each article as one of six quality classes: FA, GA, B, C, Start, Stub. Here we consider FA (featured article) and GA (good article) to be high-quality articles.
* Note that this API takes in revision ids of articles as input for evaluation, and some revision ids from the articles dataset do not have a rating. These articles are excluded from analysis and their revision ids are listed in the file articles_with_no_ores_scores.
## Data processing and analysis
The code for data cleaning, processing, and analysis, including getting ORES scores from the API, is included in hcds-a2-bias.ipynb. During data cleaning and preprocessing, a few articles from the article dataset did not match any country in the population dataset, and a few countries in the population dataset did not match any article. In either case, the data is excluded from analysis, and these rows are shown in wp_wpds_countries-no_match.csv.
The remaining well-matched data is output to wp_wpds_politicians_by_country.csv. Both of these CSV files have the following schema:
* country: name of the country
* article_name: name of Wikipedia article
* revision_id: revision id of the article
* article_quality_est.: the ORES rating of article quality
* population: population of the country
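The country-matching split described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the notebook's actual code; the field names follow the schema above:

```python
def split_by_match(articles, populations):
    """Split article rows into (matched, unmatched) by country name.

    articles: list of dicts with a "country" key (from page_data.csv)
    populations: dict mapping country name -> population (from the WPDS data)
    """
    matched, unmatched = [], []
    for row in articles:
        if row["country"] in populations:
            # Merge the population into the article row, as in
            # wp_wpds_politicians_by_country.csv.
            matched.append(dict(row, population=populations[row["country"]]))
        else:
            # Rows like this end up in wp_wpds_countries-no_match.csv.
            unmatched.append(row)
    return matched, unmatched

# Tiny hypothetical sample, not real data:
articles = [
    {"country": "Kenya", "page": "Some Politician", "rev_id": 1},
    {"country": "Atlantis", "page": "Imaginary Politician", "rev_id": 2},
]
populations = {"Kenya": 52_570_000}

matched, unmatched = split_by_match(articles, populations)
print(len(matched), len(unmatched))  # → 1 1
```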
## How to run the notebook
Run the following commands in a terminal.
1. Clone this repository: git clone https://github.com/zhangjunhao0/data-512-a2.git
2. (Optional): create a virtual environment and activate it. See https://docs.python.org/3/tutorial/venv.html. Feel free to skip this step.
3. Install packages (if you do not have them installed): pip3 install pandas tqdm requests
4. Open hcds-a2-bias.ipynb and run all cells.
2fb20d419305dbb7257dbb5df2ac18e7b19404d8 | 856 | md | Markdown | README.md | opendata-guru/data-portallist-de | b5bd44cc0c4fb4a51ad85a98b6ab2e77cb935b50 | [
"MIT"
] | 6 | 2021-04-21T06:43:47.000Z | 2021-12-04T13:30:43.000Z | README.md | opendata-guru/data-portallist-de | b5bd44cc0c4fb4a51ad85a98b6ab2e77cb935b50 | [
"MIT"
] | 23 | 2019-12-12T13:18:08.000Z | 2022-02-24T10:55:48.000Z | README.md | opendata-guru/data-portallist-de | b5bd44cc0c4fb4a51ad85a98b6ab2e77cb935b50 | [
"MIT"
] | 2 | 2021-04-21T07:41:02.000Z | 2021-05-13T08:26:27.000Z | # Open Data portal list (Germany)
This repository collects and generates a list of all #OpenData portals in Germany.
## Contribute
If you have a Mac and Numbers: please edit `source/opendataportals.numbers` and export it to `dist/opendataportals.csv`. Convert all `;` to `,` (two strings must be escaped with `"`).
If you don't have a Mac or Numbers: it's OK to edit the file `dist/opendataportals.csv` directly.
GitHub loves `,` in CSV files and shows a preview of the file.
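The `;` to `,` conversion can also be done mechanically instead of by hand. A minimal sketch in Python (the function is hypothetical, not part of this repo); the `csv` module also takes care of quoting fields that contain commas:

```python
import csv
import io

def semicolons_to_commas(text):
    """Rewrite a semicolon-delimited CSV string as comma-delimited CSV."""
    out = io.StringIO()
    writer = csv.writer(out, delimiter=",", quoting=csv.QUOTE_MINIMAL)
    for row in csv.reader(io.StringIO(text), delimiter=";"):
        writer.writerow(row)
    return out.getvalue()

source = "name;city\nPortal A;Berlin\nPortal B;Hamburg, Altona\n"
converted = semicolons_to_commas(source)
# The second data row gains quotes because the field contains a comma:
# Portal B,"Hamburg, Altona"
```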
## To Do
* add an npm command
* add a script to convert/export source file to CSV file
* add a csv linter
* add a harvester for missing data portals
* add a script to collect AGS infos
* add a script to collect GPS infos
* add a script to collect shape infos
* add an export script for .json
* add an export script for .geojson
* add an upload script to CKAN instance
(CKAN loves ';')
2fb2ffc84d872bc948558926308e36105ad63eae | 635 | md | Markdown | README.md | uAliFurkanY/resourcemanager | 62e93d5819af8308992092d1552224440955c6bd | [
"MIT"
] | null | null | null | README.md | uAliFurkanY/resourcemanager | 62e93d5819af8308992092d1552224440955c6bd | [
"MIT"
] | null | null | null | README.md | uAliFurkanY/resourcemanager | 62e93d5819af8308992092d1552224440955c6bd | [
"MIT"
] | null | null | null | # ResourceManager
Basic resource manager without external dependencies.
### Usage
In your project folder, run `npm i @alifurkan/resourcemanager`.
`index.js`
```js
const ResourceManager = require("@alifurkan/resourcemanager");
const resources = new ResourceManager( // Defaults
"resources", // Folder name
true, // Recursion
true // Caching
);
async function main() {
console.log(await resources.get("hello.txt"));
}
main();
```
`resources/hello.txt`
```
Hello, world!
```
Assuming your file hierarchy is like this,
```
app
\____ resources
\ \____ hello.txt
\__ index.js
```
the program will output `Hello, world!`.
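The caching behaviour can be illustrated with a stripped-down re-implementation. The class below is a hypothetical sketch, not the package's actual source, and it is synchronous for brevity (the real `get` returns a promise); the reader function is injected so the sketch runs without any files:

```javascript
class TinyResourceManager {
  constructor(folder, caching, reader) {
    this.folder = folder;
    this.caching = caching;
    this.cache = new Map();
    this.reader = reader; // stands in for reading a file from the folder
  }
  get(name) {
    if (this.caching && this.cache.has(name)) return this.cache.get(name);
    const data = this.reader(this.folder + "/" + name);
    if (this.caching) this.cache.set(name, data);
    return data;
  }
}

let reads = 0;
const fakeReader = (p) => { reads += 1; return "contents of " + p; };

const rm = new TinyResourceManager("resources", true, fakeReader);
const first = rm.get("hello.txt");
const second = rm.get("hello.txt"); // served from the cache
console.log(reads); // → 1
```

With caching enabled, repeated `get` calls for the same name hit the underlying reader only once.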
2fb3735f2a0d010fb37b35d42843174452bcc96e | 675 | md | Markdown | README.md | raymag/twitter_bot | c7f1ee023196b5102dcf9bb365d29a4ef27a275b | [
"MIT"
] | null | null | null | README.md | raymag/twitter_bot | c7f1ee023196b5102dcf9bb365d29a4ef27a275b | [
"MIT"
] | 1 | 2020-08-13T03:06:40.000Z | 2020-08-13T03:15:04.000Z | README.md | raymag/retweet-bot | c7f1ee023196b5102dcf9bb365d29a4ef27a275b | [
"MIT"
] | null | null | null | # Twitter Bot
Twitter Bot that automatically retweets "javascript" related tweets.
___
## Development Environment
If you want to set up a development environment for this project, first download or clone this repository with:
```bash
git clone https://github.com/raymag/retweet-bot
```
Open up a terminal in the cloned/downloaded folder and install the dependencies with:
```bash
npm install
```
Now you can run the server with:
```
npm run dev
```
## Contributing
If you like this project, or if you think it's trash and you can improve it, please consider contributing. A helping hand is always appreciated. Just be respectful and do your best.
2fb37471cc72f512f31eda00e47aeefa8fae6bb5 | 1,108 | md | Markdown | doc/api/Scene.cn.md | shejiJiang/G3D | d4bfe7a745993f574892e80370b688e33a56ad7b | [
"Apache-2.0"
] | 1,030 | 2018-02-26T08:23:35.000Z | 2022-03-29T10:42:04.000Z | doc/api/Scene.cn.md | shejiJiang/G3D | d4bfe7a745993f574892e80370b688e33a56ad7b | [
"Apache-2.0"
] | 22 | 2018-03-02T03:23:48.000Z | 2021-03-12T09:55:44.000Z | doc/api/Scene.cn.md | shejiJiang/G3D | d4bfe7a745993f574892e80370b688e33a56ad7b | [
"Apache-2.0"
] | 129 | 2018-03-01T05:35:10.000Z | 2022-02-22T01:50:52.000Z | # Scene
A scene.
## Constructor
```javascript
new G3D.Scene(engine);
```
### Parameters
| Name | Type | Description |
| ------ | ---------- | ------------------------------------------- |
| engine | G3D.Engine | The previously created G3D rendering engine |
## Properties
| Name | Type | Description |
| ---------- | --------------------------------- | ---------------------------------------------------------------------- |
| clearColor | {r: Number, g: Number, b: Number} | The background color of the scene; each component ranges from 0 to 255 |
## Methods
### render()
Renders the whole scene.
### pick(x, y)
Tries to pick an object in the 3D scene.
#### Parameters
| Name | Type | Description |
| ---- | ------ | ------------------------------------------------ |
| x | Number | The x coordinate of the pick point on the canvas |
| y | Number | The y coordinate of the pick point on the canvas |
#### Return value
| Type | Description |
| -------------- | -------------------------------------------------------- |
| Number \| null | The id of the picked mesh, or null if no mesh was picked |
## Example
```javascript
const scene = new G3D.Scene(engine);
scene.clearColor = {r: 200, g: 200, b: 200};
function render(){
scene.render();
requestAnimationFrame(render);
}
```
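`pick` expects coordinates relative to the canvas, while mouse events report client coordinates. A small helper for the conversion (illustrative; the bounding-box values below are made up):

```javascript
// Convert a mouse event's client coordinates into canvas-relative
// coordinates suitable for scene.pick(x, y).
function toCanvasCoords(clientX, clientY, canvasRect) {
  return {
    x: clientX - canvasRect.left,
    y: clientY - canvasRect.top,
  };
}

// In a browser, canvasRect would come from canvas.getBoundingClientRect(),
// and the result would be passed to scene.pick(x, y).
const rect = { left: 10, top: 20 }; // hypothetical bounding box
const { x, y } = toCanvasCoords(110, 220, rect);
console.log(x, y); // → 100 200
```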
2fb3e540c66ae0b39528dd159fe13dd8fb9770fc | 3,022 | md | Markdown | docs/2014/relational-databases/administer-multiple-servers-using-central-management-servers.md | gmilani/sql-docs.pt-br | 02f07ca69eae8435cefd74616a8b00f09c4d4f99 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/relational-databases/administer-multiple-servers-using-central-management-servers.md | gmilani/sql-docs.pt-br | 02f07ca69eae8435cefd74616a8b00f09c4d4f99 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/2014/relational-databases/administer-multiple-servers-using-central-management-servers.md | gmilani/sql-docs.pt-br | 02f07ca69eae8435cefd74616a8b00f09c4d4f99 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Administer multiple servers using central management servers | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: supportability
ms.topic: conceptual
helpviewer_keywords:
- multiserver queries
- central management server
- multiserver administration [SQL Server]
- master servers [SQL Server], central management servers
- target configuration [SQL Server]
- server configuration [SQL Server]
ms.assetid: 427911a7-57d4-4542-8846-47c3267a5d9c
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: f47eec543c21e74565d750035d20fbcee9baa82e
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/15/2019
ms.locfileid: "62877298"
---
# <a name="administer-multiple-servers-using-central-management-servers"></a>Administer multiple servers using central management servers
  You can administer multiple servers by designating central management servers and creating server groups.
## <a name="benefits-of-central-management-servers-and-server-groups"></a>Benefits of central management servers and server groups
  An instance of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] that is designated as a central management server maintains server groups that contain the connection information for one or more instances of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)]. [!INCLUDE[tsql](../includes/tsql-md.md)] statements and Policy-Based Management policies can be executed at the same time against server groups. You can also view the [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] log files on instances that are managed by a central management server. Versions of [!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)] earlier than [!INCLUDE[ssKatmai](../includes/sskatmai-md.md)] cannot be designated as a central management server.
  [!INCLUDE[tsql](../includes/tsql-md.md)] statements can also be executed against local server groups in Registered Servers.
### <a name="related-tasks"></a>Related Tasks
  To create a central management server and server groups, use the **Registered Servers** window in [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)]. Note that the central management server cannot be a member of a group that it maintains. For more information about creating central management servers and server groups, see [Create a Central Management Server and Server Group (SQL Server Management Studio)](../ssms/register-servers/create-a-central-management-server-and-server-group.md).
## <a name="see-also"></a>See also
 [Administer Servers by Using Policy-Based Management](policy-based-management/administer-servers-by-using-policy-based-management.md)
2fb43bb9f5641995bb00c30a33c34e5d48ad0671 | 2,362 | md | Markdown | source/_posts/coding-programming-principles.md | ssbunny/myblog | f3da26d775c34bdb2b13d9a884af391483f25a17 | [
"MIT"
] | 1 | 2017-09-29T07:22:14.000Z | 2017-09-29T07:22:14.000Z | source/_posts/coding-programming-principles.md | ssbunny/myblog | f3da26d775c34bdb2b13d9a884af391483f25a17 | [
"MIT"
] | null | null | null | source/_posts/coding-programming-principles.md | ssbunny/myblog | f3da26d775c34bdb2b13d9a884af391483f25a17 | [
"MIT"
] | null | null | null | title: 编程基本原则汇总
date: 2015-10-19 15:48:35
categories: Coding
tags:
- pattern
---
A collection of widely accepted programming principles.
## KISS
> Keep It Simple, Stupid
Favor simplicity.
## YAGNI
> You aren't gonna need it
Don't implement a feature based on the prediction that you will need it; wait until you actually do.
## DRY
> Don't Repeat Yourself
Don't write duplicate code; keep business rules, formulas, metadata, and the like in a single place.
## LSP
> Liskov Substitution Principle
Liskov Substitution Principle: objects in a program should be replaceable with instances of their subtypes without altering the correctness of the program.
## LoD
> Law of Demeter: Don't talk to strangers
Law of Demeter, also known as the principle of least knowledge: an object should know as little as possible about other objects, and should not talk to strangers.
An object's method should only call:
1. methods of the object itself;
2. methods of the method's parameters;
3. methods of any object created inside the method;
4. methods of the object's direct attributes or fields (aggregation).
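A minimal sketch of a Law of Demeter violation and its fix, using hypothetical classes (Python is used here purely for illustration):

```python
class Wallet:
    def __init__(self, balance):
        self.balance = balance

class Customer:
    def __init__(self, balance):
        self._wallet = Wallet(balance)

    # LoD-friendly: the customer exposes the behaviour itself, so callers
    # never reach through to the wallet (a "stranger").
    def pay(self, amount):
        if self._wallet.balance < amount:
            raise ValueError("insufficient funds")
        self._wallet.balance -= amount
        return amount

def checkout_bad(customer, amount):
    # Violation: reaching through customer internals to talk to a stranger.
    customer._wallet.balance -= amount

def checkout_good(customer, amount):
    # Only talks to its direct collaborator.
    customer.pay(amount)

c = Customer(100)
checkout_good(c, 30)
print(c._wallet.balance)  # → 70
```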
## OCP
> **Open/Closed Principle:** Software entities (e.g. classes) should be open for extension, but closed for modification.
Open/Closed Principle: software entities should be open for extension but closed for modification.
## SRP
> **Single Responsibility Principle:** A class should never have more than one reason to change.
Single Responsibility Principle: a class should have one, and only one, reason to change.
**Resources**
1. [KISS principle](http://en.wikipedia.org/wiki/KISS_principle)
2. [Keep It Simple Stupid (KISS)](http://principles-wiki.net/principles:keep_it_simple_stupid)
3. [You Arent Gonna Need It](http://c2.com/xp/YouArentGonnaNeedIt.html)
4. [You’re NOT gonna need it!](http://www.xprogramming.com/Practices/PracNotNeed.html)
5. [You aren't gonna need it](http://en.wikipedia.org/wiki/You_ain't_gonna_need_it)
6. [Dont Repeat Yourself](http://c2.com/cgi/wiki?DontRepeatYourself)
7. [Don't repeat yourself](http://en.wikipedia.org/wiki/Don't_repeat_yourself)
8. [Don't Repeat Yourself](http://programmer.97things.oreilly.com/wiki/index.php/Don't_Repeat_Yourself)
9. [Code For The Maintainer](http://c2.com/cgi/wiki?CodeForTheMaintainer)
10. [The Noble Art of Maintenance Programming](http://blog.codinghorror.com/the-noble-art-of-maintenance-programming/)
11. [Program optimization](http://en.wikipedia.org/wiki/Program_optimization)
12. [Premature Optimization](http://c2.com/cgi/wiki?PrematureOptimization)
13. [Law of Demeter](http://en.wikipedia.org/wiki/Law_of_Demeter)
14. [The Law of Demeter Is Not A Dot Counting Exercise](http://haacked.com/archive/2009/07/14/law-of-demeter-dot-counting.aspx/)
15. [Liskov substitution principle](http://en.wikipedia.org/wiki/Liskov_substitution_principle)
16. [The Liskov Substitution Principle](http://freshbrewedcode.com/derekgreer/2011/12/31/solid-javascript-the-liskov-substitution-principle/)
17. [Single responsibility principle](http://en.wikipedia.org/wiki/Single_responsibility_principle)
2fb58ccb908bbde2a22f3128e7021d7635df091c | 574 | md | Markdown | cli/packages/prisma-client-lib/src/codegen/generators/__tests__/env-interpolation/__snapshots__/javascript-env.test.js.md | adammichaelwilliams/prisma | 7aa2a8b2ed62c091f712f51cbae021621c0bb05b | [
"Apache-2.0"
] | 2,053 | 2018-05-15T07:46:54.000Z | 2018-08-15T10:36:45.000Z | cli/packages/prisma-client-lib/src/codegen/generators/__tests__/env-interpolation/__snapshots__/javascript-env.test.js.md | michaelfriedman/prisma | 298be5c919119847bb8d102d6b16672edd06b2c5 | [
"Apache-2.0"
] | 881 | 2017-09-07T20:00:21.000Z | 2018-01-16T14:19:24.000Z | cli/packages/prisma-client-lib/src/codegen/generators/__tests__/env-interpolation/__snapshots__/javascript-env.test.js.md | michaelfriedman/prisma | 298be5c919119847bb8d102d6b16672edd06b2c5 | [
"Apache-2.0"
] | 136 | 2018-05-15T12:26:24.000Z | 2018-08-14T00:26:39.000Z | # Snapshot report for `dist/codegen/generators/__tests__/env-interpolation/javascript-env.test.js`
The actual snapshot is saved in `javascript-env.test.js.snap`.
Generated by [AVA](https://ava.li).
## javascript env interpolation - environment multiple
> Snapshot 1
'`http://localhost:4466/${process.env[\'PRISMA_SERVICE\']}/${process.env[\'PRISMA_STAGE\']}`'
## javascript env interpolation - environment one
> Snapshot 1
'`${process.env[\'PRISMA_ENDPOINT\']}`'
## javascript env interpolation - plain
> Snapshot 1
'`http://localhost:4466/test/dev`'
2fb58ce74100ce103ffdcfb41e437447fb66ebdc | 60 | md | Markdown | README.md | R212-eng/Robi | 17a9ac07733ec56be470036d1f6ff32f5303e1c6 | [
"MIT"
] | null | null | null | README.md | R212-eng/Robi | 17a9ac07733ec56be470036d1f6ff32f5303e1c6 | [
"MIT"
] | null | null | null | README.md | R212-eng/Robi | 17a9ac07733ec56be470036d1f6ff32f5303e1c6 | [
"MIT"
] | null | null | null | # Robi
Hello, I am Robenus. This is my GitHub repository.
2fb597e08553337bfad82792f3f4773acb3a221c | 1,914 | md | Markdown | docs/melang.md | Water-Melon/Melon | 78a7714973d8a313d99b4e410946ab6fef8b7fb0 | [
"BSD-3-Clause"
] | 237 | 2015-03-06T02:57:56.000Z | 2022-03-30T03:27:51.000Z | docs/melang.md | GogeBlue/Melon | 639c531909d5bc9ea97deb3eefa1ed302c53bbf7 | [
"BSD-3-Clause"
] | 2 | 2021-08-25T09:49:54.000Z | 2021-09-08T07:48:12.000Z | docs/melang.md | GogeBlue/Melon | 639c531909d5bc9ea97deb3eefa1ed302c53bbf7 | [
"BSD-3-Clause"
] | 18 | 2017-09-22T02:53:41.000Z | 2022-02-22T01:34:51.000Z | ## 脚本任务
Script tasks involve both the use of the C-side functions and the script language's own syntax and standard library. For details on the latter, see [Melang.org](https://melang.org).
Melang scripts are written in a synchronous style but implemented in a purely asynchronous way. Multiple script tasks can be scheduled preemptively within a single thread without affecting the handling of other asynchronous events in that thread. In other words, scripts can run in the same thread as asynchronous network I/O.
This article only covers the functions for creating and scheduling script tasks. For extension development, see the follow-up articles on script development.
### Header file
```c
#include "mln_lang.h"
```
### Functions/Macros
#### mln_lang_new
```c
mln_lang_t *mln_lang_new(mln_alloc_t *pool, mln_event_t *ev);
```
Description: creates the script management structure, which maintains the resources, scheduling, creation, and deletion of multiple script tasks. Each script task it manages can be regarded as a coroutine. `pool` is the memory pool used by each script task, and `ev` is the event structure each script task depends on; in other words, script execution relies on Melon's asynchronous event API.
Return value: a pointer to the script management structure `mln_lang_t` on success, otherwise `NULL`
#### mln_lang_free
```c
void mln_lang_free(mln_lang_t *lang);
```
Description: destroys the script management structure and releases all of its resources.
Return value: none
#### mln_lang_job_new
```c
mln_lang_ctx_t *mln_lang_job_new(mln_lang_t *lang, mln_u32_t type, mln_string_t *data, void *udata, mln_lang_return_handler handler);
typedef void (*mln_lang_return_handler)(mln_lang_ctx_t *);
```
Description: creates a script task. The parameters are:
- `lang`: the script management structure created by `mln_lang_new`; the new task will be managed by it.
- `type`: the type of the script task's code: `M_INPUT_T_FILE` (a file) or `M_INPUT_T_BUF` (a string).
- `data`: interpreted according to `type`: a file path for a file task, or the code string for a string task.
- `udata`: a user-defined structure, generally used by third-party library function implementations and by `handler` to fetch the task's return value.
- `handler`: called to fetch the task's return value when the task completes normally. It takes a single parameter, the script task structure; if a custom structure is needed to support this, set the `udata` field.
Return value: a pointer to the script task structure `mln_lang_ctx_t` on success, otherwise `NULL`
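The `mln_lang_return_handler` callback pattern can be illustrated in plain C, independent of Melon. The types and names below are simplified stand-ins, not the library's real definitions:

```c
#include <stddef.h>

/* Simplified stand-ins for the real Melon types. */
typedef struct demo_ctx_s demo_ctx_t;
typedef void (*demo_return_handler)(demo_ctx_t *);

struct demo_ctx_s {
    void *udata;                 /* user data, as with mln_lang_ctx_t  */
    int result;                  /* pretend "return value" of the task */
    demo_return_handler handler; /* called when the task completes     */
};

/* What a scheduler would do when a task completes normally. */
static void demo_task_finish(demo_ctx_t *ctx)
{
    if (ctx->handler != NULL) ctx->handler(ctx);
}

/* A user handler that copies the result into udata. */
static void on_return(demo_ctx_t *ctx)
{
    *(int *)ctx->udata = ctx->result;
}

static int demo_run(void)
{
    int out = 0;
    demo_ctx_t ctx = { &out, 42, on_return };
    demo_task_finish(&ctx);
    return out; /* 42 */
}
```

In the real API the handler receives the `mln_lang_ctx_t` of the finished task and typically reads the script's return value through it, using `udata` to hand the result back to the caller.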
#### mln_lang_job_free
```c
void mln_lang_job_free(mln_lang_ctx_t *ctx);
```
Description: destroys a script task structure and releases all of its resources.
Return value: none
#### mln_lang_run
```c
void mln_lang_run(mln_lang_t *lang);
```
Description: runs the scripts. This function does not need to be called after every `mln_lang_job_new`; you can create several script tasks and then call it once.
Return value: none
#### mln_lang_cache_set
```c
mln_lang_cache_set(lang)
```
Description: sets whether to cache the abstract syntax tree. With the cache enabled, later tasks created from the same script code can skip AST generation and reuse the cached tree, improving performance.
Return value: none
#### mln_lang_ctx_data_get
```c
mln_lang_ctx_data_get(ctx)
```
Description: gets the user-defined data from `ctx`, a pointer of type `mln_lang_ctx_t`.
Return value: the user-defined data pointer
### Example
The best example is the source code of the Melang repository: a single file of fewer than 200 lines, of which only about 35 lines deal with invoking scripts. That repository is just a launcher for the Melon core library. See [melang.c](https://github.com/Water-Melon/Melang/blob/master/melang.c) for details.
2fb608bbfeab782c0114a9ad274b522c5a2b8762 | 1,381 | md | Markdown | README.md | FRC1775/simple_robot | aaf1dfac597d7aabe2210bf8096900c997c61112 | [
"BSD-3-Clause"
] | null | null | null | README.md | FRC1775/simple_robot | aaf1dfac597d7aabe2210bf8096900c997c61112 | [
"BSD-3-Clause"
] | null | null | null | README.md | FRC1775/simple_robot | aaf1dfac597d7aabe2210bf8096900c997c61112 | [
"BSD-3-Clause"
] | null | null | null | # Super-simple Robot
For teaching purposes.
## Overview
We're using git tags to show a progression, from a single source file, to using imports, to packages and beyond. Checkout the tagged versions in turn, or examine diffs of sequential tags:
git checkout v0.1
git diff v0.2
tag | what's covered
--- | ---
v0.1 | A **single source file** that logs the Gyro angle
v0.2 | Single file with **anonymous inner class** for Command
v0.3 | **Separate source file** for GyroLogger command class
v0.4 | Moving the command class to a `commands` **sub-package**
## Hardware
This code can be deployed to a RoboRio, but currently only the `ADXRS450_Gyro` is utilized. This provides an absolute minimum of code and hardware that can still demostrate the full software-hardware build path.
## Building
We're using [GradleRIO](https://github.com/Open-RIO/GradleRIO) to build and deploy. On mac (or linux):
./gradlew build
./gradlew deploy
On Windows (in Powershell),
.\gradlew.bat build
.\gradlew.bat deploy
There's some issue with `gradlew` not finding the RobioRio initially. As a work-around, add `--info` (my guess is that the added delay of logging to stdout is giving Gradle enough time to find the new connection).
## Roadmap
* Integrate log4j. Currently seems that GradleRIO isn't copying the Log4j JARs to the Rio
* add more hardware (USB camera, say)
| 33.682927 | 213 | 0.739319 | eng_Latn | 0.991779 |
2fb62439a81d871a457d5d26171bd84b36579174 | 515 | markdown | Markdown | _posts/2008-09-03-substitute-capitalist-for-republican-please.markdown | bbrown/bbrown.github.com | 2ad46605848a1893ad36eebbe09ba64edfb1303b | [
"MIT"
] | 1 | 2018-11-12T18:04:18.000Z | 2018-11-12T18:04:18.000Z | _posts/2008-09-03-substitute-capitalist-for-republican-please.markdown | bbrown/bbrown.github.com | 2ad46605848a1893ad36eebbe09ba64edfb1303b | [
"MIT"
] | null | null | null | _posts/2008-09-03-substitute-capitalist-for-republican-please.markdown | bbrown/bbrown.github.com | 2ad46605848a1893ad36eebbe09ba64edfb1303b | [
"MIT"
] | null | null | null | ---
layout: post
title: Substitute Capitalist for Republican Please
date: '2008-09-03 23:35:27 -0700'
mt_id: 1334
blog_id: 1
post_id: 1334
basename: substitute-capitalist-for-republican-please
categories:
- quick-quotes
- politics
---
"I'm not a Republican because I grew up rich, but because I didn't want to spend the rest of my life poor, waiting for the government to rescue me." — Mike Huckabee, <a href="http://www.huffingtonpost.com/2008/09/03/mike-huckabee-rnc-speech_n_123634.html">"RNC Speech"</a>
| 36.785714 | 279 | 0.759223 | eng_Latn | 0.868534 |
2fb6bb10a31dcee2b295d0364fc291d24adce4c5 | 690 | md | Markdown | nearby-connections/README.md | lakeel-altla/samples-google-api | 843eafc9ff14e91bc69d8583994afd75e989072e | [
"MIT"
] | null | null | null | nearby-connections/README.md | lakeel-altla/samples-google-api | 843eafc9ff14e91bc69d8583994afd75e989072e | [
"MIT"
] | null | null | null | nearby-connections/README.md | lakeel-altla/samples-google-api | 843eafc9ff14e91bc69d8583994afd75e989072e | [
"MIT"
] | null | null | null | # Description
This is a sample for Nearby Connections.
Nearby Connections is a peer-to-peer networking API that allows apps to easily discover, connect to, and exchange data with nearby devices in real-time, regardless of network connectivity.
In this app, payloads are exchanged between connected devices.
# Required
- OS: Android 4.4 "KitKat" or higher.
- Google Play Services: Google Play services 11.0 or higher.
# Get Started
1. Launch
Prepare two devices and launch the app on both.
1. Establish the connection
When one device advertises and the other discovers, the two devices are connected automatically.
1. Send payload
After the connection is established, you can send the payload.
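For orientation, the core calls look roughly like the sketch below. It uses the current `ConnectionsClient` API, which is newer than the Play services 11 release this sample targets, and `SERVICE_ID`, `endpointId`, and the callback objects are app-defined placeholders — treat it as an illustration, not this app's exact code.

```java
// Sketch only: advertise, accept the incoming connection, then send bytes.
ConnectionsClient client = Nearby.getConnectionsClient(context);

client.startAdvertising(
        "device-nickname", SERVICE_ID, connectionLifecycleCallback,
        new AdvertisingOptions.Builder().setStrategy(Strategy.P2P_CLUSTER).build());

// Inside ConnectionLifecycleCallback.onConnectionInitiated(...):
client.acceptConnection(endpointId, payloadCallback);

// After onConnectionResult(...) reports success:
client.sendPayload(endpointId, Payload.fromBytes("hello".getBytes()));
```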
---
title: catalog.move_project (SSISDB Database) | Microsoft Docs
ms.custom: ''
ms.date: 03/04/2017
ms.prod: sql
ms.prod_service: integration-services
ms.reviewer: ''
ms.technology: integration-services
ms.topic: language-reference
ms.assetid: ef3b0325-d8e9-472b-bf11-7d3efa6312ff
author: janinezhang
ms.author: janinez
ms.openlocfilehash: 985c541ccad844d972e1a850d65f10b4f6898347
ms.sourcegitcommit: b2464064c0566590e486a3aafae6d67ce2645cef
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 07/15/2019
ms.locfileid: "68007777"
---
# <a name="catalogmove_project---ssisdb-database"></a>catalog.move_project (SSISDB Database)
[!INCLUDE[ssis-appliesto](../../includes/ssis-appliesto-ssvrpluslinux-asdb-asdw-xxx.md)]
[!INCLUDE[tsql-appliesto-ss2012-xxxx-xxxx-xxx-md](../../includes/tsql-appliesto-ss2012-xxxx-xxxx-xxx-md.md)]
Moves a project from one folder in the [!INCLUDE[ssISnoversion](../../includes/ssisnoversion-md.md)] catalog to another.
## <a name="syntax"></a>Syntax
```sql
catalog.move_project [ @source_folder = ] source_folder
, [ @project_name = ] project_name
, [ @destination_folder = ] destination_folder
```
## <a name="arguments"></a>Arguments
[ @source_folder = ] *source_folder*
The name of the source folder, where the project resides before the move. The *source_folder* is **nvarchar(128)**.
[ @project_name = ] *project_name*
The name of the project that is to be moved. The *project_name* is **nvarchar(128)**.
[ @destination_folder = ] *destination_folder*
The name of the destination folder, where the project resides after the move. The *destination_folder* is **nvarchar(128)**.
## <a name="return-code-value"></a>Return Code Value
0 (success)
## <a name="result-sets"></a>Result Sets
None
## <a name="permissions"></a>Permissions
This stored procedure requires one of the following permissions:
- READ and MODIFY permissions on the project that is to be moved, and the CREATE_OBJECTS permission on the destination folder
- Membership in the **ssis_admin** database role
- Membership in the **sysadmin** server role
## <a name="errors-and-warnings"></a>Errors and Warnings
The following list describes conditions that may cause this stored procedure to raise an error:
- The project does not exist.
- The source folder does not exist.
- The destination folder does not exist, or the destination folder already contains a project with the same name.
- The user does not have the appropriate permissions.
## <a name="remarks"></a>Remarks
When a project is moved from a source folder to a destination folder, the project in the source folder and the corresponding environment references are deleted. In the destination folder, an identical project and identical environment references are created. Relative environment references refer to a different folder after the move; absolute references refer to the same folder after the move.
> [!NOTE]
> A project can have relative or absolute environment references. Relative references refer to the environment by name and require that the environment reside in the same folder as the project. Absolute references refer to the environment by name and folder, and they may refer to environments that reside in a folder other than the project's folder.
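For illustration, a call might look as follows (the folder and project names are hypothetical placeholders):

```sql
DECLARE @return_value int;

EXEC @return_value = [SSISDB].[catalog].[move_project]
    @source_folder      = N'DevFolder',
    @project_name       = N'ETLProject',
    @destination_folder = N'ProdFolder';

-- 0 indicates success
SELECT @return_value AS ReturnCode;
```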
The next step is to flash the downloaded image onto your {{ $device.name }}'s internal eMMC. The easiest way to flash the image onto the device is using {{ $names.company.lower }}’s [Etcher][etcher-link]. Support for the CM3L is available on Etcher starting from version 1.2.1 for OSX and Windows (Windows still needs the Raspberry Pi Foundation [usbboot drivers](https://github.com/raspberrypi/usbboot/blob/master/win32/rpiboot_setup.exe) installed).
The Raspberry Pi Foundation provides a tool that allows the Compute Module to expose the eMMC as a mass storage device that can be flashed like any other media. If you want to use the tool provided by the Raspberry Pi Foundation instead, please follow their instructions [here](https://www.raspberrypi.com/documentation/computers/compute-module.html#flashing-the-compute-module-emmc).
Once you have everything set up, run Etcher on your computer and connect your {{ $device.name }} to your laptop via a micro-USB cable plugged into the port next to the label `MAX ADD`. Now power the {{ $device.name }} via the 10.7 - 28.8V power connectors.
After a couple of seconds, the {{ $device.name }}'s eMMC should be detected on your computer by Etcher, which will initialize and list the board as a Compute Module based device (naming might change in the future). Select the downloaded image and press the “Flash!” button.
<img src="/img/fincm3/etcher-usboot.png" width="100%">
After flash is complete, power off your {{ $device.name }} and unplug the micro-USB cable. Powering the {{ $device.name }} on will now result in the device booting from the freshly-written eMMC.
__Note:__ You can flash several {{ $device.name }}s with the same OS image file and all the devices will boot and provision into your fleet. You can also disable the auto-ejecting or validation steps from the **Etcher** settings panel.
[etcher-link]:https://www.balena.io/etcher
# EventLoop Component
[](https://travis-ci.org/reactphp/event-loop)
[ReactPHP](https://reactphp.org/)'s core reactor event loop that libraries can use for evented I/O.
In order for async based libraries to be interoperable, they need to use the
same event loop. This component provides a common `LoopInterface` that any
library can target. This allows them to be used in the same loop, with one
single [`run()`](#run) call that is controlled by the user.
**Table of Contents**
* [Quickstart example](#quickstart-example)
* [Usage](#usage)
* [Factory](#factory)
* [create()](#create)
* [Loop implementations](#loop-implementations)
* [StreamSelectLoop](#streamselectloop)
* [ExtEventLoop](#exteventloop)
* [ExtLibeventLoop](#extlibeventloop)
* [ExtLibevLoop](#extlibevloop)
* [LoopInterface](#loopinterface)
* [run()](#run)
* [stop()](#stop)
* [addTimer()](#addtimer)
* [addPeriodicTimer()](#addperiodictimer)
* [cancelTimer()](#canceltimer)
* [futureTick()](#futuretick)
* [addSignal()](#addsignal)
* [removeSignal()](#removesignal)
* [addReadStream()](#addreadstream)
* [addWriteStream()](#addwritestream)
* [removeReadStream()](#removereadstream)
* [removeWriteStream()](#removewritestream)
* [Install](#install)
* [Tests](#tests)
* [License](#license)
* [More](#more)
## Quickstart example
Here is an async HTTP server built with just the event loop.
```php
$loop = React\EventLoop\Factory::create();
$server = stream_socket_server('tcp://127.0.0.1:8080');
stream_set_blocking($server, false);
$loop->addReadStream($server, function ($server) use ($loop) {
$conn = stream_socket_accept($server);
$data = "HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nHi\n";
$loop->addWriteStream($conn, function ($conn) use (&$data, $loop) {
$written = fwrite($conn, $data);
if ($written === strlen($data)) {
fclose($conn);
$loop->removeWriteStream($conn);
} else {
$data = substr($data, $written);
}
});
});
$loop->addPeriodicTimer(5, function () {
$memory = memory_get_usage() / 1024;
$formatted = number_format($memory, 3).'K';
echo "Current memory usage: {$formatted}\n";
});
$loop->run();
```
See also the [examples](examples).
## Usage
Typical applications use a single event loop which is created at the beginning
and run at the end of the program.
```php
// [1]
$loop = React\EventLoop\Factory::create();
// [2]
$loop->addPeriodicTimer(1, function () {
echo "Tick\n";
});
$stream = new React\Stream\ReadableResourceStream(
fopen('file.txt', 'r'),
$loop
);
// [3]
$loop->run();
```
1. The loop instance is created at the beginning of the program. A convenience
factory [`React\EventLoop\Factory::create()`](#create) is provided by this library which
picks the best available [loop implementation](#loop-implementations).
2. The loop instance is used directly or passed to library and application code.
In this example, a periodic timer is registered with the event loop which
simply outputs `Tick` every second and a
[readable stream](https://github.com/reactphp/stream#readableresourcestream)
is created by using ReactPHP's
[stream component](https://github.com/reactphp/stream) for demonstration
purposes.
3. The loop is run with a single [`$loop->run()`](#run) call at the end of the program.
### Factory
The `Factory` class exists as a convenient way to pick the best available
[event loop implementation](#loop-implementations).
#### create()
The `create(): LoopInterface` method can be used to create a new event loop
instance:
```php
$loop = React\EventLoop\Factory::create();
```
This method always returns an instance implementing [`LoopInterface`](#loopinterface),
the actual [event loop implementation](#loop-implementations) is an implementation detail.
This method should usually only be called once at the beginning of the program.
### Loop implementations
In addition to the [`LoopInterface`](#loopinterface), there are a number of
event loop implementations provided.
All of the event loops support these features:
* File descriptor polling
* One-off timers
* Periodic timers
* Deferred execution on future loop tick
For most consumers of this package, the underlying event loop implementation is
an implementation detail.
You should use the [`Factory`](#factory) to automatically create a new instance.
Advanced! If you explicitly need a certain event loop implementation, you can
manually instantiate one of the following classes.
Note that you may have to install the required PHP extensions for the respective
event loop implementation first or they will throw a `BadMethodCallException` on creation.
#### StreamSelectLoop
A `stream_select()` based event loop.
This uses the [`stream_select()`](http://php.net/manual/en/function.stream-select.php)
function and is the only implementation which works out of the box with PHP.
This event loop works out of the box on PHP 5.3 through PHP 7+ and HHVM.
This means that no installation is required and this library works on all
platforms and supported PHP versions.
Accordingly, the [`Factory`](#factory) will use this event loop by default if
you do not install any of the event loop extensions listed below.
Under the hood, it does a simple `select` system call.
This system call is limited to the maximum file descriptor number of
`FD_SETSIZE` (platform dependent, commonly 1024) and scales with `O(m)`
(`m` being the maximum file descriptor number passed).
This means that you may run into issues when handling thousands of streams
concurrently and you may want to look into using one of the alternative
event loop implementations listed below in this case.
If your use case is among the many common use cases that involve handling only
dozens or a few hundred streams at once, then this event loop implementation
performs really well.
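The mechanism this loop builds on can be sketched in a few lines of plain PHP (a simplified illustration only — the real implementation also integrates timers and signal handling):

```php
<?php
// Create a connected stream pair and make one end readable.
list($a, $b) = stream_socket_pair(STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP);
fwrite($a, "tick\n");

// Watch $b for readability, blocking for at most 1.0s.
$read = array($b);
$write = $except = array();
$ready = stream_select($read, $write, $except, 1, 0);

if ($ready > 0) {
    foreach ($read as $stream) {
        // A real loop would dispatch to the listener registered for $stream.
        echo fread($stream, 8192);
    }
}
```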
If you want to use signal handling (see also [`addSignal()`](#addsignal) below),
this event loop implementation requires `ext-pcntl`.
This extension is only available for Unix-like platforms and does not support
Windows.
It is commonly installed as part of many PHP distributions.
If this extension is missing (or you're running on Windows), signal handling is
not supported and throws a `BadMethodCallException` instead.
This event loop is known to rely on wall-clock time to schedule future
timers, because a monotonic time source is not available in PHP by default.
While this does not affect many common use cases, this is an important
distinction for programs that rely on a high time precision or on systems
that are subject to discontinuous time adjustments (time jumps).
This means that if you schedule a timer to trigger in 30s and then adjust
your system time forward by 20s, the timer may trigger in 10s.
See also [`addTimer()`](#addtimer) for more details.
#### ExtEventLoop
An `ext-event` based event loop.
This uses the [`event` PECL extension](https://pecl.php.net/package/event).
It supports the same backends as libevent.
This loop is known to work with PHP 5.4 through PHP 7+.
#### ExtLibeventLoop
An `ext-libevent` based event loop.
This uses the [`libevent` PECL extension](https://pecl.php.net/package/libevent).
`libevent` itself supports a number of system-specific backends (epoll, kqueue).
This event loop only works with PHP 5.
An [unofficial update](https://github.com/php/pecl-event-libevent/pull/2) for
PHP 7 does exist, but it is known to cause regular crashes due to `SEGFAULT`s.
To reiterate: Using this event loop on PHP 7 is not recommended.
Accordingly, the [`Factory`](#factory) will not try to use this event loop on
PHP 7.
This event loop is known to trigger a readable listener only if
the stream *becomes* readable (edge-triggered) and may not trigger if the
stream has already been readable from the beginning.
This also implies that a stream may not be recognized as readable when data
is still left in PHP's internal stream buffers.
As such, it's recommended to use `stream_set_read_buffer($stream, 0);`
to disable PHP's internal read buffer in this case.
See also [`addReadStream()`](#addreadstream) for more details.
#### ExtLibevLoop
An `ext-libev` based event loop.
This uses an [unofficial `libev` extension](https://github.com/m4rw3r/php-libev).
It supports the same backends as libevent.
This loop only works with PHP 5.
An update for PHP 7 is [unlikely](https://github.com/m4rw3r/php-libev/issues/8)
to happen any time soon.
### LoopInterface
#### run()
The `run(): void` method can be used to
run the event loop until there are no more tasks to perform.
For many applications, this method is the only directly visible
invocation on the event loop.
As a rule of thumb, it is usually recommended to attach everything to the
same loop instance and then run the loop once at the bottom end of the
application.
```php
$loop->run();
```
This method will keep the loop running until there are no more tasks
to perform. In other words: This method will block until the last
timer, stream and/or signal has been removed.
Likewise, it is imperative to ensure the application actually invokes
this method once. Adding listeners to the loop and missing to actually
run it will result in the application exiting without actually waiting
for any of the attached listeners.
This method MUST NOT be called while the loop is already running.
This method MAY be called more than once after it has explicitly been
[`stop()`ped](#stop) or after it automatically stopped because it
previously did no longer have anything to do.
#### stop()
The `stop(): void` method can be used to
instruct a running event loop to stop.
This method is considered advanced usage and should be used with care.
As a rule of thumb, it is usually recommended to let the loop stop
only automatically when it no longer has anything to do.
This method can be used to explicitly instruct the event loop to stop:
```php
$loop->addTimer(3.0, function () use ($loop) {
$loop->stop();
});
```
Calling this method on a loop instance that is not currently running or
on a loop instance that has already been stopped has no effect.
#### addTimer()
The `addTimer(float $interval, callable $callback): TimerInterface` method can be used to
enqueue a callback to be invoked once after the given interval.
The timer callback function MUST be able to accept a single parameter,
the timer instance as also returned by this method or you MAY use a
function which has no parameters at all.
The timer callback function MUST NOT throw an `Exception`.
The return value of the timer callback function will be ignored and has
no effect, so for performance reasons you're recommended to not return
any excessive data structures.
Unlike [`addPeriodicTimer()`](#addperiodictimer), this method will ensure
the callback will be invoked only once after the given interval.
You can invoke [`cancelTimer`](#canceltimer) to cancel a pending timer.
```php
$loop->addTimer(0.8, function () {
echo 'world!' . PHP_EOL;
});
$loop->addTimer(0.3, function () {
echo 'hello ';
});
```
See also [example #1](examples).
If you want to access any variables within your callback function, you
can bind arbitrary data to a callback closure like this:
```php
function hello($name, LoopInterface $loop)
{
$loop->addTimer(1.0, function () use ($name) {
echo "hello $name\n";
});
}
hello('Tester', $loop);
```
This interface does not enforce any particular timer resolution, so
special care may have to be taken if you rely on very high precision with
millisecond accuracy or below. Event loop implementations SHOULD work on
a best effort basis and SHOULD provide at least millisecond accuracy
unless otherwise noted. Many existing event loop implementations are
known to provide microsecond accuracy, but it's generally not recommended
to rely on this high precision.
Similarly, the execution order of timers scheduled to execute at the
same time (within its possible accuracy) is not guaranteed.
This interface suggests that event loop implementations SHOULD use a
monotonic time source if available. Given that a monotonic time source is
not available on PHP by default, event loop implementations MAY fall back
to using wall-clock time.
While this does not affect many common use cases, this is an important
distinction for programs that rely on a high time precision or on systems
that are subject to discontinuous time adjustments (time jumps).
This means that if you schedule a timer to trigger in 30s and then adjust
your system time forward by 20s, the timer SHOULD still trigger in 30s.
See also [event loop implementations](#loop-implementations) for more details.
#### addPeriodicTimer()
The `addPeriodicTimer(float $interval, callable $callback): TimerInterface` method can be used to
enqueue a callback to be invoked repeatedly after the given interval.
The timer callback function MUST be able to accept a single parameter,
the timer instance as also returned by this method or you MAY use a
function which has no parameters at all.
The timer callback function MUST NOT throw an `Exception`.
The return value of the timer callback function will be ignored and has
no effect, so for performance reasons you're recommended to not return
any excessive data structures.
Unlike [`addTimer()`](#addtimer), this method will ensure the
callback will be invoked infinitely after the given interval or until you
invoke [`cancelTimer`](#canceltimer).
```php
$timer = $loop->addPeriodicTimer(0.1, function () {
echo 'tick!' . PHP_EOL;
});
$loop->addTimer(1.0, function () use ($loop, $timer) {
$loop->cancelTimer($timer);
echo 'Done' . PHP_EOL;
});
```
See also [example #2](examples).
If you want to limit the number of executions, you can bind
arbitrary data to a callback closure like this:
```php
function hello($name, LoopInterface $loop)
{
$n = 3;
$loop->addPeriodicTimer(1.0, function ($timer) use ($name, $loop, &$n) {
if ($n > 0) {
--$n;
echo "hello $name\n";
} else {
$loop->cancelTimer($timer);
}
});
}
hello('Tester', $loop);
```
This interface does not enforce any particular timer resolution, so
special care may have to be taken if you rely on very high precision with
millisecond accuracy or below. Event loop implementations SHOULD work on
a best effort basis and SHOULD provide at least millisecond accuracy
unless otherwise noted. Many existing event loop implementations are
known to provide microsecond accuracy, but it's generally not recommended
to rely on this high precision.
Similarly, the execution order of timers scheduled to execute at the
same time (within its possible accuracy) is not guaranteed.
This interface suggests that event loop implementations SHOULD use a
monotonic time source if available. Given that a monotonic time source is
not available on PHP by default, event loop implementations MAY fall back
to using wall-clock time.
While this does not affect many common use cases, this is an important
distinction for programs that rely on a high time precision or on systems
that are subject to discontinuous time adjustments (time jumps).
This means that if you schedule a timer to trigger in 30s and then adjust
your system time forward by 20s, the timer SHOULD still trigger in 30s.
See also [event loop implementations](#loop-implementations) for more details.
Additionally, periodic timers may be subject to timer drift due to
re-scheduling after each invocation. As such, it's generally not
recommended to rely on this for high precision intervals with millisecond
accuracy or below.
#### cancelTimer()
The `cancelTimer(TimerInterface $timer): void` method can be used to
cancel a pending timer.
See also [`addPeriodicTimer()`](#addperiodictimer) and [example #2](examples).
Calling this method on a timer instance that has not been added to this
loop instance or on a timer that has already been cancelled has no effect.
#### futureTick()
The `futureTick(callable $listener): void` method can be used to
schedule a callback to be invoked on a future tick of the event loop.
This works very much like timers with an interval of zero seconds,
but does not require the overhead of scheduling a timer queue.
The tick callback function MUST be able to accept zero parameters.
The tick callback function MUST NOT throw an `Exception`.
The return value of the tick callback function will be ignored and has
no effect, so for performance reasons you're recommended to not return
any excessive data structures.
If you want to access any variables within your callback function, you
can bind arbitrary data to a callback closure like this:
```php
function hello($name, LoopInterface $loop)
{
$loop->futureTick(function () use ($name) {
echo "hello $name\n";
});
}
hello('Tester', $loop);
```
Unlike timers, tick callbacks are guaranteed to be executed in the order
they are enqueued.
Also, once a callback is enqueued, there's no way to cancel this operation.
This is often used to break down bigger tasks into smaller steps (a form
of cooperative multitasking).
```php
$loop->futureTick(function () {
echo 'b';
});
$loop->futureTick(function () {
echo 'c';
});
echo 'a';
```
See also [example #3](examples).
#### addSignal()
The `addSignal(int $signal, callable $listener): void` method can be used to
register a listener to be notified when a signal has been caught by this process.
This is useful to catch user interrupt signals or shutdown signals from
tools like `supervisor` or `systemd`.
The listener callback function MUST be able to accept a single parameter,
the signal added by this method or you MAY use a function which
has no parameters at all.
The listener callback function MUST NOT throw an `Exception`.
The return value of the listener callback function will be ignored and has
no effect, so for performance reasons you're recommended to not return
any excessive data structures.
```php
$loop->addSignal(SIGINT, function (int $signal) {
echo 'Caught user interrupt signal' . PHP_EOL;
});
```
See also [example #4](examples).
Signaling is only available on Unix-like platforms, Windows isn't
supported due to operating system limitations.
This method may throw a `BadMethodCallException` if signals aren't
supported on this platform, for example when required extensions are
missing.
**Note: A listener can only be added once to the same signal, any
attempts to add it more than once will be ignored.**
#### removeSignal()
The `removeSignal(int $signal, callable $listener): void` method can be used to
remove a previously added signal listener.
```php
$loop->removeSignal(SIGINT, $listener);
```
Any attempts to remove listeners that aren't registered will be ignored.
#### addReadStream()
> Advanced! Note that this low-level API is considered advanced usage.
Most use cases should probably use the higher-level
[readable Stream API](https://github.com/reactphp/stream#readablestreaminterface)
instead.
The `addReadStream(resource $stream, callable $callback): void` method can be used to
register a listener to be notified when a stream is ready to read.
The first parameter MUST be a valid stream resource that supports
checking whether it is ready to read by this loop implementation.
A single stream resource MUST NOT be added more than once.
Instead, either call [`removeReadStream()`](#removereadstream) first or
react to this event with a single listener and then dispatch from this
listener. This method MAY throw an `Exception` if the given resource type
is not supported by this loop implementation.
The listener callback function MUST be able to accept a single parameter,
the stream resource added by this method or you MAY use a function which
has no parameters at all.
The listener callback function MUST NOT throw an `Exception`.
The return value of the listener callback function will be ignored and has
no effect, so for performance reasons you're recommended to not return
any excessive data structures.
If you want to access any variables within your callback function, you
can bind arbitrary data to a callback closure like this:
```php
$loop->addReadStream($stream, function ($stream) use ($name) {
echo $name . ' said: ' . fread($stream);
});
```
See also [example #11](examples).
You can invoke [`removeReadStream()`](#removereadstream) to remove the
read event listener for this stream.
The execution order of listeners when multiple streams become ready at
the same time is not guaranteed.
Some event loop implementations are known to only trigger the listener if
the stream *becomes* readable (edge-triggered) and may not trigger if the
stream has already been readable from the beginning.
This also implies that a stream may not be recognized as readable when data
is still left in PHP's internal stream buffers.
As such, it's recommended to use `stream_set_read_buffer($stream, 0);`
to disable PHP's internal read buffer in this case.
#### addWriteStream()
> Advanced! Note that this low-level API is considered advanced usage.
Most use cases should probably use the higher-level
[writable Stream API](https://github.com/reactphp/stream#writablestreaminterface)
instead.
The `addWriteStream(resource $stream, callable $callback): void` method can be used to
register a listener to be notified when a stream is ready to write.
The first parameter MUST be a valid stream resource that supports
checking whether it is ready to write by this loop implementation.
A single stream resource MUST NOT be added more than once.
Instead, either call [`removeWriteStream()`](#removewritestream) first or
react to this event with a single listener and then dispatch from this
listener. This method MAY throw an `Exception` if the given resource type
is not supported by this loop implementation.
The listener callback function MUST be able to accept a single parameter,
the stream resource added by this method or you MAY use a function which
has no parameters at all.
The listener callback function MUST NOT throw an `Exception`.
The return value of the listener callback function will be ignored and has
no effect, so for performance reasons you're recommended to not return
any excessive data structures.
If you want to access any variables within your callback function, you
can bind arbitrary data to a callback closure like this:
```php
$loop->addWriteStream($stream, function ($stream) use ($name) {
fwrite($stream, 'Hello ' . $name);
});
```
See also [example #12](examples).
You can invoke [`removeWriteStream()`](#removewritestream) to remove the
write event listener for this stream.
The execution order of listeners when multiple streams become ready at
the same time is not guaranteed.
#### removeReadStream()
The `removeReadStream(resource $stream): void` method can be used to
remove the read event listener for the given stream.
Removing a stream from the loop that has already been removed or trying
to remove a stream that was never added or is invalid has no effect.
#### removeWriteStream()
The `removeWriteStream(resource $stream): void` method can be used to
remove the write event listener for the given stream.
Removing a stream from the loop that has already been removed or trying
to remove a stream that was never added or is invalid has no effect.
## Install
The recommended way to install this library is [through Composer](https://getcomposer.org).
[New to Composer?](https://getcomposer.org/doc/00-intro.md)
This will install the latest supported version:
```bash
$ composer require react/event-loop:^0.5
```
See also the [CHANGELOG](CHANGELOG.md) for details about version upgrades.
This project aims to run on any platform and thus does not require any PHP
extensions and supports running on legacy PHP 5.3 through current PHP 7+ and
HHVM.
It's *highly recommended to use PHP 7+* for this project.
Installing any of the event loop extensions is suggested, but entirely optional.
See also [event loop implementations](#loop-implementations) for more details.
## Tests
To run the test suite, you first need to clone this repo and then install all
dependencies [through Composer](https://getcomposer.org):
```bash
$ composer install
```
To run the test suite, go to the project root and run:
```bash
$ php vendor/bin/phpunit
```
## License
MIT, see [LICENSE file](LICENSE).
## More
* See our [Stream component](https://github.com/reactphp/stream) for more
information on how streams are used in real-world applications.
* See our [users wiki](https://github.com/reactphp/react/wiki/Users) and the
[dependents on Packagist](https://packagist.org/packages/react/event-loop/dependents)
for a list of packages that use the EventLoop in real-world applications.
<h1 align="center">
<img alt="Rocketseat" title="Rocketseat" src="/src/components/images/rocketseat.png" width="60px" />
<img alt=" title=" src="/src/components/images/favicon.png" width="60px" />
</h1>
<p align="center">
 <a href="#-technologies">Technologies</a> |
 <a href="#-project">Project</a> |
 <a href="#-layout">Layout</a> |
 <a href="#-project-deploy">Project deploy</a> |
 <a href="#-how-to-run">How to run</a> |
 <a href="#-how-to-contribute">How to contribute</a> |
 <a href="#-license">License</a>
</p>
<br>
<p align="center">
 <img alt="Move-it" src="/src/components/images/Logo.png" width="20%"><br><br>
 <img alt="Move-it" src="/src/components/images/Home.png" width="100%"><br>
 <img alt="Move-it" src="/src/components/images/Home(ciclo iniciado).png" width="100%"><br>
 <img alt="Move-it" src="/src/components/images/Home(novo level1).png" width="100%"><br>
</p>
## 🏃 Move-it application - Next Level Week #04 - NLW4
- Project status: Completed :heavy_check_mark:
## 🚀 Technologies
This project was developed with the following technologies:
- [Node.js](https://nodejs.org/en/download/)
- [React.js](https://pt-br.reactjs.org/)
- [Next.js](https://nextjs.org/)
- [JavaScript](https://www.javascript.com/)
- [HTML](https://html.spec.whatwg.org/)
- [CSS](https://www.w3.org/Style/CSS/)
## 💻 Project
Development of an application that lets the user alternate cycles between an activity that demands full focus and sustained attention (for example work, study, or gaming) and breaks for rest or some physical activity, stretching, and so on. The application sends a notification after each completed cycle and also issues challenges to the user.
## 🎨 Layout
You can view the project layout through [this link - Move.it 1.0](https://www.figma.com/file/ge20pu3ofMOKoliUyKx1Nl/Move.it-1.0/duplicate?node-id=160%3A2761). Note that you will need an account on [Figma](http://figma.com/).
## 👀 Project deploy
You can access the project through this [link](https://nlw4moveit-ten.vercel.app/).
## 👩🏿💻 How to run
- Clone this repository;
- Open a terminal and install the project's dependency packages - node_modules (`$ npm install`);
- Start the server in the terminal (`$ npm start`) and open your browser at [http://localhost:3000](http://localhost:3000).
- Build the project (`$ npm run-script build`) - to run it as it would in production (`$ npm start`).
## 🤔 How to contribute
- Fork this repository;
- Create a branch with your feature: `git checkout -b my-feature`;
- Commit your changes: `git commit -m 'feat: My new feature'`;
- Push to your branch: `git push origin my-feature`.
After your pull request is merged, you can delete your branch.
## 📝 License
This project is under the MIT license.
The MIT License (MIT)
Copyright © 2021 PRISCILA PATRICIO
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# DavidVue
Frontend SPA project developed with Vue.js 2 and Webpack that lists the songs available through the Spotify API wrapper.
It covers Vue core concepts such as directives and the component system, up to more advanced topics such as Vuex, Vue Router, SSR with Nuxt (not included), and some ES6 features through the Babel presets.
> SPA on VueJS for Music
## Build Setup
``` bash
# install dependencies
npm install
# development
npm run dev
# production
npm run build
```
## Credits
- [David E Lares S](https://twitter.com/@davidlares3)
## License
[MIT](https://opensource.org/licenses/MIT)
---
title: Merge data frames by row names or add columns
tags:
- R
- merge
- cbind
desc: use merge and cbind in R
layout: post
---
To merge two data frames in R by row names, you can use the `merge` and `cbind` functions.
(1) `merge(x, y, by = "row.names", all = TRUE)`: note the *by = "row.names"* argument; `x` and `y` must be in *data.frame* format;
(2) `cbind` requires equal row counts sorted in the same order and **cannot match on a shared index**.
***Note:*** if you extract the third column *area* from data frame A, the two ways of selecting it return different types: (1) `A[,3]` is numeric, while (2) `A["area"]` is a data.frame. So when merging only certain columns of a data frame with `merge`, use the second form of selection, otherwise an error occurs.
```
id length pop area
1 10 10 10
2 11 11 11
3 12 12 12
```
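As a short sketch (assuming two small data frames `A` and `B` that share row names, in the spirit of the table above), the two approaches compare like this:

```r
A <- data.frame(length = c(10, 11, 12), pop = c(10, 11, 12),
                row.names = c("1", "2", "3"))
B <- data.frame(area = c(10, 11, 12), row.names = c("1", "2", "3"))

# merge() matches rows by the shared row names; all = TRUE keeps unmatched rows
merged <- merge(A, B, by = "row.names", all = TRUE)

# cbind() simply glues columns side by side: it needs the same row count and
# the same row order, and B["area"] (not B[, 1]) keeps the data.frame type
combined <- cbind(A, B["area"])
```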
Reference: https://stackoverflow.com/questions/6029743/merge-or-combine-by-rownames
---
categories: []
layout: post
title: 'SOTA Activation Report: W4G/NG-010, Blue Mountain'
created: 1359935068
redirect_from: /content/sota-activation-report-w4gng-010-blue-mountain
---
SOTA Activation Report: W4G/NG-010, Blue Mountain
----------------------------------------------
#### Trip of 2013.01.05
* Succeeded: Yes
* First-activation: Yes
See my trip planning guide at: [SOTA Guide: W4G/NG-010, Blue Mountain](/content/sota-guide-w4gng-010-blue-mountain)
Commentary:
A beautiful winter day in Georgia, an easy drive, and a well-maintained, easy to follow trail. It seemed like maybe propagation was not as good as it has been recently and, sure enough, when I returned him and checked online propagation predictions, they were calling it 'fair' (down from recent 'good') on 20 meters.
My eldest son came along on this trip. (He's not a ham.) He is always a delight to have as a hiking companion and a SOTA buddy. He copies call signs and signal reports better than me! I was also joined by a new trail companion. I call him K4KPK/P. He's a little confused about how he came to be on top of Mt. KX3.

Here's K4KPK/P in "antenna support configuration."

A surprising number of hikers were out for a Georgia winter day. Four on our side of the highway, and a gaggle of Boy Scouts headed the other direction.
My QSOs were at 12 watts. I used 12 NiMh AA cells, which I had drained down to 15.2 volts prior to the trip. Upon my return home, the mailman brought my dummy AA cell, so next time I'll go with 11 fully-charged cells and a dummy, so I won't have to pre-drain.
On the night before the trip, there were two other activators alerted for about the same time, so I moved my alert up by 15 minutes, to avoid competing for chasers. We hung around our summit for a little while after activation, in hopes that one of them would get spotted and we could S2S. We got too cold before anyone went on the air. (I think S2S is going to be a warm weather sport for me.)
In a first (for me), we encountered three people on the trail who knew at least a little about SOTA. One solo hiker is a ham with an Elecraft CW rig on order for SOTA. A pair of hikers said something to the effect of, "Oh. We talked to someone on a mountain in Pennsylvania who was doing what you're doing. There was a terrible storm. He seemed very excited to be there."
Contacts:
* 6 QSO on 40 meters and 8 on 20 meters. 7.19265-ssb and 14.3448-ssb.
* 40 meters
* N4EX
* W4ZV
* K4PIC
* N0ZH
* WA2USA
* K0LAF
* 20 meters
* N7UN
* NS7P
* N0ZH
* W0MNA
* KQ2RP
* AA4AI
* VA6FUN
* KI0G
Thank you, chasers! I am grateful that you were there when I called CQ.
Antennas:
* A fine site for all manner of antennas. The brush is not too dense under the trees and there are trees of many sizes.
* I was able to successfully slingshot a heavy monofilament line into a tree; I used it to hoist up my 33 ft wire.
* There's also space to guy out a mast, if that's your preference.
* I used my EARCHI 20m matchbox with an EFHW. I had no trouble launching my antenna using fishing line on a spool, a 1 ounce egg sinker, and a slingshot. (Much better luck than last weekend.) I ended up stringing an inverted L.
------
---
description: 'Learn more about: MessageAuthenticationFailure'
title: MessageAuthenticationFailure
ms.date: 03/30/2017
ms.assetid: cde6beae-2d57-447e-8885-a1cfc66bbcbb
ms.openlocfilehash: 87b0bd2aa0ee2d51e3e27e5d82c9e807a0439f82
ms.sourcegitcommit: ddf7edb67715a5b9a45e3dd44536dabc153c1de0
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 02/06/2021
ms.locfileid: "99736249"
---
# <a name="messageauthenticationfailure"></a>MessageAuthenticationFailure
Id: 170
Severity: Error
Category: SecurityAudit
## <a name="description"></a>Description
This event indicates that message authentication failed. The event lists the service, the action, the client identity, and the activity ID.
## <a name="see-also"></a>See also
- [Event logging](index.md)
- [Events general reference](events-general-reference.md)
---
layout: post
status: publish
published: true
title: AMD releases the "Radeon Software Crimson 16.3" drivers
author:
display_name: Mindaugas Klumbis
login: Katonas
email: katonasf1@yahoo.com
url: ''
author_login: Katonas
author_email: katonasf1@yahoo.com
wordpress_id: 9327
wordpress_url: http://localhost/site/new/amd_isleidzia_radeon_software_crimson_163_tvarkykles/
date: '2016-03-10 09:57:52 +0200'
date_gmt: '2016-03-10 09:57:52 +0200'
categories:
- Naujienos
---
<p>
	<img alt="" src="http://technews.lt/userfiles/2941924-riseofthetombraider_preview_event_screenshot_4.jpg" style="width: 240px; height: 135px; float: right;" />AMD is updating its software for Radeon graphics cards. The new "Radeon Software Crimson 16.3" drivers bring bug fixes as well as performance improvements in "Rise of the Tomb Raider" and "Gears of War: Ultimate Edition".</p>
<p>
	According to AMD, owners of a "Radeon R9 Fury X" graphics card playing "Rise of the Tomb Raider" will see a 12% improvement over the Crimson 16.2 drivers. In "Gears of War: Ultimate Edition" the R9 Fury X gains as much as 60% more performance, and R9 380 series cards about 40%, compared with the Crimson 16.2.1 drivers.</p>
<p>
	In addition, the CrossFire profiles for "Hitman" and "The Park" have been updated. The release also brings official Vulkan API support and several bug fixes, among them a long-awaited solution that lets graphics cards hold their proper clock speed, eliminating unpleasant stuttering in games.</p>
<p>
	The "Radeon Software Crimson 16.3" drivers can be downloaded from <em><a href="http://support.amd.com/en-us/kb-articles/Pages/AMD_Radeon_Software_Crimson_Edition_16.3.aspx">here</a></em>.</p>
---
title: Authentication in Reporting Services | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: reporting-services
ms.topic: reference
helpviewer_keywords:
- security [Reporting Services], authentication
- forms-based authentication [Reporting Services]
- authentication [Reporting Services]
- custom authentication [Reporting Services]
ms.assetid: 103ce1f9-31d8-44bb-b540-2752e4dcf60b
author: maggiesMSFT
ms.author: maggies
manager: kfile
ms.openlocfilehash: c4fc4d98eb32fb07def2fd317ebb7f5a6f6332cb
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/15/2019
ms.locfileid: "63282145"
---
# <a name="authentication-in-reporting-services"></a>Authentication in Reporting Services
  Authentication is the process of establishing a user's right to an identity. There are many techniques that you can use to authenticate a user. The most common way is to use passwords. When you implement Forms Authentication, for example, you want an implementation that prompts users for credentials (typically through an interface that requests a login name and a password) and then validates users against a data store, such as a database table or a configuration file. If the credentials cannot be validated, the authentication process fails and the user assumes an anonymous identity.
## <a name="custom-authentication-in-reporting-services"></a>Custom authentication in Reporting Services
  In [!INCLUDE[ssRSnoversion](../../../includes/ssrsnoversion-md.md)], the Windows operating system handles the authentication of users either through integrated security or through the explicit reception and validation of user credentials. Custom authentication can be developed in [!INCLUDE[ssRSnoversion](../../../includes/ssrsnoversion-md.md)] to support additional authentication schemes. This is made possible through the <xref:Microsoft.ReportingServices.Interfaces.IAuthenticationExtension> security extension interface. All extensions inherit from the <xref:Microsoft.ReportingServices.Interfaces.IExtension> base interface for any extension deployed to and used by the report server. Both <xref:Microsoft.ReportingServices.Interfaces.IExtension> and <xref:Microsoft.ReportingServices.Interfaces.IAuthenticationExtension> are members of the <xref:Microsoft.ReportingServices.Interfaces> namespace.
  The primary way to authenticate against a report server in [!INCLUDE[ssRSnoversion](../../../includes/ssrsnoversion-md.md)] is the <xref:ReportService2010.ReportingService2010.LogonUser%2A> method. This member of the Reporting Services Web service can be used to pass user credentials to a report server for validation. Your underlying security extension implements **IAuthenticationExtension**, which contains the custom authentication code. In the Forms Authentication sample, **LogonUser** performs an authentication check against the supplied credentials and a custom user store in a database. An example implementation of **LogonUser** looks like this:
```
public bool LogonUser(string userName, string password, string authority)
{
return AuthenticationUtilities.VerifyPassword(userName, password);
}
```
The following sample function is used to verify the supplied credentials:
```
internal static bool VerifyPassword(string suppliedUserName,
string suppliedPassword)
{
bool passwordMatch = false;
// Get the salt and pwd from the database based on the user name.
// See "How To: Use DPAPI (Machine Store) from ASP.NET," "How To:
// Use DPAPI (User Store) from Enterprise Services," and "How To:
// Create a DPAPI Library" for more information about how to use
// DPAPI to securely store connection strings.
SqlConnection conn = new SqlConnection(
"Server=localhost;" +
"Integrated Security=SSPI;" +
"database=UserAccounts");
SqlCommand cmd = new SqlCommand("LookupUser", conn);
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter sqlParam = cmd.Parameters.Add("@userName",
SqlDbType.VarChar,
255);
sqlParam.Value = suppliedUserName;
try
{
conn.Open();
SqlDataReader reader = cmd.ExecuteReader();
reader.Read(); // Advance to the one and only row
// Return output parameters from returned data stream
string dbPasswordHash = reader.GetString(0);
string salt = reader.GetString(1);
reader.Close();
// Now take the salt and the password entered by the user
// and concatenate them together.
string passwordAndSalt = String.Concat(suppliedPassword, salt);
// Now hash them
string hashedPasswordAndSalt =
FormsAuthentication.HashPasswordForStoringInConfigFile(
passwordAndSalt,
"SHA1");
// Now verify them. Returns true if they are equal.
passwordMatch = hashedPasswordAndSalt.Equals(dbPasswordHash);
}
catch (Exception ex)
{
throw new Exception("Exception verifying password. " +
ex.Message);
}
finally
{
conn.Close();
}
return passwordMatch;
}
```
## <a name="authentication-flow"></a>Authentication flow
  The Reporting Services Web service provides custom authentication extensions that enable Forms Authentication for Report Manager and the report server.
  The <xref:ReportService2010.ReportingService2010.LogonUser%2A> method of the Reporting Services Web service is used to submit credentials to the report server for authentication. The Web service uses HTTP headers to pass an authentication ticket (known as a "cookie") from the server to the client for validated logon requests.
  The following illustration shows the method for authenticating users in the Web service when your application is deployed with a report server that is configured to use a custom authentication extension.
  ![Authentication flow in Reporting Services security](../../media/rosettasecurityextensionauthenticationflow.gif "Authentication flow in Reporting Services security")
  As shown in Figure 2, the authentication process is as follows:
1.  A client application calls the <xref:ReportService2010.ReportingService2010.LogonUser%2A> Web service method to authenticate a user.
2.  The Web service makes a call to the <xref:ReportService2010.ReportingService2010.LogonUser%2A> method of your security extension, specifically, the class that implements **IAuthenticationExtension**.
3.  Your implementation of <xref:ReportService2010.ReportingService2010.LogonUser%2A> validates the user name and password against the user store or security authority.
4.  Upon successful authentication, the Web service creates a cookie and manages it for the session.
5.  The Web service returns the authentication ticket to the calling application in the HTTP header.
  When the Web service successfully authenticates a user through the security extension, it generates a cookie that is used for subsequent requests. The cookie may not persist within the custom security authority, because the report server does not own the security authority. The cookie is returned from the Web service <xref:ReportService2010.ReportingService2010.LogonUser%2A> method and is used in subsequent Web service method calls and in URL access.
> [!NOTE]
>  To prevent the cookie from being compromised during transmission, authentication cookies returned by <xref:ReportService2010.ReportingService2010.LogonUser%2A> should be transmitted securely using SSL encryption.
  If you access the report server through URL access when a custom security extension is installed, Internet Information Services (IIS) and [!INCLUDE[vstecasp](../../../includes/vstecasp-md.md)] automatically manage the transmission of the authentication ticket. If you access the report server through the SOAP API, your proxy class implementation must include additional support for managing the authentication ticket. For more information about using the SOAP API and managing the authentication ticket, see "Using the Web Service with Custom Security".
## <a name="forms-authentication"></a>Forms Authentication
  Forms Authentication is a type of [!INCLUDE[vstecasp](../../../includes/vstecasp-md.md)] authentication in which an unauthenticated user is directed to an HTML form. Once the user provides credentials, the system issues a cookie containing an authentication ticket. On later requests, the system first checks the cookie to determine whether the user has already been authenticated by the report server.
  [!INCLUDE[ssRSnoversion](../../../includes/ssrsnoversion-md.md)] can be extended to support Forms Authentication using the security extensibility interfaces available through the Reporting Services API. If you extend [!INCLUDE[ssRSnoversion](../../../includes/ssrsnoversion-md.md)] to use Forms Authentication, use the Secure Sockets Layer (SSL) protocol for all communications with the report server to prevent malicious users from gaining access to another user's cookie. SSL enables clients and a report server to authenticate each other and ensures that no other computers can read the contents of the communication between the two computers. All data sent from a client over an SSL connection is encrypted so that malicious users cannot intercept passwords or data sent to a report server.
  Forms Authentication is generally implemented to support accounts and authentication for platforms other than Windows. A graphical interface is presented to a user who requests access to a report server, and the supplied credentials are submitted to a security authority for authentication.
  Forms Authentication requires a person to be present to enter the credentials. For unattended applications that communicate directly with the Reporting Services Web service, Forms Authentication must be combined with a custom authentication scheme.
  Forms Authentication is appropriate for [!INCLUDE[ssRSnoversion](../../../includes/ssrsnoversion-md.md)] when:
-   You need to store and authenticate users who do not have [!INCLUDE[msCoName](../../../includes/msconame-md.md)] Windows accounts, and
-   You need to provide your own user interface form as a logon page between different pages of a Web site.
  Consider the following when you write a custom security extension that supports Forms Authentication:
-   If you use Forms Authentication, anonymous access must be enabled on the report server virtual directory in Internet Information Services (IIS).
-   [!INCLUDE[vstecasp](../../../includes/vstecasp-md.md)] authentication must be set to Forms. You configure [!INCLUDE[vstecasp](../../../includes/vstecasp-md.md)] authentication in the Web.config file for the report server.
-   [!INCLUDE[ssRSnoversion](../../../includes/ssrsnoversion-md.md)] can authenticate and authorize users with either Windows Authentication or custom authentication, but not both. [!INCLUDE[ssRSnoversion](../../../includes/ssrsnoversion-md.md)] does not support the simultaneous use of multiple security extensions.
## <a name="see-also"></a>See also
 [Implementing a Security Extension](../security-extension/implementing-a-security-extension.md)
# Installation and Requirements
If you just want to instrument an *asyncio*-based application:
```console
$ python -m pip install -U pip
$ python -m pip install prometheus-async
```
If you want to expose metrics using *aiohttp*:
```console
$ python -m pip install -U pip
$ python -m pip install prometheus-async[aiohttp]
```
If you want to instrument a Twisted application:
```console
$ python -m pip install -U pip
$ python -m pip install prometheus-async[twisted]
```
```{admonition} Warning
:class: Warning
Please do not skip the update of *pip*, because *prometheus-async* uses modern packaging features and the installation will most likely fail otherwise.
```
---
cards-deck: Obsidian::Cloud-Computing::Monitoring::CloudWatch
tags: cloudwatch, monitoring
---
### Metadata
- Type: #permanent #monitoring
Date written: #2021-07-01
Source: https://docs.aws.amazon.com/iam/index.html
Status: #wip
Keywords: #cloudcomputing #aws #cloudwatch
Related:
### Notes
- [[CloudWatch Metrics]]
- [[CloudWatch Dashboards]]
- [[CloudWatch Logs]]
- [[CloudWatch Alarms]]
- [[CloudWatch Events]]
### Questions
1. We'd like to have CloudWatch Metrics for EC2 at a 1 minute rate. What should we do?
- Enable Custom Metrics
- Enable High Resolution
- Enable Basic Monitoring
- Enable Detailed Monitoring
#card
Enable Detailed Monitoring. Basic Monitoring is the default and reports at a 5-minute rate; Detailed Monitoring is a paid offering and gives you EC2 metrics at a 1-minute rate
^1626790853282
2. High Resolution Custom Metrics can have a minimum resolution of
- 1 second
- 10 seconds
- 30 seconds
- 1 minute
#card
1 second
^1626790853293
3. Your CloudWatch alarm is triggered and controls an ASG. The alarm should trigger 1 instance being deleted from your ASG, but your ASG has already 2 instances running and the minimum capacity is 2. What will happen?
- One instance will be deleted and the ASG capacity and minimum will go to 1
- The alarm will remain in "ALARM" state but never decrease the number of instances in my ASG
- The alarm will be detached from my ASG
- The alarm will go in OK state
#card
The alarm will remain in "ALARM" state but never decrease the number of instances in my ASG. The number of instances in an ASG cannot go below the minimum, even if the alarm would in theory trigger an instance termination
^1626790853304
4. An Alarm on a High Resolution Metric can be triggered as often as
- 1 second
- 10 seconds
- 30 seconds
- 1 minute
#card
10 seconds
^1626790853314
5. You have made a configuration change and would like to evaluate the impact of it on the performance of your application. Which service do you use?
- CloudWatch
- CloudTrail
#card
CloudWatch
^1626790853325
6. Someone has terminated an EC2 instance in your account last week, which was hosting a critical database. You would like to understand who did it and when, how can you achieve that?
- Look at the CloudWatch Metrics
- Look at the CloudWatch Alarms
- Look at the CloudWatch Events
- Look at CloudTrail
#card
CloudTrail helps audit the API calls made within your account, so the database deletion API call will appear here (regardless if made from the console, the CLI, or an SDK)
^1626790853336
7. You would like to ensure that over time, none of your EC2 instances expose the port 84 as it is known to have vulnerabilities with the OS you are using. What can you do to monitor this?
- Setup CloudWatch Metrics
- Setup CloudTrail trails
- Setup Config Rules
- Create AWS Lambda cron job
#card
Setup Config Rules
^1626790853347
8. You would like to evaluate the compliance of your resource's configurations over time. Which technology do you choose?
- CloudWatch
- CloudTrail
- Config
#card
Config^1626790853357
| 33.593407 | 221 | 0.7684 | eng_Latn | 0.992679 |
2fbd439e50c9d260edcd272ad3d6b8cafaeef980 | 4,803 | md | Markdown | README.md | AgmoDarius/SummerSlider | 58fb88c59de87b05b295a35852e90eac0f40900b | [
"MIT"
] | 1 | 2020-01-31T02:27:16.000Z | 2020-01-31T02:27:16.000Z | README.md | AgmoDarius/SummerSlider | 58fb88c59de87b05b295a35852e90eac0f40900b | [
"MIT"
] | null | null | null | README.md | AgmoDarius/SummerSlider | 58fb88c59de87b05b295a35852e90eac0f40900b | [
"MIT"
] | 2 | 2021-01-11T22:18:34.000Z | 2021-01-11T22:19:49.000Z | # SummerSlider


[](https://travis-ci.org/superbderrick/SummerSlider)
[](http://cocoapods.org/pods/SummerSlider)
[](http://cocoapods.org/pods/SummerSlider)
[](http://cocoapods.org/pods/SummerSlider)

## SummerSlider.
SummerSlider is an iOS Custom Slider library
It's available with variety usecases like (typically custome ui slider and video-related apps)
Besides the repository introduces various usecase samples with SummerSlider
## UseCases
- YouTube player UI scenario
It shows separator marks for advertisement sections across the entire video duration
- IMA SDK (VAST) with AVPlayer
If you use the Google IMA SDK with AVPlayer, SummerSlider is a very useful and suitable fit.
A VAST sample is integrated that explains how to handle use cases such as mid-roll and pre-roll ads.
## Demonstration
#### Basic.

#### Usecase(IMA SDK).

## Requirements
- Swift 3
- iOS 8.0+
- Xcode 8
## How to install.
SummerSlider is available through [CocoaPods](http://cocoapods.org). To install
it, simply add the following line to your Podfile:
Swift 3.0
```ruby
pod 'SummerSlider', '~>0.2.0'
```
Swift 4.0
```ruby
pod 'SummerSlider', '~>0.3.0'
```
#### Classic and ancient way
Copy into your project the following files:
`SummerSlider.swift` , `Constants.swift`,
`HorizontalSlider.swift`,`Slider.swift`,`SliderDrawingProtocol.swift`,`SliderFactory.swift`,
`SummerSliderTypes.swift`, `VerticalSlider.swift`
How to use it?
------------
#### First way (User Interface):
Add a UISlider outlet to your view using Interface Builder and set `SummerSlider` as the custom class. Most of the exposed properties are marked with **@IBInspectable**, so you can customize them in the storyboard's attributes inspector and preview the result directly.
Link it with the outlet property if you want to access its properties:
@IBOutlet weak var sampleSlider: SummerSlider!
Simply customize it! (take a look at the Customization section)
```
var sampleArray = Array<Float>()
sampleArray = [0,12,23,34,45,56,77,99]
sampleSlider.selectedBarColor = UIColor.white
sampleSlider.unselectedBarColor = UIColor.black
sampleSlider.markColor = UIColor.orange
sampleSlider.markWidth = 2.0
sampleSlider.markPositions = sampleArray
```
#### Second way (Using code) - **Preferred**
It is really easy to set it! Firstly, import SummerSlider.
import SummerSlider
Instantiate and customize it (again, take a look at the Customization section). Finally, add it to the desired view as usual:
```
let testRect1 = CGRect(x:30 ,y:70 , width:300 ,height:30)
var marksArray1 = Array<Float>()
marksArray1 = [0,10,20,30,40,50,60,70,80,90,100]
secondSlider = SummerSlider(frame: testRect1)
secondSlider.selectedBarColor = UIColor.blue
secondSlider.unselectedBarColor = UIColor.red
secondSlider.markColor = UIColor.yellow
secondSlider.markWidth = 2.0
secondSlider.markPositions = marksArray1
self.view.addSubview(secondSlider)
```
Setting the marks
------------
You can set the marks using a percentage system from 0 to 100. Set all the marks in the `markPositions` array property:
```
summerSlider.markPositions = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
```
Customization
------------
Here you can see a bunch of parameters that you can change:
#### Marks
- `markColor` : UIColor - Customize the color of the marks.
- `markWidth`: Float - Customize the width of the marks.
- `markPositions`: [Float] - Set in a percentage system from 0 to 100 where the marks should be placed.
#### Bar colors
- `selectedBarColor`: UIColor - Customize the color of the selected side of the slider.
- `unselectedBarColor`: UIColor - Customize the color of the unselected side of the slider.
# Android
[SummerSlider for Android](https://github.com/superbderrick/SummerSlider-Android)
## Objective-C Version.
[JMMarkSlider](https://github.com/joamafer/JMMarkSlider)
- Summer slider is based on JMMarkSlider.
## Author
SuperbDerrick, kang.derrick@gmail.com
## References
Please let me know via a pull request or an issue if you want to use this library in your application.
## License
SummerSlider is available under the MIT license. See the LICENSE file for more info.
| 32.452703 | 259 | 0.756194 | eng_Latn | 0.750209 |
2fbdabe52b633c219494f1a388eb88839ec098e7 | 6,913 | md | Markdown | powerbi-docs/service-tutorial-analyze-in-excel.md | eltociear/powerbi-docs.es-es | 7976f38db5f3075fa756509e03f8a57b8f33af0f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerbi-docs/service-tutorial-analyze-in-excel.md | eltociear/powerbi-docs.es-es | 7976f38db5f3075fa756509e03f8a57b8f33af0f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | powerbi-docs/service-tutorial-analyze-in-excel.md | eltociear/powerbi-docs.es-es | 7976f38db5f3075fa756509e03f8a57b8f33af0f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'Tutorial: Use Power BI Analyze in Excel starting from Excel'
description: In this tutorial, you start from Excel and connect to the Power BI Datasets page to import datasets into Excel.
author: tejaskulkarni
ms.reviewer: davidiseminger
ms.service: powerbi
ms.subservice: powerbi-service
ms.custom: connect-to-services
ms.topic: tutorial
ms.date: 02/13/2020
ms.author: tekulka
LocalizationGroup: Connect to services
ms.openlocfilehash: 9a8dd1eb7aa6b239cc542884ab3fae3679997017
ms.sourcegitcommit: 3c51431d85793b71f378c4b0b74483dfdd8411b3
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 03/31/2020
ms.locfileid: "80464606"
---
# <a name="tutorial-use-power-bi-analyze-in-excel-starting-in-excel"></a>Tutorial: Uso de la característica Analizar en Excel de Power BI partiendo de Excel
Su organización usa Power BI para compartir el acceso a los datos. La característica Analizar en Excel de Power BI se inicia desde Excel para crear tablas dinámicas y gráficos dinámicos en Excel. Estos elementos pueden aportar más contexto al análisis o reducir el tiempo necesario para encontrar e importar conjuntos de datos relevantes.
Para empezar a trabajar con un conjunto de datos de Power BI, en Excel, seleccione "Analizar en Excel". Se le guiará por los pasos para crear una tabla dinámica que use los datos.
Vuelva a la página Conjuntos de datos para encontrar conjuntos de datos adicionales compartidos por su organización.
Si encuentra problemas en cualquier momento, seleccione **No** en el paso correspondiente del flujo siguiente y proporcione comentarios en el formulario vinculado.
En este tutorial, obtendrá información sobre cómo:
> [!div class="checklist"]
> * Descargue un archivo ODC de la página Conjuntos de datos de Power BI.
> * Habilite el acceso al conjunto de datos desde Excel.
> * Empezar a usar el conjunto de datos para crear tablas dinámicas, gráficos y hojas de cálculo
## <a name="prerequisites"></a>Requisitos previos
Para completar este tutorial, necesita:
* Una cuenta de Power BI. Si no está registrado en Power BI, [regístrese para obtener una evaluación gratuita](https://app.powerbi.com/signupredirect?pbi_source=web) antes de empezar.
* Asegúrese de que está familiarizado con todos los pasos indicados en el tutorial [Introducción al servicio Power BI](https://docs.microsoft.com/power-bi/service-get-started).
* Necesitará un conjunto de datos de Power BI Premium y una licencia de Power BI Pro. Para más información, visite [¿Qué es Power BI Premium?](https://docs.microsoft.com/power-bi/service-premium-what-is).
* Puede encontrar una lista completa de los requisitos previos en el documento completo [Analizar en Excel](https://docs.microsoft.com/power-bi/service-analyze-in-excel#requirements).
* Una [suscripción a Microsoft Office E5](https://www.microsoft.com/microsoft-365/business/office-365-enterprise-e5-business-software?activetab=pivot%3aoverviewtab) activa
## <a name="get-started"></a>Comenzar
A partir de Excel, seleccione la opción para crear tablas dinámicas con datos compartidos de Power BI y vaya a la página Conjuntos de datos de Power BI.

Al usar el flujo de trabajo de Analizar en Excel, verá varios mensajes que le guían; indique si ha completado cada paso para continuar. Si surgen problemas en cualquier paso, seleccione **No** y proporcione sus comentarios en el formulario correspondiente.

## <a name="download-and-open-the-odc-file"></a>Descarga y apertura del archivo ODC
Elija el conjunto de datos de la lista y el área de trabajo asociado correspondientes y, luego, haga clic en Analizar en Excel. Power BI crea un archivo ODC y lo descarga del explorador al equipo.

Cuando abre el archivo en Excel, aparece una lista vacía de tablas dinámicas y campos con las tablas, los campos y las medidas del conjunto de datos de Power BI. Puede crear tablas dinámicas o gráficos y analizar ese conjunto de datos igual que lo haría con un conjunto de datos local en Excel.
## <a name="enable-data-connections"></a>Habilitación de conexiones de datos
Para analizar los datos de Power BI en Excel, es posible que se le pida que confíe en la conexión. Los administradores pueden deshabilitar el uso de Analizar en Excel con conjuntos de datos locales en bases de datos de Analysis Services desde el portal de administración de Power BI.

## <a name="install-updates-and-authenticate"></a>Instalación de actualizaciones y autenticación
Es posible que también tenga que autenticarse con su cuenta de Power BI la primera vez que abra un nuevo archivo ODC. Si tiene problemas, consulte el documento completo [Analizar en Excel](https://docs.microsoft.com/power-bi/service-analyze-in-excel#sign-in-to-power-bi ) para más información o haga clic en No durante el flujo de trabajo.

## <a name="analyze-away"></a>Analizar todo
De forma similar a otros libros locales, Analizar en Excel permite crear tablas dinámicas y gráficos, agregar datos y crear diferentes hojas de cálculo con vistas de los datos. Analizar en Excel expone todos los datos de nivel de detalle a cualquier usuario con permiso para el conjunto de datos. Puede guardar este libro, pero no puede publicarlo ni importarlo en Power BI ni compartirlo con otros usuarios de su organización. Para más información y conocer otros casos de uso, consulte [Analizar en Excel](https://docs.microsoft.com/power-bi/service-analyze-in-excel#analyze-away).
## <a name="clean-up-resources"></a>Limpieza de recursos
Las interacciones con el servicio Power BI y la página Conjuntos de datos se deben limitar a descargar el archivo ODC y hacer clic en el flujo de trabajo. Si tiene problemas con cualquiera de estos pasos, indique **No** en el paso adecuado y proporcione comentarios en el formulario vinculado. El formulario contiene un vínculo para informarse mejor sobre el problema. Vuelva a visitar la página Conjuntos de datos para procesar o seleccionar otro conjunto de datos.
## <a name="next-steps"></a>Pasos siguientes
Puede que también esté interesado en los siguientes artículos:
* [Uso de la obtención de detalles de varios informes en Power BI Desktop](https://docs.microsoft.com/power-bi/desktop-cross-report-drill-through)
* [Uso de segmentaciones de datos en Power BI Desktop](https://docs.microsoft.com/power-bi/visuals/power-bi-visualization-slicers)
| 72.010417 | 583 | 0.796181 | spa_Latn | 0.989501 |
2fbe607325e29559c6049a45306b35541eb27ca8 | 56 | md | Markdown | README.md | ganqzz/symfony2-sandbox | c44e6f42ee90ceade747892ae78449bf6229209b | [
"MIT"
] | null | null | null | README.md | ganqzz/symfony2-sandbox | c44e6f42ee90ceade747892ae78449bf6229209b | [
"MIT"
] | null | null | null | README.md | ganqzz/symfony2-sandbox | c44e6f42ee90ceade747892ae78449bf6229209b | [
"MIT"
] | null | null | null | Symfony 2 Sandbox
=====================
- Version 2.7
| 9.333333 | 21 | 0.428571 | oci_Latn | 0.597031 |
2fbeb846b268e60c5a1884fb9b35dd6c593fc0de | 1,039 | md | Markdown | README.md | turbosnute/inoklima | a91a286ebb3539e2a4d511a734e8a3066446f440 | [
"MIT"
] | null | null | null | README.md | turbosnute/inoklima | a91a286ebb3539e2a4d511a734e8a3066446f440 | [
"MIT"
] | null | null | null | README.md | turbosnute/inoklima | a91a286ebb3539e2a4d511a734e8a3066446f440 | [
"MIT"
] | null | null | null | # THIS REPO IS DEPRECATED
# inoklima
Arduino indoor climate.
Uses the breakout board sensors BME280, SGP30 and BH1750fvi.
## Measures
- Temperature (in Celcius)
- Humidity
- Pressure
- Dew Point
- Equivalent Sea Level Pressure
- Raw H2
- Raw Ethanol
- TVOC
- Lux
## Schematics
![alt text][logo]
[logo]: https://github.com/turbosnute/inoklima/blob/main/inoklima_figure.png
```
Connecting the BME280 + SGP30 + BH1750 (i2c) Sensors:
Sensor -> Board
-----------------------------
Vin (Voltage In) -> 3.3V or 5.0V
Gnd (Ground) -> Gnd
SDA (Serial Data) -> A4 on Uno/Pro-Mini, 20 on Mega2560/Due, 2 Leonardo/Pro-Micro
SCK (Serial Clock) -> A5 on Uno/Pro-Mini, 21 on Mega2560/Due, 3 Leonardo/Pro-Micro
```
## Example output
```
{ "Sensors": [{ "Name": "BME280","Temp": 24.40 ,"Humidity": 25.80 ,"Pressure": 100126.52, "DewPoint": -5.79, "EquivSeaLvlPressure": 104263.52}, {"Name": "SGP30", "RawH2": 13536.00, "RawEthanol":18693.00, "TVOC": 0.00, "eCO2": 400.00}, {"Name": "BH1750FVI", "lux": 73.33}]}
```
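A minimal sketch of consuming this serial output on the host side (Python is assumed here; the JSON string below is copied from the example above):

```python
import json

# Example line as emitted by the sketch (copied from the README's example output).
line = '{ "Sensors": [{ "Name": "BME280","Temp": 24.40 ,"Humidity": 25.80 ,"Pressure": 100126.52, "DewPoint": -5.79, "EquivSeaLvlPressure": 104263.52}, {"Name": "SGP30", "RawH2": 13536.00, "RawEthanol":18693.00, "TVOC": 0.00, "eCO2": 400.00}, {"Name": "BH1750FVI", "lux": 73.33}]}'

# Index the sensor readings by sensor name for easy access.
readings = {s["Name"]: s for s in json.loads(line)["Sensors"]}
temp_c = readings["BME280"]["Temp"]
lux = readings["BH1750FVI"]["lux"]
print(temp_c, lux)  # -> 24.4 73.33
```

In a real setup the line would come from the board's serial port (e.g. via pySerial) instead of a literal string.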
| 27.342105 | 272 | 0.636189 | yue_Hant | 0.553025 |
2fbebb9898eac945887693a04189972e80fdf15f | 44 | md | Markdown | Numerical_methods/Matrix_Diagonalization/B/README.md | ViktorHaldborg/PPNM | 255d16edc933b90d133a249af6d5848bf901a015 | [
"MIT"
] | null | null | null | Numerical_methods/Matrix_Diagonalization/B/README.md | ViktorHaldborg/PPNM | 255d16edc933b90d133a249af6d5848bf901a015 | [
"MIT"
] | null | null | null | Numerical_methods/Matrix_Diagonalization/B/README.md | ViktorHaldborg/PPNM | 255d16edc933b90d133a249af6d5848bf901a015 | [
"MIT"
] | null | null | null | Compilation time is rather long. (a minute)
| 22 | 43 | 0.772727 | eng_Latn | 0.999182 |
2fbffb99c619249bfacf0d7ca37c1e9b80966816 | 118 | md | Markdown | README.md | Rytnix/Oops-Concept | d9b82e0c2bc406003b24884fe1cc8abe6d186faf | [
"Apache-2.0"
] | 1 | 2021-03-31T16:03:11.000Z | 2021-03-31T16:03:11.000Z | README.md | Rytnix/Oops-Concept | d9b82e0c2bc406003b24884fe1cc8abe6d186faf | [
"Apache-2.0"
] | null | null | null | README.md | Rytnix/Oops-Concept | d9b82e0c2bc406003b24884fe1cc8abe6d186faf | [
"Apache-2.0"
] | null | null | null | # Oops-Concept
In this repo I am going to upload some object oriented programs. Any modification will be appreciated.
| 39.333333 | 102 | 0.805085 | eng_Latn | 0.996956 |
2fc1022796249f54c5e7fe6a4f0fec359ec07e1d | 103 | md | Markdown | _collection/KnetML-singularity-images.md | singularityhub/singularityhub-archive | dd1d471db0c4ac01998cb84bd1ab82e97c8dab65 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | _collection/KnetML-singularity-images.md | singularityhub/singularityhub-archive | dd1d471db0c4ac01998cb84bd1ab82e97c8dab65 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | _collection/KnetML-singularity-images.md | singularityhub/singularityhub-archive | dd1d471db0c4ac01998cb84bd1ab82e97c8dab65 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | ---
id: 686
full_name: "KnetML/singularity-images"
images:
- "KnetML-singularity-images-latest"
---
| 14.714286 | 38 | 0.699029 | vie_Latn | 0.087652 |
2fc15b79234fe398c85c9331f01d3928dd1d7fc2 | 14,373 | md | Markdown | articles/azure-functions/functions-bindings-mobile-apps.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-functions/functions-bindings-mobile-apps.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/azure-functions/functions-bindings-mobile-apps.md | niklasloow/azure-docs.sv-se | 31144fcc30505db1b2b9059896e7553bf500e4dc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Mobile Apps bindings for Azure Functions
description: Learn how to use Azure Mobile Apps bindings in Azure Functions.
author: craigshoemaker
ms.topic: reference
ms.custom: devx-track-csharp
ms.date: 11/21/2017
ms.author: cshoe
ms.openlocfilehash: 5ea58cc3d9f3615a74249b36f3f9ffb79caddda1
ms.sourcegitcommit: 4913da04fd0f3cf7710ec08d0c1867b62c2effe7
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 08/14/2020
ms.locfileid: "88212242"
---
# <a name="mobile-apps-bindings-for-azure-functions"></a>Mobile Apps bindings for Azure Functions
> [!NOTE]
> Azure Mobile Apps bindings are only available for Azure Functions 1.x. They are not supported in Azure Functions 2.x and higher.
This article explains how to work with [Azure Mobile Apps](/previous-versions/azure/app-service-mobile/app-service-mobile-value-prop) bindings in Azure Functions. Azure Functions supports input and output bindings for Mobile Apps.
Mobile Apps bindings let you read and update data tables in mobile apps.
[!INCLUDE [intro](../../includes/functions-bindings-intro.md)]
## <a name="packages---functions-1x"></a>Packages - Functions 1.x
The Mobile Apps bindings are provided in the [Microsoft.Azure.WebJobs.Extensions.MobileApps](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.MobileApps) NuGet package, version 1.x. Source code for the package is in the [azure-webjobs-sdk-extensions](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/v2.x/src/WebJobs.Extensions.MobileApps/) GitHub repository.
[!INCLUDE [functions-package](../../includes/functions-package.md)]
## <a name="input"></a>Input
The Mobile Apps input binding loads a record from a mobile table endpoint and passes it into your function. In C# and F# functions, any changes made to the record are automatically sent back to the table when the function exits.
## <a name="input---example"></a>Input - example
See the language-specific example:
* [C# script (.csx)](#input---c-script-example)
* JavaScript
### <a name="input---c-script-example"></a>Input - C# script example
The following example shows a Mobile Apps input binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by a queue message that has a record identifier. The function reads the specified record and modifies its `Text` property.
Here's the binding data in the *function.json* file:
```json
{
"bindings": [
{
"name": "myQueueItem",
"queueName": "myqueue-items",
"connection": "",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "record",
"type": "mobileTable",
"tableName": "MyTable",
"id": "{queueTrigger}",
"connection": "My_MobileApp_Url",
"apiKey": "My_MobileApp_Key",
"direction": "in"
}
]
}
```
The [configuration](#input---configuration) section explains these properties.
Here's the C# script code:
```cs
#r "Newtonsoft.Json"
using Newtonsoft.Json.Linq;
public static void Run(string myQueueItem, JObject record)
{
if (record != null)
{
record["Text"] = "This has changed.";
}
}
```
### <a name="input---javascript"></a>Input - JavaScript example
The following example shows a Mobile Apps input binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function is triggered by a queue message that has a record identifier. The function reads the specified record and modifies its `Text` property.
Here's the binding data in the *function.json* file:
```json
{
"bindings": [
{
"name": "myQueueItem",
"queueName": "myqueue-items",
"connection": "",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "record",
"type": "mobileTable",
"tableName": "MyTable",
"id": "{queueTrigger}",
"connection": "My_MobileApp_Url",
"apiKey": "My_MobileApp_Key",
"direction": "in"
}
]
}
```
The [configuration](#input---configuration) section explains these properties.
Here's the JavaScript code:
```javascript
module.exports = function (context, myQueueItem) {
context.log(context.bindings.record);
context.done();
};
```
## <a name="input---attributes"></a>Input - attributes
In [C# class libraries](functions-dotnet-class-library.md), use the [MobileTable](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs) attribute.
For information about attribute properties that you can configure, see [the following configuration section](#input---configuration).
## <a name="input---configuration"></a>Input - configuration
The following table explains the binding configuration properties that you set in the *function.json* file and the `MobileTable` attribute.
|function.json property | Attribute property |Description|
|---------|---------|----------------------|
| **type**| n/a | Must be set to "mobileTable"|
| **direction**| n/a |Must be set to "in"|
| **name**| n/a | Name of the input parameter in the function signature.|
|**tableName** |**TableName**|Name of the mobile app's data table|
| **id**| **Id** | The identifier of the record to retrieve. Can be static or based on the trigger that invokes the function. For example, if you use a queue trigger for your function, then `"id": "{queueTrigger}"` uses the string value of the queue message as the record ID to retrieve.|
|**connection**|**Connection**|The name of an app setting that has your mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your input binding. The URL looks like `http://<appname>.azurewebsites.net`.|
|**apiKey**|**ApiKey**|The name of an app setting that has your mobile app's API key. Provide the API key if you [implement an API key in your Node.js mobile app](https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/api-key), or [implement an API key in your .NET mobile app](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your input binding with the name of the app setting. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
> [!IMPORTANT]
> Don't share the API key with your mobile app clients. It should only be distributed securely to service-side clients, like Azure Functions. Azure Functions stores your connection information and API keys as app settings so that they are not checked into your source control repository. This safeguards your sensitive information.
## <a name="input---usage"></a>Input - usage
In C# functions, when the record with the specified ID is found, it is passed into the named [JObject](https://www.newtonsoft.com/json/help/html/t_newtonsoft_json_linq_jobject.htm) parameter. When the record is not found, the parameter value is `null`.
In JavaScript functions, the record is passed into the `context.bindings.<name>` object. When the record is not found, the parameter value is `null`.
In C# and F# functions, any changes you make to the input record are automatically sent back to the table when the function exits. You can't modify a record in JavaScript functions.
## <a name="output"></a>Output
Use the Mobile Apps output binding to write a new record to a Mobile Apps table.
## <a name="output---example"></a>Output - example
See the language-specific examples:
* [C#](#output---c-example)
* [C# script (.csx)](#output---c-script-example)
* [JavaScript](#output---javascript-example)
### <a name="output---c-example"></a>Output - C# example
The following example shows a [C# function](functions-dotnet-class-library.md) that is triggered by a queue message and creates a record in a mobile app table.
```csharp
[FunctionName("MobileAppsOutput")]
[return: MobileTable(ApiKeySetting = "MyMobileAppKey", TableName = "MyTable", MobileAppUriSetting = "MyMobileAppUri")]
public static object Run(
[QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
TraceWriter log)
{
return new { Text = $"I'm running in a C# function! {myQueueItem}" };
}
```
### <a name="output---c-script-example"></a>Output - C# script example
The following example shows a Mobile Apps output binding in a *function.json* file and a [C# script function](functions-reference-csharp.md) that uses the binding. The function is triggered by a queue message and creates a new record with a hard-coded value for the `Text` property.
Here's the binding data in the *function.json* file:
```json
{
"bindings": [
{
"name": "myQueueItem",
"queueName": "myqueue-items",
"connection": "",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "record",
"type": "mobileTable",
"tableName": "MyTable",
"connection": "My_MobileApp_Url",
"apiKey": "My_MobileApp_Key",
"direction": "out"
}
]
}
```
The [configuration](#output---configuration) section explains these properties.
Here's the C# script code:
```cs
public static void Run(string myQueueItem, out object record)
{
record = new {
Text = $"I'm running in a C# function! {myQueueItem}"
};
}
```
### <a name="output---javascript-example"></a>Output - JavaScript example
The following example shows a Mobile Apps output binding in a *function.json* file and a [JavaScript function](functions-reference-node.md) that uses the binding. The function is triggered by a queue message and creates a new record with a hard-coded value for the `Text` property.
Here's the binding data in the *function.json* file:
```json
{
"bindings": [
{
"name": "myQueueItem",
"queueName": "myqueue-items",
"connection": "",
"type": "queueTrigger",
"direction": "in"
},
{
"name": "record",
"type": "mobileTable",
"tableName": "MyTable",
"connection": "My_MobileApp_Url",
"apiKey": "My_MobileApp_Key",
"direction": "out"
}
],
"disabled": false
}
```
The [configuration](#output---configuration) section explains these properties.
Here's the JavaScript code:
```javascript
module.exports = function (context, myQueueItem) {
context.bindings.record = {
text : "I'm running in a Node function! Data: '" + myQueueItem + "'"
}
context.done();
};
```
## <a name="output---attributes"></a>Output - attributes
In [C# class libraries](functions-dotnet-class-library.md), use the [MobileTable](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs) attribute.
For information about attribute properties that you can configure, see [Output - configuration](#output---configuration). Here's a `MobileTable` attribute example in a method signature:
```csharp
[FunctionName("MobileAppsOutput")]
[return: MobileTable(ApiKeySetting = "MyMobileAppKey", TableName = "MyTable", MobileAppUriSetting = "MyMobileAppUri")]
public static object Run(
[QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem,
TraceWriter log)
{
...
}
```
For a complete example, see [Output - C# example](#output---c-example).
## <a name="output---configuration"></a>Output - configuration
The following table explains the binding configuration properties that you set in the *function.json* file and the `MobileTable` attribute.
|function.json property | Attribute property |Description|
|---------|---------|----------------------|
| **type**| n/a | Must be set to "mobileTable"|
| **direction**| n/a |Must be set to "out"|
| **name**| n/a | Name of the output parameter in the function signature.|
|**tableName** |**TableName**|Name of the mobile app's data table|
|**connection**|**MobileAppUriSetting**|The name of an app setting that has your mobile app's URL. The function uses this URL to construct the required REST operations against your mobile app. Create an app setting in your function app that contains the mobile app's URL, then specify the name of the app setting in the `connection` property in your output binding. The URL looks like `http://<appname>.azurewebsites.net`.|
|**apiKey**|**ApiKeySetting**|The name of an app setting that has your mobile app's API key. Provide the API key if you [implement an API key in your Node.js mobile app backend](https://github.com/Azure/azure-mobile-apps-node/tree/master/samples/api-key), or [implement an API key in your .NET mobile app backend](https://github.com/Azure/azure-mobile-apps-net-server/wiki/Implementing-Application-Key). To provide the key, create an app setting in your function app that contains the API key, then add the `apiKey` property in your output binding with the name of the app setting. |
[!INCLUDE [app settings to local.settings.json](../../includes/functions-app-settings-local.md)]
> [!IMPORTANT]
> Don't share the API key with your mobile app clients. It should only be distributed securely to service-side clients, like Azure Functions. Azure Functions stores your connection information and API keys as app settings so that they are not checked into your source control repository. This safeguards your sensitive information.
## <a name="output---usage"></a>Output - usage
In C# script functions, use a named output parameter of type `out object` to access the output record. In C# class libraries, the `MobileTable` attribute can be used with any of the following types:
* `ICollector<T>` or `IAsyncCollector<T>`, where `T` is either `JObject` or any type with a `public string Id` property.
* `out JObject`
* `out T` or `out T[]`, where `T` is any type with a `public string Id` property.
In Node.js functions, use `context.bindings.<name>` to access the output record.
## <a name="next-steps"></a>Next steps
> [!div class="nextstepaction"]
> [Learn more about Azure Functions triggers and bindings](functions-triggers-bindings.md)
| 44.361111 | 627 | 0.720796 | swe_Latn | 0.968499 |
2fc17dd22b666fcd1f2845da9bab7da7ac8f40c2 | 3,601 | md | Markdown | README.md | KaramanisWeb/SimpleSMSGreek | 62627165d92896f273a45a87b704c010c0fb4964 | [
"MIT"
] | null | null | null | README.md | KaramanisWeb/SimpleSMSGreek | 62627165d92896f273a45a87b704c010c0fb4964 | [
"MIT"
] | null | null | null | README.md | KaramanisWeb/SimpleSMSGreek | 62627165d92896f273a45a87b704c010c0fb4964 | [
"MIT"
] | null | null | null | SimpleSMSGreek
==========
[](https://travis-ci.org/KaramanisWeb/SimpleSMSGreek)
[](https://packagist.org/packages/KaramanisWeb/SimpleSMSGreek)
[](https://packagist.org/packages/karamanisweb/simplesmsgreek)
[](https://packagist.org/packages/karamanisweb/simplesmsgreek)
[](https://packagist.org/packages/karamanisweb/simplesmsgreek)
## Introduction
SimpleSMSGreek is a wrapper for the [SimpleSMS](https://github.com/simplesoftwareio/simple-sms) package that provides additional Greek SMS drivers. It is a package for [Laravel](http://laravel.com/) that provides the capability to send SMS through Greek gateways. These are the extra providers: [Smsn](http://www.smsn.gr), [Ez4usms](http://ez4usms.com), [Sms.net.gr](http://www.sms.net.gr/)
## Requirements
#### Laravel 5
* PHP: >= 7.0.0
* simplesoftwareio/simple-sms >= 3.1.0
## Installation
#### Composer
You can run the composer command `composer require karamanisweb/simplesmsgreek`
or you can add the package to the `require` section of your `composer.json` file:
"require": {
"karamanisweb/simplesmsgreek": "1.0.*"
}
And then run the command `composer update`.
This procedure will install the package into your application.
#### Service Providers
Once you have installed the package in your Laravel application,
add `KaramanisWeb\SimpleSMSGreek\SmsServiceProvider::class` to the `providers` array in your `config/app.php` config file.
#### Aliases
Now all you have to do is register the Facade.
Add `'SMS' => SimpleSoftwareIO\SMS\Facades\SMS::class` in your `config/app.php` config file inside the `aliases` array.
#### Publish Configuration
If you need to make changes to the configuration file, run the following command to copy it into your local app:
php artisan vendor:publish --provider="KaramanisWeb\SimpleSMSGreek\SmsServiceProvider"
This will copy the configuration files to your `config` folder.
Alternatively, you can manually copy the config file from the `vendors/karamanisweb/simplesmsgreek/Config` directory into your local app.
## Documentation
This package adds three Greek SMS drivers:
- Smsn
- Ez4usms
- Sms.net.gr
#### Smsn
```php
'driver' => env('SMS_DRIVER', 'smsn'),
'smsn' => [
'username' => env('SMSN_USERNAME', 'Your Smsn Username'),
'password' => env('SMSN_PASSWORD', 'Your Smsn Password'),
'unicode' => env('SMSN_UNICODE', false),
]
```
#### Ez4usms
```php
'driver' => env('SMS_DRIVER', 'ez4us'),
'ez4us' => [
'username' => env('EZ4US_USERNAME', 'Your Ez4us Username'),
'password' => env('EZ4US_PASSWORD', 'Your Ez4us Password'),
'unicode' => env('EZ4US_UNICODE', false),
]
```
#### Sms.net.gr
```php
'driver' => env('SMS_DRIVER', 'smsnetgr'),
'smsnetgr' => [
'username' => env('SMSNETGR_USERNAME', 'Your Smsnetgr Username'),
'api_password' => env('SMSNETGR_API_PASS', 'Your Smsnetgr API Password'),
'api_token' => env('SMSNETGR_API_TOKEN', 'Your Smsnetgr API Token'),
'unicode' => env('SMSNETGR_UNICODE', false),
]
```
The documentation for SimpleSMS can be found [here.](https://www.simplesoftware.io/docs/simple-sms) | 37.905263 | 465 | 0.725909 | eng_Latn | 0.487173 |
2fc242b68272a325d6b8c0dccc7d3d5f53c9a6e7 | 65 | md | Markdown | README.md | autoberry/docker_registry_sync | 851174c187a87bbd5d93e5b6fbf09a3270ff51d3 | [
"Apache-2.0"
] | null | null | null | README.md | autoberry/docker_registry_sync | 851174c187a87bbd5d93e5b6fbf09a3270ff51d3 | [
"Apache-2.0"
] | null | null | null | README.md | autoberry/docker_registry_sync | 851174c187a87bbd5d93e5b6fbf09a3270ff51d3 | [
"Apache-2.0"
] | null | null | null | # docker_registry_sync
Syncs images between different registries.
| 21.666667 | 41 | 0.861538 | eng_Latn | 0.894962 |
2fc2660cd7565881561463353971962273702b0e | 295 | md | Markdown | ansible_change_hostname/README.md | xujpxm/ansible-roles | 0bf1b67211e6db41f213747791d696d5f78b3ea6 | [
"MIT"
] | 2 | 2017-08-29T02:22:21.000Z | 2017-09-24T04:52:49.000Z | ansible_change_hostname/README.md | xujpxm/ansible-roles | 0bf1b67211e6db41f213747791d696d5f78b3ea6 | [
"MIT"
] | null | null | null | ansible_change_hostname/README.md | xujpxm/ansible-roles | 0bf1b67211e6db41f213747791d696d5f78b3ea6 | [
"MIT"
] | null | null | null | ### Overview
Change your Linux host's hostname with Ansible.
### Usage
Update your Ansible inventory file as in the following example:
```
[server_group]
'hostname1' ansible_host='172.30.100.81'
'hostname2' ansible_host='172.30.100.82'
'hostname3' ansible_host='172.30.100.83'
``` | 26.818182 | 48 | 0.745763 | eng_Latn | 0.705674 |
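With such an inventory, a minimal playbook (a sketch — the group name must match your own inventory) can apply each inventory alias as the machine's hostname using the built-in `hostname` module:

```yaml
# change_hostname.yml (illustrative)
- hosts: server_group
  become: true
  tasks:
    - name: Set hostname from the inventory alias
      ansible.builtin.hostname:
        name: "{{ inventory_hostname }}"
```

Run it with something like `ansible-playbook -i hosts change_hostname.yml`.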
2fc271c0fe3439b8faf66c2ad08a5dd0c31e7dea | 1,326 | markdown | Markdown | _posts/2010-03-09-moving-this-blog-to-a-new-host-was-a-bit-of-a-kerfuffle.markdown | dominicsayers/dominicsayers.github.com | f05bcb87193c865d409f5f2993339d6bf7417b15 | [
"MIT"
] | 4 | 2017-05-20T22:03:48.000Z | 2020-08-29T22:23:52.000Z | _posts/2010-03-09-moving-this-blog-to-a-new-host-was-a-bit-of-a-kerfuffle.markdown | dominicsayers/dominicsayers.github.com | f05bcb87193c865d409f5f2993339d6bf7417b15 | [
"MIT"
] | 3 | 2017-04-21T10:12:11.000Z | 2021-05-23T15:48:05.000Z | _posts/2010-03-09-moving-this-blog-to-a-new-host-was-a-bit-of-a-kerfuffle.markdown | dominicsayers/dominicsayers.github.com | f05bcb87193c865d409f5f2993339d6bf7417b15 | [
"MIT"
] | 1 | 2017-01-06T13:17:20.000Z | 2017-01-06T13:17:20.000Z | ---
layout: post
title: Moving this blog to a new host was a bit of a kerfuffle
date: 2010-03-09 10:42:43.000000000 +00:00
tags: siteground
---
Sorry for the outage since Sunday. I moved the contents of this blog away from wordpress.com because I didn't want to pay for another year's hosting there when I have a perfectly good host elsewhere.
The move of the blog contents took 5 minutes.
Releasing the sub-domain www.dominicsayers.com from the clutches of wordpress.com and getting SiteGround to set it up took 48 hours.
Lessons learned:
1. If you're thinking of choosing a new hosting provider, try to get a look at the hosting control panel first. I've now had three faults with SiteGround that were simply because their tools don't work.
2. I chose to keep my Name Server management away from SiteGround because their tools (a) didn't work and (b) didn't let me view or edit my DNS records explicitly. Don't nanny me if I don't want to be nannied. Because my name servers are not under their control, this broke some of their other tools. How about testing this stuff before releasing it, people?
3. SiteGround tech support are helpful and responsive.
Other things I've noticed: some of the Wordpress plug-in writers have stopped maintaining their plug-ins. Is Wordpress a dying platform? Is blogging a dying art?
| 60.272727 | 358 | 0.781297 | eng_Latn | 0.999583 |
2fc2906b4d07f3bbe72daca6dbedafd8e6d6cf7c | 113 | md | Markdown | src/__tests__/fixtures/unfoldingWord/en_tq/job/34/08.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | null | null | null | src/__tests__/fixtures/unfoldingWord/en_tq/job/34/08.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | 226 | 2020-09-09T21:56:14.000Z | 2022-03-26T18:09:53.000Z | src/__tests__/fixtures/unfoldingWord/en_tq/job/34/08.md | unfoldingWord/content-checker | 7b4ca10b94b834d2795ec46c243318089cc9110e | [
"MIT"
] | 1 | 2022-01-10T21:47:07.000Z | 2022-01-10T21:47:07.000Z | # In whose company does Elihu say Job goes around?
He says Job goes around in the company of those who do evil.
| 28.25 | 60 | 0.761062 | eng_Latn | 1.000002 |
2fc2cd104006fee37a2ea2c9e5e1f5ec0f94144f | 197 | md | Markdown | README.md | libitjs/libit | e53bbd9c3b0ee004dc70c8427b6ffb2636303dbc | [
"MIT"
] | null | null | null | README.md | libitjs/libit | e53bbd9c3b0ee004dc70c8427b6ffb2636303dbc | [
"MIT"
] | null | null | null | README.md | libitjs/libit | e53bbd9c3b0ee004dc70c8427b6ffb2636303dbc | [
"MIT"
] | null | null | null | # 🚀 libit
> A collection of type-safe cross-platform packages for building robust server-side and client-side applications,
> packages, and tooling.
## Usage
**TBD**
## Licence
[MIT](LICENSE)
| 15.153846 | 113 | 0.720812 | eng_Latn | 0.94712 |
2fc303bd4bff2bdd06d3672fa8d5748917f5f7aa | 37 | md | Markdown | README.md | joko-del/tugas_uas_array | 79c767135c665eec1cc6ee102beb432975302c14 | [
"MIT"
] | 2 | 2021-04-18T03:13:53.000Z | 2021-09-30T23:21:57.000Z | README.md | joko-del/tugas_uas_array | 79c767135c665eec1cc6ee102beb432975302c14 | [
"MIT"
] | null | null | null | README.md | joko-del/tugas_uas_array | 79c767135c665eec1cc6ee102beb432975302c14 | [
"MIT"
] | null | null | null | # tugas_uas_array
Final exam (UAS) assignment: creating arrays
| 12.333333 | 18 | 0.810811 | ind_Latn | 0.596631 |
2fc464f39587e39fb90aa1f0f0f337ac8ac65d01 | 766 | md | Markdown | pages/api/ts-command-line.commandlineparser.addaction.md | elliot-nelson/rushstack.io-website | 073c8c2621055dc26f8aa0baeef294a3ca832030 | [
"CC-BY-4.0",
"MIT"
] | 10 | 2019-11-08T06:57:43.000Z | 2022-02-04T23:30:01.000Z | pages/api/ts-command-line.commandlineparser.addaction.md | elliot-nelson/rushstack.io-website | 073c8c2621055dc26f8aa0baeef294a3ca832030 | [
"CC-BY-4.0",
"MIT"
] | 11 | 2019-09-05T05:20:57.000Z | 2022-02-26T05:34:54.000Z | pages/api/ts-command-line.commandlineparser.addaction.md | elliot-nelson/rushstack.io-website | 073c8c2621055dc26f8aa0baeef294a3ca832030 | [
"CC-BY-4.0",
"MIT"
] | 23 | 2019-11-08T06:57:46.000Z | 2022-03-25T15:59:47.000Z | ---
layout: page
navigation_source: api_nav
improve_this_button: false
---
<!-- Do not edit this file. It is automatically generated by API Documenter. -->
[Home](./index.md) > [@rushstack/ts-command-line](./ts-command-line.md) > [CommandLineParser](./ts-command-line.commandlineparser.md) > [addAction](./ts-command-line.commandlineparser.addaction.md)
## CommandLineParser.addAction() method
Defines a new action that can be used with the CommandLineParser instance.
<b>Signature:</b>
```typescript
addAction(action: CommandLineAction): void;
```
## Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| action | [CommandLineAction](./ts-command-line.commandlineaction.md) | |
<b>Returns:</b>
void
| 25.533333 | 207 | 0.681462 | eng_Latn | 0.386474 |
2fc48793651582afaaa5d68acb837c5a1da9542d | 191 | md | Markdown | 20210913220914/README.md | rei-nert/zettelkasten | 17fa3929d776ee21da105e69242df48697f2c7b1 | [
"MIT"
] | null | null | null | 20210913220914/README.md | rei-nert/zettelkasten | 17fa3929d776ee21da105e69242df48697f2c7b1 | [
"MIT"
] | null | null | null | 20210913220914/README.md | rei-nert/zettelkasten | 17fa3929d776ee21da105e69242df48697f2c7b1 | [
"MIT"
] | null | null | null | # Proof by contradiction
To prove P is true, assume P is false (that is, assume not-P is true), and then use that hypothesis to derive a falsehood or contradiction.
If (not P) => false is true, then P is true.
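Written out formally, the schema — and a classic instance of it — looks like:

```latex
% Proof by contradiction as an inference rule:
% from (\neg P \Rightarrow \bot) conclude P.
\[
  \frac{\neg P \;\Rightarrow\; \bot}{P}
\]
% Example: to show \sqrt{2} is irrational, assume \sqrt{2} = a/b in lowest
% terms; then a^2 = 2b^2 forces both a and b to be even, contradicting
% the assumption that a/b was in lowest terms.
```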
| 47.75 | 125 | 0.727749 | eng_Latn | 0.999904 |
2fc523a33f1cf6dfa6cdfceade9ace44a8c22c46 | 2,331 | md | Markdown | docs/tools/material.md | Ks1717/cangchengkun__Docs | df9240a9998115e867f519cc1ee9fcedd4c44cb6 | [
"MIT"
] | null | null | null | docs/tools/material.md | Ks1717/cangchengkun__Docs | df9240a9998115e867f519cc1ee9fcedd4c44cb6 | [
"MIT"
] | null | null | null | docs/tools/material.md | Ks1717/cangchengkun__Docs | df9240a9998115e867f519cc1ee9fcedd4c44cb6 | [
"MIT"
] | null | null | null | # Material for MkDocs 安装使用
MkDocs is a static project-documentation site generator whose pages are written in Markdown. Material for MkDocs builds on MkDocs with a site theme based on Google's Material Design; with a simple configuration you can quickly set up a project documentation site and publish it to GitHub Pages, Amazon S3, or a server you host yourself.
Official site: [Material for MkDocs][1]
Official site: [MkDocs][6]
## Custom themes
Custom themes: [MkDocs custom themes](https://www.mkdocs.org/user-guide/custom-themes/)
Style your own docs: [Styling your docs](https://www.mkdocs.org/user-guide/styling-your-docs/#readthedocs)
Community themes on GitHub: [MkDocs Themes](https://github.com/mkdocs/mkdocs/wiki/MkDocs-Themes)
## Deploying
Deploy to GitHub: [Creating project pages manually](https://help.github.com/articles/creating-project-pages-manually/)
Hosting a Static Website on Amazon S3: [Hosting a Static Website on Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html)
Deploying your docs: [Deploying your docs](https://www.mkdocs.org/user-guide/deploying-your-docs/)
## Python支持
```sh
python --version
# Python 2.7.13
pip --version
# pip 9.0.1
```
## Install MkDocs
Installing and verifying MkDocs is as simple as:
```sh
pip install mkdocs && mkdocs --version
# mkdocs, version 0.17.1
```
Note: Material requires MkDocs >= 0.17.1.
## Installing Material
### using pip
Material can be installed with pip:
```sh
pip install mkdocs-material
```
### using choco
If you're on Windows you can use Chocolatey to install Material:
```sh
choco install mkdocs-material
```
This will install all required dependencies like Python and MkDocs.
### cloning from GitHub
Material can also be used without a system-wide installation by cloning the repository into a subfolder of your project's root directory:
```sh
git clone https://github.com/squidfunk/mkdocs-material.git
```
This is especially useful if you want to extend the theme and override some parts of the theme. The theme will reside in the folder mkdocs-material/material.
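Either way, the theme is enabled in your project's `mkdocs.yml`. A minimal configuration might look like the following (the site name is a placeholder):

```yaml
site_name: My Project Docs
theme:
  name: material
  # If you cloned the theme from GitHub instead of installing it with
  # pip/choco, point MkDocs at the checkout (path is an assumption):
  # custom_dir: mkdocs-material/material
```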
# Start a local server
```sh
mkdocs serve
```
# Deploying
Publish the site: [deploying][2]
[GitHub Pages][4]: publish the site to GitHub
[Amazon S3][5]: publish the site to S3
```sh
mkdocs gh-deploy
```
```sh
mkdocs gh-deploy --help
```
[ghp-import][3] is a GitHub Pages publishing tool
[1]:https://squidfunk.github.io/mkdocs-material/
[2]:https://www.mkdocs.org/user-guide/deploying-your-docs/
[3]:https://github.com/davisp/ghp-import
[4]:https://help.github.com/articles/creating-project-pages-using-the-command-line/
[5]:https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
[6]:https://www.mkdocs.org/
| 24.28125 | 157 | 0.75118 | yue_Hant | 0.429775 |
2fc65d9c4863af0a2c5b67480a2c5cb64843761e | 1,718 | md | Markdown | docs/ssms/agent/delete-operator.md | bingenortuzar/sql-docs.es-es | 9e13730ffa0f3ce461cce71bebf1a3ce188c80ad | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-26T21:26:08.000Z | 2021-04-26T21:26:08.000Z | docs/ssms/agent/delete-operator.md | jlporatti/sql-docs.es-es | 9b35d3acbb48253e1f299815df975f9ddaa5e9c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/ssms/agent/delete-operator.md | jlporatti/sql-docs.es-es | 9b35d3acbb48253e1f299815df975f9ddaa5e9c7 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Eliminar operador | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.prod_service: sql-tools
ms.reviewer: ''
ms.technology: ssms
ms.topic: conceptual
f1_keywords:
- sql13.ag.operator.delete.f1
ms.assetid: 68402a69-24c4-4304-a1db-7409c42c51dc
author: markingmyname
ms.author: maghan
monikerRange: = azuresqldb-mi-current || >= sql-server-2016 || = sqlallproducts-allversions
ms.openlocfilehash: 6c3b96ef7ec46fca634f95a144d6307c260a9792
ms.sourcegitcommit: e7d921828e9eeac78e7ab96eb90996990c2405e9
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/16/2019
ms.locfileid: "68267118"
---
# <a name="delete-operator"></a>Eliminar operador
[!INCLUDE[appliesto-ss-asdbmi-xxxx-xxx-md](../../includes/appliesto-ss-asdbmi-xxxx-xxx-md.md)]
> [!IMPORTANT]
> En [Instancia administrada de Azure SQL Database](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance), la mayoría de las características de agente SQL Server son compatibles actualmente, aunque no todas. Vea [Diferencias de T-SQL en Instancia administrada de Azure SQL Database](https://docs.microsoft.com/azure/sql-database/sql-database-managed-instance-transact-sql-information#sql-server-agent) para obtener más información.
Utilice esta página para eliminar un operador.
## <a name="options"></a>Opciones
**Objeto que se va a eliminar**
Muestra el operador que se va a eliminar.
**Volver a asignar a**
Vuelve a asignar las notificaciones del operador que se va a eliminar.
**Propiedades**
Muestra las propiedades del operador al que se volverán a asignar las notificaciones.
## <a name="see-also"></a>Consulte también
[Operadores](../../ssms/agent/operators.md)
| 39.045455 | 454 | 0.765425 | spa_Latn | 0.555286 |
2fc6f1353d9b1fdb436c42f23e67db32b4bf5dd0 | 18 | md | Markdown | README.md | lebronjames/fanxing | 39f953f260aec5a9d2ba673632f95540b8f68edb | [
"Apache-2.0"
] | null | null | null | README.md | lebronjames/fanxing | 39f953f260aec5a9d2ba673632f95540b8f68edb | [
"Apache-2.0"
] | null | null | null | README.md | lebronjames/fanxing | 39f953f260aec5a9d2ba673632f95540b8f68edb | [
"Apache-2.0"
] | null | null | null | # fanxing
java 泛型
| 6 | 9 | 0.722222 | cat_Latn | 0.926069 |
2fc73e33067765041488cfca44ac55312018687a | 1,065 | md | Markdown | src/ResourceManager/AzureBatch/Commands.Batch/ChangeLog.md | haitsongmsft/azure-powershell | 46b1f02cfccddefae92f9f098fae351adc43dc5c | [
"MIT"
] | 3 | 2018-12-15T01:12:42.000Z | 2018-12-15T01:12:47.000Z | src/ResourceManager/AzureBatch/Commands.Batch/ChangeLog.md | Acidburn0zzz/azure-powershell | 172b856ae7145de34811573c8624dff3c85c3570 | [
"MIT"
] | null | null | null | src/ResourceManager/AzureBatch/Commands.Batch/ChangeLog.md | Acidburn0zzz/azure-powershell | 172b856ae7145de34811573c8624dff3c85c3570 | [
"MIT"
] | 1 | 2018-11-20T22:42:44.000Z | 2018-11-20T22:42:44.000Z | <!--
Please leave this section at the top of the change log.
Changes for the current release should go under the section titled "Current Release", and should adhere to the following format:
## Current Release
* Overview of change #1
- Additional information about change #1
* Overview of change #2
- Additional information about change #2
- Additional information about change #2
* Overview of change #3
* Overview of change #4
- Additional information about change #4
## YYYY.MM.DD - Version X.Y.Z (Previous Release)
* Overview of change #1
- Additional information about change #1
-->
## Current Release
## Version 1.0.0
* General availability of `Az.Batch` module
* Added the ability to see what version of the Azure Batch Node Agent is running on each of the VMs in a pool, via the new `NodeAgentInformation` property on `PSComputeNode`.
* Removed the `ValidationStatus` property from `PSTaskCounts`.
* The `Caching` default for `PSDataDisk` is now `ReadWrite` instead of `None`.
| 38.035714 | 174 | 0.702347 | eng_Latn | 0.979782 |
2fc7c26217397a196b2067c3c12624d8528ddc58 | 581 | md | Markdown | docs/RetrieveInventoryChangesRequest.md | shiyu3169/connect-nodejs-sdk | eca392ed799de9df60d4f328086a9d1c36ef2db8 | [
"Apache-2.0"
] | 1 | 2019-06-21T19:35:37.000Z | 2019-06-21T19:35:37.000Z | docs/RetrieveInventoryChangesRequest.md | shiyu3169/connect-nodejs-sdk | eca392ed799de9df60d4f328086a9d1c36ef2db8 | [
"Apache-2.0"
] | null | null | null | docs/RetrieveInventoryChangesRequest.md | shiyu3169/connect-nodejs-sdk | eca392ed799de9df60d4f328086a9d1c36ef2db8 | [
"Apache-2.0"
] | null | null | null | # SquareConnect.RetrieveInventoryChangesRequest
### Description
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**location_ids** | **String** | The [Location](#type-location) IDs to look up as a comma-separated list. An empty list queries all locations. | [optional]
**cursor** | **String** | A pagination cursor returned by a previous call to this endpoint. Provide this to retrieve the next set of results for the original query. See [Pagination](/basics/api101/pagination) for more information. | [optional]
| 41.5 | 245 | 0.664372 | eng_Latn | 0.848449 |
2fc8092efe7a4bf4ca078d5cc98d4c36deffa4c3 | 140 | md | Markdown | README.md | Ansoulom/randomness_helper | 4d8aa8a19ae2c4b84015c3a7105ca91a19116c38 | [
"MIT"
] | null | null | null | README.md | Ansoulom/randomness_helper | 4d8aa8a19ae2c4b84015c3a7105ca91a19116c38 | [
"MIT"
] | null | null | null | README.md | Ansoulom/randomness_helper | 4d8aa8a19ae2c4b84015c3a7105ca91a19116c38 | [
"MIT"
] | null | null | null | # Randomness Helper
Simply provides three convenient helper functions to simplify random number generation. See test source file for usage.
| 46.666667 | 119 | 0.835714 | eng_Latn | 0.984801 |
2fc8166e8f2e15aa95fa6cf71862c265a70be881 | 2,920 | md | Markdown | _posts/articles/2019-01-18-snt-2019-announcement.md | vbrej/vbrej.github.io | 0d9f6652e2347a83577890f811ef09a7f7bd9ac0 | [
"MIT"
] | null | null | null | _posts/articles/2019-01-18-snt-2019-announcement.md | vbrej/vbrej.github.io | 0d9f6652e2347a83577890f811ef09a7f7bd9ac0 | [
"MIT"
] | 2 | 2020-03-01T21:54:43.000Z | 2021-07-18T08:43:37.000Z | _posts/articles/2019-01-18-snt-2019-announcement.md | vbrej/vbrej.github.io | 0d9f6652e2347a83577890f811ef09a7f7bd9ac0 | [
"MIT"
] | null | null | null | ---
layout: article
title: "SNT–2019"
excerpt: "Southampton Uni to host a novice tournament in February."
categories: articles
share: true
ads: false
author_profile: false
image:
teaser: snt-2019-400x250.jpg
feature: snt-2019-1600x1056.jpg
---
*Current information about the tournament is available on the [Facebook event](https://www.facebook.com/events/231201184468590/).*
The University of Southampton Quiz Society, in association with UK Quizbowl, are pleased to announce a house-written quiz tournament titled the **Southern Novice Tournament (SNT)**, to be held on Saturday, February 2<sup>nd</sup>.
This tournament is specifically geared towards players with minimal previous exposure to competitive quizzing, and will be a great starting point for introducing new players. It will also be a fantastic opportunity for this year’s prospective *University Challenge* teams to practice in a team buzzer quiz, provided they are eligible to play.
To be eligible for this tournament, players must *(1)* be in the first three (four in Scotland) years of a UK undergraduate degree
and *(2)* have never averaged more than 30 points per game at a university-level quizbowl tournament. Players who do not meet either or both of these requirements can be admitted at the discretion of the organisers; please email <quizsoc@soton.ac.uk> if you require any further information.
The tournament consists of eight rounds, each of twenty starters and bonuses, in a University Challenge-style format. The starters are much shorter than typical quizbowl starters, and in each round there will be two picture sets and an audio set.
The event will be held in the Murray Building (B58) on Highfield Campus at the University of Southampton, and the cost of the event will be £30 per team, with discounts as follows:
* <p style="font-size: 18px">£5 for bringing a fully-functional buzzer set;</p>
* <p style="font-size: 18px">£10 for bringing a moderator;</p>
* <p style="font-size: 18px">£10 for the whole team travelling more than 200 miles to get to Southampton;</p>
* <p style="font-size: 18px">£10 for universities that have not taken part in a university-level quiz tournament in the past two years.</p>
The field cap for this tournament is 12 teams (though this can be expanded if needs be) and you can sign up via the Google form here: **<https://goo.gl/forms/YuBnBrlMauaIMW0I3>**.
Initially, a limit of two teams per institution will be imposed, though this may change depending on interest in the tournament.
Finally, please do not hesitate to get in touch if you have any questions, either via email at <quizsoc@soton.ac.uk>, or by sending a message to the University of Southampton Quiz Society's [Facebook page](https://www.facebook.com/SotonQuizSociety/). You can find the event on Facebook [here](https://www.facebook.com/events/231201184468590/), which contains more details about dates, times and locations.
| 81.111111 | 405 | 0.780479 | eng_Latn | 0.997923 |
2fc82b09cfd3eceed5ed46813d0d72de8d179250 | 1,942 | md | Markdown | docs/index.md | knelasevero/external-secrets | 7b883778e9b72cb2aecb5bf22790da1bcc9f22f1 | [
"Apache-2.0"
] | null | null | null | docs/index.md | knelasevero/external-secrets | 7b883778e9b72cb2aecb5bf22790da1bcc9f22f1 | [
"Apache-2.0"
] | null | null | null | docs/index.md | knelasevero/external-secrets | 7b883778e9b72cb2aecb5bf22790da1bcc9f22f1 | [
"Apache-2.0"
] | null | null | null | # Introduction

**External Secrets Operator** is a Kubernetes operator that integrates external
secret management systems like [AWS Secrets
Manager](https://aws.amazon.com/secrets-manager/), [HashiCorp
Vault](https://www.vaultproject.io/), [Google Secrets
Manager](https://cloud.google.com/secret-manager), [Azure Key
Vault](https://azure.microsoft.com/en-us/services/key-vault/) and many more. The
operator reads information from external APIs and automatically injects the
values into a [Kubernetes
Secret](https://kubernetes.io/docs/concepts/configuration/secret/).
### What is the goal of External Secrets Operator?
The goal of External Secrets Operator is to synchronize secrets from external
APIs into Kubernetes. ESO is a collection of custom API resources -
`ExternalSecret`, `SecretStore` and `ClusterSecretStore` that provide a
user-friendly abstraction for the external API that stores and manages the
lifecycle of the secrets for you.
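As a sketch of how these pieces fit together (names, the store reference, and the remote key below are illustrative), an `ExternalSecret` points at a `SecretStore` and tells the operator which external key to mirror into a Kubernetes `Secret`:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: example
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: my-secret-store      # a SecretStore in the same namespace
    kind: SecretStore
  target:
    name: example-k8s-secret   # the Kubernetes Secret the operator creates
  data:
    - secretKey: password      # key inside the Kubernetes Secret
      remoteRef:
        key: prod/db/password  # key in the external provider
```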
### Where to get started
To get started, please read through [API overview](api-overview.md) this should
give you a high-level overview to understand the API and use-cases. After that
please follow one of our [guides](guides-introduction.md) to get a jump start
using the operator.
For a complete reference of the API types please refer to our [API
Reference](spec.md).
### How to get involved
This project is driven by it's users and contributors and we welcome everybody
to get involved. Join our meetings, open issues or ask questions in Slack. The
success of this project depends on your input: No contribution is too small -
even opinions matter!
How to get involved:
- Monthly Meeting: we announce our meetings on slack
([agenda](https://hackmd.io/GSGEpTVdRZCP6LDxV3FHJA))
- [Kubernetes Slack
#external-secrets](https://kubernetes.slack.com/messages/external-secrets)
- [Contributing Process](contributing-process.md)
| 41.319149 | 80 | 0.785788 | eng_Latn | 0.946016 |
2fc88938d0038cc8c4fe8d68602a495dd31c76be | 8,213 | md | Markdown | Exchange/ExchangeServer2013/view-a-role-exchange-2013-help.md | M34U2NV/OfficeDocs-Exchange | ed7614bed4ac2f88829d12bd337f0284b09333de | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-06-30T11:06:58.000Z | 2021-06-30T11:06:58.000Z | Exchange/ExchangeServer2013/view-a-role-exchange-2013-help.md | M34U2NV/OfficeDocs-Exchange | ed7614bed4ac2f88829d12bd337f0284b09333de | [
"CC-BY-4.0",
"MIT"
] | null | null | null | Exchange/ExchangeServer2013/view-a-role-exchange-2013-help.md | M34U2NV/OfficeDocs-Exchange | ed7614bed4ac2f88829d12bd337f0284b09333de | [
"CC-BY-4.0",
"MIT"
] | 1 | 2021-04-15T18:25:37.000Z | 2021-04-15T18:25:37.000Z | ---
title: 'View a role: Exchange 2013 Help'
TOCTitle: View a role
ms:assetid: 1875b15f-22db-4ede-b310-ea894d6211c8
ms:mtpsurl: https://technet.microsoft.com/library/Dd335117(v=EXCHG.150)
ms:contentKeyID: 49289181
ms.reviewer:
manager: serdars
ms.author: dmaguire
author: msdmaguire
f1.keywords:
- NOCSH
mtps_version: v=EXCHG.150
---
# View a role
_**Applies to:** Exchange Server 2013_
Management roles can be listed in a variety of ways, depending on the information you want. For example, you can choose to return only roles of a specific role type, roles that contain only specific cmdlets and parameters, or view the details of a specific management role. For more information about management roles in Microsoft Exchange Server 2013, see [Understanding management roles](understanding-management-roles-exchange-2013-help.md).
If you want to view a list of all management role entries on a role, see [View role entries](view-role-entries-exchange-2013-help.md).
Looking for other management tasks related to roles? Check out [Advanced permissions](advanced-permissions-exchange-2013-help.md).
## What do you need to know before you begin?
- Estimated time to complete each procedure: 5 minutes
- You need to be assigned permissions before you can perform this procedure or procedures. To see what permissions you need, see the "Management roles" entry in the [Role management permissions](role-management-permissions-exchange-2013-help.md) topic.
- You must use the Shell to perform these procedures.
- This topic makes use of pipelining and the **Format-List** and **Format-Table** cmdlets. For more information about these concepts, see the following topics:
- [about_Pipelines](/powershell/module/microsoft.powershell.core/about/about_pipelines)
- [Working with command output](working-with-command-output-exchange-2013-help.md)
- For information about keyboard shortcuts that may apply to the procedures in this topic, see [Keyboard shortcuts in the Exchange admin center](keyboard-shortcuts-in-the-exchange-admin-center-2013-help.md).
> [!TIP]
> Having problems? Ask for help in the Exchange forums. Visit the forums at [Exchange Server](https://social.technet.microsoft.com/forums/office/home?category=exchangeserver).
## View a specific management role
You can view the details of a specific role by retrieving a specific role using the **Get-ManagementRole** cmdlet and piping the output to the **Format-List** cmdlet.
To view the details of a specific role, use the following syntax.
```powershell
Get-ManagementRole <role name> | Format-List
```
This example retrieves the details about the Mail Recipients management role.
```powershell
Get-ManagementRole "Mail Recipients" | Format-List
```
For detailed syntax and parameter information, see [Get-ManagementRole](/powershell/module/exchange/Get-ManagementRole).
## List all management roles
You can view a list of all the management roles in your organization by not specifying any roles when you run the **Get-ManagementRole** cmdlet. By default, the role name and role type of each role are included in the results.
This example returns a list of all roles in your organization.
```powershell
Get-ManagementRole
```
To return a list of specific properties for all the roles in your organization, you can pipe the results of the **Format-Table** cmdlet and specify the properties you want in the list of results. Use the following syntax.
```powershell
Get-ManagementRole | Format-Table <property 1>, <property 2...>
```
This example returns a list of all the roles in your organization and includes the **Name** property and any property with the word **Implicit** at the beginning of the property name.
```powershell
Get-ManagementRole | Format-Table Name, Implicit*
```
For detailed syntax and parameter information, see [Get-ManagementRole](/powershell/module/exchange/Get-ManagementRole).
## List management roles that contain a specific cmdlet
You can return a list of roles that contain a cmdlet that you specify by using the *Cmdlet* parameter on the **Get-ManagementRole** cmdlet.
To return a list of roles that contain the cmdlet you specify, use the following syntax.
```powershell
Get-ManagementRole -Cmdlet <cmdlet>
```
This example returns a list of roles that contain the **New-Mailbox** cmdlet.
```powershell
Get-ManagementRole -Cmdlet New-Mailbox
```
For detailed syntax and parameter information, see [Get-ManagementRole](/powershell/module/exchange/Get-ManagementRole).
## List management roles that contain a specific parameter
You can return a list of roles that contain one or more specified parameters by using the *CmdletParameters* parameter on the **Get-ManagementRole** cmdlet. Only roles that contain all the parameters you specify are returned.
When you use the *CmdletParameters* parameter, you can choose to include the *Cmdlet* parameter. If you include the *Cmdlet* parameter, only roles that contain the parameters you specify on the cmdlet you specify are returned. If you don't include the *Cmdlet* parameter, roles that contain the parameters you specify, regardless of the cmdlet they're on, are returned.
To return a list of roles that contain the parameters you specify, use the following syntax.
```powershell
Get-ManagementRole [-Cmdlet <cmdlet>] -CmdletParameters <parameter 1>, <parameter 2...>
```
This example returns a list of roles that contain the *Database* and *Server* parameters, regardless of the cmdlets they exist on.
```powershell
Get-ManagementRole -CmdletParameters Database, Server
```
This example returns a list of roles where the *EmailAddresses* parameter exists only on the **Set-Mailbox** cmdlet.
```powershell
Get-ManagementRole -Cmdlet Set-Mailbox -CmdletParameters EmailAddresses
```
You can also use the wildcard character (\*) with either the *Cmdlet* or *CmdletParameters* parameters to match partial cmdlet or parameter names.
For detailed syntax and parameter information, see [Get-ManagementRole](/powershell/module/exchange/Get-ManagementRole).
## List management roles of a specific role type
You can return a list of roles based on a specified role type by using the *RoleType* parameter on the **Get-ManagementRole** cmdlet.
To return a list of roles that match the role type you specify, use the following syntax.
```powershell
Get-ManagementRole -RoleType <roletype>
```
This example returns a list of roles based on the `UmMailboxes` role type.
```powershell
Get-ManagementRole -RoleType UmMailboxes
```
For detailed syntax and parameter information, see [Get-ManagementRole](/powershell/module/exchange/Get-ManagementRole).
## List the immediate child roles of a parent role
You can return a list of roles that are the immediate children of the specified parent role by using the *GetChildren* parameter on the **Get-ManagementRole** cmdlet. Only roles that contain the role you specify as the parent role are returned.
To return a list of the immediate children roles of a parent role, use the following syntax.
```powershell
Get-ManagementRole <parent role name> -GetChildren
```
This example returns a list of immediate children of the Disaster Recovery role.
```powershell
Get-ManagementRole "Disaster Recovery" -GetChildren
```
For detailed syntax and parameter information, see [Get-ManagementRole](/powershell/module/exchange/Get-ManagementRole).
## List all child roles below a parent role
You can return a list of the entire chain of roles from a specified parent role to the last child role by using the *Recurse* parameter on the **Get-ManagementRole** cmdlet. The *Recurse* parameter tells the **Get-ManagementRole** cmdlet to recurse down through every parent and child relationship it finds until it reaches the last child role. The parent role is included in the list that's returned.
To return a list of all the child roles of a parent role, use the following syntax.
```powershell
Get-ManagementRole <parent role name> -Recurse
```
This example returns all the child roles of the Mail Recipients role.
```powershell
Get-ManagementRole "Mail Recipients" -Recurse
```
For detailed syntax and parameter information, see [Get-ManagementRole](/powershell/module/exchange/Get-ManagementRole).
# NOTES
## Lab2_Flash is for quick testing purposes. Uses the ANSI C (1989) standard
* Execute using
* gcc -Wall -ansi -lm Lab2_Flash.c
* AND THEN
* ./a.out
Mugen is an ML-based parameterised music generator.
## Testing
First [set up](#setup) the environment (see below). Then test with:
```
pytest
```
Features under development are marked with `@wip` and are skipped from testing by default. To run those tests, run:
```
pytest -k wip
```
## Setup
After a fresh checkout, or when requirements change, or when starting work:
```
source activate.sh
```
When stopping work:
```
deactivate
```
If you're using iTerm2, you can get figures displaying in your terminal with [itermplot]. Install it like this, and then run the tests as normal:
```
pip install itermplot
export MPLBACKEND=module://itermplot ITERMPLOT=rv
```
[itermplot]: https://github.com/daleroberts/itermplot
## Playing MIDI on Mac OS
1. Open the Audio MIDI Setup app
2. Choose _Window > Show MIDI Studio_
3. Double-click on _IAC Driver_
4. Check _Device is online_
5. Start Garage Band and add a software MIDI track
6. Play a short test:
```
python -m mugen test-midi
```
# webpack-starter
Basic webpack starter
# Defining Measures Columns
> This document is part of a wider discussion on [Defining Columns in an info.json v1.1](./README.md).
**For use in multi-measure datasets only.**
A `measures` column is a column in a tidy multi-measure dataset which defines what is being measured/observed in a given observation-row. The measures can either be *new*ly created or *existing* resources defined elsewhere. However, **a single measure column cannot currently support a mix of existing and new measures**; you must only use *existing* measures **or** *new* measures in a given `measures` column.
## New Measures
Automatically defining a series of new measures from the values contained within the column:
```json
{
"type": "measures"
}
```
## New Measures - Specified
Defining a series of new measures.
```json
{
"type": "measures",
"new": [
{
"label": "Mean Income"
},
{
"label": "Median Income",
"path": "median-income-path",
"comment": "A description of what median income is. I cannot use Markdown or HTML here.",
"isDefinedBy": "http://example.com/definitions/something-which-explains-what-this-measure-means.pdf"
}
]
}
```
## Existing Measures
Defining the mapping to convert the measure column's text into the measure URIs.
```json
{
"type": "measures",
"value": "http://example.com/measures/{+column_name}"
}
```
---
layout: post
title: Programming Python (DONE)
---
**Note: Based on Programming Python on Udacity**
# Table of Contents
* [Chapter 1 - Programming Python](#chapter-1)
## Chapter 1 - Programming Python <a id="chapter-1"></a>
### Install Python on OSX
* [Download Python 2 or 3](https://www.python.org/downloads/mac-osx/), which includes:
* Python language interpreter
* IDLE IDE
* pip - Python package management system `pip --help`
* Note: pip and easyinstall download from [Python Package Index Server](https://pypi.python.org/pypi)
* Download latest version of Python (i.e. 3.6.0) from https://www.python.org/downloads/
* Note that the version will be created in: /Library/Frameworks/Python.framework/Versions/3.6
* Use virtualenv to create a virtual Python environment using this version `v.mk --python=python3.6 python_test_env_3.6.0`
{% highlight bash %}
$ which python # => /Library/Frameworks/Python.framework/Versions/2.7/bin/python
$ which pip # => /Library/Frameworks/Python.framework/Versions/2.7/bin/pip
$ which idle # => /Library/Frameworks/Python.framework/Versions/2.7/bin/idle
{% endhighlight %}
* Download Tcl/Tk dependency (ActiveTcl) for tkinter on MacOSX for IDLE IDE via this [reference](https://www.python.org/download/mac/tcltk/)
* [ActiveTcl Community Edition for MacOSX](http://www.activestate.com/activetcl/downloads)
* Note: Tk is toolkit to build standard cross-platform GUIs for Tcl language
* Note: tkinter is dependent on Tcl/Tk (ActiveTcl)
* Show where installed and version
* Run Tk Widget Demo based on Tcl/Tk code from Wish GUI
{% highlight bash %}
$ which tclsh # => /usr/bin/tclsh
$ which wish # => /usr/bin/wish
$ tclsh
% info patchlevel # => 8.5.9
$ wish
% info patchlevel # => 8.5.9
{% endhighlight %}
Note: **IGNORE THIS** Alternatively install via Homebrew `brew install python` (BUT `pyenv` appears to be better than `brew` https://gist.github.com/Bouke/11261620)
* Setup development environment
* Create GitHub repo (i.e. PythonTest)
{% highlight bash %}
git clone https://github.com/ltfschoen/PythonTest
cd PythonTest
{% endhighlight %}
### Packages Installed
* Output packages installed with `pip freeze`, i.e.
{% highlight bash %}
pip freeze
dotenv==0.0.5
httplib2==0.9.2
py==1.4.31
pydash==3.4.5
pytest==3.0.3
pytz==2016.7
six==1.10.0
twilio==5.6.0
vboxapi==1.0
{% endhighlight %}
### Product
* Brainstorm to create time tracking app
* Create a program that tracks time
* Loops
* Wait x hours
* Then open browser
* Share code with friend to truly understand
### Using IDLE IDE
* Run IDLE IDE
* CMD+SPACE (Spotlight)
* Type "IDLE"
* New - Go to Menu: File > New File
* Save - Go to Menu: File > Save
* Run - Go to Menu: Run > Run Module
* Stop - CTRL+C in Console Window `>>>`
* Autocompletion
* Start typing
* Press CTRL+SPACE
* Press up/down to desired method
* Press SPACE to execute
### Debugging IDLE
* Debugging
* CTRL+C in IDLE terminal to terminate any running app
* Click in the Python 2.7.12 Shell window (not a code file)
* Debug - Go to Menu: Debug > Debugger
* Run - Go to Menu: Run (or FN + F5)
* Go To Line Number: CMD+J
### IntelliJ with Python Plugin and source level debugging setup
* After fresh installation of IntelliJ IDEA 2016.2.5 where Python 2.7.12 already installed
* Check Python Plugin installed (i.e. Python Plugin Version: 2016.2.162.43)
* IntelliJ IDEA > Preferences > Plugins
* Add Python as recognized file type
* Preferences > Editor > File Types (shortcut CMD+,)
* Click "+" to add a new "Recognized File Type"
* Add name value of: Python
* Add line comment value of: #
* Add block comment start: """
* Add block comment end: """
* Select all checkboxes (i.e. Support paired ..., Support string escapes)
* Click "1" under "Keywords" and click "+" to add for syntax highlight colour 1: %, from, import
* Click "2" under "Keywords" and click "+" to add for syntax highlight colour 2: def
* Click "3" under "Keywords" and click "+" to add for syntax highlight colour 3: for, if, while, yield, next
* Click "4" under "Keywords" and click "+" to add for syntax highlight colour 4: print
* Click "OK" to save
* Add Python file types
* Preferences > Editor > File Types (shortcut CMD+,)
* Click "+" under "Registered Patterns" and enter: *.py
* Click "OK"
* Check no Python files that you need to edit are listed under "Ignore files and folders"
* Restart IntelliJ IDEA
* Go back to check the recognized file types, where suddenly "Python" (with *.pyw) and "Python Stub" (with *.pyi) are shown (were previous efforts in vain???)
* Preferences > Editor > File Types (shortcut CMD+,)
* Click "OK"
* Add Python Interpreter
http://stackoverflow.com/questions/24769117/how-do-i-configure-a-python-interpreter-in-intellij-idea-with-the-pycharm-plugin
* File > Project Structure > Platform Settings > SDKs
* Click "+" and select Python SDK > Local > enter directory where Python installed (check in terminal with `which python`). Note that this added "Python 2.7.12" for me
* File > Project Structure > Project Settings > Project > Project SDK > Select "Python 2.7.12"
* Click "OK"
* Configure Python
* Preferences > Languages & Frameworks > Python Template Languages (i.e. Django)
* Preferences > Languages & Frameworks > BDD (i.e. Behave)
* Preferences > Tools > Python External Documentation
* Preferences > Tools > Python Integrated Tools > Default Test Runner (i.e. unittest, py.test)
* Configure Python debug configuration for source level debugging
* Preferences > Build, Execution, Deployment > Debugger > Python Debugger
* Preferences > Build, Execution, Deployment > Console > Python Console > Click "Always show debug console"
* Run > Edit Configuration > Default > Python Tests > py.test
> Add "Target" of project directory
> Check "Options" and add: -v --color=yes --exitfirst --showlocals --durations=5
* Run > Edit Configuration > Click "+"
> select "Python Tests > py.test"
> Rename to "Python tests custom"
> Change "Target" to project directory
* Note: "Python Interpreter" should now be "Use specified interpreter" with value "Python 2.7.12 ..."
* Click "OK"
* Run "Python tests custom"
* **Problem**: No breakpoints appear when I click in the sidebar in Python app!!! (however using a Ruby app breakpoints work!!)
* Attempt to fix #1: File > Invalidate Caches and Restart - DID NOT WORK
* Attempt to fix #2: Uninstalled the Python Plugin Version: 2016.2.162.43 and reinstalled same version - DID NOT WORK
* **Solution** Attempt to fix #2: Tried adding wildcard "*.py" as a Registered Pattern but a popup appears "This wildcard is already registered
by 'Python' filetype" with options Cancel or Reassign Wildcard (even though the only pattern listed is "*.pyw" and
none of the other registered filetypes use that wildcard). Clicking "Reassign Wildcard" allowed me to add "*.py"
to the list. WORKED !!!!!
### Python Configuration Information
* System Path
{% highlight python %}
import sys
print sys.path
{% endhighlight %}
* [Environment Variables & System Info](http://www.thomas-cokelaer.info/tutorials/python/module_os.html)
* Interactive Terminal
{% highlight python %}
import time
time.ctime() # => 'Sun Oct 30 09:21:06 2016'
{% endhighlight %}
### Change Python Version (i.e. from 2.7 to 3.5) - Use virtualenv successfully instead
* [PyEnv](https://github.com/yyuu/pyenv)
### Package Versioning
* http://stackoverflow.com/questions/458550/standard-way-to-embed-version-into-python-package
### Python Dependency Management
* [Reference with excellent links](https://www.fullstackpython.com/application-dependencies.html)
* Note: Python comes with primitive package management tool `easy_install` (i.e. `easy_install pip`)
* Python library dependencies are recommended to be installed using `pip` when a virtualenv is created
* **Pip** downloads and installs app dependencies from central PyPi repo (similar to RubyGems or NPM)
* **Virtualenv** instance stores a copy of Python interpreter and dependencies in the global site-packages directory (i.e. /Library/Python/2.7/site-packages/) to isolate an apps dependencies from system-level packages
* **Virtualenvwrapper** Stores all Virtualenv's in a single directory on the file system
* **requirements.txt** specifies an apps pegged (precise versions or Git tags) dependencies
* **setup.py** used to specify dependencies for Reusable Components (that will be imported as dependencies by other projects)
* **Anaconda** http://stackoverflow.com/questions/38217545/the-different-between-pyenv-virtualenv-anaconda-in-python
* **Python Docker Images** (to switch between Python environments)
* List currently installed dependencies with `pip freeze`
* Create a requirements.txt file in the root directory of the app and past the output of `pip freeze`
* Reference Links
* https://www.fullstackpython.com/application-dependencies.html
* http://jonathanchu.is/posts/virtualenv-and-pip-basics/
* Example Virtualenv and Virtualenvwrapper using Yolk for isolated environments using various Python Interpreters (i.e. v2.7 or v3.5)
{% highlight bash %}
$pip install virtualenv
$ which virtualenv
/Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv
{% endhighlight %}
* Cont'd...
* Install latest Virtualenvwrapper https://virtualenvwrapper.readthedocs.io/en/latest/
{% highlight bash %}
pip install virtualenvwrapper
{% endhighlight %}
* Cont'd...
* Update ~/.bash_profile with directory (i.e. ~/.virtualenvs) where we will store all virtualenv's
{% highlight bash %}
$ echo $HOME
/Users/x
$ export WORKON_HOME=$HOME/.virtualenvs
$ source /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenvwrapper.sh
$ source ~/.bash_profile
$ echo $WORKON_HOME
/Users/x/.virtualenvs
{% endhighlight %}
* Cont'd...
* Add the following to Base Profile: Create aliases to save keystrokes http://blog.doughellmann.com/2010/01/virtualenvwrapper-tips-and-tricks.html
{% highlight bash %}
export WORKON_HOME=$HOME/.virtualenvs
alias v='workon'
alias v.deactivate='deactivate'
alias v.mk='mkvirtualenv --no-site-packages -v'
alias v.mk_withsitepackages='mkvirtualenv'
alias v.rm='rmvirtualenv'
alias v.switch='workon'
alias v.add2virtualenv='add2virtualenv'
alias v.cdsitepackages='cdsitepackages'
alias v.cd='cdvirtualenv'
alias v.lssitepackages='lssitepackages'
source /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenvwrapper.sh
{% endhighlight %}
* Cont'd...
* Show current location of Python Interpreter before creating a virtual environment
{% highlight bash %}
$ which python
/Library/Frameworks/Python.framework/Versions/2.7/bin/python
{% endhighlight %}
* Cont'd...
* Create new completely clean virtual environment (notice how command prompt changes at the end) with a specific Python version
{% highlight bash %}
$ v.mk --python=python2.7 python_test_env
New python executable in /Users/x/.virtualenvs/python_test_env/bin/python
Installing setuptools, pip, wheel...done.
virtualenvwrapper.user_scripts creating /Users/x/.virtualenvs/python_test_env/bin/predeactivate
virtualenvwrapper.user_scripts creating /Users/x/.virtualenvs/python_test_env/bin/postdeactivate
virtualenvwrapper.user_scripts creating /Users/x/.virtualenvs/python_test_env/bin/preactivate
virtualenvwrapper.user_scripts creating /Users/x/.virtualenvs/python_test_env/bin/postactivate
virtualenvwrapper.user_scripts creating /Users/x/.virtualenvs/python_test_env/bin/get_env_details
(python_test_env) $
{% endhighlight %}
* Cont'd...
* Show that the new virtual env contains its own copy of a specific Python Interpreter:
{% highlight bash %}
(python_test_env_2.7) $ which python
/Users/Ls/.virtualenvs/python_test_env_2.7/bin/python
{% endhighlight %}
* Cont'd...
* Show where the new python_test_env has been created:
{% highlight bash %}
(python_test_env) $ ls $WORKON_HOME
... python_test_env
{% endhighlight %}
* Cont'd...
* Deactivate using python_test_env and then show globally installed packages
{% highlight bash %}
(python_test_env) $ deactivate
$
$ pip freeze
click==6.6
coverage==4.2
httplib2==0.9.2
pbr==1.10.0
py==1.4.31
pydash==3.4.5
pytest==3.0.3
python-dotenv==0.6.0
pytz==2016.7
six==1.10.0
stevedore==1.18.0
twilio==5.6.0
vboxapi==1.0
virtualenv==15.0.3
virtualenv-clone==0.2.6
virtualenvwrapper==4.7.2
{% endhighlight %}
* Cont'd...
* Restart python_test_env again
{% highlight bash %}
v python_test_env
{% endhighlight %}
* Cont'd...
* Install Python page Yolk, a utility to list packages installed for environment, and then list them, and show locally installed packages
{% highlight bash %}
$ pip install yolk
$ yolk -l
Python - 2.7.12 - active development (/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload)
pip 9.0.0 has no metadata
setuptools 28.7.1 has no metadata
wheel 0.30.0a0 has no metadata
wsgiref - 0.1.2 - active development (/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7)
yolk 0.4.3 has no metadata
$ pip freeze
yolk==0.4.3
{% endhighlight %}
* Cont'd...
* Install requirements i.e. `pip install x y z`
* Cont'd...
* Send a snapshot of all the requirements and current versions into the requirements.txt file
{% highlight bash %}
(python_test_env) $ pip freeze > requirements.txt
(python_test_env) $ cat requirements.txt
yolk==0.4.3
{% endhighlight %}
* Cont'd...
* Install the dependencies when cloned in another environment
`$ pip install -r requirements.txt`
* Cont'd...
* Remove virtual env
{% highlight bash %}
deactivate
rmvirtualenv python_test_env
{% endhighlight %}
* Cont'd...
* Change to a different virtual env `v.switch python_test_env3.5.2`
* Show current Python version
{% highlight bash %}
python -V
{% endhighlight %}
* Cont'd...
* Create a virtual environement with a specific Python Interpreter (i.e. 3.5) version
{% highlight bash %}
v.mk --python=python3 python_test_env_3.5
{% endhighlight %}
* Cont'd...
* Install dependencies from requirements.txt
{% highlight bash %}
(python_test_env_3.5) $ pip install -r requirements.txt
{% endhighlight %}
* Cont'd...
* Try running a program to see if it breaks using the newer version of Python
* Switch to a different virtual env
{% highlight bash %}
(python_test_env_3.5) $ v.switch python_test_env
(python_test_env) $
{% endhighlight %}
### Pipenv
* https://github.com/kennethreitz/pipenv
* Use suite such as:
* pylint & PEP 8 - Style Guide
* mypy & PEP 484 - Type Hints
* pipenv & PEP 508 - Dependency spec
* pytest - Tests
* Install pipenv, initialise in python3 mode, start pipenv shell,
install project dependencies, run tests
```
pip install pipenv
pipenv --three
pipenv shell -c
pipenv install
pytest
```
### Mypy
* Create stubs
* https://github.com/python/mypy/wiki/Creating-Stubs-For-Python-Modules
* i.e. mkdir stubs, then run
`export MYPYPATH=/Users/<your_username>/code/aind-projects/clones/nd889/2_isolation/stubs`
then Copy and paste the timeit.pyi stub posted on the typeshed GitHub repo [here](https://github.com/python/typeshed/commit/9e39eb636867262584eaf71a31b639afba76f279)
into the stubs directory to remove Mypy error `isolation/isolation.py:11: error: No library stub file for standard library module 'timeit'`
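As a sketch of what Mypy then checks, here is a small PEP 484-annotated function (the function and its names are invented for illustration):

```python
from typing import List, Optional

def mean_score(scores: List[float]) -> Optional[float]:
    """Return the mean of scores, or None for an empty list."""
    if not scores:
        return None
    return sum(scores) / len(scores)
```

Running `mypy` over a file like this would flag a call such as `mean_score("abc")` as a type error, while the code itself runs unchanged.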
### Convert Python 2 to Python 3
* Use 2to3 utility on all files http://stackoverflow.com/questions/37891188/convert-python-2-code-to-3-in-pycharm
`2to3 myfile.py -w`
* Correct fixes for exceptions manually
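* The handler change that most often needs manual attention is the old comma syntax; a minimal before/after sketch:

```python
# Python 2 syntax that 2to3 rewrites:  except ValueError, err:
# Python 3 syntax:
try:
    int("not a number")
except ValueError as err:
    message = "conversion failed: %s" % err
print(message)
```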
### Renaming Python executables and Symlinks
* rename pip to pip2 using brace expansion
`cd /usr/local/bin && mv /usr/local/bin/{pip,pip2}`
* create a symlink so that the `pip` command points at `pip3`
`ln -s /usr/local/bin/pip3 /usr/local/bin/pip`
### Jupyter Notebook
If get errors in Jupyter Notebook when running code snippets like:
`ModuleNotFoundError: No module named 'numpy'`
Then note that Jupyter Notebook uses Python 3 by default, and the issue can be resolved by
installing the packages using `pip3 install numpy matplotlib` (pointing to Python 3.6)
instead of just `pip ...` (which pointed to Python 2.7) and then opening Jupiter Notebook and performing Run All.
`pip3 list` shows which packages are installed.
Confirm which installation directory of Python 3.6 that Jupyter Notebook was using by running the following
commands within the code snippets in Jupyter Notebook:
print("{}".format(sys.version))
# show Python 2.7 executable directory
!which pip
# show Python 3.6 executable directory
!which pip3
# show jupyter config paths http://jupyter.readthedocs.io/en/latest/projects/jupyter-directories.html
!jupyter --paths
Which returned the following output:
3.6.0 (v3.6.0:41df79263a11, Dec 22 2016, 17:23:13)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
/Users/<my_username>/miniconda3/envs/aind/bin/pip
/Library/Frameworks/Python.framework/Versions/3.6/bin/pip3
config:
/Users/<my_username>/.jupyter
/Library/Frameworks/Python.framework/Versions/3.6/etc/jupyter
/usr/local/etc/jupyter
/etc/jupyter
data:
/Users/<my_username>/Library/Jupyter
/Library/Frameworks/Python.framework/Versions/3.6/share/jupyter
/usr/local/share/jupyter
/usr/share/jupyter
runtime:
/Users/<my_username>/Library/Jupyter/runtime
Notice how `pip3` is in a subdirectory of `/Library/Frameworks/Python.framework/Versions/3.6/`, whereas `pip` was not
### Pip Autcompletion
* Run the following to setup and enable, then use with `pip i[PRESS_TAB]`
{% highlight python %}
pip completion --bash >> ~/.bashrc
source ~/.bashrc
{% endhighlight %}
### Python Standard Library
* Python Standard Library
* [v2](https://docs.python.org/2/library/)
* [v3](https://docs.python.org/3/library/)
* Abstractions hide details of libraries
* `webbrowser` [link](https://docs.python.org/2/library/webbrowser.html?highlight=webbrowser#module-webbrowser)
* `time` [link](https://docs.python.org/2/library/time.html?highlight=time#module-time)
* [CGI scripts](https://docs.python.org/2/library/internet.html)
### Python 3rd Party Library
* Twilio - Send SMS
* [Twilio GitHub](https://github.com/twilio/twilio-python)
* [Twilio GitHub REST Client code](https://github.com/twilio/twilio-python/blob/master/twilio/rest/__init__.py)
* [SMS enabled countries](https://support.twilio.com/hc/en-us/articles/223183068-Twilio-international-phone-number-availability-and-their-capabilities)
* [Sample SMS code for Twilio](https://www.twilio.com/docs/libraries/python#testing-your-installation)
{% highlight python %}
pip install twilio
{% endhighlight %}
* Cont'd
* [Get Twilio Phone Number](https://www.twilio.com/console/phone-numbers/getting-started)
### Google Style Guide for Python
* [Google Style Guide](https://google.github.io/styleguide/pyguide.html)
### Environment Variables
* [Python Dotenv](https://github.com/theskumar/python-dotenv)
* Alternatively add your credentials to your shell environment
{% highlight bash %}
echo "export TWILIO_ACCOUNT_SID='xyz'" >> ~/.bashrc
echo "export TWILIO_AUTH_TOKEN='xyz'" >> ~/.bashrc
source ~/.bashrc
{% endhighlight %}
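* Either way, the credentials can then be read from the process environment at runtime — a sketch using the standard-library `os` module (the variable names follow the Twilio convention above; the fallback values are placeholders):

```python
import os

# .get() returns a default instead of raising KeyError when the variable is unset
account_sid = os.environ.get("TWILIO_ACCOUNT_SID", "sid-not-set")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN", "token-not-set")
print("Twilio SID configured: %s" % (account_sid != "sid-not-set"))
```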
### Python Packages by Ranking
* [http://pypi-ranking.info/alltime](http://pypi-ranking.info/alltime)
### Python Standard Library Built-In Functions
* [Built-in functions](https://docs.python.org/2/library/functions.html) i.e. `open()`
### Iterable Data Sets
* `itertools` https://docs.python.org/3/library/itertools.html
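* For example, two of its building blocks, `islice` and `chain` (the values are arbitrary):

```python
import itertools

# islice truncates an otherwise infinite counter
first_three = list(itertools.islice(itertools.count(start=10), 3))
print(first_three)   # -> [10, 11, 12]

# chain lazily concatenates any iterables
combined = list(itertools.chain("ab", [1, 2]))
print(combined)      # -> ['a', 'b', 1, 2]
```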
### Importing Files in Separate Directories
* [Modules and Packages](https://docs.python.org/2/tutorial/modules.html)
* Note `from x import y` binds `y` to the module attribute `sys.modules['x'].y`, but the rest of module x is still imported. There is no performance difference compared to just using `import`; it's a more granular code style that makes it clearer what is specifically being imported
* i.e. `from twilio.rest import TwilioRestClient` means in twilio there is a folder called [rest](https://github.com/twilio/twilio-python/tree/master/twilio/rest), and inside that folder is a class called [TwilioRestClient](https://github.com/twilio/twilio-python/blob/master/twilio/rest/client.py) (defined in __init__.py). Use the class with `TwilioRestClient(...)`.
* Alternatively use `from twilio import rest` and use the class with `rest.TwilioRestClient(...)` to call the classes init() function and create an instance
{% highlight python %}
import site
print ("Importing subfolder: %s") % (sys.path[0]+'/helpers')
site.addsitedir(sys.path[0]+'/helpers')
import reusable
{% endhighlight %}
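* The `from x import y` binding described above can be demonstrated with the standard library alone: `from os import path` imports all of `os` but binds only the name `path` locally:

```python
import sys
from os import path   # imports all of os, but binds only the name 'path' here

import os
print('os' in sys.modules)   # -> True: the whole module was loaded
print(path is os.path)       # -> True: 'path' is the same object as os.path
```

* Both prints show `True`, confirming that `from ... import` is about namespace binding, not partial loading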
### Python Path
* Add directory to $PYTHONPATH
{% highlight python %}
export PYTHONPATH=$PYTHONPATH:/home/path1:/home/path2
echo $PYTHONPATH
{% endhighlight %}
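* The same effect is available from inside a script (without touching `$PYTHONPATH`) by appending to `sys.path` before importing — the directories below are placeholders:

```python
import sys

extra_dirs = ["/home/path1", "/home/path2"]   # placeholder directories
for directory in extra_dirs:
    if directory not in sys.path:
        sys.path.append(directory)   # appended entries are searched last

print(sys.path[-2:])
```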
### Python Functional Programming (similar to Lodash)
* [PyDash](https://pydash.readthedocs.io/en/latest/)
{% highlight bash %}
pip install pydash
{% endhighlight %}
* Example:
{% highlight python %}
import re
from random import randint
from pydash import py_

def filename_has_suffix(filename):
    random_filename_suffix = str(randint(0,99))
    randfile_ints_found = re.findall(r'\d+', filename) # array of numbers found in filename
    suffix_matches_found = py_(randfile_ints_found).map(lambda x: x == random_filename_suffix).value()
    return True if any(element for element in suffix_matches_found) else False

filename_has_suffix("sample42")
{% endhighlight %}
### Python Frameworks
* Pyramid
* Flask
* Django
* Web2Py
### Python Production
* AppHosted
* djangozoom
* Djangy
* ep.io
* Gondor.io
* Google App Engine
* Heroku
* Joyent
* Pydra
* stable.io
### Python Package System (similar to RVM/RBEnv)
* [Virtualenv](http://haridas.in/how-to-use-python-virtualenv-and-ruby-version-manager.html)
### Python Versioning
* Show current version being used `python -V`
* Indicate that a file only supports certain Python versions
```
import sys
assert sys.version_info >= (3,5,2)
```
### Python Unit Testing
* UnitTest
* Standard module testing framework (aka pyUnit similar to xnit)
* PyTest
* [py.test](http://docs.pytest.org/en/latest/) lightweight unit test framework without boilerplate of unittest
* Test discovery requires all test filenames prefixed `test_`
* [PyTest Fixtures](http://pythontesting.net/framework/pytest/pytest-fixtures-nuts-bolts/) i.e. before_each, load sample data, etc
{% highlight bash %}
pip install -U pytest
pytest --version
{% endhighlight %}
* Cont'd
* [Assertions](http://pytest.org/2.2.4/assert.html)
* Nose
* Task runner that runs functions in modules whose name starts with 'test'
* Test generators implement data-driven tests
* Runs unittest tests
* Coverage plugin available
* Google App Engine plugin
* Nose2
* [GitHub](https://github.com/nose-devs/nose2)
* [Docs](http://nose2.readthedocs.io/en/latest/)
* DocTest
* Other
* [TwistedTrial](http://twistedmatrix.com/trac/wiki/TwistedTrial) extension of unittest
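* A minimal py.test module following the `test_` discovery convention — plain `assert` statements, no boilerplate class (the `slugify` function is invented for illustration):

```python
# test_slugify.py -- discovered by py.test because of the test_ prefix
def slugify(title):
    """Hypothetical function under test."""
    return title.strip().lower().replace(" ", "-")

def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_whitespace():
    assert slugify("  Padded  ") == "padded"
```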
### Continuous Integration with Travis CI
* Enable Travis CI to use GitHub app - https://travis-ci.org/auth
* Check build status of GitHub app - [https://travis-ci.org/ltfschoen/PythonTest](https://travis-ci.org/ltfschoen/PythonTest)
* Create .travis.yml file - https://docs.travis-ci.com/user/getting-started/
* Customise .travis.yml file - https://docs.travis-ci.com/user/customizing-the-build/
* View other example .travis.yml files - https://docs.travis-ci.com/user/languages/python/#Examples
### [PyPy](http://pypy.org/)
* Alternative to Python
### Python Syntax
* Inputs
* `input` evaluates expression upon entry
* `raw_input` returns the entry as a string without evaluating it
* Strings
* Join `''.join(['/', samples_folder_name, '/'])`
* Booleans (first letter capitalised)
* `True`, `False`
* Print Output
* `print("Before - Index: %s, Filename: %s") % (index, filename)`
* Control Flow
* [If statement](https://docs.python.org/2/tutorial/controlflow.html)
* Array
* Iterate array by index from specific starting index
* Examples http://stackoverflow.com/questions/1514553/how-to-declare-an-array-in-python
{% highlight python %}
data = ['a', 'b', 'c']
for i, val in enumerate(data, 1):  # index values start at 1
    print i, val
{% endhighlight %}
* Class
* Define template/blueprint (class) from which copies (instances/objects) may be made as types of it
* Calling the `__init__()` function to create space in memory for a new instance of the class that can access all class methods
* Class variables are those with values common to all instances
* Pre-existing class variables include __doc__ so you can create documentation for a class within `"""blah"""`
https://discussions.udacity.com/t/exploring-built-in-functions-f/16116/2715
{% highlight python %}
class A(object): # deriving from 'object' declares A as a 'new-style-class'
    def foo(self):
        print "foo"
    @classmethod
    def bar(cls):
        print("bar")
A().foo() # => foo
A().bar() # => bar ('bar' is a classmethod, so it can also be called on an instance)
A.foo() # => TypeError: unbound method foo() must be called with A instance as first argument (got nothing instead)
A.bar() # => bar (classmethods can be called on the class itself)
class B(A):
    def foo(self):
        super(B, self).foo() # calls 'A.foo()'
    @classmethod
    def bar(cls):
        super(B, cls).bar() # calls 'A.bar()'
B().foo() # => foo
B().bar() # => bar
B.foo() # => TypeError: unbound method foo() must be called with B instance as first argument (got nothing instead)
B.bar() # => bar
b = B()
b.foo() # => foo
b.bar() # => bar
{% endhighlight %}
* Hash
* Dictionary using hash tables
{% highlight python %}
D = {'a': 1, 'b': 2}
for k,v in D.items():
    print k,':',v
{% endhighlight %}
* Random Numbers
* `str(randint(0,99))` random integer between 0 and 99 inclusive
* Random Strings
{% highlight python %}
import random
import string
''.join(random.choice(string.ascii_letters) for char in range(char_qty))
{% endhighlight %}
* Modules
* File with Python definitions and syntax
* Filename is <module_name>.py
* `__name__` is available in Module as module name
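* For example, the common script/module guard relies on `__name__` (the module name here is hypothetical):

```python
# tracker.py (hypothetical module name)
def main():
    return "running the tracker"

if __name__ == "__main__":   # "__main__" when run directly, "tracker" when imported
    print(main())
```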
* File Handling
* [os.listdir(path)](https://docs.python.org/2/library/os.html?highlight=listdir#os.listdir)
* Regular Expressions
* Replace after last instance of character being queried in string
{% highlight python %}
test_path = "/Users/x/code/apps/PythonTest"
split_on_char = "/"
split_on_char.join(test_path.split(split_on_char)[:-1]) # => /Users/x/code/apps
{% endhighlight %}
### Python Test Coverage
* [Coverage.py](http://coverage.readthedocs.io/en/latest/)
### Python Developers Slack Forum
* [https://pythondevelopers.herokuapp.com/](https://pythondevelopers.herokuapp.com/)
### Udacity Forum
* [Udacity Forum](https://discussions.udacity.com/latest)
### TODO
* Behave - https://pypi.python.org/pypi/behave
* Tox - https://tox.readthedocs.io/en/latest/
* Jenkins and Python - https://wiki.jenkins-ci.org/display/JENKINS/Python+Projects
* PyAtom - https://github.com/pyatom/pyatom
* Random - http://mathieu.agopian.info/presentations/2015_06_djangocon_europe/?full#test-files