hexsha stringlengths 40 40 | size int64 5 1.04M | ext stringclasses 6 values | lang stringclasses 1 value | max_stars_repo_path stringlengths 3 344 | max_stars_repo_name stringlengths 5 125 | max_stars_repo_head_hexsha stringlengths 40 78 | max_stars_repo_licenses listlengths 1 11 | max_stars_count int64 1 368k ⌀ | max_stars_repo_stars_event_min_datetime stringlengths 24 24 ⌀ | max_stars_repo_stars_event_max_datetime stringlengths 24 24 ⌀ | max_issues_repo_path stringlengths 3 344 | max_issues_repo_name stringlengths 5 125 | max_issues_repo_head_hexsha stringlengths 40 78 | max_issues_repo_licenses listlengths 1 11 | max_issues_count int64 1 116k ⌀ | max_issues_repo_issues_event_min_datetime stringlengths 24 24 ⌀ | max_issues_repo_issues_event_max_datetime stringlengths 24 24 ⌀ | max_forks_repo_path stringlengths 3 344 | max_forks_repo_name stringlengths 5 125 | max_forks_repo_head_hexsha stringlengths 40 78 | max_forks_repo_licenses listlengths 1 11 | max_forks_count int64 1 105k ⌀ | max_forks_repo_forks_event_min_datetime stringlengths 24 24 ⌀ | max_forks_repo_forks_event_max_datetime stringlengths 24 24 ⌀ | content stringlengths 5 1.04M | avg_line_length float64 1.14 851k | max_line_length int64 1 1.03M | alphanum_fraction float64 0 1 | lid stringclasses 191 values | lid_prob float64 0.01 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6f13b46e340551564b951ea143ea333fcf066366 | 6,816 | md | Markdown | docs/framework/configure-apps/file-schema/wcf/netpeertcpbinding.md | chvrl/docs | 73450483ff3ddf005c59cbadc5029a6373d1c32f | [
"CC-BY-4.0",
"MIT"
] | 32 | 2017-11-09T20:29:45.000Z | 2021-11-22T15:54:00.000Z | docs/framework/configure-apps/file-schema/wcf/netpeertcpbinding.md | chvrl/docs | 73450483ff3ddf005c59cbadc5029a6373d1c32f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/configure-apps/file-schema/wcf/netpeertcpbinding.md | chvrl/docs | 73450483ff3ddf005c59cbadc5029a6373d1c32f | [
"CC-BY-4.0",
"MIT"
] | 22 | 2017-11-27T00:38:36.000Z | 2021-03-12T06:51:43.000Z | ---
title: "<netPeerTcpBinding>"
ms.date: "03/30/2017"
helpviewer_keywords:
- "netPeerBinding element"
ms.assetid: 2dd77ada-a176-47c7-a740-900b279f1aad
---
# \<netPeerTcpBinding>
Defines a binding for peer-channel-specific TCP messaging.
[**\<configuration>**](../configuration-element.md)\
[**\<system.serviceModel>**](system-servicemodel.md)\
[**\<bindings>**](bindings.md)\
**\<netPeerTcpBinding>**
## Syntax
```xml
<netPeerTcpBinding>
  <binding name="string"
           closeTimeout="TimeSpan"
           openTimeout="TimeSpan"
           receiveTimeout="TimeSpan"
           sendTimeout="TimeSpan"
           listenIPAddress="String"
           maxBufferPoolSize="Integer"
           maxReceivedMessageSize="Integer"
           port="Integer">
    <security mode="None/Transport/Message/TransportWithMessageCredential">
      <transport credentialType="Certificate/Password" />
    </security>
  </binding>
</netPeerTcpBinding>
```
## Attributes and Elements
 The following sections describe attributes, child elements, and parent elements.
### Attributes
|Attribute|Description|
|---------------|-----------------|
|closeTimeout|A <xref:System.TimeSpan> value that specifies the interval of time provided for a close operation to complete. This value should be greater than or equal to <xref:System.TimeSpan.Zero>. The default is 00:01:00.|
|listenIPAddress|A string that specifies an IP address on which the peer node will listen for TCP messages. The default is `null`.|
|maxBufferPoolSize|An integer that specifies the maximum buffer pool size for this binding. The default is 524,288 bytes (512 * 1024). Many parts of Windows Communication Foundation (WCF) use buffers. Creating and destroying buffers each time they are used is expensive, and garbage collection for buffers is also expensive. With buffer pools, you can take a buffer from the pool, use it, and return it to the pool once you are done. Thus the overhead in creating and destroying buffers is avoided.|
|maxReceivedMessageSize|A positive integer that specifies the maximum message size, in bytes, including headers, that can be received on a channel configured with this binding. The sender of a message exceeding this limit will receive a SOAP fault. The receiver drops the message and creates an entry of the event in the trace log. The default is 65536.|
|name|A string that contains the configuration name of the binding. This value should be unique because it is used as an identification for the binding. Starting with [!INCLUDE[netfx40_short](../../../../../includes/netfx40-short-md.md)], bindings and behaviors are not required to have a name. For more information about default configuration and nameless bindings and behaviors, see [Simplified Configuration](../../../wcf/simplified-configuration.md) and [Simplified Configuration for WCF Services](../../../wcf/samples/simplified-configuration-for-wcf-services.md).|
|openTimeout|A <xref:System.TimeSpan> value that specifies the interval of time provided for an open operation to complete. This value should be greater than or equal to <xref:System.TimeSpan.Zero>. The default is 00:01:00.|
|port|An integer that specifies the network interface port on which this binding will process peer channel TCP messages. This value must be between <xref:System.Net.IPEndPoint.MinPort> and <xref:System.Net.IPEndPoint.MaxPort>. The default is 0.|
|receiveTimeout|A <xref:System.TimeSpan> value that specifies the interval of time provided for a receive operation to complete. This value should be greater than or equal to <xref:System.TimeSpan.Zero>. The default is 00:10:00.|
|sendTimeout|A <xref:System.TimeSpan> value that specifies the interval of time provided for a send operation to complete. This value should be greater than or equal to <xref:System.TimeSpan.Zero>. The default is 00:01:00.|
### Child Elements
|Element|Description|
|-------------|-----------------|
|[\<readerQuotas>](https://docs.microsoft.com/previous-versions/dotnet/netframework-4.0/ms731325(v=vs.100))|Defines the constraints on the complexity of SOAP messages that can be processed by endpoints configured with this binding. This element is of type <xref:System.ServiceModel.Configuration.XmlDictionaryReaderQuotasElement>.|
|[\<resolver>](resolver.md)|Specifies a peer resolver used by this binding to resolve a peer mesh ID to the endpoint IP addresses of nodes within the peer mesh.|
|[\<security>](security-of-netpeerbinding.md)|Defines the security settings for the message. This element is of type <xref:System.ServiceModel.Configuration.PeerSecurityElement>.|
### Parent Elements
|Element|Description|
|-------------|-----------------|
|[\<bindings>](bindings.md)|This element holds a collection of standard and custom bindings.|
## Remarks
This binding provides support for the creation of peer-to-peer or multiparty applications using peer transport over TCP. Each peer node can host multiple peer channels defined with this binding type.
## Example
The following example demonstrates using the NetPeerTcpBinding binding, which provides multiparty communication using a peer channel. For a detailed scenario of using this binding, see [Net Peer TCP](https://docs.microsoft.com/previous-versions/dotnet/netframework-3.5/ms751426(v=vs.90)).
```xml
<configuration>
  <system.serviceModel>
    <bindings>
      <netPeerTcpBinding>
        <binding closeTimeout="00:00:10"
                 openTimeout="00:00:20"
                 receiveTimeout="00:00:30"
                 sendTimeout="00:00:40"
                 maxReceivedMessageSize="1000">
          <security mode="Transport">
            <transport credentialType="Password" />
          </security>
        </binding>
      </netPeerTcpBinding>
    </bindings>
  </system.serviceModel>
</configuration>
```
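 A binding configuration on its own does nothing until an endpoint references it. The following fragment sketches how a service endpoint might reference a named `netPeerTcpBinding` configuration; the service name, contract, mesh address, and configuration name are hypothetical placeholders rather than part of this reference page:

```xml
<system.serviceModel>
  <services>
    <service name="MyNamespace.ChatService">
      <!-- net.p2p addresses identify a peer mesh rather than a single host. -->
      <endpoint address="net.p2p://MyMesh/ChatService"
                binding="netPeerTcpBinding"
                bindingConfiguration="MyPeerTcpBindingConfig"
                contract="MyNamespace.IChat" />
    </service>
  </services>
</system.serviceModel>
```

 The `bindingConfiguration` attribute must match the `name` attribute of a `<binding>` element declared under `<netPeerTcpBinding>`.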
## See also
- <xref:System.ServiceModel.NetPeerTcpBinding>
- <xref:System.ServiceModel.Configuration.NetPeerTcpBindingElement>
- [Bindings](../../../wcf/bindings.md)
- [Configuring System-Provided Bindings](../../../wcf/feature-details/configuring-system-provided-bindings.md)
- [Using Bindings to Configure Services and Clients](../../../wcf/using-bindings-to-configure-services-and-clients.md)
- [\<binding>](bindings.md)
- [Net Peer TCP](https://docs.microsoft.com/previous-versions/dotnet/netframework-3.5/ms751426(v=vs.90))
- [Peer-to-Peer Networking](../../../wcf/feature-details/peer-to-peer-networking.md)
| 63.111111 | 572 | 0.718457 | eng_Latn | 0.948873 |
6f1414f333075d6f4459612f022131738e67db4d | 1,712 | md | Markdown | README.md | Kirpal/cascade | 03d5a608c21ff33e06ff543247fb431989b44554 | [
"MIT"
] | 5 | 2017-05-23T21:12:05.000Z | 2019-06-30T11:17:32.000Z | README.md | Kirpal/cascade | 03d5a608c21ff33e06ff543247fb431989b44554 | [
"MIT"
] | null | null | null | README.md | Kirpal/cascade | 03d5a608c21ff33e06ff543247fb431989b44554 | [
"MIT"
] | 1 | 2021-04-17T09:16:11.000Z | 2021-04-17T09:16:11.000Z | <img src="https://kirpal.github.io/filmreel/images/icon.svg" height="64" alt="Icon">
# Film Reel
[](https://travis-ci.org/Kirpal/filmreel)    [](https://github.com/kirpal/filmreel/blob/master/LICENSE)
Film Reel is a modern movie-watching app, designed with usability in mind. It lets you stream and download thousands of movies for free. Using BitTorrent peer-to-peer technology, the movies are delivered at high speed and high quality. The app is cross-platform and open-source.
Visit [https://kirpal.github.io/filmreel/](https://kirpal.github.io/filmreel/) to learn more.
## Installation
### macOS
Download the latest release from the [downloads section](https://kirpal.github.io/filmreel/#downloads).
Film Reel automatically updates when a new version is released.
### Windows
Download the latest installer from the [downloads section](https://kirpal.github.io/filmreel/#downloads).
The installer works for both 32-bit and 64-bit systems.
Film Reel automatically updates when a new version is released.
### Linux
1. Download the AppImage from the [downloads section](https://kirpal.github.io/filmreel/#downloads).
2. Make the downloaded package executable and run it.
3. Use Film Reel!
Film Reel automatically updates when a new version is released.
## License
[MIT](https://github.com/kirpal/filmreel/blob/master/LICENSE)
| 45.052632 | 480 | 0.767523 | eng_Latn | 0.699676 |
6f152b3ff437f0dd669a9d3799f08db68b497db2 | 51 | md | Markdown | README.md | klemchowda/klemchowda.github.io | 0778304b8dcaa95d42056ef0af2006921075f550 | [
"MIT"
] | null | null | null | README.md | klemchowda/klemchowda.github.io | 0778304b8dcaa95d42056ef0af2006921075f550 | [
"MIT"
] | null | null | null | README.md | klemchowda/klemchowda.github.io | 0778304b8dcaa95d42056ef0af2006921075f550 | [
"MIT"
] | null | null | null | # klemchowda.github.io
First adventure into WebDev
| 17 | 27 | 0.823529 | eng_Latn | 0.211126 |
6f166fe289b8beb46e064707b91fdbe4fe375963 | 123 | md | Markdown | docs/workshops/git/readme.md | cyrildewit/softwarematerial | b1933b70e96830222c12016667246f1c68516d91 | [
"CC-BY-4.0"
] | null | null | null | docs/workshops/git/readme.md | cyrildewit/softwarematerial | b1933b70e96830222c12016667246f1c68516d91 | [
"CC-BY-4.0"
] | null | null | null | docs/workshops/git/readme.md | cyrildewit/softwarematerial | b1933b70e96830222c12016667246f1c68516d91 | [
"CC-BY-4.0"
] | null | null | null | # Git
## Sources
+ [gitlab university user_training](https://docs.gitlab.com/ee/university/training/user_training.html)
| 17.571429 | 102 | 0.764228 | kor_Hang | 0.448063 |
6f16a71ddfe86100aa67d4ab8733621cdcc8aab4 | 8,145 | md | Markdown | content/en/project/Olfactory Reception/index.md | guo-cheng/WangLab | 44857b88f7c008f52b6d39732aa34420b6c7293e | [
"MIT"
] | null | null | null | content/en/project/Olfactory Reception/index.md | guo-cheng/WangLab | 44857b88f7c008f52b6d39732aa34420b6c7293e | [
"MIT"
] | null | null | null | content/en/project/Olfactory Reception/index.md | guo-cheng/WangLab | 44857b88f7c008f52b6d39732aa34420b6c7293e | [
"MIT"
] | null | null | null | ---
title: "A Moth's Genitalia Can Smell???"
summary: The oriental tobacco budworm (Helicoverpa assulta) feeds mainly on tobacco, hot pepper, and a few other solanaceous plants and is a major agricultural pest in China. This post covers how an odorant receptor expressed on the female ovipositor helps females find host plants for oviposition.
tags:
- Olfactory Reception
date: "2020-11-18"
# Optional external URL for project (replaces project detail page).
# external_link: http://example.org
image:
caption:
focal_point: Smart
---
# A Moth's Genitalia Can Smell???
Hi everyone, it has been a while since my last update. A few days ago our department held its graduate student award defenses, and I could feel a real gap between my work and everyone else's, in both workload and thinking. I need to stay anchored to the biological question rather than to the data themselves; finding a good scientific question seems to be the key that unlocks everything else.
OK, on to the main topic. Today's title has a bit of that UC-browser clickbait flavor. The paper I am sharing is the article that Ruiting Li, a senior student in our group, just published in eLife^[1]^. Congratulations to her, first of all. I will introduce the paper in as much detail as I can; it is fairly technical, so feel free to skim as needed.
<img src="http://cdn.liguocheng.top/blog/20200524/is7AUlA6giWX.gif" alt="mark" style="zoom:50%;" />
<img src="http://cdn.liguocheng.top/blog/20200522/DChKtAygai8i.png?imageslim" alt="mark" style="zoom:80%;" />
The oriental tobacco budworm, *Helicoverpa assulta*, is a moth of the genus *Helicoverpa* (Lepidoptera: Noctuidae). It feeds mainly on tobacco, hot pepper, and a few other solanaceous plants and is a major agricultural pest in China. This paper studies the expression of odorant receptors on the ovipositor of female *H. assulta* and how they help females locate host plants for oviposition.
### Abstract:
- Transcriptome sequencing of the *H. assulta* pheromone gland-ovipositor (PG-OV) showed that *HassOR31* is highly expressed there.

- In *Xenopus* oocytes assayed by two-electrode voltage clamp, co-injection of *HassOR31* and *HassORco* produced responses to 12 of the tested compounds, with the strongest response to (Z)-3-hexenyl butyrate.

- The active compounds also elicited electrophysiological responses in single sensillum recordings (SSR) from the ovipositor.

- In situ hybridization showed that *HassOR31* and *HassORco* are co-localized in some ovipositor sensilla.

- In oviposition assays, females still preferred (Z)-3-hexenyl butyrate even without their antennae, indicating that *HassOR31* helps the moth pinpoint oviposition sites.
昆虫的嗅觉系统可以帮助其进行取食,产卵,交配,寻找寄主植物等。触角是昆虫最主要的嗅觉感器,下唇须和喙上也有嗅觉感器,近几年发现昆虫的产卵器上也有嗅觉感器的分布。嗅觉感器是一种多孔型的毛状结构,嗅觉受体神经元(OSN)的树突延伸到感器的端部,而嗅觉受体(OR)就是一种位于树突膜上的膜蛋白。OR是一种非典型的GPCR蛋白(C端和N端位置),OR要发挥功能需要和ORco结合成异源二聚体,从而形成一个离子通道蛋白,来感受气味化合物。下面是关于昆虫嗅觉感器内部的一个示意图^[2]^。
<img src="http://cdn.liguocheng.top/blog/20200522/FTHkVAG8mo5z.png?imageslim" alt="olfactory" style="zoom:80%;" />
### Results
#### Transcriptome sequencing and identification of chemosensory receptors in pheromone gland - ovipositor of Helicoverpa assulta
通过转录组测序,得到22 ORs, 6 GRs, 13 IRs, and 9 iGluRs。其中HassOR31 and HassiGluR7表达量都很高,而HassORco表达量偏低。
<img src="http://cdn.liguocheng.top/blog/20200525/nPqvUqfGawBX.png?imageslim" alt="quantification" style="zoom:50%;" />
**其中还计算了棉铃虫PG-OV中OR表达量,*HarmOR46*的表达量很低,这与我的分析结果存在分歧。**
<img src="http://cdn.liguocheng.top/blog/20200525/hl3BjwIUObcr.png?imageslim" alt="mark" style="zoom:80%;" />
#### Tissue expression pattern of HassOR31
通过对雌雄触角,雌性头(去触角),胸,腹,前足,翅膀以及性信息素腺体-产卵器进行qRT-PCR验证,结果可以验证转录组数据的可靠性。
<img src="http://cdn.liguocheng.top/blog/20200522/qjUuijfwz0si.png?imageslim" alt="Gene expression of HassOR31 in different tissues of H. assulta" style="zoom:80%;" />
#### Localization of HassOR31 and HassORco in the ovipositor
通过原位杂交实验(双标),红色的地高辛标记HassOR31,绿色的生物素标记HassORco,可以看到OR31与ORco确实存在共定位现象,但是大多数情况下,是分开表达的。
> ==**Z-stacking**== (also known as focus stacking) is a digital image processing method which combines multiple images taken at different focal distances to provide a composite image with a greater depth of field (i.e. the thickness of the plane of focus) than any of the individual source images.**(焦点合成技术->提高景深)**
<iframe height=498 width=510 src="http://cdn.liguocheng.top/elife_poa_e53706_Video_1.mp4">
<img src="http://cdn.liguocheng.top/blog/20200522/evi1nT7PRFyL.png?imageslim" alt="Localization of HassOR31 and HassORco expression in cells of H. assulta ovipositors" style="zoom:80%;" />
#### Functional analysis of HassOR31 by Xenopus laevis oocytes
通过爪蟾卵母细胞结合双电极电压钳实验,筛选51种铃夜蛾对其有行为或电生理反应的化合物谱^[3]^,主要包括绿叶气味化合物(GLV),萜类,脂肪族和芳香族化合物。最后有12种气味化合物出现反应,其中顺-3-己烯丁酸酯,顺-3-己烯乙酸酯,月桂烯,柠檬醛反应较强。单独的HassOR31或者HassiGluR7则没有反应。
<img src="http://cdn.liguocheng.top/blog/20200522/AzF1O1Pglc6p.png?imageslim" alt="Functional analyses of chemosensory receptors in Xenopus oocytes" style="zoom:80%;" />
测试化合物:
<img src="http://cdn.liguocheng.top/blog/20200524/hkfNOpNTeWWT.png?imageslim" alt="odorants" style="zoom:80%;" />
#### Scanning electron micrographs (SEM) of the ovipositor of H. assulta and its associated sensilla
扫描电镜的结果显示,产卵器上的感觉毛大致可以分为四类(A和B为整体图):
- C和D:感器长,尖端,表面光滑无毛,可能属于机械感受毛
- E和F:感器短,基部内陷到凹槽中,表面无孔,可能行使属于机械功能
- G和H:感器长度与E,F差不多,表面和顶部有孔,外形类似触角上的毛型感器,可能司味觉和嗅觉功能,主要分布在产卵器中部。
- I和J:表面和顶部有孔,顶部孔大,外形类似触角上的锥型感器,可能司味觉和嗅觉功能。
<img src="http://cdn.liguocheng.top/blog/20200522/2rGdfuovxrKI.png?imageslim" alt="mark" style="zoom:80%;" />
#### Single sensillum recordings (SSR) of putative chemosensory sensilla
通过SSR来验证这些感器是否起嗅觉作用,选取17种HassOR31有反应的以及烟青虫性信息素化合物进行测试,可以看到大多数化合物都与石蜡油,也就是反应基线一致,而顺-3-己烯丁酸酯则有明显反应,且呈剂量梯度。
<iframe height=498 width=510 src="http://cdn.liguocheng.top/elife_poa_e53706_Video_2.mp4">
<img src="http://cdn.liguocheng.top/blog/20200522/PGTsO623ktno.png?imageslim" alt="Single sensillum recordings (SSR) of putative chemosensory sensilla in an H. assulta ovipositor." style="zoom:80%;" />
#### Oviposition preference of H. assulta female
总共有两个产卵实验,一个是图A,将辣椒对称放在一张金属丝网的上面,分别将完整的和切掉触角的交配后雌蛾放入笼子中,看雌蛾的产卵情况,图C和D展现的是完整的和切掉触角雌蛾的产卵分布情况。另一个实验是图B,将顺-3-己烯丁酸酯溶于石蜡油中并放入一个橡胶头中,再以石蜡油为对照放入另一个橡胶头中,将这两种橡胶头分别放在假花的叶子上。
<img src="http://cdn.liguocheng.top/blog/20200522/XML78f9oJp5b.png?imageslim" alt="The set-up of oviposition choice tests and the spread of eggs laid by mated females of H. assulta" style="zoom:80%;" />
器官完整的,交配后的雌蛾对寄主植物辣椒(混合植物挥发物)的产卵偏好性高,去掉触角后的雌蛾仍然对辣椒有产卵偏好性。但完整的雌蛾的产卵偏好指数显著高于去掉触角的雌蛾。
将辣椒换成顺-3-己烯丁酸酯,则雌蛾对顺-3-己烯丁酸酯有产卵偏好性,但完整和去掉触角后的雌蛾在产卵偏好指数上没有了差异。感受混合植物挥发物主要通过触角,而感受单个植物挥发物(顺-3-己烯丁酸酯)则需要产卵器。
<img src="http://cdn.liguocheng.top/blog/20200522/UDvHLt8FrUzQ.png?imageslim" alt="Oviposition preference of the antennae amputated and intact mated females of H. assulta." style="zoom:80%;" />
#### Discussion
##### The function of HassOR31 expressed in the ovipositor
HassOR31的最佳配体是顺-3-己烯丁酸酯,而HarmOR31最佳配体是月桂烯(myrcene)^[3]^。2012年孙亚兰师姐发表的文章^[4]^介绍了烟草顶空收集的化合物通过GC-EAD和GC-MS鉴定到了(E)-β-ocimene, octanal, (Z)-3-hexenyl acetate, (Z)-3-hexen-1-ol, nonanal, (Z)-3-hexenyl-2-methyl butyrate, decanal, linalool, and (E)-β-caryophyllene。
**Hypothesis:**
> A gravid female moth takes ==two steps to find an oviposition site==: firstly, she smells the plant volatiles mainly by using antennae to search for a host plant, and secondly when she comes near to or land on the host plant, she integrates the information from ==olfactory sensilla== as well as ==mechanical== and ==contact chemosensory sensilla== on the ovipositor to determine the precise oviposition sites on the host plants.
<img src="http://cdn.liguocheng.top/blog/20200524/uV02KUiH8YHK.png?imageslim" alt="Di Chang" style="zoom:80%;" />
##### Large difference in expression of HassOR31 and HassORco in the ovipositor
在SSR反应中,对顺-3-己烯丁酸酯反应的感器较少,这可能反应HassORco的表达量在产卵器中较低,但却无法解释HassOR31表达量很高的原因,其可能发挥除嗅觉外的其他功能。除了OR,GR和IR也在产卵器中有表达。
##### The features of chemosensory sensilla on the ovipositor
产卵器上的感器存在类似触角毛形感器和锥形感器,但顶端的孔较大,符合味觉感器的特征,可能同时起到嗅觉和味觉的作用。但这些感器的SSR反应却比较弱。
#### Conclusion
本实验通过转录组,qRT-PCR,原位杂交,爪蟾卵母细胞功能鉴定,扫描电镜,单感器记录以及产卵实验,从组学数据,分子实验,形态学,电生理学,行为学等方面揭示烟青虫产卵器上的HassOR31参与到嗅觉感受中,并影响其产卵行为。
未来的研究可以通过破坏产卵器上的相关感器看是否影响昆虫行为以及通过CRISPR-Cas9敲除掉HassOR31来验证其功能。HassOR31/HassORco所表达的嗅觉受体神经元投射到terminal abdominal ganglion,HassOR31在产卵行为上也需要进一步研究。
#### Reference
[1] Li, R.-T., Huang, L.-Q., Dong, J.-F., Wang, C.-Z., 2020. A moth odorant receptor highly expressed in the ovipositor is involved in detecting host-plant volatiles [WWW Document]. eLife. https://doi.org/10.7554/eLife.53706
[2] Pask, G.M., Ray, A., 2016. Chapter 6 - Insect Olfactory Receptors: An Interface between Chemistry and Biology, in: Zufall, F., Munger, S.D. (Eds.), Chemosensory Transduction. Academic Press, pp. 101–122. https://doi.org/10.1016/B978-0-12-801694-7.00006-8
[3] Di, C., Ning, C., Huang, L.-Q., Wang, C.-Z., 2017. Design of larval chemical attractants based on odorant response spectra of odorant receptors in the cotton bollworm. Insect Biochemistry and Molecular Biology 84, 48–62. https://doi.org/10/gbgwd3
[4] Sun, J.-G., Huang, L.-Q., Wang, C.-Z., 2012. Electrophysiological and behavioral responses of Helicoverpa assulta (Lepidoptera: Noctuidae) to tobacco volatiles. Arthropod-Plant Interactions. https://doi.org/10/ggwzp2
<img src="http://cdn.liguocheng.top/blog/20200524/L7e4Ly8tFk5C.gif" alt="mark" style="zoom:50%;" />
<img src="http://cdn.liguocheng.top/blog/20200524/zkBPa1Dhtgv9.gif" alt="mark" style="zoom:50%;" />
<img src="http://cdn.liguocheng.top/blog/20200524/lKl04c0FmnKM.gif" alt="public number" style="zoom:80%;" /> | 49.969325 | 431 | 0.785144 | yue_Hant | 0.352966 |
6f16acea0656f0cb48338a6eb3f8a59a5a3d390b | 1,649 | md | Markdown | rust/content.md | aguibert/docker-docs | c81e6ad5a1f517bd7d277584f9c5c327b26a80bd | [
"MIT"
] | 2 | 2019-07-12T12:07:39.000Z | 2022-02-17T09:21:17.000Z | rust/content.md | thresheek/docs | 027ddb7deb7c0f4f2634cf3f4c9d444c09ef49d0 | [
"MIT"
] | 1 | 2020-01-16T01:06:11.000Z | 2020-01-16T01:06:11.000Z | rust/content.md | thresheek/docs | 027ddb7deb7c0f4f2634cf3f4c9d444c09ef49d0 | [
"MIT"
] | 2 | 2020-12-23T18:31:39.000Z | 2022-02-23T03:04:25.000Z | # What is Rust?
Rust is a systems programming language sponsored by Mozilla Research. It is designed to be a "safe, concurrent, practical language", supporting functional and imperative-procedural paradigms. Rust is syntactically similar to C++, but is designed for better memory safety while maintaining performance.
> [wikipedia.org/wiki/Rust_(programming_language)](https://en.wikipedia.org/wiki/Rust_%28programming_language%29)
%%LOGO%%
# How to use this image
## Start a Rust instance running your app
The most straightforward way to use this image is to use a Rust container as both the build and runtime environment. In your `Dockerfile`, writing something along the lines of the following will compile and run your project:
```dockerfile
FROM %%IMAGE%%:1.31
WORKDIR /usr/src/myapp
COPY . .
RUN cargo install --path .
CMD ["myapp"]
```
Then, build and run the Docker image:
```console
$ docker build -t my-rust-app .
$ docker run -it --rm --name my-running-app my-rust-app
```
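
Because the image ships the full Rust toolchain, images built this way are fairly large. If image size matters, a multi-stage build can copy only the compiled binary into a smaller base image. This is a sketch rather than an official recommendation; it assumes your binary is named `myapp` and depends only on libraries present in `debian:stretch-slim`, the base of the `rust` image:

```dockerfile
# Build stage: compile with the full Rust toolchain.
FROM rust:1.31 as builder
WORKDIR /usr/src/myapp
COPY . .
RUN cargo install --path .

# Runtime stage: keep only the compiled binary.
FROM debian:stretch-slim
COPY --from=builder /usr/local/cargo/bin/myapp /usr/local/bin/myapp
CMD ["myapp"]
```

The resulting image contains none of the build tooling, so it is typically much smaller than one built from the single-stage Dockerfile above.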
## Compile your app inside the Docker container
There may be occasions where it is not appropriate to run your app inside a container. To compile, but not run your app inside the Docker instance, you can write something like:
```console
$ docker run --rm --user "$(id -u)":"$(id -g)" -v "$PWD":/usr/src/myapp -w /usr/src/myapp %%IMAGE%%:1.31 cargo build --release
```
This will add your current directory, as a volume, to the container, set the working directory to the volume, and run the command `cargo build --release`. This tells Cargo, Rust's build system, to compile the crate in `myapp` and output the executable to `target/release/myapp`.
| 39.261905 | 301 | 0.745907 | eng_Latn | 0.997193 |
6f17ca52c6706bb148ab010fd928c192c68af9c3 | 4,709 | md | Markdown | wdk-ddi-src/content/compstui/ns-compstui-_extchkbox.md | manison/windows-driver-docs-ddi | 726ccb62bc04e7e2afe0785186fb3e8422dcfe21 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/compstui/ns-compstui-_extchkbox.md | manison/windows-driver-docs-ddi | 726ccb62bc04e7e2afe0785186fb3e8422dcfe21 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | wdk-ddi-src/content/compstui/ns-compstui-_extchkbox.md | manison/windows-driver-docs-ddi | 726ccb62bc04e7e2afe0785186fb3e8422dcfe21 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
UID: NS:compstui._EXTCHKBOX
title: "_EXTCHKBOX"
description: The EXTCHKBOX structure is used by CPSUI applications (including printer interface DLLs) for specifying an extended check box, which can be added to a property sheet page option.
old-location: print\extchkbox.htm
tech.root: print
ms.assetid: b3b82474-d4e5-467c-93dc-30edac189c66
ms.date: 04/20/2018
ms.keywords: "*PEXTCHKBOX, EXTCHKBOX, EXTCHKBOX structure [Print Devices], PEXTCHKBOX, PEXTCHKBOX structure pointer [Print Devices], _EXTCHKBOX, compstui/EXTCHKBOX, compstui/PEXTCHKBOX, cpsuifnc_3d620423-7173-4a78-b087-f8f269c5715d.xml, print.extchkbox"
ms.topic: struct
req.header: compstui.h
req.include-header: Compstui.h
req.target-type: Windows
req.target-min-winverclnt:
req.target-min-winversvr:
req.kmdf-ver:
req.umdf-ver:
req.ddi-compliance:
req.unicode-ansi:
req.idl:
req.max-support:
req.namespace:
req.assembly:
req.type-library:
req.lib:
req.dll:
req.irql:
topic_type:
- APIRef
- kbSyntax
api_type:
- HeaderDef
api_location:
- compstui.h
api_name:
- EXTCHKBOX
product:
- Windows
targetos: Windows
req.typenames: EXTCHKBOX, *PEXTCHKBOX
---
# _EXTCHKBOX structure
## -description
The EXTCHKBOX structure is used by CPSUI applications (including printer interface DLLs) for specifying an extended check box, which can be added to a property sheet page option.
## -struct-fields
### -field cbSize
Size, in bytes, of the EXTCHKBOX structure.
### -field Flags
Bit flags, which can be one or more of the following:
#### ECBF_CHECKNAME_AT_FRONT
If set, CPSUI displays strings in the order "pCheckedName pSeparator <i>SelectName</i>", where <i>SelectName</i> is the string associated with the option's selected value.
If not set, CPSUI displays strings in the order "<i>SelectName</i> pSeparator pCheckedName".
#### ECBF_CHECKNAME_ONLY_ENABLED
If set, CPSUI displays the pCheckedName string only if the option is checked and enabled (that is, OPTIF_ECB_CHECKED is set and OPTIF_DISABLED is clear in the OPTITEM structure).
If not set, CPSUI always displays the pCheckedName string if the option is checked (that is, OPTIF_ECB_CHECKED is set in the OPTITEM structure), even if the option is disabled.
#### ECBF_ICONID_AS_HICON
If set, the <b>IconID</b> member contains an icon handle.
If not set, the <b>IconID</b> member contains an icon resource identifier.
#### ECBF_OVERLAY_ECBICON_IF_CHECKED
If set, and if the check box is checked (that is, OPTIF_ECB_CHECKED is set in the OPTITEM structure), CPSUI overlays the icon identified by the <b>IconID</b> member onto the icon associated with the option item.
#### ECBF_OVERLAY_NO_ICON
If set, CPSUI overlays its IDI_CPSUI_NO icon onto the icon identified by the <b>IconID</b> member.
#### ECBF_OVERLAY_STOP_ICON
If set, CPSUI overlays the IDI_CPSUI_STOP icon onto the icon identified by the <b>IconID</b> member.
#### ECBF_OVERLAY_WARNING_ICON
If set, CPSUI overlays its IDI_CPSUI_WARNING icon onto the icon identified by the <b>IconID</b> member.
### -field pTitle
String identifier, representing the check box title. This can be a 32-bit pointer to a NULL-terminated string, or it can be a 16-bit string resource identifier with HIWORD set to zero.
### -field pSeparator
String identifier, representing a separator character to be displayed between the checked name string and the selected option string. This can be a 32-bit pointer to a NULL-terminated string, or it can be a 16-bit string resource identifier with HIWORD set to zero.
### -field pCheckedName
String identifier, representing the text to be displayed when the check box is checked. This can be a 32-bit pointer to a NULL-terminated string, or it can be a 16-bit string resource identifier with HIWORD set to zero.
### -field IconID
One of the following icon identifiers:
<ul>
<li>
An icon resource identifier. This can be application-defined, or it can be one of the CPSUI-supplied, IDI_CPSUI-prefixed icon resource identifiers.
</li>
<li>
An icon handle. If a handle is specified, ECBF_ICONID_AS_HICON must be set in the <b>Flags</b> member.
</li>
</ul>
If this value is zero, an icon is not displayed.
### -field wReserved
Reserved, must be initialized to zero.
### -field dwReserved
Reserved, must be initialized to zero.
## -remarks
An extended check box is a CPSUI-defined type of check box that can be associated with an <a href="https://msdn.microsoft.com/library/windows/hardware/ff559656">OPTITEM</a> structure. An OPTITEM structure can have one extended check box or one extended push button associated with it.
## -see-also
<a href="https://msdn.microsoft.com/library/windows/hardware/ff548795">EXTPUSH</a>
| 24.784211 | 284 | 0.764281 | eng_Latn | 0.944428 |
6f1804b8915e054eaf2a2182b7a107d3ccb0884c | 1,903 | md | Markdown | install/amazon_rds/03_setup_iam_policy.md | joehorsnell/pganalyze-docs | 0e305841188f2967899aff5868afb5db9f376e96 | [
"CC-BY-4.0"
] | null | null | null | install/amazon_rds/03_setup_iam_policy.md | joehorsnell/pganalyze-docs | 0e305841188f2967899aff5868afb5db9f376e96 | [
"CC-BY-4.0"
] | null | null | null | install/amazon_rds/03_setup_iam_policy.md | joehorsnell/pganalyze-docs | 0e305841188f2967899aff5868afb5db9f376e96 | [
"CC-BY-4.0"
] | null | null | null | ---
title: 'Step 3: Setup IAM Policy'
backlink_href: /docs/install/amazon_rds
backlink_title: 'Installation Guide (Amazon RDS)'
---
Almost done! 🎉
We now need to setup an IAM policy that the collector can use to access RDS information.
To start, open up **Security Credentials** in your AWS Console.
Continue by clicking on Policies on the left side, and then clicking "Create Policy":

Select **Copy an AWS Managed Policy**, search for `RDS` and select the `AmazonRDSReadOnlyAccess` policy:

---
Before saving, modify the policy so it reads as follows:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"rds:Describe*",
"rds:ListTagsForResource",
"ec2:DescribeAccountAttributes",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeSecurityGroups",
"ec2:DescribeVpcs"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"cloudwatch:GetMetricStatistics",
"logs:DescribeLogStreams",
"logs:GetLogEvents"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [ "rds:DownloadDBLogFilePortion" ],
"Effect": "Allow",
"Resource": "*"
}
]
}
```
Note the added section at the end that allows downloading of the log files.
We recommend naming the policy "pganalyze" or similar, so you can easily identify it again. It should look like this once created:

---
In the last step, we'll install and run the pganalyze collector on one of your EC2 instances:
[Proceed to Step 4: Install the Collector on an EC2 Instance](/docs/install/amazon_rds/04_install_collector)
| 26.802817 | 130 | 0.602207 | eng_Latn | 0.89916 |
6f18372131bc3fea782f0488e66ba913814d27f2 | 2,148 | md | Markdown | _posts/2021-06-25-【對話劉夢熊,第九集】下屆香港特首應由「目前形勢和我們的任務」決定對內,重拾民意民心支持認同;對外,.md | NodeBE4/teahouse | 4d31c4088cc871c98a9760cefd2d77e5e0dd7466 | [
"MIT"
] | 1 | 2020-09-16T02:05:27.000Z | 2020-09-16T02:05:27.000Z | _posts/2021-06-25-【對話劉夢熊,第九集】下屆香港特首應由「目前形勢和我們的任務」決定對內,重拾民意民心支持認同;對外,.md | NodeBE4/teahouse | 4d31c4088cc871c98a9760cefd2d77e5e0dd7466 | [
"MIT"
] | null | null | null | _posts/2021-06-25-【對話劉夢熊,第九集】下屆香港特首應由「目前形勢和我們的任務」決定對內,重拾民意民心支持認同;對外,.md | NodeBE4/teahouse | 4d31c4088cc871c98a9760cefd2d77e5e0dd7466 | [
"MIT"
] | null | null | null | ---
layout: post
title: "【對話劉夢熊,第九集】下屆香港特首應由「目前形勢和我們的任務」決定對內,重拾民意民心支持認同;對外,促使國際社會恢復對香港信心, 重建或鞏固國際金融中心地位時勢造英雄,而不是政治酬庸。"
date: 2021-06-25T13:30:09.000Z
author: 老楊到處說
from: https://www.youtube.com/watch?v=snRaIzW35yM
tags: [ 老楊到處說 ]
categories: [ 老楊到處說 ]
---
<!--1624627809000-->
[【對話劉夢熊,第九集】下屆香港特首應由「目前形勢和我們的任務」決定對內,重拾民意民心支持認同;對外,促使國際社會恢復對香港信心, 重建或鞏固國際金融中心地位時勢造英雄,而不是政治酬庸。](https://www.youtube.com/watch?v=snRaIzW35yM)
------
<div>
訂閱楊錦麟頻道,每天免費收看最新影片:https://bit.ly/yangjinlin歡迎各位透過PayPal支持我們的創作:http://bit.ly/support-laoyang最近香港政壇的政治生態面目全非,不僅泛民主派代表人物從老將到新鋭幾乎被「一鍋端」,氣息奄奄;即使傳统建制派一來遭西環保持距離,二來被北方「新護法」譏為「忠诚的廢物」,索性暫且掩旗息鼓。對比政黨「冬眠」,倒是被市民冠以「689」的前特首梁振英和「777」的現任特首林鄭月娥的明爭暗鬥有點瞄頭,尤其两人的馬前卒煞有介事的拳來脚往,令一些人覺得下届特首要麼就是林鄭月娥連任,要麽就是梁振英回朝,二者必居其一。但我不同意這種看法! 其實,香港下任行政長官由誰出任,應該「時勢造英雄」,以香港「目前形勢和我们的任務」來取捨。實話實説,香港一國两制到了生死存亡之秋。就下任特首而言,對内,最重要是如何彌合自梁振英任内2014年「佔中」到林鄭任内2019年「修例」造成的嚴重社會撕裂、黃藍對立;當務之急是重建和諧香港,再造人心回歸工程尤其將失去的年青一代的心爭取回來,令一國兩制重返基本法轨道,不變形、不走樣,行穩致遠;對外,則要開展大規模國際公關活動,官方渠道和公共外交雙管齐下,做好深入扎實游說工作,減輕、消除西方發達國家對香港「一國兩制巳經死亡」的不了解、誤解,重建國際金融持份者對香港作為國際金融中心的信心,讓外界認識香港與內地普通城市的「不同」,相信香港的自由、法治核心價值依然存在,相信香港「保持原有的资本主义制度和生活方式」,從而令香港的自由港、單獨關稅區原有的特殊待遇保持不變,依舊發揮中國與外部世界的通道和緩衝地帶的特殊作用。 顯而易見,對香港今時今日陷入內外交困局面負有不容推卸責任的前任、現任两位特首梁振英、林鄭月娥根本不可能帶領香港脫離當前的急流險灘 !理由如下: 首先講對内,從彌合社會撕裂、重建和諧香港来說,梁振英和林鄭月娥两任特首都深深卷入了香港越来越尖锐化、覆雜化的政治争拗旋涡,根本不可能具備化解社會矛盾的超脱立塲、超然地位,有些社會矛盾恰好是在這两任特首手上激化起來的。因而無論與建制派還是泛民主派,与工商界还是劳工界抑或基層市民(包括新界原居民)方方面面都留有或深或浅的「牙齿痕」,不利于在各党派、各階層之間做溝通、协調、團结工作,求同存异,凝聚共識。因此,指望由缺乏凝聚力的梁振英或林鄭带领香港休養生息,「聚精会神搞建设,一心一意謀發展」完全是缘木求魚。 再講對外,不可否认,港區国安法和「完善香港选举制度的决定」先後出台以来,種種原因令香港外部环境空前惡化,国际金融中心地位受到衝擊。如何在國際社会重塑香港形象成了重大課题。指望在外形象欠佳的梁振英和本身受到國際制裁的林鄭月娥根本不可能被外界接受,缺乏說服力,甚至被外間輿論視為損害香港人權、民主、法治的重要责任人,两人都欠缺良好國際形象和廣泛國際人脉。由這两位中任何一位作香港友好使者出面向外部世界解释香港真相,講好香港故事,改善香港形象,重建外界對香港信心,吸引外资到香港和粤港澳大湾区投资,重振香港國際金融中心、贸易中心、航運中心、旅游中心雄風,纯屬天方夜谭! 「實踐是檢驗真理的唯一標準」。不客氣的講句,梁振英、林鄭月娥兩個都是香港負資產、逆動力。兩人任內給國家帶來「利」抑或「害」,給港人帶來「福」還是「禍」,根本一目瞭然!古云「江山辈代人才出,各领風骚數百年」。清人龔自珍有詩云:「九州生氣恃風雷 ,萬馬齊喑究可哀 。我勸天公重抖擻,不拘一格降人才!」香江新舵手應另有其人。謂予不信,拭目以待!
</div>
| 126.352941 | 1,709 | 0.759777 | yue_Hant | 0.99134 |
6f185ee19da9c598bb941c7f30da710b83117512 | 1,002 | md | Markdown | algorithm/remove_copy_if.md | aditya041997/30-seconds-of-cpp | df6e750de7b26bc401ea75860be1e6ec1df34454 | [
"MIT"
] | 1,210 | 2019-03-22T18:53:59.000Z | 2022-03-31T07:47:34.000Z | algorithm/remove_copy_if.md | aditya041997/30-seconds-of-cpp | df6e750de7b26bc401ea75860be1e6ec1df34454 | [
"MIT"
] | 217 | 2019-04-05T09:07:26.000Z | 2022-01-29T22:42:37.000Z | algorithm/remove_copy_if.md | aditya041997/30-seconds-of-cpp | df6e750de7b26bc401ea75860be1e6ec1df34454 | [
"MIT"
] | 657 | 2019-03-17T15:04:47.000Z | 2022-03-31T05:05:40.000Z | # remove_copy_if
**Description** : Copies elements from the range `[first, last)`, to another range beginning at `d_first`, omitting the elements which satisfy specific criteria.
**Example**:
```cpp
auto isOdd = [](int i) {
return ((i%2) == 1);
};
std::vector<int> origin {1, 2, 3, 4, 5};
std::vector<int> destination;
// Copy elements to destination that return false for isOdd
std::remove_copy_if(origin.begin(), //first
origin.end(), //last
std::back_inserter(destination), //d_first
isOdd);
// origin is still {1, 2, 3, 4, 5}
for (auto value : origin) {
std::cout << value << " ";
}
std::cout << std::endl;
// destination is {2, 4}
for (auto value : destination) {
std::cout << value << " ";
}
std::cout << std::endl;
```
**[See Sample code](../snippets/algorithm/remove_if.cpp)**
**[Run Code](https://rextester.com/NENHU72340)** | 30.363636 | 161 | 0.550898 | eng_Latn | 0.651304 |
6f196efb964e4b12732969ea9485c529a562ae3a | 686 | md | Markdown | README.md | devscollab/portfolios | 52e62f2a1e916a5fc06d4292d2c5e12f5e8c4217 | [
"MIT"
] | 6 | 2019-11-08T12:57:06.000Z | 2019-12-18T16:20:27.000Z | README.md | devscollab/portfolios | 52e62f2a1e916a5fc06d4292d2c5e12f5e8c4217 | [
"MIT"
] | 4 | 2019-12-08T06:44:41.000Z | 2019-12-29T07:31:29.000Z | README.md | devscollab/portfolios | 52e62f2a1e916a5fc06d4292d2c5e12f5e8c4217 | [
"MIT"
] | 13 | 2019-11-06T13:57:53.000Z | 2019-12-28T09:12:42.000Z | # portfolios
Hosting Repo for DevsCollab OpenSource Member Portfolios.
## Members
<!-- [Full_Name](https://devscollab.github.io/portfolios/Folder_Name/) -->
[John Doe](https://devscollab.github.io/portfolios/johndoe/)
[Amritpal Singh Dhir](https://devscollab.github.io/portfolios/amritpal_singh_dhir/)
[Vinit Pramod Hande](https://devscollab.github.io/portfolios/Vinit_Hande/)
[Shubham Suresh Dudhal](https://devscollab.github.io/portfolios/shubhamdudhal/)
[Devendra Patil](https://devscollab.github.io/portfolios/devdeven/)
[Pratik Dhende](https://devscollab.github.io/portfolios/pratik-dhende/)
[Bhavansh Gupta](https://devscollab.github.io/portfolios/bhavansh-gupta/)
| 29.826087 | 83 | 0.77551 | hun_Latn | 0.104263 |
6f1b75555255b85ececa01aed5814f38a761c0fa | 1,774 | md | Markdown | docs/soft/maixpy/en/course/media/audio.md | STRfarfar/sipeed_wiki | e0ce6cbf5679824b5345432e6ae34386e89f979b | [
"MIT"
] | 58 | 2019-02-17T06:52:29.000Z | 2022-03-30T13:00:41.000Z | docs/soft/maixpy/en/course/media/audio.md | STRfarfar/sipeed_wiki | e0ce6cbf5679824b5345432e6ae34386e89f979b | [
"MIT"
] | 44 | 2019-02-02T13:01:47.000Z | 2021-05-02T01:59:52.000Z | docs/soft/maixpy/en/course/media/audio.md | STRfarfar/sipeed_wiki | e0ce6cbf5679824b5345432e6ae34386e89f979b | [
"MIT"
] | 58 | 2019-01-25T06:31:22.000Z | 2021-07-29T05:17:09.000Z | ---
title: the use of audio
keywords: maixpy, k210, AIOT, edge computing
desc: maixpy doc: audio (audio) use
---
Detailed API reference: [audio API](./../../api_reference/media/audio.md)
## Instructions
> MaixAmigo, MaixCube needs [Initialize ES8374 audio decoder chip](https://github.com/sipeed/MaixPy_scripts/blob/master/modules/others/es8374/es8374.py) before using audio
* Create audio object
```python
import audio
player = audio.Audio(path = "/sd/6.wav")
```
* Create I2S objects (used to process audio objects)
```python
from Maix import I2S
# init i2s(i2s0)
wav_dev = I2S(I2S.DEVICE_0)
# config i2s according to audio info
wav_dev.channel_config(wav_dev.CHANNEL_1, I2S.TRANSMITTER,resolution = I2S.RESOLUTION_16_BIT ,cycles = I2S.SCLK_CYCLES_32, align_mode = I2S.RIGHT_JUSTIFYING_MODE)
```
* Get audio object information and associate I2S object
```python
# read audio info
wav_info = player.play_process(wav_dev)
print("wav file head information: ", wav_info)
```
* Configure I2S objects according to audio information
```python
sample_rate = wav_info[1]
wav_dev.set_sample_rate(sample_rate)
```
* Use the associated I2S object to play audio
```python
# loop to play audio
while True:
ret = player.play()
if ret == None:
print("format error")
break
elif ret==0:
print("end")
break
```
* End playback
```python
player.finish()
```
## Routine
> Test audio address: [6.wav](https://github.com/sipeed/MaixPy_scripts/blob/master/multimedia/audio/6.wav)
* Play wav files: [play_wav](https://github.com/sipeed/MaixPy_scripts/blob/master/multimedia/audio/play_wav.py)
* Record audio as a wav file and save: [record_wav](https://github.com/sipeed/MaixPy_scripts/blob/master/multimedia/audio/record_wav.py)
| 23.972973 | 172 | 0.728861 | eng_Latn | 0.579484 |
6f1bbbfa4c015b0a1a60d5d805a2c641994bbcf2 | 10,277 | md | Markdown | content/articles/introduction-to-tailwind-css/index.md | markiesar57/engineering-education | 39c774fda6fa8d7e1979129f3ac3042d92628769 | [
"Apache-2.0"
] | 74 | 2020-02-14T16:36:55.000Z | 2021-01-27T12:44:40.000Z | content/articles/introduction-to-tailwind-css/index.md | markiesar57/engineering-education | 39c774fda6fa8d7e1979129f3ac3042d92628769 | [
"Apache-2.0"
] | 1,343 | 2019-10-02T14:50:30.000Z | 2021-01-27T19:23:59.000Z | content/articles/introduction-to-tailwind-css/index.md | markiesar57/engineering-education | 39c774fda6fa8d7e1979129f3ac3042d92628769 | [
"Apache-2.0"
] | 245 | 2019-10-01T23:18:34.000Z | 2021-01-26T22:05:56.000Z | ---
layout: engineering-education
status: publish
published: true
url: /introduction-to-tailwind-css/
title: Introduction to Tailwind CSS
description: This article is an introduction to Tailwind CSS, we will cover different installation and configuration methods and how to use its utility-based classes.
author: daniel-katungi
date: 2021-01-11T00:00:00-13:00
topics: [Languages]
excerpt_separator: <!--more-->
images:
- url: /engineering-education/introduction-to-tailwind-css/hero.jpg
alt: Introduction to Tailwind CSS image
---
CSS technology was one of the biggest game-changers in web development. It allowed for more styling capabilities and freedom. As CSS grew, so did its complexity. CSS in retrospect is not challenging to write, but it can be tricky to implement.
<!--more-->
The next phase in CSS was the development of libraries and frameworks. One of the most famous examples of CSS frameworks is Bootstrap.
Frameworks like Bootstrap have one major disadvantage: as they grew in size and usage, they became too large and offered less control over their styles. Learning frameworks like Bootstrap has become increasingly challenging because developers have to memorize hundreds of classes.
In this article, we will cover:
- What Tailwind CSS is.
- Different ways to install and configure Tailwind CSS.
- How to use utility-based classes over regular pre-written classes.
To follow along with this tutorial, a basic understanding of HTML and CSS is necessary. Any prior experience with a CSS framework is an added advantage.
### What is Tailwind CSS
Tailwind is a CSS framework created by [Adam Wathan](https://twitter.com/adamwathan). Unlike other frameworks, it does not ship prebuilt component classes to add to your HTML tags. Instead, it takes a different approach: it offers much lower-level control through utility-based classes.
### Installation
Let's get right into it and install Tailwind CSS so we can discover more about it. There are two ways to install Tailwind CSS, depending on your use case.
#### Method 1: Using the CDN
Using a CDN is the most common way of using any CSS framework, and Tailwind CSS is no exception. We add the Tailwind CSS file from the CDN in form of a link in the HTML page's head section.
First, create an HTML file and give it a name. We will name ours `index.html`. Then, inside it, write the boilerplate HTML code and add the CDN link as shown below:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Section Tailwind Demo</title>
<link
href="https://unpkg.com/tailwindcss@^2/dist/tailwind.min.css"
rel="stylesheet"
/>
</head>
<body></body>
</html>
```
Great. We have Tailwind CSS installed in our project.
Note: Installing Tailwind CSS by using its CDN has a few drawbacks such as:
- You cannot add any third-party plugins.
- Since Tailwind CSS fetches files when the file loads, you cannot purge unused files.
- You cannot tinker with some configurations like the theme.
#### Method 2: Using npm
Using the npm method bestows the full experience of Tailwind CSS. It is a common installation method because most JavaScript frameworks use a similar approach. There are minimal differences in the installation process, depending on the framework's architecture.
Let's get started.
First, let's create a directory where we will be working.
In your terminal, run:
```bash
mkdir section-tailwind-demo && cd section-tailwind-demo
npm init -y
```
Next, let's install Tailwind CSS together with a couple of companion packages. We need PostCSS because Tailwind CSS runs as a PostCSS plugin, and the `autoprefixer` plugin adds vendor prefixes when the styles are compiled down to vanilla CSS.
In the terminal run:
```bash
npm install tailwindcss@latest postcss@latest autoprefixer@latest
```
The command above will install all the dependencies we need. Next, we have to create config files for Tailwind CSS and PostCSS. To generate them, we make use of the `tailwind cli` utility provided by Tailwind CSS.
In the terminal run:
```bash
npx tailwind init -p
```
It will generate two files, `tailwind.config.js` and `postcss.config.js`.
The tailwind.config.js looks like this:
```js
module.exports = {
purge: [],
darkMode: false, // or 'media' or 'class'
theme: {
extend: {},
},
variants: {
extend: {},
},
plugins: [],
};
```
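The generated `tailwind.config.js` is where project-wide tweaks live. As a sketch of what a lightly customized config might look like (the `purge` paths and the extra `brand` color below are assumptions for illustration, not part of this demo):

```js
module.exports = {
  // Files Tailwind should scan for class names when purging unused CSS in production
  purge: ['./index.html', './src/**/*.html'],
  darkMode: false,
  theme: {
    extend: {
      // Hypothetical brand color layered on top of the default palette
      colors: {
        brand: '#1a7f5a',
      },
    },
  },
  variants: {
    extend: {},
  },
  plugins: [],
};
```

With a config like this, a class such as `bg-brand` would become available alongside the built-in utilities.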
The `postcss.config.js` file should look like this:
```js
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
};
```
Now that we have successfully set up the environment, let's set up Tailwind CSS in our project.
First, create two files — an HTML file called `index.html` and a stylesheet called `style.css`.
The HTML part can have the basic boilerplate syntax at the beginning of the article. In the CSS file, we need to inform `autoprefixer` that we will use Tailwind CSS.
We do this by importing the Tailwind CSS files, as shown below:
```CSS
@tailwind base;
@tailwind components;
@tailwind utilities;
```
Finally, we need to build our Tailwind CSS, so in the terminal run:
```bash
npx tailwindcss-cli@latest build -o css/tailwind.css
```
This command will create a `tailwind.css` file in the CSS folder. The `-o` in the command stands for the output path, and we specified that our output file would be in the `_css/tailwind.css_` path.
Note: You do not need a CSS file for that command to work; it will still generate the Tailwind CSS file. You may, however, want the `style.css` file to add custom styles, write Tailwind CSS directives in it, or mix Tailwind CSS with custom CSS.
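One common way to mix Tailwind CSS with custom CSS in `style.css` is the `@apply` directive, which composes utilities into a class of your own. A small sketch (the `.btn` class name here is just an illustration):

```css
@tailwind base;
@tailwind components;
@tailwind utilities;

/* A custom class built from Tailwind utilities via @apply */
.btn {
  @apply px-6 py-2 rounded-lg bg-green-100 text-green-700;
}
```

After rebuilding, `class="btn"` in your HTML applies all of those utilities at once.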
A different way of processing the CSS is to do it in production by adding a script in the `package.json`. We will use the `postcss cli` to run the operation during build time.
To do this, let's install PostCSS CLI.
Run this in the terminal:
```bash
npm install postcss-cli
```
Next, we add the build script in the package.json like this:
```json
"scripts": {
"build": "postcss style.css -o css/tailwind.css"
}
```
Now, when you run `npm run build` it should compile the Tailwind CSS file into a CSS folder, as we wanted.
We then need to add our Tailwind CSS file to the HTML like we would a regular CSS file.
```HTML
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<link rel="stylesheet" href="css/tailwind.css" />
</head>
<body></body>
</html>
```
Like that, we have everything set up.
### Working with utility classes
To show and explain how utility classes work, let's create a button using Tailwind CSS. This will show how Tailwind CSS affects the HTML elements.
Within the Body tags in the HTML page, add:
```html
<button>Section</button>
```
Let's add some classes to our button because it looks plain. In Tailwind CSS, color values range from 100 to 900 depending on the color intensity. The utility class for the background is `bg`. To make our button's background a faint shade of green, we add the class `bg-green-100`.

Next, let's style the text in the button. For this, we use the utility class `text`. The same color rules apply, so to color the text we add the class `text-green-700`.
Our button is looking good, so now let's add some padding. The spacing syntax is quite straightforward: the property abbreviation, an optional side, then the size value.
This applies to all size and spacing properties, like margin, for example. To add padding on the bottom, it would be `pb-8`. These values range from 0 to 64. To see how ranges work in-depth, check the [documentation](https://tailwindcss.com/docs/padding).
In our example, our padding will be `py-2` for the top and the bottom and `px-6` for the left and right. Let's add some margin on the top with `my-20` and some on the left with `mx-20`.
Now, let's make the button more rounded with the `rounded` utility. You can find a full list of border-radius utilities and all their classes in the [documentation](https://tailwindcss.com/docs/border-radius). We will use a border-radius of 0.5rem, which corresponds to `rounded-lg`.
Finally, let's add some hover magic to make the button more lively. To do so, we add the classes `hover:bg-green-600` and `hover:text-green-200`.
With that, we have styled our button with no CSS at all. The critical thing to take away is how much control Tailwind CSS gives you over the styling and elements. To get the full code for the tutorial, check [here](https://github.com/katungi/Section-tailwind-demo).
Our final code looks like this:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Section-Tailwind-Demo</title>
<link rel="stylesheet" href="css/tailwind.css" />
</head>
<body>
<button
type="button"
class="hover:bg-green-600 hover:text-green-200 bg-green-100 text-green-700 mx-20 my-20 px-6 py-2 rounded-lg"
>
Section
</button>
</body>
</html>
```
The code (our button) should look like this:

### Conclusion
Tailwind CSS is a game-changer to how we use CSS. As demonstrated above, utility classes are easy to use and learn. As a result, you should now be able to build a simple layout using Tailwind CSS.
Tailwind CSS also gives you more control over the look you want, which can be an advantage or a disadvantage. With other frameworks, the base look is usually good enough to get a developer started. That isn't the case with Tailwind CSS as you have to do everything yourself.
Another thing to note is how we added all the CSS classes, we used in the demo to HTML elements. As the complexity of your styling grows, the code will get untidy. We will tackle how to clean up Tailwind CSS projects with a full demo in the next article.
For further details on Tailwind CSS, you can take a look at the official [documentation](https://tailwindcss.com/docs).
---
Peer Review Contributions by: [Linus Muema](/engineering-education/authors/linus-muema/)
| 41.776423 | 285 | 0.743991 | eng_Latn | 0.993922 |
6f1c83114de60f67e7735135eae8306598685b80 | 1,435 | md | Markdown | README.md | jfrausto/React-Employee-Directory | b4a3f9fd19f817346a103d6d5b3b872dd13e4e36 | [
"MIT"
] | null | null | null | README.md | jfrausto/React-Employee-Directory | b4a3f9fd19f817346a103d6d5b3b872dd13e4e36 | [
"MIT"
] | null | null | null | README.md | jfrausto/React-Employee-Directory | b4a3f9fd19f817346a103d6d5b3b872dd13e4e36 | [
"MIT"
] | null | null | null | # React Employee Directory
This application uses React to display an employee directory on a single page with filters and search parameters. Sample users and placeholders are used for demonstration purposes.
## Table of Contents
- [Installation](#Installation)
- [Usage](#Usage)
- [License](#License)
- [Contributing](#Contributing)
- [Tests](#Testing)
- [Questions](#Questions)
- [Links](#Links)
## Installation
Visit the application [here](https://jfrausto.github.io/React-Employee-Directory/)
## Usage
This application uses the React Framework to dynamically search and display employees instantly as you type. Sample employee information is supplied by the [random user](https://randomuser.me/) API. Use this application on your database to search for employees by name, email, phone number or date of birth. You can also sort each category by clicking on the carrot on each header.

## License
This project is covered under the **MIT** license -- see more info [here](https://opensource.org/licenses/MIT).
## Contributing
Create a new branch and do a pull request, pending review.
## Questions
- GitHub: [jfrausto](https://github.com/jfrausto)
- If you have any further questions, you can reach me at _fraustojesse24@gmail.com_
## Links
- [Deployed app](https://jfrausto.github.io/React-Employee-Directory/)
| 35 | 381 | 0.763763 | eng_Latn | 0.977699 |
6f1d482fe6b80deffa34201887e44d788daca4f1 | 2,385 | md | Markdown | _posts/IT/2021-09-03-3way-handshake.md | JIKMAN/JIKMAN.github.io | 31cf6dcfbf954a5cd4b351f20fe71c9dd5132dc6 | [
"MIT"
] | null | null | null | _posts/IT/2021-09-03-3way-handshake.md | JIKMAN/JIKMAN.github.io | 31cf6dcfbf954a5cd4b351f20fe71c9dd5132dc6 | [
"MIT"
] | null | null | null | _posts/IT/2021-09-03-3way-handshake.md | JIKMAN/JIKMAN.github.io | 31cf6dcfbf954a5cd4b351f20fe71c9dd5132dc6 | [
"MIT"
] | null | null | null | ---
title: "[TCP/IP] 3-way / 4-way Handshake"
categories:
- IT
tags:
- [3-way Handshake, 4-way Handshake]
toc: true
toc_sticky: true
---
## **What is the 3-way Handshake?**

TCP uses a three-way handshake to establish (establish) a logical connection between devices.

The TCP 3-way handshake is the process by which an application communicating over the TCP/IP protocol
first establishes a session with the remote host before transmitting data, in order to guarantee accurate delivery.

Client > Server : **TCP SYN**

Server > Client : **TCP SYN, ACK**

Client > Server : **TCP ACK**

Here, SYN stands for 'synchronize sequence numbers' and ACK for 'acknowledgment'.

This procedure is essential for successfully establishing a TCP connection.

**Role of the TCP 3-way handshake**

• It guarantees that both sides are ready to transmit data, and lets each side know the other is ready before actual data transfer begins.

• It lets both sides learn the other side's initial sequence number.



**The TCP 3-way handshake, step by step**

**[STEP 1]**

Client A sends a SYN packet to server B to request a connection.

At this point, client A is in the **SYN_SENT** state, waiting for a SYN/ACK response, while **server B is in the Wait for Client** state.

**[STEP 2]**

Server B receives the SYN request and sends back a packet with the ACK and SYN flags set to accept the request,

then waits for A to respond with an ACK. At this point, **server B** is in the **SYN_RECEIVED** state.

**[STEP 3]**

Client A sends an ACK to server B; from this point on, the connection is established and data can flow.

**Server B's state is now ESTABLISHED**.

Communicating in this way is TCP's 3-way handshake, which establishes a reliable connection.
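The handshake described above happens inside the operating system's TCP stack; an application only sees `connect()` return once the connection is ESTABLISHED. A minimal Python sketch over loopback (the payload and the OS-chosen port are arbitrary):

```python
import socket
import threading

def run_server(server_sock):
    # accept() completes once the kernel has finished the 3-way handshake
    conn, addr = server_sock.accept()
    conn.sendall(b"hello")
    conn.close()

# Listening socket: the server side sits waiting for a client (LISTEN)
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_server, args=(server,))
t.start()

# connect() triggers SYN -> SYN/ACK -> ACK; it returns once ESTABLISHED
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
data = client.recv(1024)
print(data)  # b'hello'

# close() on each side starts the FIN/ACK teardown described below
client.close()
t.join()
server.close()
```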
## **4-way Handshaking**

While the 3-way handshake is used to initialize a TCP connection,

the 4-way handshake is the procedure performed to terminate a session.



**The TCP 4-way handshake, step by step**

**[STEP 1]**

The client sends a FIN flag to signal that it wants to close the connection. At this point, **client A** enters the **FIN-WAIT** state.

**[STEP 2]**

Server B receives the FIN flag, first sends an ACK acknowledgment, then waits until its own transmission is finished. This is

**server B's CLOSE_WAIT** state.

**[STEP 3]**

When it is ready to close the connection, server B sends a FIN flag to the client to signal that it is ready to release the connection. At this point, server B's state is **LAST-ACK**.

**[STEP 4]**

The client sends an ACK confirming that the server is ready to release the connection.

**Client A's state changes from FIN-WAIT to** **TIME-WAIT**.

But what happens if "a packet the server sent before its FIN arrives later than the FIN packet, due to routing delay or retransmission after packet loss"?

If the client has already closed the session, any packet arriving late would be dropped and its data lost.

To guard against this, **client A** keeps the session open for a certain time (240 seconds by default) even after receiving the FIN from the server, waiting for straggler packets; this process is called **"TIME_WAIT"**. Once that time passes, the session expires, the connection is closed, and the state changes to **"CLOSED"**.
| 14.813665 | 187 | 0.667086 | kor_Hang | 1.00001 |
6f1d5ccab2ad7741de6578066837f5792801f3e4 | 65 | md | Markdown | README.md | NuChwezi/all266 | 2c08429a5446743ec6a42a891f437bfc10717733 | [
"Apache-2.0"
] | null | null | null | README.md | NuChwezi/all266 | 2c08429a5446743ec6a42a891f437bfc10717733 | [
"Apache-2.0"
] | null | null | null | README.md | NuChwezi/all266 | 2c08429a5446743ec6a42a891f437bfc10717733 | [
"Apache-2.0"
] | null | null | null | # all266
Xploring with Nuchwezi Media Lab (NML) makers of 266 TV
| 21.666667 | 55 | 0.769231 | eng_Latn | 0.830024 |
6f1d705eb7e87347f5574190f070bfe1fd4fffbe | 2,938 | md | Markdown | APT/apt.md | lsieun/learn-linux-debian | d5a6fd083dd7e4ec84d9c00c667346765f15efdb | [
"MIT"
] | null | null | null | APT/apt.md | lsieun/learn-linux-debian | d5a6fd083dd7e4ec84d9c00c667346765f15efdb | [
"MIT"
] | null | null | null | APT/apt.md | lsieun/learn-linux-debian | d5a6fd083dd7e4ec84d9c00c667346765f15efdb | [
"MIT"
] | null | null | null | # apt
## update
The aim of the `apt update` command is to download for each package source the corresponding `Packages` (or `Sources`) file.
## install
If the file `sources.list` mentions **several distributions**, it is possible to give **the version of the package** to install. A specific version number can be requested with `apt install package=version`, but indicating its distribution of origin (Stable, Testing or Unstable) — with `apt install package/distribution` — is usually preferred. With this command, it is possible to go back to an older version of a package (if for instance you know that it works well), provided that it is still available in one of the sources referenced by the `sources.list` file. Otherwise the [snapshot.debian.org](http://snapshot.debian.org/) archive can come to the rescue.
To summarize:

- (1) A package source listed in `sources.list` may contain several versions of the same package.
- (2) One way to pick among the versions: `apt install <package>=<version>`.
- (3) The other way: `apt install <package>/<distribution>`.
```bash
## Installation of the unstable version of spamassassin
# apt install spamassassin/unstable
```
### The cache of .deb files
APT keeps a copy of each downloaded `.deb` file in the directory `/var/cache/apt/archives/`. In case of frequent updates, this directory can quickly take a lot of disk space with several versions of each package; you should regularly sort through them.
Two commands can be used: `apt-get clean` entirely empties the directory; `apt-get autoclean` only removes packages which can no longer be downloaded (because they have disappeared from the Debian mirror) and are therefore clearly useless (the configuration parameter `APT::Clean-Installed` can prevent the removal of `.deb` files that are currently installed). Note that `apt` does not support those commands.
## System Upgrade
Regular upgrades are recommended, because they include the latest security updates. To upgrade, use `apt upgrade` (of course after `apt update`).
This command looks for installed packages which can be upgraded without removing any packages. In other words, the goal is to ensure the least intrusive upgrade possible. `apt-get` is slightly more demanding than `aptitude` or `apt` because it will refuse to install packages which were not installed beforehand.
## reinstall
The system can sometimes be damaged after the removal or modification of files in a package. The easiest way to retrieve these files is to reinstall the affected package.
Unfortunately, the packaging system finds that the latter is already installed and politely refuses to reinstall it; to avoid this, use the `--reinstall` option of the `apt` and `apt-get` commands. The following command reinstalls `<package>` even if it is already present:
```bash
# apt --reinstall install <package>
```
Be careful! Using `apt --reinstall` to restore packages modified during an attack will certainly not recover the system as it was.
| 62.510638 | 664 | 0.776038 | eng_Latn | 0.998875 |
6f1e07e72d5320e62b216a0c5b5a5ddc8fca4707 | 19,703 | md | Markdown | docs/rook/v1.4/edgefs-csi.md | BlaineEXE/rook.github.io | 65463e0376c3e7fa5abc90abd758590854926941 | [
"Apache-2.0"
] | 31 | 2016-11-08T16:01:57.000Z | 2021-07-27T11:11:17.000Z | docs/rook/v1.4/edgefs-csi.md | BlaineEXE/rook.github.io | 65463e0376c3e7fa5abc90abd758590854926941 | [
"Apache-2.0"
] | 41 | 2016-12-02T18:35:15.000Z | 2021-12-30T17:53:22.000Z | docs/rook/v1.4/edgefs-csi.md | BlaineEXE/rook.github.io | 65463e0376c3e7fa5abc90abd758590854926941 | [
"Apache-2.0"
] | 44 | 2016-11-04T22:00:56.000Z | 2022-01-05T02:02:01.000Z | ---
title: CSI driver
weight: 4700
indent: true
---
{% include_relative branch.liquid %}
# EdgeFS Rook integrated CSI driver, provisioner, attacher and snapshotter
[Deprecated](https://github.com/rook/rook/issues/5823#issuecomment-703834989)
[Container Storage Interface (CSI)](https://github.com/container-storage-interface/) driver, provisioner, attacher and snapshotter for EdgeFS Scale-Out NFS/ISCSI services
## Overview
EdgeFS CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and an EdgeFS local cluster site. They allow dynamic and static provisioning of EdgeFS NFS exports and ISCSI LUNs, and attaching them to application workloads. With the EdgeFS NFS/ISCSI implementation, I/O load can be spread out across multiple PODs, thus eliminating the I/O bottlenecks of classic single-node NFS/ISCSI and providing highly available persistent volumes. The current implementation of the EdgeFS CSI plugins was tested in a Kubernetes environment (requires Kubernetes 1.13+).
## Prerequisites
* Ensure your kubernetes cluster version is 1.13+
* Kubernetes cluster must allow privileged pods, this flag must be set for the API server and the kubelet
([instructions](https://github.com/kubernetes-csi/docs/blob/735f1ef4adfcb157afce47c64d750b71012c8151/book/src/Setup.md#enable-privileged-pods)):
```console
--allow-privileged=true
```
* Required the API server and the kubelet feature gates
([instructions](https://github.com/kubernetes-csi/docs/blob/735f1ef4adfcb157afce47c64d750b71012c8151/book/src/Setup.md#enabling-features)):
```console
--feature-gates=VolumeSnapshotDataSource=true,CSIDriverRegistry=true
```
* Mount propagation must be enabled, the Docker daemon for the cluster must allow shared mounts
([instructions](https://github.com/kubernetes-csi/docs/blob/735f1ef4adfcb157afce47c64d750b71012c8151/book/src/Setup.md#enabling-mount-propagation))
* Kubernetes CSI drivers require `CSIDriver` and `CSINodeInfo` resource types
[to be defined on the cluster](https://github.com/kubernetes-csi/docs/blob/460a49286fe164a78fde3114e893c48b572a36c8/book/src/Setup.md#csidriver-custom-resource-alpha).
Check if they are already defined:
```console
kubectl get customresourcedefinition.apiextensions.k8s.io/csidrivers.csi.storage.k8s.io
kubectl get customresourcedefinition.apiextensions.k8s.io/csinodeinfos.csi.storage.k8s.io
```
If the cluster doesn't have "csidrivers" and "csinodeinfos" resource types, create them:
```console
kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.13/pkg/crd/manifests/csidriver.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/csi-api/release-1.13/pkg/crd/manifests/csinodeinfo.yaml
```
* Depends on preferred CSI driver type, following utilities must be installed on each Kubernetes node (For Debian/Ubuntu based systems):
```console
# for NFS
apt install -y nfs-common rpcbind
# for ISCSI
apt install -y open-iscsi
```
## EdgeFS CSI drivers configuration
For each driver type (NFS/ISCSI) we have already prepared configuration files examples, there are:
* [EdgeFS CSI NFS driver config](https://github.com/rook/rook/tree/master/cluster/examples/kubernetes/edgefs/csi/nfs/edgefs-nfs-csi-driver-config.yaml)
* [EdgeFS CSI ISCSI driver config](https://github.com/rook/rook/tree/master/cluster/examples/kubernetes/edgefs/csi/iscsi/edgefs-iscsi-csi-driver-config.yaml)
Secret file configuration options example:
```yaml
# EdgeFS k8s cluster options
k8sEdgefsNamespaces: ["rook-edgefs"] # edgefs cluster namespace
k8sEdgefsMgmtPrefix: rook-edgefs-mgr # edgefs cluster management prefix
# EdgeFS csi operations options
cluster: cltest # substitution edgefs cluster name for csi operations
tenant: test # substitution edgefs tenant name for csi operations
#serviceFilter: "nfs01" # comma delimited list of allowed service names for filtering
# EdgeFS GRPC security options
username: admin # edgefs k8s cluster grpc service username
password: admin # edgefs k8s cluster grpc service password
```
Options for NFS and ISCSI configuration files
| `Name` | `Description` | `Default value` | `Required` | `Type` |
| ----------------------- | --------------------------------------------------------------------- | ------------------------------- | ---------- | ---------- |
| `k8sEdgefsNamespaces` | Array of Kubernetes cluster's namespaces for EdgeFS service discovery | `rook-edgefs` | true | both |
| `k8sEdgefsMgmtPrefix` | Rook EdgeFS cluster mgmt service prefix | `rook-edgefs-mgr` | true | both |
| `username` | EdgeFS gRPC API server privileged user | `admin` | true | both |
| `password` | EdgeFS gRPC API server password | `admin` | true | both |
| `cluster` | EdgeFS cluster namespace also known as 'region' | | false | both |
| `tenant` | EdgeFS tenant isolated namespace | | false | both |
| `bucket` | EdgeFS tenant bucket to use as a default | | false | ISCSI only |
| `serviceFilter` | Comma delimited list of allowed service names for filtering | `""` means all services allowed | false | both |
| `serviceBalancerPolicy` | Service selection policy [`minexportspolicy`, `randomservicepolicy`] | `minexportspolicy` | false | both |
| `chunksize` | Chunk size for actual volume, in bytes | `16384`, should be power of two | false | both |
| `blocksize` | Block size for actual volume, in bytes | `4096`, should be power of two | false | iSCSI only |
| `fsType` | New volume's filesystem type | `ext4`, `ext3`, `xfs` | ext4 | ISCSI only |
| `forceVolumeDeletion` | Automatically deletes EdgeFS volume after usage | `false` | false | both |
By using `k8sEdgefsNamespaces` and `k8sEdgefsMgmtPrefix` parameters, driver is capable of detecting ClusterIPs and Endpoint IPs to provision and attach volumes.
## Apply EdgeFS CSI NFS driver configuration
Check configuration options and create kubernetes secret for Edgefs CSI NFS plugin
```console
git clone --single-branch --branch v1.4.9 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/edgefs/csi/nfs
kubectl create secret generic edgefs-nfs-csi-driver-config --from-file=./edgefs-nfs-csi-driver-config.yaml
```
## Deploy EdgeFS CSI NFS driver
After the secret is created successfully, deploy the EdgeFS CSI plugin, provisioner, and attacher using the following command:
```console
cd cluster/examples/kubernetes/edgefs/csi/nfs
kubectl apply -f edgefs-nfs-csi-driver.yaml
```
There should be a number of EdgeFS CSI plugin PODs running as a DaemonSet:
```console
...
NAMESPACE NAME READY STATUS RESTARTS AGE
default edgefs-nfs-csi-controller-0 4/4 Running 0 33s
default edgefs-nfs-csi-node-9st9n 2/2 Running 0 33s
default edgefs-nfs-csi-node-js7jp 2/2 Running 0 33s
default edgefs-nfs-csi-node-lhjgr 2/2 Running 0 33s
...
```
At this point configuration is all ready and available for consumption by applications.
## Pre-provisioned volumes (NFS)
This method allows using exports already created in EdgeFS services. It keeps the exports provisioned after application PODs are terminated.
Read more on how to create PersistentVolume specification for pre-provisioned volumes:
[Link to Pre-provisioned volumes manifest specification](https://kubernetes-csi.github.io/docs/Usage.html#pre-provisioned-volumes)
To test creating and mounting a pre-provisioned volume to a pod, execute the example:
> **NOTE**: Make sure that the `volumeHandle: segment:service@cluster/tenant/bucket` in nginx.yaml already exists on the EdgeFS cluster and is served via an EdgeFS NFS service. Any of the volumeHandle's parameters may be omitted, in which case they are substituted from the CSI configuration file parameters.
Example:
```console
cd cluster/examples/kubernetes/edgefs/csi/nfs/examples
kubectl apply -f ./preprovisioned-edgefs-volume-nginx.yaml
```
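For orientation, a pre-provisioned volume roughly takes the shape of the PersistentVolume sketch below. This is a hedged illustration: the `volumeHandle` format and the driver name `io.edgefs.csi.nfs` come from this guide, but the remaining field values (capacity, access modes, PV name) are assumptions — the linked example manifest in the repository is authoritative.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: edgefs-preprovisioned-pv      # illustrative name
spec:
  capacity:
    storage: 10Gi                     # assumed size
  accessModes:
    - ReadWriteMany
  csi:
    driver: io.edgefs.csi.nfs
    # segment:service@cluster/tenant/bucket -- must already exist on the cluster
    volumeHandle: rook-edgefs:nfs01@cltest/ten1/bk1
```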
## Dynamically provisioned volumes (NFS)
To set up the system for dynamic provisioning, the administrator needs to create a StorageClass pointing to the CSI driver's external provisioner and specifying any parameters required by the driver.
[Link to dynamically provisioned volumes specification](https://kubernetes-csi.github.io/docs/Usage.html#dynamic-provisioning)
### Note
For dynamically provisioned volumes, Kubernetes generates the volume name automatically
(for example, pvc-871068ed-8b5d-11e8-9dae-005056b37cb2).
Additional creation options should be passed as parameters in the StorageClass definition, for example:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: edgefs-nfs-csi-storageclass
provisioner: io.edgefs.csi.nfs
parameters:
segment: rook-edgefs
service: nfs01
tenant: ten1
encryption: true
```
### Parameters
> **NOTE**: Parameters and their options are case-sensitive and should be lowercase.
| `Name` | `Description` | `Allowed values` | `Default value` |
| ------------ | -------------------------------------------------------- | ------------------------------------------------- | --------------- |
| `segment` | Edgefs cluster namespace for current StorageClass or PV. | | rook-edgefs |
| `service` | Edgefs cluster service if not defined in secret | | |
| `cluster` | Edgefs cluster namespace if not defined in secret | | |
| `tenant` | Edgefs tenant namespace if not defined in secret | | |
| `chunksize` | Chunk size for actual volume, in bytes | should be power of two | 16384 bytes |
| `blocksize` | Block size for actual volume, in bytes | should be power of two | 4096 bytes |
| `acl` | Volume acl restrictions | | all |
| `ec` | Enables ccow erasure coding for volume | `true`, `false`, `0`, `1` | `false` |
| `ecmode` | Sets the ccow erasure coding data mode (if the `ec` option is enabled) | `3:1:xor`, `2:2:rs`, `4:2:rs`, `6:2:rs`, `9:3:rs` | `6:2:rs` |
| `encryption` | Enables encryption for volume | `true`, `false`, `0`, `1` | `false` |
Example:
```console
cd cluster/examples/kubernetes/edgefs/csi/nfs/examples
kubectl apply -f ./dynamic-nginx.yaml
```
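The `ecmode` values in the table above encode the erasure-coding layout as `data:parity:codec`. As an illustration — interpreting the first two fields as data and parity chunk counts is an assumption based on common erasure-coding notation, not something stated in this guide — a small helper can compute the raw-to-usable storage overhead of each mode:

```python
def ec_overhead(ecmode: str) -> float:
    """Return the raw/usable storage ratio for an ecmode string like '6:2:rs'."""
    data, parity, _codec = ecmode.split(":")
    data, parity = int(data), int(parity)
    # Each stripe stores `data` usable chunks plus `parity` redundant chunks.
    return (data + parity) / data

for mode in ["3:1:xor", "2:2:rs", "4:2:rs", "6:2:rs", "9:3:rs"]:
    print(mode, round(ec_overhead(mode), 2))
```

Under this reading, `6:2:rs` costs about 1.33x raw storage per usable byte, while mirrored-style `2:2:rs` costs 2x.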
## Apply EdgeFS CSI iSCSI driver configuration
Check the configuration options and create a Kubernetes secret for the EdgeFS CSI iSCSI plugin:
```console
cd cluster/examples/kubernetes/edgefs/csi/iscsi
kubectl create secret generic edgefs-iscsi-csi-driver-config --from-file=./edgefs-iscsi-csi-driver-config.yaml
```
## Deploy EdgeFS CSI iSCSI driver
After the secret is created successfully, deploy the EdgeFS CSI plugin, provisioner, attacher, and snapshotter using the following commands:
```console
cd cluster/examples/kubernetes/edgefs/csi/iscsi
kubectl apply -f edgefs-iscsi-csi-driver.yaml
```
There should be a number of EdgeFS CSI iSCSI plugin pods running as a DaemonSet:
```console
...
NAMESPACE NAME READY STATUS RESTARTS AGE
default edgefs-iscsi-csi-controller-0 4/4 Running 0 12s
default edgefs-iscsi-csi-node-26464 2/2 Running 0 12s
default edgefs-iscsi-csi-node-p5r58 2/2 Running 0 12s
default edgefs-iscsi-csi-node-ptn2m 2/2 Running 0 12s
...
```
At this point, the configuration is complete and available for consumption by applications.
## Pre-provisioned volumes (iSCSI)
This method allows you to use exports already created in EdgeFS iSCSI services, and keeps the exports provisioned after the application pods terminate.
Read more on how to create a PersistentVolume specification for pre-provisioned volumes:
[Link to Pre-provisioned volumes manifest specification](https://kubernetes-csi.github.io/docs/Usage.html#pre-provisioned-volumes)
To test creating and mounting a pre-provisioned volume to a pod, execute the example:
> **NOTE**: Make sure that the `volumeHandle: segment:service@cluster/tenant/bucket/lun` in nginx.yaml already exists on the EdgeFS cluster and is served via an EdgeFS iSCSI service. Any of the volumeHandle's parameters may be omitted, in which case they are substituted from the CSI configuration file parameters.
Example:
```console
cd cluster/examples/kubernetes/edgefs/csi/iscsi/examples
kubectl apply -f ./preprovisioned-edgefs-volume-nginx.yaml
```
## Dynamically provisioned volumes (iSCSI)
For dynamic volume provisioning, the administrator needs to set up a _StorageClass_ pointing to the driver.
In this case Kubernetes generates volume name automatically (for example `pvc-ns-cfc67950-fe3c-11e8-a3ca-005056b857f8`).
Default driver configuration may be overwritten in `parameters` section:
[Link to dynamically provisioned volumes specification](https://kubernetes-csi.github.io/docs/Usage.html#dynamic-provisioning)
### Note for dynamically provisioned volumes
For dynamically provisioned volumes, Kubernetes will generate volume names automatically
(for example pvc-871068ed-8b5d-11e8-9dae-005056b37cb2).
To pass additional creation parameters, you can add them as parameters to your StorageClass definition.
Example:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: edgefs-iscsi-csi-storageclass
provisioner: io.edgefs.csi.iscsi
parameters:
segment: rook-edgefs
service: iscsi01
cluster: cltest
tenant: test
bucket: bk1
encryption: true
```
### Parameters
> **NOTE**: Parameters and their options are case-sensitive and should be lowercase.
| `Name` | `Description` | `Allowed values` | `Default value` |
| ------------ | --------------------------------------------------------- | -------------------------------------- | --------------- |
| `segment` | Edgefs cluster namespace for specific StorageClass or PV. | | `rook-edgefs` |
| `service` | Edgefs cluster service if not defined in secret | | |
| `cluster` | Edgefs cluster namespace if not defined in secret | | |
| `tenant` | Edgefs tenant namespace if not defined in secret | | |
| `bucket` | Edgefs bucket namespace if not defined in secret | | |
| `chunksize` | Chunk size for actual volume, in bytes | should be power of two | `16384` |
| `blocksize`  | Block size for actual volume, in bytes                    | should be power of two                 | `4096`          |
| `fsType` | New volume's filesystem type | `ext4`, `ext3`, `xfs` | `ext4` |
| `acl` | Volume acl restrictions | | `all` |
| `ec` | Enables ccow erasure coding for volume | `true`, `false`, `0`, `1` | `false` |
| `ecmode`     | Sets the ccow erasure coding data mode (if the `ec` option is enabled) | `2:2:rs`, `4:2:rs`, `6:2:rs`, `9:3:rs` | `4:2:rs`        |
| `encryption` | Enables encryption for volume | `true`, `false`, `0`, `1` | `false` |
Example:
```console
cd cluster/examples/kubernetes/edgefs/csi/iscsi/examples
kubectl apply -f ./dynamic-nginx.yaml
```
## EdgeFS CSI ISCSI driver snapshots and clones
### Getting information about existing snapshots
```console
# snapshot classes
kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
# snapshot list
kubectl get volumesnapshots.snapshot.storage.k8s.io
# volumesnapshotcontents
kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
```
### To create a volume clone from an existing snapshot
* Create snapshotter StorageClass [Example yaml](https://github.com/rook/rook/tree/master/cluster/examples/kubernetes/edgefs/csi/iscsi/examples/snapshots/snapshot-class.yaml)
* Have an existing PVC based on EdgeFS ISCSI LUN
* Take snapshot from volume [Example yaml](https://github.com/rook/rook/tree/master/cluster/examples/kubernetes/edgefs/csi/iscsi/examples/snapshots/create-snapshot.yaml)
* Clone volume from existing snapshot [Example yaml](https://github.com/rook/rook/tree/master/cluster/examples/kubernetes/edgefs/csi/iscsi/examples/snapshots/nginx-snapshot-clone-volume.yaml)
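The snapshot request in the steps above roughly takes the shape below. This is a hedged sketch: snapshot CRD API versions and field names changed across Kubernetes releases, so the `apiVersion`, class name, and source fields here are assumptions — prefer the linked `create-snapshot.yaml` from the repository.

```yaml
# Illustrative VolumeSnapshot request (field names assumed; see linked yamls)
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: edgefs-iscsi-snapshot
spec:
  snapshotClassName: edgefs-iscsi-csi-snapshotclass   # assumed class name
  source:
    name: existing-edgefs-iscsi-pvc                   # PVC backed by an EdgeFS iSCSI LUN
    kind: PersistentVolumeClaim
```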
## Documentation and support
For details about other configuration and deployment options for the NFS and iSCSI EdgeFS CSI plugins, see the wiki pages:
* [Quick Start Guide](https://github.com/Nexenta/edgefs-csi/wiki/EdgeFS-CSI-Quick-Start-Guide)
To report a problem, please submit an issue at: [Issues](https://github.com/Nexenta/edgefs-csi/issues)
## Troubleshooting
* Show installed drivers:
```console
kubectl get csidrivers.csi.storage.k8s.io
kubectl describe csidrivers.csi.storage.k8s.io
```
* Error:
```console
MountVolume.MountDevice failed for volume "pvc-ns-<...>" :
driver name io.edgefs.csi.iscsi not found in the list of registered CSI drivers
```
Make sure _kubelet_ is configured with `--root-dir=/var/lib/kubelet`, otherwise update paths in the driver yaml file
([all requirements](https://github.com/kubernetes-csi/docs/blob/387dce893e59c1fcf3f4192cbea254440b6f0f07/book/src/Setup.md#enabling-features)).
* "VolumeSnapshotDataSource" feature gate is disabled:
```console
vim /var/lib/kubelet/config.yaml
# ...
# featureGates:
# VolumeSnapshotDataSource: true
# ...
vim /etc/kubernetes/manifests/kube-apiserver.yaml
# ...
# - --feature-gates=VolumeSnapshotDataSource=true
# ...
```
* Driver logs (for the iSCSI driver; to get NFS driver logs, substitute `iscsi` with `nfs`)
```console
kubectl logs -f edgefs-iscsi-csi-controller-0 driver
kubectl logs -f $(kubectl get pods | awk '/edgefs-iscsi-csi-node-/ {print $1;exit}') driver
# combine all pods:
kubectl get pods | awk '/edgefs-iscsi-csi-node-/ {system("kubectl logs " $1 " driver &")}'
```
* Show the termination message in case the driver failed to run:
<!-- {% raw %} -->
```console
kubectl get pod edgefs-iscsi-csi-controller-0 -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
```
<!-- {% endraw %} -->
| 50.262755 | 561 | 0.644673 | eng_Latn | 0.64444 |
6f1e6b22e17cdb18b75708ef98da1eebb84eb9f8 | 66 | md | Markdown | README.md | ArthurPMachado/EcmaScript6 | 67207b821606091270cfcbe06d2c981d5c723de5 | [
"MIT"
] | null | null | null | README.md | ArthurPMachado/EcmaScript6 | 67207b821606091270cfcbe06d2c981d5c723de5 | [
"MIT"
] | null | null | null | README.md | ArthurPMachado/EcmaScript6 | 67207b821606091270cfcbe06d2c981d5c723de5 | [
"MIT"
] | null | null | null | # EcmaScript6
Repository for exploring ES6 features
| 22 | 51 | 0.848485 | por_Latn | 0.976649 |
6f1e7cf6301d909aafd06722902aad84a8117c9a | 1,542 | md | Markdown | README.md | AoEiuV020/OpenWrt-RM2100 | c563ae32a36a45db4fa7d89cabe6d0051f3ad20e | [
"MIT"
] | 30 | 2020-05-07T04:27:23.000Z | 2022-02-13T06:15:04.000Z | README.md | AoEiuV020/OpenWrt-RM2100 | c563ae32a36a45db4fa7d89cabe6d0051f3ad20e | [
"MIT"
] | 2 | 2020-08-23T17:42:01.000Z | 2021-03-08T07:19:28.000Z | README.md | AoEiuV020/OpenWrt-RM2100 | c563ae32a36a45db4fa7d89cabe6d0051f3ad20e | [
"MIT"
] | 41 | 2020-05-06T12:50:34.000Z | 2022-03-19T08:34:30.000Z | # OpenWrt for Xiaomi Redmi AC2100 Router
[](https://github.com/harryhpxs/OpenWrt-RM2100/blob/master/LICENSE)
Build OpenWrt for Xiaomi Redmi AC2100 using GitHub Actions
## Cloud build instructions
See the [P3TERX tutorial (in Chinese)](https://p3terx.com/archives/build-openwrt-with-github-actions.html)
## Prebuilt firmware
Go to [Releases](https://github.com/harryhpxs/OpenWrt-RM2100/releases) to download the latest firmware
## What this firmware includes
In addition to the Lean's OpenWrt (LEDE) default configuration, it also includes:
- AdGuard Home - [luci](https://github.com/rufengsuixing/luci-app-adguardhome) @ rufengsuixing
- [OpenClash](https://github.com/vernesong/OpenClash) @ vernesong
- [helloworld](https://github.com/fw876/helloworld) @ fw876
- [luci-theme-argon](https://github.com/jerrykuku/luci-theme-argon) @ jerrykuku
- IPv6 support
## Flashing instructions
See
- [One-click unlock](https://www.right.com.cn/forum/thread-4016985-1-1.html)
- [Flashing OpenWrt](https://www.right.com.cn/forum/thread-4019555-1-1.html)
## Usage report
China Telecom 500M line, pure AP mode, channel 44, 160 MHz enabled, 23 dBm transmit power, MU-MIMO enabled
Measured 5 GHz peak downlink:
- Mate 20 Pro - 360 Mbps
- Honor 30 Pro - 320 Mbps
- Mi 8 - 300 Mbps
## References
- [Lean's OpenWrt](https://github.com/coolsnowwolf/lede)
- [Actions-OpenWrt](https://github.com/P3TERX/Actions-OpenWrt) @ P3TERX
- [OpenWRT-For-Pi](https://github.com/Wygdbb/OpenWRT-For-Pi) @ Wygdbb
- [nanopi-openwrt](https://github.com/klever1988/nanopi-openwrt) @ klever1988
- [NanoPi-R2S](https://github.com/soffchen/NanoPi-R2S) @ soffchen
## License
[MIT](https://github.com/harryhpxs/OpenWrt-RM2100/blob/master/LICENSE) © harryhpxs
| 31.469388 | 170 | 0.743191 | yue_Hant | 0.547815 |
6f1e99746025c671669fa5e17569e74cb8969fe4 | 3,079 | md | Markdown | docs-archive-a/2014/sql-server/install/reporting-services-configuration-manager-f1-help-topics-ssrs-native-mode.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-archive-a/2014/sql-server/install/reporting-services-configuration-manager-f1-help-topics-ssrs-native-mode.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2021-10-11T06:40:46.000Z | 2021-11-25T02:25:44.000Z | docs-archive-a/2014/sql-server/install/reporting-services-configuration-manager-f1-help-topics-ssrs-native-mode.md | redpandabigcat/sql-docs-archive-pr.it-it | 42057907493283d0099bb6f5dc76994d8e9d3b65 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2021-09-29T08:51:43.000Z | 2021-11-23T02:36:18.000Z | ---
title: Reporting Services Configuration Manager F1 Help Topics (SSRS Native Mode) | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: database-engine
ms.topic: conceptual
helpviewer_keywords:
- Reporting Services Configuration tool
ms.assetid: 7b6fb18e-ec39-4661-88e3-977ed64e2c82
author: maggiesMSFT
ms.author: maggies
ms.openlocfilehash: c59a2acf78ed8df34ff78de209b5a20df8ef07e5
ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 08/04/2020
ms.locfileid: "87726471"
---
# <a name="reporting-services-configuration-manager-f1-help-topics-ssrs-native-mode"></a>Reporting Services Configuration Manager F1 Help Topics (SSRS Native Mode)
  This section contains the context-sensitive Help topics for Reporting Services Configuration Manager.
 [!INCLUDE[applies](../../includes/applies-md.md)][!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] Native mode.
## <a name="in-this-section"></a>In This Section
- [Connect to a Report Server Instance](../../../2014/sql-server/install/connect-to-a-native-mode-report-server.md)
- [Report Server Status](../../../2014/sql-server/install/report-server-status-ssrs-native-mode.md)
- [Service Account](../../../2014/sql-server/install/service-account-ssrs-native-mode.md)
- [Web Service URL](../../../2014/sql-server/install/web-service-url-ssrs-native-mode.md)
- [Advanced Multiple Web Site Configuration](../../../2014/sql-server/install/advanced-multiple-web-site-configuration-ssrs-native-mode.md)
- [Database](../../../2014/sql-server/install/database-ssrs-native-mode.md)
- [Change Database Wizard](../../../2014/sql-server/install/change-database-wizard-ssrs-native-mode.md)
- [Change Credentials Wizard](../../../2014/sql-server/install/change-credentials-wizard-ssrs-native-mode.md)
- [Report Manager URL](../../../2014/sql-server/install/report-manager-url-ssrs-native-mode.md)
- [E-mail Settings](../../reporting-services/install-windows/e-mail-settings-reporting-services-native-mode-configuration-manager.md)
- [Execution Account](../../../2014/sql-server/install/execution-account-ssrs-native-mode.md)
- [Encryption Keys](../../../2014/sql-server/install/encryption-keys-ssrs-native-mode.md)
- [Back Up Encryption Key](../../../2014/sql-server/install/backup-encryption-key-ssrs-native-mode.md)
- [Restore Encryption Key](../../../2014/sql-server/install/restore-encryption-key-ssrs-native-mode.md)
- [Scale-out Deployment](../../../2014/sql-server/install/scale-out-deployment-native-mode-report-server.md)
## <a name="see-also"></a>See Also
 [Reporting Services Configuration Manager (Native Mode)](reporting-services-configuration-manager-native-mode.md)
| 49.66129 | 182 | 0.73368 | ita_Latn | 0.279791 |
6f1edf43f573439e4830abc5e6b7fdcf466f75fc | 32 | md | Markdown | Other/README.md | KLYN74R/BOTS | 2255a525e77c75107bae5eed94ab60ab55dd1acd | [
"MIT"
] | 2 | 2022-01-14T09:31:47.000Z | 2022-01-14T14:59:41.000Z | Other/README.md | KLYN74R/BOTS | 2255a525e77c75107bae5eed94ab60ab55dd1acd | [
"MIT"
] | null | null | null | Other/README.md | KLYN74R/BOTS | 2255a525e77c75107bae5eed94ab60ab55dd1acd | [
"MIT"
] | null | null | null | # Custom bots, bots systems etc. | 32 | 32 | 0.75 | eng_Latn | 0.758817 |
6f1ee48816ab307fe7022881b47aa09e48212ca9 | 18,407 | md | Markdown | articles/machine-learning/how-to-setup-authentication.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/how-to-setup-authentication.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/machine-learning/how-to-setup-authentication.md | changeworld/azure-docs.sv-se | 6234acf8ae0166219b27a9daa33f6f62a2ee45ab | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Set up authentication
titleSuffix: Azure Machine Learning
description: Learn how to set up and configure authentication for various resources and workflows in Azure Machine Learning. There are multiple ways to configure and use authentication within the service, ranging from simple UI-based auth for development or testing purposes to full Azure Active Directory service principal authentication.
services: machine-learning
author: trevorbye
ms.author: trbye
ms.reviewer: trbye
ms.service: machine-learning
ms.subservice: core
ms.topic: conceptual
ms.date: 12/17/2019
ms.openlocfilehash: fcaa7a0c44851d6b48b40b01af4c8ec992c330b8
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/28/2020
ms.locfileid: "79283542"
---
# <a name="set-up-authentication-for-azure-machine-learning-resources-and-workflows"></a>Set up authentication for Azure Machine Learning resources and workflows
[!INCLUDE [applies-to-skus](../../includes/aml-applies-to-basic-enterprise-sku.md)]
In this article, you learn how to set up and configure authentication for various resources and workflows in Azure Machine Learning. There are multiple ways to authenticate to the service, ranging from simple UI-based auth for development or testing purposes to full Azure Active Directory service principal authentication. This article also explains the differences in how web service authentication works, and how to authenticate to the Azure Machine Learning REST API.
This how-to shows you how to do the following tasks:
* Use interactive UI authentication for testing/development
* Set up service principal authentication
* Authenticate to your workspace
* Get OAuth2.0 bearer-type tokens for the Azure Machine Learning REST API
* Understand web service authentication
See the [concept article](concept-enterprise-security.md) for a general overview of security and authentication in Azure Machine Learning.
## <a name="prerequisites"></a>Prerequisites
* Create an [Azure Machine Learning workspace](how-to-manage-workspace.md).
* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](concept-azure-machine-learning-architecture.md#compute-instance) with the SDK already installed.
## <a name="interactive-authentication"></a>Interactive authentication
Most examples in the documentation for this service use interactive authentication in Jupyter notebooks as a simple method for testing and demonstration. This is a lightweight way to test what you're building. There are two function calls that will automatically prompt you with a UI-based authentication flow.
Calling the `from_config()` function will issue the prompt.
```python
from azureml.core import Workspace
ws = Workspace.from_config()
```
The `from_config()` function looks for a JSON file containing your workspace connection information. You can also specify the connection details explicitly by using the `Workspace` constructor, which will also prompt for interactive authentication. Both calls are equivalent.
```python
ws = Workspace(subscription_id="your-sub-id",
resource_group="your-resource-group-id",
workspace_name="your-workspace-name"
)
```
If you have access to multiple tenants, you may need to import the class and explicitly define which tenant you are targeting. Calling the constructor of `InteractiveLoginAuthentication` will also prompt you to log in, similar to the calls above.
```python
from azureml.core.authentication import InteractiveLoginAuthentication
interactive_auth = InteractiveLoginAuthentication(tenant_id="your-tenant-id")
```
While useful for testing and learning, interactive authentication will not help you build automated or headless workflows. Setting up service principal authentication is the best approach for automated processes that use the SDK.
## <a name="set-up-service-principal-authentication"></a>Set up service principal authentication
This process is necessary to enable authentication that is decoupled from a specific user login, which allows you to authenticate to the Azure Machine Learning Python SDK in automated workflows. Service principal authentication also lets you [authenticate to the REST API](#azure-machine-learning-rest-api-auth).
To set up service principal authentication, you first create an app registration in Azure Active Directory, and then grant the app role-based access to your ML workspace. The easiest way to complete this setup is through the [Azure Cloud Shell](https://azure.microsoft.com/features/cloud-shell/) in the Azure portal. After you log in to the portal, click the `>_` icon in the top right of the page near your name to open the shell.
If you have not used the cloud shell before in your Azure account, you will need to create a storage account resource for storing any files that are written. In general, this storage account incurs a negligible monthly cost. Additionally, install the machine learning extension if you have not used it previously, with the following command.
```azurecli-interactive
az extension add -n azure-cli-ml
```
> [!NOTE]
> You must be an admin on the subscription to perform the following steps.
Next, run the following command to create a service principal. Give it a name, in this case **ml-auth**.
```azurecli-interactive
az ad sp create-for-rbac --sdk-auth --name ml-auth
```
The output will be JSON similar to the following. Take note of the `clientId`, `clientSecret`, and `tenantId` fields, as you will need them for other steps in this article.
```json
{
"clientId": "your-client-id",
"clientSecret": "your-client-secret",
"subscriptionId": "your-sub-id",
"tenantId": "your-tenant-id",
"activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
"resourceManagerEndpointUrl": "https://management.azure.com",
"activeDirectoryGraphResourceId": "https://graph.windows.net",
"sqlManagementEndpointUrl": "https://management.core.windows.net:5555",
"galleryEndpointUrl": "https://gallery.azure.com/",
"managementEndpointUrl": "https://management.core.windows.net"
}
```
Next, run the following command to get the details of the service principal you just created, using the `clientId` value from above as the input to the `--id` parameter.
```azurecli-interactive
az ad sp show --id your-client-id
```
The following is a simplified example of the JSON output from the command. Take note of the `objectId` field, as you will need its value for the next step.
```json
{
"accountEnabled": "True",
"addIns": [],
"appDisplayName": "ml-auth",
...
...
...
"objectId": "your-sp-object-id",
"objectType": "ServicePrincipal"
}
```
Next, use the following command to assign your service principal access to your machine learning workspace. You will need your workspace name and its resource group name for the `-w` and `-g` parameters, respectively. For the `--user` parameter, use the `objectId` value from the previous step. The `--role` parameter lets you set the access role for the service principal; in general you will use either **Owner** or **Contributor**. Both have write access to existing resources like compute clusters and datastores, but only **Owner** can provision these resources.
```azurecli-interactive
az ml workspace share -w your-workspace-name -g your-resource-group-name --user your-sp-object-id --role owner
```
This call does not produce any output, but you now have service principal authentication set up for your workspace.
## <a name="authenticate-to-your-workspace"></a>Authenticate to your workspace
Now that you have service principal authentication enabled, you can authenticate to your workspace in the SDK without physically logging in as a user. Use the `ServicePrincipalAuthentication` class constructor, and use the values you got from the previous steps as the parameters. The `tenant_id` parameter maps to `tenantId` from above, `service_principal_id` maps to `clientId`, and `service_principal_password` maps to `clientSecret`.
```python
from azureml.core.authentication import ServicePrincipalAuthentication
sp = ServicePrincipalAuthentication(tenant_id="your-tenant-id", # tenantID
service_principal_id="your-client-id", # clientId
service_principal_password="your-client-secret") # clientSecret
```
The `sp` variable now holds an authentication object that you use directly in the SDK. In general, it is a good idea to store the ids/secrets used above in environment variables, as shown in the following code.
```python
import os
sp = ServicePrincipalAuthentication(tenant_id=os.environ['AML_TENANT_ID'],
service_principal_id=os.environ['AML_PRINCIPAL_ID'],
service_principal_password=os.environ['AML_PRINCIPAL_PASS'])
```
For automated workflows that run in Python and primarily use the SDK, you can use this object as-is in most cases for your authentication. The following code authenticates to your workspace using the auth object you just created.
```python
from azureml.core import Workspace
ws = Workspace.get(name="ml-example",
auth=sp,
subscription_id="your-sub-id")
ws.get_details()
```
## <a name="azure-machine-learning-rest-api-auth"></a>Azure Machine Learning REST API auth
The service principal created in the steps above can also be used to authenticate to the Azure Machine Learning [REST API](https://docs.microsoft.com/rest/api/azureml/). You use the Azure Active Directory [client credentials grant flow](https://docs.microsoft.com/azure/active-directory/develop/v1-oauth2-client-creds-grant-flow), which allows service-to-service calls for headless authentication in automated workflows. The examples are implemented with the [ADAL library](https://docs.microsoft.com/azure/active-directory/develop/active-directory-authentication-libraries) in both Python and Node.js, but you can also use any open-source library that supports OpenID Connect 1.0.
> [!NOTE]
> MSAL.js is a newer library than ADAL, but you cannot do service-to-service authentication using client credentials with MSAL.js, since it is primarily a client-side library intended for interactive/UI authentication tied to a specific user. We recommend using ADAL as shown below to build automated workflows with the REST API.
### <a name="nodejs"></a>Node.js
Use the following steps to generate an auth token using Node.js. In your environment, run `npm install adal-node`. Then, use your `tenantId`, `clientId`, and `clientSecret` from the service principal you created in the steps above as values for the matching variables in the following script.
```javascript
const adal = require('adal-node').AuthenticationContext;
const authorityHostUrl = 'https://login.microsoftonline.com/';
const tenantId = 'your-tenant-id';
const authorityUrl = authorityHostUrl + tenantId;
const clientId = 'your-client-id';
const clientSecret = 'your-client-secret';
const resource = 'https://management.azure.com/';
const context = new adal(authorityUrl);
context.acquireTokenWithClientCredentials(
resource,
clientId,
clientSecret,
(err, tokenResponse) => {
if (err) {
console.log(`Token generation failed due to ${err}`);
} else {
console.dir(tokenResponse, { depth: null, colors: true });
}
}
);
```
The `tokenResponse` variable is an object that includes the token and associated metadata, such as the expiration time. Tokens are valid for 1 hour, and can be refreshed by running the same call again to retrieve a new token. The following is a sample response.
```javascript
{
tokenType: 'Bearer',
expiresIn: 3599,
expiresOn: 2019-12-17T19:15:56.326Z,
resource: 'https://management.azure.com/',
accessToken: "random-oauth-token",
isMRRT: true,
_clientId: 'your-client-id',
_authority: 'https://login.microsoftonline.com/your-tenant-id'
}
```
Use the `accessToken` property to fetch the auth token. See the [REST API documentation](https://github.com/microsoft/MLOps/tree/master/examples/AzureML-REST-API) for more on how to use the token to make API calls.
### <a name="python"></a>Python
Use the following steps to generate an auth token using Python. In your environment, run `pip install adal`. Then, use your `tenantId`, `clientId`, and `clientSecret` from the service principal you created in the steps above as values for the appropriate variables in the following script.
```python
from adal import AuthenticationContext
client_id = "your-client-id"
client_secret = "your-client-secret"
resource_url = "https://login.microsoftonline.com"
tenant_id = "your-tenant-id"
authority = "{}/{}".format(resource_url, tenant_id)
auth_context = AuthenticationContext(authority)
token_response = auth_context.acquire_token_with_client_credentials("https://management.azure.com/", client_id, client_secret)
print(token_response)
```
The `token_response` variable is a dictionary that includes the token and associated metadata, such as the expiration time. Tokens are valid for 1 hour, and can be refreshed by running the same call again to retrieve a new token. The following is a sample response.
```python
{
'tokenType': 'Bearer',
'expiresIn': 3599,
'expiresOn': '2019-12-17 19:47:15.150205',
'resource': 'https://management.azure.com/',
'accessToken': 'random-oauth-token',
'isMRRT': True,
'_clientId': 'your-client-id',
'_authority': 'https://login.microsoftonline.com/your-tenant-id'
}
```
Use `token_response["accessToken"]` to fetch the auth token. See the [REST API documentation](https://github.com/microsoft/MLOps/tree/master/examples/AzureML-REST-API) for more on how to use the token to make API calls.
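When calling the REST API, the token is sent as an OAuth 2.0 bearer token in the `Authorization` header. A minimal sketch of that header construction follows; the helper name is illustrative, and the actual endpoint paths are defined in the REST API documentation linked above.

```python
def build_auth_header(token_response: dict) -> dict:
    """Build HTTP headers carrying the OAuth 2.0 bearer token for REST calls."""
    return {
        "Authorization": "Bearer {}".format(token_response["accessToken"]),
        "Content-Type": "application/json",
    }

# Using the sample response shape shown above:
headers = build_auth_header({"accessToken": "random-oauth-token"})
print(headers["Authorization"])
```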
## <a name="web-service-authentication"></a>Web service authentication
Web services in Azure Machine Learning use a different authentication pattern than what is described above. The easiest way to authenticate to deployed web services is to use **key-based authentication**, which generates static bearer-type authentication keys that do not need to be refreshed. If you only need to authenticate to a deployed web service, you do not need to set up service principal authentication as shown above.
Web services deployed on Azure Kubernetes Service have key-based auth *enabled* by default. Services deployed to Azure Container Instances have key-based auth *disabled* by default, but you can enable it by setting `auth_enabled=True` when creating the ACI web service. The following is an example of creating an ACI deployment configuration with key-based auth enabled.
```python
from azureml.core.webservice import AciWebservice
aci_config = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 1,
auth_enabled=True)
```
Then you can use the custom ACI configuration in deployment using the `Model` class.
```python
from azureml.core.model import Model, InferenceConfig
inference_config = InferenceConfig(entry_script="score.py",
environment=myenv)
aci_service = Model.deploy(workspace=ws,
name="aci_service_sample",
models=[model],
inference_config=inference_config,
deployment_config=aci_config)
aci_service.wait_for_deployment(True)
```
To retrieve the auth keys, use `aci_service.get_keys()`. To regenerate a key, use the `regen_key()` function and pass either **Primary** or **Secondary**.
```python
aci_service.regen_key("Primary")
# or
aci_service.regen_key("Secondary")
```
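To illustrate how a key is presented to a key-authenticated service, the sketch below assembles a scoring request. The URI and key values are placeholders; in a real call you would POST to `service.scoring_uri` with a key returned by `get_keys()`:

```python
import json

# Sketch: assembling a request to a key-authenticated scoring endpoint.
# The URI and key below are placeholders, not real values.
def build_scoring_request(scoring_uri, key, input_data):
    """Return the (uri, headers, body) triple for a scoring POST."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {key}",  # the primary or secondary key
    }
    body = json.dumps({"data": input_data})
    return scoring_uri, headers, body

uri, headers, body = build_scoring_request(
    "http://example.invalid/score", "primary-key-value", [[1, 2, 3]])
```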
Web services also support token-based authentication, but only for Azure Kubernetes Service deployments. For more information on authenticating, see [how to consume web services](how-to-consume-web-service.md).
### <a name="token-based-web-service-authentication"></a>Token-based web service authentication
When you enable token authentication for a web service, users must present an Azure Machine Learning JSON Web Token to the web service to access it. The token expires after a specified time frame and must be refreshed to continue making calls.
* Token authentication is **disabled by default** when you deploy to Azure Kubernetes Service.
* Token authentication is **not supported** when you deploy to Azure Container Instances.
To control token authentication, use the `token_auth_enabled` parameter when you create or update a deployment.
If token authentication is enabled, you can use the `get_token` method to retrieve a JSON Web Token (JWT) and that token's expiration time:
```python
token, refresh_by = service.get_token()
print(token)
```
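A common pattern is to cache the token and fetch a new one only once `refresh_by` has passed. The sketch below stubs the service object so the refresh logic can run anywhere; note this is an assumption-laden sketch — the real SDK returns `refresh_by` as a datetime, while the stub uses a plain timestamp for simplicity:

```python
import time

class StubService:
    """Stands in for an AKS webservice; get_token() returns (token, refresh_by)."""
    def __init__(self):
        self._n = 0

    def get_token(self):
        self._n += 1
        # refresh_by modeled as a Unix timestamp one hour from now
        return f"token-{self._n}", time.time() + 3600

def get_fresh_token(service, cached=None, now=None):
    """Return a cached (token, refresh_by) pair if still valid, else fetch anew."""
    now = time.time() if now is None else now
    if cached is not None and now < cached[1]:
        return cached
    return service.get_token()

service = StubService()
first = get_fresh_token(service)                 # fetches token-1
again = get_fresh_token(service, cached=first)   # still valid, reuses token-1
```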
> [!IMPORTANT]
> You will need to request a new token after the token's `refresh_by` time. If you need to refresh tokens outside the Python SDK, one option is to use the REST API with service principal authentication to periodically make the `service.get_token()` call, as discussed previously.
>
> We strongly recommend that you create your Azure Machine Learning workspace in the same region as your Azure Kubernetes Service cluster.
>
> To authenticate with a token, the web service makes a call to the region in which your Azure Machine Learning workspace was created. If your workspace's region is unavailable, you will not be able to fetch a token for your web service, even if your cluster is in a different region than your workspace. The result is that Azure AD authentication is unavailable until the workspace's region is available again.
>
> The greater the distance between your cluster's region and your workspace's region, the longer it takes to fetch a token.
## <a name="next-steps"></a>Next steps
* [Train and deploy an image classification model](tutorial-train-models-with-aml.md).
* [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md).
| 56.29052 | 708 | 0.774216 | swe_Latn | 0.998098 |
6f1f1cce5f5fc704232447ccecd18a99d2a4551e | 328 | markdown | Markdown | _laporan-keuangan-cms/kerjabilitas-C4-Desta-gaji-staf-program-area-makasar-oktober-2016-a-n-yusnaeni.markdown | hixio-mh/website-4 | 943b8ecdb1f7e507abbb404051afbd40d0a855d3 | [
"MIT"
] | 4 | 2018-03-23T08:55:53.000Z | 2018-03-24T15:10:22.000Z | _laporan-keuangan-cms/kerjabilitas-C4-Desta-gaji-staf-program-area-makasar-oktober-2016-a-n-yusnaeni.markdown | hixio-mh/website-4 | 943b8ecdb1f7e507abbb404051afbd40d0a855d3 | [
"MIT"
] | 3 | 2021-12-20T17:56:32.000Z | 2021-12-20T17:59:59.000Z | _laporan-keuangan-cms/kerjabilitas-C4-Desta-gaji-staf-program-area-makasar-oktober-2016-a-n-yusnaeni.markdown | hixio-mh/website-4 | 943b8ecdb1f7e507abbb404051afbd40d0a855d3 | [
"MIT"
] | 5 | 2020-01-01T09:54:05.000Z | 2021-11-23T15:49:11.000Z | ---
proyek: kerjabilitas
kode: C4
anggaran: Staf Program Area
nama: Desta A
title: Gaji Staf Program Area - Makasar Oktober 2016 a.n Yusnaeni
date: 2016-10-27
biaya: 3500000
nota: "https://wiki.ciptamedia.org/wiki/File:Oktober_27_2016_kerjabilitas_C4_staf_area_makassar_neni.jpg"
tanggalpelunasan: 2016-10-27
notapelunasan:
---
| 25.230769 | 105 | 0.79878 | ind_Latn | 0.352314 |
6f202b05dd19d606898e2b3a4d1444da7777462f | 1,293 | md | Markdown | desktop-src/Tapi/code-snippets.md | crushonme/win32 | f5099e1e3e455bb162771d80b0ba762ee5c974ec | [
"CC-BY-4.0",
"MIT"
] | 3 | 2020-04-24T13:02:42.000Z | 2021-07-17T15:32:03.000Z | desktop-src/Tapi/code-snippets.md | crushonme/win32 | f5099e1e3e455bb162771d80b0ba762ee5c974ec | [
"CC-BY-4.0",
"MIT"
] | null | null | null | desktop-src/Tapi/code-snippets.md | crushonme/win32 | f5099e1e3e455bb162771d80b0ba762ee5c974ec | [
"CC-BY-4.0",
"MIT"
] | 1 | 2022-03-09T23:50:05.000Z | 2022-03-09T23:50:05.000Z | ---
Description: The following code examples briefly illustrate some basic operations. The examples are written in C++. The samples section of the Platform Software Development Kit (SDK) contains full programs that can be compiled.
ms.assetid: 78a690d2-ccb9-48e0-b137-fb102017fa6a
title: Code Examples
ms.topic: article
ms.date: 05/31/2018
---
# Code Examples
The following code examples briefly illustrate some basic operations. The examples are written in C++. The samples section of the Platform Software Development Kit (SDK) contains full programs that can be compiled.
- [Initialize TAPI](https://msdn.microsoft.com/en-us/library/ms728171(v=VS.85).aspx)
- [Select an Address](https://msdn.microsoft.com/en-us/library/ms734192(v=VS.85).aspx)
- [Register Events](https://msdn.microsoft.com/en-us/library/ms734177(v=VS.85).aspx)
- [Select a Terminal](https://msdn.microsoft.com/en-us/library/ms734193(v=VS.85).aspx)
- [Make a Call](https://msdn.microsoft.com/en-us/library/ms733298(v=VS.85).aspx)
- [Receive a Call](https://msdn.microsoft.com/en-us/library/ms734172(v=VS.85).aspx)
- [Create a Simple Conference](https://msdn.microsoft.com/en-us/library/ms726950(v=VS.85).aspx)
- [Transfer a Call](https://msdn.microsoft.com/en-us/library/ms734809(v=VS.85).aspx)
| 46.178571 | 227 | 0.748647 | eng_Latn | 0.517976 |
6f204cf09e4350337d98901e04b001d69d04a2e1 | 5,960 | md | Markdown | translations/ru-RU/content/codespaces/developing-in-codespaces/using-codespaces-in-visual-studio-code.md | Varshans2/docs | b9dc0417694f59b81f2f597bf72bdf116d05f88a | [
"CC-BY-4.0",
"MIT"
] | 6 | 2021-02-17T03:31:27.000Z | 2021-09-11T04:17:57.000Z | translations/ru-RU/content/codespaces/developing-in-codespaces/using-codespaces-in-visual-studio-code.md | Honeyk25/docs | de643512095e283cb3a56243c1a7adcf680a1d08 | [
"CC-BY-4.0",
"MIT"
] | 144 | 2021-10-21T04:41:09.000Z | 2022-03-30T09:55:16.000Z | translations/ru-RU/content/codespaces/developing-in-codespaces/using-codespaces-in-visual-studio-code.md | Honeyk25/docs | de643512095e283cb3a56243c1a7adcf680a1d08 | [
"CC-BY-4.0",
"MIT"
] | 3 | 2022-03-08T03:54:20.000Z | 2022-03-15T06:28:15.000Z | ---
title: Using Codespaces in Visual Studio Code
intro: 'You can develop in your codespace directly in {% data variables.product.prodname_vscode %} by connecting the {% data variables.product.prodname_github_codespaces %} extension with your account on {% data variables.product.product_name %}.'
product: '{% data reusables.gated-features.codespaces %}'
redirect_from:
- /github/developing-online-with-codespaces/using-codespaces-in-visual-studio-code
- /github/developing-online-with-codespaces/connecting-to-your-codespace-from-visual-studio-code
- /github/developing-online-with-codespaces/using-codespaces-in-visual-studio
versions:
fpt: '*'
type: how_to
topics:
- Codespaces
- Visual Studio Code
- Developer
shortTitle: Visual Studio Code
---
## Prerequisites
To develop in a codespace directly in {% data variables.product.prodname_vscode %}, you must sign into the {% data variables.product.prodname_github_codespaces %} extension. The {% data variables.product.prodname_github_codespaces %} extension requires {% data variables.product.prodname_vscode %} October 2020 Release 1.51 or later.
Use the {% data variables.product.prodname_vs %} Marketplace to install the [{% data variables.product.prodname_github_codespaces %}](https://marketplace.visualstudio.com/items?itemName=GitHub.codespaces) extension. For more information, see [Extension Marketplace](https://code.visualstudio.com/docs/editor/extension-gallery) in the {% data variables.product.prodname_vscode %} documentation.
{% mac %}
{% data reusables.codespaces.click-remote-explorer-icon-vscode %}
2. Click **Sign in to view {% data variables.product.prodname_dotcom %}...**. 
3. To authorize {% data variables.product.prodname_vscode %} to access your account on {% data variables.product.product_name %}, click **Allow**.
4. Sign in to {% data variables.product.product_name %} to approve the extension.
{% endmac %}
{% windows %}
{% data reusables.codespaces.click-remote-explorer-icon-vscode %}
2. Use the "REMOTE EXPLORER" drop-down, then click **{% data variables.product.prodname_github_codespaces %}**. 
3. Click **Sign in to view {% data variables.product.prodname_codespaces %}...**. 
4. To authorize {% data variables.product.prodname_vscode %} to access your account on {% data variables.product.product_name %}, click **Allow**.
5. Sign in to {% data variables.product.product_name %} to approve the extension.
{% endwindows %}
## Creating a codespace in {% data variables.product.prodname_vscode %}
After you connect your {% data variables.product.product_name %} account to the {% data variables.product.prodname_github_codespaces %} extension, you can develop in a codespace that you created on {% data variables.product.product_name %} or in {% data variables.product.prodname_vscode %}.
{% data reusables.codespaces.click-remote-explorer-icon-vscode %}
2. Click the Add icon, then click **Create New Codespace**. 
3. Type, then click the repository's name you want to develop in. 
4. Click the branch you want to develop on. 
5. Click the machine type you want to develop in. 
## Opening a codespace in {% data variables.product.prodname_vscode %}
{% data reusables.codespaces.click-remote-explorer-icon-vscode %}
2. Under "Codespaces", click the codespace you want to develop in.
3. Click the Connect to Codespace icon. 
## Changing the machine type in {% data variables.product.prodname_vscode %}
{% data reusables.codespaces.codespaces-machine-types %}
You can change the machine type of your codespace at any time.
1. In {% data variables.product.prodname_vscode %}, open the Command Palette (`shift command P` / `shift control P`).
2. Search for and select "Codespaces: Change Machine Type." 
3. Click the codespace that you want to change. 
4. Choose the machine type you want to use.
If the codespace is currently running, a message is displayed asking if you would like to restart and reconnect to your codespace now. Click **Yes** if you want to change the machine type used for this codespace immediately. If you click **No**, or if the codespace is not currently running, the change will take effect the next time the codespace restarts.
## Deleting a codespace in {% data variables.product.prodname_vscode %}
1. Under "Codespaces", right-click the codespace you want to delete.
2. In the drop-down menu, click **Delete Codespace**. 
| 75.443038 | 393 | 0.772315 | eng_Latn | 0.809538 |
6f205e0c4bb2575440028818c90da2f63dde717a | 877 | md | Markdown | docs/framework/docker/index.md | GeiGeiLa/docs.zh-tw | 88f98d80c8afc37c430f79fb76c5e14f11dce957 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/docker/index.md | GeiGeiLa/docs.zh-tw | 88f98d80c8afc37c430f79fb76c5e14f11dce957 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/framework/docker/index.md | GeiGeiLa/docs.zh-tw | 88f98d80c8afc37c430f79fb76c5e14f11dce957 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Docker on .NET Framework
description: Learn how to deploy .NET Framework applications with Docker using Windows containers.
author: BillWagner
ms.author: wiwagn
ms.date: 09/28/2016
ms.assetid: a27b2ae4-154e-4b2b-b221-0c4c05185274
ms.openlocfilehash: 4cb7ef9346a452c56d056bda63c8428a6502991f
ms.sourcegitcommit: c93fd5139f9efcf6db514e3474301738a6d1d649
ms.translationtype: HT
ms.contentlocale: zh-TW
ms.lasthandoff: 10/27/2018
ms.locfileid: "50184855"
---
# <a name="deploying-net-framework-applications-with-docker"></a>Deploying .NET Framework applications with Docker
You can deploy .NET Framework applications with Docker using Windows containers. Learn about the requirements for using [Windows containers](/virtualization/windowscontainers/about/) and how to [get started with Docker for Windows](https://docs.docker.com/docker-for-windows/).
To get started, you can [run a console application with Docker](console.md).
To run a web application in Docker, see [ASP.NET MVC apps in Docker](/aspnet/mvc/overview/deployment/docker-aspnetmvc).
| 41.761905 | 193 | 0.799316 | yue_Hant | 0.905234 |
6f209609f5458d9312262b1e4d001bdb098f63e6 | 1,908 | md | Markdown | _posts/2017-07-19-Lafemme-Limited-Edition-Style-21529.md | queenosestyle/queenosestyle.github.io | 7b095a591cefe4e42cdeb7de71cfa87293a95b5c | [
"MIT"
] | null | null | null | _posts/2017-07-19-Lafemme-Limited-Edition-Style-21529.md | queenosestyle/queenosestyle.github.io | 7b095a591cefe4e42cdeb7de71cfa87293a95b5c | [
"MIT"
] | null | null | null | _posts/2017-07-19-Lafemme-Limited-Edition-Style-21529.md | queenosestyle/queenosestyle.github.io | 7b095a591cefe4e42cdeb7de71cfa87293a95b5c | [
"MIT"
] | null | null | null | ---
layout: post
date: 2017-07-19
title: "Lafemme Limited Edition Style 21529"
category: Lafemme
tags: [Lafemme]
---
### Lafemme Limited Edition Style 21529
Just **$279.99**
###
<table><tr><td>BRANDS</td><td>Lafemme</td></tr></table>
<a href="https://www.readybrides.com/en/lafemme/76178-lafemme-limited-edition-style-21529.html"><img src="//img.readybrides.com/181317/lafemme-limited-edition-style-21529.jpg" alt="Lafemme Limited Edition Style 21529" style="width:100%;" /></a>
<!-- break --><a href="https://www.readybrides.com/en/lafemme/76178-lafemme-limited-edition-style-21529.html"><img src="//img.readybrides.com/181318/lafemme-limited-edition-style-21529.jpg" alt="Lafemme Limited Edition Style 21529" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/76178-lafemme-limited-edition-style-21529.html"><img src="//img.readybrides.com/181319/lafemme-limited-edition-style-21529.jpg" alt="Lafemme Limited Edition Style 21529" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/76178-lafemme-limited-edition-style-21529.html"><img src="//img.readybrides.com/181320/lafemme-limited-edition-style-21529.jpg" alt="Lafemme Limited Edition Style 21529" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/76178-lafemme-limited-edition-style-21529.html"><img src="//img.readybrides.com/181321/lafemme-limited-edition-style-21529.jpg" alt="Lafemme Limited Edition Style 21529" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/76178-lafemme-limited-edition-style-21529.html"><img src="//img.readybrides.com/181316/lafemme-limited-edition-style-21529.jpg" alt="Lafemme Limited Edition Style 21529" style="width:100%;" /></a>
Buy it: [https://www.readybrides.com/en/lafemme/76178-lafemme-limited-edition-style-21529.html](https://www.readybrides.com/en/lafemme/76178-lafemme-limited-edition-style-21529.html)
| 95.4 | 258 | 0.751048 | yue_Hant | 0.587211 |
6f20992b65dc4a2fe3c7d0887fb7d6e1001ddabf | 1,456 | md | Markdown | README.md | daguiheso/tvmaze-dh | 119b166744a620103f730a3432fc9df07cede19c | [
"Unlicense",
"MIT"
] | null | null | null | README.md | daguiheso/tvmaze-dh | 119b166744a620103f730a3432fc9df07cede19c | [
"Unlicense",
"MIT"
] | null | null | null | README.md | daguiheso/tvmaze-dh | 119b166744a620103f730a3432fc9df07cede19c | [
"Unlicense",
"MIT"
] | null | null | null | #tvmaze-dh
## Install
```
$ npm install tvmaze-dh --save
```
## Usage
``` js
var tvmaze = require('tvmaze-dh')
var client = tvmaze.createClient()
client.shows(function (err, shows) {
// do something shows...
})
client.search('lost', function (err, shows) {
// do something shows...
})
client.show(2473, function (err, show) {
// do something with show
})
```
## License MIT
Copyright (c) 2015 - Daniel Hernández
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
| 26.472727 | 77 | 0.754808 | yue_Hant | 0.374741 |
6f227dc17b223539b6acbefaee25a4bcd3c55857 | 1,258 | md | Markdown | docs/data/oledb/cutlprops-setpropvalue.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | 14 | 2018-01-28T18:10:55.000Z | 2021-11-16T13:21:18.000Z | docs/data/oledb/cutlprops-setpropvalue.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/data/oledb/cutlprops-setpropvalue.md | asklar/cpp-docs | c5e30ee9c63ab4d88b4853acfb6f084cdddb171f | [
"CC-BY-4.0",
"MIT"
] | 2 | 2018-11-01T12:33:08.000Z | 2021-11-16T13:21:19.000Z | ---
title: "CUtlProps::SetPropValue | Microsoft Docs"
ms.custom: ""
ms.date: "11/04/2016"
ms.reviewer: ""
ms.suite: ""
ms.technology: ["cpp-windows"]
ms.tgt_pltfrm: ""
ms.topic: "article"
f1_keywords: ["SetPropValue", "ATL::CUtlProps<T>::SetPropValue", "ATL.CUtlProps<T>.SetPropValue", "ATL.CUtlProps.SetPropValue", "CUtlProps::SetPropValue", "CUtlProps<T>::SetPropValue", "CUtlProps.SetPropValue", "CUtlProps<T>.SetPropValue", "ATL::CUtlProps::SetPropValue"]
dev_langs: ["C++"]
helpviewer_keywords: ["SetPropValue method"]
ms.assetid: 69a703c0-f640-4ca3-8850-0c4e75d52429
caps.latest.revision: 8
author: "mikeblome"
ms.author: "mblome"
manager: "ghogen"
---
# CUtlProps::SetPropValue
Sets a property in a property set.
## Syntax
```
HRESULT SetPropValue(
const GUID* pguidPropSet,
DBPROPID dwPropId,
VARIANT* pvValue
);
```
#### Parameters
`pguidPropSet`
[in] The GUID for the PropSet.
`dwPropId`
[in] The property index.
`pvValue`
[in] A pointer to a variant that contains the new property value.
## Return Value
 A failure `HRESULT` on failure, or `S_OK` if successful.
## Requirements
**Header:** atldb.h
## See Also
[CUtlProps Class](../../data/oledb/cutlprops-class.md) | 25.16 | 271 | 0.675676 | yue_Hant | 0.523636 |
6f228ed5b0ddd8cbe0eb32717e97c94d3db59e62 | 190 | md | Markdown | README.md | Fleeg/fleeg-terms-policy | 0cda78b3b94a5210856e41743c77efa54b787626 | [
"MIT"
] | null | null | null | README.md | Fleeg/fleeg-terms-policy | 0cda78b3b94a5210856e41743c77efa54b787626 | [
"MIT"
] | null | null | null | README.md | Fleeg/fleeg-terms-policy | 0cda78b3b94a5210856e41743c77efa54b787626 | [
"MIT"
] | null | null | null | # Fleeg Terms and Policy
This repository contains a fully collection of our policies and terms. It is designed to represent a historical archive of changes for the purpose of transparency.
| 47.5 | 163 | 0.815789 | eng_Latn | 0.999945 |
6f236dfec18b0ad6de86ceac77f126a6ee1084fb | 88 | md | Markdown | backend/README.md | 1512495/CNM_CK | 7a1fc8e83c42bd1950053b31a4fd463348772aac | [
"MIT"
] | null | null | null | backend/README.md | 1512495/CNM_CK | 7a1fc8e83c42bd1950053b31a4fd463348772aac | [
"MIT"
] | null | null | null | backend/README.md | 1512495/CNM_CK | 7a1fc8e83c42bd1950053b31a4fd463348772aac | [
"MIT"
] | null | null | null | # Api nodejs example
# Install package
Run `npm install`
# Run api
Run `npm run start`
| 12.571429 | 20 | 0.715909 | eng_Latn | 0.27246 |
6f24113fe707d363d34bb690be9d7046db83e349 | 3,974 | md | Markdown | _posts/2021-10-23-eu-studie-ekonomi.md | opensourcesweden/opensourcesweden.github.io | 54cd7738ec3da67ea78a880bb19aff3c2df14d02 | [
"MIT"
] | 1 | 2022-02-24T12:03:07.000Z | 2022-02-24T12:03:07.000Z | _posts/2021-10-23-eu-studie-ekonomi.md | opensourcesweden/opensourcesweden.github.io | 54cd7738ec3da67ea78a880bb19aff3c2df14d02 | [
"MIT"
] | null | null | null | _posts/2021-10-23-eu-studie-ekonomi.md | opensourcesweden/opensourcesweden.github.io | 54cd7738ec3da67ea78a880bb19aff3c2df14d02 | [
"MIT"
] | null | null | null | ---
layout: post
title: EU study: Open source will strengthen the EU economy and increase digital sovereignty
subtitle: A newly published EU study shows how important open source development is to the European economy.
tags: [artiklar]
---
### EU study: Open source will strengthen the EU economy and increase digital sovereignty
A newly published EU study shows how important open source development is to the European economy.
The study shows that if contributions to open source development within the EU increased by 10%, it would generate 0.4-0.6% of the EU's GDP. That corresponds to roughly SEK 1,000 billion.
More information about the study is available here: https://openforumeurope.org/publication-of-the-european-commissions-study-on-the-impact-of-open-source/
Below is the press release sent out by the EU.
### PRESS RELEASE
Open source will boost economy and increase digital autonomy, EU study says:
The study on the impact of Open Source Software and Hardware for the European Commission found that open source software and hardware are key for the region's digital transformation and can be a major boost to the EU’s GDP. BRUSSELS, 6/09/2021 Today, the European Commission published a study on the impact of open source software (OSS and open source hardware (OSH on the European economy, conducted by Fraunhofer ISI and OpenForum Europe. Full report is available here. The study estimates that open source software contributes between €65 to €95 billion to the European Union’s GDP and promises significant growth opportunities for the region’s digital economy. To achieve that, the EU should actively engage in a transition towards more openness in its political and investment culture. Sachiko Muto, the CEO of OpenForum Europe: “Open source offers a greenfield advantage for policymakers and Europe has the chance to lead.”
The report recommends the EU to pursue a dedicated open source industrial policy and to include it in its major policy frameworks, such as the European Green Deal and the AI Act. It also recommends setting up a European network of governmental units dedicated to accelerating the use of open technologies, providing substantial funding to open source support mechanisms and projects, e.g. through the flagship Horizon Europe program with a total budget of €95.5 billion for 2021-2027, and following the direction of open innovation in the bloc’s quest for digital autonomy. To put the results into context, the economic value of open source software in the EU is equivalent to both air and water transport combined. EU governments and companies have already noticed the potential of open source by investing over €1 billion in open source development in 2018 alone. The data predicts that if open source contributions increased by 10% in the EU, they would generate an additional 0.4% to 0.6% (around €100 billion) to the bloc’s GDP. To reap these benefits, the researchers point to a need for a profound culture switch and significant investments in open technologies. Several Member State governments and EU institutions have already taken their first steps in this direction, and now the study equips policymakers with necessary evidence to scale up their efforts for the benefit of the EU economy and citizens.
#### About OFE
OpenForum Europe (OFE) is a not-for-profit, Brussels-based independent think tank which explains the merits of openness in computing to policy makers and communities across Europe. Originally launched in 2002, OFE covers topics such as: Open Source, Open Standards, Digital Government, public procurement, Intellectual Property, cloud computing and Internet policy. OFE also hosts an independent global network of OpenForum Academy Fellows and works closely with the European Commission, the European Parliament, national and local governments, both directly and via its national partners.
Contact:
Paula Grzegorzewska
Senior Policy Advisor OpenForum Europe paula@openforumeurope.org mob +32 0483 62 89 09
| 124.1875 | 1,408 | 0.809763 | eng_Latn | 0.984322 |
6f248af498668162e49405185013177262c55975 | 5,692 | md | Markdown | README.md | lucidsoftware/styledocco | b89373cb195ca5f7e03ea31fb5799daeac9c272d | [
"MIT"
] | null | null | null | README.md | lucidsoftware/styledocco | b89373cb195ca5f7e03ea31fb5799daeac9c272d | [
"MIT"
] | null | null | null | README.md | lucidsoftware/styledocco | b89373cb195ca5f7e03ea31fb5799daeac9c272d | [
"MIT"
] | null | null | null | This is a fork of the [original StyleDocco](https://github.com/jacobrask/styledocco), with some additional features (for [Lucid Software](https://www.golucid.co/)):
* Ability to exclude files/folders using `--exclude <RegExp as string>` (optional)
* Ability to isolate relevant CSS in the previews (rather than lumping all compiled CSS together for all the previews) with the optional `--isolate`
StyleDocco
==========
StyleDocco generates documentation and style guide documents from your stylesheets.
Stylesheet comments will be parsed through [Markdown](http://en.wikipedia.org/wiki/Markdown) and displayed in a generated HTML document. You can write HTML code prefixed with 4 spaces or between [code fences](http://github.github.com/github-flavored-markdown/) (<code>```</code>) in your comments, and StyleDocco will show a preview with the styles applied and display the example HTML code.
The previews are rendered in resizable iframes to make it easy to demonstrate responsive designs at different viewport sizes.
Suggestions, feature requests and bug reports are welcome either at [GitHub](https://github.com/jacobrask/styledocco/issues) or on Twitter ([@jacobrask](https://twitter.com/jacobrask)).
Installation
------------
StyleDocco requires [Node.js](http://nodejs.org). After installing Node.js, run `npm install -fg styledocco` or clone this repository and run `./bin/styledocco`.
StyleDocco is free and open source software, released under the [MIT license](https://raw.github.com/jacobrask/styledocco/master/LICENSE).
Usage
=====
`styledocco [options] [STYLESHEET(S)]`
Options
-------
* `--name`, `-n` Name of the project
* `--out`, `-o` Output directory *(default: "docs")*
* `--preprocessor` Custom preprocessor command. *(optional)* (ex: `--preprocessor "~/bin/lessc"`)
* `--include` Include specified CSS and/or JavaScript files in the previews. *(optional)* (ex: `--include mysite.css --include app.js`)
* `--verbose` Show log messages when generating the documentation. *(default: false)*
* Stylesheet (or directory of stylesheets) to process.
Usage examples
--------------
Generate documentation for *My Project* in the `docs` folder, from the files in the `css` directory.
`styledocco -n "My Project" css`
Generate documentation for *My Project* in the `mydocs` folder, from source files in the `styles` folder. Use the `--compass` option for SASS to make Compass imports available.
`styledocco -n "My Project" -o mydocs --preprocessor "scss --compass" styles`
Syntax
------
```css
/* Provides extra visual weight and identifies the primary action in a set of buttons.
<button class="btn primary">Primary</button> */
.btn.primary {
background: blue;
color: white;
}
```
Would display the description, a rendered button as well as the example HTML code. The CSS will be applied to the preview.
See the `examples` folder for more in-depth examples.
Tips and tricks
---------------
* StyleDocco will automatically compile any SASS, SCSS, Less or Stylus files before they are applied to the page. You can also enter a custom preprocessor command if you want to pass custom parameters to the preprocessor.
* If your project includes a `README.md` file, it will be used as the base for an `index.html`.
* If you don't specify a custom name, StyleDocco will use the name from a `package.json` file if it finds one.
* Put some whitespace before a comment block to exclude it from the documentation.
* Level 1 headings will automatically create a new section in the documentation.
* Add `:hover`, `:focus`, etc as class names in example code and the pseudo class styles will be applied in the preview.
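For example, the pseudo-class tip above could be used like this (a sketch; the class name is illustrative):

```css
/* Primary buttons darken on hover.
<button class="btn">Normal</button>
<button class="btn :hover">Hover state</button> */
.btn { background: blue; color: white; }
.btn:hover { background: navy; }
```

In the generated preview, the second button is rendered with the `:hover` styles permanently applied.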
Change Log
==========
v0.6.6 - Jan 28, 2014
---------------------
* Fix failure to render iframes in new versions of Chrome (#100)
* Make it an option to minify the code (#106)
v0.6.5 - Nov 17, 2013
---------------------
* Fix failure to install on some systems (#94)
v0.6.4 - Oct 07, 2013
---------------------
* Large preprocessor outputs hit the maxBuffer limit (#87)
* Relative image path is no longer added to data: URLs (#88)
* Replace path.exists with fs.exists (#92)
* Can now use a backslash to separate directories on Windows (#95)
* HTTP URLs in paths now behave correctly (#97)
v0.6.3 - July 09, 2013
----------------------
* Do not add relative paths to data URLs
v0.6.2 - June 30, 2013
----------------------
* Find assets recursively in Windows
* Fail gracefully on no files error
* Relative url() paths are now preserved
v0.6.1 - August 20, 2012
------------------------
* Mute all preprocessor errors unless using verbose option
* Don't try to preprocess SASS partials
* Design tweaks
v0.6.0 - August 15, 2012
------------------------
* Remove custom resources option, as client side scripts/styles are vital to the functionality
* Editable, auto-updating code examples
* Documentation-wide search
* Page specific Table of Contents
v0.5.0 - July 23, 2012
------------------------
* Render previews in sandboxed iframes
* Resizing of iframes for responsive debugging
* All processed CSS is included in all previews
* Allow custom JavaScript and CSS files to be included in previews
* Updated design with topbar instead of sidebar and new colors
Acknowledgements
================
A lot of the heavy lifting in StyleDocco is done by the excellent [Marked](https://github.com/chjj/marked) module by Christopher Jeffrey. The original [Docco](https://github.com/jashkenas/docco) by Jeremy Ashkenas and [Knyle Style Sheets](https://github.com/kneath/kss) have also been sources of inspiration for StyleDocco.
| 39.527778 | 389 | 0.706957 | eng_Latn | 0.981777 |
6f24af59041a744eaad5be49017c7410209c92b8 | 1,536 | md | Markdown | docs/markdown/custom-style.md | ranwawa/vant-weapp | 67dd1ff3606115ac4e896783a485887e4245e251 | [
"MIT"
] | 2 | 2019-09-19T08:51:38.000Z | 2019-11-03T10:09:23.000Z | docs/markdown/custom-style.md | ranwawa/vant-weapp | 67dd1ff3606115ac4e896783a485887e4245e251 | [
"MIT"
] | 1 | 2020-02-15T04:15:27.000Z | 2020-02-15T04:15:27.000Z | docs/markdown/custom-style.md | ranwawa/vant-weapp | 67dd1ff3606115ac4e896783a485887e4245e251 | [
"MIT"
] | 2 | 2020-02-23T08:02:03.000Z | 2020-02-25T02:03:18.000Z | # 样式覆盖
### 介绍
Vant Weapp 基于微信小程序的机制,为开发者提供了以下 3 种修改组件样式的方法
### 解除样式隔离
样式隔离的相关背景知识请查阅[微信小程序文档](https://developers.weixin.qq.com/miniprogram/dev/framework/custom-component/wxml-wxss.html#%E7%BB%84%E4%BB%B6%E6%A0%B7%E5%BC%8F%E9%9A%94%E7%A6%BB)
<br />
All Vant Weapp components enable `addGlobalClass: true` so they can be affected by external styles. You can override component styles in the following 2 ways.
> When using a Vant Weapp component in a page, you can override its styles directly in the page's style file
```html
<van-button type="primary">主要按钮</van-button>
```
```css
/* page.wxss */
.van-button--primary {
font-size: 20px;
background-color: pink;
}
```
> When using a Vant Weapp component inside a custom component, you need to enable the `styleIsolation: 'shared'` option
```html
<van-button type="primary">主要按钮</van-button>
```
```js
Component({
options: {
styleIsolation: 'shared'
}
});
```
```css
.van-button--primary {
font-size: 20px;
background-color: pink;
}
```
### Using external classes
For background on external classes, see the [WeChat Mini Program documentation](https://developers.weixin.qq.com/miniprogram/dev/framework/custom-component/wxml-wxss.html#%E5%A4%96%E9%83%A8%E6%A0%B7%E5%BC%8F%E7%B1%BB)
<br />
Vant Weapp 开放了大量的外部样式类供开发者使用,具体的样式类名称可查阅对应组件的“外部样式类”部分。
需要注意的是普通样式类和外部样式类的优先级是未定义的,因此使用时请添加`!important`以保证外部样式类的优先级。
```html
<van-cell
  title="Cell"
  value="Content"
title-class="cell-title"
value-class="cell-value"
/>
```
```css
.cell-title {
color: pink !important;
font-size: 20px !important;
}
.cell-value {
color: green !important;
font-size: 12px !important;
}
```
### Using CSS Variables
Vant Weapp provides a customization scheme based on CSS variables for some CSS properties.
Compared with removing style isolation or using external style classes, this approach supports batch modification of the styles of multiple components at the page or app level, which makes it suitable for theme customization.
It is, of course, also more than capable of modifying part of a single component's style. For usage details, see [Custom Theme](/#/theme).
---
title: Peer Resolvers
ms.date: 03/30/2017
ms.assetid: d86d12a1-7358-450f-9727-b6afb95adb9c
ms.openlocfilehash: 0547bb37b03235c61f43cec365551438f7931ad1
ms.sourcegitcommit: 68653db98c5ea7744fd438710248935f70020dfb
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 08/22/2019
ms.locfileid: "69909910"
---
# <a name="peer-resolvers"></a>Peer Resolvers
 To connect to a mesh, a peer node requires the IP addresses of other nodes. IP addresses are obtained by contacting a resolver service, which takes the ID of the mesh and returns a list of addresses corresponding to the nodes registered under that particular mesh ID. The resolver maintains a list of the registered addresses, which it builds by registering each node in the mesh with the service.
 You can specify which PeerResolver service to use through the `Resolver` property of the <xref:System.ServiceModel.NetPeerTcpBinding>.
## <a name="supported-peer-resolvers"></a>Supported Peer Resolvers
 Peer Channel supports two types of resolvers: the Peer Name Resolution Protocol (PNRP) and custom resolver services.
 By default, Peer Channel uses the PNRP resolver service to discover peers and neighbors in the mesh. For situations or platforms where PNRP is not available or feasible, Windows Communication Foundation (WCF) provides an alternative: the server-based discovery service, <xref:System.ServiceModel.PeerResolvers.CustomPeerResolverService>. You can also explicitly define a custom resolver service by writing a class that implements the <xref:System.ServiceModel.PeerResolvers.IPeerResolverContract> interface.
### <a name="peer-name-resolution-protocol-pnrp"></a>Peer Name Resolution Protocol (PNRP)
 PNRP, the default resolver in [!INCLUDE[wv](../../../../includes/wv-md.md)], is a serverless, distributed name resolution service. PNRP can also be used on [!INCLUDE[wxpsp2](../../../../includes/wxpsp2-md.md)] by installing the Advanced Networking Pack. Two clients running the same version of PNRP can locate each other using this protocol, provided they meet certain conditions (such as the absence of an intervening corporate firewall). Note that the version of PNRP that ships with [!INCLUDE[wv](../../../../includes/wv-md.md)] is newer than the one included in the Advanced Networking Pack. See the Microsoft Download Center for PNRP updates for [!INCLUDE[wxpsp2](../../../../includes/wxpsp2-md.md)].
### <a name="custom-resolver-services"></a>Custom Resolver Services
 When the PNRP service is not available, or when you want full control over the mesh, you can use a custom server-based resolver service. You can define this service explicitly by writing a resolver class that implements the <xref:System.ServiceModel.PeerResolvers.IPeerResolverContract> interface, or by using the default implementation provided, <xref:System.ServiceModel.PeerResolvers.CustomPeerResolverService>.
 Under the default implementation of the service, client registrations expire after a certain period if clients do not explicitly refresh them. Clients that use the resolver service must know the upper bound of client-to-server latency so that they can refresh their registrations in time. This implies choosing an appropriate refresh interval (`RefreshInterval`) on the resolver service. (For more information, see [Inside the CustomPeerResolverService: Client Registrations](../../../../docs/framework/wcf/feature-details/inside-the-custompeerresolverservice-client-registrations.md).)
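 The expiry mechanism described here is a generic pattern: a registration survives only if the client refreshes it within the configured interval. The following is a minimal, WCF-independent Python sketch of that pattern; the class and method names (`RegistrationTable`, `resolve`) are invented for illustration and are not part of the WCF API.

```python
import time

class RegistrationTable:
    """Toy model of resolver-side registrations that expire unless refreshed."""

    def __init__(self, refresh_interval_s, now=time.monotonic):
        self.refresh_interval_s = refresh_interval_s
        self.now = now                # injectable clock, for deterministic demos
        self._entries = {}            # node_id -> timestamp of last refresh

    def register(self, node_id):
        self._entries[node_id] = self.now()

    def refresh(self, node_id):
        if node_id in self._entries:
            self._entries[node_id] = self.now()

    def resolve(self):
        """Return the node IDs whose registration has not yet expired."""
        cutoff = self.now() - self.refresh_interval_s
        return [n for n, t in self._entries.items() if t >= cutoff]

# A fake clock makes the expiry behavior easy to see deterministically.
clock = [0.0]
table = RegistrationTable(refresh_interval_s=10.0, now=lambda: clock[0])
table.register("nodeA")
table.register("nodeB")
clock[0] = 8.0
table.refresh("nodeA")   # nodeA refreshes in time; nodeB does not
clock[0] = 12.0
print(table.resolve())   # → ['nodeA']
```

 This also shows why the client must know the latency upper bound: the refresh has to land at the resolver before `refresh_interval_s` elapses, so the client must send it early enough to absorb the round trip.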
 The application writer should also consider securing the connection between clients and the custom resolver service. To do this, you can specify security settings on the <xref:System.ServiceModel.NetTcpBinding> that clients use to contact the resolver service. You must specify credentials (if any) on the `ChannelFactory` used to create the peer channel. These credentials are passed to the `ChannelFactory` that is used to create channels in the custom resolver.
> [!NOTE]
> When using link-local and ad hoc networks with a custom resolver, it is strongly recommended that applications using or supporting link-local or ad hoc networks include logic that selects a single link-local address to use when connecting. This prevents any confusion caused by computers that have multiple link-local addresses; accordingly, Peer Channel can use only one link-local address at a time. You can specify this address using the `ListenIpAddress` property on <xref:System.ServiceModel.NetPeerTcpBinding>.
 For a demonstration of how to implement a custom resolver, see the Peer Channel [Custom Peer Resolver](https://docs.microsoft.com/previous-versions/dotnet/netframework-3.5/ms751466(v=vs.90)) sample.
## <a name="in-this-section"></a>In This Section
 [Inside the CustomPeerResolverService: Client Registrations](../../../../docs/framework/wcf/feature-details/inside-the-custompeerresolverservice-client-registrations.md)
## <a name="see-also"></a>See also
- [Peer Channel Concepts](../../../../docs/framework/wcf/feature-details/peer-channel-concepts.md)
- [Peer Channel Security](../../../../docs/framework/wcf/feature-details/peer-channel-security.md)
- [Building a Peer Channel Application](../../../../docs/framework/wcf/feature-details/building-a-peer-channel-application.md)
Code Repository for the API used in the AZUR project from the 2021 [Tech4Germany](tech.4germany.org) fellowship. The functionalities of this API are also being made accessible through a UI, available [here](https://github.com/daudprobst/AZUR-Frontend).
# AZUR
In the German Bundestag, various resources are allocated to parties according to their strengths in the plenum using one of three [proportionality calculation](https://en.wikipedia.org/wiki/Party-list_proportional_representation) methods - the Saint-Lague/Schepers, d'Hondt, and Hare-Niemeyer methods. This includes, for example, the number of seats on committees, speaking time on the floor, and even things like floor space on open-door days. AZUR is an internally used calculator that applies these methods - this project is its third iteration, after one created in the 1970s and one created in 2000.
# AZUR-API
This repository contains the backend API, which is to be hosted separately from the frontend so it can be accessed from other sources. Detailed API docs are to follow - in short, it requires a JSON POST with a vote distribution, the number of seats (or minutes, or square meters, or...) to distribute, and the method to use, and returns the result of that calculation as a JSON with up to three keys: the seat distribution, the assignment sequence (if the method returns one), and a table of distributions from 1 to the requested amount of seats.
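As an illustration of what such a calculation involves, the Sainte-Laguë/Schepers highest-averages method can be sketched in a few lines of Python. This is a conceptual sketch of the method, not AZUR's actual implementation, and the result keys (`distribution`, `sequence`) are assumed names rather than the API's documented fields.

```python
import heapq

def sainte_lague(votes, num_seats):
    """Allocate num_seats proportionally using the Sainte-Lague divisors 1, 3, 5, ...

    votes: dict mapping party name -> vote count.
    Returns the seat distribution and the seat-by-seat assignment sequence.
    """
    seats = {party: 0 for party in votes}
    sequence = []
    # Max-heap (via negated values) keyed on the current quotient votes / (2*seats + 1).
    heap = [(-v / 1, party) for party, v in votes.items()]
    heapq.heapify(heap)
    for _ in range(num_seats):
        _, party = heapq.heappop(heap)
        seats[party] += 1
        sequence.append(party)
        heapq.heappush(heap, (-votes[party] / (2 * seats[party] + 1), party))
    return {"distribution": seats, "sequence": sequence}

result = sainte_lague({"A": 53, "B": 24, "C": 23}, num_seats=10)
print(result["distribution"])  # → {'A': 6, 'B': 2, 'C': 2}
```

The d'Hondt method differs only in its divisor sequence (1, 2, 3, ... instead of 1, 3, 5, ...), while Hare-Niemeyer is a largest-remainder method rather than a highest-averages one.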
# Getting Started
To run the API locally, clone this project, install the dependencies from requirements.txt in a new Python environment, set the Flask app with `export FLASK_APP=app` (or your OS's equivalent of setting an environment variable), and run it with `flask run`.
---
title: Providing Icons for a Device
description: Providing Icons for a Device
keywords:
- AutoPlay icons WDK
- custom icons WDK device installations
- vendor icons WDK
- icons WDK Shell
- Shell icons WDK
- media-inserted icons WDK
- no-media-inserted icons WDK
- icons WDK AutoPlay
- copying icon files
ms.date: 04/30/2020
ms.localizationpriority: medium
---
# Providing Icons for a Device
This topic describes how you can provide custom icons for a device by referencing them in a driver's INF file. You can provide icons that appear in Device Manager, Windows Explorer, or both, as appropriate.
## Adding icons for Device Manager
You can either embed a custom icon in a DLL or provide a standalone .ico file. If your driver is already a DLL file, the first is the easiest option because it does not require copying any additional files.
To embed the icon in a DLL, use an entry like this:
```inf
[<DDInstall>]
AddProperty = DeviceIconProperty
[DeviceIconProperty]
DeviceIcon,,,,"%13%\UmdfDriver.dll,-100"
```
The above example uses DIRID 13 to copy the file to the Driver Store, which avoids needing to copy it anywhere else. The entry follows the format `<Resource.dll>,-<IconResourceID>`, so the 100 signifies the resource ID of the icon in the resource table of the DLL. For more on DIRID 13, see [Using a Universal INF File](./using-a-universal-inf-file.md).
To reference a standalone .ico file, use an entry like this:
```inf
[<DDInstall>]
AddProperty = DeviceIconProperty
[DeviceIconProperty]
DeviceIcon,,,,"%13%\vendor.ico"
```
## Adding icons for storage volumes in Explorer
The shell uses **Icons** and **NoMediaIcons** registry values to represent the device in AutoPlay, My Computer, and file Open dialog boxes.
To add these, include an [**INF AddReg directive**](inf-addreg-directive.md) under an [**INF DDInstall.HW section**](inf-ddinstall-hw-section.md) for the device. In the **AddReg** section, specify **Icons** and **NoMediaIcons** value entries, as shown in the following example:
```inf
[DDInstall.NT.HW]
AddReg = IconInformation
[IconInformation]
HKR, , Icons, 0x10000, "media-inserted-icon-file"
HKR, , NoMediaIcons, 0x10000, "no-media-inserted-icon-file"
```
Then include an [**INF SourceDisksFiles section**](inf-sourcedisksfiles-section.md) that lists the icon files and a corresponding [**INF CopyFiles directive**](inf-copyfiles-directive.md) that copies them to the system.
The **Icons** and **NoMediaIcons** value entries are stored under the **Device Parameters** key under the device's *hardware key*. For example, `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\<Hardware ID>\Device Parameters` would contain entries like the following:
* `Icons [REG_MULTI_SZ] = %SystemRoot%\system32\icon.ico`
* `NoMediaIcons [REG_MULTI_SZ] = %SystemRoot%\system32\noicon.ico`
To modify the **Device Parameters** key from user mode, use [**SetupDiCreateDevRegKey**](/windows/win32/api/setupapi/nf-setupapi-setupdicreatedevregkeya) or [**SetupDiOpenDevRegKey**](/windows/win32/api/setupapi/nf-setupapi-setupdiopendevregkey).
From kernel mode, use [**IoOpenDeviceRegistryKey**](/windows-hardware/drivers/ddi/wdm/nf-wdm-ioopendeviceregistrykey).
## Resources
When you create icons, follow the guidelines that are provided in [Icons](/windows/win32/uxguide/vis-icons). These guidelines describe how to create icons that have the appearance and behavior of Windows graphical elements.
```jsx
<SynapseVideo params={{ vimeoId: '355433104' }} />
```
# Rich Browser Applications Milestone
## Media Queries
[NSS Study Guide: Media Queries](learning-materials/RBA_MEDIA_QUERIES.md)
---
## API's
### NSS Exercises
1. [Weather API](learning-materials/RBA_API_WEATHER_APP.md)
---
## Firebase Hosting
[NSS Study Guide: Firebase Hosting](learning-materials/RBA_FIREBASE_HOSTING)
### NSS Exercises
1. [Firebase and Family Members](learning-materials/RBA_FAMILY_MEMBERS.md)
---
## Challenges
1. [Individual Challenges](learning-materials/INDIVIDUAL_CHALLENGES.md)
1. [Group Challenges](learning-materials/TEAM_CHALLENGES.md)
---
author: cynthn
ms.service: virtual-machines
ms.topic: include
ms.date: 11/09/2018
ms.author: cynthn
ms.openlocfilehash: 9070aee55969c1cc0fdf3870a05a065aaa5a8bf3
ms.sourcegitcommit: cd70273f0845cd39b435bd5978ca0df4ac4d7b2c
ms.translationtype: MT
ms.contentlocale: ru-RU
ms.lasthandoff: 09/18/2019
ms.locfileid: "67185644"
---
| Resource | Default limit |
| --- | --- |
| Virtual machines per availability set | 200 |
| Certificates per subscription | Unlimited<sup>1</sup> |
<sup>1</sup>With Azure Resource Manager, certificates are stored in Azure Key Vault. The number of certificates per subscription is unlimited. There is a 1-MB limit per deployment, which consists of a single virtual machine or an availability set.
---
title: "Move a UCP from One Instance of SQL Server to Another (SQL Server Utility) | Microsoft Docs"
ms.custom: ""
ms.date: "06/13/2017"
ms.prod: "sql-server-2014"
ms.reviewer: ""
ms.technology:
ms.topic: conceptual
ms.assetid: b402fd9e-0bea-4c38-a371-6ed7fea12e96
author: MikeRayMSFT
ms.author: mikeray
manager: craigg
---
# Move a UCP from One Instance of SQL Server to Another (SQL Server Utility)
This topic describes how to move a utility control point (UCP) from one instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] to another in [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] by using [!INCLUDE[ssManStudioFull](../../includes/ssmanstudiofull-md.md)].
## <a name="SSMSProcedure"></a> Using SQL Server Management Studio
#### Move a UCP from one instance of SQL Server to another.
1. Create a new UCP on the instance of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] that will be the new host instance of the UCP. For more information, see [Create a SQL Server Utility Control Point (SQL Server Utility)](create-a-sql-server-utility-control-point-sql-server-utility.md).
2. If non-default policy settings have been set for any instances of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] in your [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Utility, make note of the policy thresholds so that you can re-establish them on the new UCP. Default policies are applied when instances are added to the new UCP. If default policies are in effect, the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Utility list view displays **Global** in the **Policy Type** column.
3. Remove all managed instances from the old UCP. For more information, see [Remove an Instance of SQL Server from the SQL Server Utility](remove-an-instance-of-sql-server-from-the-sql-server-utility.md).
4. Back up the utility management data warehouse (UMDW) from the old UCP. The filename is Sysutility_mdw_\<GUID>_DATA, and the database default location is \<System drive>:\Program Files\Microsoft SQL Server\MSSQL10_50.<UCP_Name>\MSSQL\Data\\, where \<System drive> is normally the C:\ drive. For more information, see [Copy Databases with Backup and Restore](../databases/copy-databases-with-backup-and-restore.md).
5. Restore the backup of the UMDW to the new UCP. For more information, see [Copy Databases with Backup and Restore](../databases/copy-databases-with-backup-and-restore.md).
6. Enroll instances into the new UCP to make them managed by the [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] Utility. For more information, see [Enroll an Instance of SQL Server (SQL Server Utility)](enroll-an-instance-of-sql-server-sql-server-utility.md).
7. Implement custom policy definitions for the managed instances of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], as necessary.
8. Wait approximately 1 hour for data collection and aggregation operations to complete.
9. To refresh data, right-click the **Managed Instances** node in **Utility Explorer**, then select **Refresh**. List view data are displayed in the **Utility Explorer** content pane. For more information, see [View Resource Health Policy Results (SQL Server Utility)](view-resource-health-policy-results-sql-server-utility.md).
## See Also
[SQL Server Utility Features and Tasks](sql-server-utility-features-and-tasks.md)
[Enroll an Instance of SQL Server (SQL Server Utility)](enroll-an-instance-of-sql-server-sql-server-utility.md)
---
layout: post
title: "ViDi / ViDi Suite FAQ"
date: 2018-04-28 13:25:00
author: Alex Choi
categories: Deep-Learning
---
------
## Technical and Integration aspects
#### How does Cognex ViDi Suite work and what methods are used
Cognex ViDi Suite is a specialized implementation of a neural network that has been optimized for machine vision under real-world factory automation conditions. The specifics of this optimization are confidential, as they are what makes Cognex ViDi Suite unique.
#### Is Cognex ViDi Suite subjective or objective?
ViDi’s training algorithm intentionally introduces a small amount of randomness. In consequence, we typically measure a <font color="red">1-2% variation in behavior even when training from the same data set</font>. Please refer to the technical documentation for a more detailed explanation (“By how much does the results accuracy vary from one training to another?”)
#### How fast is Cognex ViDi Suite?
<font color="red">Typical training time is 2-5 minutes on a PC with a reasonably high-end NVidia GPU</font>. We provide a link to an up-to-date table of available GPUs that includes relative performance capabilities for your [reference](http://www.videocardbenchmark.net/high_end_gpus.html). The runtime speed of the application depends on three main factors:
- Image size: larger images take longer to process
- Defect size: looking for small defects takes longer than for large defects
- GPU: the more GPUs (up to 4) and the newer/more powerful the GPU, the faster the application will run
As a coarse guide, runtime will take between 2 ms (for a single-character ID) and 2 seconds (for **a huge, finely-detailed inspection scene**). A rule of thumb is that computation can be done at **10-15 megapixels/second/GPU, or 40-60 megapixels/second for a high-end 4-GPU card**. Of course, you will need to test your actual application for a precise figure.
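That rule of thumb translates into a quick back-of-the-envelope estimate. The helper below simply applies the 10-15 MP/s/GPU guide value from this FAQ; it is not a benchmark, and the default of 12.5 MP/s is an assumed mid-range figure.

```python
def estimate_runtime_s(width_px, height_px, num_gpus=1, mp_per_s_per_gpu=12.5):
    """Rough ViDi runtime estimate from the 10-15 MP/s/GPU rule of thumb."""
    megapixels = width_px * height_px / 1e6
    return megapixels / (mp_per_s_per_gpu * num_gpus)

# A ~5 MP image on a single GPU at the mid-range guide value:
print(round(estimate_runtime_s(2448, 2048), 2))  # → 0.4
```

Doubling the GPU count halves the estimate, which matches the FAQ's 40-60 MP/s figure for a 4-GPU card.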
#### What is the feedback data you get from the ViDi tools?
Each tool provides XML data with the following results. Please consult the product documentation for API details.
- **ViDi Blue**: Image coordinates (X,Y), feature orientations (θ), scale, score, geometric models, angles between 2 linked features.
- **ViDi Red Unsupervised**: Maximum score, score of each pixel, center of gravity of the defective area, enclosing polygon, hit map, anomaly area
- **ViDi Red Supervised**: Maximum score, score of each pixel, center of gravity of the defective area, polygon, hit map, anomaly area
- **ViDi Green**: Confidence index, 1st best type of class found, 2nd best class found, etc.
#### Is the image source of importance, or can I also use thermal or range images with Cognex ViDi Suite?
Cognex ViDi Suite supports **PNG, BMP, TIFF, and JPEG** images, **8 or 16 bit, 1-4 channels**. As long as your data fits in this format, the tools can process your images. That does not imply any specific performance; for instance, some data sources may be noisier than others, or be subject to bias that influences the ViDi algorithms. But as a general rule, you can use images sources like _thermal, X-Ray and 3D range images_ with ViDi Suite.
#### What is the largest image that ViDi can process meaningfully?
There is <U>no practical limit</U> for training/runtime, as the system can easily handle images over 1 GB. However, the <U>GUI is only optimized for up to <strong>10 MP</strong> images</U>. Larger images may cause glitches in the user interaction experience, even though the training will run correctly.
#### Is there any kind of HMI provided with Cognex ViDi Suite or is it a pure library?
The Cognex ViDi Suite training interface is a standalone interactive executable with its own UI that produces a “workspace” (trained model or recipe).
At runtime, users will program against a .NET API to run the Red, Green or Blue tool. They have three choices for displaying ViDi results:
- Use the ViDi API to create an image with overlay graphics
- Use ViDi WPF components to draw graphics on a separate display
- Use the ViDi result data to manually draw overlay graphics on a VisionPro display that also shows VisionPro results
In the future, Cognex will provide methods to more seamlessly combine VisionPro and ViDi tool results on a common display.
#### Is the GPU in a system exclusively for Cognex ViDi Suite or can it be used by other applications as well?
The GPU is <U>not exclusively reserved</U> for Cognex ViDi Suite, but using it for other applications will impact ViDi performance.
#### Can I use multiple GPUs?
Absolutely. Cognex ViDi Suite supports <U>up to 4 NVidia GPUs</U>. The system will automatically detect and make use of these GPUs to accelerate the application. For details, please refer to the Technical Documentation FAQ under topics “How to build a 4-GPU Machine” and “By How Much will
Processing Time Improve with Multiple GPUs?”
#### How can ViDi communicate through FFP or discrete I/O directly?
ViDi is a DLL with a .NET API to the Red, Green and Blue tools. It does not include built-in image acquisition or I/O capabilities. You will need to call these functions using your standard application framework. In order to simplify this, every new ViDi shipment will include a copy of VisionPro, which will provide image acquisition and database read/write, image display, I/O and factory communications functions.
------
## Learning/Training Capabilities
#### What is the purpose of the Development of dongle?
The Development dongle provides a means to train and qualify the system. It includes the Red, Green and Blue tools. As it is time-limited and not suitable for deployment, the price is lower than for permanent runtime Deployment dongles.
#### How long does the Development dongle run?
One year from shipment.
#### What do you need to train a scratch?
As Cognex ViDi Suite is taught by example, you should present it with tens of images that show the range of scratches that you need to detect. These should be real cases showing variation in shape, size, material, lighting, etc. – whatever conditions will be found in production.
#### How do you organize Cognex ViDi Suite if you for instance inspect 20 different views?
The answer depends on how consistent the images (and especially the defects) are. If the defects look very similar from view to view, a single training may be enough. <U>If there’s significant variation in scene and defect appearance (such as field of view, lighting, etc.), then multiple workspaces will need to be created</U>. There’s no simple answer we can give that applies across all applications.
#### How do you retrain Cognex ViDi Suite if new defects show up?
There should be no need to retrain a Red tool in Unsupervised mode. The Unsupervised Red tool is trained using only good images, and it reports any anomalies from these images.
In Supervised mode, you will want to add images of the new defects to your training set. You will need to explicitly highlight them, so the system knows exactly what constitutes the new defect type. Be aware that adding images to the training set requires a complete retraining. It isn’t possible to simply add incremental data without a full retrain. In this case, Cognex recommends re-validating the system on a pre-defined set of test images.
#### With traditional Vision I know exactly what is possible or not. How do I know how Cognex ViDi Suite behaves?
It is true that ViDi does not behave like a traditional image processing tool with explicitly defined behavior. There is no guarantee that it will catch all defects of a specific size or grey level, for instance. But ViDi’s compensating advantage is that it is much more adaptable to real world conditions, and can more easily be deployed to solve problems that can’t be addressed well (or at all) with traditional machine vision approaches.
So it is essential to start with a representative set of training images, to label the images and/or defects in them correctly, and to carefully tune the system until you achieve the expected behavior.
#### How can you control the training images and find the best trained set?
By default, the system trains using 50% of the images provided, randomly selected. You can change this percentage, for instance including more images if there is a large variation in part appearance or the data set is small. You might decrease the percentage if the data set is very large and consistent. You can also explicitly include certain images if they contain key examples that are limited to just a few images.
In addition, there are a few parameters such as feature size, sampling density, etc. that you can adjust based on your application requirements. The system also includes an internal optimization feature (the “Parameter Search Tool”) that will automatically tune key parameter settings, based on your images and classifications. This will help select an optimal combination of these parameters.
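Conceptually, the default selection step behaves like the sketch below. It is only an illustration of random selection with forced inclusions, written in Python; the function and parameter names are made up and are not part of the ViDi API.

```python
import random

def select_training_set(image_ids, fraction=0.5, must_include=(), seed=None):
    """Randomly pick a training subset, forcing key example images to be included."""
    rng = random.Random(seed)
    forced_set = set(must_include)
    forced = [i for i in image_ids if i in forced_set]
    rest = [i for i in image_ids if i not in forced_set]
    # Fill the remaining training slots with a random sample of the other images.
    n_extra = max(0, round(fraction * len(image_ids)) - len(forced))
    return forced + rng.sample(rest, min(n_extra, len(rest)))

images = [f"img_{i:03d}" for i in range(100)]
train = select_training_set(images, fraction=0.5, must_include=["img_007"], seed=42)
print(len(train), "img_007" in train)  # → 50 True
```

The `must_include` list mirrors the FAQ's advice to explicitly include images that contain key examples limited to just a few samples.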
#### How is the segmentation done in an OCR application?
Segmentation is handled internally by the ViDi Blue tool, not as a separate pre-processing step prior to character recognition. You can think of the segmentation and recognition steps as a single process within the Blue tool. Much of what is normally done during segmentation is handled at training time by the Blue tool, where it learns the expected relationship between characters. With degraded print – such as with high speed ink jet applications – segmentation can be extremely challenging using traditional OCR techniques. This is actually one of the strengths of the Blue tool.
------
## Applications
#### What are the limitations of the ViDi software?
ViDi Suite is not suitable for the following types of applications:
- **Precision Measurements** and metrology
- **Print Quality Grading**. ViDi is not a font-based system where the user specifies the expected character at each position. Instead, it trains on the overall pattern, and can sometimes be confused if a misprinted character resembles a valid character.
- <U><strong>Applications where runtime images differ significantly</strong> from train time images in lighting, contrast, FoV, etc. but are still considered acceptable</U>
- It cannot be trained from CAD data
#### Can you explain what kind of applications you solve with ViDi?
Cognex ViDi Suite is especially well-suited for the following types of applications:
- **Cosmetic inspection** applications such as scratches, stains, chips and missing/extra ink, where explicit rule-based solutions are brittle or overly complex
- **Feature location** on deformed or inconsistent parts where PatMax is not a good candidate. Note that the resulting position information is not as accurate as PatMax, but it may be able to find objects whose appearance is less consistent than PatMax expects.
- **Optical character recognition on deformed text**, such as high speed ink jet, molded rubber or stamped metal
- **Defect classification**
- **Product sorting**
#### Where is the borderline between Cognex ViDi Suite and traditional vision?
Some applications are clearly better suited to classic machine vision, such as precision alignment, metrology, and 2D code reading. Others are better served by Cognex ViDi Suite, such as cosmetic inspection or OCR on deformed characters. Borderline cases include many feature location and OCR applications, as well as simple, consistent inspection applications.
In these cases, our recommendation is to try both approaches to see which works best. Other considerations include the cost of GPU-accelerated hardware and the ease/complexity of training.
But achieving an accurate and stable vision result should be the overriding concern.
#### What is necessary to perform a proof of concept/feasibility?
AE Engineers have to collect image data sets of good and bad parts under the expected lighting and optics conditions, and understand your customer’s defect criteria.
- For <font color="red">Object Detection</font>, we recommend **30-50** sample images representing the range of expected cases.
- For <font color="red">OCR</font>, **30-50** samples of each character, shown in the range of possible string positions.
- For <font color="red">Unsupervised Inspection</font>, **30-50** images of good parts and 30-50 of bad parts.
- For <font color="red">Supervised Inspection</font>, include **30-50** images of each defect type or feature to segment.
Be aware that the number of images needed is closely related to the accuracy the customer is looking for. <U>If the customer needs 99.9% accuracy, you may need 1,000 images</U>, while <U>99% accuracy may only need 100 images</U>.
#### How many pixels does a "defect" need to have to be detectable in ViDi?
The ViDi software is **capable of detecting defects as small as a single pixel**. However, “defects” this small may be no more than camera noise or lighting artifacts. You should set your <U>minimum defect size based on the quality of the images</U> you provide and the actual reliable size of any defect you want to detect.
------
## Roadmap & Product Development
#### Do you plan to combine ViDi and VisionPro in a single package?
Yes, Cognex will include a VisionPro license with every ViDi Suite purchase. In future releases, the two will be integrated in a single software package.
#### How will ViDi be integrated with VisionPro and Designer?
As a first step, Cognex will simplify data conversion between VisionPro and ViDi tools, without requiring customers to do this manually. We are also looking into other ideas to more tightly couple the products, and will announce details later this year.
#### What are you doing to support OPC UA and cloud connectivity?
Cognex is exploring several Cloud Server options. We’re extremely interested in talking with partners who have an interest in this capability, in order to better understand their deployment model and requirements. Please contact Product Marketing to discuss this further.
#### Will you provide a pre-trained font for OCR?
Cognex would consider providing a pre-trained Deep Learning OCR tool if there is sufficient demand. It will help our decision if you can share your application opportunities with your Sales Engineer and our Product Marketing team.
#### Will you change the name of ViDi?
As a general approach, we intend to leverage “Cognex Deep Learning” as the common name for these technologies. While you can use the term “ViDi,” we would recommend against building it into any advertising material, as we are planning to phase it out as a brand name.
------
## Partners
#### Can PSI have a web based presentation with more technical detail to prequalify if ViDi is of any interest to us as it might be a significant investment in our time?
Cognex will publish various industry documents and white papers describing ViDi’s applicability to key markets. But the two best ways to learn are to share your application requirements with a Cognex Application Engineer, or to attend the two-day ViDi training class.
#### Should the PSI teach end customers how to use and train ViDi?
We assume PSIs will create the initial application solution, both the narrow ViDi trained workspace as well as the overall system with image acquisition, results analysis, I/O, etc. We anticipate that most customers will have the capability to update their training data to reflect new part conditions, if you choose to train them and provide an interface to do so. This is as much a business question as a technical one, and as such something only PSI can answer.
#### Can partners have AE resources to evaluate applications with ViDi?
Cognex AEs will be available to help with initial evaluations. However, we can’t promise availability for every project, so please work closely with your Cognex Sales Engineer to schedule any requests. In order to be successful, please do the up-front work to collect image data sets of good and bad parts under the expected lighting and optics conditions (refer to “What is necessary to perform a proof of concept/feasibility?”) and understand your customer’s defect criteria.
---
title: Setting up a Python Development Environment in WSL
date: '2020-12-20 23:04:42'
tags: [study notes, Python]
draft: false
summary: Notes on setting up a Python development environment on Ubuntu 18.04 under WSL: switching to a faster apt mirror, upgrading to Python 3.8, and installing pip, venv, and pipenv.
layout: PostSimple
---
## Preparing the Operating System
First, choose a suitable operating system. I went with Ubuntu 18.04 on WSL. Start by [switching to a Chinese apt mirror](https://zhuanlan.zhihu.com/p/61228593) to avoid painfully slow downloads:
### Back up the source list
```bash
sudo cp /etc/apt/sources.list /etc/apt/sources.list_backup
```
### Edit the source configuration file
```bash
sudo vim /etc/apt/sources.list
```
Aliyun (Alibaba Cloud) mirror:
```
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
```
### Refresh the package lists
```bash
sudo apt-get update
sudo apt-get upgrade
```
## Installing Python
Since Python 3.6 is already installed, just [upgrade to Python 3.8](https://www.itsupportwale.com/blog/how-to-upgrade-to-python-3-7-on-ubuntu-18-10/).
### Install the new Python version
```bash
sudo apt-get install python3.8
```
### Register the alternatives
```bash
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 2
```
### Switch the python3 default
```bash
sudo update-alternatives --config python3
```
Just select the appropriate version.
### Install pip
```bash
sudo apt install python3-pip
```
### Install venv
```bash
sudo apt install python3-venv
```
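With `venv` installed, a project-local environment can be created and used like this (a minimal sketch; `demo-env` is just an arbitrary directory name):

```bash
# Create an isolated environment in ./demo-env
python3 -m venv demo-env
# Activate it for the current shell session
. demo-env/bin/activate
# This shell now uses the environment's own interpreter
python --version
# Leave the environment when done
deactivate
```

Each project gets its own environment directory this way, so package upgrades in one project cannot break another.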
### [Install pipenv](https://pipenv.pypa.io/en/latest/install/#installing-pipenv)
```bash
pip3 install --user pipenv
```
## Summary
I tried doing Python development in VS Code, but based on my own experience (some methods would not jump to definition properly) and [some discussions](https://zhuanlan.zhihu.com/p/66157046), my preliminary conclusion is that VS Code is not a great fit for Python development. Since I currently have no better Linux environment (no desktop GUI), I went back to PyCharm on Windows. Of course, with PyCharm on Linux, simply installing PyCharm should work fine as well.
| 25.553191 | 234 | 0.759367 | yue_Hant | 0.358232 |
6f29a39850c08456f5dcc6b1ea5f4b1f113f59dc | 5,028 | md | Markdown | tests/unit/fake_data_root/kubernetes/var/lib/juju/agents/unit-kubernetes-master-0/charm/hooks/relations/gcp-integration/docs/provides.md | KellenRenshaw/hotsos | e3fc51ab7f8af606a5846a3486a7fda23d761583 | [
"Apache-2.0"
] | 6 | 2021-10-01T19:46:14.000Z | 2022-03-31T17:05:08.000Z | tests/unit/fake_data_root/kubernetes/var/lib/juju/agents/unit-kubernetes-master-0/charm/hooks/relations/gcp-integration/docs/provides.md | KellenRenshaw/hotsos | e3fc51ab7f8af606a5846a3486a7fda23d761583 | [
"Apache-2.0"
] | 111 | 2021-10-01T18:18:17.000Z | 2022-03-29T12:23:20.000Z | tests/unit/fake_data_root/kubernetes/var/lib/juju/agents/unit-kubernetes-master-0/charm/hooks/relations/gcp-integration/docs/provides.md | KellenRenshaw/hotsos | e3fc51ab7f8af606a5846a3486a7fda23d761583 | [
"Apache-2.0"
] | 10 | 2021-09-29T14:47:54.000Z | 2022-03-18T14:52:16.000Z | <h1 id="provides">provides</h1>
This is the provides side of the interface layer, for use only by the GCP
integration charm itself.
The flags that are set by the provides side of this interface are:
* **`endpoint.{endpoint_name}.requested`** This flag is set when there is
a new or updated request by a remote unit for GCP integration features.
The GCP integration charm should then iterate over each request, perform
whatever actions are necessary to satisfy those requests, and then mark
them as complete.
<h1 id="provides.GCPIntegrationProvides">GCPIntegrationProvides</h1>
```python
GCPIntegrationProvides(self, endpoint_name, relation_ids=None)
```
Example usage:
```python
from charms.reactive import when, endpoint_from_flag
from charms import layer
@when('endpoint.gcp.requests-pending')
def handle_requests():
gcp = endpoint_from_flag('endpoint.gcp.requests-pending')
for request in gcp.requests:
if request.instance_labels:
layer.gcp.label_instance(
request.instance,
request.zone,
request.instance_labels)
if request.requested_load_balancer_management:
layer.gcp.enable_load_balancer_management(
request.charm,
request.instance,
request.zone,
)
# ...
gcp.mark_completed()
```
<h2 id="provides.GCPIntegrationProvides.relation_ids">relation_ids</h2>
A list of the IDs of all established relations.
<h2 id="provides.GCPIntegrationProvides.requests">requests</h2>
A list of the new or updated `IntegrationRequests` that
have been made.
<h2 id="provides.GCPIntegrationProvides.get_departed_charms">get_departed_charms</h2>
```python
GCPIntegrationProvides.get_departed_charms(self)
```
Get a list of all charms that have had all units depart since the
last time this was called.
<h2 id="provides.GCPIntegrationProvides.mark_completed">mark_completed</h2>
```python
GCPIntegrationProvides.mark_completed(self)
```
Mark all requests as completed and remove the `requests-pending` flag.
<h1 id="provides.IntegrationRequest">IntegrationRequest</h1>
```python
IntegrationRequest(self, unit)
```
A request for integration from a single remote unit.
<h2 id="provides.IntegrationRequest.application_name">application_name</h2>
The name of the application making the request.
<h2 id="provides.IntegrationRequest.charm">charm</h2>
The charm name reported for this request.
<h2 id="provides.IntegrationRequest.has_credentials">has_credentials</h2>
Whether or not credentials have been set via `set_credentials`.
<h2 id="provides.IntegrationRequest.instance">instance</h2>
The instance name reported for this request.
<h2 id="provides.IntegrationRequest.instance_labels">instance_labels</h2>
Mapping of label names to values to apply to this instance.
<h2 id="provides.IntegrationRequest.is_changed">is_changed</h2>
Whether this request has changed since the last time it was
marked completed (if ever).
<h2 id="provides.IntegrationRequest.model_uuid">model_uuid</h2>
The UUID of the model containing the application making this request.
<h2 id="provides.IntegrationRequest.relation_id">relation_id</h2>
The ID of the relation for the unit making the request.
<h2 id="provides.IntegrationRequest.requested_block_storage_management">requested_block_storage_management</h2>
Flag indicating whether block storage management was requested.
<h2 id="provides.IntegrationRequest.requested_dns_management">requested_dns_management</h2>
Flag indicating whether DNS management was requested.
<h2 id="provides.IntegrationRequest.requested_instance_inspection">requested_instance_inspection</h2>
Flag indicating whether the ability to inspect instances was requested.
<h2 id="provides.IntegrationRequest.requested_network_management">requested_network_management</h2>
Flag indicating whether the ability to manage networking was requested.
<h2 id="provides.IntegrationRequest.requested_object_storage_access">requested_object_storage_access</h2>
Flag indicating whether object storage access was requested.
<h2 id="provides.IntegrationRequest.requested_object_storage_management">requested_object_storage_management</h2>
Flag indicating whether object storage management was requested.
<h2 id="provides.IntegrationRequest.requested_security_management">requested_security_management</h2>
Flag indicating whether security management was requested.
<h2 id="provides.IntegrationRequest.unit_name">unit_name</h2>
The name of the unit making the request.
<h2 id="provides.IntegrationRequest.zone">zone</h2>
The zone reported for this request.
<h2 id="provides.IntegrationRequest.mark_completed">mark_completed</h2>
```python
IntegrationRequest.mark_completed(self)
```
Mark this request as having been completed.
<h2 id="provides.IntegrationRequest.set_credentials">set_credentials</h2>
```python
IntegrationRequest.set_credentials(self, credentials)
```
Set the credentials for this request.
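As a sketch of the expected call pattern (not part of the interface layer itself): the provider charm hands credentials to each pending request and then marks it complete. The `StubRequest` below is a hypothetical stand-in for `IntegrationRequest`, used only so the pattern is runnable outside a charm:

```python
# Hypothetical stand-in for IntegrationRequest; real instances come from
# the endpoint's `requests` property, not from charm code.
class StubRequest:
    def __init__(self, charm, instance, zone):
        self.charm = charm
        self.instance = instance
        self.zone = zone
        self.has_credentials = False
        self.completed = False

    def set_credentials(self, credentials):
        # The real method relays the credentials back over the relation.
        self._credentials = credentials
        self.has_credentials = True

    def mark_completed(self):
        self.completed = True


def satisfy(requests, credentials):
    """Give credentials to every pending request and mark it done."""
    for request in requests:
        if not request.has_credentials:
            request.set_credentials(credentials)
        request.mark_completed()


pending = [StubRequest("kubernetes-worker", "worker-0", "europe-west1-b")]
satisfy(pending, {"project": "example-project"})
assert pending[0].has_credentials and pending[0].completed
```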
[ReportPortal](https://reportportal.io/) is an open-source web solution created by developers from EPAM and the OSS community. It lets you gather in one place the documentation and results of the various testing projects run in a company, and make them available to testers, IT specialists, and business stakeholders.
Using ReportPortal makes it possible to:
- Speed up getting products into production, together with test automation
- View test scenarios with all related data in one place, "here and now", including logs, screenshots, and binary data
- Link specific test scenarios to discovered bugs, automation issues, or system issues
This post describes installing [ReportPortal](https://reportportal.io/) in Kubernetes.
<cut />
### Requirements:
- Kubernetes (tested on version 1.18)
- 3 Kubernetes worker nodes with 2 CPU cores and 10 GB of RAM.
### Installing Kubernetes
The base guide for installing ReportPortal in Kubernetes: https://github.com/reportportal/kubernetes/tree/master/reportportal#cloud-computing-services-platform-installation
We will be installing on Managed Service for Kubernetes in Yandex Cloud.
#### Get the list of k8s nodes
Once your Kubernetes cluster is ready, get the list of k8s nodes.
```
kubectl get nodes
NAME STATUS ROLES AGE VERSION
cl1mikj7kfic31snr11h-ivow Ready <none> 2m13s v1.18.18
cl1mikj7kfic31snr11h-ubus Ready <none> 2m12s v1.18.18
cl1mikj7kfic31snr11h-ynax Ready <none> 95s v1.18.18
```
#### Apply Kubernetes labels to the nodes
```
kubectl label nodes <NODE-1> service=api
kubectl label nodes <NODE-2> service=rabbitmq
kubectl label nodes <NODE-3> service=db
```
Replace `<NODE-X>` with the name of your node.
```
kubectl label nodes cl1mikj7kfic31snr11h-ivow service=api
kubectl label nodes cl1mikj7kfic31snr11h-ubus service=rabbitmq
kubectl label nodes cl1mikj7kfic31snr11h-ynax service=db
```
#### Install ingress-nginx
```
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && helm repo update
helm install --atomic nginx-ingress ingress-nginx/ingress-nginx --version 3.36.0
```
#### Clone the reportportal repository
Clone the repository https://github.com/reportportal/kubernetes
```
git clone https://github.com/reportportal/kubernetes
cd kubernetes
```
### Installing Elasticsearch
Add the Elasticsearch helm repository
```
helm repo add elastic https://helm.elastic.co && helm repo update
```
Download the helm chart dependencies
```
helm dependency build ./reportportal/
```
The Elasticsearch install command:
```
helm install <elastic-release-name> ./reportportal/charts/elasticsearch-7.6.1.tgz
```
Replace `<elastic-release-name>` with a name of your choice for the helm release. Let's use `elasticsearch`.
Install Elasticsearch. By default, 3 nodes are installed.
```
helm install elasticsearch ./reportportal/charts/elasticsearch-7.6.1.tgz
```
To install and use a single Elasticsearch master node (for example, for demo or devel environments), create a dedicated yaml file `./reportportal/single-elasticsearch/value.yaml`
```
extraEnvs:
- name: discovery.type
value: single-node
- name: cluster.initial_master_nodes
value: ""
```
and run the command
```
helm install elasticsearch ./reportportal/charts/elasticsearch-7.6.1.tgz --set replicas=1 -f ./reportportal/single-elasticsearch/value.yaml
```
A [pull request](https://github.com/reportportal/kubernetes/pull/205) was submitted that adds a yaml file for installing a single-node Elasticsearch.
### Installing RabbitMQ
The RabbitMQ install command:
```
helm install <rabbitmq-release-name> --set auth.username=rabbitmq,auth.password=<rmq_password>,replicaCount=1 ./reportportal/charts/rabbitmq-7.5.6.tgz
```
Replace `<rabbitmq-release-name>` with a name of your choice for the helm release. Let's use `rabbitmq`.
Replace `<rmq_password>` with a password of your choice for RabbitMQ. Let's use `password`.
```
helm install rabbitmq --set auth.username=rabbitmq,auth.password=password,replicaCount=1 ./reportportal/charts/rabbitmq-7.5.6.tgz
```
Note:
- If the helm release-name is "rabbitmq", the address will be "rabbitmq.default.svc.cluster.local".
- If the helm release-name is "rabbit", the address will be "rabbit-rabbitmq.default.svc.cluster.local".
Wait until the pod reaches the running status
```
kubectl get pod rabbitmq-0
NAME READY STATUS RESTARTS AGE
rabbitmq-0 0/1 Running 0 60s
```
Configure the RabbitMQ memory threshold
```
kubectl exec -it rabbitmq-0 -- rabbitmqctl set_vm_memory_high_watermark 0.8
```
Edit `reportportal/values.yaml`
```
rabbitmq:
SecretName: ""
installdep:
enable: false
endpoint:
address: <rabbitmq-release-name>.default.svc.cluster.local
port: 5672
user: rabbitmq
apiport: 15672
apiuser: rabbitmq
```
Replace `<rabbitmq-release-name>` with the helm release name you chose. In my case it's `rabbitmq`.
The resulting fragment looks like this:
```
rabbitmq:
SecretName: ""
installdep:
enable: false
endpoint:
address: rabbitmq.default.svc.cluster.local
port: 5672
user: rabbitmq
apiport: 15672
apiuser: rabbitmq
```
### Installing PostgreSQL
The PostgreSQL install command:
```
helm install <postgresql-release-name> --set postgresqlUsername=rpuser,postgresqlPassword=<rpuser_password>,postgresqlDatabase=reportportal,postgresqlPostgresPassword=<postgres_password> -f ./reportportal/postgresql/values.yaml ./reportportal/charts/postgresql-10.9.4.tgz
```
Replace `<postgresql-release-name>` with a name of your choice for the helm release. Let's use `postgresql`.
Replace `<rpuser_password>` with a password of your choice for rpuser. Let's use `password`.
Replace `<postgres_password>` with a password of your choice for postgres. Let's use `password`.
```
helm install postgresql --set postgresqlUsername=rpuser,postgresqlPassword=password,postgresqlDatabase=reportportal,postgresqlPostgresPassword=password -f ./reportportal/postgresql/values.yaml ./reportportal/charts/postgresql-10.9.4.tgz
```
Edit `reportportal/values.yaml`
```
postgresql:
SecretName: ""
installdep:
enable: false
endpoint:
address: <postgresql-release-name>.default.svc.cluster.local
port: 5432
user: rpuser
dbName: reportportal
password:
```
Note:
- If the helm release-name is "postgresql", the address will be "postgresql.default.svc.cluster.local".
- If the helm release-name is "postgres", the address will be "postgres-postgresql.default.svc.cluster.local".
Replace `<postgresql-release-name>` with the helm release name you chose. In my case it's `postgresql`.
The resulting fragment looks like this:
```
postgresql:
SecretName: ""
installdep:
enable: false
endpoint:
address: postgresql.default.svc.cluster.local
port: 5432
user: rpuser
dbName: reportportal
password:
```
### Installing MinIO
The MinIO install command:
```
helm install <minio-release-name> --set accessKey.password=<your_minio_accesskey>,secretKey.password=<your_minio_secretkey>,persistence.size=5Gi ./reportportal/charts/minio-7.1.9.tgz
```
Replace `<minio-release-name>` with the helm release name you chose. In my case it's `minio`.
Replace `<your_minio_accesskey>` with an access key of your choice. Let's use `accesskey`.
Replace `<your_minio_secretkey>` with a secret key of your choice. Let's use `secretkey`.
```
helm install minio --set accessKey.password=accesskey,secretKey.password=secretkey,persistence.size=5Gi ./reportportal/charts/minio-7.1.9.tgz
```
Edit `reportportal/values.yaml`
```
minio:
secretName: ""
enabled: true
installdep:
enable: false
endpoint: http://<minio-release-name>.default.svc.cluster.local:9000
endpointshort: <minio-release-name>.default.svc.cluster.local:9000
region:
accesskey: <minio-accesskey>
secretkey: <minio-secretkey>
bucketPrefix:
defaultBucketName:
integrationSaltPath:
```
Replace `<minio-release-name>` with the helm release name. That's `minio`.
Replace `<minio-accesskey>` with the access key you chose. In my case it's `accesskey`.
Replace `<minio-secretkey>` with the secret key you chose. In my case it's `secretkey`.
The resulting fragment looks like this:
```
minio:
secretName: ""
enabled: true
installdep:
enable: false
endpoint: http://minio.default.svc.cluster.local:9000
endpointshort: minio.default.svc.cluster.local:9000
region:
accesskey: accesskey
secretkey: secretkey
bucketPrefix:
defaultBucketName:
integrationSaltPath:
```
#### Configuring Ingress for access to reportportal
Edit `reportportal/values.yaml`
```
ingress:
enable: true
# IF YOU HAVE SOME DOMAIN NAME SET INGRESS.USEDOMAINNAME to true
usedomainname: false
hosts:
- reportportal.k8.com
```
You need to change the domain `reportportal.k8.com` to your own domain.
Since I don't have a domain, but ingress-nginx is exposed externally, I can use the nip.io, xip.io, or sslip.io services.
Set the domain based on your public IP address.
Get the external IP of ingress-nginx
```
kubectl get svc nginx-ingress-ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-ingress-ingress-nginx-controller LoadBalancer 10.96.165.32 178.154.210.20 80:31767/TCP,443:31456/TCP 4m56s
```
The external IP address is 178.154.210.20.
So we can use the domain reportportal.178.154.210.20.sslip.io.
The resulting fragment looks like this:
```
ingress:
enable: true
# IF YOU HAVE SOME DOMAIN NAME SET INGRESS.USEDOMAINNAME to true
usedomainname: false
hosts:
- reportportal.178.154.210.20.sslip.io
```
#### Installing the reportportal helm chart
Package the reportportal helm chart
```
helm package ./reportportal/
```
Install reportportal
```
helm install <reportportal-release-name> --set postgresql.SecretName=<postgresql-release-name>-postgresql,rabbitmq.SecretName=<rabbitmq-release-name>-rabbitmq,minio.secretName=<minio-release-name> ./reportportal-5.5.0.tgz
```
Replace `<reportportal-release-name>` with a name of your choice for the reportportal release. Let's use `reportportal`.
Replace `<postgresql-release-name>` with the postgresql helm release name. That's `postgresql`.
Replace `<rabbitmq-release-name>` with the rabbitmq helm release name. That's `rabbitmq`.
```
helm install reportportal --set postgresql.SecretName=postgresql,rabbitmq.SecretName=rabbitmq,minio.secretName=minio ./reportportal-5.5.0.tgz
```
Logins/passwords for accessing reportportal:
- Default User: default\1q2w3e
- Administrator: superadmin\erebus
Screenshots:


Known issues:
[Memory cgroup out of memory: Killed process (uwsgi)](https://github.com/reportportal/kubernetes/issues/203)
```
[ 7063.247407] Memory cgroup out of memory: Killed process 63214 (uwsgi) total-vm:3149468kB, anon-rss:164992kB, file-rss:52176kB, shmem-rss:92kB, UID:0 pgtables:1372kB oom_score_adj:986
[ 7063.263325] oom_reaper: reaped process 63214 (uwsgi), now anon-rss:0kB, file-rss:0kB, shmem-rss:92kB
[ 7093.543707] uwsgi invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=986
[ 7093.543711] CPU: 4 PID: 63635 Comm: uwsgi Not tainted 5.4.0-77-generic #86-Ubuntu
...
[ 7093.544021] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[ 7093.544025] [ 18904] 0 18904 255 1 32768 0 -998 pause
[ 7093.544028] [ 20358] 0 20358 13850 3787 98304 0 986 uwsgi
[ 7093.544030] [ 20425] 0 20425 15899 3877 114688 0 986 uwsgi
[ 7093.544033] [ 62892] 0 62892 787367 54312 1400832 0 986 uwsgi
[ 7093.544039] [ 63384] 0 63384 787367 54314 1400832 0 986 uwsgi
[ 7093.544041] [ 63635] 0 63635 328986 37869 1069056 0 986 uwsgi
[ 7093.544046] [ 63794] 0 63794 202492 24945 753664 0 986 uwsgi
[ 7093.544048] oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=7c29217c3af6fef0b7bcc4150d2502f684ef8f83d2ea377411c9a2265e8d4843,mems_allowed=0,oom_memcg=/kubepods/burstable/pod05347bee-e95c-4857-8435-410a406f1f7a,task_memcg=/kubepods/burstable/pod05347bee-e95c-4857-8435-410a406f1f7a/7c29217c3af6fef0b7bcc4150d2502f684ef8f83d2ea377411c9a2265e8d4843,task=uwsgi,pid=63384,uid=0
[ 7093.544177] Memory cgroup out of memory: Killed process 63384 (uwsgi) total-vm:3149468kB, anon-rss:164988kB, file-rss:52176kB, shmem-rss:92kB, UID:0 pgtables:1368kB oom_score_adj:986
```
This issue is fixed by pull request https://github.com/reportportal/kubernetes/pull/208
# Java Classes
## Class 01
* Basics of a Java Class
* What is a class?
* Breaking down `public static void main(String[] args)`
* Running code in the terminal
* Running a .java file
* Running a .jar file
* Passing parameters into the app
* Hello World
* For Loops
* Java example
* Python example
## Class 02
* Instances
* Inheritance
* Functions
* Return
* Void
## Class 03
<Class Notes Under Construction>
| 16.461538 | 56 | 0.686916 | eng_Latn | 0.933246 |
6f2b438c9bee3273ab65627a0ccf5f06c67551df | 614 | md | Markdown | in/NOTES.static.md | unverbuggt/tinc-lan-party | 12c119e0edcb1a13e45f2a180f24fe63bacabb84 | [
"MIT"
] | 1 | 2021-09-27T08:08:30.000Z | 2021-09-27T08:08:30.000Z | in/NOTES.static.md | unverbuggt/tinc-lan-party | 12c119e0edcb1a13e45f2a180f24fe63bacabb84 | [
"MIT"
] | null | null | null | in/NOTES.static.md | unverbuggt/tinc-lan-party | 12c119e0edcb1a13e45f2a180f24fe63bacabb84 | [
"MIT"
] | null | null | null | % Tinc-VPN LAN party notes
## Build tinc from git
Install packages (Ubuntu 20.04):
```
sudo apt install git build-essential autoconf texinfo \
zlib1g-dev liblzo2-dev libssl-dev libncurses-dev \
libreadline-dev
```
Build and install:
```
git clone https://github.com/gsliepen/tinc.git
cd tinc
autoreconf -fsi
./configure --disable-legacy-protocol --with-systemd --sysconfdir=/etc --localstatedir=/var
make
sudo make install-strip
```
## Extract static configuration
```
sudo tar xzf node.tar.gz -C /etc/tinc/
sudo systemctl enable tinc@__NETWORK_NAME__
sudo systemctl start tinc@__NETWORK_NAME__
```
| 19.806452 | 91 | 0.7443 | kor_Hang | 0.282617 |
6f2b6315020cb77e00244c4b2c38b54bbc7d45cf | 45 | md | Markdown | README.md | exatasmente/recrutalentos | 8358b42aba6974bee6ffa3e83695c51e2caf7503 | [
"MIT"
] | null | null | null | README.md | exatasmente/recrutalentos | 8358b42aba6974bee6ffa3e83695c51e2caf7503 | [
"MIT"
] | null | null | null | README.md | exatasmente/recrutalentos | 8358b42aba6974bee6ffa3e83695c51e2caf7503 | [
"MIT"
] | 2 | 2019-06-14T00:24:08.000Z | 2019-11-18T13:11:40.000Z | # Recrutalentos
Feito com ♥ em Python
| 15 | 28 | 0.777778 | por_Latn | 0.999806 |
6f2b89d3075c046c2afb0e17b10424b025df5b44 | 10,453 | md | Markdown | _posts/2016-12-04-prague-and-geres.md | antoniocapelo/antoniocapelo.github.com | 4d68ba2997525274658b043c501160659020fa4d | [
"MIT"
] | null | null | null | _posts/2016-12-04-prague-and-geres.md | antoniocapelo/antoniocapelo.github.com | 4d68ba2997525274658b043c501160659020fa4d | [
"MIT"
] | null | null | null | _posts/2016-12-04-prague-and-geres.md | antoniocapelo/antoniocapelo.github.com | 4d68ba2997525274658b043c501160659020fa4d | [
"MIT"
] | null | null | null | ---
title: Prague and Gerês
layout: post
summary: Some pics from the trip to Prague and a Trail in Gerês. Including getting kind of lost amongst goats
image: "/img/prague-and-geres/prague-and-geres_cover.jpg"
category: photography
tags:
- prague
- czech republic
- portugal
- gerês
- parque nacional peneda gerês
- trail
- hiking
- running
- analog
---
Some pics from the trip to Prague and a [Trail in Gerês](https://www.strava.com/activities/775099803). Including getting kind of lost amongst goats.
Featuring some shots taken with my trusty Canonet QL-17 (Prague ones) and the all-around Olympus mju-ii (Trail ones). Oh, and my iPhone SE too - the not-so-grainy photos.
## Prague, Baby
<a href="https://www.flickr.com/photos/acapelo/30535961614/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5762/30535961614_f374a18e57_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/30535948004/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5458/30535948004_b56c92927f_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31213845702/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5527/31213845702_5197a1dc20_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31358130065/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5590/31358130065_3b9caae3ef_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/30550250263/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5783/30550250263_dac3511ffd_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31243096591/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5587/31243096591_132c15a427_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31213770972/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5603/31213770972_1fa791d02c_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31321669356/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5633/31321669356_10be910d4e_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31321643746/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5498/31321643746_bc3bb9d40d_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31358010575/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5472/31358010575_ebfb9a9f63_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/30550131063/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5727/30550131063_46069b4e23_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31242974161/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5831/31242974161_96709a7118_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31213639492/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5708/31213639492_e1f85f9c25_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/30550039033/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5668/30550039033_eab794f3d9_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31242890801/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5530/31242890801_cff9d38334_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/30550000693/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5811/30550000693_06e82f9725_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/30988522460/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5485/30988522460_1932f5e867_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31321458526/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5836/31321458526_f3e7e40f74_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31357801915/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5666/31357801915_83477fc185_b.jpg" alt="Prague, Baby"></a>
<a href="https://www.flickr.com/photos/acapelo/31242793871/in/photostream" target="_blank" title="Prague, Baby"><img src="https://farm6.staticflickr.com/5596/31242793871_d374e968c7_b.jpg" alt="Prague, Baby"></a>
## Trail in Gerês
### iPhoneSE shots
<a href="https://www.flickr.com/photos/acapelo/31054705285/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5525/31054705285_390c8d7c65_b.jpg" alt="Trail Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31018948366/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5463/31018948366_62ea256626_b.jpg" alt="Trail Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/30940589221/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5601/30940589221_34dd32b5bf_b.jpg" alt="Trail Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31054706415/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5330/31054706415_6198bf6d10_b.jpg" alt="Trail Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31054706605/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5781/31054706605_bf9a97f87e_b.jpg" alt="Trail Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/30912431032/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5612/30912431032_8f61117384_b.jpg" alt="Trail Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31018949466/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5645/31018949466_13728ba007_b.jpg" alt="Trail Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31018949506/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5630/31018949506_fc2b207e1d_b.jpg" alt="Trail Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/30940641461/in/photostream" target="_blank" title="Trail Gerês"><img src="https://farm6.staticflickr.com/5569/30940641461_745b0fb75f_b.jpg" alt="Trail Gerês"></a>
### Olympus mju-ii shots
<a href="https://www.flickr.com/photos/acapelo/31372408696/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5788/31372408696_a75dbb28ac_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31039374700/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5509/31039374700_34b110bd8a_b.jpg" alt="Trail - Gerês"></a>
Really liked the colors on this one
<a href="https://www.flickr.com/photos/acapelo/31293670191/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5617/31293670191_35bdb6ac4e_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31264051452/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5594/31264051452_de4a5e15e6_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31293524151/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5613/31293524151_ee1d4eb4a0_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31263947202/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5681/31263947202_196ef7bff4_b.jpg" alt="Trail - Gerês"></a>
Goats everywhere
<a href="https://www.flickr.com/photos/acapelo/30601098223/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5703/30601098223_a5101839a4_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31039189830/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5548/31039189830_cb18b97324_b.jpg" alt="Trail - Gerês"></a>
Top
<a href="https://www.flickr.com/photos/acapelo/31372157936/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5512/31372157936_40d747e964_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31263827032/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5477/31263827032_0e73ae7a08_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31263808242/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5686/31263808242_277b5a4298_b.jpg" alt="Trail - Gerês"></a>
Steeps++
<a href="https://www.flickr.com/photos/acapelo/31293391781/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5549/31293391781_fdb3446b55_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/31408145375/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5631/31408145375_31b97746a1_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/30587009224/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5324/30587009224_5904840047_b.jpg" alt="Trail - Gerês"></a>
^ Meik
<a href="https://www.flickr.com/photos/acapelo/31039029380/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5337/31039029380_2a4049a986_b.jpg" alt="Trail - Gerês"></a>
<a href="https://www.flickr.com/photos/acapelo/30586978714/in/photostream" target="_blank" title="Trail - Gerês"><img src="https://farm6.staticflickr.com/5833/30586978714_d3c9f818c0_b.jpg" alt="Trail - Gerês"></a>
Cheers,
*A. Capelo*
| 78.593985 | 213 | 0.757964 | yue_Hant | 0.100649 |
6f2c07451c8cbf3c4b1bf51d267b6ea212ee433c | 403 | markdown | Markdown | cv.markdown | dhutchison/davidhutchison.scot | eefee9ab0b2ac22a2b81d88b61f34ddc2086e85b | [
"MIT"
] | null | null | null | cv.markdown | dhutchison/davidhutchison.scot | eefee9ab0b2ac22a2b81d88b61f34ddc2086e85b | [
"MIT"
] | null | null | null | cv.markdown | dhutchison/davidhutchison.scot | eefee9ab0b2ac22a2b81d88b61f34ddc2086e85b | [
"MIT"
] | null | null | null | ---
keywords: [Resume, software developer, microsoft sql server, jasperreports, software development,
business, information technology, html, java, Glasgow, Lanarkshire, David Hutchison, J2EE, Swing, Strathclyde University, SEEMiS, SEEMiS Group]
lastUpdated: 2019-03-21 20:45:00
layout: profile
title: David Hutchison | Profile
summary: The Resume of David Hutchison.
person: david
permalink: /cv/
--- | 40.3 | 145 | 0.781638 | eng_Latn | 0.29035 |
6f2c3ebdcdcbb3e7ea31601ec068e7c9109bd39c | 77 | md | Markdown | README.md | hrj/bookmarklet-grab-images | 0e801aa7ac24deec52438f98b8a5ad3e992ddf02 | [
"Apache-2.0"
] | 2 | 2017-05-23T08:36:58.000Z | 2021-06-19T17:51:01.000Z | README.md | hrj/bookmarklet-grab-images | 0e801aa7ac24deec52438f98b8a5ad3e992ddf02 | [
"Apache-2.0"
] | null | null | null | README.md | hrj/bookmarklet-grab-images | 0e801aa7ac24deec52438f98b8a5ad3e992ddf02 | [
"Apache-2.0"
] | null | null | null | bookmarklet-grab-images
=======================
A bookmarklet to grab images | 19.25 | 28 | 0.584416 | eng_Latn | 0.186309 |
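The README doesn't show the code, but the core of a "grab images" bookmarklet is easy to sketch. The snippet below is illustrative, not taken from this repository — `collectImageUrls` is a made-up helper name — and shows the deduplication step a bookmarklet would run over `document.images`:

```javascript
// Hypothetical sketch of a "grab images" bookmarklet's core logic.
// Given an array-like of image elements (e.g. document.images),
// return their src URLs, deduplicated, in document order.
function collectImageUrls(images) {
  var seen = {};
  var urls = [];
  for (var i = 0; i < images.length; i++) {
    var src = images[i].src;
    if (src && !seen[src]) {
      seen[src] = true;
      urls.push(src);
    }
  }
  return urls;
}

// In a real bookmarklet this would be wrapped in a javascript: URL, e.g.
// javascript:(function(){ var u = collectImageUrls(document.images); /* render u */ })();
```

To ship it as a bookmarklet, the whole function is minified into a single `javascript:` URL saved as a bookmark.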
6f2c779b6ee992d75edaadccbc5df7e09995a39d | 68 | md | Markdown | README.md | kbarbary/talks | 632296ca81d31f1186c8f39b5ea76e04f09d9434 | [
"MIT"
] | 2 | 2019-07-22T08:44:30.000Z | 2021-05-28T02:30:34.000Z | README.md | kbarbary/talks | 632296ca81d31f1186c8f39b5ea76e04f09d9434 | [
"MIT"
] | null | null | null | README.md | kbarbary/talks | 632296ca81d31f1186c8f39b5ea76e04f09d9434 | [
"MIT"
] | 1 | 2021-09-29T08:20:55.000Z | 2021-09-29T08:20:55.000Z | Talks
=====
Some of my talks, usually involving Jupyter notebooks.
| 13.6 | 54 | 0.735294 | eng_Latn | 0.995997 |
6f2da620beb686925a4b3525e8ced5278ab038f6 | 136 | md | Markdown | README.md | mansha99/jwt-laravel-angular-api | d9ac17440fd67b87dba944e001d6242b64a269c8 | [
"MIT"
] | null | null | null | README.md | mansha99/jwt-laravel-angular-api | d9ac17440fd67b87dba944e001d6242b64a269c8 | [
"MIT"
] | null | null | null | README.md | mansha99/jwt-laravel-angular-api | d9ac17440fd67b87dba944e001d6242b64a269c8 | [
"MIT"
] | null | null | null | # jwt-laravel-angular-api
A complete tiny app with a RESTful API, featuring server-side validation, pagination, and a resource controller
| 45.333333 | 109 | 0.838235 | kor_Hang | 0.500507 |
6f2e272c59f4aa44b97cbfd4352d62ba29ebe7b2 | 779 | md | Markdown | README.md | Azulnegocios/example-lib | 7d1c5786777aad954d27ffcb533701a56d233918 | [
"MIT"
] | 1 | 2016-09-06T05:31:33.000Z | 2016-09-06T05:31:33.000Z | README.md | Azulnegocios/example-lib | 7d1c5786777aad954d27ffcb533701a56d233918 | [
"MIT"
] | null | null | null | README.md | Azulnegocios/example-lib | 7d1c5786777aad954d27ffcb533701a56d233918 | [
"MIT"
] | null | null | null | # Example of how to create your own jQuery-style library
## Usage
#### Example usage file: ./test/index.html
> Importing the lib
```
<script src="../azulnegocios.js" charset="utf-8"></script>
<script type="text/javascript">
//Ways to use it
/*using functions added to the prototype property*/
var azulnegocios = new AzulNegocios();
azulnegocios.alert("Operador new");
/*end*/
/*using functions added to the prototype property without needing the new operator*/
AzulNegocios().alert("Sem operador new");
/*end*/
/*using functions added to the __proto__ property, without needing the new operator*/
AzulNegocios.alert("Sem operador new e usando o __proto__");
/*end*/
</script>
```
| 35.409091 | 100 | 0.69448 | por_Latn | 0.972238 |
6f2ff27ef0eafe00d809dca1423b9989a1748f0e | 18,794 | md | Markdown | articles/service-fabric/service-fabric-sfctl-replica.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | 2 | 2019-08-10T02:23:39.000Z | 2019-08-10T02:23:40.000Z | articles/service-fabric/service-fabric-sfctl-replica.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/service-fabric/service-fabric-sfctl-replica.md | klmnden/azure-docs.tr-tr | 8e1ac7aa3bb717cd24e1bc2612e745aa9d7aa6b6 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure Service Fabric CLI - sfctl çoğaltma | Microsoft Docs
description: Service Fabric CLI'sını sfctl çoğaltma komutlarını açıklamaktadır.
services: service-fabric
documentationcenter: na
author: Christina-Kang
manager: chackdan
editor: ''
ms.assetid: ''
ms.service: service-fabric
ms.devlang: cli
ms.topic: reference
ms.tgt_pltfrm: na
ms.workload: multiple
ms.date: 12/06/2018
ms.author: bikang
ms.openlocfilehash: d0a7199ff0e9cb17c3fbc179a9b37a6620f521f9
ms.sourcegitcommit: 41ca82b5f95d2e07b0c7f9025b912daf0ab21909
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 06/13/2019
ms.locfileid: "60544679"
---
# <a name="sfctl-replica"></a>sfctl replica
Hizmet bölümlere ait çoğaltmaları yönetin.
## <a name="commands"></a>Komutlar
|Komut|Açıklama|
| --- | --- |
| deployed | Bir Service Fabric dağıtıldığını çoğaltma ayrıntılarını alır. |
| deployed-list | Bir Service Fabric dağıtıldığını çoğaltmaların listesini alır. |
| health | Bir Service Fabric durum bilgisi olan hizmet çoğaltma veya durum bilgisi olmayan hizmet durumunu alır. |
| info | Bir Service Fabric bölümünün bir çoğaltma bilgilerini alır. |
| list | Bir Service Fabric hizmeti bölüm çoğaltmaları hakkında daha fazla bilgi alır. |
| Kaldır | Bir düğümde çalışan bir hizmet çoğaltmaya kaldırır. |
| report-health | Service Fabric çoğaltma üzerindeki bir sistem durumu raporu gönderir. |
| restart | Bir düğüm üzerinde çalışan kalıcı bir hizmet hizmeti çoğaltmasını yeniden başlatır. |
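Örneğin, aşağıdaki komut satırları tipik bir kullanımı gösterir (bölüm, çoğaltma ve düğüm kimlikleri varsayımsal örnek değerlerdir; kendi kümenizdeki gerçek değerlerle değiştirin):

```sh
# Bir bölümün çoğaltmalarını listeleme
sfctl replica list --partition-id 2dfeb6c6-58d4-4f08-a29a-57b3eb113158

# Tek bir çoğaltmanın ayrıntılarını alma
sfctl replica info --partition-id 2dfeb6c6-58d4-4f08-a29a-57b3eb113158 --replica-id 132429154475414363

# Bir düğümde çalışan kalıcı bir çoğaltmayı yeniden başlatma
sfctl replica restart --node-name _Node_0 --partition-id 2dfeb6c6-58d4-4f08-a29a-57b3eb113158 --replica-id 132429154475414363
```

Her komutun kabul ettiği bağımsız değişkenler aşağıdaki ilgili bölümlerde açıklanmıştır.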
## <a name="sfctl-replica-deployed"></a>dağıtılan sfctl çoğaltma
Bir Service Fabric dağıtıldığını çoğaltma ayrıntılarını alır.
Bir Service Fabric dağıtıldığını çoğaltma ayrıntılarını alır. Hizmet türü, hizmet adı, geçerli hizmet işleminin bilgiler içerir, geçerli hizmet işleminin başlangıç tarih saat, bölüm kimliği, yineleme/örnek kimliği, bildirilen yükleme ve diğer bilgiler.
### <a name="arguments"></a>Bağımsız Değişkenler
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --[gerekli] düğüm adı | Düğümün adı. |
| --bölüm kimliği [gerekli] | Bölüm kimliği. |
| --Çoğaltma kimliği [gerekli] | Çoğaltma tanımlayıcısı. |
| --zaman aşımı -t | Sunucu zaman aşımı saniye. Varsayılan\: 60. |
### <a name="global-arguments"></a>Genel bağımsız değişkenleri
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --hata ayıklama | Tüm hata ayıklama günlüklerini göster için günlüğün ayrıntı düzeyini artırır. |
| ---h Yardım | Bu yardım iletisini ve çıkış gösterir. |
| --Çıktı -o | Çıkış biçimi. İzin verilen değerler\: json, jsonc, tablo, tsv. Varsayılan\: json. |
| --Sorgu | JMESPath sorgu dizesi. HTTP bkz\://jmespath.org/ daha fazla bilgi ve örnekler. |
| --verbose | Günlüğün ayrıntı düzeyini artırır. Kullanımı--tam hata ayıklama günlükleri için hata ayıklama. |
## <a name="sfctl-replica-deployed-list"></a>dağıtılan sfctl çoğaltma-listesi
Bir Service Fabric dağıtıldığını çoğaltmaların listesini alır.
Bir Service Fabric dağıtıldığını çoğaltmaları hakkında daha fazla bilgi içeren listeyi alır. Bilgiler, bölüm kimliği, Yineleme Kimliği, adı Hizmet türü ve diğer bilgiler, hizmet adını, çoğaltma durumunu içerir. Bu parametreler için belirtilen değerlere eşleşen dağıtılan çoğaltmaları hakkında daha fazla bilgi almak için PartitionID veya ServiceManifestName sorgu parametrelerini kullanın.
### <a name="arguments"></a>Bağımsız Değişkenler
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --Uygulama-kimliği [gerekli] | Uygulama kimliği. Bu genellikle uygulamayı olmadan tam adı, ' fabric\:' URI düzeni. Sürüm 6. 0 ' başlayarak, hiyerarşik adları ile ayrılmış "\~" karakter. Örneğin, uygulama adı ise "fabric\:/myapp/app1", uygulama kimliği olur "myapp\~app1" 6.0 + ve "myapp/app1" önceki sürümlerinde. |
| --[gerekli] düğüm adı | Düğümün adı. |
| --bölüm kimliği | Bölüm kimliği. |
| --Hizmet bildiriminin adı | Bir Service Fabric kümesindeki bir uygulama türünün bir parçası olarak kayıtlı bir hizmet bildiriminin adı. |
| --zaman aşımı -t | Sunucu zaman aşımı saniye. Varsayılan\: 60. |
### <a name="global-arguments"></a>Genel bağımsız değişkenleri
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --hata ayıklama | Tüm hata ayıklama günlüklerini göster için günlüğün ayrıntı düzeyini artırır. |
| ---h Yardım | Bu yardım iletisini ve çıkış gösterir. |
| --Çıktı -o | Çıkış biçimi. İzin verilen değerler\: json, jsonc, tablo, tsv. Varsayılan\: json. |
| --Sorgu | JMESPath sorgu dizesi. HTTP bkz\://jmespath.org/ daha fazla bilgi ve örnekler. |
| --verbose | Günlüğün ayrıntı düzeyini artırır. Kullanımı--tam hata ayıklama günlükleri için hata ayıklama. |
## <a name="sfctl-replica-health"></a>sfctl çoğaltma durumu
Bir Service Fabric durum bilgisi olan hizmet çoğaltma veya durum bilgisi olmayan hizmet durumunu alır.
Bir Service Fabric çoğaltma durumunu alır. Sistem durumu olaylarını sistem durumuna bağlıdır çoğaltmadaki bildirilen koleksiyonunu filtrelemek için EventsHealthStateFilter kullanın.
### <a name="arguments"></a>Bağımsız Değişkenler
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --bölüm kimliği [gerekli] | Bölüm kimliği. |
| --Çoğaltma kimliği [gerekli] | Çoğaltma tanımlayıcısı. |
| --events-health-state-filter | Döndürülen sistem durumu olayı nesnelerinin koleksiyonunu sistem durumuna göre filtrelemeye olanak tanır. Bu parametre için olası değerler aşağıdaki sistem durumlarının bir tamsayı değeri içerir. Yalnızca filtreyle eşleşen olaylar döndürülür. Tüm olaylar, toplanan sistem durumunu değerlendirmek için kullanılır. Belirtilmezse, tüm girişleri döndürülür. Durum değerleri numaralandırma bayrağı tabanlı olduğundan, değer Bitsel 'Veya' işlecini kullanarak elde ettiğiniz bu değerlerin bir birleşimi olabilir. 6 sağlanan değer, örneğin, ardından tüm olayları Tamam (2) ve (4) uyarı HealthState değeriyle döndürülür. <br> -Default - varsayılan değer. Tüm HealthState eşleşir. Değer sıfırdır. <br> -Hiçbiri - herhangi bir HealthState değer eşleşmeyen filtreleyin. Belirli bir koleksiyon durumlarının sonuç döndürmek için kullanılır. Değer 1'dir. <br> -Tamam - eşleşme HealthState değeriyle Tamam giriş filtreleyin. Değeri 2'dir. <br> -Uyarı - filtre HealthState girişle eşleşir uyarı değeri. Değer 4'tür. <br> -Hata - giriş hatası HealthState değeri ile eşleşen filtre. Değer 8'dir. <br> -All - giriş herhangi bir HealthState değeri ile eşleşen filtreleyin. Değer 65535'tir. |
| --zaman aşımı -t | Sunucu zaman aşımı saniye. Varsayılan\: 60. |
### <a name="global-arguments"></a>Genel bağımsız değişkenleri
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --hata ayıklama | Tüm hata ayıklama günlüklerini göster için günlüğün ayrıntı düzeyini artırır. |
| ---h Yardım | Bu yardım iletisini ve çıkış gösterir. |
| --Çıktı -o | Çıkış biçimi. İzin verilen değerler\: json, jsonc, tablo, tsv. Varsayılan\: json. |
| --Sorgu | JMESPath sorgu dizesi. HTTP bkz\://jmespath.org/ daha fazla bilgi ve örnekler. |
| --verbose | Günlüğün ayrıntı düzeyini artırır. Kullanımı--tam hata ayıklama günlükleri için hata ayıklama. |
## <a name="sfctl-replica-info"></a>sfctl çoğaltma bilgileri
Bir Service Fabric bölümünün bir çoğaltma bilgilerini alır.
Yanıt kimliği, rol, durum, sistem durumu, düğüm adı, çalışma süresi ve çoğaltma ilgili diğer ayrıntıları içerir.
### <a name="arguments"></a>Bağımsız Değişkenler
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --bölüm kimliği [gerekli] | Bölüm kimliği. |
| --Çoğaltma kimliği [gerekli] | Çoğaltma tanımlayıcısı. |
| --zaman aşımı -t | Sunucu zaman aşımı saniye. Varsayılan\: 60. |
### <a name="global-arguments"></a>Genel bağımsız değişkenleri
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --hata ayıklama | Tüm hata ayıklama günlüklerini göster için günlüğün ayrıntı düzeyini artırır. |
| ---h Yardım | Bu yardım iletisini ve çıkış gösterir. |
| --Çıktı -o | Çıkış biçimi. İzin verilen değerler\: json, jsonc, tablo, tsv. Varsayılan\: json. |
| --Sorgu | JMESPath sorgu dizesi. HTTP bkz\://jmespath.org/ daha fazla bilgi ve örnekler. |
| --verbose | Günlüğün ayrıntı düzeyini artırır. Kullanımı--tam hata ayıklama günlükleri için hata ayıklama. |
## <a name="sfctl-replica-list"></a>sfctl çoğaltma listesi
Bir Service Fabric hizmeti bölüm çoğaltmaları hakkında daha fazla bilgi alır.
Belirtilen bölüm çoğaltmaları hakkında daha fazla bilgi GetReplicas uç noktasını döndürür. Yanıt kimliği, rol, durum, sistem durumu, düğüm adı, çalışma süresi ve çoğaltma ilgili diğer ayrıntıları içerir.
### <a name="arguments"></a>Bağımsız Değişkenler
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --bölüm kimliği [gerekli] | Bölüm kimliği. |
| --continuation-token | Devamlılık belirteci parametresi, sonraki sonuç kümesini almak için kullanılır. Sistem sonuçlardan tek bir yanıtta uymayan bir devamlılık belirteci boş olmayan bir değer ile API yanıt olarak dahil edilir. Bu değer geçirilen zaman sonraki API çağrısı, API, sonraki sonuç kümesini döndürür. Daha fazla sonuç varsa, devamlılık belirteci bir değer içermiyor. Bu parametrenin değeri, URL kodlanmış olmamalıdır. |
| --zaman aşımı -t | Sunucu zaman aşımı saniye. Varsayılan\: 60. |
### <a name="global-arguments"></a>Genel bağımsız değişkenleri
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --hata ayıklama | Tüm hata ayıklama günlüklerini göster için günlüğün ayrıntı düzeyini artırır. |
| ---h Yardım | Bu yardım iletisini ve çıkış gösterir. |
| --Çıktı -o | Çıkış biçimi. İzin verilen değerler\: json, jsonc, tablo, tsv. Varsayılan\: json. |
| --Sorgu | JMESPath sorgu dizesi. HTTP bkz\://jmespath.org/ daha fazla bilgi ve örnekler. |
| --verbose | Günlüğün ayrıntı düzeyini artırır. Kullanımı--tam hata ayıklama günlükleri için hata ayıklama. |
## <a name="sfctl-replica-remove"></a>sfctl çoğaltma Kaldır
Bir düğümde çalışan bir hizmet çoğaltmaya kaldırır.
Bu API, bir Service Fabric kümesinden bir çoğaltma kaldırarak bir Service Fabric çoğaltma hatası benzetimini yapar. Kaldırma çoğaltma kapatır, çoğaltma rolü yok ve ardından kaldırır tüm geçişler kümeden çoğaltmanın durumu bilgileri. Bu API, çoğaltma durumu kaldırma yolu test eder ve istemci API'leri aracılığıyla rapor hata kalıcı yolu benzetimini yapar. Bu API kullanıldığında gerçekleştirilen herhangi bir güvenlik denetimi uyarı - burada var. Bu API'nin yanlış kullanımı durum bilgisi olan hizmetler için veri kaybına neden olabilir. Ayrıca, forceRemove bayrağı aynı işlemde barındırılan tüm çoğaltmaları etkiler.
### <a name="arguments"></a>Bağımsız Değişkenler
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --[gerekli] düğüm adı | Düğümün adı. |
| --bölüm kimliği [gerekli] | Bölüm kimliği. |
| --Çoğaltma kimliği [gerekli] | Çoğaltma tanımlayıcısı. |
| --force-Kaldır | Bir Service Fabric uygulaması veya hizmeti kapatılmasını sırasıyla gitmeden zorla kaldırın. Bu parametre, zorla bir uygulamayı silmek için kullanılabilir veya hizmet için hangi silme zaman aşımına uğramadan hizmet kodda engelleyen sorunları nedeniyle normal kopyaları kapatın. |
| --zaman aşımı -t | Sunucu zaman aşımı saniye. Varsayılan\: 60. |
### <a name="global-arguments"></a>Genel bağımsız değişkenleri
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --hata ayıklama | Tüm hata ayıklama günlüklerini göster için günlüğün ayrıntı düzeyini artırır. |
| ---h Yardım | Bu yardım iletisini ve çıkış gösterir. |
| --Çıktı -o | Çıkış biçimi. İzin verilen değerler\: json, jsonc, tablo, tsv. Varsayılan\: json. |
| --Sorgu | JMESPath sorgu dizesi. HTTP bkz\://jmespath.org/ daha fazla bilgi ve örnekler. |
| --verbose | Günlüğün ayrıntı düzeyini artırır. Kullanımı--tam hata ayıklama günlükleri için hata ayıklama. |
## <a name="sfctl-replica-report-health"></a>sfctl çoğaltma durumu-
Service Fabric çoğaltma üzerindeki bir sistem durumu raporu gönderir.
Belirtilen Service Fabric çoğaltma sistem durumunu raporlar. Rapor üzerinde bildirilen özellik ve sistem durumu raporu kaynağı hakkındaki bilgileri içermelidir. Rapor, bir Service Fabric ağ geçidine ileten sistem durumu deposu için çoğaltma gönderilir. Rapor ağ geçidi tarafından kabul edilen, ancak sistem durumu deposu tarafından ek doğrulama sonrasında reddedilen. Örneğin, sistem durumu deposu raporu eski sıra numarası gibi geçersiz bir parametre nedeniyle reddedebilir. Çalışma raporu health store içinde uygulanmış olup olmadığını görmek için çoğaltma durumunu ve rapor HealthEvents bölümde görünen onay alın.
### <a name="arguments"></a>Bağımsız Değişkenler
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --[gerekli] sistem durumu özelliği | Sistem durumu bilgileri özelliği. <br><br> Bir varlık sistem durumu raporlarının farklı özellikler için sahip olabilir. , Bir dize ve rapor tetikleyen durumu koşulu kategorilere ayırmak muhabir esnekliğini tanımak için olmayan bir sabit numaralandırma özelliğidir. Örneğin, "AvailableDisk" özelliği, düğüm üzerinde rapor için bir Raporlayıcı SourceId "LocalWatchdog" ile bir düğümde, kullanılabilir disk durumunu izleyebilirsiniz. Bu özellik "Bağlantı" aynı düğümde raporlamak için aynı muhabir düğüm bağlantısı izleyebilirsiniz. Health store içinde bu raporları belirtilen düğüm için ayrı bir sistem durumu olayları olarak kabul edilir. SourceId birlikte özelliği sistem durumu bilgileri benzersiz olarak tanımlar. |
| --[gerekli] sistem durumu | Olası değerler şunlardır\: 'Geçersiz', 'Tamam', 'Warning', 'Error', 'Bilinmeyen'. |
| --bölüm kimliği [gerekli] | Bölüm kimliği. |
| --Çoğaltma kimliği [gerekli] | Bölüm kimliği. |
| --Kaynak Kimliği [gerekli] | Kaynak adı, sistem durumu bilgileri oluşturulan izleme/istemci/sistem bileşeni belirtir. |
| --açıklaması | Sistem durumu bilgileri açıklaması. <br><br> Bu, insan tarafından okunabilir rapor bilgilerini eklemek için kullanılan serbest metin temsil eder. Açıklama maksimum dize uzunluğu 4096 karakter olabilir. Sağlanan dize uzun olduğunda otomatik olarak kesilir. Kesirli kısmı, bir işaretçi "[kesildi]" açıklama son karakterleri içeren ve toplam dize boyutu 4096 karakter. İşaretleyici varlığı kullanıcılara bu kesme gösterir oluştu. Kesirli kısmı, açıklama orijinal dizeden küçüktür 4096 karakter olduğuna dikkat edin. |
| --hemen | Raporun hemen gönderilmesi gerekip gerekmediğini gösteren bir bayrak. <br><br> Sistem Durumu raporu, Service Fabric için sistem durumu deposu ileten uygulama ağ geçidi için gönderilir. Hemen ayarlanmışsa true, raporun hemen sistem durumu deposu, HTTP ağ geçidi uygulaması kullanarak doku istemci ayarlarına bakılmaksızın HTTP ağ geçidi'ndeki gönderilir. Bu, olabildiğince çabuk gönderilmesi gereken kritik raporlar için kullanışlıdır. Zamanlama ve diğer koşullara bağlı olarak, rapor gönderme yine de, örneğin HTTP ağ geçidini kapalı veya ağ geçidi ileti ulaşmaz başarısız olabilir. Hemen false olarak ayarlarsanız, raporun durumu istemci ayarlarının HTTP ağ geçidi'nden göre gönderilir. Bu nedenle, bunu HealthReportSendInterval yapılandırmasına göre toplu olarak. Sistem Durumu raporu işleme yanı sıra health store iletilere raporlama sistem durumu iyileştirmek sistem durumu istemci izin verdiğinden Önerilen ayar budur. Varsayılan olarak, raporları hemen gönderilmez. |
| --remove-zaman süresi | Belirtecin süresi dolduğunda, health Store'dan rapor kaldırılmış olup olmadığını gösteren değer. <br><br> Süresi dolduktan sonra health Store'dan true olarak rapor kaldırılırsa. Rapor false olarak ayarlanırsa süresi dolduğunda hata kabul edilir Bu özellik varsayılan olarak false değeridir. İstemciler düzenli aralıklarla bildirdiğinde RemoveWhenExpired false (varsayılan) ayarlamanız gerekir. Bu şekilde muhabir sorunları (örneğin, kilitlenme) ve rapor veremez, varlık sistem durumu raporu süresi dolduğunda hatası değerlendirilir ' dir. Bu varlık sistem durumu hatası olarak işaretler. |
| --sıra numarası | Bu sistem durumu raporu sayısal dize olarak için sıra numarası. <br><br> Rapor sıra numarası, eski raporlar algılamak için sistem durumu deposu tarafından kullanılır. Bir rapora eklendiğinde belirtilmezse, bir sıra numarası otomatik olarak sistem istemci tarafından üretilir. |
| --service-kind | Sistem bildirildiğinden tür hizmeti çoğaltma (durum bilgisi olan veya). Olası değerler şunlardır\: 'Durum bilgisiz', 'Durum bilgisi olan'. Varsayılan\: durum bilgisi olan. |
| --zaman aşımı -t | Sunucu zaman aşımı saniye. Varsayılan\: 60. |
| --ttl | Bu sistem durumu raporu geçerli olduğu süre. Bu alan, süresi belirtmek için ISO8601 biçimini kullanır. <br><br> İstemciler düzenli aralıklarla rapor, yaşam süresi daha yüksek sıklıkta raporları göndermelisiniz. İstemcileri geçişi bildirirse, bunlar sonsuz için yaşam süresi ayarlayabilirsiniz. Yaşam süresi dolduğunda, sistem durumu bilgilerini içeren sistem durumu olayı RemoveWhenExpired ise health Store'dan kaldırıldı ya da doğru veya hata sırasında değerlendirilen ise RemoveWhenExpired false. Aksi durumda sonsuz değer varsayılan olarak belirtilen, süresi. |
### <a name="global-arguments"></a>Genel bağımsız değişkenleri
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --hata ayıklama | Tüm hata ayıklama günlüklerini göster için günlüğün ayrıntı düzeyini artırır. |
| ---h Yardım | Bu yardım iletisini ve çıkış gösterir. |
| --Çıktı -o | Çıkış biçimi. İzin verilen değerler\: json, jsonc, tablo, tsv. Varsayılan\: json. |
| --Sorgu | JMESPath sorgu dizesi. HTTP bkz\://jmespath.org/ daha fazla bilgi ve örnekler. |
| --verbose | Günlüğün ayrıntı düzeyini artırır. Kullanımı--tam hata ayıklama günlükleri için hata ayıklama. |
## <a name="sfctl-replica-restart"></a>sfctl çoğaltma yeniden başlatma
Bir düğüm üzerinde çalışan kalıcı bir hizmet hizmeti çoğaltmasını yeniden başlatır.
Bir düğüm üzerinde çalışan kalıcı bir hizmet hizmeti çoğaltmasını yeniden başlatır. Bu API kullanıldığında gerçekleştirilen herhangi bir güvenlik denetimi uyarı - burada var. Bu API'nin yanlış kullanımı durum bilgisi olan hizmetler için kullanılabilirlik kaybına neden olabilir.
### <a name="arguments"></a>Bağımsız Değişkenler
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --[gerekli] düğüm adı | Düğümün adı. |
| --bölüm kimliği [gerekli] | Bölüm kimliği. |
| --Çoğaltma kimliği [gerekli] | Çoğaltma tanımlayıcısı. |
| --zaman aşımı -t | Sunucu zaman aşımı saniye. Varsayılan\: 60. |
### <a name="global-arguments"></a>Genel bağımsız değişkenleri
|Bağımsız Değişken|Açıklama|
| --- | --- |
| --hata ayıklama | Tüm hata ayıklama günlüklerini göster için günlüğün ayrıntı düzeyini artırır. |
| ---h Yardım | Bu yardım iletisini ve çıkış gösterir. |
| --Çıktı -o | Çıkış biçimi. İzin verilen değerler\: json, jsonc, tablo, tsv. Varsayılan\: json. |
| --Sorgu | JMESPath sorgu dizesi. HTTP bkz\://jmespath.org/ daha fazla bilgi ve örnekler. |
| --verbose | Günlüğün ayrıntı düzeyini artırır. Kullanımı--tam hata ayıklama günlükleri için hata ayıklama. |
## <a name="next-steps"></a>Sonraki adımlar
- [Kurulum](service-fabric-cli.md) Service Fabric CLI.
- Service Fabric CLI kullanarak kullanmayı öğrenin [örnek betikleri](/azure/service-fabric/scripts/sfctl-upgrade-application).
| 77.02459 | 1,209 | 0.773704 | tur_Latn | 0.99994 |
6f30713e8856d938b7697ba8b56bb887f85bb92e | 1,183 | md | Markdown | README.md | InfraWay/deploy-bot | 8b4265bdbcd5c32c3f9a17813f7b0ae07e307495 | [
"MIT"
] | 13 | 2022-03-30T12:25:46.000Z | 2022-03-31T09:21:38.000Z | README.md | InfraWay/deploy-bot | 8b4265bdbcd5c32c3f9a17813f7b0ae07e307495 | [
"MIT"
] | null | null | null | README.md | InfraWay/deploy-bot | 8b4265bdbcd5c32c3f9a17813f7b0ae07e307495 | [
"MIT"
] | 8 | 2022-03-30T12:26:06.000Z | 2022-03-31T09:50:17.000Z | # gh-deploy-bot
This is a GitHub bot that simplifies the process of deployment for organizations from multiple interconnected repositories.
## Diagram flow
<img width="2032" alt="bot-flow" src="./bot-flow.png">
## Setup
1. Create a repo named `.infraway`; it should contain a file `config.yaml`. Find an example of such a configuration [here](https://github.com/InfraWay/gh-deploy-bot/blob/main/.infraway/config.yaml.example)
2. Install the bot into a Kubernetes cluster using the Docker image: [infraway/gh-deploy-bot](https://hub.docker.com/r/infraway/gh-deploy-bot)
3. Follow the install instructions inside the bot to connect it to your GitHub organization.
4. Try to make a PR in a repository that is defined in the `config.yaml` file.
## Docker
```sh
# 1. Build container
docker build -t gh-deploy-bot .
# 2. Start container
docker run -e APP_ID=<app-id> -e PRIVATE_KEY=<pem-value> gh-deploy-bot
```
## Contributing
If you have suggestions for how gh-deploy-bot could be improved, or want to report a bug, open an issue! We'd love any and all contributions.
For more, check out the [Contributing Guide](CONTRIBUTING.md).
## License
[MIT](LICENSE) © 2022 [Andrew Red](https://andrew.red)
| 31.972973 | 202 | 0.744717 | eng_Latn | 0.948614 |
6f30fc60eab5ed79f904e7632939543c3127a152 | 3,711 | md | Markdown | Assets/ExternalPackages/VrTunnellingPro/README.md | uhhhci/TheNonContinuousToolkit | 03393baf284ca108f8782e37da1ebd4a1ac7eaa4 | [
"MIT"
] | 2 | 2020-12-16T21:52:00.000Z | 2021-01-15T23:38:22.000Z | Assets/Packs/VrTunnellingPro/README.md | uitwaaien6/Artificial-Intelligence-Chase-Algorithm | dc71d3169d3185d20ff4b3d416d7bbab45a2a6ca | [
"MIT"
] | null | null | null | Assets/Packs/VrTunnellingPro/README.md | uitwaaien6/Artificial-Intelligence-Chase-Algorithm | dc71d3169d3185d20ff4b3d416d7bbab45a2a6ca | [
"MIT"
] | null | null | null | VR Tunnelling Pro (VRTP) is the most advanced VR comfort solution for Unity 5.6+. It can be dropped in to almost any project for plug-and-play comfort options. It's developed by Sigtrap with support from Oculus, and supports all major VR platforms including
* Oculus Rift, Go and Quest
* HTC Vive
* Playstation VR
* Samsung GearVR
* Google Daydream
VRTP is designed not only as a plugin, but a platform for experimenting with and developing new comfort techniques. Familiarity with the source will allow developers to quickly implement new techniques.
Full documentation for the current release is available at [http://www.sigtrapgames.com/VrTunnellingPro/html](http://www.sigtrapgames.com/VrTunnellingPro/html). Check out the [Quickstart Guide](http://www.sigtrapgames.com/VrTunnellingPro/html/quickstart.html) to dive right in, or see our talk [Integrating Locomotion in Unity, Unreal, and Native Engines](https://www.youtube.com/watch?v=dBs65za8fhM) from Oculus Connect 5.
VRTP is also available on the Unity Asset Store [here](https://assetstore.unity.com/packages/tools/camera/vr-tunnelling-pro-106782).
## What is Tunnelling?
Much of VRTP's core is based on "Tunnelling", a technique to reduce sim-sickness in VR games, experiences and apps.
Artificial locomotion in VR - cars, spaceships, first-person "thumbstick" movement - causes sim-sickness in many users. This is a result of a mismatch between the motion they feel and the motion they see. Tunnelling is a highly effective method to reduce this for a large number of users.
It works by fading out peripheral vision. The subconscious uses peripheral vision heavily to interpret motion but the conscious brain largely ignores it - as such, reducing motion in the periphery combats the overall motion disparity users feel without significant information loss.
Additionally, the periphery can be replaced with static imagery to counteract motion even more strongly, "grounding" users in a static reference frame. This is called a "cage" as including a cage or grid maximises this effect.

## Key Features
* Multiple modes
* Color vignette
* Replace periphery with skybox/cubemap cage
* Replace periphery with customised full 3D cage
* View VR scene through static "windows"
* View cage through world-space portals
* Masking
* Exclude objects from the tunnelling effect
* e.g. static cockpit to help ground users
* Fully configurable
* Tweak any settings in-editor or at runtime for full control
* Preset system
* Easily define multiple presets that users can switch between at runtime
* Rich API
* Mobile-friendly version included
* Compatible with Multipass and Single Pass Stereo
* High performance
## Roadmap
We are working on multiple new techniques which are being developed on the *dev* branch. These include counter-optic flow, artificial tilt and more. For full details please see the Oculus Connect 5 talk linked above.
We are also working on the Unreal Engine 4 version which will be available on our Github and on the Marketplace.
## Documentation
HTML documentation (mirroring that at the official docs URL) can be generated using Doxygen. If you have Doxygen installed on Windows, you can just run `Docs~/BuildDoxygen.bat` to do so. The *Docs~* folder will be ignored by Unity, and the resulting HTML docs will be ignored by git.
## Credits
Developed and maintained by Dr Luke Thompson ([@six_ways](https://twitter.com/six_ways) | [github](https://github.com/SixWays))
Research and support by Tom Heath on behalf of Oculus
Art assets by Gary Lloyd ([@garylloyd89](https://twitter.com/garylloyd89))
| 66.267857 | 423 | 0.788197 | eng_Latn | 0.995598 |
6f31614ef2860676e9c1355c10f73be7e8d4e71c | 2,835 | md | Markdown | _posts/2018-09-06-cpgandcxy.md | fusx1/fusx1.github.io | 3a2980b5a8e3927ab46b31a6e3433a54739b6d15 | [
"MIT"
] | null | null | null | _posts/2018-09-06-cpgandcxy.md | fusx1/fusx1.github.io | 3a2980b5a8e3927ab46b31a6e3433a54739b6d15 | [
"MIT"
] | null | null | null | _posts/2018-09-06-cpgandcxy.md | fusx1/fusx1.github.io | 3a2980b5a8e3927ab46b31a6e3433a54739b6d15 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Real Talk: Programmers vs. Product Managers"
date: 2018-09-06 14:35:05 +0800
categories: Security
tags: programmers product-managers
author: FUSX
---
* content
{:toc}
Every good internet company depends on good product managers: people who understand the product thoroughly, have strong analysis, judgment, and execution skills, communicate and express themselves well, collect future requirements from the market, the users, and every other direction, and write the product requirement documents.
The `product manager` raises the requirements and the `programmer` designs the development plan around them. During development the two sides cross paths constantly, debating and colliding, so today let's have a proper talk about the grudges and bonds between programmers and product managers.

---
>`Disclaimer: "Product Dog" below is meant affectionately, in the same spirit as the Chinese tech nicknames "Code Monkey" (programmer), "Attack Lion" (engineer), "Design Chick" (designer), and "Product Dog" (product manager).`
---
### Code Monkey VS Product Dog
---
* `Product Dog`: (。・∀・)ノ゙ Hey, have you built this piece of business logic yet? We need to tweak it a bit, because the earlier logic had a small problem..
* `Code Monkey`: /(ㄒoㄒ)/~~ Changing the requirements `again`? Can a requirement that has been finalized please stop changing? You always have a reason, and the one who gets hurt is always me!
* `Product Dog`: Trust me, this is the last change.
* `Code Monkey (inner voice)`: One of the ten least believable lies of the year. May I swear? F**K beep beep ***

> The cost of changing a requirement depends on the development phase: the later the change, the higher the cost. Product Dogs, please think twice. Code Monkeys shouldn't just silently take it either: report the change to your manager, and don't let product managers get into the habit of changing requirements at will.
---
* `Product Dog`: This requirement looks a lot like a previous project. You should be able to finish it quickly, right?
* `Product Dog (inner voice)`: Isn't it just copy and paste? Should be quick;
* `Code Monkey`: The previous one wasn't designed as a reusable service or module, so this time it has to be rebuilt;
* `Code Monkey (inner voice)`: Do you think I'm just a mover who hauls code around?
* `Product Dog`: . . .
> Shared business logic should be made modular, componentized, and systematic. Product managers need to consider this at design time, and programmers should also make suggestions about it during requirement reviews.
---
* `Product Dog`: The release broke; the old version isn't compatible, which caused .... Let's add version gating first
* `Code Monkey`: Version gating again? Next time, can the requirement please take this kind of problem into account?
* `Product Dog`: . . .
> When writing requirements, product managers need to think them through: consider the transition between old and new versions, the impact on related business flows, and so on.
Programmers should also raise such issues during requirement reviews, based on their own knowledge of the business.
---
* `Product Dog`: Here's the scope for this iteration: blah blah, feature modules × N.....;
* `Code Monkey`: That's a lot for one iteration. Based on our estimate the development time is too long; I suggest splitting it into phases;
* `Product Dog`: Everything in this iteration is important; we'd like it all to ship at the same time;
* `Code Monkey`: So that means overtime, then. . .
* `Code Monkey (inner voice)`: I'd rather not crunch myself to death;
> When the scope is too big to ship by the date the product manager wants, the gap usually gets filled with overtime. This way of burning your life at work is not something to encourage; code written in a state of fatigue is generally hard to maintain and extend, and the bug rate goes up.
Product managers should adjust the scope and phase the development according to the programmers' estimates.
---
> `There are many more scenarios that I just can't recall right now. Everyone, feel free to add your own!`
... ...
---
### The product manager's cardinal sins, summarized
---
#### Changing requirements on a whim
* `The Monkey's testimony`:
> No matter which phase of development we're in, the `product manager` always has a sudden flash of inspiration: tweak this UI, adjust that piece of business logic, sometimes even throw away the whole design and redo it. Are you playing with me?
* `The Alliance's mediation`:
> Requirement adjustments during development can't be avoided entirely, but everything should be confirmed during the requirement discussions, and mid-development changes should be avoided as much as possible.
The cost of a change differs by phase (the later, the higher), so `Product Dogs`, please think twice: unless it's a severe problem or a market-strategy issue, consider deferring the change to the next iteration.
Programmers need to hold the line on change requests, report them to their managers, and not let product managers develop the habit of changing requirements at will.
---
#### Requirements lacking detail
* `The Monkey's testimony`:
> The spec treats words like gold and is missing the necessary descriptions. What the hell? I read the requirement document and still have no idea what to build!
* `The Alliance's mediation`:
> Requirements should be as detailed as possible, and complex logic needs flow charts. A good requirement document is one that people can understand without any extra explanation.
---
#### Requirements not thought through
* `The Monkey's testimony`:
> Eyes fixed only on the current iteration, with no thought for the details, the long term, or the related business the requirement touches. After release, problems surface and everyone scrambles.
* `The Alliance's mediation`:
> Product managers need to consider the transition between old and new versions and the impact on related business flows, and design requirements with modular, componentized, systematic thinking.
Programmers should also contribute suggestions based on their experience and knowledge of the business.
---
#### Requirements without supporting data
* `The Monkey's testimony`:
> The requirements never provide any data to back them up. It feels like we do requirements for the sake of doing requirements: no sense of achievement at all, no motivation..
* `The Alliance's mediation`:
> Requirements should come with relevant supporting data, for example: the measured results of the previous optimization, the conversion rate and engagement the current one is expected to reach, the user volume and conversion rate a new business line is expected to hit, and so on.
Data is the most persuasive thing: it demonstrates the project's value, gives us a sense of achievement, and also serves as a useful reference for the technical design.
---
#### Unfamiliar with the project process
* `The Monkey's testimony`:
> The kickoff meeting is held before the requirements are even settled, and sometimes the requirements are still being changed *during* the kickoff. My eyes... already blind...
* `The Alliance's mediation`:
> The kickoff is where the iteration's scope is turned into a project, and where staffing and the full timeline (development, testing, acceptance, release, post-release verification, and so on) are planned based on that scope;
Product managers need to strictly follow the development process and be a `Product Dog` with integrity;
---
#### No innovation, always following the competition
* `The Monkey's testimony`:
> Why do the requirements always trail in the competitors' footsteps? Do you enjoy the feeling of "forever imitating, never surpassing"?
* `The Alliance's mediation`:
> Mutual reference and imitation between competing products is inevitable; it's a kind of shortcut, since a genuinely good requirement is not easy to come up with. Still, the advice is: reference, yes; blind imitation, no.
Although knockoffs have made it big too, like the internet giant T* or the sportswear brand Adiwang. What a contradictory topic.
---
### Discussion:
---
* `As a programmer, what do you think of the Product Dogs around you? Let loose!`
* `Product Dogs are welcome to roast the Code Monkeys too. Stop holding back, draw your sword~`
* `Leave a comment and discuss. GOGOGOGO`
---

---
> 发布时间:2018-09-06 | 16.2 | 88 | 0.708289 | zho_Hans | 0.383716 |
6f3180ceb131278c808969ab064e3599871e9547 | 715 | md | Markdown | markdown/org/docs/patterns/aaron/options/hipsease/es.md | TriploidTree/freesewing | 428507c6682d49a0869a13ce56b4a38d844a8e5c | [
"MIT"
] | null | null | null | markdown/org/docs/patterns/aaron/options/hipsease/es.md | TriploidTree/freesewing | 428507c6682d49a0869a13ce56b4a38d844a8e5c | [
"MIT"
] | 2 | 2022-02-04T13:28:21.000Z | 2022-02-04T14:07:38.000Z | markdown/org/docs/patterns/aaron/options/hipsease/es.md | SeaZeeZee/freesewing | 8589e7b7ceeabd738c4b69ac59980acbebfea537 | [
"MIT"
] | null | null | null | ---
title: "Hips ease"
---

How much room do you want at the hips?
Whatever value you provide here will simply be added to your hips circumference measurement when drafting the garment.
> ##### This option also allows negative values.
>
> You should only use negative ease if you are using a stretch fabric that you want to fit tightly. The overall stretch should be configured with the stretch option.
## Effect of this option on the pattern

| 39.722222 | 189 | 0.773427 | eng_Latn | 0.462812 |
6f32143ed875fb02e413b3a039ab42abd05c27b0 | 4,097 | md | Markdown | includes/virtual-machine-disk-performance.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/virtual-machine-disk-performance.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | includes/virtual-machine-disk-performance.md | mKenfenheuer/azure-docs.de-de | 54bb936ae8b933b69bfd2270990bd9f253a7f876 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: include file
description: include file
services: virtual-machines
author: albecker1
ms.service: virtual-machines
ms.topic: include
ms.date: 09/25/2020
ms.author: albecker1
ms.custom: include file
ms.openlocfilehash: 9f5a1010959658e75dcc809b2ee1d6d9222af056
ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 10/09/2020
ms.locfileid: "91540003"
---
This article helps clarify disk performance and how it works when you combine Azure virtual machines and Azure disks. It also describes how you can diagnose bottlenecks for your disk I/O and the changes you can make to optimize for performance.
## <a name="how-does-disk-performance-work"></a>How does disk performance work?
Azure virtual machines have IOPS and throughput performance limits based on the virtual machine type and size. OS disks and data disks that can be attached to virtual machines have their own IOPS and throughput limits. Your application's performance gets capped when it requests more IOPS or throughput than what is allotted for the virtual machine or the attached disks. When capped, the application experiences suboptimal performance, which can lead to negative consequences such as increased latency. Let's run through a couple of examples to clarify this. To make these examples easy to follow, we only look at IOPS, but the same logic applies to throughput.
## <a name="disk-io-capping"></a>Disk I/O capping
Setup:
- Standard_D8s_v3
  - Uncached IOPS: 12,800
- E30 OS disk
  - IOPS: 500
- Two E30 data disks
  - IOPS: 500

The application running on the virtual machine makes a request that requires 10,000 IOPS. All of them are allowed by the VM, because the Standard_D8s_v3 virtual machine can execute up to 12,800 IOPS. The 10,000 IOPS are broken down into three different requests to the different disks: 1,000 IOPS are requested from the operating system disk and 4,500 IOPS are requested from each data disk. Since all attached disks are E30 disks and can only handle 500 IOPS, they respond back with 500 IOPS each. The application's performance is therefore capped by the attached disks, and it can only process 1,500 IOPS. It could work at its peak performance of 10,000 IOPS if better-performing disks were used, such as Premium SSD P30 disks.
## <a name="virtual-machine-io-capping"></a>Virtual machine I/O capping
Setup:
- Standard_D8s_v3
  - Uncached IOPS: 12,800
- P30 OS disk
  - IOPS: 5,000
- Two P30 data disks
  - IOPS: 5,000

The application running on the virtual machine makes a request that requires 15,000 IOPS. Unfortunately, the Standard_D8s_v3 virtual machine is only provisioned to handle 12,800 IOPS. The application is capped by the virtual machine limits and must make do with the allotted 12,800 IOPS. Those 12,800 requested IOPS are then broken down into three different requests to the different disks: 4,267 IOPS are requested from the operating system disk and 4,266 IOPS are requested from each data disk. Since all attached disks are P30 disks that can handle 5,000 IOPS, they respond back with the requested amounts.
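The arithmetic in both examples can be captured in a small two-stage calculation. This is an illustrative sketch only: the function name and the proportional-split assumption are mine, not an Azure API.

```javascript
// Two-stage throttling: first the VM-level cap, then the per-disk caps.
// Each request entry holds the IOPS the application sends to one disk
// and that disk's own IOPS limit.
function effectiveIops(vmCap, requests) {
    const total = requests.reduce((sum, r) => sum + r.iops, 0);
    // Stage 1: the VM throttles the total, scaling each disk's share
    // down proportionally (the even split used in the second example).
    const scale = Math.min(total, vmCap) / total;
    // Stage 2: every disk serves at most its own cap.
    return requests.reduce((sum, r) => sum + Math.min(r.iops * scale, r.cap), 0);
}

// Example 1: Standard_D8s_v3 (12,800 uncached IOPS), one E30 OS disk and
// two E30 data disks (500 IOPS each): capped by the disks at 1,500 IOPS.
console.log(effectiveIops(12800, [
    { iops: 1000, cap: 500 },
    { iops: 4500, cap: 500 },
    { iops: 4500, cap: 500 },
]));

// Example 2: same VM with three P30 disks (5,000 IOPS each) and a
// 15,000 IOPS request: capped by the VM at 12,800 IOPS.
console.log(Math.round(effectiveIops(12800, [
    { iops: 5000, cap: 5000 },
    { iops: 5000, cap: 5000 },
    { iops: 5000, cap: 5000 },
])));
```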
6f3229de450c18a158a12e841d8f6122cb0f9a48 | 543 | md | Markdown | README.md | hiagomeels/ex_boleto | db7b436b4f8fc6c1f6896c135c356754f7cab88a | [
"MIT"
] | 2 | 2020-08-11T02:12:22.000Z | 2020-08-16T16:34:29.000Z | README.md | hiagomeels/ex_boleto | db7b436b4f8fc6c1f6896c135c356754f7cab88a | [
"MIT"
] | null | null | null | README.md | hiagomeels/ex_boleto | db7b436b4f8fc6c1f6896c135c356754f7cab88a | [
"MIT"
] | null | null | null | # ExBoleto
[](https://github.com/hiagomeels/ex_boleto/blob/master/LICENSE)
A library for handling a Brazilian payment method (boleto).
## Contribute
Just fork the repo, make your change, and send me a pull request.
Or, feel free to file an issue and start a discussion about a new feature you have in mind.
### Running tests
Clone the repo and fetch its dependencies:
```
$ git clone https://github.com/hiagomeels/ex_boleto.git
$ cd ex_boleto
$ mix deps.get
$ mix test
```
| 24.681818 | 131 | 0.742173 | eng_Latn | 0.881038 |
6f3256cfba5fdb6dce03adb2f28e3b5f77a3aa74 | 16,412 | md | Markdown | README.md | echoix/node-red-contrib-blockly | 044993ed8f47e850868317d2560d9332f6a4d693 | [
"Apache-2.0"
] | 78 | 2018-07-16T07:49:55.000Z | 2022-01-27T06:14:55.000Z | README.md | echoix/node-red-contrib-blockly | 044993ed8f47e850868317d2560d9332f6a4d693 | [
"Apache-2.0"
] | 85 | 2018-07-15T17:49:41.000Z | 2021-12-10T18:15:17.000Z | README.md | echoix/node-red-contrib-blockly | 044993ed8f47e850868317d2560d9332f6a4d693 | [
"Apache-2.0"
] | 26 | 2018-07-30T20:56:31.000Z | 2022-01-13T00:27:50.000Z | # node-red-contrib-blockly
A Node Red node that offers a visual programming interface, to make programming a function node easier. Just drag and drop blocks to build your program logic, without having to write the Javascript code yourself. By building your code in a visual way, you don't have to learn the Javascript syntax, which makes programming very difficult for beginners.
Moreover the generated Javascript code can be displayed, so you can learn Javascript coding step by step...
Thanks to lots of people, for their assistance during the development of this node:
* A lot of folks on the Google Blockly [forum](https://groups.google.com/forum/#!forum/blockly) ...
* [Simon Walters](https://github.com/cymplecy) as analyst and righteous judge of this repository.
* [Jeff](https://github.com/jsccjj) for contributing to features like full-screen mode.
* All the translator folks that translate our block texts in multiple languages.
## Install
Run the following npm command in your Node-RED user directory (typically ~/.node-red):
```
npm install node-red-contrib-blockly
```
## Support my Node-RED developments
Please buy my wife a coffee to keep her happy, while I am busy developing Node-RED stuff for you ...
<a href="https://www.buymeacoffee.com/bartbutenaers" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Buy my wife a coffee" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>
## Performance
The performance of this node is identical to the standard Function-node!
Indeed the Javascript code will be generated once, when the content of the Blockly workspace has been edited. From then on that generated code will run on the Node-RED server side, using exactly the "same" sandbox mechanism as used by the Function-node.
## Where used
When implementing your custom logic into a Node-Red flow, sometimes the available nodes won't be sufficient. In those cases the standard *function node* can be used, to implement your logic by entering custom Javascript coding. However to avoid having to write Javascript code yourself, you can draw your logic by dragging and dropping blocks into the Blockly editor.
As soon as the Node-Red flow is deployed, the generated Javascript code will run in the Node-Red flow (similar to a standard function node):

1. ***Draw*** the blocks on your editor, according to your business logic.
2. When *deploying* the flow, the corresponding ***Javascript code will be generated*** and sent to the server.
3. Create an ***input message*** (called *'msg'*), which can contain an arbitrary number of fields (e.g. payload, topic, ...) and their corresponding values.
4. The *msg* arrives at the Blockly node, which will ***execute*** the generated Javascript code.
5. The generated Javascript code can ***send*** an output message on one of its output ports.
6. The ***output message*** will be handled by the next nodes in the flow.
## Hello world example
As soon as an input message arrives, we will set the *'payload'* property of that input message to *'Hello world'*. And afterwards that updated input message will be sent to the output port:

The following animation shows how this example flow can be constructed using the Blockly node:

Lots of other examples can be found in the [wiki](https://github.com/bartbutenaers/node-red-contrib-blockly/wiki) pages ...
## Blockly basics
[Blockly](https://developers.google.com/blockly/) is a visual block programming language (and editor), maintained by Google. A lot of documentation (tutorials, videos, ...) can be found on the internet, but here are some bascis to get you started:
+ A block can have a *value input* at the right side. Or a *value output* at the left side. Or both:

+ A *statement* is created by connecting value inputs and value outputs together horizontally, to pass data from the outputs to the inputs:

In this example the Node-Red input message is cloned, and then the cloned message is sent to output 1 of your Blockly node. This means you have to read the chain of blocks from the right to the left ...
Caution: the *data type* requested by a value input should be equal to the data type offered by the value output. Otherwise you cannot connect the value output to the value input!
+ A value input can be *inline* or *external*:

For an inline value input, the next block will be *inside* the previous block. For an external value input, the next block will be *after* the previous block:

When right-clicking on top of a block, a context menu appears where you can switch between inline and external inputs.
+ A block input can be connected automatically to a *shadow block*:

A shadow block is like a default value for that input, which is automatically attached to it (already in the toolbox). You cannot delete the shadow block, but you can drop another block on top of it. In that case the shadow block will be ignored and the new block will be used as the input value.
+ A block can have properties to change the behaviour of the block:

+ A block can have a *statement input* at the top side. Or a *statement output* at the bottom side. Or both:

Remark: a block can have multiple inputs.
+ A *program* (or *statement stack*) is created by connecting statement inputs and statement outputs together vertically, to specify the order in which the statements need to be executed:

In this example we start by showing a green node status "Re-sending started". Afterwards the input message is cloned 10 times and sent to the output port, and every time a log is written to the console "Logging that the message has been resent". And we end by showing a green node status "Re-sending completed". This means you have to read the blocks from the top to the bottom ...
!!! CAUTION !!!
*Blockly always starts counting from 1*, while Javascript starts counting from 0! This might be confusing for users that are already familiar with programming in Javascript...
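A small plain-Javascript illustration of that difference:

```javascript
// Javascript is 0-indexed: the first element of a list lives at index 0.
const letters = ["a", "b", "c"];
console.log(letters[0]); // prints: a

// A Blockly "get # n" list block is 1-based, so "get # 1" returns the same
// first element. The Javascript that Blockly generates typically subtracts
// 1 from the index for you.
const blocklyIndex = 1;                  // the number you type in the block
console.log(letters[blocklyIndex - 1]);  // prints: a
```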
## Config screen
The node's config screen consists of a series of elements:

1. The **library** button is similar to the standard *function node*: the code from this Blockly node can be made globally available by adding it to a libary. That library will be stored withing the 'libs' directory in your node-red user directory.
2. The **config node** that contains the settings used in this Blockly node. If no config node has been selected, then the default settings will be used.
3. The **editor** tabsheet displays a Blockly workspace editor. Here blocks can be added, which will be converted afterwards to Javascript code. Note the fullscreen icon beside the toolbar label: see [wiki](https://github.com/bartbutenaers/node-red-contrib-blockly/wiki/Use-fullscreen-mode).
4. The **generated Javascript** tabsheet will display the Javascript code, which is generated based on the blocks in the Blockly editor. This code is readonly, which means you cannot change it! Reason is that it is *not possible* to generate Blockly blocks, starting from Javascript code ...
5. The **generated XML** tabsheet will display the XML, that represents the blocks in the Blockly editor.
6. The Blockly **toolbox** shows all available blocks, grouped by category.
7. The Blockly **editable area** shows all the blocks representing your custom logic. Blocks can be dragged from the toolbox into the editable area. A lot of user interaction is offered in this area:
+ When pressing the delete button, the selected blocks will be removed.
+ When pressing ctrl-f, a search field will be displayed (containing a next and previous button). See [wiki](https://github.com/bartbutenaers/node-red-contrib-blockly/wiki/Search-for-Blockly-nodes).
+ By clicking and dragging with the mouse, the user can start panning to alter which part of the area is being visualised.
+ By clicking on a block and dragging with the mouse, the block (and all of its chained next blocks) will be moved.
+ By rotating the mouse wheel, you can zoom in/out.
+ By using ctrl-c and ctrl-v the selected block will be copied. When the block is connected to next blocks, the entire block chain will be copied. Note that you can also paste the selected block in another blockly node!
+ By using ctrl-z the last action will be undone.
+ By right-clicking a context menu appears. That context menu will be different if the grid is clicked, or if a block is clicked.
+ ...
Remark: the toolbox and the editable area together are called a *'workspace'*.
8. The **backpack** button allows to create a custom list of your favorite blocks. See [wiki](https://github.com/bartbutenaers/node-red-contrib-blockly/wiki/Add-your-favorite-blocks-to-a-backpack).
9. The **zoom to fit** button will automatically zoom the workspace, so all the blocks will become visible at once.
10. The **center** button allows the available blocks to be centered in the middle of the visible workspace area.
11. The **zoom in** button.
12. The **zoom out** button.
13. The **trash can** button. Click on the bin to show the removed blocks, and optionally restore them.
14. The number of **output ports** for the Blockly node in the Node-Red flow editor. Make sure that this number doesn't exceed the output port number of the *'send'* block (see below).
## Node-Red blocks
When writing Javascript code in a standard *function node*, some Node-Red functionality can be called from within that code (see function node [API](https://nodered.org/docs/writing-functions#api-reference)). To offer the same functionality in the Blockly editor, a series of extra blocks have been added. These blocks are available in the 'Node-Red' category in the toolbox:

1. **Get** the value of some property in an object.
2. **Set** some property in an object to a specified value.
3. **Send** to the specified output port the message, which is specified on the block input. Make sure that the specified output number doesn't exceed the number of outputs on the Blockly node!
4. The **input message (msg)** exposes the input message that arrive's in the Node-Red flow, on the Blockly node input port.
5. The Node-Red **flow** memory can be used to store data that needs to be shared by all nodes in a flow.
6. The Node-Red **global** memory can be used to store data that needs to be shared by all nodes.
7. The Node-Red **(node)context** memory can be used to store data that needs to be shared by all blockly nodes.
8. **Return** the specified message. This means that we stop processing (i.e. the next blocks will not be executed), and the message will be send on the output port.
9. Show the specified text in the **node status**. Both the status icon shape and color can be specified.
10. **Remove** the node status from the node.
11. **Log** the specified text in the console. Multiple log levels are available (log, error, warning, trace, debug). The warnings and errors will also be displayed in the flow editor debug tab. The trace and debug levels are only used to display details, and are not visible if no logger for those levels is specified in Node-Red.
12. **Clone message** can be used to create a new separate copy of the specified message.
13. Get the specified **node property**. The node identifier, node name and node output count can be get.
14. Specify that the processing of the message has been **completed**, so it can be handled by the Complete-node in the flow. See [wiki](https://github.com/bartbutenaers/node-red-contrib-blockly/wiki/Trigger-a-Complete-node-via-a-Done-block).
15. Some things can be done when the **node is closed**, most of the time to clean up stuff.
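Most of these blocks are thin wrappers around the standard function-node API. The sketch below shows roughly the kind of Javascript a status/send/done combination generates; the stub `node` object is only there so the snippet can run outside Node-RED (inside Node-RED, `node` is injected for you):

```javascript
// Stub standing in for the `node` object that Node-RED injects at runtime:
const sent = [];
const node = {
    status: (s) => console.log("status:", s.text),  // "show node status" block
    warn:   (w) => console.log("warn:", w),         // "log" block, warning level
    send:   (m) => sent.push(m),                    // "send message" block
    done:   ()  => console.log("done"),             // "done" block (Complete-node)
};

// Roughly what a small block program generates:
function onMessage(msg) {
    node.status({ fill: "green", shape: "dot", text: "processing" });
    if (msg.payload === undefined) {
        node.warn("message has no payload");
    }
    msg.topic = "processed";
    node.send(msg);
    node.done();
}

onMessage({ payload: 42 });
console.log(sent.length); // prints: 1
```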
## Config node settings
The Blockly workspace can be configured via the settings in the selected config node:

These settings will be applied to every Blockly node where this config node has been selected:
+ Specify whether a **trash can** icon needs to be displayed in the workspace.
+ Specify whether **comments** can be added to blocks in the workspace.
+ Specify whether the 4 **zoom** icons need to be displayed in the workspace.
+ Specify whether a **backpack** icon needs to be displayed in the workspace.
+ Customize the toolbox **categories** to fit your needs. CAUTION: this is only for advanced users and needs to be done with care!! When this checkbox is activated, the "Categories" tabsheet will become enabled. See [wiki](https://github.com/bartbutenaers/node-red-contrib-blockly/wiki/Customizing-the-toolbox).
+ Specify the **language** that needs to be used in the blocks in the workspace.
+ Specify the location of the **toolbox**, relative to the workspace.
+ Specify the **renderer**, which determines how the blocks need to be drawn.
## Need a change ...
When you need a change in this node, you can create a new Github issue. A couple of remarks about this:
+ Check in advance whether that change has already been requested (as a Github [issue](https://github.com/bartbutenaers/node-red-contrib-blockly/issues)).
+ Take into account that the *function node* offers much more functionality, since we cannot create blocks for all Javascript statements.
+ Keep in mind that we won't create blocks for functionality which is already available in other Node-RED nodes (e.g. send mail, ...).
## Need blockly in your own language ...
Google has already provided translations for all their basic blocks. However for our own blocks, there are currently only translations available in a few languages. If you want to translate the blocks in your own language, please follow these steps:
+ Clone the [en.js](https://github.com/bartbutenaers/node-red-contrib-blockly/blob/master/messages/en.js) to a xx.js file (with xx being your country code like 'nl', 'en', 'fr', 'jp', ...).
+ Translate all the texts from the xx.js file into your own language. Blockly will automatically replace the placeholders (%1 %2 ...) with widgets (dropdowns, input fields, ...). You are allowed to rearrange the placeholders, because blockly will automatically rearrange the corresponding widgets also. For example *'convert date %1 to format %2'* could be *'xxx %2 xxxx %1 xxx'* in a language that is being read from right to left.
+ Test whether the blocks look good and correct in your language.
+ Deliver us the xx.js file (preferably by pull request, or otherwise e.g. via a new issue).
Thanks in advance !
| 89.68306 | 431 | 0.764197 | eng_Latn | 0.997015 |
6f3277825db0a7ae41ee2c3b66e367a3dcd877bd | 6,177 | markdown | Markdown | _posts/unclassified/2021-01-13-post.markdown | chajuhui123/chajuhui123.github.io | 4eafaeeb160bfef37ba0a15b34f23c322af73bc4 | [
"MIT"
] | null | null | null | _posts/unclassified/2021-01-13-post.markdown | chajuhui123/chajuhui123.github.io | 4eafaeeb160bfef37ba0a15b34f23c322af73bc4 | [
"MIT"
] | null | null | null | _posts/unclassified/2021-01-13-post.markdown | chajuhui123/chajuhui123.github.io | 4eafaeeb160bfef37ba0a15b34f23c322af73bc4 | [
"MIT"
] | 1 | 2021-08-30T06:49:46.000Z | 2021-08-30T06:49:46.000Z | ---
layout: post
title: "[Kotlin] Solid from the Basics, Basic Syntax #1"
date: 2021-01-13 11:18
categories: Android Kotlin
---
I've started studying Kotlin.<br>
I plan to study on my own, from basic syntax all the way to clone coding!!
I'm squeezing study time into breaks during my internship and after work, and Kotlin seems like as simple a language as Python (!?)
Learning basic syntax is always fun...
<br>
----
<br>
The table of contents is as follows.
1. string template
2. function
3. value VS variance
4. conditionals: if, when
5. array VS list
6. loops: for, while (+ Expression VS Statement)
7. null type
8. class
<br>
----
<br>
- String template
```kotlin
fun main(){
    // 0. String templates
    // Use the ${} notation to embed variables
    val name = "Joy"
    println("my name is ${name + name} I'm 23")
    println("Is this true? ${1==0}")
    println("this is 2\$.")
}
```
- Function
  - I learned about Unit! A distinctive feature of Kotlin functions is that the parameter name comes first and its type comes after.
```kotlin
// 1. Functions
// Kotlin always uses fun, short for "function".
// Parts shown greyed out in the IDE can be omitted!
// When there is no return value you can write Unit, or simply leave it out
fun helloWorld() : Unit{
    println("hello world!")
}

// A function that adds two numbers
// Note: the parameter name comes first, the parameter type comes after
fun add(a : Int, b : Int) : Int {
    return a+b
}
```
- Value and Variance
    - The difference between the two is whether the value can change.
```kotlin
// 2. val VS var
// val = value (a constant: cannot be reassigned)
// var = variance (a variable)
fun hi(){
    val a : Int = 10
    var b : Int = 9
    var e : String // when only declaring, the type must be specified.
    b = 100
    val c = 100 // Int can be omitted (Kotlin infers it), but var or val must be declared.
    var name = "joy" // same for String
}
```
- Conditionals: if, when
    - The notion of expression vs. statement differs from Java.
```kotlin
// 4. Conditionals (if and when)
fun maxBy(a : Int, b: Int) : Int{
    if (a>b){
        return a
    } else {
        return b
    }
}
// + Expression VS Statement
// Expression : something that produces a value
// Statement : something that performs an action
// Java : an if is always a statement / Kotlin : if and when can be used as an expression or a statement, depending on context
fun maxBy2(a:Int, b:Int) : Int = if(a>b) a else b // produces the value a or b (= expression)
fun checkNum (score : Int){ // in Kotlin every function is an expression (even with no return value, it returns Unit)
    // void, however, is a statement
    when(score) { // this when produces no value, it just executes (= statement)
        0 -> println("this is 0")
        1 -> println("this is 1")
        2,3 -> println("this is 2 or 3")
        else -> println("I don't know") // optional here
    }
    var b = when(score){ // this when produces a value (= expression)
        1 -> 1
        2 -> 2
        else -> 3 // here else must not be omitted
    }
    println("b : ${b}")
    when(score){
        in 90..100 -> println("you are genius")
        in 10..80 -> println("not bad")
        else -> println("Okay") // optional here
    }
}
```
- Array and List
    - An Array has a fixed size, while a List is either an immutable List or a MutableList.
```kotlin
// 5. Array and List
// Array : memory is allocated up front, so the size is fixed
// List : 1. an (immutable) List is read-only 2. a MutableList supports both reading and writing
fun array(){
    val array = arrayOf(1,2,3)
    val list = listOf(1,2,3)
    // they can hold values of mixed types.
    val array2 = arrayOf(1, "Jinwoo", 3.4f)
    val list2 = listOf(1, "Jinoo", 11L)
    array[0] = 3
    var result = list.get(0)
    // *** Can we use val here? Yes: the reference itself never changes, so val is fine. The address stays the same.
    val arrayList = arrayListOf<Int>()
    arrayList.add(10)
    arrayList.add(20)
}
```
- Loops: for, while
```kotlin
// 6. For / While
fun forAndWhile(){
    // for
    val students = arrayListOf("joy", "juhee", "Rachel", "Jarvis")
    for (name in students){
        println("${name}")
    }
    var sum : Int = 0
    for (i in 1..10 step 2){
        // 10 downTo 1 : counts down from 10 to 1
        // 1 until 100 : 1 through 99! 1..100 goes up to 100!
        sum += i
    }
    println(sum)
    for ((index, name) in students.withIndex()){
        println("student #${index + 1} is ${name}.")
    }
    // while
    var index = 0
    while(index < 10){
        println("current index : ${index}")
        index++
    }
}
```
- Nullable and NonNull
    - A precise understanding of null types is essential, so pay attention!
```kotlin
// 7. Nullable / NonNull
// A key difference between Java and Kotlin!
// ?: is called the Elvis operator; rotated 90 degrees, it looks like Elvis Presley..
fun nullcheck(){
    // NPE : Null Pointer Exception.
    // Java catches it at run time; Kotlin catches it at compile time
    var name : String = "Joy" // non-null type
    var nullName : String? = null // adding ? makes it a nullable type
    var nameInUpperCase = name.toUpperCase()
    var nullNameInUpperCase = nullName?.toUpperCase()
    // if not null, toUpperCase() is applied; if null, the result is null
    val lastName : String? = null
    val fullName = name + " " + (lastName?: "No lastName") // if lastName is null, the fallback "No lastName" is used
    println(fullName)
}
fun ignoreNulls(str : String?){
    val mNotNull : String = str!!
    // usable only when null can never appear here
    // avoid it unless you are absolutely sure.
    val upper = mNotNull.toUpperCase()
    val email : String? = null
    email?.let{ // if not null, email is passed into the block
        println("my mail is ${email}") // if it is null, the block never runs, so this line is skipped.
    }
}
```
- Understanding classes
```kotlin
// 8. Classes
fun main(){
//    val human = Human("juhee")
//    val stranger = Human()
//    human.eatingCake()
//    println("this human's name is ${human.name}")
//    println("this human's name is ${stranger.name}")
//    val mother = Human("hs", 50)
    val korean = Korean()
    korean.singAsong()
}
open class Human constructor(val name : String = "Anonymous"){ // primary constructor
    // behaves like a property.
    // a default value is allowed too
    // secondary constructor
    constructor(name : String, age : Int) : this(name){
        // a secondary constructor must delegate to the primary constructor, using the this keyword
        // if there is no primary constructor, no delegation is needed
        println("my name is ${name}, ${age} years old")
    }
    init{
        println("New human has been born!!")
        // part of the primary constructor, so it runs at construction time.
        // it runs first
    }
    fun eatingCake(){
        println("This is so YUMMMMMMY")
    }
    open fun singAsong(){
        println("Lalala")
    }
}
// class inheritance
// a class must be marked open to be inherited from.
// only single inheritance is allowed
class Korean : Human(){ // to change the behavior of an inherited method in the Korean class, override it.
    override fun singAsong(){
        // "singAsong in Human is final!" means it cannot be overridden,
        // so add the open keyword to singAsong in Human as well.
        super.singAsong() // calls singAsong in Human.
        println("La la la")
        println("my name is ${name}") // Anonymous
    }
}
```
<br>
----
<br>
Great work!!
| 21.010204 | 101 | 0.582645 | kor_Hang | 0.999804 |
6f3320f31db1bcf5f3557876477be809a1e01a7e | 1,539 | md | Markdown | common/extensions/loganalyzer/README_en.md | oldbk/bookmaker | 1fa8650f6c5eab7bb6b3d266ed31b300b4a25660 | [
"BSD-2-Clause"
] | null | null | null | common/extensions/loganalyzer/README_en.md | oldbk/bookmaker | 1fa8650f6c5eab7bb6b3d266ed31b300b4a25660 | [
"BSD-2-Clause"
] | null | null | null | common/extensions/loganalyzer/README_en.md | oldbk/bookmaker | 1fa8650f6c5eab7bb6b3d266ed31b300b4a25660 | [
"BSD-2-Clause"
] | null | null | null | #Yii LogAnalyzer - Log file analyzer for Yii
## Features:
- Easy connection to the project
- Output messages from the log file
- Filter log messages (remove unwanted messages from the output)
- Filter log output (output only error, warning or info)
- Cleaning the log file
- Multilingual (Russian, English and Portuguese)
## Example:
Render the widget in a view:
```php
<?php
$this->widget('ext.loganalyzer.LogAnalyzerWidget', array(
'filters' => array('Text filtering','One more'),
'title' => 'Title of the widget' ,
// 'log_file_path' => 'Absolute path of the Log File',
));
?>
```
In addition:
The extension also includes an extended log route, which adds the client's IP address to each logged message. Configure it as follows:
```php
<?php
'log'=>array(
'class'=>'CLogRouter',
'routes'=>array(
....
array(
'class'=>'ext.loganalyzer.LALogRoute',
'levels'=>'info, error, warning',
...
),
...
),
),
?>
```
## Screenshot:

## Acknowledgments
Big thanks to [Tonin De Rosso Bolzan](https://github.com/tonybolzan) for:
Translating to English
JavaScript optimizations:
- added effects
- confirmation on clean
- Expand/Collapse Stack Trace
PHP code optimizations:
- removed method "processLogs()" from LALogRoute because it is equal to parent class CFileLogRoute
- changed default "log_file_path" on LogAnalyzerWidget to Yii::app()->getRuntimePath() | 24.046875 | 115 | 0.671215 | eng_Latn | 0.868912 |
6f33d6425ef23aaaabc5c33f7765dbf37d9b7e01 | 2,950 | md | Markdown | docs/code-quality/c6401.md | Chrissavi/cpp-docs.de-de | 6cc90f896a0a1baabb898a00f813f77f058bb7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/c6401.md | Chrissavi/cpp-docs.de-de | 6cc90f896a0a1baabb898a00f813f77f058bb7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/code-quality/c6401.md | Chrissavi/cpp-docs.de-de | 6cc90f896a0a1baabb898a00f813f77f058bb7e5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: C6401
ms.date: 11/04/2016
ms.topic: reference
f1_keywords:
- C6401
helpviewer_keywords:
- C6401
ms.assetid: d57b1c94-57a3-4d4b-a7de-8b9ffbac3ebe
ms.openlocfilehash: d4fe62cadf01bfedb8b3144c9ed6ba18c69d1024
ms.sourcegitcommit: 7bea0420d0e476287641edeb33a9d5689a98cb98
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/17/2020
ms.locfileid: "79467694"
---
# <a name="c6401"></a>C6401
> Warning C6401: Using \<function name> in a default locale to perform a case-insensitive comparison to constant string \<string name>. Yields unexpected results in non-English locales.

This warning indicates that a case-insensitive comparison against a constant string is being performed with a default locale, when a locale-independent comparison was most likely intended.

The typical consequence of this defect is incorrect behavior in non-English locales. For example, in Turkish, ".gif" does not match ".GIF"; in Vietnamese, "LogIn" does not match "LOGIN".

The `CompareString` function takes a locale as an argument. However, passing a default locale such as the constant `LOCALE_USER_DEFAULT` causes different behavior in different locales, depending on the user's default settings. Comparisons against a constant string that ignore case should usually be performed in a locale-independent way.

To perform a locale-independent comparison using `CompareString` on Windows XP, the first parameter should be the constant `LOCALE_INVARIANT`. For example, to perform a locale-independent, case-insensitive test of whether `pString` matches file1.gif:
```cpp
CompareString(LOCALE_INVARIANT,
NORM_IGNORECASE,
pString,
-1,
TEXT("file1.gif"),
-1) == CSTR_EQUAL
```
## <a name="example"></a>Example
The following code generates this warning:
```cpp
#include <windows.h>
int fd(char *ext)
{
return (CompareString(LOCALE_USER_DEFAULT,
NORM_IGNORECASE,
ext,
-1,
TEXT("gif"),
-1) == 2);
}
```
To correct this warning, use the following code:
```cpp
#include <windows.h>
int f(char *ext)
{
return (CompareString(LOCALE_INVARIANT,
NORM_IGNORECASE,
ext,
-1,
TEXT("gif"),
-1) == 2);
}
```
## <a name="see-also"></a>See also
<xref:Microsoft.VisualBasic.CompilerServices.Operators.CompareString%2A>
| 39.864865 | 485 | 0.711864 | deu_Latn | 0.969377 |
6f347c6f2c628ea896e1b7d99353c376b688a58f | 491 | md | Markdown | docs/OrderAffiliate.md | UltraCart/rest_api_v2_sdk_javascript | 417b521ff63d44d33046d8405755c0e06e9fe5e5 | [
"Apache-2.0"
] | 1 | 2018-11-23T17:55:30.000Z | 2018-11-23T17:55:30.000Z | docs/OrderAffiliate.md | UltraCart/rest_api_v2_sdk_javascript | 417b521ff63d44d33046d8405755c0e06e9fe5e5 | [
"Apache-2.0"
] | 5 | 2018-04-25T13:01:13.000Z | 2022-02-01T20:09:02.000Z | docs/OrderAffiliate.md | UltraCart/rest_api_v2_sdk_javascript | 417b521ff63d44d33046d8405755c0e06e9fe5e5 | [
"Apache-2.0"
] | null | null | null | # UltraCartRestApiV2.OrderAffiliate
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**affiliate_oid** | **Number** | Affiliate ID | [optional]
**ledger_entries** | [**[OrderAffiliateLedger]**](OrderAffiliateLedger.md) | Ledger entries associated with all the commissions earned on this order | [optional]
**sub_id** | **String** | Sub identifier provided by the affiliate on the click that generated this order | [optional]
| 44.636364 | 162 | 0.643585 | eng_Latn | 0.892929 |
6f34a97ecb3ab4293c898190602782222809b124 | 49 | md | Markdown | README.md | saketbhat/FullStackDev | a71a8b197c7430fd24ebacb15413647f811bae27 | [
"MIT"
] | null | null | null | README.md | saketbhat/FullStackDev | a71a8b197c7430fd24ebacb15413647f811bae27 | [
"MIT"
] | null | null | null | README.md | saketbhat/FullStackDev | a71a8b197c7430fd24ebacb15413647f811bae27 | [
"MIT"
] | null | null | null | # FullStackDev
Full Stack Application Movie Flix
| 16.333333 | 33 | 0.836735 | kor_Hang | 0.821426 |
6f355f0fe706bac56a73b089e6b3f1fc2239ea9f | 1,383 | md | Markdown | _drafts/2015-xx-xx-statistical-inference.md | jllopezpino/blog | b728ec9a2c66c2fdbecf944d250f3397e1e33f70 | [
"MIT"
] | null | null | null | _drafts/2015-xx-xx-statistical-inference.md | jllopezpino/blog | b728ec9a2c66c2fdbecf944d250f3397e1e33f70 | [
"MIT"
] | null | null | null | _drafts/2015-xx-xx-statistical-inference.md | jllopezpino/blog | b728ec9a2c66c2fdbecf944d250f3397e1e33f70 | [
"MIT"
] | null | null | null | ---
layout: post
title: Statistical inference
#permalink: /2015/01/01/2014-recap/
# slug: 2014-recap
excerpt: "rder)."
tags: [statistics, statistical inference]
modified: 2015-10-01
comments: true
image:
feature: old-days-of-technology.jpg
---
2014 was a year full of hard work and achievements for me. Some of the best moments of the year (in chronological order):
<ul>
<li>March: <a href="http://smergy.com">Smergy</a> began as a research project by Faisal Moeen and myself.</li>
<li>April: I started to work at <a href="http://getyourguide.com">GetYourGuide</a> as Data Engineer in Marketing.</li>
<li>May: I started tweeting regularly again, reaching out 300+ people and gaining roughly 100 new followers.</li>
<li>September: I defended my thesis and graduated with my <a href="http://it4bi.univ-tours.fr">Master of Science</a>, specializing in 'Distributed and Large-Scale BI'.</li>
<li>November: Guest speaker at <a href="http://www.slideshare.net/jlpino/bds14-big-data-analytics-to-the-masses">Big Data Spain 2014</a>.</li>
</ul>
Sensitivity:
Specificity:
Positive predictive value
Negative predictive value
iid random variable
positive likelihood ratio (LR+, likelihood ratio positive, likelihood ratio for positive results)
negative likelihood ratio (LR–, likelihood ratio negative, likelihood ratio for negative results).
unbiased sample and the center of mass.
| 44.612903 | 173 | 0.758496 | eng_Latn | 0.916182 |
6f3690f74517aa36d5c988f8dccd2bd71ccaeaae | 1,350 | md | Markdown | README.md | lzps/weather | c0e43f56b2f4cac924ccca72cc9bd85edf735281 | [
"MIT"
] | null | null | null | README.md | lzps/weather | c0e43f56b2f4cac924ccca72cc9bd85edf735281 | [
"MIT"
] | 1 | 2017-05-04T15:15:16.000Z | 2017-05-27T08:49:44.000Z | README.md | lzps/weather | c0e43f56b2f4cac924ccca72cc9bd85edf735281 | [
"MIT"
] | null | null | null | # Weather
This is a Python 3 script for querying the weather. Run main.py directly to use it.
On first run you will be prompted for API keys and a latitude/longitude.
Note: run it with Python 3 and install the requests library.
Licensed under the MIT license.
## Demo
``` bash
$ python3 main.py
未检测到配置,请输入
1.输入 API Key(可留空,留空则跳过获取)
-请输入 Caiyunapp 的 API Key:xxx
-请输入 AccuWeather 的 API Key:3HthYWILGYnX5oKveX72Frvv7agCfw9Y
2.输入城市信息,必填 可参考 https://www.ipip.net/ip.html
-请输入纬度经度(示例:39.93,116.40):39.93,116.40
配置已存至./main.ini,请检查后重新运行。
$ python3 main.py
08-11T21:35,23.0ºC,湿度:42%,AQI:37.0,南风、3级微风
08-11,23.0-31.0ºC,雨,日均降雨强度:2.785
08-12,23.0-31.0ºC,雨,日均降雨强度:1.8399
08-13,22.0-26.0ºC,雨,日均降雨强度:0.8987
08-14,22.0-29.0ºC,多云,日均降雨强度:0.0
08-15,22.0-31.0ºC,晴天,日均降雨强度:0.0
Caiyunapp
08-11T21:20,22.2ºC(室内体感:19.4ºC),湿度:88%,东风、4级和风
08-11,23.0-31.0ºC,AQI:79,
-白天:局部地区有雷雨,降水概率:47%,
-夜晚:间歇性降雨,降水概率:80%
08-12,22.0-31.0ºC,AQI:73,
-白天:局部地区有雷雨,降水概率:61%,
-夜晚:几场雷雨,降水概率:71%
08-13,23.0-29.0ºC,AQI:89,
-白天:局部地区有雷雨,降水概率:40%,
-夜晚:局部地区有雷雨,降水概率:43%
AccuWeather
```
## Notes on the APIs
I have used four APIs: [AccuWeather](https://developer.accuweather.com/packages), [Caiyun Weather](https://www.caiyunapp.com/dev_center/regist.html), [Dark Sky (formerly forecast.io)](https://darksky.net/dev) and [Seniverse (Xinzhi Weather)](https://www.seniverse.com/pricing).
Personally I find AccuWeather the best. Its free tier is tightly rate-limited (only 50 calls per day), but its data is comprehensive (no regional restriction) and accurate (the source appears to be the [China Meteorological Administration](https://baike.baidu.com/item/华风集团)). Caiyun is decent, and its own app is well made (the satellite imagery is a highlight). [Dark Sky](https://darksky.net/forecast/) had no other problems but was not accurate enough, so I removed it; Seniverse is region-limited, so I removed it as well.
6f3700cd3c5693111329aa45bebbe3c86354e8da | 8,034 | md | Markdown | documentation/prometheus-design-principles.md | alphagov/monitoring-doc | 3a68d32edefb5339ba5a59aee2ce23b3f9f0208f | [
"MIT"
] | 5 | 2018-04-18T22:45:48.000Z | 2020-04-21T20:11:46.000Z | documentation/prometheus-design-principles.md | alphagov/monitoring-doc | 3a68d32edefb5339ba5a59aee2ce23b3f9f0208f | [
"MIT"
] | 9 | 2018-03-28T14:28:58.000Z | 2018-09-26T09:05:28.000Z | documentation/prometheus-design-principles.md | alphagov/monitoring-doc | 3a68d32edefb5339ba5a59aee2ce23b3f9f0208f | [
"MIT"
] | 3 | 2018-05-11T10:37:26.000Z | 2021-04-10T19:41:09.000Z | # Prometheus design principles
What is prometheus for? There seem to be a number of design
principles behind prometheus that aren't explicitly documented on the
website. However, prometheus maintainers sometimes refer to them in
mailing lists or github issues. This page tries to reverse engineer
them.
## Terminology
The Prometheus community uses certain words as jargon terms. These
are some:
* failure domain: a group of things that can all fail at once (a
data centre, an AWS region or AZ)
* instance: an endpoint that you can scrape
* job: a collection of *instances* with the same purpose, replicated
for scalability or reliability or some other purpose
## What Prometheus does
Prometheus is a tool for collecting time-series data. These
time-series can be queried using PromQL, a custom query language for
time-series data. PromQL queries can be used as a basis for monitoring
systems through dashboards and alerting.
## What Prometheus doesn't do
[Julius Volz listed some things Prometheus doesn't do](https://youtu.be/QgJbxCWRZ1s?t=9m28s):
* raw log / event collection (use a logging system)
* request tracing (use zipkin or similar)
* "magic" anomaly detection
* durable long-term storage (use an external system for this,
decouple it from your operational monitoring)
* automatic horizontal scaling
* user / auth management
## Pull model
Prometheus is a pull-based system.
[Julius Volz listed some benefits of pull](https://youtu.be/QgJbxCWRZ1s?t=17m55s):
* automatic upness monitoring - if you fail to scrape metrics, you
can immediately say something is wrong
* "horizontal monitoring" - Team A can include some metrics from
Team B on their operational dashboard by selectively scraping.
You can be flexible with what you pull from where
* run a copy of monitoring on your laptop, scraping the same endpoints
* experiment with different alerting rules
* simpler HA - just deploy a copy (alertmanager deduplicates alerts)
* less configuration - instances don't need to know where monitoring
lives
* the monitoring system is in control of when it ingests new metrics
by setting a scraping interval. This avoids issues where a
misbehaving client emits so many metrics they negatively impact
other users of the monitoring system.
The
[prometheus docs also say](https://prometheus.io/docs/introduction/faq/#why-do-you-pull-rather-than-push?):
> Overall, we believe that pulling is slightly better than pushing,
> but it should not be considered a major point when considering a
> monitoring system.
### When to use pushgateway
There's a
[page in the docs for when to use pushgateway](https://prometheus.io/docs/practices/pushing/).
## Shape of data
### Names
* labels should not have too high a cardinality
* See the CAUTION here: https://prometheus.io/docs/practices/naming/#labels
* it seems that prometheus 2.0's new storage engine took a lot
of effort to support "churn" of labels, so long as the number of
active labels at any point is relatively small. see
["storing 16 bytes at scale" from promcon 2017](https://promcon.io/2017-munich/talks/storing-16-bytes-at-scale/)
or
[prometheus 2.0: new storage layer...](https://coreos.com/blog/prometheus-2.0-storage-layer-optimization)
* labels should not include redundant informational data
### Values
* [counters should always count up from zero](https://www.robustperception.io/how-does-a-prometheus-counter-work/)
* data should always make sense, independent of scrape interval
* this is in contrast to statsd's behaviour
* [sum of rates, not rate of sums](https://www.robustperception.io/rate-then-sum-never-sum-then-rate/)
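The last point is easy to get wrong, so here is a small, self-contained sketch (plain Python, not PromQL) of why you take the rate of each counter series first and then sum. The reset handling below mimics what Prometheus's `rate()` does in spirit, not in exact detail:

```python
# Why "sum of rates, not rate of sums": a counter reset (process restart)
# in one series produces a bogus value if you sum the raw counters first.

def per_second_rate(samples, interval):
    """Per-second increase between consecutive samples.
    A drop in a counter is treated as a reset, as Prometheus does."""
    rates = []
    for prev, cur in zip(samples, samples[1:]):
        increase = cur - prev if cur >= prev else cur  # reset: restarted from 0
        rates.append(increase / interval)
    return rates

interval = 15                  # seconds between scrapes
a = [100, 130, 160, 190]       # healthy counter, +30 per scrape
b = [500, 530, 10, 40]         # restarts after the 2nd scrape, then +30 per scrape

# Correct: rate each series, then sum the rates.
sum_of_rates = [x + y for x, y in zip(per_second_rate(a, interval),
                                      per_second_rate(b, interval))]

# Wrong: sum the raw counters, then rate the summed series.
summed = [x + y for x, y in zip(a, b)]
rate_of_sums = per_second_rate(summed, interval)

print(sum_of_rates)   # stays close to the true ~4/s, the reset is mostly absorbed
print(rate_of_sums)   # a large bogus spike where the reset hit the summed series
```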
## Architectural things ##
### Simplicity ###
Prometheus is designed to achieve reliability through simplicity.
This means:
* redundancy is achieved by deploying multiple separate instances,
not by database replication.
* recovery from backup is not a priority. if a prometheus server
has a disk failure, just use another redundant instance (assuming
you have one)
* if a prometheus instance has a temporary blip (say, a network
issue) and then recovers, it will have missing metrics for the
duration of the issue.
[Julius Volz talked about why they didn't do clustering](https://youtu.be/QgJbxCWRZ1s?t=26m40s).
### Prometheus should be deployed in same failure domain ###
A prometheus server should be deployed in the same failure domain as
the things it monitors. This means the same data centre, region,
availability zone, etc.
This means that:
* a prometheus shouldn't have to go over the public internet to
scrape a service
* (federation might go over public internet, though)
https://www.robustperception.io/scaling-and-federating-prometheus/
### There should normally be a single alertmanager for an organization ###
It seems that there should normally be a single alertmanager for an
organization, rather than having separate alertmanagers with different
configurations for each team or programme or division. The best
reference I've found for this is from
[Julius Volz's Prometheus Design and Philosophy talk (7:44)](https://youtu.be/QgJbxCWRZ1s?t=7m43s)
which doesn't go into much detail as to *why* you'd do this.
## Security model
Prometheus has extensive
[documentation about its security model](https://prometheus.io/docs/operating/security/).
In particular, it distinguishes trusted users (who have admin access
to deploy and reconfigure prometheus) from untrusted users (who can
view prometheus metrics and dashboards).
Mostly, you control access to prometheus at the network level; there
is no support for authentication within prometheus.
You should be careful before enabling the `honor_labels` or
`--web.enable-admin-api` options, because both put extra trust in
scraped targets or untrusted users.
## Federation
Prometheus supports federation: a prometheus server can scrape the
metrics from another prometheus server. This is to support some use
cases, such as:
* have a global, coarse-grain view of the system
* the global prometheus may have a coarser scrape resolution
* the global prometheus
[may aggregate instance-level time series into service-level](https://prometheus.io/docs/prometheus/latest/federation/#hierarchical-federation)
You may also limit the metrics collected at the global level to those
which you really care about (e.g. those which form a Service Level
Objective which impacts a Service Level Agreement you'd rather not
break)
In the event of an issue you could then debug further by querying the
local Prometheus server with a more granular view of the system.
## Alerting
A lot of Prometheus's philosophy on alerting is derived from
[Rob Ewaschuk's philosophy on alerting](https://docs.google.com/document/d/199PqyG3UsyXlwieHaqbGiWVa8eMWi8zzAn0YfcApr8Q/edit#)
which, after an editing process, became
[chapter 6 of the SRE book](https://landing.google.com/sre/book/chapters/monitoring-distributed-systems.html).
### Alert on symptoms, not causes
As mentioned above, Prometheus only deals with aggregated numerical
data, not raw event data. Prometheus data is used to trigger alerts,
based on symptoms ("the system is broken") rather than causes. From
Rob Ewaschuk's philosophy, he has a section
[cause-based alerts are bad (but sometimes necessary)](https://docs.google.com/document/d/199PqyG3UsyXlwieHaqbGiWVa8eMWi8zzAn0YfcApr8Q/edit#heading=h.6ammb5h32uqq).
### Access to context
However, people responding to the alert may well need access to raw
event data such as logs or exceptions to understand the underlying
causes in order to resolve the alert.
As [Brian Brazil put it](https://groups.google.com/forum/m/#!topic/prometheus-users/JZigzNa48QM):
> you get your alert, and [...] then jump to logs to see what exactly went wrong. This sort of debug-level information would never pass through Prometheus or the Alertmanager.
| 41.84375 | 175 | 0.76699 | eng_Latn | 0.99406 |
6f372aa6b2afc3351bbad347d0609534cd643f21 | 214 | md | Markdown | docs/swagger.md | Padel-App/padel-app-docs | be404eb3f9b1cd2de2a79643067386805be0b7eb | [
"MIT"
] | null | null | null | docs/swagger.md | Padel-App/padel-app-docs | be404eb3f9b1cd2de2a79643067386805be0b7eb | [
"MIT"
] | null | null | null | docs/swagger.md | Padel-App/padel-app-docs | be404eb3f9b1cd2de2a79643067386805be0b7eb | [
"MIT"
] | 1 | 2022-02-24T19:14:44.000Z | 2022-02-24T19:14:44.000Z | # Swagger
* URL: [Swagger](https://padelapp-backend.herokuapp.com/docs/)
* JSON: [Swagger JSON](https://padelapp-backend.herokuapp.com/docs-json)
!!swagger-http https://padelapp-backend.herokuapp.com/docs-json!! | 30.571429 | 72 | 0.742991 | yue_Hant | 0.514124 |
6f379cd0f62b977dea992d0abe8d1563af0c6498 | 78,488 | md | Markdown | README.md | kokyiphyocho/Cobra-Web-based-POS-System | 3753e2cdedd593aab0dc7f1321557ab78909eb8c | [
"Unlicense"
] | null | null | null | README.md | kokyiphyocho/Cobra-Web-based-POS-System | 3753e2cdedd593aab0dc7f1321557ab78909eb8c | [
"Unlicense"
] | null | null | null | README.md | kokyiphyocho/Cobra-Web-based-POS-System | 3753e2cdedd593aab0dc7f1321557ab78909eb8c | [
"Unlicense"
] | null | null | null | # Cobra : Web-Based Stateful POS System
This project demonstrates how to develop an interactive, stateful POS application using web technology.
The UI design is based on iOS and is compatible across devices of all sizes.
Even though the application is built on web technology, it maintains application state and behaves
like a mobile application.
__*Demo Link*__ : https://testpos.gshopzone.com/?_e=RTgxNjBDOTRDNEQzM0I0QjQ0NjU2MzRFRkZGOEYyRkVBQUFBQUFBQTExMTExMTExMTExMTExMTExMTExMTExMQ%3D%3D
### Output Design
\

\
### Architectural Design
\

### Project Overview
- Using CSS styles to design a UI that resembles a mobile app and is compatible across various device sizes.
- Saving web state and the form stack in the URL.
- Full rendering and partial rendering of form controls.
- Centralized AJAX service for better maintainability.
- Logical partitioning of the database.
- Updating the database using Base64-encoded JSON data.
- Dynamic font loading mechanism.
- Dynamic script and CSS loading mechanism that loads only the necessary script and CSS files.
- Duplicate-resource detection mechanism to avoid loading the same resource multiple times.
- Dependency-awaiting mechanism to eliminate JavaScript errors caused by load order.
- Widget access security for strict widgets.
- Multiple-language controlling mechanism.
- Printing using Web Print technology.
### Application Features
###### Multiple Subscription Capability
- Allows multiple subscriptions, each of which is totally isolated.
- Data from all subscriptions resides within the same database and is logically partitioned.
- Each subscription maintains its own system settings and user accounts.
- It allows multiple users with different roles for each subscription.
###### Support Multiple Languages
- The system allows multiple languages.
- A different system font can be defined for each language.
- The system loads only the fonts needed for the selected language.
###### Support Sub-unit feature
- It supports a sub-unit feature with custom relationships (e.g. 10 pcs per 1 packet).
- Conversion between sub-units and major units is handled automatically by the system.
###### Support Multiple Receipt Types
- Users can issue Sales, Purchase, Stock-In and Stock-Out receipts.
- To minimize concurrency issues, the system uses an on-the-fly validation mechanism.
- The system also allows issuing a pending sale receipt for each table, for restaurants.
###### Support Reporting
- The system can generate daily and monthly reports.
- Profit and loss uses the latest purchase price of each item as its cost.
- Stock balance is calculated on the fly, and reports can include stock cost.
- The system can generate reports as of any date.
###### Support Printing
- The system is compatible with certain receipt printers that support Web Print technology.
- Currently it supports the __*Star Mc-Print2*__, __*Star Mc-Print3*__, __*EPSON TM-T88VI*__, __*EPSON TM-M30*__ and __*EPSON TM-M10*__ printers.
###### Support Customization of Settings
- Adding / removing users and changing passwords.
- Customizing Transaction Settings.
- Customizing Security Settings.
- Customizing Printer Settings.
- Customizing Receipt Layout Settings.
- Customizing Application Settings and Installer Settings.
### Database Tables
The database has three types of tables: Control Tables, Transactional Tables and Log Tables.
###### Control Tables
__*Admin_EServiceParam*__ \
This table stores the parameters for creating a new subscription.
It is used only when creating new subscriptions.
__*Admin_FieldInfo*__ \
This table stores the predefined properties of a form control.
The system populates the properties of a form control with values from this table at run time, using reflection.
__*Admin_FormInfo*__ \
This table stores the base information of a form control.
The system instantiates the form control object using data from this table.
__*Admin_InputInfo*__ \
This table stores input-group information used in forms that have multiple input boxes.
The system uses data from this table to generate a series of input boxes.
__*Admin_Keypad*__ \
This table stores keypad information.
It allows the keypad layout to be customized without modifying code.
The table is logically partitioned by language, and the system retrieves only the records belonging to the active language.
__*Admin_Language*__ \
This table stores language-specific information.
__*Admin_Message*__ \
All application messages for the message box are stored in this table.
The table is logically partitioned by language, and the system retrieves only the records belonging to the active language.
__*Admin_Routing*__ \
This table stores routing information. Currently, it has only a single route record.
__*Admin_Session*__ \
This table stores active session information.
__*Admin_Setting*__ \
This table stores all application settings.
The settings table is logically partitioned by Subscription ID.
__*Admin_Text*__ \
All application texts and labels are stored in this table.
The table is logically partitioned by language, and the system retrieves only the records belonging to the active language.
__*Admin_TimeZone*__ \
This table stores time-zone-related information.
__*Admin_ToolBar*__ \
This table stores the various toolbars used in different forms.
__*Admin_UrlRewrite*__ \
This table stores URL rewriting information. Currently, it has only a single URL rewrite record.
__*Admin_User*__ \
This table stores user-specific information for all subscriptions.
The table is logically partitioned by Subscription ID, and the system checks only the users that belong to the particular subscription.
__*Admin_Widgets*__ \
This table stores information on all widgets that can be used in the system.
A widget is an entry point to a particular form control.
To access strict widgets, the system requests a PIN code to ensure the security of widget usage.
__*Data_AppManifest*__ \
This table stores application-manifest-related information.
The system generates the iOS web clip or the application manifest JSON file using information from this table.
__*Data_ComponentSubscription*__ \
This table stores information about subscribed components.
The home screen renders widgets based on the information from this table.
__*Data_DynamicQuery*__ \
This table stores the queries used in the application.
Queries contain placeholders that the system replaces with actual values at run time.
__*Data_EService*__ \
This table stores the types of services supported by the system.
The subscription table links to the EService ID in this table to identify the service type.
This table also allows configuring off-site storage of transactional data.
__*Data_Subscriber*__ \
This table stores subscriber information such as name, address and contact number.
__*Data_Subscription*__ \
This is the primary table for subscriptions. The system checks the Subscription ID against this table
to determine the validity of the subscription and the type of service subscribed.
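As a concrete illustration of the dynamic-query mechanism (stored query text whose placeholders are replaced with actual values at run time), here is a hypothetical Python sketch. The `{#...#}` placeholder syntax and the `resolve_query` helper are invented for illustration and are not the application's actual format:

```python
# Hypothetical sketch of resolving a stored query template at run time.
# Placeholder syntax {#Name#} is an assumption, not the real format.
import re

def resolve_query(template, params):
    def substitute(match):
        key = match.group(1)
        if key not in params:
            raise KeyError(f"missing query parameter: {key}")
        return str(params[key])
    # Replace every {#Name#} placeholder with its supplied value.
    return re.sub(r"\{#(\w+)#\}", substitute, template)

template = "SELECT * FROM Admin_Setting WHERE SubscriptionID = {#SubscriptionID#}"
print(resolve_query(template, {"SubscriptionID": 42}))
# → SELECT * FROM Admin_Setting WHERE SubscriptionID = 42
```

A real implementation should pass values as SQL parameters rather than splicing them into the query text, to avoid SQL injection.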
###### Transactional Tables
__*EData_ItemCatalogue*__ \
This table store the stock item related information.
This table is logically partitioned by Subscription ID and access is limited to its own subscription.
__*EData_Receipt*__ \
This table is the master table for receipt information.
All types of receipts are stored in this table.
This table is also logically partitioned by Subscription ID, and the receipt serial number is based on the particular
subscription.
__*EData_StakeHolder*__ \
This table stores information about customers and suppliers. It is linked to the
StakeHolderID column of the *EData_Receipt* table.
__*EData_StockIncoming*__ \
This table is the detail table for incoming receipt information. It is linked to the
*EData_Receipt* table via ReceiptID.
__*EData_StockOutgoing*__ \
This table is the detail table for outgoing receipt information. It is linked to the
*EData_Receipt* table via ReceiptID.
__*EData_TableList*__ \
This table stores restaurant table and table group information.
This table is also logically partitioned by Subscription ID, and access is limited to each subscription's own records.
__*EData_Unit*__ \
This table stores unit information for stock items.
This table is also logically partitioned by Subscription ID, and access is limited to each subscription's own records.
###### Log Tables
__*Logging Tables*__ \
*ELog_ItemCatalogue*, *ELog_Receipt*, *ELog_StockIncoming*,
*ELog_StockOutgoing*, *ELog_TableList*, *ELog_Unit*,
*Log_AppManifest*, *Log_Setting*, *Log_Subscriber*, *Log_Subsciption*, *Log_User*
are the log tables for the corresponding data tables.
__*Track_ServiceRequestLog*__ \
This table logs service request activity.
This information is used to measure server performance.
### Stored Functions
__*Base64Decode*__ \
This stored function decodes a Base64 string to a plain string.
```SQL
CREATE FUNCTION [dbo].[Base64Decode] ( @Input VARCHAR(MAX) )
RETURNS VARCHAR(MAX)
BEGIN
DECLARE @DecodedOutput VARCHAR(MAX)
set @DecodedOutput = CAST(CAST(N'' AS XML).value('xs:base64Binary(sql:variable("@Input"))', 'VARBINARY(MAX)') AS VARCHAR(MAX))
RETURN @DecodedOutput
END
```
__*Base64Encode*__ \
This stored function encodes a string as a Base64 string.
```SQL
CREATE FUNCTION [dbo].[Base64Encode] ( @Input VARCHAR(MAX) )
RETURNS VARCHAR(MAX)
BEGIN
DECLARE @Input1 VARBINARY(MAX)
DECLARE @EncodedOutput VARCHAR(MAX)
set @Input1 = CONVERT(varbinary(MAX), @Input)
set @EncodedOutput = CAST(N'' as xml).value('xs:base64Binary(sql:variable("@Input1"))', 'VARCHAR(max)')
RETURN @EncodedOutput
END
```
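For reference, the same round trip can be sketched outside SQL Server. This is a minimal Python equivalent using the standard `base64` module; the function names are illustrative, and single-byte (ASCII) input is assumed to match the `VARCHAR` behavior of the SQL functions:

```python
import base64

def base64_encode(text: str) -> str:
    # Counterpart of dbo.Base64Encode: plain string -> Base64 string
    return base64.b64encode(text.encode("ascii")).decode("ascii")

def base64_decode(encoded: str) -> str:
    # Counterpart of dbo.Base64Decode: Base64 string -> plain string
    return base64.b64decode(encoded.encode("ascii")).decode("ascii")

print(base64_encode("hello"))       # aGVsbG8=
print(base64_decode("aGVsbG8="))    # hello
```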
__*Modulus11CheckChar*__ \
This stored function generates a modulus 11 check character.
```SQL
CREATE FUNCTION [dbo].[Modulus11CheckChar] (@DataStr VARCHAR(MAX) )
RETURNS VARCHAR(1)
BEGIN
DECLARE @CharCode int;
DECLARE @Multiplier int;
DECLARE @Counter int = 0;
DECLARE @SumValue int = 0;
DECLARE @MultiplierArray varchar(10) = '1234567';
DECLARE @CheckCharArray varchar(11) = 'UCANGFJTRKX';
DECLARE @CheckChar as varchar(1);
WHILE @Counter < Len(@DataStr)
BEGIN
SET @CharCode = ASCII(Substring(@DataStr,@Counter + 1,1));
SET @Multiplier = CONVERT(int, Substring(@MultiplierArray, (7 - (@Counter % 7)),1));
SET @SumValue += @Multiplier * @CharCode;
SET @Counter = @Counter + 1;
END;
SET @SumValue = @SumValue % 11;
SET @SumValue = 11 - @SumValue;
SET @CheckChar = SUBSTRING(@CheckCharArray, @SumValue, 1);
RETURN @CheckChar
END
```
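The check-character algorithm ports directly to other languages, which is useful for validating serials client-side. A Python sketch mirroring the 1-based `SUBSTRING` arithmetic of the SQL function (the function name is illustrative):

```python
def modulus11_check_char(data: str) -> str:
    multipliers = "1234567"        # same weight table as the SQL function
    check_chars = "UCANGFJTRKX"    # the 11 possible check characters
    total = 0
    for i, ch in enumerate(data):
        # SQL: SUBSTRING(@MultiplierArray, 7 - (i % 7), 1), 1-based indexing
        weight = int(multipliers[6 - (i % 7)])
        total += weight * ord(ch)
    remainder = 11 - (total % 11)  # in the range 1..11
    # SQL: SUBSTRING(@CheckCharArray, remainder, 1), 1-based indexing
    return check_chars[remainder - 1]

print(modulus11_check_char("A"))   # J
```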
__*UTF8Base64Decode*__ \
This stored function decodes a Base64 string to a UTF-8 string.
```SQL
CREATE FUNCTION [dbo].[UTF8Base64Decode] ( @Input NVARCHAR(MAX) )
RETURNS NVARCHAR(MAX)
BEGIN
DECLARE @BinaryValue VARBINARY(MAX) = CONVERT(XML, @Input).value('.','varbinary(max)');
DECLARE @IntermediaryValue VARCHAR(MAX) = CONVERT(VARCHAR(MAX), @BinaryValue);
DECLARE @EscapedValue VARCHAR(MAX) = dbo.XMLEscape(@IntermediaryValue,1);
DECLARE @DecodedOutput NVARCHAR(MAX) = CONVERT(NVARCHAR(MAX), CONVERT(XML, '<?xml version="1.0" encoding="UTF-8"?>' + @EscapedValue));
SET @DecodedOutput = dbo.UTF8XMLEscape(@DecodedOutput,0);
RETURN @DecodedOutput
END
```
__*UTF8XMLEscape*__ \
This stored function encodes or decodes XML escape sequences in a UTF-8 string.
```SQL
CREATE FUNCTION [dbo].[UTF8XMLEscape] ( @Input NVARCHAR(MAX), @Escape TinyInt )
RETURNS NVARCHAR(MAX)
BEGIN
DECLARE @OutputData NVARCHAR(MAX) = @Input;
IF (@Escape = 0)
BEGIN
SET @OutputData = REPLACE(@OutputData,'&amp;','&');
SET @OutputData = REPLACE(@OutputData,'&lt;','<');
SET @OutputData = REPLACE(@OutputData,'&gt;','>');
SET @OutputData = REPLACE(@OutputData,'&quot;','"');
SET @OutputData = REPLACE(@OutputData,'&apos;','''');
END
ELSE
BEGIN
SET @OutputData = REPLACE(@OutputData,'&','&amp;');
SET @OutputData = REPLACE(@OutputData,'<','&lt;');
SET @OutputData = REPLACE(@OutputData,'>','&gt;');
SET @OutputData = REPLACE(@OutputData,'"','&quot;');
SET @OutputData = REPLACE(@OutputData,'''','&apos;');
END
RETURN @OutputData
END
```
__*XMLEscape*__ \
This stored function encodes or decodes XML escape sequences in a text string.
```SQL
CREATE FUNCTION [dbo].[XMLEscape] ( @Input VARCHAR(MAX), @Escape TinyInt )
RETURNS VARCHAR(MAX)
BEGIN
DECLARE @OutputData VARCHAR(MAX) = @Input;
IF (@Escape = 0)
BEGIN
SET @OutputData = REPLACE(@OutputData,'&amp;','&');
SET @OutputData = REPLACE(@OutputData,'&lt;','<');
SET @OutputData = REPLACE(@OutputData,'&gt;','>');
SET @OutputData = REPLACE(@OutputData,'&quot;','"');
SET @OutputData = REPLACE(@OutputData,'&apos;','''');
END
ELSE
BEGIN
SET @OutputData = REPLACE(@OutputData,'&','&amp;');
SET @OutputData = REPLACE(@OutputData,'<','&lt;');
SET @OutputData = REPLACE(@OutputData,'>','&gt;');
SET @OutputData = REPLACE(@OutputData,'"','&quot;');
SET @OutputData = REPLACE(@OutputData,'''','&apos;');
END
RETURN @OutputData
END
```
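Both escape helpers are a fixed chain of `REPLACE` calls; in the encode direction `&` is replaced first so that the ampersands introduced by the later replacements are not escaped again. A Python sketch of the same chain (the function name is illustrative):

```python
# Entity pairs in the same order as the SQL REPLACE chain.
_ENTITIES = [("&", "&amp;"), ("<", "&lt;"), (">", "&gt;"),
             ('"', "&quot;"), ("'", "&apos;")]

def xml_escape(text: str, escape: bool) -> str:
    # Mirrors dbo.XMLEscape: escape=True encodes, escape=False decodes.
    if escape:
        # '&' first, so entities added by later replacements stay intact.
        for raw, entity in _ENTITIES:
            text = text.replace(raw, entity)
    else:
        for raw, entity in _ENTITIES:
            text = text.replace(entity, raw)
    return text

print(xml_escape('Tom & <Jerry>', True))   # Tom &amp; &lt;Jerry&gt;
```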
### Stored Procedures
__*ES_CreateSubscription*__ \
This stored procedure creates a new subscription.
```SQL
CREATE PROCEDURE [dbo].[ES_CreateSubscription](
@EServiceID varchar(100),
@ActivationDate Datetime,
@ExpiryDate Datetime)
AS
BEGIN
SET NOCOUNT ON;
IF NOT EXISTS(SELECT * FROM Admin_EServiceParam WHERE EServiceID = @EServiceID)
BEGIN
RAISERROR('Invalid EService ID',16,1);
RETURN;
END;
DECLARE @ProductID varchar(50) = (SELECT ProductID FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @ServerID varchar(50) = (SELECT ServerID FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @EditionCode varchar(50) = (SELECT EditionCode FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @EditionSuffix varchar(50) = (SELECT EditionSuffix FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @AppName varchar(200) = (SELECT AppName FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @ShortName varchar(50) = (SELECT ShortName FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @BackgroundColor varchar(50) = (SELECT BackGroundColor FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @Description varchar(500) = (SELECT Description FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @Icon varchar(500) = (SELECT Icon FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @OrganizationID varchar(200) = (SELECT OrganizationID FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @OrganizationName varchar(500) = (SELECT OrganizationName FROM Admin_EServiceParam WHERE EServiceID = @EServiceID);
DECLARE @UserID int = 0;
DECLARE @LoginID varchar(200) = 'SYSTEM'
DECLARE @NewSerial INT;
DECLARE @NewSubscriptionSerial VARCHAR(50);
DECLARE @SubscriptionID VARCHAR(200);
SET @NewSerial = (SELECT ISNULL(Max(Convert(int, Substring(SubscriptionSerial,11,3))),0) + 1 from Data_Subscription where LEFT(SubscriptionSerial,3) = @ProductID AND SUBSTRING(SubscriptionSerial,5,3) = @ServerID AND SUBSTRING(SubscriptionSerial,9,2) = @EditionCode AND SUBSTRING(SubscriptionSerial,15,2) = @EditionSuffix);
SET @NewSubscriptionSerial = @ProductID + '-' + @ServerID + '-' + @EditionCode + RIGHT(' 000' + Convert(varchar(3), @NewSerial),3) + '-' + @EditionSuffix;
SET @NewSubscriptionSerial = @NewSubscriptionSerial + '-' + dbo.Modulus11CheckChar(@NewSubscriptionSerial);
SET @SubscriptionID = NEWID();
INSERT INTO Data_Subscription(UserID, LogInID, SubscriptionID, EServiceID, FrontEndPath, SubscriptionSerial, Type, Status, ActivationDate, ExpiryDate, SubscriptionRole, SubscriptionAttribute)
VALUES (@UserID, @LoginID, @SubscriptionID, @EServiceID, @NewSubscriptionSerial, @NewSubscriptionSerial, 'ANNUAL', 'ACTIVE', @ActivationDate, @ExpiryDate, '', '');
INSERT INTO Admin_User(SubscriptionID, LoginID, Type, Status, UserRole, Password)
VALUES (@SubscriptionID, "gizmo", 'SUPERADMIN', 'ACTIVE', '#ADMIN;', '96E79218965EB72C92A549DD5A330112');
INSERT INTO Data_AppManifest(SubscriptionID, AppName, ShortName, BackgroundColor, Description, FrontEndIcon, BackEndIcon, OrganizationID, OrganizationName)
VALUES (@SubscriptionID, @AppName, @ShortName, @BackgroundColor, @Description, @Icon, @Icon, @OrganizationID, @OrganizationName);
END
```
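The serial built in `ES_CreateSubscription` has the shape product-server-edition+3-digit-sequence-suffix, followed by a modulus 11 check character computed over the whole string. A Python sketch of that construction, with hypothetical component values (real values come from `Admin_EServiceParam`); the check-character helper repeats the SQL algorithm:

```python
def modulus11_check_char(data: str) -> str:
    # Same algorithm as dbo.Modulus11CheckChar (1-based SQL indexing adjusted).
    multipliers, check_chars = "1234567", "UCANGFJTRKX"
    total = sum(int(multipliers[6 - (i % 7)]) * ord(ch)
                for i, ch in enumerate(data))
    return check_chars[(11 - total % 11) - 1]

def build_subscription_serial(product_id: str, server_id: str,
                              edition_code: str, edition_suffix: str,
                              sequence: int) -> str:
    # Mirrors ES_CreateSubscription: zero-pad the sequence to 3 digits,
    # then append the check character of the whole serial.
    base = f"{product_id}-{server_id}-{edition_code}{sequence:03d}-{edition_suffix}"
    return f"{base}-{modulus11_check_char(base)}"

# Hypothetical component values, not taken from Admin_EServiceParam:
print(build_subscription_serial("POS", "S01", "ST", "MM", 1))
```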
__*ES_POSChangeItemStatus*__ \
This stored procedure changes the status of an item or category.
```SQL
CREATE PROCEDURE [dbo].[ES_POSChangeItemStatus](
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@ItemID int,
@Status varchar(50),
@ReturnValue int Output)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @IncomingQuantity decimal;
DECLARE @OutgoingQuantity decimal;
DECLARE @OutstandingQuantity decimal;
DECLARE @OutstandingInfo varchar(200);
DECLARE @OutstandingError varchar(400);
DECLARE @QuantativeItem tinyint;
IF ((@Status <> 'ACTIVE') And EXISTS(SELECT * From EData_ItemCatalogue Where SubscriptionID = @SubscriptionID and Status = 'ACTIVE' and Category = @ItemID))
BEGIN
RAISERROR('[#ERROR];err_categorynotempty;Category is Not Empty.',16,1);
RETURN;
END
IF (@Status <> "ACTIVE")
BEGIN
SET @QuantativeItem = (SELECT IIF(EntryType = 'ITEM' AND EntryAttribute <> 'STATIC',1,0) From EData_ItemCatalogue Where SubscriptionID = @SubscriptionID and Status = 'ACTIVE' and ItemID = @ItemID);
SET @OutgoingQuantity = ISNULL((SELECT Sum(so.Quantity) FROM EData_Receipt as rc, EData_StockOutgoing as so WHERE rc.SubscriptionID = @SubscriptionID And so.SubscriptionID = @SubscriptionID And rc.ReceiptID = so.ReceiptID and rc.Status = 'ACTIVE' and so.ItemID = @ItemID),0);
SET @IncomingQuantity = ISNULL((SELECT Sum(si.Quantity) FROM EData_Receipt as rc, EData_StockIncoming as si WHERE rc.SubscriptionID = @SubscriptionID And si.SubscriptionID = @SubscriptionID And rc.ReceiptID = si.ReceiptID and rc.Status = 'ACTIVE' and si.ItemID = @ItemID),0);
SET @OutstandingQuantity = @IncomingQuantity - @OutgoingQuantity;
IF ((@OutstandingQuantity <> 0) AND (@QuantativeItem = 1))
BEGIN
SET @OutstandingInfo = '{"quantity":"' + Cast(@OutstandingQuantity as varchar(18))+ '"}';
SET @OutstandingError = '[#ERROR];err_stocknotempty;$ITEMNAME still have $QUANTITYTEXT in stock.;' + @OutstandingInfo;
RAISERROR(@OutstandingError,16,1);
RETURN;
END
END
BEGIN TRY
INSERT INTO ELog_ItemCatalogue Select GetDate(), @UserID, @LoginID, @AccessInfo, Concat('STATUS->',@Status), EData_ItemCatalogue.* FROM EData_ItemCatalogue Where SubscriptionID = @SubscriptionID And Itemid = @ItemID;
UPDATE EData_ItemCatalogue
SET
UserID = @UserID,
LogInID = @LoginID,
AccessInfo = @AccessInfo,
Status = @Status
Where
SubscriptionID = @SubscriptionID And
ItemID = @ItemID;
Set @ReturnValue = 1;
END TRY
BEGIN CATCH
Set @ReturnValue = 0;
THROW
END CATCH
END
```
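The deactivation guard above reduces to: a quantitative item may only be deactivated once its incoming and outgoing quantities cancel out. A minimal Python sketch of that check (function and parameter names are illustrative):

```python
def can_deactivate(incoming_quantities, outgoing_quantities,
                   quantitative: bool) -> bool:
    # Mirrors ES_POSChangeItemStatus: block deactivation while stock remains.
    outstanding = sum(incoming_quantities) - sum(outgoing_quantities)
    return outstanding == 0 or not quantitative

print(can_deactivate([10, 5], [12], quantitative=True))   # False: 3 in stock
print(can_deactivate([10, 5], [15], quantitative=True))   # True: stock is empty
```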
__*ES_POSChangeTableStatus*__ \
This stored procedure changes the status of a table or table group.
```SQL
CREATE PROCEDURE [dbo].[ES_POSChangeTableStatus](
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@TableID int,
@Status varchar(50),
@ReturnValue int Output)
AS
BEGIN
SET NOCOUNT ON;
IF ((@Status <> 'ACTIVE') And EXISTS(SELECT * From EData_TableList Where SubscriptionID = @SubscriptionID AND Status = 'ACTIVE' AND GroupID = @TableID))
BEGIN
RAISERROR('[#ERROR];err_tablegroupnotempty;Table Group is Not Empty.',16,1);
RETURN;
END
IF ((@Status <> 'ACTIVE') And EXISTS(SELECT * From EData_Receipt Where SubscriptionID = @SubscriptionID AND Status = 'PENDING' AND Reference = 'TABLE#' + CONVERT(VARCHAR(10),@TableID)))
BEGIN
RAISERROR('[#ERROR];err_tablenotempty;Table has pending receipt.',16,1);
RETURN;
END
BEGIN TRY
INSERT INTO ELog_TableList Select GetDate(), @UserID, @LoginID, @AccessInfo, Concat('STATUS->',@Status), EData_TableList.* FROM EData_TableList Where SubscriptionID = @SubscriptionID And TableID = @TableID;
UPDATE EData_TableList
SET
UserID = @UserID,
LogInID = @LoginID,
Status = @Status,
LastModified = GetDate()
Where
SubscriptionID = @SubscriptionID And
TableID = @TableID;
Set @ReturnValue = 1;
END TRY
BEGIN CATCH
Set @ReturnValue = 0;
THROW
END CATCH
END
```
__*ES_POSDeletePendingReceipt*__ \
This stored procedure deletes a pending receipt, as used in the restaurant system.
```SQL
CREATE PROCEDURE [dbo].[ES_POSDeletePendingReceipt](
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@ReceiptID int,
@LastModified DateTime,
@ReturnValue int Output)
AS
BEGIN
SET NOCOUNT ON;
IF NOT EXISTS(SELECT * From EData_Receipt where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID And Status = 'PENDING' And LastModified = @LastModified)
BEGIN
RAISERROR('[#ERROR];err_datachanged;Data was changed by some other user.',16,1);
RETURN;
END
ELSE
BEGIN
BEGIN TRY
INSERT INTO ELog_Receipt Select GetDate(), @UserID, @LoginID, @AccessInfo, Concat('STATUS->','ACTIVE'), EData_Receipt.* FROM EData_Receipt Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID;
Update EData_Receipt
SET
UserID = @UserID,
LogInID = @LoginID,
AccessInfo = @AccessInfo,
LastModified = GETDATE(),
Status = 'ACTIVE'
Where
SubscriptionID = @SubscriptionID And
ReceiptID = @ReceiptID And
LastModified = @LastModified And
Status = 'PENDING';
Set @ReturnValue = 1;
END TRY
BEGIN CATCH
Set @ReturnValue = 0;
THROW
END CATCH
END;
END
```
__*ES_POSGetDailyReceipt*__ \
This stored procedure retrieves daily receipt information for a particular date.
```SQL
CREATE PROCEDURE [dbo].[ES_POSGetDailyReceipt](
@SubscriptionID varchar(200),
@Date DateTime)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @StockUnitCost TABLE (ItemID int NOT NULL primary key,Cost decimal(18,2));
DECLARE @StockBalance TABLE (ItemID int NOT NULL primary key,Quantity decimal(18,2));
/* STOCK LATEST UNIT COST QUERY */
WITH StockUnitCost AS (
SELECT
ItemID, (UnitPrice / IIF(unitmode='MAJOR' And UnitRelationship > 0, UnitRelationShip,1)) AS Cost,
ROW_NUMBER() OVER (PARTITION BY ItemID ORDER BY rc.ReceiptDate, si.IncomingID DESC) AS RowNo
FROM
EDATA_Receipt AS rc, EData_StockIncoming AS si
WHERE
rc.SubscriptionID = @SubscriptionID AND
si.SubscriptionID = @SubscriptionID AND
UnitPrice > 0 AND
CONVERT(Date, rc.ReceiptDate) <= @Date AND
rc.receiptid = si.ReceiptID AND
si.Status = 'ACTIVE' And
rc.Status = 'ACTIVE' )
INSERT INTO
@StockUnitCost
SELECT
ItemID, Cost
FROM
StockUnitCost
WHERE
RowNo = 1;
/* STOCK BALANCE QUERY */
With StockBalance AS
(SELECT
ItemID, Sum(Quantity) AS Quantity
FROM
EData_Receipt AS rc, EData_StockIncoming AS si
WHERE
rc.ReceiptID = si.ReceiptID AND
rc.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) <= @Date AND
si.Status = 'ACTIVE' AND
rc.Status = 'ACTIVE'
GROUP BY
ItemID
UNION ALL
SELECT
ItemID, -Sum(Quantity) AS Quantity
FROM
EData_Receipt as rc, EData_StockOutgoing AS so
WHERE
rc.ReceiptID = so.ReceiptID AND
rc.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) <= @Date AND
so.Status = 'ACTIVE' AND
rc.Status = 'ACTIVE'
GROUP BY
ItemID)
INSERT INTO
@StockBalance
SELECT
ItemID, Sum(Quantity) AS Quantity
FROM
StockBalance
GROUP BY
ItemID;
WITH ReceiptDetail AS (
SELECT
rc.ReceiptID, ReceiptNo, rc.Status, ReceiptType, CONVERT(date, ReceiptDate) AS ReceiptDate,
IsNull(st.CodeNo,'') AS CodeNo, IsNull(st.Name,'') As Name,
si.Serial, si.ItemID, si.Quantity, si.UnitMode, si.UnitRelationship, si.UnitPrice, si.Discount, si.TaxAmount,
Round(si.Quantity / IIF(si.UnitMode = 'MAJOR' And si.UnitRelationship > 0,si.UnitRelationship,1),2) As DisplayQuantity,
Round(si.Quantity / IIF(si.UnitMode = 'MAJOR' And si.UnitRelationship > 0,si.UnitRelationship,1),2) * si.UnitPrice As TotalAmount,
ItemName, ic.EntryAttribute, IIF(si.UnitMode = 'MAJOR',ic.MajorUnitName, ic.MinorUnitName) as UnitName
FROM
EData_Receipt AS rc
INNER JOIN
EData_StockIncoming AS si
ON
(rc.ReceiptID = si.ReceiptID)
LEFT JOIN
EData_StakeHolder AS st
ON
(rc.StakeHolderID = st.StakeHolderID)
LEFT JOIN
EData_ItemCatalogue as ic
ON (si.ItemID = ic.ItemID)
WHERE
rc.SubscriptionID = @SubscriptionID AND
si.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) = @Date
UNION ALL
SELECT
rc.ReceiptID, ReceiptNo, rc.Status, ReceiptType, CONVERT(date, ReceiptDate) AS ReceiptDate,
IsNull(st.CodeNo,'') AS CodeNo, IsNull(st.Name,'') As Name,
so.Serial, so.ItemID, -so.Quantity As Quantity, so.UnitMode, so.UnitRelationship, so.UnitPrice, so.Discount, so.TaxAmount,
Round(so.Quantity / IIF(so.UnitMode = 'MAJOR' And so.Unitrelationship > 0,so.UnitRelationship,1) ,2) As DisplayQuantity,
(Round(so.Quantity / IIF(so.UnitMode = 'MAJOR' And so.Unitrelationship > 0,so.UnitRelationship,1),2) * so.UnitPrice) - so.Discount As TotalAmount,
ItemName, ic.EntryAttribute, IIF(so.UnitMode = 'MAJOR',ic.MajorUnitName,ic.MinorUnitName) as UnitName
FROM
EData_Receipt AS rc
INNER JOIN
EData_StockOutgoing AS so
ON
(rc.ReceiptID = so.ReceiptID)
LEFT JOIN
EData_StakeHolder AS st
ON
(rc.StakeHolderID = st.StakeHolderID)
LEFT JOIN
EData_ItemCatalogue as ic
ON (so.ItemID = ic.ItemID)
WHERE
rc.SubscriptionID = @SubscriptionID AND
so.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) = @Date /* AND
rc.Status = 'ACTIVE' */
)
SELECT
rd.*,
ISNULL(sb.Quantity - Sum(rd.Quantity) OVER(PARTITION BY rd.ItemID ORDER BY rd.ReceiptID desc, rd.Serial desc ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),0) As RelativeBalance,
sb.Quantity as StockBalance, Concat(rd.ItemName,' ',Format(rd.DisplayQuantity,'0.##'),' ',rd.UnitName) As Description, ISNULL(sc.Cost,0) AS UnitCost, ROUND(ISNULL(sc.Cost,0) * rd.Quantity,2) AS TotalCost
FROM
ReceiptDetail AS rd
LEFT JOIN
@StockUnitCost AS sc
ON
(rd.ItemID = sc.ItemID)
LEFT JOIN
@StockBalance As sb
ON
(rd.ItemID = sb.ItemID)
ORDER BY
rd.ReceiptID, rd.Serial;
END
```
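The `RelativeBalance` column is the closing stock balance minus a running sum of quantities taken from the newest line backwards, i.e. the balance just before each movement. The same arithmetic, sketched in Python with illustrative names:

```python
def relative_balances(closing_balance, quantities_newest_first):
    # Mirrors the SUM(...) OVER (ORDER BY ReceiptID DESC, Serial DESC
    # ROWS UNBOUNDED PRECEDING) window in ES_POSGetDailyReceipt.
    balances, running = [], 0
    for qty in quantities_newest_first:
        running += qty
        balances.append(closing_balance - running)
    return balances

# Closing balance 10; newest movement +3, then -2, then +4 (oldest):
print(relative_balances(10, [3, -2, 4]))   # [7, 9, 5]
```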
__*ES_POSGetDailyStockBalance*__ \
This stored procedure retrieves daily stock balance information for a particular date.
```SQL
CREATE PROCEDURE [dbo].[ES_POSGetDailyStockBalance](
@SubscriptionID varchar(200),
@Date DateTime)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @StockUnitCost TABLE (ItemID int NOT NULL primary key,Cost decimal(18,2));
DECLARE @StockBalance TABLE (ItemID int NOT NULL primary key,Quantity decimal(18,2));
/* STOCK LATEST UNIT COST QUERY */
WITH StockUnitCost AS (
SELECT
ItemID, (UnitPrice / IIF(unitmode='MAJOR' And UnitRelationship > 0, UnitRelationShip,1)) AS Cost,
ROW_NUMBER() OVER (PARTITION BY ItemID ORDER BY rc.ReceiptDate, si.IncomingID DESC) AS RowNo
FROM
EDATA_Receipt AS rc, EData_StockIncoming AS si
WHERE
rc.SubscriptionID = @SubscriptionID AND
si.SubscriptionID = @SubscriptionID AND
UnitPrice > 0 AND
CONVERT(Date, rc.ReceiptDate) <= @Date AND
rc.receiptid = si.ReceiptID AND
si.Status = 'ACTIVE' AND
rc.Status = 'ACTIVE' )
INSERT INTO
@StockUnitCost
SELECT
ItemID, Cost
FROM
StockUnitCost
WHERE
RowNo = 1;
/* STOCK BALANCE IN QUANTITY */
With StockBalance AS
(SELECT
ItemID, Sum(Quantity) AS Quantity
FROM
EData_Receipt AS rc, EData_StockIncoming AS si
WHERE
rc.ReceiptID = si.ReceiptID AND
rc.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) <= @Date AND
si.Status = 'ACTIVE' AND
rc.Status = 'ACTIVE'
GROUP BY
ItemID
UNION ALL
SELECT
ItemID, -Sum(Quantity) AS Quantity
FROM
EData_Receipt as rc, EData_StockOutgoing AS so
WHERE
rc.ReceiptID = so.ReceiptID AND
rc.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) <= @Date AND
so.Status = 'ACTIVE' AND
(rc.Status = 'ACTIVE' Or
rc.Status = 'PENDING')
GROUP BY
ItemID)
INSERT INTO
@StockBalance
SELECT
ItemID, Sum(Quantity) AS Quantity
FROM
StockBalance
GROUP BY
ItemID;
/* STOCK BALANCE WITH COST */
SELECT
ic.ItemID, ic.EntryType, ic.Category, ic.SortOrder, ic.UnitRelationship, ic.ItemName, ic.MajorUnitName, ic.MinorUnitName,
ISNULL(sb.Quantity,0) As Quantity, ISNULL(sc.Cost,0) as UnitCost, ROUND(ISNULL(sb.Quantity,0) * ISNULL(sc.Cost,0),2) as TotalCost,
CONCAT(IIF(ISNULL(sb.Quantity,0) / IIF(ic.UnitRelationship <= 1, 1, ic.UnitRelationship) > 0 , CONCAT(FORMAT(FLOOR(ISNULL(sb.Quantity,0) / IIF(ic.UnitRelationship <= 1, 1, ic.UnitRelationship)),'#'),' ', ic.MajorUnitName,' '),''),
IIF(ISNULL(sb.Quantity,0) % IIF(ic.UnitRelationship <= 1, 1, ic.UnitRelationship) > 0, CONCAT(FORMAT(ISNULL(sb.Quantity,0) % IIF(ic.UnitRelationship <= 1, 1, ic.UnitRelationship),'#.##'),' ', ic.MinorUnitName),'')) As QuantityText
FROM
EData_ItemCatalogue as ic
LEFT JOIN
@StockBalance AS sb
ON
(ic.ItemID = sb.ItemID)
LEFT JOIN
@StockUnitCost AS sc
ON
(sb.ItemID = sc.ItemID)
WHERE
ic.SubscriptionID = @SubscriptionID AND
ic.Status = 'ACTIVE' AND
ic.EntryAttribute = '' AND
ic.EntryType <> 'SERVICE';
END
```
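The `QuantityText` expression splits a raw quantity into major and minor units via integer division and modulo on `UnitRelationship`, skipping whichever part is zero. A Python sketch of the same formatting (function name and unit names are hypothetical):

```python
def quantity_text(quantity, unit_relationship: int,
                  major_name: str, minor_name: str) -> str:
    # Mirrors the CONCAT/IIF chain in ES_POSGetDailyStockBalance:
    # relationships <= 1 are treated as 1 (no major/minor split).
    rel = unit_relationship if unit_relationship > 1 else 1
    major = int(quantity // rel)
    minor = quantity % rel
    parts = []
    if major > 0:
        parts.append(f"{major} {major_name}")
    if minor > 0:
        parts.append(f"{minor:g} {minor_name}")
    return " ".join(parts)

print(quantity_text(25, 12, "Box", "Pc"))   # 2 Box 1 Pc
```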
__*ES_POSGetMonthlyTransaction*__ \
This stored procedure retrieves monthly profit and loss information.
```SQL
CREATE PROCEDURE [dbo].[ES_POSGetMonthlyTransaction](
@SubscriptionID varchar(200),
@StartDate DateTime,
@EndDate DateTime)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @DateTable TABLE ([Date] DateTime NOT NULL);
DECLARE @ReceiptInfo TABLE (ReceiptDate DateTime NOT NULL,Cost decimal(18,2),Amount decimal(18,2));
IF @EndDate Is NULL
BEGIN
SET @StartDate = CONVERT(date,DATEADD(DAY,1,EOMONTH(DATEADD(MONTH,-1,@StartDate))));
SET @EndDate = CONVERT(date,EOMONTH(@StartDate));
END;
WITH DateTable AS
(SELECT [Date] = @StartDate
UNION ALL
SELECT DATEADD(day, 1, [Date])
FROM DateTable
WHERE DATEADD(day, 1, [Date]) <= @EndDate
)
INSERT INTO
@DateTable
SELECT [Date]
FROM DateTable
OPTION (MAXRECURSION 0);
WITH ReceiptDetail AS (
SELECT
rc.ReceiptID, ReceiptNo, rc.Status, ReceiptType, CONVERT(date, ReceiptDate) AS ReceiptDate,
IsNull(st.CodeNo,'') AS CodeNo, IsNull(st.Name,'') As Name,
so.Serial, so.ItemID, -so.Quantity As Quantity, so.UnitMode, so.UnitRelationship, so.UnitPrice, so.Discount, so.TaxAmount,
Round(so.Quantity / IIF(so.UnitMode = 'MAJOR' And so.Unitrelationship > 0,so.UnitRelationship,1) ,2) As DisplayQuantity,
(Round(so.Quantity / IIF(so.UnitMode = 'MAJOR' And so.Unitrelationship > 0,so.UnitRelationship,1),2) * so.UnitPrice) - so.Discount As TotalAmount,
ItemName, IIF(so.UnitMode = 'MAJOR',ic.MajorUnitName,ic.MinorUnitName) as UnitName
FROM
EData_Receipt AS rc
INNER JOIN
EData_StockOutgoing AS so
ON
(rc.ReceiptID = so.ReceiptID)
LEFT JOIN
EData_StakeHolder AS st
ON
(rc.StakeHolderID = st.StakeHolderID)
LEFT JOIN
EData_ItemCatalogue as ic
ON (so.ItemID = ic.ItemID)
WHERE
rc.SubscriptionID = @SubscriptionID AND
so.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) >= @StartDate AND
CONVERT(Date, rc.ReceiptDate) <= @EndDate AND
rc.Status = 'ACTIVE' AND
rc.ReceiptType = 'SALE'
)
INSERT INTO
@ReceiptInfo
SELECT
rd.ReceiptDate,
ROUND(ISNULL(sc.Cost,0) * rd.Quantity,2) AS TotalCost,
rd.TotalAmount
FROM
ReceiptDetail AS rd
OUTER APPLY
(
SELECT
TOP (1) ItemID,
FIRST_VALUE(UnitPrice / IIF(unitmode='MAJOR' And UnitRelationship > 0, UnitRelationShip,1)) OVER (PARTITION BY ItemID ORDER BY rc.ReceiptDate, si.IncomingID DESC) AS Cost
FROM
EDATA_Receipt AS rc, EData_StockIncoming AS si
WHERE
rc.SubscriptionID = @SubscriptionID AND
si.SubscriptionID = @SubscriptionID AND
UnitPrice > 0 AND
CONVERT(Date, rc.ReceiptDate) <= rd.ReceiptDate AND
si.ItemID = rd.ItemID AND
rc.receiptid = si.ReceiptID AND
si.Status = 'ACTIVE' AND
rc.Status = 'ACTIVE'
) As sc;
SELECT
[Date] As ReceiptDate,
ISNULL(Sum(Amount),0) As TotalAmount,
ISNULL(Sum(Cost),0) As TotalCost
FROM
@DateTable as dt
LEFT JOIN
@ReceiptInfo as rc
ON
dt.Date = rc.ReceiptDate
GROUP BY
dt.Date;
END
```
__*ES_POSGetPrintingInfo*__ \
This stored procedure generates printing information for a particular receipt.
```SQL
CREATE PROCEDURE [dbo].[ES_POSGetPrintingInfo](
@SubscriptionID VARCHAR(200),
@ReceiptType VARCHAR(50),
@ReceiptID INT,
@ReceiptInfo NVARCHAR(MAX) = '' OUTPUT)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ReceiptMaster NVARCHAR(2000);
DECLARE @ReceiptDetail NVARCHAR(MAX);
SET @ReceiptMaster = ( SELECT receiptid, receipttype,receiptno, receiptdate, taxpercent, taxinclusive, IIF(SUBSTRING(reference,1,6) = 'TABLE#','',reference) as reference, discount, servicechargepercent, servicecharge,
paymentcash, paymentbank, paymentcredit, paymentcontra, paymentetransfer, paymentcreditcard, change,
ISNULL(us.LoginID,'') AS staffname, ISNULL(st.Name,'') AS customername, ISNULL(tl.DisplayName,'') As tablename
FROM EData_Receipt AS rc LEFT JOIN Admin_User AS us ON (rc.SalePersonID = us.UserID)
LEFT JOIN EData_StakeHolder AS st ON (rc.StakeHolderID = st.StakeHolderID)
LEFT JOIN EData_TableList As tl ON (rc.Reference = CONCAT('TABLE#', tl.TableID))
WHERE rc.ReceiptID = @ReceiptID AND
rc.SubscriptionID = @SubscriptionID
FOR JSON AUTO);
IF ((@ReceiptType = 'SALE') OR (@ReceiptType = 'STOCKOUT'))
BEGIN
SET @ReceiptDetail = (SELECT so.receiptid as receiptid, so.serial as serial, IIF(so.UnitMode = 'MAJOR' AND so.UnitRelationShip <> 0,so.Quantity / so.UnitRelationship, so.Quantity) AS quantity,
so.unitprice as unitprice, so.discount as discount, so.taxamount as taxamount, item.itemname as itemname, IIF(so.UnitMode = 'MAJOR', item.MajorUnitName, item.MinorUnitName) AS unitname
FROM EData_StockOutgoing AS so, EData_ItemCatalogue AS item
WHERE item.ItemID = so.ItemID AND
so.SubscriptionID = @SubscriptionID AND
so.ReceiptID = @ReceiptID AND
so.Status = 'ACTIVE' FOR JSON AUTO);
END
ELSE IF ((@ReceiptType = 'PURCHASE') OR (@ReceiptType = 'STOCKIN'))
BEGIN
SET @ReceiptDetail = (SELECT si.receiptid as receiptid, si.serial as serial, IIF(si.UnitMode = 'MAJOR' AND si.UnitRelationShip <> 0,si.Quantity / si.UnitRelationship, si.Quantity) AS quantity,
si.unitprice as unitprice, si.discount as discount, si.taxamount as taxamount, item.itemname as itemname, IIF(si.UnitMode = 'MAJOR', item.MajorUnitName, item.MinorUnitName) AS unitname
FROM EData_StockIncoming AS si, EData_ItemCatalogue AS item
WHERE item.ItemID = si.ItemID AND
si.SubscriptionID = @SubscriptionID AND
si.ReceiptID = @ReceiptID AND
si.Status = 'ACTIVE' FOR JSON AUTO);
END;
PRINT 'MASTER : ' + @ReceiptMaster;
PRINT 'DETAIL : ' + @ReceiptDetail;
IF ((@ReceiptMaster IS NULL) OR (@ReceiptDetail IS NULL))
BEGIN
RAISERROR('[#ERROR];err_retrieveinfo;Unable to retrieve information.',16,1);
RETURN;
END;
SET @ReceiptInfo = CONCAT('{"receiptmaster" : ',@ReceiptMaster,',"receiptdetail" : ',@ReceiptDetail,'}');
SELECT @ReceiptInfo;
END
```
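The procedure stitches two `FOR JSON AUTO` fragments into a single envelope by string concatenation. The resulting structure, sketched in Python with hypothetical row values:

```python
import json

# Hypothetical rows standing in for the two FOR JSON AUTO result sets:
receipt_master = [{"receiptid": 42, "receipttype": "SALE", "receiptno": 7}]
receipt_detail = [{"receiptid": 42, "serial": 1, "quantity": 2,
                   "unitprice": 5.0}]

# Same envelope ES_POSGetPrintingInfo builds with CONCAT:
receipt_info = json.dumps({"receiptmaster": receipt_master,
                           "receiptdetail": receipt_detail})
print(receipt_info)
```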
__*ES_POSSetStaticItemCost*__ \
This stored procedure sets the cost of a static item (i.e. an item that can be sold without a purchase).
```SQL
CREATE PROCEDURE [dbo].[ES_POSSetStaticItemCost](
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@Date DateTime,
@ItemID int,
@Cost Decimal(18,2))
AS
BEGIN
SET NOCOUNT ON;
DECLARE @ReceiptID int;
DECLARE @IncomingID int;
DECLARE @ReceiptNo int;
SELECT @ReceiptID = (SELECT ReceiptID FROM EData_Receipt Where SubscriptionID = @SubscriptionID And Status = 'ACTIVE' And ReceiptType = 'COST' And Convert(Date,ReceiptDate) = Convert(Date, @Date));
IF @ReceiptID IS NULL
BEGIN
SELECT @ReceiptNo = (SELECT ISNULL(MAX(RECEIPTNO),0) + 1 From EData_Receipt Where SubscriptionID = @SubscriptionID And ReceiptType = 'COST');
INSERT INTO EData_Receipt(SubscriptionID, UserID, LogInID, AccessInfo, ReceiptType, ReceiptNo, Status, ReceiptDate, TaxInclusive, StakeHolderID, Reference, Discount,
PaymentCash, PaymentBank, PaymentCredit, PaymentContra, SalePersonID, CreationtTime, LastModified)
VALUES(@SubscriptionID, @UserID, @LoginID, @AccessInfo, 'COST', @ReceiptNo, 'ACTIVE', Convert(Date,@Date), 1, 0, '', 0,
0, 0, 0, 0, 0, GetDate(), GetDate());
SELECT @ReceiptID = (SELECT Top 1 ReceiptID FROM EData_Receipt Where SubscriptionID = @SubscriptionID And Status = 'ACTIVE' And ReceiptType = 'COST' And Convert(Date,ReceiptDate) = Convert(Date, @Date));
END;
SELECT @IncomingID = (SELECT Top 1 IncomingID From EData_StockIncoming Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID And ItemID = @ItemID);
IF @IncomingID IS NULL
BEGIN
INSERT INTO EData_StockIncoming(SubscriptionID, UserID, LogInID, AccessInfo, ReceiptID, Status, Reference,
Serial, ItemID, Quantity, UnitMode, UnitRelationship, UnitPrice, Discount, TaxAmount, Remark, CreationTime, LastModified)
VALUES(@SubscriptionID, @UserID, @LoginID, @AccessInfo, @ReceiptID, 'ACTIVE','',
0, @ItemID, 1, 'MAJOR', 0, @Cost, 0, 0, '', GetDate(), GetDate());
SELECT @IncomingID = (SELECT Top 1 IncomingID From EData_StockIncoming Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID And ItemID = @ItemID);
END
ELSE
BEGIN
INSERT INTO Elog_StockIncoming(Log_Time, Log_UserID, Log_LogInID, Log_AccessInfo, SubscriptionID, UserID, LogInID,
AccessInfo, IncomingID, ReceiptID, Status, Reference, Serial,
ItemID, Quantity, UnitMode, UnitRelationship, UnitPrice, Discount, TaxAmount, Remark,
CreationTime, LastModified)
SELECT GetDate(), @UserID, @LoginID, @AccessInfo,
EData_StockIncoming.* From EData_StockIncoming Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID And IncomingID = @IncomingID;
UPDATE EData_StockIncoming
SET
Status = 'ACTIVE',
UnitPrice = @Cost
WHERE
SubscriptionID = @SubscriptionID And
ReceiptID = @ReceiptID And
IncomingID = @IncomingID;
END
RETURN @IncomingID;
END
```
__*ES_POSSwapTable*__ \
This stored procedure swaps tables in the restaurant system.
```SQL
CREATE PROCEDURE [dbo].[ES_POSSwapTable](
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@SourceReceiptID int,
@SourceReference varchar(100),
@SourceLastModified DateTime,
@TargetReceiptID int,
@TargetReference varchar(100),
@TargetLastModified DateTime,
@ReturnValue int Output)
AS
BEGIN
SET NOCOUNT ON;
IF NOT EXISTS(SELECT * From EData_Receipt where SubscriptionID = @SubscriptionID And ReceiptID = @SourceReceiptID And Reference = @SourceReference And Status = 'PENDING' And LastModified = @SourceLastModified)
BEGIN
RAISERROR('[#ERROR];err_datachanged;Data was changed by some other user.',16,1);
RETURN;
END
IF (@TargetReceiptID = -1)
BEGIN
IF EXISTS(SELECT * From EData_Receipt where SubscriptionID = @SubscriptionID And Reference = @TargetReference And Status = 'PENDING' )
BEGIN
RAISERROR('[#ERROR];err_datachanged;Data was changed by some other user.',16,1);
RETURN;
END
END
ELSE
BEGIN
IF NOT EXISTS(SELECT * From EData_Receipt where SubscriptionID = @SubscriptionID And ReceiptID = @TargetReceiptID And Reference = @TargetReference And Status = 'PENDING' And LastModified = @TargetLastModified)
BEGIN
RAISERROR('[#ERROR];err_datachanged;Data was changed by some other user.',16,1);
RETURN;
END
END
BEGIN TRY
INSERT INTO ELog_Receipt Select GetDate(), @UserID, @LoginID, @AccessInfo, 'REFERENCE_CHANGED', EData_Receipt.* FROM EData_Receipt Where SubscriptionID = @SubscriptionID And ReceiptID = @SourceReceiptID;
UPDATE EData_Receipt
SET
UserID = @UserID,
LogInID = @LoginID,
AccessInfo = @AccessInfo,
Reference = @TargetReference,
LastModified = GetDate()
WHERE
SubscriptionID = @SubscriptionID And
ReceiptID = @SourceReceiptID And
Reference = @SourceReference And
Status = 'PENDING';
IF (@TargetReceiptID <> -1)
BEGIN
INSERT INTO ELog_Receipt Select GetDate(), @UserID, @LoginID, @AccessInfo, 'REFERENCE_CHANGED', EData_Receipt.* FROM EData_Receipt Where SubscriptionID = @SubscriptionID And ReceiptID = @TargetReceiptID;
UPDATE EData_Receipt
SET
UserID = @UserID,
LogInID = @LoginID,
AccessInfo = @AccessInfo,
Reference = @SourceReference,
LastModified = GetDate()
WHERE
SubscriptionID = @SubscriptionID And
ReceiptID = @TargetReceiptID And
Reference = @TargetReference And
Status = 'PENDING';
END
Set @ReturnValue = 1;
END TRY
BEGIN CATCH
Set @ReturnValue = 0;
THROW
END CATCH
END
```
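Both swap branches rely on optimistic concurrency: the caller passes the `LastModified` value it last read, and the update proceeds only if the stored value still matches. The pattern, sketched in Python with an in-memory stand-in for the `EData_Receipt` row (names are illustrative):

```python
class StaleDataError(Exception):
    """Raised when another user changed the row since it was read."""

def swap_reference(receipt: dict, expected_last_modified, new_reference, now):
    # Mirrors the EData_Receipt guard in ES_POSSwapTable.
    if receipt["last_modified"] != expected_last_modified:
        raise StaleDataError("Data was changed by some other user.")
    receipt["reference"] = new_reference
    receipt["last_modified"] = now
    return receipt

r = {"reference": "TABLE#1", "last_modified": 100}
swap_reference(r, 100, "TABLE#2", now=101)
print(r)   # {'reference': 'TABLE#2', 'last_modified': 101}
```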
__*ES_POSUpdateItemRecord*__ \
This stored procedure inserts or updates an item record.
```SQL
CREATE PROCEDURE [dbo].[ES_POSUpdateItemRecord](
@SubscriptionID varchar(200),
@EserviceID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@Language varchar(50),
@ItemID int,
@Status varchar(50),
@EntryType varchar(50),
@EntryAttribute varchar(50),
@Category int,
@SortOrder int,
@ItemCode varchar(50),
@ItemName nvarchar(500),
@Description nvarchar(2000),
@UnitName nvarchar(50),
@MajorUnitName nvarchar(50),
@MinorUnitName nvarchar(50),
@AllowDecimalQuantity tinyint,
@UnitRelationship int,
@MajorPrice decimal(18,2),
@MinorPrice decimal(18,2),
@MajorMSP decimal(18,2),
@MinorMSP decimal(18,2),
@DownLinkID int,
@DownLinkRelationship int,
@Remark nvarchar(2000),
@Tag nvarchar(2000),
@Cost Decimal(18,2),
@ReturnValue int Output)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @InsertMode tinyint;
DECLARE @ActiveCount int;
DECLARE @PrevUnitRelationship int;
DECLARE @PrevOutgoingTransactionCount int;
DECLARE @PrevIncomingTransactionCount int;
DECLARE @SystemConfig nvarchar(4000);
DECLARE @SystemItemLimit int;
DECLARE @OutputTable Table (ItemID int);
IF (@ItemID = -1) Set @InsertMode = 1;
ELSE Set @InsertMode = 0;
SET @SystemConfig = (Select Top 1 SettingValue From Admin_Setting Where SettingKey = '_SYSTEMCONFIG' And (SettingGroup = @SubscriptionID Or SettingGroup = '*' Or SettingGroup = LEFT(@EserviceID,4) Or SettingGroup = @EserviceID) Order By Len(SettingGroup) desc);
SET @ActiveCount = (SELECT COUNT(*) FROM EData_ItemCatalogue Where SubscriptionID = @SubscriptionID And Status = 'ACTIVE' And EntryType in ('ITEM','SERVICE'));
SET @SystemItemLimit = (SELECT ISNULL(JSON_VALUE(@SystemConfig,'$.itemlimit'),3));
IF ((@ActiveCount >= @SystemItemLimit) AND (@InsertMode = 1))
BEGIN
RAISERROR('[#ERROR];err_exceeditemlimit;Cannot exceed Item Limit of $ITEMLIMIT',16,1);
RETURN;
END
IF ((@EntryType In ('ITEM','SERVICE')) And (@ItemCode <> '') And EXISTS(SELECT ItemCode From EData_ItemCatalogue Where SubscriptionID = @SubscriptionID and Status = 'ACTIVE' and ItemCode = @ItemCode and ItemID <> @ItemID))
BEGIN
		RAISERROR('[#ERROR];err_itemcodealreadyexist;Item Code Already Exists',16,1);
RETURN;
END
IF EXISTS(SELECT ItemName From EData_ItemCatalogue Where SubscriptionID = @SubscriptionID and Category = @Category and Status = 'ACTIVE' and ItemID <> @ItemID and LTRIM(RTRIM(ItemName)) = LTRIM(RTRIM(@ItemName)))
BEGIN
		RAISERROR('[#ERROR];err_itemnamealreadyexist;Item Name or Category Name Already Exists in the Same Category.',16,1);
RETURN;
END
BEGIN TRY
BEGIN TRANSACTION
IF (@InsertMode = 1)
BEGIN
Set @ReturnValue = 0;
IF ((@EntryType = 'ITEM') and (LEN(LTRIM(RTRIM(@MinorUnitName))) > 0))
BEGIN
IF NOT EXISTS(SELECT UnitID From EData_Unit Where Status = 'ACTIVE' and UnitRelationship = @UnitRelationship and (LTRIM(RTRIM(MajorUnitName)) = LTRIM(RTRIM(@MajorUnitName))) and (LTRIM(RTRIM(MinorUnitName)) = LTRIM(RTRIM(@MinorUnitName))))
BEGIN
Set @UnitName = Concat(@MajorUnitName,'/',@MinorUnitName,' (',@UnitRelationship,')');
INSERT INTO EData_Unit(SubscriptionID, UserID, LoginID, AccessInfo, Status, Language, UnitType, UnitOrder,
UnitName, MajorUnitName, MinorUnitName, UnitRelationship, AdjustableRelationship)
Values (@SubscriptionID, @UserID, @LoginID, @AccessInfo, 'ACTIVE',@Language, 'CUSTOM',0,
@UnitName, @MajorUnitName, @MinorUnitName, @UnitRelationship, 0);
Set @ReturnValue = @ReturnValue + 1;
END
END
INSERT INTO EData_ItemCatalogue(SubscriptionID, UserID, LogInID, AccessInfo, Status, EntryType, EntryAttribute, Category, SortOrder,
ItemCode, ItemName, Description, UnitName, MajorUnitName, MinorUnitName, AllowDecimalQuantity,
UnitRelationship, MajorPrice, MinorPrice, MajorMSP, MinorMSP, DownLinkID, DownLinkRelationship, Remark, Tag)
OUTPUT Inserted.ItemID Into @OutputTable
VALUES
(@SubscriptionID, @UserID, @LoginID, @AccessInfo, @Status, @EntryType, @EntryAttribute, @Category, @SortOrder,
@ItemCode, @ItemName, @Description, @UnitName, @MajorUnitName, @MinorUnitName, @AllowDecimalQuantity,
@UnitRelationship, @MajorPrice, @MinorPrice, @MajorMSP, @MinorMSP, @DownLinkID, @DownLinkRelationship, @Remark, @Tag);
SELECT @ItemID = ItemID From @OutputTable;
Set @ReturnValue = @ReturnValue + 1;
END
ELSE
BEGIN
SET @PrevUnitRelationship = (SELECT UnitRelationship FROM EData_ItemCatalogue WHERE SubscriptionID = @SubscriptionID And ItemID = @ItemID);
SET @PrevOutgoingTransactionCount = ISNULL((SELECT Count(*) FROM EData_Receipt as rc, EData_StockOutgoing as so WHERE rc.SubscriptionID = @SubscriptionID And so.SubscriptionID = @SubscriptionID And rc.ReceiptID = so.ReceiptID and rc.Status = 'ACTIVE' and so.ItemID = @ItemID),0);
SET @PrevIncomingTransactionCount = ISNULL((SELECT Count(*) FROM EData_Receipt as rc, EData_StockIncoming as si WHERE rc.SubscriptionID = @SubscriptionID And si.SubscriptionID = @SubscriptionID And rc.ReceiptID = si.ReceiptID and rc.Status = 'ACTIVE' and si.ItemID = @ItemID),0);
IF ((@PrevUnitRelationship <> @UnitRelationship) And (@PrevIncomingTransactionCount + @PrevOutgoingTransactionCount > 0))
BEGIN
				RAISERROR('[#ERROR];err_unitrelationshipcannotchange;Unable to change Unit Relationship; there are active transactions with this item.',16,1);
RETURN;
END;
INSERT INTO ELog_ItemCatalogue Select GetDate(), @UserID, @LoginID, @AccessInfo, 'MODIFY_RECORD', EData_ItemCatalogue.* FROM EData_ItemCatalogue Where SubscriptionID = @SubscriptionID And Itemid = @ItemID;
UPDATE EData_ItemCatalogue
SET
UserID = @UserID,
LogInID = @LogInID,
AccessInfo = @AccessInfo,
Status = @Status,
EntryType = @EntryType,
EntryAttribute = @EntryAttribute,
Category = @Category,
SortOrder = @SortOrder,
ItemCode = @ItemCode,
ItemName = @ItemName,
Description = @Description,
UnitName = @UnitName,
MajorUnitName = @MajorUnitName,
MinorUnitName = @MinorUnitName,
AllowDecimalQuantity = @AllowDecimalQuantity,
UnitRelationship = @UnitRelationship,
MajorPrice = @MajorPrice,
MinorPrice = @MinorPrice,
MajorMSP = @MajorMSP,
MinorMSP = @MinorMSP,
DownLinkID = @DownLinkID,
DownLinkRelationship = @DownLinkRelationship,
Remark = @Remark,
Tag = @Tag
WHERE
SubscriptionID = @SubscriptionID And
ItemID = @ItemID;
SET @ReturnValue = 1;
END;
		IF (@EntryType = 'SERVICE') Or (@EntryType = 'ITEM' And @EntryAttribute = 'STATIC')
BEGIN
EXEC ES_POSSetStaticItemCost @SubscriptionID = @SubscriptionID, @UserID = @UserID, @LoginId = @LoginID, @AccessInfo = @AccessInfo, @Date = '1900-01-01', @Itemid = @ItemID, @Cost = @Cost
END;
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
Set @ReturnValue = 0;
THROW
END CATCH
END
```
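A hypothetical call is sketched below; every parameter value (subscription, login, item details) is an illustrative assumption, not real data. Passing `@ItemID = -1` puts the procedure in insert mode; any existing `ItemID` updates that row instead.

```SQL
DECLARE @ReturnValue int;
EXEC dbo.ES_POSUpdateItemRecord
    @SubscriptionID = 'SUB001', @EserviceID = 'POS0001', @UserID = 1,
    @LoginID = 'admin', @AccessInfo = '127.0.0.1', @Language = 'en',
    @ItemID = -1, @Status = 'ACTIVE', @EntryType = 'ITEM', @EntryAttribute = '',
    @Category = 0, @SortOrder = 0, @ItemCode = 'A001', @ItemName = N'Coffee',
    @Description = N'', @UnitName = N'', @MajorUnitName = N'Box',
    @MinorUnitName = N'Pcs', @AllowDecimalQuantity = 0, @UnitRelationship = 12,
    @MajorPrice = 24.00, @MinorPrice = 2.50, @MajorMSP = 20.00, @MinorMSP = 2.00,
    @DownLinkID = 0, @DownLinkRelationship = 0, @Remark = N'', @Tag = N'',
    @Cost = 1.50, @ReturnValue = @ReturnValue OUTPUT;
SELECT @ReturnValue AS ReturnValue;  -- positive on success
```

Because `@MinorUnitName` is non-empty here, the insert path also registers the derived `Box/Pcs (12)` unit in `EData_Unit` if no matching active unit exists yet.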
__*ES_POSUpdateReceiptRecord*__ \
This stored procedure issues a receipt by inserting or updating a receipt record.
```SQL
CREATE PROCEDURE [dbo].[ES_POSUpdateReceiptRecord](
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@ReceiptID int,
@ReceiptType varchar(50),
@Status varchar(50),
@ReceiptDate varchar(100),
@TaxPercent decimal(18,2),
@TaxInclusive varchar(50),
@CodeNo nvarchar(50),
@Name nvarchar(500),
@Reference nvarchar(100),
@Discount decimal(18,2),
@PaymentCash decimal(18,2),
@PaymentBank decimal(18,2),
@PaymentCredit decimal(18,2),
@PaymentContra decimal(18,2),
@Change decimal(18,2),
@AllowShortSell tinyint,
@ReceiptPrintMode tinyint,
@DataLastModified datetime,
@TransactionData nvarchar(max))
AS
BEGIN
SET NOCOUNT ON;
DECLARE @InsertMode tinyint;
DECLARE @StakeHolderIDTable Table (StakeHolderID int);
DECLARE @ReceiptIDTable Table (ReceiptID int);
DECLARE @StakeHolderID int;
DECLARE @ReceiptNo int;
DECLARE @JSONTransactionData nvarchar(max);
DECLARE @TaxInclusiveBoolean tinyint;
DECLARE @ShortSellInfo varchar(200);
DECLARE @ShortSellError varchar(400);
DECLARE @ReceiptPrintingInfo nvarchar(max);
IF (@ReceiptID = -1) Set @InsertMode = 1;
ELSE SET @InsertMode = 0;
IF (UPPER(@TaxInclusive) = 'TRUE' ) Set @TaxInclusiveBoolean = 1;
ELSE SET @TaxInclusiveBoolean = 0;
SELECT @ReceiptType = UPPER(@ReceiptType);
SELECT @JSONTransactionData = dbo.Base64Decode(@TransactionData);
IF (@InsertMode = 0)
BEGIN
IF NOT EXISTS(SELECT * From EData_Receipt where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID And LastModified = @DataLastModified)
BEGIN
RAISERROR('[#ERROR];err_datachanged;Data was changed by some other user.',16,1);
RETURN;
END
END
ELSE
BEGIN
IF (@Status = 'PENDING')
BEGIN
IF EXISTS(SELECT * From EData_Receipt where SubscriptionID = @SubscriptionID And Reference = @Reference And Status = 'PENDING')
BEGIN
RAISERROR('[#ERROR];err_datachanged;Data was changed by some other user.',16,1);
RETURN;
END;
END;
END;
IF (((@ReceiptType = 'SALE') Or (@ReceiptType='STOCKOUT')) And (@AllowShortSell = 0))
BEGIN
DECLARE @StockBalance TABLE (ItemID int NOT NULL primary key,Quantity decimal(18,2));
DECLARE @ShortSellItemID int;
DECLARE @ShortSellQuantity decimal(18,8);
With StockBalance AS
(SELECT
ItemID, Sum(Quantity) AS Quantity
FROM
EData_Receipt AS rc, EData_StockIncoming AS si
WHERE
rc.ReceiptID = si.ReceiptID AND
rc.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) <= @ReceiptDate AND
rc.Status = 'ACTIVE'
GROUP BY
ItemID
UNION ALL
SELECT
ItemID, -Sum(Quantity) AS Quantity
FROM
EData_Receipt as rc, EData_StockOutgoing AS so
WHERE
rc.ReceiptID = so.ReceiptID AND
rc.SubscriptionID = @SubscriptionID AND
CONVERT(Date, rc.ReceiptDate) <= @ReceiptDate AND
(rc.Status = 'ACTIVE' Or
rc.Status = 'PENDING' )
GROUP BY
ItemID)
INSERT INTO
@StockBalance
SELECT
ItemID, Sum(Quantity) AS Quantity
FROM
StockBalance
GROUP BY
ItemID;
SELECT
TOP (1) @ShortSellItemID = td.ItemID, @ShortSellQuantity = ISNULL(IIF(@InsertMode = 1,sb.Quantity,sb.Quantity + td.Quantity),0)
FROM
OPENJSON(@JSONTransactionData) With (serial int, itemid int,entrytype varchar(50), entryattribute varchar(50),unitmode nvarchar(50), unitrelationship int, quantity decimal(18,8), unitprice decimal(18,2), discount decimal(18,2), taxamount decimal(18,2)) As td
LEFT JOIN
@StockBalance as sb
ON
(td.ItemID = sb.ItemID)
LEFT JOIN
EData_StockOutgoing as so
ON
(so.ItemID = td.ItemID And so.ReceiptID = @ReceiptID)
WHERE
td.entrytype = 'ITEM' And UPPER(td.EntryAttribute) <> 'STATIC' And
(((@InsertMode = 1) And (td.Quantity > ISNULL(sb.Quantity,0))) Or
((@InsertMode = 0) And (td.Quantity - ISNULL(so.Quantity,0) > ISNULL(sb.Quantity,0))))
ORDER BY
td.Serial;
IF (@ShortSellQuantity IS NOT NULL)
BEGIN
IF (@ShortSellQuantity > 0)
BEGIN
SET @ShortSellInfo = '{"itemid":"' + Cast(@ShortSellItemID as varchar(10)) + '","quantity":"' + Cast(@ShortSellQuantity as varchar(18))+ '"}';
SET @ShortSellError = '[#ERROR];err_shortsellitem;$ITEMNAME is only available $QUANTITYTEXT.;' + @ShortSellInfo;
RAISERROR(@ShortSellError,16,1);
RETURN;
END
ELSE
BEGIN
SET @ShortSellInfo = '{"itemid":"' + Cast(@ShortSellItemID as varchar(10)) + '","quantity":"' + Cast(@ShortSellQuantity as varchar(18))+ '"}';
SET @ShortSellError = '[#ERROR];err_outofstock;$ITEMNAME is out of stock.;' + @ShortSellInfo;
RAISERROR(@ShortSellError,16,1);
RETURN;
END
END
END
BEGIN TRY
BEGIN TRANSACTION
IF ((LTRIM(RTRIM(@CodeNo)) = '') And (LTRIM(RTRIM(@Name)) = ''))
BEGIN
Set @StakeHolderID = 0;
END
ELSE
BEGIN
			SET @StakeHolderID = ISNULL((SELECT Top 1 StakeHolderID From EData_StakeHolder Where SubscriptionID = @SubscriptionID And CodeNo = @CodeNo And [Name] = @Name Order By StakeHolderID Desc), 0);
IF @StakeHolderID = 0
BEGIN
INSERT INTO EData_StakeHolder(SubscriptionID, UserID, LoginID, AccessInfo, CodeNo, [Name]) OUTPUT INSERTED.StakeHolderID INTO @StakeHolderIDTable
Values (@SubscriptionID, @UserID, @LoginID, @AccessInfo, @CodeNo, @Name);
SELECT @StakeHolderID = StakeHolderID from @StakeHolderIDTable;
END
END
IF (@InsertMode = 1)
BEGIN
SELECT @ReceiptNo = (SELECT ISNULL(MAX(RECEIPTNO),0) + 1 From EData_Receipt Where SubscriptionID = @SubscriptionID And ReceiptType = @ReceiptType);
INSERT INTO EData_Receipt (SubscriptionID, UserID, LogInID, AccessInfo, ReceiptType, ReceiptNo, Status, ReceiptDate, TaxPercent, TaxInclusive, StakeHolderID, Reference, Discount,
PaymentCash, PaymentBank, PaymentCredit, PaymentContra, Change, SalePersonID) OUTPUT INSERTED.ReceiptID INTO @ReceiptIDTable
Values
(@SubscriptionID, @UserID, @LoginID, @AccessInfo, UPPER(@ReceiptType), @ReceiptNo, @Status, @ReceiptDate, @TaxPercent, @TaxInclusiveBoolean, @StakeHolderID, @Reference, @Discount,
@PaymentCash, @PaymentBank, @PaymentCredit, @PaymentContra, @Change, @UserID);
SELECT @JSONTransactionData = dbo.Base64Decode(@TransactionData);
SELECT @ReceiptID = ReceiptID from @ReceiptIDTable;
IF ((@ReceiptType = 'SALE') Or (@ReceiptType = 'STOCKOUT'))
BEGIN
INSERT INTO EData_StockOutgoing(SubscriptionID, UserID, LogInID, Accessinfo, ReceiptID, Status, Serial, ItemID, Quantity, UnitMode, UnitRelationship, UnitPrice, Discount, TaxAmount)
SELECT @SubscriptionID, @UserID, @LoginID, @AccessInfo, @ReceiptID, 'ACTIVE', Serial, ItemID, Quantity, UPPER(UnitMode), UnitRelationship, UnitPrice, Discount, TaxAmount From
OPENJSON(@JSONTransactionData) With (serial int, itemid int, unitmode nvarchar(50), unitrelationship int, quantity decimal(18,8), unitprice decimal(18,2), discount decimal(18,2), taxamount decimal(18,2));
END
ELSE
BEGIN
INSERT INTO EData_StockIncoming(SubscriptionID, UserID, LogInID, Accessinfo, ReceiptID, Status, Serial, ItemID, Quantity, UnitMode, UnitRelationship, UnitPrice, Discount, TaxAmount)
SELECT @SubscriptionID, @UserID, @LoginID, @AccessInfo, @ReceiptID, 'ACTIVE', Serial, ItemID, Quantity, UPPER(UnitMode), UnitRelationship, UnitPrice, Discount, TaxAmount From
OPENJSON(@JSONTransactionData) With (serial int, itemid int, unitmode nvarchar(50), unitrelationship int, quantity decimal(18,8), unitprice decimal(18,2), discount decimal(18,2), taxamount decimal(18,2));
END
END
ELSE
BEGIN
INSERT INTO ELog_Receipt Select GetDate(), @UserID, @LoginID, @AccessInfo, 'MODIFY_RECORD', EData_Receipt.* FROM EData_Receipt Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID;
IF ((UPPER(@ReceiptType) = 'SALE') Or (UPPER(@ReceiptType) = 'STOCKOUT'))
BEGIN
INSERT INTO Elog_StockOutgoing(Log_Time, Log_UserID, Log_LogInID, Log_AccessInfo, SubscriptionID, UserID, LogInID,
AccessInfo, OutgoingID, ReceiptID, Status, Reference, Serial,
ItemID, Quantity, UnitMode, UnitRelationship, UnitPrice, Discount, TaxAmount, Remark,
CreationTime, LastModified)
SELECT GetDate(), @UserID, @LoginID, @AccessInfo,
EData_StockOutgoing.* From EData_StockOutgoing Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID;
DELETE FROM EData_StockOutgoing Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID;
INSERT INTO EData_StockOutgoing(SubscriptionID, UserID, LogInID, Accessinfo, ReceiptID, Status, Serial, ItemID, Quantity, UnitMode, UnitRelationship, UnitPrice, Discount, TaxAmount)
SELECT @SubscriptionID, @UserID, @LoginID, @AccessInfo, @ReceiptID, 'ACTIVE', Serial, ItemID, Quantity, UPPER(UnitMode), UnitRelationship, UnitPrice, Discount, TaxAmount From
OPENJSON(@JSONTransactionData) With (serial int, itemid int, unitmode nvarchar(50), unitrelationship int, quantity decimal(18,8), unitprice decimal(18,2), discount decimal(18,2), taxamount decimal(18,2));
END
ELSE
BEGIN
INSERT INTO Elog_StockIncoming(Log_Time, Log_UserID, Log_LogInID, Log_AccessInfo, SubscriptionID, UserID, LogInID,
AccessInfo, IncomingID, ReceiptID, Status, Reference, Serial,
ItemID, Quantity, UnitMode, UnitRelationship, UnitPrice, Discount, TaxAmount, Remark,
CreationTime, LastModified)
SELECT GetDate(), @UserID, @LoginID, @AccessInfo,
EData_StockIncoming.* From EData_StockIncoming Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID;
DELETE FROM EData_StockIncoming Where SubscriptionID = @SubscriptionID And ReceiptID = @ReceiptID;
INSERT INTO EData_StockIncoming(SubscriptionID, UserID, LogInID, Accessinfo, ReceiptID, Status, Serial, ItemID, Quantity, UnitMode, UnitRelationship, UnitPrice, Discount, TaxAmount)
SELECT @SubscriptionID, @UserID, @LoginID, @AccessInfo, @ReceiptID, 'ACTIVE', Serial, ItemID, Quantity, UPPER(UnitMode), UnitRelationship, UnitPrice, Discount, TaxAmount From
OPENJSON(@JSONTransactionData) With (serial int, itemid int, unitmode nvarchar(50), unitrelationship int, quantity decimal(18,8), unitprice decimal(18,2), discount decimal(18,2), taxamount decimal(18,2));
END
SELECT @ReceiptNo = (SELECT ReceiptNo From EData_Receipt Where SubscriptionID = @SubscriptionID And ReceiptType = @ReceiptType And ReceiptID = @ReceiptID);
UPDATE EData_Receipt
SET
UserID = @UserID,
LogInID = @LogInID,
AccessInfo = @AccessInfo,
ReceiptType = @ReceiptType,
Status = @Status,
ReceiptDate = @ReceiptDate,
TaxPercent = @TaxPercent,
TaxInclusive = @TaxInclusiveBoolean,
StakeHolderID = @StakeHolderID,
Reference = @Reference,
Discount = @Discount,
PaymentCash = @PaymentCash,
PaymentBank = @PaymentBank,
PaymentCredit = @PaymentCredit,
PaymentContra = @PaymentContra,
Change = @Change,
LastModified = GETDATE()
Where
SubscriptionID = @SubscriptionID And
ReceiptID = @ReceiptID;
END
COMMIT TRANSACTION
IF @ReceiptPrintMode = 1
BEGIN
EXEC ES_POSGetPrintingInfo @SubscriptionID, @ReceiptType,@ReceiptID, @ReceiptPrintingInfo OUTPUT
END
ELSE
BEGIN
SELECT @ReceiptNo;
END
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
SELECT 0;
THROW
END CATCH
END
```
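A hypothetical call issuing a new `SALE` receipt is sketched below (`@ReceiptID = -1` requests insert mode). `@TransactionData` must be a Base64-encoded JSON array matching the `OPENJSON` schema used above (`serial`, `itemid`, `entrytype`, `entryattribute`, `unitmode`, `unitrelationship`, `quantity`, `unitprice`, `discount`, `taxamount`); the encoded value shown is a truncated placeholder, and all other values are illustrative only.

```SQL
EXEC dbo.ES_POSUpdateReceiptRecord
    @SubscriptionID = 'SUB001', @UserID = 1, @LoginID = 'admin',
    @AccessInfo = '127.0.0.1', @ReceiptID = -1, @ReceiptType = 'SALE',
    @Status = 'ACTIVE', @ReceiptDate = '2023-01-31', @TaxPercent = 5.00,
    @TaxInclusive = 'FALSE', @CodeNo = N'', @Name = N'',
    @Reference = N'', @Discount = 0, @PaymentCash = 100.00, @PaymentBank = 0,
    @PaymentCredit = 0, @PaymentContra = 0, @Change = 0, @AllowShortSell = 0,
    @ReceiptPrintMode = 0, @DataLastModified = '1900-01-01',
    @TransactionData = 'W3sic2VyaWFsIjoxLCAuLi59XQ==';  -- Base64 placeholder for '[{"serial":1, ...}]'
```

With `@ReceiptPrintMode = 0` the procedure returns the generated receipt number as a result set; with `1` it instead fetches printing information through `ES_POSGetPrintingInfo`.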
__*ES_POSUpdateSetting*__ \
This stored procedure inserts or updates settings.
```SQL
CREATE PROCEDURE [dbo].[ES_POSUpdateSetting](
@EServiceID varchar(100),
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@SettingData nvarchar(max))
AS
BEGIN
SET NOCOUNT ON;
DECLARE @SettingCursor CURSOR;
DECLARE @Key VARCHAR(200);
DECLARE @Value NVARCHAR(max);
DECLARE @CurrentValue NVARCHAR(max);
DECLARE @DefaultValue NVARCHAR(max);
DECLARE @JSONSettingData NVARCHAR(max);
SELECT @JSONSettingData = dbo.UTF8Base64Decode(@SettingData);
SET @SettingCursor = CURSOR FOR SELECT [key], [value] FROM OPENJSON(@JSONSettingData);
OPEN @SettingCursor
FETCH NEXT FROM @SettingCursor INTO @Key, @Value;
WHILE @@FETCH_STATUS = 0
BEGIN
IF (@Key = '[appmanifest]')
BEGIN
INSERT INTO Log_AppManifest(Log_Time, Log_UserId, Log_LogInID, Log_AccessInfo, Log_Action,
SubscriptionID, AppName, ShortName, BackgroundColor, Description,
FrontEndIcon, BackEndIcon, OrganizationID, OrganizationName, CreationTime, LastModified)
SELECT GETDATE(), @UserID, @LogInID, @AccessInfo, 'MODIFY', Data_AppManifest.*
FROM Data_AppManifest
WHERE SubscriptionID = @SubscriptionID;
WITH JSONSettingData AS
(
SELECT * FROM
OPENJSON(@Value) WITH (
subscriptionid varchar(200),
appname nvarchar(200),
shortname nvarchar(100),
backendicon nvarchar(500)
)
)
UPDATE ManifestTbl
SET
ManifestTbl.appname = JSONSettingData.appname,
ManifestTbl.shortname = JSONSettingData.shortname,
ManifestTbl.backendicon = JSONSettingData.backendicon
FROM
Data_AppManifest AS ManifestTbl
INNER JOIN
JSONSettingData ON JSONSettingData.subscriptionid = ManifestTbl.subscriptionid;
END
ELSE
BEGIN
SET @CurrentValue = (SELECT TOP(1) SettingValue from Admin_Setting where SettingKey = @Key and (SettingGroup = '*' Or SettingGroup = LEFT(@EServiceID,4) Or SettingGroup = LEFT(@EServiceID,5) Or SettingGroup = @EServiceID Or SettingGroup = @SubscriptionID) Order By LEN(SettingGroup) Desc);
SET @DefaultValue = (SELECT TOP(1) SettingValue from Admin_Setting where SettingKey = @Key and (SettingGroup = '*' Or SettingGroup = LEFT(@EServiceID,4) Or SettingGroup = LEFT(@EServiceID,5) Or SettingGroup = @EServiceID) Order By LEN(SettingGroup) Desc);
IF ((@CurrentValue IS NULL) Or (@Value <> @CurrentValue))
BEGIN
IF EXISTS(SELECT * FROM Admin_Setting WHERE SettingKey = @Key and SettingGroup = @SubscriptionID)
BEGIN
IF ((@DefaultValue IS NULL) OR (@DefaultValue <> @Value))
BEGIN
INSERT INTO Log_Setting(Log_Time, Log_UserId, Log_LogInID, Log_AccessInfo, Log_Action,
UserID, LoginID, SettingGroup, SettingKey, SettingValue, CreationTime, LastModified)
SELECT GETDATE(), @UserID, @LogInID, @AccessInfo, 'MODIFY', Admin_Setting.* FROM Admin_Setting
WHERE SettingKey = @Key and SettingGroup = @SubscriptionID;
UPDATE Admin_Setting
SET
UserID = @UserID,
LogInID = @LoginID,
SettingKey = @Key,
SettingValue = @Value
WHERE
SettingKey = @Key AND
SettingGroup = @SubscriptionID;
END
ELSE
BEGIN
DELETE FROM Admin_Setting Where SettingKey = @Key and SettingGroup = @SubscriptionID;
END
END
ELSE
BEGIN
IF ((@DefaultValue IS NULL) OR (@DefaultValue <> @Value))
BEGIN
INSERT INTO Admin_Setting (UserID, LogInID, SettingGroup, SettingKey, SettingValue)
VALUES (@UserID, @LoginID, @SubscriptionID, @Key, @Value);
END
END
END
END
FETCH NEXT FROM @SettingCursor INTO @Key, @Value;
END
CLOSE @SettingCursor;
DEALLOCATE @SettingCursor;
SELECT 1;
END
```
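A hypothetical call is sketched below. `@SettingData` is a UTF-8 Base64-encoded JSON object of key/value pairs; the `receiptfooter` key and all other values are illustrative assumptions only. The special `[appmanifest]` key routes the update to `Data_AppManifest` instead of `Admin_Setting`.

```SQL
EXEC dbo.ES_POSUpdateSetting
    @EServiceID = 'POS0001', @SubscriptionID = 'SUB001', @UserID = 1,
    @LoginID = 'admin', @AccessInfo = '127.0.0.1',
    @SettingData = 'eyJyZWNlaXB0Zm9vdGVyIjoiVGhhbmsgeW91In0=';
    -- Base64 of '{"receiptfooter":"Thank you"}'
```

A subscription-level row is only kept when the value differs from the inherited default; a value that matches the default causes the subscription override to be deleted rather than stored.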
__*ES_POSUpdateSubscriberInfo*__ \
This stored procedure updates the subscriber's information.
```SQL
CREATE PROCEDURE [dbo].[ES_POSUpdateSubscriberInfo](
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@SubscriberInfo nvarchar(max))
AS
BEGIN
SET NOCOUNT ON;
DECLARE @JSONSubscriberInfo NVARCHAR(max);
SELECT @JSONSubscriberInfo = dbo.UTF8Base64Decode(@SubscriberInfo);
IF EXISTS(SELECT * From Data_Subscriber Where SubscriptionID = @SubscriptionID)
BEGIN
INSERT INTO Log_Subscriber Select GetDate(), @UserID, @LoginID, @AccessInfo, 'MODIFY_RECORD', Data_Subscriber.* FROM Data_Subscriber Where SubscriptionID = @SubscriptionID;
UPDATE sb
SET
sb.ContactPerson = ISNULL(js.ContactPerson,''),
sb.ContactPersonMobileNo = ISNULL(js.ContactPersonMobileNo,''),
sb.ContactPersonEmail = ISNULL(js.ContactPersonEmail,''),
sb.BusinessName = ISNULL(js.BusinessName,''),
sb.IndustryType = ISNULL(js.IndustryType,''),
sb.EmailAddress = ISNULL(js.EmailAddress,''),
sb.ContactNo = ISNULL(js.ContactNo,''),
sb.AddressInfo = ISNULL(js.AddressInfo,''),
sb.Township = ISNULL(js.Township,''),
sb.City = ISNULL(js.City,''),
sb.Country = ISNULL(js.Country,'')
FROM
Data_Subscriber sb JOIN
OPENJSON(@JSONSubscriberInfo) With
(SubscriptionID varchar(200), ContactPerson nvarchar(100), ContactPersonMobileNo nvarchar(100), ContactPersonEmail nvarchar(100),
BusinessName nvarchar(100), IndustryType nvarchar(100), EmailAddress nvarchar(100), ContactNo nvarchar(100), AddressInfo nvarchar(200),
Township nvarchar(50), City nvarchar(50), Country nvarchar(50)) Js
ON
Js.SubscriptionID = sb.SubscriptionID;
END
ELSE
BEGIN
INSERT INTO Data_Subscriber(SubscriptionID, ContactPerson, ContactPersonMobileNo, ContactPersonEmail,
BusinessName, IndustryType, EmailAddress, ContactNo, AddressInfo, Township, City, Country)
SELECT ISNULL(@SubscriptionID,''), ISNULL(ContactPerson,''), ISNULL(ContactPersonMobileNo,''), ISNULL(ContactPersonEmail,''),
ISNULL(BusinessName,''), ISNULL(IndustryType,''), ISNULL(EmailAddress,''), ISNULL(ContactNo,''), ISNULL(AddressInfo,''),
ISNULL(Township,''), ISNULL(City,''), ISNULL(Country,'')
FROM
OPENJSON(@JSONSubscriberInfo) With
(SubscriptionID varchar(200), ContactPerson nvarchar(100), ContactPersonMobileNo nvarchar(100), ContactPersonEmail nvarchar(100),
BusinessName nvarchar(100), IndustryType nvarchar(100), EmailAddress nvarchar(100), ContactNo nvarchar(100), AddressInfo nvarchar(200),
Township nvarchar(50), City nvarchar(50), Country nvarchar(50)) Js
END
SELECT 1;
END
```
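A hypothetical call is sketched below. `@SubscriberInfo` is a UTF-8 Base64-encoded JSON object whose fields mirror the `OPENJSON` schema above (`SubscriptionID`, `ContactPerson`, `BusinessName`, and so on); the encoded value shown is a truncated placeholder, not real data.

```SQL
EXEC dbo.ES_POSUpdateSubscriberInfo
    @SubscriptionID = 'SUB001', @UserID = 1, @LoginID = 'admin',
    @AccessInfo = '127.0.0.1',
    @SubscriberInfo = N'eyJTdWJzY3JpcHRpb25JRCI6IlNVQjAwMSIsIC4uLn0=';
    -- Base64 placeholder for '{"SubscriptionID":"SUB001", ...}'
```

The procedure updates the existing `Data_Subscriber` row (after logging the previous values) or inserts a new one if none exists, and returns `1` as a result set.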
__*ES_POSUpdateTableRecord*__ \
This stored procedure inserts or updates a table or table group.
```SQL
CREATE PROCEDURE [dbo].[ES_POSUpdateTableRecord](
@SubscriptionID varchar(200),
@EserviceID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@TableID int,
@Status varchar(50),
@EntryType varchar(50),
@GroupID int,
@DisplayName nvarchar(50),
@Capacity int,
@SortOrder int,
@TableCharge decimal,
@TableChargeMode varchar(50),
@Description nvarchar(500),
@ReturnValue int Output)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @InsertMode tinyint;
DECLARE @ActiveCount int;
DECLARE @PrevUnitRelationship int;
DECLARE @PrevOutgoingTransactionCount int;
DECLARE @PrevIncomingTransactionCount int;
DECLARE @SystemConfig nvarchar(4000);
DECLARE @SystemLimit int;
DECLARE @OutputTable Table (TableID int);
IF (@TableID = -1) Set @InsertMode = 1;
ELSE Set @InsertMode = 0;
SET @SystemConfig = (Select Top 1 SettingValue From Admin_Setting Where SettingKey = '_SYSTEMCONFIG' And (SettingGroup = @SubscriptionID Or SettingGroup = '*' Or SettingGroup = LEFT(@EserviceID,4) Or SettingGroup = @EserviceID) Order By Len(SettingGroup) desc);
SET @ActiveCount = (SELECT COUNT(*) FROM EData_TableList Where SubscriptionID = @SubscriptionID And Status = 'ACTIVE' And EntryType = @EntryType);
IF (@EntryType = 'TABLE')
BEGIN
SET @SystemLimit = (SELECT ISNULL(JSON_VALUE(@SystemConfig,'$.tablelimit'),3));
END
ELSE
BEGIN
SET @SystemLimit = (SELECT ISNULL(JSON_VALUE(@SystemConfig,'$.tablegrouplimit'),3));
END;
IF ((@ActiveCount >= @SystemLimit) AND (@InsertMode = 1))
BEGIN
IF (@EntryType = 'TABLE')
BEGIN
RAISERROR('[#ERROR];err_exceedtablelimit;Cannot exceed Table limit of $ITEMLIMIT',16,1);
RETURN;
END
ELSE
BEGIN
RAISERROR('[#ERROR];err_exceedtablegrouplimit;Cannot exceed Table Group Limit of $ITEMLIMIT',16,1);
RETURN;
END;
END
IF EXISTS(SELECT DisplayName From EData_TableList Where SubscriptionID = @SubscriptionID and GroupID = @GroupID and Status = 'ACTIVE' and TableID <> @TableID and LTRIM(RTRIM(DisplayName)) = LTRIM(RTRIM(@DisplayName)))
BEGIN
		RAISERROR('[#ERROR];err_tablenamealreadyexist;Table Name or Table Group Name Already Exists in the Same Group.',16,1);
RETURN;
END
BEGIN TRY
BEGIN TRANSACTION
IF (@InsertMode = 1)
BEGIN
Set @ReturnValue = 0;
INSERT INTO EData_TableList(SubscriptionID, UserID, LogInID, AccessInfo, Status, EntryType, GroupID, DisplayName, Capacity, SortOrder,
TableCharge, TableChargeMode, Description, CreationTime, LastModified)
OUTPUT Inserted.TableID Into @OutputTable
VALUES
(@SubscriptionID, @UserID, @LoginID, @AccessInfo, @Status, @EntryType, @GroupID, @DisplayName, @Capacity, @SortOrder,
@TableCharge, @TableChargeMode, @Description,GetDate(),GetDate());
SELECT @TableID = TableID From @OutputTable;
Set @ReturnValue = @ReturnValue + 1;
END
ELSE
BEGIN
INSERT INTO ELog_TableList Select GetDate(), @UserID, @LoginID, @AccessInfo, 'MODIFY_RECORD', EData_TableList.* FROM EData_TableList Where SubscriptionID = @SubscriptionID And TableID = @TableID;
UPDATE EData_TableList
SET
UserID = @UserID,
LogInID = @LogInID,
AccessInfo = @AccessInfo,
Status = @Status,
EntryType = @EntryType,
GroupiD = @GroupID,
DisplayName = @DisplayName,
Capacity = @Capacity,
SortOrder = @SortOrder,
TableCharge = @TableCharge,
TableChargeMode = @TableChargeMode,
Description = @Description,
LastModified = GetDate()
WHERE
SubscriptionID = @SubscriptionID And
TableID = @TableID;
SET @ReturnValue = 1;
END;
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
Set @ReturnValue = 0;
THROW
END CATCH
END
```
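A hypothetical call that creates a dining table is sketched below (`@TableID = -1` requests insert mode); every value, including the `@TableChargeMode` string, is an illustrative assumption.

```SQL
DECLARE @ReturnValue int;
EXEC dbo.ES_POSUpdateTableRecord
    @SubscriptionID = 'SUB001', @EserviceID = 'POS0001', @UserID = 1,
    @LoginID = 'admin', @AccessInfo = '127.0.0.1', @TableID = -1,
    @Status = 'ACTIVE', @EntryType = 'TABLE', @GroupID = 0,
    @DisplayName = N'Table 1', @Capacity = 4, @SortOrder = 1,
    @TableCharge = 0, @TableChargeMode = 'NONE', @Description = N'',
    @ReturnValue = @ReturnValue OUTPUT;
SELECT @ReturnValue AS ReturnValue;  -- positive on success
```

The active-count limit is read from the system configuration: `$.tablelimit` applies when `@EntryType = 'TABLE'`, and `$.tablegrouplimit` applies to any other entry type.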
__*ES_POSUpdateUnitRecord*__ \
This stored procedure inserts or updates a unit record.
```SQL
CREATE PROCEDURE [dbo].[ES_POSUpdateUnitRecord](
@SubscriptionID varchar(200),
@UserID int,
@LoginID varchar(200),
@AccessInfo varchar(2000),
@UnitID int,
@Status varchar(50),
@Language varchar(50),
@UnitType varchar(50),
@UnitOrder int,
@UnitName nvarchar(50),
@MajorUnitName nvarchar(50),
@MinorUnitName nvarchar(50),
@UnitRelationship int,
@AdjustableRelationship tinyint,
@AllowDecimalQuantity tinyint,
@ReturnValue int Output)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @InsertMode tinyint;
IF (@UnitID = -1) Set @InsertMode = 1;
ELSE Set @InsertMode = 0;
BEGIN TRY
IF (@UnitType = 'BASE')
BEGIN
Set @MajorUnitName = @UnitName
IF EXISTS(SELECT UnitID From EData_Unit Where Status = 'ACTIVE' and (SubscriptionID = '*' Or SubscriptionID = @SubscriptionID) and UnitID <> @UnitID and LTRIM(RTRIM(@UnitName)) = LTRIM(RTRIM(UnitName)))
BEGIN
			RAISERROR('[#ERROR];err_unitalreadyexist;Unit name already exists.',16,1);
RETURN;
END
END
ELSE
BEGIN
IF EXISTS(SELECT UnitID From EData_Unit Where Status = 'ACTIVE' and UnitID <> @UnitID and UnitRelationship = @UnitRelationship and (LTRIM(RTRIM(MajorUnitName)) = LTRIM(RTRIM(@MajorUnitName))) and (LTRIM(RTRIM(MinorUnitName)) = LTRIM(RTRIM(@MinorUnitName))))
BEGIN
			RAISERROR('[#ERROR];err_unitalreadyexist;A unit with the same relationship already exists.',16,1);
RETURN;
END
Set @UnitName = Concat(@MajorUnitName,'/',@MinorUnitName,' (',@UnitRelationship,')');
END
IF (@InsertMode = 1)
BEGIN
Set @ReturnValue = 0;
INSERT INTO EData_Unit(SubscriptionID, UserID, LoginID, AccessInfo, Status, Language, UnitType, UnitOrder,
UnitName, MajorUnitName, MinorUnitName, UnitRelationship, AdjustableRelationship)
Values (@SubscriptionID, @UserID, @LoginID, @AccessInfo, 'ACTIVE', @Language, @UnitType,0,
@UnitName, @MajorUnitName, @MinorUnitName, @UnitRelationship, 0);
Set @ReturnValue = @ReturnValue + 1;
END
ELSE
BEGIN
UPDATE EData_Unit
SET
UserID = @UserID,
LogInID = @LogInID,
AccessInfo = @AccessInfo,
Status = @Status,
UnitType = @UnitType,
UnitOrder = @UnitOrder,
UnitName = @UnitName,
MajorUnitName = @MajorUnitName,
MinorUnitName = @MinorUnitName,
UnitRelationship = @UnitRelationship,
AdjustableRelationship = @AdjustableRelationship,
AllowDecimalQuantity = @AllowDecimalQuantity
Where
SubscriptionID = @SubscriptionID And
UnitID = @UnitID;
Set @ReturnValue = 1;
END
END TRY
BEGIN CATCH
Set @ReturnValue = 0;
THROW
END CATCH
END
```
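A hypothetical call that defines a compound unit (1 Box = 12 Pcs) is sketched below; values are illustrative only. For non-`BASE` unit types the procedure derives `UnitName` itself (`Box/Pcs (12)` here), so the passed `@UnitName` is overwritten.

```SQL
DECLARE @ReturnValue int;
EXEC dbo.ES_POSUpdateUnitRecord
    @SubscriptionID = 'SUB001', @UserID = 1, @LoginID = 'admin',
    @AccessInfo = '127.0.0.1', @UnitID = -1, @Status = 'ACTIVE',
    @Language = 'en', @UnitType = 'CUSTOM', @UnitOrder = 0,
    @UnitName = N'', @MajorUnitName = N'Box', @MinorUnitName = N'Pcs',
    @UnitRelationship = 12, @AdjustableRelationship = 0,
    @AllowDecimalQuantity = 0, @ReturnValue = @ReturnValue OUTPUT;
SELECT @ReturnValue AS ReturnValue;  -- positive on success
```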
__*ES_POSUpdateUserRecord*__ \
This stored procedure inserts or updates a user record.
```SQL
CREATE PROCEDURE [dbo].[ES_POSUpdateUserRecord](
@Log_UserID int,
@Log_LogInID varchar(200),
@SubscriptionID varchar(200),
@EserviceID varchar(200),
@AccessInfo varchar(2000),
@UserID int,
@LoginID varchar(200),
@Password varchar(200),
@ReturnValue int Output)
AS
BEGIN
SET NOCOUNT ON;
DECLARE @InsertMode tinyint;
DECLARE @SystemConfig nvarchar(4000);
DECLARE @UserLimit int;
DECLARE @CurrentUserCount int;
IF (@UserID = -1) Set @InsertMode = 1;
ELSE Set @InsertMode = 0;
SET @SystemConfig = (Select Top 1 SettingValue From Admin_Setting Where SettingKey = '_SYSTEMCONFIG' And (SettingGroup = @SubscriptionID Or SettingGroup = '*' Or SettingGroup = LEFT(@EserviceID,4) Or SettingGroup = @EserviceID) Order By Len(SettingGroup) desc);
SET @CurrentUserCount = (Select Count(*) From Admin_User Where SubscriptionID = @SubscriptionID and Status = 'ACTIVE');
SET @UserLimit = (SELECT ISNULL(JSON_VALUE(@SystemConfig,'$.userlimit'),3));
IF ((@InsertMode = 1) And (@CurrentUserCount >= @UserLimit))
BEGIN
		RAISERROR('[#ERROR];err_userlimitexceed;User limit exceeded.',16,1);
RETURN;
END
IF EXISTS(SELECT @LoginID From Admin_User Where SubscriptionID = @SubscriptionID and LogInID= @LogInID and UserID != @UserID)
BEGIN
		RAISERROR('[#ERROR];err_loginidalreadyexist;Login ID already exists.',16,1);
RETURN;
END
BEGIN TRY
BEGIN TRANSACTION
IF (@InsertMode = 1)
BEGIN
INSERT INTO Admin_User(SubscriptionID, LogInID, Password)
Values
(@SubscriptionID, @LoginID, @Password);
Set @ReturnValue = 1;
END
ELSE
BEGIN
INSERT INTO Log_User(Log_time, Log_UserId, Log_LogInID, Log_AccessInfo, Log_Action,
UserID, SubscriptionID, LoginID, Type, Status, UserRole, CustomSetting,
Password, Name, EmailAddress, ContactNo, DateOfBirth, Gender, BuildingNo,
Floor, RoomNo, Street, Quarter, AddressInfo, Township, City, Country, Postal,
AdditionalInfo, CreationTime, LastModified)
			Select GETDATE(), @Log_UserID, @Log_LogInID, @AccessInfo, 'MODIFY', Admin_User.* from Admin_User
Where SubscriptionID = @SubscriptionID And UserID = @UserID;
UPDATE Admin_User
SET
LogInID = @LogInID,
Password = IIF(LEN(@Password) > 0, @Password, Password)
Where
SubscriptionID = @SubscriptionID And
UserID = @UserID;
Set @ReturnValue = 1;
END
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
Set @ReturnValue = 0;
THROW
END CATCH
END
```
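A hypothetical call that adds a login is sketched below (`@UserID = -1` requests insert mode); values are illustrative, and `@Password` is assumed to arrive already hashed by the caller. On update, an empty `@Password` keeps the existing password.

```SQL
DECLARE @ReturnValue int;
EXEC dbo.ES_POSUpdateUserRecord
    @Log_UserID = 1, @Log_LogInID = 'admin', @SubscriptionID = 'SUB001',
    @EserviceID = 'POS0001', @AccessInfo = '127.0.0.1',
    @UserID = -1, @LoginID = 'cashier01', @Password = '<password-hash>',
    @ReturnValue = @ReturnValue OUTPUT;
SELECT @ReturnValue AS ReturnValue;  -- 1 on success
```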
-------
Designed and Developed by __*Kyi Phyo Cho @ Albert Cho*__
-------
| 38.045565 | 330 | 0.699789 | yue_Hant | 0.937199 |
6f38592bad209993ab8e15348ac8c2d8173d1f9a | 1,669 | md | Markdown | api/Access.DoCmd.OpenTable.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 283 | 2018-07-06T07:44:11.000Z | 2022-03-31T14:09:36.000Z | api/Access.DoCmd.OpenTable.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 1,457 | 2018-05-11T17:48:58.000Z | 2022-03-25T22:03:38.000Z | api/Access.DoCmd.OpenTable.md | CeptiveYT/VBA-Docs | 1d9c58a40ee6f2d85f96de0a825de201f950fc2a | [
"CC-BY-4.0",
"MIT"
] | 469 | 2018-06-14T12:50:12.000Z | 2022-03-27T08:17:02.000Z | ---
title: DoCmd.OpenTable method (Access)
keywords: vbaac10.chm4164
f1_keywords:
- vbaac10.chm4164
ms.prod: access
api_name:
- Access.DoCmd.OpenTable
ms.assetid: 6461c8c1-7452-f812-8914-e46406c58eae
ms.date: 03/07/2019
ms.localizationpriority: medium
---
# DoCmd.OpenTable method (Access)
The **OpenTable** method carries out the OpenTable action in Visual Basic.
## Syntax
_expression_.**OpenTable** (_TableName_, _View_, _DataMode_)
_expression_ A variable that represents a **[DoCmd](Access.DoCmd.md)** object.
## Parameters
|Name|Required/Optional|Data type|Description|
|:-----|:-----|:-----|:-----|
| _TableName_|Required|**Variant**|A string expression that's the valid name of a table in the current database. If you execute Visual Basic code containing the **OpenTable** method in a library database, Microsoft Access looks for the table with this name first in the library database, and then in the current database.|
| _View_|Optional|**[AcView](Access.AcView.md)**|An **AcView** constant that specifies the view in which the table will open. The default value is **acViewNormal**.|
| _DataMode_|Optional|**[AcOpenDataMode](Access.AcOpenDataMode.md)**|An **AcOpenDataMode** constant that specifies the data entry mode for the table. The default value is **acEdit**.|
## Remarks
You can use the **OpenTable** method to open a table in Datasheet view, Design view, or Print Preview. You can also select a data entry mode for the table.
## Example
The following example opens the **Employees** table in Print Preview.
```vb
DoCmd.OpenTable "Employees", acViewPreview
```
[!include[Support and feedback](~/includes/feedback-boilerplate.md)]
1.1.0
- Use silver searcher instead of Ack
1.0.0
- First release
# MakeMD (CLI)
 [](https://travis-ci.org/Eonm/makemd-rs)
Build and lint your academic markdown documents with Pandoc and Zotero.
## Install
MakeMD is available for Linux and Windows. Download the latest release of MakeMD.
OSX users have to build MakeMD by themselves.
### Dependencies
These dependencies are used to build documents:
* pandoc
* pandoc-citeproc
* latex (texlive-full)
## Run
### Init a project
```sh
./makemd-rs init
```
You can tweak your project by editing the `.makemd` file. Or you can use the cross-platform MakeMD GUI.
### Build documents (pdf, presentation)
By default all your documents have to be placed in the `./md/` folder. MakeMD uses recursion to find markdown files.
**Markdown files starting with `[draft]` won't be built.**
#### Build pdf
```sh
./makemd-rs build --pdf
```
#### Build presentation
```sh
./makemd-rs build --presentation
```
## Maintenance
### Download your bibliography
```sh
makemd-rs maintenance --update-bib
```
### Download your csl file
```sh
makemd-rs maintenance --update-csl
```
### Lint
_Coming soon_
## Build MakeMD
```
git clone https://github.com/Eonm/makemd-rs
cd makemd-rs
cargo build --release
```
## License
MakeMD is distributed under the MIT license.
---
title: How to browse the data catalog
description: This article provides an overview of how to browse the Azure Purview data catalog based on asset type.
author: djpmsft
ms.author: daperlov
ms.service: purview
ms.subservice: purview-data-catalog
ms.topic: conceptual
ms.date: 10/01/2021
ms.openlocfilehash: ab5cb2856343d4fdcfcbd6c3c88d45889c191d2d
ms.sourcegitcommit: 557ed4e74f0629b6d2a543e1228f65a3e01bf3ac
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 10/05/2021
ms.locfileid: "129456842"
---
# <a name="browse-the-azure-purview-data-catalog"></a>Browse the Azure Purview data catalog
Searching a data catalog is a great tool for data discovery when a data consumer knows what they are looking for, but users often don't know exactly how their data estate is structured. The Azure Purview data catalog offers a browse experience that lets users explore what data is available to them, either by collection or by traversing the hierarchy of each data source in the catalog.
To access the browse experience, select "Browse assets" on the data catalog home page.
:::image type="content" source="media/how-to-browse-catalog/studio-home-page.png" alt-text="Purview home page" border="true":::
## <a name="browse-by-collection"></a>Browse by collection
Browse by collection lets you explore the different collections for which you are a data reader or data curator.
> [!NOTE]
> You will only see the collections you have access to. For more information, see [Create and manage collections](how-to-create-and-manage-collections.md).
:::image type="content" source="media/how-to-browse-catalog/browse-by-collection.png" alt-text="Screenshot showing the browse-by-collection page" border="true":::
Once a collection is selected, you get a list of assets in that collection with the facets and filters available in search. Since a collection can have thousands of assets, browse uses the Purview search relevance engine to surface the most important assets first.
:::image type="content" source="media/how-to-browse-catalog/browse-collection-results.png" alt-text="Screenshot showing the browse-by-collection results" border="true":::
For certain annotations, you can click the ellipses to choose between an AND condition and an OR condition.
:::image type="content" source="./media/how-to-search-catalog/search-and-or-choice.png" alt-text="Screenshot showing how to choose between an AND or an OR condition" border="true":::
If the selected collection doesn't contain the data you are looking for, you can easily navigate to related collections or go back and view the entire collection tree.
:::image type="content" source="media/how-to-browse-catalog/browse-collection-navigation.png" alt-text="Screenshot showing how to navigate between collections" border="true":::
When you find the asset you are looking for, you can select it to view additional details such as the schema, lineage, and a detailed classification list. For more information about the asset details page, see [Manage catalog assets](catalog-asset-details.md).
:::image type="content" source="./media/how-to-search-catalog/search-view-asset.png" alt-text="Screenshot showing the asset details page" border="true":::
## <a name="browse-by-source-type"></a>Browse by source type
Browse by source type lets data consumers explore the hierarchies of data sources through an explorer view. Select a source type to see the list of scanned sources.
For example, you can easily find a dataset called *DateDimension* in a folder named *Dimensions* in Azure Data Lake Storage Gen 2. You can use the "Browse by source type" experience to go to the ADLS Gen 2 storage account and then navigate through the service > container > folder(s) to reach the specific *Dimensions* folder and see the *DateDimension* table.
A native browsing experience with a hierarchical namespace is provided for each corresponding data source.
> [!NOTE]
> After a successful scoped scan, there may be a delay before newly scanned assets appear in the browse experience.
1. On the **Browse by source types** page, the tiles are categorized by data source. To further explore the assets in each data source, select the corresponding tile.
:::image type="content" source="media/how-to-browse-catalog/browse-asset-types.png" alt-text="Browse asset types page" border="true":::
> [!TIP]
> Some tiles are groupings of a collection of data sources. For example, the Azure Storage account tile contains all Azure Blob Storage and Azure Data Lake Storage Gen2 accounts. The Azure SQL Server tile will show the Azure SQL Server assets that contain the Azure SQL Database and Azure SQL dedicated pool instances ingested into the catalog.
1. The next page lists the top-level assets under the chosen data type. Choose one of the assets to explore its contents further. For example, after selecting "Azure SQL Database", you will see a list of databases with assets in the data catalog.
:::image type="content" source="media/how-to-browse-catalog/asset-type-specific-browse.png" alt-text="Azure SQL Database browse page" border="true":::
1. The explorer view opens. Start browsing by selecting an asset in the left pane. Child assets will be listed in the right pane of the page.
:::image type="content" source="media/how-to-browse-catalog/explorer-view.png" alt-text="Explorer view" border="true":::
1. To view the details of an asset, select its name or the ellipsis button on the far right.
:::image type="content" source="media/how-to-browse-catalog/view-asset-detail-click-ellipses.png" alt-text="Select the ellipsis button to view the asset details page" border="true":::
## <a name="next-steps"></a>Next steps
- [How to create, import, and export glossary terms](how-to-create-import-export-glossary.md)
- [How to manage term templates for the business glossary](how-to-manage-term-templates.md)
- [How to search the Purview data catalog](how-to-search-catalog.md)
---
layout: slide
title: "Welcome to our second slide!"
---
Walker Valentinus Simanjuntak
Use the left arrow to go back!
---
id: roadmap
title: Project Roadmap
sidebar_label: Project Roadmap
---
### Current Status: Pro TestNet in progress
Updated May 2021
` ✓ = required for the milestone`
## Infrastructure
| | **TestNet** | **Pro TestNet** | **Pre MainNet** | **MainNet** | **Status** |
|:------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------------:|:--------------:|:-----------:|:-----------:|
| Network set up and stabilized | ✓ | ✓ | ✓ | ✓ | complete |
| Documentation available in the Github repository to deploy nodes | ✓ | ✓ | ✓ | ✓ | complete |
| Documentation available in the Github repository on topology and architecture | ✓ | ✓ | ✓ | ✓ | complete |
| Documentation available in the Github repository on how to deploy your apps | ✓ | ✓ | ✓ | ✓ | complete |
| Validator nodes enabled | ✓ | ✓ | ✓ | ✓ | complete |
| Writer nodes enabled | ✓ | ✓ | ✓ | ✓ | complete |
| Boot nodes enabled | | ✓ | ✓ | ✓ | complete |
| Smart-contract based permissioning | | ✓ | ✓ | ✓ | in progress |
| App for managing permissioning | | ✓ | ✓ | ✓ | complete |
| Privacy leveraged as native | | ✓ | ✓ | ✓ | in progress |
| Dashboard for nodes | | ✓ | ✓ | ✓ | complete |
| Transaction explorer | | ✓ | ✓ | ✓ | complete |
| Dashboard of entities running nodes | | ✓ | ✓ | ✓ | complete |
| Interface to visualize node's activity | | ✓ | ✓ | ✓ | in progress |
| Tracking system to follow and fix installation issues | | | ✓ | ✓ | |
| Smart-contract based local whitelisting | | | ✓ | ✓ | complete |
| Dashboard of applications running on the blockchain | | | ✓ | ✓ | in progress |
| Follow installation issues via a defined tracking system to enable and fix installation problems | | | ✓ | ✓ | |
| Cloud vendor integration | | | ✓ | ✓ | |
| Gas schema to manage the use of the network | | | ✓ | ✓ | complete |
| Observer nodes enabled | | | ✓ | ✓ | complete |
| Smart-contract-based rotation of core nodes | | | ✓ | ✓ | |
| Node's activity monitored | | | ✓ | ✓ | in progress |
| Marketplace of applications | | | ✓ | ✓ | |
| Smart-contract based gas schema | | | | ✓ | complete |
| Data analytics tools | | | | ✓ | in progress |
| Quantum safe protocols and algorithms | | | | ✓ | |
## Service
| | **TestNet** | **Pro TestNet** | **Pre MainNet** | **MainNet** | **Status** |
|:------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------------:|:--------------:|:-----------:|:-----------:|
| Installation support | ✓ | ✓ | ✓ | ✓ | complete |
| Writer nodes allowed to run tests and demos | ✓ | ✓ | ✓ | ✓ | complete |
| Writer nodes allowed to run tests, demos, POCs, MVPs and sandboxes | | ✓ | ✓ | ✓ | complete |
| Writer nodes allowed to run applications in production | | | ✓ | ✓ | |
| Cloud integration for node deploy and maintenance | | | ✓ | ✓ | |
| 24/7 technical support | | | | ✓ | |
## Legal
| | **TestNet** | **Pro TestNet** | **Pre MainNet** | **MainNet** | **Status** |
|:------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------------:|:--------------:|:-----------:|:-----------:|
| Test-net disclaimer | ✓ | ✓ | ✓ | ✓ | complete |
| Terms and Conditions for core nodes | | ✓ | ✓ | ✓ | complete |
| Terms and Conditions for writer nodes | | ✓ | ✓ | ✓ | complete |
| Privacy policy | | ✓ | ✓ | ✓ | complete |
| On-boarding agreement forms | | ✓ | ✓ | ✓ | complete |
| Forbidden use cases | | ✓ | ✓ | ✓ | complete |
| SLAs for operation | | | ✓ | ✓ | |
| Terms and conditions for satellite nodes | | | ✓ | ✓ | |
| Legal coverage | | | ✓ | ✓ | |
| Service agreement for node providers | | | ✓ | ✓ | |
| Business policies | | | ✓ | ✓ | |
| LACChain-Net | | | ✓ | ✓ | |
| Economics model | | | ✓ | ✓ | |
## Standards and Protocols
| | **TestNet** | **Pro TestNet** | **Pre MainNet** | **MainNet** | **Status** |
|:------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------------:|:--------------:|:-----------:|:-----------:|
| DID methods compatible with LACChain registered at W3C | | | ✓ | ✓ | |
| Templates, standards and protocols for verifiable credentials and presentations in areas as education, land registry, … | | | ✓ | ✓ | |
## Identification and Authentication
| | **TestNet** | **Pro TestNet** | **Pre MainNet** | **MainNet** | **Status** |
|:------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------------:|:--------------:|:-----------:|:-----------:|
| Nodes identified and authenticated with DIDs and verifiable credentials | | | ✓ | ✓ | |
| LACChain ID app with different wallets integrated | | | ✓ | ✓ | |
## Committees
| | **TestNet** | **Pro TestNet** | **Pre MainNet** | **MainNet** | **Status** |
|:------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------------:|:--------------:|:-----------:|:-----------:|
| Satellite Permissioning Committee (SPC) constituted and operative | | | ✓ | ✓ | |
| Core Permissioning Committee (CPC) constituted and operative | | | ✓ | ✓ | |
## Tokenized Fiat Money
| | **TestNet** | **Pro TestNet** | **Pre MainNet** | **MainNet** | **Status** |
|:------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------------:|:--------------:|:-----------:|:-----------:|
| Regulated tokenized fiat money enabled | | | ✓ | ✓ | |
## Insurance
| | **TestNet** | **Pro TestNet** | **Pre MainNet** | **MainNet** | **Status** |
|:------------------------------------------------------------------------------------------------------------------------|:--------------:|:--------------:|:--------------:|:-----------:|:-----------:|
| Blockchain Network Insurance Coverage | | | | ✓ | |
# 💩 What is poope?
**poope** is basically an assembler, but **VERY** different!
## ⌨ Example Code
```
.data
; Define a byte variable called hello_world
byte hello_world: "Hello, World!"
.code
; Print 'Hello, World!' to console
cpy reg0, hello_world
call print
; Success (Exit Code: 0)
cpy reg, 0
ret
```
## 🚀 TODO List!
- [x] .code section
- [ ] .data section
- [x] labels
- [x] copy register, value
- [x] copy register, register
- [x] jump label
- [x] increment register
- [x] decrement register
- [x] return
---
- [x] entry point
---
- [x] 32-bit Support
- [ ] 64-bit Support
## 📝 NOTES
- **ALWAYS** optimize the source code.
- **ALWAYS** try to comment everything.
## ✍ LICENSE
Please refer to [**LICENSE**](https://github.com/ryaangu/poope/blob/main/LICENSE).
6f3b2a6db7d0659b0090a9d54846b6fecbc5b925 | 1,126 | md | Markdown | Documentation/release/dev/vtkFFT-and-vtkTableFFT.md | cclauss/VTK | f62a52cce9044159efb4adb7cc0cfd7ec0bc8b6d | [
"BSD-3-Clause"
] | null | null | null | Documentation/release/dev/vtkFFT-and-vtkTableFFT.md | cclauss/VTK | f62a52cce9044159efb4adb7cc0cfd7ec0bc8b6d | [
"BSD-3-Clause"
] | null | null | null | Documentation/release/dev/vtkFFT-and-vtkTableFFT.md | cclauss/VTK | f62a52cce9044159efb4adb7cc0cfd7ec0bc8b6d | [
"BSD-3-Clause"
] | null | null | null | ## Better support of the FFT in VTK
Using the FFT within **VTK** is now easier and more performant. There are also some new advanced features.
---
For FFT usage with STL containers:
- The `vtkFFT` class now implements the FFT and its inverse, as well as optimized versions of both for real input. The class uses the numpy terminology for these function names. The implementation is done using the *kissfft* library.
- It also has utility functions for dealing with complex numbers and for getting useful information such as the frequency array for a given length and sampling rate.
- Finally, `vtkFFT` has functions for generating 1D and 2D kernels. For now it is possible to construct Hanning, Bartlett, Sine, Blackman and Rectangular kernels.
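Since `vtkFFT` follows the numpy naming scheme, the relationship between the complex-input transform and its real-input variant can be illustrated with a tiny pure-Python DFT. This is a sketch only (a naive O(n²) DFT, not the kissfft-backed vtkFFT API): the real-input variant keeps just the first `n // 2 + 1` bins, because the remaining bins of a real signal's spectrum are complex conjugates of those.

```python
import cmath

def fft(signal):
    """Naive DFT (O(n^2)), standing in for a complex-input FFT."""
    n = len(signal)
    return [sum(signal[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def rfft(signal):
    """Real-input variant: only the n // 2 + 1 non-redundant bins are kept."""
    return fft(signal)[:len(signal) // 2 + 1]

signal = [0.0, 1.0, 0.0, -1.0]   # one period of a sine, sampled 4 times
full = fft(signal)
half = rfft(signal)
assert len(half) == len(signal) // 2 + 1
# The upper half of the full spectrum mirrors the lower half:
assert abs(full[3] - full[1].conjugate()) < 1e-9
```

This is why the real-input routines are both smaller in output and cheaper to compute.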
---
The `vtkTableFFT` class now uses the `vtkFFT` class for its implementation, which is faster than the former implementation. It also offers more features, such as:
- Creating a frequency column in the output table
- Doing the FFT per block and then computing the average of all blocks to get the result
- Applying a window function before computing the FFT
# Hackerbay-Microservices
A stateless microservice in Nodejs, with three major functionalities - Authentication, JSON patching, Image Thumbnail Generation
## Setup
The API requires Nodejs to be installed on your system
git clone this repo
cd Hackerbay-Microservice
Create a `.env` file and set `jwtSecret` to any secret key of your choice, e.g.:
jwtSecret=secretKey
npm install
npm start
App can be reached at localhost:3000
## Testing the API routes.
Testing can be done with Postman
### Authentication
This is a demo authentication so you can use any username or password to login.
1. Make a POST request to /api/auth.
2. In the Body select x-www-form-urlencoded.
3. Set the username key and the password key with values of at least 3 characters each.
4. Send
Expected result:
{
"user": "john",
"token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImpvaG4iLCJpYXQiOjE2MjM2NDM4MTksImV4cCI6MTYyMzY2NTQxOX0.wPW5KsMNJyI2X8t0TDyVWhPpfJ3hEod3auATgM0qGLg",
"authorized": true
}
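The token in the response above is a JWT signed with the `jwtSecret` from the `.env` file. The service itself uses a Node JWT library; purely as an illustration of the signing scheme (HS256 over the base64url-encoded header and payload), here is a minimal Python sketch:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (
        b64url(json.dumps(header, separators=(",", ":")).encode())
        + "."
        + b64url(json.dumps(payload, separators=(",", ":")).encode())
    )
    signature = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)

token = make_token({"username": "john"}, "secretKey")
# A JWT has three dot-separated parts: header.payload.signature
assert token.count(".") == 2
```

The server verifies the signature with the same secret on every protected request, which is what makes the service stateless.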
### JSON patching
Apply json patch to a json object, and return the resulting json object.
1. Make a POST request to /api/patchJson.
2. Set the key to jsonObject and value to an object. Set another key to jsonPatchObject and the value to an object.
Examples:
jsonObject
{ "user": { "firstName": "John", "lastName": "Doe" } }
jsonPatchObject
[{"op": "replace", "path": "/user/firstName", "value": "James"}, {"op": "replace", "path": "/user/lastName", "value": "Brown"}]
3. Add Header, set key as ```token``` and value as token received from **Authentication**.
4. Expected result:
{ "user": { "firstName": "James", "lastName": "Brown" } }
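What the endpoint does with these two keys can be sketched in a few lines. The applier below is illustrative only: it handles just the RFC 6902 `replace` operation used in the example (the actual service would typically rely on a full JSON Patch library covering add, remove, move, copy, and test as well):

```python
def apply_replace_patch(document, patch):
    """Apply RFC 6902 'replace' operations to a nested dict, in place."""
    for op in patch:
        if op["op"] != "replace":
            raise NotImplementedError(op["op"])
        # A JSON Pointer like "/user/firstName" -> ["user", "firstName"]
        *parents, leaf = op["path"].lstrip("/").split("/")
        target = document
        for key in parents:
            target = target[key]
        target[leaf] = op["value"]
    return document

doc = {"user": {"firstName": "John", "lastName": "Doe"}}
patch = [
    {"op": "replace", "path": "/user/firstName", "value": "James"},
    {"op": "replace", "path": "/user/lastName", "value": "Brown"},
]
result = apply_replace_patch(doc, patch)
# result == {"user": {"firstName": "James", "lastName": "Brown"}}
```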
### Thumbnail Generator
Create a thumbnail.
1. Make a POST request to /api/thumbnail.
2. Set the key ```imageUrl``` to a public image url.
3. Add Header, set key as ```token``` and value as token received from **Authentication**.
4. The image will be downloaded and converted to a 50x50-pixel thumbnail. A sample expected result:
{
"converted": true,
"user": "doej",
"success": "Image has been resized",
"thumbnail": "./public/images/resized/"
}
## Unit Testing
Mocha is used for the unit testing.
cd to the application's root directory, then run `npm test`.
## Logging
All logs are saved in ```logger.log``` in the application's root.
## Built With
* [Node.js](https://nodejs.org)
* [Express](https://expressjs.com/)
* [Mocha](https://mochajs.org/) - For testing
## Docker image
https://hub.docker.com/repository/docker/jelroi/hackerbay-microservices
---
title: Converting a Windows NT 4.0 Miniport Driver to Windows 2000
description: Converting a Windows NT 4.0 Miniport Driver to Windows 2000
keywords:
- video miniport drivers WDK Windows 2000 , multiple Windows versions, converting a Windows NT 4.0 driver
- converting video miniport drivers WDK Windows 2000
ms.date: 04/20/2017
ms.localizationpriority: medium
---
# Converting a Windows NT 4.0 Miniport Driver to Windows 2000
## <span id="ddk_converting_a_windows_nt_4_0_miniport_driver_to_windows_2000_gg"></span><span id="DDK_CONVERTING_A_WINDOWS_NT_4_0_MINIPORT_DRIVER_TO_WINDOWS_2000_GG"></span>
A good Windows NT 4.0 and previous miniport driver can easily become a Windows 2000 and later miniport driver. The following are some of the updates necessary to provide Plug and Play support, which is required in Windows 2000 and later miniport drivers:
- See [Plug and Play and Power Management in Video Miniport Drivers (Windows 2000 Model)](plug-and-play-and-power-management-in-video-miniport-drivers--windows-.md) for a list of new functions that must be implemented. Be sure to initialize the new members of [**VIDEO\_HW\_INITIALIZATION\_DATA**](/windows-hardware/drivers/ddi/video/ns-video-_video_hw_initialization_data) to point to these new functions.
- Update the call to [**VideoPortInitialize**](/windows-hardware/drivers/ddi/video/nf-video-videoportinitialize) in your [**DriverEntry**](./driverentry-of-video-miniport-driver.md) function. The fourth parameter (*HwContext*) must be **NULL** on Windows 2000 and later.
- Update your [*HwVidFindAdapter*](/windows-hardware/drivers/ddi/video/nc-video-pvideo_hw_find_adapter) function. For devices on an enumerable bus, *HwVidFindAdapter* must be changed as follows:
- Remove most of your device detection code. This is because a call to [*HwVidFindAdapter*](/windows-hardware/drivers/ddi/video/nc-video-pvideo_hw_find_adapter) on Windows 2000 means that the PnP manager has already detected the device.
- Call [**VideoPortGetAccessRanges**](/windows-hardware/drivers/ddi/video/nf-video-videoportgetaccessranges) to obtain the bus-relative physical addresses to which the device will respond. These addresses are assigned by the PnP manager.
- If the driver supports more than one device type, determine the type of device.
- Ignore the [*Again*](/windows-hardware/drivers/ddi/video/nc-video-pvideo_hw_find_adapter) parameter. This is because the system will call *HwVidFindAdapter* only once per device.
For a device on a nonenumerable bus such as ISA, PnP still attempts to start the device, although it is the responsibility of [*HwVidFindAdapter*](/windows-hardware/drivers/ddi/video/nc-video-pvideo_hw_find_adapter) to determine whether the device is actually present.
- Update the **.Mfg** section of the driver's INF file to include the device and vendor ID. This is required so that the PnP manager can associate the device with its INF file. Samples of the Windows NT 4.0 and updated Windows 2000 and later **.Mfg** sections follow:
```cpp
[ABC.Mfg] ; Windows NT V4.0 INF
%ABC% ABC Graphics Accelerator A = abc
%ABC% ABC Graphics Accelerator B = abc
[ABC.Mfg] ; Windows 2000 and later INF
%ABC% ABC Graphics Accelerator A = abc, PCI\VEN_ABCD&DEV_0123
%ABC% ABC Graphics Accelerator B = abc, PCI\VEN_ABCD&DEV_4567
```
You can use the *geninf.exe* tool that is included with the Driver Development Kit (DDK) to generate an INF. (The DDK preceded the Windows Driver Kit \[WDK\].) Keep in mind, however, that *geninf.exe* does not create an INF for Windows NT 4.0. You must modify the INF file produced by *geninf.exe* if you intend to support Windows NT 4.0. See [Creating Graphics INF Files](creating-graphics-inf-files.md) for more details.
The Windows 2000 and later video port supports Windows NT 4.0 miniport drivers as legacy drivers. The graphics adapter for a legacy miniport driver cannot be removed from the system while the system is running, nor are legacy miniport drivers automatically detected when added to a running system.
### How to contribute
If you want to add your code to the project, follow these steps:
* Write tests for your code; coverage should be greater than or equal to 80 percent
* Run the ant build and it should pass (phpcs + phpmd + tests)
* No code duplication (see phpcpd build logs)
### Build the project locally
I use `apache ant` to build the project: https://ant.apache.org
`how to install ant?`
```bash
apt-get install ant
```
`how to run?`
Just one command
```bash
ant
```
# Archives-Management
## Archives Management System
This project provides a user-friendly web management platform for all of a company's electronic archives and official documents. The main features of Archives-Management are as follows:
- Online preview of PDF documents
- Rich charts displaying archive statistics
- Document upload (creation) and document version management
- LDAP-based user management
- Full-text search of document content
- Schedule management
### Main page examples
Home page:

Archive list:

Upload archive:

Schedule management:

### Development environment and frameworks used
- **Development environment**: CentOS 7.2 (PyCharm), mariadb-5.5.56, django 1.11.6
- **HTML static template**: [Admin template based on Amaze UI](http://tpl.amazeui.org/content.html?21), with thanks to Amaze UI!
- **Front-end plugins**: jquery, jquery.medis.js, bootstrap, sweetalert, datatables, and the front-end plugins bundled with the HTML template
### Notes about the project
To be continued
### Deployment
To be continued
# BookClub
Book club of Virnect developers
# weblayouts-wp-theme
WordPress theme of weblayouts.ca, with full documentation.
# Note on development methodology
This theme's CSS architecture follows the [CSS Guidelines](http://cssguidelin.es/) and [BEM](http://getbem.com/) methodologies.
# JavaScript & SCSS file processing
Complete processing (validation, minification, ...) is done by [CodeKit](https://codekitapp.com/).
# Stylesheet resources (By order of delivery)
Stylesheet assets are located in `assets/stylesheets`.
<ol>
<li>
**Critical stylesheet**: must load before the rest of the page so it looks decent. It is very light, so the render-blocking period is very short. It is included at the end of the `<head>` (as advised by [PageSpeed Tools' Optimize CSS Delivery](https://developers.google.com/speed/docs/insights/OptimizeCSSDelivery)).
<ol>
<li>
```html
<head>
...
...
<style>
@import '<?php echo get_stylesheet_directory_uri(); ?>/css/style-critical.css';
</style>
</head>
```
</li>
</ol>
</li>
<li>**Non-critical stylesheets**: the site's remaining styles (loaded asynchronously via [loadCSS](https://gist.github.com/schilke/02357d9263ed28fc1769))
<ol>
<li>style-non-critical.css (main styles)</li>
<li>font-awesome (4.6.3)</li>
<li>google-fonts (Lato:400,700,900)</li>
<li>**Plugin stylesheets**</li>
<li>admin.css (browser caching)</li>
<li>styles.css (contact-form-7)</li>
<li>pagenavi-css (wp-pagenavi)</li>
<li>style.min.css (social-warfare)</li>
<li>prism.css (prism-wp)</li>
<li>prism-line-highlight.css (prism-wp)</li>
<li>prism-line-numbers.css (prism-wp)</li>
<li>jetpack.css (jetpack)</li>
</ol>
</li>
</ol>
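For reference, the asynchronous pattern that loadCSS implements can also be expressed declaratively. This is a generic illustration of the technique, not this theme's actual markup; the `href` below is a placeholder:

```html
<!-- Fetch the stylesheet without blocking rendering; once it has
     loaded, promote it to a real stylesheet. -->
<link rel="preload" href="css/style-non-critical.css" as="style"
      onload="this.onload=null;this.rel='stylesheet'">
<noscript>
  <!-- Fallback for browsers with JavaScript disabled. -->
  <link rel="stylesheet" href="css/style-non-critical.css">
</noscript>
```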
# JavaScript resources (By order of delivery)
1. wl-main-min.js (renamed "weblayouts.js" in CodeKit)
2. wl-document-min.js (depends on "loadCSS-min.js")
3. page-home-min.js (depends on "wl-document-min.js")
*Source: README.md from daimeng/myanimelist-gql (MIT)*

# myanimelist-gql
Messing around with Python Graphene, no add/edit yet.
Example: POST a GraphQL query to https://mal-gql-demo.appspot.com/query

An introspection query is available at https://github.com/graphql/graphql-js/blob/master/src/utilities/introspectionQuery.js
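A minimal sketch of how such a POST query could be sent from Python's standard library. The endpoint URL is the one given above; the helper name and the example query are invented here, since the schema is not reproduced in this README:

```python
import json
from urllib import request

ENDPOINT = "https://mal-gql-demo.appspot.com/query"

def build_graphql_request(query, variables=None):
    """Package a GraphQL query as a JSON POST request for the endpoint."""
    payload = json.dumps({"query": query, "variables": variables or {}})
    return request.Request(
        ENDPOINT,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A standard introspection-style query; the request is only built here,
# not sent.
req = build_graphql_request("{ __schema { queryType { name } } }")
```

Passing the built request to `urllib.request.urlopen` would actually send it; the sketch deliberately stops short of the network call.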
*Source: src/pages/posts/2020-04-12T00:00:03-post.md from evanmacbride/reddit-digest (MIT)*

---
title: '04/12/20 12:00AM UTC Snapshot'
date: '2020-04-12T00:00:03'
---
<ul>
<h2>Sci/Tech</h2>
<li><a href='https://www.desmogblog.com/2020/04/10/baltimore-rhode-island-climate-liability-court-misleading-conduct'><img src='https://b.thumbs.redditmedia.com/5jQncUkHdRKPXgvM3XXphLX9Ci5J0LMLeqdHk9QyE6A.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://www.desmogblog.com/2020/04/10/baltimore-rhode-island-climate-liability-court-misleading-conduct'>Baltimore, Rhode Island Argue They’re Suing Fossil Fuel Companies Over Climate Deception. The lawsuits target major fossil fuel companies like ExxonMobil, Chevron, and BP, alleging they deliberately deceived the public</a></div>(desmogblog.com) posted by <a href='https://www.reddit.com/user/Wagamaga'>Wagamaga</a> in <a href='https://www.reddit.com/r/Futurology'>Futurology</a> 9982 points & 262 <a href='https://www.reddit.com/r/Futurology/comments/fz3402/baltimore_rhode_island_argue_theyre_suing_fossil/'>comments</a></div></li>
<li><a href='https://www.wired.com/story/signal-earn-it-ransomware-security-news/'><svg version='1.1' viewBox='-34 -14 104 64' preserveAspectRatio='xMidYMid meet' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
<title>link thumbnail</title>
<path d='M32,4H4A2,2,0,0,0,2,6V30a2,2,0,0,0,2,2H32a2,2,0,0,0,2-2V6A2,2,0,0,0,32,4ZM4,30V6H32V30Z'></path>
<path d='M8.92,14a3,3,0,1,0-3-3A3,3,0,0,0,8.92,14Zm0-4.6A1.6,1.6,0,1,1,7.33,11,1.6,1.6,0,0,1,8.92,9.41Z'></path>
<path d='M22.78,15.37l-5.4,5.4-4-4a1,1,0,0,0-1.41,0L5.92,22.9v2.83l6.79-6.79L16,22.18l-3.75,3.75H15l8.45-8.45L30,24V21.18l-5.81-5.81A1,1,0,0,0,22.78,15.37Z'></path>
</svg></a><div><div class='linkTitle'><a href='https://www.wired.com/story/signal-earn-it-ransomware-security-news/'>Signal Threatens to Leave the US If EARN IT Act Passes</a></div>(wired.com) posted by <a href='https://www.reddit.com/user/MyNameIsGriffon'>MyNameIsGriffon</a> in <a href='https://www.reddit.com/r/technology'>technology</a> 5738 points & 258 <a href='https://www.reddit.com/r/technology/comments/fz51sp/signal_threatens_to_leave_the_us_if_earn_it_act/'>comments</a></div></li>
<li><a href='https://www.reddit.com/r/askscience/comments/fz6nc4/since_photons_have_no_charge_what_force_or/'><svg version='1.1' viewBox='-34 -12 104 64' preserveAspectRatio='xMidYMid slice' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
<title>text link thumbnail</title>
<path d='M12.19,8.84a1.45,1.45,0,0,0-1.4-1h-.12a1.46,1.46,0,0,0-1.42,1L1.14,26.56a1.29,1.29,0,0,0-.14.59,1,1,0,0,0,1,1,1.12,1.12,0,0,0,1.08-.77l2.08-4.65h11l2.08,4.59a1.24,1.24,0,0,0,1.12.83,1.08,1.08,0,0,0,1.08-1.08,1.64,1.64,0,0,0-.14-.57ZM6.08,20.71l4.59-10.22,4.6,10.22Z'>
</path>
<path d='M32.24,14.78A6.35,6.35,0,0,0,27.6,13.2a11.36,11.36,0,0,0-4.7,1,1,1,0,0,0-.58.89,1,1,0,0,0,.94.92,1.23,1.23,0,0,0,.39-.08,8.87,8.87,0,0,1,3.72-.81c2.7,0,4.28,1.33,4.28,3.92v.5a15.29,15.29,0,0,0-4.42-.61c-3.64,0-6.14,1.61-6.14,4.64v.05c0,2.95,2.7,4.48,5.37,4.48a6.29,6.29,0,0,0,5.19-2.48V26.9a1,1,0,0,0,1,1,1,1,0,0,0,1-1.06V19A5.71,5.71,0,0,0,32.24,14.78Zm-.56,7.7c0,2.28-2.17,3.89-4.81,3.89-1.94,0-3.61-1.06-3.61-2.86v-.06c0-1.8,1.5-3,4.2-3a15.2,15.2,0,0,1,4.22.61Z'>
</path>
</svg></a><div><div class='linkTitle'><a href='https://www.reddit.com/r/askscience/comments/fz6nc4/since_photons_have_no_charge_what_force_or/'>Since photons have no charge, what force or mechanism causes them to deflect/scatter when coming into contact with matter?</a></div>(reddit.com) posted by <a href='https://www.reddit.com/user/Ned_Wells'>Ned_Wells</a> in <a href='https://www.reddit.com/r/askscience'>askscience</a> 2685 points & 193 <a href='https://www.reddit.com/r/askscience/comments/fz6nc4/since_photons_have_no_charge_what_force_or/'>comments</a></div></li>
<li><a href='https://i.redd.it/lmgz1uj2w3s41.jpg'><img src='https://b.thumbs.redditmedia.com/IA6hX53k9_xb-NEsazMfcvJ_v-tyxuZ0mB9fWqov4JI.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://i.redd.it/lmgz1uj2w3s41.jpg'>"50'th Anniversary of Apollo 13" Thread</a></div>(i.redd.it) posted by <a href='https://www.reddit.com/user/nkkn_NK_Karthikeyan'>nkkn_NK_Karthikeyan</a> in <a href='https://www.reddit.com/r/nasa'>nasa</a> 2612 points & 61 <a href='https://www.reddit.com/r/nasa/comments/fyvdbc/50th_anniversary_of_apollo_13_thread/'>comments</a></div></li>
<li><a href='https://www.commondreams.org/news/2020/04/11/theyre-crooks-coal-industry-aims-exploit-coronavirus-crisis-cut-payments-miners'><img src='https://b.thumbs.redditmedia.com/egqtX39I4KfMJ80O7lbkX99Q8RpxMGyVjjlog36NHeA.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://www.commondreams.org/news/2020/04/11/theyre-crooks-coal-industry-aims-exploit-coronavirus-crisis-cut-payments-miners'>"They're Crooks": Coal Industry Aims to Exploit Coronavirus Crisis to Cut Payments to Miners With Black Lung - Coal companies are "going to try to use this virus thing to stop paying benefits," warned one retired miner suffering from black lung.</a></div>(commondreams.org) posted by <a href='https://www.reddit.com/user/iyoiiiu'>iyoiiiu</a> in <a href='https://www.reddit.com/r/environment'>environment</a> 1823 points & 44 <a href='https://www.reddit.com/r/environment/comments/fz3oc3/theyre_crooks_coal_industry_aims_to_exploit/'>comments</a></div></li>
<li><a href='https://i.redd.it/zl1izjdq43s41.jpg'><img src='https://b.thumbs.redditmedia.com/lV0ZOq04rEW44JPweX88zq3rngPSuzk_Vvpg-9kC4ps.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://i.redd.it/zl1izjdq43s41.jpg'>Recently discovered fossils of Megaraptora (specifically rapator, australovenator, “lightning claw) suggest that Australia was the original country to house these medium sized predators, as far back as 120 million years ago.</a></div>(i.redd.it) posted by <a href='https://www.reddit.com/user/tgood139'>tgood139</a> in <a href='https://www.reddit.com/r/Naturewasmetal'>Naturewasmetal</a> 1690 points & 61 <a href='https://www.reddit.com/r/Naturewasmetal/comments/fyswjb/recently_discovered_fossils_of_megaraptora/'>comments</a></div></li>
<h2>Maker</h2>
<li><a href='https://i.redd.it/csvrhh7yu6s41.jpg'><img src='https://b.thumbs.redditmedia.com/L772DTkPym5KLAXlgsQvlLayHtXlKiWwfRhuadgFgiE.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://i.redd.it/csvrhh7yu6s41.jpg'>I designed a clock that shows the current day of the week</a></div>(i.redd.it) posted by <a href='https://www.reddit.com/user/Daverant'>Daverant</a> in <a href='https://www.reddit.com/r/3Dprinting'>3Dprinting</a> 3697 points & 167 <a href='https://www.reddit.com/r/3Dprinting/comments/fz4zjg/i_designed_a_clock_that_shows_the_current_day_of/'>comments</a></div></li>
<li><a href='https://www.reddit.com/r/buildapc/comments/fytprr/i_had_a_defective_fan_on_my_gpu_for_over_5_years/'><svg version='1.1' viewBox='-34 -12 104 64' preserveAspectRatio='xMidYMid slice' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
<title>text link thumbnail</title>
<path d='M12.19,8.84a1.45,1.45,0,0,0-1.4-1h-.12a1.46,1.46,0,0,0-1.42,1L1.14,26.56a1.29,1.29,0,0,0-.14.59,1,1,0,0,0,1,1,1.12,1.12,0,0,0,1.08-.77l2.08-4.65h11l2.08,4.59a1.24,1.24,0,0,0,1.12.83,1.08,1.08,0,0,0,1.08-1.08,1.64,1.64,0,0,0-.14-.57ZM6.08,20.71l4.59-10.22,4.6,10.22Z'>
</path>
<path d='M32.24,14.78A6.35,6.35,0,0,0,27.6,13.2a11.36,11.36,0,0,0-4.7,1,1,1,0,0,0-.58.89,1,1,0,0,0,.94.92,1.23,1.23,0,0,0,.39-.08,8.87,8.87,0,0,1,3.72-.81c2.7,0,4.28,1.33,4.28,3.92v.5a15.29,15.29,0,0,0-4.42-.61c-3.64,0-6.14,1.61-6.14,4.64v.05c0,2.95,2.7,4.48,5.37,4.48a6.29,6.29,0,0,0,5.19-2.48V26.9a1,1,0,0,0,1,1,1,1,0,0,0,1-1.06V19A5.71,5.71,0,0,0,32.24,14.78Zm-.56,7.7c0,2.28-2.17,3.89-4.81,3.89-1.94,0-3.61-1.06-3.61-2.86v-.06c0-1.8,1.5-3,4.2-3a15.2,15.2,0,0,1,4.22.61Z'>
</path>
</svg></a><div><div class='linkTitle'><a href='https://www.reddit.com/r/buildapc/comments/fytprr/i_had_a_defective_fan_on_my_gpu_for_over_5_years/'>I had a defective fan on my GPU for over 5 years, and I just found out today.</a></div>(reddit.com) posted by <a href='https://www.reddit.com/user/CrazyAsian'>CrazyAsian</a> in <a href='https://www.reddit.com/r/buildapc'>buildapc</a> 2177 points & 166 <a href='https://www.reddit.com/r/buildapc/comments/fytprr/i_had_a_defective_fan_on_my_gpu_for_over_5_years/'>comments</a></div></li>
<li><a href='https://www.inputmag.com/tech/ibm-will-offer-free-cobol-training-to-address-overloaded-unemployment-systems'><svg version='1.1' viewBox='-34 -14 104 64' preserveAspectRatio='xMidYMid meet' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
<title>link thumbnail</title>
<path d='M32,4H4A2,2,0,0,0,2,6V30a2,2,0,0,0,2,2H32a2,2,0,0,0,2-2V6A2,2,0,0,0,32,4ZM4,30V6H32V30Z'></path>
<path d='M8.92,14a3,3,0,1,0-3-3A3,3,0,0,0,8.92,14Zm0-4.6A1.6,1.6,0,1,1,7.33,11,1.6,1.6,0,0,1,8.92,9.41Z'></path>
<path d='M22.78,15.37l-5.4,5.4-4-4a1,1,0,0,0-1.41,0L5.92,22.9v2.83l6.79-6.79L16,22.18l-3.75,3.75H15l8.45-8.45L30,24V21.18l-5.81-5.81A1,1,0,0,0,22.78,15.37Z'></path>
</svg></a><div><div class='linkTitle'><a href='https://www.inputmag.com/tech/ibm-will-offer-free-cobol-training-to-address-overloaded-unemployment-systems'>IBM will offer a course on COBOL next week</a></div>(inputmag.com) posted by <a href='https://www.reddit.com/user/Erglewalken'>Erglewalken</a> in <a href='https://www.reddit.com/r/programming'>programming</a> 1599 points & 404 <a href='https://www.reddit.com/r/programming/comments/fysxgs/ibm_will_offer_a_course_on_cobol_next_week/'>comments</a></div></li>
<li><a href='https://i.redd.it/xm7tw0pos7s41.jpg'><img src='https://b.thumbs.redditmedia.com/hzaXeHfXpcULT00VMSLoQEvczeKvXgSwmiIm2lQrSMM.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://i.redd.it/xm7tw0pos7s41.jpg'>It ain't pretty, but it's convenient.</a></div>(i.redd.it) posted by <a href='https://www.reddit.com/user/jimmyjames325'>jimmyjames325</a> in <a href='https://www.reddit.com/r/functionalprint'>functionalprint</a> 1451 points & 79 <a href='https://www.reddit.com/r/functionalprint/comments/fz8ose/it_aint_pretty_but_its_convenient/'>comments</a></div></li>
<h2>Etcetera</h2>
<li><a href='https://tvchart.benmiz.com/'><img src='https://b.thumbs.redditmedia.com/qbvOrC9BNO9dhAWWB0F07FlOFSFBbuQicJAxYfmHHCQ.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://tvchart.benmiz.com/'>A web app that lets you see a graph of any TV show's IMDb ratings, by episode. [OC]</a></div>(tvchart.benmiz.com) posted by <a href='https://www.reddit.com/user/Jaja321'>Jaja321</a> in <a href='https://www.reddit.com/r/InternetIsBeautiful'>InternetIsBeautiful</a> 2379 points & 140 <a href='https://www.reddit.com/r/InternetIsBeautiful/comments/fz8v9g/a_web_app_that_lets_you_see_a_graph_of_any_tv/'>comments</a></div></li>
<li><a href='https://i.redd.it/owhx6m2n78s41.png'><img src='https://a.thumbs.redditmedia.com/4tLJ95MU5c90AgPMSzOGqG5Vfag59F795en6Pz9O5G8.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://i.redd.it/owhx6m2n78s41.png'>Some icons for a Discord channel</a></div>(i.redd.it) posted by <a href='https://www.reddit.com/user/PixelDanc3r'>PixelDanc3r</a> in <a href='https://www.reddit.com/r/PixelArt'>PixelArt</a> 1784 points & 28 <a href='https://www.reddit.com/r/PixelArt/comments/fzbb3o/some_icons_for_a_discord_channel/'>comments</a></div></li>
<li><a href='https://www.reddit.com/r/scifi/comments/fz34si/if_sega_and_fox_had_half_a_brain_they_would_make/'><svg version='1.1' viewBox='-34 -12 104 64' preserveAspectRatio='xMidYMid slice' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink'>
<title>text link thumbnail</title>
<path d='M12.19,8.84a1.45,1.45,0,0,0-1.4-1h-.12a1.46,1.46,0,0,0-1.42,1L1.14,26.56a1.29,1.29,0,0,0-.14.59,1,1,0,0,0,1,1,1.12,1.12,0,0,0,1.08-.77l2.08-4.65h11l2.08,4.59a1.24,1.24,0,0,0,1.12.83,1.08,1.08,0,0,0,1.08-1.08,1.64,1.64,0,0,0-.14-.57ZM6.08,20.71l4.59-10.22,4.6,10.22Z'>
</path>
<path d='M32.24,14.78A6.35,6.35,0,0,0,27.6,13.2a11.36,11.36,0,0,0-4.7,1,1,1,0,0,0-.58.89,1,1,0,0,0,.94.92,1.23,1.23,0,0,0,.39-.08,8.87,8.87,0,0,1,3.72-.81c2.7,0,4.28,1.33,4.28,3.92v.5a15.29,15.29,0,0,0-4.42-.61c-3.64,0-6.14,1.61-6.14,4.64v.05c0,2.95,2.7,4.48,5.37,4.48a6.29,6.29,0,0,0,5.19-2.48V26.9a1,1,0,0,0,1,1,1,1,0,0,0,1-1.06V19A5.71,5.71,0,0,0,32.24,14.78Zm-.56,7.7c0,2.28-2.17,3.89-4.81,3.89-1.94,0-3.61-1.06-3.61-2.86v-.06c0-1.8,1.5-3,4.2-3a15.2,15.2,0,0,1,4.22.61Z'>
</path>
</svg></a><div><div class='linkTitle'><a href='https://www.reddit.com/r/scifi/comments/fz34si/if_sega_and_fox_had_half_a_brain_they_would_make/'>If Sega and Fox had half a brain they would make Alien Isolation 2 and set it in Hadley's Hope</a></div>(reddit.com) posted by <a href='https://www.reddit.com/user/the_nin_collector'>the_nin_collector</a> in <a href='https://www.reddit.com/r/scifi'>scifi</a> 407 points & 58 <a href='https://www.reddit.com/r/scifi/comments/fz34si/if_sega_and_fox_had_half_a_brain_they_would_make/'>comments</a></div></li>
<li><a href='https://i.redd.it/lno9oy6da6s41.jpg'><img src='https://a.thumbs.redditmedia.com/FV2D-fbUppqK6EgclXXy3UDZ3dMi4adIbA4NVOv_gM0.jpg' alt='link thumbnail'></a><div><div class='linkTitle'><a href='https://i.redd.it/lno9oy6da6s41.jpg'>Cyber Bunny - Rubinkowski</a></div>(i.redd.it) posted by <a href='https://www.reddit.com/user/Rubinkowski'>Rubinkowski</a> in <a href='https://www.reddit.com/r/alternativeart'>alternativeart</a> 331 points & 3 <a href='https://www.reddit.com/r/alternativeart/comments/fz2xl7/cyber_bunny_rubinkowski/'>comments</a></div></li>
</ul>
*Source: doc/CodeStyle.md from Sayanta66/cortx (Apache-2.0)*

# Code Style Guide
We are excited that you are interested in contributing to the CORTX project! Since we are open source and have many contributors, it is very important that both our code (and the very necessary comments describing your code) follow consistent standards. In the table below please find all the Code Style Guides that we use for each language as well as which repositories use them. Thanks!
| **Language** | **Repository Names** |
|- |- |
|**[Bash](https://github.com/bahamas10/bash-style-guide)** | cortx-s3-server|
|**[C](https://github.com/Seagate/cortx-motr/blob/dev/doc/coding-style.md)**| cortx-motr</br>cortx-posix</br>cortx-monitor</br> |
|**[C++](https://google.github.io/styleguide/cppguide.html)** | cortx-s3-server|
|**[Java](https://google.github.io/styleguide/javaguide.html)** |cortx-s3-server|
|**[Python](https://google.github.io/styleguide/pyguide.html)**| cortx-s3-server</br> cortx-ha</br>cortx-posix</br>cortx-monitor</br> |
|**[Shell](https://google.github.io/styleguide/shellguide.html)**| cortx-ha</br>cortx-provisioner</br> |
| **YAML:**</br></p>**[Style](https://docs.saltstack.com/en/latest/topics/development/conventions/style.html)**</br>**[Formulae](https://docs.saltstack.com/en/latest/topics/development/conventions/formulas.html)**</br> | cortx-provisioner|
:warning: **Exceptions:**
Some repositories have their own style guides, please refer to these repository-specific coding style guides.
- **[Bash, Python, and C](https://github.com/Seagate/cortx-hare/tree/dev/rfc/8)** - cortx-hare.
- **TODO:** Add links for cortx-manager and cortx-management-portal coding style.
:page_with_curl: **Notes:**
The CORTX project is inclusive, and we have made it a priority to keep the project as accessible as possible by preferring literal and direct terminology over metaphorical language, slang, or other shorthand wherever possible. For example:
- Use *Allowlist* instead of *Whitelist*.
- Replace the *Master and Slave* terminology, use terminology that more precisely reflects the relationship such as *Primary and Secondary* or *Main and Dev*.
:page_with_curl: **Using Third Party Software**
To ensure that CORTX software remains available under our current open source licenses, please do not _copy-paste_ any software from other software repositories or from websites such as stackoverflow into any CORTX software.
*Source: docs/Enable-Configure-SNMP.ps1.md from MSAdministrator/PoshCodeMarkDown (MIT)*

---
Author: st3v3o
Publisher:
Copyright:
Email:
Version: 0.1
Encoding: ascii
License: cc0
PoshCode ID: 6351
Published Date: 2016-05-19t13
Archived Date: 2016-05-24t07
---
# Enable/Configure SNMP
## Description
Enables SNMP via `Add-WindowsFeature` (if it is not already enabled).
## Comments
## Usage
## TODO
## script

## Code

```powershell
# Managers and community strings to configure; replace with your own values.
$pmanagers = "ADD YOUR MANAGER(s)"
$commstring = "ADD YOUR COMM STRING"

Import-Module ServerManager

# Install the SNMP service if it is not already present.
$check = Get-WindowsFeature | Where-Object {$_.Name -eq "SNMP-Services"}
If ($check.Installed -ne "True") {
    Add-WindowsFeature SNMP-Services | Out-Null
    # Re-query so a fresh install is recognized below.
    $check = Get-WindowsFeature | Where-Object {$_.Name -eq "SNMP-Services"}
}

If ($check.Installed -eq "True") {
    # Always permit the local host, then register each permitted manager.
    reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters\PermittedManagers" /v 1 /t REG_SZ /d localhost /f | Out-Null
    $i = 2
    Foreach ($manager in $pmanagers) {
        reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters\PermittedManagers" /v $i /t REG_SZ /d $manager /f | Out-Null
        $i++
    }
    # Register each community string with read-only (4) rights.
    Foreach ($string in $commstring) {
        reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\SNMP\Parameters\ValidCommunities" /v $string /t REG_DWORD /d 4 /f | Out-Null
    }
} Else {
    Write-Host "Error: SNMP Services Not Installed"
}
```
*Source: _team/1-stephanie.md from kloosterpm/bdsi.utwente.nl (MIT)*

---
title: "Stéphanie van den Berg"
image: "/assets/images/team/Stephanie-thumbnail.png"
jobtitle: "Team Leader"
email: "stephanie.vandenberg@utwente.nl"
linkedinurl: "https://www.linkedin.com/in/stéphanie-van-den-berg-3038765/"
expertise:
- research methodology
- data analytics
- machine learning
- psychometrics
- bayesian statistics
- time-intensive data modelling
- learning R
- power analysis
- measuring behavioural traits
- research grant applications
- quantitative genetic models
---
_WHO:_ I am Stéphanie van den Berg and I am an associate professor in the area of research methodology and data analytics. I have a PhD in psychology. My research interests are in the field of machine learning, psychometrics, Bayesian statistics and modelling time-intensive data. I teach courses in data science and statistics at the University of Twente.
_ROLE:_ I am the head of BDSI.
_NEED HELP:_ I can help you with statistical modelling, learning and using R, applying statistical/machine learning, power analysis, measuring behavioural traits, discussing the research methodology of your project and writing applications for research grants. I also have extensive experience with quantitative genetic models and Bayesian data analysis.
*Source: README.md from notyetawizard/vindinium-love (MIT)*

# vindinium-love
A Vindinium client on the LÖVE framework
Not currently in a usable state.
Plans to:
- Include a light graphics system so that a browser is not needed to watch the game.
- Make it easy to load different bots and strategies on the fly or by cli
- Include mode, turn, map setting on the fly or by cli
*Source: README.md from alexandernst/angular-img-dl (MIT)*

# AngularJS Image Downloader [![Build Status](https://travis-ci.org/alexandernst/angular-img-dl.svg?branch=master)](https://travis-ci.org/alexandernst/angular-img-dl)
Angular directive for downloading images as blobs.
Especially useful when used with [ng-file-upload](https://github.com/danialfarid/ng-file-upload).
## Attributes
* `angular-img-dl-url` - URL of the image that should be downloaded
* `angular-img-dl-model` - The scope variable where the image blob will be stored
## Example usage
<div ngf-background="blob"
angular-img-dl angular-img-dl-url="{{ demourl }}" angular-img-dl-model="blob">
</div>
## Demo
[Plnkr demo](http://plnkr.co/edit/53Dum3gQfKDUu8xkPsNq?p=preview)
*Source: docs/integration-services/expressions/functions-ssis-expression.md from jaredmoo/sql-docs (CC-BY-4.0, MIT)*

---
title: "Functions (SSIS Expression) | Microsoft Docs"
ms.custom: ""
ms.date: "03/01/2017"
ms.prod: sql
ms.prod_service: "integration-services"
ms.service: ""
ms.component: "expressions"
ms.reviewer: ""
ms.suite: "sql"
ms.technology:
- "integration-services"
ms.tgt_pltfrm: ""
ms.topic: conceptual
helpviewer_keywords:
- "functions [Integration Services]"
- "expressions [Integration Services], functions"
- "string functions"
- "SQL Server Integration Services, functions"
- "SSIS, functions"
ms.assetid: e9a41a31-94f4-46a4-b737-c707dd59ce48
caps.latest.revision: 36
author: "douglaslMS"
ms.author: "douglasl"
manager: "craigg"
---
# Functions (SSIS Expression)
The expression language includes a set of functions for use in expressions. An expression can use a single function, but typically an expression combines functions with operators and uses multiple functions.
The functions can be categorized into the following groups:
- Mathematical functions that perform calculations based on numeric input values provided as parameters to the functions and return numeric values.
- String functions that perform operations on string or hexadecimal input values and return a string or numeric value.
- Date and time functions that perform operations on date and time values and return string, numeric, or date and time values.
- System functions that return information about an expression.
The expression language provides the following mathematical functions.
|Function|Description|
|--------------|-----------------|
|[ABS (SSIS Expression)](../../integration-services/expressions/abs-ssis-expression.md)|Returns the absolute, positive value of a numeric expression.|
|[EXP (SSIS Expression)](../../integration-services/expressions/exp-ssis-expression.md)|Returns the exponent to base e of the specified expression.|
|[CEILING (SSIS Expression)](../../integration-services/expressions/ceiling-ssis-expression.md)|Returns the smallest integer that is greater than or equal to a numeric expression.|
|[FLOOR (SSIS Expression)](../../integration-services/expressions/floor-ssis-expression.md)|Returns the largest integer that is less than or equal to a numeric expression.|
|[LN (SSIS Expression)](../../integration-services/expressions/ln-ssis-expression.md)|Returns the natural logarithm of a numeric expression.|
|[LOG (SSIS Expression)](../../integration-services/expressions/log-ssis-expression.md)|Returns the base-10 logarithm of a numeric expression.|
|[POWER (SSIS Expression)](../../integration-services/expressions/power-ssis-expression.md)|Returns the result of raising a numeric expression to a power.|
|[ROUND (SSIS Expression)](../../integration-services/expressions/round-ssis-expression.md)|Returns a numeric expression that is rounded to the specified length or precision.|
|[SIGN (SSIS Expression)](../../integration-services/expressions/sign-ssis-expression.md)|Returns the positive (+), negative (-), or zero (0) sign of a numeric expression.|
|[SQUARE (SSIS Expression)](../../integration-services/expressions/square-ssis-expression.md)|Returns the square of a numeric expression.|
|[SQRT (SSIS Expression)](../../integration-services/expressions/sqrt-ssis-expression.md)|Returns the square root of a numeric expression.|
The expression evaluator provides the following string functions.
|Function|Description|
|--------------|-----------------|
|[CODEPOINT (SSIS Expression)](../../integration-services/expressions/codepoint-ssis-expression.md)|Returns the Unicode code value of the leftmost character of a character expression.|
|[FINDSTRING (SSIS Expression)](../../integration-services/expressions/findstring-ssis-expression.md)|Returns the one-based index of the specified occurrence of a character string within an expression.|
|[HEX (SSIS Expression)](../../integration-services/expressions/hex-ssis-expression.md)|Returns a string representing the hexadecimal value of an integer.|
|[LEN (SSIS Expression)](../../integration-services/expressions/len-ssis-expression.md)|Returns the number of characters in a character expression.|
|[LEFT (SSIS Expression)](../../integration-services/expressions/left-ssis-expression.md)|Returns the specified number of characters from the leftmost portion of the given character expression.|
|[LOWER (SSIS Expression)](../../integration-services/expressions/lower-ssis-expression.md)|Returns a character expression after converting uppercase characters to lowercase characters.|
|[LTRIM (SSIS Expression)](../../integration-services/expressions/ltrim-ssis-expression.md)|Returns a character expression after removing leading spaces.|
|[REPLACE (SSIS Expression)](../../integration-services/expressions/replace-ssis-expression.md)|Returns a character expression after replacing a string within the expression with either a different string or an empty string.|
|[REPLICATE (SSIS Expression)](../../integration-services/expressions/replicate-ssis-expression.md)|Returns a character expression, replicated a specified number of times.|
|[REVERSE (SSIS Expression)](../../integration-services/expressions/reverse-ssis-expression.md)|Returns a character expression in reverse order.|
|[RIGHT (SSIS Expression)](../../integration-services/expressions/right-ssis-expression.md)|Returns the specified number of characters from the rightmost portion of the given character expression.|
|[RTRIM (SSIS Expression)](../../integration-services/expressions/rtrim-ssis-expression.md)|Returns a character expression after removing trailing spaces.|
|[SUBSTRING (SSIS Expression)](../../integration-services/expressions/substring-ssis-expression.md)|Returns a part of a character expression.|
|[TRIM (SSIS Expression)](../../integration-services/expressions/trim-ssis-expression.md)|Returns a character expression after removing leading and trailing spaces.|
|[UPPER (SSIS Expression)](../../integration-services/expressions/upper-ssis-expression.md)|Returns a character expression after converting lowercase characters to uppercase characters.|
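These functions are typically chained inside a single expression. For example, the following expression (an illustrative sample with hypothetical column names, not taken from this reference) builds an initial-plus-surname string from two trimmed columns:

```
UPPER(LEFT(TRIM(FirstName), 1)) + ". " + TRIM(LastName)
```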
The expression evaluator provides the following date and time functions.
|Function|Description|
|--------------|-----------------|
|[DATEADD (SSIS Expression)](../../integration-services/expressions/dateadd-ssis-expression.md)|Returns a new DT_DBTIMESTAMP value by adding a date or time interval to a specified date.|
|[DATEDIFF (SSIS Expression)](../../integration-services/expressions/datediff-ssis-expression.md)|Returns the number of date and time boundaries crossed between two specified dates.|
|[DATEPART (SSIS Expression)](../../integration-services/expressions/datepart-ssis-expression.md)|Returns an integer representing a datepart of a date.|
|[DAY (SSIS Expression)](../../integration-services/expressions/day-ssis-expression.md)|Returns an integer that represents the day of the specified date.|
|[GETDATE (SSIS Expression)](../../integration-services/expressions/getdate-ssis-expression.md)|Returns the current date of the system.|
|[GETUTCDATE (SSIS Expression)](../../integration-services/expressions/getutcdate-ssis-expression.md)|Returns the current date of the system in UTC time (Universal Time Coordinate or Greenwich Mean Time).|
|[MONTH (SSIS Expression)](../../integration-services/expressions/month-ssis-expression.md)|Returns an integer that represents the month of the specified date.|
|[YEAR (SSIS Expression)](../../integration-services/expressions/year-ssis-expression.md)|Returns an integer that represents the year of the specified date.|

The expression evaluator provides the following null functions.

|Function|Description|
|--------------|-----------------|
|[ISNULL (SSIS Expression)](../../integration-services/expressions/isnull-ssis-expression.md)|Returns a Boolean result based on whether an expression is null.|
|[NULL (SSIS Expression)](../../integration-services/expressions/null-ssis-expression.md)|Returns a null value of a requested data type.|

Expression names are shown in uppercase characters, but they are not case-sensitive. For example, using "null" works as well as using "NULL".
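Several of these functions are often nested in a single expression, for example in a Derived Column transformation. The snippets below are illustrative sketches; the column names (`FirstName`, `MiddleName`, `OrderDate`) are invented for the example and do not come from this reference:

```
REPLACE(UPPER(TRIM(FirstName)), " ", "_")
DATEDIFF("dd", OrderDate, GETDATE())
ISNULL(MiddleName) ? "" : MiddleName
```

The first expression normalizes a name by trimming, uppercasing, and replacing spaces; the second returns the number of days elapsed since `OrderDate`; the third substitutes an empty string when `MiddleName` is null.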
## See Also

- [Operators (SSIS Expression)](../../integration-services/expressions/operators-ssis-expression.md)
- [Examples of Advanced Integration Services Expressions](../../integration-services/expressions/examples-of-advanced-integration-services-expressions.md)
- [Integration Services (SSIS) Expressions](../../integration-services/expressions/integration-services-ssis-expressions.md)
id: 116
title: Tournesol é l'abeille
categories: Fleurs
w: 49
h: 63
location: null
note: null
file: null
year: '1984'
year_start: 1984
year_end: null
image: null
---
| 10.6875 | 28 | 0.707602 | eng_Latn | 0.231875 |
6f417e9d9e354629d4644d016f849c21819a412a | 440 | md | Markdown | js/crowdemotion-api-client-js/docs/Stats.md | CrowdEmotion/crowdemotion-api-clients-examples | 9e4bd38279399e5694cf3cec6cc7fb0b3149bc39 | [
"MIT"
] | 1 | 2018-06-05T12:46:22.000Z | 2018-06-05T12:46:22.000Z | js/crowdemotion-api-client-js/docs/Stats.md | CrowdEmotion/crowdemotion-api-clients-examples | 9e4bd38279399e5694cf3cec6cc7fb0b3149bc39 | [
"MIT"
] | null | null | null | js/crowdemotion-api-client-js/docs/Stats.md | CrowdEmotion/crowdemotion-api-clients-examples | 9e4bd38279399e5694cf3cec6cc7fb0b3149bc39 | [
"MIT"
] | null | null | null | # CrowdemotionApiClientJs.Stats
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**media** | **Integer** | |
**visited** | **Integer** | |
**started** | **Integer** | |
**partial** | **Integer** | |
**completes** | **Integer** | |
**processed** | **Integer** | |
**failed** | **Integer** | |
**unprocessed** | **Integer** | |
**lastUpdated** | **String** | |
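The property table above maps to a plain JavaScript object returned by the client. The sketch below builds a sample `Stats` object by hand (the values are illustrative, not real API output) and derives a completion rate from it; `completionRate` is a helper invented for this example, not part of the client library:

```javascript
// A sample Stats object matching the Properties table above.
// These values are illustrative, not real API output.
const stats = {
  media: 3,
  visited: 120,
  started: 90,
  partial: 15,
  completes: 70,
  processed: 65,
  failed: 5,
  unprocessed: 0,
  lastUpdated: '2016-09-01T12:00:00Z',
};

// Hypothetical helper: fraction of started responses that completed.
function completionRate(s) {
  if (s.started === 0) return 0;
  return s.completes / s.started;
}

console.log(completionRate(stats)); // → 0.7777777777777778
```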
title : "What you should really expect from Google’s custom Pixel 6 processor"
layout: post
tags: tutorial labnol
post_inspiration: https://www.androidauthority.com/google-pixel-6-processor-1215340/
image: "https://sergio.afanou.com/assets/images/image-midres-31.jpg"
---
<p><html><body><img class="size-large wp-image-1175102 noname aligncenter aa-img" title="Google Pixel 5 Grey Back" src="https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-1200x675.jpg" alt="Google Pixel 5 Grey Back" width="1200" height="675" data-attachment-id="1175102" srcset="https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-1200x675.jpg 1200w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-300x170.jpg 300w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-768x432.jpg 768w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-16x9.jpg 16w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-32x18.jpg 32w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-28x16.jpg 28w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-56x32.jpg 56w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-64x36.jpg 64w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-712x400.jpg 712w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-1000x563.jpg 1000w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-792x446.jpg 792w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-1280x720.jpg 1280w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-840x472.jpg 840w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-1340x754.jpg 1340w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-770x433.jpg 770w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-356x200.jpg 356w, 
https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back-675x380.jpg 675w, https://cdn57.androidauthority.net/wp-content/uploads/2020/11/Google-Pixel-5-Grey-Back.jpg 1920w" sizes="(max-width: 1200px) 100vw, 1200px" /></p>
<div class="aa-img-source-credit">
<div class="aa-img-source-and-credit full">
<div class="aa-img-credit text-right"><span>Credit: </span>Robert Triggs / Android Authority</div>
</div>
</div>
<div class="aa_opinions_shortcode_wrapper"><a class="overlay-link" id="overlay-link-bypass" href="https://www.androidauthority.com/opinions/"></a><div class="aa_opinions_shortcode_image"><img alt='' src='https://secure.gravatar.com/avatar/5bffc0be35bcffd28f07b2f65779f8d9?s=200&d=mm&r=g' srcset='https://secure.gravatar.com/avatar/5bffc0be35bcffd28f07b2f65779f8d9?s=400&d=mm&r=g 2x' class='avatar avatar-200 photo' height='200' width='200' /></div><div class="aa_opinions_shortcode_text"><div class="aa_opinion_by">Opinion post by</div><div class="aa_opinion_author">Robert Triggs</div></div></div>
<p>Rumor has it that the Google Pixel 6 will debut Google’s long-rumored custom smartphone silicon, codenamed “<a href="https://www.androidauthority.com/google-pixel-6-whitechapel-1214847/" target="_blank" rel="noopener">Whitechapel</a>.” In other words, the Pixel 6 appears set to ditch Qualcomm’s Snapdragon brand for something built in-house with the help of Samsung’s System Large-scale Integration (SLSI) division. Those are the folks at Samsung that integrate CPU cores, memory, modems, and other integrated circuit components into a single silicon chip (SoC).</p>
<p>Pundits have been quick to peg the story as Google finally taking on Apple’s processing dominance. Meanwhile, Pixel fans are already dreaming up the perfect silicon features to fit their perfect phone. As with all potentially major industry shifts, it’s important to take stock of the big picture, evaluate what’s possible, and reign in those lofty initial expectations.</p>
<p>There are definitely some intriguing possibilities for a Google-led mobile application processor. However, I’m certainly not expecting anything too groundbreaking from the Whitechapel SoC, especially in its first generation.</p>
<p style="text-align: center;"><strong>Read more:</strong> <a href="https://www.androidauthority.com/google-pixel-6-1207778/" target="_blank" rel="noopener">Google Pixel 6: What we want to see</a></p>
<h2>Google isn’t gearing up to beat Apple’s custom CPUs</h2>
<p style="text-align: left;">Let’s start with the “bad” news: Google’s Whitechapel SoC will not be more powerful than the <a href="https://www.androidauthority.com/qualcomm-snapdragon-888-1179156/" target="_blank" rel="noopener">Qualcomm Snapdragon 888</a> or Apple’s A14 Bionic. At least in the CPU and GPU departments.</p>
<p>How can I make such a bold claim without testing the chip? Simple. Google will be picking most of its parts off the shelf and therefore the bulk of its core processing components will almost certainly arrive in the form of familiar Arm Cortex and Mali components.</p>
<p>We know this because Samsung <a href="https://www.androidauthority.com/samsung-exynos-custom-cpu-1050518/">ditched its Mongoose CPUs</a> and Google doesn’t have a dedicated mobile CPU development team to build a competitive Arm-based core of its own. Although Google has a few chip-design staff onboard, these are spread around the company. Note that <a href="https://www.androidauthority.com/qualcomm-nuvia-explained-1192441/" target="_blank" rel="noopener">Qualcomm recently bought Nuvia</a> to help it build custom Arm CPUs. Google hasn’t made anything like this size of investment to help catch Apple.</p>
<div class="clear"></div><blockquote class="quote_new center" style="color: #00D49F; border-color: #00D49F;"><p>There's no indication Google is going down the custom CPU and GPU silicon route.</p></blockquote><div class="clear"></div>
<p><a href="https://www.xda-developers.com/google-pixel-6-custom-system-on-chip/#update1" target="_blank" rel="noopener">Follow-up reports</a> note that Google is opting for a tried and tested tri-cluster CPU setup. This could be a familiar Cortex-X1, A78, and A55 affair, but could equally comprise two A78 or A78 and A76 performance tiers if cost and area efficiency are more important to Google’s design objectives.</p>
<p>Similarly, Google has no in-house mobile graphics division. It’s highly unlikely that Samsung will share the fruits of its <a href="https://www.androidauthority.com/samsung-exynos-amd-gpu-2022-1192046/" target="_blank" rel="noopener">AMD RDNA graphics partnership</a> before it debuts in its own Exynos chipset in 2022. Instead, an off-the-shelf Arm Mali-G78 or newer seems the most likely, but Mali has historically underperformed against Qualcomm’s Adreno. An Imagination Technologies GPU is also possible.</p>
<p>The bottom line is that it’s highly unlikely Google will offer anything game-changing in the key day-to-day performance departments.</p>
<h2>5G modem is a big unknown</h2>
<p><img class="aligncenter wp-image-1084430 noname aa-img" title="Qualcomm Snapdragon X60 chip" src="https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip.jpg" alt="Qualcomm Snapdragon X60 chip" width="1200" height="684" data-attachment-id="1084430" srcset="https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip.jpg 846w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-300x170.jpg 300w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-768x438.jpg 768w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-16x9.jpg 16w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-32x18.jpg 32w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-28x16.jpg 28w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-56x32.jpg 56w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-64x36.jpg 64w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-351x200.jpg 351w, https://cdn57.androidauthority.net/wp-content/uploads/2020/02/Qualcomm-Snapdragon-X60-chip-675x385.jpg 675w" sizes="(max-width: 1200px) 100vw, 1200px" /></p>
<div class="aa-img-source-credit"></div>
<p><a href="https://www.androidauthority.com/best-android-phones-568001/">2021 flagship smartphones</a> are finally shipping with integrated 5G modems onboard the main SoC, but this is not a given for the Pixel 6. Google doesn’t have 5G modem technology of its own. Apple purchases an external Snapdragon X55 5G modem to pair with its A14 Bionic inside the <a href="https://www.androidauthority.com/apple-iphone-12-1111954/" target="_blank" rel="noopener">iPhone 12 series</a>, for example. If Google wants to stick with Qualcomm’s 5G technology it certainly won’t be integrated, and that means higher costs and power consumption.</p>
<p>Remember, Qualcomm also holds CDMA patent exclusivity in the US. It’s a key reason why Samsung’s Galaxy smartphones feature Snapdragon chipsets in the US and Exynos elsewhere. Granted, this only impacts Verizon and legacy Sprint networks, and CDMA is finally being phased out. But it remains a legal headache that might keep Google tied to Qualcomm’s modem technology, even though it might not be essential to Google’s plans.</p>
<div class="clear"></div><blockquote class="quote_new center" style="color: #00D49F; border-color: #00D49F;"><p>Who Google sources a 5G modem from could be a big deal.</p></blockquote><div class="clear"></div>
<p>Qualcomm also has a robust 5G patent portfolio, but Samsung’s 5G modems have appeared in the US for broadband and commercial solutions. In theory, Samsung could provide its integrated 5G modem technology to Google for Whitechapel. In this case, it could turn out to be a breakthrough for Samsung-made 5G SoCs on US soil.</p>
<p>Of course, it will also be very interesting to see what 4G and 5G bands Google’s chipset supports and whether <a href="https://www.androidauthority.com/what-is-5g-explained-944868/">mmWave technology</a> is part of the picture.</p>
<h2>Expect a focus on security</h2>
<p><img class="aligncenter wp-image-915889 noname aa-img" title="Google Titan M security chip" src="https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-840x560.jpg" alt="Picture showing Google's Titan and Titan M security chip" width="1200" height="800" data-attachment-id="915889" srcset="https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-840x560.jpg 840w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-300x200.jpg 300w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-768x512.jpg 768w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-16x11.jpg 16w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-32x21.jpg 32w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-28x19.jpg 28w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-56x37.jpg 56w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-64x43.jpg 64w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-1000x667.jpg 1000w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip-1200x800.jpg 1200w, https://cdn57.androidauthority.net/wp-content/uploads/2018/10/Google-Titan-M-security-chip.jpg 1600w" sizes="(max-width: 1200px) 100vw, 1200px" /></p>
<div class="aa-img-source-credit">
<div class="aa-img-caption"><span>Google’s Titan server class chip (left) and Titan M smartphone security chip(right)</span></div>
</div>
<p>Remember Google’s <a href="https://www.androidauthority.com/google-pixel-3-titan-m-security-915531/" target="_blank" rel="noopener">Titan M security chip</a> that debuted in the Pixel 3? Me neither until today. The idea was clearly aimed at Apple’s Secure Enclave and rumors suggest the idea may resurface as an integrated part of Google’s Whitechapel silicon.</p>
<p>To recap, the general idea is to offer a secure processing environment that is entirely separate from the main CPU. This protects sensitive information and processing from CPU-exploiting attacks. Titan M can also be used to house sensitive information like biometrics and verify the boot status of your device when you turn it on. It’s a solid idea in a digital world that should increasingly value the emphasis placed on security.</p>
<p>However, similar security enclaves are already available with Qualcomm’s SecureBoot and Secure Processing Unit which have been around for years. Although Google may be doing a few things differently, reinventing the wheel hardly seems worthwhile. The separate hardware approach may also become redundant next year, as CPUs improve their ability to isolate code across apps and processes.</p>
<h2>Machine learning, cameras, and updates — the ones to watch</h2>
<p><img class="size-large wp-image-1114487 noname aligncenter aa-img" title="google pixel 4 xl revisited assistant" src="https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-1200x675.jpg" alt="google pixel 4 xl revisited assistant" width="1200" height="675" data-attachment-id="1114487" srcset="https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-1200x675.jpg 1200w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-300x170.jpg 300w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-768x432.jpg 768w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-16x9.jpg 16w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-32x18.jpg 32w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-28x16.jpg 28w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-56x32.jpg 56w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-64x36.jpg 64w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-712x400.jpg 712w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-1000x563.jpg 1000w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-792x446.jpg 792w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-1280x720.jpg 1280w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-840x472.jpg 840w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-1340x754.jpg 1340w, 
https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-770x433.jpg 770w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-356x200.jpg 356w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant-675x380.jpg 675w, https://cdn57.androidauthority.net/wp-content/uploads/2020/05/google-pixel-4-xl-revisited-assistant.jpg 1920w" sizes="(max-width: 1200px) 100vw, 1200px" /></p>
<div class="aa-img-source-credit">
<div class="aa-img-source-and-credit full">
<div class="aa-img-credit text-right"><span>Credit: </span>Oliver Cragg / Android Authority</div>
</div>
</div>
<p>Enough pessimism, surely there’s at least one good reason for Google to pursue its own chipset ambitions? Well, actually there are at least a couple.</p>
<p>The first is Google’s machine learning expertise and ambitions, and by extension its image processing capabilities. The chipset will allegedly come with Google’s in-house Tensor Processing Unit (TPU). Recall that Google has an <a href="https://cloud.google.com/edge-tpu/" target="_blank" rel="noopener">Edge TPU</a> that Coral has already packaged into a tiny chipset. An integrated version of Google’s Cloud TPU prowess is the next logical step.</p>
<p>This TPU will almost certainly build on the <a href="https://www.androidauthority.com/google-pixel-4-neural-core-1045318/">Pixel Neural Core</a> introduced with the Pixel 4 (which superseded the Pixel 3’s Visual Core), enhancing Google’s image and voice machine learning capabilities. So smarter Google Assistant features could be on the cards. Likewise, machine learning is a cornerstone of the Pixel series’ photography capabilities and there’s good reason to be excited about new photography hardware and software with Whitechapel.</p>
<p style="text-align: center;"><strong>Google Assistant guide:</strong> <a href="https://www.androidauthority.com/google-assistant-838138/">Make the most of your virtual assistant</a></p>
<p>Faster multi-frame processing for HDR+ and night shots, improved bokeh blur with real-time video, better object and semantic segmentation, and more could all be on the cards. Improved hardware may also allow the Pixel 6 to work its magic with much higher resolution or multiple image sensors at once. It could also extend its effects to 4K and even 8K video.</p>
<div class="clear"></div><blockquote class="quote_new center" style="color: #00D49F; border-color: #00D49F;"><p>Google's custom SoC will primarily leverage its in-house TPU machine learning hardware.</p></blockquote><div class="clear"></div>
<p>That being said, Google has already ported its impressive computational photography techniques to <a href="https://www.androidauthority.com/snapdragon-765g-1179486/">Qualcomm’s mid-range silicon</a>, albeit running a little slower than before. Whether or not Google’s TPU exceeds the machine learning and image processing capabilities already found in premium-tier mobile SoCs remains to be seen.</p>
<p>Finally, third-party SoCs are the weak link in the Android update chain. Once manufacturers stop supporting their chips, product designers at Google, Samsung, and others can’t supply further core updates to devices. With a custom SoC design, Google gains full control over this process and could match or even exceed Apple’s five-year update roadmap. Although this doesn’t seem like such a pressing issue in 2021, given that Google and Qualcomm have already teamed up to <a href="https://www.androidauthority.com/google-qualcomm-extend-android-updates-for-snapdragon-1186099/" target="_blank" rel="noopener">extend Android updates to four years</a>.</p>
<h2>Google Pixel 6 processor: What to expect</h2>
<p><img class="size-large wp-image-1172133 noname aligncenter aa-img" title="Google Pixel 5 Pixel 4 Pixel 3 cameras" src="https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-1200x675.jpg" alt="Google Pixel 5 Pixel 4 Pixel 3 cameras" width="1200" height="675" data-attachment-id="1172133" srcset="https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-1200x675.jpg 1200w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-300x170.jpg 300w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-768x432.jpg 768w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-16x9.jpg 16w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-32x18.jpg 32w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-28x16.jpg 28w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-56x32.jpg 56w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-64x36.jpg 64w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-712x400.jpg 712w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-1000x563.jpg 1000w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-792x446.jpg 792w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-1280x720.jpg 1280w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-840x472.jpg 840w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-1340x754.jpg 
1340w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-770x433.jpg 770w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-356x200.jpg 356w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras-675x380.jpg 675w, https://cdn57.androidauthority.net/wp-content/uploads/2020/10/Google-Pixel-5-Pixel-4-Pixel-3-cameras.jpg 1920w" sizes="(max-width: 1200px) 100vw, 1200px" /></p>
<div class="aa-img-source-credit">
<div class="aa-img-source-and-credit full">
<div class="aa-img-credit text-right"><span>Credit: </span>Robert Triggs / Android Authority</div>
</div>
</div>
<p>Of course, all of the official details on Google’s Whitechapel SoC remain unknown. I think the most important thing to note is that Google is building this chip in conjunction with Samsung, so we’re not looking at anything close to a fully in-house effort. This almost certainly means the majority of the chipset will be designed using readily available parts licensed from Arm and Samsung.</p>
<p>If you’re expecting Apple A14 Bionic-beating performance, prepare to be disappointed. Frankly, Google doesn’t have the resources or expertise to pull ahead with bleeding-edge performance and that strategy wouldn’t fit with its more affordable approach to smartphone pricing either.</p>
<div class="clear"></div><blockquote class="quote_new center" style="color: #00D49F; border-color: #00D49F;"><p>If you're expecting Apple-beating performance, prepare to be disappointed. Whitechapel is Google's way to take what we loved about the Pixel 5 and make it even better.</p></blockquote><div class="clear"></div>
<p>Google hit the reset button on its strategy with the <a href="https://www.androidauthority.com/google-pixel-5-1123659/">Pixel 5</a> and a 180-degree return to premium pricing seems unlikely. This is partly because the more affordable approach seems to be working, in addition to the aforementioned caveats on what Google can achieve with custom silicon. Whitechapel is Google’s way to take its vision for the souped-up mid-range smartphone to the next level.</p>
<p>The Big G previously augmented the capabilities of Qualcomm chipsets with ideas like the Neural Core, Titan, and <a href="https://www.androidauthority.com/google-pixel-4-soli-motion-sense-1042448/">Soli</a>. The logical evolution is to embrace the benefits of custom integrated silicon, even if the chip is not designed to fight it out with the benchmark kings. “Good enough” day-to-day performance augmented with in-house hardware that complements Google’s industry-leading software could take what we loved about the Pixel 5 and make it even better.</p>
<p style="text-align: center;"><strong>Read more:</strong> <a href="https://www.androidauthority.com/google-pixel-5-review-revisited-1207814/">Google Pixel 5 revisited — the good and bad six months later</a></p>
<p>At this stage, networking and 5G capabilities are perhaps the hardest specifications to make an informed guess at, yet could be a defining feature of the handset. Of particular importance for the Pixel 6 is how much the chipset costs to design and build, the <a href="https://www.androidauthority.com/computer-chip-shortage-1212941/">availability of chips</a>, and what this means for pricing. We’ll find all that out soon enough.</p>
<p>What are you expecting from Google’s first dabble in the mobile SoC space?</body></html></p>
title: Port Requirements
description: Read about port requirements needed in order for Rancher to operate properly, both for Rancher nodes and Kubernetes cluster nodes
weight: 300
---
To operate properly, Rancher requires a number of ports to be open on Rancher nodes and Kubernetes cluster nodes.
## Rancher Nodes
The following table lists the ports that need to be open to and from nodes that are running the Rancher server container for [single node installs]({{< baseurl >}}/rancher/v2.x/en/installation/single-node-install/) or pods for [high availability installs]({{< baseurl >}}/rancher/v2.x/en/installation/ha-server-install/).
{{< ports-rancher-nodes >}}
>**Note:** Rancher nodes may also require additional outbound access for any external authentication provider which is configured (LDAP for example).
## Kubernetes Cluster Nodes
The ports required to be open for cluster nodes changes depending on how the cluster was launched. Each of the tabs below list the ports that need to be opened for different [cluster creation options]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/#cluster-creation-options).
>**Tip:**
>
>If security isn't a large concern and you're okay with opening a few additional ports, you can use the table in [Commonly Used Ports](#commonly-used-ports) as your port reference instead of the comprehensive tables below.
{{% tabs %}}
{{% tab "Node Pools" %}}
The following table depicts the port requirements for [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) with nodes created in an [Infrastructure Provider]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/).
>**Note:**
>The required ports are automatically opened by Rancher during creation of clusters in cloud providers like Amazon EC2 or DigitalOcean.
{{< ports-iaas-nodes >}}
{{% /tab %}}
{{% tab "Custom Nodes" %}}
The following table depicts the port requirements for [Rancher Launched Kubernetes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/) with [Custom Nodes]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/custom-nodes/).
{{< ports-custom-nodes >}}
{{% /tab %}}
{{% tab "Hosted Clusters" %}}
The following table depicts the port requirements for [hosted clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/hosted-kubernetes-clusters).
{{< ports-imported-hosted >}}
{{% /tab %}}
{{% tab "Imported Clusters" %}}
The following table depicts the port requirements for [imported clusters]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/imported-clusters/).
{{< ports-imported-hosted >}}
{{% /tab %}}
{{% /tabs %}}
## Other Port Considerations
### Commonly Used Ports
These ports are typically opened on your Kubernetes nodes, regardless of the type of cluster.
| Protocol | Port | Description |
|:--------: |:----------------: |------------------------------------------------- |
| TCP | 22 | Node driver SSH provisioning |
| TCP | 2376 | Node driver Docker daemon TLS port |
| TCP | 2379 | etcd client requests |
| TCP | 2380 | etcd peer communication |
| UDP | 8472 | Canal/Flannel VXLAN overlay networking |
| UDP | 4789 | Flannel VXLAN overlay networking on Windows clusters |
| TCP | 9099 | Canal/Flannel livenessProbe/readinessProbe |
| TCP | 6783 | Weave Port |
| UDP | 6783-6784 | Weave UDP Ports |
| TCP | 10250 | kubelet API |
| TCP | 10254 | Ingress controller livenessProbe/readinessProbe |
| TCP/UDP | 30000-32767 | NodePort port range |
----
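To verify that the ports in the table above are actually reachable from another node, a quick TCP connectivity check can help. The sketch below is not part of Rancher; it is a minimal illustration using only the Python standard library, and the port list is a subset you should adjust for your CNI plugin and workloads.

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A subset of the commonly used TCP ports from the table above.
COMMON_TCP_PORTS = [22, 2376, 2379, 2380, 9099, 10250, 10254]

def report(host):
    """Map each commonly used TCP port to its reachability from this machine."""
    return {port: tcp_port_open(host, port) for port in COMMON_TCP_PORTS}
```

Run `report("<node-ip>")` from a peer node: ports that are filtered by a firewall or security group, or that have no listener, show up as `False`. Note that UDP ports (such as 8472 for VXLAN) cannot be checked this way, since UDP is connectionless.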
### Local Node Traffic
Ports marked as `local traffic` (e.g., `9099 TCP`) in the requirements above are used for Kubernetes healthchecks (`livenessProbe` and `readinessProbe`).
These healthchecks are executed on the node itself. In most cloud environments, this local traffic is allowed by default.
However, this traffic may be blocked when:
- You have applied strict host firewall policies on the node.
- You are using nodes that have multiple interfaces (multihomed).
In these cases, you have to explicitly allow this traffic in your host firewall, or, for machines hosted in a public or private cloud (e.g., AWS or OpenStack), in your security group configuration. Keep in mind that when a security group is used as the source or destination of a rule, explicitly opening ports only applies to the private interface of the nodes/instances.
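A simple way to confirm that local healthcheck traffic is not blocked is to probe the healthcheck endpoint from the node itself. The sketch below uses only the Python standard library; the `/liveness` path shown in the comment is an assumption, not something defined by this document — check your CNI pod spec for the actual probe path.

```python
import urllib.request
import urllib.error

def probe(url, timeout=3.0):
    """Return the HTTP status code for url, or None if the request fails."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code          # reachable, but a non-2xx status was returned
    except (urllib.error.URLError, OSError):
        return None              # connection blocked, refused, or timed out

# Example (hypothetical path) for the Canal/Flannel healthcheck port:
# probe("http://localhost:9099/liveness")
```

A `None` result on the node itself suggests the local traffic is being blocked by a host firewall rule rather than a security group, since the request never leaves the machine.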
### Rancher AWS EC2 security group
When using the [AWS EC2 node driver]({{< baseurl >}}/rancher/v2.x/en/cluster-provisioning/rke-clusters/node-pools/ec2/) to provision cluster nodes in Rancher, you can choose to let Rancher create a security group called `rancher-nodes`. The following rules are automatically added to this security group.
| Type | Protocol | Port Range | Source/Destination | Rule Type |
|-----------------|:--------:|:-----------:|------------------------|:---------:|
| SSH | TCP | 22 | 0.0.0.0/0 | Inbound |
| HTTP | TCP | 80 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 443 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 2376 | 0.0.0.0/0 | Inbound |
| Custom TCP Rule | TCP | 2379-2380 | sg-xxx (rancher-nodes) | Inbound |
| Custom UDP Rule | UDP | 4789 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 6443 | 0.0.0.0/0 | Inbound |
| Custom UDP Rule | UDP | 8472 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10250-10252 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 10256 | sg-xxx (rancher-nodes) | Inbound |
| Custom TCP Rule | TCP | 30000-32767 | 0.0.0.0/0 | Inbound |
| Custom UDP Rule | UDP | 30000-32767 | 0.0.0.0/0 | Inbound |
| All traffic | All | All | 0.0.0.0/0 | Outbound |
---
] | null | null | null | ---
template: post
title: Responsibilities of a Software Architect
slug: responsibilities-of-a-software-architect
draft: false
date: '2018-03-27T21:59:00.000Z'
description: >-
While reading this new book, my posts may become more programming based. I
will try and throw in some other more personal-growth posts in along with it,
but today's posts is from the verrrryyy first...
category: Books
tags:
- Design It
- Books
- Coding
---
While reading this new book, my posts may become more programming-based. I will try and throw in some other more personal-growth posts along with it, but today's post is from the verrrryyy first chapter of *Design It!*.
When talking about a software architect (SA), Keeling points out six specific responsibilities:
- Define the problem from an engineering perspective
- Partition the system and assign responsibilities
- Keep an eye on the bigger picture
- Decide trade-offs among quality attributes
- Manage technical debt
- Grow the team's architecture skills
One of the first things I've noticed so far is that it has said nothing about programming. Does a software architect program? I'm not quite sure. From previous experiences, it seems like they know and understand coding, but maybe not syntax of specific languages or nuances? There's more to learn!
Post inspired from: **Design It! From Programmer to Software Architect** by Michael Keeling.