Dataset column schema (column name, dtype, and observed min/max lengths or values):

| Column | Dtype | Min | Max |
|--------|-------|-----|-----|
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 5 | 1.04M |
| ext | stringclasses | 6 values | — |
| lang | stringclasses | 1 value | — |
| max_stars_repo_path | stringlengths | 3 | 344 |
| max_stars_repo_name | stringlengths | 5 | 125 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 78 |
| max_stars_repo_licenses | listlengths | 1 | 11 |
| max_stars_count | int64 | 1 | 368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 3 | 344 |
| max_issues_repo_name | stringlengths | 5 | 125 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 78 |
| max_issues_repo_licenses | listlengths | 1 | 11 |
| max_issues_count | int64 | 1 | 116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 3 | 344 |
| max_forks_repo_name | stringlengths | 5 | 125 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 78 |
| max_forks_repo_licenses | listlengths | 1 | 11 |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| content | stringlengths | 5 | 1.04M |
| avg_line_length | float64 | 1.14 | 851k |
| max_line_length | int64 | 1 | 1.03M |
| alphanum_fraction | float64 | 0 | 1 |
| lid | stringclasses | 191 values | — |
| lid_prob | float64 | 0.01 | 1 |
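The last three derived columns (`avg_line_length`, `max_line_length`, `alphanum_fraction`) are simple per-document statistics over the `content` field. A minimal Python sketch of how they could be computed — the definitions below are assumptions (the dataset's actual pipeline is not documented here), chosen because they reproduce the recorded values for the tariel-x/PLA row further down (16 / 41 / 0.833333):

```python
def doc_stats(content: str) -> dict:
    """Per-document statistics mirroring the schema's derived columns.

    Assumed definitions (not taken from the dataset card):
      - avg_line_length: total characters divided by number of lines
      - max_line_length: length of the longest line
      - alphanum_fraction: alphanumeric characters over all characters
    """
    lines = content.split("\n")
    return {
        "avg_line_length": len(content) / len(lines),
        "max_line_length": max(len(line) for line in lines),
        "alphanum_fraction": sum(c.isalnum() for c in content) / len(content),
    }

# The 48-byte README of the tariel-x/PLA row yields the recorded values:
# avg_line_length = 16.0, max_line_length = 41, alphanum_fraction = 40/48.
stats = doc_stats("# PLA\nPredicate logic with anaphora realization\n")
```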
98174020b3c940aab997871537f27c5b65d2d1ea
505
md
Markdown
about/index.md
theypsilon/JoseBG
688cb4c7097e635a043fc238f19d2f7d6c60ba2c
[ "MIT" ]
null
null
null
about/index.md
theypsilon/JoseBG
688cb4c7097e635a043fc238f19d2f7d6c60ba2c
[ "MIT" ]
null
null
null
about/index.md
theypsilon/JoseBG
688cb4c7097e635a043fc238f19d2f7d6c60ba2c
[ "MIT" ]
null
null
null
---
layout: layouts/home.njk
templateClass: tmpl-post
eleventyNavigation:
  key: About Me
  order: 3
---
<h1>About Me</h1>

My name is José Manuel Barroso Galindo, and I'm a software engineer. Some people know me as __theypsilon__.

On this page I list some of the more recent gamedev projects I've built in my spare time.

[GitHub](https://github.com/theypsilon/)
[LinkedIn](https://www.linkedin.com/in/theypsilon/)
[Twitter](https://twitter.com/josembarroso)
[Ko-fi](https://ko-fi.com/theypsilon)
28.055556
174
0.742574
eng_Latn
0.760784
9819d0ded0a2e7d79000e100a1e68892c13e2719
48
md
Markdown
README.md
tariel-x/PLA
9715dd4c2996be602acb022c6644a0bd9b8a8a11
[ "Apache-2.0" ]
null
null
null
README.md
tariel-x/PLA
9715dd4c2996be602acb022c6644a0bd9b8a8a11
[ "Apache-2.0" ]
null
null
null
README.md
tariel-x/PLA
9715dd4c2996be602acb022c6644a0bd9b8a8a11
[ "Apache-2.0" ]
null
null
null
# PLA

Predicate logic with anaphora realization
16
41
0.833333
eng_Latn
0.952858
981a5757d5bc92d3222f8aee8851b22fa7143d59
12,278
md
Markdown
articles/cosmos-db/troubleshoot-dot-net-sdk.md
eduarandilla/azure-docs.es-es
2d47e242f1f915183fb6a2852199649dbae474a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cosmos-db/troubleshoot-dot-net-sdk.md
eduarandilla/azure-docs.es-es
2d47e242f1f915183fb6a2852199649dbae474a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cosmos-db/troubleshoot-dot-net-sdk.md
eduarandilla/azure-docs.es-es
2d47e242f1f915183fb6a2852199649dbae474a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Diagnose and troubleshoot issues when using the Azure Cosmos DB .NET SDK
description: Use features such as client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using the .NET SDK.
author: anfeldma-ms
ms.service: cosmos-db
ms.date: 06/16/2020
ms.author: anfeldma
ms.subservice: cosmosdb-sql
ms.topic: troubleshooting
ms.reviewer: sngun
ms.openlocfilehash: 1dd6bdc66146eb7dfe155e7d1091eee5cca450a0
ms.sourcegitcommit: dccb85aed33d9251048024faf7ef23c94d695145
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 07/28/2020
ms.locfileid: "87290918"
---
# <a name="diagnose-and-troubleshoot-issues-when-using-azure-cosmos-db-net-sdk"></a>Diagnose and troubleshoot issues when using the Azure Cosmos DB .NET SDK

> [!div class="op_single_selector"]
> * [Java SDK v4](troubleshoot-java-sdk-v4-sql.md)
> * [Async Java SDK v2](troubleshoot-java-async-sdk.md)
> * [.NET](troubleshoot-dot-net-sdk.md)
>

This article covers common issues, workarounds, diagnostic steps, and tools to use with the [.NET SDK](sql-api-sdk-dotnet.md) and Azure Cosmos DB SQL API accounts. The .NET SDK provides the client-side logical representation for accessing the Azure Cosmos DB SQL API. This article describes tools and approaches to help you if any problem arises.

## <a name="checklist-for-troubleshooting-issues"></a>Checklist for troubleshooting issues

Consider the following checklist before moving your application into production. Following it will prevent several common issues that might otherwise come up, and will let you diagnose quickly when a problem does occur:

* Use the latest [SDK](sql-api-sdk-dotnet-standard.md). Preview SDKs should not be used in production. This avoids known issues that have already been fixed.
* Review the [performance tips](performance-tips.md) and follow the suggested practices. This helps prevent scaling, latency, and other performance problems.
* Enable SDK logging to help troubleshoot a problem. Enabling logging can affect performance, so it's best to do so only when troubleshooting. You can enable the following logs:
  * Log [metrics](monitor-accounts.md) by using the Azure portal. Portal metrics show Azure Cosmos DB telemetry, which is useful for determining whether the problem lies with Azure Cosmos DB or with the client.
  * Log the [diagnostics string](https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring) in the V2 SDK or the [diagnostics](https://docs.microsoft.com/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics) in the V3 SDK from point-operation responses.
  * Log the [SQL query metrics](sql-api-query-metrics.md) from all query responses.
  * Follow the setup for [SDK logging]( https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/docs/documentdb-sdk_capture_etl.md).

Take a look at the [Common issues and workarounds](#common-issues-workarounds) section of this article.

Check the [GitHub issues section](https://github.com/Azure/azure-cosmos-dotnet-v2/issues), which is actively monitored. Check whether a similar issue with a workaround has already been filed. If you couldn't find a solution, file a GitHub issue. For urgent problems you can open a support ticket.

## <a name="common-issues-and-workarounds"></a><a name="common-issues-workarounds"></a>Common issues and workarounds

### <a name="general-suggestions"></a>General suggestions

* Run your application in the same Azure region as your Azure Cosmos DB account, whenever possible.
* You might run into connectivity or availability issues due to a lack of resources on the client machine. We recommend monitoring CPU utilization on the nodes running the Azure Cosmos DB client, and scaling up or out if they are running under high load.

### <a name="check-the-portal-metrics"></a>Check the portal metrics

Checking the [portal metrics](monitor-accounts.md) will help you determine whether the problem is on the client side or with the service. For example, if the metrics contain a high rate of rate-limited requests (HTTP status code 429), which means the request is being throttled, check the [Request rate too large](troubleshoot-request-rate-too-large.md) section.

## <a name="common-error-status-codes"></a>Common error status codes <a id="error-codes"></a>

| Status code | Description |
|----------|-------------|
| 400 | Bad request (depends on the error message) |
| 401 | [Not authorized](troubleshoot-unauthorized.md) |
| 404 | [Resource not found](troubleshoot-not-found.md) |
| 408 | [Request timed out](troubleshoot-dot-net-sdk-request-timeout.md) |
| 409 | A conflict failure occurs when an existing resource has already taken the identifier supplied for a resource in a write operation. Use a different identifier for the resource to resolve the problem, since the identifier must be unique across all documents with the same partition key value. |
| 410 | Gone exceptions (a transient failure that should not violate the SLA). |
| 412 | A precondition failure occurs when the operation specified an eTag value that differs from the version available on the server. It is an optimistic concurrency error. Retry the request after reading the latest version of the resource and updating the eTag value in the request. |
| 413 | [Request entity too large](concepts-limits.md#per-item-limits) |
| 429 | [Too many requests](troubleshoot-request-rate-too-large.md) |
| 449 | A transient error that occurs only on write operations and is safe to retry. |
| 500 | The operation failed because of an unexpected service error. Contact support. See how to file an [Azure support issue](https://aka.ms/azure-support). |
| 503 | [Service unavailable](troubleshoot-service-unavailable.md) |

### <a name="azure-snat-pat-port-exhaustion"></a><a name="snat"></a>Azure SNAT (PAT) port exhaustion

If your application is deployed on [Azure Virtual Machines without a public IP address](../load-balancer/load-balancer-outbound-connections.md), [Azure SNAT ports](../load-balancer/load-balancer-outbound-connections.md#preallocatedports) are used by default to establish connections to any endpoint outside the VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../load-balancer/load-balancer-outbound-connections.md#preallocatedports). This situation can lead to connection throttling, connection closure, or the [request timeouts](troubleshoot-dot-net-sdk-request-timeout.md) mentioned earlier.

Azure SNAT ports are used only when the VM has a private IP address and connects to a public IP address. There are two workarounds to avoid Azure SNAT limitation (provided you use a single client instance across the whole application):

* Add the Azure Cosmos DB service endpoint to the subnet of the Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md). When the service endpoint is enabled, requests are no longer sent from a public IP address to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change can result in firewall drops if only public IP addresses are allowed. If you use a firewall, when you enable the service endpoint, add the subnet to the firewall by using [Virtual Network ACLs](../virtual-network/virtual-networks-acl.md).
* Assign a [public IP address to the Azure VM](../load-balancer/troubleshoot-outbound-connection.md#assignilpip).

### <a name="high-network-latency"></a><a name="high-network-latency"></a>High network latency

High network latency can be identified via the [diagnostics string](https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet) in the V2 SDK or the [diagnostics](https://docs.microsoft.com/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet#Microsoft_Azure_Cosmos_ResponseMessage_Diagnostics) in the V3 SDK.

If no [timeouts](troubleshoot-dot-net-sdk-request-timeout.md) are present and the diagnostics show single requests in which the high latency is evident in the difference between `ResponseTime` and `RequestStartTime`, as shown below (> 300 milliseconds in this example):

```bash
RequestStartTime: 2020-03-09T22:44:49.5373624Z, RequestEndTime: 2020-03-09T22:44:49.9279906Z, Number of regions attempted:1
ResponseTime: 2020-03-09T22:44:49.9279906Z, StoreResult: StorePhysicalAddress: rntbd://..., ...
```

This latency can have several causes:

* Your application is not running in the same region as your Azure Cosmos DB account.
* Your [PreferredLocations](https://docs.microsoft.com/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations) or [ApplicationRegion](https://docs.microsoft.com/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion) configuration is incorrect and tries to connect to a region other than the one where your application is currently running.
* There might be a bottleneck on the network interface because of heavy traffic. If the application runs on Azure Virtual Machines, there are possible workarounds:
  * Consider using a [virtual machine with Accelerated Networking enabled](../virtual-network/create-vm-accelerated-networking-powershell.md).
  * Enable [Accelerated Networking on an existing virtual machine](../virtual-network/create-vm-accelerated-networking-powershell.md#enable-accelerated-networking-on-existing-vms).
  * Consider using a [larger virtual machine](../virtual-machines/windows/sizes.md).

### <a name="slow-query-performance"></a>Slow query performance

The [query metrics](sql-api-query-metrics.md) will help you determine where the query is spending most of its time. From the query metrics, you can see how much of it is spent on the back end versus the client.

* If the back-end query returns quickly and a large amount of time is spent on the client, check the load on the machine. It's likely that there aren't enough resources and the SDK is waiting for resources to become available to handle the response.
* If the back-end query is slow, try to [optimize the query](optimize-cost-queries.md) and review the [indexing policy](index-overview.md).

## <a name="next-steps"></a>Next steps

* Learn about the performance guides for [.NET V3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET V2](performance-tips.md)
* Learn about the [Reactor-based Java SDKs](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/master/reactor-pattern-guide.md)

<!--Anchors-->
[Common issues and workarounds]: #common-issues-workarounds
[Enable client SDK logging]: #logging
[Azure SNAT (PAT) port exhaustion]: #snat
[Production check list]: #production-check-list
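The high-latency check in the troubleshooting article above compares `ResponseTime` against `RequestStartTime` in the diagnostics output. That comparison can be sketched as a small parser — `parse_dotnet_timestamp` and `latency_ms` are hypothetical helpers, not part of any SDK; note that .NET round-trip timestamps carry seven fractional digits (ticks), which must be truncated to Python's microsecond precision:

```python
import re
from datetime import datetime, timezone

def parse_dotnet_timestamp(ts: str) -> datetime:
    """Parse a .NET round-trip timestamp such as 2020-03-09T22:44:49.5373624Z.

    .NET emits up to seven fractional digits; Python's datetime only holds
    microseconds, so the fraction is truncated to six digits.
    """
    m = re.match(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})\.(\d+)Z$", ts)
    if not m:
        raise ValueError(f"unrecognized timestamp: {ts}")
    base, frac = m.groups()
    micros = int(frac[:6].ljust(6, "0"))
    return datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(
        microsecond=micros, tzinfo=timezone.utc
    )

def latency_ms(request_start: str, response_time: str) -> float:
    """Client-observed latency: ResponseTime - RequestStartTime, in ms."""
    delta = parse_dotnet_timestamp(response_time) - parse_dotnet_timestamp(request_start)
    return delta.total_seconds() * 1000.0

# Values from the diagnostics sample in the article: about 390.6 ms,
# which is above the 300 ms threshold the article calls out.
ms = latency_ms("2020-03-09T22:44:49.5373624Z", "2020-03-09T22:44:49.9279906Z")
```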
105.844828
792
0.790683
spa_Latn
0.973895
981ae0bdc1cebf700635b4539cb2f9e0371d79f7
127
md
Markdown
CHANGELOG.md
ardorcode/laravel-table-generator
d8eb8a432d9bac2dedc2602036a05f599ee488a2
[ "MIT" ]
2
2021-12-06T11:15:18.000Z
2022-01-09T09:13:08.000Z
CHANGELOG.md
ardorcode/laravel-table-generator
d8eb8a432d9bac2dedc2602036a05f599ee488a2
[ "MIT" ]
null
null
null
CHANGELOG.md
ardorcode/laravel-table-generator
d8eb8a432d9bac2dedc2602036a05f599ee488a2
[ "MIT" ]
1
2022-01-09T09:15:38.000Z
2022-01-09T09:15:38.000Z
# Changelog

All notable changes to `tablegenerator` will be documented in this file.

## 1.0.0 - 2021-12-01

- initial release
15.875
71
0.732283
eng_Latn
0.98054
981b247f5f26a82b8d74d4a792cf831ddca18abd
887
md
Markdown
_events/2011-10-10-vim.md
smacz42/smacz42.github.io
e19ff079914bdc82540aceb6dc0cd83e8485969c
[ "MIT" ]
null
null
null
_events/2011-10-10-vim.md
smacz42/smacz42.github.io
e19ff079914bdc82540aceb6dc0cd83e8485969c
[ "MIT" ]
null
null
null
_events/2011-10-10-vim.md
smacz42/smacz42.github.io
e19ff079914bdc82540aceb6dc0cd83e8485969c
[ "MIT" ]
null
null
null
---
title: Vim
---

October 13th at 7:00PM in Dreese 266, Daniel Thau will be giving a presentation on Vim, an extremely powerful text-editing program. Vim is known for having a difficult learning curve, but for many of those who do any appreciable amount of work editing text (such as Unix configuration, programming, etc.) the benefits can far outweigh the cost.

Daniel will cover Vim's basic concepts and should get most who know nothing about Vim up to the point where they can use it. Additionally, Daniel will cover several more advanced concepts such as folding, q-macros, blockwise-visual tricks, and others. Feel free to come in with questions on anything Vim.

Vim is available on Windows, Macs, Linux, and others. If you've got a laptop, it may be a good idea to install Vim and bring it in; playing with the various aspects of Vim introduced is a great way to learn about them.
177.4
867
0.786922
eng_Latn
0.999839
981b39c8ae7c548fb1f2963f0f46820f3bce0bd3
2,240
md
Markdown
079_user_defined_parallelism_06/README.md
xenogenics/spl-for-beginners
a8e569fb40596b19bf784116de3914ef0f3d8ef3
[ "0BSD" ]
40
2015-10-17T17:35:01.000Z
2022-02-20T22:14:09.000Z
Examples-for-beginners/079_user_defined_parallelism_06/README.md
HelenaAH30/samples
e3b17d63339f7098b8d7f33bbf0b6002c1c602cf
[ "Apache-2.0" ]
65
2015-02-18T14:11:24.000Z
2021-03-08T16:49:42.000Z
Examples-for-beginners/079_user_defined_parallelism_06/README.md
HelenaAH30/samples
e3b17d63339f7098b8d7f33bbf0b6002c1c602cf
[ "Apache-2.0" ]
81
2015-03-13T13:36:31.000Z
2021-04-22T03:47:22.000Z
~~~~~~ Scala
/*
This is example 6 in the series of 12 User Defined Parallelism (UDP) scenarios.
UDP is a great feature to parallelize an entire composite or a particular operator.

This example code is taken from the Streams InfoCenter and added here to benefit the
beginners of the Streams SPL programming model. Many thanks to our Streams colleague
Scott Schneider for coming up with this set of UDP examples. Full credit goes to him.

It is recommended that you run this example in Distributed mode and visualize the
parallel region in the Streams instance graph.
*/
namespace com.acme.test;

// In this example of user-defined parallelism, one stream feeds multiple parallel regions.
// Each parallel region needs an independent splitter. In this instance, the parallel transformation adds
// two independent splitters to a single PE output port. Each of these splitters feeds one of the two independent parallel regions.
composite UDP6 {
    graph
        stream<int32 i> MyData = Beacon() {
            param
                iterations: 5000;
        }

        stream<MyData> EnrichedData = Custom(MyData) {
            logic
                state: {
                    mutable int32 _i = 0;
                }

                onTuple MyData: {
                    _i++;
                    MyData.i = _i;
                    submit(MyData, EnrichedData);
                }
        }

        // Create two parallel copies of the composite Comp6.
        @parallel (width=2)
        stream<EnrichedData> TransformedData1 = Comp6_1(EnrichedData) {
            config
                placement: partitionColocation("AB");
        }

        @parallel (width=2)
        stream<EnrichedData> TransformedData2 = Comp6_2(EnrichedData) {
            config
                placement: partitionColocation("CD");
        }

        () as MySink = FileSink(TransformedData1, TransformedData2) {
            param
                file: "Test1.csv";
        }
}

composite Comp6_1(input In; output B) {
    graph
        stream<int32 i> A = Custom(In) {
            logic
                onTuple In: {
                    In.i = In.i + 25;
                    submit(In, A);
                }
        }

        stream<A> B = Custom(A) {
            logic
                onTuple A: {
                    A.i = A.i - 4;
                    submit(A, B);
                }
        }
}

composite Comp6_2(input In; output D) {
    graph
        stream<int32 i> C = Custom(In) {
            logic
                onTuple In: {
                    In.i = In.i + 45;
                    submit(In, C);
                }
        }

        stream<C> D = Custom(C) {
            logic
                onTuple C: {
                    C.i = C.i - 8;
                    submit(C, D);
                }
        }
}
~~~~~~
22.857143
131
0.660268
eng_Latn
0.919241
981b9a0f067b34afabfbbc5cb077c96f6781caed
1,878
md
Markdown
en/data-streams/concepts/index.md
teminalk0/docs
2067fdc72e78b3a9ff9987723a56a2a1b4eea41d
[ "CC-BY-4.0" ]
1
2022-01-19T12:08:52.000Z
2022-01-19T12:08:52.000Z
en/data-streams/concepts/index.md
teminalk0/docs
2067fdc72e78b3a9ff9987723a56a2a1b4eea41d
[ "CC-BY-4.0" ]
null
null
null
en/data-streams/concepts/index.md
teminalk0/docs
2067fdc72e78b3a9ff9987723a56a2a1b4eea41d
[ "CC-BY-4.0" ]
null
null
null
# Overview

Applications generate data that needs to be saved for further analysis or processing. Some of the data needs to be stored for a long time in <q>cold</q> storage that is rarely accessed, while other data should be stored in analytical databases for hot data processing. {{ yds-full-name }} makes it easier to transfer user application data to {{ yandex-cloud }} storage systems.

![overview](../../_assets/data-streams/overview.svg)

The data arrives in {{ yds-name }} as in a data bus, which stores it in a fault-tolerant way across availability zones and scales based on the amount of data transferred. You can send data to the bus using Fluentd, Logstash, log4j/log4net, and other data streaming systems, as well as via HTTP over a protocol compatible with the Amazon Kinesis Data Streams API.

The data transferred via the bus can then be saved to target systems, such as S3, {{ CH }}, and others, using [{{ data-transfer-full-name }}](../../data-transfer/concepts/index.md). You can set up the transfer parameters in the {{ yandex-cloud }} management console or via the API.

If, while saving the data, you need to change the data itself or its format, or handle it in any other way (for example, delete sensitive information), you can do this using [{{ sf-full-name }}](../../functions/concepts/index.md) functions. {{ sf-short-name }} supports a variety of programming languages, such as Python, Java, PHP, and more.

## Benefits {#advantages}

* Support for a large number of targets and extensive customization options for streaming data.
* The solution is fully integrated into the {{ yandex-cloud }} ecosystem and lets you centrally manage data streams using both the {{ yandex-cloud }} management console and the API.
* All components are fully managed, that is, they require no administration or a special team of DevOps engineers.
89.428571
375
0.760383
eng_Latn
0.998464
981be50b0e91038f86c50b79bfd7d82d131d250d
1,983
md
Markdown
_posts/2017-12-12-selenium.md
Dingxxxx/Dingxxxx.github.io
d380ead6ac870d6f93ee1750017cb21c9132fb29
[ "MIT" ]
null
null
null
_posts/2017-12-12-selenium.md
Dingxxxx/Dingxxxx.github.io
d380ead6ac870d6f93ee1750017cb21c9132fb29
[ "MIT" ]
null
null
null
_posts/2017-12-12-selenium.md
Dingxxxx/Dingxxxx.github.io
d380ead6ac870d6f93ee1750017cb21c9132fb29
[ "MIT" ]
null
null
null
---
layout: post
author: Ding
title: Driving a browser with Selenium
date: 2017-12-12
categories: Python
tags:
- Python
---

* content
{:toc}

> Selenium is a tool for testing web applications. Selenium tests run directly in the browser, just as a real user would operate it.

## Installation

[Official documentation](https://seleniumhq.github.io/selenium/docs/api/py/index.html#)

```
conda install selenium
```

Download chromedriver from the [Chrome](https://sites.google.com/a/chromium.org/chromedriver/home) site, or geckodriver from the [Firefox](https://github.com/mozilla/geckodriver/releases) releases page.

```
sudo cp Downloads/chromedriver /opt/google/chrome/
sudo cp chromedriver /home/ding/anaconda3/selenium/webdriver/
sudo cp geckdriver /home/ding/anaconda3/selenium/webdriver/
```

Then add it to the PATH:

```
export PATH="$PATH:/home/ding/anaconda3/selenium/webdriver/geckodriver"
```

Install the keyboard-control library [keyboard](https://github.com/boppreh/keyboard):

```
sudo pip install keyboard
sudo cp -r /usr/local/lib/python2.7/dist-packages/keyboard /home/ding/anaconda3/lib/python3.6/site-packages
```

## Opening a page

Open the T-Rex runner page and make it start the game.

+ Chrome

```python
from selenium import webdriver
browser = webdriver.Chrome('/home/ding/anaconda3/selenium/webdriver/chromedriver')
browser.get("http://127.0.0.1/t-rex-runner")
```

Chrome does not yet support holding a key down continuously; the suggestions in this [Stackoverflow](https://stackoverflow.com/questions/17756532/how-to-hold-key-down-with-selenium) thread didn't solve the problem, so I switched to controlling Firefox instead.

+ Firefox

```python
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("http://127.0.0.1/t-rex-runner")
```

## Controlling the page

```python
import time

from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Firefox()
driver.get("http://127.0.0.1/t-rex-runner")
time.sleep(0.5)
#browser.maximize_window()
#browser.quit()
obj = driver.find_element_by_id("t")
# start the game
obj.send_keys(Keys.SPACE)
time.sleep(2)
# jump
obj.send_keys(Keys.SPACE)
time.sleep(0.5)
# duck for two seconds
action = ActionChains(driver)
action.key_down(Keys.DOWN).perform()
time.sleep(2)
action.key_up(Keys.DOWN).perform()
```
19.067308
118
0.758951
yue_Hant
0.183828
981bffc7a14a634de4ed1b299f798ec5ed54757f
3,083
md
Markdown
add/metadata/System.Activities.Presentation/WorkflowElementDialog.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
add/metadata/System.Activities.Presentation/WorkflowElementDialog.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
add/metadata/System.Activities.Presentation/WorkflowElementDialog.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
uid: System.Activities.Presentation.WorkflowElementDialog
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.Context
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.WindowSizeToContent
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.ShowOkCancel
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.ModelItem
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.Title
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.WindowResizeMode
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.HelpKeyword
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.OnModelItemChanged(System.Object)
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.OnWorkflowElementDialogClosed(System.Nullable{System.Boolean})
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.OnInitialized(System.EventArgs)
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.EnableMaximizeButton
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.WindowResizeModeProperty
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.EnableMinimizeButton
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.ModelItemProperty
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.#ctor
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.Show
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.ContextProperty
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.EnableOk(System.Boolean)
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.Owner
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.TitleProperty
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
---
uid: System.Activities.Presentation.WorkflowElementDialog.WindowSizeToContentProperty
author: "Erikre"
ms.author: "erikre"
manager: "erikre"
---
20.019481
120
0.76711
yue_Hant
0.228246
981cf8e63a880b4f02a1dba29dd2cdd18a666196
1,060
md
Markdown
README.md
mqt0029/mqt0029
9893e77c9311978a88933192adca2c2949e6a868
[ "MIT" ]
null
null
null
README.md
mqt0029/mqt0029
9893e77c9311978a88933192adca2c2949e6a868
[ "MIT" ]
null
null
null
README.md
mqt0029/mqt0029
9893e77c9311978a88933192adca2c2949e6a868
[ "MIT" ]
null
null
null
# GitHub Business Card? I mean uh... Hello :wave:

- 🙂 My name is Minh Tram (but people refer to me as Jerry)
- 👨‍💼 I am currently a Graduate Research/Teaching Assistant at <a target="_blank" rel="noopener noreferrer" href="https://uta.edu">UT Arlington</a>
- 🛠️ I am working on the <a target="_blank" rel="noopener noreferrer" href="https://www.nist.gov/el/intelligent-systems-division-73500/agile-robotics-industrial-automation-competition">Agile Robotics for Industrial Automation Competition (ARIAC) 2021</a>
- 📚 I am currently learning <a target="_blank" rel="noopener noreferrer" href="https://www.ros.org">ROS Melodic</a> for robotics competitions
- 🔬 I am doing research on AR/VR interaction with robotic systems for remote-augmented control. Think Pacific Rim minus the gigantic robots punching gigantic monsters.
- 💼 I am also actively **looking for a full-time job as a software engineer**
- 📬 You can reach me via the phone number or email address available on my <a target="_blank" rel="noopener noreferrer" href="https://mqt0029.github.io">Portfolio Page</a>
96.363636
250
0.756604
eng_Latn
0.912592
981d60953a8c89dd2ca932070749c66192f9c7c3
178
md
Markdown
sturdy-octopus-0/README.md
hackermanone/sturdy-octopus
a868a2704106e40a6822048850aba9a46d646b31
[ "MIT" ]
null
null
null
sturdy-octopus-0/README.md
hackermanone/sturdy-octopus
a868a2704106e40a6822048850aba9a46d646b31
[ "MIT" ]
1
2019-03-19T20:14:29.000Z
2019-03-19T20:14:40.000Z
sturdy-octopus-0/README.md
hackermanone/sturdy-octopus
a868a2704106e40a6822048850aba9a46d646b31
[ "MIT" ]
2
2019-03-19T20:10:50.000Z
2019-04-12T17:47:03.000Z
## Simple Shopping Cart Application

### Two ways to run application

##### 1. `npm install`
##### 2. `npm start`

## or

##### 1. `npm run package`
##### 2. Run the executable
16.181818
35
0.589888
eng_Latn
0.931917
981da0444475fc010274f9e72ffea727189525c7
943
md
Markdown
conference/2021/speakers/harishankar.md
CppIndia-UserGroup/CppIndia-UserGroup
4b93c8222bbf9f339fdc2bed95c88e7448f60be6
[ "MIT" ]
null
null
null
conference/2021/speakers/harishankar.md
CppIndia-UserGroup/CppIndia-UserGroup
4b93c8222bbf9f339fdc2bed95c88e7448f60be6
[ "MIT" ]
null
null
null
conference/2021/speakers/harishankar.md
CppIndia-UserGroup/CppIndia-UserGroup
4b93c8222bbf9f339fdc2bed95c88e7448f60be6
[ "MIT" ]
null
null
null
---
layout: single
title: Harishankar Singh
permalink: /conference/2021/speakers/harishankar/
toc: false
widget: true
speakers: false
registerforCppIndiaCon: true
joinCppIndia: true
cppindiaconsponsors: false
---

![Harishankar Singh](/conference/2021/graphics/hari.jpg "Harishankar Singh")

**Harishankar** has **15 years** of experience in the software industry. He has worked in various roles across domains such as Avionics, Media, AI, Speech Recognition, and ML. Since April 2020, he has been an independent trainer and mentor for C++, using his C++ experience to train and coach professionals and students and to encourage people to write sustainable, clean code using C++.

[![Harishankar Singh](/assets/images/linkedin.png "Harishankar Singh")](https://www.linkedin.com/in/harishankarsinghyadav/){:target="_blank"}
[![Harishankar Singh](/assets/images/twitter.png "Harishankar Singh")](https://twitter.com/HarishankarSY){:target="_blank"}
42.863636
211
0.776246
eng_Latn
0.896319
981de97c48334c3e4e68d7f5287ac25a467fe709
14,475
md
Markdown
README.md
zhongxiali/cesium
171a0b714c8c55b4a1e218307dd434ecf9002e16
[ "Apache-2.0" ]
null
null
null
README.md
zhongxiali/cesium
171a0b714c8c55b4a1e218307dd434ecf9002e16
[ "Apache-2.0" ]
null
null
null
README.md
zhongxiali/cesium
171a0b714c8c55b4a1e218307dd434ecf9002e16
[ "Apache-2.0" ]
null
null
null
<p align="center">
<img src="https://github.com/AnalyticalGraphicsInc/cesium/wiki/logos/Cesium_Logo_Color.jpg" width="50%" />
</p>

[![Build Status](https://travis-ci.org/AnalyticalGraphicsInc/cesium.svg?branch=master)](https://travis-ci.org/AnalyticalGraphicsInc/cesium)&nbsp;
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](http://www.apache.org/licenses/LICENSE-2.0.html)
[![Docs](https://img.shields.io/badge/docs-online-orange.svg)](http://cesiumjs.org/tutorials.html)

Cesium is a JavaScript library for creating 3D globes and 2D maps in a web browser without a plugin. It uses WebGL for hardware-accelerated graphics, and is cross-platform, cross-browser, and tuned for dynamic-data visualization.

http://cesiumjs.org/

### Get Started ###

Visit the [Downloads page](http://cesiumjs.org/downloads.html) or use the npm module:

```
npm install cesium
```

Have questions? Ask them on the [forum](http://cesiumjs.org/forum.html).

Interested in contributing? See [CONTRIBUTING.md](CONTRIBUTING.md).

### Mission ###

Our mission is to create the leading 3D globe and map for static and time-dynamic content, with the best possible performance, precision, visual quality, platform support, community, and ease of use.

### License ###

[Apache 2.0](http://www.apache.org/licenses/LICENSE-2.0.html). Cesium is free for both commercial and non-commercial use. We appreciate attribution by including the Cesium logo and link in your app.
### Featured Demos ### <p align="center"> <a href="http://cesiumjs.org/NewYork"><img src="http://cesiumjs.org/demos/images/nyc.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/fodarEarth.html"><img src="http://cesiumjs.org/demos/images/fodar/fodar_03_md.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/xalps.html"><img src="http://cesiumjs.org/demos/images/RedBull1.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/noradtrackssanta.html"><img src="http://cesiumjs.org/demos/images/noradtrackssanta.png" height="150" /></a>&nbsp; <a href="http://apps.agi.com/SatelliteViewer/?Status=Operational"><img src="http://cesiumjs.org/demos/images/SatelliteViewer.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/VestaTrek.html"><img src="http://cesiumjs.org/demos/images/VestaTrek.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/CyberCity3D.html"><img src="http://cesiumjs.org/demos/images/CyberCity.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/GEFSonline.html"><img src="http://cesiumjs.org/demos/images/GEFS.jpg" height="150" /></a>&nbsp; </p> ### Demos ### <p align="center"> <a href="http://cesiumjs.org/demos/ShakeFinder.html"><img src="http://cesiumjs.org/demos/images/ShakeFinder.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/GeoPort3D.html"><img src="http://cesiumjs.org/demos/images/GeoPort3D.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/HurricaneHunters.html"><img src="http://cesiumjs.org/demos/images/HurricaneHunters.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/HWRF.html"><img src="http://cesiumjs.org/demos/images/HWRF.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/GPMNRTView.html"><img src="http://cesiumjs.org/demos/images/GPMNRTView.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/STORMVG.html"><img src="http://cesiumjs.org/demos/images/STORMVG.jpg" height="150" /></a>&nbsp; <a 
href="http://cesiumjs.org/demos/CubeCities.html"><img src="http://cesiumjs.org/demos/images/CubeCities.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/VirES.html"><img src="http://cesiumjs.org/demos/images/VirES.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/NASAweather.html"><img src="http://cesiumjs.org/demos/images/NASAweather.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/Citisens.html"><img src="http://cesiumjs.org/demos/images/citisens.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/ParalogPerformance.html"><img src="http://cesiumjs.org/demos/images/ParalogPerformance.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/FlightClub.html"><img src="http://cesiumjs.org/demos/images/FlightClub.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/GDMOV.html"><img src="http://cesiumjs.org/demos/images/GDMOV.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/CanadianLandforms.html"><img src="http://cesiumjs.org/demos/images/CanadianLandforms.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/myCesiumflight.html"><img src="http://cesiumjs.org/demos/images/myCesiumflight.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/PowderGlobe.html"><img src="http://cesiumjs.org/demos/images/PowderGlobe.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/Flightradar24.html"><img src="http://cesiumjs.org/demos/images/Flightradar24.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/CubeGlobe.html"><img src="http://cesiumjs.org/demos/images/CubeGlobe.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/OrbitalPredictor.html"><img src="http://cesiumjs.org/demos/images/OrbitalPredictor.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/RapidScat.html"><img src="http://cesiumjs.org/demos/images/RapidScat.jpg" height="150" /></a>&nbsp; <a 
href="http://cesiumjs.org/demos/Wasurenai.html"><img src="http://cesiumjs.org/demos/images/Wasurenai.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/N2YO.html"><img src="http://cesiumjs.org/demos/images/N2YO.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/LiveTrack24.html"><img src="http://cesiumjs.org/demos/images/LiveTrack24.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/PHAROS.html"><img src="http://cesiumjs.org/demos/images/PHAROS.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/LSDSLAM.html"><img src="http://cesiumjs.org/demos/images/LSDSLAM.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/GEFSonline.html"><img src="http://cesiumjs.org/demos/images/GEFS.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/GeoglyphRail.html"><img src="http://cesiumjs.org/demos/images/GeoglyphRail.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/ParaglidingLogbook.html"><img src="http://cesiumjs.org/demos/images/ParaglidingLogbook.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/EarthClock.html"><img src="http://cesiumjs.org/demos/images/EarthClock.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/Quadrodynamics.html"><img src="http://cesiumjs.org/demos/images/quadrodynamics.jpg" height="150" /></a>&nbsp; <a href="http://apps.agi.com/SatelliteViewer/?Status=Operational"><img src="http://cesiumjs.org/demos/images/SatelliteViewer.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/WAVE.html"><img src="http://cesiumjs.org/demos/images/WAVE.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/Nanaimo.html"><img src="http://cesiumjs.org/demos/images/Nanaimo.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/HereYouGo.html"><img src="http://cesiumjs.org/demos/images/HereYouGo.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/CyberCity3D.html"><img 
src="http://cesiumjs.org/demos/images/CyberCity.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/EastJapanEarthquake.html"><img src="http://cesiumjs.org/demos/images/JapanEarthquake.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/PaperDrone.html"><img src="http://cesiumjs.org/demos/images/PaperDrone.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/OpenWebGIS.html"><img src="http://cesiumjs.org/demos/images/OpenWebGIS.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/3DHarvestingPlanner.html"><img src="http://cesiumjs.org/demos/images/3DHarvest1.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/2015/10/02/Red-Bull-X-Alps-in-Cesium/"><img src="http://cesiumjs.org/demos/images/RedBull1.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/GeoAnimate.html"><img src="http://cesiumjs.org/demos/images/GeoAnimate.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/DataCurtains.html"><img src="http://cesiumjs.org/demos/images/DataCurtains.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/DronesOculus.html"><img src="http://cesiumjs.org/demos/images/DronesOculus.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/3DCityDB.html"><img src="http://cesiumjs.org/demos/images/3DCityDB.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/GridViz.html"><img src="http://cesiumjs.org/demos/images/grid_viz.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/TacMap.html"><img src="http://cesiumjs.org/demos/images/TacMap.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/VirtualCitiesProject.html"><img src="http://cesiumjs.org/demos/images/VirtualCitiesProject.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/MarsTrek.html"><img src="http://cesiumjs.org/demos/images/MarsTrek.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/raceQs.html"><img 
src="http://cesiumjs.org/demos/images/raceQs.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/EarthViewer.html"><img src="http://cesiumjs.org/demos/images/EarthViewerMain.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/cloudahoy.html"><img src="http://cesiumjs.org/demos/images/cloudahoy.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/VestaTrek.html"><img src="http://cesiumjs.org/demos/images/VestaTrek.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/Taipei3DCityNavigation.html"><img src="http://cesiumjs.org/demos/images/Taipei3DCityNavigation.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/4DChoroplethMap.html"><img src="http://cesiumjs.org/demos/images/4DChoroplethMap.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/RikiTraki.html"><img src="http://cesiumjs.org/demos/images/RikiTraki.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/EgyptianObeliskTracker.html"><img src="http://cesiumjs.org/demos/images/EgyptianObeliskTracker.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/hiroshima-archive.html"><img src="http://cesiumjs.org/demos/images/hiroshima/showcase.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/nasa-gibs.html"><img src="http://cesiumjs.org/demos/images/nasa-gibs/Cesium-GIBS1-md.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/fodarEarth.html"><img src="http://cesiumjs.org/demos/images/fodar/fodar_03_md.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/catalonia-spain.html"><img src="http://cesiumjs.org/demos/images/CataloniaSpain/overview_sm.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/woe.html"><img src="http://cesiumjs.org/demos/images/woe.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/2015/03/19/EclipseTracks-Interactive-Solar-Eclipses-with-Cesium/"><img src="http://cesiumjs.org/demos/images/eclipsetracks.png" height="150" 
/></a>&nbsp; <a href="http://cesiumjs.org/demos/divvy.html"><img src="http://cesiumjs.org/demos/images/divvy.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/geo.html"><img src="http://cesiumjs.org/demos/images/geo.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/create.html"><img src="http://cesiumjs.org/demos/images/create.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/cyclingthealps.html"><img src="http://cesiumjs.org/demos/images/cyclingthealps.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/bhuvan.html"><img src="http://cesiumjs.org/demos/images/bhuvan.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/nationalmap.html"><img src="http://cesiumjs.org/demos/images/nationalMapThumb.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/gplates.html"><img src="http://cesiumjs.org/demos/images/GPlates.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/youbeq.html"><img src="http://cesiumjs.org/demos/images/youbeq.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/ign.html"><img src="http://cesiumjs.org/demos/images/ign.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/atovisualizer.html"><img src="http://cesiumjs.org/demos/images/atovisualizer.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/sunshine.html"><img src="http://cesiumjs.org/demos/images/sunshine.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/noradtrackssanta.html"><img src="http://cesiumjs.org/demos/images/noradtrackssanta.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/doarama.html"><img src="http://cesiumjs.org/demos/images/doarama.jpg" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/powdertracks.html"><img src="http://cesiumjs.org/demos/images/powdertracks.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/earthkamexplorer.html"><img 
src="http://cesiumjs.org/demos/images/earthkamexplorer.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/d3.html"><img src="http://cesiumjs.org/demos/images/d3.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/koansys.html"><img src="http://cesiumjs.org/demos/images/koansys.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/subspace.html"><img src="http://cesiumjs.org/demos/images/subspace.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/agsattrack.html"><img src="http://cesiumjs.org/demos/images/agsattrack.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/weblvcsimulationviewer.html"><img src="http://cesiumjs.org/demos/images/weblvcsimulationviewer.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/demos/vega.html"><img src="http://cesiumjs.org/demos/images/vega.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/Cesium/Apps/Sandcastle/index.html"><img src="http://cesiumjs.org/images/Sandcastle.png" height="150" /></a>&nbsp; <a href="http://cesiumjs.org/Cesium/Build/Apps/CesiumViewer/"><img src="http://cesiumjs.org/images/CesiumViewer.png" height="150" /></a>&nbsp; </p>
106.433824
229
0.719378
yue_Hant
0.537732
981e2b3dc307ae5fb13edcf466e270fb83b8647f
69
md
Markdown
README.md
nelsoncash/angular-faded
46e039f2c8663be4c1d9046ce3a9698a7cd57c2c
[ "MIT" ]
null
null
null
README.md
nelsoncash/angular-faded
46e039f2c8663be4c1d9046ce3a9698a7cd57c2c
[ "MIT" ]
null
null
null
README.md
nelsoncash/angular-faded
46e039f2c8663be4c1d9046ce3a9698a7cd57c2c
[ "MIT" ]
null
null
null
# angular-faded

A standalone AngularJS wrapper for the faded plugin.
23
52
0.811594
eng_Latn
0.992243
981ef78698b68aca75a1607fbd65243516bf054e
7,334
md
Markdown
articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-predict.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-predict.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-predict.md
flarocca/azure-docs.es-es
8d69748012641d57ddb2b81a3e1c2d079703ed8d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Score machine learning models with PREDICT
description: Learn how to score machine learning models using the T-SQL PREDICT function in Synapse SQL.
services: synapse-analytics
author: anumjs
manager: craigg
ms.service: synapse-analytics
ms.topic: conceptual
ms.subservice: machine-learning
ms.date: 07/21/2020
ms.author: anjangsh
ms.reviewer: jrasnick
ms.custom: azure-synapse
ms.openlocfilehash: ef56274e0bda3f1a9d494852520a77ecdfc25799
ms.sourcegitcommit: 8a7b82de18d8cba5c2cec078bc921da783a4710e
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 08/28/2020
ms.locfileid: "89048013"
---
# <a name="score-machine-learning-models-with-predict"></a>Score machine learning models with PREDICT

Synapse SQL provides the ability to score machine learning models using the familiar T-SQL language. With T-SQL [PREDICT](https://docs.microsoft.com/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest), you can bring your existing machine learning models trained with historical data and score them within the secure boundaries of your data warehouse. The PREDICT function takes an [ONNX (Open Neural Network Exchange)](https://onnx.ai/) model and data as inputs. This feature eliminates the step of moving valuable data outside the data warehouse for scoring. Its goal is to allow data professionals to deploy machine learning models easily through the familiar T-SQL interface, and to collaborate smoothly with data scientists working with the right framework for their task.

> [!NOTE]
> This feature is not currently supported in SQL on-demand.

The feature requires the model to be trained outside Synapse SQL. After building the model, load it into the data warehouse and score it with the T-SQL PREDICT syntax to gain insights from your data.

![predictoverview](./media/sql-data-warehouse-predict/datawarehouse-overview.png)

## <a name="training-the-model"></a>Training the model

Synapse SQL expects a pre-trained model. Keep the following factors in mind when training a machine learning model that will be used to make predictions in Synapse SQL:

- Synapse SQL only supports models in ONNX format. ONNX is an open-source model format that lets you exchange models between frameworks to enable interoperability. You can convert existing models to ONNX format using frameworks that support it natively or that have converter packages available. For example, the [sklearn-onnx](https://github.com/onnx/sklearn-onnx) package converts scikit-learn models to ONNX. The [ONNX GitHub repository](https://github.com/onnx/tutorials#converting-to-onnx-format) provides a list of supported frameworks and examples.

  If you use [automated ML](https://docs.microsoft.com/azure/machine-learning/concept-automated-ml) for training, make sure to set the *enable_onnx_compatible_models* parameter to TRUE to produce a model in ONNX format. The [Automated Machine Learning notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb) shows an example of using automated ML to create a machine learning model in ONNX format.

- The following data types are supported for input data:
  - int, bigint, real, float
  - char, varchar, nvarchar

- The scoring data must be in the same format as the training data. Complex data types, such as multi-dimensional arrays, are not supported by PREDICT. Therefore, for training, make sure each input of the model corresponds to a single column of the scoring table rather than passing a single array containing all the inputs.

- Make sure the names and data types of the model inputs match the column names and data types of the new prediction data. Visualizing the ONNX model with one of the various open-source tools available online can help with debugging.

## <a name="loading-the-model"></a>Loading the model

The model is stored in a Synapse SQL user table as a hexadecimal string. Additional columns such as ID and description can be added to the model table to identify the model. Use varbinary(max) as the data type for the model column. Below is sample code for a table that can be used to store models:

```sql
-- Sample table schema for storing a model and related data
CREATE TABLE [dbo].[Models]
(
    [Id] [int] IDENTITY(1,1) NOT NULL,
    [Model] [varbinary](max) NULL,
    [Description] [varchar](200) NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    HEAP
)
GO
```

Once the model is converted to a hexadecimal string and the table definition is specified, use the [COPY command](https://docs.microsoft.com/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest) or PolyBase to load the model into the Synapse SQL table. The following code sample uses the COPY command to load the model.

```sql
-- Copy command to load hexadecimal string of the model from Azure Data Lake storage location
COPY INTO [Models] (Model)
FROM '<enter your storage location>'
WITH (
    FILE_TYPE = 'CSV',
    CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<enter your storage key here>')
)
```

## <a name="scoring-the-model"></a>Scoring the model

Once the model and data are loaded in the data warehouse, use the **T-SQL PREDICT** function to score the model. Make sure the new input data is in the same format as the training data used to build the model. T-SQL PREDICT takes two inputs, the model and the new scoring input data, and generates new columns for the output. The model can be specified as a variable, a literal, or a scalar subquery. Use [WITH common_table_expression](https://docs.microsoft.com/sql/t-sql/queries/with-common-table-expression-transact-sql?view=sql-server-ver15) to specify a named result set for the data parameter.

The example below shows a sample query using the prediction function. An additional column named *Score*, with data type *float*, is created containing the prediction results. All the input data columns as well as the output prediction columns can be displayed with the SELECT statement. For more details, see [PREDICT (Transact-SQL)](https://docs.microsoft.com/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest).

```sql
-- Query for ML predictions
SELECT d.*, p.Score
FROM PREDICT(MODEL = (SELECT Model FROM Models WHERE Id = 1),
DATA = dbo.mytable AS d) WITH (Score float) AS p;
```

## <a name="next-steps"></a>Next steps

For more information about the PREDICT function, see [PREDICT (Transact-SQL)](https://docs.microsoft.com/sql/t-sql/queries/predict-transact-sql?view=azure-sqldw-latest).
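The hexadecimal string mentioned in the loading step can be produced with a few lines of plain Python. A minimal sketch, assuming you already have the ONNX model's raw bytes; the `Models` table layout follows the sample schema above, while the helper names are illustrative, not part of the article:

```python
# Sketch: turn raw ONNX model bytes into the hexadecimal literal that the
# sample Models table above expects in its varbinary(max) column.

def model_to_hex_literal(model_bytes: bytes) -> str:
    """Return a T-SQL varbinary literal (0x...) for the raw model bytes."""
    return "0x" + model_bytes.hex().upper()

def insert_statement(model_bytes: bytes, description: str) -> str:
    """Build an INSERT for the sample Models table defined in the article."""
    return (
        "INSERT INTO [dbo].[Models] (Model, Description) "
        f"VALUES ({model_to_hex_literal(model_bytes)}, '{description}');"
    )

if __name__ == "__main__":
    # Stand-in bytes; in practice read them from your exported .onnx file.
    fake_model = b"\x08\x01\x12\x04test"
    print(insert_statement(fake_model, "demo model"))
```

For large models, the COPY command shown above is the better loading path; a direct INSERT like this is mainly useful for quick experiments.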
75.608247
881
0.792064
spa_Latn
0.964421
981f0bd7ef458d7088eb53286c66a15f5c5ca05d
1,281
md
Markdown
README.md
kenpapa/laravel_talkapp_ver3
b18d22214a9dc95bf43e32e7a5ec61b51507fc13
[ "MIT" ]
3
2017-01-24T09:36:29.000Z
2017-03-08T23:13:05.000Z
README.md
kenpapa/laravel_talkapp_ver3
b18d22214a9dc95bf43e32e7a5ec61b51507fc13
[ "MIT" ]
null
null
null
README.md
kenpapa/laravel_talkapp_ver3
b18d22214a9dc95bf43e32e7a5ec61b51507fc13
[ "MIT" ]
null
null
null
# Laravel group communication app ver3

This repository contains one of the application source trees built in the following Kindle e-books:

**Let's build a web application**
**(Bootstrap / Laravel / MySQL hands-on edition)**

- Building the application, part 1 (up to screen transitions)
- Building the application, part 2 (up to basic behaviour)
- Building the application, part 3 (to completion)
- **Building the application, part 4 (derived app) <-- this source code**

## Environment

Operation has been confirmed in the following environment:

OS: Ubuntu 16.04
Bootstrap: 3.3.7
MySQL: 5.7.16
PHP: 7.1.0
Laravel: 5.3.28

## Installation

[0] Prepare an environment in which the app can run. (See the book's appendix for details on setting up the environment.)

[1] Press the "Clone or download" button on the GitHub repository page and download the archive.

[2] Extract the archive and move into the laravel_talkapp_ver3-master directory.

[3] Run the following command to prepare the vendor directory:

    composer install

[4] Copy .env.example to create the .env file.

[5] Prepare a database and user in MySQL and set that information in .env. For example, if you prepared them as:

    create database talkapp DEFAULT CHARACTER SET utf8 COLLATE utf8_general_ci;
    grant all privileges on talkapp.* to ken@localhost identified by 'pass123';

then set .env as follows:

    DB_DATABASE=talkapp
    DB_USERNAME=ken
    DB_PASSWORD=pass123

[6] Run the following to set the application key in APP_KEY of the .env file:

    php artisan key:generate

[7] Run the following command to create the required database tables:

    php artisan migrate

[8] Start the server and access http://localhost:8000:

    php artisan serve

## License

The Laravel framework is released under the MIT license, and this program also adopts the MIT license. See the LICENSE file for license details.
21
98
0.776737
yue_Hant
0.540212
981f3cd9f3230ef82986d055afeceab901e81ef3
2,024
markdown
Markdown
data/posts/2016-10-26-dfgdglx-2016-keepcontact.markdown
smithereen/2016DevFestLisbon
4f46a3e2088e767002b8607518027a6099c0f60a
[ "MIT" ]
null
null
null
data/posts/2016-10-26-dfgdglx-2016-keepcontact.markdown
smithereen/2016DevFestLisbon
4f46a3e2088e767002b8607518027a6099c0f60a
[ "MIT" ]
null
null
null
data/posts/2016-10-26-dfgdglx-2016-keepcontact.markdown
smithereen/2016DevFestLisbon
4f46a3e2088e767002b8607518027a6099c0f60a
[ "MIT" ]
1
2019-08-05T23:38:43.000Z
2019-08-05T23:38:43.000Z
**Keep in Contact**

Don't be a stranger; stay in touch with everyone. Here are the contacts of our speakers and the slides they kindly shared with us at DevFest Lisbon 2016.

Name | Talk | Contact
------------ | ------------- | -------------
Sérgio Almeida | _"Development Wars: Sabotage Your Project"_ - [url](https://docs.google.com/presentation/d/1rDMQbrZR_zb4-HpU65N1CKAMb_YhgthuyKBX2aByApc/edit?usp=sharing) | [LinkedIn](https://pt.linkedin.com/in/sergioralmeida/en)
João Ventura | _"Reuse Python code in native Android applications"_ - [url](https://drive.google.com/file/d/0B68-KZmKS9o8XzlPVXg3X0R4Y2I0QnpMTm5CcFp2NkhoQ1Jz/view?usp=sharing) |
Bruno Oliveira | _"Create your Gradle plugin using Kotlin"_ | [Twitter](https://twitter.com/_bmoliveira)
Fabio Carballo | _"Using Spek to test with Kotlin"_ - [url](https://speakerdeck.com/fabiocarballo/testing-with-kotlin-using-spek-and-mockito) | [Twitter](https://twitter.com/fabiocarballo)
Gonçalo Silva | _"Building Todoist: past, present and future"_ - [url](https://speakerdeck.com/goncalossilva/building-todoist-past-present-and-future) | [Twitter](https://twitter.com/goncalossilva)
Carlos Mota | _"Perception of Speed"_ | [Twitter](https://twitter.com/cafonsomota)
Pedro Vicente | _"Android's Warp pipe. Really. Like Super Mario"_ | [Twitter](https://twitter.com/neteinstein)
Ivan Kutil | _"Business Progressive Web App with Firebase, Google Apps Script and Google Maps API"_ | [Twitter](https://twitter.com/ivankutil)
Francisco Franco | _"From the bottom to the top: Android Kernel, Userspace and Mutative Design"_ | [Twitter](https://twitter.com/franciscof_1990)

And don't forget about the awesome folks from [HackerSchool](http://hackerschool.io/) and GDGLx! Follow us on [Facebook](https://www.facebook.com/GDGLisbon/), [Twitter](https://twitter.com/GDGLisbon), [LinkedIn](https://www.linkedin.com/groups/8487369) and [Google+](https://plus.google.com/+GDG-Lisbon) for all the latest updates.

Want to just chat? Join us on Telegram [here](https://telegram.me/gdglisbon).
92
234
0.756917
yue_Hant
0.483893
98201c7ea18a6bfb0483a054ab5ad7e16ba1e93d
3,408
md
Markdown
README.md
ixjf/msi-rgb
b178d572678e207f04da580401a28da93b28a8c8
[ "0BSD" ]
null
null
null
README.md
ixjf/msi-rgb
b178d572678e207f04da580401a28da93b28a8c8
[ "0BSD" ]
null
null
null
README.md
ixjf/msi-rgb
b178d572678e207f04da580401a28da93b28a8c8
[ "0BSD" ]
null
null
null
Utility for controlling the RGB header on MSI boards

[How this utility came to be](http://kazlauskas.me/entries/i-reverse-engineered-a-motherboard.html)

This utility not only works on any Linux system you find around, it is also much more flexible than the 7 colours of MSI's own Gaming App. Furthermore, unlike MSI's utility, this does not make your system vulnerable to anybody who cares to fiddle around with the system.

* Linux (/dev/port, might work on WSL?) or FreeBSD (/dev/io);
* Only MSI motherboards with the NCT6795D super I/O chip;
* Run a recent version of sensors-detect to check if you have this chip;
* No warranty whatsoever (read the license);
* If you find your board misbehaving, try clearing CMOS;

# Working boards

This is a list of reportedly working motherboards. If the tool works on your motherboard and it is not listed here, consider filing an issue or writing me an email and I'll add it here.

* B350 MORTAR ARCTIC
* B350 TOMAHAWK
* H270 MORTAR ARCTIC
* H270 TOMAHAWK ARCTIC
* X470 GAMING PRO
* X470 GAMING PLUS
* Z270 SLI PLUS
* Z370 MORTAR

If your board is not working, and your motherboard is not [on this list](https://github.com/nagisa/msi-rgb/issues?q=is%3Aissue+is%3Aopen+label%3Aboard), a new issue would be greatly appreciated.

# How to compile and run

To compile this project you'll need rustc and cargo. Get them from your package manager or [here](https://www.rust-lang.org/en-US/install.html). Then:

```
git clone https://github.com/nagisa/msi-rgb
cd msi-rgb
cargo build --release
```

You'll need root to run this program:

```
sudo ./target/release/msi-rgb 00000000 FFFFFFFF 00000000 # for green
```

Each hexadecimal number represents one colour channel as a sequence *in time*, one byte per step, so four colour changes per cycle.

```
sudo ./target/release/msi-rgb FF000000 00FF0000 0000FF00 # this makes red, then green, then blue, then off, then red again, etc.
```

Run the following for more options:

```
./target/release/msi-rgb -h
```

# Examples

## Heartbeat

```
sudo ./target/release/msi-rgb 206487a9 206487a9 10325476 -ir -ig -ib -d 5
```

[![animation of pulse](https://thumbs.gfycat.com/BlueWhichAntbear-size_restricted.gif)](https://gfycat.com/BlueWhichAntbear)

## Police

```
sudo ./target/release/msi-rgb -d15 FF00FF00 0 00FF00FF
```

[![animation of police](https://thumbs.gfycat.com/RemoteChiefBobolink-size_restricted.gif)](https://gfycat.com/RemoteChiefBobolink)

## Happy Easter

[From colourlovers](http://www.colourlovers.com/palette/4479254/Happy-Easter-2017!)

```
sudo ./target/release/msi-rgb 58e01c0d 504fdcb9 e4aa75eb --blink 2 -d 32
```

[![animation of happyeaster](https://thumbs.gfycat.com/DirectBleakBuzzard-size_restricted.gif)](https://gfycat.com/DirectBleakBuzzard)

## Hue wheel (t HUE, 0.9 SATURATION, 1.0 VALUE)

![animation of hue wheel](https://thumbs.gfycat.com/ViciousGreenBittern-size_restricted.gif)

```
echo -e "import colorsys, time, subprocess\ni=0\nwhile True:\n    subprocess.call(['target/release/msi-rgb', '-d511'] + map(lambda x: ('{0:01x}'.format(int(15*x)))*8, colorsys.hsv_to_rgb((i % 96.0) / 96.0, 0.9, 1)))\n    time.sleep(0.1)\n    i+=1" | sudo python -
```

# Implementation

For implementation details, including the registers used by the super I/O chip and their meanings, see the comment in the `src/main.rs` file.

# License

Code is licensed under the permissive ISC license. If you create derivative works and/or nice RGB schemes, I would love to see them :)
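The per-channel time-sequence format described above can also be generated programmatically. A small Python sketch; the frame-to-byte layout is my reading of the README's format (one byte per time step, most recent example `FF000000 00FF0000 0000FF00`), so treat it as an assumption:

```python
# Sketch: build the three 8-hex-digit per-channel arguments that msi-rgb
# takes on the command line, from a list of (r, g, b) colour frames.

def channel_args(frames):
    """frames: list of (r, g, b) tuples, one per time step (up to 4).

    Returns (red_arg, green_arg, blue_arg), each an 8-hex-digit string
    padded with zeros if fewer than 4 frames are given.
    """
    def pack(values):
        return "".join(f"{v:02X}" for v in values).ljust(8, "0")
    r, g, b = zip(*frames)
    return pack(r), pack(g), pack(b)

# red -> green -> blue -> off, as in the README example above
args = channel_args([(0xFF, 0, 0), (0, 0xFF, 0), (0, 0, 0xFF), (0, 0, 0)])
print(args)  # ('FF000000', '00FF0000', '0000FF00')
```

The resulting strings can be passed straight to `sudo ./target/release/msi-rgb`, the same way the hue-wheel example above shells out from Python.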
31.266055
257
0.742958
eng_Latn
0.907897
98205e634cf60f6f60939e08ad1c3f30126c68f8
1,561
md
Markdown
docs/2014/relational-databases/errors-events/mssqlserver-1101-database-engine-error.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/relational-databases/errors-events/mssqlserver-1101-database-engine-error.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/relational-databases/errors-events/mssqlserver-1101-database-engine-error.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: MSSQLSERVER_1101 | Microsoft Docs
ms.custom: ''
ms.date: 03/06/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: supportability
ms.topic: conceptual
helpviewer_keywords:
- 1101 (Database Engine error)
ms.assetid: d63b67d5-59f5-4f77-904e-5ba67f2dd850
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: d4468e85f8170ecb6b23abf5af8ee3a114a6bef3
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 06/15/2019
ms.locfileid: "62870161"
---
# <a name="mssqlserver1101"></a>MSSQLSERVER_1101

## <a name="details"></a>Details

|||
|-|-|
|Product Name|SQL Server|
|Event ID|1101|
|Event Source|MSSQLSERVER|
|Component|SQLEngine|
|Symbolic Name|NOALLOCPG|
|Message Text|Could not allocate a new page for database '%.*ls' because of insufficient disk space in filegroup '%.\*ls'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.|

## <a name="explanation"></a>Explanation

No disk space is available in a filegroup.

## <a name="user-action"></a>User action

The following actions may make space available in the filegroup:

- Turn on AUTOGROW.
- Add more files to the filegroup.
- Free disk space by dropping unnecessary indexes or tables in the filegroup.
32.520833
346
0.748879
por_Latn
0.988713
98208b7f799f9c09071d78f89a556ce33c6606d3
3,707
md
Markdown
_posts/2020-04-15-build-openjdk.md
liuzhengyang/bytejava
b0158f2a1b24f01f578fb03d8b06b9b1c2f8d2b8
[ "Apache-2.0" ]
null
null
null
_posts/2020-04-15-build-openjdk.md
liuzhengyang/bytejava
b0158f2a1b24f01f578fb03d8b06b9b1c2f8d2b8
[ "Apache-2.0" ]
null
null
null
_posts/2020-04-15-build-openjdk.md
liuzhengyang/bytejava
b0158f2a1b24f01f578fb03d8b06b9b1c2f8d2b8
[ "Apache-2.0" ]
null
null
null
---
title: A hands-on guide to building, debugging, and developing the Java virtual machine
date: 2020-04-06 10:34:16
tags: JVM
categories:
- JVM
- Java
---

## Purpose

The Java virtual machine is the platform Java developers use most often, and understanding how it works helps us become better developers and solve problems faster. Most people learn about the JVM by reading books or articles, which means the knowledge passes through two rounds of interpretation and may arrive distorted, so why not investigate the implementation yourself? Knowing how your everyday tools work lets you use them better, just as a race driver who understands tires drives better and a chef who knows their pans and knives cooks better. The most direct way to understand the JVM is to build it, debug it, and develop it!

<!-- more -->

## Downloading the source

The openjdk source lives in [mercurial](http://hg.openjdk.java.net/jdk/), which is slow to clone. Use the mirror on GitHub instead; here we pick the fairly recent jdk14 branch. The repository is large, so the clone takes a while.

```
git clone https://github.com/openjdk/jdk14u
```

## Building

To build the virtual machine and debug it, we first need to compile the source.

### Build dependencies

The build requires Xcode; search for it in the App Store and install it.

Building the jdk also requires a slightly older jdk as the boot jdk. For jdk14, first download and install jdk13 from the [jdk download page](https://www.oracle.com/java/technologies/javase-jdk13-downloads.html).

Then install the remaining build dependencies:

```
brew install autoconf freetype ccache
```

### Running the build

```
# first cd into the source tree
cd jdk14u
# run configure
bash configure --with-debug-level=slowdebug --enable-dtrace --with-jvm-variants=server --with-target-bits=64 --enable-ccache --with-num-cores=8 --with-memory-size=8000 --disable-warnings-as-errors
# run make; this step takes a while
make all
```

make succeeded:

![openjdkmake](/images/openjdkmake.png)

Verify the freshly built jdk:

```
./build/macosx-x86_64-server-slowdebug/jdk/bin/java -version
```

![openjdkimage](/images/openjdkversion.png)

## Importing into an IDE

The openjdk tree contains both Java (the jdk's jars) and C++ (the hotspot virtual machine); this post focuses on the hotspot part.

A modern IDE is a great tool for reading, developing, and debugging code. I recommend [CLion](https://www.jetbrains.com/clion/) from JetBrains (the company behind IntelliJ IDEA).

After opening CLion, choose File -> New CMake Project from Sources...
Select the src/hotspot directory under jdk14u and click OK. CLion generates the CMakeLists.txt used by the CMake project and builds the code index and symbol tables; wait for loading to finish.

Once loading is done, click the hotspot|Debug selector near the top right of CLion and add a new Configuration.

![clionconfiguration](/images/addnewconfiguration.png)

Click Configure Custom Build Targets, then click Add target.

![addbuildtarget](/images/addbuildtarget.png)

Set the name to build openjdk, click the ... next to Build to create External Tools, then click the + in the lower left to create a Tool: set name to make, Program to make, and Working directory to the path of the downloaded openjdk source tree, then click OK and save.

![addbuildtool](/images/addbuildtool.png)

![customtargetdone](/images/customtargetdone.png)

Then, in the Run/Debug Configurations page, choose the target just created as Target. For Executable, pick the java binary produced by the build, i.e. jdk14u/build/macosx-x86_64-server-slowdebug/jdk/bin/java two directory levels up. For Program arguments, use -version for now. Finally click Apply and OK to save.

![application](/images/CustomBuildApplication.png)

Now click debug:

![startdebug](/images/startdebug.png)

After stepping through a few breakpoints, you will see the familiar java -version output:

![javaversion](/images/javaversiondebug.png)

## Fixing the IDE's flood of red warnings

Open a few cpp files and you will find a large number of red warnings, enough to bother even someone without OCD. The bigger problem is that code navigation doesn't work, which makes reading the code much harder, so it is worth fixing.

The main cause is include path issues. Edit CMakeLists.txt, add the following lines, and click Reload changes; most of the code then resolves correctly. If you hit other cases, fix them the same way. Even for code still highlighted in red, most of it remains navigable.

```
include_directories(share)
include_directories(../java.base/unix/native/include)
include_directories(../java.base/share/native/include)
include_directories(../../build/macosx-x86_64-server-slowdebug/jdk/include)
include_directories(../../build/macosx-x86_64-server-slowdebug/hotspot/variant-server/gensrc)
include_directories(../../build/macosx-x86_64-server-slowdebug/hotspot/variant-server/gensrc/jvmtifiles)
```

![CMakeListsModify](/images/CMakeListsModify.png)

## Modifying hotspot code

Here we make a simple change to the code to verify the workflow. Find the code behind java -version, abstract_vm_version.cpp, and use the (still unfamiliar to me) C++ language to print a Hello World. Then click the debug button again.

![debugcodemodify](/images/vmversion.png)

## Other issues

Depending on the jdk version and the build host environment, the steps above may run into other problems, but the approach to locating and solving them stays the same.

## Summary

The above is a simple workflow for compiling, debugging, and developing openjdk. Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime: with these methods you can more easily inspect the implementation and troubleshoot problems.
One final reminder: stay grounded, don't get overly absorbed in low-level implementation details, and don't blindly idolize virtual machine development as some unfathomable craft. To quote [Wang Yin](http://www.yinwang.org/blog-cn/2019/12/24/compilers):

> Whenever someone tells me that compilers are profound and mysterious, something they admire but feel they could never reach, I offer an analogy: building a compiler is like forging kitchen knives. You can forge excellent knives, yet in the end you are still a blacksmith. The blacksmith doesn't know how to use that knife to cook the varied, delightful, Michelin-level dishes a chef can, because that is the chef's job. Whether to cook or to forge is your own choice, and neither is nobler than the other.
30.891667
235
0.813056
yue_Hant
0.377228
9820bec45b0d73d92a2b9a885cd492d333e0183b
4,596
md
Markdown
RELEASE_NOTES.md
FoothillSolutions/Npgsql.FSharp.Analyzer
843e08c0ffd09bd7df517c5203c39e2f8f710b8c
[ "MIT" ]
133
2020-02-19T20:17:54.000Z
2022-03-08T17:07:44.000Z
RELEASE_NOTES.md
TheAngryByrd/Npgsql.FSharp.Analyzer
798ab4fe989690688d61ed1196ca79bb4e1aca37
[ "MIT" ]
34
2020-04-17T04:02:30.000Z
2022-01-10T16:42:55.000Z
RELEASE_NOTES.md
TheAngryByrd/Npgsql.FSharp.Analyzer
798ab4fe989690688d61ed1196ca79bb4e1aca37
[ "MIT" ]
8
2020-02-20T02:15:23.000Z
2021-04-24T07:42:36.000Z
### 3.26.1 - 2021-05-06
* Fix nuget packaging

### 3.26.0 - 2021-05-06
* Allow the SQL parser to handle comments and update to latest FSharp Analyzers SDK

### 3.25.0 - 2021-04-26
* Improvements to the SQL parser

### 3.24.0 - 2021-04-09
* fixes dynamically applied parameters with more complex expressions

### 3.23.0 - 2021-03-26
* Detect typed let bindings

### 3.22.1 - 2021-02-09
* Let the analyzer continue its work when it comes across enum types

### 3.22.0 - 2020-12-08
* Detect queries within sequential expressions or statements

### 3.21.0 - 2020-12-08
* Detect queries within lambda expressions wrapped in single case unions

### 3.20.0 - 2020-12-08
* Correctly retain selected column non-nullability when cast or aliased to another type

### 3.18.0 - 2020-12-06
* Analyze SQL blocks from within lambda expressions

### 3.17.0 - 2020-12-06
* Support for datetimeOffset and datetimeOffsetOrNone when reading columns of type timestamptz

### 3.16.0 - 2020-12-06
* Analyze top level do expressions

### 3.15.0 - 2020-09-15
* Analyze transaction parameter sets
* Allow for literal queries on transactions

### 3.14.0 - 2020-09-07
* Analyze transaction queries

### 3.13.0 - 2020-09-04
* The ability to suppress warning messages generated by the analyzer

### 3.12.1 - 2020-08-31
* Remove NpgsqlFSharpAnalyzer.Core nuget package reference from the analyzer

### 3.12.0 - 2020-08-29
* Parameter nullability inference for parsable queries
* Detecting the missing columns which are required for INSERT queries
* Better error messages when reading from a result set which doesn't return any columns

### 3.11.0 - 2020-08-18
* Even better error messages that include whether types were arrays or not

#### 3.10.0 - 2020-08-18
* Better error messages when showing the possible functions to use.
* Warning when using Sql.execute and the query doesn't return a result set
* Support for text int and uuid arrays both when reading columns and writing parameters

#### 3.9.0 - 2020-07-19
* Updated FSharp.Analyzers.SDK to 0.5.0

#### 3.8.0 - 2020-06-26
* Trim whitespace from parameter names
* Fix aggressive syntactic matching

#### 3.7.0 - 2020-05-19
* Account for parameters of type `jsonb` and provide proper type mismatch error.

#### 3.6.0 - 2020-05-19
* Add Sql.executeRow(Async) and Sql.iter(Async) to the analysis

#### 3.5.0 - 2020-05-19
* Expand syntactic analysis to include searching through (async) computation expressions

#### 3.4.0 - 2020-05-19
* Search through nested modules and nested recursively
* Configure the connection string of the analyzer via a local file

#### 3.3.0 - 2020-04-17
* Update FSharp.Analyzers SDK 0.4.0 -> 0.4.1

#### 3.2.0 - 2020-03-05
* Update FSharp.Analyzers SDK 0.3.1 -> 0.4.0 with named analyzers.

#### 3.1.0 - 2020-03-05
* Update FSharp.Analyzers SDK and compiler services to align types.

#### 3.0.0 - 2020-02-26
* Update for Npgsql.FSharp 3.x to be able to analyze column reading functions as `{type}OrNone` instead of `{type}OrNull`

#### 2.0.0 - 2020-02-24
* Update for Npgsql.FSharp 2.x
* Detect incorrect parameter type with code fixes
* Detect redundant parameters
* Detect nullable types and suggest using proper functions that handle null values

#### 1.9.0 - 2020-02-20
* Optimize number of database calls by reusing the database schema on each invocation
* Detect redundant query parameters in a clear warning message
* Provide code fixes and better suggestions for mismatched query parameters
* Remove duplicate messages about missing parameters
* Refactor and simplify parts of `InformationSchema` and `SqlAnalysis`

#### 1.8.0 - 2020-02-19
* Provide column name fix suggestions when reading an unknown column

#### 1.7.0 - 2020-02-19
* Read parameters as soon as they are written and implement proper code fixes.
#### 1.6.0 - 2020-02-19
* Read queries as soon as they are written without expecting `Sql.executeReader`

#### 1.5.0 - 2020-02-19
* Improved syntactic F# analysis when the `Sql` module is used in combination with other generic functions

#### 1.4.0 - 2020-02-18
* Enable reading `[<Literal>]` queries from the same module and add docs

#### 1.3.0 - 2020-02-18
* Detect type-mismatch when reading columns of type 'bool' from the database. Simplify parameter mismatch when there is only one parameter.

#### 1.2.0 - 2020-02-18
* Remove warning when there is no query provided (to avoid making a bother-ware analyzer)

#### 1.1.0 - 2020-02-17
* Proper packaging that includes third-party dependencies required for dynamic loading

#### 1.0.0 - 2020-02-17
* Initial release with working SQL analysis including syntax and type-checking
34.298507
139
0.734769
eng_Latn
0.98777
9822e5c85d2c237d53284ea01ba115f627775c16
5,238
md
Markdown
docs/installer/actions/uninstall-installation.md
Bhaskers-Blu-Org2/AdvocacyPlatform
eb953cca126fa8bafce0c4ff3a30108612a158fd
[ "MIT" ]
9
2019-07-01T05:12:22.000Z
2022-03-06T22:35:31.000Z
docs/installer/actions/uninstall-installation.md
microsoft/AdvocacyPlatform
eb953cca126fa8bafce0c4ff3a30108612a158fd
[ "MIT" ]
null
null
null
docs/installer/actions/uninstall-installation.md
microsoft/AdvocacyPlatform
eb953cca126fa8bafce0c4ff3a30108612a158fd
[ "MIT" ]
6
2019-11-07T00:03:55.000Z
2020-12-12T02:07:11.000Z
# Uninstall

**_Note:_** To view additional information regarding ongoing operations at any time, click on the checkbox next to **Show Details** in the lower left corner.

<img src="../../media/installer/user-guide/installer/installer-show-details.png" style="width: 500px;">

### I. Dependencies

#### Checking for Dependencies

In order to connect to and interact with the required Microsoft services, the installer needs to have consent for the indicated API permissions granted for the application in your tenant. One of the actions taken by the installer, registering the Function App application with Azure AD, needs an API permission requiring tenant administrator consent. If you are not a tenant administrator, please forward the link on this page to your tenant administrator and ask for the application to be granted consent for the indicated API permissions.

If you have not granted consent to this application before, click on *this link* in the description to open a browser frame in the installer and respond appropriately to the prompts. If you have previously granted consent, click on the **Next** button and continue to [Components to Uninstall](#II.-Components-to-Uninstall).

<img src="../../media/installer/user-guide/installer/installer-application-consent-page.png" style="width: 500px;"><br />
<br />

Click on the **Accept** button.

<img src="../../media/installer/user-guide/installer/installer-grant-consent.png" style="width: 500px;">

The wizard should automatically navigate to the next screen. If you encounter any issues, share the resulting error message with your tenant administrator.

### II. Components to Uninstall

The **Uninstall Options** page describes the components to uninstall. Select the components you want to uninstall (leave all selected to remove everything) and click on the **Next** button to continue.

|Component Name|Description|
|-|-|
|Subscription|This represents the subscription a resource group to remove exists in.
**No Azure subscription is removed as a part of this process.**|
|Resource Group|The resource group to remove containing the Advocacy Platform components. **The entire resource group will be removed as part of this process.**|
|Environment Name|The Dynamics 365 CRM Organization, Common Data Services database, and PowerApps environment to remove.|
|App Registration Name|The application registration to remove from Azure Active Directory.|
|LUIS Application Name|The LUIS application to remove.|
|LUIS Authoring Region|The region of your LUIS account.|
|LUIS Authoring Key|The authoring key required to make calls to the LUIS Authoring API to remove the application.|
<br />

<img src="../../media/installer/user-guide/uninstall/uninstall-feature-selection.png" style="width: 500px;">

### III. Confirmation

Before component removal begins, the installer will present a list of all of the components being removed. If you consent to the removal of the listed components, click on the **Next** button to navigate to the next page and begin the removal process.

<img src="../../media/installer/user-guide/uninstall/uninstall-confirm.png" style="width: 500px;">

### IV. Uninstalling

Now you just need to sit back and wait for the removal process to complete. If any errors occur, they will be visible in the output log in the middle-right of the installation wizard. After the removal process completes, the installer will automatically navigate to the final page.

<img src="../../media/installer/user-guide/uninstall/uninstall-remove-azure-resource-group.png" style="width: 500px;">
<br />
<br />
<img src="../../media/installer/user-guide/uninstall/uninstall-azportal-resource-group-deleting.png" style="width: 500px;">

### V. Uninstall Completed

The final page of the installer will let you know if the removal process was successful or not. If the removal process was successful, please delete your saved installation configuration file as it will no longer be valid.

<img src="../../media/installer/user-guide/uninstall/uninstall-completed.png" style="width: 500px;">

### VI. Uninstallation Validation

To validate the removal process, you will want to navigate to the respective service portals for each of these components and ensure they no longer exist.

#### Azure

The Azure resource group should no longer exist in your Azure subscription.

<img src="../../media/installer/user-guide/uninstall/uninstall-azportal-resource-group-deleted.png" style="width: 500px;">

#### Azure AD Application Registration

The application registration should no longer exist in Azure Active Directory for your tenant.

<img src="../../media/installer/user-guide/uninstall/uninstall-azportal-app-registration-deleted.png" style="width: 500px;">

#### LUIS Application

The LUIS application should no longer exist in your LUIS account.

<img src="../../media/installer/user-guide/uninstall/uninstall-luisportal-app-deleted.png" style="width: 500px;">

#### Power Apps Environment\Common Data Services Database\Dynamics 365 CRM Organization

The Dynamics 365 CRM Organization, Common Data Services database, and PowerApps environment should no longer exist.
65.475
723
0.781023
eng_Latn
0.967706
98230b5b3eabe3a67db8c97f304c38b75050ee05
1,672
md
Markdown
docs/ctf_writeups/FwordCTF_2021/final_check.cpp.md
IxZZZ/IxZZZ.github.io
5b2a8b689d57dc062ed93d5a451407c13175f609
[ "MIT" ]
null
null
null
docs/ctf_writeups/FwordCTF_2021/final_check.cpp.md
IxZZZ/IxZZZ.github.io
5b2a8b689d57dc062ed93d5a451407c13175f609
[ "MIT" ]
null
null
null
docs/ctf_writeups/FwordCTF_2021/final_check.cpp.md
IxZZZ/IxZZZ.github.io
5b2a8b689d57dc062ed93d5a451407c13175f609
[ "MIT" ]
null
null
null
```c int __fastcall final_check(__int64 a1, void *a2, void *a3) { int result; // eax __int64 v4; // [rsp+0h] [rbp-80h] BYREF __int16 v5; // [rsp+36h] [rbp-4Ah] BYREF SIZE_T NumberOfBytesRead; // [rsp+38h] [rbp-48h] BYREF char v7[2]; // [rsp+46h] [rbp-3Ah] BYREF char v8[2]; // [rsp+48h] [rbp-38h] BYREF __int16 Buffer; // [rsp+4Ah] [rbp-36h] BYREF BOOL v10; // [rsp+4Ch] [rbp-34h] struct _CONTEXT Context; // [rsp+50h] [rbp-30h] BYREF int v12; // [rsp+52Ch] [rbp+4ACh] memset(&v4 + 10, 0, 0x4D0ui64); Context.ContextFlags = 1048579; v10 = GetThreadContext(a3, &Context); if ( !v10 ) sub_4033E0("ret", "nano.cc", 88i64); Buffer = 0; v8[0] = -112; v8[1] = -112; v7[0] = -61; v7[1] = -112; NumberOfBytesRead = 0i64; v10 = ReadProcessMemory(a2, (LPCVOID)Context.Rip, &Buffer, 2ui64, &NumberOfBytesRead); if ( !v10 ) sub_4033E0("ret", "nano.cc", 97i64); v5 = 0; result = (unsigned __int8)Buffer; if ( (_BYTE)Buffer == 15 ) { result = HIBYTE(Buffer); if ( HIBYTE(Buffer) == 11 ) { LOBYTE(v5) = Context.R12; v12 = sub_401584((unsigned __int8 *)&v5); if ( v12 == Context.R11 && Context.R13 == 1 ) ++dword_419030; if ( dword_419030 == 39 ) { sub_40F190("Correct Flag :)\n"); v10 = WriteProcessMemory(a2, (LPVOID)Context.Rip, v7, 2ui64, &NumberOfBytesRead); SetThreadContext(a3, &Context); } result = dword_419030; if ( dword_419030 != 39 ) { v10 = WriteProcessMemory(a2, (LPVOID)Context.Rip, v8, 2ui64, &NumberOfBytesRead); result = SetThreadContext(a3, &Context); } } } return result; } ```
29.333333
89
0.587919
yue_Hant
0.634826
982317b12ad408e672e313f6dd2e9a193d89f54a
271
md
Markdown
api_docs/v1/FunnelStep.md
ubisoft/datadog-api-client-java
3ef6f78cf1ce1a8041b7230eda085b558edf869c
[ "Apache-2.0" ]
null
null
null
api_docs/v1/FunnelStep.md
ubisoft/datadog-api-client-java
3ef6f78cf1ce1a8041b7230eda085b558edf869c
[ "Apache-2.0" ]
null
null
null
api_docs/v1/FunnelStep.md
ubisoft/datadog-api-client-java
3ef6f78cf1ce1a8041b7230eda085b558edf869c
[ "Apache-2.0" ]
null
null
null
# FunnelStep

The funnel step.

## Properties

| Name      | Type       | Description            | Notes |
| --------- | ---------- | ---------------------- | ----- |
| **facet** | **String** | The facet of the step. |       |
| **value** | **String** | The value of the step. |       |
24.636364
59
0.416974
eng_Latn
0.721054
982322c8ef1a5b22f7c68ff40ff426395818c788
8,166
md
Markdown
Standard/index.md
tyued/oabasedata
c177d1acaab288443bdf616fb23a91af08b3ae64
[ "Apache-1.1" ]
null
null
null
Standard/index.md
tyued/oabasedata
c177d1acaab288443bdf616fb23a91af08b3ae64
[ "Apache-1.1" ]
null
null
null
Standard/index.md
tyued/oabasedata
c177d1acaab288443bdf616fb23a91af08b3ae64
[ "Apache-1.1" ]
null
null
null
## Main project directory structure

```shell
├── build                      // build scripts
├── config                     // build configuration
├── src                        // source code
│   ├── api                    // all API requests
│   ├── assets                 // themes, fonts, and other static assets
│   ├── components             // global shared components
│   ├── directive              // global directives
│   ├── filtres                // global filters
│   ├── mock                   // mock data
│   ├── router                 // routing
│   ├── store                  // global store management
│   ├── styles                 // global styles
│   ├── utils                  // global utility functions
│   ├── view                   // views
│   ├── App.vue                // entry page
│   └── main.js                // entry point: loads components, initialization, etc.
├── static                     // third-party assets excluded from bundling
│   ├── jquery
│   └── Tinymce                // rich text editor
├── .babelrc                   // babel-loader configuration
├── eslintrc.js                // eslint configuration
├── .gitignore                 // git ignore rules
├── favicon.ico                // favicon
├── index.html                 // html template
└── package.json               // package.json
```

## State management

Only the user and app configuration state is kept globally in vuex; all other data is managed by each business page itself.

## Router management

Each application project gets its own route file, and all of them are imported into router/index.js.
Route file names must match the corresponding project folder names under view, with a Chinese description added (for example: znpk -- intelligent exam scheduling).

## views directory structure

```shell
views
├── admin                         // base data module
│   ├── acadyear                  // academic year maintenance
│   ├── campus                    // campus management
│   ├── class                     // class maintenance
│   ├── course                    // course maintenance
│   ├── gataLog                   // operation log management
│   ├── grade                     // grade maintenance
│   ├── group                     // role permission management
│   ├── groupType                 // role type management
│   ├── major                     // major maintenance
│   ├── menu                      // menu management
│   ├── place                     // venue maintenance
│   ├── school                    // school maintenance
│   ├── specialstu                // (unclear whether this is still needed)
│   ├── specialType               // special-talent student types
│   ├── student                   // student maintenance
│   ├── teach                     // teacher management
│   ├── teachclass                // teacher course assignment
│   ├── teachergroup              // teacher group maintenance
│   ├── term                      // term maintenance
│   ├── unit                      // (unclear whether this is still needed)
│   ├── user                      // user management
│   └── index.vue
├── audit                         // approval management (application)
│   ├── approvalprocess           // approval workflow
│   │ ├── index                   // approval workflow home
│   │ ├── audittemplate           // create a new approval
│   │ ├── selectform              // select a form
│   │ ├── spcx                    // approval history
│   │ ├── szspbz                  // configure approval workflow steps
│   │ └── ChooseMember            // choose members (component)
│   ├── holidaymanager            // holiday management (approval management home)
│   └── mobile                    // approval requests
│   ├── spindex                   // approval request home
│   ├── spsq                      // create a new approval
│   └── spslmx                    // approval details
├── auth                          // service management
│   └── service                   // service permission management
├── charts                        // Echarts-based charts (left over from the open-source template; consider removing)
├── components                    // shared component library (currently all from the template; unused components can be removed)
├── dashboard                     // platform home page
├── dkgl                          // (empty page; suggest removing)
├── errorlog                      // error capture page (currently unclear whether it is useful)
├── error                         // 404
401 error pages, jumped to after an error occurs
├── introduction                  // introduction? (empty page; suggest removing)
├── ksgl                          // exam management (application)
│   ├── kslx                      // exam types
│   └── kswh                      // exam maintenance
│   ├── ksfxwh                    // subject sub-item settings (component)
│   ├── kssz                      // exam class-selection settings (component)
│   ├── kstj                      // headcount statistics (component)
│   ├── index                     // exam maintenance
│   └── top                       // exam settings
├── layout                        // the overall frame of the system: header, sidebar, breadcrumbs, main content, etc.
├── login                         // login module; extensible, e.g. third-party login
├── main                          // home page after signing in to the platform
├── maintenance                   // student-status management (application)
│   └── graduateEnquiries         // graduate lookup
├── monitor                       // monitoring module (empty page; suggest removing)
├── newstudent                    // new-student management (application)
│   ├── autogroup                 // automatic class-assignment page
│   ├── groupconfirm              // class-assignment confirmation
│   ├── grouphistory              // class-assignment history
│   ├── groupresult               // class-assignment results
│   └── innewstudent              // new-student data entry
├── oasysys                       // OA system demo pages
├── permission                    // permission switching (purpose currently unclear; may be removable)
├── pkxt                          // course scheduling system (application)
│   └── arrangingTask             // scheduling tasks
│   ├── ckkb                      // view timetable (component)
│   ├── gztj                      // rule conditions (component)
│   ├── jcsz                      // basic settings (component)
│   ├── pktz                      // schedule adjustment (component)
│   ├── zdpk                      // automatic scheduling (component)
│   ├── index.vue                 // scheduling home
│   └── pksz.vue                  // scheduling settings
├── project                       // credit management system
├── qiniu                         // Qiniu file upload (can be removed)
├── readgrade                     // reading certification project (application)
│   ├── examCount                 // exam statistics
│   │ ├── bj                      // class
│   │ ├── nj                      // grade
│   │ └── student                 // student
│   ├── publicLibManager          // public library management
│   │ ├── bookTitleCheck          // book question review
│   │ ├── bookTitleManager        // book question management
│   │ ├── PageBar                 // page bar (component)
│   │ ├── questionCheck           // question review
│   │ └── questionManagement      // question management
│   ├── readCount                 // reading achievement statistics
│   │ ├── bj                      // class
│   │ ├── nj                      // grade
│   │ └── student                 // student
│   ├── testManager               // exam management
│   │ ├── bookManager             // book management
│   │ │ └── intoBook              // book import
│   │ │ └── textBook              // view questions
│   │ ├── paperSetting            // paper standard settings
│   │ │ ├── echart                // paper standard settings: chart management
│   │ │ └── paperTable            // paper standard settings: table management
│   │ └── questionManager         // question bank management
│   └── webSetting                // site settings
│   ├── growthValueSet            // growth-value system settings
│   ├── roleManager               // (currently unclear whether it is useful)
│   ├── systemSet                 // system settings
│   └── userManager               // (currently unclear whether it is useful)
├── staticpages                   // static test pages for development and debugging (can be removed)
├── tdk                           // class substitution and rescheduling (application)
│   ├── dkkb                      // my timetable
│   ├── dktj                      // substitution statistics
│   ├── dkwh                      // substitution maintenance
│   └── tkwh                      // rescheduling maintenance
├── theme                         // system theme settings (can be removed)
├── xkxt                          // course election system (application)
│   ├── courseTypeManager         // course category maintenance
│   ├── kcxxManager               // course management
│   │ ├── addkcxx                 // add course
│   │ ├── ckkcxx                  // view course
│   │ ├── index                   // course management home
│   │ └── updatekcxx              // edit course
│   ├── specialRaw                // special-talent student maintenance
│   ├── stuPreallocation          // student pre-allocation
│   ├── xkjgmanager               // election result management
│   ├── xkjgtzmanager             // election result adjustment
│   └── xkrwmanager               // election tasks
│   ├── ckgzsz                    // view rule settings
│   ├── ckxkrw                    // view election task
│   └── xkgzsz                    // election rule settings
├── xxaf                          // campus security (application)
│   ├── aqsb                      // campus security reporting settings
│   └── jgcx                      // result lookup
└── znpk                          // intelligent exam scheduling (application)
  └── examination                 // exams
  ├── cxpk                        // view exam schedule
  ├── jksz                        // invigilation settings
  ├── kcsz                        // course settings
  ├── khsz                        // exam-number settings
  ├── kssz                        // exam settings
  ├── mksz                        // exemption settings
  ├── pktz                        // schedule adjustment
  ├── znpk                        // intelligent scheduling
  ├── index.vue
  └── top.vue                     // exam scheduling settings (home)
```

## Component management

Components that belong to a single application live in a components folder inside that application's directory. Platform-level components, such as modals and tips, go into src/components.
42.978947
70
0.303943
yue_Hant
0.649313
9823dd516805668b3c7253ed0ddedabba57fe9f5
1,901
md
Markdown
docs/framework/wcf/diagnostics/tracing/system-servicemodel-channels-peermaintaineractivity.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/tracing/system-servicemodel-channels-peermaintaineractivity.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/tracing/system-servicemodel-channels-peermaintaineractivity.md
Graflinger/docs.de-de
9dfa50229d23e2ee67ef4047b6841991f1e40ac4
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: System.ServiceModel.Channels.PeerMaintainerActivity
ms.date: 03/30/2017
ms.assetid: ef28d086-d7fb-4e81-82e9-45a54647783b
ms.openlocfilehash: ea4c8110a8f820e0c6204fbd22b3d5b747709fba
ms.sourcegitcommit: 5b6d778ebb269ee6684fb57ad69a8c28b06235b9
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/08/2019
ms.locfileid: "59126034"
---
# <a name="systemservicemodelchannelspeermaintaineractivity"></a>System.ServiceModel.Channels.PeerMaintainerActivity

The PeerMaintainer module carries out a specific operation (details are in the trace message text).

## <a name="description"></a>Description

This trace occurs during various PeerMaintainer operations. The PeerMaintainer is an internal component of PeerNode. Every minute, or after 32 received messages, a LinkUtility message is sent to the neighbors. This message contains statistics on the number of messages exchanged and on how many of those messages were useful (that is, neither duplicates nor tampered with). This makes it possible to determine the link utility of a given neighbor. Roughly every five minutes the maintainer checks the health of the neighbor connections. If the number of neighbor connections exceeds the ideal set, the least useful connections are dropped. If not enough connections are available, the maintainer establishes new ones.

## <a name="see-also"></a>See also

- [Tracing](../../../../../docs/framework/wcf/diagnostics/tracing/index.md)
- [Using Tracing to Troubleshoot Your Application](../../../../../docs/framework/wcf/diagnostics/tracing/using-tracing-to-troubleshoot-your-application.md)
- [Administration and Diagnostics](../../../../../docs/framework/wcf/diagnostics/index.md)
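The pruning behavior described above (rank neighbor links by how useful their traffic was, drop the least useful when over the ideal set) can be illustrated with a small sketch. This is not WCF's implementation; the `IDEAL_NEIGHBORS` value and the statistics shape are assumptions made for the example.

```python
# Illustrative sketch of the maintainer's pruning rule: each neighbor reports
# (via LinkUtility-style statistics) how many messages were exchanged and how
# many were useful (non-duplicate, untampered); the least useful links are
# dropped when the neighbor count exceeds the ideal set.

IDEAL_NEIGHBORS = 3  # hypothetical ideal connection count for this sketch


def link_utility(stats):
    """Usefulness ratio for one link; stats has 'total' and 'useful' counts."""
    return stats["useful"] / stats["total"] if stats["total"] else 0.0


def prune_neighbors(neighbors):
    """Keep the IDEAL_NEIGHBORS most useful links; return (kept, dropped)."""
    ranked = sorted(neighbors.items(), key=lambda kv: link_utility(kv[1]), reverse=True)
    kept = dict(ranked[:IDEAL_NEIGHBORS])
    dropped = [name for name, _ in ranked[IDEAL_NEIGHBORS:]]
    return kept, dropped


neighbors = {
    "A": {"total": 32, "useful": 30},
    "B": {"total": 32, "useful": 12},
    "C": {"total": 32, "useful": 25},
    "D": {"total": 32, "useful": 5},
}
kept, dropped = prune_neighbors(neighbors)
print(sorted(kept))  # ['A', 'B', 'C']
print(dropped)       # ['D']
```

With four links and an ideal set of three, the link with the lowest usefulness ratio ("D") is the one dropped.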
76.04
788
0.811152
deu_Latn
0.976552
982442b90b0ae771df69e19f73a53aa50092b5d3
920
md
Markdown
README.md
VicFic2006/StravaActivityAnalyser
c09f98bde4af79dd10a113240a38e10aee807cae
[ "MIT" ]
null
null
null
README.md
VicFic2006/StravaActivityAnalyser
c09f98bde4af79dd10a113240a38e10aee807cae
[ "MIT" ]
null
null
null
README.md
VicFic2006/StravaActivityAnalyser
c09f98bde4af79dd10a113240a38e10aee807cae
[ "MIT" ]
null
null
null
# Strava Activity Analyser

This is just 2 scripts which I used to get 100+ people's Strava activity for a club challenge. My only target was to get all of these people's activities into a spreadsheet, so if you want to use it, you will need to make some tweaks to get it working.

**I recommend reading [this guide](https://medium.com/swlh/using-python-to-connect-to-stravas-api-and-analyse-your-activities-dummies-guide-5f49727aac86), since I mostly followed what it says.**

### FYI
- First, you need to register your app in your **[Strava settings](https://www.strava.com/settings/api)**.
- The server outputs profile json responses to a google sheet, so
  - You should set up a google api service account and put the credentials (`creds.json`) in the same folder as `app.py`
  - `profiles.csv` should be a copy of your google sheet
- `activity_ouput.csv` is the output of `fetch.py`
- Be careful of the number of API calls
46
247
0.755435
eng_Latn
0.989259
98248dd2aa95c7df0f44edde1bd62ee3bdff5f62
554
md
Markdown
_posts/2018-6-17-din.md
SEP3WATER/SEP3WATER.github.io
40cf8eb3978ae9ce29eaec3892576fbc87ea57e1
[ "Apache-2.0" ]
null
null
null
_posts/2018-6-17-din.md
SEP3WATER/SEP3WATER.github.io
40cf8eb3978ae9ce29eaec3892576fbc87ea57e1
[ "Apache-2.0" ]
null
null
null
_posts/2018-6-17-din.md
SEP3WATER/SEP3WATER.github.io
40cf8eb3978ae9ce29eaec3892576fbc87ea57e1
[ "Apache-2.0" ]
null
null
null
---
layout: post
title: Onboarding pages
subtitle:
date: 2018-05-28
author: YIN
header-img: img/about-bg-sky.jpg
catalog: true
tags:
- Design
---

## Foreword

## Major version upgrade: version one

![blue](https://github.com/SEP3WATER/SEP3WATER.github.io/blob/master/img/post-3-man.jpg?raw=true)

**Version one** aims to convey the experience users are about to have when using the product's new features. Under this theme we chose a composition centered on people in motion, hoping users can place themselves in the scene and form a more human memory of the product.

## Major version upgrade: version two

![blue](https://github.com/SEP3WATER/SEP3WATER.github.io/blob/master/img/post-3-blue.jpg?raw=true)

**Version two** presents the theme through more concrete, intuitive objects. For the palette it uses the product's theme color, blue, as the dominant color, embedding the brand color more deeply in users' minds.
19.785714
98
0.705776
yue_Hant
0.591717
9824cd39ee2b698811c0acbbb27d6ad8ab85b7b2
11,997
md
Markdown
articles/active-directory-domain-services/password-policy.md
Myhostings/azure-docs.tr-tr
536eaf3b454f181f4948041d5c127e5d3c6c92cc
[ "CC-BY-4.0", "MIT" ]
16
2017-08-28T08:29:36.000Z
2022-01-02T16:46:30.000Z
articles/active-directory-domain-services/password-policy.md
Ahmetmaman/azure-docs.tr-tr
536eaf3b454f181f4948041d5c127e5d3c6c92cc
[ "CC-BY-4.0", "MIT" ]
470
2017-11-11T20:59:16.000Z
2021-04-10T17:06:28.000Z
articles/active-directory-domain-services/password-policy.md
Ahmetmaman/azure-docs.tr-tr
536eaf3b454f181f4948041d5c127e5d3c6c92cc
[ "CC-BY-4.0", "MIT" ]
25
2017-11-11T19:39:08.000Z
2022-03-30T13:47:56.000Z
---
title: Create and use password policies in Azure AD Domain Services | Microsoft Docs
description: Learn how and why to use fine-grained password policies to secure and control account passwords in an Azure AD DS managed domain.
services: active-directory-ds
author: justinha
manager: daveba
ms.assetid: 1a14637e-b3d0-4fd9-ba7a-576b8df62ff2
ms.service: active-directory
ms.subservice: domain-services
ms.workload: identity
ms.topic: how-to
ms.date: 07/06/2020
ms.author: justinha
ms.openlocfilehash: df132af1675b3f373fe1eab5685c5d2f07813445
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/29/2021
ms.locfileid: "96619241"
---
# <a name="password-and-account-lockout-policies-on-azure-active-directory-domain-services-managed-domains"></a>Password and account lockout policies on Azure Active Directory Domain Services managed domains

To manage user security in Azure Active Directory Domain Services (Azure AD DS), you can define fine-grained password policies that control account lockout settings or minimum password length and complexity. A default fine-grained password policy is created and applied to all users in an Azure AD DS managed domain. To provide granular control and meet specific business or compliance needs, additional policies can be created and applied to specific groups of users.

This article shows you how to create and configure a fine-grained password policy in Azure AD DS using the Active Directory Administrative Center.

> [!NOTE]
> Password policies are only available for managed domains created using the Resource Manager deployment model. For legacy managed domains created using Classic, [migrate from the Classic virtual network model to Resource Manager][migrate-from-classic].
## <a name="before-you-begin"></a>Before you begin

To complete this article, you need the following resources and privileges:

* An active Azure subscription.
    * If you don't have an Azure subscription, [create an account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Active Directory tenant associated with the subscription, either synchronized with an on-premises directory or a cloud-only directory.
    * If needed, [create an Azure Active Directory tenant][create-azure-ad-tenant] or [associate an Azure subscription with your account][associate-azure-ad-tenant].
* An Azure Active Directory Domain Services managed domain enabled and configured in your Azure AD tenant.
    * If needed, complete the tutorial to [create and configure an Azure Active Directory Domain Services managed domain][create-azure-ad-ds-instance].
    * The managed domain must have been created using the Resource Manager deployment model. If needed, [migrate from the Classic virtual network model to Resource Manager][migrate-from-classic].
* A Windows Server management VM that is joined to the managed domain.
    * If needed, complete the tutorial to [create a management VM][tutorial-create-management-vm].
* A user account that's a member of the *Azure AD DC administrators* group in your Azure AD tenant.

## <a name="default-password-policy-settings"></a>Default password policy settings

Fine-grained password policies (FGPPs) let you apply specific restrictions for password and account lockout policies to different users in a domain. For example, to secure privileged accounts you can apply stricter account lockout settings than for regular non-privileged accounts. You can create multiple FGPPs within a managed domain and specify the order of priority in which they apply to users.
For more information on password policies and using the Active Directory Administrative Center, see the following articles:

* [Learn about fine-grained password policies](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770394(v=ws.10))
* [Configure fine-grained password policies using the AD Administration Center](/windows-server/identity/ad-ds/get-started/adac/introduction-to-active-directory-administrative-center-enhancements--level-100-#fine_grained_pswd_policy_mgmt)

Policies are distributed through group association in a managed domain, and any changes you make are applied at the next user sign-in. Changing the policy doesn't unlock a user account that's already locked out.

Password policies behave slightly differently depending on how the user account they're applied to was created. There are two ways a user account can be created in Azure AD DS:

* The user account can be synchronized in from Azure AD. This includes cloud user accounts created directly in Azure, and hybrid user accounts synchronized from an on-premises AD DS environment using Azure AD Connect.
    * The majority of user accounts in Azure AD DS are created through the synchronization process from Azure AD.
* The user account can be manually created in a managed domain, and doesn't exist in Azure AD.

All users, regardless of how they're created, have the following account lockout policies applied by the default password policy in Azure AD DS:

* **Account lockout duration:** 30
* **Number of failed logon attempts allowed:** 5
* **Reset failed logon attempts count after:** 30 minutes
* **Maximum password age (lifetime):** 90 days

With these default settings, user accounts are locked out for 30 minutes if five invalid passwords are used within 2 minutes. Accounts are automatically unlocked after 30 minutes.

Account lockouts only occur within the managed domain.
User accounts are only locked out from failed sign-in attempts against Azure AD DS and the managed domain. Accounts that were synchronized in from Azure AD or on-premises are only locked out in Azure AD DS, not in their source directories.

If you have an Azure AD password policy that specifies a maximum password age greater than 90 days, that password age is applied to the default policy in Azure AD DS. You can configure a custom password policy to define a different maximum password age in Azure AD DS. Take care if you have a shorter maximum password age configured in an Azure AD DS password policy than in Azure AD or an on-premises AD DS environment. In that scenario, a user's password may expire in Azure AD DS before they're prompted to change it in Azure AD or the on-premises AD DS environment.

For user accounts created manually in a managed domain, the following additional password settings from the default policy are also applied. These settings don't apply to user accounts synchronized in from Azure AD, as a user can't update their password directly in Azure AD DS.

* **Minimum password length (characters):** 7
* **Passwords must meet complexity requirements**

You can't modify the account lockout or password settings in the default password policy. Instead, members of the *AAD DC Administrators* group can create custom password policies and configure them to override (take precedence over) the default built-in policy, as shown in the next section.

## <a name="create-a-custom-password-policy"></a>Create a custom password policy

As you build and run applications in Azure, you may want to configure a custom password policy. For example, you could create a policy to set different account lockout policy settings.

Custom password policies are applied to groups in a managed domain. This configuration effectively overrides the default policy.
To create a custom password policy, you use the Active Directory Administrative Tools from a domain-joined VM. The Active Directory Administrative Center lets you view, edit, and create resources in a managed domain, including OUs.

> [!NOTE]
> To create a custom password policy in a managed domain, you must be signed in to a user account that's a member of the *AAD DC Administrators* group.

1. From the Start screen, select **Administrative Tools**. A list of available management tools is shown that were installed in the tutorial to [create a management VM][tutorial-create-management-vm].
1. To create and manage OUs, select **Active Directory Administrative Center** from the list of administrative tools.
1. In the left pane, select your managed domain, such as *aaddscontoso.com*.
1. Open the **System** container, then the **Password Settings Container**. A built-in password policy for the managed domain is shown. You can't modify this built-in policy. Instead, create a custom password policy to override the default policy.

    ![Create a password policy in the Active Directory Administrative Center](./media/password-policy/create-password-policy-adac.png)

1. In the **Tasks** panel on the right, select **New > Password Settings**.
1. In the **Create Password Settings** dialog, enter a name for the policy, such as *MyCustomFGPP*.
1. When multiple password policies exist, the policy with the highest precedence, or priority, is applied to a user. The lower the number, the higher the priority. The default password policy has a priority of *200*. Set the priority for your custom password policy to override the default, such as *1*.
1. Edit other password policy settings as desired. Remember the following key points:

    * Settings such as password complexity, age, or expiration time only apply to users manually created in a managed domain.
    * Account lockout settings apply to all users, but only take effect within the managed domain, not in Azure AD itself.

    ![Create a custom fine-grained password policy](./media/password-policy/custom-fgpp.png)

1. Uncheck **Protect from accidental deletion**. If this option is selected, you can't save the FGPP.
1. In the **Directly Applies To** section, select the **Add** button. In the **Select Users or Groups** dialog, select the **Locations** button.

    ![Select the users and groups to apply the password policy to](./media/password-policy/fgpp-applies-to.png)

1. Password policies can only be applied to groups. In the **Locations** dialog, expand the domain name, such as *aaddscontoso.com*, then select an OU, such as **AADDC Users**. If you have a custom OU that contains the group of users you wish to apply the policy to, select that OU.

    ![Select the OU that the group belongs to](./media/password-policy/fgpp-container.png)

1. Type the name of the group you wish to apply the policy to, then select **Check Names** to validate that the group exists.

    ![Search for and select the group to apply the FGPP to](./media/password-policy/fgpp-apply-group.png)

1. With the name of the group you selected now displayed in the **Directly Applies To** section, select **OK** to save your custom password policy.
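For those who prefer the command line, the same kind of fine-grained policy can be sketched with the ActiveDirectory PowerShell module from the management VM. The policy name, group name, and the specific values below are illustrative assumptions, not values mandated by this article; the Administrative Center remains the documented path.

```
# Create a custom fine-grained password policy (illustrative values)
New-ADFineGrainedPasswordPolicy -Name "MyCustomFGPP" -Precedence 1 `
    -MinPasswordLength 10 -ComplexityEnabled $true `
    -LockoutThreshold 5 -LockoutDuration "0.00:30:00" `
    -LockoutObservationWindow "0.00:30:00" `
    -MaxPasswordAge "90.00:00:00" `
    -ProtectedFromAccidentalDeletion $false

# Apply the policy to a group (policies can only target groups)
Add-ADFineGrainedPasswordPolicySubject -Identity "MyCustomFGPP" -Subjects "AADDC Users"
```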
## <a name="next-steps"></a>Next steps

For more information about password policies and using the Active Directory Administrative Center, see the following articles:

* [Learn about fine-grained password policies](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc770394(v=ws.10))
* [Configure fine-grained password policies using the AD Administration Center](/windows-server/identity/ad-ds/get-started/adac/introduction-to-active-directory-administrative-center-enhancements--level-100-#fine_grained_pswd_policy_mgmt)

<!-- INTERNAL LINKS -->
[create-azure-ad-tenant]: ../active-directory/fundamentals/sign-up-organization.md
[associate-azure-ad-tenant]: ../active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
[create-azure-ad-ds-instance]: tutorial-create-instance.md
[tutorial-create-management-vm]: tutorial-create-management-vm.md
[migrate-from-classic]: migrate-from-classic-vnet.md
83.895105
574
0.818121
tur_Latn
0.999907
9824dc5e93010e87941698e8960397a5033b2c6b
559
md
Markdown
packages/browser-runtime-core/README.md
tony-jang/mongosh
2916a101992afe0d612b5ae6d7b48e7446620e64
[ "Apache-2.0" ]
175
2019-10-03T01:47:43.000Z
2022-03-26T20:49:00.000Z
packages/browser-runtime-core/README.md
tony-jang/mongosh
2916a101992afe0d612b5ae6d7b48e7446620e64
[ "Apache-2.0" ]
203
2020-01-14T10:24:32.000Z
2022-03-31T13:42:56.000Z
packages/browser-runtime-core/README.md
tony-jang/mongosh
2916a101992afe0d612b5ae6d7b48e7446620e64
[ "Apache-2.0" ]
24
2019-12-30T09:35:39.000Z
2022-03-16T19:07:13.000Z
# browser-runtime-core

Core and support classes and types used by runtimes.

## API

### `Runtime`

Encapsulates the details of the evaluation logic, exposing an implementation-agnostic interface. All runtimes implement the following interface:

- `evaluate(code: string): Promise<ShellResult>`: Evaluates a string of code.

### `ShellResult`

An object holding the result of an evaluation. Has the following properties:

- `type: string`: the shell API type if the entry value is a shell API object.
- `value: any`: the value that has to be rendered in the output.
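A minimal TypeScript sketch of this contract might look like the following. The `ToyRuntime` class and its expression-only evaluator are illustrative assumptions for this README, not part of the package:

```typescript
// Shapes described in the API section above.
interface ShellResult {
  type: string | null; // shell API type, or null for plain values
  value: any;          // the value to render in output
}

interface Runtime {
  evaluate(code: string): Promise<ShellResult>;
}

// ToyRuntime is a hypothetical, minimal implementation used purely to
// illustrate the interface; real runtimes evaluate full shell code.
class ToyRuntime implements Runtime {
  async evaluate(code: string): Promise<ShellResult> {
    // Evaluate a plain JS expression; a real runtime would sandbox this.
    const value = Function(`"use strict"; return (${code});`)();
    return { type: null, value };
  }
}
```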
25.409091
78
0.758497
eng_Latn
0.992558
98257673a148b0f5857c196625ae2a7a6dd1d281
2,052
md
Markdown
Case_Studies/Regression/Hitters Baseball/README.md
d4rk-lucif3r/Portfolio
36e94449a1c09ab2bd5aff055cb465512e212306
[ "MIT" ]
2
2021-05-28T18:40:43.000Z
2021-05-30T11:39:32.000Z
Case_Studies/Regression/Hitters Baseball/README.md
d4rk-lucif3r/Portfolio
36e94449a1c09ab2bd5aff055cb465512e212306
[ "MIT" ]
null
null
null
Case_Studies/Regression/Hitters Baseball/README.md
d4rk-lucif3r/Portfolio
36e94449a1c09ab2bd5aff055cb465512e212306
[ "MIT" ]
null
null
null
# Hitters Baseball

- [Notebook's Kaggle](https://www.kaggle.com/d4rklucif3r/salary-eda-luciferml-plotly)
- [Dataset's Kaggle](https://www.kaggle.com/mathchi/hitters-baseball-data)

## About the Dataset

## Description

Major League Baseball data from the 1986 and 1987 seasons.

## Format

A data frame with 322 observations of major league players on the following 20 variables.

- AtBat: Number of times at bat in 1986
- Hits: Number of hits in 1986
- HmRun: Number of home runs in 1986
- Runs: Number of runs in 1986
- RBI: Number of runs batted in in 1986
- Walks: Number of walks in 1986
- Years: Number of years in the major leagues
- CAtBat: Number of times at bat during his career
- CHits: Number of hits during his career
- CHmRun: Number of home runs during his career
- CRuns: Number of runs during his career
- CRBI: Number of runs batted in during his career
- CWalks: Number of walks during his career
- League: A factor with levels A and N indicating the player's league at the end of 1986
- Division: A factor with levels E and W indicating the player's division at the end of 1986
- PutOuts: Number of put outs in 1986
- Assists: Number of assists in 1986
- Errors: Number of errors in 1986
- Salary: 1987 annual salary on opening day in thousands of dollars
- NewLeague: A factor with levels A and N indicating the player's league at the beginning of 1987

## Source

This dataset was taken from the StatLib library, which is maintained at Carnegie Mellon University. This is part of the data that was used in the 1988 ASA Graphics Section Poster Session. The salary data were originally from Sports Illustrated, April 20, 1987. The 1986 and career statistics were obtained from The 1987 Baseball Encyclopedia Update published by Collier Books, Macmillan Publishing Company, New York.

## References

James, G., Witten, D., Hastie, T., and Tibshirani, R. (2013) An Introduction to Statistical Learning with applications in R, www.StatLearning.com, Springer-Verlag, New York

Dataset imported from [the R Project](https://www.r-project.org)
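As a quick illustration of working with these columns, the dataset can be explored with pandas. The file name `hitters.csv` and the derived `BattingAvg` column are assumptions for this sketch, not part of the Kaggle dataset itself:

```python
import pandas as pd

# Hypothetical path — assumes the Kaggle download was saved as "hitters.csv":
# df = pd.read_csv("hitters.csv")

# Tiny inline sample using a few of the columns described above:
df = pd.DataFrame({
    "AtBat": [293, 315],
    "Hits": [66, 81],
    "Salary": [None, 475.0],   # Salary has missing values in the real data
})

# A common first step: drop rows with a missing Salary (the regression target)
df = df.dropna(subset=["Salary"])

# Derive a 1986 batting average from the season columns
df["BattingAvg"] = df["Hits"] / df["AtBat"]
```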
31.569231
415
0.763158
eng_Latn
0.9846
9825d1d8f3d82962449b4ee9d40e69aa4c5fee1a
5,079
md
Markdown
relnotes/v0.54.0.md
dvandersluis/rubocop
783cb69643ab9a40fee9e8917aa557299be5b59a
[ "MIT" ]
6,297
2015-01-01T13:32:27.000Z
2018-05-31T06:19:46.000Z
relnotes/v0.54.0.md
dvandersluis/rubocop
783cb69643ab9a40fee9e8917aa557299be5b59a
[ "MIT" ]
3,984
2015-01-01T15:31:51.000Z
2018-05-31T00:02:58.000Z
relnotes/v0.54.0.md
dvandersluis/rubocop
783cb69643ab9a40fee9e8917aa557299be5b59a
[ "MIT" ]
1,943
2015-01-02T11:16:59.000Z
2018-05-29T08:57:15.000Z
### New features * [#5597](https://github.com/rubocop/rubocop/pull/5597): Add new `Rails/HttpStatus` cop. ([@anthony-robin][]) * [#5643](https://github.com/rubocop/rubocop/pull/5643): Add new `Style/UnpackFirst` cop. ([@bdewater][]) ### Bug fixes * [#5683](https://github.com/rubocop/rubocop/issues/5683): Fix message for `Naming/UncommunicativeXParamName` cops. ([@jlfaber][]) * [#5680](https://github.com/rubocop/rubocop/issues/5680): Fix `Layout/ElseAlignment` for `rescue/else/ensure` inside `do/end` blocks. ([@YukiJikumaru][]) * [#5642](https://github.com/rubocop/rubocop/pull/5642): Fix `Style/Documentation` `:nodoc:` for compact-style nested modules/classes. ([@ojab][]) * [#5648](https://github.com/rubocop/rubocop/issues/5648): Suggest valid memoized instance variable for predicate method. ([@satyap][]) * [#5670](https://github.com/rubocop/rubocop/issues/5670): Suggest valid memoized instance variable for bang method. ([@pocke][]) * [#5623](https://github.com/rubocop/rubocop/pull/5623): Fix `Bundler/OrderedGems` when a group includes duplicate gems. ([@colorbox][]) * [#5633](https://github.com/rubocop/rubocop/pull/5633): Fix broken `--fail-fast`. ([@mmyoji][]) * [#5630](https://github.com/rubocop/rubocop/issues/5630): Fix false positive for `Style/FormatStringToken` when using placeholder arguments in `format` method. ([@koic][]) * [#5651](https://github.com/rubocop/rubocop/pull/5651): Fix NoMethodError when specified config file that does not exist. ([@onk][]) * [#5647](https://github.com/rubocop/rubocop/pull/5647): Fix encoding method of RuboCop::MagicComment::SimpleComment. ([@htwroclau][]) * [#5619](https://github.com/rubocop/rubocop/issues/5619): Do not register an offense in `Style/InverseMethods` when comparing constants with `<`, `>`, `<=`, or `>=`. If the code is being used to determine class hierarchy, the correction might not be accurate. 
([@rrosenblum][]) * [#5641](https://github.com/rubocop/rubocop/issues/5641): Disable `Style/TrivialAccessors` auto-correction for `def` with `private`. ([@pocke][]) * Fix bug where `Style/SafeNavigation` does not auto-correct all chained methods resulting in a `Lint/SafeNavigationChain` offense. ([@rrosenblum][]) * [#5436](https://github.com/rubocop/rubocop/issues/5436): Allow empty kwrest args in `UncommunicativeName` cops. ([@pocke][]) * [#5674](https://github.com/rubocop/rubocop/issues/5674): Fix auto-correction of `Layout/EmptyComment` when the empty comment appears on the same line as code. ([@rrosenblum][]) * [#5679](https://github.com/rubocop/rubocop/pull/5679): Fix a false positive for `Style/EmptyLineAfterGuardClause` when guard clause is before `rescue` or `ensure`. ([@koic][]) * [#5694](https://github.com/rubocop/rubocop/issues/5694): Match Rails versions with multiple digits when reading the TargetRailsVersion from the bundler lock files. ([@roberts1000][]) * [#5700](https://github.com/rubocop/rubocop/pull/5700): Fix a false positive for `Style/EmptyLineAfterGuardClause` when guard clause is before `else`. ([@koic][]) * Fix false positive in `Naming/ConstantName` when using conditional assignment. ([@drenmi][]) ### Changes * [#5626](https://github.com/rubocop/rubocop/pull/5626): Change `Naming/UncommunicativeMethodParamName` add `to` to allowed names in default config. ([@unused][]) * [#5640](https://github.com/rubocop/rubocop/issues/5640): Warn about user configuration overriding other user configuration only with `--debug`. ([@jonas054][]) * [#5637](https://github.com/rubocop/rubocop/issues/5637): Fix error for `Layout/SpaceInsideArrayLiteralBrackets` when contains an array literal as an argument after a heredoc is started. ([@koic][]) * [#5610](https://github.com/rubocop/rubocop/issues/5610): Use `gems.locked` or `Gemfile.lock` to determine the best `TargetRubyVersion` when it is not specified in the config. 
([@roberts1000][]) * [#5390](https://github.com/rubocop/rubocop/issues/5390): Allow exceptions to `Style/InlineComment` for inline comments which enable or disable rubocop cops. ([@jfelchner][]) * Add progress bar to offenses formatter. ([@drewpterry][]) * [#5498](https://github.com/rubocop/rubocop/issues/5498): Correct `IndentHeredoc` message for Ruby 2.3 when using `<<~` operator with invalid indentation. ([@hamada14][]) [@anthony-robin]: https://github.com/anthony-robin [@bdewater]: https://github.com/bdewater [@jlfaber]: https://github.com/jlfaber [@YukiJikumaru]: https://github.com/YukiJikumaru [@ojab]: https://github.com/ojab [@satyap]: https://github.com/satyap [@pocke]: https://github.com/pocke [@colorbox]: https://github.com/colorbox [@mmyoji]: https://github.com/mmyoji [@koic]: https://github.com/koic [@onk]: https://github.com/onk [@htwroclau]: https://github.com/htwroclau [@rrosenblum]: https://github.com/rrosenblum [@roberts1000]: https://github.com/roberts1000 [@drenmi]: https://github.com/drenmi [@unused]: https://github.com/unused [@jonas054]: https://github.com/jonas054 [@hamada14]: https://github.com/hamada14 [@jfelchner]: https://github.com/jfelchner [@drewpterry]: https://github.com/drewpterry
87.568966
278
0.732821
eng_Latn
0.270272
9825f073e7ff27189ac65822a23ab19868bb2333
625
md
Markdown
README.md
rajatrjoshi/Python_Calculator
ecf4bd86dfd2f65b1d5de65f5aeb02098ec25c89
[ "MIT" ]
5
2020-10-23T20:06:57.000Z
2021-07-22T11:18:33.000Z
README.md
rajatrjoshi/Python_Calculator
ecf4bd86dfd2f65b1d5de65f5aeb02098ec25c89
[ "MIT" ]
null
null
null
README.md
rajatrjoshi/Python_Calculator
ecf4bd86dfd2f65b1d5de65f5aeb02098ec25c89
[ "MIT" ]
null
null
null
# Python_Calculator

The calculator is one application that we all use in our day-to-day lives. If you are trying to get your hands dirty with programming in Python, a calculator is a project that is easy and useful at the same time, and using a calculator you made yourself brings its own sense of fulfillment.

Python offers various utilities to design a GUI, viz. a Graphical User Interface, and one such utility is Tkinter, which is the most commonly used. It is indeed one of the fastest and easiest ways to build a GUI application. Moreover, Tkinter is cross-platform, hence the same code works on macOS, Windows, and Linux.
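A minimal sketch of such a Tkinter calculator is shown below. The function and widget names are illustrative, not taken from this repository, and the arithmetic is evaluated by walking the `ast` instead of calling `eval()` on raw user input:

```python
import ast
import operator

# Supported binary operators for the calculator expressions.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def calculate(expression: str) -> float:
    """Safely evaluate a +, -, *, / expression string."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expression, mode="eval"))

def main():
    import tkinter as tk  # imported here so the module loads without a display
    root = tk.Tk()
    root.title("Calculator")
    entry = tk.Entry(root, width=24)
    entry.pack()
    result = tk.Label(root, text="= ?")
    result.pack()
    tk.Button(root, text="=",
              command=lambda: result.config(text=f"= {calculate(entry.get())}")).pack()
    root.mainloop()

# Call main() to launch the window.
```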
125
308
0.8016
eng_Latn
0.999987
9826603870f8dbf8fd3a86685777a80335234855
1,801
md
Markdown
add/metadata/System.Workflow.Activities.Rules/RuleSet.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
add/metadata/System.Workflow.Activities.Rules/RuleSet.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
add/metadata/System.Workflow.Activities.Rules/RuleSet.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- uid: System.Workflow.Activities.Rules.RuleSet author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.Execute(System.Workflow.Activities.Rules.RuleExecution) author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.GetHashCode author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.Description author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.Equals(System.Object) author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.#ctor(System.String,System.String) author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.#ctor(System.String) author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.Name author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.#ctor author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.Clone author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.Rules author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.#ctor author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.Validate(System.Workflow.Activities.Rules.RuleValidation) author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Workflow.Activities.Rules.RuleSet.ChainingBehavior author: "Erikre" ms.author: "erikre" manager: "erikre" ---
18.377551
103
0.729595
yue_Hant
0.126656
9826a4a0e06c53ea5b9f3fa576281028f7b24583
117
md
Markdown
README.md
Fusl/tinc-deploy
e5bdb7b5d50b56d807a1ca30105d85f93630b679
[ "BSD-3-Clause" ]
10
2016-11-20T11:00:39.000Z
2021-04-21T03:55:43.000Z
README.md
Fusl/tinc-deploy
e5bdb7b5d50b56d807a1ca30105d85f93630b679
[ "BSD-3-Clause" ]
null
null
null
README.md
Fusl/tinc-deploy
e5bdb7b5d50b56d807a1ca30105d85f93630b679
[ "BSD-3-Clause" ]
2
2021-04-21T03:55:48.000Z
2021-07-03T16:32:35.000Z
# tinc-deploy Simple script for automatically deploying a tinc VPN network - Uses /etc/hosts as configuration source
39
102
0.811966
eng_Latn
0.971969
9826b11552e271b6a00d5684f173f1d649ffe6fd
4,389
md
Markdown
docs/relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/sample-xpath-queries-sqlxml-4-0.md
bingenortuzar/sql-docs.es-es
9e13730ffa0f3ce461cce71bebf1a3ce188c80ad
[ "CC-BY-4.0", "MIT" ]
1
2021-04-26T21:26:08.000Z
2021-04-26T21:26:08.000Z
docs/relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/sample-xpath-queries-sqlxml-4-0.md
jlporatti/sql-docs.es-es
9b35d3acbb48253e1f299815df975f9ddaa5e9c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/sample-xpath-queries-sqlxml-4-0.md
jlporatti/sql-docs.es-es
9b35d3acbb48253e1f299815df975f9ddaa5e9c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Sample XPath Queries (SQLXML 4.0) | Microsoft Docs
ms.custom: ''
ms.date: 03/17/2017
ms.prod: sql
ms.prod_service: database-engine, sql-database
ms.reviewer: ''
ms.technology: xml
ms.topic: reference
helpviewer_keywords:
- examples [SQLXML], XPath
- sample applications [SQLXML]
- sample XPath queries [SQLXML]
- mapping schema [SQLXML], queries
- XPath queries [SQLXML], samples
ms.assetid: 1595c2d4-0e9c-4969-84c8-a793a32df57d
author: MightyPen
ms.author: genemi
monikerRange: =azuresqldb-current||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current
ms.openlocfilehash: dc1ba85aa5705094e3873381ee443413cee9f8da
ms.sourcegitcommit: b2464064c0566590e486a3aafae6d67ce2645cef
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 07/15/2019
ms.locfileid: "68119458"
---
# <a name="sample-xpath-queries-sqlxml-40"></a>Sample XPath Queries (SQLXML 4.0)
[!INCLUDE[appliesto-ss-asdb-xxxx-xxx-md](../../../includes/appliesto-ss-asdb-xxxx-xxx-md.md)]
This section provides sample XPath queries for SQLXML 4.0. For illustration purposes only, these sample XPath queries are specified in a template that is executed with ADO. Therefore, you need a mapping schema file, SampleSchema1.xml, which is also provided in this section. Save this file in the directory where your templates are stored.

> [!NOTE]
> The sample queries in this section are grouped by the type of XPath operation the query performs.

## <a name="in-this-section"></a>In this section

[Sample Annotated XSD Schema for XPath Examples &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/sample-annotated-xsd-schema-for-xpath-examples-sqlxml-4-0.md)
Use this file with the sample XPath queries provided in this section.
[Specifying Axes in XPath Queries &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/specifying-axes-in-xpath-queries-sqlxml-4-0.md)
Shows how axes are specified in XPath queries.

[Specifying Boolean-Valued Predicates in XPath Queries &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/specifying-boolean-valued-predicates-in-xpath-queries-sqlxml-4-0.md)
Shows how Boolean-valued predicates are specified in XPath queries.

[Specifying Relational Operators in XPath Queries &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/specifying-relational-operators-in-xpath-queries-sqlxml-4-0.md)
Shows how relational operators are specified in XPath queries.

[Specifying Arithmetic Operators in XPath Queries &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/specifying-arithmetic-operators-in-xpath-queries-sqlxml-4-0.md)
Shows how arithmetic operators are specified in XPath queries.

[Specifying Explicit Conversion Functions in XPath Queries &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/specifying-explicit-conversion-functions-in-xpath-queries-sqlxml-4-0.md)
Shows how explicit conversion functions are specified in XPath queries.

[Specifying Boolean Operators in XPath Queries &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/specifying-boolean-operators-in-xpath-queries-sqlxml-4-0.md)
Shows how Boolean operators are specified in XPath queries.
[Specifying Boolean Functions in XPath Queries &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/specifying-boolean-functions-in-xpath-queries-sqlxml-4-0.md)
Shows how Boolean functions are specified in XPath queries.

[Specifying XPath Variables in XPath Queries &#40;SQLXML 4.0&#41;](../../../relational-databases/sqlxml-annotated-xsd-schemas-xpath-queries/samples/specifying-xpath-variables-in-xpath-queries-sqlxml-4-0.md)
Shows how XPath variables are specified in XPath queries.
69.666667
401
0.776487
spa_Latn
0.730576
982704e11cb8f2b5d0492514d61611cfc5a78ec3
602
md
Markdown
spring-5-data-reactive/README.md
fcovillegast/tutorials
02aef362899dde34bb5a5bbc8c4ebfe99e9ac410
[ "MIT" ]
6
2019-10-14T05:00:11.000Z
2021-08-04T21:56:33.000Z
spring-5-data-reactive/README.md
Kriss322/tutorials
68a513529919f6dc7c53ee3fb8f7a61ff5b8966a
[ "MIT" ]
2
2021-02-18T19:54:43.000Z
2021-03-19T14:17:30.000Z
spring-5-data-reactive/README.md
Kriss322/tutorials
68a513529919f6dc7c53ee3fb8f7a61ff5b8966a
[ "MIT" ]
4
2019-08-14T17:51:42.000Z
2021-07-08T06:24:16.000Z
## Spring Data Reactive Project This module contains articles about reactive Spring 5 Data ### The Course The "REST With Spring" Classes: http://bit.ly/restwithspring ### Relevant Articles - [Reactive Flow with MongoDB, Kotlin, and Spring WebFlux](http://www.baeldung.com/kotlin-mongodb-spring-webflux) - [Spring Data Reactive Repositories with MongoDB](http://www.baeldung.com/spring-data-mongodb-reactive) - [Spring Data MongoDB Tailable Cursors](https://www.baeldung.com/spring-data-mongodb-tailable-cursors) - [A Quick Look at R2DBC with Spring Data](https://www.baeldung.com/spring-data-r2dbc)
46.307692
113
0.777409
yue_Hant
0.54825
982740fb74ce21bea1e99af85687a08d8dd15c66
2,053
md
Markdown
problems/construct-binary-search-tree-from-preorder-traversal/README.md
TheDudeThatCode/leetcode-1
8a39953bae13502dbf993d0e0ffbe2213cafc246
[ "MIT" ]
9
2020-02-13T16:13:28.000Z
2021-05-12T16:20:22.000Z
problems/construct-binary-search-tree-from-preorder-traversal/README.md
sweetpand/leetcode
186d384382849057a9433413bd9f0656b96c5c0c
[ "MIT" ]
null
null
null
problems/construct-binary-search-tree-from-preorder-traversal/README.md
sweetpand/leetcode
186d384382849057a9433413bd9f0656b96c5c0c
[ "MIT" ]
10
2020-04-14T15:59:36.000Z
2022-03-30T06:38:27.000Z
<!--|This file generated by command(leetcode description); DO NOT EDIT. |--> <!--+----------------------------------------------------------------------+--> <!--|@author openset <openset.wang@gmail.com> |--> <!--|@link https://github.com/openset |--> <!--|@home https://github.com/openset/leetcode |--> <!--+----------------------------------------------------------------------+--> [< Previous](../minimum-domino-rotations-for-equal-row "Minimum Domino Rotations For Equal Row")                  [Next >](../complement-of-base-10-integer "Complement of Base 10 Integer") ## [1008. Construct Binary Search Tree from Preorder Traversal (Medium)](https://leetcode.com/problems/construct-binary-search-tree-from-preorder-traversal "先序遍历构造二叉树") <p>Return the root node of a binary <strong>search</strong> tree that matches the given <code>preorder</code> traversal.</p> <p><em>(Recall that a binary search tree&nbsp;is a binary tree where for every <font face="monospace">node</font>, any descendant of <code>node.left</code> has a value <code>&lt;</code>&nbsp;<code>node.val</code>, and any descendant of <code>node.right</code> has a value <code>&gt;</code>&nbsp;<code>node.val</code>.&nbsp; Also recall that a preorder traversal&nbsp;displays the value of the&nbsp;<code>node</code> first, then traverses <code>node.left</code>, then traverses <code>node.right</code>.)</em></p> <p>&nbsp;</p> <p><strong>Example 1:</strong></p> <pre> <strong>Input: </strong><span id="example-input-1-1">[8,5,1,7,10,12]</span> <strong>Output: </strong><span id="example-output-1">[8,5,10,1,7,null,12] <img alt="" src="https://assets.leetcode.com/uploads/2019/03/06/1266.png" style="height: 200px; width: 306px;" /></span> </pre> <p>&nbsp;</p> <p><strong>Note:</strong>&nbsp;</p> <ol> <li><code>1 &lt;= preorder.length &lt;= 100</code></li> <li>The values of <code>preorder</code> are distinct.</li> </ol> ### Related Topics [[Tree](../../tag/tree/README.md)]
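One common O(n) approach, shown here as an illustrative Python sketch rather than an official solution, consumes the preorder values left to right, passing down the open interval each value must fall into:

```python
from typing import List, Optional

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def bst_from_preorder(preorder: List[int]) -> Optional[TreeNode]:
    """Rebuild the BST by consuming values while they fit in (lo, hi)."""
    idx = 0

    def build(lo: float, hi: float) -> Optional[TreeNode]:
        nonlocal idx
        # Stop when the input is exhausted or the next value leaves (lo, hi).
        if idx == len(preorder) or not (lo < preorder[idx] < hi):
            return None
        node = TreeNode(preorder[idx])
        idx += 1
        node.left = build(lo, node.val)    # descendants < node.val
        node.right = build(node.val, hi)   # descendants > node.val
        return node

    return build(float("-inf"), float("inf"))
```

Each value is visited exactly once, so the runtime is linear in the length of `preorder`.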
52.641026
511
0.598149
eng_Latn
0.332782
9827c3a5862b82aec975b70caea2b8e5bf0b84e4
230
md
Markdown
messages/1.0.0.md
olivierpilotte/cyanide-theme
ba3b86efbdb617979cda019d088b6dfca3416670
[ "MIT" ]
149
2015-01-04T04:28:27.000Z
2022-02-03T20:21:03.000Z
messages/1.0.0.txt
hraban/cyanide-theme
bcbc6d84be81cf366509521bf516156c6df6ff87
[ "MIT" ]
26
2015-01-02T15:51:50.000Z
2022-02-15T06:17:18.000Z
messages/1.0.0.txt
hraban/cyanide-theme
bcbc6d84be81cf366509521bf516156c6df6ff87
[ "MIT" ]
35
2015-01-05T05:30:34.000Z
2022-01-05T02:25:00.000Z
Cyanide 1.0.0 =============== * Update theme for Sublime Text build 3065 Thanks for downloading Cyanide Theme! Feel free to open an issue on Github if you have any problem with the theme. https://github.com/lefoy/cyanide-theme
23
76
0.726087
eng_Latn
0.933068
982a18b344ecb6bd32c48e59685477aa22ad70dd
983
md
Markdown
README.md
MsEleos/where_I_work
92cff5fdee638bef0119974fd24c2c9d5b2c7155
[ "MIT" ]
null
null
null
README.md
MsEleos/where_I_work
92cff5fdee638bef0119974fd24c2c9d5b2c7155
[ "MIT" ]
null
null
null
README.md
MsEleos/where_I_work
92cff5fdee638bef0119974fd24c2c9d5b2c7155
[ "MIT" ]
null
null
null
Since I'm against GitHub centralization, there are a lot of places where I work and publish code and documentation.

**WARNING**: This repo and README.md file might be outdated; don't hesitate to check out my different [links](https://links.eleos.space) to see what I'm doing with my life.

For any information about any of my work, you can mail me at [contact at eleos dot space](mailto:contact@eleos.space)

## Most of my personal work is here:

My personal place is [Framagit](https://framagit.org/Eleos). You can find some small projects or information about what I use, like my [dotfiles](https://framagit.org/Eleos/dotfiles) or my small [custom scripts](https://framagit.org/Eleos/eleos-custom-scripts).

## Projects outside of GitHub I participate in:

* [Funkwhale](https://funkwhale.audio/) is a community-driven project that lets you listen to and share music and audio within a decentralized, open network. You can find [my participations here](https://dev.funkwhale.audio/Eleos)
65.533333
229
0.768057
eng_Latn
0.989623
982ad0dba08ffaae3efb449bc3bed6556e517700
3,012
md
Markdown
src/pages/does-life-get-boring/index.md
princiya/princiya-blog
ba82c3d280049dce77856da5be0b906ff5e03036
[ "MIT" ]
null
null
null
src/pages/does-life-get-boring/index.md
princiya/princiya-blog
ba82c3d280049dce77856da5be0b906ff5e03036
[ "MIT" ]
25
2021-03-01T21:18:12.000Z
2022-02-27T07:03:53.000Z
src/pages/does-life-get-boring/index.md
princiya/princiya-blog
ba82c3d280049dce77856da5be0b906ff5e03036
[ "MIT" ]
1
2020-08-31T12:14:31.000Z
2020-08-31T12:14:31.000Z
--- title: Does life as a software developer ever get boring date: '2020-12-05' spoiler: Motivational speech from an MLH mentor to the December 2020 graduating batch cta: 'Career advice for juniors in tech' tags: ["mentorship", "advice", "q&a"] cover: './boring.jpg' --- It's been 3 months now and I work as a part-time Software Developer Coach (JavaScript) with [Raise.dev](http://raise.dev/). During this term I have been mentoring for the [Major League Hacking - MLH Fellowship](https://fellowship.mlh.io/) program. The graduation is on 18th December and I was asked to record a ~2 minute graduation speech. ## Major League Hacking - MLH graduation speech December 2020 The following is an excerpt from my graduation speech to the MLH fellows graduating in December 2020. Hello everyone, congratulations to the graduating batch. You have come this far, finished a term successfully despite the pandemic, so this is something you will definitely cherish one day. At the beginning of the fellowship, we had a Q&A and one fellow asked - ### Does life as a software developer ever get boring after years of work? Now that you have completed a term, I would like to answer by asking all of you, - How does it feel? Are you bored? - Was it challenging? - Were the days exciting? - Did you think of giving up? Now, take a moment to reflect on the thing you did prior to this fellowship. It could be your college graduation, or a recent university exam. Do you notice any similarities with the questions I just asked? - Boring - Challenging - Exciting - Give up Life as a software developer too goes through these phases, most of the time! ## Failures and success Failures will come along the way; you may not get your desired promotion or it might take a lot of time until you land your first job. When I graduated in 2009, almost 11 years ago, there was a global recession and things didn't go as planned for me. I cried a lot the day after my first failed job interview. 
It was my first big failure. Looking back at it, if I were to [advise](../lessons-from-my-younger-self) my younger self, I would say that everything happens for a good reason. If you aren't being rejected more than accepted, you're not asking for enough, reaching high enough and valuing yourself enough. Try new things, do things you are scared of. ## Goals and habits If the plan does not work, change the plan, have a plan B! Avoid changing the goal itself! Most importantly, have a goal in your life, always! It's the tiny habits that you cultivate along the way that will help you shape your career into a successful one. Be honest, trustworthy and true to yourself and your job. Don't feel guilty if you decide to binge-watch Netflix or just get a good sleep and enjoy being lazy over the weekends. Take breaks, drink water, work out, focus on your health. Health is wealth. Nothing is easy and you need to keep learning and working hard all the time. Work hard, but smart! I wish you all the best!
51.931034
339
0.767264
eng_Latn
0.999859
982bcc60ed20512c355af42fccbebe546e9f7011
3,874
md
Markdown
connectors/README.md
davidrabinowitz/initialization-actions
6060dc9298ba2ea1670eeab4cb4e4cbad93de6f1
[ "Apache-2.0" ]
null
null
null
connectors/README.md
davidrabinowitz/initialization-actions
6060dc9298ba2ea1670eeab4cb4e4cbad93de6f1
[ "Apache-2.0" ]
null
null
null
connectors/README.md
davidrabinowitz/initialization-actions
6060dc9298ba2ea1670eeab4cb4e4cbad93de6f1
[ "Apache-2.0" ]
null
null
null
-------------------------------------------------------------------------------- # NOTE: *Updating Cloud Storage connector with this initialization action is not recommended* **Instead of using this initialization action you can update Cloud Storage Connector through `GCS_CONNECTOR_VERSION` metadata value on supported Dataproc images.** -------------------------------------------------------------------------------- # Google Cloud Storage and BigQuery connectors This initialization action installs specified versions of [Google Cloud Storage connector](https://github.com/GoogleCloudDataproc/hadoop-connectors/tree/master/gcs), [Hadoop BigQuery connector](https://github.com/GoogleCloudDataproc/hadoop-connectors/tree/master/bigquery) and [Spark BigQuery connector](https://github.com/GoogleCloudDataproc/spark-bigquery-connector) on a [Google Cloud Dataproc](https://cloud.google.com/dataproc) cluster. ## Using this initialization action **:warning: NOTICE:** See [best practices](/README.md#how-initialization-actions-are-used) of using initialization actions in production. 
You can use this initialization action to create a new Dataproc cluster with an updated Google Cloud Storage connector, Hadoop BigQuery connector and Spark BigQuery connector installed: - to update a connector by specifying a version, use the `gcs-connector-version`, `bigquery-connector-version` and `spark-bigquery-connector-version` metadata values: ``` REGION=<region> CLUSTER_NAME=<cluster_name> gcloud dataproc clusters create ${CLUSTER_NAME} \ --region ${REGION} \ --initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/connectors/connectors.sh \ --metadata gcs-connector-version=2.2.0 \ --metadata bigquery-connector-version=1.2.0 \ --metadata spark-bigquery-connector-version=0.20.0 ``` - to update a connector by specifying a URL, use the `gcs-connector-url`, `bigquery-connector-url` and `spark-bigquery-connector-url` metadata values: ``` REGION=<region> CLUSTER_NAME=<cluster_name> gcloud dataproc clusters create ${CLUSTER_NAME} \ --region ${REGION} \ --initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/connectors/connectors.sh \ --metadata gcs-connector-url=gs://path/to/custom/gcs/connector.jar \ --metadata bigquery-connector-url=gs://path/to/custom/hadoop/bigquery/connector.jar \ --metadata spark-bigquery-connector-url=gs://path/to/custom/spark/bigquery/connector.jar ``` This script downloads the specified Google Cloud Storage connector, Hadoop BigQuery connector and Spark BigQuery connector, and deletes old versions of these connectors if they were installed. To specify a connector version, find the connector version on the [Hadoop connectors releases page](https://github.com/GoogleCloudDataproc/hadoop-connectors/releases) and [Spark BigQuery connector releases page](https://github.com/GoogleCloudDataproc/spark-bigquery-connector/releases), and set it as the `gcs-connector-version`, `bigquery-connector-version` or `spark-bigquery-connector-version` metadata key value. 
If only one connector version is specified (Google Cloud Storage, Hadoop BigQuery or Spark BigQuery), then only that connector will be updated. For example: * if Google Cloud Storage connector version 2.2.0 is specified and neither the Hadoop BigQuery connector nor the Spark BigQuery connector version is specified, then only the Google Cloud Storage connector will be updated, to version 2.2.0: ``` REGION=<region> CLUSTER_NAME=<cluster_name> gcloud dataproc clusters create ${CLUSTER_NAME} \ --region ${REGION} \ --initialization-actions gs://goog-dataproc-initialization-actions-${REGION}/connectors/connectors.sh \ --metadata gcs-connector-version=2.2.0 ```
44.528736
115
0.726381
eng_Latn
0.756338
982beaf884745254b0b9899b92fa06bb6bce846d
17,652
md
Markdown
pages/services/nifi/0.2.0-1.5.0/overview/index.md
farhan5900/dcos-docs-site
0797219a9fe2e23f3f7eca94318f6b46913bfffe
[ "Apache-2.0" ]
null
null
null
pages/services/nifi/0.2.0-1.5.0/overview/index.md
farhan5900/dcos-docs-site
0797219a9fe2e23f3f7eca94318f6b46913bfffe
[ "Apache-2.0" ]
null
null
null
pages/services/nifi/0.2.0-1.5.0/overview/index.md
farhan5900/dcos-docs-site
0797219a9fe2e23f3f7eca94318f6b46913bfffe
[ "Apache-2.0" ]
null
null
null
--- layout: layout.pug navigationTitle: Overview title: Overview menuWeight: 10 excerpt: Getting started with DC/OS NiFi Service featureMaturity: enterprise: false --- # Components The following components work together to deploy and maintain the service. - Mesos Mesos is the foundation of the DC/OS cluster. Everything launched within the cluster is allocated resources and managed by Mesos. A typical Mesos cluster has one or three Masters that manage resources for the entire cluster. On DC/OS, the machines running the Mesos Masters will typically run other cluster services as well, such as Marathon and Cosmos, as local system processes. Separately from the Master machines are the Agent machines, which are where in-cluster processes are run. For more information on Mesos architecture, see the [Apache Mesos documentation](https://mesos.apache.org/documentation/latest/architecture/). For more information on DC/OS architecture, see the [DC/OS architecture documentation](https://docs.mesosphere.com/latest/overview/architecture/). - ZooKeeper ZooKeeper is a common foundation for DC/OS system components, like Marathon and Mesos. It provides distributed key-value storage for configuration, synchronization, name registration, and cluster state storage. DC/OS comes with ZooKeeper installed by default, typically with one instance per DC/OS master. SDK Schedulers use the default ZooKeeper instance to store persistent state across restarts (under znodes named dcos-service-<svcname>). This allows Schedulers to be killed at any time and continue where they left off. **Note**: SDK Schedulers currently require ZooKeeper, but any persistent configuration storage (such as etcd) could fit this role. ZooKeeper is a convenient default because it is always present in a DC/OS cluster. - Marathon Marathon is the “init system” of a DC/OS cluster. Marathon launches tasks in the cluster and keeps them running. 
From the perspective of Mesos, Marathon is itself another Scheduler running its own tasks. Marathon is more general than SDK Schedulers and mainly focuses on tasks that don’t require managing local persistent state. SDK services rely on Marathon to run the Scheduler and to provide it with a configuration via environment variables. The Scheduler, however, maintains its own service tasks without any direct involvement by Marathon. - Scheduler The Scheduler is the “management layer” of the service. It launches the service nodes and keeps them running. It also exposes endpoints to allow end users to control the service and diagnose problems. The Scheduler is kept online by the cluster’s “init system”, Marathon. The Scheduler itself is effectively a Java application that is configured via environment variables provided by Marathon. - Packaging Apache NiFi is packaged for deployment on DC/OS. DC/OS packages follow the [Universe schema](https://github.com/mesosphere/universe), which defines how packages expose customization options at initial installation. When a package is installed on the cluster, the packaging service (named ‘Cosmos’) creates a Marathon app that contains a rendered version of the marathon.json.mustache template provided by the package. For DC/OS Apache NiFi, this Marathon app is the Scheduler for the service. For further discussion of DC/OS components, see the [architecture documentation](https://docs.mesosphere.com/latest/overview/architecture/components/). # Deployment Internally, NiFi treats “Deployment” as moving from one state to another state. By this definition, “Deployment” applies to many scenarios: - When NiFi is first installed, deployment is moving from a null configuration to a deployed configuration. - When the deployed configuration is changed by editing an environment variable in the Scheduler, deployment is moving from an initial running configuration to a new proposed configuration. 
In this section, we’ll describe how these scenarios are handled by the Scheduler. ## Initial Install This is the flow for deploying a new service: ### Steps handled by the DC/OS cluster 1. The user runs dcos package install NiFi in the DC/OS CLI or clicks Install for a given package on the DC/OS Dashboard. 2. A request is sent to the Cosmos packaging service to deploy the requested package along with a set of configuration options. 3. Cosmos creates a Marathon app definition by rendering NiFi’s marathon.json.mustache with the configuration options provided in the request, which represents NiFi’s Scheduler. Cosmos queries Marathon to create the app. 4. Marathon launches the NiFi’s Scheduler somewhere in the cluster using the rendered app definition provided by Cosmos. 5. NiFi’s Scheduler is launched. From this point onwards, the SDK handles deployment. ### Steps handled by the Scheduler The Scheduler starts with the following state: - A `svc.yml` template that represents the service configuration. - Environment variables provided by Marathon, to be applied onto the svc.yml template. - Any custom logic implemented by the service developer in their Main function (we’ll be assuming this is left with defaults for the purposes of this explanation). 1. The `svc.yml` template is rendered using the environment variables provided by Marathon. 2. The rendered `svc.yml` “Service Spec” contains the host/port for the ZooKeeper instance, which the Scheduler uses for persistent configuration/state storage. The default is `master.mesos:2181`, but may be manually configured to use a different ZooKeeper instance. The Scheduler always stores its information under a znode named `dcos-service-<svcname>`. 3. The Scheduler connects to that ZooKeeper instance and checks to see if it has previously stored a Mesos Framework ID for itself. - If the Framework ID is present, the Scheduler will attempt to reconnect to Mesos using that ID. 
This may result in a “Framework has been removed” error if Mesos doesn’t recognize that Framework ID, indicating an incomplete uninstall. - If the Framework ID is not present, the Scheduler will attempt to register with Mesos as a Framework. Assuming this is successful, the resulting Framework ID is then immediately stored. 4. Now that the Scheduler has registered as a Mesos Framework, it is able to start interacting with Mesos and receiving offers. When this begins, the Scheduler will begin running the Offer Cycle and deploying NiFi. See that section for more information. 5. The Scheduler retrieves its deployed task state from ZooKeeper and finds that there are tasks that should be launched. This is the first launch, so all tasks need to be launched. 6. The Scheduler deploys those missing tasks through the Mesos offer cycle using a Deployment Plan to determine the ordering of that deployment. 7. Once the Scheduler has launched the missing tasks, its current configuration should match the desired configuration defined by the “Service Spec” extracted from `svc.yml`. a. When the current configuration matches the desired configuration, the Scheduler will tell Mesos to suspend sending new offers, as there is nothing to be done. b. The Scheduler idles until it receives an RPC from Mesos notifying it of a task status change, it receives an RPC from an end user against one of its HTTP APIs, or until it is killed by Marathon as the result of a configuration change. ## Reconfiguration This is the flow for reconfiguring a DC/OS service either in order to update specific configuration values, or to upgrade it to a new package version. ### Steps handled by the Scheduler As with initial install above, at this point the Scheduler is re-launched with the same three sources of information it had before: - `svc.yml` template. - New environment variables. - Custom logic implemented by the service developer (if any). 
In addition, the Scheduler now has a fourth piece: - Pre-existing state in ZooKeeper Scheduler reconfiguration is slightly different from initial deployment because the Scheduler is now comparing its current state to a non-empty prior state and determining what needs to be changed. 1. After the Scheduler has rendered its `svc.yml` against the new environment variables, it has two Service Specs, reflecting two different configurations. - The Service Spec that was just rendered, reflecting the configuration change. - The prior Service Spec (or “Target Configuration”) that was previously stored in ZooKeeper. 2. The Scheduler automatically compares the changes between the old and new Service Specs. a. Change validation: Certain changes, such as editing volumes and scale-down, are not currently supported because they are complicated and dangerous to get wrong. - If an invalid change is detected, the Scheduler will send an error message and refuse to proceed until the user has reverted the change by relaunching the Scheduler app in Marathon with the prior config. - If the changes are valid, the new configuration is stored in ZooKeeper as the new Target Configuration and the change deployment proceeds as described below. b. Change deployment: The Scheduler produces a diff between the current state and some future state, including all of the Mesos calls (reserve, unreserve, launch, destroy, etc.) needed to get there. For example, if the number of tasks has been increased, then the Scheduler will launch the correct number of new tasks. If a task configuration setting has been changed, the Scheduler will deploy that change to the relevant affected tasks by relaunching them. Tasks that aren’t affected by the configuration change will be left as-is. c. Custom update logic: Some services may have defined a custom update Plan in its svc.yml, in cases where different logic is needed for an update/upgrade than is needed for the initial deployment. 
When a custom update plan is defined, the Scheduler will automatically use this Plan, instead of the default deploy Plan, when rolling out an update to the service. ## Uninstallation This is the flow for uninstalling NiFi. ### Steps handled by the Cluster 1. The user uses the DC/OS CLI’s `dcos package uninstall` command to uninstall the service. 2. The DC/OS package manager instructs Marathon to kill the current Scheduler and to launch a new Scheduler with the environment variable `SDK_UNINSTALL` set to “true”. ### Steps handled by the Scheduler When started in uninstall mode, the Scheduler performs the following actions: - Any Mesos resource reservations are unreserved. **Warning:** Any data stored in reserved disk resources will be irretrievably lost. - Preexisting state in ZooKeeper is deleted. # Pods A Task generally maps to a single process within the service. A Pod is a collection of colocated Tasks that share an environment. All Tasks in a Pod will come up and go down together. Therefore, most maintenance operations against the service are at Pod granularity rather than Task granularity. # Plans The Scheduler organizes its work into a list of Plans. Every SDK Scheduler has at least a Deployment Plan and a Recovery Plan, but other Plans may also be added for things like custom Backup operations. The Deployment Plan is in charge of performing an initial deployment of the service. It is also used for rolling out configuration changes to the service (or in more abstract terms, handling the transition needed to get the service from some state to another state), unless the service developer provided a custom update Plan. The Recovery Plan is in charge of relaunching any exited tasks that should always be running. Plans have a fixed three-level hierarchy. Plans contain Phases, and Phases contain Steps. For example, imagine a service with two index nodes and three data nodes. 
The Plan structure for a Scheduler in this configuration could look like this: Deployment Plan (deploy) Index Node Phase Index Node 0 Step Index Node 1 Step Data Node Phase Data Node 0 Step Data Node 1 Step Data Node 2 Step Custom Update Plan (update) (custom logic, if any, for rolling out a config update or software upgrade) Recovery Plan (recovery) (phases and steps are autogenerated as failures occur) Index Backup Plan Run Reindex Phase Index Node 0 Step Index Node 1 Step Upload Data Phase Index Node 0 Step Index Node 1 Step Data Backup Plan Data Backup Phase Data Node 0 Step Data Node 1 Step Data Node 2 Step As you can see, in addition to the default Deployment and Recovery Plans, this Scheduler also has a custom Update Plan which provides custom logic for rolling out a change to the service. If a custom plan is not defined then the Deployment Plan is used for this scenario. In addition, the service defines auxiliary Plans that support other custom behavior, specifically one Plan that handles backing up Index nodes, and another that backs up Data nodes. In practice, there would likely also be Plans for restoring these backups. These auxiliary Plans could all be invoked manually by an operator, and may include additional parameters such as credentials or a backup location. Those are omitted here for brevity. In short, Plans are the SDK’s abstraction for a sequence of tasks to be performed by the Scheduler. By default, these include deploying and maintaining the cluster, but additional maintenance operations may also be fit into this structure. ## Custom Update Plan By default, the service will use the Deployment Plan when rolling out a configuration change or software upgrade, but some services may need custom logic in this scenario, in which case the service developer may have defined a custom plan named update. # Virtual networks The SDK allows pods to join virtual networks, with the dcos virtual network available by default. 
You can specify that a pod should join the virtual network by using the networks keyword in your YAML definition. Refer to the [Developer Guide](https://mesosphere.github.io/dcos-commons/developer-guide/) for more information about how to define virtual networks in your service. When a pod is on a virtual network such as the dcos: - Every pod gets its own IP address and its own array of ports. - Pods do not use the ports on the host machine. - Pod IP addresses can be resolved with the DNS: `<task_name>.<service_name>.autoip.dcos.thisdcos.directory`. - You can also pass labels while invoking CNI plugins. Refer to the Developer Guide for more information about adding CNI labels. # Placement Constraints Placement constraints allow you to customize where a service is deployed in the DC/OS cluster. Depending on the service, some or all components may be configurable using Marathon operators (reference) with this syntax: field:OPERATOR[:parameter]. For example, if the reference lists [["hostname", "UNIQUE"]], you should use `hostname:UNIQUE`. A common task is to specify a list of whitelisted systems to deploy to. To achieve this, use the following syntax for the placement constraint: ```shell hostname:LIKE:10.0.0.159|10.0.1.202|10.0.3.3 ``` You must include spare capacity in this list, so that if one of the whitelisted systems goes down, there is still enough room to repair your service (via pod replace) without requiring that system. # Integration with DC/OS access controls In DC/OS 1.10 and later versions, you can integrate your SDK-based service with DC/OS ACLs to grant users and groups access to only certain services. You do this by installing your service into a folder, and then restricting access to some number of folders. Folders also allow you to namespace services. For instance, staging/nifi and production/nifi. Steps: 1. In the DC/OS GUI, create a group, then add a user to the group. Or, just create a user. 
Click **Organization > Groups > +** or **Organization > Users > +**. If you create a group, you must also create a user and add them to the group. 2. Give the user permissions for the folder where you will install your service. In this example, we are creating a user called developer, who will have access to the /testing folder. Select the group or user you created. Select ADD PERMISSION and then toggle to INSERT PERMISSION STRING. Add each of the following permissions to your user or group, and then click ADD PERMISSIONS. ```shell dcos:adminrouter:service:marathon full dcos:service:marathon:marathon:services:/testing full dcos:adminrouter:ops:mesos full dcos:adminrouter:ops:slave full ``` Install a service (in this example, nifi) into a folder called `test`. Go to Catalog, then search for `beta-nifi`. Click CONFIGURE and change the service name to /testing/nifi, then deploy. The slashes in your service name are interpreted as folders. You are deploying nifi in the /testing folder. Any user with access to the /testing folder will have access to the service. **Caution:** Services cannot be renamed. Because the location of the service is specified in the name, you cannot move services between folders. DC/OS 1.9 does not accept slashes in service names. You may be able to create the service, but you will encounter unexpected problems. ### Interacting with your foldered service Interact with your foldered service via the DC/OS CLI with this flag: `--name=/path/to/myservice`. To interact with your foldered service over the web directly, use http://<dcos-url>/service/path/to/myservice. Example: http://<dcos-url>/service/testing/nifi/v1/endpoints.
72.942149
780
0.782518
eng_Latn
0.999071
982bf79f3b50b3232ad522ba1565fef527e5ac89
10,607
md
Markdown
contrib-notes/clojars/background.md
tomjkidd/clojure
ce18e73079ed3ac107823a1c132d033dee84b563
[ "MIT" ]
1
2018-03-27T03:00:01.000Z
2018-03-27T03:00:01.000Z
contrib-notes/clojars/background.md
tomjkidd/clojure
ce18e73079ed3ac107823a1c132d033dee84b563
[ "MIT" ]
null
null
null
contrib-notes/clojars/background.md
tomjkidd/clojure
ce18e73079ed3ac107823a1c132d033dee84b563
[ "MIT" ]
null
null
null
The provided [instructions](https://github.com/clojars/clojars-web/blob/453a90c2d280bbb36bc672b4630636f399804929/README.md) resulted in an error for me. user=> (migrate) Running migration: initial-schema Running migration: add-promoted-field BatchUpdateException batch entry 0: [SQLITE_ERROR] SQL error or missing database (no such table: jars) org.sqlite.jdbc3.JDBC3Statement.executeBatch (JDBC3Statement.java:210) To get more info, I ran it through lein lein run -m user/migrate And got the following (abbreviated) results ... [B] at clojars.db.migrate$add_promoted_field.invokeStatic(migrate.clj:15) at clojars.db.migrate$add_promoted_field.invoke(migrate.clj:14) at clojure.lang.Var.invoke(Var.java:379) at clojars.db.migrate$run_and_record.invokeStatic(migrate.clj:39) at clojars.db.migrate$run_and_record.invoke(migrate.clj:37) at clojars.db.migrate$migrate$fn__9489.invoke(migrate.clj:68) at clojure.java.jdbc$db_transaction_STAR_.invokeStatic(jdbc.clj:595) at clojure.java.jdbc$db_transaction_STAR_.doInvoke(jdbc.clj:568) at clojure.lang.RestFn.invoke(RestFn.java:521) at clojure.java.jdbc$db_transaction_STAR_.invokeStatic(jdbc.clj:611) at clojure.java.jdbc$db_transaction_STAR_.doInvoke(jdbc.clj:568) at clojure.lang.RestFn.invoke(RestFn.java:425) at clojars.db.migrate$migrate.invokeStatic(migrate.clj:62) at clojars.db.migrate$migrate.invoke(migrate.clj:55) [A] at user$migrate.invokeStatic(user.clj:42) at user$migrate.invoke(user.clj:41) at clojure.lang.Var.invoke(Var.java:375) at user$eval11958.invokeStatic(form-init1159289827140649818.clj:1) at user$eval11958.invoke(form-init1159289827140649818.clj:1) at clojure.lang.Compiler.eval(Compiler.java:6927) at clojure.lang.Compiler.eval(Compiler.java:6917) at clojure.lang.Compiler.load(Compiler.java:7379) ... [A] is a call to clojars.db.migrate/migrate [B] filtered to clojars, this is the top of the stack trace, the add-promoted-field function. This function tries to add a column to the jars table through SQL. 
I then tried to debug the issue with the following commands (require '[clojars.config :as config]) config/config That gave me this output user=> (pprint config/config) {:deletion-backup-dir "data/dev_deleted_items", :db {:classname "org.sqlite.JDBC", :subprotocol "sqlite", :subname "data/dev_db"}, :mail {:hostname "localhost", :from "noreply@clojars.org", :ssl false}, :stats-dir "data/stats", :bcrypt-work-factor 12, :port 8080, :base-url "http://localhost:8080", :nrepl-port 7991, :index-path "data/index", :repo "data/dev_repo", :yeller-environment "development", :bind "0.0.0.0"} (migrate) calls the migrate function in clojars-web/dev/user.clj, which is actually defined in clojars.db.migrate in src/clojars/db/migrate.clj. The call is made with the :db accessor, which refers to this: {:classname "org.sqlite.JDBC", :subprotocol "sqlite", :subname "data/dev_db"} Based on :db, the database should be in data/dev_db Based on the stack trace above, I would expect that maybe the `jars` table does not exist. Using SQLite Manager with Firefox, this is indeed the case. # How is the jars table created? Did a search in the project for "CREATE TABLE" resources/queries/clojars.sql has the statement to create the jars table. # Where is this SQL statement called? Did a search for queries/clojars.sql and found a reference in src/clojars/db/migrate.clj, line 8. This is part of the initial-schema function. This symbol is part of the `migrations` symbol, and should be the first SQL commands to run, per line 66 in the migrate function. # What does sql/db-do-commands do? skipped for now # What does sql/with-db-transaction do? skipped for now # What does doseq do? skipped for now # What does run-and-record do? skipped for now # What if I run resources/queries/clojars.sql first through SQLite Manager? 
D:\github-work\clojars-web>lein run -m user/migrate Running migration: initial-schema Running migration: add-promoted-field Running migration: add-jars-index Running migration: add-pgp-key Running migration: add-added-by Running migration: add-password-reset-code Running migration: add-password-reset-code-created-at I am able to get the project to launch using the instructions after running this command. # What is a maven repository? I was unable to use rsync (windows does not provide it with Git Bash), so I had to try and find an alternative way to get the equivalent information. [Maven intro to repositories](https://maven.apache.org/guides/introduction/introduction-to-repositories.html) The Repository is like NPM for Java artifacts. [Clojars Groups](https://maven.apache.org/guides/introduction/introduction-to-repositories.html) Groups are used to identify a container for Projects. Lein uses the form [groupId/artifactId "version"] for dependencies. (defproject org.clojars.tomjkidd/projectName ...) for creating a dev project # What does the `rsync -av --delete clojars.org::clojars copy-of-clojars` do? -av | v -> --verbose, a -> --archive (equivalent to -rlptgoD) | r -> --recursive, l -> --links (copy symlinks as symlinks) | p -> --perms, t -> --times, g -> --group | o -> --owner, D -> --devices --specials --delete | Delete extraneous files from dest dirs clojars.org | HOST clojars | SRC copy-of-clojars | DEST # Can I use ~/.m2/repository? Git Bash cp -r ~/.m2/repository /d/github-work/clojars-web/data/dev_repo Lein Bash lein run -m clojars.tools.setup-dev This led to an exception being thrown. 
    at clojure.lang.Compiler.load(Compiler.java:7391)
    at clojure.lang.Compiler.loadFile(Compiler.java:7317)
    at clojure.main$load_script.invokeStatic(main.clj:275)
    at clojure.main$init_opt.invokeStatic(main.clj:277)
    at clojure.main$init_opt.invoke(main.clj:277)
    at clojure.main$initialize.invokeStatic(main.clj:308)
    at clojure.main$null_opt.invokeStatic(main.clj:342)
    at clojure.main$null_opt.invoke(main.clj:339)
    at clojure.main$main.invokeStatic(main.clj:421)
    at clojure.main$main.doInvoke(main.clj:384)
    at clojure.lang.RestFn.invoke(RestFn.java:421)
    at clojure.lang.Var.invoke(Var.java:383)
    at clojure.lang.AFn.applyToHelper(AFn.java:156)
    at clojure.lang.Var.applyTo(Var.java:700)
    at clojure.main.main(main.java:37)
    Caused by: java.lang.NullPointerException
    at clojure.string$replace.invokeStatic(string.clj:101)
    at clojure.string$replace.invoke(string.clj:75)
    at clojars.dev.setup$import_repo$iter__12005__12009$fn__12010.invoke(setup.clj:75)
    at clojure.lang.LazySeq.sval(LazySeq.java:40)
    at clojure.lang.LazySeq.seq(LazySeq.java:49)
    at clojure.lang.RT.seq(RT.java:521)
    at clojure.core$seq__4357.invokeStatic(core.clj:137)
    at clojure.core$filter$fn__4812.invoke(core.clj:2700)
    at clojure.lang.LazySeq.sval(LazySeq.java:40)
    at clojure.lang.LazySeq.seq(LazySeq.java:49)
    at clojure.lang.RT.seq(RT.java:521)
    at clojure.core$seq__4357.invokeStatic(core.clj:137)
    at clojure.core.protocols$seq_reduce.invokeStatic(protocols.clj:24)
    at clojure.core.protocols$fn__6738.invokeStatic(protocols.clj:75)
    at clojure.core.protocols$fn__6738.invoke(protocols.clj:75)
    at clojure.core.protocols$fn__6684$G__6679__6697.invoke(protocols.clj:13)
    at clojure.core$reduce.invokeStatic(core.clj:6545)
    at clojure.core$reduce.invoke(core.clj:6527)
[A] at clojars.dev.setup$import_repo.invokeStatic(setup.clj:89)
    at clojars.dev.setup$import_repo.invoke(setup.clj:60)
    at clojars.dev.setup$setup_dev_environment.invokeStatic(setup.clj:110)
    at clojars.dev.setup$setup_dev_environment.invoke(setup.clj:96)
    at clojars.tools.setup_dev$_main.invokeStatic(setup_dev.clj:8)
    at clojars.tools.setup_dev$_main.doInvoke(setup_dev.clj:6)
    at clojure.lang.RestFn.invoke(RestFn.java:397)
    at clojure.lang.Var.invoke(Var.java:375)
    at user$eval11958.invokeStatic(form-init8869485324395708507.clj:1)
    at user$eval11958.invoke(form-init8869485324395708507.clj:1)
    at clojure.lang.Compiler.eval(Compiler.java:6927)
    at clojure.lang.Compiler.eval(Compiler.java:6917)
    at clojure.lang.Compiler.load(Compiler.java:7379)
    ... 14 more

[A] is the top of the clojars part of the stack trace, located in src/clojars/dev/setup.clj; line 89 is part of the `import-repo` function.

Try it with a smaller subset, just lein-ring for now -> still fails.

Figured out what the problem was after a while... In the following regex, `/` is used as the file separator, but on Windows it is `\`:

    "/(.*)/([^/]*)$"

This small but sneaky problem wasted about an hour of my life trying to figure out why the migration kept failing. You can access the separator from Clojure with the following:

    (java.io.File/separator)

After overcoming this issue, I tried again to follow the instructions, but another error surfaced. This time it's when I try to access http://localhost:8080 after the (go) command in the REPL. I get the following error:

    No implementation of method: :-report-error of protocol: #'clojars.errors/ErrorReporter found for class: clojars.errors.StdOutReporter

It does appear to be defined in clojars.errors.StdOutReporter:

    ...
    [clj-stacktrace.repl :refer [pst]]
    ...
    (defrecord StdOutReporter []
      ErrorReporter
      (-report-error [_ e _ id]
        (println "ERROR ID:" id)
        (pst e)))
    ...

The def for `pst`:

    (defn pst
      "Print to *out* a pretty stack trace for a (parsed) exception, by default *e."
      [& [e]]
      (pst-on *out* false (or e *e)))

And it's about this point that I ran out of steam.
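The separator bug above generalizes beyond Clojure: any code that hard-codes `/` as the path separator breaks on Windows. As an illustration (a hypothetical helper, not part of the Clojars codebase), here is a portable directory/file split in Java that takes the separator as a parameter instead of baking `/` into a regex:

```java
public class PathSplit {
    // Split a path into {directory, file name} on the given separator.
    // Hard-coding '/' is exactly the setup.clj bug: on Windows the
    // separator is '\', so a regex like "/(.*)/([^/]*)$" never matches.
    public static String[] splitPath(String path, char sep) {
        int i = path.lastIndexOf(sep);
        if (i < 0) {
            return new String[] {"", path};
        }
        return new String[] {path.substring(0, i), path.substring(i + 1)};
    }

    public static void main(String[] args) {
        // On a real system you would pass java.io.File.separatorChar.
        String[] parts = splitPath("lein-ring/lein-ring/0.9.7/lein-ring-0.9.7.jar", '/');
        System.out.println(parts[0]); // lein-ring/lein-ring/0.9.7
        System.out.println(parts[1]); // lein-ring-0.9.7.jar
    }
}
```

Using `lastIndexOf` sidesteps regex escaping entirely, which matters because `\` is also the regex escape character.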
46.318777
223
0.684077
eng_Latn
0.821409
982c10b8ca05da1e5620bb525bf496393b16907b
1,673
md
Markdown
docs/reporting-services/lesson-9-build-and-run-the-application.md
jaredmoo/sql-docs
fae18f2837c5135d3482a26f999173ecf4f9f58e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/reporting-services/lesson-9-build-and-run-the-application.md
jaredmoo/sql-docs
fae18f2837c5135d3482a26f999173ecf4f9f58e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/reporting-services/lesson-9-build-and-run-the-application.md
jaredmoo/sql-docs
fae18f2837c5135d3482a26f999173ecf4f9f58e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: "Lesson 9: Build and Run the Application | Microsoft Docs"
ms.custom: ""
ms.date: "05/18/2016"
ms.prod: reporting-services
ms.prod_service: "reporting-services-native"
ms.service: ""
ms.component: "reporting-services"
ms.reviewer: ""
ms.suite: "pro-bi"
ms.technology: 
ms.tgt_pltfrm: ""
ms.topic: "article"
applies_to: 
  - "SQL Server 2016"
ms.assetid: f52d3f3a-0b09-4b34-9112-0b3655271587
caps.latest.revision: 9
author: "markingmyname"
ms.author: "maghan"
manager: "kfile"
---
# Lesson 9: Build and Run the Application
After you create a data filter for the data table, your next step is to build and run the website application.

### To build and run the application

1. Press **CTRL+F5** to run the Default.aspx page without debugging, or press **F5** to run the page with debugging.

   As part of the build process, the report is compiled, and any errors found (such as a syntax error in an expression used in the report) are added to the **Task List** at the bottom of the Visual Studio window.

   The webpage appears in the browser, and the ReportViewer control displays the report. You can use the toolbar to browse through the report, zoom, and export the report to Excel.

2. Hover the mouse over any of the rows in the **Name** column. The cursor changes to a hand symbol.

3. Select a value in the **Name** column. The child report is shown with the corresponding filtered data.

4. Select the **Go back to parent report** icon on the **ReportViewer** toolbar to navigate back to the parent report.

5. Close the browser to exit.
37.177778
231
0.702929
eng_Latn
0.992729
982cabc038452d451329c3c53f26f4c45f50c56f
5,668
md
Markdown
collections/_pretty_pictures/lesson-03.md
carlos-ar/carlos-ar.githib.io
e3b3bb457564e791378fb1676f294d098c4ecffd
[ "CC-BY-3.0" ]
1
2020-05-28T00:14:41.000Z
2020-05-28T00:14:41.000Z
collections/_pretty_pictures/lesson-03.md
carlos-ar/carlos-ar.githib.io
e3b3bb457564e791378fb1676f294d098c4ecffd
[ "CC-BY-3.0" ]
null
null
null
collections/_pretty_pictures/lesson-03.md
carlos-ar/carlos-ar.githib.io
e3b3bb457564e791378fb1676f294d098c4ecffd
[ "CC-BY-3.0" ]
null
null
null
---
layout: lesson
title: Lesson 03
lesson_name: What tools can I use?
lesson: 3
---

This section is a run-down of some examples of my own personal use cases for image creation. I cannot endorse certain programs over others from a technical perspective of optimization, performance, or professional technique. However, these programs have specific usage areas, and my own biases will come through. Mandatory shout-out to my friend and lab mate, Dr. Monica Keiko Lieng, who helped me find my "workflow".

*<u>Disclaimer:</u>* I have received many of these programs free through my university at one point or another. The free and open-source alternatives are in the last section.

## Learning Objectives

- Participants will list several 1) paid and 2) free programs made for graphics
- Participants will describe the workflow for Adobe Creative Cloud

## The workflow

I have spent many years working exclusively with Adobe Photoshop. In fact, it was the first program that I was introduced to many years ago when I was in high school. We had a one-semester class focused on computer and information technology. Our teacher had studied computer graphics in college, and he emphasized two very important things:

1. How to use Adobe Photoshop for image manipulation and to create a magazine print cover
2. If you don't know how to do something, "google it"

While learning how to use Photoshop was definitely more directly relevant to the class, learning the power of using a search engine for design problems is probably one that I have refined over and over again since my first "[yahooligans](https://en.wikipedia.org/wiki/Yahoo!_Kids)" search back in 1999.
Either way, one *important* lesson that I missed, or might have never heard in that classroom of 35 hyperactive high school students with access to the entire internet, was:

- Professional designers and illustrators were using a variety of programs to do their work, not just Photoshop

It wasn't until much later that I understood design better and knew how Adobe had capitalized on working professionals to offer one consistent experience, which merged into the now-discontinued Adobe Creative Suite and, later, the Adobe Creative Cloud subscription service. But of course, access to these programs was behind a huge paywall, and luckily I was able to use them with my university license access. Because of this, and the wide availability of online tutorials, my workflow became this:

![workflow](img/workflow.png)

This had 3 basic working steps:

1. Edit the image from the real world using Photoshop
2. Create any illustrations that you will use using Illustrator
3. Put them together with text for print using InDesign

And for most of my image-creating career, this was the preferred method for my work. I recently began to switch over to free and open-source programs for many things. This included a shift to using free programming [IDE](https://en.wikipedia.org/wiki/Integrated_development_environment)s to generate images. I previously used MATLAB for all my graphs and plotting, but now I have shifted to the Python programming language for this. So, in this vein, I might as well also begin using these free programs to generate my images.

## The Programs

The following list of programs is taken from my limited experience switching out of paid subscriptions. It is not comprehensive, but it showcases some good ones!

### Free and open source

- [Inkscape](https://inkscape.org) -- The first one on my list is of course Inkscape, which has a huge community and is completely cross-platform (Windows, OSX, and Linux).
![inkscape](img/inkscape.png)

- [Krita](https://krita.org) -- This one is fairly new to me, but I really enjoy it. Mainly used by artists for drawing, but it has a really nice and easy-to-use interface. Cross-platform as well.

![krita](img/krita.png)

- [GIMP](https://gimp.org) -- A tried-and-true classic program. I've used this one in the past without knowing I was using it. It has a lot of features that are accessible for beginners.

![gimp](img/gimp.png)

### Paid programs

- Microsoft Office PowerPoint -- This one is a bit of a contradiction. I was vehemently opposed to using any MS Office product for image production. But some nice features have been creeping in recently, and it can now export high-quality image files. Give it a shot for quick-and-dirty images.
- Adobe Creative Cloud -- While I would support this more if it were affordable, it might be too overwhelming for the beginner at first. However, these are still the industry standard; if you can get your hands on a license, give them a try.
- Corel -- At one point, CorelDRAW was one of the biggest image-manipulation programs in use. But recently, it has not been as popular and widespread. In any case, this is still a good paid program to try, if you can find a license.

## Additional Resources

Others have looked into many aspects of programs that can help you make a decision, with comments about performance and more specific use cases. There are some articles that you can read for more complete suggestions.
- [EPS vs SVG](https://www.educba.com/svg-vs-eps/) -- All about which vector image format you should use for your images
- [Exporting High Res PowerPoint](https://www.slidecow.com/powerpoint-tutorials/export-high-resolution-high-quality-images-powerpoint/) -- Now Microsoft Office has much more capability and can help when you are in a bind
- [Which program?](https://medium.com/@inkbotdesign/top-8-free-open-source-tools-for-graphic-designers-3c34768e2c86) -- If you want a short, slightly more technical description of some of the free programs
77.643836
535
0.785992
eng_Latn
0.999673
982cf20ac30c473cfece67f5f92b116e17e647b4
124
md
Markdown
README.md
Knight-Rider888/DimensGenerator
d3c256447745bfd2df5d5a9c923642af1f0fb4ee
[ "Apache-2.0" ]
null
null
null
README.md
Knight-Rider888/DimensGenerator
d3c256447745bfd2df5d5a9c923642af1f0fb4ee
[ "Apache-2.0" ]
null
null
null
README.md
Knight-Rider888/DimensGenerator
d3c256447745bfd2df5d5a9c923642af1f0fb4ee
[ "Apache-2.0" ]
null
null
null
# DimensGenerator

A generator for dimen adaptation files. Run the project to automatically generate a dimens.xml under each corresponding folder.

For controlling how many items a list shows per row, the generated int values can be used for adaptation (if you use int, do not also use dp adaptation).

dp adaptation automatically matches the dp value of the device's width.
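The README's behavior can be sketched roughly as follows. This is an illustrative Python sketch, not the project's actual code; the 360dp design width, the `dp_N` naming scheme, and the value range are all assumptions:

```python
# Illustrative sketch: emit a dimens.xml whose dp values are scaled
# from an assumed 360dp design width to a target device width.
BASE_WIDTH_DP = 360  # assumption: common design baseline

def scaled(dp, target_width_dp):
    """Scale a design-time dp value to the target screen width."""
    return round(dp * target_width_dp / BASE_WIDTH_DP, 2)

def generate_dimens(target_width_dp, max_dp=10):
    """Build the text of a dimens.xml for one target width."""
    lines = ['<?xml version="1.0" encoding="utf-8"?>', "<resources>"]
    for dp in range(1, max_dp + 1):
        lines.append(
            '    <dimen name="dp_%d">%sdp</dimen>' % (dp, scaled(dp, target_width_dp))
        )
    lines.append("</resources>")
    return "\n".join(lines)

print(generate_dimens(411))  # e.g. a Pixel-class 411dp-wide device
```

A real generator would write one such file per `values-swNNNdp` folder rather than printing it.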
17.714286
41
0.846774
yue_Hant
0.263423
982d68acd9afcce5160c001da120cf8fe0132138
20,406
md
Markdown
reference_crm/enum/time_zone/index.md
GrooveHQ/docs
a7486e2ad5c4e72840acd3ec41d36fd5f773ad5f
[ "MIT" ]
null
null
null
reference_crm/enum/time_zone/index.md
GrooveHQ/docs
a7486e2ad5c4e72840acd3ec41d36fd5f773ad5f
[ "MIT" ]
8
2020-03-23T03:25:26.000Z
2021-09-28T00:32:12.000Z
reference_crm/enum/time_zone/index.md
GrooveHQ/docs
a7486e2ad5c4e72840acd3ec41d36fd5f773ad5f
[ "MIT" ]
null
null
null
--- title: TimeZone parent: Enums grand_parent: Reference CRM --- # TimeZone <h3 id="values">Values</h3> <h4 id="abu_dhabi" class="name anchored">ABU_DHABI</h4> <div class="description-wrapper"> <p>(GMT+04:00) Abu Dhabi</p> </div> <h4 id="adelaide" class="name anchored">ADELAIDE</h4> <div class="description-wrapper"> <p>(GMT+09:30) Adelaide</p> </div> <h4 id="alaska" class="name anchored">ALASKA</h4> <div class="description-wrapper"> <p>(GMT-09:00) Alaska</p> </div> <h4 id="almaty" class="name anchored">ALMATY</h4> <div class="description-wrapper"> <p>(GMT+06:00) Almaty</p> </div> <h4 id="american_samoa" class="name anchored">AMERICAN_SAMOA</h4> <div class="description-wrapper"> <p>(GMT-11:00) American Samoa</p> </div> <h4 id="amsterdam" class="name anchored">AMSTERDAM</h4> <div class="description-wrapper"> <p>(GMT+01:00) Amsterdam</p> </div> <h4 id="arizona" class="name anchored">ARIZONA</h4> <div class="description-wrapper"> <p>(GMT-07:00) Arizona</p> </div> <h4 id="astana" class="name anchored">ASTANA</h4> <div class="description-wrapper"> <p>(GMT+06:00) Astana</p> </div> <h4 id="athens" class="name anchored">ATHENS</h4> <div class="description-wrapper"> <p>(GMT+02:00) Athens</p> </div> <h4 id="atlantic_time_canada" class="name anchored">ATLANTIC_TIME_CANADA</h4> <div class="description-wrapper"> <p>(GMT-04:00) Atlantic Time (Canada)</p> </div> <h4 id="auckland" class="name anchored">AUCKLAND</h4> <div class="description-wrapper"> <p>(GMT+12:00) Auckland</p> </div> <h4 id="azores" class="name anchored">AZORES</h4> <div class="description-wrapper"> <p>(GMT-01:00) Azores</p> </div> <h4 id="baghdad" class="name anchored">BAGHDAD</h4> <div class="description-wrapper"> <p>(GMT+03:00) Baghdad</p> </div> <h4 id="baku" class="name anchored">BAKU</h4> <div class="description-wrapper"> <p>(GMT+04:00) Baku</p> </div> <h4 id="bangkok" class="name anchored">BANGKOK</h4> <div class="description-wrapper"> <p>(GMT+07:00) Bangkok</p> </div> <h4 id="beijing" class="name 
anchored">BEIJING</h4> <div class="description-wrapper"> <p>(GMT+08:00) Beijing</p> </div> <h4 id="belgrade" class="name anchored">BELGRADE</h4> <div class="description-wrapper"> <p>(GMT+01:00) Belgrade</p> </div> <h4 id="berlin" class="name anchored">BERLIN</h4> <div class="description-wrapper"> <p>(GMT+01:00) Berlin</p> </div> <h4 id="bern" class="name anchored">BERN</h4> <div class="description-wrapper"> <p>(GMT+01:00) Bern</p> </div> <h4 id="bogota" class="name anchored">BOGOTA</h4> <div class="description-wrapper"> <p>(GMT-05:00) Bogota</p> </div> <h4 id="brasilia" class="name anchored">BRASILIA</h4> <div class="description-wrapper"> <p>(GMT-03:00) Brasilia</p> </div> <h4 id="bratislava" class="name anchored">BRATISLAVA</h4> <div class="description-wrapper"> <p>(GMT+01:00) Bratislava</p> </div> <h4 id="brisbane" class="name anchored">BRISBANE</h4> <div class="description-wrapper"> <p>(GMT+10:00) Brisbane</p> </div> <h4 id="brussels" class="name anchored">BRUSSELS</h4> <div class="description-wrapper"> <p>(GMT+01:00) Brussels</p> </div> <h4 id="bucharest" class="name anchored">BUCHAREST</h4> <div class="description-wrapper"> <p>(GMT+02:00) Bucharest</p> </div> <h4 id="budapest" class="name anchored">BUDAPEST</h4> <div class="description-wrapper"> <p>(GMT+01:00) Budapest</p> </div> <h4 id="buenos_aires" class="name anchored">BUENOS_AIRES</h4> <div class="description-wrapper"> <p>(GMT-03:00) Buenos Aires</p> </div> <h4 id="cairo" class="name anchored">CAIRO</h4> <div class="description-wrapper"> <p>(GMT+02:00) Cairo</p> </div> <h4 id="canberra" class="name anchored">CANBERRA</h4> <div class="description-wrapper"> <p>(GMT+10:00) Canberra</p> </div> <h4 id="cape_verde_is" class="name anchored">CAPE_VERDE_IS</h4> <div class="description-wrapper"> <p>(GMT-01:00) Cape Verde Is.</p> </div> <h4 id="caracas" class="name anchored">CARACAS</h4> <div class="description-wrapper"> <p>(GMT-04:00) Caracas</p> </div> <h4 id="casablanca" class="name anchored">CASABLANCA</h4> <div 
class="description-wrapper"> <p>(GMT+01:00) Casablanca</p> </div> <h4 id="central_america" class="name anchored">CENTRAL_AMERICA</h4> <div class="description-wrapper"> <p>(GMT-06:00) Central America</p> </div> <h4 id="central_time_us_and_canada" class="name anchored">CENTRAL_TIME_US_AND_CANADA</h4> <div class="description-wrapper"> <p>(GMT-06:00) Central Time (US &amp; Canada)</p> </div> <h4 id="chatham_is" class="name anchored">CHATHAM_IS</h4> <div class="description-wrapper"> <p>(GMT+12:45) Chatham Is.</p> </div> <h4 id="chennai" class="name anchored">CHENNAI</h4> <div class="description-wrapper"> <p>(GMT+05:30) Chennai</p> </div> <h4 id="chihuahua" class="name anchored">CHIHUAHUA</h4> <div class="description-wrapper"> <p>(GMT-07:00) Chihuahua</p> </div> <h4 id="chongqing" class="name anchored">CHONGQING</h4> <div class="description-wrapper"> <p>(GMT+08:00) Chongqing</p> </div> <h4 id="copenhagen" class="name anchored">COPENHAGEN</h4> <div class="description-wrapper"> <p>(GMT+01:00) Copenhagen</p> </div> <h4 id="darwin" class="name anchored">DARWIN</h4> <div class="description-wrapper"> <p>(GMT+09:30) Darwin</p> </div> <h4 id="dhaka" class="name anchored">DHAKA</h4> <div class="description-wrapper"> <p>(GMT+06:00) Dhaka</p> </div> <h4 id="dublin" class="name anchored">DUBLIN</h4> <div class="description-wrapper"> <p>(GMT+01:00) Dublin</p> </div> <h4 id="eastern_time_us_and_canada" class="name anchored">EASTERN_TIME_US_AND_CANADA</h4> <div class="description-wrapper"> <p>(GMT-05:00) Eastern Time (US &amp; Canada)</p> </div> <h4 id="edinburgh" class="name anchored">EDINBURGH</h4> <div class="description-wrapper"> <p>(GMT+00:00) Edinburgh</p> </div> <h4 id="ekaterinburg" class="name anchored">EKATERINBURG</h4> <div class="description-wrapper"> <p>(GMT+05:00) Ekaterinburg</p> </div> <h4 id="fiji" class="name anchored">FIJI</h4> <div class="description-wrapper"> <p>(GMT+12:00) Fiji</p> </div> <h4 id="georgetown" class="name anchored">GEORGETOWN</h4> <div 
class="description-wrapper"> <p>(GMT-04:00) Georgetown</p> </div> <h4 id="greenland" class="name anchored">GREENLAND</h4> <div class="description-wrapper"> <p>(GMT-03:00) Greenland</p> </div> <h4 id="guadalajara" class="name anchored">GUADALAJARA</h4> <div class="description-wrapper"> <p>(GMT-06:00) Guadalajara</p> </div> <h4 id="guam" class="name anchored">GUAM</h4> <div class="description-wrapper"> <p>(GMT+10:00) Guam</p> </div> <h4 id="hanoi" class="name anchored">HANOI</h4> <div class="description-wrapper"> <p>(GMT+07:00) Hanoi</p> </div> <h4 id="harare" class="name anchored">HARARE</h4> <div class="description-wrapper"> <p>(GMT+02:00) Harare</p> </div> <h4 id="hawaii" class="name anchored">HAWAII</h4> <div class="description-wrapper"> <p>(GMT-10:00) Hawaii</p> </div> <h4 id="helsinki" class="name anchored">HELSINKI</h4> <div class="description-wrapper"> <p>(GMT+02:00) Helsinki</p> </div> <h4 id="hobart" class="name anchored">HOBART</h4> <div class="description-wrapper"> <p>(GMT+10:00) Hobart</p> </div> <h4 id="hong_kong" class="name anchored">HONG_KONG</h4> <div class="description-wrapper"> <p>(GMT+08:00) Hong Kong</p> </div> <h4 id="indiana_east" class="name anchored">INDIANA_EAST</h4> <div class="description-wrapper"> <p>(GMT-05:00) Indiana (East)</p> </div> <h4 id="international_date_line_west" class="name anchored">INTERNATIONAL_DATE_LINE_WEST</h4> <div class="description-wrapper"> <p>(GMT-11:00) International Date Line West</p> </div> <h4 id="irkutsk" class="name anchored">IRKUTSK</h4> <div class="description-wrapper"> <p>(GMT+08:00) Irkutsk</p> </div> <h4 id="islamabad" class="name anchored">ISLAMABAD</h4> <div class="description-wrapper"> <p>(GMT+05:00) Islamabad</p> </div> <h4 id="istanbul" class="name anchored">ISTANBUL</h4> <div class="description-wrapper"> <p>(GMT+03:00) Istanbul</p> </div> <h4 id="jakarta" class="name anchored">JAKARTA</h4> <div class="description-wrapper"> <p>(GMT+07:00) Jakarta</p> </div> <h4 id="jerusalem" class="name 
anchored">JERUSALEM</h4> <div class="description-wrapper"> <p>(GMT+02:00) Jerusalem</p> </div> <h4 id="kabul" class="name anchored">KABUL</h4> <div class="description-wrapper"> <p>(GMT+04:30) Kabul</p> </div> <h4 id="kaliningrad" class="name anchored">KALININGRAD</h4> <div class="description-wrapper"> <p>(GMT+02:00) Kaliningrad</p> </div> <h4 id="kamchatka" class="name anchored">KAMCHATKA</h4> <div class="description-wrapper"> <p>(GMT+12:00) Kamchatka</p> </div> <h4 id="karachi" class="name anchored">KARACHI</h4> <div class="description-wrapper"> <p>(GMT+05:00) Karachi</p> </div> <h4 id="kathmandu" class="name anchored">KATHMANDU</h4> <div class="description-wrapper"> <p>(GMT+05:45) Kathmandu</p> </div> <h4 id="kolkata" class="name anchored">KOLKATA</h4> <div class="description-wrapper"> <p>(GMT+05:30) Kolkata</p> </div> <h4 id="krasnoyarsk" class="name anchored">KRASNOYARSK</h4> <div class="description-wrapper"> <p>(GMT+07:00) Krasnoyarsk</p> </div> <h4 id="kuala_lumpur" class="name anchored">KUALA_LUMPUR</h4> <div class="description-wrapper"> <p>(GMT+08:00) Kuala Lumpur</p> </div> <h4 id="kuwait" class="name anchored">KUWAIT</h4> <div class="description-wrapper"> <p>(GMT+03:00) Kuwait</p> </div> <h4 id="kyiv" class="name anchored">KYIV</h4> <div class="description-wrapper"> <p>(GMT+02:00) Kyiv</p> </div> <h4 id="la_paz" class="name anchored">LA_PAZ</h4> <div class="description-wrapper"> <p>(GMT-04:00) La Paz</p> </div> <h4 id="lima" class="name anchored">LIMA</h4> <div class="description-wrapper"> <p>(GMT-05:00) Lima</p> </div> <h4 id="lisbon" class="name anchored">LISBON</h4> <div class="description-wrapper"> <p>(GMT+00:00) Lisbon</p> </div> <h4 id="ljubljana" class="name anchored">LJUBLJANA</h4> <div class="description-wrapper"> <p>(GMT+01:00) Ljubljana</p> </div> <h4 id="london" class="name anchored">LONDON</h4> <div class="description-wrapper"> <p>(GMT+00:00) London</p> </div> <h4 id="madrid" class="name anchored">MADRID</h4> <div class="description-wrapper"> 
<p>(GMT+01:00) Madrid</p> </div> <h4 id="magadan" class="name anchored">MAGADAN</h4> <div class="description-wrapper"> <p>(GMT+11:00) Magadan</p> </div> <h4 id="marshall_is" class="name anchored">MARSHALL_IS</h4> <div class="description-wrapper"> <p>(GMT+12:00) Marshall Is.</p> </div> <h4 id="mazatlan" class="name anchored">MAZATLAN</h4> <div class="description-wrapper"> <p>(GMT-07:00) Mazatlan</p> </div> <h4 id="melbourne" class="name anchored">MELBOURNE</h4> <div class="description-wrapper"> <p>(GMT+10:00) Melbourne</p> </div> <h4 id="mexico_city" class="name anchored">MEXICO_CITY</h4> <div class="description-wrapper"> <p>(GMT-06:00) Mexico City</p> </div> <h4 id="midway_island" class="name anchored">MIDWAY_ISLAND</h4> <div class="description-wrapper"> <p>(GMT-11:00) Midway Island</p> </div> <h4 id="mid_atlantic" class="name anchored">MID_ATLANTIC</h4> <div class="description-wrapper"> <p>(GMT-02:00) Mid-Atlantic</p> </div> <h4 id="minsk" class="name anchored">MINSK</h4> <div class="description-wrapper"> <p>(GMT+03:00) Minsk</p> </div> <h4 id="monrovia" class="name anchored">MONROVIA</h4> <div class="description-wrapper"> <p>(GMT+00:00) Monrovia</p> </div> <h4 id="monterrey" class="name anchored">MONTERREY</h4> <div class="description-wrapper"> <p>(GMT-06:00) Monterrey</p> </div> <h4 id="montevideo" class="name anchored">MONTEVIDEO</h4> <div class="description-wrapper"> <p>(GMT-03:00) Montevideo</p> </div> <h4 id="moscow" class="name anchored">MOSCOW</h4> <div class="description-wrapper"> <p>(GMT+03:00) Moscow</p> </div> <h4 id="mountain_time_us_and_canada" class="name anchored">MOUNTAIN_TIME_US_AND_CANADA</h4> <div class="description-wrapper"> <p>(GMT-07:00) Mountain Time (US &amp; Canada)</p> </div> <h4 id="mumbai" class="name anchored">MUMBAI</h4> <div class="description-wrapper"> <p>(GMT+05:30) Mumbai</p> </div> <h4 id="muscat" class="name anchored">MUSCAT</h4> <div class="description-wrapper"> <p>(GMT+04:00) Muscat</p> </div> <h4 id="nairobi" class="name 
anchored">NAIROBI</h4> <div class="description-wrapper"> <p>(GMT+03:00) Nairobi</p> </div> <h4 id="newfoundland" class="name anchored">NEWFOUNDLAND</h4> <div class="description-wrapper"> <p>(GMT-03:30) Newfoundland</p> </div> <h4 id="new_caledonia" class="name anchored">NEW_CALEDONIA</h4> <div class="description-wrapper"> <p>(GMT+11:00) New Caledonia</p> </div> <h4 id="new_delhi" class="name anchored">NEW_DELHI</h4> <div class="description-wrapper"> <p>(GMT+05:30) New Delhi</p> </div> <h4 id="novosibirsk" class="name anchored">NOVOSIBIRSK</h4> <div class="description-wrapper"> <p>(GMT+07:00) Novosibirsk</p> </div> <h4 id="nuku_alofa" class="name anchored">NUKU_ALOFA</h4> <div class="description-wrapper"> <p>(GMT+13:00) Nuku'alofa</p> </div> <h4 id="osaka" class="name anchored">OSAKA</h4> <div class="description-wrapper"> <p>(GMT+09:00) Osaka</p> </div> <h4 id="pacific_time_us_and_canada" class="name anchored">PACIFIC_TIME_US_AND_CANADA</h4> <div class="description-wrapper"> <p>(GMT-08:00) Pacific Time (US &amp; Canada)</p> </div> <h4 id="paris" class="name anchored">PARIS</h4> <div class="description-wrapper"> <p>(GMT+01:00) Paris</p> </div> <h4 id="perth" class="name anchored">PERTH</h4> <div class="description-wrapper"> <p>(GMT+08:00) Perth</p> </div> <h4 id="port_moresby" class="name anchored">PORT_MORESBY</h4> <div class="description-wrapper"> <p>(GMT+10:00) Port Moresby</p> </div> <h4 id="prague" class="name anchored">PRAGUE</h4> <div class="description-wrapper"> <p>(GMT+01:00) Prague</p> </div> <h4 id="pretoria" class="name anchored">PRETORIA</h4> <div class="description-wrapper"> <p>(GMT+02:00) Pretoria</p> </div> <h4 id="quito" class="name anchored">QUITO</h4> <div class="description-wrapper"> <p>(GMT-05:00) Quito</p> </div> <h4 id="rangoon" class="name anchored">RANGOON</h4> <div class="description-wrapper"> <p>(GMT+06:30) Rangoon</p> </div> <h4 id="riga" class="name anchored">RIGA</h4> <div class="description-wrapper"> <p>(GMT+02:00) Riga</p> </div> <h4 
id="riyadh" class="name anchored">RIYADH</h4> <div class="description-wrapper"> <p>(GMT+03:00) Riyadh</p> </div> <h4 id="rome" class="name anchored">ROME</h4> <div class="description-wrapper"> <p>(GMT+01:00) Rome</p> </div> <h4 id="samara" class="name anchored">SAMARA</h4> <div class="description-wrapper"> <p>(GMT+04:00) Samara</p> </div> <h4 id="samoa" class="name anchored">SAMOA</h4> <div class="description-wrapper"> <p>(GMT+13:00) Samoa</p> </div> <h4 id="santiago" class="name anchored">SANTIAGO</h4> <div class="description-wrapper"> <p>(GMT-04:00) Santiago</p> </div> <h4 id="sapporo" class="name anchored">SAPPORO</h4> <div class="description-wrapper"> <p>(GMT+09:00) Sapporo</p> </div> <h4 id="sarajevo" class="name anchored">SARAJEVO</h4> <div class="description-wrapper"> <p>(GMT+01:00) Sarajevo</p> </div> <h4 id="saskatchewan" class="name anchored">SASKATCHEWAN</h4> <div class="description-wrapper"> <p>(GMT-06:00) Saskatchewan</p> </div> <h4 id="seoul" class="name anchored">SEOUL</h4> <div class="description-wrapper"> <p>(GMT+09:00) Seoul</p> </div> <h4 id="singapore" class="name anchored">SINGAPORE</h4> <div class="description-wrapper"> <p>(GMT+08:00) Singapore</p> </div> <h4 id="skopje" class="name anchored">SKOPJE</h4> <div class="description-wrapper"> <p>(GMT+01:00) Skopje</p> </div> <h4 id="sofia" class="name anchored">SOFIA</h4> <div class="description-wrapper"> <p>(GMT+02:00) Sofia</p> </div> <h4 id="solomon_is" class="name anchored">SOLOMON_IS</h4> <div class="description-wrapper"> <p>(GMT+11:00) Solomon Is.</p> </div> <h4 id="srednekolymsk" class="name anchored">SREDNEKOLYMSK</h4> <div class="description-wrapper"> <p>(GMT+11:00) Srednekolymsk</p> </div> <h4 id="sri_jayawardenepura" class="name anchored">SRI_JAYAWARDENEPURA</h4> <div class="description-wrapper"> <p>(GMT+05:30) Sri Jayawardenepura</p> </div> <h4 id="stockholm" class="name anchored">STOCKHOLM</h4> <div class="description-wrapper"> <p>(GMT+01:00) Stockholm</p> </div> <h4 id="st_petersburg" 
class="name anchored">ST_PETERSBURG</h4> <div class="description-wrapper"> <p>(GMT+03:00) St. Petersburg</p> </div> <h4 id="sydney" class="name anchored">SYDNEY</h4> <div class="description-wrapper"> <p>(GMT+10:00) Sydney</p> </div> <h4 id="taipei" class="name anchored">TAIPEI</h4> <div class="description-wrapper"> <p>(GMT+08:00) Taipei</p> </div> <h4 id="tallinn" class="name anchored">TALLINN</h4> <div class="description-wrapper"> <p>(GMT+02:00) Tallinn</p> </div> <h4 id="tashkent" class="name anchored">TASHKENT</h4> <div class="description-wrapper"> <p>(GMT+05:00) Tashkent</p> </div> <h4 id="tbilisi" class="name anchored">TBILISI</h4> <div class="description-wrapper"> <p>(GMT+04:00) Tbilisi</p> </div> <h4 id="tehran" class="name anchored">TEHRAN</h4> <div class="description-wrapper"> <p>(GMT+03:30) Tehran</p> </div> <h4 id="tijuana" class="name anchored">TIJUANA</h4> <div class="description-wrapper"> <p>(GMT-08:00) Tijuana</p> </div> <h4 id="tokelau_is" class="name anchored">TOKELAU_IS</h4> <div class="description-wrapper"> <p>(GMT+13:00) Tokelau Is.</p> </div> <h4 id="tokyo" class="name anchored">TOKYO</h4> <div class="description-wrapper"> <p>(GMT+09:00) Tokyo</p> </div> <h4 id="ulaanbaatar" class="name anchored">ULAANBAATAR</h4> <div class="description-wrapper"> <p>(GMT+08:00) Ulaanbaatar</p> </div> <h4 id="urumqi" class="name anchored">URUMQI</h4> <div class="description-wrapper"> <p>(GMT+06:00) Urumqi</p> </div> <h4 id="utc" class="name anchored">UTC</h4> <div class="description-wrapper"> <p>(GMT+00:00) UTC</p> </div> <h4 id="vienna" class="name anchored">VIENNA</h4> <div class="description-wrapper"> <p>(GMT+01:00) Vienna</p> </div> <h4 id="vilnius" class="name anchored">VILNIUS</h4> <div class="description-wrapper"> <p>(GMT+02:00) Vilnius</p> </div> <h4 id="vladivostok" class="name anchored">VLADIVOSTOK</h4> <div class="description-wrapper"> <p>(GMT+10:00) Vladivostok</p> </div> <h4 id="volgograd" class="name anchored">VOLGOGRAD</h4> <div 
class="description-wrapper"> <p>(GMT+04:00) Volgograd</p> </div> <h4 id="warsaw" class="name anchored">WARSAW</h4> <div class="description-wrapper"> <p>(GMT+01:00) Warsaw</p> </div> <h4 id="wellington" class="name anchored">WELLINGTON</h4> <div class="description-wrapper"> <p>(GMT+12:00) Wellington</p> </div> <h4 id="west_central_africa" class="name anchored">WEST_CENTRAL_AFRICA</h4> <div class="description-wrapper"> <p>(GMT+01:00) West Central Africa</p> </div> <h4 id="yakutsk" class="name anchored">YAKUTSK</h4> <div class="description-wrapper"> <p>(GMT+09:00) Yakutsk</p> </div> <h4 id="yerevan" class="name anchored">YEREVAN</h4> <div class="description-wrapper"> <p>(GMT+04:00) Yerevan</p> </div> <h4 id="zagreb" class="name anchored">ZAGREB</h4> <div class="description-wrapper"> <p>(GMT+01:00) Zagreb</p> </div>
22.548066
95
0.637019
kor_Hang
0.144272
982d963fa0d25205ce82de9b1f9f6d360617cafc
10,381
markdown
Markdown
content/2012-05-23-python-faq-descriptors.markdown
encukou/eev.ee
2b10fa74956d63dad48e761113dc2e3bafa5ac41
[ "ISC", "CC-BY-3.0" ]
null
null
null
content/2012-05-23-python-faq-descriptors.markdown
encukou/eev.ee
2b10fa74956d63dad48e761113dc2e3bafa5ac41
[ "ISC", "CC-BY-3.0" ]
null
null
null
content/2012-05-23-python-faq-descriptors.markdown
encukou/eev.ee
2b10fa74956d63dad48e761113dc2e3bafa5ac41
[ "ISC", "CC-BY-3.0" ]
null
null
null
title: Python FAQ: Descriptors
date: 2012-05-23 21:16
tags: python
category: python faq

Part of my [Python FAQ][].

**How does `@property` work? Why does it call my `__getattr__`? What's a "descriptor"?**

<!-- more -->

Python offers several ways to hook into attribute access—that is, there are several ways you can affect what happens when someone does `obj.foo` to your object.

The most boring behavior is that the object has a `foo` attribute (perhaps set in `__init__`), or the class has a `foo` method or attribute of its own. If you need total flexibility, there are the magic methods `__getattr__` and `__getattribute__`, which can return a value depending on the attribute name.

Somewhere between these two extremes lie _descriptors_. A descriptor handles the attribute lookup for a _single_ attribute, but can otherwise run whatever code it wants.

[Properties][property] are very simple descriptors. If you haven't used them before, they look like this:

```python
class Whatever(object):
    def __init__(self, n):
        self.n = n

    @property
    def twice_n(self):
        return self.n * 2

    @twice_n.setter
    def twice_n(self, new_n):
        self.n = new_n / 2

obj = Whatever(2)
print obj.n        # 2
print obj.twice_n  # 4
obj.twice_n = 10
print obj.n        # 5
```

This _does some stuff_ to create a descriptor object named `twice_n`, which jumps in whenever code tries to use the `twice_n` attribute of a `Whatever` object. In the case of `@property`, you can then have things that look like plain attributes but act like methods.

But descriptors are a bit more powerful.

## How they work

A descriptor is just an object; there's nothing inherently special about it. Like many powerful Python features, they're surprisingly simple.

To get the descriptor behavior, only three conditions need to be met:

1. You have a new-style class.
2. It has some object as a class attribute.
3. That object's class has the appropriate special descriptor method.

Note very carefully that these conditions are in terms of **classes**.
In particular, a descriptor **will not work** if it's assigned to an _object_ instead of a class, and an object is **not** a descriptor if you assign the _object_ a function named `__get__`. Descriptors are all about modifying behavior for classes, **not** individual objects! Ahem. So, about those special descriptor methods. There are three of them, and your object can implement whichever ones it needs. Assuming this useless setup: ```python class OwnerClass(object): descriptor = DescriptorClass() obj = OwnerClass() ``` You can implement these methods, sometimes called the "descriptor protocol": * `__get__(self, instance, owner)` hooks into reading, for both an object and the class itself. `obj.descriptor` will call `descriptor.__get__(obj, OwnerClass)`. `OwnerClass.descriptor` will call `descriptor.__get__(None, OwnerClass)`. Here, it's polite to just return `self`, so you can still get at the descriptor object like a regular class attribute. * `__set__(self, instance, value)` hooks into writing. `obj.descriptor = 5` will call `descriptor.__set__(obj, 5)`. * `__delete__(self, instance)` hooks into deletion. `del obj.descriptor` will call `descriptor.__delete__(obj)`. Note this is **not** the same as `__del__`; that's something different entirely. A minor point of confusion here: the descriptor is triggered by touching attributes on `obj`, but inside these methods, `self` is the descriptor object itself, _not_ `obj`. You can implement any combination of these you like, and whichever you implement will be triggered. This may or may not be what you want, e.g.: if you only implement `__set__`, you won't get a write-only attribute; `obj.descriptor` will act as normal and produce your descriptor object. ## Writing a descriptor Talking about descriptors involves juggling several classes and instances. Let's try a simple example, instead: recreating `property`. First, the read-only behavior. 
```python class prop(object): def __init__(self, get_func): self.get_func = get_func def __get__(self, instance, owner): if instance is None: return self return self.get_func(instance) class Demo(object): @prop def attribute(self): return 133 print Demo().attribute ``` This code sneaks the descriptor in using a decorator. Remember that decorators can be rewritten as regular function calls. The class definition is roughly equivalent to this: ```python def getter(self): return 133 class Demo(object): attribute = prop(getter) ``` So the descriptor, `attribute`, is just an object wrapping a single function. When code reads from `Demo().attribute`, the descriptor calls its stored function on the `Demo` instance and passes along the return value. (The instance has to be passed in manually because the function isn't being called as a method. If you refer to them within a class body directly, methods are just regular functions; they only get method magic added to them at the end of the `class` block. It's complicated.) With this implementation, code could still do `obj.attribute = 3` and the descriptor would be shadowed. Want setter behavior, too? No problem; add a `__set__`. ```python class prop(object): # __init__ and __get__ same as before... def __set__(self, instance, value): self.set_func(instance, value) def setter(self, set_func): self.set_func = set_func return self def set_func(self, instance, value): raise TypeError("can't set me") class Demo(object): _value = None @prop def readwrite(self): return self._value @readwrite.setter def readwrite(self, value): self._value = value @prop def readonly(self): return 133 obj = Demo() print obj.readwrite obj.readwrite = 'foo' print obj.readwrite print obj.readonly obj.readonly = 'bar' # TypeError! ``` Look at all this crazy stuff going on. Take it a step at a time. The new `__set__` method is pretty much the same as before: it calls a stored function on the given `instance`. 
The `setter` method makes the `@readwrite.setter` decoration work. It stores the function, and then returns itself—remember, it's a decorator, so whatever it returns will end up assigned to the decorated function's name, `readwrite`. The class definition is equivalent to: ```python def func1(self): return self._value readwrite = prop(func1) def func2(self, value): self._value = value readwrite = readwrite.setter(func2) ``` Don't be fooled: it looks like there are two `readwrite` functions, but the class ends up with a _single_ object that happens to contain two functions. I include a default setter function, `set_func`, so that properties are read-only unless the class specifies otherwise. It's got three arguments because it's a regular method: calling it with `(instance, value)` will tack the descriptor object on as the first argument. This is most of the way to an exact clone of Python's builtin `property` type, and it's only a handful of very short methods. ## Potential uses Properties are an obvious use, but they're built in, so why would you care about descriptors otherwise? Maybe you wouldn't. It's metaprogramming, after all, so you either know you need it or can't imagine why you ever would. I've used them a couple times, though, and I've seen them in the wild enough. Some examples: * Pyramid includes a nifty decorator-descriptor, `@reify`. It acts like `@property`, except that the function is only ever called once; after that, the value is cached as a regular attribute. This gives you lazy attribute creation on objects that are meant to be immutable. It's handy enough that I've wished it were in the standard library more than once. * SQLAlchemy's ORM classes rely heavily on descriptors: `SomeTableClass.column == 3` is actually using a descriptor that overloads a bunch of operators. 
* If you're writing a class with a lot of properties that all do similar work, you can write your own descriptor class to factor out the logic, rather than writing a bunch of similar property functions that all call more methods. * If you find yourself writing a `__getattr__` with a huge stack of `if`s or attribute name parsing or similar, consider writing a descriptor instead. * Ever wonder how, exactly, `self` gets passed to a method call? Well, methods are just these class attributes that do something special when accessed via an object... surprise, methods are descriptors! ## Descriptors and `AttributeError` One final gotcha. A `__get__` method is allowed to raise an `AttributeError` if it wants to express that the attribute doesn't exist. Python will then fall back to `__getattr__` as usual. Consider this, then: ```python def __get__(self, instance, owner): log.debg("i'm in a descriptor!") # do stuff... ``` `log.debg` probably doesn't exist, so that code will raise an `AttributeError`... which Python will take to mean the descriptor is saying _it_ doesn't exist. This is probably not what you want. Be very careful with attribute access inside a descriptor, _especially_ for classes that also implement `__getattr__`. ## Conclusion * `property` is cool. * Descriptors are cool. * They aren't hard to write, if you can keep `self` and `instance` straight. * They only work as class attributes! ## Further reading * The [Python documentation][descriptor docs] on descriptors. Short, to the point, and totally useless for explaining what these things are. * The [Python HowTo](http://docs.python.org/howto/descriptor.html) on descriptors. Rather more useful. * Perhaps also read up on [`__getattr__`](http://docs.python.org/reference/datamodel.html#customizing-attribute-access) and [`__getattribute__`](http://docs.python.org/reference/datamodel.html#more-attribute-access-for-new-style-classes). 
* The [implementation of `reify`](https://github.com/Pylons/pyramid/blob/master/pyramid/decorator.py) is a nice example, and short enough that you may want to just paste it into your own project. [Python FAQ]: /blog/2011/07/22/python-faq/ [descriptor docs]: http://docs.python.org/reference/datamodel.html#implementing-descriptors [property]: http://docs.python.org/library/functions.html#property
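The `@reify` decorator mentioned under "Potential uses" is easy to approximate with the machinery from this article. Here's a minimal sketch (the class and names are mine, not Pyramid's actual implementation): because the descriptor defines only `__get__` and no `__set__`, the value it stashes in the instance's `__dict__` shadows the descriptor on every later lookup.

```python
class cached_property(object):
    """Non-data descriptor: compute the value once, then cache it."""

    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __get__(self, instance, owner):
        if instance is None:
            return self  # accessed on the class; return the descriptor
        value = self.func(instance)
        # With no __set__ defined, this plain instance attribute now
        # shadows the descriptor, so __get__ never runs again.
        instance.__dict__[self.func.__name__] = value
        return value


class Expensive(object):
    calls = 0  # count how often the "expensive" computation runs

    @cached_property
    def answer(self):
        Expensive.calls += 1
        return 133
```

Reading `Expensive().answer` twice runs the function only once; after the first read, the cached value lives in the instance's `__dict__` like any ordinary attribute.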
# README #

![luccaqt](lucca_qt.jpg)

## Lucca Qt

This is Lucca's Qt-dependent part.
---
title: Update Store-published apps from your code
description: Describes how MSIX packages can be updated by developers in code.
author: Huios
ms.date: 01/24/2020
ms.topic: article
keywords: windows 10, uwp, app package, app update, msix, appx
ms.custom: "RS5, seodec18"
---

# Update Store-published apps from your code

Starting in Windows 10, version 1607 (build 14393), developers can make stronger guarantees around app updates from the Store. Doing this requires a few simple APIs, creates a consistent and predictable user experience, and lets developers focus on what they do best while allowing Windows to do the heavy lifting.

There are two fundamental ways that app updates can be managed. In both cases, the net result is the same: the update is applied. However, in one case you can let the system do all the work, while in the other you might want a deeper level of control over the user experience.

## Simple updates

First and foremost is the very simple API call that tells the system to check for updates, download them, and then request permission from the user to install them. You'll start by using the [StoreContext](/uwp/api/Windows.Services.Store.StoreContext) class to get [StorePackageUpdate](/uwp/api/Windows.Services.Store.StorePackageUpdate) objects, then download and install them.

```csharp
using Windows.Services.Store;

private async void GetEasyUpdates()
{
    StoreContext updateManager = StoreContext.GetDefault();
    IReadOnlyList<StorePackageUpdate> updates =
        await updateManager.GetAppAndOptionalStorePackageUpdatesAsync();

    if (updates.Count > 0)
    {
        IAsyncOperationWithProgress<StorePackageUpdateResult, StorePackageUpdateStatus> downloadOperation =
            updateManager.RequestDownloadAndInstallStorePackageUpdatesAsync(updates);
        StorePackageUpdateResult result = await downloadOperation.AsTask();
    }
}
```

At this point the user has two options to choose from: apply the update now or defer it. Whichever choice the user makes is returned to your app via the `StorePackageUpdateResult` object, allowing you to take further actions, such as closing down the app if the update is required to continue, or simply trying again later.

## Fine-controlled updates

For developers who want a completely customized experience, additional APIs are provided that enable more control over the update process. The platform enables you to do the following:

* Get progress events on an individual package download or on the whole update.
* Apply updates at the user's and app's convenience rather than one or the other.

Developers can download updates in the background (while the app is in use) and then ask the user to install them. If the user declines, you can simply disable the capabilities affected by the update.

### Download updates

```csharp
private async void DownloadUpdatesAsync()
{
    StoreContext updateManager = StoreContext.GetDefault();
    IReadOnlyList<StorePackageUpdate> updates =
        await updateManager.GetAppAndOptionalStorePackageUpdatesAsync();

    if (updates.Count > 0)
    {
        IAsyncOperationWithProgress<StorePackageUpdateResult, StorePackageUpdateStatus> downloadOperation =
            updateManager.RequestDownloadStorePackageUpdatesAsync(updates);

        downloadOperation.Progress = async (asyncInfo, progress) =>
        {
            // Show progress UI
        };

        StorePackageUpdateResult result = await downloadOperation.AsTask();
        if (result.OverallState == StorePackageUpdateState.Completed)
        {
            // Update was downloaded, add logic to request install
        }
    }
}
```

### Install updates

```csharp
private async void InstallUpdatesAsync()
{
    StoreContext updateManager = StoreContext.GetDefault();
    IReadOnlyList<StorePackageUpdate> updates =
        await updateManager.GetAppAndOptionalStorePackageUpdatesAsync();

    // Save app state here

    IAsyncOperationWithProgress<StorePackageUpdateResult, StorePackageUpdateStatus> installOperation =
        updateManager.RequestDownloadAndInstallStorePackageUpdatesAsync(updates);

    StorePackageUpdateResult result = await installOperation.AsTask();

    // Under normal circumstances, app will terminate here

    // Handle error cases here using StorePackageUpdateResult from above
}
```

## Making updates mandatory

In some cases, it might be desirable to have an update that must be installed on a user's device, making it truly mandatory (for example, a critical fix to an app that can't wait). In these cases, there are additional measures that you can take to make the update mandatory:

1. Implement the mandatory update logic in your app code (this needs to ship before the mandatory update itself).
2. During submission to the Dev Center, ensure the **Make this update mandatory** box is selected.

### Implementing app code

To take full advantage of mandatory updates, you'll need to make some slight modifications to the code above. You'll need to use the [StorePackageUpdate object](/uwp/api/Windows.Services.Store.StorePackageUpdate) to determine if the update is mandatory.

```csharp
// Note: an async method can't return a plain bool; it must return Task<bool>.
private async Task<bool> CheckForMandatoryUpdates()
{
    StoreContext updateManager = StoreContext.GetDefault();
    IReadOnlyList<StorePackageUpdate> updates =
        await updateManager.GetAppAndOptionalStorePackageUpdatesAsync();

    if (updates.Count > 0)
    {
        foreach (StorePackageUpdate u in updates)
        {
            if (u.Mandatory)
                return true;
        }
    }
    return false;
}
```

Then you'll need to create a custom in-app dialog to inform the user that there is a mandatory update and that they must install it to continue full use of the app. If the user declines the update, the app could either degrade functionality (for example, prevent online access) or terminate completely (for example, online-only games).

### Partner Center

To ensure the [StorePackageUpdate](/uwp/api/Windows.Services.Store.StorePackageUpdate) shows true for a mandatory update, you will need to mark the update as mandatory in the Partner Center in the **Packages** page.

A couple of things to note:

* If a device comes back online after a mandatory update has been superseded by a non-mandatory update, the non-mandatory update will still show up on the device as mandatory, because the earlier update the device missed was mandatory.
* Developer-controlled updates and mandatory updates are currently limited to the Store.
every sound channel seems to have a structure like:
FFFFFFFF
00004000 or 00000000 or 00010000
volume
volume copy
0 0 0 1
volume again?
0 0 0 0

there seem to be several channels enabled but not used, maybe because I turned off music

setting volume negative is weird, it becomes very loud and dings? maybe it's inverting volume of disabled effects? for global it's just silent

code at 80270AE8, 80270AF0 reads these with offset 0x5D4 but from bases 803BD180, 803BD1B0 which are < 0x5D4 apart, so they don't seem to be fields of an object of that size.

803398b0 controller inputs
803a32c8 set to 0101 to become Fox
  as Krystal it's 0001 so maybe only first byte matters
803428f8 Y velocity
803a33ab u16 staff upgrades
803a32a8 u8 current health, max health (4 = 1 heart)
803a32b0 u8 money or something
803a32ac u8 set to 1 to enable Sharpclaw Disguise
  using it as Krystal turns you back into Fox but keeps her voice
803a336a something to do with camera - set above 7F to pan somewhere else
803a32c4 save file name
803a32d0 some item unlocks

80270ae8 code that reads global volume (from R4 + 0x5D4)
  R4 values:
  803bd360 - global volume
  803bd210 - does look like volume for something (in fact it's wind effect)

`(*809F4038) + 0x81A` should be the voice flag (16-bit) but this address might not be constant (it's not)
  I think it's field 0xB8 in the Player object
  and Krystal's voice is indeed loaded, but changing this doesn't fix the animations

80270af0 code that reads SFX volume (from R3 + 0x5D4)
  R3 values:
  803bd1b0 - SFX
  803bd180 - music

call stack:
80270970 [480128E5] - no effect if NOP'd
802713F4 - NOP kills all sound (was 4E800021 "blrl")
  "branch to link register and link", ie tail call
  changing to just blr freezes the game
  this is a computed jump (to 80270938) - thread state?
  params:
  r0: unused
  r1: 803f8040 - stack pointer
  r2: 803e6500 - many floats which do nothing?
  r3: first argument; seen: 1, 0x28, 0x2D
  r4: 0xB4B4, 0x6464, 0xACAC...
  r5: unused (overwritten with 803c0000)
80271508
802830F0
802846E8
8024FD3C
80243F60
802461AC
80336E88
80246B28
8004A9F8

80270afc [41820034] - NOP here prevents any new sound effects from playing.
  maybe checking if volume > 0, but restoring the opcode doesn't bring them back
  before that it's getting a byte from 0x8139edbc + 0x120 (0x8139eedc) and checking if it's 0xFF
  if so, it takes the branch

sfx.bin:
FoxFallScream
ID   |Ofs|Max|Of2|Mx2|?   |range|unknown table                |randTbl          |rndMx| ?|flg
0026  5F  00  7F  00  0064 0280  00F2 0000 0000 0000 0000 0000 64 00 00 00 00 00 0064  43 10

assumption:
n = randInRange(0, randMax)
for i, v in enumerate(randTbl):
    if n >= v: # use unkTbl[i]
        break
so in this case only one possible value

8110eb20 sound 0x3CE (KrystalRoll2)
Offset          +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +A +B +C +D +E +F
8110EB20 [0000] 03 CE 5A 00 7F 00 00 64 02 80 02 65 00 00 00 00
8110EB30 [0010] 00 00 00 00 00 00 64 00 00 00 00 00 00 64 43 10

offset = 0x5A
maxOffsetRand = 0
offset2 = 0x7F
maxOffset2Rand = 0
unk06 = 0x0064
range = 0x0280
randVals = 0x0265, 0, 0, 0, 0, 0
randTbl = 0x64, 0, 0, 0, 0, 0
randMax = 0x0064
unk1E = 0x43
flags = 0x10

randVals 0x0261 = "Stay!"

the SfxBinEntry structure is something like:
SoundId id;      //the actual sound effect ID
u8 baseVolume;
u8 volumeRand;   //volume = rand(baseVolume - volumeRand, baseVolume + volumeRand)
u8 baseVol2;
u8 vol2Rand;
u16 unk06;
u16 range;       //how far from source object to silence
u16 randVals[6]; //actual sound to play (not same as SoundId)
u8 randTbl[6];   //chance to pick each sound
u16 randMax;     //sum of randTbl
u8 unk1E;        //iiii a??b
                 //iiii: index into sfxTable_803db248
                 //a, b: unknown
u8 numIdxs : 4;  //number of items in randVals/randTbl
u8 prevIdx : 4;  //previously played idx

to play a sound:
look up the entry with the desired ID
if entry->numIdxs == 0, don't play
if the ID is 0xAB:
    //no idea what this is, just some kind of whoosh or creak.
    //it alternates between two different sounds every time it's played.
    entry->prevIdx ^= 1
    idx = entry->prevIdx
else:
    n = rand(1, entry->randMax)
    idx = the index of value n in entry->randVals
    //eg if randVals = [10, 20, 30] and n = 22,
    //then idx = 2, since randVals[2] >= n
    //avoid playing the same sound twice in a row.
    //if we chose the same one as last time, use the next one.
    if entry->prevIdx == idx:
        idx += 1
        if idx > entry->numIdxs:
            idx = 0
    entry->prevIdx = idx
outId = entry->randVals[idx]
if outId is 0, don't play

compute the volume:
if entry->volumeRand == 0:
    outVolume = entry->baseVolume
else:
    outVolume = rand(
        entry->baseVolume - entry->volumeRand,
        entry->baseVolume + entry->volumeRand)

//same calculation here, but result is cast to float.
maxOfs2 = entry->vol2Rand
if entry->vol2Rand == 0:
    outVol2 = (float)entry->baseVol2
else:
    outVol2 = rand(
        entry->baseVol2 - entry->vol2Rand,
        entry->baseVol2 + entry->vol2Rand)

outField6 = (float)unk06
outRange = (float)range
outTable1E = sfxTable_803db248[unk1E >> 4]
out1ELow = unk1E & 1
out1EHigh = (unk1E >> 3) & 1

u8 sfxTable_803db248[8] = {0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0};

8000bb18 void audioPlaySound(ObjInstance *sourceObj, SoundId soundId);
8000c400 SfxBinEntry * audioGetSfxBinEntry(uint soundId);
  //given a sound ID, look up its entry from SFX.BIN
802751b8 SoundEffect * audioGetSoundEffectById(SoundId2 id); //SoundId2 -> SoundEffect*

struct SoundEffect {
    u16 id;
    u16 unk02;
    u8 unk04;
    u8 unk05;
    u8 unk06;
    u8 unk07;
    u32 offset; //u8 idx, u24 offset - not sure
    u16 rate;
    u16 pitch;
    int length;
    u32 repeatStart;
    u32 repeatEnd;
    u32 variation;
}

files under /audio:
midi.wad
starfox.h.bak - old list of SFX IDs (don't match final)
starfoxm.poo, starfoxs.poo - pool file
starfoxm.pro, starfoxs.pro - project file
starfoxm.sam, starfoxs.sam - sample file (the actual audio data)
starfoxm.sdi, starfoxs.sdi - sample directory
m=music, s=sfx?
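The sound-selection pseudocode above can be sketched as runnable Python. This is only my reading of the notes: the function name is mine, the 0xAB special case is omitted, and the notes are ambiguous about whether `randVals` or `randTbl` holds the roll thresholds (I use `randTbl` here, matching the earlier "assumption" block).

```python
import random


def pick_sound(entry, rng=random):
    """Pick one of the entry's candidate sounds, or None for "don't play".

    entry needs attributes: numIdxs, prevIdx, randMax, randVals, randTbl.
    """
    if entry.numIdxs == 0:
        return None

    n = rng.randint(1, entry.randMax)
    # Treat randTbl as cumulative thresholds: the first slot covering n wins.
    idx = 0
    for i in range(entry.numIdxs):
        if n <= entry.randTbl[i]:
            idx = i
            break

    # Avoid playing the same sound twice in a row: bump to the next slot,
    # wrapping around (the exact off-by-one in the notes is unclear).
    if idx == entry.prevIdx:
        idx += 1
        if idx >= entry.numIdxs:
            idx = 0
    entry.prevIdx = idx

    out_id = entry.randVals[idx]
    return out_id or None  # a zero entry means "don't play"
```

For the KrystalRoll2 entry above (`numIdxs = 1`, `randTbl = [0x64, 0, ...]`), every roll of `n` in 1..0x64 lands in slot 0, so the only possible result is 0x0265.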
https://github.com/axiodl/amuse is able to parse the audio data
of course, it does not build, and has some utterly insane dependencies (why the shit does an audio decoder want a 3D graphics library!?)

.proj: project structure, what belongs to which group
.pool: all data except samples
.sdir: locations of samples needed by groups
.samp: sample data

/audio/data:
EmptyN.bin  256K each, where N is 0..7
Music.bin    2.5K
Sfx.bin       38K
Streams.bin   20K

SoundMacro:
0x0 4 Chunk Size (note: includes the size value itself)
0x4 2 SoundMacro ObjectID
0x6 2 Padding
commands... 8 bytes each

Table:
0x0 4 Chunk Size
0x4 2 Table ObjectID
0x6 2 Padding

Keymap:
0x0 4 Chunk Size; (usually 0x1032)
0x4 2 Keymap ObjectID
0x6 2 Padding

Keymap Entry:
0x0 2 ObjectID
0x2 1 Transpose
0x3 1 Pan
0x4 1 Priority Offset
0x8 Padded to 8 bytes

Layer:
0x0 4 Chunk Size
0x4 2 Layer ObjectID
0x6 2 Padding
Chunk Size Layer data

Layer Data:
0x0 2 ObjectID
0x2 1 Key Lo
0x3 1 Key Hi
0x4 1 Transpose
0x5 1 Volume
0x6 1 Priority Offset
0x7 1 Surround Pan; (0: extreme forward, 64: center, 127: extreme rearward)
0x8 1 Pan; (0: extreme left, 64: center, 127: extreme right)
0xC Padded to 12 bytes

Project:
0x0 4 Group end offset (points to next group in project)
0x2 2 Group ID
0x4 2 Group Type; 0 for SongGroup (for use with CSNG), 1 for SFXGroup.
0x8 4 SoundMacro ID table offset
0xC 4 Sample ID table offset
0x10 4 Tables table offset
0x14 4 Keymaps table offset
0x18 4 Layers table offset
0x1C 4 Normal page table (SongGroup) / SFX table offset (SFXGroup)
0x20 4 Drum page table offset (SongGroup)
0x24 4 MIDI Setup table offset (SongGroup)
0x20 End of group header

Normal/Drum Page Entry:
0x0 2 ObjectID
0x2 1 Priority; voices are limited, so priority is used to play more important sounds over others
0x3 1 Max number of voices
0x4 1 GM Program Number
0x5 1 Padding

SFX Entry:
0x0 2 DefineID; referenced by game code
0x2 2 ObjectID
0x4 1 Priority; voices are limited, so priority is used to play more important sounds over others
0x5 1 Max number of voices
0x6 1 Definite Velocity; volume (usually 7F)
0x7 1 Panning
0x8 1 Definite Key; the default pitch - usually 0x3C (MIDI C4)
0x9 1 Padding

MIDI Setup Entry:
0x0 1 Program Number
0x1 1 Volume
0x2 1 Panning
0x3 1 Reverb
0x4 1 Chorus

SampleDir Table A:
0x0 2 Sound ID
0x2 2 Padding; always 0
0x4 4 Sound start offset, relative to the start of the ADPCM chunk
0x8 4 Unknown
0xC 1 Base Note; corresponds to the MIDI note played in the sample, at the native sample rate
      (which MusyX obtains from the INST chunk of .aiff files or SMPL chunk of .wav files,
      along with looping info). To play at a specified pitch in cents, set the playback
      sample rate using this formula: sampleRate * 2^((pitch - baseNote * 100) / 1200.0)
0xD 1 Padding; always 0
0xE 2 Sample rate
0x10 1 Audio format:
      DSP-ADPCM
      DSP-ADPCM (Drum Sample)
      PCM
      N64-VADPCM (Legacy Format)
0x11 3 Number of samples
0x14 4 Loop start sample
0x18 4 Loop length, in samples. To get the loop end sample, add this to the start sample and subtract 1.
0x1C 4 Table B entry offset, relative to the start of the sound metadata chunk

Table B:
0x0 2 Unknown; always 8
0x2 1 Initial predictor/scale (matches first frame header)
0x3 1 Loop predictor/scale (matches loop start frame header)
0x4 2 Loop context sample history 2
0x6 2 Loop context sample history 1
0x8 2 × 16 Decode coefficients
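If the SfxBinEntry layout above is right, the 0x20-byte record can be unpacked with Python's `struct` module (big-endian, since this is GameCube data; the function names are mine, and which nibble of the last byte holds `numIdxs` is an assumption based on the KrystalRoll2 example):

```python
import struct
from collections import namedtuple

SfxBinEntry = namedtuple(
    "SfxBinEntry",
    "id baseVolume volumeRand baseVol2 vol2Rand unk06 range "
    "randVals randTbl randMax unk1E packed",
)

# u16 id, 4x u8, u16 unk06, u16 range, u16 randVals[6],
# u8 randTbl[6], u16 randMax, u8 unk1E, u8 packed -> 0x20 bytes
_FMT = ">HBBBBHH6H6BHBB"
assert struct.calcsize(_FMT) == 0x20


def parse_sfx_bin_entry(data):
    f = struct.unpack(_FMT, data[:0x20])
    return SfxBinEntry(
        id=f[0], baseVolume=f[1], volumeRand=f[2],
        baseVol2=f[3], vol2Rand=f[4], unk06=f[5], range=f[6],
        randVals=f[7:13], randTbl=f[13:19],
        randMax=f[19], unk1E=f[20], packed=f[21],
    )


def num_idxs(entry):
    # Assuming numIdxs is the high nibble: 0x10 -> 1 entry, which
    # matches "only one possible value" for the KrystalRoll2 dump.
    return entry.packed >> 4
```

Feeding in the 8110EB20 bytes from the notes reproduces the annotated parse: `id = 0x03CE`, `range = 0x0280`, `randVals[0] = 0x0265`, `randMax = 0x0064`, `unk1E = 0x43`.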
--- title: Understand teams and functions for data management and analytics in Azure description: Learn about teams and functions for the data management and analytics scenario in Azure. author: mboswell ms.author: mboswell ms.date: 08/06/2021 ms.topic: conceptual ms.service: cloud-adoption-framework ms.subservice: ready ms.custom: think-tank, e2e-data --- # Understand teams and functions for data management and analytics in Azure For the data management and analytics scenario, we recommend by moving teams like ingest, processing, analysis, consumption, and visualization from working in horizontally siloed teams to agile vertical cross domain teams in each tier. Platform teams like data platform operations and platform operations are grouped together in a common platform group. ![Diagram of the data management and analytics scenario teams.](./images/enterprise-scale-analytics-ai-teams.png) ## Platform group The platform group consists of two teams: - **Platform ops:** Platform ops is part of the platform group. It operates and owns the cloud platform. This team is responsible for instantiating the data management landing zone and data landing zone scaffolding like networking, peering, core service, and monitoring within the data management and analytics scenario. They usually help data platform ops to develop IT service management interfaces for personas in the data landing zone at the start of rolling out the data management and analytics scenario. These interfaces tend to be REST API calls to a service to onboard datasets, set security, and add services to data landing zones. - **Data platform ops:** The data platform ops group is housed within the platform group. Data platform ops provides services such as central monitoring, cataloging, and reusable policies for data landing zones and products. 
Data platform ops owns the data management landing zone, and the team's other responsibilities are: ### Develop infrastructure - Develop infrastructure-as-code templates for data landing zone personas; the templates must be updated and maintained over time, and they can cover multiple scenarios. - Prioritize templates and add new functionalities based on a feedback cycle from other teams. - Work in an agile framework with the common goal to produce standard infrastructure templates. ### Respond to new data landing zone requests The data platform ops team must provide the tools and services to support the templates that they've created. IT service management tools like ServiceNow can handle ticket requests approved by the data platform ops team for creating new data landing zones. Once approved, a new landing zone would fork from the base template to create a new DevOps project, and pipelines would deploy templates to a new environment. ### The data platform ops feedback and enhancement loop Two options are available to enhance the templates: - Teams in charge of infrastructure template instances would enhance their DevOps templates and deployments. If teams discover issues in the templates, data platform ops can support the teams and merge changes back from their fork into the template. - Other data landing zone teams should be able to create improvement and backlog tickets that would enhance templates based on how the tickets are prioritized. ### Azure policies for the enterprise-scale for analytics and AI The enterprise-scale for analytics and AI principles emphasize self-service agility and guardrails to protect data, costs, and patterns. Data platform ops works with platform ops to define quality, and these teams collaborate to implement specific data policies. Data platform ops should follow a review process to update and maintain new features that are added to products. 
### Deploy and operate data management landing zones Data platform ops and platform ops work together to deploy and operate data management landing zones. A data management landing zone provides shared services to data landing zones, making it a central piece of enterprise-scale for analytics and AI. ## Data landing zone ops Data landing zone ops operates and maintains their data landing zone instance while responding to new data integration and product service requests. They provide many of the same services as data platform ops but are limited to their data landing zone. They work out of the forked repo that's created when a data landing zone is created. To request policy changes, they have to raise tickets to data platform ops to allow these exceptions. ### Support the data product team to customize products The data landing zone ops team supports the data product team by using pull requests to submit new product templates to their respective data product repositories. As the owner of the landing zone, Azure DevOps would route the approval for changes to data landing zone ops: - If approved, the template changes would be moved to the main branch and deployed to production via continuous integration/continuous development, causing the data product platform/infrastructure to be updated. - If denied, data landing zone ops would work with the data product team to fix the changes. ### Respond to new data integration and data product requests Data landing zone ops supports integration ops and data product teams to create new data integration and data products. When integration ops or a data product teams request assistance, an IT service management solution, for example, an automation logic app, orchestrates the approval or deployment of a new data integration or data products repository. Data landing zone ops would be notified of new requests and approve or decline deployments. 
Once approved, a new DevOps project is created, the main template and artifacts are forked, and a new data integration or data product is deployed. ### Adhere to the Azure Well-Architected Framework Data landing zone ops is responsible for the data landing zone, and it's recommended for the team to be proficient in the [Azure Well-Architected Framework](/azure/architecture/framework/), which provides guidance on cost optimization, reliability, and security. ### Business as usual Data landing zone ops is responsible for business tasks that include gathering feedback and enhancement requests. These requests are prioritized and shared with data platform ops on a regular basis. The team monitors the data landing zone for incidents and health events. They will engage other ops teams during severe incidents to mitigate, restore backups, failover, and scale services. ## Integration ops Integration ops' main task is to ingest data from the source and provide a read data store version in the data landing zone. The only change that they make to the structure is to add conformed data types. [Data integration and data product deployment process](./eslz-provision-platform.md#data-integration-and-data-product-deployment-process) describes onboarding integration. Jordan is a data manager within integration ops. This team provides access to reusable data assets and must carefully assess access controls, reviews data attributes (compliance), and supports the wider community. ### Triage new dataset requests IT service management solutions field dataset onboarding requests from the business to integration ops. The team reviews the data catalog for existing assets and source systems and collects metadata such as schema, location, privacy requirements, and ingest patterns to be associated with the source. They use their forked repo to develop ingestion pipelines and deploy to their data integration resource groups. 
The final part of the business' dataset onboarding process is to register the dataset by: - Registering it in the data catalog. - Creating Azure Data Lake folders for the dataset. - Notifying integration ops and data product teams of the new dataset. ### Update existing datasets IT service management solutions field dataset update requests from the business to integration ops. The team uses their forked repo to develop ingestion pipelines and deploy to their data integration resource groups. Upon deployment, they update the dataset in the data catalog and notify everyone in integration ops and the data product team of the new data asset. ### Manage access requests to datasets As previously described in [Understand security provisioning for data management and analytics in Azure](./security-provisioning.md#grant-access), integration operations is responsible for approving access to datasets. ### Review dataset telemetry Integration ops can use a data access heatmap to identify traffic and hotspots that can help to identify popular assets. Heatmaps can also help to prioritize support investments and manage storage costs while highlighting data assets with low traction. Low-traction datasets would lead integration ops to contact the owners to evaluate archiving options. > [!NOTE] > Some data catalog solutions feature heatmaps as part of their integrated solution. However, it's also possible to do this with other reporting tools like Microsoft Power BI. ### The integration operations feedback and enhancement loop Feedback portals and other channels (DL, open office hours, and others) provide feedback to integration ops. They work with the business to identify major blockers for data adoption and collaborate with data landing zone ops on process-related issues and data asset owners on data quality issues. This information is entered into the integration ops backlog to enhance pipelines. 
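The low-traction review described above can be sketched in a few lines of Python. This is an illustrative aggregation only — the log format and field names below are assumptions, not part of any data catalog product:

```python
from collections import Counter

def access_counts(access_log):
    """Tally reads per dataset from a hypothetical access log.

    Each log entry is assumed to be a dict with a "dataset" key.
    """
    return Counter(entry["dataset"] for entry in access_log)

def low_traction(counts, threshold):
    """Return datasets whose access count falls below the threshold."""
    return sorted(name for name, hits in counts.items() if hits < threshold)

log = [
    {"dataset": "sales"}, {"dataset": "sales"}, {"dataset": "sales"},
    {"dataset": "hr"},
]
# Datasets under the threshold are candidates for an archiving conversation.
print(low_traction(access_counts(log), threshold=2))
```

A real heatmap would add a time dimension and a reporting layer (for example, Power BI), but the core signal is just this count per asset.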
## The data product team The data product team delivers new data products to the business. They source from data integrations' read data stores and transform them into business solutions. Anything that transforms data for use is classified as a **data product**. This team is often a mix of technical specialists and subject matter experts who can help the business to achieve value quickly. Data products can range from simple reports and new data assets to custom setups with data-driven Kubernetes web apps. ### New data products Product owners and business representatives create requests for new data products when they're needed. The data office assesses the requirements and assembles a new data product team with a range of expertise. The team identifies the data assets required for the data product and requests access to those data assets. If a new data asset is needed, integration ops receives a ticket to ingest it. The team identifies the services required for the new data product and requests a new data product via the [data integration and data product deployment process](./eslz-provision-platform.md#data-integration-and-data-product-deployment-process). The data product team receives a forked repo from the master data products template to deploy the data product. ### Certify data products In a self-service platform, anyone can create reports, curate datasets in an Azure Data Lake workspace account, and release data products for the business to use. Data product review requests occur when: - Business sponsors log tickets to certify data products. - Data platform ops nominates data products based on popularity. 
A data product team can drive a certification process, to be defined by data platform ops and digital security, which might include: - Tests devised to validate data transformations and business logic - Assessments for security, compliance, or performance impact Upon certification, artifacts are collated and uploaded to a data product repository, documentation is published, and the data product team is notified. ### Product support Users can submit feedback with an IT service management solution or directly within the product, and a ticket is routed to the data product owner. This individual triages the request and determines whether to escalate it to the data product team to fix, or to enter the feedback into a product backlog for review during product planning cycles. ## The data science products team While the data science products team creates data products, it's distinct because its outputs feed back into data integrations, assets, or products. This results in published models becoming data products for others to use, and the pattern follows a machine learning ops model that's associated with the data landing zone. The data science products team starts by searching for and finding relevant datasets for their use case. Data governance solutions can reveal more details like data quality, lineage, or a similar dataset or profile. They research whether a sample dataset is available and whether the data is relevant to the project. Once data access is granted via a data catalog or an Azure AD access package, the team uses the services in the data landing zone to explore and analyze the data. Before processing all data, the team uses local or remote compute to process and analyze sample datasets. They can use remote compute targets with larger datasets to train and develop machine learning models, with runs, outputs, and models tracked inside Azure Machine Learning. When the team has developed machine learning models, they start operationalizing them. 
For this, they expand the team to include DataOps and machine learning engineers who can assist with moving the models into a new data product, as outlined in the data product team role. The data science team will continue to work with the associated data product owners to capture feedback, provide support, and resolve issues and update models in production using a [machine learning ops methodology](/azure/machine-learning/concept-model-management-and-deployment). ## Analyst Analysts represent a large group that includes business analysts, power users, and generally anyone in the organization with an interest in using data to create new business insights. Self-service enablement is a key principle that lets analysts access analytics and data without having to secure formal IT budget and resources. > [!TIP] > Enterprises should view insights created by analysts as the next set of potential data products to be certified for others to use within the business. ### Find and request data Analysts consult data marketplaces/catalogs to discover relevant datasets. - If the data asset can't be found or doesn't exist, then analysts open a support ticket with integration ops. Integration ops assists with finding the dataset or adds the request to their backlog to assess it in another development cycle. - If the dataset exists, analysts can identify the Azure AD group membership for assets listed in the catalog and use the Azure access package portal to request access to the Azure AD group. ### Build new reports Analysts can use tools like Microsoft Power BI to integrate datasets into reports. These reports can be for their individual use or published as a certified data product. Before a report is published across the organization, it needs to be certified through the data product certification process for security, compliance, and performance. ### Run as-needed queries Enterprise-scale for analytics and AI has shared workspaces where analysts can query data, subject to permissions. 
It's common for data products to provide dedicated compute to run queries as they're needed. In both cases, analysts can run queries against data assets in the data landing zones, again subject to permissions. The results of the queries can be stored in Azure Data Lake workspaces to be used again. ### User feedback Since analysts can serve as an untapped source of information and improvements, enterprises are highly encouraged to create user feedback groups for each data landing zone. In addition to participating in these user groups, analysts should submit data asset feedback to integration ops and data catalog issues within the data catalog or the IT service management solution. They can submit data process issues to the data product team or within an IT service management solution. > [!NOTE] > An IT service management solution should serve as a central location for submitting feedback and escalating issues. Submitting direct feedback to individual teams might seem to be a faster solution, but this approach doesn't give the business visibility into the challenges in the platform. An IT service management solution with correct routing to integration ops and the data product teams can give the business one view across the enterprise. ## Responsibility assignment matrix - Responsible: Who is completing the task? - Accountable: Who is making decisions and taking actions on the task(s)? - Consulted: Who will receive communication about decisions and tasks? - Informed: Who will be updated about the decisions and actions during the project? 
|Role |Cloud environment|Data management landing zone|Data landing zone|Data integration|Data products| |-|-|-|-|-|-| |Service owner|Informed|Accountable|Consulted/informed|Consulted/informed|Consulted/informed| |Data landing zone service owner|Informed|Consulted/informed|Accountable|Accountable|Accountable| |Cloud platform ops|Responsible|Consulted|Consulted|Consulted|Consulted| |Data platform ops|Consulted|Responsible|Responsible|Consulted|Consulted| |Data landing zone ops|Informed|Responsible|Responsible|Responsible|Responsible| |Integration ops||Informed|Informed|Responsible|Consulted| |Data product team||Informed|Informed|Informed|Responsible| ## Next steps [The Azure Well-Architected Framework for data workloads](./well-architected-framework.md)
84.966019
754
0.813975
eng_Latn
0.998659
982ec4421346cbce2a58a8381b3118bf5d460e0f
421
md
Markdown
_posts/2006/2006-12-09-strohengel-vor-wechsel.md
eintracht-stats/eintracht-stats.github.io
9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43
[ "MIT" ]
null
null
null
_posts/2006/2006-12-09-strohengel-vor-wechsel.md
eintracht-stats/eintracht-stats.github.io
9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43
[ "MIT" ]
1
2021-04-01T17:08:43.000Z
2021-04-01T17:08:43.000Z
_posts/2006/2006-12-09-strohengel-vor-wechsel.md
eintracht-stats/eintracht-stats.github.io
9d1cd3d82bff1b70106e3b5cf3c0da8f0d07bb43
[ "MIT" ]
null
null
null
--- layout: post title: "Stroh-Engel set for a transfer" --- There are growing signs that Dominik Stroh-Engel will join a new club as early as the winter break. The striker, who probably no longer has any chance of breaking through into the first team, is likely to move to SV Wehen. The club from Taunusstein currently sits in first place in the Regionalliga Süd and has excellent chances of promotion to the second division.
42.1
362
0.7981
deu_Latn
0.999482
982ef2e9f5bfd2a1da67c8c30e7519184380f6d7
435
md
Markdown
README.md
oorkan/terminal-clock
d8d3028bfbc114e4bc5ee2bf5c95c4acc7f6982b
[ "Unlicense" ]
2
2021-01-06T17:17:52.000Z
2021-01-10T01:43:53.000Z
README.md
oorkan/terminal-clock
d8d3028bfbc114e4bc5ee2bf5c95c4acc7f6982b
[ "Unlicense" ]
null
null
null
README.md
oorkan/terminal-clock
d8d3028bfbc114e4bc5ee2bf5c95c4acc7f6982b
[ "Unlicense" ]
null
null
null
![bitime.png](https://gist.githubusercontent.com/oorkan/b03c8b68a0807d3ea4e2e398df63adbb/raw/6a1a68b73466b3cb9093df82835b9fe51ec7ff9a/bitime.png) **Description** A simple binary clock running in your terminal. &nbsp; **Installation** ```bash chmod +x bitime.py ``` ```bash sudo mv bitime.py /usr/bin/bitime ``` ```bash bitime # show the clock ``` &nbsp; **Requirements** Requires Python>=3.8. Tested only under Debian.
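A binary clock like this one typically shows each digit of HH:MM:SS as a 4-bit binary-coded-decimal (BCD) column. The following Python sketch illustrates that encoding; it is not bitime's actual rendering code:

```python
def bcd_columns(hhmmss):
    """Encode an "HH:MM:SS" string as six 4-bit BCD columns.

    Each decimal digit becomes one column of four bits, which a
    binary clock renders as a vertical stack of on/off lights.
    """
    digits = hhmmss.replace(":", "")
    return [format(int(d), "04b") for d in digits]

print(bcd_columns("12:34:56"))
# ['0001', '0010', '0011', '0100', '0101', '0110']
```

Reading each column bottom-up (1, 2, 4, 8) recovers the original digit.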
15.535714
145
0.728736
eng_Latn
0.270394
982f38dd3b3bd6a97144f208eb08d741f3046914
74
md
Markdown
content/blog/wiki/ikvm.md
andrewmcdonough/andrewmcdonough.com
9562d123b76384d24e849310d37f191bd20fe6e5
[ "MIT" ]
null
null
null
content/blog/wiki/ikvm.md
andrewmcdonough/andrewmcdonough.com
9562d123b76384d24e849310d37f191bd20fe6e5
[ "MIT" ]
null
null
null
content/blog/wiki/ikvm.md
andrewmcdonough/andrewmcdonough.com
9562d123b76384d24e849310d37f191bd20fe6e5
[ "MIT" ]
null
null
null
# IKVM IKVM is an implementation of Java for Mono and the .NET Framework
18.5
65
0.77027
eng_Latn
0.988401
983046bec533546f3a704bd1095a28496f560929
452
md
Markdown
500-plt/_books/anatomy/03-elements.md
mandober/cs-hierarchy
1761ede34671b72c4e0d815e45b3f8ecc062f026
[ "Unlicense" ]
1
2021-08-22T08:16:56.000Z
2021-08-22T08:16:56.000Z
500-plt/_books/anatomy/03-elements.md
mandober/cs-hierarchy
1761ede34671b72c4e0d815e45b3f8ecc062f026
[ "Unlicense" ]
null
null
null
500-plt/_books/anatomy/03-elements.md
mandober/cs-hierarchy
1761ede34671b72c4e0d815e45b3f8ecc062f026
[ "Unlicense" ]
null
null
null
# Elements of Language (The Anatomy Of Programming Languages. Alice E. Fischer, Frances S. Grodzinsky. Prentice Hall. 1993. Chapter 2.) Names are used in programming languages to identify objects like functions, memory locations, variables, constants, types, etc. A **variable declaration** is a directive to a translator to set aside storage to represent some real-world object, then give a name to that storage so that it may be referred to.
56.5
179
0.783186
eng_Latn
0.996985
9830fd90f2a9a7789d2c0357227ba1a3d3417ed1
2,223
md
Markdown
resources/views/laravel-medialibrary/v4/converting-other-file-types/using-image-generators.md
Skullbock/docs.spatie.be
d1392b10135efd9261c8730faec56750ee7c1827
[ "MIT" ]
1
2021-05-14T13:53:44.000Z
2021-05-14T13:53:44.000Z
resources/views/laravel-medialibrary/v4/converting-other-file-types/using-image-generators.md
Skullbock/docs.spatie.be
d1392b10135efd9261c8730faec56750ee7c1827
[ "MIT" ]
null
null
null
resources/views/laravel-medialibrary/v4/converting-other-file-types/using-image-generators.md
Skullbock/docs.spatie.be
d1392b10135efd9261c8730faec56750ee7c1827
[ "MIT" ]
2
2019-07-26T02:10:32.000Z
2020-03-27T23:48:21.000Z
--- title: Using image generators --- As explained in the [Defining conversions](/laravel-medialibrary/v4/converting-images/defining-conversions/) section, this package uses [Glide](http://glide.thephpleague.com/) under the hood, which only performs conversions on image files. To generate conversions of other media types – most notably PDFs and videos – the medialibrary uses image generators to create a derived image file of the media. Conversions for specific file types are defined in exactly the same way as for images: ```php $this->addMediaConversion('thumb') ->setManipulations(['w' => 368, 'h' => 232]) ->performOnCollections('videos'); ``` The medialibrary includes image generators for the following file types: - [PDF](/laravel-medialibrary/v4/converting-other-file-types/using-image-generators#pdf) - [SVG](/laravel-medialibrary/v4/converting-other-file-types/using-image-generators#svg) - [Video](/laravel-medialibrary/v4/converting-other-file-types/using-image-generators#video) ## PDF The only requirement to perform a conversion of a PDF file is [Imagick](http://php.net/manual/en/imagick.setresolution.php). ## SVG The only requirement to perform a conversion of an SVG file is [Imagick](http://php.net/manual/en/imagick.setresolution.php). ## Video The video image generator uses the [PHP-FFMpeg](https://github.com/PHP-FFMpeg/PHP-FFMpeg) package, which you can install via composer: ```bash $ composer require php-ffmpeg/php-ffmpeg ``` You'll also need to follow the `FFmpeg` installation instructions on the [official website](https://ffmpeg.org/download.html). The video image generator allows you to choose at which second of the video the derived file should be created, using the `setExtractVideoFrameAtSecond` method on the conversion. 
```php $this->addMediaConversion('thumb') ->setManipulations(['w' => 368, 'h' => 232]) ->setExtractVideoFrameAtSecond(20) ->performOnCollections('videos'); ``` Once the conversion is created, you can easily use the thumbnail in an HTML `video` tag, for example: ```html <video controls poster="{{ $video->getUrl('thumb') }}"> <source src="{{ $video->getUrl() }}" type="video/mp4"> Your browser does not support the video tag. </video> ```
39
168
0.746739
eng_Latn
0.892097
98312274f0138d2cdb2686ad7226ef54b004183d
4,087
md
Markdown
manifests/function/generate-secrets-example/README.md
vladiskuz/airshipctl
aca738afa1b956059ec40fcd61bf7fda94b3595e
[ "Apache-2.0" ]
null
null
null
manifests/function/generate-secrets-example/README.md
vladiskuz/airshipctl
aca738afa1b956059ec40fcd61bf7fda94b3595e
[ "Apache-2.0" ]
null
null
null
manifests/function/generate-secrets-example/README.md
vladiskuz/airshipctl
aca738afa1b956059ec40fcd61bf7fda94b3595e
[ "Apache-2.0" ]
null
null
null
Function: generate-secrets-example ================================= This function provides an example of how to generate secrets using the templator and a variable catalogue. The generated secrets are usually of `kind: VariableCatalogue`. These generated secrets can then be used in conjunction with `kind: ReplacementTransformer` to substitute them accordingly in the site manifests. If the generated secrets need to be deployed on the cluster, define the secret as `kind: Secret` and appropriately mark it with the `deploy-k8s: true` annotation. ## Generating & Encrypting Secrets Make a copy of this folder in the appropriate site for which secrets have to be generated, and then edit [secret-generation.yaml](secret-generation.yaml) with the required secret generation details. For an example, refer to the [generator](../../site/test-site/target/generator/) folder. Once the secret definitions are in place in the site manifests, we can add a new phase to generate secrets pointing to the folder in the site manifests. Below is an example of how to add a phase to [phases.yaml](../../phases/phases.yaml). 
``` apiVersion: airshipit.org/v1alpha1 kind: Phase metadata: name: secret-generate config: executorRef: apiVersion: airshipit.org/v1alpha1 kind: GenericContainer name: encrypter documentEntryPoint: target/generator ``` The executorRef is of `kind: GenericContainer` and should also have the following definition in [executor.yaml](../../phases/executor.yaml): ``` --- apiVersion: airshipit.org/v1alpha1 kind: GenericContainer metadata: name: encrypter labels: airshipit.org/deploy-k8s: "false" kustomizeSinkOutputDir: "target/generator/results/generated" spec: container: image: quay.io/aodinokov/sops:v0.0.3 envs: - SOPS_IMPORT_PGP - SOPS_PGP_FP config: | apiVersion: v1 kind: ConfigMap data: cmd: encrypt unencrypted-regex: '^(kind|apiVersion|group|metadata)$' ``` The container in the `kind: GenericContainer` spec points to the sops image so that the generated secrets are encrypted and then stored in the `kustomizeSinkOutputDir` directory. Sops uses PGP keys and the SOPS fingerprint environment variable from the terminal to encrypt the generated secrets. ## Steps to execute using airshipctl command 1. The sops key used for encryption has to be available. Download the sops key file. If you want to use a custom sops key, copy it to the current location with the filename `key.asc`. `curl -fsSL -o key.asc https://raw.githubusercontent.com/mozilla/sops/master/pgp/sops_functional_tests_key.asc` 2. Export the key file and set the corresponding fingerprint, which will be used for encryption. `export SOPS_IMPORT_PGP="$(cat key.asc)" && export SOPS_PGP_FP="FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4"` 3. Then run the airshipctl command `airshipctl phase run <secret-generate>` Once the command executes successfully, the generated and encrypted secrets are placed in `kustomizeSinkOutputDir`. 
## Generate Secrets without encryption (not recommended) If no encryption is required for the secrets, use the below `kind: GenericContainer` definition in [executor.yaml](../../phases/executor.yaml): ``` --- apiVersion: airshipit.org/v1alpha1 kind: GenericContainer metadata: name: encrypter labels: airshipit.org/deploy-k8s: "false" kustomizeSinkOutputDir: "target/generator/results/generated" spec: container: image: quay.io/airshipit/templater:latest config: | foo: bar ``` ## Decrypt to read the secrets To decrypt the secrets for readability purposes, run the kustomize build command on the generated secrets folder with the [kustomization.yaml](../../site/test-site/target/generator/results/kustomization.yaml) and [decrypt-secrets.yaml](../../site/test-site/target/generator/results/decrypt-secrets.yaml) files in place in the same folder. Kustomize command to decrypt: `KUSTOMIZE_PLUGIN_HOME=$(pwd)/manifests SOPS_IMPORT_PGP=$(cat key.asc) kustomize build \ --enable_alpha_plugins \ manifests/site/test-site/target/generator/results`
34.635593
230
0.773428
eng_Latn
0.963642
9831a569b46239ea3dd79dbb7607e03d9bbfb69a
332
md
Markdown
README.md
tdg5-cookbooks/docker_registry
5db7d7e0040ba80ae88d9de980fe00c321477bfd
[ "MIT" ]
null
null
null
README.md
tdg5-cookbooks/docker_registry
5db7d7e0040ba80ae88d9de980fe00c321477bfd
[ "MIT" ]
null
null
null
README.md
tdg5-cookbooks/docker_registry
5db7d7e0040ba80ae88d9de980fe00c321477bfd
[ "MIT" ]
null
null
null
# docker_registry Installs and manages a docker container running the docker registry. For more information see: - [Deploying a registry server](https://github.com/docker/distribution/blob/master/docs/deploying.md) - [Registry Configuration Reference](https://github.com/docker/distribution/blob/master/docs/configuration.md)
33.2
86
0.801205
eng_Latn
0.343295
9832953fcd025afc5c0a8a2949c207391c5263e5
218
md
Markdown
_watches/M20190222_003644_TLP_1.md
Meteoros-Floripa/meteoros.floripa.br
7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad
[ "MIT" ]
5
2020-01-22T17:44:06.000Z
2020-01-26T17:57:58.000Z
_watches/M20190222_003644_TLP_1.md
Meteoros-Floripa/site
764cf471d85a6b498873610e4f3b30efd1fd9fae
[ "MIT" ]
null
null
null
_watches/M20190222_003644_TLP_1.md
Meteoros-Floripa/site
764cf471d85a6b498873610e4f3b30efd1fd9fae
[ "MIT" ]
2
2020-05-19T17:06:27.000Z
2020-09-04T00:00:43.000Z
--- layout: watch title: TLP1 - 22/02/2019 - M20190222_003644_TLP_1T.jpg date: 2019-02-22 00:36:44 permalink: /2019/02/22/watch/M20190222_003644_TLP_1 capture: TLP1/2019/201902/20190221/M20190222_003644_TLP_1T.jpg ---
27.25
62
0.784404
kor_Hang
0.039469
9832ed713bdd72b21021c7e05968c7e30a33b4c4
911
markdown
Markdown
_posts/2021-03-23-hello-memo-et.markdown
memoetapp/blog
d0d5ca20ba416025c0bab3edddc4247849d50c34
[ "MIT" ]
null
null
null
_posts/2021-03-23-hello-memo-et.markdown
memoetapp/blog
d0d5ca20ba416025c0bab3edddc4247849d50c34
[ "MIT" ]
null
null
null
_posts/2021-03-23-hello-memo-et.markdown
memoetapp/blog
d0d5ca20ba416025c0bab3edddc4247849d50c34
[ "MIT" ]
null
null
null
--- layout: post title: "Hello world, this is Memoet" date: 2021-03-23 03:00:00 tags: ["Product", "Tech"] --- Memoet is a web app dedicated to helping people find fun again while learning to memorize things. **Memo** comes from "memory" and **et** is a suffix to make the name sound more human ;) To begin with, we have combined a quiz system (type-answer & multiple-choice questions for now) with a spaced repetition algorithm (SuperMemo2) and built them right into notes. The app is already up and running at memoet.manhtai.com, so give it a try when you have the time. That was the first step of a long journey. Along the way, we want to share the things we are learning and building to make Memoet better. It will cover everything from technical topics to product development. We dedicate ourselves to making learning easier, and we hope that Memoet will become the first successful solution. Stay tuned!
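The SuperMemo2 scheduling the post mentions can be sketched in a few lines of Python. This is a generic SM-2 implementation for illustration, not Memoet's actual code:

```python
def sm2(quality, repetitions, interval, easiness):
    """One SuperMemo2 review step.

    quality: recall grade 0-5; repetitions/interval/easiness: card state.
    Returns the updated (repetitions, interval in days, easiness factor).
    """
    if quality < 3:
        # Failed recall: restart the schedule, keep the easiness factor.
        return 0, 1, easiness
    # Easiness factor update per SM-2, floored at 1.3.
    easiness = max(1.3, easiness + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    repetitions += 1
    if repetitions == 1:
        interval = 1
    elif repetitions == 2:
        interval = 6
    else:
        interval = round(interval * easiness)
    return repetitions, interval, easiness

state = (0, 0, 2.5)            # a brand-new card
for grade in (5, 5, 4):        # three successful reviews
    state = sm2(grade, *state)
print(state)                   # interval grows to roughly two weeks
```

After three good reviews the interval reaches 16 days, which is the "spacing" effect that makes memorization cheap over time.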
33.740741
80
0.760703
eng_Latn
0.999742
98333b452c1d3981cddefb4fee01212fcbbb0d61
6,075
md
Markdown
docs/getting-started.md
vipinsun/opencbdc-tx
724f307548f92676423e98d7f2c1bfc2c66f79ef
[ "MIT" ]
1
2022-02-09T22:25:02.000Z
2022-02-09T22:25:02.000Z
docs/getting-started.md
vipinsun/opencbdc-tx
724f307548f92676423e98d7f2c1bfc2c66f79ef
[ "MIT" ]
null
null
null
docs/getting-started.md
vipinsun/opencbdc-tx
724f307548f92676423e98d7f2c1bfc2c66f79ef
[ "MIT" ]
1
2022-02-10T02:31:32.000Z
2022-02-10T02:31:32.000Z
# Helpful Resources * The [Project Readme](/README.md) includes a crash-course for getting set up with the code and running a few tests * The [Contribution Guide](contributing.md) includes more thorough discussion of the project's guiding principles, governance model, and how to get involved * The [Technical Reference](https://mit-dci.github.io/opencbdc-tx/) includes documentation for the code itself and how it works * The [Architecture Guide](architecture.md) walks through the data model of transactions and the currently-implemented architectures * The [Contribution Lifecycle Guide](lifecycle.md) takes you through what you can expect throughout the process of submitting a contribution to OpenCBDC # Frequently Asked Questions ## What does a good commit look like? <details> A good commit message clearly communicates the goal of the included changes (the *why* rather than the *what*). Chris Beams wrote a [great article](https://chris.beams.io/posts/git-commit/) on writing good commit messages. A commit's contents should be very focused on accomplishing a single task (e.g., fixing a single bug). In particular, you should strive for your commits to be [atomic](https://www.freshconsulting.com/insights/blog/atomic-commits/). If there is an issue open that your contribution addresses, reference that issue number in the commit message's body text. </details> ## How do I sign-off on my contributions? <details> OpenCBDC uses a [developer certificate of origin](https://developercertificate.org) (or DCO) to ensure that all contributions are made freely available under the same license as OpenCBDC's original code base. To do that, when contributors submit code (or other changes that are reflected in any repository), they are required to “sign-off” their commits. To sign off, you can just add the `-s` argument when you create your commit with `git commit` (i.e., use `git commit -s`). 
This adds the following line to the bottom of your commit: ``` Signed-off-by: Your Name <your.email.address@example.com> ``` (You could manually type this out if you want to.) </details> ## I've already created commits but forgot to add sign-offs; how do I fix them? <details> There are several options to add sign-offs retroactively: ### `--amend` your most-recent commit If you only need to change the last commit you made, you can do the following: ``` $ git commit --amend --no-edit --signoff $ git push --force origin <your-branch-name> ``` ### `rebase` all the commits in your contribution (requires git version 2.13 or newer) If you need to add a sign-off to each of the commits in your contribution, you can use `git rebase` to automatically add it to each one: ``` $ git rebase --signoff HEAD~X # replace X with the number of commits in your contribution $ git push --force origin <your-branch-name> ``` ### interactive `rebase` If your version of git is older than 2.13 or you only need to add sign-offs to particular commits in your contribution, you can use an interactive rebase to choose the commits to modify: ``` $ git rebase -i HEAD~X # replace X with the number of commits in your contribution ``` A text editor will open showing your commits (make sure only your commits are listed; if not, exit the file, and rerun the rebase command with the correct value for `X`). Mark all the commits that need a sign-off as “reword”. The rebase will stop at each of these commits and let you run commands. Run these two commands until the rebase is complete: ``` $ git commit --amend --no-edit --signoff $ git rebase --continue ``` Now, force-push your branch: ``` $ git push --force origin <your-branch-name> ``` </details> ## Can I contribute without using docker (running some other environment, etc.)? <details> Absolutely! However, we only officially support the included docker compose files (as they mirror our automated test environment). 
After cloning the code, ``scripts/configure.sh`` will attempt to configure your environment. **Note:** ``scripts/configure.sh`` only supports Ubuntu-based linux distributions and macOS (which depends on [Homebrew](https://brew.sh/)). However, it can be used as a guide to understand what you must do to get your environment setup. In short, ``scripts/configure.sh`` does the following: * installs a couple packages needed for building and testing (e.g., clang, LLVM, cmake, make, lcov, googletest, git) * installs the external dependencies: * [Google's LevelDB](https://github.com/google/leveldb) * [eBay's NuRaft](https://github.com/eBay/NuRaft) * downloads a helper python script to run code linting and static analysis **Note:** The code assumes it is running on Linux on an x86\_64 processor. However, we generally tend towards keeping code portable, so any \*nix-like operating system on an x86\_64 processor may function well. </details> ## What can I do to make it more likely my code will get merged quickly? <details> First and foremost, respond to feedback for your contributions quickly and cordially. The faster any issues reviewers bring up are fixed, the faster we can merge your code! However, here are several things you can do to make review as easy and quick as possible: * Keep your working branch up-to-date with our main branch and free of merge conflicts * Run ``./scripts/lint.sh`` and ``./scripts/test.sh`` and ensure both succeed before committing changes * You can use a tool like [`act`](https://github.com/nektos/act) to run the CI locally and see if your changes would pass automated-review * Author [good commits](#what-does-a-good-commit-look-like) </details> ## Are there easy things for me to get started on? <details> Definitely! Take a look at our issue tracker's list of [good first issues](https://github.com/mit-dci/opencbdc-tx/labels/difficulty%2F01-good-first-issue). </details> ## Is anyone available to give talks, presentations, or interviews? 
<details> Possibly! Please send us [an email](mailto:dci-press@mit.edu) with your questions or requests and we will get back to you as soon as possible! </details>
40.231788
208
0.75572
eng_Latn
0.998293
98336fd6e809151b7d31b27b3dd25dbbc2a910dd
42
md
Markdown
README.md
adam-szymanski/GoXtb
b2907cba506cc231a857d60a82160c2d811cdd1a
[ "MIT" ]
null
null
null
README.md
adam-szymanski/GoXtb
b2907cba506cc231a857d60a82160c2d811cdd1a
[ "MIT" ]
null
null
null
README.md
adam-szymanski/GoXtb
b2907cba506cc231a857d60a82160c2d811cdd1a
[ "MIT" ]
1
2020-03-24T16:41:02.000Z
2020-03-24T16:41:02.000Z
# GoXtb XTB API client written in Golang
14
33
0.761905
eng_Latn
0.920255
983370b67b9e0a784a2897d9cb7f1a3d885b3feb
1,418
md
Markdown
README.md
marpple/FxSVG
0a29bcf1e9e42a6cfc5bc1b41ed95c2a84ccf776
[ "MIT" ]
11
2020-06-01T09:36:07.000Z
2021-07-14T12:22:45.000Z
README.md
marpple/FxSVG
0a29bcf1e9e42a6cfc5bc1b41ed95c2a84ccf776
[ "MIT" ]
2
2020-07-28T01:29:44.000Z
2020-08-03T03:23:28.000Z
README.md
marpple/FxSVG
0a29bcf1e9e42a6cfc5bc1b41ed95c2a84ccf776
[ "MIT" ]
1
2020-07-15T04:17:54.000Z
2020-07-15T04:17:54.000Z
# FxSVG [EN](./README.md) | [KR](./README_KR.md) Functional SVG Handling Library ## Installation FxSVG uses the ECMAScript Module system. Two package types are provided. - ECMAScript Module - bundle file for use in a browser environment ### ECMAScript Module ```shell script npm install fxsvg ``` ```javascript import { $$createSVGTransformTranslate } from "fxsvg"; import FxSVG from "fxsvg"; const $el = document.querySelector("svg rect"); const transform = $$createSVGTransformTranslate({ tx: 10, ty: 20 }); FxSVG.getBaseTransformList($el).initialize(transform); ``` ### In a Browser FxSVG supports only modern browsers that follow the ECMAScript 6+ spec and SVG 1.1+ spec. FxSVG uses the `$$` property of the `window` object as a namespace for itself. ```shell script npm install fxsvg ``` ```html <script src="path/to/node_modules/fxsvg/dist/fxsvg.js"></script> ``` ```javascript const { $$el } = $$; const $rect = $$el(`<rect x="10" y="10" width="100" height="100"></rect>`); const { controller } = $$.controlTranslateTransform()($rect); controller.append({ tx: 10 }).append({ ty: 10 }).end(); ``` ## Documentation - [Function Interface](./doc/FUNCTION_INTERFACE.md) - [API Reference](./doc/API.md) - [Test](./doc/TEST.md) ## Contributing FxSVG always welcomes all developers who want to join in. Please follow the guide when you contribute. - [Contributing Guide](./doc/CONTRIBUTING.md)
21.815385
89
0.704513
eng_Latn
0.605897
98341967f7f60fcbccc1629f744f5ab2ee306bf2
4,757
md
Markdown
articles/site-recovery/azure-to-azure-replicate-after-migration.md
y32saji/azure-docs
cf971fe82e9ee70db9209bb196ddf36614d39d10
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/site-recovery/azure-to-azure-replicate-after-migration.md
y32saji/azure-docs
cf971fe82e9ee70db9209bb196ddf36614d39d10
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/site-recovery/azure-to-azure-replicate-after-migration.md
y32saji/azure-docs
cf971fe82e9ee70db9209bb196ddf36614d39d10
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Set up disaster recovery for Azure VMs after migration to Azure with Azure Site Recovery | Microsoft Docs
description: This article describes how to prepare machines to set up disaster recovery between Azure regions after migration to Azure using Azure Site Recovery.
services: site-recovery
author: rayne-wiselman
ms.service: site-recovery
ms.topic: article
ms.date: 03/18/2019
ms.author: raynew
---

# Set up disaster recovery for Azure VMs after migration to Azure

Use this article if you've [migrated on-premises machines to Azure VMs](tutorial-migrate-on-premises-to-azure.md) using the [Site Recovery](site-recovery-overview.md) service, and you now want to get the VMs set up for disaster recovery to a secondary Azure region. The article describes how to ensure that the Azure VM agent is installed on migrated VMs, and how to remove the Site Recovery Mobility service that's no longer needed after migration.

## Verify migration

Before you set up disaster recovery, make sure that migration has completed as expected. To complete a migration successfully, after the failover, you should select the **Complete Migration** option for each machine you want to migrate.

## Verify the Azure VM agent

Each Azure VM must have the [Azure VM agent](../virtual-machines/extensions/agent-windows.md) installed. To replicate Azure VMs, Site Recovery installs an extension on the agent.

- If the machine is running version 9.7.0.0 or later of the Site Recovery Mobility service, the Azure VM agent is automatically installed by the Mobility service on Windows VMs. On earlier versions of the Mobility service, you need to install the agent manually.
- For Linux VMs, you must install the Azure VM agent manually. You only need to install the Azure VM agent if the Mobility service installed on the migrated machine is v9.6 or earlier.
### Install the agent on Windows VMs If you're running a version of the Site Recovery mobility service earlier than 9.7.0.0, or you have some other need to install the agent manually, do the following: 1. Ensure you have admin permissions on the VM. 2. Download the [VM Agent installer](https://go.microsoft.com/fwlink/?LinkID=394789&clcid=0x409). 3. Run the installer file. #### Validate the installation To check that the agent is installed: 1. On the Azure VM, in the C:\WindowsAzure\Packages folder, you should see the WaAppAgent.exe file. 2. Right-click the file, and in **Properties**, select the **Details** tab. 3. Verify that the **Product Version** field shows 2.6.1198.718 or higher. [Learn more](https://docs.microsoft.com/azure/virtual-machines/extensions/agent-windows) about agent installation for Windows. ### Install the agent on Linux VMs Install the [Azure Linux VM](../virtual-machines/extensions/agent-linux.md) agent manually as follows: 1. Make sure you have admin permissions on the machine. 2. We strongly recommend that you install the Linux VM agent using an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](https://docs.microsoft.com/azure/virtual-machines/linux/endorsed-distros) integrate the Azure Linux agent package into their images and repositories. - We strongly recommend that you update the agent only through a distribution repository. - We don't recommend installing the Linux VM agent directly from GitHub and updating it. - If the latest agent for your distribution is not available, contact distribution support for instructions on how to install it. #### Validate the installation 1. Run this command: **ps -e** to ensure that the Azure agent is running on the Linux VM. 2. If the process isn't running, restart it by using the following commands: - For Ubuntu: **service walinuxagent start** - For other distributions: **service waagent start** ## Uninstall the Mobility service 1. 
Manually uninstall the Mobility service from the Azure VM, using one of the following methods.

- For Windows, in the Control Panel > **Add/Remove Programs**, uninstall **Microsoft Azure Site Recovery Mobility Service/Master Target server**. At an elevated command prompt, run:

  ```
  MsiExec.exe /qn /x {275197FC-14FD-4560-A5EB-38217F80CBD1} /L+*V "C:\ProgramData\ASRSetupLogs\UnifiedAgentMSIUninstall.log"
  ```

- For Linux, sign in as the root user. In a terminal, go to **/usr/local/ASR**, and run the following command:

  ```
  uninstall.sh -Y
  ```

2. Restart the VM before you configure replication.

## Next steps

[Review troubleshooting](site-recovery-extension-troubleshoot.md) for the Site Recovery extension on the Azure VM agent.

[Quickly replicate](azure-to-azure-quickstart.md) an Azure VM to a secondary region.
57.313253
449
0.76687
eng_Latn
0.991656
98348e57c7102da3a1f67d84ec3444aea0bd6acc
4,717
md
Markdown
Java/Difference between static and regular nested classes.md
2tanayk/Android-Interview-Questions
da1e641fd3348e8ed757655e4d6d979ab3d909c4
[ "Apache-2.0" ]
78
2020-03-23T00:46:05.000Z
2022-03-23T08:22:21.000Z
Java/Difference between static and regular nested classes.md
2tanayk/Android-Interview-Questions
da1e641fd3348e8ed757655e4d6d979ab3d909c4
[ "Apache-2.0" ]
300
2020-01-06T15:45:23.000Z
2022-03-29T11:35:45.000Z
Java/Difference between static and regular nested classes.md
2tanayk/Android-Interview-Questions
da1e641fd3348e8ed757655e4d6d979ab3d909c4
[ "Apache-2.0" ]
25
2020-03-23T10:02:24.000Z
2022-03-23T08:22:25.000Z
# Inner classes vs Static nested classes The Java programming language allows you to define a class within another class. Such a class is called a nested class and is illustrated here: ``` class OuterClass { ... class NestedClass { ... } } ``` **Terminology**: Nested classes are divided into two categories: *static* and *non-static*. Nested classes that are declared `static` are called *static nested classes*. Non-static nested classes are called *inner classes*. Example: ``` class OuterClass { ... static class StaticNestedClass { ... } class InnerClass { ... } } ``` A nested class is a member of its enclosing class. Non-static nested classes (inner classes) have access to other members of the enclosing class, even if they are declared private. Static nested classes do not have access to other members of the enclosing class. As a member of the `OuterClass`, a nested class can be declared `private`, `public`, `protected`, or *package private*. (Recall that outer classes can only be declared `public` or *package private*.) ## Why Use Nested Classes? Compelling reasons for using nested classes include the following: - **It is a way of logically grouping classes that are only used in one place**: If a class is useful to only one other class, then it is logical to embed it in that class and keep the two together. Nesting such "helper classes" makes their package more streamlined. - **It increases encapsulation**: Consider two top-level classes, A and B, where B needs access to members of A that would otherwise be declared `private`. By hiding class B within class A, A's members can be declared private and B can access them. In addition, B itself can be hidden from the outside world. - **It can lead to more readable and maintainable code**: Nesting small classes within top-level classes places the code closer to where it is used. ## Static Nested Classes As with class methods and variables, a static nested class is associated with its outer class. 
And like static class methods, a static nested class cannot refer directly to instance variables or methods defined in its enclosing class: it can use them only through an object reference.

**Note**: A static nested class interacts with the instance members of its outer class (and other classes) just like any other top-level class. In effect, a static nested class is behaviorally a top-level class that has been nested in another top-level class for packaging convenience.

Static nested classes are accessed using the enclosing class name:

`OuterClass.StaticNestedClass`

For example, to create an object for the static nested class, use this syntax:

`OuterClass.StaticNestedClass nestedObject = new OuterClass.StaticNestedClass();`

## Inner Classes

As with instance methods and variables, an inner class is associated with an instance of its enclosing class and has direct access to that object's methods and fields. Also, because an inner class is associated with an instance, it cannot define any static members itself.

Objects that are instances of an inner class exist *within* an instance of the outer class. Consider the following classes:

```
class OuterClass {
    ...
    class InnerClass {
        ...
    }
}
```

An instance of `InnerClass` can exist only within an instance of `OuterClass` and has direct access to the methods and fields of its enclosing instance.

To instantiate an inner class, you must first instantiate the outer class. Then, create the inner object within the outer object with this syntax:

`OuterClass.InnerClass innerObject = outerObject.new InnerClass();`

| Inner classes | Static nested classes |
|---|---|
| An inner class object cannot exist without an outer class object: it is always associated with an instance of the outer class. | A static nested class object can exist without an outer class object: it is not associated with an instance of the outer class. |
| Static members cannot be declared inside an inner class. | Static members can be declared inside a static nested class. |
| Since a `main()` method cannot be declared, a regular inner class cannot be invoked directly from the command prompt. | Since a `main()` method can be declared, a static nested class can be invoked directly from the command prompt. |
| Both static and non-static members of the outer class can be accessed directly. | Only static members of the outer class can be accessed directly. |

## Links

https://docs.oracle.com/javase/tutorial/java/javaOO/nested.html
https://www.geeksforgeeks.org/nested-classes-java
http://tutorials.jenkov.com/java/nested-classes.html
60.474359
462
0.760865
eng_Latn
0.999558
983497346e2fa8b786cd9067389736e5d523bd04
570
md
Markdown
README.md
saebyeok/runpm
51579d483f6be370dd852ae325463ef86ac52d80
[ "MIT" ]
1
2016-03-09T01:05:42.000Z
2016-03-09T01:05:42.000Z
README.md
saebyeok/runpm
51579d483f6be370dd852ae325463ef86ac52d80
[ "MIT" ]
null
null
null
README.md
saebyeok/runpm
51579d483f6be370dd852ae325463ef86ac52d80
[ "MIT" ]
null
null
null
```runpm.js``` is a package manager that reuses ```node_modules``` folders across projects.

# Installation

If you have the node package manager, npm, installed:

```shell
npm install -g runpm
```

# Getting Started

Register the ```node_modules``` folder of a specific project directory:

```shell
/path/to/project1 $ runpm --config [name]
```

Create a link in another project:

```shell
/path/to/project2 $ runpm [name]
```

List the registered folders:

```shell
$ runpm list
```

Remove a registered link:

```shell
$ runpm --remove [name]
```

# License

MIT
12.666667
75
0.666667
eng_Latn
0.984286
9834caf7bda1a18690cad56ce354406a0266ae49
289
md
Markdown
README.md
mikeadams1/folder-pane
fe86e8b6de7dd8f0ca5364ed61afe9ff26a72e83
[ "MIT" ]
null
null
null
README.md
mikeadams1/folder-pane
fe86e8b6de7dd8f0ca5364ed61afe9ff26a72e83
[ "MIT" ]
null
null
null
README.md
mikeadams1/folder-pane
fe86e8b6de7dd8f0ca5364ed61afe9ff26a72e83
[ "MIT" ]
1
2019-11-04T03:26:08.000Z
2019-11-04T03:26:08.000Z
# folder-pane

Folder browser for the Solid file system: traverse folders, add new folders and objects, upload files, etc.

To upload files, drag them onto the green "+".

To delete a file, go into the Internals pane (cogwheel button).

To create a new folder, click on the green "+" and then the folder icon.
28.9
88
0.747405
eng_Latn
0.982357
9834de05573a622e1af74f990fb36fd840cf9765
6,051
md
Markdown
README.md
delpikye-v/react-perfect-scrollbar
65b3975fe2bcbbb5708aa10892eb0f07cdb78944
[ "MIT" ]
null
null
null
README.md
delpikye-v/react-perfect-scrollbar
65b3975fe2bcbbb5708aa10892eb0f07cdb78944
[ "MIT" ]
null
null
null
README.md
delpikye-v/react-perfect-scrollbar
65b3975fe2bcbbb5708aa10892eb0f07cdb78944
[ "MIT" ]
null
null
null
<div align="center">
  <h1>react-perfect-scrollbar-z</h1>
  <br />
  <a href="https://codesandbox.io/s/react-perfect-scrollbar-z-8ikb5">LIVE EXAMPLE</a>
</div>

---

#### Description

+ It wraps <b>[perfect-scrollbar](https://github.com/mdbootstrap/perfect-scrollbar)</b> around an element.
+ It automatically updates the scrollbar on resize or data changes; you don't have to do anything.
+ It supports y-scrolling for only the body of a table (the header stays fixed).

---

#### Usage

```shell
npm install react-perfect-scrollbar-z
```

Import the module where you want to use it:

```js
import 'react-perfect-scrollbar-z/build/styles.css';
import Scrollbar from 'react-perfect-scrollbar-z'
```

#### Snippet

##### `simple`

```js
// tagName = 'div' wrapName='div'
// something1 (..any, showHide, data2, data3)
<Scrollbar height="100px" effectData={something1...}>
    { something1... }
</Scrollbar>
```

<br />

##### `special tagName (tbody, ul, dl, ol)`

```js
// const refScroll = useRef(null) // you handle scrollbars
<Scrollbar
    tagName="tbody" // tbody, ul, dl, ol
    maxHeight="400px"
    className="list-group"
    effectData={listData}
    always
    // onScrollY={evt => console.log(evt)}
    // refScroll={refScroll}
>
    { listData.map(item => <tr>...</tr>) }
</Scrollbar>
```

```js
// access the scrollbar (your handler)
refScroll.current.element.scrollTop = 0 || refScroll.current.update()
```

<br />

---

#### props

| props | type | description |
|---|---|---|
| options | Object | [perfect-scrollbar/options](https://github.com/mdbootstrap/perfect-scrollbar#options) |
| tagName | String | Scrollbar container tag. Default `div` |
| effectData | String, Array, Object, ... | Automatically updates the scrollbar if `effectData` has changed. |
| always | boolean | Always show the scrollbar when content overflows (`true`). Default `false` |
| maxHeight | `px, %, vh` | max-height of the scrollbar |
| height | `px, %, vh` | height of the scrollbar |
| maxWidth | `px, %, vw` | max-width of the scrollbar |
| width | `px, %, vw` | width of the scrollbar |
| className | String | Your css-class |
| style | Object | Your css-style |
| libTable | Boolean | When you update a 3rd-party table. Default `false` |
| wrapName | String | Wraps all scrolled elements (`div`). Used when tagName is not one of [tbody, ul, ol, dl] |
| wheelStop | Boolean | wheelPropagation (quick in options). Default: `true` |
| refScroll | useRef | If you want to use the scrollbars (ps scrollbar) |
| --- | --- | --- |
| onScrollY | Function | y-axis is scrolled in either direction. |
| onScrollX | Function | x-axis is scrolled in either direction. |
| onScrollUp | Function | scrolling upwards. |
| onScrollDown | Function | scrolling downwards. |
| onScrollLeft | Function | scrolling to the left. |
| onScrollRight | Function | scrolling to the right. |
| onYReachStart | Function | scrolling reaches the start of the y-axis. |
| onYReachEnd | Function | scrolling reaches the end of the y-axis (useful for infinite scroll). |
| onXReachStart | Function | scrolling reaches the start of the x-axis. |
| onXReachEnd | Function | scrolling reaches the end of the x-axis (useful for infinite scroll). |

<br />

#### Note

+ tbody supports only `scroll-y` (no x). You should not set maxWidth or width (the table's defaults are used).
+ To update `scrollTop` or `scrollLeft`, use `refScroll`.
+ `ul/ol/dl/tbody` are special cases (multiple children), so you shouldn't set a border on the tagName element.

```js
<Scrollbar style={{ border: "1px solid" }} tagName="tbody" ... /> => no

<parent style={{ border: "1px solid" }}>
    <Scrollbar tagName="tbody" ... />
</parent> => OK
```

+ `libTable`

```js
<Scrollbar libTable={true}><CustomTag></CustomTag></Scrollbar>
```

It will try to add the perfect scrollbar to the `tbody` of the `first` table found. (Checking...)

+ You should use `ul/dl/ol` with a basic wrapper:

```js
<Scrollbar effectData={abcd} .... >
    <ul>
        <for>...</for>
    </ul>
</Scrollbar>
```

<br />

#### RUN

<a href="https://codesandbox.io/s/react-perfect-scrollbar-z-8ikb5">LIVE EXAMPLE</a>

```shell
npm install
```

```shell
npm run dev
npm run start
```

### License

MIT
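The `effectData` behavior described above (re-running the scrollbar update only when the watched data changes between renders) can be sketched in plain JavaScript. This is an illustrative sketch of the idea, not the component's actual source; `makeEffectWatcher` and `onUpdate` are hypothetical names:

```javascript
// Re-run an update callback only when the watched value changes
// identity between calls, mirroring how a prop like `effectData`
// can trigger a scrollbar refresh.
function makeEffectWatcher(onUpdate) {
  let previous;
  let first = true;
  return function watch(effectData) {
    if (first || !Object.is(previous, effectData)) {
      first = false;
      previous = effectData;
      onUpdate(effectData); // e.g. ps.update() in a real component
    }
  };
}

// Usage: the "scrollbar" updates once per distinct data value.
let updates = 0;
const watch = makeEffectWatcher(() => updates++);
watch([1, 2, 3]);     // new data -> update
const same = [4, 5];
watch(same);          // new data -> update
watch(same);          // unchanged -> no update
console.log(updates); // 2
```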
41.731034
144
0.446538
eng_Latn
0.582143
983599b1f6c7cf6ad8338baa9d1dc97df8bbbe59
3,374
md
Markdown
examples/granted-strict/README.md
asos-craigmorten/permission-guard
66c095cadf7b9ca63601cfc925e57de17d8f150c
[ "MIT" ]
3
2020-05-27T08:34:20.000Z
2020-10-12T09:46:06.000Z
examples/granted-strict/README.md
asos-craigmorten/permission-guard
66c095cadf7b9ca63601cfc925e57de17d8f150c
[ "MIT" ]
null
null
null
examples/granted-strict/README.md
asos-craigmorten/permission-guard
66c095cadf7b9ca63601cfc925e57de17d8f150c
[ "MIT" ]
2
2020-05-28T07:50:29.000Z
2020-06-09T07:35:15.000Z
# granted-strict

This example demonstrates using Permission Guard specifying the following allowed / required permissions:

- A top-level permission for `env`
- A scoped permission for `net`, scoped to the domain `http://google.com`

The `log` flag has been enabled for additional verbosity, and both exit flags have been passed as `true` to demonstrate how the guard prevents further code execution with its strictest settings.

## Scenarios

### No errors

```bash
deno run --unstable --allow-env --allow-net=google.com ./examples/granted-strict/index.ts
```

When both `--allow-env` and `--allow-net=google.com` are set and no additional permissions are granted, the guard permits code execution.

### Missing top-level permission

```bash
deno run --unstable --allow-net=google.com ./examples/granted-strict/index.ts
```

When only `--allow-net=google.com` is set and no additional permissions are granted, the guard stops code execution due to the missing `env` permission, with the following logs:

```console
permission-guard: warning: missing permission "--allow-env"
permission-guard: exiting due to missing required permissions
```

### Missing scoped permission

```bash
deno run --unstable --allow-env ./examples/granted-strict/index.ts
```

When only `--allow-env` is set and no additional permissions are granted, the guard stops code execution due to the missing `net` permission, with the following logs:

```console
permission-guard: warning: missing permission "--allow-net=google.com"
permission-guard: exiting due to missing required permissions
```

### Insecure top-level permission

```bash
deno run --unstable --allow-env --allow-net=google.com --allow-write ./examples/granted-strict/index.ts
```

When the additional `--allow-write` is set, the guard stops code execution due to the insecure top-level permission, with the following logs:

```console
permission-guard: error: insecure top-level permission "--allow-write" has been provided
permission-guard: exiting due to insecure top-level permissions
```

### Insecure all permissions

```bash
deno run --unstable -A ./examples/granted-strict/index.ts
```

When `-A` or `--allow-all` is set, the guard stops code execution due to the insecure top-level permissions, with the following logs:

```console
permission-guard: error: insecure top-level permission "--allow-run" has been provided
permission-guard: error: insecure top-level permission "--allow-read" has been provided
permission-guard: error: insecure top-level permission "--allow-write" has been provided
permission-guard: error: insecure top-level permission "--allow-plugin" has been provided
permission-guard: error: insecure top-level permission "--allow-hrtime" has been provided
permission-guard: exiting due to insecure top-level permissions
```

### Insecure scoped permission

```bash
deno run --unstable --allow-env --allow-net=google.com --allow-write=/usr ./examples/granted-strict/index.ts
```

Permission Guard is unable to act on scoped permissions such as `--allow-write=/usr` due to limitations in the Deno Permissions API: there is currently no way to enumerate all permissions that have been requested. In this example, the `--allow-write=/usr` grant _could_ allow unsolicited writes to the `/usr` directory as an attack vector, should you accidentally pull a malicious third-party library into your code.
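The flag classification these scenarios describe can be sketched in plain JavaScript. This is an illustrative sketch only; the real guard inspects granted permissions through Deno's Permissions API rather than parsing argv, and `findInsecureFlags` is a hypothetical helper:

```javascript
// Classify command-line permission flags: a top-level (unscoped)
// --allow-* flag is insecure unless explicitly allowed, while scoped
// grants such as --allow-net=google.com pass.
function findInsecureFlags(argv, allowedTopLevel) {
  if (argv.includes("-A") || argv.includes("--allow-all")) {
    return ["--allow-all"]; // grants everything, always insecure
  }
  return argv.filter((arg) => {
    const scoped = arg.includes("=");
    return (
      arg.startsWith("--allow-") && !scoped && !allowedTopLevel.includes(arg)
    );
  });
}

// Mirrors the scenarios above: env allowed top-level, net must be scoped.
console.log(findInsecureFlags(
  ["--allow-env", "--allow-net=google.com"], ["--allow-env"]
)); // []
console.log(findInsecureFlags(
  ["--allow-env", "--allow-net=google.com", "--allow-write"], ["--allow-env"]
)); // [ '--allow-write' ]
```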
39.694118
229
0.765264
eng_Latn
0.987742
9835d532d02a6957a26e12317638d963a9ced1a3
2,569
md
Markdown
Answers/Dismiss_sheet_withNavigationView_child.md
Asperi-Demo/4SwiftUI
6670e68517b49353c82b9901ca026b376109458a
[ "MIT" ]
1
2020-03-14T22:12:49.000Z
2020-03-14T22:12:49.000Z
Answers/Dismiss_sheet_withNavigationView_child.md
Asperi-Demo/4SwiftUI
6670e68517b49353c82b9901ca026b376109458a
[ "MIT" ]
null
null
null
Answers/Dismiss_sheet_withNavigationView_child.md
Asperi-Demo/4SwiftUI
6670e68517b49353c82b9901ca026b376109458a
[ "MIT" ]
1
2020-03-14T22:13:01.000Z
2020-03-14T22:13:01.000Z
```
BOYCOTT on russia! Don't buy, sell, support - HELP TO STOP WAR!
«Русский военный корабль, иди на хуй!» (c) Grybov, Ukrainian Frontier Guard

ATTENTION: By using this you agree do not repost any part of this code on StackOverflow site.
Thanks, Asperi.
```

Q: Dismiss a parent modal in SwiftUI from a NavigationView (by codewithfeeling)

A: Here is a possible approach based on using our own explicitly created environment key (actually I have a feeling that it is not correct to use `presentationMode` for this use case... anyway). The proposed approach is generic and works from any view in the modal view hierarchy.

Tested & works with Xcode 11.2 / iOS 13.2.

    // define env key to store our modal mode values
    struct ModalModeKey: EnvironmentKey {
        static let defaultValue = Binding<Bool>.constant(false) // < required
    }

    // define modalMode value
    extension EnvironmentValues {
        var modalMode: Binding<Bool> {
            get {
                return self[ModalModeKey.self]
            }
            set {
                self[ModalModeKey.self] = newValue
            }
        }
    }

    struct ParentModalTest: View {
        @State var showModal: Bool = false

        var body: some View {
            Button(action: { self.showModal.toggle() }) {
                Text("Launch Modal")
            }
            .sheet(isPresented: self.$showModal, onDismiss: { }) {
                PageOneContent()
                    .environment(\.modalMode, self.$showModal) // < bind modalMode
            }
        }
    }

    struct PageOneContent: View {
        var body: some View {
            NavigationView {
                VStack {
                    Text("I am Page One")
                }
                .navigationBarTitle("Page One")
                .navigationBarItems(
                    trailing: NavigationLink(destination: PageTwoContent()) {
                        Text("Next")
                    })
            }
        }
    }

    struct PageTwoContent: View {
        @Environment (\.modalMode) var modalMode // << extract modalMode

        var body: some View {
            NavigationView {
                VStack {
                    Text("This should dismiss the modal. But it just pops the NavigationView")
                        .padding()
                    Button(action: {
                        self.modalMode.wrappedValue = false // << close modal
                    }) {
                        Text("Finish")
                    }
                    .padding()
                    .foregroundColor(.white)
                    .background(Color.blue)
                }
                .navigationBarTitle("Page Two")
            }
        }
    }
27.623656
86
0.558583
eng_Latn
0.652311
983618867291142acfb992c30c427f3d34099055
4,014
md
Markdown
subscriptions/mpsa.md
hahaysh/visualstudio-docs.ko-kr
38fd1d7bd27067ebbb756f79879e7d9012e19ba2
[ "CC-BY-4.0", "MIT" ]
null
null
null
subscriptions/mpsa.md
hahaysh/visualstudio-docs.ko-kr
38fd1d7bd27067ebbb756f79879e7d9012e19ba2
[ "CC-BY-4.0", "MIT" ]
null
null
null
subscriptions/mpsa.md
hahaysh/visualstudio-docs.ko-kr
38fd1d7bd27067ebbb756f79879e7d9012e19ba2
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: MPSA(Microsoft 제품 및 서비스 계약)에서 Visual Studio 구독 | Microsoft Docs author: evanwindom ms.author: jaunger manager: evelynp ms.date: 03/14/2018 ms.topic: Get-Started-Article description: MPSA(Microsoft 제품 및 서비스 계약)에서 Visual Studio 구독 ms.prod: vs-subscription ms.technology: vs-subscriptions searchscope: VS Subscription ms.openlocfilehash: a18565a97c0cd85ce42109961592a57c490d92a1 ms.sourcegitcommit: 3724338a5da5a6d75ba00452b0a607388b93ed0c ms.translationtype: HT ms.contentlocale: ko-KR ms.lasthandoff: 04/06/2018 --- # <a name="visual-studio-subscriptions-in-a-microsoft-products-and-services-agreement-mpsa"></a>MPSA(Microsoft 제품 및 서비스 계약)에서 Visual Studio 구독 MPSA 프로그램을 통해 Visual Studio 구독을 구매한 경우 Visual Studio 구독 관리자가 되어 구독을 다른 사용자에게 할당하기 전에 기억해야 할 몇 가지가 있습니다. 이미 관리자로 설정되어 있으면 Visual Studio 구독 [관리 포털](https://manage.visualstudio.com/)로 직접 이동할 수 있습니다. MPSA 고객으로서 MPSA을 통해 구매한 자산을 관리할 수 있는 포털을 소개합니다. 이 새 포털은 [비즈니스 센터](https://businessaccount.microsoft.com/)라고 하며 VLSC(볼륨 라이선스 서비스 센터)와 동일하고 새로운 기능의 일부를 지원합니다. 이러한 기능에는 라이선스 요약, 주문, 다운로드, 키, 사용자 등의 보기가 포함됩니다. 그러나 MPSA에서 Visual Studio 구독은 마치 클라우드 서비스처럼 동작합니다. 또한 비즈니스 센터는 Microsoft 계정 대신 회사 계정을 사용하여 로그인합니다. 조직이 Office 365 또는 Azure Active Directory 같은 클라우드 서비스를 사용하고 사용자 이메일이 이러한 두 서비스에 속한 경우 이는 이미 회사 계정입니다. 이 계정을 사용하면 조직이 사용자에게 할당한 기존 암호를 사용하여 비즈니스 센터에 등록할 수 있습니다. 조직에서 클라우드 서비스를 사용하지 않고 이메일도 회사 계정이 아닌 경우라도 기존 암호를 사용해 비즈니스 센터에 등록할 수 있으므로 걱정하지 않아도 됩니다. 또한 Visual Studio 구독 [관리 포털](https://manage.visualstudio.com/)에서는 일단 Visual Studio 관리자가 되면 구독을 구독자에게 할당할 수 있습니다. MPSA에서 Visual Studio 구독자는 Visual Studio 구독 관리 포털인 해당 관리 포털에 프로비전해야 합니다. 이렇게 하려면 테넌트(예: contoso.onmicrosoft.com)에 구매 계정을 연결해야 합니다. 테넌트에는 두 가지 유형(관리되는 테넌트 및 관리되지 않는 테넌트)이 있습니다. 관리되는 테넌트는 관리자로서 조직에서 이미 관리하고 있는 테넌트를 가리킵니다. 관리되지 않는 테넌트는 관리자 없는 테넌트로서 Office 365 같은 온라인 서비스에는 사용할 수 없습니다. 또한 관리되지 않는 테넌트는 회사 계정이 아닌 이메일을 사용하여 비즈니스 센터에 등록할 때 만들어집니다. 비즈니스 센터에 등록할 때 암호를 만들도록 요청받는 경우 이는 이메일이 회사 계정이 아니며 관리되지 않는 테넌트를 만들었다는 의미입니다. 
테넌트 연결을 완료하기 전에 Visual Studio 구독 관리자가 되는 데 필요한 몇 가지 요구 사항/단계가 있습니다. ## <a name="pre-tenant-association-managed-tenant"></a>사전 테넌트 연결(관리되는 테넌트) - 비즈니스 센터에 등록된 사용자여야 합니다. - 사용자는 자신이 속한 테넌트 내에서 사용자 관리자(최소한) 또는 전역 관리자여야 합니다. (회사가 이미 클라우드 서비스를 사용하는 경우 해당됩니다). Visual Studio 구독 관리자가 되려면 두 역할 중 하나가 필요합니다. - 사용자는 자신의 테넌트에 구매 계정을 연결할 수 있으려면 자신이 속한 테넌트의 전역 관리자여야 합니다. - 비즈니스 센터에서 사용자는 계정 관리자여야 합니다. - [Azure](https://portal.azure.com/)의 사용자 프로필(및 다른 모든 사용자)에서 "국가 또는 지역" 필드는 해당 지역(예: 미국, 캐나다 등)에 따라 적절하게 채워야 합니다. > [!NOTE] > Visual Studio 구독 관리자로 만들려는 사용자 모두는 2단계 및 5단계의 조건을 충족하는 데 필요하므로 비즈니스 센터의 사용자가 될 것을 요구받지 않습니다. 위의 다섯 단계의 조건을 모두 충족하면 아래 단계에 따라 사용자는 자신의 테넌트에 구매 계정을 연결할 수 있습니다. 1. [비즈니스 센터](https://businessaccount.microsoft.com/)에 로그인합니다. 2. **계정** 탭을 클릭하고 **도메인 연결**을 선택합니다. 3. **구매 계정**(1 초과 구매 계정이 있는 경우)을 선택합니다. 4. **테넌트**(예: contoso.onmicrosoft.com)를 선택합니다. 5. **도메인 연결**을 클릭합니다. 연결 시 필요한 조건을 충족한 모든 사용자는 일반적으로 몇 분 이내에 Visual Studio 구독 관리자로서 프로비젼됩니다. 그러나 때로는 최대 24시간이 걸릴 수도 있습니다. 프로비전되면 Visual Studio 구독 관리 포털에 액세스할 수 있게 됩니다. 24시간 보다 오래 걸릴 경우 MPSA 고객 지원 팀에 문의하세요. > [!NOTE] > (연결 후에) 2단계 및 5단계 조건을 충족하는 새 사용자가 있는 경우 MPSA 고객 지원 팀에 문의해야 합니다. MPSA 고객 지원 팀은 새 Visual Studio 구독 관리자를 프로비전하기 위한 지원을 제공합니다. ## <a name="tenant-association-unmanaged"></a>테넌트 연결(관리되지 않는) 위의 다섯 번째 단락에서 설명한 것처럼 회사 계정이 아닌("Azure AD"(Azure Active Directory)에 등록되지 않은) 이메일을 사용하여 비즈니스 센터에 등록한 경우 테넌트 연결이 약간 달라집니다. 이른 바 “도메인 인수”를 수행해야 합니다. 이 프로세스 동안 사용자 스스로 전역 관리자가 되어 관리되지 않는 테넌트에서 관리되는 테넌트로 변경해야 합니다. 이 프로세스에 대한 자세한 내용은 [빠른 시작 안내](https://www.microsoft.com/en-us/Licensing/existing-customer/business-center-training-and-resources.aspx)를 참조합니다. 도메인 인수 과정을 안내해 줄 *"온라인 서비스 설치 및 사용"*이라는 가이드를 다운로드합니다. 이 작업이 완료되면 사용자의 구매 계정이 테넌트에 연결됩니다. > [!NOTE] > 도메인 인수 과정을 완료한 후 사용자는 사전 테넌트 연결(관리되는)에 대한 섹션에서 다섯 단계의 모든 조건을 준수해야 합니다. 이러한 조건이 충족되면 추가 Visual Studio 구독 관리자를 프로비전하기 위해 MPSA 고객 지원 팀에 문의해야 합니다. 도움이 필요하거나 질문이 있는 경우 전화나 이메일로 고객 지원 팀에 문의할 수 있습니다. 
MPSA 고객 지원 팀: **1-866-200-9611**, 월요일-금요일 오전 5시 30분에서 오후 5시 30분까지(태평양 표준시) 문의 가능 이메일: ngvlsup@microsoft.com
59.029412
550
0.731938
kor_Hang
1.00001
98368fdf68ce03c8d7c33e099a2c82cab1dd4047
2,702
md
Markdown
README.md
xPand4B/BKR-Verleih
a37306117f8973046b763b34b5977b7a3e3eb1f6
[ "MIT" ]
1
2022-02-12T08:33:14.000Z
2022-02-12T08:33:14.000Z
README.md
xPand4B/BKR-Verleih
a37306117f8973046b763b34b5977b7a3e3eb1f6
[ "MIT" ]
null
null
null
README.md
xPand4B/BKR-Verleih
a37306117f8973046b763b34b5977b7a3e3eb1f6
[ "MIT" ]
null
null
null
__©2018 - Made by Eric Heinzl__

* __Github Repository:__ <https://github.com/xPand4B/BKR-Verleih>

## How To Start ##

0.) Read the [__'CHANGE-LOG'__](https://github.com/xPand4B/BKR-Verleih/blob/master/CHANGE-LOG.md) if you already have a version of this project.
1.) Check your configuration settings inside the `config.php` file.
2.) Drag this project into your webserver directory (hosted __OR__ local).
3.) Open the project inside your browser.
4.) Select your database type.
5.) Create your first Employee/Admin user.
  * Necessary to add...
    * movies
    * customers
    * employees/admins
    * rentals
6.) Have fun __(^-^)/__

## Features ##

* Force HTTPS for the live server
* Easy-to-edit configuration
* Self-import for the database
* Database version control
* Responsive design
* Dynamic page loading
  * Reloads the content section only
  * Pages are mostly loaded from partials
* 404-Not-Found error page
* 403-Access-Forbidden error page
* Site access control

## System Recommendations ##

| Name | Used Version | Source |
| --- |:--- |:--- |
| phpMyAdmin | 4.8.0.1 | https://www.phpmyadmin.net/downloads/ |
| PHP | 7.2.4 | http://php.net/downloads.php |

## Powered with ##

| Name | Source |
| --- |:--- |
| jQuery | https://jquery.com |
| The Movie DB | https://www.themoviedb.org |
| Font Awesome | https://fontawesome.com |
| Lightbox 2 | http://lokeshdhakar.com/projects/lightbox2/ |
| Parsley Validation | http://parsleyjs.org |

## Changelog ##

The changelog is located inside the [CHANGE-LOG.md](https://github.com/xPand4B/BKR-Verleih/blob/master/CHANGE-LOG.md) file.
## Overall Code-lines (10.06.2018) ##

| File Extension | Lines |
| --- |:--- |
| .html | 55 |
| .php | 2593 |
| .css | 867 |
| .js | 99 |
| .sql | 3063 |
| Total | 6677 |

__PowerShell command:__ `dir -Recurse *.<EXTENSION> | Get-Content | Measure-Object -Line`
35.090909
140
0.457439
eng_Latn
0.277653
9837ef994335dc5e85fe8881b9c99a586ba0fb7c
6,462
md
Markdown
articles/app-service-web/app-service-web-get-started-python.md
OpenLocalizationTestOrg/azure-docs-pr15_de-AT
ca82887d8067662697adba993b87860bdbefea29
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
1
2020-11-29T22:55:06.000Z
2020-11-29T22:55:06.000Z
articles/app-service-web/app-service-web-get-started-python.md
Allyn69/azure-docs-pr15_de-CH
211ef2a7547f43e3b90b3c4e2cb49e88d7fe139f
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service-web/app-service-web-get-started-python.md
Allyn69/azure-docs-pr15_de-CH
211ef2a7547f43e3b90b3c4e2cb49e88d7fe139f
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
2
2019-07-03T20:05:49.000Z
2020-11-29T22:55:15.000Z
<properties pageTitle="Bereitstellen Ihrer ersten Python Anwendung in Azure Minuten | Microsoft Azure" description="Erfahren Sie, wie einfach die webapps durch Bereitstellung einer Beispiel-app in App Service ausgeführt wird. Starten Sie Entwicklung schnell und die sehen Sie Ergebnisse sofort." services="app-service\web" documentationCenter="" authors="cephalin" manager="wpickett" editor="" /> <tags ms.service="app-service-web" ms.workload="web" ms.tgt_pltfrm="na" ms.devlang="na" ms.topic="hero-article" ms.date="10/13/2016" ms.author="cephalin" /> # <a name="deploy-your-first-python-web-app-to-azure-in-five-minutes"></a>Bereitstellen Sie Ihrer ersten Python Anwendung in Azure in fünf Minuten In diesem Lernprogramm können Sie Ihrer ersten Python Anwendung in [Azure App Service](../app-service/app-service-value-prop-what-is.md)bereitstellen. App Service können Sie Web-apps [mobile-app back-Ends](/documentation/learning-paths/appservice-mobileapps/)und [API-apps](../app-service-api/app-service-api-apps-why-best-platform.md)erstellen. Sie können: - Erstellen Sie eine Webanwendung in Azure App Service. - Bereitstellen Sie Python-Beispielcode. - Siehe Code live in der Produktion ausgeführt. - Aktualisieren Sie Ihrer Anwendung wie Sie [Push Git begeht](https://git-scm.com/docs/git-push). ## <a name="prerequisites"></a>Erforderliche Komponenten - [Git](http://www.git-scm.com/downloads). - [Azure CLI](../xplat-cli-install.md). - Ein Microsoft Azure-Konto. Haben Sie ein Konto, können Sie [sich für eine kostenlose Testversion](/pricing/free-trial/?WT.mc_id=A261C142F) oder [die Visual Studio-Abonnementvorteile aktivieren](/pricing/member-offers/msdn-benefits-details/?WT.mc_id=A261C142F). >[AZURE.NOTE] Sie können [Versuchen App Service](http://go.microsoft.com/fwlink/?LinkId=523751) ohne ein Azure-Konto. Erstellen einer app Starter und spielen für bis zu einer Stunde – keine Kreditkarte, keine Zusagen. 
## <a name="deploy-a-python-web-app"></a>Deploy a Python web app

1. Open a new Windows PowerShell window, Linux shell, or OS X terminal. Run `git --version` and `azure --version` to verify that Git and the Azure CLI are installed on your machine.

    ![Test the CLI tool installation for your first app in Azure](./media/app-service-web-get-started/1-test-tools.png)

    If you don't have the tools installed, see [Prerequisites](#Prerequisites) for download links.

2. Log in to Azure like this:

        azure login

    Follow the help text to continue the login process.

    ![Log in to Azure to create your first app](./media/app-service-web-get-started/3-azure-login.png)

3. Switch the Azure CLI to ASM mode, and set your deployment credentials for App Service. You will use these credentials later to deploy code.

        azure config mode asm
        azure site deployment user set --username <username> --pass <password>

4. Change to a working directory (`cd`) and clone the sample app like this:

        git clone https://github.com/Azure-Samples/app-service-web-python-get-started.git

5. Change into the sample app's repository. For example:

        cd app-service-web-python-get-started

6. Create the App Service app resource in Azure with a unique app name and the deployment user you configured earlier. When prompted, enter the number of the region you want.

        azure site create <app_name> --git --gitusername <username>

    ![Create the Azure resource for your first app in Azure](./media/app-service-web-get-started-languages/python-site-create.png)

    Your app is now created in Azure. The current directory is also Git-initialized and connected to the new App Service app as a Git remote. You can browse to the app URL (http://&lt;app_name>.azurewebsites.net) to see the default HTML page, but now let's actually get code there.
7. Deploy the sample code to your Azure app, just as you would push code with Git. When prompted, use the password you configured earlier.

        git push azure master

    ![Push code to your first app in Azure](./media/app-service-web-get-started-languages/python-git-push.png)

    `git push` not only pushes code to Azure, but also triggers deployment tasks in the deployment engine. If you have a requirements.txt (Python) file in your project (repository) root, the deployment script restores the required packages for you.

Congratulations, you have deployed your app to Azure App Service.

## <a name="see-your-app-running-live"></a>See your app running live

To see your app running live in Azure, run this command from your repository's directory:

    azure site browse

## <a name="make-updates-to-your-app"></a>Make updates to your app

With Git, you can now push an update from your project (repository) root to your live site at any time. You do it the same way you deployed the code the first time. For example, every time you want to push a new change that you've tested locally, just run the following commands from your project (repository) root:

    git add .
    git commit -m "<your_message>"
    git push azure master

## <a name="next-steps"></a>Next steps

[Create, configure, and deploy a Django web app to Azure in Visual Studio](web-sites-python-ptvs-django-mysql.md). This tutorial teaches you the basics you need to run a Python web app in Azure, including how to:

- Create and deploy a Python app from a template.
- Set the Python version.
- Create virtual environments.
- Connect to a database.

Or, do more with your first app. For example:

- Try out [other ways to deploy your code to Azure](../app-service-web/web-sites-deploy.md). For example, to deploy from a GitHub repository, select **GitHub** instead of **Local Git Repository** in **Deployment options**.
- Take your Azure app to the next level. Authenticate your users. Scale on demand. Set up some performance alerts. All with a few clicks. See [Add functionality to your first web app](app-service-web-get-started-2.md).
54.762712
327
0.770195
deu_Latn
0.986136
98387bec0c4d70080241685332bf6450f80b0579
202
md
Markdown
README.md
sergeychuvakin/r_meetup
00f06c08ab0218d3e3a4dbbebe480c9d48fb6d6c
[ "MIT" ]
null
null
null
README.md
sergeychuvakin/r_meetup
00f06c08ab0218d3e3a4dbbebe480c9d48fb6d6c
[ "MIT" ]
null
null
null
README.md
sergeychuvakin/r_meetup
00f06c08ab0218d3e3a4dbbebe480c9d48fb6d6c
[ "MIT" ]
null
null
null
# R meetups

This repository holds materials from lectures given at various times at the European University at St. Petersburg as part of an open seminar on the R language.
67.333333
189
0.846535
rus_Cyrl
0.988518
983897bc811fa65f556456554d42a97c0d091f1e
787
md
Markdown
desafio_01.md
hrqlp/EstacionamentoSeuJertuz
058954a2636feefe13b9829d44e1a57e0938591b
[ "Apache-2.0" ]
null
null
null
desafio_01.md
hrqlp/EstacionamentoSeuJertuz
058954a2636feefe13b9829d44e1a57e0938591b
[ "Apache-2.0" ]
null
null
null
desafio_01.md
hrqlp/EstacionamentoSeuJertuz
058954a2636feefe13b9829d44e1a57e0938591b
[ "Apache-2.0" ]
null
null
null
## Challenge 01 - End-of-year bonus

### Context:

The year is coming to an end, and the holiday season is approaching. Seu Jertuz plans to give a bonus to help his employees with the festivities. The bonus will be a percentage based on the salary of each type of employee at his parking lot.

#### Bonus percentage rules by role:

Role | Bonus percentage
------------ | -------------
Cashier | 10%
Valet | 12%
Security guard | 09%

> In other words: if an employee earns a salary of R$ 1000 and their bonus is 10%, in the end they will receive R$ 1100.

##### **Example image of how the program should work:**

![Example interface image](https://i.imgur.com/2lwtiVk.png)

### So, shall we take on this challenge??? 👩‍💻
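The bonus rule above can be sketched in a few lines of Python. This is only an illustration, not part of the challenge statement: the function name and dictionary are my own, with the rates taken from the table (keys use the original Portuguese role names).

```python
# Illustrative sketch of the end-of-year bonus calculation.
# Rates come from the challenge table; everything else is hypothetical.
BONUS_RATES = {
    "caixa": 0.10,       # cashier
    "manobrista": 0.12,  # valet
    "seguranca": 0.09,   # security guard
}

def salary_with_bonus(role: str, salary: float) -> float:
    """Return the salary plus the end-of-year bonus for the given role."""
    rate = BONUS_RATES[role.lower()]
    return salary * (1 + rate)
```

For example, `salary_with_bonus("caixa", 1000)` matches the R$ 1100 case quoted in the challenge.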
34.217391
195
0.70521
por_Latn
0.999884
98389fdec2d04d41015ca03a07b4e79130f444db
668
md
Markdown
just-lightbox/CHANGELOG.md
digikare/just-lightbox
e6f6069e3770a77f3a26e5a4308049ed7284e4fb
[ "MIT" ]
1
2021-08-04T08:55:58.000Z
2021-08-04T08:55:58.000Z
just-lightbox/CHANGELOG.md
digikare/just-lightbox
e6f6069e3770a77f3a26e5a4308049ed7284e4fb
[ "MIT" ]
1
2022-03-02T04:00:56.000Z
2022-03-02T04:00:56.000Z
just-lightbox/CHANGELOG.md
digikare/just-lightbox
e6f6069e3770a77f3a26e5a4308049ed7284e4fb
[ "MIT" ]
1
2022-03-01T10:47:04.000Z
2022-03-01T10:47:04.000Z
## [0.2.2](https://github.com/fayriot/just-lightbox/compare/v0.2.1...v0.2.2) (2021-12-23)

## [0.2.1](https://github.com/fayriot/just-lightbox/compare/v0.2.0...v0.2.1) (2021-08-07)

# [0.2.0](https://github.com/fayriot/just-lightbox/compare/v0.1.6...v0.2.0) (2021-08-07)

## [0.1.6](https://github.com/fayriot/just-lightbox/compare/0.1.4...v0.1.6) (2021-08-06)

## [0.1.4](https://github.com/fayriot/just-lightbox/compare/0.1.3...0.1.4) (2021-08-02)

## [0.1.3](https://github.com/fayriot/just-lightbox/compare/0.1.2...0.1.3) (2021-08-01)

## [0.1.2](https://github.com/fayriot/just-lightbox/compare/0.1.1...0.1.2) (2021-08-01)

## 0.1.1 (2021-08-01)
20.242424
89
0.624251
yue_Hant
0.269047
9838f490365b123e3dfcbc8174dc484ac8d9634c
897
md
Markdown
help/marketo/product-docs/predictive-content/working-with-all-content/edit-content.md
vladislavroslyak/marketo.en
2b29b9c2e4c6c9a64a4abfd80223ef7cf9fe7088
[ "MIT" ]
null
null
null
help/marketo/product-docs/predictive-content/working-with-all-content/edit-content.md
vladislavroslyak/marketo.en
2b29b9c2e4c6c9a64a4abfd80223ef7cf9fe7088
[ "MIT" ]
24
2021-05-16T19:21:20.000Z
2022-03-18T08:33:55.000Z
help/marketo/product-docs/predictive-content/working-with-all-content/edit-content.md
vladislavroslyak/marketo.en
2b29b9c2e4c6c9a64a4abfd80223ef7cf9fe7088
[ "MIT" ]
18
2020-07-29T20:10:13.000Z
2022-02-23T12:56:13.000Z
--- unique-page-id: 11384653 description: Edit Content - Marketo Docs - Product Documentation title: Edit Content exl-id: 138b620e-4435-4a81-b4c8-132c2d6e25f5 --- # Edit Content {#edit-content} You can make some edits to listings on the All Content Page. 1. On the **All Content** page, hover over the row of the title you want to edit and click the edit icon. ![](assets/image2017-10-3-9-3a8-3a1.png) 1. Make changes to the Content Title and Content URL (query parameters checkbox is optional). ![](assets/edit-content-2.png) 1. Click the **Categories** field to add/remove categories. Select new ones from the drop-down. You can remove a currently selected category by clicking its **X**. ![](assets/edit-content-3.png) 1. Check the **Approve for Predictive Content** box to approve, or uncheck the box to unapprove. Click **Save** when done. ![](assets/edit-content-4.png)
34.5
163
0.730212
eng_Latn
0.945895
98390e4a5215a423b90b6a66c474624c1c938ecd
71
md
Markdown
README.md
dnaspider/ascii_calc
61b784dc41393eea7c501ae404d297c24c1f817a
[ "MIT" ]
null
null
null
README.md
dnaspider/ascii_calc
61b784dc41393eea7c501ae404d297c24c1f817a
[ "MIT" ]
null
null
null
README.md
dnaspider/ascii_calc
61b784dc41393eea7c501ae404d297c24c1f817a
[ "MIT" ]
null
null
null
# ascii_calc

ASCII code calculator. Adds what you input. Console app.
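The one-line description suggests a tool that sums the ASCII codes of whatever you type. A minimal Python sketch of that idea (purely illustrative; the actual console app's behavior and language may differ):

```python
# Hypothetical sketch of an ASCII code calculator: sum the code
# points of every character in the input. Not the actual app's code.
def ascii_sum(text: str) -> int:
    """Sum the ASCII/Unicode code points of the characters in text."""
    return sum(ord(ch) for ch in text)
```

For instance, `ascii_sum("AB")` adds 65 and 66.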
23.666667
57
0.774648
eng_Latn
0.774492
983a0be1c168f5ea7c33a79af66f4f51c57bf623
10,965
md
Markdown
www/docs/es/3.5.0/guide/platforms/ios/plugin.md
NiklasMerz/cordova-docs
44678acb622002f5ed2f322a699aaad9716ff041
[ "Apache-2.0" ]
1
2020-12-15T14:00:24.000Z
2020-12-15T14:00:24.000Z
www/docs/es/3.5.0/guide/platforms/ios/plugin.md
NiklasMerz/cordova-docs
44678acb622002f5ed2f322a699aaad9716ff041
[ "Apache-2.0" ]
1
2021-02-23T13:54:18.000Z
2021-02-23T14:42:20.000Z
www/docs/es/3.5.0/guide/platforms/ios/plugin.md
NiklasMerz/cordova-docs
44678acb622002f5ed2f322a699aaad9716ff041
[ "Apache-2.0" ]
2
2020-12-16T06:54:13.000Z
2021-08-17T09:50:40.000Z
---
license: >
    Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements. See the NOTICE file
    distributed with this work for additional information
    regarding copyright ownership. The ASF licenses this file
    to you under the Apache License, Version 2.0 (the
    "License"); you may not use this file except in compliance
    with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    KIND, either express or implied. See the License for the
    specific language governing permissions and limitations
    under the License.

title: iOS Plugins
---

# iOS Plugins

This section provides details on how to implement native plugin code on the iOS platform. Before reading this, see Application Plugins for an overview of the plugin's structure and its common JavaScript interface. This section continues to demonstrate the sample *Echo* plugin that communicates from the Cordova webview to the native platform and back.

An iOS plugin is implemented as an Objective-C class that extends the `CDVPlugin` class. For JavaScript's `exec` method's `service` parameter to map to an Objective-C class, each plugin must be registered as a `<feature>` tag in the application directory's `config.xml` file.

## Plugin Class Mapping

The JavaScript portion of a plugin uses the `cordova.exec` method as follows:

        exec(<successFunction>, <failFunction>, <service>, <action>, [<args>]);

This marshals a request from the `UIWebView` to the iOS native side, effectively calling the `action` method on the `service` class, with the arguments passed in the `args` array.
Specify the plugin as a `<feature>` tag in your Cordova-iOS application project's `config.xml` file, using the `plugin.xml` file to inject this markup automatically, as described in Application Plugins:

        <feature name="LocalStorage">
            <param name="ios-package" value="CDVLocalStorage" />
        </feature>

The feature's `name` attribute should match what you specify as the JavaScript `exec` call's `service` parameter. The `value` attribute should match the name of the plugin's Objective-C class. The `<param>` element's `name` should always be `ios-package`. If you do not follow these guidelines, the plugin may compile, but Cordova may still not be able to access it.

## Plugin Initialization and Lifetime

One instance of a plugin object is created for the life of each `UIWebView`. Plugins are normally instantiated when first referenced by a call from JavaScript. Otherwise they can be instantiated by setting a `param` named `onload` to `true` in the `config.xml` file:

        <feature name="Echo">
            <param name="ios-package" value="Echo" />
            <param name="onload" value="true" />
        </feature>

There is *no* designated initializer for plugins. Instead, plugins should use the `pluginInitialize` method for their startup logic.

Plugins with long-running requests, background activity such as media playback, listeners, or internal state should implement the `onReset` method to clean up those activities. The method runs when the `UIWebView` navigates to a new page or refreshes, which reloads the JavaScript.

## Writing an iOS Cordova Plugin

A JavaScript call fires off a plugin request to the native side, and the corresponding iOS Objective-C plugin is mapped properly in the `config.xml` file, but what does the final iOS Objective-C plugin class look like?
Whatever is dispatched to the plugin with JavaScript's `exec` function is passed into the corresponding plugin class's `action` method. A plugin method has this signature:

        - (void)myMethod:(CDVInvokedUrlCommand*)command
        {
            CDVPluginResult* pluginResult = nil;
            NSString* myarg = [command.arguments objectAtIndex:0];

            if (myarg != nil) {
                pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK];
            } else {
                pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_ERROR messageAsString:@"Arg was null"];
            }
            [self.commandDelegate sendPluginResult:pluginResult callbackId:command.callbackId];
        }

For more details, see [CDVInvokedUrlCommand.h](https://github.com/apache/cordova-ios/blob/master/CordovaLib/Classes/CDVInvokedUrlCommand.h), [CDVPluginResult.h](https://github.com/apache/cordova-ios/blob/master/CordovaLib/Classes/CDVPluginResult.h), and [CDVCommandDelegate.h](https://github.com/apache/cordova-ios/blob/master/CordovaLib/Classes/CDVCommandDelegate.h).

## iOS CDVPluginResult Message Types

You can use `CDVPluginResult` to return a variety of result types back to the JavaScript callbacks, using class methods that follow this pattern:

        + (CDVPluginResult*)resultWithStatus:(CDVCommandStatus)statusOrdinal messageAs...

You can create `String`, `Int`, `Double`, `Bool`, `Array`, `Dictionary`, `ArrayBuffer`, and `Multipart` types. You can also leave out any arguments to send a status, or return an error, or even choose not to send any plugin result, in which case neither callback fires.

Note the following for complex return values:

* `messageAsArrayBuffer` expects `NSData*` and converts to an `ArrayBuffer` in the JavaScript callback. Likewise, any `ArrayBuffer` the JavaScript sends to a plugin is converted to `NSData*`.
* `messageAsMultipart` expects an `NSArray*` containing any of the other supported types, and sends the entire array as the `arguments` to the JavaScript callback. This way, all of the arguments are serialized or deserialized as necessary, so it is safe to return `NSData*` as multipart, but not as `Array`/`Dictionary`.

## Echo iOS Plugin Example

To match the JavaScript interface's *echo* feature described in Application Plugins, use the `plugin.xml` to inject a `feature` specification into the local platform's `config.xml` file:

        <platform name="ios">
            <config-file target="config.xml" parent="/*">
                <feature name="Echo">
                    <param name="ios-package" value="Echo" />
                </feature>
            </config-file>
        </platform>

Then we would add the following `Echo.h` and `Echo.m` files to the `Plugins` folder within the Cordova-iOS application directory:

        /********* Echo.h Cordova Plugin Header *******/

        #import <Cordova/CDV.h>

        @interface Echo : CDVPlugin

        - (void)echo:(CDVInvokedUrlCommand*)command;

        @end

        /********* Echo.m Cordova Plugin Implementation *******/

        #import "Echo.h"
        #import <Cordova/CDV.h>

        @implementation Echo

        - (void)echo:(CDVInvokedUrlCommand*)command
        {
            CDVPluginResult* pluginResult = nil;
            NSString* echo = [command.arguments objectAtIndex:0];

            if (echo != nil && [echo length] > 0) {
                pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsString:echo];
            } else {
                pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_ERROR];
            }

            [self.commandDelegate sendPluginResult:pluginResult callbackId:command.callbackId];
        }

        @end

The necessary imports at the top of the file extend the class from `CDVPlugin`. In this case, the plugin only supports a single `echo` action. It obtains the echo string by calling the `objectAtIndex` method to get the first parameter of the `arguments` array, which corresponds to the arguments passed in by the JavaScript `exec()` function.
It checks the parameter to make sure it is not `nil` or an empty string, returning a `PluginResult` with an `ERROR` status if so. If the parameter passes the check, it returns a `PluginResult` with an `OK` status, passing in the original `echo` string. Finally, it sends the result to `self.commandDelegate`, which executes the `exec` method's success or failure callbacks on the JavaScript side. If the success callback is called, it passes in the `echo` parameter.

## iOS Integration

The `CDVPlugin` class features other methods that your plugin can override. For example, you can capture the [pause](../../../cordova/events/events.pause.html), [resume](../../../cordova/events/events.resume.html), app terminate, and `handleOpenURL` events. See the [CDVPlugin.h][1] and [CDVPlugin.m][2] classes for guidance.

 [1]: https://github.com/apache/cordova-ios/blob/master/CordovaLib/Classes/CDVPlugin.h
 [2]: https://github.com/apache/cordova-ios/blob/master/CordovaLib/Classes/CDVPlugin.m

## Threading

Plugin methods ordinarily execute in the same thread as the main interface. If your plugin requires a great deal of processing or requires a blocking call, you should use a background thread. For example:

        - (void)myPluginMethod:(CDVInvokedUrlCommand*)command
        {
            // Check command.arguments here.
            [self.commandDelegate runInBackground:^{
                NSString* payload = nil;
                // Some blocking logic...
                CDVPluginResult* pluginResult = [CDVPluginResult resultWithStatus:CDVCommandStatus_OK messageAsString:payload];
                // The sendPluginResult method is thread-safe.
                [self.commandDelegate sendPluginResult:pluginResult callbackId:command.callbackId];
            }];
        }

## Debugging iOS Plugins

To debug on the Objective-C side, you need Xcode's built-in debugger. For JavaScript, on iOS 5.0 you can use [Weinre, an Apache Cordova Project][3] or [iWebInspector, a third-party utility][4].
For iOS 6, you can attach Safari 6.0 to your app running within the iOS 6 Simulator.

 [3]: https://github.com/apache/cordova-weinre
 [4]: http://www.iwebinspector.com/

## Common Pitfalls

* Don't forget to add your plugin's mapping to `config.xml`. If you forget, an error is logged in the Xcode console.
* Don't forget to add any hosts you connect to in the whitelist, as described in the Domain Whitelist Guide. If you forget, an error is logged in the Xcode console.
59.592391
487
0.721933
spa_Latn
0.941119
983a6bc9f168e60c08915181279a71e432dcc469
385
md
Markdown
misc/Pysh/writeup.md
killua4564/2019-AIS3-preexam
b13b5c9d3a2ec8beef7cca781154655bb51605e3
[ "MIT" ]
1
2019-06-15T11:45:41.000Z
2019-06-15T11:45:41.000Z
misc/Pysh/writeup.md
killua4564/2019-AIS3-preexam
b13b5c9d3a2ec8beef7cca781154655bb51605e3
[ "MIT" ]
null
null
null
misc/Pysh/writeup.md
killua4564/2019-AIS3-preexam
b13b5c9d3a2ec8beef7cca781154655bb51605e3
[ "MIT" ]
null
null
null
### Pysh

```
#!/usr/bin/python
import os
import sys

black_list = "bcfghijkmnoqstuvwxz!@#|[]{}\"'&*()?01234569"
your_input = raw_input(":")
for i in range(len(black_list)):
    if black_list[i] in your_input:
        print "Bad hacker...."
        exit()
print os.system("bash -c '" + your_input + "'")
```

- Note that `black_list` only filters some lowercase letters and does not block uppercase, so we can use Ubuntu's environment variables directly to get a shell.
- payload: `$SHELL`
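To see why the payload slips through, a quick check (not part of the original writeup) confirms that no character of `$SHELL` appears in the blacklist, so the filter lets it reach `bash -c` unchanged, where it expands to `/bin/bash`:

```python
# Sketch: verify the payload `$SHELL` passes the challenge's filter.
# The blacklist string is copied verbatim from the challenge source.
black_list = "bcfghijkmnoqstuvwxz!@#|[]{}\"'&*()?01234569"
payload = "$SHELL"

# The challenge rejects input containing any blacklisted character.
blocked = any(ch in payload for ch in black_list)
print(blocked)  # False: the uppercase letters and `$` are not filtered
```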
24.0625
58
0.649351
eng_Latn
0.26558
983a722a04ef6f569f865e9f784fce3ab104d2d4
677
md
Markdown
2021/CVE-2021-30845.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
2,340
2022-02-10T21:04:40.000Z
2022-03-31T14:42:58.000Z
2021/CVE-2021-30845.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
19
2022-02-11T16:06:53.000Z
2022-03-11T10:44:27.000Z
2021/CVE-2021-30845.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
280
2022-02-10T19:58:58.000Z
2022-03-26T11:13:05.000Z
### [CVE-2021-30845](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-30845)

![](https://img.shields.io/static/v1?label=Product&message=macOS&color=blue) ![](https://img.shields.io/static/v1?label=Version&message=%3C%2011.6%20&color=brighgreen) ![](https://img.shields.io/static/v1?label=Vulnerability&message=A%20local%20user%20may%20be%20able%20to%20read%20kernel%20memory&color=brighgreen)

### Description

An out-of-bounds read was addressed with improved bounds checking. This issue is fixed in macOS Big Sur 11.6. A local user may be able to read kernel memory.

### POC

#### Reference
No PoCs from references.

#### Github
- https://github.com/zanezhub/PIA-PC
37.611111
157
0.747415
eng_Latn
0.355955
983ad3dacb44e7d803c5bc18277b22ad82f9b202
19,513
md
Markdown
2017-11-27.md
pampang/github-trending-python
fca19696e9343110911c87fb46dfe7ec863e8f44
[ "MIT" ]
null
null
null
2017-11-27.md
pampang/github-trending-python
fca19696e9343110911c87fb46dfe7ec863e8f44
[ "MIT" ]
null
null
null
2017-11-27.md
pampang/github-trending-python
fca19696e9343110911c87fb46dfe7ec863e8f44
[ "MIT" ]
1
2021-02-27T06:12:06.000Z
2021-02-27T06:12:06.000Z
## 2017-11-27

### general

* [tldr-pages / tldr](https://github.com/tldr-pages/tldr):📚 Simplified and community-driven man pages
* [edent / SuperTinyIcons](https://github.com/edent/SuperTinyIcons):Under 1KB each! Super Tiny Icons are miniscule SVG versions of your favourite website and app logos
* [s-macke / VoxelSpace](https://github.com/s-macke/VoxelSpace):Terrain rendering in less than 20 lines of code
* [russellgoldenberg / scrollama](https://github.com/russellgoldenberg/scrollama):Scrollytelling with IntersectionObserver.
* [carp-lang / Carp](https://github.com/carp-lang/Carp):A statically typed lisp, without a GC, for high performance applications.
* [aksnzhy / xlearn](https://github.com/aksnzhy/xlearn):High Performance, Easy-to-use, and Scalable Machine Learning Package
* [antvis / g2](https://github.com/antvis/g2):G2 (The Grammar of Graphics)
* [gomatcha / matcha](https://github.com/gomatcha/matcha):Build native mobile apps in Go.
* [uNmAnNeR / imaskjs](https://github.com/uNmAnNeR/imaskjs):vanilla javascript input mask
* [wtsxDev / reverse-engineering](https://github.com/wtsxDev/reverse-engineering):List of awesome reverse engineering resources
* [k88hudson / git-flight-rules](https://github.com/k88hudson/git-flight-rules):Flight rules for git
* [KupynOrest / DeblurGAN](https://github.com/KupynOrest/DeblurGAN):
* [airbnb / Lona](https://github.com/airbnb/Lona):A tool for defining design systems and using them to generate cross-platform UI code, Sketch files, images, and other artifacts.
* [z-pattern-matching / z](https://github.com/z-pattern-matching/z):native pattern matching for javascript
* [tj / node-prune](https://github.com/tj/node-prune):Remove unnecessary files from node_modules (.md, .ts, etc)
* [tensorflow / tensorflow](https://github.com/tensorflow/tensorflow):Computation using data flow graphs for scalable machine learning
* [davedelong / Chronology](https://github.com/davedelong/Chronology):Building a better date/time library for Swift
* [papers-we-love / papers-we-love](https://github.com/papers-we-love/papers-we-love):Papers from the computer science community to read and discuss.
* [moment / luxon](https://github.com/moment/luxon):⏱ A library for working with dates and times in JS
* [sindresorhus / awesome](https://github.com/sindresorhus/awesome):😎 Curated list of awesome lists
* [jung-kurt / gofpdf](https://github.com/jung-kurt/gofpdf):A PDF document generator with high level support for text, drawing and images
* [photopea / UPNG.js](https://github.com/photopea/UPNG.js):Fast and advanced PNG (APNG) decoder and encoder (lossy / lossless)
* [yudai / gotty](https://github.com/yudai/gotty):Share your terminal as a web application
* [denisraslov / react-spreadsheet-grid](https://github.com/denisraslov/react-spreadsheet-grid):Excel-like grid component for React with custom cell editors, performant scroll & resizable columns
* [JedWatson / react-select](https://github.com/JedWatson/react-select):A Select control built with and for React JS

### python

* [KupynOrest / DeblurGAN](https://github.com/KupynOrest/DeblurGAN):
* [pytorch / ignite](https://github.com/pytorch/ignite):
* [tensorflow / models](https://github.com/tensorflow/models):Models and examples built with TensorFlow
* [fchollet / keras](https://github.com/fchollet/keras):Deep Learning library for Python. Runs on TensorFlow, Theano, or CNTK.
* [josephmisiti / awesome-machine-learning](https://github.com/josephmisiti/awesome-machine-learning):A curated list of awesome Machine Learning frameworks, libraries and software.
* [pytorch / pytorch](https://github.com/pytorch/pytorch):Tensors and Dynamic neural networks in Python with strong GPU acceleration
* [XX-net / XX-Net](https://github.com/XX-net/XX-Net):a web proxy tool
* [scikit-learn / scikit-learn](https://github.com/scikit-learn/scikit-learn):scikit-learn: machine learning in Python
* [vinta / awesome-python](https://github.com/vinta/awesome-python):A curated list of awesome Python frameworks, libraries, software and resources
* [sharkdp / shell-functools](https://github.com/sharkdp/shell-functools):Functional programming tools for the shell
* [python / cpython](https://github.com/python/cpython):The Python programming language
* [home-assistant / home-assistant](https://github.com/home-assistant/home-assistant):🏡 Open-source home automation platform running on Python 3
* [donnemartin / system-design-primer](https://github.com/donnemartin/system-design-primer):Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.
* [pallets / flask](https://github.com/pallets/flask):A microframework based on Werkzeug, Jinja2 and good intentions
* [naturomics / CapsNet-Tensorflow](https://github.com/naturomics/CapsNet-Tensorflow):A Tensorflow implementation of CapsNet(Capsules Net) in Hinton's paper Dynamic Routing Between Capsules
* [django / django](https://github.com/django/django):The Web framework for perfectionists with deadlines.
* [rg3 / youtube-dl](https://github.com/rg3/youtube-dl):Command-line program to download videos from YouTube.com and other video sites
* [requests / requests](https://github.com/requests/requests):Python HTTP Requests for Humans™ ✨ 🍰 ✨
* [openai / gym](https://github.com/openai/gym):A toolkit for developing and comparing reinforcement learning algorithms.
* [soimort / you-get](https://github.com/soimort/you-get):⏬ Dumb downloader that scrapes the web
* [ansible / ansible](https://github.com/ansible/ansible):Ansible is a radically simple IT automation platform that makes your applications and systems easier to deploy. Avoid writing scripts or custom code to deploy and update your applications— automate in a language that approaches plain English, using SSH, with no agents to install on remote systems.
* [donnemartin / data-science-ipython-notebooks](https://github.com/donnemartin/data-science-ipython-notebooks):Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.
* [vi3k6i5 / flashtext](https://github.com/vi3k6i5/flashtext):Extract Keywords from sentence or Replace keywords in sentences.
* [Zeta36 / chess-alpha-zero](https://github.com/Zeta36/chess-alpha-zero):Chess reinforcement learning by AlphaGo Zero methods.
* [jakubroztocil / httpie](https://github.com/jakubroztocil/httpie):Modern command line HTTP client – user-friendly curl alternative with intuitive UI, JSON support, syntax highlighting, wget-like downloads, extensions, etc. https://httpie.org

### javascript

* [russellgoldenberg / scrollama](https://github.com/russellgoldenberg/scrollama):Scrollytelling with IntersectionObserver.
* [antvis / g2](https://github.com/antvis/g2):G2 (The Grammar of Graphics)
* [uNmAnNeR / imaskjs](https://github.com/uNmAnNeR/imaskjs):vanilla javascript input mask
* [z-pattern-matching / z](https://github.com/z-pattern-matching/z):native pattern matching for javascript
* [moment / luxon](https://github.com/moment/luxon):⏱ A library for working with dates and times in JS
* [photopea / UPNG.js](https://github.com/photopea/UPNG.js):Fast and advanced PNG (APNG) decoder and encoder (lossy / lossless)
* [denisraslov / react-spreadsheet-grid](https://github.com/denisraslov/react-spreadsheet-grid):Excel-like grid component for React with custom cell editors, performant scroll & resizable columns
* [JedWatson / react-select](https://github.com/JedWatson/react-select):A Select control built with and for React JS
* [bichenkk / coinmon](https://github.com/bichenkk/coinmon):💰 The cryptocurrency price tool on CLI. 🖥
* [thedaviddias / Front-End-Checklist](https://github.com/thedaviddias/Front-End-Checklist):🗂 The perfect Front-End Checklist for modern websites and meticulous developers
* [oliviertassinari / react-swipeable-views](https://github.com/oliviertassinari/react-swipeable-views):A React component for swipeable views. ❄️
* [netlify / netlify-cms](https://github.com/netlify/netlify-cms):A CMS for Static Site Generators
* [frappe / charts](https://github.com/frappe/charts):Simple, responsive, modern SVG Charts with zero dependencies: https://frappe.github.io/charts
* [vuejs / vue](https://github.com/vuejs/vue):A progressive, incrementally-adoptable JavaScript framework for building UI on the web.
* [nodejs / node](https://github.com/nodejs/node):Node.js JavaScript runtime ✨ 🐢 🚀 ✨
* [facebook / react](https://github.com/facebook/react):A declarative, efficient, and flexible JavaScript library for building user interfaces.
* [airbnb / javascript](https://github.com/airbnb/javascript):JavaScript Style Guide
* [facebookincubator / create-react-app](https://github.com/facebookincubator/create-react-app):Create React apps with no build configuration.
* [sindresorhus / meow](https://github.com/sindresorhus/meow):CLI app helper
* [ostera / tldr.jsx](https://github.com/ostera/tldr.jsx):📚 A Reactive web client for tldr-pages
* [GoogleChrome / puppeteer](https://github.com/GoogleChrome/puppeteer):Headless Chrome Node API
* [simonw / datasette](https://github.com/simonw/datasette):An instant JSON API for your SQLite databases
* [technopagan / sqip](https://github.com/technopagan/sqip):"SQIP" (pronounced \skwɪb\ like the non-magical folk of magical descent) is a SVG-based LQIP technique.
* [facebook / react-native](https://github.com/facebook/react-native):A framework for building native apps with React.
* [Pau1fitz / react-spotify](https://github.com/Pau1fitz/react-spotify):A Spotify client built with React / Redux 🎤 🎺 🎸 🎷

### swift

* [airbnb / Lona](https://github.com/airbnb/Lona):A tool for defining design systems and using them to generate cross-platform UI code, Sketch files, images, and other artifacts.
* [davedelong / Chronology](https://github.com/davedelong/Chronology):Building a better date/time library for Swift
* [freshOS / KeyboardLayoutGuide](https://github.com/freshOS/KeyboardLayoutGuide):⌨️ Apple's missing KeyboardLayoutGuide
* [dylanslewis / stylesync](https://github.com/dylanslewis/stylesync):A command line tool to extract shared styles from a Sketch document, and generate native code for any platform.
* [Juanpe / SkeletonView](https://github.com/Juanpe/SkeletonView):An elegant way to show users that something is happening and also prepare them to which contents he is waiting
* [lhc70000 / iina](https://github.com/lhc70000/iina):The modern video player for macOS.
* [agens-no / EllipticCurveKeyPair](https://github.com/agens-no/EllipticCurveKeyPair):Sign, verify, encrypt and decrypt using the Secure Enclave * [dkhamsing / open-source-ios-apps](https://github.com/dkhamsing/open-source-ios-apps):📱 Collaborative List of Open-Source iOS Apps * [MuShare / Httper-iOS](https://github.com/MuShare/Httper-iOS):App for developers to test REST API. * [sergdort / CleanArchitectureRxSwift](https://github.com/sergdort/CleanArchitectureRxSwift):Example of Clean Architecture of iOS app using RxSwift * [shadowsocks / ShadowsocksX-NG](https://github.com/shadowsocks/ShadowsocksX-NG):Next Generation of ShadowsocksX * [malcommac / UIWindowTransitions](https://github.com/malcommac/UIWindowTransitions):Animated transitions for UIWindow's rootViewController property * [johnjcsmith / iPhoneMoCapiOS](https://github.com/johnjcsmith/iPhoneMoCapiOS): * [vsouza / awesome-ios](https://github.com/vsouza/awesome-ios):A curated list of awesome iOS ecosystem, including Objective-C and Swift Projects * [PureSwift / Cacao](https://github.com/PureSwift/Cacao):Pure Swift Cross-platform UIKit (Cocoa Touch) implementation (Supports Linux) * [lkzhao / Hero](https://github.com/lkzhao/Hero):Elegant transition library for iOS & tvOS * [ReactiveX / RxSwift](https://github.com/ReactiveX/RxSwift):Reactive Programming in Swift * [olucurious / Awesome-ARKit](https://github.com/olucurious/Awesome-ARKit):A curated list of awesome ARKit projects and resources. Feel free to contribute! * [vapor / vapor](https://github.com/vapor/vapor):💧 A server-side Swift web framework. 
* [Carthage / Carthage](https://github.com/Carthage/Carthage):A simple, decentralized dependency manager for Cocoa * [Alamofire / Alamofire](https://github.com/Alamofire/Alamofire):Elegant HTTP Networking in Swift * [soapyigu / Swift30Projects](https://github.com/soapyigu/Swift30Projects):30 mini Swift Apps for self-study * [raywenderlich / swift-algorithm-club](https://github.com/raywenderlich/swift-algorithm-club):Algorithms and data structures in Swift, with explanations! * [onevcat / Kingfisher](https://github.com/onevcat/Kingfisher):A lightweight, pure-Swift library for downloading and caching images from the web. * [IBAnimatable / IBAnimatable](https://github.com/IBAnimatable/IBAnimatable):Design and prototype customized UI, interaction, navigation, transition and animation for App Store ready Apps in Interface Builder with IBAnimatable. ### java * [iluwatar / java-design-patterns](https://github.com/iluwatar/java-design-patterns):Design patterns implemented in Java * [spring-projects / spring-boot](https://github.com/spring-projects/spring-boot):Spring Boot * [alibaba / dubbo](https://github.com/alibaba/dubbo):Dubbo is a high-performance, java based, open source RPC framework * [novoda / android-demos](https://github.com/novoda/android-demos):Examples of Android applications * [elastic / elasticsearch](https://github.com/elastic/elasticsearch):Open Source, Distributed, RESTful Search Engine * [spring-projects / spring-framework](https://github.com/spring-projects/spring-framework):Spring Framework * [appwise-labs / NoInternetDialog](https://github.com/appwise-labs/NoInternetDialog):A beautiful Dialog which appears when you have lost your internet connection. * [Ramotion / circle-menu-android](https://github.com/Ramotion/circle-menu-android):⭕️ CircleMenu is a simple, elegant UI menu with a circular layout and material design animations. 
Made by @Ramotion * [Wechat-Group / weixin-java-tools](https://github.com/Wechat-Group/weixin-java-tools):微信支付、开放平台、小程序、企业号和公众号(包括服务号和订阅号) Java SDK开发工具包 * [MindorksOpenSource / PRDownloader](https://github.com/MindorksOpenSource/PRDownloader):PRDownloader - A file downloader library for Android with pause and resume support * [scwang90 / SmartRefreshLayout](https://github.com/scwang90/SmartRefreshLayout):🔥 下拉刷新、上拉加载、RefreshLayout、OverScroll,Android智能下拉刷新框架,支持越界回弹,具有极强的扩展性,集成了几十种炫酷的Header和 Footer。 * [square / retrofit](https://github.com/square/retrofit):Type-safe HTTP client for Android and Java by Square, Inc. * [ReactiveX / RxJava](https://github.com/ReactiveX/RxJava):RxJava – Reactive Extensions for the JVM – a library for composing asynchronous and event-based programs using observable sequences for the Java VM. * [square / okhttp](https://github.com/square/okhttp):An HTTP+HTTP/2 client for Android and Java applications. * [Fotoapparat / Fotoapparat](https://github.com/Fotoapparat/Fotoapparat):Making Camera for Android more friendly. 📸 * [libgdx / libgdx](https://github.com/libgdx/libgdx):Desktop/Android/HTML5/iOS Java game development framework * [JetBrains / kotlin](https://github.com/JetBrains/kotlin):The Kotlin Programming Language * [airbnb / lottie-android](https://github.com/airbnb/lottie-android):Render After Effects animations natively on Android and iOS, Web, and React Native * [alibaba / fastjson](https://github.com/alibaba/fastjson):🚄 A fast JSON parser/generator for Java * [razerdp / AnimatedPieView](https://github.com/razerdp/AnimatedPieView):// 一个好吃的甜甜圈? * [florent37 / MyLittleCanvas](https://github.com/florent37/MyLittleCanvas):You find canvas hard to use ? try MyLittleCanvas :) Don't work with canvas methods, use objects instead ! 
* [PhilJay / MPAndroidChart](https://github.com/PhilJay/MPAndroidChart):A powerful Android chart view / graph view library, supporting line- bar- pie- radar- bubble- and candlestick charts as well as scaling, dragging and animations. * [tiann / epic](https://github.com/tiann/epic):Epic is the continution of Dexposed on ART. * [JackyAndroid / AndroidInterview-Q-A](https://github.com/JackyAndroid/AndroidInterview-Q-A):The top Internet companies android interview questions and answers * [apache / kafka](https://github.com/apache/kafka):Mirror of Apache Kafka ### go * [gomatcha / matcha](https://github.com/gomatcha/matcha):Build native mobile apps in Go. * [tj / node-prune](https://github.com/tj/node-prune):Remove unnecessary files from node_modules (.md, .ts, etc) * [jung-kurt / gofpdf](https://github.com/jung-kurt/gofpdf):A PDF document generator with high level support for text, drawing and images * [yudai / gotty](https://github.com/yudai/gotty):Share your terminal as a web application * [alibaba / pouch](https://github.com/alibaba/pouch):Pouch is an open-source project created to promote the container technology movement. * [GoogleCloudPlatform / container-diff](https://github.com/GoogleCloudPlatform/container-diff):container-diff: Diff your Docker containers * [gdamore / tcell](https://github.com/gdamore/tcell):Tcell is an alternate terminal package, similar in some ways to termbox, but better in others. 
* [ethereum / go-ethereum](https://github.com/ethereum/go-ethereum):Official Go implementation of the Ethereum protocol * [golang / go](https://github.com/golang/go):The Go programming language * [kubernetes / kubernetes](https://github.com/kubernetes/kubernetes):Production-Grade Container Scheduling and Management * [avelino / awesome-go](https://github.com/avelino/awesome-go):A curated list of awesome Go frameworks, libraries and software * [rsc / 2fa](https://github.com/rsc/2fa):Two-factor authentication on the command line * [gohugoio / hugo](https://github.com/gohugoio/hugo):A Fast and Flexible Static Site Generator built with love in GoLang. * [gobuffalo / packr](https://github.com/gobuffalo/packr):The simple and easy way to embed static files into Go binaries. * [gin-gonic / gin](https://github.com/gin-gonic/gin):Gin is a HTTP web framework written in Go (Golang). It features a Martini-like API with much better performance -- up to 40 times faster. If you need smashing performance, get yourself some Gin. * [istio / istio](https://github.com/istio/istio):An open platform to connect, manage, and secure microservices. * [fogleman / primitive](https://github.com/fogleman/primitive):Reproducing images with geometric primitives. * [kataras / iris](https://github.com/kataras/iris):The fastest web framework for Go in (THIS) Universe. HTTP/2 & MVC fully featured. 🎁 Real-time support * [gogits / gogs](https://github.com/gogits/gogs):Gogs is a painless self-hosted Git service. * [golang / dep](https://github.com/golang/dep):Go dependency management tool * [evilsocket / sg1](https://github.com/evilsocket/sg1):A wanna be swiss army knife for data encryption, exfiltration and covert communication. 
* [ncw / rclone](https://github.com/ncw/rclone):"rsync for cloud storage" - Google Drive, Amazon Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Cloudfiles, Google Cloud Storage, Yandex Files * [awslabs / aws-sam-local](https://github.com/awslabs/aws-sam-local):AWS SAM Local 🐿 is a CLI tool for local development and testing of Serverless applications * [minio / minio](https://github.com/minio/minio):Minio is an open source object storage server compatible with Amazon S3 APIs * [syncthing / syncthing](https://github.com/syncthing/syncthing):Open Source Continuous File Synchronization
118.981707
356
0.77702
eng_Latn
0.373888
983afe4196b7b0f565c3a1b525d829cfce075439
4,376
md
Markdown
README.md
Tcll/pylivecoding
5139ccafe9ebacf4b556bbd2b79185c4eced76fe
[ "MIT" ]
null
null
null
README.md
Tcll/pylivecoding
5139ccafe9ebacf4b556bbd2b79185c4eced76fe
[ "MIT" ]
null
null
null
README.md
Tcll/pylivecoding
5139ccafe9ebacf4b556bbd2b79185c4eced76fe
[ "MIT" ]
null
null
null
# pylivecoding

Pylivecoding is a live coding environment implementation inspired by Smalltalk. Essentially, this library reloads modules and updates all live instances of classes defined in those modules to the latest code of the class definition, without losing any data or state. This way you can change code in your favorite code editor or IDE and immediately see the results without any delays.

# How to use

Pylivecoding is an extremely small library and its functionality is super simple. First, of course, you import the single module `livecoding.py` into your project.

```python
import livecoding
```

In order for your modules to be reloaded and the live instances to be updated, you have to issue the update command:

```python
livecoding.update_env()
```

This should be coded in a module that won't itself be used for live coding. Also, updating and live coding in general make sense for code that repeats, so it is better to put this update inside a loop of some sort. Your modules can be any kind of Python modules, as long as all live classes are subclasses of `livecoding.LiveObject`:

```python
import livecoding

class MyClass(livecoding.LiveObject):
    ...
```

That's all you have to do, and you can code as you always code, following whatever style you want.

# Debugging live coding

Traditional debugging compared to live code debugging is like fire compared to nuclear power. Not only do you see the problems in your source code, you can change the live code while the debugger is still stepping through your code. This allows coding Smalltalk style. In Smalltalk, some coders code entirely inside the debugger; they make intentional mistakes under the safety that they can correct their errors with no delays at all, because there is no need to restart the debugger, and each new error triggers the debugger again. When the error is fixed via live coding, the breakpoint can be removed and the debugger instructed to continue execution like nothing happened.
Fortunately, Python does provide such functionality through the use of post-mortem debugging. Essentially it means that in the case of an error, the debugger triggers, using the line that raised the error as a temporary breakpoint. The code is the following:

```python
import sys
import traceback
import pdb

try:
    live_env.update()
    execute_my_code()
except Exception:
    type, value, tb = sys.exc_info()
    traceback.print_exc()
    pdb.post_mortem(tb)
```

As you can see, we have here the usual exception-handling code. Inside the `try` we first live-update our code to make sure it is updated to the latest source code, then execute our code. If an error (or anything else) occurs, it is stored and printed, and then the debugger is triggered; hitting `c` inside pdb will continue execution, the first statement being the update to live code. The assumption here is that all this runs inside a loop of some sort, so you can actually see the results of the updated code. Obviously, if it is not and this is the last line of code, the application will just end execution after the debugger was instructed to continue with the `c` command ;)

# The actual benefits of live coding

Technically speaking, you can even use your source code editor as a debugger, because with live coding you can print any value in real time, inspect and even modify existing objects, and generally do all the things you want, even create your own breakpoints using `if` conditions that stop execution when specific criteria are not met. Also, you won't have the biggest disadvantage of a debugger: its inability to change the source code. Obviously this works great with test-driven development, because the ability to manipulate tests live makes writing tests far easier. Live coding empowers users with the ease needed to constantly experiment with code, and it makes the whole experience far more enjoyable and productive. Live coding also makes REPLs unnecessary, for the same reason.

# Future plans

The library is far from finished.
The Smalltalk environment comes with a wealth of conveniences and automations and a very powerful IDE. Generally, Python is powerful enough to do those things, and there are good enough IDEs out there, but I will be replicating some of the ideas to make my life easier. The to-do list is the following:

- Make the library smart enough to detect changes inside modules and automatically update the live code/state
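The reload-and-update mechanism described above can be sketched in plain Python. This is a simplified stand-in, not the library's actual implementation: the registry, `update_instances`, and the class redefinition (standing in for a module reload) are all illustrative names.

```python
class LiveRegistry:
    """Tracks live instances so their class can be swapped after a reload."""
    instances = []

class LiveObject:
    def __init__(self):
        LiveRegistry.instances.append(self)

def update_instances(new_classes):
    """Repoint each live instance at the freshly loaded class of the same name."""
    for obj in LiveRegistry.instances:
        cls = new_classes.get(type(obj).__name__)
        if cls is not None:
            obj.__class__ = cls  # obj.__dict__ (its state) is preserved

# Simulate an edit-and-reload cycle without touching the filesystem:
class Counter(LiveObject):
    def step(self):
        return 1

c = Counter()

# "New source code" for Counter, standing in for a module reload:
class Counter(LiveObject):  # noqa: F811
    def step(self):
        return 2

update_instances({"Counter": Counter})
result = c.step()
print(result)  # same object, new behavior
```

Because only `__class__` is swapped, the instance's `__dict__` (and therefore all of its state) survives the update.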
74.169492
676
0.800731
eng_Latn
0.999803
983b08fff279204cf7f2bf4dfc0c1e82a1be3253
181
md
Markdown
_site/_weapons/warp-blast-mw.md
jaguilar87/tp.net-armageddon.org
be0b6e1c8278fc7c32e73d9b2d58cc692da81059
[ "0BSD" ]
2
2018-01-23T13:17:37.000Z
2021-07-29T17:48:58.000Z
_site/_weapons/warp-blast-mw.md
jaguilar87/tp.net-armageddon.org
be0b6e1c8278fc7c32e73d9b2d58cc692da81059
[ "0BSD" ]
16
2015-08-13T21:43:54.000Z
2022-02-17T14:37:29.000Z
_site/_weapons/warp-blast-mw.md
jaguilar87/tp.net-armageddon.org
be0b6e1c8278fc7c32e73d9b2d58cc692da81059
[ "0BSD" ]
6
2019-11-28T20:52:11.000Z
2021-12-29T16:54:52.000Z
--- name: Warp Blast modes: - range: 30cm firepower: AP5+/AA6+ - boolean: and range: (15cm) firepower: Small Arms special_rules: - macro-weapon ---
13.923077
25
0.569061
eng_Latn
0.906949
983bfbce8fc93a3a43126d8290e3b55ae7b4e17e
7,315
md
Markdown
_posts/2019-09-10-人工智能(四).md
wu-kan/wu-kan.github.io
bf0ef818f3c69cc3217f5685234108f63c709c51
[ "MIT" ]
230
2019-02-23T04:25:07.000Z
2022-03-12T04:29:57.000Z
_posts/2019-09-10-人工智能(四).md
wu-kan/wu-kan.github.io
bf0ef818f3c69cc3217f5685234108f63c709c51
[ "MIT" ]
17
2019-09-08T09:15:48.000Z
2021-12-18T13:28:13.000Z
_posts/2019-09-10-人工智能(四).md
wu-kan/wu-kan.github.io
bf0ef818f3c69cc3217f5685234108f63c709c51
[ "MIT" ]
1,145
2019-01-06T11:42:02.000Z
2022-03-29T15:24:13.000Z
---
redirect_from: /_posts/2019-09-10-%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD-%E5%9B%9B/
title: 人工智能(四)
tags: 人工智能
---

## Game Tree Search

### Generalizing search problems

- So far: our search problems have assumed the agent has complete control of the environment
  - the state does not change unless the agent (robot) changes it
- This assumption is not always reasonable
  - there may be other agents whose interests conflict with yours
- In these cases, we need to generalize our view of search to handle state changes that are not in the control of the agent

### What are the key features of a game

- Players have their own interests
- Each player tries to alter the world so as to best benefit itself
- Games are hard because: how you should play depends on how you think the other person will play; but how they play depends on how they think you will play

### Game properties

- Two-player
- Discrete: game states or decisions can be mapped onto discrete values
- Finite: there are only a finite number of states and possible decisions that can be made
- Zero-sum (零和): fully competitive; if one player wins, the other loses an equal amount
  - note that some games don't have this property
- Deterministic: no chance involved; no dice, random deals of cards, coin flips, etc.
- Perfect information: all aspects of the state are fully observable
  - e.g., no hidden cards

### Extensive-Form Two-Player Zero-Sum Games

- But Rock-Paper-Scissors is a simple "one-shot" (一次性) game
  - a single move each
  - in game theory: a strategic or normal-form game (策略或范式博弈)
- Many games extend over multiple moves
  - turn-taking: players act alternately
  - e.g., chess, checkers, etc.
  - in game theory: extensive-form games (扩展形式博弈)
- We'll focus on the extensive form
  - that's where the computational questions emerge

### Two-Player Zero-Sum Game – Definition

- Two players A (Max) and B (Min)
- A set of states S (a finite set of states of the game)
- An initial state I ∈ S (where the game begins)
- Terminal positions T ⊆ S (terminal states of the game: states where the game is over)
- Successors (or Succs: a function that takes a state as input and returns the set of possible next states for whoever is due to move)
- Utility (效益) or payoff (收益) function V: T → R (a mapping from terminal states to real numbers that shows how good each terminal state is for player A and how bad it is for player B)

### Two-Player Zero-Sum Game – Intuition

- Players alternate moves (starting with A, or Max)
- The game ends when some terminal t ∈ T is reached
- A game state: a state-player pair
  - tells us what state we're in and whose move it is
- The utility function and terminals replace goals
  - A, or Max, wants to maximize the terminal payoff
  - B, or Min, wants to minimize the terminal payoff
- Think of it as:
  - A, or Max, gets V(t) and B, or Min, gets −V(t) for terminal node t
  - this is why it's called zero- (or constant-) sum

## The MiniMax Strategy

- Assume that the other player will always play their best move
  - you always play a move that will minimize the payoff that could be gained by the other player
  - by minimizing the other player's payoff, you maximize your own
- Note that if you know that Min will play poorly in some circumstances, there might be a better strategy than MiniMax (i.e., a strategy that gives you a better payoff)
- Build the full game tree (all leaves are terminals)
  - the root is the start state, edges are possible moves, etc.
- Label terminal nodes with utilities
- Back values up the tree
  - U(t) is defined for all terminals (part of the input)
  - U(n) = min {U(c) : c is a child of n} if n is a Min node
  - U(n) = max {U(c) : c is a child of n} if n is a Max node

### DFS MiniMax

- Building the entire game tree and backing up values gives each player their strategy.
- However, the game tree is exponential in size.
- Furthermore, as we will see later, it is not necessary to know all of the tree.
- To solve these problems we use a depth-first implementation of minimax.
- We run the depth-first search after each move to compute the next move for the MAX player. (We could do the same for the MIN player.)
- This avoids explicitly representing the exponentially sized game tree: we just compute each move as it is needed.

### Pruning

- It is not necessary to examine the entire tree to make a correct MiniMax decision
- Assume depth-first generation of the tree
  - after generating values for only some of n's children, we can prove that we'll never reach n in a MiniMax strategy
  - so we needn't generate or evaluate any further children of n!
- Two types of pruning (cuts):
  - pruning of max nodes (α-cuts)
  - pruning of min nodes (β-cuts)

### Cutting Max Nodes (Alpha Cuts)

- At a Max node n:
  - let β be the lowest value of n's siblings examined so far (siblings to the left of n that have already been searched)
  - let α be the highest value of n's children examined so far (changes as children are examined)
- While at a Max node n, if α becomes ≥ β we can stop expanding the children of n
  - Min will never choose to move from n's parent to n, since it would choose one of n's lower-valued siblings first

## Practical Matters

"Real" games are too large to enumerate the tree
- e.g., the chess branching factor is roughly 35
- a depth-10 tree has about 2,700,000,000,000,000 nodes
- even alpha-beta pruning won't help here!
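The backed-up values and the α/β cuts above can be written as a short depth-first routine. This is a generic sketch; the tiny game tree, `successors`, and `utility` below are made-up stand-ins, not examples from the slides.

```python
import math

def minimax(state, is_max, successors, utility, alpha=-math.inf, beta=math.inf):
    """Depth-first minimax with alpha-beta pruning over an explicit game tree."""
    children = successors(state)
    if not children:  # terminal position: return its payoff for Max
        return utility(state)
    if is_max:
        value = -math.inf
        for c in children:
            value = max(value, minimax(c, False, successors, utility, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cut: Min above will never let play reach here
                break
        return value
    value = math.inf
    for c in children:
        value = min(value, minimax(c, True, successors, utility, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:  # alpha cut: Max above already has a better option
            break
    return value

# Made-up tree: Max moves at the root, Min at "a" and "b", leaves hold payoffs.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
payoff = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
best = minimax("root", True,
               successors=lambda s: tree.get(s, []),
               utility=lambda s: payoff[s])
print(best)
```

Note that the leaf `b2` is never evaluated: once `b1` returns 2 and the root already holds α = 3, the α ≥ β test cuts the rest of node `b`.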
We must limit the depth of the search tree
- We can't expand all the way to terminal nodes
- We must make heuristic estimates about the values of the (nonterminal) states at the leaves of the tree
- These heuristics are often called evaluation functions

## Evaluation functions: basic requirements

- Should order the terminal states in the same way as the true utility function.
- The computation must not take too long!
- For nonterminal states, the evaluation function should be strongly correlated with the actual chances of winning.

## How to design evaluation functions

- Features of the states, e.g., in chess, the number of white pawns (卒), black pawns, white queens, etc.
- The features, taken together, define various categories or equivalence classes of states: the states in each category have the same values for all the features.
- Any given category will contain some states that lead to wins, some that lead to draws, and some that lead to losses.
- e.g., suppose our experience suggests that 72% of the states in a category lead to a win, 20% to a loss, and 8% to a draw.
- Then a reasonable evaluation for states in the category is the expected utility value: 0.72·1 + 0.20·(−1) + 0.08·0 = 0.52.
- However, there are too many categories
- Most evaluation functions compute separate numerical contributions from each feature and then combine them
  - e.g., each pawn is worth 1, a knight (马) or bishop (象) is worth 3, a rook (车) 5, and the queen 9
- Mathematically, a weighted linear function: Eval(s) = w1·f1(s) + ... + wn·fn(s) = ∑_{i=1}^{n} wi·fi(s)
- Deep Blue used over 8000 features
- This involves a strong assumption: the contribution of each feature is independent of the values of the other features.
- The assumption may not hold, hence nonlinear combinations are also used
- The features and weights are not part of the rules of chess!
- They come from centuries of human chess-playing experience.
- In case this kind of experience is not available, the weights of the evaluation function can be estimated by machine learning techniques.
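The two formulas above, the expected utility of a category and the weighted linear evaluation, can be checked with a few lines. The material counts below are hypothetical, chosen only to exercise the 1/3/5/9 piece weights from the text.

```python
def expected_utility(p_win, p_loss, p_draw):
    """Expected payoff of a state category, scoring win = +1, loss = -1, draw = 0."""
    return p_win * 1 + p_loss * (-1) + p_draw * 0

def linear_eval(weights, features):
    """Weighted linear evaluation: Eval(s) = sum_i w_i * f_i(s)."""
    return sum(w * f for w, f in zip(weights, features))

# The 72% / 20% / 8% category from the text:
category_value = round(expected_utility(0.72, 0.20, 0.08), 2)

# Hypothetical material count: 8 pawns, 4 minor pieces, 2 rooks, 1 queen,
# weighted 1 / 3 / 5 / 9 as in the text:
material = linear_eval([1, 3, 5, 9], [8, 4, 2, 1])

print(category_value, material)
```

The category value comes out to 0.52, matching the arithmetic in the bullet above.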
49.761905
192
0.749966
eng_Latn
0.999422
983c3dfedce56f8ea42ae33b45464b30484d500d
1,151
md
Markdown
docs/midiplus_studio_s/README.md
ffio1/rs_asio
3e53c2626757c84595a1fe3b9c5f24d1567d1418
[ "MIT" ]
591
2019-09-06T00:06:15.000Z
2022-03-30T22:39:40.000Z
docs/midiplus_studio_s/README.md
ffio1/rs_asio
3e53c2626757c84595a1fe3b9c5f24d1567d1418
[ "MIT" ]
267
2019-09-08T18:14:18.000Z
2022-03-29T09:31:23.000Z
docs/midiplus_studio_s/README.md
ffio1/rs_asio
3e53c2626757c84595a1fe3b9c5f24d1567d1418
[ "MIT" ]
74
2019-09-07T06:52:27.000Z
2022-03-03T16:49:22.000Z
# MIDIPLUS Studio S

## Config files

**RS_ASIO.ini**

```ini
[Config]
EnableWasapiOutputs=0
EnableWasapiInputs=0
EnableAsio=1

[Asio]
; available buffer size modes:
; driver - respect buffer size setting set in the driver
; host - use a buffer size as close as possible as that requested by the host application
; custom - use the buffer size specified in CustomBufferSize field
BufferSizeMode=custom
CustomBufferSize=64

[Asio.Output]
Driver=Midiplus Studio USB
BaseChannel=0
AltBaseChannel=
EnableSoftwareEndpointVolumeControl=1
EnableSoftwareMasterVolumeControl=1
SoftwareMasterVolumePercent=100

[Asio.Input.0]
Driver=Midiplus Studio USB
Channel=1
EnableSoftwareEndpointVolumeControl=1
EnableSoftwareMasterVolumeControl=1
SoftwareMasterVolumePercent=100

[Asio.Input.1]
Driver=
Channel=1
EnableSoftwareEndpointVolumeControl=1
EnableSoftwareMasterVolumeControl=1
SoftwareMasterVolumePercent=100

[Asio.Input.Mic]
Driver=
Channel=1
EnableSoftwareEndpointVolumeControl=1
EnableSoftwareMasterVolumeControl=1
SoftwareMasterVolumePercent=100
```

## Troubleshooting

If your audio starts crackling, you can increase the `CustomBufferSize` value.
21.314815
94
0.830582
yue_Hant
0.794673
983d40dd9a175f395f101329c0504b82b5aadbc8
76
md
Markdown
README.md
fenhl/willkommeninwoellstein.de
c0545209fa65ad666a9beb4a9f1d3ee1002d23ea
[ "MIT" ]
1
2015-11-22T15:53:48.000Z
2015-11-22T15:53:48.000Z
README.md
fenhl/willkommeninwoellstein.de
c0545209fa65ad666a9beb4a9f1d3ee1002d23ea
[ "MIT" ]
6
2015-11-22T15:57:24.000Z
2018-05-02T08:51:08.000Z
README.md
fenhl/willkommeninwoellstein.de
c0545209fa65ad666a9beb4a9f1d3ee1002d23ea
[ "MIT" ]
null
null
null
# willkommeninwoellstein.de

Homepage of the "Willkommen in Wöllstein" initiative
25.333333
47
0.868421
deu_Latn
0.924993
983d7ad7844fa476c0506eee78113620b6a640a8
459
md
Markdown
pages/windows/get-filehash.md
stautonico/tldr
bebfa4e965256c2e7fefa0aabf1d775a92c4fb3a
[ "CC-BY-4.0" ]
1
2022-01-13T08:47:58.000Z
2022-01-13T08:47:58.000Z
pages/windows/get-filehash.md
stautonico/tldr
bebfa4e965256c2e7fefa0aabf1d775a92c4fb3a
[ "CC-BY-4.0" ]
1
2022-02-01T00:50:40.000Z
2022-03-03T00:59:18.000Z
pages/windows/get-filehash.md
stautonico/tldr
bebfa4e965256c2e7fefa0aabf1d775a92c4fb3a
[ "CC-BY-4.0" ]
null
null
null
# Get-FileHash > Calculate a hash for a file. > This command can only be used through PowerShell. > More information: <https://docs.microsoft.com/powershell/module/microsoft.powershell.utility/get-filehash>. - Calculate a hash for a specified file using the SHA256 algorithm: `Get-FileHash {{path/to/file}}` - Calculate a hash for a specified file using a specified algorithm: `Get-FileHash {{path/to/file}} -Algorithm {{SHA1|SHA384|SHA256|SHA512|MD5}}`
32.785714
109
0.75817
eng_Latn
0.542677
983e3e8baf865e2b1038c14cbf6806fdf673f7c4
2,425
md
Markdown
docs/odbc/reference/appendixes/parameter-data-types.md
Sticcia/sql-docs.it-it
31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/odbc/reference/appendixes/parameter-data-types.md
Sticcia/sql-docs.it-it
31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/odbc/reference/appendixes/parameter-data-types.md
Sticcia/sql-docs.it-it
31c0db26a4a5b25b7c9f60d4ef0a9c59890f721e
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Parameter data types | Microsoft Docs
ms.custom: ''
ms.date: 01/19/2017
ms.prod: sql
ms.technology: connectivity
ms.topic: conceptual
helpviewer_keywords:
- data types [ODBC], parameters
- parameter data type [ODBC]
- minimum SQL syntax supported [ODBC]
- ODBC drivers [ODBC], minimum SQL syntax supported
ms.assetid: fd7e99d8-d26a-408c-9733-6ffccde99f75
author: MightyPen
ms.author: genemi
ms.reviewer: ''
manager: craigg
ms.openlocfilehash: e1f1097927f61355cf4a50f4287397d823fd3177
ms.sourcegitcommit: f7fced330b64d6616aeb8766747295807c92dd41
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 04/23/2019
ms.locfileid: "62632414"
---
# <a name="parameter-data-types"></a>Parameter data types

Although each parameter specified with **SQLBindParameter** is defined using an SQL data type, the parameters in an SQL statement have no intrinsic data type. Therefore, parameter markers can be included in an SQL statement only if their data types can be inferred from another operand in the statement. For example, in an arithmetic expression such as ? + COLUMN1, the data type of the parameter can be inferred from the data type of the named column represented by COLUMN1. An application cannot use a parameter marker if the data type cannot be determined.

The following table describes how a data type is determined for several kinds of parameters, in accordance with SQL-92. For a more complete specification of parameter type inference when other SQL clauses are used, see the SQL-92 specification.

|Parameter position|Assumed data type|
|---------------------------|-----------------------|
|One operand of a binary arithmetic or comparison operator|Same as the other operand|
|The first operand in a **BETWEEN** clause|Same as the second operand|
|The second or third operand in a **BETWEEN** clause|Same as the first operand|
|An expression used with **IN**|Same as the first value or the result column of the subquery|
|A value used with **IN**|Same as the expression, or the first value if the expression contains a parameter marker|
|A pattern value used with **LIKE**|VARCHAR|
|An update value used with **UPDATE**|Same as the update column|
62.179487
649
0.773196
ita_Latn
0.997213
983f426de47af242d01e9bbd67b3fc4806be9f34
2,383
md
Markdown
src/tensorrt/docs/dataset_benchmarks.md
joehandzik/dlcookbook-dlbs
7c5ca5a6dfa4e2f7b8b4d81c60bd8be343dabd30
[ "Apache-2.0" ]
123
2017-11-28T20:21:24.000Z
2022-03-22T11:21:04.000Z
src/tensorrt/docs/dataset_benchmarks.md
joehandzik/dlcookbook-dlbs
7c5ca5a6dfa4e2f7b8b4d81c60bd8be343dabd30
[ "Apache-2.0" ]
17
2018-01-05T00:05:13.000Z
2020-09-18T05:12:45.000Z
src/tensorrt/docs/dataset_benchmarks.md
joehandzik/dlcookbook-dlbs
7c5ca5a6dfa4e2f7b8b4d81c60bd8be343dabd30
[ "Apache-2.0" ]
48
2018-01-04T20:52:51.000Z
2022-03-06T00:47:17.000Z
# Dataset Benchmarks

### Overview

To warm up a system and/or benchmark storage or network, a tool named `benchmark_tensor_dataset` can be used. It measures the maximum rate, in images/sec, at which an inference benchmark can stream data from storage into host memory, assuming no inference is actually performed. In general, if an inference benchmark reports a better throughput than this test achieves, that can be a signal that the OS has cached some files and the cache is where the benchmark was streaming data from.

This tool works with datasets built with the [images2tensors](images2tensors.md) tool.

### Command line arguments

The following command line arguments are supported:

1. `--data_dir` Path to the dataset to use.
2. `--batch_size` Create batches of this size.
3. `--img_size` Size of images in the dataset (width = height).
4. `--num_prefetchers` Number of prefetchers (data readers).
5. `--prefetch_pool_size` Number of pre-allocated batches. Memory for batches is preallocated in advance and then reused by the prefetchers.
6. `--num_warmup_batches` Number of warmup iterations.
7. `--num_batches` Number of benchmark iterations.
8. `--dtype` Tensor data type in the dataset: `float` or `uchar`.

For instance:

```bash
benchmark_tensor_dataset --data_dir=/mnt/data/imagenet100k/tensorrt --batch_size=512 \
                         --dtype=uchar --img_size=227 --num_prefetchers=3 \
                         --prefetch_pool_size=9 --num_warmup_batches=1000 \
                         --num_batches=5000
```

If a benchmark runs on a multi-socket machine and streams data from network-attached storage, you may want to use `numactl` to pin the benchmark process to the closest CPU and also enforce local memory allocations, e.g.:

```bash
numactl --localalloc --physcpubind 0-17 benchmark_tensor_dataset ...
``` ### Running benchmarks with DLBS DLBS provides example script `tutorials/dlcookbook/tensorrt/benchmark_tensor_dataset.sh` that helps with running dataset benchmarks with containers: ```bash source ./scripts/environment.sh script=./tutorials/dlcookbook/tensorrt/benchmark_tensor_dataset.sh $script --data_dir /mnt/data/imagenet100k/tensorrt \ --dtype uchar --img_size 227 \ --batch_size 512 --num_prefetchers 8 \ --num_preallocated_batches 32 \ --num_warmup_batches 2000 --num_batches 8000 ```
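The `--num_prefetchers` / `--prefetch_pool_size` pair describes a classic reader-pool pattern: a fixed set of preallocated buffers circulates between reader threads and a consumer, so no memory is allocated on the hot path. A simplified Python stand-in follows (the real tool is C++; `read_batch` and the byte buffers here are dummies):

```python
import queue
import threading

def run_prefetch(num_prefetchers, pool_size, num_batches, read_batch):
    """Pool of preallocated buffers recycled between reader threads and a consumer."""
    free = queue.Queue()   # preallocated, reusable batch buffers
    ready = queue.Queue()  # filled batches awaiting the consumer
    for _ in range(pool_size):
        free.put(bytearray(16))  # stand-in for a preallocated image batch

    todo = queue.Queue()
    for i in range(num_batches):
        todo.put(i)

    def prefetcher():
        while True:
            try:
                i = todo.get_nowait()
            except queue.Empty:
                return
            buf = free.get()    # block until a buffer is recycled
            read_batch(i, buf)  # "read from storage" into the buffer
            ready.put(buf)

    threads = [threading.Thread(target=prefetcher) for _ in range(num_prefetchers)]
    for t in threads:
        t.start()

    consumed = 0
    for _ in range(num_batches):
        buf = ready.get()  # consumer: stand-in for running inference
        consumed += 1
        free.put(buf)      # recycle the buffer back to the pool
    for t in threads:
        t.join()
    return consumed

n = run_prefetch(num_prefetchers=3, pool_size=4, num_batches=10,
                 read_batch=lambda i, buf: None)
print(n)
```

A larger pool lets readers run further ahead of the consumer; a pool of size 1 serializes reading and consumption, which is why the tool exposes both knobs separately.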
---
title: Tutorial to configure network settings for Azure Stack Edge Mini R device in Azure portal
description: Tutorial to deploy Azure Stack Edge Mini R instructs you to configure network, compute network, and web proxy settings for your physical device.
services: databox
author: alkohli
ms.service: databox
ms.subservice: edge
ms.topic: tutorial
ms.date: 02/04/2021
ms.author: alkohli
ms.openlocfilehash: 34a11679626653afd04b0cd17c77188cbc995308
ms.sourcegitcommit: 73fb48074c4c91c3511d5bcdffd6e40854fb46e5
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 03/31/2021
ms.locfileid: "106061732"
---

# <a name="tutorial-configure-network-for-azure-stack-edge-mini-r"></a>Tutorial: Configure network for Azure Stack Edge Mini R

This tutorial describes how to configure the network for your Azure Stack Edge Mini R device with an onboard GPU by using the local web UI. The connection process can take around 20 minutes.

In this tutorial, you learn about:

> [!div class="checklist"]
>
> * Prerequisites
> * Configure network
> * Enable compute network
> * Configure web proxy

## <a name="prerequisites"></a>Prerequisites

Before you configure and set up your Azure Stack Edge Mini R device, make sure that:

* You've installed the physical device as detailed in [Install Azure Stack Edge Mini R](azure-stack-edge-gpu-deploy-install.md).
* You've connected to the local web UI of the device as described in [Connect to Azure Stack Edge Mini R](azure-stack-edge-mini-r-deploy-connect.md).

## <a name="configure-network"></a>Configure network

The **Get started** page displays the various settings that are required to configure and register the physical device with the Azure Stack Edge service. Follow these steps to configure the network for your device.

1. In the local web UI of your device, go to the **Get started** page.

2. If a zero-day update is required, you can do that by configuring a data port with a wired connection. For more instructions on cabling this device, see [Cable the device](azure-stack-edge-mini-r-deploy-install.md#cable-the-device). After the update is finished, you can remove the wired connection.

3. Create certificates for Wi-Fi and the signing chain. Both the signing chain and the Wi-Fi certificates must be in DER format with a *.cer* file extension. For instructions, see [Create certificates](azure-stack-edge-gpu-manage-certificates.md).

4. In the local web UI, go to **Get started**. On the **Security** tile, select **Certificates**, and then select **Configure**.

    [![Local web UI "Certificates" page](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/get-started-1.png#lightbox)

    1. Select **+ Add certificate**.

        [![Local web UI "Certificates" page 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-1.png#lightbox)

    2. Upload the signing chain and select **Apply**.

        ![Local web UI "Certificates" page 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-2.png)

    3. Repeat the procedure with the Wi-Fi certificate.

        ![Local web UI "Certificates" page 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-4.png)

    4. The new certificates should be displayed on the **Certificates** page.

        [![Local web UI "Certificates" page 4](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-5.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-cert-5.png#lightbox)

    5. Go back to **Get started**.

5. On the **Network** tile, select **Configure**.

    Your physical device has five network interfaces. PORT 1 and PORT 2 are 1-Gbps network interfaces. PORT 3 and PORT 4 are 10-Gbps network interfaces. The fifth port is the Wi-Fi port.

    [![Local web UI "Network settings" page 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-1.png)](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-1.png#lightbox)

    Select the Wi-Fi port and configure the port settings.

    > [!IMPORTANT]
    > We strongly recommend that you configure a static IP address for the Wi-Fi port.

    ![Local web UI "Network settings" page 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-2.png)

    The **Network** page updates after you apply the Wi-Fi port settings.

    ![Local web UI "Network settings" page 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/configure-wifi-4.png)

6. Select **Add Wi-Fi profile** and upload your Wi-Fi profile.

    ![Local web UI "Port WiFi Network settings" 1](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-1.png)

    A wireless network profile contains the SSID (network name), password key, and security information needed to connect to a wireless network. You can get the Wi-Fi profile for your environment from your network administrator.

    ![Local web UI "Port WiFi Network settings" 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-2.png)

    After the profile is added, the list of Wi-Fi profiles updates to reflect the new profile. The profile should show the **Connection status** as **Disconnected**.

    ![Local web UI "Port WiFi Network settings" 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-3.png)

7. After the wireless network profile is successfully loaded, connect to this profile. Select **Connect to Wi-Fi profile**.

    ![Local web UI "Port Wi-Fi Network settings" 4](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-4.png)

    Select the Wi-Fi profile that you added in the previous step, and select **Apply**.

    ![Local web UI "Port Wi-Fi Network settings" 5](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-5.png)

    The **Connection status** updates to **Connected**. The signal strength updates to indicate the quality of the signal.

    ![Local web UI "Port Wi-Fi Network settings" 6](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/add-wifi-profile-6.png)

    > [!NOTE]
    > To transfer large amounts of data, we recommend that you use a wired connection instead of the wireless network.

8. Disconnect PORT 1 on the device from your laptop.

9. As you configure the network settings, keep in mind:

    - If DHCP is enabled in your environment, network interfaces are configured automatically. An IP address, subnet, gateway, and DNS are automatically assigned.
    - If DHCP isn't enabled, you can assign static IPs if needed.
    - You can configure your network interface as IPv4.
    - Network interface card (NIC) teaming or link aggregation isn't supported with Azure Stack Edge.
    - The serial number for any port corresponds to the node serial number. For a K-series device, only one serial number is displayed.

    > [!NOTE]
    > We recommend that you don't switch the local IP address of the network interface from static to DHCP unless you have another IP address to connect to the device. If you're using one network interface and switch to DHCP, there would be no way to determine the DHCP address. If you want to change the DHCP address, wait for the device to register with the service, and then change it. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.

    After you've configured and applied the network settings, select **Next: Compute** to configure the compute network.

## <a name="enable-compute-network"></a>Enable compute network

Follow these steps to enable and configure the compute network.

1. On the **Compute** page, select a network interface that you want to enable for compute.

    ![Compute page in local UI 2](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/compute-network-1.png)

2. In the **Network settings** dialog, select **Enable**. When you enable compute, a virtual switch is created on your device on that network interface. The virtual switch is used for the compute infrastructure on the device.

3. Assign **Kubernetes node IPs**. These static IP addresses are for the compute VM. For an *n*-node device, a contiguous range of a minimum of *n + 1* IPv4 addresses (or more) is provided for the compute VM using the start and end IP addresses. Given that Azure Stack Edge is a 1-node device, a minimum of 2 contiguous IPv4 addresses is provided.

    > [!IMPORTANT]
    > Kubernetes on Azure Stack Edge uses the 172.27.0.0/16 subnet for pods and the 172.28.0.0/16 subnet for services. Make sure that these aren't in use in your network.
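To check up front whether a subnet that's already in use on your network clashes with these reserved Kubernetes subnets, a quick local check can be run with Python's standard `ipaddress` module (an illustrative script, not part of the device tooling):

```python
import ipaddress

# Subnets reserved by Kubernetes on Azure Stack Edge (pods and services).
RESERVED = [ipaddress.ip_network("172.27.0.0/16"),
            ipaddress.ip_network("172.28.0.0/16")]

def conflicts(local_subnet):
    """Return True if the given subnet overlaps a reserved Kubernetes subnet."""
    net = ipaddress.ip_network(local_subnet)
    return any(net.overlaps(reserved) for reserved in RESERVED)

print(conflicts("172.27.16.0/24"))  # True: falls inside the pod subnet
print(conflicts("10.0.0.0/8"))      # False: safe to use
```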
If these subnets are already in use in your network, you can change them by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).

4. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster, and you specify the static IP range depending on the number of services exposed.

    > [!IMPORTANT]
    > We strongly recommend that you specify a minimum of 1 IP address for the Azure Stack Edge Mini R Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.

5. Select **Apply**.

    ![Compute page in local UI 3](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/compute-network-3.png)

6. The configuration takes a couple of minutes to apply, and you may need to refresh the browser. You can see that the specified port is enabled for compute.

    ![Compute page in local UI 4](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/compute-network-4.png)

    Select **Next: Web proxy** to configure the web proxy.

## <a name="configure-web-proxy"></a>Configure web proxy

This is an optional configuration.

> [!IMPORTANT]
> * If you use compute and an IoT Edge module on your Azure Stack Edge Mini R device, we recommend that you set web proxy authentication to **None**. NTLM isn't supported.
> * Proxy-auto configuration (PAC) files aren't supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL.
> * Proxies that try to intercept and read all the traffic (and then re-sign everything with their own certificates) aren't compatible, because the proxy's certificate isn't trusted. Typically, transparent proxies work well with Azure Stack Edge Mini R. Non-transparent web proxies aren't supported.

1. On the **Web proxy settings** page, take the following steps:

    1. In the **Web proxy URL** box, enter the URL in this format: `http://host-IP address or FQDN:Port number`. HTTPS URLs aren't supported.
    2. Under **Authentication**, select **None** or **NTLM**. If you use compute and an IoT Edge module on your Azure Stack Edge Mini R device, we recommend that you set web proxy authentication to **None**. **NTLM** isn't supported.
    3. If you're using authentication, enter a username and password.
    4. To validate and apply the configured web proxy settings, select **Apply**.

    ![Local web UI "Web proxy settings" page](./media/azure-stack-edge-mini-r-deploy-configure-network-compute-web-proxy/web-proxy-1.png)

2. After the settings are applied, select **Next: Device**.

## <a name="next-steps"></a>Next steps

In this tutorial, you learned about:

> [!div class="checklist"]
> * Prerequisites
> * Configure network
> * Enable compute network
> * Configure web proxy

To learn how to set up your Azure Stack Edge Mini R device, see:

> [!div class="nextstepaction"]
> [Configure device settings](./azure-stack-edge-mini-r-deploy-set-up-device-update-time.md)
<properties
    pageTitle="Key features and concepts in Azure Stack | Microsoft Azure"
    description="Learn about the key features and concepts of Azure Stack."
    services="azure-stack"
    documentationCenter=""
    authors="Heathl17"
    manager="byronr"
    editor=""/>

<tags
    ms.service="azure-stack"
    ms.workload="na"
    ms.tgt_pltfrm="na"
    ms.devlang="na"
    ms.topic="article"
    ms.date="10/25/2016"
    ms.author="helaw"/>

# <a name="key-features-and-concepts-in-azure-stack"></a>Key features and concepts in Azure Stack

If you're new to Microsoft Azure Stack, these terms and feature descriptions may be helpful.

## <a name="personas"></a>Personas

There are two types of users for Microsoft Azure Stack: the service administrator and the tenant.

- A **service administrator** can configure and manage resource providers, tenant offers, plans, services, quotas, and pricing.
- A **tenant** acquires (or purchases) services that the service administrator offers. Tenants can provision, monitor, and manage the services that they have subscribed to, such as web apps, storage, and virtual machines.

## <a name="portal"></a>Portal

The primary ways to interact with Microsoft Azure Stack are the portal and PowerShell.

![](media/azure-stack-key-features/image3.png)

The Microsoft Azure Stack portal is an instance of the Azure portal running on your servers. It's a website that provides a self-service experience for service administrators and tenants, with role-based access control (RBAC) to cloud resources and capabilities, enabling rapid application and service development and deployment.

## <a name="regions-services-plans-offers-and-subscriptions"></a>Regions, services, plans, offers, and subscriptions

In Azure Stack, services are delivered to tenants by using regions, subscriptions, offers, and plans. Tenants can subscribe to multiple offers. Offers can have one or more plans, and plans can have one or more services.

![](media/azure-stack-key-features/image4.png)

Example hierarchy of a tenant's subscriptions to offers, each with different plans and services.

### <a name="regions"></a>Regions

Azure Stack regions are a basic element of scale and management. An organization can have multiple regions, with resources available in each region. Regions can also have different service offerings available.

### <a name="services"></a>Services

Microsoft Azure Stack enables providers to deliver a wide variety of services and applications, such as virtual machines, SQL Server databases, SharePoint, Exchange, and more.

### <a name="plans"></a>Plans

Plans are groupings of one or more services. As a provider, you create plans to offer to your tenants. In turn, your tenants subscribe to your offers to use the plans and services they include. Each service added to a plan can be configured with quota settings to help you manage your cloud capacity. Quotas can include restrictions such as VM, memory, and CPU limits, and are applied per user subscription. Quotas can be differentiated by location. For example, a plan containing compute services from Region A might have a quota of 10 CPU cores, 4 GB of RAM, and two virtual machines.

When composing an offer, the service administrator can include **base plans**. These base plans are included by default when a tenant subscribes to that offer. As soon as a user subscribes (and the subscription is created), the user has access to all the resource providers specified in the base plans (with the corresponding quotas). The service administrator can also include **add-on plans** in an offer. Add-on plans aren't included by default in the subscription. Add-on plans are additional plans (with quotas) available in an offer that a subscription owner can add to their subscriptions.

### <a name="offers"></a>Offers

Offers are groups of one or more plans that providers present to tenants to purchase (subscribe to). For example, Offer Alpha might contain Plan A (from Region 1, containing a set of compute services) and Plan B (from Region 2, containing a set of storage and network services). An offer comes with a set of base plans, and service administrators can create add-on plans that tenants can add to their subscription.

### <a name="subscriptions"></a>Subscriptions

A subscription is how tenants purchase your offers. A subscription is a combination of a tenant with an offer. A tenant can have subscriptions to multiple offers. Each subscription applies to only one offer. A tenant's subscriptions determine which plans and services they can access. Subscriptions organize tenants' access to and usage of cloud services and resources.

## <a name="azure-resource-manager"></a>Azure Resource Manager

By using Azure Resource Manager, you can work with your infrastructure resources in a declarative, template-based model. It provides a single interface that lets you deploy, manage, and monitor your solution components, such as virtual machines, storage accounts, web apps, and databases. For more information and guidance, see [Azure Resource Manager overview](../azure-resource-manager/resource-group-overview.md).

### <a name="resource-groups"></a>Resource groups

Resource groups are collections of resources, services, and applications, and each resource has a type, such as virtual machines, virtual networks, public IPs, storage accounts, and websites. Every resource must be in a resource group, and resource groups help organize resources logically, such as by workload or location.

Here are some important factors to consider when defining a resource group:

- A resource can exist in only one resource group.
- You deploy, update, and delete the items in a resource group together. If a resource, such as a database server, needs to exist on a different deployment cycle, it should be in another resource group.
- You can add or remove a resource from a resource group at any time.
- You can move a resource from one resource group to another group.
- A resource group can contain resources that reside in different regions.
- A resource group can be used to scope access control for administrative operations.
- A resource can be linked to a resource in another resource group when the two resources must interact with each other but don't share the same lifecycle. For example, multiple apps may need to connect to a database, but that database shouldn't be updated or deleted at the same pace as the apps.
- In Microsoft Azure Stack, resources such as plans and offers are also managed in resource groups.
- You can redeploy a resource group. This is useful for test or development purposes.

### <a name="azure-resource-manager-templates"></a>Azure Resource Manager templates

With Azure Resource Manager, you can create a simple template (in JSON format) that defines the deployment and configuration of your application. This template is called an Azure Resource Manager template and provides a declarative way to define deployment. By using a template, you can repeatedly deploy your application throughout the application lifecycle and be confident that your resources are deployed in a consistent state.

## <a name="resource-providers-rpsnetwork-rp-compute-rp-storage-rp"></a>Resource providers (RPs): Network RP, Compute RP, Storage RP

Resource providers are web services that form the foundation for all Azure-based IaaS and PaaS services. Azure Resource Manager relies on different RPs to provide access to a provider's services. There are three core RPs: Network, Storage, and Compute. Each of these RPs lets you configure and control its respective resources. Service administrators can also add new custom resource providers.

### <a name="compute-rp"></a>Compute RP

The Compute resource provider (CRP) lets Azure Stack tenants create their own virtual machines. It also provides capabilities for the service administrator to install and configure the resource provider for tenants. The CRP includes the ability to create virtual machines as well as virtual machine extensions. The VM extension service helps provide IaaS capabilities for Windows and Linux virtual machines.

### <a name="network-rp"></a>Network RP

The Network resource provider (NRP) delivers a series of software-defined networking (SDN) and network function virtualization (NFV) features for the private cloud. These capabilities are consistent with the Azure public cloud, so that application templates can be written once and deployed both in the Azure public cloud and on-premises in Microsoft Azure Stack. The Network RP gives you fine-grained control, metadata tags, faster configuration, rapid and repeatable customization, and multiple control interfaces (including PowerShell, the .NET SDK, the Node.JS SDK, and a REST-based API). You can use the NRP to create software load balancers, public IPs, network security groups, and virtual networks, among others.

### <a name="storage-rp"></a>Storage RP

The Storage RP delivers four Azure-consistent storage services: blob, table, queue, and account management. It also delivers a storage cloud administration service to facilitate service provider administration of Azure-consistent storage services. Azure Storage provides the ability to store and retrieve large amounts of unstructured data, such as documents and media files, with Azure Blobs, and structured NoSQL-based data with Azure Tables. For more information about Azure Storage, see [Introduction to Microsoft Azure Storage](../storage/storage-introduction.md).

#### <a name="blob-storage"></a>Blob storage

Blob storage stores sets of data. A blob can be any type of text or binary data, such as a document, a media file, or an application installer. Table storage stores structured datasets. Table storage is a NoSQL key-attribute data store, which allows for rapid development and fast access to large quantities of data. Queue storage provides reliable messaging for workflow processing and for communication between components of cloud services.

Each blob is organized under a container. Containers also provide a useful way to assign security policies to groups of objects. A storage account can contain any number of containers, and a container can contain any number of blobs, up to the 500 TB capacity limit of the storage account.

Blob storage offers three types of blobs: block blobs, append blobs, and page blobs (disks). Block blobs are optimized for streaming and storing cloud objects, and are a good choice for storing documents, media files, backups, and so on. Append blobs are similar to block blobs, but are optimized for append operations. An append blob can be updated only by adding a new block to the end. Append blobs are a good choice for scenarios such as logging, where new data needs to be written only to the end of the blob. Page blobs are optimized for representing IaaS disks and supporting random writes, and may be up to 1 TB in size. An Azure virtual machine network-attached IaaS disk is a VHD stored as a page blob.

#### <a name="table-storage"></a>Table storage

Table storage is Microsoft's NoSQL key/attribute store: it has a schemaless design, which differs from traditional relational databases. Because these data stores lack schemas, it's easy to adapt your data as the needs of your application evolve. Table storage is easy to use, so developers can create applications quickly. Table storage is a key-attribute store, meaning that every value in a table is stored with a typed property name. The property name can be used to filter and specify selection criteria. A collection of properties and their values comprise an entity. Since Table storage is schemaless, two entities in the same table can contain different collections of properties, and those properties can be of different types.

You can use Table storage to store flexible datasets, such as user data for web applications, address books, device information, and any other type of metadata that your service requires. You can store any number of entities in a table, and a storage account may contain any number of tables, up to the capacity limit of the storage account.

#### <a name="queue-storage"></a>Queue storage

Azure Queue storage provides cloud messaging between application components. In designing applications for scale, application components are often decoupled so that they can scale independently. Queue storage delivers asynchronous messaging for communication between application components, whether they are running in the cloud, on the desktop, on an on-premises server, or on a mobile device. Queue storage also supports managing asynchronous tasks and building process workflows.

## <a name="role-based-access-control-rbac"></a>Role-based access control (RBAC)

You can use RBAC to grant system access to authorized users, groups, and services by assigning them roles at the subscription, resource group, or resource level. Each role defines the level of access that a user, group, or service has over Microsoft Azure Stack resources. Azure RBAC has three basic roles that apply to all resource types: Owner, Contributor, and Reader. Owner has full access to all resources, including the right to delegate access to others. Contributor can create and manage all types of Azure resources but can't grant access to others. Reader can only view existing Azure resources. The rest of the RBAC roles in Azure allow management of specific Azure resources. For example, the Virtual Machine Contributor role allows the creation and management of virtual machines, but doesn't allow management of the virtual network or the subnet that the virtual machine connects to.

## <a name="usage-data"></a>Usage data

Microsoft Azure Stack collects and aggregates usage data across all resource providers in order to deliver a concise per-user report. The data can be as simple as the quantity of resources consumed, or as complex as performance and scale counters. The data is available through a REST API. There's an Azure-consistent tenant API, as well as provider and delegated-provider APIs, for getting usage data across all tenant subscriptions. This data can be used to integrate with an external tool or service for billing or chargeback.

## <a name="next-steps"></a>Next steps

[Deploy Azure Stack Technical Preview 2 (TP2)](azure-stack-deploy.md)
# OrderCancellation

## Properties

| Name          | Type            | Description                                                        | Notes                        |
| ------------- | --------------- | ------------------------------------------------------------------ | ---------------------------- |
| **signature** | [**Object**](#) | 65 bytes encoded as hex with `0x` prefix. r + s + v from the spec. | [optional] [default to null] |
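As a sketch of the packing described in the table (a hypothetical helper — the layout assumed here is r (32 bytes) + s (32 bytes) + v (1 byte), per the spec wording), the 65-byte hex signature can be split like this:

```python
def split_signature(sig_hex):
    """Split a 0x-prefixed, 65-byte hex signature into (r, s, v)."""
    raw = bytes.fromhex(sig_hex[2:] if sig_hex.startswith("0x") else sig_hex)
    assert len(raw) == 65, "expected r (32) + s (32) + v (1) = 65 bytes"
    return "0x" + raw[:32].hex(), "0x" + raw[32:64].hex(), raw[64]

# Dummy signature: r is 32 bytes of 0x11, s is 32 bytes of 0x22, v is 0x1b (27).
r, s, v = split_signature("0x" + "11" * 32 + "22" * 32 + "1b")
print(v)  # 27
```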
---
layout: post
title: "Reflections on the dojo I facilitated at Soiree du Test Logiciel"
date: 2017-10-18
category: dev
tags: [database, graphs, cypher, neo4j, dojo]
header: "As <a href=\"/blog/2017/09/15/upcoming-dojo-about-graph-databases\">I mentioned before</a> I had the chance to facilitate a dojo at an <a href=\"http://www.telecom-valley.fr/5-octobre-soiree-test-logiciel/\">Evening about Software Testing</a>. This was the first dojo I organized outside of my work environment and it taught me some important lessons."
---

I admit the dojo did not go well. Among the prerequisites communicated to the participants, there were only two things: Java 8 and maven. Still, almost none of the participants showed up with such an environment.

I had been thinking about creating a docker image, but it seemed like overkill. I still think it would have been. In addition, you still need docker on your machine, which I think is rarer than Java and maven.

Probably I should have asked for the e-mail addresses of the registered participants and contacted them directly, asking them to make sure their environment was fine. At that point, I could also have shared the GitHub address so that they might compile the [code](https://github.com/sandordargo/neo-wine-services) and run the tests before coming. I could have also ported the code to Python to have even fewer requirements. I will do that for sure!

But this event was not only about the dojo; I also gave a presentation about the basic ideas of graph databases. I mostly covered the topics of [this](/blog/2017/09/06/intro-to-graph-databases) and [this](/blog/2017/10/04/cypher-introduction) article. This part actually went well. I think my audience appreciated the presentation. Only one person was aware of graph databases before, and they liked the concepts I introduced. That's already a success: they had ideas to take away. On my side, I also learnt a lot and gained some confidence!
If you'd like to do the kata yourself, go to this [GitHub repository](https://github.com/sandordargo/neo-wine-services) and get started!
---
title: "c++"
permalink: /tags/c++/
tagName: c++
date: 2020-09-09 13:33:59 +0300
---
{% include taglogic.html %}

{% include links.html %}
[@venite/ldf](../README.md) › [Globals](../globals.md) › ["liturgical-document"](_liturgical_document_.md)

# Module: "liturgical-document"

## Index

### Enumerations

* [Responsive](../enums/_liturgical_document_.responsive.md)

### Classes

* [LiturgicalDocument](../classes/_liturgical_document_.liturgicaldocument.md)

### Type aliases

* [DisplayFormat](_liturgical_document_.md#displayformat)
* [DisplayFormatTuple](_liturgical_document_.md#displayformattuple)
* [Lookup](_liturgical_document_.md#lookup)
* [LookupTypeTuple](_liturgical_document_.md#lookuptypetuple)
* [TypeTuple](_liturgical_document_.md#typetuple)
* [Value](_liturgical_document_.md#value)
* [ValuePiece](_liturgical_document_.md#valuepiece)

### Variables

* [DISPLAY_FORMATS](_liturgical_document_.md#const-display_formats)
* [LOOKUP_TYPES](_liturgical_document_.md#const-lookup_types)
* [TYPES](_liturgical_document_.md#const-types)

## Type aliases

### DisplayFormat

Ƭ **DisplayFormat**: *DisplayFormatTuple[number]*

*Defined in [liturgical-document.ts:70](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L70)*

___

### DisplayFormatTuple

Ƭ **DisplayFormatTuple**: *typeof DISPLAY_FORMATS*

*Defined in [liturgical-document.ts:69](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L69)*

___

### Lookup

Ƭ **Lookup**: *object*

*Defined in [liturgical-document.ts:37](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L37)*

#### Type declaration:

* **allow_multiple**? : *undefined | false | true*
* **filter**? : *"seasonal" | "evening" | "day"*
* **item**? : *string | number | object*
* **random**? : *undefined | false | true*
* **rotate**? : *undefined | false | true*
* **table**? : *string | object*
* **type**: *LookupTypeTuple[number]*

___

### LookupTypeTuple

Ƭ **LookupTypeTuple**: *typeof LOOKUP_TYPES*

*Defined in [liturgical-document.ts:35](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L35)*

___

### TypeTuple

Ƭ **TypeTuple**: *typeof TYPES*

*Defined in [liturgical-document.ts:32](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L32)*

___

### Value

Ƭ **Value**: *[LiturgicalDocument](../classes/_liturgical_document_.liturgicaldocument.md)[] | [ResponsivePrayerLine](../classes/_responsive_prayer_.responsiveprayerline.md)[] | ([BibleReadingVerse](../classes/_bible_reading_bible_reading_verse_.biblereadingverse.md)‹› | [Heading](../classes/_heading_.heading.md)‹›)[] | [PsalmSection](../classes/_psalm_.psalmsection.md)[] | string[]*

*Defined in [liturgical-document.ts:54](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L54)*

___

### ValuePiece

Ƭ **ValuePiece**: *[LiturgicalDocument](../classes/_liturgical_document_.liturgicaldocument.md) | [ResponsivePrayerLine](../classes/_responsive_prayer_.responsiveprayerline.md) | [BibleReadingVerse](../classes/_bible_reading_bible_reading_verse_.biblereadingverse.md) | [Heading](../classes/_heading_.heading.md) | [PsalmSection](../classes/_psalm_.psalmsection.md) | string*

*Defined in [liturgical-document.ts:60](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L60)*

## Variables

### `Const` DISPLAY_FORMATS

• **DISPLAY_FORMATS**: *string[]* = ['default', 'omit', 'unison', 'abbreviated', 'force_dropcap']

*Defined in [liturgical-document.ts:68](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L68)*

___

### `Const` LOOKUP_TYPES

• **LOOKUP_TYPES**: *string[]* = ['lectionary', 'canticle', 'category', 'slug', 'collect']

*Defined in [liturgical-document.ts:34](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L34)*

___

### `Const` TYPES

• **TYPES**: *["liturgy", "heading", "option", "refrain", "rubric", "text", "responsive", "bible-reading", "psalm", "meditation", "image", "parallel"]* = [ 'liturgy', 'heading', 'option', 'refrain', 'rubric', 'text', 'responsive', 'bible-reading', 'psalm', 'meditation', 'image', 'parallel', ] as const

*Defined in [liturgical-document.ts:18](https://github.com/gbj/venite/blob/41d0c651/ldf/src/liturgical-document.ts#L18)*
---
layout: default
title: Code
permalink: /code/
---

I'd like to use this space to share simple programs I write for random personal and research-related projects I work on from time to time. I primarily use MATLAB but am hoping to also include code written in Python and Processing. Stay tuned~

## Tomato Code [MATLAB]

Here is a [link](https://github.com/gracewhang/tomato_code) to the tomato_code repository. This image processing script can take any .jpg image, transform it to grayscale, and threshold the image at varying levels (for version 1, it's 3 levels). For a future version 2, I'd like to give the user the ability to designate how many grayscale values the image is allowed to have, and also add the possibility of choosing colors to replace the grayscale values. But for now, here it is, plain and simple.

![Image description](/images/tomato_code.png)

If you want some music to pair this code with, I'd recommend "Hang on Little Tomato" by Pink Martini.

<center><iframe src="https://open.spotify.com/embed/track/15ffrEHBHIbXWJcsGx402o" width="300" height="380" frameborder="0" allowtransparency="true" allow="encrypted-media"></iframe> </center>

## Determining Capacity from Cyclic Voltammetry (a fancy way of saying area analysis) [MATLAB]

Here is a [link](https://github.com/gracewhang/Cyclic_Voltammetry_Capacity_Analysis) to the Cyclic_Voltammetry_Capacity_Analysis repository. The premise of this work is to save me time going forward by investing some time now. In the field of electrochemistry, cyclic voltammetry (CV) is an electrochemical technique that experimentalists use to look at the current response of a system while sweeping the voltage over a specified range. For battery materials, CVs are used to look at the potentials where redox (charge storage) occurs. CV is a powerful tool that enables researchers like me to study the kinetics of a material and better understand what is happening at the electrode, and at what potential.
One piece of information we can also get from a CV curve is the capacity (typically reported in units of milliamp hours [mAh]), which tells us how much charge is being stored. If you look at the units of capacity, mA × h, it's current × time. The axes of a CV curve are typically current (y-axis) vs. voltage (x-axis), but your instrument should also record the elapsed time, so the data can also be plotted as current vs. time. By integrating (finding the area of) the region of each peak in a current [mA] vs. time [hour] plot, we have calculated the capacity [mAh].

![Image description](/images/CV_capacity.png)

## Creating a 3D Surface Plot with 3D Scatter Plot Data [MATLAB]

3D Bode analysis as a method to understand and differentiate charge storage mechanisms is a relatively new technique demonstrated by [Jesse Ko](https://www.researchgate.net/profile/Jesse-Ko/publication/340640541_Differentiating_Double-Layer_Pseudocapacitance_and_Battery-like_Mechanisms_by_Analyzing_Impedance_Measurements_in_Three_Dimensions/links/5f349cad458515b7291be672/Differentiating-Double-Layer-Pseudocapacitance-and-Battery-like-Mechanisms-by-Analyzing-Impedance-Measurements-in-Three-Dimensions.pdf) at Johns Hopkins University. Here is a [link] to the repository to make these 3D Bode plots. If you aren't interested in this specific application, you can use this general code framework to make your own 3D plot. Enjoy!
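As a footnote to the capacity section above: the "area analysis" boils down to one numerical integration. My repository code is MATLAB, but here's a Python sketch of the same idea (the function name is mine, not from the repo), using trapezoidal integration of current over time:

```python
import numpy as np

def capacity_mAh(time_h, current_mA):
    """Capacity [mAh] as the area under a current [mA] vs. time [h] curve."""
    # Trapezoidal rule: mA integrated over hours gives mAh directly.
    return float(np.trapz(current_mA, time_h))
```

For a real CV export, you would first slice out the samples belonging to a single redox peak, then integrate just that region.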
# React Page Visible

![alt React Page Visible Demo](./assets/screenshot.png)

There are two common approaches to knowing whether your page is currently visible to the user:

* [Focus event](https://developer.mozilla.org/en-US/docs/Web/Events/focus)
* [Page Visibility API](https://developer.mozilla.org/en-US/docs/Web/API/Page_Visibility_API)

Neither approach covers all use cases on its own. For instance, switching to a different application in a desktop environment does not trigger the Visibility API, but it does trigger the focus event. On a mobile device it is the opposite. [See the difference table](https://page-visibility.now.sh/compat)

This implementation, a React component using render props, introduces a single `visible` property that combines both approaches to give consistent behaviour across browsers and mobile devices. [See demo](https://page-visibility.now.sh)

## Example

```js
import PageVisible from 'react-page-visible'

export default class App extends React.Component {
  render() {
    return (
      <PageVisible>
        {({ visible }) => (
          <h1>My page is {visible ? 'visible' : 'hidden'}</h1>
        )}
      </PageVisible>
    )
  }
}
```

## Installation

```
yarn add react-page-visible
```

## Development

This app is powered by [Next.js](https://nextjs.org).

Install dependencies:

```
yarn
```

Start the dev server:

```
yarn && yarn dev
```
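The combined-signal idea behind `visible` can be sketched without React at all. The helper below is illustrative only (it is not this package's internal API): it keeps a single flag that both the focus/blur events and the Visibility API feed into.

```js
// Track page visibility from two signals: window focus and the
// Page Visibility API. The page counts as visible only when it has
// focus and the document is not hidden.
function createVisibilityTracker() {
  let hasFocus = true
  let documentHidden = false
  return {
    onFocus() { hasFocus = true },
    onBlur() { hasFocus = false },
    onVisibilityChange(hidden) { documentHidden = hidden },
    get visible() { return hasFocus && !documentHidden },
  }
}

// In a browser you would wire it up roughly like this:
//   window.addEventListener('focus', () => tracker.onFocus())
//   window.addEventListener('blur', () => tracker.onBlur())
//   document.addEventListener('visibilitychange',
//     () => tracker.onVisibilityChange(document.hidden))
```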