| column | dtype | min | max |
|--------|-------|-----|-----|
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 5 | 1.04M |
| ext | stringclasses | 6 values | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 3 | 344 |
| max_stars_repo_name | stringlengths | 5 | 125 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 78 |
| max_stars_repo_licenses | listlengths | 1 | 11 |
| max_stars_count | int64 | 1 | 368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 3 | 344 |
| max_issues_repo_name | stringlengths | 5 | 125 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 78 |
| max_issues_repo_licenses | listlengths | 1 | 11 |
| max_issues_count | int64 | 1 | 116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 3 | 344 |
| max_forks_repo_name | stringlengths | 5 | 125 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 78 |
| max_forks_repo_licenses | listlengths | 1 | 11 |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| content | stringlengths | 5 | 1.04M |
| avg_line_length | float64 | 1.14 | 851k |
| max_line_length | int64 | 1 | 1.03M |
| alphanum_fraction | float64 | 0 | 1 |
| lid | stringclasses | 191 values | |
| lid_prob | float64 | 0.01 | 1 |
`e11f1ac2b30773161541cc7c811b38f8db4d63c6` · 1,582 bytes · ext: md · lang: Markdown · path: `_posts/notes/2018-06-06-about-my-coding-life.md` · repo: Kevin9436/Kevin9436.github.io @ `2342c6d5ba1ebd08cb26d338009850201be2db97` · licenses: [ "CC-BY-3.0" ]
---
layout: post
title: "My Ill-Fated Romance with CS"
author: Kevin9436
date: 2018-06-06 3:40
category: notes
---

Setting up this blog took two or three days of tinkering. Although I eventually surrendered to the messy template code, I got it into rough shape. Since the blog lives on GitHub, the world-famous "same-sex dating site," the first post might as well be about my major: CS.

My undergraduate years are almost over, and looking back on three years of scrambling through CS is hard to watch. I still remember choosing a major in sophomore year: I followed the crowd and put CS first, and actually squeaked in right at the cutoff. Even though it scared me that more than eighty of the top hundred students in the school had chosen this major, I still felt a little lucky. But through the next three years of mental and physical torment, I questioned that choice more than once. I have to admit my code is bad because this clumsy bird not only refused to take off early, but kept dreaming of soaring without flying at all. After three years of persistent non-effort, I finished every course, yet my resentment toward code kept deepening. I tried to change a few times, mostly in vain, and during my graduation project the resentment peaked: for a while I could not bear to touch code at all and was nearly depressed. One funny but bittersweet thing: once, in a dream, I went back to the day I filled in my college application, hugged my past self's leg, and begged him in tears not to pick this major.

Graduation season makes people nostalgic. One night about three weeks ago, a memory long abandoned in some corner of my brain suddenly surfaced. In the second semester of ninth grade, the high-school division's competition teams started recruiting. I could barely type, yet instead of the perfectly good math or physics options, I perversely signed up for the computer science competition; I can still picture myself hunting for characters on the keyboard one key at a time. Less than two weeks in, a teacher caught me watching a gaming video in the computer lab during lunch break and chewed me out, so this contestant who hadn't even reached the starting line chose to quit. I held a grudge against that teacher for a long time, for a reason that now seems ridiculous: I was obviously watching a video, yet she publicly accused me of playing games. That, I suppose, was the starting point of my ill-fated romance with CS. Remembering it, I sighed deeply, then slapped the grown-up version of that application-filling me: damn it, you really never learn.

Either way, graduation has to happen, so code has to be written. I started hypnotizing myself in every spare moment, repeating "I love code; code makes me happy." Whether it was the persistence of the self-hypnosis or the speed of the approaching deadlines that moved me, I finally calmed down and began solving problems step by step and learning things I had never touched, instead of fixating on how I couldn't write code or couldn't debug. The graduation project got done in the end, and somewhere along the way my state of mind shifted. I think I have made peace with code now: I can't claim to love it yet, but the old aversion is gone.

Really, I think the peace I made was with myself. I accepted the me who writes bad code; I finally, calmly accepted the fact that I'm bad at programming. Like Naruto accepting the Nine-Tails' chakra, you can never erase or escape your other self; all you can do is make peace with that inferior self and spend your energy on the outside world instead of fighting yourself until you're covered in wounds. If the boy who couldn't even type smoothly had, before giving up, set aside the vanity of straight-A grades and his baseless pride and made peace with a self that also fails at things, he might still have written clumsy code, but I think he would have faced the challenges along the way bravely and optimistically.

After finishing the graduation project I started learning and tinkering on my own; this blog counts as the first step, and I've begun working toward becoming a good programmer. Getting a feel for code only as undergrad ends is a bit late, and my laziness and avoidance have already caused losses that can't be undone, but at least the spark inside hasn't gone out. I don't know what thorns and beasts await on the road ahead, whether in engineering or academia; I only know that my ill-fated romance with CS is far from over. Head down, keep walking.
avg_line_length: 83.263158 · max_line_length: 352 · alphanum_fraction: 0.893805 · lid: zho_Hans (prob 0.442837)
`e11fd6a7bc68246b37e43c330c0f0e9ad45fd2fa` · 7,907 bytes · ext: md · lang: Markdown · path: `docs/relational-databases/backup-restore/tail-log-backups-sql-server.md` · repo: IsmaelArmas/sql-docs.es-es @ `214db19ea9bd9ddaf26f7ae1274d5e8ba8277716` · licenses: [ "CC-BY-4.0", "MIT" ]
---
title: Tail-Log Backups (SQL Server) | Microsoft Docs
ms.custom: ''
ms.date: 08/01/2016
ms.prod: sql
ms.prod_service: backup-restore
ms.reviewer: ''
ms.technology: backup-restore
ms.topic: conceptual
helpviewer_keywords:
- backing up [SQL Server], tail of log
- transaction log backups [SQL Server], tail-log backups
- NO_TRUNCATE clause
- backups [SQL Server], log backups
- tail-log backups
- backups [SQL Server], tail-log backups
ms.assetid: 313ddaf6-ec54-4a81-a104-7ffa9533ca58
author: mashamsft
ms.author: mathoma
manager: craigg
ms.openlocfilehash: 47876b387c06c1ba65e6a1a04fcbcee616097166
ms.sourcegitcommit: 202ef5b24ed6765c7aaada9c2f4443372064bd60
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 01/12/2019
ms.locfileid: "54241856"
---
# <a name="tail-log-backups-sql-server"></a>Tail-Log Backups (SQL Server)

[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]

This topic is relevant only for backup and restore of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] databases that use the full or bulk-logged recovery models.

A *tail-log backup* captures any log records that have not yet been backed up (the *tail of the log*) to prevent work loss and to keep the log chain intact. Before you can recover a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] database to its most recent point in time, you must back up the tail of its transaction log. The tail-log backup will be the last backup of interest in the recovery plan for the database.

> **NOTE:** Not all restore scenarios require a tail-log backup. You do not need a tail-log backup if the recovery point is contained in an earlier log backup. Also, a tail-log backup is unnecessary if you are moving or replacing (overwriting) the database and do not need to restore it to a point in time after its most recent backup.

## <a name="TailLogScenarios"></a> Scenarios that require a tail-log backup

We recommend that you take a tail-log backup in the following scenarios:

- If the database is online and you plan to perform a restore operation on it, begin by backing up the tail of the log. To avoid an error for an online database, you must use the ... WITH NORECOVERY option of the [BACKUP](../../t-sql/statements/backup-transact-sql.md) [!INCLUDE[tsql](../../includes/tsql-md.md)] statement.
- If a database is offline, fails to start, and needs to be restored, first back up the tail of the log. Because no transactions can occur at this time, using WITH NORECOVERY is optional.
- If a database is damaged, try to take a tail-log backup by using the WITH CONTINUE_AFTER_ERROR option of the BACKUP statement. On a damaged database, backing up the tail of the log can succeed only if the log files are undamaged, the database is in a state that supports tail-log backups after the error, and the database does not contain any bulk-logged changes. If a tail-log backup cannot be created, the transactions committed after the latest log backup can be lost.

The following table summarizes the BACKUP NORECOVERY and CONTINUE_AFTER_ERROR options.

|BACKUP LOG option|Comments|
|-----------------------|--------------|
|NORECOVERY|Use NORECOVERY whenever you intend to continue with a restore operation on the database. NORECOVERY takes the database into the restoring state. This guarantees that the database does not change after the tail-log backup. The log is truncated unless the NO_TRUNCATE or COPY_ONLY option is also specified.<br /><br /> **Important:** Avoid using NO_TRUNCATE, except when the database is damaged.|
|CONTINUE_AFTER_ERROR|Use CONTINUE_AFTER_ERROR only if you are backing up the tail of a damaged database.<br /><br /> When you back up the tail of the log on a damaged database, some of the metadata usually captured in log backups might be unavailable. For more information, see [Tail-log backups that have incomplete backup metadata](#IncompleteMetadata), later in this topic.|

## <a name="IncompleteMetadata"></a> Tail-log backups that have incomplete backup metadata

Tail-log backups capture the tail of the log even if the database is offline, damaged, or missing data files. This might leave the restore-information commands and **msdb** with incomplete metadata. However, only the metadata is incomplete; the captured log is complete and usable.

If a tail-log backup has incomplete metadata, then in the [backupset](../../relational-databases/system-tables/backupset-transact-sql.md) table, **has_incomplete_metadata** is set to **1**. Also, in the output of [RESTORE HEADERONLY](../../t-sql/statements/restore-statements-headeronly-transact-sql.md), **HasIncompleteMetadata** is set to **1**.

If the metadata in a tail-log backup is incomplete, the [backupfilegroup](../../relational-databases/system-tables/backupfilegroup-transact-sql.md) table will be missing most of the information about filegroups at the time of the tail-log backup. Most of the **backupfilegroup** table columns are NULL; the only meaningful columns are the following:

- **backup_set_id**
- **filegroup_id**
- **type**
- **type_desc**
- **is_readonly**

## <a name="RelatedTasks"></a> Related tasks

To create a tail-log backup, see [Back Up the Transaction Log When the Database Is Damaged &#40;SQL Server&#41;](../../relational-databases/backup-restore/back-up-the-transaction-log-when-the-database-is-damaged-sql-server.md).

To restore a transaction log backup, see [Restore a Transaction Log Backup &#40;SQL Server&#41;](../../relational-databases/backup-restore/restore-a-transaction-log-backup-sql-server.md).

## <a name="see-also"></a>See also

[BACKUP &#40;Transact-SQL&#41;](../../t-sql/statements/backup-transact-sql.md)
[RESTORE &#40;Transact-SQL&#41;](../../t-sql/statements/restore-statements-transact-sql.md)
[Back Up and Restore of SQL Server Databases](../../relational-databases/backup-restore/back-up-and-restore-of-sql-server-databases.md)
[Copy-Only Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/copy-only-backups-sql-server.md)
[Transaction Log Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/transaction-log-backups-sql-server.md)
[Apply Transaction Log Backups &#40;SQL Server&#41;](../../relational-databases/backup-restore/apply-transaction-log-backups-sql-server.md)
[SQL Server Transaction Log Architecture and Management Guide](../../relational-databases/sql-server-transaction-log-architecture-and-management-guide.md)
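As a hedged sketch of the scenarios above (the database name `MyDb` and the backup file path are placeholders, not names from this article), a tail-log backup might look like:

```sql
-- Placeholder names: MyDb and the backup path are illustrative only.
-- Online database, before starting a restore: take the database into
-- the restoring state so that no further changes occur.
BACKUP LOG MyDb
    TO DISK = N'C:\Backups\MyDb_taillog.trn'
    WITH NORECOVERY;

-- Damaged database: attempt the tail-log backup despite errors.
BACKUP LOG MyDb
    TO DISK = N'C:\Backups\MyDb_taillog.trn'
    WITH CONTINUE_AFTER_ERROR;
```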
avg_line_length: 97.617284 · max_line_length: 533 · alphanum_fraction: 0.766915 · lid: spa_Latn (prob 0.971658)
`e11ff7ae8645a260463ca0cf83d56b8012dfc6fc` · 725 bytes · ext: md · lang: Markdown · path: `docs/model_zh.md` · repo: xdjiangkai/ColossalAI @ `4a3d3446b04065fa1c89b78cba673e96115c6325` · licenses: [ "Apache-2.0" ] · stars: 1 (2022-03-12) · forks: 1 (2022-01-06)
# Define a parallel model that fits your needs

If you are training a huge MLP model with hundreds of millions of parameters, the model certainly cannot be trained directly on a single GPU. Don't worry: Colossal-AI can solve this problem for you. You can still write your model just as you would for a single GPU, and Colossal-AI will automatically partition the model parameters according to your parallel configuration and distribute them evenly across a group of GPUs. Below is a simple example showing how to write a 2D tensor-parallel model in the Colossal-AI environment.

## A simple 2D tensor-parallel model

```python
from colossalai.nn import Linear2D
import torch.nn as nn

class MLP_2D(nn.Module):

    def __init__(self):
        super().__init__()
        self.linear_1 = Linear2D(in_features=1024, out_features=16384)
        self.linear_2 = Linear2D(in_features=16384, out_features=1024)

    def forward(self, x):
        x = self.linear_1(x)
        x = self.linear_2(x)
        return x
```

## Using predefined models

For your convenience, we provide some popular models, such as *BERT*, *VIT*, and *MLP-Mixer*, in our Model Zoo; you can customize the scale of these models to fit your needs.
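To see what the 2D split does without any GPUs or the Colossal-AI runtime, here is a minimal pure-Python sketch (all function names are hypothetical, not Colossal-AI APIs): it partitions both the activation and weight matrices into a 2x2 grid of blocks, multiplies block-by-block the way each device in the grid would, and checks that summing the partial products and reassembling reproduces the ordinary matrix product.

```python
# Minimal sketch of the idea behind 2D tensor parallelism (no GPUs,
# no Colossal-AI): split both operands into a 2x2 grid of blocks, let
# each "device" multiply its blocks, then sum and reassemble.

def matmul(a, b):
    """Naive dense matrix multiply on lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def add(a, b):
    """Element-wise sum of two equally shaped matrices."""
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def split_blocks(m, g):
    """Partition matrix m into a g x g grid of equally sized blocks."""
    r, c = len(m) // g, len(m[0]) // g
    return [[[row[j * c:(j + 1) * c] for row in m[i * r:(i + 1) * r]]
             for j in range(g)] for i in range(g)]

def assemble(blocks):
    """Stitch a grid of blocks back into one matrix."""
    return [[v for blk in brow for v in blk[i]]
            for brow in blocks for i in range(len(brow[0]))]

x = [[1, 2, 3, 4], [5, 6, 7, 8]]                       # activations, 2 x 4
w = [[i * 4 + j for j in range(4)] for i in range(4)]  # weight, 4 x 4

X, W = split_blocks(x, 2), split_blocks(w, 2)
# Output block (p, r) sums partial products over the inner grid index q.
Y = [[add(matmul(X[p][0], W[0][r]), matmul(X[p][1], W[1][r]))
      for r in range(2)] for p in range(2)]

assert assemble(Y) == matmul(x, w)  # block result matches full product
```

In the real library, each block of `Y` lives on a different device and the sums over the inner index become communication steps; the sketch only shows the block algebra that makes the partition correct.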
avg_line_length: 26.851852 · max_line_length: 190 · alphanum_fraction: 0.748966 · lid: yue_Hant (prob 0.81095)
`e120f9a613492493a127b0208e9214c8d4146e93` · 1,698 bytes · ext: md · lang: Markdown · path: `ce/customer-service/routing-trigger.md` · repo: Gen1a/dynamics-365-customer-engagement @ `ce3c02bfa54594f016166522e552982fb66a9389` · licenses: [ "CC-BY-4.0", "MIT" ]
---
title: "Route records manually using unified routing | MicrosoftDocs"
description: "Learn how to route records manually using the Save & Route option on the form command bar and Apply Routing Rule on the home page grid."
ms.date: 06/21/2021
ms.service: dynamics-365-customerservice
ms.topic: article
author: "neeranelli"
ms.author: nenellim
manager: shujoshi
---
# Route records manually using unified routing

[!INCLUDE[cc-use-with-omnichannel](../includes/cc-use-with-omnichannel.md)]

## Route records using Save & Route or Apply Routing Rule options

After you set up and enable a record for routing, you can start routing a record manually, either by:

- Selecting a record on the home page grid and then selecting **Apply Routing Rule** on the toolbar.
- Opening a record form and then selecting **Save & Route** on the form command bar.

> [!Note]
> The **Apply Routing Rule** button doesn't display on the home page grid of Activities.

To manually route records:

1. Sign in to your model-driven app.
2. Select the record you want to route on the home page grid and then select **Apply Routing Rule**. Alternatively, open the record form and select **Save & Route** on the form command bar.

   The **Route Case** dialog box appears.

3. Select **Route**.

   The record is routed based on the record routing configuration.

### See also

[Overview of routing](overview-unified-routing.md)
[Set up routing for records](set-up-record-routing.md)
[Automatically route records using custom flow](routing-trigger-automatic.md)
[Sample code to trigger routing for non-case records](trigger-routing-non-case-records.md)

[!INCLUDE[footer-include](../includes/footer-banner.md)]
avg_line_length: 36.913043 · max_line_length: 150 · alphanum_fraction: 0.75265 · lid: eng_Latn (prob 0.993434)
`e121610ea25ff2377f8ee79ec44b7182eec7e4e6` · 9,111 bytes · ext: md · lang: Markdown · path: `docs/database-engine/availability-groups/windows/configuration-of-a-server-instance-for-always-on-availability-groups-sql-server.md` · repo: baleng/sql-docs.it-it @ `80bb05c3cc6a68564372490896545d6211a9fa26` · licenses: [ "CC-BY-4.0", "MIT" ]
---
title: Configure an Instance of SQL Server for Always On Availability Groups | Microsoft Docs
ms.custom: ''
ms.date: 05/17/2016
ms.prod: sql
ms.reviewer: ''
ms.technology: high-availability
ms.topic: conceptual
helpviewer_keywords:
- Availability Groups [SQL Server], server instance
- Availability Groups [SQL Server], about
ms.assetid: fad8db32-593e-49d5-989c-39eb8399c416
author: MashaMSFT
ms.author: mathoma
manager: craigg
ms.openlocfilehash: c2669ac9418d83f43d53dfdad4236cdcbb23416c
ms.sourcegitcommit: 63b4f62c13ccdc2c097570fe8ed07263b4dc4df0
ms.translationtype: HT
ms.contentlocale: it-IT
ms.lasthandoff: 11/13/2018
ms.locfileid: "51602261"
---
# <a name="configuration-of-a-server-instance-for-always-on-availability-groups-sql-server"></a>Configuration of a Server Instance for Always On Availability Groups (SQL Server)

[!INCLUDE[appliesto-ss-xxxx-xxxx-xxx-md](../../../includes/appliesto-ss-xxxx-xxxx-xxx-md.md)]

This topic describes the requirements for configuring an instance of [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] to support [!INCLUDE[ssHADR](../../../includes/sshadr-md.md)] in [!INCLUDE[ssCurrent](../../../includes/sscurrent-md.md)].

> [!IMPORTANT]
> For basic information about the [!INCLUDE[ssHADR](../../../includes/sshadr-md.md)] prerequisites and restrictions for Windows Server Failover Clustering (WSFC) nodes and for instances of [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)], see [Prerequisites, Restrictions, and Recommendations for Always On Availability Groups &#40;SQL Server&#41;](../../../database-engine/availability-groups/windows/prereqs-restrictions-recommendations-always-on-availability.md).

**In this topic**

- [Terms and definitions](#TermsAndDefinitions)
- [To configure a server instance to support Always On availability groups](#ConfigSI)
- [Related tasks](#RelatedTasks)
- [Related content](#RelatedContent)

## <a name="TermsAndDefinitions"></a> Terms and definitions

[Always On availability groups](../../../database-engine/availability-groups/windows/always-on-availability-groups-sql-server.md)
A high-availability and disaster-recovery solution that provides an enterprise-level alternative to database mirroring. An *availability group* supports a failover environment for a discrete set of user databases, known as *availability databases*, that fail over together.

availability replica
An instantiation of an availability group that is hosted by a specific instance of [!INCLUDE[ssNoVersion](../../../includes/ssnoversion-md.md)] and maintains a local copy of each availability database that belongs to the availability group. Two types of availability replica exist: one *primary replica* and one to four *secondary replicas*. The server instances that host the availability replicas for a given availability group must reside on different nodes of a single Windows Server Failover Clustering (WSFC) cluster.

[database mirroring endpoint](../../../database-engine/database-mirroring/the-database-mirroring-endpoint-sql-server.md)
A SQL Server object that enables SQL Server to communicate over the network. A server instance requires a special, dedicated endpoint to participate in database mirroring and/or [!INCLUDE[ssHADR](../../../includes/sshadr-md.md)]. All mirroring and availability-group connections on a server instance use the same database mirroring endpoint. It is a special-purpose endpoint used exclusively to receive these connections from other server instances.

## <a name="ConfigSI"></a> To configure a server instance to support Always On availability groups

To support [!INCLUDE[ssHADR](../../../includes/sshadr-md.md)], a server instance must reside on a node of the WSFC failover cluster that hosts the availability group, be enabled for [!INCLUDE[ssHADR](../../../includes/sshadr-md.md)], and possess a database mirroring endpoint.

1. Enable the Always On availability groups feature on every server instance that will participate in one or more availability groups. A given server instance can host only a single availability replica for a given availability group.
2. Ensure that the server instance possesses a database mirroring endpoint.

## <a name="RelatedTasks"></a> Related tasks

**To enable Always On availability groups**

- [Enable and Disable Always On Availability Groups &#40;SQL Server&#41;](../../../database-engine/availability-groups/windows/enable-and-disable-always-on-availability-groups-sql-server.md)

**To determine whether a database mirroring endpoint exists**

- [sys.database_mirroring_endpoints &#40;Transact-SQL&#41;](../../../relational-databases/system-catalog-views/sys-database-mirroring-endpoints-transact-sql.md)

**To create a database mirroring endpoint**

- [Create a Database Mirroring Endpoint for Always On Availability Groups &#40;SQL Server PowerShell&#41;](../../../database-engine/availability-groups/windows/database-mirroring-always-on-availability-groups-powershell.md)
- [Create a Database Mirroring Endpoint for Windows Authentication &#40;Transact-SQL&#41;](../../../database-engine/database-mirroring/create-a-database-mirroring-endpoint-for-windows-authentication-transact-sql.md)
- [Set Up a Database Mirroring Endpoint to Use Certificates for Outbound Connections &#40;Transact-SQL&#41;](../../../database-engine/database-mirroring/database-mirroring-use-certificates-for-outbound-connections.md)

## <a name="RelatedContent"></a> Related content

- **Blogs:**

  [Always On - HADRON Learning Series: Worker Pool Usage for HADRON Enabled Databases](https://blogs.msdn.com/b/psssql/archive/2012/05/17/Always%20On-hadron-learning-series-worker-pool-usage-for-hadron-enabled-databases.aspx)

  [SQL Server Always On Team Blog: the official SQL Server Always On Team blog](https://blogs.msdn.microsoft.com/sqlalwayson/)

  [CSS SQL Server Engineers blogs](https://blogs.msdn.com/b/psssql/)

- **Videos:**

  [Microsoft SQL Server Code-Named "Denali" AlwaysOn Series, Part 1: Introducing the Next-Generation High Availability Solution](https://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/DBI302)

  [Microsoft SQL Server Code-Named "Denali" AlwaysOn Series, Part 2: Building a Mission-Critical High Availability Solution Using AlwaysOn](https://channel9.msdn.com/Events/TechEd/NorthAmerica/2011/DBI404)

- **White papers:**

  [Microsoft SQL Server Always On Solutions Guide for High Availability and Disaster Recovery](https://go.microsoft.com/fwlink/?LinkId=227600)

  [SQL Server 2012 white papers](https://msdn.microsoft.com/library/hh403491.aspx)

  [SQL Server Customer Advisory Team white papers](https://sqlcat.com/)

## <a name="see-also"></a>See also

[Overview of Always On Availability Groups &#40;SQL Server&#41;](../../../database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server.md)
[Prerequisites, Restrictions, and Recommendations for Always On Availability Groups &#40;SQL Server&#41;](../../../database-engine/availability-groups/windows/prereqs-restrictions-recommendations-always-on-availability.md)
[The Database Mirroring Endpoint &#40;SQL Server&#41;](../../../database-engine/database-mirroring/the-database-mirroring-endpoint-sql-server.md)
[Always On Availability Groups: Interoperability &#40;SQL Server&#41;](../../../database-engine/availability-groups/windows/always-on-availability-groups-interoperability-sql-server.md)
[Failover Clustering and Always On Availability Groups &#40;SQL Server&#41;](../../../database-engine/availability-groups/windows/failover-clustering-and-always-on-availability-groups-sql-server.md)
[Windows Server Failover Clustering &#40;WSFC&#41; with SQL Server](../../../sql-server/failover-clusters/windows/windows-server-failover-clustering-wsfc-with-sql-server.md)
[Always On Failover Cluster Instances &#40;SQL Server&#41;](../../../sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server.md)
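As a hedged T-SQL sketch of step 2 above (the endpoint name `Hadr_endpoint` and port 5022 are conventional choices, not requirements stated in this article; step 1, enabling the feature itself, is done through SQL Server Configuration Manager or PowerShell rather than T-SQL):

```sql
-- Step 2: check whether a database mirroring endpoint already exists.
SELECT name, state_desc, role_desc
FROM sys.database_mirroring_endpoints;

-- If none exists, create one (name and port are illustrative).
CREATE ENDPOINT [Hadr_endpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = ALL);
```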
avg_line_length: 83.587156 · max_line_length: 584 · alphanum_fraction: 0.763363 · lid: ita_Latn (prob 0.943237)
`e12193e58602eb9f03a01d4e527d918e7418cfc5` · 1,740 bytes · ext: md · lang: Markdown · path: `README.md` · repo: DesarrolloCloud/hello_world_php_appengine_flexible_enviroment @ `165275de9739a199da3e43c421ad38df136947a1` · licenses: [ "MIT" ]
# HELLO WORLD PHP APP ENGINE FLEXIBLE ENVIROMENT

Starter PHP application for Google App Engine Flexible Environment.

### Requirements

**All of this setup was done on Linux - Ubuntu 16.04.**

1. Install the Google Cloud SDK. [Official installation page](https://cloud.google.com/sdk/docs/?hl=es)
2. Install Composer. [Official installation page](https://getcomposer.org/download/)
3. Make Composer global. Go to the directory where you downloaded Composer and run **sudo mv composer.phar /usr/local/bin/composer**
4. You can test with SAPI (PHP's built-in server) or with NGINX (which I recommend). To install NGINX:
   1. cd /etc/apt/sources.list.d/
   2. sudo nano nginx.list
   3. Add these repositories:
      - deb http://nginx.org/packages/ubuntu/ xenial nginx
      - deb-src http://nginx.org/packages/ubuntu/ xenial nginx
   4. sudo apt-get update
   5. sudo apt-get install nginx

### Running the application in development

1. Go to the folder where you cloned the application code.
2. Run with SAPI: php -S localhost:8089 -t web
3. You can also run it with NGINX; to do so, [first configure PHP in NGINX](https://www.youtube.com/watch?v=v_kqyIwj1FM).
4. Open http://localhost:8089 in your browser.

### Running the application on GCP App Engine

1. Go to the folder where you cloned the application code.
2. gcloud init (to configure the project you will deploy the application to)
3. gcloud app deploy (uploads the application to App Engine; it may ask you to associate an account for billing. If you are in free mode, associate that account.)
4. gcloud app browse (to get the application's public URL)

### Private Tutoring

**Contact me by writing to wilsonnm22@gmail.com**
avg_line_length: 42.439024 · max_line_length: 143 · alphanum_fraction: 0.768966 · lid: spa_Latn (prob 0.826384)
`e1233796ae88ad617a4311c785f68bfaee1abdd3` · 3,772 bytes · ext: md · lang: Markdown · path: `_reading/2020-08-10-big-friendship.md` · repo: quinnleong/quinnleong.github.io @ `d48357572fea0b8910e9ee19ff8b2e8fdf63f58b` · licenses: [ "MIT" ] · issues: 2 (2020-05-13 to 2020-05-24) · forks repo: quinnleong/blog @ `3170aeb07fa82c1444c5ba1db7f0bae4a5aa5291`
---
layout: reading
title: "Big Friendship"
author: "Aminatou Sow & Ann Friedman"
date: 2020-08-10
stars: 2
---

![](https://m.media-amazon.com/images/I/41DkXZ6kg4L.jpg){:.small-img}

_Sigh_. I really, really thought I was going to love this book, 5/5, recommend it to all my friends. In fact, I actually DID recommend it to my best friend as a book for us to read together, after listening to an interview with Aminatou Sow, before I'd even cracked the cover. Sadly, it really didn't live up to my expectations – not even close.

This book was a disappointment for many reasons. First and foremost, it's not actually a book about friendship and its role, culturally and socially, told through the memories and anecdotes of this pair's friendship. I'd say it's actually the inverse: the story of _this specific pair's friendship_ with small crumbs of socio-cultural insights about friendships thrown in briefly every 30-50 pages, while the spotlight continually jumps back to the two authors. From all of the marketing materials, I thought the "we" in the subtitle "How We Keep Each Other Close" was we, as in, humans; after reading it, I can only conclude that they literally meant we, the authors.

Which brings me to disappointment reason #2: the whole book feels like an over-extended marketing blurb cum personal branding exercise. The tone of voice takes on the same quality as a Bustle article, with snippets like "That's why goddess created the cervix!" thrown in. That, paired with their many trademarked phrases (shine theory! desert ladies getaway! friendweb!), made it feel like a branding study.

And, relatedly, the third reason I struggled with this book was that the authors chose to tell the story in plural first person, as a collective unit. This means that generally the writing is phrased as "we" and "us", but it constantly switches back and forth to third person wherever their narratives or experiences deviate, creating a continual bobbing back and forth between the strange, falsely cheery collective voice of "we"/"us" and the distanced language of a first-person narrator suddenly referring to herself in the third person – it's jarring! One moment, they're saying "We loved it!", and the next "Aminatou felt sad." I think this strange combination of the collective first-person plural and the removed third-person singular is a huge contributing factor to the marketing-copy feeling of the prose – it's like a weird mash-up of company-speak, where the company refers to itself as a collective unit presenting a unified face, and the distanced professionalism of a third-person speaker bio in a conference pamphlet.

The few and far-between flashes of insights and research from interviews they had done were enough to just start to be interesting – they were just way too limited (we're talking maybe a few sentences per snippet, if that) to provide any satisfaction. If you're someone who remotely cares about the role of friendships in modern life and society (and I'm guessing you probably are, if you've thought of picking up this book), there's not much new to find, because the depth of what's shared from the research is just so shallow.

All of this criticism aside, I also have to say that I genuinely enjoyed reading the book. It's a catchy tale of two friends who are smart and funny, and they're both good writers. And let's be honest, who doesn't enjoy getting a little window into the inner workings and gossip of someone else's relationships? This would probably be a lot more exciting, too, if I had been a long-time listener of their podcast, which I've actually never listened to before. I wouldn't recommend it for the expectations I had going in, but as a fun and light memoir, it could be a solid choice.
avg_line_length: 75.44 · max_line_length: 125 · alphanum_fraction: 0.78685 · lid: eng_Latn (prob 0.999861)
`e123bc8baa2815201b7c4ca6acc81a8396597c1b` · 808 bytes · ext: md · lang: Markdown · path: `README.md` · repo: sjpadgett/cqm-parsers @ `67b1d609378194eb46a7b104f5468f7e5519d191` · licenses: [ "Apache-2.0" ]
[![codecov](https://codecov.io/gh/projecttacoma/cqm-parsers/branch/master/graph/badge.svg)](https://codecov.io/gh/projecttacoma/cqm-parsers)

cqm-parsers
===========

This project contains libraries for parsing HQMF documents.

License
=======

Copyright 2018 The MITRE Corporation

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
32.32
140
0.774752
eng_Latn
0.968252
e124957acce7e660b15ca02c295cd4cdd270bc1e
943
md
Markdown
web/profiles/varbase/modules/contrib/persistent_login/readme.md
jadamsbit/jamie-auto
6800411075e9c3b2bc23cbce8fc2790043b35ee9
[ "MIT" ]
null
null
null
web/profiles/varbase/modules/contrib/persistent_login/readme.md
jadamsbit/jamie-auto
6800411075e9c3b2bc23cbce8fc2790043b35ee9
[ "MIT" ]
8
2019-04-27T00:11:03.000Z
2021-09-01T07:04:50.000Z
web/profiles/varbase/modules/contrib/persistent_login/readme.md
jadamsbit/jamie-auto
6800411075e9c3b2bc23cbce8fc2790043b35ee9
[ "MIT" ]
null
null
null
Persistent Login
================

The Persistent Login module provides the familiar "Remember Me" option on the user login form.

## Description

Persistent Login is independent of the PHP session settings and is more secure (and user-friendly) than simply setting a long PHP session lifetime.

The module's settings allow the administrator to:

- Control how long user logins are remembered.
- Control how many different persistent logins are remembered per user.

## Setup

1. Edit your `services.yml` file so PHP session cookies have a lifetime of the browser session:

       parameters:
         session.storage.options:
           cookie_lifetime: 0

2. Visit *Administration > Configuration > System > Persistent Login* to configure available options.

3. If using a reverse-proxy cache, such as Varnish, the configuration must be updated to not respond from the cache for requests that send a persistent login cookie.
28.575758
78
0.744433
eng_Latn
0.990168
e124fcc264eb2d000b82ec59635ec7104271d357
700
md
Markdown
src/zettel/analysis/lattice/complete/index.md
alxmrs/website
87b518fb298c88046e8f4230cdaa4344deb40109
[ "MIT" ]
null
null
null
src/zettel/analysis/lattice/complete/index.md
alxmrs/website
87b518fb298c88046e8f4230cdaa4344deb40109
[ "MIT" ]
3
2021-03-24T06:48:16.000Z
2021-03-24T06:52:17.000Z
src/zettel/analysis/lattice/complete/index.md
alxmrs/website
87b518fb298c88046e8f4230cdaa4344deb40109
[ "MIT" ]
null
null
null
# [Complete Lattice](https://en.wikipedia.org/wiki/Lattice_(order)#Completeness)

A [lattice](/zettel/analysis/lattice/) is complete if _all_ subsets have both a join and a meet. A complete lattice is more restrictive than a normal lattice. Every non-empty finite lattice is complete.

[Examples](https://en.wikipedia.org/wiki/Complete_lattice#Examples):

> The power set of a given set, ordered by inclusion. The supremum is given by
> the union and the infimum by the intersection of subsets.
>
> ...
>
> The convex subsets of a real or complex vector space, ordered by inclusion.
> The infimum is given by the intersection of convex sets and the supremum by
> the convex hull of the union.
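The power-set example can be sketched concretely — this is an illustrative snippet, not part of the note, showing that in the power-set lattice ordered by inclusion the join of any family is its union and the meet is its intersection (with the whole universe as the meet of the empty family):

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

def join(subsets):
    """Supremum in the power-set lattice: the union of the subsets."""
    out = frozenset()
    for x in subsets:
        out |= x
    return out

def meet(subsets, universe):
    """Infimum in the power-set lattice: the intersection (the whole
    universe for an empty family, which is why completeness needs it)."""
    out = frozenset(universe)
    for x in subsets:
        out &= x
    return out

family = [frozenset({1, 2}), frozenset({2, 3})]
assert join(family) == frozenset({1, 2, 3})
assert meet(family, {1, 2, 3}) == frozenset({2})
assert len(powerset({1, 2, 3})) == 8  # every one of these families has a join and a meet
```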
35
80
0.76
eng_Latn
0.996867
e125001062645ede6bac13a2a3c6a05369306f51
2,200
md
Markdown
docs/Configuration.md
WebBamboo/material-dashboard-symfony
753b986e0ecc0a2221e0a1d643cc4db853ab963e
[ "MIT" ]
3
2019-08-19T17:08:25.000Z
2020-12-23T13:58:50.000Z
docs/Configuration.md
WebBamboo/material-dashboard-symfony
753b986e0ecc0a2221e0a1d643cc4db853ab963e
[ "MIT" ]
null
null
null
docs/Configuration.md
WebBamboo/material-dashboard-symfony
753b986e0ecc0a2221e0a1d643cc4db853ab963e
[ "MIT" ]
1
2020-04-26T12:02:50.000Z
2020-04-26T12:02:50.000Z
---
title: Configuration
---

## Configuration

In `config/packages/twig.yaml` add the following:

```
twig:
    ...
    form_themes: ['bootstrap_4_layout.html.twig']
```

In `config/packages/` create a file called _material\_dashboard.yaml_.

Example configuration:

```
material_dashboard:
    menu_header:
        title: Material Dashboard
        anchor: /
    sidebar_background: /bundles/materialdashboard/img/sidebar-1.jpg
    color: green
    menu:
        example_dashboard:
            label: Home
            icon: dashboard
            parameters:
                - { name: language, value: en }
    user_menu:
        example_profile:
            label: Profile
            parameters:
                - { name: language, value: en }
```

## Step by step explanation

1. _menu\_header_ segment

![menuheader.png]({{site.baseurl}}/docs/menuheader.png)

This applies to the header of the left menu. In my example the text of the header is "Material Dashboard" and the anchor points to "/".

2. _sidebar\_background_ segment

This tells the sidebar which image to use. Available options included in the dashboard are:

- /bundles/materialdashboard/img/sidebar-1.jpg
- /bundles/materialdashboard/img/sidebar-2.jpg
- /bundles/materialdashboard/img/sidebar-3.jpg
- /bundles/materialdashboard/img/sidebar-4.jpg

3. _color_ segment

This is the theme color option. Possible options are:

- purple
- azure
- green
- orange
- danger
- rose

4. _menu_ and _user\_menu_ segments

The menu and user_menu segments are a yaml array containing all menu entries. Let's look at the example menu entry:

```
example_dashboard:
    label: Home
    icon: dashboard
    parameters:
        - { name: language, value: en }
```

_example\_dashboard_ is the route name. _label_ is the label in the menu. _icon_ is a material icon name; you can see all [material icons here](http://material.io/). _parameters_ is a key => value array of the route parameters, the same as you would supply to Twig's path() and url() functions.

The menu segment applies to the left menu and the user_menu applies to the drop-down menu in the upper right corner.

{% include footermenu.html %}
28.571429
134
0.683636
eng_Latn
0.968248
e12686ecbf102cd88a83879acc7a086c092a86c0
593
md
Markdown
site/content/post/太陽光のみで動き、通信し情報を表示する装置.md
inajob/inajob-review
7ebb2c9e32ad0a67db5ad51a59c5f99591d53252
[ "MIT" ]
null
null
null
site/content/post/太陽光のみで動き、通信し情報を表示する装置.md
inajob/inajob-review
7ebb2c9e32ad0a67db5ad51a59c5f99591d53252
[ "MIT" ]
11
2020-04-30T05:15:35.000Z
2022-02-27T11:27:53.000Z
site/content/post/太陽光のみで動き、通信し情報を表示する装置.md
inajob/inajob-review
7ebb2c9e32ad0a67db5ad51a59c5f99591d53252
[ "MIT" ]
null
null
null
---
title: A device that runs on solar power alone, communicates, and displays information
date: 2021-06-06T11:00:11.274Z
description: Introducing a build of a device that runs solely on solar power, communicates, and displays information.
image: /img/lorapaper.png
tags:
  - E-paper
  - atmega328
  - LoRa
---

Found via [MEET LORAPAPER, A WEATHER STATION THAT RUNS ON NO BATTERIES!](https://www.electronics-lab.com/meet-lorapaper-weather-station-runs-no-batteries/). The image is also taken from there.

When designing a gadget, the thing that always gives you headaches is the power supply. How to deliver electricity to something highly mobile, or installed outdoors, is a hard problem.

This article introduces "LoRaPaper", a gadget that generates power from sunlight, communicates over LoRa, and displays the received result on e-paper.

It stores the solar-generated power in a supercapacitor and performs its work once enough charge has accumulated.

The CPU appears to be an ATmega328, the same as the Arduino UNO, so development can apparently be done in the Arduino IDE.
29.65
160
0.826307
yue_Hant
0.834483
e126b69c75ecb80ddcbebdc4899b82d39256225f
3,029
md
Markdown
docs/framework/unmanaged-api/debugging/icordebugremote-interface.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugremote-interface.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugremote-interface.md
CharleyGui/docs.fr-fr
2563c94abf0d041d775f700b552d1dbe199f03d5
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: ICorDebugRemote interface
ms.date: 03/30/2017
api_name:
- ICorDebugRemote
api_location:
- CorDebug.dll
api_type:
- COM
f1_keywords:
- ICorDebugRemote
helpviewer_keywords:
- ICorDebugRemote interface [.NET Framework debugging]
ms.assetid: 53d073c6-fa02-40d2-82e1-b9452bb6abaa
topic_type:
- apiref
ms.openlocfilehash: 276d36c511105087190cb7e9dfeaa6932efc67ff
ms.sourcegitcommit: d8020797a6657d0fbbdff362b80300815f682f94
ms.translationtype: MT
ms.contentlocale: fr-FR
ms.lasthandoff: 11/24/2020
ms.locfileid: "95712103"
---
# <a name="icordebugremote-interface"></a>ICorDebugRemote interface

Provides the ability to launch or attach a managed debugger to a remote target process.

## <a name="syntax"></a>Syntax

```cpp
interface ICorDebugRemote : IUnknown
{
    HRESULT CreateProcessEx
        (
        [in]  ICorDebugRemoteTarget      *pRemoteTarget,
        [in]  LPCWSTR                    lpApplicationName,
        [in]  LPWSTR                     lpCommandLine,
        [in]  LPSECURITY_ATTRIBUTES      lpProcessAttributes,
        [in]  LPSECURITY_ATTRIBUTES      lpThreadAttributes,
        [in]  BOOL                       bInheritHandles,
        [in]  DWORD                      dwCreationFlags,
        [in]  PVOID                      lpEnvironment,
        [in]  LPCWSTR                    lpCurrentDirectory,
        [in]  LPSTARTUPINFOW             lpStartupInfo,
        [in]  LPPROCESS_INFORMATION      lpProcessInformation,
        [in]  CorDebugCreateProcessFlags debuggingFlags,
        [out] ICorDebugProcess          **ppProcess
        );

    HRESULT DebugActiveProcessEx
        (
        [in]  ICorDebugRemoteTarget   *pRemoteTarget,
        [in]  DWORD                    dwProcessId,
        [in]  BOOL                     fWin32Attach,
        [out] ICorDebugProcess       **ppProcess
        );
};
```

## <a name="methods"></a>Methods

|Method|Description|
|------------|-----------------|
|[ICorDebugRemote::CreateProcessEx method](icordebugremote-createprocessex-method.md)|Creates a process on a remote machine for managed debugging.|
|[ICorDebugRemote::DebugActiveProcessEx method](icordebugremote-debugactiveprocessex-method.md)|Launches a process on a remote machine under the debugger.|

## <a name="remarks"></a>Remarks

Currently, this functionality is supported only for debugging a Silverlight application target running on a remote Macintosh machine.

## <a name="requirements"></a>Requirements

**Platforms:** See [System Requirements](../../get-started/system-requirements.md).

**Header:** CorDebug.idl, CorDebug.h

**Library:** CorGuids.lib

**.NET Framework versions:** 4.5, 4, 3.5 SP1

## <a name="see-also"></a>See also

- [ICorDebugRemoteTarget interface](icordebugremotetarget-interface.md)
- [ICorDebug interface](icordebug-interface.md)
- [Debugging interfaces](debugging-interfaces.md)
34.816092
174
0.659954
yue_Hant
0.549756
e1270b6105c45029078b0f95cecba58e87bfb82d
4,653
md
Markdown
README.md
yangqingren/LEGOPhotosManager
3911b6d69024f25ee86980a7fa32728e572d6b9d
[ "MIT" ]
1
2020-09-07T06:57:50.000Z
2020-09-07T06:57:50.000Z
README.md
yangqingren/LEGOPhotosManager
3911b6d69024f25ee86980a7fa32728e572d6b9d
[ "MIT" ]
null
null
null
README.md
yangqingren/LEGOPhotosManager
3911b6d69024f25ee86980a7fa32728e572d6b9d
[ "MIT" ]
null
null
null
# LEGOPhotosManager

**LEGOPhotosManager** is a photo management tool: you can get the album list and photo list, save photos, delete photos, get photos via iCloud, and cancel photo requests.

照片管理工具,可以获取相册列表、照片列表,保存照片、删除照片,通过 iCloud 获取照片,取消照片请求

- [Features](#features)
- [Requirements](#requirements)
- [Installation](#installation)
- [Usage](#usage)
- [License](#license)

## Example

To run the example project, clone the repo, and run `pod install` from the Example directory first.

## Features

- [x] Get album list. 获取相册列表
- [x] Photo list. 照片列表
- [x] Save photos. 保存照片
- [x] Delete photos. 删除照片
- [x] Get photos by iCloud. 通过 iCloud 获取照片
- [x] Cancel photo request. 取消照片请求

## Requirements

- iOS 9.0+
- Xcode 10.0+

## Installation

### CocoaPods

[CocoaPods](https://cocoapods.org) is a dependency manager for Cocoa projects. For usage and installation instructions, visit their website. To integrate LEGOPhotosManager into your Xcode project using CocoaPods, specify it in your `Podfile`:

```ruby
pod 'LEGOPhotosManager'
```

### Manually

If you prefer not to use the dependency manager mentioned above, you can integrate LEGOPhotosManager into your project manually. Just drag & drop the `Sources` folder into your project.

## Usage

### Get album list. 获取相册列表

```
/** Get all album lists 获取所有相册列表 */
NSMutableArray <PHAssetCollection *> *collections = [LEGOPhotosManager systemAssetCollection];

/** Get a list of all photos 获取所有照片列表 */
NSMutableArray <PHAsset *> *assets = [LEGOPhotosManager systemAssetsByAssetCollection:collections.firstObject];

/** Get a list of photos exclusive to the current app 获取当前应用专属照片列表 */
NSMutableArray <PHAsset *> *assets = [LEGOPhotosManager getCameraAssets];
```

### Save photos. 保存照片 / Delete photos. 删除照片

```
/** Save image to system album 将 image 保存到系统相册 */
[LEGOPhotosManager savePhotoToAssetByImage:image date:[NSDate date] location:currLocation completion:^(BOOL success, NSError * _Nonnull error) {
}];

/** Save imageData to system album 将 imageData 保存到系统相册 */
[LEGOPhotosManager savePhotoToAssetByImageData:imageData date:[NSDate date] location:currLocation completion:^(BOOL success, NSError * _Nonnull error) {
}];

/** Delete from system album by assets 通过 assets 从系统相册删除 */
[LEGOPhotosManager delePhotoAssets:@[asset] completion:^(BOOL success) {
}];

/** Delete from system album by assetsID 通过 assetsID 从系统相册删除 */
[LEGOPhotosManager delePhotoAssetsIdentitys:@[assetID] completion:^(BOOL success) {
}];
```

### Get photos by iCloud. 通过 iCloud 获取照片

```
/** Get thumbnail image by asset 通过 asset 获取缩略图 */
[LEGOPhotosManager getThumbnailImageByAsset:asset targetSize:CGSizeMake(200, 300) completion:^(UIImage * _Nonnull thumbnailImage, PHImageRequestID requestID, BOOL isInCloud) {
}];

/** Get originalImage by asset 通过 asset 获取原图 */
[LEGOPhotosManager getOriginalImageByAsset:asset completion:^(UIImage * _Nonnull originalImage, PHImageRequestID requestID) {
}];

/** Get originalImage by asset, with progress 通过 asset 获取原图,带进度条 */
[LEGOPhotosManager getOriginalImageByAsset:asset progressHandler:^(double progress, NSError * _Nullable error, BOOL * _Nonnull stop, NSDictionary * _Nullable info) {
} completion:^(UIImage * _Nonnull originalImage, PHImageRequestID requestID) {
}];

/** Get imageData by asset 通过 asset 获取原图 imageData */
[LEGOPhotosManager getOriginalImageByAsset:asset completionData:^(NSData * _Nonnull originalImageData, PHImageRequestID requestID) {
}];

/** Get imageData by asset, with progress 通过 asset 获取原图 imageData,带进度条 */
[LEGOPhotosManager getOriginalImageByAsset:asset progressHandler:^(double progress, NSError * _Nullable error, BOOL * _Nonnull stop, NSDictionary * _Nullable info) {
} completionData:^(NSData * _Nonnull originalImageData, PHImageRequestID requestID) {
}];
```

### Cancel photo request. 取消照片请求

```
/** Cancel request by requestID 取消请求 */
[LEGOPhotosManager cancelPHImageRequestID:PHImageRequestID];
```

For details, see the example for LEGOPhotosManager.

## Author

564008993@qq.com, yangqingren@yy.com

## License

LEGOPhotosManager is available under the MIT license. See the LICENSE file for more info.
35.519084
242
0.738663
eng_Latn
0.366337
e1270e66b842f7c8d59433dc6fc42341f30682ba
878
md
Markdown
README.md
petr0n/Project2
ebbac0bd5ebbfc257c92cef6f3c8f269da0c83f8
[ "Apache-2.0" ]
2
2019-05-08T23:21:08.000Z
2019-07-06T00:21:52.000Z
README.md
petr0n/Project2
ebbac0bd5ebbfc257c92cef6f3c8f269da0c83f8
[ "Apache-2.0" ]
15
2019-05-07T00:05:26.000Z
2019-05-14T00:29:00.000Z
README.md
petr0n/Project2
ebbac0bd5ebbfc257c92cef6f3c8f269da0c83f8
[ "Apache-2.0" ]
1
2019-07-10T11:36:27.000Z
2019-07-10T11:36:27.000Z
# TrashTaggers

Organize a trash pickup event in this Eco-Friendly App.

# Project Overview

Trash Taggers is a green app that allows users to organize and/or sign up for trash pickup events in their local community. The home page shows the five upcoming events along with the ability to sign up. Users are also able to create new events or view all future events. When a user chooses to create an event or join an event, they are prompted to log in with Google. From there, the user is able to complete their task and help keep our cities clean!

This project utilizes MySQL, Node, Express, Handlebars, Bootstrap, Google Authentication, Google Maps, Passport JS, Moment and Sequelize.

View our project here: https://trashtaggers.herokuapp.com/

Contributors: Jason Fleming, Julia Fercello, Christiaan Simmons, and Peter Abeln

Github: https://github.com/petr0n/TrashTaggers
48.777778
181
0.792711
eng_Latn
0.987259
e1271e634bebd5dda22c7718a2d4055f39c8113f
6,849
md
Markdown
README.md
nicr9/greptools
25487a7ed67f629698eba3de6093a0b4d8334f66
[ "MIT" ]
2
2016-01-25T11:24:08.000Z
2017-05-18T16:06:44.000Z
README.md
nicr9/greptools
25487a7ed67f629698eba3de6093a0b4d8334f66
[ "MIT" ]
1
2019-05-14T12:12:41.000Z
2019-05-14T12:12:41.000Z
README.md
nicr9/greptools
25487a7ed67f629698eba3de6093a0b4d8334f66
[ "MIT" ]
null
null
null
# GrepTools

## Installation

From pip (recommended):

```
$ sudo pip install --pre greptools
```

From source (for developers):

```
$ git clone https://github.com/nicr9/greptools.git
$ cd greptools
$ sudo python2.7 setup.py develop
```

## About

`greptools` is a collection of CLI search tools similar to `grep` or `ack`. These tools were designed with programmers in mind and each tool is targeted at a different programming language or structured file format.

Each language-specific tool recursively searches files relating to that language in the current directory and sorts results into a context tree (referred to as a grep tree). The exact format of the grep tree depends on the language in question, but it takes the form of a nested data structure with each level representing a file, class or function.

Each tool uses `grep` to perform the actual searching, and for each result it opens the file and reads it to decide which class/function it belongs to.

## Usage

It's simple to use. Here's an example using the tool for python code: `pygt`. To look for usages of the word "traceback" inside a subdirectory of the core Python source code:

```
$ cd Python-2.7.3/Lib/multiprocessing
$ pygt traceback
./reduction.py
    def _serve
        132:^    import traceback$
        135:^        '-'*79 + '\n' + traceback.format_exc() + '-'*79$
./queues.py
    class Queue
        def _feed
            280:^    import traceback$
            281:^    traceback.print_exc()$
./managers.py
    49:^from traceback import format_exc$
    class Server
        def shutdown
            368:^    import traceback$
            369:^    traceback.print_exc()$
./util.py
    def _run_finalizers
        263:^    import traceback$
        264:^    traceback.print_exc()$
./process.py
    class Process
        def _bootstrap
            273:^    import traceback$
            276:^    traceback.print_exc()$
```

## Command-line options

These options should apply to all the available greptools, so just use the name of the tool you're using in place of `<greptool>` below:

### Case-insensitive search

```
$ <greptool> -i <SEARCH_TERM>
```

### Debug information

Turning this on prints out lots of additional information (e.g. raw grep results) that can be used to diagnose bugs in the logic at various stages. Useful if you're trying to develop your own greptool or add features to the base classes.

```
$ <greptool> -d <SEARCH_TERM>
```

### Set operations

One of the really useful features of these greptools is that they support treating the results like sets and quickly filtering results by applying set operations.

Let's look at a simple example. Let's say you need to quickly look through all the `import`s in your python project. That's simple: `pygt import`. Now let's say you want to narrow down those results to those that mention `os.path`. This can be done by piping the results from our earlier search into a new search for the new term like so:

```
$ pygt import | pygt os.path
```

This will effectively perform an intersection on both sets of results and so only provide matches that contain both `import` and `os.path`.

You can perform other set operations too! Let's say you don't want any results containing `os.path`. You can get the relative complement by piping like we did before and setting the `-F` (filter) flag on the last search command like so:

```
$ pygt import | pygt -F os.path
```

You can add both sets of results together with `-U` (union) and only return results that contain one and not the other by using `-X` (XOR, a.k.a. symmetric difference).

### Caveats with using set operations

Both the default (intersection) set op and the filter set op shown above aren't actually true set operations. It turns out treating search results like sets and performing these operations isn't as fast as we hoped, so we made a compromise. These two work by iterating through that first set of results and checking for the second search term using python's built-in regex engine.

You may experience issues from the use of two different engines. For example, if you are using complicated regular expressions you may find that they behave differently when using intersection or filter set operations. You can choose to use the slow intersection (`-N`) and the slow filter (`-E`) instead, which work by building both sets of results and comparing.

In order to use the pipe to pass one set of results to another pygt process, we had to serialise them first. This means that if you try piping the results to any other process (like `less` for example) they'll show up in json format. This will happen even if you use other output formats like the histogram format. If this causes problems for you, use `-p`. This will force it to pipe out results in whatever format you've chosen (except the default 'colour' format, which will be changed to clean because it looks really ugly when it's piped out).

## Writing a new greptool

So you've decided you need a greptool for your favourite language X. Here is a basic set of instructions to create a new greptool:

### 1) Implement a new Reader class.

Are code blocks in X based on indentation or delineated by braces? There are some classes you can inherit from (`IndentReader` and `BraceReader`) that are generalised for these cases. The docstrings should have details that tell you what needs to be implemented by subclasses. `PythonReader` and `JavaReader` are good examples of `IndentReader` and `BraceReader` subclasses respectively.

If neither of these suit your purposes, you may need to inherit from `BaseReader`. The logic you need to implement in this case is a little more abstract, and I'm not sure the docstrings are detailed enough. If you can't figure out what to do from a reading of the code, feel free to drop me an email with an outline of what you're working on, I'd be glad to help!

### 2) Add details to `greptools/reader/__init__.py`

Two things you'll need to do: include a relative import of your new reader class and add the name of that class to `__all__`.

### 3) Add new script to `bin/`

My advice is to copy a preexisting script. The convention is to base the script name on the language file extension (e.g. Python files have a `.py` extension so the Python greptool is called `pygt`). Don't forget to change the name of the Reader class used in the script.

```
$ cp bin/pygt bin/xgt
$ sed -i "s/PythonReader/XReader/g" bin/xgt
```

### 4) Mention script in setup.py.

There's a `scripts` list in setup.py. Add your new script here so that it's installed with all the others.

### 5) Reinstall.

```
$ sudo python2.7 setup.py develop
```

## Author

```
Name: Nic Roland
Twitter: @nicr9_
Email: nicroland9@gmail.com
```
31.855814
82
0.722587
eng_Latn
0.99912
e12767007f1b70d2f837bcf92125ed5930dae5cd
319
md
Markdown
CHANGELOG.md
vojtechkral/tokio-tasker
2911c1eef13a223d929b761649c7f8d7857bce48
[ "MIT" ]
12
2021-11-13T01:11:21.000Z
2022-03-01T21:25:37.000Z
CHANGELOG.md
vojtechkral/tokio-tasker
2911c1eef13a223d929b761649c7f8d7857bce48
[ "MIT" ]
2
2022-03-15T10:48:05.000Z
2022-03-18T20:26:14.000Z
CHANGELOG.md
vojtechkral/tokio-tasker
2911c1eef13a223d929b761649c7f8d7857bce48
[ "MIT" ]
null
null
null
## 1.2.0

`2022-03-19`

- Fix bugs in `JoinStream` implementation.
- Refactor internals.

## 1.1.0

`2022-03-03`

- Propagate panics early.
- Provide `JoinStream`.

Note: This release was yanked due to bugs in `JoinStream`.

## 1.0.1

`2021-11-13`

Patch release, docs fixes only.

## 1.0.0

`2021-11-13`

Initial release.
15.95
58
0.673981
eng_Latn
0.886088
e1278880a2b7c3ab63df6cd8f7a1f7d0e50d4d66
40
md
Markdown
README.md
NiranjanMudhiraj/Ride-Requests-Time-Series-Forecasting
98c671835c8c89ea2b401a02fb60d082b63b4d35
[ "MIT" ]
null
null
null
README.md
NiranjanMudhiraj/Ride-Requests-Time-Series-Forecasting
98c671835c8c89ea2b401a02fb60d082b63b4d35
[ "MIT" ]
null
null
null
README.md
NiranjanMudhiraj/Ride-Requests-Time-Series-Forecasting
98c671835c8c89ea2b401a02fb60d082b63b4d35
[ "MIT" ]
null
null
null
# Ride-Requests-Time-Series-Forecasting
20
39
0.825
kor_Hang
0.317034
e127fecae2a30008e858d6b6a1855bf4cf41ae97
29
md
Markdown
README.md
DavionHuang/DavionHuang.github.io
afe0dbab96fbe2e94b4fd37a6cadacdcd0ca9012
[ "MIT" ]
null
null
null
README.md
DavionHuang/DavionHuang.github.io
afe0dbab96fbe2e94b4fd37a6cadacdcd0ca9012
[ "MIT" ]
null
null
null
README.md
DavionHuang/DavionHuang.github.io
afe0dbab96fbe2e94b4fd37a6cadacdcd0ca9012
[ "MIT" ]
null
null
null
# DavionHuang.github.io Blog
9.666667
23
0.793103
hrv_Latn
0.766792
e128831a7ae992968784cd22ff5d6e3c1e5261d4
3,125
md
Markdown
docs/node/call-service.md
Hexagon/node-red-contrib-home-assistant-websocket
59c5716bc9db21830dd4ddc0c08a96d4b2d81d3c
[ "MIT" ]
null
null
null
docs/node/call-service.md
Hexagon/node-red-contrib-home-assistant-websocket
59c5716bc9db21830dd4ddc0c08a96d4b2d81d3c
[ "MIT" ]
null
null
null
docs/node/call-service.md
Hexagon/node-red-contrib-home-assistant-websocket
59c5716bc9db21830dd4ddc0c08a96d4b2d81d3c
[ "MIT" ]
null
null
null
# Call Service

Sends a request to home assistant for any domain and service available (`light/turn_on`, `input_select/select_option`, etc.).

::: tip Helpful Examples
[Call Service Tips and Tricks](/guide/call-service.html)
:::

## Configuration

### Domain <Badge text="required"/>

- Type: `string`
- Accepts [Mustache Templates](/guide/mustache-templates.md)

Service domain to call

### Service <Badge text="required"/>

- Type: `string`
- Accepts [Mustache Templates](/guide/mustache-templates.md)

Service to call

### Area

- Type: `an array of area ids`
- Accepts [Mustache Templates](/guide/mustache-templates.md) for ids

### Device

- Type: `an array of device ids`
- Accepts [Mustache Templates](/guide/mustache-templates.md) for ids

### Entity

- Type: `an array of entity ids`
- Accepts [Mustache Templates](/guide/mustache-templates.md) for ids

### Data

- Type: `JSONata | JSON`
- Accepts [Mustache Templates](/guide/mustache-templates.md) when data type is JSON

JSON object to pass along.

### Merge Context

- Type: `string`

If defined, will attempt to merge the global and flow context variable into the config.

### Alternative Template Tags

- Type: `boolean`

Will change the tags used for mustache templates to `<%` and `%>`.

### Queue

- Type: `none | first | all | last`

Will store the first, last or all messages received while disconnected from Home Assistant and send them once connected again.

## Input

All properties need to be under `msg.payload`.

Sample input:

```JSON
{
  "domain": "homeassistant",
  "service": "turn_on",
  "target": {
    "area_id": ["kitchen"],
    "device_id": ["8932894082930482903"],
    "entity_id": ["light.kitchen", "switch.garage_light"]
  },
  "data": {
    "brightness_pct": 50
  }
}
```

#### Merging

If the incoming message has a `payload` property with `domain` or `service` set, it will override any config values if set.

If the incoming message has a `payload.data` that is an object or parsable into an object, these properties will be <strong>merged</strong> with any config values set.

If the node has a property value in its config for `Merge Context`, then the `flow` and `global` contexts will be checked for this property, which should be an object that will also be merged into the data payload.

#### Merge Resolution

As seen above, the `data` property has a lot going on in the way of data merging. In the end, all of these are optional and the rightmost will win if a property exists in multiple objects:

Config Data, Global Data, Flow Data, Payload Data (payload data property always wins if provided)

### domain

- Type: `string`

Service domain to call

### service

- Type: `string`

Service to call

### data

- Type: `JSON Object`

Service data to send with API call

### target

- Type: `JSON Object with area_id, device_id, and entity_id as array properties`

Targets of the service call

## Output

Value types:

- `sent data`: data sent to Home Assistant
- `config`: config properties of the node

## References

<info-panel-only>

[External Docs](/node/call-service.md)

</info-panel-only>
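The "rightmost wins" merge resolution described above can be sketched as a chain of dict merges — this is purely an illustration of the precedence order, not the node's actual implementation:

```python
def merge_service_data(config_data, global_data, flow_data, payload_data):
    """Illustrative merge: later (rightmost) sources override earlier ones."""
    merged = {}
    for source in (config_data, global_data, flow_data, payload_data):
        merged.update(source or {})  # None / missing sources are simply skipped
    return merged

result = merge_service_data(
    {"brightness_pct": 30, "transition": 2},  # config data
    None,                                     # no global context value set
    {"transition": 5},                        # flow context overrides config
    {"brightness_pct": 50},                   # payload data always wins
)
assert result == {"brightness_pct": 50, "transition": 5}
```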
22.482014
212
0.71168
eng_Latn
0.958934
e128975dc1a06f4d236a9fcca3574667d849f4e4
2,995
md
Markdown
ru/managed-mysql/operations/performance-diagnostics.md
barmex/docs
e7f6be6035c66c1ab52224c350bfbf1d1fb605e9
[ "CC-BY-4.0" ]
null
null
null
ru/managed-mysql/operations/performance-diagnostics.md
barmex/docs
e7f6be6035c66c1ab52224c350bfbf1d1fb605e9
[ "CC-BY-4.0" ]
null
null
null
ru/managed-mysql/operations/performance-diagnostics.md
barmex/docs
e7f6be6035c66c1ab52224c350bfbf1d1fb605e9
[ "CC-BY-4.0" ]
null
null
null
# Performance diagnostics

{{ mmy-name }} has a built-in tool for collecting statistics on sessions and queries. These metrics can be useful when [analyzing performance and tuning the settings](../tutorials/profiling.md) of a cluster.

## Activating statistics collection {#activate-stats-collector}

Enable the **Statistics collection** option when [creating a cluster](cluster-create.md) or [changing its settings](update.md#change-additional-settings) (the option is disabled by default). Configure the **Session sampling interval** and the **Statement sampling interval**. Both settings are measured in seconds.

## Getting session statistics {#get-sessions}

1. In the [management console]({{ link-console-main }}), go to the folder page and select the **{{ mmy-name }}** service.
1. Click the name of the desired cluster and select the **Performance diagnostics** → **Sessions** tab.

To view session statistics or the query history within a session, select the corresponding tab.

{% list tabs %}

* Statistics

  To view session statistics:

  1. Set the time interval of interest.
  1. (Optional) Configure the filters.
  1. Select the desired [data slice](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-quick-start.html).

  To show or hide individual categories, click the category name in the chart legend.

* History

  To view the query history within a session:

  1. Set the time interval of interest.
  1. (Optional) Configure the filters.

{% endlist %}

## Getting query statistics {#get-queries}

1. In the [management console]({{ link-console-main }}), go to the folder page and select the **{{ mmy-name }}** service.
1. Click the name of the desired cluster and select the **Performance diagnostics** → **Queries** tab.

To view query statistics or to compare query statistics across two time intervals, select the corresponding tab.

{% list tabs %}

* Interval

  To view query statistics:

  1. Select the time interval of interest.
  1. (Optional) Configure the filters.

* 2 intervals

  To get information about the relative change in the statistical characteristics of queries:

  1. In the **Interval 1** field, select the time interval whose statistics will serve as the baseline for the calculations.
  1. In the **Interval 2** field, select the time interval whose statistics will be compared with those of the first interval.
  1. (Optional) Configure the filters.

  For example, suppose 10 `SELECT * FROM cities` queries were executed in the first interval and 20 in the second. Then, when the statistics are compared, the difference in the <q>number of queries</q> metric (the `Calls` column in the table) will be `+100%`.

{% endlist %}

For more details on the displayed information, see the [{{ MY }} documentation](https://dev.mysql.com/doc/refman/8.0/en/performance-schema-quick-start.html).
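The `+100%` figure in the example is just the usual relative-change formula applied to the `Calls` metric; sketched here as a hypothetical helper (not part of the service):

```python
def relative_change(baseline, comparison):
    """Relative change of a metric between two intervals, as a percentage."""
    return (comparison - baseline) / baseline * 100

# 10 queries in interval 1, 20 in interval 2 -> +100% in the Calls column.
assert relative_change(10, 20) == 100.0
```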
43.405797
252
0.733556
rus_Cyrl
0.940077
e1292aab01a018073339034ac0b43d9da42dad05
1,648
md
Markdown
src/pages/blog/linux/arch/install-yaourt.fr.md
luanorlandi/angeloocana
6d37769e55995727bf0f07a3a4555a4c09e6ae27
[ "MIT" ]
31
2017-04-01T14:18:18.000Z
2022-02-04T13:45:53.000Z
src/pages/blog/linux/arch/install-yaourt.fr.md
luanorlandi/angeloocana
6d37769e55995727bf0f07a3a4555a4c09e6ae27
[ "MIT" ]
1
2018-03-28T08:12:56.000Z
2018-04-24T15:20:10.000Z
src/pages/blog/linux/arch/install-yaourt.fr.md
luanorlandi/angeloocana
6d37769e55995727bf0f07a3a4555a4c09e6ae27
[ "MIT" ]
32
2017-08-28T18:01:51.000Z
2021-06-28T05:43:21.000Z
---
title: How to install Yaourt on Arch Linux
date: '2017-08-30'
layout: post
draft: false
tags:
  - Linux
  - Arch
structuredData:
  type: TechArticle
  locationCreated: Ottawa CA
  dependencies: Arch Linux
  proficiencyLevel: Beginner
  articleSection: Arch Linux
  pageEnd:
  pageStart:
  pagination:
  about:
    name: Arch Linux
    alternateName: Arch
    description: lightweight and flexible Linux® distribution that tries to Keep It Simple.
    identifier: arch-linux
    image:
    sameAs: https://www.archlinux.org/
  accessMode: textual
  accessModeSufficient: textual
  accessibilityAPI: ARIA
  accessibilityControl: fullKeyboardControl, fullTouchControl, fullMouseControl
  accessibilitySummary:
  aggregateRating: ...
  audience: ...
  author: angeloocana
  comment: ...
  commentCount: ...
  contentLocation: ...
  dateCreated: '2017-08-26'
  dateModified: '2017-08-30'
  datePublished: '2017-08-30'
  discussionUrl: ...
  educationalUse: ...
  isAccessibleForFree: true
  isFamilyFriendly: true
  keywords: ...
  thumbnailUrl: ...
  version: 1
  video: ...
---

Yaourt is the community package manager for Arch Linux. When you use **pacman** (the official package manager) you have to use **sudo**; with **yaourt** you don't have to.

Open the file below for editing:

```bash
sudo vim /etc/pacman.conf
```

Add this to the end of the file:

```conf
[archlinuxfr]
SigLevel = Never
Server = http://repo.archlinux.fr/$arch
```

Install yaourt:

```bash
sudo pacman -Sy yaourt
```
101
0.683252
yue_Hant
0.307588
e12aa781b9c2bab9435be531633655ad8529f9ad
21
md
Markdown
README.md
ZAIRAWASIM/COPS
6cda1ccd42cefd3b1eb715183053a3ba75ff43c9
[ "Apache-2.0" ]
null
null
null
README.md
ZAIRAWASIM/COPS
6cda1ccd42cefd3b1eb715183053a3ba75ff43c9
[ "Apache-2.0" ]
null
null
null
README.md
ZAIRAWASIM/COPS
6cda1ccd42cefd3b1eb715183053a3ba75ff43c9
[ "Apache-2.0" ]
null
null
null
# COPS Version 1.0.0
7
13
0.666667
kor_Hang
0.855508
e12add48556c8ba9e6a53b28935d4fe80faeeb22
639
md
Markdown
README.md
iLib-js/ilib-loctool-strings
8b51ee74d27b44cdf4e914972b1f7fcf1342369c
[ "Apache-2.0" ]
null
null
null
README.md
iLib-js/ilib-loctool-strings
8b51ee74d27b44cdf4e914972b1f7fcf1342369c
[ "Apache-2.0" ]
null
null
null
README.md
iLib-js/ilib-loctool-strings
8b51ee74d27b44cdf4e914972b1f7fcf1342369c
[ "Apache-2.0" ]
null
null
null
# ilib-loctool-strings Ilib loctool plugin to parse and localize iOS .strings files ## License This plugin is licensed under Apache2. See the [LICENSE](./LICENSE) file for more details. ## Release Notes ### v1.2.1 - Fix a bug where the pseudo locales were not initialized properly. This fix gets the right set of locales from the project settings to see if any of them are pseudo locales. ### v1.2.0 - Added the ability to set the target locale for the file from the project settings if it is there. Otherwise, fall back to parsing the path name to find the locale. - Fixed the way that flavors are detected in the path name
27.782609
69
0.749609
eng_Latn
0.999118
e12bde9b941b627a932ec551c2765b098ededa7b
7,130
md
Markdown
windows-driver-docs-pr/spb/using-the-spb-transfer-list-structure.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/spb/using-the-spb-transfer-list-structure.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/spb/using-the-spb-transfer-list-structure.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Using the SPB_TRANSFER_LIST structure for custom IOCTLs description: If a simple peripheral bus (SPB) controller driver supports one or more custom I/O control (IOCTL) requests, use the SPB_TRANSFER_LIST structure to describe the read and write buffers for these requests. ms.assetid: 577122CC-D1F8-41C5-BE77-A22FC8516B82 ms.date: 04/20/2017 ms.localizationpriority: medium ms.openlocfilehash: db2c5048a1d3c0ad49e880f31d57a5bfc6bddc30 ms.sourcegitcommit: 4b7a6ac7c68e6ad6f27da5d1dc4deabd5d34b748 ms.translationtype: MT ms.contentlocale: ja-JP ms.lasthandoff: 10/24/2019 ms.locfileid: "72839620" --- # <a name="using-the-spb_transfer_list-structure-for-custom-ioctls"></a>Using the SPB\_TRANSFER\_LIST structure for custom IOCTLs If a simple peripheral bus (SPB) controller driver supports one or more custom I/O control (IOCTL) requests, use the [**SPB\_TRANSFER\_LIST**](https://docs.microsoft.com/windows-hardware/drivers/ddi/spb/ns-spb-spb_transfer_list) structure to describe the read and write buffers for these requests. This structure provides a uniform way to describe the buffers in a request, and avoids the buffer-copy overhead associated with METHOD\_BUFFERED I/O operations. If a custom IOCTL request uses an **SPB\_TRANSFER\_LIST** structure, the SPB controller driver must call the [**SpbRequestCaptureIoOtherTransferList**](https://docs.microsoft.com/windows-hardware/drivers/ddi/spbcx/nf-spbcx-spbrequestcaptureioothertransferlist) method to capture these buffers in the process context of the requester. 
The driver can call the [**SpbRequestGetTransferParameters**](https://docs.microsoft.com/windows-hardware/drivers/ddi/spbcx/nf-spbcx-spbrequestgettransferparameters) method to access these buffers. The [**IOCTL\_SPB\_FULL\_DUPLEX**](https://msdn.microsoft.com/library/windows/hardware/hh974774) and [**IOCTL\_SPB\_EXECUTE\_SEQUENCE**](https://msdn.microsoft.com/library/windows/hardware/hh450857) requests, which are defined as part of the [SPB I/O request interface](https://docs.microsoft.com/previous-versions/hh698224(v=vs.85)), use **SPB\_TRANSFER\_LIST** structures to describe their read and write buffers. The **SPB\_TRANSFER\_LIST** structure for an **IOCTL\_SPB\_FULL\_DUPLEX** request describes both the write buffer and the read buffer in the request. The **SPB\_TRANSFER\_LIST** structure for an **IOCTL\_SPB\_EXECUTE\_SEQUENCE** request can describe an arbitrary sequence of read and write buffers. Similarly, a custom IOCTL can be defined to use an **SPB\_TRANSFER\_LIST** structure with whatever combination of read and write buffers, in whatever order in the list, the operation requires. A kernel-mode driver framework (KMDF) driver for an SPB peripheral device calls the [**WdfIoTargetSendIoctlSynchronously**](https://docs.microsoft.com/windows-hardware/drivers/ddi/wdfiotarget/nf-wdfiotarget-wdfiotargetsendioctlsynchronously) method to synchronously send an IOCTL request to the SPB controller. This method has *InputBuffer* and *OutputBuffer* parameters. Drivers for some types of devices might use these two parameters to point to the write buffer and the read buffer, respectively, for an IOCTL request. However, to send an IOCTL request to an SPB controller, the SPB peripheral driver sets the *InputBuffer* parameter to point to a memory descriptor that points to an **SPB\_TRANSFER\_LIST** structure. This structure describes the read and/or write buffers that are required for the I/O control operation. The driver sets the *OutputBuffer* parameter to NULL. Similarly, a user-mode driver framework (UMDF) driver for an SPB peripheral device calls a method such as [**IWDFIoTarget::FormatRequestForIoctl**](https://docs.microsoft.com/windows-hardware/drivers/ddi/wudfddi/nf-wudfddi-iwdfiotarget-formatrequestforioctl) to format an I/O request for an I/O control operation. This method has *pInputMemory* and *pOutputMemory* parameters. Drivers for some types of devices might use these two parameters to point to the write buffer and the read buffer for an IOCTL request. However, to send an IOCTL request to the SPB controller, the SPB peripheral driver sets the *pInputMemory* parameter to point to a memory object that contains an **SPB\_TRANSFER\_LIST** structure. This structure describes the read and/or write buffers that are required for the I/O control operation. The driver sets the *pOutputMemory* parameter to NULL. 
## <a name="parameter-checking-and-buffer-capture"></a>Parameter checking and buffer capture When the SPB framework extension (SpbCx) receives an **IOCTL\_SPB\_EXECUTE\_SEQUENCE** request, it passes the request to the SPB controller driver by calling the driver's [*EvtSpbControllerIoSequence*](https://docs.microsoft.com/windows-hardware/drivers/ddi/spbcx/nc-spbcx-evt_spb_controller_sequence) callback function. Before this call, SpbCx inspects the **SPB\_TRANSFER\_LIST** structure that describes the buffers in the request. SpbCx captures these buffers in the process context of the originator of the request. (Buffers in user-mode memory can be accessed only from within the process in which the memory is allocated.) In addition, SpbCx verifies that the parameter values in the request are valid. When SpbCx receives an **IOCTL\_SPB\_FULL\_DUPLEX** request or a custom IOCTL request, it passes the request to the SPB controller driver by calling the driver's [*EvtSpbControllerIoOther*](https://docs.microsoft.com/windows-hardware/drivers/ddi/spbcx/nc-spbcx-evt_spb_controller_other) callback function. Before making this call, SpbCx does not check the validity of the parameter values in the request, and does not capture the request's buffers in the context of the originator. Parameter checking and buffer capture for these requests are the responsibility of the SPB controller driver. If an SPB controller driver supports **IOCTL\_SPB\_FULL\_DUPLEX** requests, or supports custom IOCTL requests that use **SPB\_TRANSFER\_LIST** structures to describe their buffers, the driver must implement an [*EvtIoInCallerContext*](https://docs.microsoft.com/windows-hardware/drivers/ddi/wdfdevice/nc-wdfdevice-evt_wdf_io_in_caller_context) callback function. The driver supplies a pointer to this function as an input parameter in the [**SpbControllerSetIoOtherCallback**](https://docs.microsoft.com/windows-hardware/drivers/ddi/spbcx/nf-spbcx-spbcontrollersetioothercallback) call that registers the driver's *EvtSpbControllerIoOther* callback function. When SpbCx receives an **IOCTL\_SPB\_FULL\_DUPLEX** request or a custom IOCTL request, SpbCx calls the driver's *EvtIoInCallerContext* function in the context of the originator. If the IOCTL request uses an **SPB\_TRANSFER\_LIST** structure, the *EvtIoInCallerContext* function calls the [**SpbRequestCaptureIoOtherTransferList**](https://docs.microsoft.com/windows-hardware/drivers/ddi/spbcx/nf-spbcx-spbrequestcaptureioothertransferlist) method to capture the buffers in the request. The *EvtIoInCallerContext* function can also do some preliminary processing of the request. The following code example shows an *EvtIoInCallerContext* function that is implemented by an SPB controller driver. 
```cpp VOID EvtIoInCallerContext( _In_ WDFDEVICE SpbController, _In_ WDFREQUEST FxRequest ) { NTSTATUS status = STATUS_SUCCESS; WDF_REQUEST_PARAMETERS fxParams; WDF_REQUEST_PARAMETERS_INIT(&fxParams); WdfRequestGetParameters(FxRequest, &fxParams); if ((fxParams.Type != WdfRequestTypeDeviceControl) && (fxParams.Type != WdfRequestTypeDeviceControlInternal)) { status = STATUS_NOT_SUPPORTED; goto exit; } // // The driver should check for custom IOCTLs that it handles. // If the IOCTL is not recognized, complete the request with a // status of STATUS_NOT_SUPPORTED. // switch (fxParams.Parameters.DeviceIoControl.IoControlCode) { ... default: status = STATUS_NOT_SUPPORTED; goto exit; } // // The IOCTL is recognized. Capture the buffers in the request. // status = SpbRequestCaptureIoOtherTransferList((SPBREQUEST)FxRequest); // // If the capture fails, the driver must complete the request instead // of placing it in the SPB controller's request queue. // if (!NT_SUCCESS(status)) { goto exit; } status = WdfDeviceEnqueueRequest(SpbController, FxRequest); if (!NT_SUCCESS(status)) { goto exit; } exit: if (!NT_SUCCESS(status)) { WdfRequestComplete(FxRequest, status); } } ``` In the preceding code example, the **switch** statement verifies that the request contains an IOCTL that the SPB controller driver recognizes. (For brevity, the body of the **switch** statement is not shown.) Next, the call to the **SpbRequestCaptureIoOtherTransferList** method captures the buffers in the request. If this call succeeds, the request is added to the SPB controller's I/O queue. Otherwise, the request is completed with an error status code. For a [code example](https://docs.microsoft.com/windows-hardware/drivers/spb/handling-ioctl-spb-full-duplex-requests#code-example) that shows the parameter checking done by an *EvtSpbControllerIoOther* function, see [Handling **IOCTL\_SPB\_FULL\_DUPLEX** requests](https://docs.microsoft.com/windows-hardware/drivers/spb/handling-ioctl-spb-full-duplex-requests).
60.423729
925
0.803506
yue_Hant
0.782122
e12d6a21bf7e01ba2fa208b7a88c0731f6dcc39c
3,989
md
Markdown
README.md
kaila-spraguemcrae/cs-mvc-template
bb57331aa23dbbbb288110ae8b08135a7de48345
[ "MIT", "Unlicense" ]
null
null
null
README.md
kaila-spraguemcrae/cs-mvc-template
bb57331aa23dbbbb288110ae8b08135a7de48345
[ "MIT", "Unlicense" ]
null
null
null
README.md
kaila-spraguemcrae/cs-mvc-template
bb57331aa23dbbbb288110ae8b08135a7de48345
[ "MIT", "Unlicense" ]
null
null
null
<br> <h1 align = "center"> <b> {Application Name} </b> </h1> <p align = "center"> #### {Brief description of application}, {Date of current version} </p> <p align = "center"> By {List of contributors} </p> -------------------- ## 📖 Description {Detailed description, its purpose and usage. What it does and other information.} -------------------- ## 🛠️ Technologies Used This project uses the following technologies: - C# v7.3 - .NET Core v2.2 - MS Testing - ASP.NET MVC ------------------- ## Specs <details> | Test | Input | Output | | :------------- | :------------- | :------------- | | **** | | | | | | | | **** | | | | | | | </details> ------------------- ## 🔧 Setup/Installation Requirements ### View Online _To view my live website, {Name of Page}, visit_[https://kaila-spraguemcrae.github.io/FINISH-URL](https://kaila-spraguemcrae.github.io/FINISH-URL) ### Open Locally Go to my GitHub repository here, [https://github.com/kaila.spraguemcrae/FINISH-URL](https://github.com/kaila.spraguemcrae/FINISH-URL), and click on the green 'Code' button to clone the repository, Open with GitHub Desktop OR Download the ZIP file #### Necessary Specifications - To run this project locally you will need to have .NET Core. You can check if you have .NET Core by running `dotnet --version` in the command line. If you do not have .NET Core please find more information and download [here](https://dotnet.microsoft.com/download/dotnet-core) * Please note this project uses .NET Core v2.2 #### To clone (my preferred method): 1. Push the green 'Clone' button and copy the URL. 2. Open Terminal or GitBash and input the command: `git clone https://github.com/kaila-spraguemcrae/FINISH-URL` 3. To view the code, open the copied directory with Visual Studio Code or your preferred text editor by inputting the command `code .` in your terminal. #### Running/viewing application: 1. 
Once you have opened the code in your preferred text editor you will need to navigate to the 'PROJNAME' folder (`cd PROJNAME`) in the command line and run `dotnet run` or `dotnet watch run`. 2. At this point you should be able to click on the link to the local server's url path to view the compiled project. #### Running tests: 1. To run MS tests you will need to navigate to the 'PROJNAME.Tests' folder (`cd PROJNAME.Tests`) in the command line and then run `dotnet restore`. 2. You should now see 'obj' folders in both the 'PROJNAME.Tests' folder and 'PROJNAME' folder. 3. At this point you should be able to successfully run `dotnet test` in the command line (keep in mind you should still be in the PROJNAME.Tests folder). -------------------------- ## 🐛 Known Bugs -------------------------- ## 📫 Support and contact details If you run into any problems or have any questions please contact me via [email](mailto:kaila.sprague@icloud.com). --------------------------- ## 📘 License MIT License Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Copyright (c) 2020 Kaila Sprague McRae
34.991228
278
0.702933
eng_Latn
0.905683
e12dba3128be0682573aab5076f64d6e62aec2d4
1,396
md
Markdown
css/regions/MSRange-collection/index.md
maibroadcast/docs
033f80374c2557f2dd867a2285e3d09df1c81359
[ "CC-BY-3.0" ]
1
2019-11-06T18:11:10.000Z
2019-11-06T18:11:10.000Z
css/regions/MSRange-collection/index.md
maibroadcast/docs
033f80374c2557f2dd867a2285e3d09df1c81359
[ "CC-BY-3.0" ]
null
null
null
css/regions/MSRange-collection/index.md
maibroadcast/docs
033f80374c2557f2dd867a2285e3d09df1c81359
[ "CC-BY-3.0" ]
null
null
null
--- title: 'MSRange-collection' attributions: - 'Microsoft Developer Network: [[Windows Internet Explorer API reference](http://msdn.microsoft.com/en-us/library/ie/hh828809%28v=vs.85%29.aspx) Article]' notes: - 'Deletion Candidate: replace non-standard implementation' readiness: 'Not Ready' standardization_status: Non-Standard tags: - API_Objects - Needs_Summary - Needs_Examples uri: css/regions/MSRange-collection --- ## Properties *No properties.* ## Methods *No methods.* ## Events *No events.* ### Members The **MSRangeCollection** object does not define any members. Build date: 7/24/2012
43.625
861
0.770057
eng_Latn
0.934001
e13018d36d7e2e6a25e7568ef9cafdff91d78631
5,049
md
Markdown
README.md
brobotan/stocks_etf_predict
5d1be98522d77a5eb1079d84c2db14b923613de0
[ "MIT" ]
null
null
null
README.md
brobotan/stocks_etf_predict
5d1be98522d77a5eb1079d84c2db14b923613de0
[ "MIT" ]
null
null
null
README.md
brobotan/stocks_etf_predict
5d1be98522d77a5eb1079d84c2db14b923613de0
[ "MIT" ]
1
2021-09-10T11:21:37.000Z
2021-09-10T11:21:37.000Z
![test](https://sevensreport.com/wp-content/uploads/2016/07/stock-market-3.jpg) # Stock Market Trends and Price Prediction ### ARIMA Model Predictions AMAZON.COM INC Stock Prediction | ProShares UltraPro Short QQQ Prediction :-------------------------:|:-------------------------: ![](https://github.com/brobotan/stocks_eft_predict/blob/main/Output/results/final(Amazon.com%20Inc).png) | ![](https://github.com/brobotan/stocks_eft_predict/blob/main/Output/results/final(ProShares%20UltraPro%20Short%20QQQ).png) ### XGBOOST Predictions Prediction 1 | Prediction 2 :-------------------------:|:-------------------------: ![](https://github.com/brobotan/stocks_eft_predict/blob/main/Output/XGBOOST-Results/xg_test_predict_z.png) | ![](https://github.com/brobotan/stocks_eft_predict/blob/main/Output/XGBOOST-Results/xg_dev_predict.png) # Table of contents <!-- After you have introduced your project, it is a good idea to add a **Table of contents** or **TOC** as **cool** people say it. This would make it easier for people to navigate through your README and find exactly what they are looking for. Here is a sample TOC(*wow! such cool!*) that is actually the TOC for this README. --> - [Table of contents](#table-of-contents) - [Description](#description) - [Usage](#usage) - [Installations](#installations) - [Database](#database) - [License](#license) # Description [(Back to top)](#table-of-contents) For stock price prediction two models have been used in this project. * **ARIMA MODEL** * **XGBOOST MODEL** ## ARIMA MODEL ARIMA stands for Auto-Regressive Integrated Moving Averages. ARIMA (p, d, q) is a generalization of an autoregressive moving average (ARMA (p, q)) model. ARIMA models are applied in some cases where data show evidence of nonstationarity. The AR term p of ARIMA indicates that the evolving variable of interest is regressed on its own lagged (i.e., prior) values. 
The MA part indicates that the regression error is a linear combination of error terms whose values occurred contemporaneously and at various times in the past. The I (for "integrated") indicates that the data values have been replaced with the difference between their values and the previous values (and this differencing process may have been performed more than once). ### Architecture Diagram ![](https://github.com/brobotan/stocks_eft_predict/blob/main/resources/ARIMA%20ARCH(b).png) ## XGBOOST MODEL XGBoost stands for Extreme Gradient Boosting; it is a performant machine learning library. XGBoost implements a Gradient Boosting algorithm based on decision trees. XGBoost is an ensemble of decision trees. Those trees are poor models individually, but when they are grouped they can be really performant. Gradient boosting is a process to convert weak learners to strong learners, in an iterative fashion. Each tree is called a “weak learner” because of its high bias. XGBoost starts by creating a first simple tree which has poor performance by itself. It then builds another tree which is trained to predict what the first tree was not able to, and is itself a weak learner too. The algorithm goes on by sequentially building more weak learners, each one correcting the previous tree until a stopping condition is reached, such as the number of trees (n_estimators) to build. ### Architecture Diagram ![](https://github.com/brobotan/stocks_eft_predict/blob/main/resources/XGB%20ARCH(b).png) # Usage [(Back to top)](#table-of-contents) To use either of the two models mentioned above, you just need to download the corresponding jupyter notebook file, which has an extension of .ipynb, and import the packages required. 
# Installations [(Back to top)](#table-of-contents) <!-- Let's also add a footer because I love footers and also you **can** use this to convey important info.--> - numpy - matplotlib - seaborn - statsmodels - pandas - sklearn - pmdarima - tqdm - xgboost # Database [(Back to top)](#table-of-contents) <!-- Let's also add a footer because I love footers and also you **can** use this to convey important info.--> We used this [Database](https://www.kaggle.com/borismarjanovic/price-volume-data-for-all-us-stocks-etfs), which contains data for 7165 Stocks and 1374 ETFs. Data is presented in txt format, which we will convert into a csv file. It includes attributes such as Date, Open, High, Low, Close, Volume, OpenInt. The dataset captures many ups and downs in the value of stocks and ETFs, so we have a wide variety of data with which to train and test our model. # License [(Back to top)](#table-of-contents) <!-- Adding the license to README is a good practice so that people can easily refer to it. Make sure you have added a LICENSE file in your project folder. **Shortcut:** Click add new file in your root of your repo in GitHub > Set file name to LICENSE > GitHub shows LICENSE templates > Choose the one that best suits your project! I personally add the name of the license and provide a link to it like below. --> [The MIT License](https://opensource.org/licenses/MIT)
59.4
408
0.753218
eng_Latn
0.988751
e1302f0dfb21d44594275e1f0f51a693f597aff0
2,971
md
Markdown
entity-framework/core/providers/provider-log.md
bogan/EntityFramework.Docs
b76cb48937367a03660160c04059e5e58ad39325
[ "CC-BY-4.0", "MIT" ]
null
null
null
entity-framework/core/providers/provider-log.md
bogan/EntityFramework.Docs
b76cb48937367a03660160c04059e5e58ad39325
[ "CC-BY-4.0", "MIT" ]
null
null
null
entity-framework/core/providers/provider-log.md
bogan/EntityFramework.Docs
b76cb48937367a03660160c04059e5e58ad39325
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Log of provider-impacting changes - EF Core author: ajcvickers ms.author: avickers ms.date: 08/08/2018 ms.assetid: 7CEF496E-A5B0-4F5F-B68E-529609B23EF9 ms.technology: entity-framework-core uid: core/providers/provider-log --- # Provider-impacting changes This page contains links to pull requests made on the EF Core repo that may require authors of other database providers to react. The intention is to provide a starting point for authors of existing third-party database providers when updating their provider to a new version. We are starting this log with changes from 2.1 to 2.2. Prior to 2.1 we used the [`providers-beware`](https://github.com/aspnet/EntityFrameworkCore/labels/providers-beware) and [`providers-fyi`](https://github.com/aspnet/EntityFrameworkCore/labels/providers-fyi) labels on our issues and pull requests. ### 2.1 ---> 2.2 #### Test-only changes * https://github.com/aspnet/EntityFrameworkCore/pull/12057 - Allow customizable SQL delimiters in tests * Test changes that allow non-strict floating point comparisons in BuiltInDataTypesTestBase * Test changes that allow query tests to be re-used with different SQL delimiters * https://github.com/aspnet/EntityFrameworkCore/pull/12072 - Add DbFunction tests to the relational specification tests * Such that these tests can be run against all database providers * https://github.com/aspnet/EntityFrameworkCore/pull/12362 - Async test cleanup * Remove `Wait` calls, unneeded async, and renamed some test methods * https://github.com/aspnet/EntityFrameworkCore/pull/12666 - Unify logging test infrastructure * Added `CreateListLoggerFactory` and removed some previous logging infrastructure, which will require providers using these tests to react * https://github.com/aspnet/EntityFrameworkCore/pull/12500 - Run more query tests both synchronously and asynchronously * Test names and factoring have changed, which will require providers using these tests to react * 
https://github.com/aspnet/EntityFrameworkCore/pull/12766 - Renaming navigations in the ComplexNavigations model * Providers using these tests may need to react * https://github.com/aspnet/EntityFrameworkCore/pull/12141 - Return the context to the pool instead of disposing in functional tests * This change includes some test refactoring which may require providers to react #### Test and product code changes * https://github.com/aspnet/EntityFrameworkCore/pull/12109 - Consolidate RelationalTypeMapping.Clone methods * Changes in 2.1 to the RelationalTypeMapping allowed for a simplification in derived classes. We don't believe this was breaking to providers, but providers can take advantage of this change in their derived type mapping classes. * https://github.com/aspnet/EntityFrameworkCore/pull/12069 - Tagged or named queries * Adds infrastructure for tagging LINQ queries and having those tags show up as comments in the SQL. This may require providers to react in SQL generation.
67.522727
301
0.799394
eng_Latn
0.977138
e131f1e061b12cd764d1d6713175cd8240ca07bd
3,986
md
Markdown
windows-driver-docs-pr/debugger/using-agestore.md
kvndb/windows-driver-docs
904720dbfcd60c063cece2219b938a7b5b5b5443
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/debugger/using-agestore.md
kvndb/windows-driver-docs
904720dbfcd60c063cece2219b938a7b5b5b5443
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/debugger/using-agestore.md
kvndb/windows-driver-docs
904720dbfcd60c063cece2219b938a7b5b5b5443
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Using AgeStore description: Using AgeStore ms.assetid: 188eac5c-e84c-45a4-a4ea-1c9bfaa93cca keywords: ["AgeStore, using"] ms.author: domars ms.date: 05/23/2017 ms.localizationpriority: medium --- # Using AgeStore AgeStore is a tool that deletes files in a directory or directory tree, based on their last access dates. Its primary use is for removing old files from the downstream store used by a symbol server or a source server, in order to conserve disk space. It can also be used as a general file deletion tool. AgeStore can delete all files in a single directory (the *target directory*), or in all the directories within a tree (the *target tree*). The -s option indicates that an entire tree is to be targeted. There are three ways to specify which files within the target directory or target tree are to be deleted. The agestore -date=Month-Day-Year command deletes all files that were last accessed prior to the specified date. The agestore -days=NumberOfDays command deletes all files that were last accessed more than the specified number of days ago. The agestore -size=SizeRemaining command deletes all files in the target directory or target tree, beginning with the least-recently-accessed files, until the total size of the remaining files is less than or equal to *SizeRemaining*. 
For example, the following command deletes all files in C:\\MyDir that were last accessed prior to January 7, 2008: ``` agestore c:\mydir -date=01-07-2008 ``` The following command deletes all files in the directory tree subordinate to C:\\symbols\\downstreamstore that were last accessed over thirty days ago: ``` agestore c:\symbols\downstreamstore -days=30 -s ``` The following command deletes files in the directory tree subordinate to C:\\symbols\\downstreamstore, beginning with those accessed longest ago, until the total size of all files in this tree is less than or equal to 50,000 bytes: ``` agestore c:\symbols\downstreamstore -size=50000 -s ``` The -l option causes AgeStore to delete no files, but merely to list all the files that would be deleted without this option. Before you use any AgeStore command you should run the intended command with the -l option added, to verify that it will delete exactly those files you intend it to delete. For the complete command line syntax, see [**AgeStore Command-Line Options**](agestore-command-line-options.md). ### <span id="running_agestore_on_windows_vista_and_later"></span><span id="RUNNING_AGESTORE_ON_WINDOWS_VISTA_AND_LATER"></span>Running AgeStore on Windows Vista and Later Because AgeStore deletes files based on the last time that they were accessed, it can run successfully only if your file system stores Last Access Time (LAT) data. In the NTFS file system, LAT data storage can be either enabled or disabled. If it is disabled, AgeStore will not run, but will display the following error message instead: ``` Last-Access-Time support is disabled on this computer. Please read the documentation for more details. ``` In Windows 2000, Windows XP, and Windows Server 2003, LAT data storage is enabled by default. In Windows Vista and later versions of Windows, LAT data storage is disabled by default, and therefore AgeStore will not run unless you first enable this data. 
In Windows Vista and later versions of Windows, you can use the FSUtil (Fsutil.exe) tool to enable the gathering of LAT data. From a Command Prompt window, issue the following command: ``` fsutil behavior set disablelastaccess 0 ``` To disable the gathering of LAT data, use the following command: ``` fsutil behavior set disablelastaccess 1 ``` These changes take effect after the next restart of Windows. The FAT32 file system always stores LAT information (although only the date, and not the time, are stored). Therefore, AgeStore works with FAT32 file systems. However, since AgeStore will not run when the NTFS LAT is disabled, you must enable NTFS LAT even if your file system is FAT32.
51.766234
579
0.785248
eng_Latn
0.997417
e131fca3f1165536829bcfe63f328643f3334cf3
2,194
md
Markdown
docs/code-quality/ca0061.md
MicrosoftDocs/visualstudio-docs.it-
3e6906339549f32b01960e19cd3400222dcc7b94
[ "CC-BY-4.0", "MIT" ]
3
2018-03-29T21:12:32.000Z
2022-03-26T11:56:08.000Z
docs/code-quality/ca0061.md
MicrosoftDocs/visualstudio-docs.it-
3e6906339549f32b01960e19cd3400222dcc7b94
[ "CC-BY-4.0", "MIT" ]
12
2018-03-07T15:43:33.000Z
2021-03-29T15:28:34.000Z
docs/code-quality/ca0061.md
MicrosoftDocs/visualstudio-docs.it-
3e6906339549f32b01960e19cd3400222dcc7b94
[ "CC-BY-4.0", "MIT" ]
12
2017-11-26T08:17:38.000Z
2021-10-09T11:24:07.000Z
--- description: The rule 'RuleId' could not be found. title: CA0061 ms.date: 10/20/2016 ms.topic: reference f1_keywords: - CA0061 ms.assetid: fab5690d-0cb8-4337-bd23-768a245ce9c6 author: mikejo5000 ms.author: mikejo manager: jmartens ms.technology: vs-ide-code-analysis ms.workload: - multiple ms.openlocfilehash: e7552e28c18f2591698cdba91887231d449f5afd ms.sourcegitcommit: b12a38744db371d2894769ecf305585f9577792f ms.translationtype: MT ms.contentlocale: it-IT ms.lasthandoff: 09/13/2021 ms.locfileid: "126632903" --- # <a name="ca0061"></a>CA0061 The rule '*RuleId*' could not be found. This error indicates that the specified rule was not found. This warning can be caused by an incorrectly formatted **FxCopCmd.exe /RuleId** option, an incorrectly formatted CodeAnalysisRules property value, or because the specified rule is located in a rule assembly that FxCop is not using. ## <a name="fxcopcmd-ruleid-option"></a>FxCopCmd /RuleId option Use one of the following formats to specify a rule in the **FxCopCmd.exe /RuleId** option on the FxCopCmd command line: - **FxCopCmd.exe /RuleId:-** *Category* **#** *RuleId* where *Category* is the category of the rule and *RuleId* is the CheckId of the rule. For example: ``` FxCopCmd /RuleId:-Microsoft.Design#CA2210 ``` - **FxCopCmd.exe /RuleId:-** *Namespace* **#** *RuleId* where *Namespace* is the namespace of the rule and *RuleId* is the CheckId of the rule. 
For example: ``` FxCopCmd /RuleId:-Microsoft.Rules.Design#CA2210 ``` ## <a name="msbuild-codeanalysisrules-property"></a>MSBuild CodeAnalysisRules property In Visual Studio code analysis, rules can be specified by using the MSBuild CodeAnalysisRules property with the following format: **\<CodeAnalysisRules>-**{*Category*&#124;*Namespace*}#*RuleId*[**;** ...]**\</CodeAnalysisRules>** For example: ``` <CodeAnalysisRules>-Microsoft.Design#CA2210;-Microsoft.Rules.Managed.CA1062</CodeAnalysisRules> ``` ## <a name="see-also"></a>See also [Code analysis application errors](../code-quality/code-analysis-application-errors.md)
35.967213
280
0.753874
ita_Latn
0.898279
e132227bfa99cb0ba697f5fc05d8990041f2171b
1,931
md
Markdown
docs/csharp/programming-guide/concepts/linq/how-to-find-descendant-elements-xpath-linq-to-xml.md
ilyakharlamov/docs.fr-fr
54c09f71d03787b462bdd134b3407d5ed708a191
[ "CC-BY-4.0", "MIT" ]
1
2019-04-11T17:00:02.000Z
2019-04-11T17:00:02.000Z
docs/csharp/programming-guide/concepts/linq/how-to-find-descendant-elements-xpath-linq-to-xml.md
ilyakharlamov/docs.fr-fr
54c09f71d03787b462bdd134b3407d5ed708a191
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/programming-guide/concepts/linq/how-to-find-descendant-elements-xpath-linq-to-xml.md
ilyakharlamov/docs.fr-fr
54c09f71d03787b462bdd134b3407d5ed708a191
[ "CC-BY-4.0", "MIT" ]
1
2022-02-23T14:59:20.000Z
2022-02-23T14:59:20.000Z
--- title: 'How to: Find Descendant Elements (XPath-LINQ to XML) (C#)' ms.date: 07/20/2015 ms.assetid: b318da39-bb8b-4c56-a019-e13b12b01831 ms.openlocfilehash: 0b9d89f0a9adb540e7efdccd1e4e7c2f8caf9696 ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8 ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 01/23/2019 ms.locfileid: "54599229" --- # <a name="how-to-find-descendant-elements-xpath-linq-to-xml-c"></a>How to: Find Descendant Elements (XPath-LINQ to XML) (C#) This topic shows how to get the descendant elements with a particular name. The XPath expression is `//Name`. ## <a name="example"></a>Example This example finds all descendants named `Name`. It uses the following XML document: [Sample XML File: Multiple Purchase Orders (LINQ to XML)](../../../../csharp/programming-guide/concepts/linq/sample-xml-file-multiple-purchase-orders-linq-to-xml.md). ```csharp XDocument po = XDocument.Load("PurchaseOrders.xml"); // LINQ to XML query IEnumerable<XElement> list1 = po.Root.Descendants("Name"); // XPath expression IEnumerable<XElement> list2 = po.XPathSelectElements("//Name"); if (list1.Count() == list2.Count() && list1.Intersect(list2).Count() == list1.Count()) Console.WriteLine("Results are identical"); else Console.WriteLine("Results differ"); foreach (XElement el in list1) Console.WriteLine(el); ``` This example produces the following output: ``` Results are identical <Name>Ellen Adams</Name> <Name>Tai Yee</Name> <Name>Cristian Osorio</Name> <Name>Cristian Osorio</Name> <Name>Jessica Arnold</Name> <Name>Jessica Arnold</Name> ``` ## <a name="see-also"></a>See also - [LINQ to XML for XPath Users (C#)](../../../../csharp/programming-guide/concepts/linq/linq-to-xml-for-xpath-users.md)
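The same descendant query can be sketched with Python's standard-library ElementTree for readers outside .NET — a rough analogue only, using a hypothetical minimal document in place of the PurchaseOrders.xml sample:

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the PurchaseOrders.xml sample used by the article
doc = """
<PurchaseOrders>
  <PurchaseOrder><Address><Name>Ellen Adams</Name></Address></PurchaseOrder>
  <PurchaseOrder><Address><Name>Tai Yee</Name></Address></PurchaseOrder>
</PurchaseOrders>
"""

root = ET.fromstring(doc)

# './/Name' selects descendants at any depth, like the XPath '//Name' above
names = [el.text for el in root.findall(".//Name")]
print(names)
```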
35.109091
230
0.709477
fra_Latn
0.357484
e13390df258d8f825ab7e0f618a9f93ecf30de61
7,355
md
Markdown
docs/ops/python_shell.md
journeyqiao/flink
164202bd9b4662f246e961fd964b96ae308cbcee
[ "Apache-2.0" ]
1
2020-02-24T06:54:09.000Z
2020-02-24T06:54:09.000Z
docs/ops/python_shell.md
journeyqiao/flink
164202bd9b4662f246e961fd964b96ae308cbcee
[ "Apache-2.0" ]
null
null
null
docs/ops/python_shell.md
journeyqiao/flink
164202bd9b4662f246e961fd964b96ae308cbcee
[ "Apache-2.0" ]
null
null
null
--- title: "Python REPL" nav-parent_id: ops nav-pos: 8 --- <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> Flink comes with an integrated interactive Python Shell. It can be used in a local setup as well as in a cluster setup. See the [local setup page](deployment/local.html) for more information about how to setup a local Flink. You can also [build a local setup from source](../flinkDev/building.html). <span class="label label-info">Note</span> The Python Shell will run the command “python”. Please run the following command to confirm that the command “python” in current environment points to Python 3.5+: {% highlight bash %} $ python --version # the version printed here must be 3.5+ {% endhighlight %} <span class="label label-info">Note</span> Using Python UDF in Python Shell requires apache-beam 2.19.0. Run the following command to confirm that it meets the requirements before run the Shell in local mode: {% highlight bash %} $ python -m pip install apache-beam==2.19.0 {% endhighlight %} To use the shell with an integrated Flink cluster just execute: {% highlight bash %} bin/pyflink-shell.sh local {% endhighlight %} in the root directory of your binary Flink directory. To run the Shell on a cluster, please see the Setup section below. 
## Usage The shell only supports Table API currently. The Table Environments are automatically prebound after startup. Use "bt_env" and "st_env" to access BatchTableEnvironment and StreamTableEnvironment respectively. ### Table API The example below is a simple program in the Python shell: <div class="codetabs" markdown="1"> <div data-lang="stream" markdown="1"> {% highlight python %} >>> import tempfile >>> import os >>> import shutil >>> sink_path = tempfile.gettempdir() + '/streaming.csv' >>> if os.path.exists(sink_path): ... if os.path.isfile(sink_path): ... os.remove(sink_path) ... else: ... shutil.rmtree(sink_path) >>> s_env.set_parallelism(1) >>> t = st_env.from_elements([(1, 'hi', 'hello'), (2, 'hi', 'hello')], ['a', 'b', 'c']) >>> st_env.connect(FileSystem().path(sink_path))\ ... .with_format(OldCsv() ... .field_delimiter(',') ... .field("a", DataTypes.BIGINT()) ... .field("b", DataTypes.STRING()) ... .field("c", DataTypes.STRING()))\ ... .with_schema(Schema() ... .field("a", DataTypes.BIGINT()) ... .field("b", DataTypes.STRING()) ... .field("c", DataTypes.STRING()))\ ... .create_temporary_table("stream_sink") >>> t.select("a + 1, b, c")\ ... .insert_into("stream_sink") >>> st_env.execute("stream_job") >>> # If the job runs in local mode, you can exec following code in Python shell to see the result: >>> with open(sink_path, 'r') as f: ... print(f.read()) {% endhighlight %} </div> <div data-lang="batch" markdown="1"> {% highlight python %} >>> import tempfile >>> import os >>> import shutil >>> sink_path = tempfile.gettempdir() + '/batch.csv' >>> if os.path.exists(sink_path): ... if os.path.isfile(sink_path): ... os.remove(sink_path) ... else: ... shutil.rmtree(sink_path) >>> b_env.set_parallelism(1) >>> t = bt_env.from_elements([(1, 'hi', 'hello'), (2, 'hi', 'hello')], ['a', 'b', 'c']) >>> bt_env.connect(FileSystem().path(sink_path))\ ... .with_format(OldCsv() ... .field_delimiter(',') ... .field("a", DataTypes.BIGINT()) ... 
.field("b", DataTypes.STRING()) ... .field("c", DataTypes.STRING()))\ ... .with_schema(Schema() ... .field("a", DataTypes.BIGINT()) ... .field("b", DataTypes.STRING()) ... .field("c", DataTypes.STRING()))\ ... .create_temporary_table("batch_sink") >>> t.select("a + 1, b, c")\ ... .insert_into("batch_sink") >>> bt_env.execute("batch_job") >>> # If the job runs in local mode, you can exec following code in Python shell to see the result: >>> with open(sink_path, 'r') as f: ... print(f.read()) {% endhighlight %} </div> </div> ## Setup To get an overview of what options the Python Shell provides, please use {% highlight bash %} bin/pyflink-shell.sh --help {% endhighlight %} ### Local To use the shell with an integrated Flink cluster just execute: {% highlight bash %} bin/pyflink-shell.sh local {% endhighlight %} ### Remote To use it with a running cluster, please start the Python shell with the keyword `remote` and supply the host and port of the JobManager with: {% highlight bash %} bin/pyflink-shell.sh remote <hostname> <portnumber> {% endhighlight %} ### Yarn Python Shell cluster The shell can deploy a Flink cluster to YARN, which is used exclusively by the shell. The shell deploys a new Flink cluster on YARN and connects the cluster. You can also specify options for YARN cluster such as memory for JobManager, name of YARN application, etc. For example, to start a Yarn cluster for the Python Shell with two TaskManagers use the following: {% highlight bash %} bin/pyflink-shell.sh yarn -n 2 {% endhighlight %} For all other options, see the full reference at the bottom. ### Yarn Session If you have previously deployed a Flink cluster using the Flink Yarn Session, the Python shell can connect with it using the following command: {% highlight bash %} bin/pyflink-shell.sh yarn {% endhighlight %} ## Full Reference {% highlight bash %} Flink Python Shell Usage: pyflink-shell.sh [local|remote|yarn] [options] <args>... 
Command: local [options] Starts Flink Python shell with a local Flink cluster usage: -h,--help Show the help message with descriptions of all options. Command: remote [options] <host> <port> Starts Flink Python shell connecting to a remote cluster <host> Remote host name as string <port> Remote port as integer usage: -h,--help Show the help message with descriptions of all options. Command: yarn [options] Starts Flink Python shell connecting to a yarn cluster usage: -h,--help Show the help message with descriptions of all options. -jm,--jobManagerMemory <arg> Memory for JobManager Container with optional unit (default: MB) -nm,--name <arg> Set a custom name for the application on YARN -qu,--queue <arg> Specify YARN queue. -s,--slots <arg> Number of slots per TaskManager -tm,--taskManagerMemory <arg> Memory per TaskManager Container with optional unit (default: MB) -h | --help Prints this usage text {% endhighlight %} {% top %}
33.584475
208
0.670292
eng_Latn
0.957204
e133c4e9ac852045af7e5a0a48f294bfcc99734e
5,390
md
Markdown
README.md
jimmycav/teamcity-theatre
b22aa2764ea38f46851c957136e2d7efe9567b79
[ "MIT" ]
60
2015-03-25T07:42:10.000Z
2021-10-08T02:53:42.000Z
README.md
jimmycav/teamcity-theatre
b22aa2764ea38f46851c957136e2d7efe9567b79
[ "MIT" ]
141
2016-03-22T09:59:50.000Z
2020-05-01T08:23:07.000Z
README.md
jimmycav/teamcity-theatre
b22aa2764ea38f46851c957136e2d7efe9567b79
[ "MIT" ]
36
2016-03-16T16:28:12.000Z
2020-02-28T08:33:07.000Z
# :tv: TeamCity Theatre [![Build Status Travis CI](https://travis-ci.org/amoerie/teamcity-theatre.svg?branch=master)](https://travis-ci.org/amoerie/teamcity-theatre) [![Build Status Azure Devops](https://amoerman.visualstudio.com/TeamCity%20Theatre/_apis/build/status/amoerie.teamcity-theatre?branchName=master)](https://amoerman.visualstudio.com/TeamCity%20Theatre/_build/latest?definitionId=4&branchName=master) A .NET MVC web application to monitor your TeamCity builds. Stick a TV on the wall, open a browser there and enjoy your TeamCity projects in all their red and green glory. ## Screenies ### The home page: choose your team ![Choose your team](http://i.imgur.com/64YxBRb.png) ### Team view ![The dashboard screen](http://i.imgur.com/izZiWVd.png) ### Configuration: manage your views and their tiles ![The config screen](http://i.imgur.com/4Rg4yi6.png) ## Features - First-class support for branches! (This is a feature many others are lacking) - Create multiple dashboards, one for each team! - Customizable amount of branches shown per tile - Customizable amount of columns shown per view, make optimal use of the size of your wall TV! - Customizable labels on tiles - Docker support! - Quite extensive logging - Customize TeamCity query ## Requirements - A TeamCity server (d'uh). TeamCityTheatre is confirmed to be compatible with 2017.1.4 (build 47070). Other versions may or may not work. - .NET Core Runtime 2.2 (downloadable from https://www.microsoft.com/net/download/all ) - If you want to use IIS: - A Windows Server with IIS to host the web application - .NET Core Windows Hosting Bundle, downloadable from the same page you downloaded the runtime from - Some knowledge on how to add a .NET web application in IIS, or the willingness to learn. - If you want to use Docker: - Docker for Windows using Windows Containers. Linux and Linux containers might work but that's still in testing phase. - A nice cup of :coffee: to drink while you install this. 
## Installation instructions 1. Download and unzip [the latest release](https://github.com/amoerie/teamcity-theatre/releases) 2. Configure your TeamCity settings, the application needs to somehow get access to the TeamCity API. The following authentication modes are supported: - "Guest" mode: If your TeamCity is configured with guest access, you can use 'Guest' as the authentication mode. You don't need any credentials. - "BasicAuthentication" mode: Every HTTP call will have a basic authentication header with a username and password. - "AccessToken": Every HTTP call will have an access token in the header 3. To configure authentication: - Either add the following to the `appsettings.json` file: ```javascript "Connection": { "Url": "http://your-teamcity-server/", "AuthenticationMode": "BasicAuthentication", // or "Guest" or "AccessToken" "Username": "your-teamcity-username", // if using Basic "Password": "your-teamcity-password", // if using Basic "AccessToken": "your-teamcity-accesstoken" // if using AccessToken } ``` - OR add the following environment parameters: (watch the number of underscores!!!) - TEAMCITYTHEATRE_CONNECTION__URL - TEAMCITYTHEATRE_CONNECTION__AUTHENTICATIONMODE - TEAMCITYTHEATRE_CONNECTION__USERNAME - TEAMCITYTHEATRE_CONNECTION__PASSWORD - TEAMCITYTHEATRE_CONNECTION__ACCESSTOKEN 4. (Optional) In appsettings.json, change the location of the configuration.json file or leave the default. This file will contain your views/tiles/etc. 5. (Optional) In appsettings.json, change the logging configuration. It's quite verbose by default, but will never take more than 75MB of space. 6. 
Start the application in one of the following ways - Run the following command: `dotnet TeamCityTheatre.Web.dll` - Install this folder as a web application in IIS: - Application pool should use .NET CLR version 'No Managed Code' - Application pool should use Managed Pipeline mode 'Integrated' - Ensure the application pool has the read/write access rights to - the folder in which configuration.json resides - the folder in which log files will be written ## Usage instructions Open the web application from a browser - Open the settings page from the main menu. - If you see any errors, your server or credentials might be incorrect. Check the log files to see why the network request failed. - Add a new view, give it a name. - Expand your TeamCity projects in the left bottom pane and select one to see its build configurations. - Add build configurations to your view. These will become the tiles of your view. - Open the dashboard from the main menu and select your view - Wait for the data to load. - Enjoy. ## Compilation instructions 1. Ensure you have [.NET Core SDK 2.x](https://www.microsoft.com/net/download/core) installed 2. Ensure you have [Node](https://nodejs.org/en/) installed 3. Execute "publish.cmd" or "publish.sh" depending on your operating system. 4. If all goes well, that should create a folder 'publish-output' which is all you need to host the application. See Installation instructions from here. ## Contributors - [amoerie](https://github.com/amoerie) - [tauptk](https://github.com/tauptk) - [trolleyyy](https://github.com/trolleyyy) - [LazyTarget](https://github.com/LazyTarget) - [jimmycav](https://github.com/jimmycav)
51.333333
389
0.758071
eng_Latn
0.949274
e133c7eac71b5933c3a2efad367a66e8d8d8ef95
1,383
md
Markdown
_posts/2019-02-08-macos-homebrew-installspecificversionofformula.md
dorbae/dorbae.github.io
4345558c57248b94e3cb8932315482e33f327d34
[ "MIT" ]
null
null
null
_posts/2019-02-08-macos-homebrew-installspecificversionofformula.md
dorbae/dorbae.github.io
4345558c57248b94e3cb8932315482e33f327d34
[ "MIT" ]
null
null
null
_posts/2019-02-08-macos-homebrew-installspecificversionofformula.md
dorbae/dorbae.github.io
4345558c57248b94e3cb8932315482e33f327d34
[ "MIT" ]
null
null
null
--- layout: post title: "[MacOS] Homebrew로 특정 OpenCV 설치" comments: true author: dorbae date: 2019-02-08 +0900 categories : [MacOS,Homebrew] tags: [mac,macos,osx,homebrew,brew,맥,opencv] sitemap : changefreq : weekly --- # Goal * Homebrew를 이용하여 특정 OpenCV 버전을 설치 <br /> # 들어가며 * Hombrew를 이용하여 OpenCV를 설치하니 OpenCV 4.0.1(최신) 버전이 설치되었다. * 특정 버전(3.4.5)를 이용해야해서 Homebrew를 이용해 OpenCV를 설치하고자 한다. <br /> # Practice ## 1. 설치 가능한 버전 확인 * Homebrew 레파지토리 테이블에 어떤 버전이 있는지 확인 <div markdown="1" style="background: #202020; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><pre style="margin: 0; line-height: 125%"><span style="color: #d0d0d0">$ brew search opencv</span> </pre></div> ![screenshot001](/assets/images/posts/2019/02/2019-02-08-macos-homebrew-installspecificversionofformula-001.png) <br /> ## 2. 특정 버전 설치 * opencv@3 설치 <div markdown="1" style="background: #202020; overflow:auto;width:auto;border:solid gray;border-width:.1em .1em .1em .8em;padding:.2em .6em;"><pre style="margin: 0; line-height: 125%"><span style="color: #d0d0d0">$ brew install opencv@3</span> </pre></div> ![screenshot002](/assets/images/posts/2019/02/2019-02-08-macos-homebrew-installspecificversionofformula-002.png) <br /> ----- ## References * [StackOverflow](https://stackoverflow.com/questions/3987683/homebrew-install-specific-version-of-formula)
28.8125
243
0.716558
kor_Hang
0.696393
e13486b028f7f63d6096d05fa04c7758306c6f52
249
md
Markdown
doc/api/smeup_models_widgets_smeup_calendar_event_model/SmeupCalentarEventModel/initTime.md
smeup/ken
582c6c2e731aa62a6d0b9b4ccc5f044e6883f13a
[ "Apache-2.0" ]
5
2021-12-28T12:47:39.000Z
2022-03-25T16:56:25.000Z
doc/api/smeup_models_widgets_smeup_calendar_event_model/SmeupCalentarEventModel/initTime.md
smeup/ken
582c6c2e731aa62a6d0b9b4ccc5f044e6883f13a
[ "Apache-2.0" ]
null
null
null
doc/api/smeup_models_widgets_smeup_calendar_event_model/SmeupCalentarEventModel/initTime.md
smeup/ken
582c6c2e731aa62a6d0b9b4ccc5f044e6883f13a
[ "Apache-2.0" ]
null
null
null
# initTime property *[<Null safety>](https://dart.dev/null-safety)* [DateTime](https://api.flutter.dev/flutter/dart-core/DateTime-class.html)? initTime _read / write_ ## Implementation ```dart DateTime? initTime; ```
7.114286
83
0.638554
yue_Hant
0.463605
e134a60f309df270656f6bb283569bb63b1b8d11
3,116
md
Markdown
README.md
rtfmoz2/alpaca
43f87ac0cde657aa05f99c46acbb057b4de6349b
[ "Apache-2.0" ]
89
2019-05-14T23:45:52.000Z
2022-03-27T20:41:31.000Z
README.md
rtfmoz2/alpaca
43f87ac0cde657aa05f99c46acbb057b4de6349b
[ "Apache-2.0" ]
57
2019-05-16T00:46:40.000Z
2022-03-27T09:37:24.000Z
README.md
rtfmoz2/alpaca
43f87ac0cde657aa05f99c46acbb057b4de6349b
[ "Apache-2.0" ]
24
2019-05-28T05:01:50.000Z
2022-03-31T03:02:57.000Z
# Alpaca ![Latest Tag][2] ![GitHub Workflow Status][3] ![GitHub Releases][4] Alpaca is a local HTTP proxy for command-line tools. It supports proxy auto-configuration (PAC) files and NTLM authentication. ## Install using Homebrew If you're using macOS and use [Homebrew](https://brew.sh/), you can install using: ```sh $ brew tap samuong/alpaca $ brew install samuong/alpaca/alpaca ``` Launch Alpaca by running `alpaca`, or by using `brew services start alpaca`. ## Install using Go If you've got the [Go](https://golang.org/cmd/go/) tool installed, you can install using: ```sh $ go get -v -u github.com/samuong/alpaca ``` ## Download Binary Alpaca can be downloaded from the [GitHub releases page][1]. ## Usage Start Alpaca by running the `alpaca` binary. On macOS and GNOME systems, Alpaca uses the PAC URL from your system settings. If you'd like to override this, or if Alpaca fails to detect your settings, you can set this manually using the `-C` flag. If you use [NoMAD](https://nomad.menu/products/#nomad) and have configured it to [use the keychain](https://nomad.menu/help/keychain-usage/), Alpaca will use these credentials to authenticate to any NTLM challenge from your proxies. You can also supply your domain and username (via command-line flags) and a password (via a prompt): ```sh $ alpaca -d MYDOMAIN -u me Password (for MYDOMAIN\me): ``` You also need to configure your tools to send requests via Alpaca. Usually this will require setting the `http_proxy` and `https_proxy` environment variables: ```sh $ export http_proxy=http://localhost:3128 $ export https_proxy=http://localhost:3128 $ curl -s https://raw.githubusercontent.com/samuong/alpaca/master/README.md # Alpaca ... ``` When moving from, say, a corporate network to a public WiFi network (or vice-versa), the proxies listed in the PAC script might become unreachable. 
When this happens, Alpaca will temporarily bypass the parent proxy and send requests directly, so there's no need to manually unset/re-set `http_proxy` and `https_proxy` as you move between networks. ## Non-interactive launch If you want to use Alpaca without any interactive password prompt, you can store your NTLM credentials (domain, username and MD4-hashed password) in an environment variable called `$NTLM_CREDENTIALS`. You can use the `-H` flag to generate this value: ```sh $ ./alpaca -d MYDOMAIN -u me -H Password (for MYDOMAIN\me): # Add this to your ~/.profile (or equivalent) and restart your shell NTLM_CREDENTIALS="me@MYDOMAIN:823893adfad2cda6e1a414f3ebdf58f7"; export NTLM_CREDENTIALS ``` Note that this hash is *not* cryptographically secure; it's just meant to stop people from being able to read your password with a quick glance. Once you've set this environment variable, you can start Alpaca by running `./alpaca`. [1]: https://github.com/samuong/alpaca/releases [2]: https://img.shields.io/github/v/tag/samuong/alpaca.svg?logo=github&label=latest [3]: https://img.shields.io/github/workflow/status/samuong/alpaca/Continuous%20Integration/master [4]: https://img.shields.io/github/downloads/samuong/alpaca/latest/total
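As a side note, the `http_proxy`/`https_proxy` convention the usage section relies on is the same one many language runtimes read; a quick sketch with Python's standard library (port 3128, Alpaca's listen port from the README):

```python
import os
import urllib.request

# Mirror the exports from the usage section
os.environ["http_proxy"] = "http://localhost:3128"
os.environ["https_proxy"] = "http://localhost:3128"

# Stdlib clients (and many CLI tools built on them) discover the proxy
# from these environment variables
proxies = urllib.request.getproxies()
print(proxies["http"], proxies["https"])
```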
33.505376
97
0.760591
eng_Latn
0.954669
e134b47ce1cfd182831712dc876fa44e5b3b3227
201
md
Markdown
README.md
jeandeaual/lilypond-piano-la-marseillaise
fdbd9eab76793ffbcd16c6d856f95682b430d1ae
[ "CC0-1.0" ]
null
null
null
README.md
jeandeaual/lilypond-piano-la-marseillaise
fdbd9eab76793ffbcd16c6d856f95682b430d1ae
[ "CC0-1.0" ]
null
null
null
README.md
jeandeaual/lilypond-piano-la-marseillaise
fdbd9eab76793ffbcd16c6d856f95682b430d1ae
[ "CC0-1.0" ]
null
null
null
# “La Marseillaise” by Claude-Joseph Rouget de l’Isle Built using [LilyPond](https://lilypond.org/). The output can be downloaded [here](https://jeandeaual.github.io/lilypond-piano-la-marseillaise).
33.5
97
0.761194
yue_Hant
0.255097
e1361e0b1ecd40b3279184ce8ca06bada65e5cc5
4,575
md
Markdown
README.md
thalesmacena/agendai
4f08b812587fe1a03bfbcf7e89ddbba926c083d8
[ "MIT" ]
null
null
null
README.md
thalesmacena/agendai
4f08b812587fe1a03bfbcf7e89ddbba926c083d8
[ "MIT" ]
null
null
null
README.md
thalesmacena/agendai
4f08b812587fe1a03bfbcf7e89ddbba926c083d8
[ "MIT" ]
null
null
null
# Agendai ## 🗂 Table of Contents - [Agendai](#agendai) - [🗂 Table of Contents](#-table-of-contents) - [📑 About](#-about) - [💻 Technologies](#-technologies) - [💱 Back-end](#-back-end) - [🌐 Front-end Web](#-front-end-web) - [✨ Installation](#-installation) - [💱 Back-end](#-back-end-1) - [🔥 Running the application](#-running-the-application) - [🌐 Front-End Web](#-front-end-web-1) - [🔥 Running the application](#-running-the-application-1) ## 📑 About A meal scheduling app for the UFRJ university cafeteria (bandejão) ## 💻 Technologies <a href="https://yarnpkg.com/"><img src="https://img.shields.io/badge/-Yarn-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=yarn&logoColor=2D325E" alt="Yarn"></a> <a href="https://eslint.org/"><img src="https://img.shields.io/badge/-ESLint-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=eslint&logoColor=2D325E" alt="ESLint"></a> <a href="https://nodejs.org/en/"><img src="https://img.shields.io/badge/-Node.JS-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=node.js&logoColor=2D325E" alt="Node.js"></a> ### 💱 Back-end <a href="https://expressjs.com/"><img src="https://img.shields.io/badge/-Express-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=express&logoColor=2D325E" alt="Express"></a> <a href="https://www.docker.com/"><img src="https://img.shields.io/badge/-Docker-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=docker&logoColor=2D325E" alt="Docker"></a> <a href="https://sequelize.org/"><img src="https://img.shields.io/badge/-Sequelize-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=Javascript&logoColor=2D325E" alt="Sequelize"></a> <a href="https://www.postgresql.org/"><img src="https://img.shields.io/badge/-PostgreSQL-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=postgresql&logoColor=2D325E" alt="PostgreSQL"></a> ### 🌐 Front-end Web <a href="https://www.typescriptlang.org/"><img src="https://img.shields.io/badge/-typescript-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=typescript&logoColor=2D325E" alt="Typescript"></a> <a href="https://reactjs.org/"><img 
src="https://img.shields.io/badge/-React-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=react&logoColor=2D325E" alt="React"></a> <a href="https://nextjs.org/"><img src="https://img.shields.io/badge/-Next.js-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=next.js&logoColor=2D325E" alt="Next.js"></a> <a href="https://styled-components.com/"><img src="https://img.shields.io/badge/-Styled%20Components-2D325E?labelColor=F0DB4F&style=for-the-badge&logo=styled-components&logoColor=2D325E" alt="Styled Components"></a> ## ✨ Installation ```PowerShell # To clone the repository git clone https://github.com/thalesmacena/agendai.git ``` ### 💱 Back-end The back-end was built with Express.js. It follows the MVC architecture pattern with Sequelize, using Postgres as the database. The project also follows the Airbnb style guide, which, together with the Prettier plugin, keeps the code clean and clear. #### 🔥 Running the application **Prerequisites** To run the application you will need: - An up-to-date version of Node.JS - The Yarn or NPM package manager - A Postgres image (it is recommended to use Docker for the database image). - A local copy of this repository **Running the application** 1. Go to the api folder and rename the `.env.example` file to `.env`; update the environment variables with the credentials from the steps above. 2. Install the dependencies with: ``` yarn ``` 3. Run the database migrations with: ``` yarn sequelize db:migrate ``` 4. Seed the units into the database with: ``` yarn sequelize db:seed:all ``` 5. Use the following command to mock an external bandejão API: ``` yarn dev ``` ### 🌐 Front-End Web The front-end is built with React using the Next.js framework, styled with styled-components. 
#### 🔥 Running the application **Prerequisites** To run the application you will need: - An up-to-date version of Node.JS - The Yarn or NPM package manager - A local copy of this repository **Running the application** 1. Go to the web folder, which contains the web front-end 2. Install the dependencies with: ``` yarn ``` 3. Mock the external API with: ``` yarn server ``` 4. Run the app with: ``` yarn dev ``` The application will run at [localhost:3000](http://localhost:3000/)
275
0.726995
por_Latn
0.877013
e136a84d83d5cf401f81504ef59f0497da4093eb
1,012
md
Markdown
README.md
BareConductive/picap-keyboard-py
0af7ead1ba7d7c8cd41e866e96d90d5837f8bd22
[ "MIT" ]
null
null
null
README.md
BareConductive/picap-keyboard-py
0af7ead1ba7d7c8cd41e866e96d90d5837f8bd22
[ "MIT" ]
null
null
null
README.md
BareConductive/picap-keyboard-py
0af7ead1ba7d7c8cd41e866e96d90d5837f8bd22
[ "MIT" ]
null
null
null
[![Bare Conductive](http://bareconductive.com/assets/images/LOGO_256x106.png)](http://www.bareconductive.com/) # Bare Conductive Keyboard Emulation Code for the [Bare Conductive Pi Cap](http://www.bareconductive.com/shop/pi-cap/). Allows you to emulate a keyboard and map keyboard strokes to the Pi Cap's electrodes. ## Requirements * Requires [python-dev](https://www.python.org/) (`apt-get install python-dev`) * Requires [WiringPi](http://wiringpi.com/) (`apt-get install wiringpi`) * Requires [uinput](https://github.com/tuomasjjrasanen/python-uinput) (`sudo pip install python-uinput`) * Requires [Bare Conductive's MPR121 library for WiringPi](https://github.com/BareConductive/wiringpi-mpr121) ## Install / Build * You should install this code as part of the Pi Cap Raspbian package: `sudo apt-get install picap` * However, if you are doing this yourself, clone the repository and follow the usage instructions. ## Usage modprobe uinput python keyboard.py N.B. must be run as root
44
169
0.748024
eng_Latn
0.612817
e13706d486e40f64df7c548dddcb01a50f70ca77
3,675
md
Markdown
_posts/2021-11-30-Prune-once-for-all.md
ClovaEffAI/ClovaEffAI.github.io
e9ea8e020d185bfa983b6a7f505fadf193d4cea1
[ "MIT" ]
null
null
null
_posts/2021-11-30-Prune-once-for-all.md
ClovaEffAI/ClovaEffAI.github.io
e9ea8e020d185bfa983b6a7f505fadf193d4cea1
[ "MIT" ]
null
null
null
_posts/2021-11-30-Prune-once-for-all.md
ClovaEffAI/ClovaEffAI.github.io
e9ea8e020d185bfa983b6a7f505fadf193d4cea1
[ "MIT" ]
null
null
null
--- title: "Prune Once for All: Sparse Pre-Trained Language Models" author: "Se Jung Kwon" sidebar: true author_profile: true categories: - Paper Review tags: - Language Model - Pruning - Knowledge Distillation - Transfer Learning - Model Compression - Quantization - Once for all --- ### Link : https://arxiv.org/pdf/2111.05754.pdf - 저자/학회 특이사항 - Intel Labs, Israel - ENLSP NeurIPS Workshop 2021 - 제안하는 방법에 대해서는 의구심이 조금 생기지만, 2/3/4장만 잘 읽어도 Pruning이나 Knowledge distillation에 대해서 공부하기는 좋을듯. #### Introduction - **Prune One for All** (Prune OFA)라는 방법을 제안한다. 기본적으로 BERT처럼 Pre-training 후에 작은 downstream dataset에 대해서 transfer fine-tuning하는 모델들을 대상으로 한다. 아마도 Generative Model (e.g. GPT)에 대해서는 적용하기 힘들것이라고 생각하는데, 아직까지 GPT를 Sparse하게 만들었다는 말은 들어보지 못했다. BERT는 정보를 모으는 Encoder다보니 비교적 쉽게 압축이 되는 편이다. - main contribution은 크게 세가지다. 1. sparse pre-trained model을 만드는 architecture-agnostic method를 제안한다. 2. downstream task로 sparse pre-trained model을 sparse하고 quantized된 형태로 잘 fine-tuning하는 효과적인 방법을 제시한다. 3. compression library를 publish한다. - 첫번째 contribution은 좀 거창하다고 생각되는게, BERT/DistilBERT만 가지고 architecture-agnostic하다고 주장하기에는 무리가 있다. 기껏해야 Roberta정도? 그나마 Roberta도 경험상, BERT와 경향이 많이 달랐었다기 때문에 적용이 될지 잘 모르겠다. KD나 Pruning 방법의 우수성이, 모델의 redundancy나 training 난이도와 함께 엮여서 과대 포장되는 경우가 많은데, 이 논문도 그렇게 읽힌다. - (Gordon et al., 2020)이라는 Paper가 자주 언급되는데, *Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning*라는 제목의 paper로, pre-training 단계에서도 pruning해보고, fine-tuning 이후에도 해보면서 여러 Insight를 얻어보고자 하는 논문이다 (https://arxiv.org/pdf/2002.08307.pdf). Gordon의 논문의 여러 Insight 중에서, pre-training 단계에서 pruning을 한것이, downstream fine-tuning 단계로 가져갈수 있다는 점을 본 논문이 주목한것 같다. 또한 (Chen at el, 2020)에서도 BERT에서의 Lottery ticket을 찾아보려고 하면서, pre-trained model의 lottery를 fine-tuning 단계로 들고 갈수도 있다는 점을 분석했는데, 이 부분도 중요한 insight로 다루고 있다. #### Prune Once for All - Pruning 방법은 (Zhu and Gupta, 2018)의 Gradual Mangnitude Pruning (GMP) 방법과 (Renda et al., 2020)의 Rewinding 방법(Learning Rate Rewinding, LRR)을 참고하는 듯 하다. 
unstructured (fine-grained) 방법을 사용했으나, 가속관점의 이슈는 전혀 다루지 않았다. - KD 방법은 우리가 아는 기본적인 방법을 그대로 사용했지만, Sparse Pre-trained model을 만들면서 한번 Teacher 모델이 사용되고, Fine-tuned sparse model for downstream task를 만들기 위해서 한번 더 들어간다 (모든 task별로 fine-tuned BERT teacher 모델이 다 필요합니다!). 다만 Pre-trained model은 loss가 MLM을 사용하기 때문에 'Teacher preparation'단계에서 loss를 (아마도) CLS 토큰에 맞게 fine-tuning하는 과정이 한번 더 들어가는 것 같다. - <img src="/assets/images/2021-11-30-Prune-once-for-all/f1.png" width="100%" height="100%" title="Figure 1" alt="Figure1"/> - 위의 그림에서 두번째 'Fine-tuned pre-trained LM'은 KD를 위해서 pre-trained model의 loss를 바꾼 것을 뜻한다. 뒤의 fine-tunign과 헷갈리면 안된다. - Sparse pre-trained LM의 sparsity pattern은 fine-tuning 단계에서 유지된다. #### Experimental Setup - <img src="/assets/images/2021-11-30-Prune-once-for-all/t1.png" width="100%" height="100%" title="Table 1/2" alt="Table 1/2"/> - Table 1에서 Transfer with KD가 없는 Chen/Gordon의 연구와 비교했을때 비슷하거나 더 좋은 결과를 얻을 것을 볼 수 있는데, 더 높은 sparsity를 보인다. transfer 단계에서 KD를 뺐기 때문에 sparse model을 만들면서 KD를 사용하는 것이 의미가 있는 것으로 보인다. - 흥미로운 점은 QAT까지 적용한 모델이 있는데, 추가적으로 한번더 KD가 포함된 QAT step을 돌리고 Q8BERT와 비슷한 방법을 사용했으며, activation을 위해서는 asymmetric을 사용했고, embedding 압축은 안했다고 한다. - <img src="/assets/images/2021-11-30-Prune-once-for-all/t3.png" width="100%" height="100%" title="Table 3" alt="Table 3"/> - Table 3에서는 그냥 downstream task 단계에서 KD를 포함해서 pruning한 것과 비교했다. 조금 더 좋은 결과를 보이는 것을 볼 수 있다. #### 총평 - architecture-agnostic이라는 말이 어떻게 review를 통과했을까 생각해보니, 아차 workshop 논문이었다. - Once-for-all 이라는 말을 쓸 수 있는지 잘 모르겠다. 결국 distillation을 위해서 이미 만들어진 fine-tuned 모델을 사용했는데..? - 큰 모델부터 KD를 쏟아부으면 fine-tuning 단계에서만 하는것보다 더 좋은 결과를 얻을수 있다는 뜻인데, 어찌보면 맞는 말이라 novelty가 있나 싶다.
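- For reference, the cubic sparsity schedule behind GMP (Zhu and Gupta, 2018) is easy to sketch — this is my own minimal reading of the formula, not code from the paper:

```python
def gmp_sparsity(step, t0, n, delta_t, s_i=0.0, s_f=0.9):
    """Cubic sparsity schedule from Zhu & Gupta (2018):
    s_t = s_f + (s_i - s_f) * (1 - (t - t0) / (n * delta_t)) ** 3
    for t in [t0, t0 + n * delta_t], clamped outside that range."""
    if step < t0:
        return s_i
    if step >= t0 + n * delta_t:
        return s_f
    frac = (step - t0) / (n * delta_t)
    return s_f + (s_i - s_f) * (1.0 - frac) ** 3

# Sparsity ramps from 0% to 90% over 100 pruning steps,
# fast at first and slowing down near the end
print([round(gmp_sparsity(t, 0, 100, 1), 4) for t in (0, 25, 50, 100)])
```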

---

*Next file: content/dnn/schedule-job.md (repo: jerkovicl/SistemiKB, license: MIT)*
/*
Title: How to Write a Custom DotNetNuke SchedulerClient (i.e. a Scheduled Task)
Sort: 1
*/

### Sub-Class From SchedulerClient

All of your scheduled tasks begin by sub-classing from the DotNetNuke.Services.Scheduling.SchedulerClient class.

Example:

```
class ScheduledTaskExample : SchedulerClient
```

### Overload Your Class Constructor

The next thing that DotNetNuke requires from a scheduled task is an overloaded constructor that accepts a DotNetNuke.Services.Scheduling.ScheduleHistoryItem object. This overloaded constructor must also call the constructor of its parent class; this is done by using the base() syntax in C#. The final thing your constructor must do is assign the ScheduleHistoryItem parameter to the ScheduleHistoryItem property of the class. Here is an example of a constructor that meets these requirements:

Example:

```
public ScheduledTaskExample(ScheduleHistoryItem objScheduleHistoryItem) : base()
{
    ScheduleHistoryItem = objScheduleHistoryItem;
}
```

### Override the DoWork() Method

The DoWork() method is the entry point for the execution of your scheduled task. When the DotNetNuke portal determines that it is time for your scheduled task to run, it creates an instance of your class and calls its DoWork() method, so this is where you put all your custom functionality. There are a few things to keep in mind when writing this method. First, you need to include logic that alerts the portal to whether or not your task succeeded at whatever it needed to do. You do this by enclosing your code within a try-catch block. If your code runs as planned, you set the Succeeded property of the ScheduleHistoryItem member to true; otherwise, you set it to false. In the catch portion of your try-catch block you will also want to set the Succeeded property to false, so that when an exception is thrown, the scheduled task reports that it failed.

Example:

```
public override void DoWork()
{
    try
    {
        // do some stuff ...

        // then report success to the scheduler framework
        ScheduleHistoryItem.Succeeded = true;
    }
    // handle any exceptions
    catch (Exception exc)
    {
        // report a failure
        ScheduleHistoryItem.Succeeded = false;

        // log the exception into the scheduler framework
        ScheduleHistoryItem.AddLogNote("EXCEPTION: " + exc.ToString());

        // call the Errored method
        Errored(ref exc);

        // log the exception into the DNN core
        Exceptions.LogException(exc);
    }
}
```

You will notice in the example above that there are a few other things you should do when an exception occurs. You should log the exception into the scheduler framework, which allows you to view the exception on your scheduler's history page; this is done by calling the AddLogNote() method on the ScheduleHistoryItem member. Second, you need to call the parent class's Errored() method so it can take its own necessary actions. Lastly, you should log the exception into the DNN core exception log by calling the LogException method on the DotNetNuke.Services.Exceptions.Exceptions class.

### Compile & Install

Once you are finished overriding the DoWork() method, you are ready to compile and install your new scheduled task. Compilation instructions vary depending on which tool and project type you are using, so I will not discuss those details here. Once your code is compiled into a DLL file, copy the DLL into the bin directory of your DNN Web site installation.

To install your new task, go to the Host menu > Schedule page and click the Add Item to Schedule link. Then enter the full class name, followed by a comma, followed by the name of your DLL. Configure it, hit Update, and you should be good to go.

### Some Tips

Since scheduled tasks do not run in the context of a Web page execution, nor in the context of any particular DotNetNuke portal, you will often need to find alternative ways to access the information that your task requires.
Instead of Server.MapPath(), use System.Web.HttpRuntime.AppDomainAppPath to get the file path to your DNN installation.

If your task uses DNN framework methods that require you to pass the ID of a portal, you can either use a PortalController to get all of the portal IDs and loop through them, or you can access particular portal IDs by storing and retrieving them using System.Configuration.ConfigurationSettings.AppSettings. AppSettings allows you to store configuration information inside the .config files within the .NET configuration hierarchy. Likewise for tab IDs and module IDs, you may have to manually store and retrieve these from a .config file.

Another thing to keep in mind is that the Add/GetSetting methods on the ScheduleHistoryItem class are quite limited. You might be tempted to use them, but you'll quickly be scratching your head because there is no easy way to update an existing setting. So you will need to find an alternate method to store settings for your scheduled task.

Here is an example of a working scheduled task. It writes out a file to the root folder of your web application.
ScheduledTaskExample.cs:

```
using System;
using System.IO;
using System.Web;
using DotNetNuke.Services.Scheduling;
using DotNetNuke.Services.Exceptions;

namespace Kemmis.Examples.DotNetNuke
{
    class ScheduledTaskExample : SchedulerClient
    {
        public ScheduledTaskExample(ScheduleHistoryItem objScheduleHistoryItem) : base()
        {
            ScheduleHistoryItem = objScheduleHistoryItem;
        }

        public override void DoWork()
        {
            try
            {
                // perform some task: append a line to a file in the web root
                String strPath = HttpRuntime.AppDomainAppPath + "CS_DID_IT.TXT";
                using (StreamWriter sw = new StreamWriter(strPath, true))
                {
                    sw.WriteLine(DateTime.Now.ToString() + " - C# DID IT!");
                    sw.Close();
                }

                // report success to the scheduler framework
                ScheduleHistoryItem.Succeeded = true;
            }
            catch (Exception exc)
            {
                ScheduleHistoryItem.Succeeded = false;
                ScheduleHistoryItem.AddLogNote("EXCEPTION: " + exc.ToString());
                Errored(ref exc);
                Exceptions.LogException(exc);
            }
        }
    }
}
```

---

*Next file: user/pages/03.work/02.ride-the-tide/_banner/banner.md (repo: phulongnls/shadowfactory, license: MIT)*
---
title: banner
media_order: rideTheTide-e9262188756f4e27436aab652cec3c2d.png
banner_image: rideTheTide-e9262188756f4e27436aab652cec3c2d.png
---

---

*Next file: support/windows-server/deployment/pcr7-configuration-binding-not-possible.md (repo: ChrisKibble/SupportArticles-docs, licenses: CC-BY-4.0, MIT)*
---
title: 'Windows Server shows PCR7 configuration as "Binding not possible"'
description: 'Introduces the PCR7 configuration "Binding not possible" issue and its cause.'
ms.date: 09/08/2020
author: Deland-Han
ms.author: delhan
manager: dcscontentpm
audience: itpro
ms.topic: troubleshooting
ms.prod: windows-server
localization_priority: medium
ms.reviewer: kaushika
ms.custom: sap:setup, csstroubleshoot
ms.technology: windows-server-deployment
---

# Windows Server shows PCR7 configuration as "Binding not possible"

This article introduces the **Binding not possible** issue in msinfo32 and the cause of the issue.

## PCR7 Configuration in msinfo32

Consider the following scenario:

- Windows Server is installed on a secure boot-enabled platform.
- You enable Trusted Platform Module (TPM) 2.0 in Unified Extensible Firmware Interface (UEFI).
- You turn on BitLocker.
- You install chipset drivers and apply the latest Microsoft Monthly Rollup.
- You also run *tpm.msc* to make sure that the TPM status is normal. The status displays **The TPM is ready for use**.

In this scenario, when you run *msinfo32* to check the PCR7 Configuration, it's displayed as **Binding not possible**.

## Cause of the unexpected message

Microsoft only accepts the Microsoft Windows PCA 2011 certificate for signing the boot components that are measured into PCR7 for BitLocker binding. Any other signature present on boot code will cause BitLocker to use TPM profile 0, 2, 4, 11 instead of 7, 11. In some cases, the binaries are signed with the UEFI CA 2011 certificate, which will prevent you from binding to PCR7.

> [!Note]
> The UEFI CA can be used to sign third-party applications, Option ROMs, or even third-party boot loaders that can load malicious (UEFI CA-signed) code. In this case, BitLocker switches to PCR 0, 2, 4, 11. The exact binary hashes are measured rather than the CA certificate, which means less exposure to attacks.
>
> Servers are secure regardless of whether they use TPM profile 0, 2, 4, 11 or profile 7, 11.
## More information

To check whether your device meets the requirements:

1. Open an elevated command prompt, and run the `msinfo32` command.
2. In **System Summary**, verify that **BIOS Mode** is **UEFI**, and **PCR7 Configuration** is **Bound**.
3. Open an elevated PowerShell command prompt, and run the following command:

   ```powershell
   Confirm-SecureBootUEFI
   ```

   Verify that the value of **True** is returned.
4. Run the following PowerShell command:

   ```powershell
   manage-bde -protectors -get $env:systemdrive
   ```

   Verify that the drive is protected by PCR 7:

   ```powershell
   PS C:\Windows\system32> manage-bde -protectors -get $env:systemdrive
   BitLocker Drive Encryption: Configuration Tool version 10.0.22526
   Copyright (C) 2013 Microsoft Corporation. All rights reserved.

   Volume C: [OSDisk]
   All Key Protectors

       TPM:
         ID: <GUID>
         PCR Validation Profile:
           7, 11
           (Uses Secure Boot for integrity validation)
   ```

---

*Next file: content/en/publication/arnaldo-2013-boosting/index.md (repo: isaacgrafo/starter-hugo-research-group, license: MIT)*
---
# Documentation: https://wowchemy.com/docs/managing-content/

title: 'Boosting the 3D thermal-aware floorplanning problem through a master-worker parallel MOEA'
subtitle: ''
summary: ''
authors:
- Ignacio Arnaldo
- Alfredo Cuesta-Infante
- J. Manuel Colmenar
- José L. Risco-Martín
- José L. Ayala
tags: []
categories: []
date: '2013-01-01'
lastmod: 2022-01-26T16:45:08+01:00
featured: false
draft: false

# Featured image
# To use, add an image named `featured.jpg/png` to your page's folder.
# Focal points: Smart, Center, TopLeft, Top, TopRight, Left, Right, BottomLeft, Bottom, BottomRight.
image:
  caption: ''
  focal_point: ''
  preview_only: false

# Projects (optional).
# Associate this post with one or more of your projects.
# Simply enter your project's folder or file name without extension.
# E.g. `projects = ["internal-project"]` references `content/project/deep-learning/index.md`.
# Otherwise, set `projects = []`.
projects: []
publishDate: '2022-01-26T15:45:07.594712Z'
publication_types:
- '2'
abstract: The increasing transistor scale integration poses, among others, the thermal-aware floorplanning problem consisting of how to place the hardware components in order to reduce overheating by dissipation. Because of the huge amount of feasible floorplans, most of the solutions found in the literature include an evolutionary algorithm for, either partially or completely, carrying out the task of floorplanning. Evolutionary algorithms usually have a bottleneck in the fitness evaluation. In the problem of thermal-aware floorplanning, the layout evaluation by the thermal model takes 99.5% of the computational time for the best floorplanning algorithm proposed so far. The contribution of this paper is to present a parallelization of this evaluation phase in a master-worker model to achieve a dramatic speed-up of the thermal-aware floorplanning process. Exhaustive experimentation was carried out over 3D integrated circuits, with 48 and 128 cores, outperforming previous published works. Copyright © 2012 John Wiley & Sons, Ltd.
publication: '*Concurrency and Computation: Practice and Experience*'
doi: http://dx.doi.org/10.1002/cpe.2902
---

---

*Next file: docs/69.0.0/default/AWS/VoiceID/index.md (repo: Cicatrice/cfn-libsonnet, license: Apache-2.0)*
---
permalink: /69.0.0/default/AWS/VoiceID/
---

# AWS.VoiceID

* [Domain](Domain.md)

---

*Next file: DOCS/Release_Notes/CMAQv5.3.1_bugfixes.md (repo: Simeng-unique/CMAQ-changed, license: CC0-1.0)*
# CMAQv5.3.1 Bugfixes

## 1. *CTM_WVEL* run script option

[Ben Murphy](mailto:Murphy.Ben@epa.gov), U.S. Environmental Protection Agency

### Description of model issue

The issue occurs when CTM_WVEL, a run-time science option that writes the vertical velocity component to the concentration file, is set to N. The default setting is currently listed as Y in all run scripts within the repository. If the CTM_WVEL science option is set to N, the model immediately crashes, because the array that stores the vertical velocity component for writing to the concentration file is never allocated, yet it is used to calculate the average vertical velocity written to the average concentration file.

### Solution in CMAQv5.3.1

The array that stores the vertical velocity component for writing is now properly allocated. The model will no longer terminate execution with a segmentation fault. The updates also allow users to flexibly toggle CTM_WVEL on/off independently of the CONC_SPCS_LIST. **Note: If the user decides to write this diagnostic variable out, the variable will be reported to both the conc and aconc species.**

### Files Affected

CCTM/src/driver/AVG_CONC.F
CCTM/src/driver/STD_CONC.F
CCTM/src/driver/WVEL_DEFN.F
CCTM/src/driver/driver.F
CCTM/src/driver/wr_aconc.F
CCTM/src/driver/wr_conc.F
CCTM/src/driver/wr_init.F
CCTM/src/init/opaconc.F
CCTM/src/init/opconc.F
CCTM/src/vadv/local_cons/zadvyppm.F
CCTM/src/vadv/wrf_cons/zadvppmwrf.F
CCTM/scripts/run_cctm_2010_4CALIF1.csh (Moved CTM_WVEL to diagnostic outputs)
CCTM/scripts/run_cctm_2011_12US1.csh (Moved CTM_WVEL to diagnostic outputs)
CCTM/scripts/run_cctm_2014_12US1.csh (Moved CTM_WVEL to diagnostic outputs)
CCTM/scripts/run_cctm_2015_HEMI.csh (Moved CTM_WVEL to diagnostic outputs)
CCTM/scripts/run_cctm_2016_12US1.csh (Moved CTM_WVEL to diagnostic outputs)
CCTM/scripts/run_cctm_Bench_2011_12SE1.csh (Moved CTM_WVEL to diagnostic outputs)
CCTM/scripts/run_cctm_Bench_2016_12SE1.csh (Moved CTM_WVEL to diagnostic outputs)

## 2. Error reading multiple Region Files for use in DESID

[Ben Murphy](mailto:Murphy.Ben@epa.gov), U.S. Environmental Protection Agency

### Description of model issue

When multiple region files were read in, the model crashed with a segmentation fault.

### Solution in CMAQv5.3.1

In the initialization routine for region masks, the population of the region masks should occur outside of the file loop rather than inside. Essentially, the arrays storing the region values were being incremented incorrectly, beyond the appropriate length set by the allocation commands.

### Files Affected

centralized_io_module.F

## 3. Diagnostic File for Lightning NOx

[Daiwen Kang](mailto:Kang.Daiwen@epa.gov), U.S. Environmental Protection Agency

### Description of model issue

The 3D diagnostic files are meant to provide the lightning NO emissions at each vertical grid cell. The current implementation mistakenly accumulates the emissions from lower layers; i.e., it is correct for the lowest layer (Layer 1), but the values at Layer 2 are the sum of Layers 1 and 2, the values at Layer 3 are the sum of Layers 1 through 3, and so on.

### Solution in CMAQv5.3.1

We have now removed the accumulation loop and output the emissions at each layer correctly.

### Files Affected

CCTM/emis/emis/LTNG_DEFN.F

## 4. Updates to CCTM Centralized I/O (CIO) module

[David Wong](mailto:Wong.David-C@epa.gov), U.S. Environmental Protection Agency

### Description of model issue

The current implementation of the Centralized Input/Output Module was encoded based on three assumptions:

1. non-meteorology input data was expected to be at the same frequency as the output [tstep](../Users_Guide/Appendix/CMAQ_UG_appendixA_model_options.md#timestep-configuration) (a run script environment variable the user sets).
2. all gridded emissions have the same number of layers.
3. the CCTM run starts at the zeroth hour.
### Solution in CMAQv5.3.1

Issue #1: A new algorithm was developed to keep track of the time step from each input file and to allow the model to write data out at the pre-defined frequency in the run script. The algorithm also stores the start date and start time of each file, in case the user has emissions input data that uses representative days. A new environment variable was also re-introduced to keep track of which emissions files are representative days and which are not. **Note: this algorithm only allows a maximum of 500 files to be opened.**

Issue #2: A new array was introduced to store the number of layers in each emission file. Using this new information, the buffer array storing the emissions data being read in was re-allocated to be no greater than the size of the initial condition file, but no smaller than the size of the largest emissions file. In addition, each gridded emission file is extracted using the newly introduced array that stores the number of layers in each emission file. **Note: this algorithm does not limit the extraction of data greater than the model top (i.e. files that have nlays greater than the model top). However, doing so will cause a segmentation fault with memory issues, as what is allocated will not match what is being extracted.**

Issue #3: The date advancement is now properly updated, i.e., the local time update is performed only when the model date is updated. An exit call is also implemented to stop the model when an improper interpolation takes place. During the exit call, the interpolation date, time, and bounds will be sent to the [processor log files](../Users_Guide/CMAQ_UG_ch05_running_a_simulation.md#571-cctm-logfiles) for further debugging.
### Files Affected

CCTM/src/cio/centralized_io_module.F
CCTM/src/cio/centralized_io_util_module.F
CCTM/src/emis/emis/EMIS_DEFN.F
CCTM/src/emis/emis/PT3D_DEFN.F
CCTM/src/driver/advstep.F
CCTM/src/phot/inline/concld_prop_acm.F
CCTM/src/cloud/acm_ae7_kmt2/rescld.F
CCTM/src/cloud/acm_ae7_kmt2/convcld_acm.F
CCTM/src/cloud/acm_ae6_mp/rescld.F
CCTM/src/cloud/acm_ae6_mp/convcld_acm.F
CCTM/src/cloud/acm_ae6/rescld.F
CCTM/src/cloud/acm_ae6/convcld_acm.F
CCTM/src/emis/emis/EMIS_VARS.F
CCTM/src/emis/emis/STK_EMIS.F
CCTM/src/emis/emis/opemis.F
CCTM/src/util/util/RUNTIME_VARS.F
CCTM/scripts/run_cctm_2010_4CALIF1.csh
CCTM/scripts/run_cctm_2011_12US1.csh
CCTM/scripts/run_cctm_2014_12US1.csh
CCTM/scripts/run_cctm_2015_HEMI.csh
CCTM/scripts/run_cctm_2016_12US1.csh
CCTM/scripts/run_cctm_Bench_2011_12SE1.csh
CCTM/scripts/run_cctm_Bench_2016_12SE1.csh

## 5. STAGE

[Jesse Bash](mailto:Bash.Jesse@epa.gov), U.S. Environmental Protection Agency

### Description of model issue

Two issues were identified in CMAQv5.3 NH3 output when running with bidirectional exchange: 1) the deposition to surface waters was omitted from the diagnostic NH3_Dep output, and 2) modifications were needed to the deposition species definition file so that post-processing accurately captures nitrogen deposition.

### Solution in CMAQv5.3.1

The aggregation of fluxes from land use types was simplified, and a model conditional statement was removed to correct the omission of NH3 deposition to surface waters in the diagnostic deposition output. Note that this bug did not impact the diagnostic land-use-specific deposition totals output when *setenv CTM_MOSAIC = T*. The diagnostic deposition output was remapped to allow users to use the standard deposition species definition file (*SpecDef_Dep*) distributed with the model.
The new mapped diagnostic species in the DRYDEP file are:

- NH3 – NH<sub>3</sub> dry deposition (positive values are deposition)
- NH3_Flux – NH<sub>3</sub> surface flux (positive values are deposition and negative values are emission)
- NH3_Wat – NH<sub>3</sub> flux over water bodies (positive values are emissions and negative values are deposition)
- NH3_Ag – NH<sub>3</sub> flux over agriculture land use (positive values are emissions and negative values are deposition)
- NH3_Nat – NH<sub>3</sub> flux over non-agriculture land use (positive values are emissions and negative values are deposition)
- NH3_Emis – Diagnostic grid cell NH<sub>3</sub> emissions from fertilizers and biogenic sources (positive values are emissions)
- NH3_Soil – NH<sub>3</sub> flux from soil pathways (positive values are emissions and negative values are deposition)
- NH3_Stom – NH<sub>3</sub> flux from leaf stomatal pathways (positive values are emissions and negative values are deposition)
- NH3_Cut – NH<sub>3</sub> flux from leaf cuticular pathways (positive values are emissions and negative values are deposition)

### Files Affected

STAGE_MOD.F
Vdiffproc.F
opddep.F

## 6. ISAM

[Sergey Napelenok](mailto:Napelenok.Sergey@epa.gov), U.S. Environmental Protection Agency

### Description of model issue

Specifying "PM25_IONS" as a TAG CLASS resulted in unstable attribution output for ACLI and ACLJ concentration and deposition.

### Solution in CMAQv5.3.1

ACLI and ACLJ were removed from the "PM25_IONS" TAG CLASS, and a new class called "CHLORINE" was added. This TAG CLASS also includes HCL gas in addition to ACLI and ACLJ, and the algorithms now include partitioning calculations.

### Files Affected

CCTM/scripts/isam_control.txt
CCTM/src/isam/SA_DEFN.F
CCTM/src/isam/SA_WRAP_AE.F

## 7. Coupled WRF-CMAQ Model

[David Wong](mailto:Wong.David-C@epa.gov), U.S. Environmental Protection Agency

### Description of model issue

1. Setting [CTM_BIOGEMIS](../Users_Guide/Appendix/CMAQ_UG_appendixA_model_options.md#science-options) to Y in the WRF-CMAQ model did not correctly produce the SOILOUT file after a simulation period was completed. This led to a crash when restarting the model the next day with initialization from the previous day's run. This issue was traced back to the inline biogenics algorithm, which only writes the SOILOUT file when the model has reached its run length, a run script environment variable (CTM_RUNLEN). However, in the WRF-CMAQ model this environment variable was not being read, so the default value of 48 hours defined in RUNTIME_VARS.F was used. Hence, SOILOUT was only produced at the 48th hour.
2. Setting [CTM_WBDUST](../Users_Guide/Appendix/CMAQ_UG_appendixA_model_options.md#science-options) to Y in the WRF-CMAQ model and setting [CTM_WBDUST](../Users_Guide/Appendix/CMAQ_UG_appendixA_model_options.md#science-options) to "unknown" results in a crash. This crash is a result of the bounds of extraction being incorrect.

### Solution in CMAQv5.3.1

Issue #1: The WRF-CMAQ model was updated to properly read the environment variable CTM_RUNLEN in RUNTIME_VARS.F.

Issue #2: Variables were added to store the calculation of the bounds for the land-use database from the appropriate file, whether it comes from aqprep (the MCIP counterpart) or from BELD data.

### Files Affected

CCTM/src/cio/centralized_io_module.F
CCTM/src/util/util/RUNTIME_VARS.F

---

*Next file: README.md (repo: vldv/portfolio_analysis, license: Unlicense)*
# portfolio_analysis

A bunch of more or less elaborate code to look into stock performance.

---

*Next file: README.md (repo: darkelfe14728/phpdox_engine_twig, license: CC-BY-4.0)*
# A phpDox engine using Twig

![nb releases](https://badgen.net/github/tags/darkelfe14728/phpdox_engine_twig?label=Nb%20releases)
![last release](https://badgen.net/github/tag/darkelfe14728/phpdox_engine_twig?label=Last%20release&color=yellow)
![licence](https://badgen.net/badge/license/CC%20BY%204.0/red)

This is an additional engine for [phpDox](http://phpdox.de/) based on [Twig](https://twig.symfony.com/) templates.

---

*Next file: tag/arrays.markdown (repo: IsabelVazquez/IsabelVazquez.github.io, license: MIT)*
---
layout: tagpage
title: "Tag: arrays"
tag: arrays
robots: noindex
---

---

*Next file: readme.md (repo: duynguyenhoang/goodwork, license: MIT)*
[![License](http://img.shields.io/badge/license-MIT-brightgreen.svg)](https://github.com/iluminar/goodwork/blob/dev/LICENSE)
[![Build Status](https://travis-ci.org/iluminar/goodwork.svg?branch=dev)](https://travis-ci.org/iluminar/goodwork)
[![Stable Version](https://poser.pugx.org/iluminar/goodwork/v/stable)](https://github.com/iluminar/goodwork)
[![Laravel Version](https://img.shields.io/badge/Laravel-6.0-brightgreen.svg?style=flat)](https://github.com/laravel/laravel)
[![VueJS Version](https://img.shields.io/badge/vue-2.5.13-brightgreen.svg?style=flat)](https://github.com/vuejs/vue)
[![codecov](https://codecov.io/gh/iluminar/goodwork/branch/master/graph/badge.svg)](https://codecov.io/gh/iluminar/goodwork)
[![StyleCI](https://styleci.io/repos/81873619/shield?branch=dev&style=flat)](https://styleci.io/repos/81873619)
[![Join on discord](https://img.shields.io/badge/join%20on-discord-orange)](https://discord.gg/4DvTQsc)
[![Join on goodwork](https://img.shields.io/badge/join%20on-goodwork-orange.svg)](https://goodworkfor.life/register/invite-link/ovCPAFpnwIhrvqUrlvynarP9HVRBC5mH)

<img src="public/logos/logo.png" alt="Goodwork" style="max-width:100%;">

Self-hosted project management and collaboration tool inspired by Basecamp.

<hr>

<p align="center">
  <b><a href="#about-goodwork">Overview</a></b> |
  <b><a href="#demo">Demo</a></b> |
  <b><a href="#installation">Installation</a></b> |
  <b><a href="#screenshots-top">Screenshots</a></b> |
  <b><a href="#contributing-top">Contributing</a></b> |
  <b><a href="#supporting-top">Supporting</a></b> |
  <b><a href="#credits-top">Credits</a></b> |
  <b><a href="#license-top">License</a></b>
</p>

<hr>

## About Goodwork

Goodwork is a simple project management and collaboration tool for software teams. It is open source and [MIT licensed](https://github.com/iluminar/goodwork/blob/dev/LICENSE). Goodwork is self-hosted software (no dependency on anyone else, and only you keep your data).
Goodwork brings all the components required for your project to run smoothly into one place, so that you have a single source of truth. A collection of separate tools or services gets messy, with important details hard to find because they are scattered all over the place; Goodwork instead organizes everything in a central place where everyone in the company knows what to do, knows where things stand, and can find things without having to ask around.

> Goodwork is available in 23 different languages!

[Overview](https://github.com/iluminar/goodwork/wiki/Overview)

## Demo

You can test a live instance of Goodwork (as a guest user) using the credentials below. This user has limited permissions, so you'll only see a handful of the features. You can access the demo site at the following URL: https://goodworkfor.life

`email: guest@example.com`

`password: guestpass`

## Installation

[Install via docker](https://github.com/iluminar/goodwork/wiki/Installation#setup-using-docker)

[Install manually](https://github.com/iluminar/goodwork/wiki/Installation#setup-usual-way-if-youre-not-using-docker)

## Screenshots <small>[↑Top](#about-goodwork)</small>

![Dashboard](https://i.imgur.com/oPlF1bi.png)
![Create Task Form](https://i.imgur.com/QlkS0IJ.png)
![Task Board](https://i.imgur.com/sfl2hLr.png)
![Task Details](https://i.imgur.com/J6wKeNL.png)
![Discussion Board](https://i.imgur.com/DgsIScx.png)
![Create Discussion Form](https://i.imgur.com/gHKGAjc.png)
![Discussion Details](https://i.imgur.com/NchQpJE.png)
![Files Board](https://i.imgur.com/iaQDmQR.png)
![Message Board](https://i.imgur.com/neakUm5.png)
![Direct Message](https://i.imgur.com/C3kbApV.png)
![Profile Page](https://i.imgur.com/MOS2k8l.png)
![Account Page](https://i.imgur.com/TelYaCs.png)
![Activities](https://i.imgur.com/FfYSOq1.png)
![Roles Board](https://i.imgur.com/TfRMzuf.png)

## Contributing <small>[↑Top](#about-goodwork)</small>

Thank you for considering contributing to the Goodwork Project! The contribution guide can be found in the [Contribution Guideline](https://github.com/iluminar/goodwork/wiki/Contribution-Guideline).

You can join the Goodwork Project via this [link](https://goodworkfor.life/register/invite-link/ovCPAFpnwIhrvqUrlvynarP9HVRBC5mH).

You can also join the Discord server via this [link](https://discord.gg/4DvTQsc).

## Supporting <small>[↑Top](#about-goodwork)</small>

Goodwork is an MIT-licensed open source project with its ongoing development made possible thanks to the support of our amazing backers. Support the development of Goodwork by being a sponsor or a backer.

<a href="https://opencollective.com/goodwork#sponsor"><img alt="become a sponsor" src="https://opencollective.com/goodwork/sponsors.svg" height="35px"></a>

<a href="https://opencollective.com/goodwork#sponsor"><img alt="become a backer" src="https://opencollective.com/goodwork/backers.svg" height="35px"></a>

You can also fund specific issues on Issuehunt, and the money will be distributed to contributors and maintainers.

[![issuehunt-to-marktext](https://github.com/BoostIO/issuehunt-materials/raw/master/v1/issuehunt-button-v1.svg?sanitize=true)](https://issuehunt.io/repos/81873619)

## Security Vulnerabilities <small>[↑Top](#about-goodwork)</small>

If you discover a security vulnerability within Goodwork, please send an e-mail to searching.nehal@gmail.com instead of creating a new issue. All security vulnerabilities will be promptly addressed.
## Credits <small>[↑Top](#about-goodwork)</small> - Author: [Nehal Hasnayeen](https://github.com/Hasnayeen) (https://hasnayeen.github.io) - Logo Credit: [Nehal Hasnayeen](https://github.com/Hasnayeen) (Improved upon earlier version by [Malcolm Nihlén](https://github.com/scriptcoded)) - Illustrations Credit: [Undraw](https://undraw.co/) - [Full Contributors List](https://github.com/iluminar/goodwork/graphs/contributors) ![](https://opencollective.com/goodwork/contributors.svg?width=890&button=false) ## License <small>[↑Top](#about-goodwork)</small> Goodwork is open-sourced software licensed under the [MIT license](http://opensource.org/licenses/MIT).
---
TOCTitle: 'Step-by-Step Guide to Windows Server Update Services 3.0 SP2'
Title: 'Step-by-Step Guide to Windows Server Update Services 3.0 SP2'
ms:assetid: '4b504edc-93b3-45b0-a7e8-d0107f1a4442'
ms:contentKeyID: 21669461
ms:mtpsurl: 'https://technet.microsoft.com/de-de/library/Dd939822(v=WS.10)'
---

Step-by-Step Guide to Windows Server Update Services 3.0 SP2
============================================================

Windows Server Update Services 3.0 Service Pack 2 (WSUS 3.0 SP2) provides a comprehensive solution for managing updates on your network. This guide contains instructions for the basic tasks of installing and deploying WSUS 3.0 SP2 on your network.

The guide contains the following sections:

- [Step 1: Confirm the WSUS 3.0 SP2 installation requirements](https://technet.microsoft.com/ec01bd75-5def-4899-8cee-ddab827bbd83)
- [Step 2: Install the WSUS server or the WSUS administration console](https://technet.microsoft.com/6db6fcb0-c55d-43b9-9b07-4040c6267759)
- [Step 3: Configure the network connections](https://technet.microsoft.com/42a144c5-f08e-4a6e-b360-47ddea77bd24)
- [Step 4: Configure updates and synchronization](https://technet.microsoft.com/deeaa7e1-9b50-45cb-9537-d75f70de3405)
- [Step 5: Configure client updates](https://technet.microsoft.com/5ae60ead-3e94-456c-a692-c0f193ea5d5a)
- [Step 6: Configure computer groups](https://technet.microsoft.com/70518732-2179-4e41-9609-7f9999867f41)
- [Step 7: Approve and deploy WSUS updates](https://technet.microsoft.com/c4e58e17-d5e3-4194-8f26-b459e0c03b86)

Additional Resources
--------------------

WSUS 3.0 SP2 is a versatile update-management solution. Complete information on installing and operating WSUS can be found here:

- WSUS Deployment Guide at [http://go.microsoft.com/fwlink/?LinkId=139832](http://go.microsoft.com/fwlink/?linkid=139832) (in English).
- WSUS Operations Guide at [http://go.microsoft.com/fwlink/?LinkId=139838](http://go.microsoft.com/fwlink/?linkid=139838) (in English).
- WSUS Release Notes at [http://go.microsoft.com/fwlink/?LinkId=139840](http://go.microsoft.com/fwlink/?linkid=139840) (in English).
- The online help of the WSUS administration console.
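Configuring client updates (Step 5 above) is usually done by pointing the Windows Update Agent at the WSUS server through Group Policy. As a sketch, the equivalent registry-based policy looks roughly like the following; the server URL `http://wsus-server:8530` is a placeholder for your own WSUS server address and port, not a value taken from this guide.

```
Windows Registry Editor Version 5.00

; Point the Windows Update Agent at an intranet WSUS server.
; Replace http://wsus-server:8530 with your own server URL (placeholder).
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate]
"WUServer"="http://wsus-server:8530"
"WUStatusServer"="http://wsus-server:8530"

; Tell Automatic Updates to use the intranet server configured above.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"UseWUServer"=dword:00000001
```

After such a policy is applied, clients report to the WSUS server instead of Microsoft Update; the exact port depends on how the WSUS web site was set up during Step 2.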
# Magic-SpaceX

Meet the SpaceX team, their work, their rockets, and the upcoming launch schedule!

Magic SpaceX is an Android app written in Kotlin, built with the newest practices and architecture and with Jetpack Compose.
---
layout: post
title: SANYA 2018
type: photo
categories: explore
imagefeature: https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3142.jpg?x-oss-process=image/resize,p_15
date: 2018-03-17
description: "Cynthia's first long-distance trip"
comments: false
---

This was my second time in Sanya, and Xiaoyu's first long-distance trip. The little one had a wonderful time.

### Choosing

Our daughter is still small, and since this was her first trip we wanted somewhere nearby, with fewer uncertainties around healthcare and safety; at the same time we wanted to take her to the seaside. Without much deliberation, we picked Sanya as the destination. The next choice was which of Sanya's four main bays to stay in, each with its pros and cons:

**Sanya Bay** (about 15 mins from the airport without traffic)
- Pros: good-value hotels, convenient transport, great for sunsets (unobstructed to the west)
- Cons: mediocre water and beach

**Dadonghai** (about 20 mins)
- Pros: transport is still fairly convenient, good value
- Cons: unremarkable scenery (the earliest-developed area, somewhat run-down)

**Yalong Bay** (about 30 mins)
- Pros: good beach and water
- Cons: expensive, inconvenient transport

**Haitang Bay** (about 50 mins)
- Pros: quiet, decent water quality, moderate prices, close to the duty-free mall
- Cons: the least convenient transport

Since the plan was to stay at the hotel with the baby, we chose the farthest but quietest option: **Haitang Bay**.

Haitang Bay is a resort district, with a large high-end hotel every little stretch along the coast, so there are plenty of hotels to choose from. After comparing ratings on Booking and Ctrip, weighing price, kid-friendly facilities, and other factors, and finally reading through plenty of user reviews, we settled on the **Grand Hyatt** (whose environment and facilities did, in the end, prove satisfying).

### Arrival

<figure class="half">
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_2984.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_2984.jpg?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_2982.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_2982.jpg?x-oss-process=image/resize,p_30"></a>
</figure>

After a 3-hour flight and a 55-minute drive, we arrived smoothly at the hotel; it was already around 15:00.

<figure class="half">
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3003.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3003.jpg?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3027.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3027.jpg?x-oss-process=image/resize,p_30"></a>
</figure>

#### Fun

Every day's activity was taking our daughter running excitedly to the beach: kicking at the waves, building sandcastles, splashing in the water, feeding the rabbits. We simply laughed along with her carefree, innocent fun, and Sanya's blazing sunshine made it all feel wonderful.

<figure class="half">
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_9501.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_9501.jpg?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3186.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3186.jpg?x-oss-process=image/resize,p_30"></a>
</figure>

<figure class="half">
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/EB04FC16-1C99-44B0-A10C-D86F36A32E2D-3506-000001AD4009DE92_tmp.JPG"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/EB04FC16-1C99-44B0-A10C-D86F36A32E2D-3506-000001AD4009DE92_tmp.JPG?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/E70ECBD2-FA73-4663-9E86-4104CA8A30E9-3506-000001A88D228699_tmp.JPG"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/E70ECBD2-FA73-4663-9E86-4104CA8A30E9-3506-000001A88D228699_tmp.JPG?x-oss-process=image/resize,p_30"></a>
</figure>

#### Dining

Connected directly to the hotel is an open-air food plaza billed as an international food city. "International" is a stretch: it was built only recently, the vendors that have moved in are limited, and most belong to the same parent company, so all of our meals over those few days ended up being served by "東榕囍家".

<figure class="third">
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3094.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3094.jpg?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3172.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3172.jpg?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3249.JPG"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3249.JPG?x-oss-process=image/resize,p_30"></a>
</figure>

<figure class="third">
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3192.JPG"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3192.JPG?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3248.JPG"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3248.JPG?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3250.JPG"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3250.JPG?x-oss-process=image/resize,p_30"></a>
</figure>

#### Shopping

A 5-minute drive from the hotel is the duty-free mall. Like all large seaside buildings, it is unusually huge, which serves as a major draw bringing tourists to this remote area; at least according to Yuqian, the prices really are fairly cheap! While waiting for Mom to finish shopping, I ran a few laps back and forth through the mall with the little one.

<figure class="third">
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3229.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3229.jpg?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3233.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3233.jpg?x-oss-process=image/resize,p_30"></a>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3236.jpg"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3236.jpg?x-oss-process=image/resize,p_30"></a>
</figure>

### The trip home

We packed our bags, took an early-morning car to the airport, and after another roughly 3-hour flight finally made it back to Hangzhou.

Traveling with a baby is a real test of energy, but I'm very glad our daughter completed her first long-distance trip. Before we came back I asked her:

+ "Do you like Sanya or Hangzhou?"
+ "I like them the same."
+ "But Sanya has the sea and the beach."
+ "But in Hangzhou I can catch fish, and my little animals (toys) are there."

I hadn't realized children are so easily content. What I hope to tell her is: wherever we are, as long as the three of us are together, that's home.

<figure>
  <a href="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3142.jpg?x-oss-process=image/resize,p_20"><img src="https://xumeng-me.oss-cn-hangzhou.aliyuncs.com/sanya2018/photos/IMG_3142.jpg?x-oss-process=image/resize,p_20"></a>
</figure>

> "It would be an empty universe indeed if it were not for my family and friends." -- Stephen Hawking (1942~2018)
<!--
 * @Description: machine-design-theory-Study-Notes
 * @Author: CanNev
 * @Date: 2019-10-15 09:10:21
 * @LastEditTime: 2019-10-15 10:03:56
 * @LastEditors: Please set LastEditors
 -->

# machine-design-theory-Study-Notes

## Description

Course materials from my former major, rewritten in _Markdown_ and _HTML_, with the calculations implemented in _JavaScript_, edited in _Vim_, and hosted on _GitHub_ — all so that I can master these computing tools at a smaller **mental cost**.

## Plan

- Rewrite these textbooks and handbooks in Markdown and HTML
- Attach more detailed charts and images
- Lay out the pages with HTML and CSS
- Add interactive effects to the documents with JavaScript
- Build JavaScript-based automatic formula calculators and verification formulas
- Longer term: build tools for teaching or engineering design on the Electron framework
- (more ideas to come)

## Closing remarks

- This whole project is just personal practice. It is open-sourced as fully as possible under the MIT License, for anyone intending to switch fields and for everyone working in education.
- PS: then again, what traditional mechanical engineer would actually dig around GitHub for mechanical-engineering materials anyway

> If you find it valuable, feel free to use it, and just star it when you're done. No problem at all.
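The plan above mentions JavaScript-based formula calculators. As a minimal sketch of what one could look like, here is the standard torsional shaft-sizing check d = ∛(16T / (π[τ])); the function name and the sample numbers are my own illustration, not taken from any handbook page in this repo.

```javascript
// Minimum shaft diameter under pure torsion: d = cbrt(16*T / (pi * tauAllow))
//   T        - torque in newton-metres
//   tauAllow - allowable shear stress in pascals
// Returns the minimum diameter in metres.
function minShaftDiameter(T, tauAllow) {
  if (T <= 0 || tauAllow <= 0) {
    throw new RangeError("torque and allowable stress must be positive");
  }
  return Math.cbrt((16 * T) / (Math.PI * tauAllow));
}

// Example: 100 N*m torque, 40 MPa allowable shear stress
const d = minShaftDiameter(100, 40e6);
console.log((d * 1000).toFixed(1) + " mm"); // roughly 23.4 mm
```

Wrapping each handbook formula in a small pure function like this makes it easy to attach an input form to it later with plain DOM code.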
# Credits

JSON files taken from here: <https://github.com/rob-blackbourn/ssl-certs>

Cloudflare SSL tool: <https://github.com/cloudflare/cfssl>
# [tex-svg.html](https://mathjax.github.io/MathJax-demos-web/tex-svg.html)

This example shows how to use the `tex-svg` component to process a complete HTML page containing TeX notation into math in SVG format. The key lines are

```
<script>
MathJax = {
  tex: {inlineMath: [['$', '$'], ['\\(', '\\)']]},
  svg: {fontCache: 'global'}
};
</script>
<script id="MathJax-script" async
        src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-svg.js"></script>
```

which configure the TeX input jax to include single dollar signs as in-line math delimiters and the SVG output jax to use a global font-path cache, and then load the `tex-svg` component. The rest is handled by MathJax automatically.

[Run the example](https://mathjax.github.io/MathJax-demos-web/tex-svg.html)
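For context, a minimal self-contained page using this configuration might look like the following; the script tags match the snippet above, while the body content (a quadratic-formula paragraph) is my own illustration, not part of the demo page.

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>tex-svg example</title>
  <!-- Configuration must come before the MathJax script is loaded -->
  <script>
    MathJax = {
      tex: {inlineMath: [['$', '$'], ['\\(', '\\)']]},
      svg: {fontCache: 'global'}
    };
  </script>
  <script id="MathJax-script" async
          src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-svg.js"></script>
</head>
<body>
  <p>When $a \ne 0$, the roots of $ax^2 + bx + c = 0$ are
  \(x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\).</p>
</body>
</html>
```

On load, MathJax scans the document for the configured delimiters and replaces each expression with SVG output.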
<!-- This file is auto-generated. Please do not modify it yourself. -->
# Protobuf Documentation
<a name="top"></a>

## Table of Contents

- [kava/auction/v1beta1/auction.proto](#kava/auction/v1beta1/auction.proto)
    - [BaseAuction](#kava.auction.v1beta1.BaseAuction)
    - [CollateralAuction](#kava.auction.v1beta1.CollateralAuction)
    - [DebtAuction](#kava.auction.v1beta1.DebtAuction)
    - [SurplusAuction](#kava.auction.v1beta1.SurplusAuction)
    - [WeightedAddresses](#kava.auction.v1beta1.WeightedAddresses)

- [kava/auction/v1beta1/genesis.proto](#kava/auction/v1beta1/genesis.proto)
    - [GenesisState](#kava.auction.v1beta1.GenesisState)
    - [Params](#kava.auction.v1beta1.Params)

- [kava/auction/v1beta1/query.proto](#kava/auction/v1beta1/query.proto)
    - [QueryAuctionRequest](#kava.auction.v1beta1.QueryAuctionRequest)
    - [QueryAuctionResponse](#kava.auction.v1beta1.QueryAuctionResponse)
    - [QueryAuctionsRequest](#kava.auction.v1beta1.QueryAuctionsRequest)
    - [QueryAuctionsResponse](#kava.auction.v1beta1.QueryAuctionsResponse)
    - [QueryNextAuctionIDRequest](#kava.auction.v1beta1.QueryNextAuctionIDRequest)
    - [QueryNextAuctionIDResponse](#kava.auction.v1beta1.QueryNextAuctionIDResponse)
    - [QueryParamsRequest](#kava.auction.v1beta1.QueryParamsRequest)
    - [QueryParamsResponse](#kava.auction.v1beta1.QueryParamsResponse)
    - [Query](#kava.auction.v1beta1.Query)

- [kava/auction/v1beta1/tx.proto](#kava/auction/v1beta1/tx.proto)
    - [MsgPlaceBid](#kava.auction.v1beta1.MsgPlaceBid)
    - [MsgPlaceBidResponse](#kava.auction.v1beta1.MsgPlaceBidResponse)
    - [Msg](#kava.auction.v1beta1.Msg)

- [kava/bep3/v1beta1/bep3.proto](#kava/bep3/v1beta1/bep3.proto)
    - [AssetParam](#kava.bep3.v1beta1.AssetParam)
    - [AssetSupply](#kava.bep3.v1beta1.AssetSupply)
    - [AtomicSwap](#kava.bep3.v1beta1.AtomicSwap)
    - [Params](#kava.bep3.v1beta1.Params)
    - [SupplyLimit](#kava.bep3.v1beta1.SupplyLimit)
    - [SwapDirection](#kava.bep3.v1beta1.SwapDirection)
    - [SwapStatus](#kava.bep3.v1beta1.SwapStatus)

- [kava/bep3/v1beta1/genesis.proto](#kava/bep3/v1beta1/genesis.proto)
    - [GenesisState](#kava.bep3.v1beta1.GenesisState)

- [kava/bep3/v1beta1/query.proto](#kava/bep3/v1beta1/query.proto)
    - [AssetSupplyResponse](#kava.bep3.v1beta1.AssetSupplyResponse)
    - [AtomicSwapResponse](#kava.bep3.v1beta1.AtomicSwapResponse)
    - [QueryAssetSuppliesRequest](#kava.bep3.v1beta1.QueryAssetSuppliesRequest)
    - [QueryAssetSuppliesResponse](#kava.bep3.v1beta1.QueryAssetSuppliesResponse)
    - [QueryAssetSupplyRequest](#kava.bep3.v1beta1.QueryAssetSupplyRequest)
    - [QueryAssetSupplyResponse](#kava.bep3.v1beta1.QueryAssetSupplyResponse)
    - [QueryAtomicSwapRequest](#kava.bep3.v1beta1.QueryAtomicSwapRequest)
    - [QueryAtomicSwapResponse](#kava.bep3.v1beta1.QueryAtomicSwapResponse)
    - [QueryAtomicSwapsRequest](#kava.bep3.v1beta1.QueryAtomicSwapsRequest)
    - [QueryAtomicSwapsResponse](#kava.bep3.v1beta1.QueryAtomicSwapsResponse)
    - [QueryParamsRequest](#kava.bep3.v1beta1.QueryParamsRequest)
    - [QueryParamsResponse](#kava.bep3.v1beta1.QueryParamsResponse)
    - [Query](#kava.bep3.v1beta1.Query)

- [kava/bep3/v1beta1/tx.proto](#kava/bep3/v1beta1/tx.proto)
    - [MsgClaimAtomicSwap](#kava.bep3.v1beta1.MsgClaimAtomicSwap)
    - [MsgClaimAtomicSwapResponse](#kava.bep3.v1beta1.MsgClaimAtomicSwapResponse)
    - [MsgCreateAtomicSwap](#kava.bep3.v1beta1.MsgCreateAtomicSwap)
    - [MsgCreateAtomicSwapResponse](#kava.bep3.v1beta1.MsgCreateAtomicSwapResponse)
    - [MsgRefundAtomicSwap](#kava.bep3.v1beta1.MsgRefundAtomicSwap)
    - [MsgRefundAtomicSwapResponse](#kava.bep3.v1beta1.MsgRefundAtomicSwapResponse)
    - [Msg](#kava.bep3.v1beta1.Msg)

- [kava/cdp/v1beta1/cdp.proto](#kava/cdp/v1beta1/cdp.proto)
    - [CDP](#kava.cdp.v1beta1.CDP)
    - [Deposit](#kava.cdp.v1beta1.Deposit)
    - [OwnerCDPIndex](#kava.cdp.v1beta1.OwnerCDPIndex)
    - [TotalCollateral](#kava.cdp.v1beta1.TotalCollateral)
    - [TotalPrincipal](#kava.cdp.v1beta1.TotalPrincipal)

- [kava/cdp/v1beta1/genesis.proto](#kava/cdp/v1beta1/genesis.proto)
    - [CollateralParam](#kava.cdp.v1beta1.CollateralParam)
    - [DebtParam](#kava.cdp.v1beta1.DebtParam)
    - [GenesisAccumulationTime](#kava.cdp.v1beta1.GenesisAccumulationTime)
    - [GenesisState](#kava.cdp.v1beta1.GenesisState)
    - [GenesisTotalPrincipal](#kava.cdp.v1beta1.GenesisTotalPrincipal)
    - [Params](#kava.cdp.v1beta1.Params)

- [kava/cdp/v1beta1/query.proto](#kava/cdp/v1beta1/query.proto)
    - [CDPResponse](#kava.cdp.v1beta1.CDPResponse)
    - [QueryAccountsRequest](#kava.cdp.v1beta1.QueryAccountsRequest)
    - [QueryAccountsResponse](#kava.cdp.v1beta1.QueryAccountsResponse)
    - [QueryCdpRequest](#kava.cdp.v1beta1.QueryCdpRequest)
    - [QueryCdpResponse](#kava.cdp.v1beta1.QueryCdpResponse)
    - [QueryCdpsRequest](#kava.cdp.v1beta1.QueryCdpsRequest)
    - [QueryCdpsResponse](#kava.cdp.v1beta1.QueryCdpsResponse)
    - [QueryDepositsRequest](#kava.cdp.v1beta1.QueryDepositsRequest)
    - [QueryDepositsResponse](#kava.cdp.v1beta1.QueryDepositsResponse)
    - [QueryParamsRequest](#kava.cdp.v1beta1.QueryParamsRequest)
    - [QueryParamsResponse](#kava.cdp.v1beta1.QueryParamsResponse)
    - [QueryTotalCollateralRequest](#kava.cdp.v1beta1.QueryTotalCollateralRequest)
    - [QueryTotalCollateralResponse](#kava.cdp.v1beta1.QueryTotalCollateralResponse)
    - [QueryTotalPrincipalRequest](#kava.cdp.v1beta1.QueryTotalPrincipalRequest)
    - [QueryTotalPrincipalResponse](#kava.cdp.v1beta1.QueryTotalPrincipalResponse)
    - [Query](#kava.cdp.v1beta1.Query)

- [kava/cdp/v1beta1/tx.proto](#kava/cdp/v1beta1/tx.proto)
    - [MsgCreateCDP](#kava.cdp.v1beta1.MsgCreateCDP)
    - [MsgCreateCDPResponse](#kava.cdp.v1beta1.MsgCreateCDPResponse)
    - [MsgDeposit](#kava.cdp.v1beta1.MsgDeposit)
    - [MsgDepositResponse](#kava.cdp.v1beta1.MsgDepositResponse)
    - [MsgDrawDebt](#kava.cdp.v1beta1.MsgDrawDebt)
    - [MsgDrawDebtResponse](#kava.cdp.v1beta1.MsgDrawDebtResponse)
    - [MsgLiquidate](#kava.cdp.v1beta1.MsgLiquidate)
    - [MsgLiquidateResponse](#kava.cdp.v1beta1.MsgLiquidateResponse)
    - [MsgRepayDebt](#kava.cdp.v1beta1.MsgRepayDebt)
    - [MsgRepayDebtResponse](#kava.cdp.v1beta1.MsgRepayDebtResponse)
    - [MsgWithdraw](#kava.cdp.v1beta1.MsgWithdraw)
    - [MsgWithdrawResponse](#kava.cdp.v1beta1.MsgWithdrawResponse)
    - [Msg](#kava.cdp.v1beta1.Msg)

- [kava/committee/v1beta1/committee.proto](#kava/committee/v1beta1/committee.proto)
    - [BaseCommittee](#kava.committee.v1beta1.BaseCommittee)
    - [MemberCommittee](#kava.committee.v1beta1.MemberCommittee)
    - [TokenCommittee](#kava.committee.v1beta1.TokenCommittee)
    - [TallyOption](#kava.committee.v1beta1.TallyOption)

- [kava/committee/v1beta1/genesis.proto](#kava/committee/v1beta1/genesis.proto)
    - [GenesisState](#kava.committee.v1beta1.GenesisState)
    - [Proposal](#kava.committee.v1beta1.Proposal)
    - [Vote](#kava.committee.v1beta1.Vote)
    - [VoteType](#kava.committee.v1beta1.VoteType)

- [kava/committee/v1beta1/permissions.proto](#kava/committee/v1beta1/permissions.proto)
    - [AllowedParamsChange](#kava.committee.v1beta1.AllowedParamsChange)
    - [GodPermission](#kava.committee.v1beta1.GodPermission)
    - [ParamsChangePermission](#kava.committee.v1beta1.ParamsChangePermission)
    - [SoftwareUpgradePermission](#kava.committee.v1beta1.SoftwareUpgradePermission)
    - [SubparamRequirement](#kava.committee.v1beta1.SubparamRequirement)
    - [TextPermission](#kava.committee.v1beta1.TextPermission)

- [kava/committee/v1beta1/proposal.proto](#kava/committee/v1beta1/proposal.proto)
    - [CommitteeChangeProposal](#kava.committee.v1beta1.CommitteeChangeProposal)
    - [CommitteeDeleteProposal](#kava.committee.v1beta1.CommitteeDeleteProposal)

- [kava/committee/v1beta1/query.proto](#kava/committee/v1beta1/query.proto)
    - [QueryCommitteeRequest](#kava.committee.v1beta1.QueryCommitteeRequest)
    - [QueryCommitteeResponse](#kava.committee.v1beta1.QueryCommitteeResponse)
    - [QueryCommitteesRequest](#kava.committee.v1beta1.QueryCommitteesRequest)
    - [QueryCommitteesResponse](#kava.committee.v1beta1.QueryCommitteesResponse)
    - [QueryNextProposalIDRequest](#kava.committee.v1beta1.QueryNextProposalIDRequest)
    - [QueryNextProposalIDResponse](#kava.committee.v1beta1.QueryNextProposalIDResponse)
    - [QueryProposalRequest](#kava.committee.v1beta1.QueryProposalRequest)
    - [QueryProposalResponse](#kava.committee.v1beta1.QueryProposalResponse)
    - [QueryProposalsRequest](#kava.committee.v1beta1.QueryProposalsRequest)
    - [QueryProposalsResponse](#kava.committee.v1beta1.QueryProposalsResponse)
    - [QueryRawParamsRequest](#kava.committee.v1beta1.QueryRawParamsRequest)
    - [QueryRawParamsResponse](#kava.committee.v1beta1.QueryRawParamsResponse)
    - [QueryTallyRequest](#kava.committee.v1beta1.QueryTallyRequest)
    - [QueryTallyResponse](#kava.committee.v1beta1.QueryTallyResponse)
    - [QueryVoteRequest](#kava.committee.v1beta1.QueryVoteRequest)
    - [QueryVoteResponse](#kava.committee.v1beta1.QueryVoteResponse)
    - [QueryVotesRequest](#kava.committee.v1beta1.QueryVotesRequest)
    - [QueryVotesResponse](#kava.committee.v1beta1.QueryVotesResponse)
    - [Query](#kava.committee.v1beta1.Query)

- [kava/committee/v1beta1/tx.proto](#kava/committee/v1beta1/tx.proto)
    - [MsgSubmitProposal](#kava.committee.v1beta1.MsgSubmitProposal)
    - [MsgSubmitProposalResponse](#kava.committee.v1beta1.MsgSubmitProposalResponse)
    - [MsgVote](#kava.committee.v1beta1.MsgVote)
    - [MsgVoteResponse](#kava.committee.v1beta1.MsgVoteResponse)
    - [Msg](#kava.committee.v1beta1.Msg)

- [kava/hard/v1beta1/hard.proto](#kava/hard/v1beta1/hard.proto)
    - [Borrow](#kava.hard.v1beta1.Borrow)
    - [BorrowInterestFactor](#kava.hard.v1beta1.BorrowInterestFactor)
    - [BorrowLimit](#kava.hard.v1beta1.BorrowLimit)
    - [CoinsProto](#kava.hard.v1beta1.CoinsProto)
    - [Deposit](#kava.hard.v1beta1.Deposit)
    - [InterestRateModel](#kava.hard.v1beta1.InterestRateModel)
    - [MoneyMarket](#kava.hard.v1beta1.MoneyMarket)
    - [Params](#kava.hard.v1beta1.Params)
    - [SupplyInterestFactor](#kava.hard.v1beta1.SupplyInterestFactor)

- [kava/hard/v1beta1/genesis.proto](#kava/hard/v1beta1/genesis.proto)
    - [GenesisAccumulationTime](#kava.hard.v1beta1.GenesisAccumulationTime)
    - [GenesisState](#kava.hard.v1beta1.GenesisState)

- [kava/hard/v1beta1/query.proto](#kava/hard/v1beta1/query.proto)
    - [BorrowInterestFactorResponse](#kava.hard.v1beta1.BorrowInterestFactorResponse)
    - [BorrowResponse](#kava.hard.v1beta1.BorrowResponse)
    - [DepositResponse](#kava.hard.v1beta1.DepositResponse)
    - [InterestFactor](#kava.hard.v1beta1.InterestFactor)
    - [MoneyMarketInterestRate](#kava.hard.v1beta1.MoneyMarketInterestRate)
    - [QueryAccountsRequest](#kava.hard.v1beta1.QueryAccountsRequest)
    - [QueryAccountsResponse](#kava.hard.v1beta1.QueryAccountsResponse)
    - [QueryBorrowsRequest](#kava.hard.v1beta1.QueryBorrowsRequest)
    - [QueryBorrowsResponse](#kava.hard.v1beta1.QueryBorrowsResponse)
    - [QueryDepositsRequest](#kava.hard.v1beta1.QueryDepositsRequest)
    - [QueryDepositsResponse](#kava.hard.v1beta1.QueryDepositsResponse)
    - [QueryInterestFactorsRequest](#kava.hard.v1beta1.QueryInterestFactorsRequest)
    - [QueryInterestFactorsResponse](#kava.hard.v1beta1.QueryInterestFactorsResponse)
    - [QueryInterestRateRequest](#kava.hard.v1beta1.QueryInterestRateRequest)
    - [QueryInterestRateResponse](#kava.hard.v1beta1.QueryInterestRateResponse)
    - [QueryParamsRequest](#kava.hard.v1beta1.QueryParamsRequest)
    - [QueryParamsResponse](#kava.hard.v1beta1.QueryParamsResponse)
    - [QueryReservesRequest](#kava.hard.v1beta1.QueryReservesRequest)
    - [QueryReservesResponse](#kava.hard.v1beta1.QueryReservesResponse)
    - [QueryTotalBorrowedRequest](#kava.hard.v1beta1.QueryTotalBorrowedRequest)
    - [QueryTotalBorrowedResponse](#kava.hard.v1beta1.QueryTotalBorrowedResponse)
    - [QueryTotalDepositedRequest](#kava.hard.v1beta1.QueryTotalDepositedRequest)
    - [QueryTotalDepositedResponse](#kava.hard.v1beta1.QueryTotalDepositedResponse)
    - [QueryUnsyncedBorrowsRequest](#kava.hard.v1beta1.QueryUnsyncedBorrowsRequest)
    - [QueryUnsyncedBorrowsResponse](#kava.hard.v1beta1.QueryUnsyncedBorrowsResponse)
    - [QueryUnsyncedDepositsRequest](#kava.hard.v1beta1.QueryUnsyncedDepositsRequest)
    - [QueryUnsyncedDepositsResponse](#kava.hard.v1beta1.QueryUnsyncedDepositsResponse)
    - [SupplyInterestFactorResponse](#kava.hard.v1beta1.SupplyInterestFactorResponse)
    - [Query](#kava.hard.v1beta1.Query)

- [kava/hard/v1beta1/tx.proto](#kava/hard/v1beta1/tx.proto)
    - [MsgBorrow](#kava.hard.v1beta1.MsgBorrow)
    - [MsgBorrowResponse](#kava.hard.v1beta1.MsgBorrowResponse)
    - [MsgDeposit](#kava.hard.v1beta1.MsgDeposit)
    - [MsgDepositResponse](#kava.hard.v1beta1.MsgDepositResponse)
    - [MsgLiquidate](#kava.hard.v1beta1.MsgLiquidate)
    - [MsgLiquidateResponse](#kava.hard.v1beta1.MsgLiquidateResponse)
    - [MsgRepay](#kava.hard.v1beta1.MsgRepay)
    - [MsgRepayResponse](#kava.hard.v1beta1.MsgRepayResponse)
    - [MsgWithdraw](#kava.hard.v1beta1.MsgWithdraw)
    - [MsgWithdrawResponse](#kava.hard.v1beta1.MsgWithdrawResponse)
    - [Msg](#kava.hard.v1beta1.Msg)

- [kava/incentive/v1beta1/claims.proto](#kava/incentive/v1beta1/claims.proto)
    - [BaseClaim](#kava.incentive.v1beta1.BaseClaim)
    - [BaseMultiClaim](#kava.incentive.v1beta1.BaseMultiClaim)
    - [DelegatorClaim](#kava.incentive.v1beta1.DelegatorClaim)
    - [HardLiquidityProviderClaim](#kava.incentive.v1beta1.HardLiquidityProviderClaim)
    - [MultiRewardIndex](#kava.incentive.v1beta1.MultiRewardIndex)
    - [MultiRewardIndexesProto](#kava.incentive.v1beta1.MultiRewardIndexesProto)
    - [RewardIndex](#kava.incentive.v1beta1.RewardIndex)
    - [RewardIndexesProto](#kava.incentive.v1beta1.RewardIndexesProto)
    - [SwapClaim](#kava.incentive.v1beta1.SwapClaim)
    - [USDXMintingClaim](#kava.incentive.v1beta1.USDXMintingClaim)

- [kava/incentive/v1beta1/params.proto](#kava/incentive/v1beta1/params.proto)
    - [MultiRewardPeriod](#kava.incentive.v1beta1.MultiRewardPeriod)
    - [Multiplier](#kava.incentive.v1beta1.Multiplier)
    - [MultipliersPerDenom](#kava.incentive.v1beta1.MultipliersPerDenom)
    - [Params](#kava.incentive.v1beta1.Params)
    - [RewardPeriod](#kava.incentive.v1beta1.RewardPeriod)

- [kava/incentive/v1beta1/genesis.proto](#kava/incentive/v1beta1/genesis.proto)
    - [AccumulationTime](#kava.incentive.v1beta1.AccumulationTime)
    - [GenesisRewardState](#kava.incentive.v1beta1.GenesisRewardState)
    - [GenesisState](#kava.incentive.v1beta1.GenesisState)

- [kava/incentive/v1beta1/tx.proto](#kava/incentive/v1beta1/tx.proto)
    - [MsgClaimDelegatorReward](#kava.incentive.v1beta1.MsgClaimDelegatorReward)
    - [MsgClaimDelegatorRewardResponse](#kava.incentive.v1beta1.MsgClaimDelegatorRewardResponse)
    - [MsgClaimHardReward](#kava.incentive.v1beta1.MsgClaimHardReward)
    - [MsgClaimHardRewardResponse](#kava.incentive.v1beta1.MsgClaimHardRewardResponse)
    - [MsgClaimSwapReward](#kava.incentive.v1beta1.MsgClaimSwapReward)
    - [MsgClaimSwapRewardResponse](#kava.incentive.v1beta1.MsgClaimSwapRewardResponse)
    - [MsgClaimUSDXMintingReward](#kava.incentive.v1beta1.MsgClaimUSDXMintingReward)
    - [MsgClaimUSDXMintingRewardResponse](#kava.incentive.v1beta1.MsgClaimUSDXMintingRewardResponse)
    - [Selection](#kava.incentive.v1beta1.Selection)
    - [Msg](#kava.incentive.v1beta1.Msg)

- [kava/issuance/v1beta1/genesis.proto](#kava/issuance/v1beta1/genesis.proto)
    - [Asset](#kava.issuance.v1beta1.Asset)
    - [AssetSupply](#kava.issuance.v1beta1.AssetSupply)
    - [GenesisState](#kava.issuance.v1beta1.GenesisState)
    - [Params](#kava.issuance.v1beta1.Params)
    - [RateLimit](#kava.issuance.v1beta1.RateLimit)

- [kava/issuance/v1beta1/query.proto](#kava/issuance/v1beta1/query.proto)
    - [QueryParamsRequest](#kava.issuance.v1beta1.QueryParamsRequest)
    - [QueryParamsResponse](#kava.issuance.v1beta1.QueryParamsResponse)
    - [Query](#kava.issuance.v1beta1.Query)

- [kava/issuance/v1beta1/tx.proto](#kava/issuance/v1beta1/tx.proto)
    - [MsgBlockAddress](#kava.issuance.v1beta1.MsgBlockAddress)
    - [MsgBlockAddressResponse](#kava.issuance.v1beta1.MsgBlockAddressResponse)
    - [MsgIssueTokens](#kava.issuance.v1beta1.MsgIssueTokens)
    - [MsgIssueTokensResponse](#kava.issuance.v1beta1.MsgIssueTokensResponse)
    - [MsgRedeemTokens](#kava.issuance.v1beta1.MsgRedeemTokens)
    - [MsgRedeemTokensResponse](#kava.issuance.v1beta1.MsgRedeemTokensResponse)
    - [MsgSetPauseStatus](#kava.issuance.v1beta1.MsgSetPauseStatus)
    - [MsgSetPauseStatusResponse](#kava.issuance.v1beta1.MsgSetPauseStatusResponse)
    - [MsgUnblockAddress](#kava.issuance.v1beta1.MsgUnblockAddress)
    - [MsgUnblockAddressResponse](#kava.issuance.v1beta1.MsgUnblockAddressResponse)
    - [Msg](#kava.issuance.v1beta1.Msg)

- [kava/kavadist/v1beta1/params.proto](#kava/kavadist/v1beta1/params.proto)
    - [Params](#kava.kavadist.v1beta1.Params)
    - [Period](#kava.kavadist.v1beta1.Period)

- [kava/kavadist/v1beta1/genesis.proto](#kava/kavadist/v1beta1/genesis.proto)
    - [GenesisState](#kava.kavadist.v1beta1.GenesisState)

- [kava/kavadist/v1beta1/proposal.proto](#kava/kavadist/v1beta1/proposal.proto)
    - [CommunityPoolMultiSpendProposal](#kava.kavadist.v1beta1.CommunityPoolMultiSpendProposal)
    - [CommunityPoolMultiSpendProposalJSON](#kava.kavadist.v1beta1.CommunityPoolMultiSpendProposalJSON)
    - [MultiSpendRecipient](#kava.kavadist.v1beta1.MultiSpendRecipient)

- [kava/kavadist/v1beta1/query.proto](#kava/kavadist/v1beta1/query.proto)
    - [QueryBalanceRequest](#kava.kavadist.v1beta1.QueryBalanceRequest)
    - [QueryBalanceResponse](#kava.kavadist.v1beta1.QueryBalanceResponse)
    - [QueryParamsRequest](#kava.kavadist.v1beta1.QueryParamsRequest)
    - [QueryParamsResponse](#kava.kavadist.v1beta1.QueryParamsResponse)
    - [Query](#kava.kavadist.v1beta1.Query)

- [kava/pricefeed/v1beta1/store.proto](#kava/pricefeed/v1beta1/store.proto)
    - [CurrentPrice](#kava.pricefeed.v1beta1.CurrentPrice)
    - [Market](#kava.pricefeed.v1beta1.Market)
    - [Params](#kava.pricefeed.v1beta1.Params)
    - [PostedPrice](#kava.pricefeed.v1beta1.PostedPrice)

- [kava/pricefeed/v1beta1/genesis.proto](#kava/pricefeed/v1beta1/genesis.proto)
    - [GenesisState](#kava.pricefeed.v1beta1.GenesisState)

- [kava/pricefeed/v1beta1/query.proto](#kava/pricefeed/v1beta1/query.proto)
    - [CurrentPriceResponse](#kava.pricefeed.v1beta1.CurrentPriceResponse)
    - [MarketResponse](#kava.pricefeed.v1beta1.MarketResponse)
    - [PostedPriceResponse](#kava.pricefeed.v1beta1.PostedPriceResponse)
    - [QueryMarketsRequest](#kava.pricefeed.v1beta1.QueryMarketsRequest)
    - [QueryMarketsResponse](#kava.pricefeed.v1beta1.QueryMarketsResponse)
    - [QueryOraclesRequest](#kava.pricefeed.v1beta1.QueryOraclesRequest)
    - [QueryOraclesResponse](#kava.pricefeed.v1beta1.QueryOraclesResponse)
    - [QueryParamsRequest](#kava.pricefeed.v1beta1.QueryParamsRequest)
    - [QueryParamsResponse](#kava.pricefeed.v1beta1.QueryParamsResponse)
    - [QueryPriceRequest](#kava.pricefeed.v1beta1.QueryPriceRequest)
    - [QueryPriceResponse](#kava.pricefeed.v1beta1.QueryPriceResponse)
    - [QueryPricesRequest](#kava.pricefeed.v1beta1.QueryPricesRequest)
    - [QueryPricesResponse](#kava.pricefeed.v1beta1.QueryPricesResponse)
    - [QueryRawPricesRequest](#kava.pricefeed.v1beta1.QueryRawPricesRequest)
    - [QueryRawPricesResponse](#kava.pricefeed.v1beta1.QueryRawPricesResponse)
    - [Query](#kava.pricefeed.v1beta1.Query)

- [kava/pricefeed/v1beta1/tx.proto](#kava/pricefeed/v1beta1/tx.proto)
    - [MsgPostPrice](#kava.pricefeed.v1beta1.MsgPostPrice)
    - [MsgPostPriceResponse](#kava.pricefeed.v1beta1.MsgPostPriceResponse)
    - [Msg](#kava.pricefeed.v1beta1.Msg)

- [kava/swap/v1beta1/swap.proto](#kava/swap/v1beta1/swap.proto)
    - [AllowedPool](#kava.swap.v1beta1.AllowedPool)
    - [Params](#kava.swap.v1beta1.Params)
    - [PoolRecord](#kava.swap.v1beta1.PoolRecord)
    - [ShareRecord](#kava.swap.v1beta1.ShareRecord)

- [kava/swap/v1beta1/genesis.proto](#kava/swap/v1beta1/genesis.proto)
    - [GenesisState](#kava.swap.v1beta1.GenesisState)

- [kava/swap/v1beta1/query.proto](#kava/swap/v1beta1/query.proto)
    - [DepositResponse](#kava.swap.v1beta1.DepositResponse)
    - [PoolResponse](#kava.swap.v1beta1.PoolResponse)
    - [QueryDepositsRequest](#kava.swap.v1beta1.QueryDepositsRequest)
    - [QueryDepositsResponse](#kava.swap.v1beta1.QueryDepositsResponse)
    - [QueryParamsRequest](#kava.swap.v1beta1.QueryParamsRequest)
    - [QueryParamsResponse](#kava.swap.v1beta1.QueryParamsResponse)
    - [QueryPoolsRequest](#kava.swap.v1beta1.QueryPoolsRequest)
    - [QueryPoolsResponse](#kava.swap.v1beta1.QueryPoolsResponse)
    - [Query](#kava.swap.v1beta1.Query)

- [kava/swap/v1beta1/tx.proto](#kava/swap/v1beta1/tx.proto)
    - [MsgDeposit](#kava.swap.v1beta1.MsgDeposit)
    - [MsgDepositResponse](#kava.swap.v1beta1.MsgDepositResponse)
    - [MsgSwapExactForTokens](#kava.swap.v1beta1.MsgSwapExactForTokens)
    - [MsgSwapExactForTokensResponse](#kava.swap.v1beta1.MsgSwapExactForTokensResponse)
    - [MsgSwapForExactTokens](#kava.swap.v1beta1.MsgSwapForExactTokens)
    - [MsgSwapForExactTokensResponse](#kava.swap.v1beta1.MsgSwapForExactTokensResponse)
    - [MsgWithdraw](#kava.swap.v1beta1.MsgWithdraw)
    - [MsgWithdrawResponse](#kava.swap.v1beta1.MsgWithdrawResponse)
    - [Msg](#kava.swap.v1beta1.Msg)

- [Scalar Value Types](#scalar-value-types)

<a name="kava/auction/v1beta1/auction.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/auction/v1beta1/auction.proto

<a name="kava.auction.v1beta1.BaseAuction"></a>

### BaseAuction
BaseAuction defines common attributes of all auctions

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `id` | [uint64](#uint64) |  |  |
| `initiator` | [string](#string) |  |  |
| `lot` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `bidder` | [bytes](#bytes) |  |  |
| `bid` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `has_received_bids` | [bool](#bool) |  |  |
| `end_time` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |
| `max_end_time` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |

<a name="kava.auction.v1beta1.CollateralAuction"></a>

### CollateralAuction
CollateralAuction is a two phase auction. Initially, in the forward auction phase, bids can be placed up to a max bid. Then it switches to a reverse auction phase, where the initial amount up for auction is bid down. Unsold Lot is sent to LotReturns, being divided among the addresses by weight.
Collateral auctions are normally used to sell off collateral seized from CDPs. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `base_auction` | [BaseAuction](#kava.auction.v1beta1.BaseAuction) | | | | `corresponding_debt` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `max_bid` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `lot_returns` | [WeightedAddresses](#kava.auction.v1beta1.WeightedAddresses) | | | <a name="kava.auction.v1beta1.DebtAuction"></a> ### DebtAuction DebtAuction is a reverse auction that mints what it pays out. It is normally used to acquire pegged asset to cover the CDP system's debts that were not covered by selling collateral. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `base_auction` | [BaseAuction](#kava.auction.v1beta1.BaseAuction) | | | | `corresponding_debt` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | <a name="kava.auction.v1beta1.SurplusAuction"></a> ### SurplusAuction SurplusAuction is a forward auction that burns what it receives from bids. It is normally used to sell off excess pegged asset acquired by the CDP system. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `base_auction` | [BaseAuction](#kava.auction.v1beta1.BaseAuction) | | | <a name="kava.auction.v1beta1.WeightedAddresses"></a> ### WeightedAddresses WeightedAddresses is a type for storing some addresses and associated weights. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `addresses` | [bytes](#bytes) | repeated | | | `weights` | [bytes](#bytes) | repeated | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/auction/v1beta1/genesis.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/auction/v1beta1/genesis.proto <a name="kava.auction.v1beta1.GenesisState"></a> ### GenesisState GenesisState defines the auction module's genesis state. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `next_auction_id` | [uint64](#uint64) | | | | `params` | [Params](#kava.auction.v1beta1.Params) | | | | `auctions` | [google.protobuf.Any](#google.protobuf.Any) | repeated | Genesis auctions | <a name="kava.auction.v1beta1.Params"></a> ### Params Params defines the parameters for the auction module. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `max_auction_duration` | [google.protobuf.Duration](#google.protobuf.Duration) | | | | `forward_bid_duration` | [google.protobuf.Duration](#google.protobuf.Duration) | | | | `reverse_bid_duration` | [google.protobuf.Duration](#google.protobuf.Duration) | | | | `increment_surplus` | [bytes](#bytes) | | | | `increment_debt` | [bytes](#bytes) | | | | `increment_collateral` | [bytes](#bytes) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/auction/v1beta1/query.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/auction/v1beta1/query.proto <a name="kava.auction.v1beta1.QueryAuctionRequest"></a> ### QueryAuctionRequest QueryAuctionRequest is the request type for the Query/Auction RPC method. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `auction_id` | [uint64](#uint64) | | | <a name="kava.auction.v1beta1.QueryAuctionResponse"></a> ### QueryAuctionResponse QueryAuctionResponse is the response type for the Query/Auction RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `auction` | [google.protobuf.Any](#google.protobuf.Any) | | | <a name="kava.auction.v1beta1.QueryAuctionsRequest"></a> ### QueryAuctionsRequest QueryAuctionsRequest is the request type for the Query/Auctions RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `type` | [string](#string) | | | | `owner` | [string](#string) | | | | `denom` | [string](#string) | | | | `phase` | [string](#string) | | | | `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) | | pagination defines an optional pagination for the request. | <a name="kava.auction.v1beta1.QueryAuctionsResponse"></a> ### QueryAuctionsResponse QueryAuctionsResponse is the response type for the Query/Auctions RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `auctions` | [google.protobuf.Any](#google.protobuf.Any) | repeated | | | `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) | | pagination defines the pagination in the response. | <a name="kava.auction.v1beta1.QueryNextAuctionIDRequest"></a> ### QueryNextAuctionIDRequest QueryNextAuctionIDRequest defines the request type for querying x/auction next auction ID. <a name="kava.auction.v1beta1.QueryNextAuctionIDResponse"></a> ### QueryNextAuctionIDResponse QueryNextAuctionIDResponse defines the response type for querying x/auction next auction ID. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `id` | [uint64](#uint64) | | | <a name="kava.auction.v1beta1.QueryParamsRequest"></a> ### QueryParamsRequest QueryParamsRequest defines the request type for querying x/auction parameters. <a name="kava.auction.v1beta1.QueryParamsResponse"></a> ### QueryParamsResponse QueryParamsResponse defines the response type for querying x/auction parameters. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.auction.v1beta1.Params) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.auction.v1beta1.Query"></a> ### Query Query defines the gRPC querier service for auction module | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `Params` | [QueryParamsRequest](#kava.auction.v1beta1.QueryParamsRequest) | [QueryParamsResponse](#kava.auction.v1beta1.QueryParamsResponse) | Params queries all parameters of the auction module. 
| GET|/kava/auction/v1beta1/params| | `Auction` | [QueryAuctionRequest](#kava.auction.v1beta1.QueryAuctionRequest) | [QueryAuctionResponse](#kava.auction.v1beta1.QueryAuctionResponse) | Auction queries an individual Auction by auction ID | GET|/kava/auction/v1beta1/auctions/{auction_id}| | `Auctions` | [QueryAuctionsRequest](#kava.auction.v1beta1.QueryAuctionsRequest) | [QueryAuctionsResponse](#kava.auction.v1beta1.QueryAuctionsResponse) | Auctions queries auctions filtered by asset denom, owner address, phase, and auction type | GET|/kava/auction/v1beta1/auctions| | `NextAuctionID` | [QueryNextAuctionIDRequest](#kava.auction.v1beta1.QueryNextAuctionIDRequest) | [QueryNextAuctionIDResponse](#kava.auction.v1beta1.QueryNextAuctionIDResponse) | NextAuctionID queries the next auction ID | GET|/kava/auction/v1beta1/next-auction-id| <!-- end services --> <a name="kava/auction/v1beta1/tx.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/auction/v1beta1/tx.proto <a name="kava.auction.v1beta1.MsgPlaceBid"></a> ### MsgPlaceBid MsgPlaceBid represents a message used by bidders to place bids on auctions | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `auction_id` | [uint64](#uint64) | | | | `bidder` | [string](#string) | | | | `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | <a name="kava.auction.v1beta1.MsgPlaceBidResponse"></a> ### MsgPlaceBidResponse MsgPlaceBidResponse defines the Msg/PlaceBid response type. <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.auction.v1beta1.Msg"></a> ### Msg Msg defines the auction Msg service. 
| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `PlaceBid` | [MsgPlaceBid](#kava.auction.v1beta1.MsgPlaceBid) | [MsgPlaceBidResponse](#kava.auction.v1beta1.MsgPlaceBidResponse) | PlaceBid message type used by bidders to place bids on auctions | | <!-- end services --> <a name="kava/bep3/v1beta1/bep3.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/bep3/v1beta1/bep3.proto <a name="kava.bep3.v1beta1.AssetParam"></a> ### AssetParam AssetParam defines parameters for each bep3 asset. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `denom` | [string](#string) | | denom represents the denomination for this asset | | `coin_id` | [int64](#int64) | | coin_id represents the registered coin type to use (https://github.com/satoshilabs/slips/blob/master/slip-0044.md) | | `supply_limit` | [SupplyLimit](#kava.bep3.v1beta1.SupplyLimit) | | supply_limit defines the maximum supply allowed for the asset - a total or time based rate limit | | `active` | [bool](#bool) | | active specifies if the asset is live or paused | | `deputy_address` | [bytes](#bytes) | | deputy_address is the kava address of the deputy | | `fixed_fee` | [string](#string) | | fixed_fee defines the fee for incoming swaps | | `min_swap_amount` | [string](#string) | | min_swap_amount defines the minimum amount able to be swapped in a single message | | `max_swap_amount` | [string](#string) | | max_swap_amount defines the maximum amount able to be swapped in a single message | | `min_block_lock` | [uint64](#uint64) | | min_block_lock defines the minimum blocks to lock | | `max_block_lock` | [uint64](#uint64) | | max_block_lock defines the maximum blocks to lock | <a name="kava.bep3.v1beta1.AssetSupply"></a> ### AssetSupply AssetSupply defines information about an asset's supply. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `incoming_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | incoming_supply represents the incoming supply of an asset | | `outgoing_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | outgoing_supply represents the outgoing supply of an asset | | `current_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | current_supply represents the current on-chain supply of an asset | | `time_limited_current_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | time_limited_current_supply represents the time limited current supply of an asset | | `time_elapsed` | [google.protobuf.Duration](#google.protobuf.Duration) | | time_elapsed represents the time elapsed | <a name="kava.bep3.v1beta1.AtomicSwap"></a> ### AtomicSwap AtomicSwap defines an atomic swap between chains for the bep3 module. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated | amount represents the amount being swapped | | `random_number_hash` | [bytes](#bytes) | | random_number_hash represents the hash of the random number | | `expire_height` | [uint64](#uint64) | | expire_height represents the height when the swap expires | | `timestamp` | [int64](#int64) | | timestamp represents the timestamp of the swap | | `sender` | [bytes](#bytes) | | sender is the kava chain sender of the swap | | `recipient` | [bytes](#bytes) | | recipient is the kava chain recipient of the swap | | `sender_other_chain` | [string](#string) | | sender_other_chain is the sender on the other chain | | `recipient_other_chain` | [string](#string) | | recipient_other_chain is the recipient on the other chain | | `closed_block` | [int64](#int64) | | closed_block is the block when the swap is closed | | `status` | [SwapStatus](#kava.bep3.v1beta1.SwapStatus) | | status represents the current 
status of the swap | | `cross_chain` | [bool](#bool) | | cross_chain identifies whether the atomic swap is cross chain | | `direction` | [SwapDirection](#kava.bep3.v1beta1.SwapDirection) | | direction identifies if the swap is incoming or outgoing | <a name="kava.bep3.v1beta1.Params"></a> ### Params Params defines the parameters for the bep3 module. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `asset_params` | [AssetParam](#kava.bep3.v1beta1.AssetParam) | repeated | asset_params define the parameters for each bep3 asset | <a name="kava.bep3.v1beta1.SupplyLimit"></a> ### SupplyLimit SupplyLimit defines the absolute and time-based limits for an asset's supply. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `limit` | [string](#string) | | limit defines the total supply allowed | | `time_limited` | [bool](#bool) | | time_limited enables or disables time based supply limiting | | `time_period` | [google.protobuf.Duration](#google.protobuf.Duration) | | time_period specifies the duration that time_based_limit is evaluated | | `time_based_limit` | [string](#string) | | time_based_limit defines the maximum supply that can be swapped within time_period | <!-- end messages --> <a name="kava.bep3.v1beta1.SwapDirection"></a> ### SwapDirection SwapDirection is the direction of an AtomicSwap | Name | Number | Description | | ---- | ------ | ----------- | | SWAP_DIRECTION_UNSPECIFIED | 0 | SWAP_DIRECTION_UNSPECIFIED represents unspecified or invalid swap direction | | SWAP_DIRECTION_INCOMING | 1 | SWAP_DIRECTION_INCOMING represents an incoming swap (to the kava chain) | | SWAP_DIRECTION_OUTGOING | 2 | SWAP_DIRECTION_OUTGOING represents an outgoing swap (from the kava chain) | <a name="kava.bep3.v1beta1.SwapStatus"></a> ### SwapStatus SwapStatus is the status of an AtomicSwap | Name | Number | Description | | ---- | ------ | ----------- | | SWAP_STATUS_UNSPECIFIED | 0 | SWAP_STATUS_UNSPECIFIED represents an 
unspecified status | | SWAP_STATUS_OPEN | 1 | SWAP_STATUS_OPEN represents an open swap | | SWAP_STATUS_COMPLETED | 2 | SWAP_STATUS_COMPLETED represents a completed swap | | SWAP_STATUS_EXPIRED | 3 | SWAP_STATUS_EXPIRED represents an expired swap | <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/bep3/v1beta1/genesis.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/bep3/v1beta1/genesis.proto <a name="kava.bep3.v1beta1.GenesisState"></a> ### GenesisState GenesisState defines the bep3 module's genesis state. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.bep3.v1beta1.Params) | | params defines all the parameters of the module. | | `atomic_swaps` | [AtomicSwap](#kava.bep3.v1beta1.AtomicSwap) | repeated | atomic_swaps represents the state of stored atomic swaps | | `supplies` | [AssetSupply](#kava.bep3.v1beta1.AssetSupply) | repeated | supplies represents the supply information of each asset | | `previous_block_time` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | previous_block_time represents the time of the previous block | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/bep3/v1beta1/query.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/bep3/v1beta1/query.proto <a name="kava.bep3.v1beta1.AssetSupplyResponse"></a> ### AssetSupplyResponse AssetSupplyResponse defines information about an asset's supply. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `incoming_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | incoming_supply represents the incoming supply of an asset | | `outgoing_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | outgoing_supply represents the outgoing supply of an asset | | `current_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | current_supply represents the current on-chain supply of an asset | | `time_limited_current_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | time_limited_current_supply represents the time limited current supply of an asset | | `time_elapsed` | [google.protobuf.Duration](#google.protobuf.Duration) | | time_elapsed represents the time elapsed | <a name="kava.bep3.v1beta1.AtomicSwapResponse"></a> ### AtomicSwapResponse AtomicSwapResponse represents the returned atomic swap properties | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `id` | [string](#string) | | id represents the id of the atomic swap | | `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated | amount represents the amount being swapped | | `random_number_hash` | [string](#string) | | random_number_hash represents the hash of the random number | | `expire_height` | [uint64](#uint64) | | expire_height represents the height when the swap expires | | `timestamp` | [int64](#int64) | | timestamp represents the timestamp of the swap | | `sender` | [string](#string) | | sender is the kava chain sender of the swap | | `recipient` | [string](#string) | | recipient is the kava chain recipient of the swap | | `sender_other_chain` | [string](#string) | | sender_other_chain is the sender on the other chain | | `recipient_other_chain` | [string](#string) | | recipient_other_chain is the recipient on the other chain | | `closed_block` | [int64](#int64) | | closed_block is the block when the swap is closed | | 
`status` | [SwapStatus](#kava.bep3.v1beta1.SwapStatus) | | status represents the current status of the swap | | `cross_chain` | [bool](#bool) | | cross_chain identifies whether the atomic swap is cross chain | | `direction` | [SwapDirection](#kava.bep3.v1beta1.SwapDirection) | | direction identifies if the swap is incoming or outgoing | <a name="kava.bep3.v1beta1.QueryAssetSuppliesRequest"></a> ### QueryAssetSuppliesRequest QueryAssetSuppliesRequest is the request type for the Query/AssetSupplies RPC method. <a name="kava.bep3.v1beta1.QueryAssetSuppliesResponse"></a> ### QueryAssetSuppliesResponse QueryAssetSuppliesResponse is the response type for the Query/AssetSupplies RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `asset_supplies` | [AssetSupplyResponse](#kava.bep3.v1beta1.AssetSupplyResponse) | repeated | asset_supplies represents the supplies of returned assets | <a name="kava.bep3.v1beta1.QueryAssetSupplyRequest"></a> ### QueryAssetSupplyRequest QueryAssetSupplyRequest is the request type for the Query/AssetSupply RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `denom` | [string](#string) | | denom filters the asset response for the specified denom | <a name="kava.bep3.v1beta1.QueryAssetSupplyResponse"></a> ### QueryAssetSupplyResponse QueryAssetSupplyResponse is the response type for the Query/AssetSupply RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `asset_supply` | [AssetSupplyResponse](#kava.bep3.v1beta1.AssetSupplyResponse) | | asset_supply represents the supply of the asset | <a name="kava.bep3.v1beta1.QueryAtomicSwapRequest"></a> ### QueryAtomicSwapRequest QueryAtomicSwapRequest is the request type for the Query/AtomicSwap RPC method. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `swap_id` | [string](#string) | | swap_id represents the id of the swap to query | <a name="kava.bep3.v1beta1.QueryAtomicSwapResponse"></a> ### QueryAtomicSwapResponse QueryAtomicSwapResponse is the response type for the Query/AtomicSwap RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `atomic_swap` | [AtomicSwapResponse](#kava.bep3.v1beta1.AtomicSwapResponse) | | | <a name="kava.bep3.v1beta1.QueryAtomicSwapsRequest"></a> ### QueryAtomicSwapsRequest QueryAtomicSwapsRequest is the request type for the Query/AtomicSwaps RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `involve` | [string](#string) | | involve filters by address | | `expiration` | [uint64](#uint64) | | expiration filters by expiration block height | | `status` | [SwapStatus](#kava.bep3.v1beta1.SwapStatus) | | status filters by swap status | | `direction` | [SwapDirection](#kava.bep3.v1beta1.SwapDirection) | | direction filters by swap direction | | `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) | | | <a name="kava.bep3.v1beta1.QueryAtomicSwapsResponse"></a> ### QueryAtomicSwapsResponse QueryAtomicSwapsResponse is the response type for the Query/AtomicSwaps RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `atomic_swaps` | [AtomicSwapResponse](#kava.bep3.v1beta1.AtomicSwapResponse) | repeated | atomic_swaps represents the returned atomic swaps for the request | | `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) | | | <a name="kava.bep3.v1beta1.QueryParamsRequest"></a> ### QueryParamsRequest QueryParamsRequest defines the request type for querying x/bep3 parameters. 
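The atomic swap types above implement a hashlock: the creator commits to a `random_number_hash` and `timestamp`, and a later claim reveals the random number preimage. A minimal Python sketch of that check, assuming (per the BEP3 convention, not stated in this reference) the hash is SHA-256 over the random number bytes concatenated with the 8-byte big-endian timestamp:

```python
import hashlib
import struct

def calculate_random_number_hash(random_number: bytes, timestamp: int) -> bytes:
    # Assumed BEP3-style commitment: SHA-256(random_number || 8-byte BE timestamp).
    return hashlib.sha256(random_number + struct.pack(">q", timestamp)).digest()

def verify_claim(random_number: bytes, timestamp: int, committed_hash: bytes) -> bool:
    # A claim succeeds only if the revealed preimage reproduces the committed hash.
    return calculate_random_number_hash(random_number, timestamp) == committed_hash

# Illustrative values only; a real swap uses 32 fresh random bytes.
secret = bytes(32)
ts = 1700000000
h = calculate_random_number_hash(secret, ts)
assert verify_claim(secret, ts, h)
assert not verify_claim(b"\x01" * 32, ts, h)
```

The same `timestamp` must be sent in MsgCreateAtomicSwap and stored with the swap, since the hash cannot be verified against the revealed `random_number` without it.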
<a name="kava.bep3.v1beta1.QueryParamsResponse"></a> ### QueryParamsResponse QueryParamsResponse defines the response type for querying x/bep3 parameters. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.bep3.v1beta1.Params) | | params represents the parameters of the module | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.bep3.v1beta1.Query"></a> ### Query Query defines the gRPC querier service for bep3 module | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `Params` | [QueryParamsRequest](#kava.bep3.v1beta1.QueryParamsRequest) | [QueryParamsResponse](#kava.bep3.v1beta1.QueryParamsResponse) | Params queries module params | GET|/kava/bep3/v1beta1/params| | `AssetSupply` | [QueryAssetSupplyRequest](#kava.bep3.v1beta1.QueryAssetSupplyRequest) | [QueryAssetSupplyResponse](#kava.bep3.v1beta1.QueryAssetSupplyResponse) | AssetSupply queries info about an asset's supply | GET|/kava/bep3/v1beta1/assetsupply/{denom}| | `AssetSupplies` | [QueryAssetSuppliesRequest](#kava.bep3.v1beta1.QueryAssetSuppliesRequest) | [QueryAssetSuppliesResponse](#kava.bep3.v1beta1.QueryAssetSuppliesResponse) | AssetSupplies queries a list of asset supplies | GET|/kava/bep3/v1beta1/assetsupplies| | `AtomicSwap` | [QueryAtomicSwapRequest](#kava.bep3.v1beta1.QueryAtomicSwapRequest) | [QueryAtomicSwapResponse](#kava.bep3.v1beta1.QueryAtomicSwapResponse) | AtomicSwap queries info about an atomic swap | GET|/kava/bep3/v1beta1/atomicswap/{swap_id}| | `AtomicSwaps` | [QueryAtomicSwapsRequest](#kava.bep3.v1beta1.QueryAtomicSwapsRequest) | [QueryAtomicSwapsResponse](#kava.bep3.v1beta1.QueryAtomicSwapsResponse) | AtomicSwaps queries a list of atomic swaps | GET|/kava/bep3/v1beta1/atomicswaps| <!-- end services --> <a name="kava/bep3/v1beta1/tx.proto"></a> <p align="right"><a href="#top">Top</a></p> 
## kava/bep3/v1beta1/tx.proto <a name="kava.bep3.v1beta1.MsgClaimAtomicSwap"></a> ### MsgClaimAtomicSwap MsgClaimAtomicSwap defines the Msg/ClaimAtomicSwap request type. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `from` | [string](#string) | | | | `swap_id` | [string](#string) | | | | `random_number` | [string](#string) | | | <a name="kava.bep3.v1beta1.MsgClaimAtomicSwapResponse"></a> ### MsgClaimAtomicSwapResponse MsgClaimAtomicSwapResponse defines the Msg/ClaimAtomicSwap response type. <a name="kava.bep3.v1beta1.MsgCreateAtomicSwap"></a> ### MsgCreateAtomicSwap MsgCreateAtomicSwap defines the Msg/CreateAtomicSwap request type. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `from` | [string](#string) | | | | `to` | [string](#string) | | | | `recipient_other_chain` | [string](#string) | | | | `sender_other_chain` | [string](#string) | | | | `random_number_hash` | [string](#string) | | | | `timestamp` | [int64](#int64) | | | | `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated | | | `height_span` | [uint64](#uint64) | | | <a name="kava.bep3.v1beta1.MsgCreateAtomicSwapResponse"></a> ### MsgCreateAtomicSwapResponse MsgCreateAtomicSwapResponse defines the Msg/CreateAtomicSwap response type. <a name="kava.bep3.v1beta1.MsgRefundAtomicSwap"></a> ### MsgRefundAtomicSwap MsgRefundAtomicSwap defines the Msg/RefundAtomicSwap request type. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `from` | [string](#string) | | | | `swap_id` | [string](#string) | | | <a name="kava.bep3.v1beta1.MsgRefundAtomicSwapResponse"></a> ### MsgRefundAtomicSwapResponse MsgRefundAtomicSwapResponse defines the Msg/RefundAtomicSwap response type. <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.bep3.v1beta1.Msg"></a> ### Msg Msg defines the bep3 Msg service. 
| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `CreateAtomicSwap` | [MsgCreateAtomicSwap](#kava.bep3.v1beta1.MsgCreateAtomicSwap) | [MsgCreateAtomicSwapResponse](#kava.bep3.v1beta1.MsgCreateAtomicSwapResponse) | CreateAtomicSwap defines a method for creating an atomic swap | | | `ClaimAtomicSwap` | [MsgClaimAtomicSwap](#kava.bep3.v1beta1.MsgClaimAtomicSwap) | [MsgClaimAtomicSwapResponse](#kava.bep3.v1beta1.MsgClaimAtomicSwapResponse) | ClaimAtomicSwap defines a method for claiming an atomic swap | | | `RefundAtomicSwap` | [MsgRefundAtomicSwap](#kava.bep3.v1beta1.MsgRefundAtomicSwap) | [MsgRefundAtomicSwapResponse](#kava.bep3.v1beta1.MsgRefundAtomicSwapResponse) | RefundAtomicSwap defines a method for refunding an atomic swap | | <!-- end services --> <a name="kava/cdp/v1beta1/cdp.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/cdp/v1beta1/cdp.proto <a name="kava.cdp.v1beta1.CDP"></a> ### CDP CDP defines the state of a single collateralized debt position. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `id` | [uint64](#uint64) | | | | `owner` | [bytes](#bytes) | | | | `type` | [string](#string) | | | | `collateral` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `principal` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `accumulated_fees` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `fees_updated` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | | | `interest_factor` | [string](#string) | | | <a name="kava.cdp.v1beta1.Deposit"></a> ### Deposit Deposit defines an amount of coins deposited by an account to a cdp | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `cdp_id` | [uint64](#uint64) | | | | `depositor` | [string](#string) | | | | `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | <a name="kava.cdp.v1beta1.OwnerCDPIndex"></a> ### OwnerCDPIndex OwnerCDPIndex defines the cdp ids for a single cdp owner | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `cdp_ids` | [uint64](#uint64) | repeated | | <a name="kava.cdp.v1beta1.TotalCollateral"></a> ### TotalCollateral TotalCollateral defines the total collateral of a given collateral type | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `collateral_type` | [string](#string) | | | | `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | <a name="kava.cdp.v1beta1.TotalPrincipal"></a> ### TotalPrincipal TotalPrincipal defines the total principal of a given collateral type | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `collateral_type` | [string](#string) | | | | `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/cdp/v1beta1/genesis.proto"></a> <p align="right"><a href="#top">Top</a></p> ## 
kava/cdp/v1beta1/genesis.proto <a name="kava.cdp.v1beta1.CollateralParam"></a> ### CollateralParam CollateralParam defines governance parameters for each collateral type within the cdp module | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `denom` | [string](#string) | | | | `type` | [string](#string) | | | | `liquidation_ratio` | [string](#string) | | | | `debt_limit` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `stability_fee` | [string](#string) | | | | `auction_size` | [string](#string) | | | | `liquidation_penalty` | [string](#string) | | | | `spot_market_id` | [string](#string) | | | | `liquidation_market_id` | [string](#string) | | | | `keeper_reward_percentage` | [string](#string) | | | | `check_collateralization_index_count` | [string](#string) | | | | `conversion_factor` | [string](#string) | | | <a name="kava.cdp.v1beta1.DebtParam"></a> ### DebtParam DebtParam defines governance params for debt assets | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `denom` | [string](#string) | | | | `reference_asset` | [string](#string) | | | | `conversion_factor` | [string](#string) | | | | `debt_floor` | [string](#string) | | | <a name="kava.cdp.v1beta1.GenesisAccumulationTime"></a> ### GenesisAccumulationTime GenesisAccumulationTime defines the previous distribution time and its corresponding denom | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `collateral_type` | [string](#string) | | | | `previous_accumulation_time` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | | | `interest_factor` | [string](#string) | | | <a name="kava.cdp.v1beta1.GenesisState"></a> ### GenesisState GenesisState defines the cdp module's genesis state. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.cdp.v1beta1.Params) | | params defines all the parameters of the module. 
| | `cdps` | [CDP](#kava.cdp.v1beta1.CDP) | repeated | | | `deposits` | [Deposit](#kava.cdp.v1beta1.Deposit) | repeated | | | `starting_cdp_id` | [uint64](#uint64) | | | | `debt_denom` | [string](#string) | | | | `gov_denom` | [string](#string) | | | | `previous_accumulation_times` | [GenesisAccumulationTime](#kava.cdp.v1beta1.GenesisAccumulationTime) | repeated | | | `total_principals` | [GenesisTotalPrincipal](#kava.cdp.v1beta1.GenesisTotalPrincipal) | repeated | | <a name="kava.cdp.v1beta1.GenesisTotalPrincipal"></a> ### GenesisTotalPrincipal GenesisTotalPrincipal defines the total principal and its corresponding collateral type | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `collateral_type` | [string](#string) | | | | `total_principal` | [string](#string) | | | <a name="kava.cdp.v1beta1.Params"></a> ### Params Params defines the parameters for the cdp module. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `collateral_params` | [CollateralParam](#kava.cdp.v1beta1.CollateralParam) | repeated | | | `debt_param` | [DebtParam](#kava.cdp.v1beta1.DebtParam) | | | | `global_debt_limit` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `surplus_auction_threshold` | [string](#string) | | | | `surplus_auction_lot` | [string](#string) | | | | `debt_auction_threshold` | [string](#string) | | | | `debt_auction_lot` | [string](#string) | | | | `circuit_breaker` | [bool](#bool) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/cdp/v1beta1/query.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/cdp/v1beta1/query.proto <a name="kava.cdp.v1beta1.CDPResponse"></a> ### CDPResponse CDPResponse defines the state of a single collateralized debt position. 
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `id` | [uint64](#uint64) |  |  |
| `owner` | [string](#string) |  |  |
| `type` | [string](#string) |  |  |
| `collateral` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `principal` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `accumulated_fees` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `fees_updated` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |
| `interest_factor` | [string](#string) |  |  |
| `collateral_value` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `collateralization_ratio` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.QueryAccountsRequest"></a>

### QueryAccountsRequest
QueryAccountsRequest defines the request type for the Query/Accounts RPC method.



<a name="kava.cdp.v1beta1.QueryAccountsResponse"></a>

### QueryAccountsResponse
QueryAccountsResponse defines the response type for the Query/Accounts RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `accounts` | [cosmos.auth.v1beta1.ModuleAccount](#cosmos.auth.v1beta1.ModuleAccount) | repeated |  |



<a name="kava.cdp.v1beta1.QueryCdpRequest"></a>

### QueryCdpRequest
QueryCdpRequest defines the request type for the Query/Cdp RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.QueryCdpResponse"></a>

### QueryCdpResponse
QueryCdpResponse defines the response type for the Query/Cdp RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `cdp` | [CDPResponse](#kava.cdp.v1beta1.CDPResponse) |  |  |



<a name="kava.cdp.v1beta1.QueryCdpsRequest"></a>

### QueryCdpsRequest
QueryCdpsRequest defines the request type for the Query/Cdps RPC method, a filtered CDP query.
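The filter fields listed below map onto the query string of the `GET /kava/cdp/v1beta1/cdps` route documented in the Query service table further down. A minimal client-side sketch (the helper name and `base_url` are hypothetical, not part of the generated API):

```python
from urllib.parse import urlencode

def cdps_query_url(base_url, collateral_type=None, owner=None,
                   cdp_id=None, ratio=None):
    """Build a Query/Cdps REST URL from the QueryCdpsRequest filter fields."""
    params = {}
    if collateral_type is not None:
        params["collateral_type"] = collateral_type
    if owner is not None:
        params["owner"] = owner
    if cdp_id is not None:
        params["id"] = cdp_id
    if ratio is not None:
        params["ratio"] = ratio  # sdk.Dec values are serialized as strings
    url = base_url + "/kava/cdp/v1beta1/cdps"
    return url + "?" + urlencode(params) if params else url
```

Unset filters are simply omitted from the query string, matching proto3's zero-value semantics for optional scalar fields.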
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |
| `id` | [uint64](#uint64) |  |  |
| `ratio` | [string](#string) |  | sdk.Dec as a string |
| `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) |  |  |



<a name="kava.cdp.v1beta1.QueryCdpsResponse"></a>

### QueryCdpsResponse
QueryCdpsResponse defines the response type for the Query/Cdps RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `cdps` | [CDPResponse](#kava.cdp.v1beta1.CDPResponse) | repeated |  |
| `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) |  |  |



<a name="kava.cdp.v1beta1.QueryDepositsRequest"></a>

### QueryDepositsRequest
QueryDepositsRequest defines the request type for the Query/Deposits RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.QueryDepositsResponse"></a>

### QueryDepositsResponse
QueryDepositsResponse defines the response type for the Query/Deposits RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `deposits` | [Deposit](#kava.cdp.v1beta1.Deposit) | repeated |  |



<a name="kava.cdp.v1beta1.QueryParamsRequest"></a>

### QueryParamsRequest
QueryParamsRequest defines the request type for the Query/Params RPC method.



<a name="kava.cdp.v1beta1.QueryParamsResponse"></a>

### QueryParamsResponse
QueryParamsResponse defines the response type for the Query/Params RPC method.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `params` | [Params](#kava.cdp.v1beta1.Params) |  |  |



<a name="kava.cdp.v1beta1.QueryTotalCollateralRequest"></a>

### QueryTotalCollateralRequest
QueryTotalCollateralRequest defines the request type for the Query/TotalCollateral RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.QueryTotalCollateralResponse"></a>

### QueryTotalCollateralResponse
QueryTotalCollateralResponse defines the response type for the Query/TotalCollateral RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `total_collateral` | [TotalCollateral](#kava.cdp.v1beta1.TotalCollateral) | repeated |  |



<a name="kava.cdp.v1beta1.QueryTotalPrincipalRequest"></a>

### QueryTotalPrincipalRequest
QueryTotalPrincipalRequest defines the request type for the Query/TotalPrincipal RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.QueryTotalPrincipalResponse"></a>

### QueryTotalPrincipalResponse
QueryTotalPrincipalResponse defines the response type for the Query/TotalPrincipal RPC method.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `total_principal` | [TotalPrincipal](#kava.cdp.v1beta1.TotalPrincipal) | repeated |  |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->


<a name="kava.cdp.v1beta1.Query"></a>

### Query
Query defines the gRPC querier service for cdp module

| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint |
| ----------- | ------------ | ------------- | ------------| ------- | -------- |
| `Params` | [QueryParamsRequest](#kava.cdp.v1beta1.QueryParamsRequest) | [QueryParamsResponse](#kava.cdp.v1beta1.QueryParamsResponse) | Params queries all parameters of the cdp module. | GET|/kava/cdp/v1beta1/params|
| `Accounts` | [QueryAccountsRequest](#kava.cdp.v1beta1.QueryAccountsRequest) | [QueryAccountsResponse](#kava.cdp.v1beta1.QueryAccountsResponse) | Accounts queries the CDP module accounts. | GET|/kava/cdp/v1beta1/accounts|
| `TotalPrincipal` | [QueryTotalPrincipalRequest](#kava.cdp.v1beta1.QueryTotalPrincipalRequest) | [QueryTotalPrincipalResponse](#kava.cdp.v1beta1.QueryTotalPrincipalResponse) | TotalPrincipal queries the total principal of a given collateral type. | GET|/kava/cdp/v1beta1/totalPrincipal|
| `TotalCollateral` | [QueryTotalCollateralRequest](#kava.cdp.v1beta1.QueryTotalCollateralRequest) | [QueryTotalCollateralResponse](#kava.cdp.v1beta1.QueryTotalCollateralResponse) | TotalCollateral queries the total collateral of a given collateral type. | GET|/kava/cdp/v1beta1/totalCollateral|
| `Cdps` | [QueryCdpsRequest](#kava.cdp.v1beta1.QueryCdpsRequest) | [QueryCdpsResponse](#kava.cdp.v1beta1.QueryCdpsResponse) | Cdps queries all active CDPs. | GET|/kava/cdp/v1beta1/cdps|
| `Cdp` | [QueryCdpRequest](#kava.cdp.v1beta1.QueryCdpRequest) | [QueryCdpResponse](#kava.cdp.v1beta1.QueryCdpResponse) | Cdp queries a CDP with the input owner address and collateral type. | GET|/kava/cdp/v1beta1/cdps/{owner}/{collateral_type}|
| `Deposits` | [QueryDepositsRequest](#kava.cdp.v1beta1.QueryDepositsRequest) | [QueryDepositsResponse](#kava.cdp.v1beta1.QueryDepositsResponse) | Deposits queries deposits associated with the CDP owned by an address for a collateral type. | GET|/kava/cdp/v1beta1/cdps/deposits/{owner}/{collateral_type}|

<!-- end services -->



<a name="kava/cdp/v1beta1/tx.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/cdp/v1beta1/tx.proto



<a name="kava.cdp.v1beta1.MsgCreateCDP"></a>

### MsgCreateCDP
MsgCreateCDP defines a message to create a new CDP.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `sender` | [string](#string) |  |  |
| `collateral` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `principal` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `collateral_type` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.MsgCreateCDPResponse"></a>

### MsgCreateCDPResponse
MsgCreateCDPResponse defines the Msg/CreateCDP response type.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `cdp_id` | [uint64](#uint64) |  |  |



<a name="kava.cdp.v1beta1.MsgDeposit"></a>

### MsgDeposit
MsgDeposit defines a message to deposit to a CDP.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `depositor` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |
| `collateral` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `collateral_type` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.MsgDepositResponse"></a>

### MsgDepositResponse
MsgDepositResponse defines the Msg/Deposit response type.



<a name="kava.cdp.v1beta1.MsgDrawDebt"></a>

### MsgDrawDebt
MsgDrawDebt defines a message to draw debt from a CDP.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `sender` | [string](#string) |  |  |
| `collateral_type` | [string](#string) |  |  |
| `principal` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |



<a name="kava.cdp.v1beta1.MsgDrawDebtResponse"></a>

### MsgDrawDebtResponse
MsgDrawDebtResponse defines the Msg/DrawDebt response type.



<a name="kava.cdp.v1beta1.MsgLiquidate"></a>

### MsgLiquidate
MsgLiquidate defines a message to attempt to liquidate a CDP whose collateralization ratio is under its liquidation ratio.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `keeper` | [string](#string) |  |  |
| `borrower` | [string](#string) |  |  |
| `collateral_type` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.MsgLiquidateResponse"></a>

### MsgLiquidateResponse
MsgLiquidateResponse defines the Msg/Liquidate response type.



<a name="kava.cdp.v1beta1.MsgRepayDebt"></a>

### MsgRepayDebt
MsgRepayDebt defines a message to repay debt from a CDP.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `sender` | [string](#string) |  |  |
| `collateral_type` | [string](#string) |  |  |
| `payment` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |



<a name="kava.cdp.v1beta1.MsgRepayDebtResponse"></a>

### MsgRepayDebtResponse
MsgRepayDebtResponse defines the Msg/RepayDebt response type.



<a name="kava.cdp.v1beta1.MsgWithdraw"></a>

### MsgWithdraw
MsgWithdraw defines a message to withdraw collateral from a CDP.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `depositor` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |
| `collateral` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |
| `collateral_type` | [string](#string) |  |  |



<a name="kava.cdp.v1beta1.MsgWithdrawResponse"></a>

### MsgWithdrawResponse
MsgWithdrawResponse defines the Msg/Withdraw response type.

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->


<a name="kava.cdp.v1beta1.Msg"></a>

### Msg
Msg defines the cdp Msg service.

| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint |
| ----------- | ------------ | ------------- | ------------| ------- | -------- |
| `CreateCDP` | [MsgCreateCDP](#kava.cdp.v1beta1.MsgCreateCDP) | [MsgCreateCDPResponse](#kava.cdp.v1beta1.MsgCreateCDPResponse) | CreateCDP defines a method to create a new CDP. | |
| `Deposit` | [MsgDeposit](#kava.cdp.v1beta1.MsgDeposit) | [MsgDepositResponse](#kava.cdp.v1beta1.MsgDepositResponse) | Deposit defines a method to deposit to a CDP. | |
| `Withdraw` | [MsgWithdraw](#kava.cdp.v1beta1.MsgWithdraw) | [MsgWithdrawResponse](#kava.cdp.v1beta1.MsgWithdrawResponse) | Withdraw defines a method to withdraw collateral from a CDP. | |
| `DrawDebt` | [MsgDrawDebt](#kava.cdp.v1beta1.MsgDrawDebt) | [MsgDrawDebtResponse](#kava.cdp.v1beta1.MsgDrawDebtResponse) | DrawDebt defines a method to draw debt from a CDP. | |
| `RepayDebt` | [MsgRepayDebt](#kava.cdp.v1beta1.MsgRepayDebt) | [MsgRepayDebtResponse](#kava.cdp.v1beta1.MsgRepayDebtResponse) | RepayDebt defines a method to repay debt from a CDP. | |
| `Liquidate` | [MsgLiquidate](#kava.cdp.v1beta1.MsgLiquidate) | [MsgLiquidateResponse](#kava.cdp.v1beta1.MsgLiquidateResponse) | Liquidate defines a method to attempt to liquidate a CDP whose collateralization ratio is under its liquidation ratio. | |

<!-- end services -->



<a name="kava/committee/v1beta1/committee.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/committee/v1beta1/committee.proto



<a name="kava.committee.v1beta1.BaseCommittee"></a>

### BaseCommittee
BaseCommittee is a common type shared by all Committees


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `id` | [uint64](#uint64) |  |  |
| `description` | [string](#string) |  |  |
| `members` | [bytes](#bytes) | repeated |  |
| `permissions` | [google.protobuf.Any](#google.protobuf.Any) | repeated |  |
| `vote_threshold` | [string](#string) |  | Smallest percentage that must vote for a proposal to pass |
| `proposal_duration` | [google.protobuf.Duration](#google.protobuf.Duration) |  | The length of time a proposal remains active for. Proposals will close earlier if they get enough votes. |
| `tally_option` | [TallyOption](#kava.committee.v1beta1.TallyOption) |  |  |



<a name="kava.committee.v1beta1.MemberCommittee"></a>

### MemberCommittee
MemberCommittee is an alias of BaseCommittee


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `base_committee` | [BaseCommittee](#kava.committee.v1beta1.BaseCommittee) |  |  |



<a name="kava.committee.v1beta1.TokenCommittee"></a>

### TokenCommittee
TokenCommittee supports voting on proposals by token holders


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `base_committee` | [BaseCommittee](#kava.committee.v1beta1.BaseCommittee) |  |  |
| `quorum` | [string](#string) |  |  |
| `tally_denom` | [string](#string) |  |  |

<!-- end messages -->


<a name="kava.committee.v1beta1.TallyOption"></a>

### TallyOption
TallyOption enumerates the valid types of a tally.

| Name | Number | Description |
| ---- | ------ | ----------- |
| TALLY_OPTION_UNSPECIFIED | 0 | TALLY_OPTION_UNSPECIFIED defines a null tally option. |
| TALLY_OPTION_FIRST_PAST_THE_POST | 1 | Votes are tallied each block and the proposal passes as soon as the vote threshold is reached |
| TALLY_OPTION_DEADLINE | 2 | Votes are tallied exactly once, when the deadline time is reached |

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->



<a name="kava/committee/v1beta1/genesis.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/committee/v1beta1/genesis.proto



<a name="kava.committee.v1beta1.GenesisState"></a>

### GenesisState
GenesisState defines the committee module's genesis state.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `next_proposal_id` | [uint64](#uint64) |  |  |
| `committees` | [google.protobuf.Any](#google.protobuf.Any) | repeated |  |
| `proposals` | [Proposal](#kava.committee.v1beta1.Proposal) | repeated |  |
| `votes` | [Vote](#kava.committee.v1beta1.Vote) | repeated |  |



<a name="kava.committee.v1beta1.Proposal"></a>

### Proposal
Proposal is an internal record of a governance proposal submitted to a committee.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `content` | [google.protobuf.Any](#google.protobuf.Any) |  |  |
| `id` | [uint64](#uint64) |  |  |
| `committee_id` | [uint64](#uint64) |  |  |
| `deadline` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |



<a name="kava.committee.v1beta1.Vote"></a>

### Vote
Vote is an internal record of a single governance vote.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |
| `voter` | [bytes](#bytes) |  |  |
| `vote_type` | [VoteType](#kava.committee.v1beta1.VoteType) |  |  |

<!-- end messages -->


<a name="kava.committee.v1beta1.VoteType"></a>

### VoteType
VoteType enumerates the valid types of a vote.

| Name | Number | Description |
| ---- | ------ | ----------- |
| VOTE_TYPE_UNSPECIFIED | 0 | VOTE_TYPE_UNSPECIFIED defines a no-op vote option. |
| VOTE_TYPE_YES | 1 | VOTE_TYPE_YES defines a yes vote option. |
| VOTE_TYPE_NO | 2 | VOTE_TYPE_NO defines a no vote option. |
| VOTE_TYPE_ABSTAIN | 3 | VOTE_TYPE_ABSTAIN defines an abstain vote option. |

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->



<a name="kava/committee/v1beta1/permissions.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/committee/v1beta1/permissions.proto



<a name="kava.committee.v1beta1.AllowedParamsChange"></a>

### AllowedParamsChange
AllowedParamsChange contains data on the allowed parameter changes for subspace, key, and sub params requirements.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `subspace` | [string](#string) |  |  |
| `key` | [string](#string) |  |  |
| `single_subparam_allowed_attrs` | [string](#string) | repeated | Requirements for when the subparam value is a single record. This contains a list of allowed attribute keys that can be changed on the subparam record. |
| `multi_subparams_requirements` | [SubparamRequirement](#kava.committee.v1beta1.SubparamRequirement) | repeated | Requirements for when the subparam value is a list of records. This contains requirements for each record in the list. |



<a name="kava.committee.v1beta1.GodPermission"></a>

### GodPermission
GodPermission allows any governance proposal. It is used mainly for testing.



<a name="kava.committee.v1beta1.ParamsChangePermission"></a>

### ParamsChangePermission
ParamsChangePermission allows any parameter or sub parameter change proposal.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `allowed_params_changes` | [AllowedParamsChange](#kava.committee.v1beta1.AllowedParamsChange) | repeated |  |



<a name="kava.committee.v1beta1.SoftwareUpgradePermission"></a>

### SoftwareUpgradePermission
SoftwareUpgradePermission permission type for software upgrade proposals



<a name="kava.committee.v1beta1.SubparamRequirement"></a>

### SubparamRequirement
SubparamRequirement contains requirements for a single record in a subparam value list


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `key` | [string](#string) |  | The required attr key of the param record. |
| `val` | [string](#string) |  | The required param value for the param record key. The key and value are used to match the target param record. |
| `allowed_subparam_attr_changes` | [string](#string) | repeated | The sub param attrs that are allowed to be changed. |



<a name="kava.committee.v1beta1.TextPermission"></a>

### TextPermission
TextPermission allows any text governance proposal.

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->



<a name="kava/committee/v1beta1/proposal.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/committee/v1beta1/proposal.proto



<a name="kava.committee.v1beta1.CommitteeChangeProposal"></a>

### CommitteeChangeProposal
CommitteeChangeProposal is a gov proposal for creating a new committee or modifying an existing one.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `title` | [string](#string) |  |  |
| `description` | [string](#string) |  |  |
| `new_committee` | [google.protobuf.Any](#google.protobuf.Any) |  |  |



<a name="kava.committee.v1beta1.CommitteeDeleteProposal"></a>

### CommitteeDeleteProposal
CommitteeDeleteProposal is a gov proposal for removing a committee.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `title` | [string](#string) |  |  |
| `description` | [string](#string) |  |  |
| `committee_id` | [uint64](#uint64) |  |  |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->



<a name="kava/committee/v1beta1/query.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/committee/v1beta1/query.proto



<a name="kava.committee.v1beta1.QueryCommitteeRequest"></a>

### QueryCommitteeRequest
QueryCommitteeRequest defines the request type for querying x/committee committee.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `committee_id` | [uint64](#uint64) |  |  |



<a name="kava.committee.v1beta1.QueryCommitteeResponse"></a>

### QueryCommitteeResponse
QueryCommitteeResponse defines the response type for querying x/committee committee.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `committee` | [google.protobuf.Any](#google.protobuf.Any) |  |  |



<a name="kava.committee.v1beta1.QueryCommitteesRequest"></a>

### QueryCommitteesRequest
QueryCommitteesRequest defines the request type for querying x/committee committees.



<a name="kava.committee.v1beta1.QueryCommitteesResponse"></a>

### QueryCommitteesResponse
QueryCommitteesResponse defines the response type for querying x/committee committees.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `committees` | [google.protobuf.Any](#google.protobuf.Any) | repeated |  |



<a name="kava.committee.v1beta1.QueryNextProposalIDRequest"></a>

### QueryNextProposalIDRequest
QueryNextProposalIDRequest defines the request type for querying x/committee NextProposalID.



<a name="kava.committee.v1beta1.QueryNextProposalIDResponse"></a>

### QueryNextProposalIDResponse
QueryNextProposalIDResponse defines the response type for querying x/committee NextProposalID.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `next_proposal_id` | [uint64](#uint64) |  |  |



<a name="kava.committee.v1beta1.QueryProposalRequest"></a>

### QueryProposalRequest
QueryProposalRequest defines the request type for querying x/committee proposal.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |



<a name="kava.committee.v1beta1.QueryProposalResponse"></a>

### QueryProposalResponse
QueryProposalResponse defines the response type for querying x/committee proposal.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `pub_proposal` | [google.protobuf.Any](#google.protobuf.Any) |  |  |
| `id` | [uint64](#uint64) |  |  |
| `committee_id` | [uint64](#uint64) |  |  |
| `deadline` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |



<a name="kava.committee.v1beta1.QueryProposalsRequest"></a>

### QueryProposalsRequest
QueryProposalsRequest defines the request type for querying x/committee proposals.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `committee_id` | [uint64](#uint64) |  |  |



<a name="kava.committee.v1beta1.QueryProposalsResponse"></a>

### QueryProposalsResponse
QueryProposalsResponse defines the response type for querying x/committee proposals.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposals` | [QueryProposalResponse](#kava.committee.v1beta1.QueryProposalResponse) | repeated |  |



<a name="kava.committee.v1beta1.QueryRawParamsRequest"></a>

### QueryRawParamsRequest
QueryRawParamsRequest defines the request type for querying x/committee raw params.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `subspace` | [string](#string) |  |  |
| `key` | [string](#string) |  |  |



<a name="kava.committee.v1beta1.QueryRawParamsResponse"></a>

### QueryRawParamsResponse
QueryRawParamsResponse defines the response type for querying x/committee raw params.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `raw_data` | [string](#string) |  |  |



<a name="kava.committee.v1beta1.QueryTallyRequest"></a>

### QueryTallyRequest
QueryTallyRequest defines the request type for querying x/committee tally.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |



<a name="kava.committee.v1beta1.QueryTallyResponse"></a>

### QueryTallyResponse
QueryTallyResponse defines the response type for querying x/committee tally.
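The numeric tally fields listed below all arrive as sdk.Dec strings. One plausible client-side reading — assuming `vote_threshold` is compared against `yes_votes / possible_votes` and `quorum` against `current_votes / possible_votes` — is sketched here; this is an interpretation for illustration, not the module's verbatim tally logic, so verify it against the committee keeper before relying on it:

```python
from decimal import Decimal

def meets_threshold(tally: dict) -> bool:
    """Hypothetical pass/fail check over a QueryTallyResponse-shaped dict."""
    possible = Decimal(tally["possible_votes"])
    if possible == 0:
        return False
    yes_ratio = Decimal(tally["yes_votes"]) / possible      # share voting yes
    turnout = Decimal(tally["current_votes"]) / possible    # share voting at all
    return (turnout >= Decimal(tally["quorum"])
            and yes_ratio >= Decimal(tally["vote_threshold"]))
```

Using `Decimal` rather than `float` mirrors the exact decimal semantics of sdk.Dec.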
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |
| `yes_votes` | [string](#string) |  |  |
| `no_votes` | [string](#string) |  |  |
| `current_votes` | [string](#string) |  |  |
| `possible_votes` | [string](#string) |  |  |
| `vote_threshold` | [string](#string) |  |  |
| `quorum` | [string](#string) |  |  |



<a name="kava.committee.v1beta1.QueryVoteRequest"></a>

### QueryVoteRequest
QueryVoteRequest defines the request type for querying x/committee vote.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |
| `voter` | [string](#string) |  |  |



<a name="kava.committee.v1beta1.QueryVoteResponse"></a>

### QueryVoteResponse
QueryVoteResponse defines the response type for querying x/committee vote.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |
| `voter` | [string](#string) |  |  |
| `vote_type` | [VoteType](#kava.committee.v1beta1.VoteType) |  |  |



<a name="kava.committee.v1beta1.QueryVotesRequest"></a>

### QueryVotesRequest
QueryVotesRequest defines the request type for querying x/committee votes.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |
| `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) |  |  |



<a name="kava.committee.v1beta1.QueryVotesResponse"></a>

### QueryVotesResponse
QueryVotesResponse defines the response type for querying x/committee votes.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `votes` | [QueryVoteResponse](#kava.committee.v1beta1.QueryVoteResponse) | repeated | votes defines the queried votes. |
| `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) |  | pagination defines the pagination in the response. |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->


<a name="kava.committee.v1beta1.Query"></a>

### Query
Query defines the gRPC querier service for committee module

| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint |
| ----------- | ------------ | ------------- | ------------| ------- | -------- |
| `Committees` | [QueryCommitteesRequest](#kava.committee.v1beta1.QueryCommitteesRequest) | [QueryCommitteesResponse](#kava.committee.v1beta1.QueryCommitteesResponse) | Committees queries all committees of the committee module. | GET|/kava/committee/v1beta1/committees|
| `Committee` | [QueryCommitteeRequest](#kava.committee.v1beta1.QueryCommitteeRequest) | [QueryCommitteeResponse](#kava.committee.v1beta1.QueryCommitteeResponse) | Committee queries a committee based on committee ID. | GET|/kava/committee/v1beta1/committees/{committee_id}|
| `Proposals` | [QueryProposalsRequest](#kava.committee.v1beta1.QueryProposalsRequest) | [QueryProposalsResponse](#kava.committee.v1beta1.QueryProposalsResponse) | Proposals queries proposals based on committee ID. | GET|/kava/committee/v1beta1/proposals|
| `Proposal` | [QueryProposalRequest](#kava.committee.v1beta1.QueryProposalRequest) | [QueryProposalResponse](#kava.committee.v1beta1.QueryProposalResponse) | Proposal queries a proposal based on proposal ID. | GET|/kava/committee/v1beta1/proposals/{proposal_id}|
| `NextProposalID` | [QueryNextProposalIDRequest](#kava.committee.v1beta1.QueryNextProposalIDRequest) | [QueryNextProposalIDResponse](#kava.committee.v1beta1.QueryNextProposalIDResponse) | NextProposalID queries the next proposal ID of the committee module. | GET|/kava/committee/v1beta1/next-proposal-id|
| `Votes` | [QueryVotesRequest](#kava.committee.v1beta1.QueryVotesRequest) | [QueryVotesResponse](#kava.committee.v1beta1.QueryVotesResponse) | Votes queries all votes for a single proposal ID. | GET|/kava/committee/v1beta1/proposals/{proposal_id}/votes|
| `Vote` | [QueryVoteRequest](#kava.committee.v1beta1.QueryVoteRequest) | [QueryVoteResponse](#kava.committee.v1beta1.QueryVoteResponse) | Vote queries the vote of a single voter for a single proposal ID. | GET|/kava/committee/v1beta1/proposals/{proposal_id}/votes/{voter}|
| `Tally` | [QueryTallyRequest](#kava.committee.v1beta1.QueryTallyRequest) | [QueryTallyResponse](#kava.committee.v1beta1.QueryTallyResponse) | Tally queries the tally of a single proposal ID. | GET|/kava/committee/v1beta1/proposals/{proposal_id}/tally|
| `RawParams` | [QueryRawParamsRequest](#kava.committee.v1beta1.QueryRawParamsRequest) | [QueryRawParamsResponse](#kava.committee.v1beta1.QueryRawParamsResponse) | RawParams queries the raw params data of any subspace and key. | GET|/kava/committee/v1beta1/raw-params|

<!-- end services -->



<a name="kava/committee/v1beta1/tx.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/committee/v1beta1/tx.proto



<a name="kava.committee.v1beta1.MsgSubmitProposal"></a>

### MsgSubmitProposal
MsgSubmitProposal is used by committee members to create a new proposal that they can vote on.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `pub_proposal` | [google.protobuf.Any](#google.protobuf.Any) |  |  |
| `proposer` | [string](#string) |  |  |
| `committee_id` | [uint64](#uint64) |  |  |



<a name="kava.committee.v1beta1.MsgSubmitProposalResponse"></a>

### MsgSubmitProposalResponse
MsgSubmitProposalResponse defines the SubmitProposal response type


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |



<a name="kava.committee.v1beta1.MsgVote"></a>

### MsgVote
MsgVote is submitted by committee members to vote on proposals.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `proposal_id` | [uint64](#uint64) |  |  |
| `voter` | [string](#string) |  |  |
| `vote_type` | [VoteType](#kava.committee.v1beta1.VoteType) |  |  |



<a name="kava.committee.v1beta1.MsgVoteResponse"></a>

### MsgVoteResponse
MsgVoteResponse defines the Vote response type

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->


<a name="kava.committee.v1beta1.Msg"></a>

### Msg
Msg defines the committee Msg service

| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint |
| ----------- | ------------ | ------------- | ------------| ------- | -------- |
| `SubmitProposal` | [MsgSubmitProposal](#kava.committee.v1beta1.MsgSubmitProposal) | [MsgSubmitProposalResponse](#kava.committee.v1beta1.MsgSubmitProposalResponse) | SubmitProposal defines a method for submitting a committee proposal | |
| `Vote` | [MsgVote](#kava.committee.v1beta1.MsgVote) | [MsgVoteResponse](#kava.committee.v1beta1.MsgVoteResponse) | Vote defines a method for voting on a proposal | |

<!-- end services -->



<a name="kava/hard/v1beta1/hard.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/hard/v1beta1/hard.proto



<a name="kava.hard.v1beta1.Borrow"></a>

### Borrow
Borrow defines an amount of coins borrowed from a hard module account.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `borrower` | [string](#string) |  |  |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |
| `index` | [BorrowInterestFactor](#kava.hard.v1beta1.BorrowInterestFactor) | repeated |  |



<a name="kava.hard.v1beta1.BorrowInterestFactor"></a>

### BorrowInterestFactor
BorrowInterestFactor defines an individual borrow interest factor.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `value` | [string](#string) |  |  |



<a name="kava.hard.v1beta1.BorrowLimit"></a>

### BorrowLimit
BorrowLimit enforces restrictions on a money market.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `has_max_limit` | [bool](#bool) |  |  |
| `maximum_limit` | [string](#string) |  |  |
| `loan_to_value` | [string](#string) |  |  |



<a name="kava.hard.v1beta1.CoinsProto"></a>

### CoinsProto
CoinsProto defines a Protobuf wrapper around a Coins slice


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `coins` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |



<a name="kava.hard.v1beta1.Deposit"></a>

### Deposit
Deposit defines an amount of coins deposited into a hard module account.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `depositor` | [string](#string) |  |  |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |
| `index` | [SupplyInterestFactor](#kava.hard.v1beta1.SupplyInterestFactor) | repeated |  |



<a name="kava.hard.v1beta1.InterestRateModel"></a>

### InterestRateModel
InterestRateModel contains information about an asset's interest rate.


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `base_rate_apy` | [string](#string) |  |  |
| `base_multiplier` | [string](#string) |  |  |
| `kink` | [string](#string) |  |  |
| `jump_multiplier` | [string](#string) |  |  |



<a name="kava.hard.v1beta1.MoneyMarket"></a>

### MoneyMarket
MoneyMarket is a money market for an individual asset.
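The InterestRateModel fields above (`base_rate_apy`, `base_multiplier`, `kink`, `jump_multiplier`) suggest a kinked "jump rate" curve: the rate rises linearly with utilization up to the kink, then more steeply beyond it. The sketch below assumes that shape — it is an interpretation for illustration, not the hard module's verbatim rate code:

```python
from decimal import Decimal

def borrow_apy(model: dict, utilization: Decimal) -> Decimal:
    """Hypothetical kinked-rate curve over InterestRateModel-shaped fields.

    All fields arrive as sdk.Dec strings; utilization is borrowed/supplied
    in [0, 1].
    """
    base = Decimal(model["base_rate_apy"])
    mult = Decimal(model["base_multiplier"])
    kink = Decimal(model["kink"])
    jump = Decimal(model["jump_multiplier"])
    if utilization <= kink:
        return base + utilization * mult            # linear segment
    return base + kink * mult + (utilization - kink) * jump  # jump segment
```

Below the kink the slope is `base_multiplier`; above it, each extra unit of utilization costs `jump_multiplier` instead, which discourages draining the market's liquidity.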
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `borrow_limit` | [BorrowLimit](#kava.hard.v1beta1.BorrowLimit) |  |  |
| `spot_market_id` | [string](#string) |  |  |
| `conversion_factor` | [string](#string) |  |  |
| `interest_rate_model` | [InterestRateModel](#kava.hard.v1beta1.InterestRateModel) |  |  |
| `reserve_factor` | [string](#string) |  |  |
| `keeper_reward_percentage` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.Params"></a>

### Params
Params defines the parameters for the hard module.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `money_markets` | [MoneyMarket](#kava.hard.v1beta1.MoneyMarket) | repeated |  |
| `minimum_borrow_usd_value` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.SupplyInterestFactor"></a>

### SupplyInterestFactor
SupplyInterestFactor defines an individual supply interest factor.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `value` | [string](#string) |  |  |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->

<a name="kava/hard/v1beta1/genesis.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/hard/v1beta1/genesis.proto

<a name="kava.hard.v1beta1.GenesisAccumulationTime"></a>

### GenesisAccumulationTime
GenesisAccumulationTime stores the previous distribution time and its corresponding denom.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |
| `previous_accumulation_time` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |
| `supply_interest_factor` | [string](#string) |  |  |
| `borrow_interest_factor` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.GenesisState"></a>

### GenesisState
GenesisState defines the hard module's genesis state.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `params` | [Params](#kava.hard.v1beta1.Params) |  |  |
| `previous_accumulation_times` | [GenesisAccumulationTime](#kava.hard.v1beta1.GenesisAccumulationTime) | repeated |  |
| `deposits` | [Deposit](#kava.hard.v1beta1.Deposit) | repeated |  |
| `borrows` | [Borrow](#kava.hard.v1beta1.Borrow) | repeated |  |
| `total_supplied` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |
| `total_borrowed` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |
| `total_reserves` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->

<a name="kava/hard/v1beta1/query.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/hard/v1beta1/query.proto

<a name="kava.hard.v1beta1.BorrowInterestFactorResponse"></a>

### BorrowInterestFactorResponse
BorrowInterestFactorResponse defines an individual borrow interest factor.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `value` | [string](#string) |  | sdk.Dec as string |

<a name="kava.hard.v1beta1.BorrowResponse"></a>

### BorrowResponse
BorrowResponse defines an amount of coins borrowed from a hard module account.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `borrower` | [string](#string) |  |  |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |
| `index` | [BorrowInterestFactorResponse](#kava.hard.v1beta1.BorrowInterestFactorResponse) | repeated |  |

<a name="kava.hard.v1beta1.DepositResponse"></a>

### DepositResponse
DepositResponse defines an amount of coins deposited into a hard module account.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `depositor` | [string](#string) |  |  |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |
| `index` | [SupplyInterestFactorResponse](#kava.hard.v1beta1.SupplyInterestFactorResponse) | repeated |  |

<a name="kava.hard.v1beta1.InterestFactor"></a>

### InterestFactor
InterestFactor is a unique type returned by interest factor queries

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `borrow_interest_factor` | [string](#string) |  | sdk.Dec as String |
| `supply_interest_factor` | [string](#string) |  | sdk.Dec as String |

<a name="kava.hard.v1beta1.MoneyMarketInterestRate"></a>

### MoneyMarketInterestRate
MoneyMarketInterestRate is a unique type returned by interest rate queries

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `supply_interest_rate` | [string](#string) |  | sdk.Dec as String |
| `borrow_interest_rate` | [string](#string) |  | sdk.Dec as String |

<a name="kava.hard.v1beta1.QueryAccountsRequest"></a>

### QueryAccountsRequest
QueryAccountsRequest is the request type for the Query/Accounts RPC method.

<a name="kava.hard.v1beta1.QueryAccountsResponse"></a>

### QueryAccountsResponse
QueryAccountsResponse is the response type for the Query/Accounts RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `accounts` | [cosmos.auth.v1beta1.ModuleAccount](#cosmos.auth.v1beta1.ModuleAccount) | repeated |  |

<a name="kava.hard.v1beta1.QueryBorrowsRequest"></a>

### QueryBorrowsRequest
QueryBorrowsRequest is the request type for the Query/Borrows RPC method.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |
| `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) |  |  |

<a name="kava.hard.v1beta1.QueryBorrowsResponse"></a>

### QueryBorrowsResponse
QueryBorrowsResponse is the response type for the Query/Borrows RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `borrows` | [BorrowResponse](#kava.hard.v1beta1.BorrowResponse) | repeated |  |
| `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) |  |  |

<a name="kava.hard.v1beta1.QueryDepositsRequest"></a>

### QueryDepositsRequest
QueryDepositsRequest is the request type for the Query/Deposits RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |
| `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) |  |  |

<a name="kava.hard.v1beta1.QueryDepositsResponse"></a>

### QueryDepositsResponse
QueryDepositsResponse is the response type for the Query/Deposits RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `deposits` | [DepositResponse](#kava.hard.v1beta1.DepositResponse) | repeated |  |
| `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) |  |  |

<a name="kava.hard.v1beta1.QueryInterestFactorsRequest"></a>

### QueryInterestFactorsRequest
QueryInterestFactorsRequest is the request type for the Query/InterestFactors RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.QueryInterestFactorsResponse"></a>

### QueryInterestFactorsResponse
QueryInterestFactorsResponse is the response type for the Query/InterestFactors RPC method.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `interest_factors` | [InterestFactor](#kava.hard.v1beta1.InterestFactor) | repeated |  |

<a name="kava.hard.v1beta1.QueryInterestRateRequest"></a>

### QueryInterestRateRequest
QueryInterestRateRequest is the request type for the Query/InterestRate RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.QueryInterestRateResponse"></a>

### QueryInterestRateResponse
QueryInterestRateResponse is the response type for the Query/InterestRate RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `interest_rates` | [MoneyMarketInterestRate](#kava.hard.v1beta1.MoneyMarketInterestRate) | repeated |  |

<a name="kava.hard.v1beta1.QueryParamsRequest"></a>

### QueryParamsRequest
QueryParamsRequest is the request type for the Query/Params RPC method.

<a name="kava.hard.v1beta1.QueryParamsResponse"></a>

### QueryParamsResponse
QueryParamsResponse is the response type for the Query/Params RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `params` | [Params](#kava.hard.v1beta1.Params) |  |  |

<a name="kava.hard.v1beta1.QueryReservesRequest"></a>

### QueryReservesRequest
QueryReservesRequest is the request type for the Query/Reserves RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.QueryReservesResponse"></a>

### QueryReservesResponse
QueryReservesResponse is the response type for the Query/Reserves RPC method.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.hard.v1beta1.QueryTotalBorrowedRequest"></a>

### QueryTotalBorrowedRequest
QueryTotalBorrowedRequest is the request type for the Query/TotalBorrowed RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.QueryTotalBorrowedResponse"></a>

### QueryTotalBorrowedResponse
QueryTotalBorrowedResponse is the response type for the Query/TotalBorrowed RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `borrowed_coins` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.hard.v1beta1.QueryTotalDepositedRequest"></a>

### QueryTotalDepositedRequest
QueryTotalDepositedRequest is the request type for the Query/TotalDeposited RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.QueryTotalDepositedResponse"></a>

### QueryTotalDepositedResponse
QueryTotalDepositedResponse is the response type for the Query/TotalDeposited RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `supplied_coins` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.hard.v1beta1.QueryUnsyncedBorrowsRequest"></a>

### QueryUnsyncedBorrowsRequest
QueryUnsyncedBorrowsRequest is the request type for the Query/UnsyncedBorrows RPC method.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |
| `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) |  |  |

<a name="kava.hard.v1beta1.QueryUnsyncedBorrowsResponse"></a>

### QueryUnsyncedBorrowsResponse
QueryUnsyncedBorrowsResponse is the response type for the Query/UnsyncedBorrows RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `borrows` | [BorrowResponse](#kava.hard.v1beta1.BorrowResponse) | repeated |  |
| `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) |  |  |

<a name="kava.hard.v1beta1.QueryUnsyncedDepositsRequest"></a>

### QueryUnsyncedDepositsRequest
QueryUnsyncedDepositsRequest is the request type for the Query/UnsyncedDeposits RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |
| `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) |  |  |

<a name="kava.hard.v1beta1.QueryUnsyncedDepositsResponse"></a>

### QueryUnsyncedDepositsResponse
QueryUnsyncedDepositsResponse is the response type for the Query/UnsyncedDeposits RPC method.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `deposits` | [DepositResponse](#kava.hard.v1beta1.DepositResponse) | repeated |  |
| `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) |  |  |

<a name="kava.hard.v1beta1.SupplyInterestFactorResponse"></a>

### SupplyInterestFactorResponse
SupplyInterestFactorResponse defines an individual supply interest factor.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `value` | [string](#string) |  | sdk.Dec as string |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<a name="kava.hard.v1beta1.Query"></a>

### Query
Query defines the gRPC querier service for the hard module.

| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint |
| ----------- | ------------ | ------------- | ------------| ------- | -------- |
| `Params` | [QueryParamsRequest](#kava.hard.v1beta1.QueryParamsRequest) | [QueryParamsResponse](#kava.hard.v1beta1.QueryParamsResponse) | Params queries module params. | GET|/kava/hard/v1beta1/params|
| `Accounts` | [QueryAccountsRequest](#kava.hard.v1beta1.QueryAccountsRequest) | [QueryAccountsResponse](#kava.hard.v1beta1.QueryAccountsResponse) | Accounts queries module accounts. | GET|/kava/hard/v1beta1/accounts|
| `Deposits` | [QueryDepositsRequest](#kava.hard.v1beta1.QueryDepositsRequest) | [QueryDepositsResponse](#kava.hard.v1beta1.QueryDepositsResponse) | Deposits queries hard deposits. | GET|/kava/hard/v1beta1/deposits|
| `UnsyncedDeposits` | [QueryUnsyncedDepositsRequest](#kava.hard.v1beta1.QueryUnsyncedDepositsRequest) | [QueryUnsyncedDepositsResponse](#kava.hard.v1beta1.QueryUnsyncedDepositsResponse) | UnsyncedDeposits queries unsynced deposits. | GET|/kava/hard/v1beta1/unsynced-deposits|
| `TotalDeposited` | [QueryTotalDepositedRequest](#kava.hard.v1beta1.QueryTotalDepositedRequest) | [QueryTotalDepositedResponse](#kava.hard.v1beta1.QueryTotalDepositedResponse) | TotalDeposited queries total coins deposited to hard liquidity pools. | GET|/kava/hard/v1beta1/total-deposited/{denom}|
| `Borrows` | [QueryBorrowsRequest](#kava.hard.v1beta1.QueryBorrowsRequest) | [QueryBorrowsResponse](#kava.hard.v1beta1.QueryBorrowsResponse) | Borrows queries hard borrows. | GET|/kava/hard/v1beta1/borrows|
| `UnsyncedBorrows` | [QueryUnsyncedBorrowsRequest](#kava.hard.v1beta1.QueryUnsyncedBorrowsRequest) | [QueryUnsyncedBorrowsResponse](#kava.hard.v1beta1.QueryUnsyncedBorrowsResponse) | UnsyncedBorrows queries unsynced borrows. | GET|/kava/hard/v1beta1/unsynced-borrows|
| `TotalBorrowed` | [QueryTotalBorrowedRequest](#kava.hard.v1beta1.QueryTotalBorrowedRequest) | [QueryTotalBorrowedResponse](#kava.hard.v1beta1.QueryTotalBorrowedResponse) | TotalBorrowed queries total coins borrowed from hard liquidity pools. | GET|/kava/hard/v1beta1/total-borrowed/{denom}|
| `InterestRate` | [QueryInterestRateRequest](#kava.hard.v1beta1.QueryInterestRateRequest) | [QueryInterestRateResponse](#kava.hard.v1beta1.QueryInterestRateResponse) | InterestRate queries the hard module interest rates. | GET|/kava/hard/v1beta1/interest-rate/{denom}|
| `Reserves` | [QueryReservesRequest](#kava.hard.v1beta1.QueryReservesRequest) | [QueryReservesResponse](#kava.hard.v1beta1.QueryReservesResponse) | Reserves queries total hard reserve coins. | GET|/kava/hard/v1beta1/reserves/{denom}|
| `InterestFactors` | [QueryInterestFactorsRequest](#kava.hard.v1beta1.QueryInterestFactorsRequest) | [QueryInterestFactorsResponse](#kava.hard.v1beta1.QueryInterestFactorsResponse) | InterestFactors queries hard module interest factors. | GET|/kava/hard/v1beta1/interest-factors/{denom}|

<!-- end services -->

<a name="kava/hard/v1beta1/tx.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/hard/v1beta1/tx.proto

<a name="kava.hard.v1beta1.MsgBorrow"></a>

### MsgBorrow
MsgBorrow defines the Msg/Borrow request type.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `borrower` | [string](#string) |  |  |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.hard.v1beta1.MsgBorrowResponse"></a>

### MsgBorrowResponse
MsgBorrowResponse defines the Msg/Borrow response type.
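The `InterestRate` query above returns per-market rates derived from each money market's `InterestRateModel` (`base_rate_apy`, `base_multiplier`, `kink`, `jump_multiplier`). The exact keeper formula is not part of this reference; as a rough sketch, a conventional kinked-rate curve over utilization implied by those fields looks like:

```python
def borrow_rate(utilization, base_rate_apy, base_multiplier, kink, jump_multiplier):
    """Sketch of a kinked interest-rate curve (illustrative, not the keeper code).

    Below the kink, the rate grows at base_multiplier per unit of utilization;
    above the kink, the steeper jump_multiplier applies to the excess.
    """
    if utilization <= kink:
        return base_rate_apy + utilization * base_multiplier
    return (base_rate_apy
            + kink * base_multiplier
            + (utilization - kink) * jump_multiplier)
```

For example, with a zero base rate, a base multiplier of 0.05, a kink at 80% utilization, and a jump multiplier of 5.0 (all hypothetical values), 90% utilization yields a rate of 0.54.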
<a name="kava.hard.v1beta1.MsgDeposit"></a>

### MsgDeposit
MsgDeposit defines the Msg/Deposit request type.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `depositor` | [string](#string) |  |  |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.hard.v1beta1.MsgDepositResponse"></a>

### MsgDepositResponse
MsgDepositResponse defines the Msg/Deposit response type.

<a name="kava.hard.v1beta1.MsgLiquidate"></a>

### MsgLiquidate
MsgLiquidate defines the Msg/Liquidate request type.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `keeper` | [string](#string) |  |  |
| `borrower` | [string](#string) |  |  |

<a name="kava.hard.v1beta1.MsgLiquidateResponse"></a>

### MsgLiquidateResponse
MsgLiquidateResponse defines the Msg/Liquidate response type.

<a name="kava.hard.v1beta1.MsgRepay"></a>

### MsgRepay
MsgRepay defines the Msg/Repay request type.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `sender` | [string](#string) |  |  |
| `owner` | [string](#string) |  |  |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.hard.v1beta1.MsgRepayResponse"></a>

### MsgRepayResponse
MsgRepayResponse defines the Msg/Repay response type.

<a name="kava.hard.v1beta1.MsgWithdraw"></a>

### MsgWithdraw
MsgWithdraw defines the Msg/Withdraw request type.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `depositor` | [string](#string) |  |  |
| `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.hard.v1beta1.MsgWithdrawResponse"></a>

### MsgWithdrawResponse
MsgWithdrawResponse defines the Msg/Withdraw response type.

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<a name="kava.hard.v1beta1.Msg"></a>

### Msg
Msg defines the hard Msg service.
| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint |
| ----------- | ------------ | ------------- | ------------| ------- | -------- |
| `Deposit` | [MsgDeposit](#kava.hard.v1beta1.MsgDeposit) | [MsgDepositResponse](#kava.hard.v1beta1.MsgDepositResponse) | Deposit defines a method for depositing funds to hard liquidity pool. |  |  |
| `Withdraw` | [MsgWithdraw](#kava.hard.v1beta1.MsgWithdraw) | [MsgWithdrawResponse](#kava.hard.v1beta1.MsgWithdrawResponse) | Withdraw defines a method for withdrawing funds from hard liquidity pool. |  |  |
| `Borrow` | [MsgBorrow](#kava.hard.v1beta1.MsgBorrow) | [MsgBorrowResponse](#kava.hard.v1beta1.MsgBorrowResponse) | Borrow defines a method for borrowing funds from hard liquidity pool. |  |  |
| `Repay` | [MsgRepay](#kava.hard.v1beta1.MsgRepay) | [MsgRepayResponse](#kava.hard.v1beta1.MsgRepayResponse) | Repay defines a method for repaying funds borrowed from hard liquidity pool. |  |  |
| `Liquidate` | [MsgLiquidate](#kava.hard.v1beta1.MsgLiquidate) | [MsgLiquidateResponse](#kava.hard.v1beta1.MsgLiquidateResponse) | Liquidate defines a method for attempting to liquidate a borrower that is over their loan-to-value. |  |  |

<!-- end services -->

<a name="kava/incentive/v1beta1/claims.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/incentive/v1beta1/claims.proto

<a name="kava.incentive.v1beta1.BaseClaim"></a>

### BaseClaim
BaseClaim is a claim with a single reward coin type

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `owner` | [bytes](#bytes) |  |  |
| `reward` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |

<a name="kava.incentive.v1beta1.BaseMultiClaim"></a>

### BaseMultiClaim
BaseMultiClaim is a claim with multiple reward coin types

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `owner` | [bytes](#bytes) |  |  |
| `reward` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.incentive.v1beta1.DelegatorClaim"></a>

### DelegatorClaim
DelegatorClaim stores delegation rewards that can be claimed by owner

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `base_claim` | [BaseMultiClaim](#kava.incentive.v1beta1.BaseMultiClaim) |  |  |
| `reward_indexes` | [MultiRewardIndex](#kava.incentive.v1beta1.MultiRewardIndex) | repeated |  |

<a name="kava.incentive.v1beta1.HardLiquidityProviderClaim"></a>

### HardLiquidityProviderClaim
HardLiquidityProviderClaim stores the hard liquidity provider rewards that can be claimed by owner

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `base_claim` | [BaseMultiClaim](#kava.incentive.v1beta1.BaseMultiClaim) |  |  |
| `supply_reward_indexes` | [MultiRewardIndex](#kava.incentive.v1beta1.MultiRewardIndex) | repeated |  |
| `borrow_reward_indexes` | [MultiRewardIndex](#kava.incentive.v1beta1.MultiRewardIndex) | repeated |  |

<a name="kava.incentive.v1beta1.MultiRewardIndex"></a>

### MultiRewardIndex
MultiRewardIndex stores reward accumulation information on multiple reward types

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |
| `reward_indexes` | [RewardIndex](#kava.incentive.v1beta1.RewardIndex) | repeated |  |

<a name="kava.incentive.v1beta1.MultiRewardIndexesProto"></a>

### MultiRewardIndexesProto
MultiRewardIndexesProto defines a Protobuf wrapper around a MultiRewardIndexes slice

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `multi_reward_indexes` | [MultiRewardIndex](#kava.incentive.v1beta1.MultiRewardIndex) | repeated |  |

<a name="kava.incentive.v1beta1.RewardIndex"></a>

### RewardIndex
RewardIndex stores reward accumulation information

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |
| `reward_factor` | [bytes](#bytes) |  |  |

<a name="kava.incentive.v1beta1.RewardIndexesProto"></a>

### RewardIndexesProto
RewardIndexesProto defines a Protobuf wrapper around a RewardIndexes slice

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `reward_indexes` | [RewardIndex](#kava.incentive.v1beta1.RewardIndex) | repeated |  |

<a name="kava.incentive.v1beta1.SwapClaim"></a>

### SwapClaim
SwapClaim stores the swap rewards that can be claimed by owner

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `base_claim` | [BaseMultiClaim](#kava.incentive.v1beta1.BaseMultiClaim) |  |  |
| `reward_indexes` | [MultiRewardIndex](#kava.incentive.v1beta1.MultiRewardIndex) | repeated |  |

<a name="kava.incentive.v1beta1.USDXMintingClaim"></a>

### USDXMintingClaim
USDXMintingClaim is for USDX minting rewards

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `base_claim` | [BaseClaim](#kava.incentive.v1beta1.BaseClaim) |  |  |
| `reward_indexes` | [RewardIndex](#kava.incentive.v1beta1.RewardIndex) | repeated |  |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->

<a name="kava/incentive/v1beta1/params.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/incentive/v1beta1/params.proto

<a name="kava.incentive.v1beta1.MultiRewardPeriod"></a>

### MultiRewardPeriod
MultiRewardPeriod supports multiple reward types

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `active` | [bool](#bool) |  |  |
| `collateral_type` | [string](#string) |  |  |
| `start` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |
| `end` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |
| `rewards_per_second` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated |  |

<a name="kava.incentive.v1beta1.Multiplier"></a>

### Multiplier
Multiplier amount the claim rewards get increased by, along with how long the claim rewards are locked

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `name` | [string](#string) |  |  |
| `months_lockup` | [int64](#int64) |  |  |
| `factor` | [bytes](#bytes) |  |  |

<a name="kava.incentive.v1beta1.MultipliersPerDenom"></a>

### MultipliersPerDenom
MultipliersPerDenom is a map of denoms to a set of multipliers

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `denom` | [string](#string) |  |  |
| `multipliers` | [Multiplier](#kava.incentive.v1beta1.Multiplier) | repeated |  |

<a name="kava.incentive.v1beta1.Params"></a>

### Params
Params

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `usdx_minting_reward_periods` | [RewardPeriod](#kava.incentive.v1beta1.RewardPeriod) | repeated |  |
| `hard_supply_reward_periods` | [MultiRewardPeriod](#kava.incentive.v1beta1.MultiRewardPeriod) | repeated |  |
| `hard_borrow_reward_periods` | [MultiRewardPeriod](#kava.incentive.v1beta1.MultiRewardPeriod) | repeated |  |
| `delegator_reward_periods` | [MultiRewardPeriod](#kava.incentive.v1beta1.MultiRewardPeriod) | repeated |  |
| `swap_reward_periods` | [MultiRewardPeriod](#kava.incentive.v1beta1.MultiRewardPeriod) | repeated |  |
| `claim_multipliers` | [MultipliersPerDenom](#kava.incentive.v1beta1.MultipliersPerDenom) | repeated |  |
| `claim_end` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |

<a name="kava.incentive.v1beta1.RewardPeriod"></a>

### RewardPeriod
RewardPeriod stores the state of an ongoing reward

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `active` | [bool](#bool) |  |  |
| `collateral_type` | [string](#string) |  |  |
| `start` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |
| `end` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |
| `rewards_per_second` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  |  |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->

<a name="kava/incentive/v1beta1/genesis.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/incentive/v1beta1/genesis.proto

<a name="kava.incentive.v1beta1.AccumulationTime"></a>

### AccumulationTime
AccumulationTime stores the previous reward distribution time and its corresponding collateral type

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `collateral_type` | [string](#string) |  |  |
| `previous_accumulation_time` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) |  |  |

<a name="kava.incentive.v1beta1.GenesisRewardState"></a>

### GenesisRewardState
GenesisRewardState groups together the global state for a particular reward so it can be exported in genesis.

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `accumulation_times` | [AccumulationTime](#kava.incentive.v1beta1.AccumulationTime) | repeated |  |
| `multi_reward_indexes` | [MultiRewardIndex](#kava.incentive.v1beta1.MultiRewardIndex) | repeated |  |

<a name="kava.incentive.v1beta1.GenesisState"></a>

### GenesisState
GenesisState is the state that must be provided at genesis.
| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `params` | [Params](#kava.incentive.v1beta1.Params) |  |  |
| `usdx_reward_state` | [GenesisRewardState](#kava.incentive.v1beta1.GenesisRewardState) |  |  |
| `hard_supply_reward_state` | [GenesisRewardState](#kava.incentive.v1beta1.GenesisRewardState) |  |  |
| `hard_borrow_reward_state` | [GenesisRewardState](#kava.incentive.v1beta1.GenesisRewardState) |  |  |
| `delegator_reward_state` | [GenesisRewardState](#kava.incentive.v1beta1.GenesisRewardState) |  |  |
| `swap_reward_state` | [GenesisRewardState](#kava.incentive.v1beta1.GenesisRewardState) |  |  |
| `usdx_minting_claims` | [USDXMintingClaim](#kava.incentive.v1beta1.USDXMintingClaim) | repeated |  |
| `hard_liquidity_provider_claims` | [HardLiquidityProviderClaim](#kava.incentive.v1beta1.HardLiquidityProviderClaim) | repeated |  |
| `delegator_claims` | [DelegatorClaim](#kava.incentive.v1beta1.DelegatorClaim) | repeated |  |
| `swap_claims` | [SwapClaim](#kava.incentive.v1beta1.SwapClaim) | repeated |  |

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->

<!-- end services -->

<a name="kava/incentive/v1beta1/tx.proto"></a>
<p align="right"><a href="#top">Top</a></p>

## kava/incentive/v1beta1/tx.proto

<a name="kava.incentive.v1beta1.MsgClaimDelegatorReward"></a>

### MsgClaimDelegatorReward
MsgClaimDelegatorReward message type used to claim delegator rewards

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `sender` | [string](#string) |  |  |
| `denoms_to_claim` | [Selection](#kava.incentive.v1beta1.Selection) | repeated |  |

<a name="kava.incentive.v1beta1.MsgClaimDelegatorRewardResponse"></a>

### MsgClaimDelegatorRewardResponse
MsgClaimDelegatorRewardResponse defines the Msg/ClaimDelegatorReward response type.
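Each claim message selects a `Multiplier` (defined in params.proto above) per denom: the chosen `factor` scales the accrued reward, and `months_lockup` determines how long the payout vests. As an illustrative sketch — the multiplier names and values below are hypothetical, not module defaults:

```python
def claim_payout(base_reward, multipliers, multiplier_name):
    """Scale an accrued claim reward by the chosen Multiplier's factor.

    multipliers: dict mapping multiplier name -> (months_lockup, factor),
    mirroring the Multiplier message fields. Returns (payout, months_lockup).
    Illustrative only; the incentive keeper's actual logic is not shown here.
    """
    months_lockup, factor = multipliers[multiplier_name]
    return base_reward * factor, months_lockup


# Hypothetical multiplier set: a small instant-ish payout vs. a full locked one.
MULTIPLIERS = {"small": (1, 0.2), "large": (12, 1.0)}
```

Claiming 1000 units of accrued reward with the hypothetical `"small"` multiplier would pay out 200 units with a one-month lockup, while `"large"` pays the full 1000 locked for twelve months.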
<a name="kava.incentive.v1beta1.MsgClaimHardReward"></a>

### MsgClaimHardReward
MsgClaimHardReward message type used to claim Hard liquidity provider rewards

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `sender` | [string](#string) |  |  |
| `denoms_to_claim` | [Selection](#kava.incentive.v1beta1.Selection) | repeated |  |

<a name="kava.incentive.v1beta1.MsgClaimHardRewardResponse"></a>

### MsgClaimHardRewardResponse
MsgClaimHardRewardResponse defines the Msg/ClaimHardReward response type.

<a name="kava.incentive.v1beta1.MsgClaimSwapReward"></a>

### MsgClaimSwapReward
MsgClaimSwapReward message type used to claim swap rewards

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `sender` | [string](#string) |  |  |
| `denoms_to_claim` | [Selection](#kava.incentive.v1beta1.Selection) | repeated |  |

<a name="kava.incentive.v1beta1.MsgClaimSwapRewardResponse"></a>

### MsgClaimSwapRewardResponse
MsgClaimSwapRewardResponse defines the Msg/ClaimSwapReward response type.

<a name="kava.incentive.v1beta1.MsgClaimUSDXMintingReward"></a>

### MsgClaimUSDXMintingReward
MsgClaimUSDXMintingReward message type used to claim USDX minting rewards

| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `sender` | [string](#string) |  |  |
| `multiplier_name` | [string](#string) |  |  |

<a name="kava.incentive.v1beta1.MsgClaimUSDXMintingRewardResponse"></a>

### MsgClaimUSDXMintingRewardResponse
MsgClaimUSDXMintingRewardResponse defines the Msg/ClaimUSDXMintingReward response type.

<a name="kava.incentive.v1beta1.Selection"></a>

### Selection
Selection is a pair of denom and multiplier name. It holds the choice of multiplier a user makes when they claim a denom.
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `denom` | [string](#string) | | | | `multiplier_name` | [string](#string) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.incentive.v1beta1.Msg"></a> ### Msg Msg defines the incentive Msg service. | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `ClaimUSDXMintingReward` | [MsgClaimUSDXMintingReward](#kava.incentive.v1beta1.MsgClaimUSDXMintingReward) | [MsgClaimUSDXMintingRewardResponse](#kava.incentive.v1beta1.MsgClaimUSDXMintingRewardResponse) | ClaimUSDXMintingReward is a message type used to claim USDX minting rewards | | | `ClaimHardReward` | [MsgClaimHardReward](#kava.incentive.v1beta1.MsgClaimHardReward) | [MsgClaimHardRewardResponse](#kava.incentive.v1beta1.MsgClaimHardRewardResponse) | ClaimHardReward is a message type used to claim Hard liquidity provider rewards | | | `ClaimDelegatorReward` | [MsgClaimDelegatorReward](#kava.incentive.v1beta1.MsgClaimDelegatorReward) | [MsgClaimDelegatorRewardResponse](#kava.incentive.v1beta1.MsgClaimDelegatorRewardResponse) | ClaimDelegatorReward is a message type used to claim delegator rewards | | | `ClaimSwapReward` | [MsgClaimSwapReward](#kava.incentive.v1beta1.MsgClaimSwapReward) | [MsgClaimSwapRewardResponse](#kava.incentive.v1beta1.MsgClaimSwapRewardResponse) | ClaimSwapReward is a message type used to claim delegator rewards | | <!-- end services --> <a name="kava/issuance/v1beta1/genesis.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/issuance/v1beta1/genesis.proto <a name="kava.issuance.v1beta1.Asset"></a> ### Asset Asset type for assets in the issuance module | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `owner` | [string](#string) | | | | `denom` | [string](#string) | | | | `blocked_addresses` | [string](#string) | repeated | 
| | `paused` | [bool](#bool) | | | | `blockable` | [bool](#bool) | | | | `rate_limit` | [RateLimit](#kava.issuance.v1beta1.RateLimit) | | | <a name="kava.issuance.v1beta1.AssetSupply"></a> ### AssetSupply AssetSupply contains information about an asset's rate-limited supply (the total supply of the asset is tracked in the top-level supply module) | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `current_supply` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `time_elapsed` | [google.protobuf.Duration](#google.protobuf.Duration) | | | <a name="kava.issuance.v1beta1.GenesisState"></a> ### GenesisState GenesisState defines the issuance module's genesis state. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.issuance.v1beta1.Params) | | params defines all the parameters of the module. | | `supplies` | [AssetSupply](#kava.issuance.v1beta1.AssetSupply) | repeated | | <a name="kava.issuance.v1beta1.Params"></a> ### Params Params defines the parameters for the issuance module. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `assets` | [Asset](#kava.issuance.v1beta1.Asset) | repeated | | <a name="kava.issuance.v1beta1.RateLimit"></a> ### RateLimit RateLimit parameters for rate-limiting the supply of an issued asset | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `active` | [bool](#bool) | | | | `limit` | [bytes](#bytes) | | | | `time_period` | [google.protobuf.Duration](#google.protobuf.Duration) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/issuance/v1beta1/query.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/issuance/v1beta1/query.proto <a name="kava.issuance.v1beta1.QueryParamsRequest"></a> ### QueryParamsRequest QueryParamsRequest defines the request type for querying x/issuance parameters. 
<a name="kava.issuance.v1beta1.QueryParamsResponse"></a> ### QueryParamsResponse QueryParamsResponse defines the response type for querying x/issuance parameters. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.issuance.v1beta1.Params) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.issuance.v1beta1.Query"></a> ### Query Query defines the gRPC querier service for issuance module | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `Params` | [QueryParamsRequest](#kava.issuance.v1beta1.QueryParamsRequest) | [QueryParamsResponse](#kava.issuance.v1beta1.QueryParamsResponse) | Params queries all parameters of the issuance module. | GET|/kava/issuance/v1beta1/params| <!-- end services --> <a name="kava/issuance/v1beta1/tx.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/issuance/v1beta1/tx.proto <a name="kava.issuance.v1beta1.MsgBlockAddress"></a> ### MsgBlockAddress MsgBlockAddress represents a message used by the issuer to block an address from holding or transferring tokens | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `sender` | [string](#string) | | | | `denom` | [string](#string) | | | | `blocked_address` | [string](#string) | | | <a name="kava.issuance.v1beta1.MsgBlockAddressResponse"></a> ### MsgBlockAddressResponse MsgBlockAddressResponse defines the Msg/BlockAddress response type. 
<a name="kava.issuance.v1beta1.MsgIssueTokens"></a> ### MsgIssueTokens MsgIssueTokens represents a message used by the issuer to issue new tokens | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `sender` | [string](#string) | | | | `tokens` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | | `receiver` | [string](#string) | | | <a name="kava.issuance.v1beta1.MsgIssueTokensResponse"></a> ### MsgIssueTokensResponse MsgIssueTokensResponse defines the Msg/IssueTokens response type. <a name="kava.issuance.v1beta1.MsgRedeemTokens"></a> ### MsgRedeemTokens MsgRedeemTokens represents a message used by the issuer to redeem (burn) tokens | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `sender` | [string](#string) | | | | `tokens` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | | <a name="kava.issuance.v1beta1.MsgRedeemTokensResponse"></a> ### MsgRedeemTokensResponse MsgRedeemTokensResponse defines the Msg/RedeemTokens response type. <a name="kava.issuance.v1beta1.MsgSetPauseStatus"></a> ### MsgSetPauseStatus MsgSetPauseStatus message type used by the issuer to pause or unpause status | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `sender` | [string](#string) | | | | `denom` | [string](#string) | | | | `status` | [bool](#bool) | | | <a name="kava.issuance.v1beta1.MsgSetPauseStatusResponse"></a> ### MsgSetPauseStatusResponse MsgSetPauseStatusResponse defines the Msg/SetPauseStatus response type. 
<a name="kava.issuance.v1beta1.MsgUnblockAddress"></a> ### MsgUnblockAddress MsgUnblockAddress message type used by the issuer to unblock an address from holding or transferring tokens | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `sender` | [string](#string) | | | | `denom` | [string](#string) | | | | `blocked_address` | [string](#string) | | | <a name="kava.issuance.v1beta1.MsgUnblockAddressResponse"></a> ### MsgUnblockAddressResponse MsgUnblockAddressResponse defines the Msg/UnblockAddress response type. <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.issuance.v1beta1.Msg"></a> ### Msg Msg defines the issuance Msg service. | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `IssueTokens` | [MsgIssueTokens](#kava.issuance.v1beta1.MsgIssueTokens) | [MsgIssueTokensResponse](#kava.issuance.v1beta1.MsgIssueTokensResponse) | IssueTokens message type used by the issuer to issue new tokens | | | `RedeemTokens` | [MsgRedeemTokens](#kava.issuance.v1beta1.MsgRedeemTokens) | [MsgRedeemTokensResponse](#kava.issuance.v1beta1.MsgRedeemTokensResponse) | RedeemTokens message type used by the issuer to redeem (burn) tokens | | | `BlockAddress` | [MsgBlockAddress](#kava.issuance.v1beta1.MsgBlockAddress) | [MsgBlockAddressResponse](#kava.issuance.v1beta1.MsgBlockAddressResponse) | BlockAddress message type used by the issuer to block an address from holding or transferring tokens | | | `UnblockAddress` | [MsgUnblockAddress](#kava.issuance.v1beta1.MsgUnblockAddress) | [MsgUnblockAddressResponse](#kava.issuance.v1beta1.MsgUnblockAddressResponse) | UnblockAddress message type used by the issuer to unblock an address from holding or transferring tokens | | | `SetPauseStatus` | [MsgSetPauseStatus](#kava.issuance.v1beta1.MsgSetPauseStatus) | 
[MsgSetPauseStatusResponse](#kava.issuance.v1beta1.MsgSetPauseStatusResponse) | SetPauseStatus message type used to pause or unpause status | | <!-- end services --> <a name="kava/kavadist/v1beta1/params.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/kavadist/v1beta1/params.proto <a name="kava.kavadist.v1beta1.Params"></a> ### Params Params governance parameters for kavadist module | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `active` | [bool](#bool) | | | | `periods` | [Period](#kava.kavadist.v1beta1.Period) | repeated | | <a name="kava.kavadist.v1beta1.Period"></a> ### Period Period stores the specified start and end dates, and the inflation, expressed as a decimal representing the yearly APR of KAVA tokens that will be minted during that period | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `start` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | example "2020-03-01T15:20:00Z" | | `end` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | example "2020-06-01T15:20:00Z" | | `inflation` | [bytes](#bytes) | | example "1.000000003022265980" - 10% inflation | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/kavadist/v1beta1/genesis.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/kavadist/v1beta1/genesis.proto <a name="kava.kavadist.v1beta1.GenesisState"></a> ### GenesisState GenesisState defines the kavadist module's genesis state. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.kavadist.v1beta1.Params) | | | | `previous_block_time` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/kavadist/v1beta1/proposal.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/kavadist/v1beta1/proposal.proto <a name="kava.kavadist.v1beta1.CommunityPoolMultiSpendProposal"></a> ### CommunityPoolMultiSpendProposal CommunityPoolMultiSpendProposal spends from the community pool by sending to one or more addresses | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `title` | [string](#string) | | | | `description` | [string](#string) | | | | `recipient_list` | [MultiSpendRecipient](#kava.kavadist.v1beta1.MultiSpendRecipient) | repeated | | <a name="kava.kavadist.v1beta1.CommunityPoolMultiSpendProposalJSON"></a> ### CommunityPoolMultiSpendProposalJSON CommunityPoolMultiSpendProposalJSON defines a CommunityPoolMultiSpendProposal with a deposit | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `title` | [string](#string) | | | | `description` | [string](#string) | | | | `recipient_list` | [MultiSpendRecipient](#kava.kavadist.v1beta1.MultiSpendRecipient) | repeated | | | `deposit` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated | | <a name="kava.kavadist.v1beta1.MultiSpendRecipient"></a> ### MultiSpendRecipient MultiSpendRecipient defines a recipient and the amount of coins they are receiving | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `address` | [string](#string) | | | | `amount` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/kavadist/v1beta1/query.proto"></a> <p align="right"><a 
href="#top">Top</a></p> ## kava/kavadist/v1beta1/query.proto <a name="kava.kavadist.v1beta1.QueryBalanceRequest"></a> ### QueryBalanceRequest QueryBalanceRequest defines the request type for querying x/kavadist balance. <a name="kava.kavadist.v1beta1.QueryBalanceResponse"></a> ### QueryBalanceResponse QueryBalanceResponse defines the response type for querying x/kavadist balance. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `coins` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated | | <a name="kava.kavadist.v1beta1.QueryParamsRequest"></a> ### QueryParamsRequest QueryParamsRequest defines the request type for querying x/kavadist parameters. <a name="kava.kavadist.v1beta1.QueryParamsResponse"></a> ### QueryParamsResponse QueryParamsResponse defines the response type for querying x/kavadist parameters. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.kavadist.v1beta1.Params) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.kavadist.v1beta1.Query"></a> ### Query Query defines the gRPC querier service. | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `Params` | [QueryParamsRequest](#kava.kavadist.v1beta1.QueryParamsRequest) | [QueryParamsResponse](#kava.kavadist.v1beta1.QueryParamsResponse) | Params queries the parameters of x/kavadist module. | GET|/kava/kavadist/v1beta1/parameters| | `Balance` | [QueryBalanceRequest](#kava.kavadist.v1beta1.QueryBalanceRequest) | [QueryBalanceResponse](#kava.kavadist.v1beta1.QueryBalanceResponse) | Balance queries the balance of all coins of x/kavadist module. 
| GET|/kava/kavadist/v1beta1/balance| <!-- end services --> <a name="kava/pricefeed/v1beta1/store.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/pricefeed/v1beta1/store.proto <a name="kava.pricefeed.v1beta1.CurrentPrice"></a> ### CurrentPrice CurrentPrice defines a current price for a particular market in the pricefeed module. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | | `price` | [string](#string) | | | <a name="kava.pricefeed.v1beta1.Market"></a> ### Market Market defines an asset in the pricefeed. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | | `base_asset` | [string](#string) | | | | `quote_asset` | [string](#string) | | | | `oracles` | [bytes](#bytes) | repeated | | | `active` | [bool](#bool) | | | <a name="kava.pricefeed.v1beta1.Params"></a> ### Params Params defines the parameters for the pricefeed module. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `markets` | [Market](#kava.pricefeed.v1beta1.Market) | repeated | | <a name="kava.pricefeed.v1beta1.PostedPrice"></a> ### PostedPrice PostedPrice defines a price for market posted by a specific oracle. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | | `oracle_address` | [bytes](#bytes) | | | | `price` | [string](#string) | | | | `expiry` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/pricefeed/v1beta1/genesis.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/pricefeed/v1beta1/genesis.proto <a name="kava.pricefeed.v1beta1.GenesisState"></a> ### GenesisState GenesisState defines the pricefeed module's genesis state. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.pricefeed.v1beta1.Params) | | params defines all the parameters of the module. | | `posted_prices` | [PostedPrice](#kava.pricefeed.v1beta1.PostedPrice) | repeated | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/pricefeed/v1beta1/query.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/pricefeed/v1beta1/query.proto <a name="kava.pricefeed.v1beta1.CurrentPriceResponse"></a> ### CurrentPriceResponse CurrentPriceResponse defines a current price for a particular market in the pricefeed module. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | | `price` | [string](#string) | | | <a name="kava.pricefeed.v1beta1.MarketResponse"></a> ### MarketResponse MarketResponse defines an asset in the pricefeed. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | | `base_asset` | [string](#string) | | | | `quote_asset` | [string](#string) | | | | `oracles` | [string](#string) | repeated | | | `active` | [bool](#bool) | | | <a name="kava.pricefeed.v1beta1.PostedPriceResponse"></a> ### PostedPriceResponse PostedPriceResponse defines a price for a market posted by a specific oracle. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | | `oracle_address` | [string](#string) | | | | `price` | [string](#string) | | | | `expiry` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | | <a name="kava.pricefeed.v1beta1.QueryMarketsRequest"></a> ### QueryMarketsRequest QueryMarketsRequest is the request type for the Query/Markets RPC method. <a name="kava.pricefeed.v1beta1.QueryMarketsResponse"></a> ### QueryMarketsResponse QueryMarketsResponse is the response type for the Query/Markets RPC method. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `markets` | [MarketResponse](#kava.pricefeed.v1beta1.MarketResponse) | repeated | List of markets | <a name="kava.pricefeed.v1beta1.QueryOraclesRequest"></a> ### QueryOraclesRequest QueryOraclesRequest is the request type for the Query/Oracles RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | <a name="kava.pricefeed.v1beta1.QueryOraclesResponse"></a> ### QueryOraclesResponse QueryOraclesResponse is the response type for the Query/Oracles RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `oracles` | [string](#string) | repeated | List of oracle addresses | <a name="kava.pricefeed.v1beta1.QueryParamsRequest"></a> ### QueryParamsRequest QueryParamsRequest defines the request type for querying x/pricefeed parameters. <a name="kava.pricefeed.v1beta1.QueryParamsResponse"></a> ### QueryParamsResponse QueryParamsResponse defines the response type for querying x/pricefeed parameters. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.pricefeed.v1beta1.Params) | | | <a name="kava.pricefeed.v1beta1.QueryPriceRequest"></a> ### QueryPriceRequest QueryPriceRequest is the request type for the Query/Price RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | <a name="kava.pricefeed.v1beta1.QueryPriceResponse"></a> ### QueryPriceResponse QueryPriceResponse is the response type for the Query/Price RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `price` | [CurrentPriceResponse](#kava.pricefeed.v1beta1.CurrentPriceResponse) | | | <a name="kava.pricefeed.v1beta1.QueryPricesRequest"></a> ### QueryPricesRequest QueryPricesRequest is the request type for the Query/Prices RPC method. 
<a name="kava.pricefeed.v1beta1.QueryPricesResponse"></a> ### QueryPricesResponse QueryPricesResponse is the response type for the Query/Prices RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `prices` | [CurrentPriceResponse](#kava.pricefeed.v1beta1.CurrentPriceResponse) | repeated | | <a name="kava.pricefeed.v1beta1.QueryRawPricesRequest"></a> ### QueryRawPricesRequest QueryRawPricesRequest is the request type for the Query/RawPrices RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `market_id` | [string](#string) | | | <a name="kava.pricefeed.v1beta1.QueryRawPricesResponse"></a> ### QueryRawPricesResponse QueryRawPricesResponse is the response type for the Query/RawPrices RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `raw_prices` | [PostedPriceResponse](#kava.pricefeed.v1beta1.PostedPriceResponse) | repeated | | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.pricefeed.v1beta1.Query"></a> ### Query Query defines the gRPC querier service for pricefeed module | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `Params` | [QueryParamsRequest](#kava.pricefeed.v1beta1.QueryParamsRequest) | [QueryParamsResponse](#kava.pricefeed.v1beta1.QueryParamsResponse) | Params queries all parameters of the pricefeed module. 
| GET|/kava/pricefeed/v1beta1/params| | `Price` | [QueryPriceRequest](#kava.pricefeed.v1beta1.QueryPriceRequest) | [QueryPriceResponse](#kava.pricefeed.v1beta1.QueryPriceResponse) | Price queries price details based on a market | GET|/kava/pricefeed/v1beta1/prices/{market_id}| | `Prices` | [QueryPricesRequest](#kava.pricefeed.v1beta1.QueryPricesRequest) | [QueryPricesResponse](#kava.pricefeed.v1beta1.QueryPricesResponse) | Prices queries all prices | GET|/kava/pricefeed/v1beta1/prices| | `RawPrices` | [QueryRawPricesRequest](#kava.pricefeed.v1beta1.QueryRawPricesRequest) | [QueryRawPricesResponse](#kava.pricefeed.v1beta1.QueryRawPricesResponse) | RawPrices queries all raw prices based on a market | GET|/kava/pricefeed/v1beta1/rawprices/{market_id}| | `Oracles` | [QueryOraclesRequest](#kava.pricefeed.v1beta1.QueryOraclesRequest) | [QueryOraclesResponse](#kava.pricefeed.v1beta1.QueryOraclesResponse) | Oracles queries all oracles based on a market | GET|/kava/pricefeed/v1beta1/oracles/{market_id}| | `Markets` | [QueryMarketsRequest](#kava.pricefeed.v1beta1.QueryMarketsRequest) | [QueryMarketsResponse](#kava.pricefeed.v1beta1.QueryMarketsResponse) | Markets queries all markets | GET|/kava/pricefeed/v1beta1/markets| <!-- end services --> <a name="kava/pricefeed/v1beta1/tx.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/pricefeed/v1beta1/tx.proto <a name="kava.pricefeed.v1beta1.MsgPostPrice"></a> ### MsgPostPrice MsgPostPrice represents a method for creating a new post price | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `from` | [string](#string) | | address of client | | `market_id` | [string](#string) | | | | `price` | [string](#string) | | | | `expiry` | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | | | <a name="kava.pricefeed.v1beta1.MsgPostPriceResponse"></a> ### MsgPostPriceResponse MsgPostPriceResponse defines the Msg/PostPrice response type. 
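For example, a MsgPostPrice body can be assembled as proto-JSON, assuming the standard Cosmos SDK encoding (`@type`, plus RFC 3339 strings for `google.protobuf.Timestamp` fields such as `expiry`). The oracle address and market id below are hypothetical placeholders:

```python
import json
from datetime import datetime, timedelta, timezone

# Sketch: a MsgPostPrice as proto-JSON. The "from" address and market_id
# are hypothetical; price is a fixed-point decimal string as in the
# field table above.
expiry = datetime.now(timezone.utc) + timedelta(hours=1)
msg = {
    "@type": "/kava.pricefeed.v1beta1.MsgPostPrice",
    "from": "kava1oracle...",             # hypothetical oracle address
    "market_id": "bnb:usd",               # hypothetical market
    "price": "25.000000000000000000",
    "expiry": expiry.strftime("%Y-%m-%dT%H:%M:%SZ"),  # RFC 3339 timestamp
}
encoded = json.dumps(msg)
```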
<!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.pricefeed.v1beta1.Msg"></a> ### Msg Msg defines the pricefeed Msg service. | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `PostPrice` | [MsgPostPrice](#kava.pricefeed.v1beta1.MsgPostPrice) | [MsgPostPriceResponse](#kava.pricefeed.v1beta1.MsgPostPriceResponse) | PostPrice defines a method for creating a new post price | | <!-- end services --> <a name="kava/swap/v1beta1/swap.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/swap/v1beta1/swap.proto <a name="kava.swap.v1beta1.AllowedPool"></a> ### AllowedPool AllowedPool defines a pool that is allowed to be created | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `token_a` | [string](#string) | | token_a represents the a token allowed | | `token_b` | [string](#string) | | token_b represents the b token allowed | <a name="kava.swap.v1beta1.Params"></a> ### Params Params defines the parameters for the swap module. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `allowed_pools` | [AllowedPool](#kava.swap.v1beta1.AllowedPool) | repeated | allowed_pools defines the pools that are allowed to be created | | `swap_fee` | [string](#string) | | swap_fee defines the swap fee for all pools | <a name="kava.swap.v1beta1.PoolRecord"></a> ### PoolRecord PoolRecord represents the state of a liquidity pool and is used to store the state of a denominated pool | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `pool_id` | [string](#string) | | pool_id represents the unique id of the pool | | `reserves_a` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | reserves_a is the a token coin reserves | | `reserves_b` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | reserves_b is the b token coin reserves | | `total_shares` | [string](#string) | | total_shares is the total distributed shares of the pool | <a name="kava.swap.v1beta1.ShareRecord"></a> ### ShareRecord ShareRecord stores the shares owned for a depositor and pool | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `depositor` | [bytes](#bytes) | | depositor represents the owner of the shares | | `pool_id` | [string](#string) | | pool_id represents the pool the shares belong to | | `shares_owned` | [string](#string) | | shares_owned represents the number of shares owned by the depositor for the pool_id | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/swap/v1beta1/genesis.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/swap/v1beta1/genesis.proto <a name="kava.swap.v1beta1.GenesisState"></a> ### GenesisState GenesisState defines the swap module's genesis state. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.swap.v1beta1.Params) | | params defines all the parameters related to swap | | `pool_records` | [PoolRecord](#kava.swap.v1beta1.PoolRecord) | repeated | pool_records defines the available pools | | `share_records` | [ShareRecord](#kava.swap.v1beta1.ShareRecord) | repeated | share_records defines the owned shares of each pool | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <!-- end services --> <a name="kava/swap/v1beta1/query.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/swap/v1beta1/query.proto <a name="kava.swap.v1beta1.DepositResponse"></a> ### DepositResponse DepositResponse defines a single deposit query response type. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `depositor` | [string](#string) | | depositor represents the owner of the deposit | | `pool_id` | [string](#string) | | pool_id represents the pool the deposit is for | | `shares_owned` | [string](#string) | | shares_owned presents the shares owned by the depositor for the pool | | `shares_value` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated | shares_value represents the coin value of the shares_owned | <a name="kava.swap.v1beta1.PoolResponse"></a> ### PoolResponse Pool represents the state of a single pool | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `name` | [string](#string) | | name represents the name of the pool | | `coins` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | repeated | coins represents the total reserves of the pool | | `total_shares` | [string](#string) | | total_shares represents the total shares of the pool | <a name="kava.swap.v1beta1.QueryDepositsRequest"></a> ### QueryDepositsRequest QueryDepositsRequest is the request type for the Query/Deposits RPC method. 
| Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `owner` | [string](#string) | | owner optionally filters deposits by owner | | `pool_id` | [string](#string) | | pool_id optionally filters deposits by pool id | | `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) | | pagination defines an optional pagination for the request. | <a name="kava.swap.v1beta1.QueryDepositsResponse"></a> ### QueryDepositsResponse QueryDepositsResponse is the response type for the Query/Deposits RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `deposits` | [DepositResponse](#kava.swap.v1beta1.DepositResponse) | repeated | deposits returns the deposits matching the requested parameters | | `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) | | pagination defines the pagination in the response. | <a name="kava.swap.v1beta1.QueryParamsRequest"></a> ### QueryParamsRequest QueryParamsRequest defines the request type for querying x/swap parameters. <a name="kava.swap.v1beta1.QueryParamsResponse"></a> ### QueryParamsResponse QueryParamsResponse defines the response type for querying x/swap parameters. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `params` | [Params](#kava.swap.v1beta1.Params) | | params represents the swap module parameters | <a name="kava.swap.v1beta1.QueryPoolsRequest"></a> ### QueryPoolsRequest QueryPoolsRequest is the request type for the Query/Pools RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `pool_id` | [string](#string) | | pool_id filters pools by id | | `pagination` | [cosmos.base.query.v1beta1.PageRequest](#cosmos.base.query.v1beta1.PageRequest) | | pagination defines an optional pagination for the request. 
| <a name="kava.swap.v1beta1.QueryPoolsResponse"></a> ### QueryPoolsResponse QueryPoolsResponse is the response type for the Query/Pools RPC method. | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `pools` | [PoolResponse](#kava.swap.v1beta1.PoolResponse) | repeated | pools represents returned pools | | `pagination` | [cosmos.base.query.v1beta1.PageResponse](#cosmos.base.query.v1beta1.PageResponse) | | pagination defines the pagination in the response. | <!-- end messages --> <!-- end enums --> <!-- end HasExtensions --> <a name="kava.swap.v1beta1.Query"></a> ### Query Query defines the gRPC querier service for swap module | Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint | | ----------- | ------------ | ------------- | ------------| ------- | -------- | | `Params` | [QueryParamsRequest](#kava.swap.v1beta1.QueryParamsRequest) | [QueryParamsResponse](#kava.swap.v1beta1.QueryParamsResponse) | Params queries all parameters of the swap module. 
| GET|/kava/swap/v1beta1/params| | `Pools` | [QueryPoolsRequest](#kava.swap.v1beta1.QueryPoolsRequest) | [QueryPoolsResponse](#kava.swap.v1beta1.QueryPoolsResponse) | Pools queries pools based on pool ID | GET|/kava/swap/v1beta1/pools| | `Deposits` | [QueryDepositsRequest](#kava.swap.v1beta1.QueryDepositsRequest) | [QueryDepositsResponse](#kava.swap.v1beta1.QueryDepositsResponse) | Deposits queries deposit details based on owner address and pool | GET|/kava/swap/v1beta1/deposits| <!-- end services --> <a name="kava/swap/v1beta1/tx.proto"></a> <p align="right"><a href="#top">Top</a></p> ## kava/swap/v1beta1/tx.proto <a name="kava.swap.v1beta1.MsgDeposit"></a> ### MsgDeposit MsgDeposit represents a message for depositing liquidity into a pool | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `depositor` | [string](#string) | | depositor represents the address to deposit funds from | | `token_a` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | token_a represents one token of deposit pair | | `token_b` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | token_b represents one token of deposit pair | | `slippage` | [string](#string) | | slippage represents the max decimal percentage price change | | `deadline` | [int64](#int64) | | deadline represents the unix timestamp to complete the deposit by | <a name="kava.swap.v1beta1.MsgDepositResponse"></a> ### MsgDepositResponse MsgDepositResponse defines the Msg/Deposit response type. 
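A sketch of a MsgDeposit body under the same proto-JSON assumptions: `deadline` is an int64 and therefore encodes as a string, and coin amounts are integer base units. The address, denoms, and amounts below are hypothetical:

```python
import json
import time

# Sketch: a MsgDeposit as proto-JSON. Depositor address, denoms, and
# amounts are hypothetical placeholders; slippage is a decimal string
# as in the field table above.
msg = {
    "@type": "/kava.swap.v1beta1.MsgDeposit",
    "depositor": "kava1depositor...",                 # hypothetical address
    "token_a": {"denom": "ukava", "amount": "1000000"},
    "token_b": {"denom": "usdx", "amount": "5000000"},
    "slippage": "0.010000000000000000",               # max 1% price change
    "deadline": str(int(time.time()) + 600),          # complete within 10 min
}
encoded = json.dumps(msg)
```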
<a name="kava.swap.v1beta1.MsgSwapExactForTokens"></a> ### MsgSwapExactForTokens MsgSwapExactForTokens represents a message for trading exact coinA for coinB | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `requester` | [string](#string) | | represents the address swapping the tokens | | `exact_token_a` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | exact_token_a represents the exact amount to swap for token_b | | `token_b` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | token_b represents the desired token_b to swap for | | `slippage` | [string](#string) | | slippage represents the maximum change in token_b allowed | | `deadline` | [int64](#int64) | | deadline represents the unix timestamp to complete the swap by | <a name="kava.swap.v1beta1.MsgSwapExactForTokensResponse"></a> ### MsgSwapExactForTokensResponse MsgSwapExactForTokensResponse defines the Msg/SwapExactForTokens response type. <a name="kava.swap.v1beta1.MsgSwapForExactTokens"></a> ### MsgSwapForExactTokens MsgSwapForExactTokens represents a message for trading coinA for an exact coinB | Field | Type | Label | Description | | ----- | ---- | ----- | ----------- | | `requester` | [string](#string) | | represents the address swapping the tokens | | `token_a` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | token_a represents the desired token_a to swap for | | `exact_token_b` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) | | exact_token_b represents the exact token b amount to swap for token a | | `slippage` | [string](#string) | | slippage represents the maximum change in token_a allowed | | `deadline` | [int64](#int64) | | deadline represents the unix timestamp to complete the swap by | <a name="kava.swap.v1beta1.MsgSwapForExactTokensResponse"></a> ### MsgSwapForExactTokensResponse MsgSwapForExactTokensResponse defines the Msg/SwapForExactTokensResponse response type. 
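The slippage fields above can be made concrete with a small model. The sketch below assumes a constant-product (x*y = k) pool with a flat fee; it only illustrates how a slippage bound might be checked and is not Kava's exact implementation. The reserve sizes and fee are hypothetical:

```python
from decimal import Decimal

# Illustration of the slippage check implied by MsgSwapExactForTokens,
# assuming a constant-product pool model (an assumption, not Kava's
# documented pricing formula).
def quote_exact_for_tokens(reserves_a, reserves_b, exact_a, fee):
    """Output of token_b for an exact token_a input, after the swap fee."""
    a_in = exact_a * (Decimal(1) - fee)
    return reserves_b * a_in / (reserves_a + a_in)

def within_slippage(expected_b, actual_b, slippage):
    """actual_b may fall below expected_b by at most `slippage` (decimal %)."""
    return actual_b >= expected_b * (Decimal(1) - slippage)

# Hypothetical pool: 1,000,000 token_a vs 5,000,000 token_b, 0.15% fee.
expected = quote_exact_for_tokens(Decimal(1_000_000), Decimal(5_000_000),
                                  Decimal(10_000), Decimal("0.0015"))
# The quote recomputed at execution time must stay within the bound.
assert within_slippage(expected, expected, Decimal("0.01"))
```

MsgSwapForExactTokens works the same way in the opposite direction: the bound applies to how much extra token_a the requester is willing to pay for the exact token_b output.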
<a name="kava.swap.v1beta1.MsgWithdraw"></a>

### MsgWithdraw
MsgWithdraw represents a message for withdrawing liquidity from a pool


| Field | Type | Label | Description |
| ----- | ---- | ----- | ----------- |
| `from` | [string](#string) |  | from represents the address we are withdrawing for |
| `shares` | [string](#string) |  | shares represents the amount of shares to withdraw |
| `min_token_a` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  | min_token_a represents the minimum a token to withdraw |
| `min_token_b` | [cosmos.base.v1beta1.Coin](#cosmos.base.v1beta1.Coin) |  | min_token_b represents the minimum b token to withdraw |
| `deadline` | [int64](#int64) |  | deadline represents the unix timestamp to complete the withdraw by |



<a name="kava.swap.v1beta1.MsgWithdrawResponse"></a>

### MsgWithdrawResponse
MsgWithdrawResponse defines the Msg/Withdraw response type.

<!-- end messages -->

<!-- end enums -->

<!-- end HasExtensions -->


<a name="kava.swap.v1beta1.Msg"></a>

### Msg
Msg defines the swap Msg service.
| Method Name | Request Type | Response Type | Description | HTTP Verb | Endpoint |
| ----------- | ------------ | ------------- | ------------| ------- | -------- |
| `Deposit` | [MsgDeposit](#kava.swap.v1beta1.MsgDeposit) | [MsgDepositResponse](#kava.swap.v1beta1.MsgDepositResponse) | Deposit defines a method for depositing liquidity into a pool | | |
| `Withdraw` | [MsgWithdraw](#kava.swap.v1beta1.MsgWithdraw) | [MsgWithdrawResponse](#kava.swap.v1beta1.MsgWithdrawResponse) | Withdraw defines a method for withdrawing liquidity from a pool | | |
| `SwapExactForTokens` | [MsgSwapExactForTokens](#kava.swap.v1beta1.MsgSwapExactForTokens) | [MsgSwapExactForTokensResponse](#kava.swap.v1beta1.MsgSwapExactForTokensResponse) | SwapExactForTokens represents a message for trading exact coinA for coinB | | |
| `SwapForExactTokens` | [MsgSwapForExactTokens](#kava.swap.v1beta1.MsgSwapForExactTokens) | [MsgSwapForExactTokensResponse](#kava.swap.v1beta1.MsgSwapForExactTokensResponse) | SwapForExactTokens represents a message for trading coinA for an exact coinB | | |

<!-- end services -->



## Scalar Value Types

| .proto Type | Notes | C++ | Java | Python | Go | C# | PHP | Ruby |
| ----------- | ----- | --- | ---- | ------ | -- | -- | --- | ---- |
| <a name="double" /> double |  | double | double | float | float64 | double | float | Float |
| <a name="float" /> float |  | float | float | float | float32 | float | float | Float |
| <a name="int32" /> int32 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint32 instead. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| <a name="int64" /> int64 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint64 instead. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| <a name="uint32" /> uint32 | Uses variable-length encoding. | uint32 | int | int/long | uint32 | uint | integer | Bignum or Fixnum (as required) |
| <a name="uint64" /> uint64 | Uses variable-length encoding. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum or Fixnum (as required) |
| <a name="sint32" /> sint32 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| <a name="sint64" /> sint64 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| <a name="fixed32" /> fixed32 | Always four bytes. More efficient than uint32 if values are often greater than 2^28. | uint32 | int | int | uint32 | uint | integer | Bignum or Fixnum (as required) |
| <a name="fixed64" /> fixed64 | Always eight bytes. More efficient than uint64 if values are often greater than 2^56. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum |
| <a name="sfixed32" /> sfixed32 | Always four bytes. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
| <a name="sfixed64" /> sfixed64 | Always eight bytes. | int64 | long | int/long | int64 | long | integer/string | Bignum |
| <a name="bool" /> bool |  | bool | boolean | boolean | bool | bool | boolean | TrueClass/FalseClass |
| <a name="string" /> string | A string must always contain UTF-8 encoded or 7-bit ASCII text. | string | String | str/unicode | string | string | string | String (UTF-8) |
| <a name="bytes" /> bytes | May contain any arbitrary sequence of bytes. | string | ByteString | str | []byte | ByteString | string | String (ASCII-8BIT) |
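The scalar table's remark that `sint32`/`sint64` encode negative numbers more efficiently comes from ZigZag mapping, which interleaves negative and positive values so that small magnitudes get short varints. A minimal sketch of the 32-bit mapping (illustrative only, not code from this repository):

```js
// ZigZag maps signed 32-bit integers onto unsigned ones so that values
// with small magnitude (positive or negative) produce short varints.
function zigzagEncode32(n) {
  // (n << 1) XOR (n >> 31); the arithmetic shift replicates the sign bit.
  // ">>> 0" reinterprets the 32-bit result as unsigned.
  return ((n << 1) ^ (n >> 31)) >>> 0;
}

function zigzagDecode32(z) {
  // Undo the interleaving: drop the low bit, negate if it was set.
  return (z >>> 1) ^ -(z & 1);
}
```

Here `-1` maps to `1`, `1` to `2`, `-2` to `3`, and so on, so the varint length tracks magnitude rather than sign.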
# Ranking Details Schema

```
https://ns.adobe.com/experience/decisioning/ranking-details
```

A ranking produces the order in which one option is selected over another. A fixed absolute priority can be used in case there is no other function known that maps a decision option to an ordinal value.

| [Abstract](../../../../abstract.md) | [Extensible](../../../../extensions.md) | [Status](../../../../status.md) | [Identifiable](../../../../id.md) | [Custom Properties](../../../../extensions.md) | [Additional Properties](../../../../extensions.md) | Defined In |
|-------------------------------------|-----------------------------------------|---------------------------------|-----------------------------------|------------------------------------------------|----------------------------------------------------|------------|
| Can be instantiated | Yes | Stable | No | Forbidden | Permitted | [adobe/experience/decisioning/ranking-details.schema.json](adobe/experience/decisioning/ranking-details.schema.json) |

## Ranking Details Example
```json
{
    "https://ns.adobe.com/experience/decisioning/priority": 3,
    "https://ns.adobe.com/experience/decisioning/order": {
        "https://ns.adobe.com/experience/decisioning/function": "xcore:ranking-function:b437a2403cf10e9"
    }
}
```

# Ranking Details Properties

| Property | Type | Required | Default | Defined by |
|----------|------|----------|---------|------------|
| [xdm:order](#xdmorder) | complex | Optional |  | Ranking Details (this schema) |
| [xdm:priority](#xdmpriority) | `integer` | Optional | `0` | Ranking Details (this schema) |
| `*` | any | Additional |  | this schema *allows* additional properties |

## xdm:order
### Order Evaluation
Evaluation of a relative order of one or more decision options. Options with higher ordinal values are selected over any options with lower ordinal values. The values determined by this method can be ordered, but distances between them cannot be measured, and neither sums nor products can be calculated. The median and the mode are the only measures of central tendency that can be used for ordinal data.

`xdm:order`
* is optional
* type: complex
* defined in this schema

### xdm:order Type

Unknown type ``.

```json
{
  "properties": {
    "xdm:function": {
      "type": "string",
      "format": "uri-reference",
      "title": "Scoring Function",
      "description": "A reference to a function that computes a numerical score for this decision option. Decision options will then be ordered (ranked) by that score. The value of this property is the URI (@id) of the function to be invoked with one option at a time. See schema https://ns.adobe.com/experience/decisioning/function"
    },
    "xdm:rankingStrategy": {
      "type": "string",
      "format": "uri-reference",
      "title": "Ranking Strategy",
      "description": "A reference to a strategy that ranks a list of decision options. Decision options will be returned in an ordered list. The value of this property is the URI (@id) of the function to be invoked with one option at a time. See schema https://ns.adobe.com/experience/decisioning/rankingStrategy"
    }
  },
  "title": "Order Evaluation",
  "description": "Evaluation of a relative order of one or more decision options. Options with higher ordinal values are selected over any options with lower ordinal values. The values determined by this method can be ordered but distances between them cannot be measured and neither can sums nor products be calculated. The median and the mode are the only measures of central tendency that can be used for ordinal data.",
  "simpletype": "complex"
}
```

## xdm:priority
### Priority
The priority of a single decision option relative to all other options. Options for which no order function is given are prioritized using this property. Options with higher priority values are selected before any lower priority options. If two or more qualifying options share the highest priority value, one is chosen at uniform random and used for the decision proposition.

`xdm:priority`
* is optional
* type: `integer`
* default: `0`
* defined in this schema

### xdm:priority Type

`integer`
* minimum value: `0`

# Ranking Details Definitions

| Property | Type | Group |
|----------|------|-------|
| [xdm:function](#xdmfunction) | `string` | `https://ns.adobe.com/experience/decisioning/ranking-details#/definitions/order-evaluation` |
| [xdm:rankingStrategy](#xdmrankingstrategy) | `string` | `https://ns.adobe.com/experience/decisioning/ranking-details#/definitions/order-evaluation` |

## xdm:function
### Scoring Function
A reference to a function that computes a numerical score for this decision option. Decision options will then be ordered (ranked) by that score. The value of this property is the URI (@id) of the function to be invoked with one option at a time. See schema https://ns.adobe.com/experience/decisioning/function

`xdm:function`
* is optional
* type: `string`
* defined in this schema

### xdm:function Type

`string`
* format: `uri-reference` – URI Reference (according to [RFC3986](https://tools.ietf.org/html/rfc3986))

## xdm:rankingStrategy
### Ranking Strategy
A reference to a strategy that ranks a list of decision options. Decision options will be returned in an ordered list. The value of this property is the URI (@id) of the function to be invoked with one option at a time. See schema https://ns.adobe.com/experience/decisioning/rankingStrategy

`xdm:rankingStrategy`
* is optional
* type: `string`
* defined in this schema

### xdm:rankingStrategy Type

`string`
* format: `uri-reference` – URI Reference (according to [RFC3986](https://tools.ietf.org/html/rfc3986))
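As a rough illustration of the selection semantics described above (highest `xdm:priority` wins; ties are broken uniformly at random), a consumer might do something like the following. The option shape and the plain `priority` field name are assumptions made for the example, not part of the schema:

```js
// Pick one option: highest numeric priority wins; ties among the
// highest-priority options are broken uniformly at random, mirroring
// the xdm:priority description above.
function pickByPriority(options) {
  if (options.length === 0) return null;
  const top = Math.max(...options.map(o => o.priority ?? 0));
  const best = options.filter(o => (o.priority ?? 0) === top);
  return best[Math.floor(Math.random() * best.length)];
}
```

Scoring via `xdm:function` would replace the static `priority` lookup with a computed score per option.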
* Name:      PyHouse/Project/src/_Docs/Design.md
* Author:    D. Brian Kimmel
* Contact:   D.BrianKimmel@gmail.com
* Copyright: (c) 2018-2019 by D. Brian Kimmel
* Created:   2018-09-30
* Updated:   2019-09-24
* License:   MIT License
* Summary:   This is the design documentation for the src of PyHouse.

# src

This is the design documentation of the source code for the PyHouse Project.

See PyHouse/Project/src/Modules/_Docs/Design.md

# Design

At this level there is PyHouse.py and Modules/*

# Controllers

Each house will have controllers that control various devices throughout the house. Things like Insteon switches, window controls, door locks, and many others have controllers to actuate the devices.

Controllers may be directly attached to the computer, usually via a USB plug, or stand-alone, such as a hub or bridge. Those directly attached must be operated by the computer to which they are attached. Those on the home network may be operated by any computer.

Some care must be taken: some controllers gain increased reliability if a command is duplicated, while other controllers will mis-operate if multiple commands are issued. Setting a light level to 50% will work even if multiple commands are sent; commands like toggle or brighten by 10% will be wrong if sent more than once.

## PyHouse.py

It follows the singleton pattern so that it is not possible to have two running PyHouse programs competing for resources. It is also a daemon. It calls on Modules/Core to start everything running.

## Core

Here is where the nitty-gritty begins. The logging process is started. The logs are located at /var/log/pyhouse. The core component loads some always-required pieces into memory and then begins the Initialize phase.

### Initialize

This is the first phase of startup. It checks the configuration set up on this computer and then loads the modules called for in that configuration. Some modules require sub-modules to be loaded.

### Loading

This is the second phase of startup. During this phase, the config files are read and information is built up. All the required helpers are determined during this process. The abstractions for Family and Drivers are determined.

### Start

This is the third phase of startup. At the very beginning of this step, the twisted reactor is run. This begins the event-loop processing and PyHouse becomes an async, event-driven, home control process.

### END DBK
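The Controllers note above (absolute commands such as setting a level to 50% tolerate duplication, while relative commands such as toggle do not) can be made concrete with a tiny sketch. PyHouse itself is Python; this JavaScript fragment is illustrative only, and every name in it is invented:

```js
// Absolute (idempotent) commands are safe to repeat for reliability;
// relative commands must be sent exactly once or the device mis-operates.
const IDEMPOTENT = new Set(["setLevel", "setLocked"]);

// Decide how many copies of a command to send to a flaky controller.
function planSends(command, desiredCopies) {
  return IDEMPOTENT.has(command.name) ? desiredCopies : 1;
}
```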
Enterprise
==========

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [Get Enterprise Users](#get-enterprise-users)
- [Invite User to Enterprise](#invite-user-to-enterprise)
- [Add New User](#add-new-user)
- [Add New App User](#add-new-app-user)
- [Transfer User Content](#transfer-user-content)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

Get Enterprise Users
--------------------

Get a list of users in the current enterprise by calling the [`enterprise.getUsers(options, callback)`](http://opensource.box.com/box-node-sdk/jsdoc/Enterprise.html#getUsers) method. This method supports offset-based pagination and marker-based pagination. To use offset-based pagination, do not pass in the `usemarker` parameter or set it to `false`. To use marker-based pagination, pass in the `usemarker` parameter as `true`. Use the `fields` option to specify the desired response fields, and `limit` (along with `offset` or `marker`) to control result set paging. Requesting information for only the fields you need can improve performance by reducing the size of the network response.

<!-- sample get_users -->
```js
client.enterprise.getUsers({usemarker: true, marker: 'JFUirotE56hfyr56FH123'})
    .then(users => {
        /* users -> {
            total_count: 1,
            entries: [
                {
                    type: 'user',
                    id: '33333',
                    name: 'Example User',
                    login: 'user@example.com',
                    created_at: '2012-05-03T21:39:11-07:00',
                    modified_at: '2012-08-23T14:57:48-07:00',
                    language: 'en',
                    space_amount: 5368709120,
                    space_used: 52947,
                    max_upload_size: 104857600,
                    status: 'active',
                    job_title: '',
                    phone: '5555551374',
                    address: '10 Cloud Way Los Altos CA',
                    avatar_url: 'https://app.box.com/api/avatar/large/deprecated'
                }
            ]
        }
        */
    });
```

Invite User to Enterprise
-------------------------

Invite a user to an enterprise by calling the [`enterprise.inviteUser(enterpriseID, email, callback)`](http://opensource.box.com/box-node-sdk/jsdoc/Enterprise.html#inviteUser) method with the ID of the enterprise and the user's email address.

<!-- sample post_invites -->
```js
client.enterprise.inviteUser('1345', 'jsmith@box.com', callback);
```

Add New User
------------

To provision a new managed user within the current enterprise, call the [`enterprise.addUser(login, name, options, callback)`](http://opensource.box.com/box-node-sdk/jsdoc/Enterprise.html#addUser) method with the email address the user will use to log in and the user's name.

<!-- sample post_users -->
```js
client.enterprise.addUser(
    'eddard@winterfell.example.com',
    'Ned Stark',
    {
        role: client.enterprise.userRoles.COADMIN,
        address: '555 Box Lane',
        status: client.enterprise.userStatuses.CANNOT_DELETE_OR_EDIT
    })
    .then(user => {
        /* user -> {
            type: 'user',
            id: '44444',
            name: 'Ned Stark',
            login: 'eddard@winterfell.example.com',
            created_at: '2012-11-15T16:34:28-08:00',
            modified_at: '2012-11-15T16:34:29-08:00',
            role: 'coadmin',
            language: 'en',
            timezone: 'America/Los_Angeles',
            space_amount: 5368709120,
            space_used: 0,
            max_upload_size: 2147483648,
            status: 'active',
            job_title: '',
            phone: '',
            address: '555 Box Lane',
            avatar_url: 'https://www.box.com/api/avatar/large/deprecated'
        }
        */
    });
```

Add New App User
----------------

To provision a new app user within the current enterprise, call the [`enterprise.addAppUser(name, options, callback)`](http://opensource.box.com/box-node-sdk/jsdoc/Enterprise.html#addAppUser) method with the user's name.

```js
client.enterprise.addAppUser('Daenerys Targaryen', { external_app_user_id: 'external-id' })
    .then(appUser => {
        /* appUser -> {
            type: 'user',
            id: '55555',
            name: 'Daenerys Targaryen',
            login: 'AppUser_59659_vuNs7OCQ7y@box.com',
            created_at: '2015-04-20T20:09:59-07:00',
            modified_at: '2015-04-20T20:09:59-07:00',
            language: 'en',
            timezone: 'America/Los_Angeles',
            space_amount: 5368709120,
            space_used: 0,
            max_upload_size: 16106127360,
            status: 'active',
            job_title: '',
            phone: '',
            address: '',
            avatar_url: ''
        }
        */
    });
```

Transfer User Content
---------------------

To transfer one managed user's content to another user's account, call the [`enterprise.transferUserContent(sourceUserID, destUserID, callback)`](http://opensource.box.com/box-node-sdk/jsdoc/Enterprise.html#transferUserContent) method with the IDs of the source and destination users.

<!-- sample put_users_id_folders_0 -->
```js
var sourceUserID = '33333';
var destinationUserID = '44444';
client.enterprise.transferUserContent(sourceUserID, destinationUserID)
    .then(movedFolder => {
        /* movedFolder -> {
            type: 'folder',
            id: '123456789',
            sequence_id: '1',
            etag: '1',
            name: "Other User's Files and Folders",
            created_at: '2018-04-23T11:00:07-07:00',
            modified_at: '2018-04-23T11:00:07-07:00',
            description: 'This folder contains files previously owned by Other User, and were transferred to you by your enterprise administrator. If you have any questions, please contact Enterprise Admin (admin@example.com).',
            size: 0,
            path_collection: {
                total_count: 1,
                entries: [
                    {
                        type: 'folder',
                        id: '0',
                        sequence_id: null,
                        etag: null,
                        name: 'All Files'
                    }
                ]
            },
            created_by: {
                type: 'user',
                id: '99999',
                name: 'Enterprise Admin',
                login: 'admin@example.com'
            },
            modified_by: {
                type: 'user',
                id: '99999',
                name: 'Enterprise Admin',
                login: 'admin@example.com'
            },
            trashed_at: null,
            purged_at: null,
            content_created_at: '2018-04-23T11:00:07-07:00',
            content_modified_at: '2018-04-23T11:00:07-07:00',
            owned_by: {
                type: 'user',
                id: '33333',
                name: 'Example User',
                login: 'user@example.com'
            },
            shared_link: null,
            folder_upload_email: null,
            parent: {
                type: 'folder',
                id: '0',
                sequence_id: null,
                etag: null,
                name: 'All Files'
            },
            item_status: 'active'
        }
        */
    });
```
---
title: "Leadership Skills for Open Science"
layout: page
---

Because "better" never happens on its own.

## Background

Many people in open communities have technical knowledge, enthusiasm, and good intentions, but no experience engineering structural change in organizations. Pushing through changes to the curriculum, nurturing a user group that can sustain itself, and removing bias from hiring practices all require skills that most scientists and programmers have never learned. Fortunately, we do not have to invent these skills ourselves: many groups before us have made the kinds of changes we now seek and can teach us how to be more effective.

## Proposal

We propose a four-day workshop. In the first three days, participants rotate through six half-day training sessions; on the final day they work in small groups to plan their next steps. The topics listed below give the flavor of the workshop; the final list would be put together in consultation with community leaders and the participants themselves:

- Strategies for institutional change (e.g., Manns & Rising's *[Fearless Change][fearless-change]*) to give people a toolbox for acting on what they know.
- Community organization (e.g., Brown's *[Building Powerful Community Organizations][bpco]*), which lays out the steps needed to build an effective grassroots organization.
- Marketing (e.g., based on Kuchner's *[Marketing for Scientists][marketing-for-scientists]*) so that people learn how to match what they want with what decision makers think they need.
- Leadership skills (e.g., the [Raw Signal Group][raw-signal]'s training) so that they can get people pulling in the same direction.
- How to be a good ally (e.g., [Aurora's workshop on ally skills][ally-skills]) so that they can use their power and influence to support people who are targets of discrimination.
- [Personal digital security][security], because online harassment is unfortunately now a fact of life, and people in visible roles need to safeguard themselves against it.

Participants would be selected based on:

1. A previously-demonstrated commitment to inclusive open communities.
2. Career stage: we would give preference to people who are likely to be able to act on what they learn in the 1-2 years following the workshop.
3. Reach: we would give preference to people who live and work outside existing hotbeds of open activity.

## Benefits

Many people share a vision of a better kind of open: one that is inclusive *and* effective. The more skills they have for organizing and leading, the sooner that vision will be realized.

## Budget

1. Each instructor would teach for half a day (either morning or afternoon) and have the other half of the day off for three consecutive days.
2. Each class would have 20 participants at a time, so the entire workshop would have 60 participants.
3. The budget assumes $6,000 for 3 days of training plus $2,000 in expenses per instructor.
4. Most organizing will be done by volunteers, but the budget includes 20 days of paid support staff time as well.
5. Participants will be charged $500 and will be required to cover their own travel and accommodation costs as well as breakfast and dinner.
6. One third of participants will be offered partial financial support and will not be charged registration.

<table class="table table-striped">
  <tr>
    <td><strong>Item</strong></td>
    <td align="right"><strong>Each</strong></td>
    <td align="right"><strong>Number</strong></td>
    <td align="right"><strong>Total</strong></td>
  </tr>
  <tr>
    <td>Venue (per day)</td>
    <td align="right">$1,500</td>
    <td align="right">4</td>
    <td align="right">$6,000</td>
  </tr>
  <tr>
    <td>Instructors</td>
    <td align="right">$8,000</td>
    <td align="right">6</td>
    <td align="right">$48,000</td>
  </tr>
  <tr>
    <td>Lunch/snacks (per person per day)</td>
    <td align="right">$30</td>
    <td align="right">4 &times; 60</td>
    <td align="right">$7,200</td>
  </tr>
  <tr>
    <td>Support/admin staff (per day)</td>
    <td align="right">$500</td>
    <td align="right">20</td>
    <td align="right">$10,000</td>
  </tr>
  <tr>
    <td>Travel scholarships</td>
    <td align="right">$800</td>
    <td align="right">20</td>
    <td align="right">$16,000</td>
  </tr>
  <tr>
    <td><em>Registration fee (per person)</em></td>
    <td align="right"><em>- $500</em></td>
    <td align="right">40</td>
    <td align="right"><em>- $20,000</em></td>
  </tr>
  <tr>
    <td colspan="3"><strong>Total</strong></td>
    <td align="right"><strong>$67,200</strong></td>
  </tr>
</table>

I think a workshop like this is a logical and necessary follow-on to things like [the Carpentries' instructor training][carpentries-training] and the [AAAS community engagement program][aaas-program]. Almost without exception, we think and act as if we're always going to be outside the room where decisions are made, waving our placards or trying to get someone's attention long enough to explain that better is possible, that we've already built it, and could they please give it a try. If we truly want a better world, we need to be *inside* the room when the vote is called. Training like this is, I believe, a necessary step toward getting advocates of openness elected to school boards and city councils and professional societies. Creationists, gunaholics, fossil fuel addicts, and anti-choicers have been doing it for many years to great effect; do we really care so much less about our issues that we're not willing to do it too?

[aaas-program]: https://www.aaas.org/programs/community-engagement-fellows
[ally-skills]: https://frameshiftconsulting.com/ally-skills-workshop/
[bpco]: https://isbndb.com/book/0977151808
[carpentries-training]: https://carpentries.github.io/instructor-training/
[fearless-change]: https://fearlesschangepatterns.com/
[marketing-for-scientists]: https://islandpress.org/books/marketing-scientists
[raw-signal]: https://www.rawsignal.ca/
[security]: https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008563
---
title: "Processor"
chapter: true
weight: 4
pre: "<b>4. </b>"
---

# Processor

The processor hosts the business logic. Python is the language of choice for implementing that logic.

[Explore processor]({{< ref "processor/processor" >}})
# 0.1.0

There is no news. This is a test file to determine if the assessments are working.
## Modal

### Basic Usage

```jsx
import { Modal, Cell, Button, Select } from 'zarm';

class Demo extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      modal1: false,
      modal2: false,
      modal3: false,
      modal4: false,
      modal5: false,
      animationType: 'fade',
    };
  }

  open(key) {
    this.setState({
      [`${key}`]: true,
    });
  }

  close(key) {
    this.setState({
      [`${key}`]: false,
    });
  }

  render() {
    const { modal1, modal2, modal3, modal4, modal5, animationType } = this.state;

    return (
      <div>
        <Cell
          description={
            <Button size="xs" onClick={() => this.open('modal1')}>Open</Button>
          }
        >
          Basic
        </Cell>

        <Cell
          description={
            <Button size="xs" onClick={() => this.open('modal3')}>Open</Button>
          }
        >
          Rounded corners
        </Cell>

        <Cell
          description={
            <Button size="xs" onClick={() => this.open('modal2')}>Open</Button>
          }
        >
          Close on mask click
        </Cell>

        <Cell
          description={
            <Button size="xs" onClick={() => this.open('modal4')}>Open</Button>
          }
        >
          No header
        </Cell>

        <Cell
          title="Animation"
          description={
            <div>
              <Button size="xs" onClick={() => this.open('modal5')}>Open</Button>
            </div>
          }
        >
          <Select
            value={animationType}
            dataSource={[
              { value: 'fade', label: 'Fade (fade)' },
              { value: 'zoom', label: 'Zoom (zoom)' },
              { value: 'rotate', label: 'Rotate (rotate)' },
              { value: 'door', label: 'Door (door)' },
              { value: 'flip', label: 'Flip (flip)' },
              { value: 'moveUp', label: 'Move (moveUp)' },
              { value: 'moveDown', label: 'Move (moveDown)' },
              { value: 'moveLeft', label: 'Move (moveLeft)' },
              { value: 'moveRight', label: 'Move (moveRight)' },
              { value: 'slideUp', label: 'Slide (slideUp)' },
              { value: 'slideDown', label: 'Slide (slideDown)' },
              { value: 'slideLeft', label: 'Slide (slideLeft)' },
              { value: 'slideRight', label: 'Slide (slideRight)' },
            ]}
            onOk={(selected) => {
              this.setState({
                animationType: selected.map(item => item.value),
              });
            }}
          />
        </Cell>

        <Modal visible={modal1}>
          <Modal.Header title="Title" onClose={() => this.close('modal1')} />
          <Modal.Body>Modal content</Modal.Body>
        </Modal>

        <Modal visible={modal2} onMaskClick={() => this.close('modal2')}>
          <Modal.Header title="Title" />
          <Modal.Body>Click the mask to close</Modal.Body>
        </Modal>

        <Modal shape="radius" visible={modal3}>
          <Modal.Header title="Title" onClose={() => this.close('modal3')} />
          <Modal.Body>Modal content</Modal.Body>
        </Modal>

        <Modal visible={modal4} onMaskClick={() => this.close('modal4')}>
          <Modal.Body>No header</Modal.Body>
        </Modal>

        <Modal visible={modal5} animationType={animationType} onMaskClick={() => this.close('modal5')}>
          <Modal.Body>
            <div style={{ height: 100 }}>Current animationType: '{animationType}'</div>
          </Modal.Body>
        </Modal>
      </div>
    )
  }
}

ReactDOM.render(<Demo />, mountNode);
```

### Specific Scenarios

```jsx
import { Cell, Button, Alert, Confirm } from 'zarm';

class Demo extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      alert: false,
      confirm: false,
    };
  }

  open(key) {
    this.setState({
      [`${key}`]: true,
    });
  }

  close(key) {
    this.setState({
      [`${key}`]: false,
    });
  }

  render() {
    const { alert, confirm } = this.state;

    return (
      <div>
        <Cell
          description={
            <Button size="xs" theme="warning" onClick={() => this.open('alert')}>Open</Button>
          }
        >
          Alert
        </Cell>

        <Cell
          description={
            <Button size="xs" theme="warning" onClick={() => this.open('confirm')}>Open</Button>
          }
        >
          Confirm
        </Cell>

        <Alert
          shape="radius"
          visible={alert}
          title="Warning"
          message="This is a warning message"
          onCancel={() => this.close('alert')}
        />

        <Confirm
          shape="radius"
          visible={confirm}
          title="Confirmation"
          message="Are you sure you want to do this?"
          onOk={() => alert('click ok')}
          onCancel={() => this.close('confirm')}
        />
      </div>
    )
  }
}

ReactDOM.render(<Demo />, mountNode);
```

### API

| Property | Type | Default | Description |
| :--- | :--- | :--- | :--- |
| shape | string | 'rect' | shape; one of `rect`, `radius` |
| visible | boolean | false | whether the modal is visible |
| animationType | string | 'fade' | animation type; one of `fade`, `door`, `flip`, `rotate`, `zoom`, `moveUp`, `moveDown`, `moveLeft`, `moveRight`, `slideUp`, `slideDown`, `slideLeft`, `slideRight` |
| animationDuration | number | 200 | animation duration in milliseconds |
| width | string &#124; number | '70%' | width |
| onMaskClick | () => void | - | callback fired when the mask is clicked |
24.330275
183
0.468891
kor_Hang
0.133864
e146600c2ec5299b4874ecc637435daa52c3a96a
1,348
md
Markdown
contents/posts/2018-07/remove-object-key.md
iMasanari/imasanari.github.io
353a862fa215a088121b5744d8a6e4f885cbf55f
[ "MIT" ]
null
null
null
contents/posts/2018-07/remove-object-key.md
iMasanari/imasanari.github.io
353a862fa215a088121b5744d8a6e4f885cbf55f
[ "MIT" ]
9
2020-05-10T11:13:40.000Z
2022-03-12T03:06:37.000Z
contents/posts/2018-07/remove-object-key.md
iMasanari/imasanari.github.io
353a862fa215a088121b5744d8a6e4f885cbf55f
[ "MIT" ]
null
null
null
---
title: "[ES.next] Creating a new object with an arbitrary key removed"
description: An article I started writing for Qiita before this blog existed, abandoned halfway, and had forgotten about until recently. Qiita would have been fine, but since I have this blog, I'm putting it here.
slug: remove-object-key
tags: [JavaScript, Babel, TypeScript]
date: 2018-07-13T12:47:36.148Z
---

## Introduction

This is an article I started writing for Qiita before I made this blog, abandoned halfway through, and had forgotten about until recently. Qiita would have been fine too, but since I have this blog now, I'm putting it here.

## When the key to remove is known in advance

For example, to create a new object with the `foo` key removed, you can use rest properties and write:

```js
const removeFoo = (obj) => {
  const { foo, ...res } = obj
  return res
}

removeFoo({ foo: '', bar: 0 }) // { bar: 0 }
```

To make it clearer that `foo` is the value being removed, you might rename the binding, as in `const { foo: _removed, ...res } = obj`.

## When the key to remove is not known in advance

So how do you remove a key passed in as an argument?

```js
const removeKey = (obj, key) => {
  const res = { ...obj }
  delete res[key]
  return res
}
```

Writing it like this feels a bit wrong, doesn't it?

After some experimentation, the following turned out to remove an arbitrary key:

```js
const removeKey = (obj, key) => {
  const { [key]: _removed, ...res } = obj
  return res
}

removeKey({ foo: '', bar: 0 }, 'bar') // { foo: '' }
```

The point is the `{ [key]: _removed }` part. Writing just `{ [key], ...res }` is a syntax error, presumably because the name of the variable to extract `key` into would be unknown.

## Summary

There are plenty of situations, such as with Redux, where you handle objects immutably. When you want to remove a key from an immutable HashMap, give this a try!

I verified this approach with both Babel and TypeScript. Since both support it, it is presumably in the spec; I'd like to go look it up in ecma262 sometime.
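The same computed-key trick generalizes to removing several keys at once. This is a sketch that builds on the post's `removeKey` (the `removeKeys` helper is my own addition, not from the original article):

```js
// Remove a single key with computed-key rest destructuring (as in the post).
const removeKey = (obj, key) => {
  const { [key]: _removed, ...res } = obj
  return res
}

// Remove a list of keys by folding removeKey over them.
const removeKeys = (obj, keys) => keys.reduce(removeKey, obj)

console.log(removeKeys({ foo: '', bar: 0, baz: 1 }, ['foo', 'baz'])) // { bar: 0 }
```

Each `reduce` step produces a fresh object, so the input is never mutated.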
19.257143
96
0.72997
jpn_Jpan
0.639738
e146e9bcecbddb5d67c8e33162b492296355f286
125
md
Markdown
README.md
huizarmx/delta-edit-distance
b28a0dac170ee5035301b6a30bbe6de8c95b7d34
[ "MIT" ]
null
null
null
README.md
huizarmx/delta-edit-distance
b28a0dac170ee5035301b6a30bbe6de8c95b7d34
[ "MIT" ]
null
null
null
README.md
huizarmx/delta-edit-distance
b28a0dac170ee5035301b6a30bbe6de8c95b7d34
[ "MIT" ]
null
null
null
# delta-edit-distance

Generate commands that represent the delta from one string to another, using the edit-distance algorithm.
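The README names the edit-distance algorithm, i.e. the classic Levenshtein dynamic program. A minimal sketch of that core computation, independent of this repo's actual API (the function name here is an illustrative assumption):

```js
// Classic Levenshtein edit distance via dynamic programming.
// dp[i][j] = minimum number of insert/delete/substitute operations
// needed to turn a.slice(0, i) into b.slice(0, j).
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = a[i - 1] === b[j - 1]
        ? dp[i - 1][j - 1] // characters match: no new operation
        : 1 + Math.min(dp[i - 1][j - 1], dp[i - 1][j], dp[i][j - 1]);
    }
  }
  return dp[a.length][b.length];
}

console.log(editDistance('kitten', 'sitting')); // 3
```

A delta-command generator would additionally backtrack through `dp` to recover which operation produced each cell's minimum.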
41.666667
102
0.832
eng_Latn
0.999664
e14773070438340a2c82c6d1895d26bc7e86b490
1,929
md
Markdown
README.md
reubenbrown13/website
75d5d6bbf8f75d9837a3427c22060096604f5070
[ "MIT" ]
null
null
null
README.md
reubenbrown13/website
75d5d6bbf8f75d9837a3427c22060096604f5070
[ "MIT" ]
null
null
null
README.md
reubenbrown13/website
75d5d6bbf8f75d9837a3427c22060096604f5070
[ "MIT" ]
null
null
null
**Code is hosted on [gitlab](https://gitlab.com/reubenbrown13/contento_website) now.**

# Contento

> An open source CMS built with the power of Elixir, Phoenix and Postgresql.

![Contento Admin Screenshot](https://raw.githubusercontent.com/contentocms/contento/master/screenshot.png)

#### Disclaimer

This project is currently a WIP; documentation, guides and more info are on their way, so stay tuned by starring this repo! Many things may change before a stable version comes out. If you have any idea/suggestion/contribution, feel free to go ahead!

If you would like to join the discussion about this project, join the `#contento` channel on [Elixir Slack](https://elixir-slackin.herokuapp.com/).

## Getting Started

1. Install the Contento archive, if you haven't already done so:

   ```
   $ mix archive.install https://github.com/contentocms/contento_new/raw/master/releases/contento.new.ez
   ```

2. Create your new website with:

   ```
   $ mix contento.new [directory]
   ```

   This command will do a few things for you, including: cloning this repo to the given directory, fetching and installing dependencies, compiling back-office assets, generating configuration files with defaults, creating the database and running migrations, installing the default theme [Simplo](https://github.com/contentocms/simplo) and setting up Contento with defaults.

3. Your website is ready! Now you can access your website on `http://localhost:4000`, or `http://localhost:4000/login` to access the back-office. The default user credentials are:

   - Email: **contento@example.org**
   - Password: **contento**

**NOTE:** Check [ROADMAP.md](https://github.com/contentocms/contento/blob/master/ROADMAP.md) for current features and what's expected to come next.

## Contributing

Info for contributing to this project will be here soon; in the meanwhile, just submit your PRs!

## License

This project is licensed under the [MIT license](https://github.com/contentocms/contento/blob/master/LICENSE.md).
38.58
335
0.768274
eng_Latn
0.955239
e1478c63bff2615e874783240c247af1f4cf812c
299
md
Markdown
README.md
honeyfed/honeyui
b7a8b105ab359c73b5428b0b39ebf0b5c8967a7c
[ "MIT" ]
null
null
null
README.md
honeyfed/honeyui
b7a8b105ab359c73b5428b0b39ebf0b5c8967a7c
[ "MIT" ]
null
null
null
README.md
honeyfed/honeyui
b7a8b105ab359c73b5428b0b39ebf0b5c8967a7c
[ "MIT" ]
null
null
null
<p align="center">
  <h2>HoneyUI - customized development based on Element UI</h2>
</p>

> A Vue.js 2.0 UI Toolkit for Web.

## Install

```shell
npm install honey-ui -S
```

## Quick Start

```javascript
import Vue from 'vue'
import HoneyUI from 'honey-ui'

Vue.use(HoneyUI)
```

## Preview

[Front-end UI component preview](http://129.204.92.24/)
12.458333
38
0.638796
yue_Hant
0.569838
e14b09c9cbf5b94f0306e11bd99c188525e568e2
63
md
Markdown
README.md
nsiwnf/adventofcode2017
9a8becf387b2ab0af59e10af8d3761471f60f8de
[ "BSD-3-Clause" ]
null
null
null
README.md
nsiwnf/adventofcode2017
9a8becf387b2ab0af59e10af8d3761471f60f8de
[ "BSD-3-Clause" ]
null
null
null
README.md
nsiwnf/adventofcode2017
9a8becf387b2ab0af59e10af8d3761471f60f8de
[ "BSD-3-Clause" ]
null
null
null
# adventofcode2017

My solutions for Advent of Code challenges.
21
43
0.825397
eng_Latn
0.984668
e14b194cb2391c6f1302505942b4de73d05b7e0c
1,412
md
Markdown
README.md
davidanthoff/compiled-binder-postbuild
dde33cb40d1818b5b4c9520706fcd0b63ede95b4
[ "BSD-3-Clause" ]
null
null
null
README.md
davidanthoff/compiled-binder-postbuild
dde33cb40d1818b5b4c9520706fcd0b63ede95b4
[ "BSD-3-Clause" ]
null
null
null
README.md
davidanthoff/compiled-binder-postbuild
dde33cb40d1818b5b4c9520706fcd0b63ede95b4
[ "BSD-3-Clause" ]
null
null
null
## Background

This is an extension to the base [`compiled-binder-example`](https://github.com/arnavs/compiled-binder-example) repo.

The idea here is to use PackageCompiler on Binder without a custom Dockerfile (i.e., integrating it with `repo2docker`'s native Julia support).

## Usage

Click the myBinder badge below to spin up a new instance. That's it!

## Adaptation

> How do I generate the TOML files for my own use?

Inside a directory, open a Julia REPL and then hit `] activate`. This will initialize a new (empty) environment, and running further operations (`] add`, `] pin`, etc.) will create and maintain the files.

> Will packages in my `Project.toml` be precompiled?

Yes. As Simon mentions, this accounts for less than half the compilation cost of a package, but it will be paid for all packages regardless of what happens in `postBuild`.

> How do I change the list of files to bake in?

Edit the list of packages in the `init.jl` script. Packages named there must also be in your main `Project.toml`.

**Note:** PackageCompiler is still relatively fragile, and not all combinations of packages will succeed.

> How do I deploy my own Binder (with my own badge, etc.)?

See the FAQ in the [base repository](https://github.com/arnavs/compiled-binder-example) for more details on that.

## Contributions, etc.

You can open an issue or PR on this repo, or email me at `arnav.sood@ubc.ca`.
44.125
261
0.747167
eng_Latn
0.994988
e14b545e98574f28e3eb005ccb8bc80cd408c791
22,735
md
Markdown
articles/search/search-security-overview.md
fuadi-star/azure-docs.nl-nl
0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd
[ "CC-BY-4.0", "MIT" ]
16
2017-08-28T07:45:43.000Z
2021-04-20T21:12:50.000Z
articles/search/search-security-overview.md
fuadi-star/azure-docs.nl-nl
0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd
[ "CC-BY-4.0", "MIT" ]
575
2017-08-30T07:14:53.000Z
2022-03-04T05:36:23.000Z
articles/search/search-security-overview.md
fuadi-star/azure-docs.nl-nl
0c9bc5ec8a5704aa0c14dfa99346e8b7817dadcd
[ "CC-BY-4.0", "MIT" ]
58
2017-07-06T11:58:36.000Z
2021-11-04T12:34:58.000Z
---
title: Security overview
titleSuffix: Azure Cognitive Search
description: Learn about the security features in Azure Cognitive Search for securing endpoints, content, and operations.
manager: nitinme
author: HeidiSteen
ms.author: heidist
ms.service: cognitive-search
ms.topic: conceptual
ms.date: 02/04/2021
ms.custom: references_regions
ms.openlocfilehash: 46f2035e5f8409cd38faeb9c327b88b06fc7d7a0
ms.sourcegitcommit: f28ebb95ae9aaaff3f87d8388a09b41e0b3445b5
ms.translationtype: MT
ms.contentlocale: nl-NL
ms.lasthandoff: 03/29/2021
ms.locfileid: "100097633"
---
# <a name="security-overview-for-azure-cognitive-search"></a>Security overview for Azure Cognitive Search

This article describes the security features in Azure Cognitive Search that protect content and operations.

Inbound requests to a search service pass through a progression of security measures that protect the search service endpoint: from API keys on the request, to inbound firewall rules, to private endpoints that fully shield your service from the public internet.

For outbound requests to other services, the most common case is indexers reading content from external sources. You can provide credentials on the connection string, or you can set up a managed identity to make search a trusted service when accessing data from Azure Storage, Azure SQL, Cosmos DB, or other Azure data sources. A managed identity is a substitute for credentials or access keys on the connection. For more information about this capability, see [Connect to a data source using a managed identity](search-howto-managed-identities-data-sources.md).

Write operations to external services are few: a search service writes to log files, and writes to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug session data. Other service-to-service calls, such as to Cognitive Services, run over the internal network.

Watch this fast-paced video for an overview of the security architecture and each feature category.

> [!VIDEO https://channel9.msdn.com/Shows/AI-Show/Azure-Cognitive-Search-Whats-new-in-security/player]

## <a name="network-security"></a>Network security

<a name="service-access-and-authentication"></a>

Inbound security features protect the search service endpoint through increasing levels of security and complexity. First, all requests require an API key for authenticated access. Second, you can optionally set firewall rules that restrict access to specific IP addresses. For advanced protection, a third option is to enable Azure Private Link to shield your service endpoint from all internet traffic.

### <a name="public-access-using-api-keys"></a>Public access using API keys

By default, a search service is accessed through the public cloud, using key-based authentication for admin or query access to the search service endpoint. Submitting a valid key is considered proof that the request comes from a trusted entity. Key-based authentication is covered in the next section.

### <a name="configure-ip-firewalls"></a>Configure IP firewalls

To further control access to your search service, you can create inbound firewall rules that allow access to a specific IP address or a range of IP addresses. All client connections must be made through an allowed IP address, or the connection is denied.

:::image type="content" source="media/search-security-overview/inbound-firewall-ip-restrictions.png" alt-text="sample architecture diagram for IP-restricted access":::

You can use the portal to [configure inbound access](service-configure-firewall.md). Alternatively, you can use the management REST APIs. Starting with API version 2020-03-13, with the [IpRule](/rest/api/searchmanagement/services/createorupdate#iprule) parameter, you can restrict access to your service by identifying IP addresses, individually or in a range, that you want to grant access to your search service.

### <a name="network-isolation-through-a-private-endpoint-no-internet-traffic"></a>Network isolation through a private endpoint (no internet traffic)

You can create a [private endpoint](../private-link/private-endpoint-overview.md) for Azure Cognitive Search that allows a client on a [virtual network](../virtual-network/virtual-networks-overview.md) to securely access data in a search index over a [private link](../private-link/private-link-overview.md).

The private endpoint uses an IP address from the virtual network address space for connections to your search service. Network traffic between the client and the search service traverses the virtual network and a private link on the Microsoft backbone network, eliminating exposure to the public internet. A VNET enables secure communication among resources, with your on-premises network, and over the internet.

:::image type="content" source="media/search-security-overview/inbound-private-link-azure-cog-search.png" alt-text="sample architecture diagram for private endpoint access":::

While this is the most secure solution, using additional services comes at an extra cost, so make sure you have a clear understanding of the benefits before diving in. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/). For more information about how these components work together, watch the video at the top of this article; coverage of the private endpoint option starts at 5:48. For instructions on how to set up the endpoint, see [Create a Private Endpoint for Azure Cognitive Search](service-create-private-endpoint.md).

## <a name="authentication"></a>Authentication

For inbound requests to the search service, authentication is through a [mandatory API key](search-security-api-keys.md) (a string of randomly generated numbers and letters) that proves the request comes from a trustworthy source. Cognitive Search does not currently support Azure Active Directory authentication for inbound requests.

Outbound requests made by an indexer are subject to authentication by the external service. The indexer subservice in Cognitive Search can be made a trusted service on Azure, connecting to other services using a managed identity. For more information, see [Set up an indexer connection to a data source using a managed identity](search-howto-managed-identities-data-sources.md).

## <a name="authorization"></a>Authorization

Cognitive Search provides different authorization models for content management and service management.

### <a name="authorization-for-content-management"></a>Authorization for content management

Authorization for content, and for operations related to content, is write or read access conveyed through the [API key](search-security-api-keys.md) provided on the request. The API key is an authentication mechanism, but it also grants access, depending on the type of API key.

+ Admin key (allows read-write access for create-read-update-delete operations on the search service), created when the service is provisioned
+ Query key (allows read-only access to the documents collection of an index), created as needed and designed for client applications that issue queries

In application code, you specify the endpoint and an API key to allow access to content and operations. An endpoint might be the service itself, the indexes collection, a specific index, a documents collection, or a specific document. Chained together, the endpoint, the operation (for example, a create or update request) and the permission level (full or read-only rights based on the key) constitute the security formula that protects content and operations.

### <a name="controlling-access-to-indexes"></a>Controlling access to indexes

In Azure Cognitive Search, an individual index is not a securable object. Instead, access to an index is determined at the service layer (read or write access based on which API key you provide), along with the context of an operation.

For read-only access, you can structure query requests to connect using a [query key](search-security-rbac.md), and include the specific index used by your app. In a query request, there is no concept of joining indexes or accessing multiple indexes simultaneously, so all requests target a single index by definition. As such, the construction of the query request itself (a key plus a single target index) defines the security boundary.

Administrator and developer access to indexes is undifferentiated: both need write access to create, delete, and update the objects managed by the service. Anyone with an [admin key](search-security-rbac.md) to your service can read, modify, or delete any index in the same service. For protection against accidental or malicious deletion of indexes, your in-house source control for code assets is the remedy for reversing an unwanted index deletion or modification. Azure Cognitive Search has failover within the cluster to ensure availability, but it does not store or execute the proprietary code used to create or load indexes.

For multitenancy solutions that require security boundaries at the index level, such solutions typically include a middle tier, which customers use to handle index isolation. For more information about the multitenant use case, see [Design patterns for multitenant SaaS applications and Azure Cognitive Search](search-modeling-multitenant-saas-applications.md).

### <a name="controlling-access-to-documents"></a>Controlling access to documents

If you require granular, per-user control over search results, you can build security filters on your queries, returning documents associated with a given security identity.

Conceptually equivalent to "row-level security", authorization to content within the index is not natively supported using predefined roles or role assignments that map to entities in Azure Active Directory. Any user permissions on data in external systems, such as Cosmos DB, do not transfer with that data as it is indexed by Cognitive Search.
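The identity-filter workaround (a security field on each document, filtered at query time) can be sketched in a few lines. In this illustration the field name `group_ids` and the helper name are assumptions for the sketch, not prescriptions from this article; the `search.in` / `any` OData syntax is the documented Azure Cognitive Search filter form:

```js
// Sketch: build an OData security filter for a hypothetical "group_ids"
// collection field, trimming results to documents the caller's groups may see.
function buildSecurityFilter(userGroups) {
  // search.in matches any element of the group_ids collection
  // against the caller's comma-delimited group list.
  const groups = userGroups.join(',');
  return `group_ids/any(g: search.in(g, '${groups}'))`;
}

console.log(buildSecurityFilter(['group1', 'group2']));
// group_ids/any(g: search.in(g, 'group1,group2'))
```

The resulting string would be passed as the `$filter` on a query request, alongside a query key.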
A workaround for solutions that require row-level security is to create a field in the data source that represents a security group or user identity, and then use filters in Cognitive Search to selectively trim search results of documents and content based on identities. The following table describes two approaches for trimming search results of unauthorized content.

| Approach | Description |
|----------|-------------|
|[Security trimming based on identity filters](search-security-trimming-for-azure-search.md) | Documents the basic workflow for implementing user identity access control. It covers adding security identifiers to an index, and then explains filtering against that field to trim results of prohibited content. |
|[Security trimming based on Azure Active Directory identities](search-security-trimming-for-azure-search-with-aad.md) | This article expands on the previous one with steps for retrieving identities from Azure Active Directory (Azure AD), one of the [free services](https://azure.microsoft.com/free/) in the Azure cloud platform. |

### <a name="authorization-for-service-management"></a>Authorization for service management

Service management operations are authorized through [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). Azure RBAC is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) for provisioning Azure resources.

In Azure Cognitive Search, Resource Manager is used to create or delete the service, manage API keys, and scale the service. As such, Azure role assignments determine who can perform those tasks, whether they use the [portal](search-manage.md), [PowerShell](search-manage-powershell.md), or the [Management REST APIs](/rest/api/searchmanagement/search-howto-management-rest-api).

[Three basic roles](search-security-rbac.md#management-tasks-by-role) are defined for search service administration. Role assignments can be made using any supported methodology (portal, PowerShell, and so forth) and are honored. The Owner and Contributor roles can perform a variety of administration functions. You can assign the Reader role to users who should only view essential information.

> [!Note]
> Using Azure-wide mechanisms, you can lock a subscription or resource to prevent accidental or unauthorized deletion of your search service by users with admin rights. For more information, see [Lock resources to prevent unexpected deletion](../azure-resource-manager/management/lock-resources.md).

<a name="encryption"></a>

## <a name="data-protection"></a>Data protection

At the storage layer, data encryption is built in for all service-managed content saved to disk, including indexes, synonym maps, and the definitions of indexers, data sources, and skillsets. Optionally, you can add customer-managed keys (CMK) for supplemental encryption of indexed content. For services created after August 1, 2020, CMK encryption extends to data on temporary disks, for full double encryption of indexed content.

### <a name="data-in-transit"></a>Data in transit

In Azure Cognitive Search, encryption starts with connections and transmissions, and extends to content stored on disk. For search services on the public internet, Azure Cognitive Search listens on HTTPS port 443. All client-to-service connections use TLS 1.2 encryption. Earlier versions (1.0 or 1.1) are not supported.

### <a name="encrypted-data-at-rest"></a>Encrypted data at rest

For data handled internally by the search service, the following table describes the [data encryption models](../security/fundamentals/encryption-models.md). Some features, such as knowledge store, incremental enrichment, and indexer-based indexing, read from or write to data structures in other Azure services. Those services have their own levels of encryption, separate from Azure Cognitive Search.

| Model | Keys | Requirements | Restrictions | Applies to |
|------------------|-------|-------------|--------------|------------|
| server-side encryption | Microsoft-managed keys | None (built-in) | None; available on all tiers, in all regions, for content created after January 24, 2018. | Content (indexes and synonym maps) and definitions (indexers, data sources, skillsets) |
| server-side encryption | customer-managed keys | Azure Key Vault | Available on billable tiers, in all regions, for content created after January 2019. | Content (indexes and synonym maps) on data disks |
| server-side double encryption | customer-managed keys | Azure Key Vault | Available on billable tiers, in selected regions, on search services created after August 1, 2020. | Content (indexes and synonym maps) on data disks and temporary disks |

### <a name="service-managed-keys"></a>Service-managed keys

Service-managed encryption is a Microsoft-internal operation, based on [Azure Storage Service Encryption](../storage/common/storage-service-encryption.md), using 256-bit [AES encryption](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard). It occurs automatically on all indexing, including incremental updates to indexes that are not fully encrypted (created before January 2018).

### <a name="customer-managed-keys-cmk"></a>Customer-managed keys (CMK)

Customer-managed keys require an additional billable service, Azure Key Vault, which can be in a different region, but under the same subscription, as Azure Cognitive Search. Enabling CMK encryption increases index size and degrades query performance. Based on observations to date, you can expect an increase of 30%-60% in query times, although actual performance will vary depending on the index definition and types of queries. Because of this performance impact, we recommend enabling this feature only on indexes that really require it. For more information, see [Configure customer-managed encryption keys in Azure Cognitive Search](search-security-manage-encryption-keys.md).

<a name="double-encryption"></a>

### <a name="double-encryption"></a>Double encryption

In Azure Cognitive Search, double encryption is an extension of CMK. It amounts to two-fold encryption (once by CMK, and again by service-managed keys), comprehensive in scope, encompassing long-term storage written to a data disk and short-term storage written to temporary disks. The difference between CMK before August 1, 2020 and after, and what makes CMK a double-encryption feature in Azure Cognitive Search, is the additional encryption of data at rest on temporary disks.

Double encryption is currently available on new services created after August 1 in these regions:

+ West US 2
+ East US
+ South Central US
+ US Gov Virginia
+ US Gov Arizona

## <a name="security-management"></a>Security management

### <a name="api-keys"></a>API keys

Reliance on API key-based authentication means you should have a plan for regenerating the admin key at regular intervals, per Azure security best practices. There are a maximum of two admin keys per search service. For more information about securing and managing API keys, see [Create and manage API keys](search-security-api-keys.md).

#### <a name="activity-and-diagnostic-logs"></a>Activity and diagnostic logs

Cognitive Search does not log user identities, so you cannot consult logs for information about a specific user. However, the service does log create-read-update-delete operations, which you may be able to correlate with other logs to understand the agency behind specific actions.

Using alerts and the logging infrastructure in Azure, you can pick up on query volume spikes or other actions that deviate from expected workloads. For more information about setting up logs, see [Collect and analyze log data](search-monitor-logs.md) and [Monitor query requests](search-monitor-queries.md).

### <a name="certifications-and-compliance"></a>Certifications and compliance

Azure Cognitive Search participates in regular audits and has been certified against a number of global, regional, and industry-specific standards for both the public cloud and Azure Government. For the complete list, download the [**Microsoft Azure Compliance Offerings**](https://azure.microsoft.com/resources/microsoft-azure-compliance-offerings/) whitepaper from the official audit reports page.

For compliance, you can use [Azure Policy](../governance/policy/overview.md) to implement the high-security best practices of [Azure Security Benchmark](../security/benchmarks/introduction.md). The Azure Security Benchmark is a collection of security recommendations, codified into security controls that map to key actions you should take to mitigate threats to services and data. There are currently 11 security controls, including [network security](../security/benchmarks/security-control-network-security.md), [logging and monitoring](../security/benchmarks/security-control-logging-monitoring.md), and [data protection](../security/benchmarks/security-control-data-protection.md), to name a few.

Azure Policy is a capability built into Azure that helps you manage compliance for multiple standards, including those of the Azure Security Benchmark. For well-known benchmarks, Azure Policy provides built-in definitions that supply both the criteria and an actionable response that addresses non-compliance.

For Azure Cognitive Search, there is currently one built-in definition. It is for diagnostic logging. With this built-in, you can assign a policy that identifies any search service that is missing diagnostic logging, and then turns it on. For more information, see [Azure Policy Regulatory Compliance controls for Azure Cognitive Search](security-controls-policy.md).

## <a name="see-also"></a>See also

+ [Azure security fundamentals](../security/fundamentals/index.yml)
+ [Azure security](https://azure.microsoft.com/overview/security)
+ [Azure Security Center](../security-center/index.yml)
122.891892
795
0.810952
nld_Latn
0.999673
e14d1c5428664e3f3c555203f5a287b3d6d5ccc6
887
md
Markdown
_posts/2020-04-17-pure-component.md
min9nim/smg.github.io
2252d76d2b0f4c71900a1f4b59669b9803f1c9e0
[ "MIT" ]
null
null
null
_posts/2020-04-17-pure-component.md
min9nim/smg.github.io
2252d76d2b0f4c71900a1f4b59669b9803f1c9e0
[ "MIT" ]
18
2020-05-13T21:24:07.000Z
2022-02-28T01:41:38.000Z
_posts/2020-04-17-pure-component.md
min9nim/smg.github.io
2252d76d2b0f4c71900a1f4b59669b9803f1c9e0
[ "MIT" ]
2
2020-05-30T11:22:49.000Z
2020-06-26T08:37:57.000Z
---
layout: post
title: '[react] PureComponent'
date: 2020-04-17 00:10
categories: react
tags: [js, react, PureComponent]
---

A typical React class component is defined by extending React.Component. With React.Component, `render` is called every time `setState` is called. You can use `shouldComponentUpdate` to control whether a re-render happens, but if you get `shouldComponentUpdate` wrong, it is easy to introduce a bug where the component fails to re-render when it should.

In that case, you can use PureComponent. A React component defined by extending `React.PureComponent` re-renders only when its `props` or `state` change (by shallow comparison). If a component is guaranteed to render the same UI whenever its `props` and `state` are the same (shallowly), defining it as a PureComponent can yield a performance gain.

The example below demonstrates the difference between `React.Component` and `React.PureComponent`.

<iframe style="width: 100%; height: 500px" src="https://stackblitz.com/edit/react-pure-component-9?embed=1&file=index.js" > </iframe>

<br>

### Ref.

https://ko.reactjs.org/docs/react-api.html#reactpurecomponent
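The shallow comparison that decides whether a PureComponent re-renders can be sketched as a plain function. This is an illustration of the idea, not React's actual implementation:

```js
// Sketch of the shallow equality check PureComponent applies to props/state.
// Top-level keys are compared with Object.is; nested objects are compared
// by reference only, which is why a new object literal always looks "changed".
function shallowEqual(a, b) {
  if (Object.is(a, b)) return true;
  if (typeof a !== 'object' || a === null || typeof b !== 'object' || b === null) return false;
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(
    key => Object.prototype.hasOwnProperty.call(b, key) && Object.is(a[key], b[key])
  );
}

console.log(shallowEqual({ a: 1 }, { a: 1 }));   // true
console.log(shallowEqual({ a: {} }, { a: {} })); // false (different object references)
```

The second result is exactly the pitfall with PureComponent: mutating or recreating nested objects in place defeats the shallow check.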
30.586207
205
0.72717
kor_Hang
1.000008
e14dbfb68c52e04b3f919322dfbbe9cf04cd6894
3,654
md
Markdown
doc/Glossary.md
balp/ApprovalTests.cpp
0d25106dd2993bfb963e514151cdecca46a15fec
[ "Apache-2.0" ]
null
null
null
doc/Glossary.md
balp/ApprovalTests.cpp
0d25106dd2993bfb963e514151cdecca46a15fec
[ "Apache-2.0" ]
null
null
null
doc/Glossary.md
balp/ApprovalTests.cpp
0d25106dd2993bfb963e514151cdecca46a15fec
[ "Apache-2.0" ]
null
null
null
<!-- GENERATED FILE - DO NOT EDIT This file was generated by [MarkdownSnippets](https://github.com/SimonCropp/MarkdownSnippets). Source File: /doc/mdsource/Glossary.source.md To change this file edit the source file and then execute ./run_markdown_templates.sh. --> <a id="top"></a> # Glossary <!-- toc --> ## Contents * [Approving Results](#approving-results) * [Approved File](#approved-file) * [Chain of responsibility (pattern)](#chain-of-responsibility-pattern) * [Code Coverage](#code-coverage) * [Combination Testing](#combination-testing) * [Comparator](#comparator) * [Continuous Integration](#continuous-integration) * [Convention over Configuration](#convention-over-configuration) * [Custom Asserts](#custom-asserts) * [Diff Tool](#diff-tool) * [Disposable Objects](#disposable-objects) * [Edge Case](#edge-case) * [Happy Path](#happy-path) * [Kata](#kata) * [Koans](#koans) * [Mutation Testing](#mutation-testing) * [Namer](#namer) * [Principle of Least Surprise](#principle-of-least-surprise) * [RAII (Resource acquisition is initialization)](#raii-resource-acquisition-is-initialization) * [Received File](#received-file) * [Reporter](#reporter) * [Scrubber](#scrubber) * [Stringification](#stringification) * [System Under Test](#system-under-test) * [Test Framework](#test-framework) * [test && commit || revert (TCR)](#test--commit--revert-tcr) * [Writer](#writer) * [Yak Shaving](#yak-shaving) * [Sayings](#sayings)<!-- endtoc --> ## Approving Results ## Approved File ## Chain of responsibility (pattern) ## Code Coverage ## Combination Testing Sometimes referred to as Combinatorial testing. See [Testing Combinations](/doc/TestingCombinations.md#top). ## Comparator See [Custom Comparators](/doc/CustomComparators.md#top). 
## Continuous Integration ## Convention over Configuration [Wikipedia Entry](https://en.wikipedia.org/wiki/Convention_over_configuration) Instead of asking the user to specify everything, we make assumptions based on common patterns, so code usually just works "out of the box". This tends to dramatically reduce the amount of clutter, makes things easier, and reduces the number of surprises. ## Custom Asserts ## Diff Tool ## Disposable Objects Objects that implement the [RAII](#raii-resource-acquisition-is-initialization) pattern. See [Disposable Objects](/doc/DisposableObjects.md#top). ## Edge Case ## Happy Path ## Kata ## Koans ## Mutation Testing ## Namer ## Principle of Least Surprise ## RAII (Resource acquisition is initialization) This is a pattern where your object constructor opens a resource, such as memory, and your object destructor closes the resource. This is also known as "Scope based resource management". [cppreference Entry](https://en.cppreference.com/w/cpp/language/raii) ## Received File ## Reporter See [Reporters](/doc/Reporters.md#top). See [Using sub-directories for approved files](/doc/Configuration.md#using-sub-directories-for-approved-files) See [Features](/doc/Features.md#top) - whose sections need to be moved around ## Scrubber [ApprovalTests.Net](https://github.com/approvals/ApprovalTests.Net/tree/master/src/ApprovalTests/Scrubber) ## Stringification See [ToString](/doc/ToString.md#top). ## System Under Test The area of the production code that you are testing. See [System Under Test](https://en.wikipedia.org/wiki/System_under_test). ## Test Framework ## test && commit || revert (TCR) ## Writer ## Yak Shaving --- ## Sayings * The tests test the code, and the code tests the tests * Test until bored --- [Back to User Guide](/doc/README.md#top)
25.375
255
0.728243
eng_Latn
0.66922
e14e27af9375a0eaf356d359811b0d023ca8f464
1,514
md
Markdown
README.md
andrewking1597/playlist-converter-lite
e903c6208811bf35d53829f355bb2b5a6378a76d
[ "MIT" ]
null
null
null
README.md
andrewking1597/playlist-converter-lite
e903c6208811bf35d53829f355bb2b5a6378a76d
[ "MIT" ]
6
2021-06-23T17:03:40.000Z
2021-07-20T21:07:38.000Z
README.md
andrewking1597/playlist-converter-lite
e903c6208811bf35d53829f355bb2b5a6378a76d
[ "MIT" ]
null
null
null
# playlist-converter-lite Convert an Apple Music playlist to a Spotify playlist ## Prerequisites To use playlist-converter-lite, you must meet the following criteria: - You have an Apple Developer account and a MusicKit API key.  [(More info)](https://developer.apple.com/documentation/applemusicapi/getting_keys_and_creating_tokens) - You have a Spotify for Developers account and your app is properly registered.  [(More info)](https://developer.spotify.com/documentation/web-api/quick-start/) ## Getting Started ### Installation ```zsh pip install playlistconverterlite ``` ### Set Environment Variables Set your app's Spotify Client ID and Client Secret as SPOTIPY_CLIENT_ID and SPOTIPY_CLIENT_SECRET environment variables. ```zsh export SPOTIPY_CLIENT_ID=<your_client_id> export SPOTIPY_CLIENT_SECRET=<your_client_secret> ``` You will also need to set an environment variable SPOTIPY_REDIRECT_URI, which tells the Spotify API where to redirect the user once they have successfully entered their login info. ```zsh export SPOTIPY_REDIRECT_URI=http://127.0.0.1:8080/ ``` Also be sure to register the redirect URI in your app's settings on your Spotify Developer dashboard. ### Convert ```python from playlistconverterlite import converter new_playlist_link = converter.convert( pl_id="SOME_PLAYLIST", am_key="MY_KEY", am_kid="MY_KEY_ID", am_team_id="MY_TEAM_ID", sp_username="SPOTIFY_UN" ) ``` ## License https://github.com/andrewking1597/playlist-converter-lite/blob/main/LICENSE
32.913043
180
0.785997
eng_Latn
0.88185
e14f5d040147168a911ca5d93e1541d80aa359da
45
md
Markdown
README.md
CorcovadoMing/Template
486af690919a3aaf5fe12e53581d7c9e19e142e0
[ "Apache-2.0" ]
null
null
null
README.md
CorcovadoMing/Template
486af690919a3aaf5fe12e53581d7c9e19e142e0
[ "Apache-2.0" ]
null
null
null
README.md
CorcovadoMing/Template
486af690919a3aaf5fe12e53581d7c9e19e142e0
[ "Apache-2.0" ]
null
null
null
# Template Template for performace profiling
15
33
0.844444
eng_Latn
0.874696
e15060b9a4bd0fd3f393caa2dc0d98c8521d2720
3,691
md
Markdown
docs/index.md
apollo13/lightbus
ad9bb5e376e7aabb400d01307345e00fd07e4677
[ "Apache-2.0" ]
null
null
null
docs/index.md
apollo13/lightbus
ad9bb5e376e7aabb400d01307345e00fd07e4677
[ "Apache-2.0" ]
null
null
null
docs/index.md
apollo13/lightbus
ad9bb5e376e7aabb400d01307345e00fd07e4677
[ "Apache-2.0" ]
null
null
null
# What is Lightbus? Lightbus is a powerful and intuitive messaging client for your backend Python services. Lightbus uses Redis 5 as its underlying [transport](explanation/transports.md), although support for other platforms will be added in the future. Other languages can also communicate with Lightbus by [interacting with Redis](reference/protocols/index.md). ## How Lightbus works Lightbus provides you with two tools: 1. A **client** with which to fire events, and make remote procedure calls (RPCs) from anywhere within your codebase. 1. A **stand-alone Lightbus worker process** in which you can set up event listeners. This process will also respond to RPC calls. For example, you could architect an e-commerce system as follows: ![A simple Lightbus deployment][simple-processes] In this example: * **Django** serves pages using data from the database * **Django** performs remote procedure calls to resize images. The Lightbus worker in the **image resizing service** performs the image resize and responds. * The **price monitoring service** fires `price_monitor.competitor_price_changed` events * The Lightbus worker in the **online shop web service** listens for `price_monitor.competitor_price_changed` events and updates prices in the database accordingly. See the [anatomy lesson] for further discussion. ## Designed for ease of use Lightbus is designed to be intuitive and familiar, and common problems are caught with clear and helpful error messages. 
For example, a naïve authentication API: ```python3 class AuthApi(Api): user_registered = Event(parameters=('user', 'email')) class Meta: name = 'auth' def check_password(self, user, password): return ( user == 'admin' and password == 'secret' ) ``` The `check_password` procedure can be called remotely as follows: ```python3 import lightbus bus = lightbus.create() is_valid = bus.auth.check_password( user='admin', password='secret' ) # is_valid is True ``` You can also listen for events: ```python3 # bus.py import lightbus bus = lightbus.create() # Our event handler def send_signup_email(event_message, user, email): send_mail(email, subject=f'Welcome {user}' ) # Setup our listeners on startup @bus.client.on_start() def on_start(): bus.auth.user_registered.listen( send_signup_email, listener_name="send_signup_email" ) ``` ## Where to start? Starting with the **[tutorials] section** will give you a **practical introduction** to Lightbus. Alternatively, the **[explanation] section** will give you a grounding in the high level **concepts and theory**. Start with whichever section suits you best. You should ultimately look through both sections for a complete understanding. In addition, **the [how to] section gives solutions to common use cases**, and **the [reference] section provides detailed technical information** regarding specific features. ## Questions? Get in touch via: * Email: adam@adamcharnock.com * Phone: +442032896620 (Skype, London/Lisbon timezone) * [Community chat](https://discord.gg/2j594ws) * GitHub: https://github.com/adamcharnock/lightbus/ If you are having a technical problem then the more information you can include the better (problem description, screenshots, and code are all useful). 
[issue-1]: https://github.com/adamcharnock/lightbus/issues/1 [simple-processes]: /static/images/simple-processes.png [anatomy lesson]: explanation/anatomy-lesson.md [tutorials]: tutorial/index.md [explanation]: explanation/index.md [How to]: howto/index.md [Reference]: reference/index.md
26.941606
96
0.738824
eng_Latn
0.984553
e15074e393d221d730af7ef7327ba7747c627258
11,735
md
Markdown
docs/_docs/01-get-started.md
noelbundick/azure-iot-developer-kit
d366a7ac648b61185a34e0d57c9b35eeff711bf8
[ "MIT" ]
null
null
null
docs/_docs/01-get-started.md
noelbundick/azure-iot-developer-kit
d366a7ac648b61185a34e0d57c9b35eeff711bf8
[ "MIT" ]
1
2021-11-04T22:49:38.000Z
2021-11-04T22:49:38.000Z
docs/_docs/01-get-started.md
noelbundick/azure-iot-developer-kit
d366a7ac648b61185a34e0d57c9b35eeff711bf8
[ "MIT" ]
null
null
null
--- title: "Get started" permalink: /docs/get-started/ excerpt: "How to quickly install and set up your development environment to use the IoT DevKit." variable: - platform: windows name: Windows - platform: macos name: macOS last_modified_at: 2018-03-12 --- For first-time users of the MXChip IoT DevKit (a.k.a. DevKit), follow these quick steps to: * Prepare your development environment. * Send temperature and humidity data from built-in IoT DevKit sensors to the Azure IoT Hub. If you have already done this, you can try more samples from the [Projects Catalog]({{"/docs/projects/" | absolute_url }}) or build your own IoT application. {% include toc icon="columns" %} ## What you learn * How to connect the IoT DevKit to a wireless access point. * How to install the development environment. * How to create an IoT Hub and register a device for the IoT DevKit. * How to collect sensor data by running a sample application on the IoT DevKit. * How to send the IoT DevKit sensor data to your IoT hub. ## What you need * An MXChip IoT DevKit. [Get it now](https://aka.ms/iot-devkit-purchase){:target="_blank"}. * A computer running Windows 10 or macOS 10.10+. * An active Azure subscription. [Activate a free 30-day trial Microsoft Azure account](https://azure.microsoft.com/en-us/free/). ![Required hardware]({{"/assets/images/getting-started/hardware.jpg" | absolute_url }}) ## Prepare your hardware To connect the IoT DevKit to your computer: 1. Connect the Micro-USB end to the IoT DevKit. 2. Connect the USB end to your computer. 3. The green LED for power confirms the connection. ![Hardware connections]({{"/assets/images/getting-started/connect.jpg" | absolute_url }}) ## Configure Wi-Fi IoT projects rely on internet connectivity. Use AP Mode on the IoT DevKit to configure and connect to Wi-Fi. 1. Hold down button B, push and release the reset button, and then release button B. Your IoT DevKit enters AP mode for configuring the Wi-Fi connection. 
The screen displays the service set identifier (SSID) of the IoT DevKit and the configuration portal IP address: ![Reset button, button B, and SSID]({{"/assets/images/getting-started/wifi-ap.jpg" | absolute_url }}) 2. Use a Web browser on a different Wi-Fi-enabled device (computer or mobile phone) to connect to the IoT DevKit SSID displayed in the previous step. If it asks for a password, leave it empty. ![Network info and Connect button]({{"/assets/images/getting-started/connect-ssid.png" | absolute_url }}) 3. Open **192.168.0.1** in the browser. Select the Wi-Fi network that you want the IoT DevKit to connect to, type the password for the Wi-Fi connection, and then click **Connect**. ![Password box and Connect button]({{"/assets/images/getting-started/wifi-portal.png" | absolute_url }}) 4. The IoT DevKit reboots in a few seconds. You then see the Wi-Fi name and assigned IP address on the screen of the IoT DevKit: ![Wi-Fi name and IP address]({{"/assets/images/getting-started/wifi-ip.jpg" | absolute_url }}) **Note:** After a successful Wi-Fi connection, the currently-installed and latest available version of the IoT DevKit's firmware is displayed on the IoT DevKit screen. If the IoT DevKit is not running on the latest available version, follow the [firmware upgrading guide]({{"/docs/firmware-upgrading/" | absolute_url }}) to install the latest version. {: .notice--info} ## Install development environment We recommend the **Azure IoT Workbench** extension for Visual Studio Code to develop on the IoT DevKit. Azure IoT Workbench provides an integrated experience to develop IoT solutions. It helps with both device and cloud development using Azure IoT and other services. You can watch this [Channel9 video](https://channel9.msdn.com/Shows/Internet-of-Things-Show/IoT-Workbench-extension-for-VS-Code) to get an overview of what it does. Follow these steps to prepare the development environment for IoT DevKit: 1. 
Download and install the [Arduino IDE](https://www.arduino.cc/en/Main/Software). It provides the necessary toolchain for compiling and uploading Arduino code. * **Windows**: Use the Windows Installer version * **macOS**: Drag and drop Arduino into `/Applications` * **Ubuntu**: Unzip it into `$HOME/Downloads/arduino-1.8.5` 2. Install [Visual Studio Code](https://code.visualstudio.com/), a cross-platform source code editor with powerful developer tooling, like IntelliSense code completion and debugging. 3. Look for **Azure IoT Workbench** in the extension marketplace and install it. ![Install IoT Workbench]({{"/assets/images/getting-started/install-workbench.png" | absolute_url }}) Together with the IoT Workbench, other dependent extensions will be installed. 4. Open **File > Preferences > Settings** and add the following lines to configure Arduino. * **Windows**: ```json "arduino.path": "C:\\Program Files (x86)\\Arduino", "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json" ``` * **macOS**: ```json "arduino.path": "/Applications", "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json" ``` * **Ubuntu**: ```json "arduino.path": "/home/{username}/Downloads/arduino-1.8.5", "arduino.additionalUrls": "https://raw.githubusercontent.com/VSChina/azureiotdevkit_tools/master/package_azureboard_index.json" ``` 5. Click `F1` to open the command palette, type and select **Arduino: Board Manager**. Search for **AZ3166** and install the latest version. ![Install DevKit SDK]({{"/assets/images/getting-started/install-sdk.png" | absolute_url }}) As a fallback, you can follow the [manual steps]({{"/docs/installation/" | absolute_url }}) to install the environment. ## ST-Link configuration [ST-Link/V2](http://www.st.com/en/development-tools/st-link-v2.html) is the USB interface that IoT DevKit uses to communicate with your development machine. 
Follow the platform-specific steps to allow the machine access to your device. ### Windows Download and install the USB driver from [STMicro](http://www.st.com/en/development-tools/stsw-link009.html). ### macOS No driver is required for macOS. ### Ubuntu Run the following in a terminal, then log out and log back in for the group change to take effect: ```bash # Copy the default rules. This grants permission to the group 'plugdev' sudo cp ~/.arduino15/packages/AZ3166/tools/openocd/0.10.0/linux/contrib/60-openocd.rules /etc/udev/rules.d/ sudo udevadm control --reload-rules # Add yourself to the group 'plugdev' # Logout and log back in for the group to take effect sudo usermod -a -G plugdev $(whoami) ``` Now you are all set with preparing and configuring your development environment. Let us build a "Hello World" sample for IoT: sending temperature telemetry data to Azure IoT Hub. ## Build your first project 1. Make sure your IoT DevKit is **not connected** to your computer. Start VS Code first, and then connect the IoT DevKit to your computer. 1. In the bottom right status bar, check that **MXCHIP AZ3166** is shown as the selected board and that a serial port with **STMicroelectronics** is used. ![Select board and serial port]({{"/assets/images/getting-started/select-board.png" | absolute_url }}) 1. Click `F1` to open the command palette, type and select **IoT Workbench: Examples**. Then select **IoT DevKit** as the board. 1. In the pop-up page, scroll down and click **Open Sample** on the Get Started tile. Also select the default path to download the sample. ![Open sample]({{"/assets/images/getting-started/open-sample.png" | absolute_url }}) 1. If you don't have the Arduino extension installed in VS Code, click **Install** in the notification pane. ![Install Arduino Extension]({{"/assets/images/getting-started/install-arduino-ext.png" | absolute_url }}) 1. In the newly opened project window, click `F1` to open the command palette, type and select **IoT Workbench: Cloud**, then select **Azure Provision**. 
1. Follow the step-by-step guide to finish provisioning your Azure IoT Hub and creating the device. ![Cloud provision]({{"/assets/images/getting-started/cloud-provision.png" | absolute_url }}) 1. Click `F1` to open the command palette, type and select **IoT Workbench: Device**, then select **Config Device Settings > Select IoT Hub Device Connection String**. 1. On IoT DevKit, hold down button **A**, push and release the **reset** button, and then release button **A**. Your IoT DevKit enters configuration mode and saves the connection string. ![Set connection string]({{"/assets/images/getting-started/connection-string.png" | absolute_url }}) 1. Click `F1` again, type and select **IoT Workbench: Device**, then select **Device Upload**. ![Verification and upload of the Arduino sketch]({{"/assets/images/getting-started/arduino-upload.png" | absolute_url }}) The IoT DevKit reboots and starts running the code. **Note:** If there are errors or interruptions, you can always recover by running the command again. {: .notice--info} ## Test the project Click the power plug icon on the status bar to open the Serial Monitor: ![Open serial monitor]({{"/assets/images/mini-solution/connect-iothub/serial-monitor.png" | absolute_url }}) The sample application is running successfully when you see the following results: * The Serial Monitor displays the message sent to the IoT Hub. * The LED on the MXChip IoT DevKit is blinking. ![Final output in VS Code]({{"/assets/images/mini-solution/connect-iothub/result-serial-output.png" | absolute_url }}) You can use [Azure IoT Toolkit](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit) to monitor device-to-cloud (D2C) messages in IoT Hub. 1. Log in to the [Azure portal](https://portal.azure.com) and find the IoT Hub you created. ![azure-portal-iot-hub]({{"/assets/images/mini-solution/connect-iothub/azure-iot-hub-portal.png" | absolute_url }}) 1. 
In the **Shared access policies pane**, click the **iothubowner policy**, and write down the Connection string of your IoT hub. ![azure-portal-iot-hub-conn-string]({{"/assets/images/mini-solution/connect-iothub/azure-portal-conn-string.png" | absolute_url }}) 1. Expand **AZURE IOT HUB DEVICES** on the bottom left corner. ![azure-iot-toolkit-iot-hub-devices]({{"/assets/images/mini-solution/connect-iothub/azure-iot-toolkit-devices.png" | absolute_url }}) 1. Click **Set IoT Hub Connection String** in context menu. ![azure-iot-toolkit-iot-hub-conn-string]({{"/assets/images/mini-solution/connect-iothub/azure-iot-toolkit-conn-string.png" | absolute_url }}) 1. Click **IoT: Start monitoring D2C message** in context menu. 1. In **OUTPUT** pane, you can see the incoming D2C messages to the IoT Hub. ![azure-iot-toolkit-output-console]({{"/assets/images/mini-solution/connect-iothub/azure-iot-toolkit-console.png" | absolute_url }}) ## Problems and feedback If you encounter problems, you can refer to [FAQs]({{"/docs/faq/" | absolute_url }}) or reach out to us from [Gitter channel](https://gitter.im/Microsoft/azure-iot-developer-kit){:target="_blank"}. {% include feedback.html tutorial="get-started" %} ## Next Steps You have successfully connected an MXChip IoT DevKit to your IoT hub, and you have sent the captured sensor data to your IoT hub. Check our [Projects Catalog]({{"/docs/projects/" | absolute_url }}) for more samples you can build with the IoT DevKit and Azure multiple services.
55.880952
352
0.732339
eng_Latn
0.929689
e150c0150b364aa59d56027fe456b0456baf260a
1,104
md
Markdown
README.md
FractalLAB/collection-mandelbrot
6c77710eebc3f4129c4055a6bc1c405a870b5b78
[ "MIT" ]
null
null
null
README.md
FractalLAB/collection-mandelbrot
6c77710eebc3f4129c4055a6bc1c405a870b5b78
[ "MIT" ]
null
null
null
README.md
FractalLAB/collection-mandelbrot
6c77710eebc3f4129c4055a6bc1c405a870b5b78
[ "MIT" ]
null
null
null
# Collection - Mandelbrot <br> ## Random Mandelbrot generator This simple [Mandelbrot Viewer](https://math.hws.edu/eck/js/mandelbrot/MB.html) can show you the infinite number of aesthetic design possibilities in the world of math. <br> <img align="left" width="400" height="400" src="https://github.com/FractalLAB/collection-mandelbrot/blob/main/uploads/FIxrLV0XMAwMLDm.jpeg"> <img align="center" width="400" height="400" src="https://github.com/FractalLAB/collection-mandelbrot/blob/main/uploads/FIxs4E-X0AE01iM.jpeg"> <img align="left" width="400" height="400" src="https://github.com/FractalLAB/collection-mandelbrot/blob/main/uploads/FIxraalXoAECW7t.jpeg"> <img align="center" width="400" height="400" src="https://github.com/FractalLAB/collection-mandelbrot/blob/main/uploads/FIxsBDYXIAYJ0g_.jpeg"> <img align="left" width="400" height="400" src="https://github.com/FractalLAB/collection-mandelbrot/blob/main/uploads/FIxtKOeXwAMr1lN.jpeg"> <img align="center" width="400" height="400" src="https://github.com/FractalLAB/collection-mandelbrot/blob/main/uploads/FIxuizfX0AIuqeN.jpeg">
69
174
0.769022
yue_Hant
0.461457
e1519c92dbf719cb7fb3bd058251ee3197a7e07e
1,279
md
Markdown
README.md
Mimila-85/Note-Taker
fed40be1f34a640a2190f9ea2d3416b940f8d529
[ "MIT" ]
null
null
null
README.md
Mimila-85/Note-Taker
fed40be1f34a640a2190f9ea2d3416b940f8d529
[ "MIT" ]
null
null
null
README.md
Mimila-85/Note-Taker
fed40be1f34a640a2190f9ea2d3416b940f8d529
[ "MIT" ]
null
null
null
![License](https://img.shields.io/badge/license-MIT-blue) # Note-Taker ## Description This application helps you to keep your notes organized in one place. ## Table of Contents * [Usage](#usage) * [Link](#link) * [Demo](#Demo) * [License](#license) * [Contributing](#contributing) * [Questions](#questions) ## Usage To start to organize your notes, click on the provided link to open this application. It will bring you to the homepage, where you can click on the `Get Start` button. It will bring you to your `Notes`, where you can write new notes, save them, see previous saved notes, and delete the ones that are not relevant anymore. ## Link [Note Taker](https://organize-notes.herokuapp.com) ## Demo ![Note Taker Demo](https://github.com/Mimila-85/Note-Taker/blob/master/develop/public/assets/image/noteTaker.gif) ## License This project is licensed under the terms of the MIT license. ## Contributing If you would like to participate on this project please submit any bugs or feature requests to the contact listed on the `questions` section of this README. ## Questions If you have any questions about the repo, open an issue or contact me directly at camila.alves85@gmail.com. You can find more of my work at [Mimila-85](https://github.com/Mimila-85).
30.452381
321
0.745113
eng_Latn
0.992312
e15231b7ac216dd0286afde6000be0faa842e32d
15
md
Markdown
README.md
satoshun-android-example/Store
1b78ac6fbe9ca45750117efda2c224ea26729658
[ "Apache-2.0" ]
1
2021-01-24T20:44:39.000Z
2021-01-24T20:44:39.000Z
README.md
satoshun-android-example/Store
1b78ac6fbe9ca45750117efda2c224ea26729658
[ "Apache-2.0" ]
null
null
null
README.md
satoshun-android-example/Store
1b78ac6fbe9ca45750117efda2c224ea26729658
[ "Apache-2.0" ]
null
null
null
# Store sample
7.5
14
0.733333
eng_Latn
0.912887
e1525abda9d0d598648a0bb33a7014fcd932102f
1,153
md
Markdown
README.md
Arecio3/employee-directory
465553d7fc6b1484ad56a257f08c57070ea6ca67
[ "MIT" ]
null
null
null
README.md
Arecio3/employee-directory
465553d7fc6b1484ad56a257f08c57070ea6ca67
[ "MIT" ]
null
null
null
README.md
Arecio3/employee-directory
465553d7fc6b1484ad56a257f08c57070ea6ca67
[ "MIT" ]
null
null
null
# Employee Directory [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) # Table of Contents * [Installation](#Installation) * [Usage](#Usage) * [License](#license) * [Contributing](#Contribute) * [Testing](#Testing) * [Questions](#Questions) # Description This is a front-end application built with React that uses the Random User API to generate mock employees. You can search through them with results that update as you type, and filter employees in ascending or descending order. # Link [Link](https://arecio3.github.io/employee-directory/) <img src="public/images/Screen Shot 2021-05-20 at 7.11.51 PM.png"></img> # Installation **npm i** # Testing **npm test** # Contribute **Create pull request** # Usage **git clone** # Questions If you have any questions, feel free to contact me at cuba289@gmail.com. To see more of my work, visit me here: [Arecio3](https://github.com/Arecio3) # License [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
28.121951
249
0.699913
eng_Latn
0.816461
e152c9c5d902e8bf0fbd8382dc7939ab29d1fc6f
5,406
md
Markdown
articles/managed-applications/publish-managed-app-definition-quickstart.md
salem84/azure-docs.it-it
3ec6a13aebb82936591c7fc479f084be9bb8776d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/managed-applications/publish-managed-app-definition-quickstart.md
salem84/azure-docs.it-it
3ec6a13aebb82936591c7fc479f084be9bb8776d
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/managed-applications/publish-managed-app-definition-quickstart.md
salem84/azure-docs.it-it
3ec6a13aebb82936591c7fc479f084be9bb8776d
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Create an Azure managed application definition | Microsoft Docs description: This article describes how to create an Azure managed application intended for members of your organization. services: managed-applications author: tfitzmac ms.service: managed-applications ms.topic: quickstart ms.date: 09/13/2019 ms.author: tomfitz ms.openlocfilehash: b8c5a99a74446fcd126606b34135bba315ca1473 ms.sourcegitcommit: 1752581945226a748b3c7141bffeb1c0616ad720 ms.translationtype: HT ms.contentlocale: it-IT ms.lasthandoff: 09/14/2019 ms.locfileid: "70995420" --- # <a name="publish-an-azure-managed-application-definition"></a>Publish an Azure managed application definition This quickstart provides an introduction to working with managed applications. You can add a managed application definition to an internal catalog for users in your organization. To make it easier to get started, the files for the managed application have already been built. These files are available through GitHub. The [Create service catalog application](publish-service-catalog-app.md) tutorial shows how to build these files. When you're done, you'll have a resource group named **appDefinitionGroup** with the managed application definition. [!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)] ## <a name="create-a-resource-group-for-definition"></a>Create a resource group for the definition The managed application definition exists in a resource group. A resource group is a logical collection into which Azure resources are deployed and managed. 
To create a resource group, use the following command: ```azurecli-interactive az group create --name appDefinitionGroup --location westcentralus ``` ## <a name="create-the-managed-application-definition"></a>Create the managed application definition When you define the managed application, you select a user, group, or application that manages the resources for the consumer. This identity has permissions on the managed resource group according to the assigned role. Typically, you create an Azure Active Directory group to manage the resources. For this article, however, use your own identity. To get the object ID of your identity, provide your user principal name in the following command: ```azurecli-interactive userid=$(az ad user show --id example@contoso.org --query objectId --output tsv) ``` Next, you need the role definition ID of the built-in role-based access control (RBAC) role you want to grant the user. The following command shows how to get the definition ID for the Owner role: ```azurecli-interactive roleid=$(az role definition list --name Owner --query [].name --output tsv) ``` Now create the managed application definition. The managed application contains only a storage account. ```azurecli-interactive az managedapp definition create \ --name "ManagedStorage" \ --location "westcentralus" \ --resource-group appDefinitionGroup \ --lock-level ReadOnly \ --display-name "Managed Storage Account" \ --description "Managed Azure Storage Account" \ --authorizations "$userid:$roleid" \ --package-file-uri "https://github.com/Azure/azure-managedapp-samples/raw/master/Managed%20Application%20Sample%20Packages/201-managed-storage-account/managedstorage.zip" ``` When the command completes, you have a managed application definition in your resource group. 
Here are some of the parameters used in the preceding example: * **resource-group**: the name of the resource group in which the managed application definition is created. * **lock-level**: the type of lock placed on the managed resource group. It prevents the customer from performing undesirable operations on this resource group. ReadOnly is currently the only supported lock level. When ReadOnly is specified, the customer can only read the resources in the managed resource group. Publisher identities that are granted access to the managed resource group are exempt from the lock. * **authorizations**: indicates the security principal ID and the role definition ID used to grant permission to the managed resource group. It is specified in the format `<principalId>:<roleDefinitionId>`. If multiple values are needed, specify them in the format `<principalId1>:<roleDefinitionId1> <principalId2>:<roleDefinitionId2>`, separated by a space. * **package-file-uri**: the location of a .zip package that includes the required files. The package must contain the **mainTemplate.json** and **createUiDefinition.json** files. **mainTemplate.json** defines the Azure resources created as part of the managed application. The template is no different from a standard Resource Manager template. **createUiDefinition.json** generates the user interface for users who create the managed application through the portal. ## <a name="next-steps"></a>Next steps You have published the managed application definition. Now you'll learn how to deploy an instance of that definition. > [!div class="nextstepaction"] > [Quickstart: Deploy a service catalog app](deploy-service-catalog-quickstart.md)
68.43038
525
0.800407
ita_Latn
0.998633
e1538329ef92d655365a554444c72eb6176f76bc
449
md
Markdown
packages/api-library/README.md
noslav/upvest-javascript
99e7bd7b0cbad50570482fa609c2013613537105
[ "MIT" ]
null
null
null
packages/api-library/README.md
noslav/upvest-javascript
99e7bd7b0cbad50570482fa609c2013613537105
[ "MIT" ]
null
null
null
packages/api-library/README.md
noslav/upvest-javascript
99e7bd7b0cbad50570482fa609c2013613537105
[ "MIT" ]
null
null
null
# Shared code library

Code shared between the [Upvest Clientele API](https://www.npmjs.com/package/@upvest/clientele-api) and [Upvest Tenancy API](https://www.npmjs.com/package/@upvest/tenancy-api).

Please refer to https://www.npmjs.com/package/@upvest/clientele-api and https://www.npmjs.com/package/@upvest/tenancy-api

# License

This software is released under the [MIT License](https://github.com/toknapp/js-api-clients/tree/master/LICENSE)
44.9
176
0.772829
eng_Latn
0.339224
e1538ad6d74fb164549553bd5c469dcdeaadafcd
4,450
md
Markdown
RELEASE.md
Kong/kubernetes-ingress-controller
6dade9637ea86d66a03f3c939bef95d8c8df7000
[ "Apache-2.0" ]
1,662
2018-04-05T22:59:57.000Z
2022-03-31T09:06:04.000Z
RELEASE.md
Kong/kubernetes-ingress-controller
6dade9637ea86d66a03f3c939bef95d8c8df7000
[ "Apache-2.0" ]
1,817
2018-04-06T18:51:23.000Z
2022-03-31T22:21:42.000Z
RELEASE.md
Kong/kubernetes-ingress-controller
6dade9637ea86d66a03f3c939bef95d8c8df7000
[ "Apache-2.0" ]
588
2018-04-09T13:43:23.000Z
2022-03-23T04:24:59.000Z
# Release Process

## Prerequisites

- [Docker](https://docs.docker.com/get-docker/) `v20.10.x`
- [GNU Make](https://www.gnu.org/software/make/) `v4.x`
- [Kustomize](https://github.com/kubernetes-sigs/kustomize) `v1.3.x`

## Github Workflow Test Matrix Checkup

**For all releases**

We maintain some integration tests with 3rd party components which we need to manually verify and update before cutting any release.

- [ ] check the testing workflow (`.github/workflows/test.yaml`) and ensure that all matrix versions are up to date for various component releases. If there have been any new releases (major, minor or patch) of those components since the latest version seen in that configuration, make sure the new versions get added before proceeding with the release.
  - [ ] Kubernetes
  - [ ] Istio

An issue exists to automate the above actions: https://github.com/Kong/kubernetes-ingress-controller/issues/1886

## Release Branch

**For all releases**

For this step we're going to start with the `main` branch to create our release branch (e.g. `release/X.Y.Z`) which will later be submitted as a pull request back to `main`.

- [ ] ensure that you have an up to date copy of `main`: `git fetch --all`
- [ ] create the release branch for the version (e.g. `release/1.3.1`): `git branch -m release/x.y.z`
- [ ] Make any final adjustments to CHANGELOG.md. Double-check that dates are correct, that link anchors point to the correct header, and that you've included a link to the Github compare link at the end.
- [ ] update the `TAG` variable in the `Makefile` to the new version release and commit the change
- [ ] ensure base manifest versions use the new version and update manifest files: `make manifests`
- [ ] ensure that the Kubernetes versions provisioned in the cloud (GKE, etc.)
as part of the release CI pipeline are up to date
  - [ ] remove any versions that are no longer supported by the cloud provider, or the release pipeline will fail
- [ ] push the branch up to the remote: `git push --set-upstream origin release/x.y.z`

## Release Pull Request

**For all releases**

- [ ] Open a PR from your branch to `main`
- [ ] Once the PR is merged, tag your release: `git fetch --all && git tag 1.3.1 origin/main && git push origin --tags`
- [ ] Wait for CI to build images and push them to Docker Hub

## Github Release

**For all releases**

- [ ] verify that CI is passing for `main` first: if there are CI errors on main they must be investigated and fixed
- [ ] draft a new [release](https://github.com/Kong/kubernetes-ingress-controller/releases), using a title and body similar to previous releases. Use your existing tag.
- [ ] for new `major` version releases create a new branch (e.g. `1.3.x`) from the release tag and push it
- [ ] for `minor` and `patch` version releases rebase the release tag onto the release branch: `git checkout 1.3.x && git rebase 1.3.1 && git push`

## Documentation

**For major/minor releases only**

- [ ] Create a new branch in the [documentation site repo](https://github.com/Kong/docs.konghq.com).
- [ ] Copy `app/kubernetes-ingress-controller/OLD_VERSION` to `app/kubernetes-ingress-controller/NEW_VERSION`.
- [ ] Update articles in the new version as needed.
- [ ] Update `references/version-compatibility.md` to include the new versions (make sure you capture any new Kubernetes/Istio versions that have been tested)
- [ ] Copy `app/_data/docs_nav_kic_OLDVERSION.yml` to `app/_data/docs_nav_kic_NEWVERSION.yml`. Add entries for any new articles.
- [ ] Add a section to `app/_data/kong_versions.yml` for your version.
- [ ] Open a PR from your branch.
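The version string drives several derived names in the steps above (the short-lived release branch and the long-lived maintenance branch). A minimal shell sketch of that derivation, using a hypothetical version:

```shell
# Hypothetical version; substitute the release being cut
version="1.3.1"
release_branch="release/${version}"   # short-lived branch for the release PR
maint_branch="${version%.*}.x"        # long-lived branch for follow-up patches
echo "$release_branch"  # release/1.3.1
echo "$maint_branch"    # 1.3.x
```

This keeps the branch names consistent with the `release/X.Y.Z` and `1.3.x` conventions used throughout the checklist.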
# Release Troubleshooting

## Manual Docker image build

If the "Build and push development images" Github action is not appropriate for your release, or is not operating properly, you can build and push Docker images manually:

- [ ] Check out your release tag.
- [ ] Run `make container`. Note that you can set the `TAG` environment variable if you need to override the current tag in Makefile.
- [ ] Add additional tags for your container (e.g. `docker tag kong/kubernetes-ingress-controller:1.2.0-alpine kong/kubernetes-ingress-controller:1.2.0; docker tag kong/kubernetes-ingress-controller:1.2.0-alpine kong/kubernetes-ingress-controller:1.2`)
- [ ] Create a temporary token for the `kongbot` user (see 1Password) and log in using it.
- [ ] Push each of your tags (e.g. `docker push kong/kubernetes-ingress-controller:1.2.0-alpine`)
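The additional tags in the manual build step follow a simple pattern: strip the `-alpine` suffix for the versioned tag, then drop the patch component for the major.minor tag. A small sketch (the image reference below is just an example):

```shell
# Hypothetical image reference; the real tag comes from the release being cut
full="kong/kubernetes-ingress-controller:1.2.0-alpine"
base="${full%-alpine}"      # drop the -alpine suffix -> versioned tag
minor_tag="${base%.*}"      # drop the patch component -> major.minor tag
echo "$base"       # kong/kubernetes-ingress-controller:1.2.0
echo "$minor_tag"  # kong/kubernetes-ingress-controller:1.2
```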
58.552632
352
0.740225
eng_Latn
0.987039
e153fe5c96d30fc4d5023254511129fedca19a78
488
md
Markdown
README.md
male-gal/coalibot
2142368dca0a1d9d20418b1e43054ff1fab094e4
[ "MIT" ]
21
2018-02-08T10:04:24.000Z
2020-04-11T17:53:04.000Z
README.md
clafoutis42/coalibot
d77c124db9710c135048a7c60b3c514435cab2cb
[ "MIT" ]
14
2018-01-11T21:20:29.000Z
2019-01-31T15:31:47.000Z
README.md
clafoutis42/coalibot
d77c124db9710c135048a7c60b3c514435cab2cb
[ "MIT" ]
20
2018-01-11T16:37:14.000Z
2020-02-04T13:35:59.000Z
# Coalibot

Slack bot for the school [42Born2Code](http://www.42.fr/)

## Features

More than 20 commands available, such as:

- Access to the school API to retrieve public info such as a student's profile (level, current position in the school, logtime...)
- Information about the student association and related activities
- The weather
- Fun quotes from various movies
- Some cool skins for Slack

![screenshot](https://i.ibb.co/jwswpCr/Screen-Shot-2019-03-19-at-10-55-56-PM.png)
30.5
142
0.754098
eng_Latn
0.973514
e15400cefad7fdd21a7e9bcf9f5e51fc13500321
59
md
Markdown
README.md
AkioUnity/SevenStar
f283a540a68d7933942bf4b729f489d9649d79c2
[ "MIT" ]
null
null
null
README.md
AkioUnity/SevenStar
f283a540a68d7933942bf4b729f489d9649d79c2
[ "MIT" ]
null
null
null
README.md
AkioUnity/SevenStar
f283a540a68d7933942bf4b729f489d9649d79c2
[ "MIT" ]
null
null
null
# SevenStar

Unity3D Texas Hold'em game with a WebSocket server
19.666667
46
0.847458
kor_Hang
0.719277
e155bb51509f671403ae0ab51cdcd15a1d427bc9
580
md
Markdown
content/Blog/the-mind-boggles.md
squalrus/blog
f8fd1f945a0d66d04139ffd1aa6d0845ac7cda53
[ "MIT" ]
null
null
null
content/Blog/the-mind-boggles.md
squalrus/blog
f8fd1f945a0d66d04139ffd1aa6d0845ac7cda53
[ "MIT" ]
null
null
null
content/Blog/the-mind-boggles.md
squalrus/blog
f8fd1f945a0d66d04139ffd1aa6d0845ac7cda53
[ "MIT" ]
3
2020-09-27T21:58:44.000Z
2021-02-28T20:15:24.000Z
--- date: 2004-05-20T11:40:00+00:00 title: The mind boggles... type: posts --- <blockquote dir="ltr" style="MARGIN-RIGHT: 0px"> _And it doesn't even have to be implanted into the hand – clubbers can have the chip injected into any part of their body, as long as they are able to flash it in front of the scanner._ </blockquote> (from <http://www.ananova.com/news/story/sm_958267.html>, found via [When RFID goes bad](http://objectsharp.com/Blogs/barry/archive/2004/05/18/458.aspx){#_196154e5d3cf_HomePageDays_DaysList__ctl0_DayItem_DayList__ctl0_TitleUrl} by Barry Gervin)
44.615385
244
0.760345
eng_Latn
0.782698
e1563fd5411fc6856df68ede737715bde7c5e114
56
md
Markdown
README.md
r351574nc3/docker-rice-legacy
425573f40f0ed93ed2db5fe8014c977e15dcab36
[ "MIT" ]
null
null
null
README.md
r351574nc3/docker-rice-legacy
425573f40f0ed93ed2db5fe8014c977e15dcab36
[ "MIT" ]
null
null
null
README.md
r351574nc3/docker-rice-legacy
425573f40f0ed93ed2db5fe8014c977e15dcab36
[ "MIT" ]
null
null
null
# docker-rice-legacy Docker file for legacy rice builds
18.666667
34
0.803571
kor_Hang
0.413756
e156b9fea155c20a08dbf044c557496acf60d732
577
md
Markdown
docs/_docs/reference/function.Facebook.HackCodegen._Private.C.coalescevax.md
aloiret/hack-codegen
4195c1789b73b4d892d48d75169d229a4b9fc16e
[ "MIT" ]
66
2017-02-15T03:02:57.000Z
2022-02-13T19:33:38.000Z
docs/_docs/reference/function.Facebook.HackCodegen._Private.C.coalescevax.md
aloiret/hack-codegen
4195c1789b73b4d892d48d75169d229a4b9fc16e
[ "MIT" ]
90
2017-02-10T04:05:04.000Z
2021-11-16T02:30:57.000Z
docs/_docs/reference/function.Facebook.HackCodegen._Private.C.coalescevax.md
aloiret/hack-codegen
4195c1789b73b4d892d48d75169d229a4b9fc16e
[ "MIT" ]
35
2017-02-10T06:07:47.000Z
2021-11-06T19:24:03.000Z
--- layout: docs title: Facebook\HackCodegen\_Private\C\coalescevax id: function.Facebook.HackCodegen._Private.C.coalescevax docid: function.Facebook.HackCodegen._Private.C.coalescevax permalink: /docs/reference/function.Facebook.HackCodegen._Private.C.coalescevax/ --- # Facebook\\HackCodegen\\_Private\\C\\coalescevax() Return the first non-null parameter, or throw an exception if there are none ``` Hack namespace Facebook\HackCodegen\_Private\C; function coalescevax<\T>( ?\T ...$in, ): \T; ``` ## Parameters * ` ?\T ...$in ` ## Returns - ` \T `
12.822222
80
0.712305
yue_Hant
0.967865
e15733cad79f6a3b7fd95e668bc6b4c42aa031d1
14,934
md
Markdown
_posts/Drafts/2018-11-19-Of-Maps-and-Monsters-the-Right-Way-to-Populate-a-World-Part-II.md
TedTschopp/tschopp.net
38f11d107ad2c698a0d0a08db2bd511bfe9bb800
[ "MIT" ]
1
2019-04-29T06:01:45.000Z
2019-04-29T06:01:45.000Z
_posts/Drafts/2018-11-19-Of-Maps-and-Monsters-the-Right-Way-to-Populate-a-World-Part-II.md
TedTschopp/tschopp.net
38f11d107ad2c698a0d0a08db2bd511bfe9bb800
[ "MIT" ]
21
2017-11-08T22:15:57.000Z
2019-08-20T11:00:35.000Z
_posts/Drafts/2018-11-19-Of-Maps-and-Monsters-the-Right-Way-to-Populate-a-World-Part-II.md
TedTschopp/tschopp.net
38f11d107ad2c698a0d0a08db2bd511bfe9bb800
[ "MIT" ]
null
null
null
---
title: Of Maps and Monsters, the Right way to Populate a World Part II
date: 2018-11-19T21:04:42-07:00
author:
  name: Ted Tschopp
  url: https://www.tedt.org/
  avatar: https://www.tedt.org/img/thumb3.jpg
description:
excerpt: |
  This work is the beginning of a collection of notes on Monsters. In the future I will split this out into several different sections. I wanted to start with what I had here and start working my thoughts out while getting some of this out of my head and into a place I can review and look at.
layout: post
guid: dd7cf62d-9947-46e7-b436-7fafb10027e6
permalink: /2018/11/19/Of-Maps-and-Monsters-the-Right-Way-to-Populate-a-World-Part-II/
image: /wp-content/uploads/2018/05/Ebstorfer-stich2.jpg
image-credits: Ebstorf Map by Gervase of Ebstorf
categories:
  - Role Playing Games
  - Maps
  - Monsters
  - Draft
draft-status: notes
mathjax: true
---

<style>
.dccvsdnd thead tr th { width: 50%; }
</style>

# Contents
{:.no_toc}

* Will be replaced with the ToC, excluding the "Contents" header
{:toc}

# Notes

This work is the beginning of a collection of notes on Monsters and Maps. In the future I will split this out into several different sections. I wanted to start with what I had here and start working my thoughts out while getting some of this out of my head and into a place I can review and look at.

# Monsters

One of the aspects of making a sandbox type of role playing game is that many times the monsters that the players run into do not make sense. These rules are things I am trying to put down to make a computer-controlled map generator that can be used to simplify the planning process for people running a game. A lot of this information was inspired by and taken from around the web and expanded upon.

## Monster variation

Today in most role playing games monsters are reduced to a handful of stats that do not make them unique. The first activity needed for a database of monsters to run into is that these monsters need to have random stats.
Not every player character or human today has the same strength. The basic idea here is that there is some variation. Many times this variation is reduced to simplify bookkeeping, and that makes sense. Today, however, we have rather powerful computers that can handle some of that bookkeeping. Therefore I propose that attributes be replaced with a set of dice.

### Stats for Exceptional Members of the Encounter

If someone wants to pick more than one type of archetype for the character in question, and there are two bonuses to a given stat, then the dice are rolled and the larger of the bonuses is taken. Bonuses do not stack.

Example: Someone wants to be a Giant Leader. Giant gives someone a 1d3+3 hit dice increase up the dice chain. Leader gives a 1d3 increase up the dice chain. The Giant's 1d3+3 rolls a 1+3 = 4-die increase, while Leader grants a 1d3 that rolls a 3. The resulting increase in hit dice is 4.

#### The Leader

A leader of a given group will have the following:

* Increase hit dice by 1d3 dice up the dice chain
* Prioritize the following for these creatures with a 1, 2, and 3
  * Physical Prowess (The Alpha) - Add Priority to AC and Reflex or Strength Modifier
  * Personality / Looks (The Commander) - Add Priority to Personality Modifier
  * Health - Add Priority to Fortitude Modifier

*[dice chain]: (d3 → d4 → d5 → d6 → d7 → d8 → d10 → d12 → d14 → d16 → d20 → d24 → d30)

#### The Giant

A Giant of a given group can be modified as follows:

* Increase hit dice by 1d3+3 up the dice chain
* Increase Stamina modifier 2d3

#### Dire or Prehistoric

These creatures are normal creatures that are generally larger and have built-in weapons in the form of horns, teeth, or spikes. They may have built-in armor in the form of heavy fur, hardened bony plates, or thicker skin. They also have a heightened intelligence if their base species doesn't have such intelligence. These creatures will also act as Alpha leaders in groups of the normal creatures of the base species.
* Increase hit dice by 1d3 up the dice chain
* Increase Personality modifier by 1d3
* Set Intelligence Modifier to 0
* Prioritize the following for these creatures with a 1, 2, 3, and 4
  * Increase Fortitude modifier by priority
  * Increase Strength modifier by priority
  * Increase AC by 2d{{priority}}-2
  * Increase Damage from each non-weapon attack by 2d{{priority}}-2 up the dice chain

#### Magical, Blessed

Magical or blessed creatures are those that are blessed by the Gods or magic, or appear so to normal people. Fey creatures are like this many times.

* Increase Personality modifier by 1d3
* Prioritize the following for these creatures with a 1, 2, 3, and 4
  * Physical Prowess - Add Priority to AC and Reflex or Strength Modifier
  * Health - Add Priority to Fortitude Modifier
  * Increase Speed by 5' x priority
  * Horns, Claws, hooves, or Teeth - Increase Damage from one non-weapon attack by priority up the dice chain
* TODO: Add Spell
* TODO: Add Aura

### Types of Subnormal Members of the Encounter

#### The Runt / Toady

A Runt or Toady of a given group will have the following:

* Decrease hit dice by 1d3 dice down the dice chain
* Prioritize the following for these creatures with a 1, 2, and 3
  * Physical Prowess (The Runt) - Subtract Priority from AC and Reflex or Strength Modifier
  * Personality / Looks (The Toady) - Subtract Priority from Personality Modifier
  * Health - Subtract Priority from Fortitude Modifier

#### The Pygmy

A Pygmy of a given group will tend to have the following:

* Decrease hit dice by 1d3+3 down the dice chain
* Decrease Stamina modifier 2d3
* Decrease Strength modifier 2d3

#### The Fool

A fool of a given group will tend to have the following:

* Decrease Personality modifier by 1d3
* Decrease Intelligence Modifier by 1d3

#### With the Head of a...

* Man
* Bull
* etc...

#### With the Body of a...

* Man
* Bull
* etc...

#### Magical, Undead

When the body dies, the spirit and the mind must be properly dealt with.
Each religious tradition, deity, and demi-god has various plans on how this occurs and what to do. Some cults and religions, however, will prescribe the creation of various forms of the undead under certain circumstances.

##### The Zombie

The zombie is a human being whose body is still alive, but whose spirit and mind have been removed from the body. One reason for doing this is that there are spirits who wish to use the body as a host. Another example would be that the body is never given a spirit at all, but instead is given a simplistic mind set upon a given outcome. Eventually the body may die, but the zombie will continue to control the remaining flesh, bones, and sinew.

* Fast
* Contagious
* Traditional
* Movie

##### The Mummy

The Mummy is a human body that has died. However, through various techniques the spirit and the mind are still bound to this body. This may be done by people themselves who wish to take this route to eternal life. This can also be done by others who wish a person who has died to continue to live. There are even cultures where this is attempted with victims of certain crimes. In general, this type of undead may also occur naturally if the spirit and the mind are not properly separated from the body in death. The burial rituals of various cultures and religions are designed to ensure that this occurs.

##### The Skeleton

The Skeleton is created when someone attaches a mind to the skeleton of a creature. This mind is given various simple commands to carry out.

##### The Ghost

When someone dies, their mind may linger, roaming the world looking for their body or trying to accomplish some task that they as a human felt the need to accomplish. In tragic cases this will be some seemingly impossible task. This will trap the mind of the person in the physical realm. A Ghost has very few personal memories of the person they were when they were merged with their body and their spirit.
They remain only as the archetype of who they were, with the task or goal they needed to accomplish. The more unique or memorable a person was in life, the more durable their ghost will be if the burial rituals are not performed.

##### The Spirit

When someone dies and their spirit lingers, the very nature of this person is still around. Unlike a Ghost, which is without a personality, the Spirit is all personality and will remember the details of their previous life. In addition, they will be able to continue to build new memories.

#### Magical, Vampire

#### Magical, Were

#### Magical, Demonic

Calculating, dark, with fiery highlights.

* TODO: Fix this
* Increase Personality modifier by 1d3
* Prioritize the following for these creatures with a 1, 2, 3, and 4
  * Physical Prowess - Add Priority to AC and Reflex or Strength Modifier
  * Health - Add Priority to Fortitude Modifier
  * Increase Speed by 5' x priority
  * Horns, Claws, hooves, or Teeth - Increase Damage from one non-weapon attack by priority up the dice chain
* TODO: Add Spell
* TODO: Add Aura

#### Magical, Hideous

Ugly, chaotic, random, insanity-inducing.

* TODO: Fix this
* Increase Personality modifier by 1d3
* Prioritize the following for these creatures with a 1, 2, 3, and 4
  * Physical Prowess - Add Priority to AC and Reflex or Strength Modifier
  * Health - Add Priority to Fortitude Modifier
  * Increase Speed by 5' x priority
  * Horns, Claws, hooves, or Teeth - Increase Damage from one non-weapon attack by priority up the dice chain
* TODO: Add Spell
* TODO: Add Aura

#### Magical, Doomed

Pitiable, an object lesson, extreme to the point of humiliation.
* TODO: Fix this
* Increase Personality modifier by 1d3
* Prioritize the following for these creatures with a 1, 2, 3, and 4
  * Physical Prowess - Add Priority to AC and Reflex or Strength Modifier
  * Health - Add Priority to Fortitude Modifier
  * Increase Speed by 5' x priority
  * Horns, Claws, hooves, or Teeth - Increase Damage from one non-weapon attack by priority up the dice chain
* TODO: Add Spell
* TODO: Add Aura

#### but with Claws!

TODO: Move from D&D 5e to DCC

|Size     |Tiny |Small |Medium|Large |Huge  |Gargantuan|
|-------: |:---:|:----:|:----:|:----:|:----:|:--------:|
|Reach    | 0ft | 5ft  | 5ft  | 10ft | 10ft | 15ft     |
|Bite     | 1d4 | 1d6  | 1d8  | 1d10 | 2d8  | 2d10     |
|Claw     | 1d4 | 1d4  | 1d6  | 1d8  | 1d10 | 2d8      |
|Gore     | 1d4 | 1d6  | 1d8  | 1d10 | 2d8  | 2d10     |
|Hoof     | 1d4 | 1d4  | 1d6  | 1d8  | 1d10 | 2d8      |
|Tentacle | 1d4 | 1d4  | 1d6  | 1d8  | 1d10 | 2d8      |
|Wing     | 1d4 | 1d4  | 1d6  | 1d8  | 1d10 | 2d8      |
|Pincers  | 1d4 | 1d6  | 1d8  | 1d10 | 2d8  | 2d10     |
|Tail Slap| 1d4 | 1d6  | 1d8  | 1d10 | 2d8  | 2d10     |
|Slam     | 1d4 | 1d4  | 1d6  | 1d8  | 1d10 | 2d8      |
|Stomp    | 1d4 | 1d4  | 1d6  | 1d8  | 1d10 | 2d8      |
|Sting    | 1d4 | 1d4  | 1d6  | 1d8  | 1d10 | 2d8      |
|Talons   | 1d4 | 1d4  | 1d6  | 1d8  | 1d10 | 2d8      |
{: .well .table .table-striped}

## Close Calls

Monster close calls are used to build tension and to give players a hint of foreshadowing. They also give players the sense that the world is a whole lot more detailed and immersive. Here is an example of how these are used. Let's say the players are off the planned map in an area

### Tracks

What sort of tracks do these monsters leave? Do they have feet or hooves? Do they have tails that drag on the ground? How many legs do they walk on?

### Remains

What sort of remains do these creatures leave? Trash? Bodies of prey, or do they leave the victims alive? Do they tend to leave feathers, horns, hair, fur, or skin? Do they do this sort of thing only during certain times of the year? Or while they are engaged in certain social behaviors?
What does an abandoned watering hole or sleeping area look like? Do they pad all the grass down in a certain way before they go to sleep?

### Markings

What sort of ways do these creatures mark their territory?

### Droppings

What do droppings and scat look like from this creature?

## Not so Close Calls

### Their Lair

Perhaps they are social. Perhaps they are anti-social. What happens when you meet them in their typical lair?

### Encountered away from their Lair

Traveling, hunting, gathering, fighting, trading, exploring, patrolling, scouting?

#### Travel

Distance they can travel in one day. How long will they remain away from their home?

What sort of travel? Do they travel by:

* Land
* Air - close to ground
* Air - within ranged combat
* Air - within sight
* Air - above sight
* Water
* other?

Why are they traveling?

* Traveling
* Hunting
* Gathering
* Fighting

### Habitat

What sort of habitat does this creature live and travel in?

## External Reactions

How do these creatures typically react to others?

## How to hunt them

How difficult are these creatures to hunt, and what sort of skills and equipment are needed?

## How to cook / eat them

Once they are hunted, how are they cooked? Are they edible? Are they poisonous? Are they a delicacy?

### Mammal Meat

Roast, soup, stew, ground, steaks, ribs

### Bird Meat

Breast, leg, thigh, wing, soup, stew

### Lizard

Neck, jaw, torso & body, leg, tail

## How to Harvest them

Once the meat is taken from their bones, what can you do with what's left?

Creature | Part | Common Use | Effect of Use | Value | DC | Notes | Shelf Life in days
----------:|:----:|:---------:|:---------------------------------------------------------:|:-----:|:---:|:------:|:-------------------
Ape | Paw | Trophy | The paw is dried and mounted | 2 | 5 | -none- | 10
Ape | Hide | Trophy | The hide is skinned and cured, lined, and turned into a cape. | 8 | 18 | On an unsuccessful attempt, the item becomes leather | 10
Ape | Hide | Trophy | The hide is skinned and cured, lined, and stuffed. | 8 | 18 | -none- | 10
Ape | Hide | Practical | The hide can be skinned and cured and turned into Leather | 8 | 18? | -none- | 10
Badger | Hide | Trophy/Practical | The hide can be skinned and cured and turned into Leather | 1 | 15 | | 10
Badger | Claws | Trophy | | 1 | 5 | | n/a
Badger | Paw | Trophy | | 2 | 5 | | 10
Badger | Hide | Trophy/Practical | | 8 | 18 | | 10

## Trophies and other

### Trophies

Trophies consist of several things. The first is a stuffed, dried, or otherwise preserved piece of the creature. Another aspect of trophies is to integrate the body parts into clothing.

* Alchemy
* Medical
* Practical

### Head

* Antenna
* Horns / Antlers
* Skull
* Eyes
* Teeth / Tusks
* Tongue
* Beak
* Brain
* Mandibles

### Reproduction

* Egg

### Body Parts

* Hide
* Claws
* Feathers
* Stinger
* Paw / Hand / Feet / Toe
* Hooves
* Tail
* Spikes, plates, and other carapace
* Wings

### Body Parts

* Gizzard
* Heart
* Liver
* Blood
* Musk Gland
* Poison Sack
* Ink Sack
* Spores
* Blubber / Fat / Oil
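Returning to the stat-variation rules earlier in these notes: stepping a die up the dice chain can be sketched as a small helper. This is only an illustration; the chain is the one defined in the notes, and `step_up` is a hypothetical helper name:

```shell
# The dice chain from the notes; step_up is a hypothetical helper name
chain=(d3 d4 d5 d6 d7 d8 d10 d12 d14 d16 d20 d24 d30)

# step_up DIE STEPS: move DIE up the chain by STEPS, capped at d30
step_up() {
  local die=$1 steps=$2 i j
  for i in "${!chain[@]}"; do
    [ "${chain[$i]}" = "$die" ] && break
  done
  j=$(( i + steps ))
  [ "$j" -ge "${#chain[@]}" ] && j=$(( ${#chain[@]} - 1 ))
  echo "${chain[$j]}"
}

step_up d6 4   # a 4-die increase on d6 hit dice -> d12
```

Stepping down the chain for Runts and Pygmies would be the same helper with a negative step and a floor at d3.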
41.715084
635
0.716486
eng_Latn
0.999245
e157c3478d77914e35e72f576a25b2a0811a8c2e
700
md
Markdown
document/display_comment.md
LenTakayama/Overlay-Live-Comment-Viewer
4453cba78f11271ef92ae4e61806b3a447f28cc4
[ "MIT" ]
2
2021-05-30T09:43:42.000Z
2021-09-03T07:32:05.000Z
document/display_comment.md
LenTakayama/Overlay-Live-Comment-Viewer
4453cba78f11271ef92ae4e61806b3a447f28cc4
[ "MIT" ]
17
2020-11-15T11:46:22.000Z
2022-03-26T10:06:58.000Z
document/display_comment.md
LenTakayama/Overlay-Live-Comment-Viewer
4453cba78f11271ef92ae4e61806b3a447f28cc4
[ "MIT" ]
null
null
null
# How to display comments for each streaming site

Basically, you can do the same things as with the OBS browser source.

## YouTube

In the URL field, enter a URL of the form "www.youtube.com/live_chat?is_popout=1&v=(broadcast ID)". Include "https://" at the very beginning.

From the viewing page, click the three-dot button at the top right of the chat panel, press "Pop-out chat", and paste the URL of the window that appears.

From YouTube Studio the flow is the same as above, but the URL starts with "studio.youtube.com", so remove the "studio." part.

## TwitCasting

In the URL field, enter a URL of the form "twitcasting.tv/(account name)/windowcomment?embedded=1". Include "https://" at the very beginning.

A width of about 250 works well. Because the text is somewhat small, adjusting it with CSS may improve the result. (A suitable CSS for this is currently in preparation.)

## OPENREC

Refer to the instructions on the [official OPENREC page](https://openrec.zendesk.com/hc/ja/articles/360013072432).

The officially distributed CSS is not transparent, so you need to adjust it yourself. (We also plan to provide a version of this.)

## Streamlabs

Operation unconfirmed.
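The YouTube chat URL described above is plain string concatenation; a minimal shell sketch with a placeholder broadcast ID:

```shell
# video_id is a placeholder broadcast ID
video_id="abc123XYZ"
chat_url="https://www.youtube.com/live_chat?is_popout=1&v=${video_id}"
echo "$chat_url"
```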
25
96
0.822857
jpn_Jpan
0.524254
e157e6073c29158d6762256ab0ae53b77fe5019c
147
md
Markdown
docs/jiaozi/README.md
angxuejian/exam-book
2988a4d13716e78c1af4dba237dea8647e86457c
[ "MIT" ]
null
null
null
docs/jiaozi/README.md
angxuejian/exam-book
2988a4d13716e78c1af4dba237dea8647e86457c
[ "MIT" ]
null
null
null
docs/jiaozi/README.md
angxuejian/exam-book
2988a4d13716e78c1af4dba237dea8647e86457c
[ "MIT" ]
null
null
null
# Teaching Credential - Primary School Teacher Certification

Notes for the teacher-certification exam: record study materials, and summarize, adjust, and review feedback in a timely manner. Aiming to pass on the first attempt.

Feel free to point out omissions or errors via [issues](https://github.com/angxuejian/exam-book/issues)

Exam [registration site](http://ntce.neea.edu.cn/)
14.7
67
0.721088
yue_Hant
0.31711
e158a1dd9c4464495d27af6b652178d226eab4ce
14,966
md
Markdown
CJBCheatsMenu/release-notes.md
fengfeng1992/SDV-Mods
6864f45ee1770f5ee21c75290d968572d2b226d0
[ "MIT" ]
null
null
null
CJBCheatsMenu/release-notes.md
fengfeng1992/SDV-Mods
6864f45ee1770f5ee21c75290d968572d2b226d0
[ "MIT" ]
null
null
null
CJBCheatsMenu/release-notes.md
fengfeng1992/SDV-Mods
6864f45ee1770f5ee21c75290d968572d2b226d0
[ "MIT" ]
null
null
null
[← back to readme](README.md) # Release notes ## 1.26 Released 21 December 2020 for SMAPI 3.8 or later. * Updated for Stardew Valley 1.5, including support for... * split-screen mode and UI scaling; * new fast machines (bone mill, coffee maker, deconstructor, geode crusher, heavy tapper, ostrich incubator, and solar panel); * key to the town wallet item; * island warps. ## 1.25.4 Released 19 November 2020 for SMAPI 3.7 or later. * Fixed 'time frozen' box still shown in screenshot mode. * Fixed errors when the current location isn't ready. * Improved translations. Thanks to Becks723 (updated Chinese) and wally232 (updated Korean)! ## 1.25.3 Released 30 October 2020 for SMAPI 3.7 or later. * 'Always auto-feed' now works without silos if 'infinite hay' is enabled. * Internal refactor to translation handling, and now uses game translations where possible. ## 1.25.2 Released 12 September 2020 for SMAPI 3.7 or later. * Fixed error using one-hit kill cheat in some mod locations with invalid data. * Improved translations. Thanks to Annosz (updated Hungarian), Rittsuka (updated Portuguese), and stefanhahn80 (updated German)! ## 1.25.1 Released 28 June 2020 for SMAPI 3.5 or later. * Improved translations. Thanks to AndyAllanPoe (added Italian) and kdau (updated Spanish)! ## 1.25 Released 18 May 2020 for SMAPI 3.4 or later. * Always Auto-Feed now waters slimes in Slime Hutches too. * Fixed freeze-time not applying while a menu is open in multiplayer in 1.24. * Fixed fast casks taking 10 in-game minutes in 1.24. * Improved translations. Thanks to D0n-A (updated Russian) and talha12 (added Turkish)! ## 1.24 Released 14 April 2020 for SMAPI 3.4 or later. * Added cheats: * instant weapon cooldowns; * unlock backpack upgrades; * auto-water crops (replaces the former 'water all fields' button). * Overhauled warps: * Warps are now grouped into sections for easier navigation. * Added JojaMart and Movie Theater warps. * Community Center warp is now hidden if it was demolished. 
* Desert warp no longer triggers bus scene.
* Desert and Sandy's Shop warps combined.
* New Beach warp renamed to Tide Pools.
* Improved fast machine options:
  * Added crab pots, soda machines, statues of endless fortune, and statues of perfection.
  * Mushroom boxes now refill immediately.
  * Slime incubators now hatch slimes immediately.
* Improved compatibility with Android (thanks to kdau!).
* Simplified move speed option.
* 'Grow crops' and 'grow trees' can now be bound to the same button to grow both at once.
* Fixed open cheats menu not updated correctly when game window is resized.
* Fixed invalid time when changing time from after midnight.
* Fixed instant-grown wild seeds being different from slow-grown ones (e.g. not applying botanist bonus, not randomized, etc).
* Fixed instant-grown crops not generating giant crops.
* Fixed relationship sliders being slightly off on the right side.
* Internal refactoring and optimizations.
* Improved translations. Thanks to Akir0 (updated French), Annosz (added Hungarian), and D0n-A (updated Russian)!

## 1.23.1
Released 18 February 2020 for SMAPI 3.2 or later.

* Fixed 'fast casks' not completing until in-game clock change after 1.23.

## 1.23
Released 17 February 2020 for SMAPI 3.2 or later.

* Added option to increase the grow crop/tree radius.
* Added option to reset controls.
* Added support for disabling a control by pressing escape when 'Press New Key' is shown (except for the open-menu key, so you can't get locked out of the menu.)
* Added Skull Cavern warp (thanks to Enaium!).
* The 'freeze time' control is now unbound by default. (Current players aren't affected unless they reset controls.)
* Fixed grow trees not working consistently after 1.22.
* Fixed fast egg incubator not working.
* Clarified how fast slime egg incubator works.
* Improved translations. Thanks to Enaium (updated Chinese)!

## 1.22
Released 09 February 2020 for SMAPI 3.2 or later.

* Water all fields now works for indoor pots in any location.
* Grow crops/trees now affects those around the player (instead of under the cursor), to avoid confusion and for Android compatibility.
* Grow crops/trees no longer skips some in some cases.
* Harvest with scythe now works on garden pots.
* The farm/casino warps can now be overridden via `data/warp.json`.
* Fixed health bonuses not applied when changing professions.
* Fixed 'always auto-feed' changing total hay incorrectly in some cases.
* Fixed 'always auto-feed' not counting animals who aren't in their home building if it's enabled after the day already started.
* Fixed menu not usable with a controller when the 'controller-style menus' option is enabled.
* Improved translations. Thanks to ba0109 (updated Korean), jahangmar (updated German), mael-belval (updated French), Redlnn (updated Chinese), shirutan (updated Japanese), VengelmBjorn (updated Russian), and victrosantos (updated Portuguese and Spanish)!

## 1.21
Released 26 November 2019 for SMAPI 3.0 or later.

* Updated for Stardew Valley 1.4, including...
  * added wood chipper support for fast machines;
  * added unlockable dyeing & tailoring;
  * added new Community Center bundle;
  * added support for instantly growing tea bushes.
* Added support for holding the grow crop/tree keys while moving the cursor, so it's easier to grow larger fields.
* Warps are now sorted alphabetically.
* Warps can now be customised by editing `data/warps.json`.
* Rewrote increased movement speed to fix a number of speed-related bugs.
* Fixed sunflowers not dropping seeds when harvested with a scythe (via SDV 1.4).
* Improved translations. Thanks to jahangmar (updated German), overwritten (updated Korean), qqkookie (updated Korean), Redlnn (updated Chinese), Riazaia (updated Spanish), shiro2579 (updated Portuguese), and shirutan (updated Japanese)!

## 1.20.1
Released 12 June 2019 for SMAPI 2.11.1 or later.

* Reworked community center flag options.
* Fixed setting a community center flag not completing the in-game area.
* Fixed 'unlock community center' option not working correctly.
* Fixed instant-grow-crop not working with garden pots placed on tilled dirt or flooring.
* Improved translations. Thanks to shirutan (updated Japanese)!

## 1.20
Released 10 June 2019 for SMAPI 2.11.1 or later.

* Added 'advanced' tab to set flags and wallet items; merged 'quests' tab into 'advanced'.
* Fixed max relationship meter not extended for spouse.
* Fixed being able to open the menu when a minigame is active.
* Improved translations. Thanks to S2SKY (added Korean) and TheOzonO3 (updated Russian)!

## 1.19
Released 27 March 2019 for SMAPI 2.11 or later.

* Added support for setting relationships for unmet villagers.
* Fast machine processing is now much faster.
* Fast machine processing now continues working when time is paused.
* Fast machine list is now sorted by name.
* Fixed land swimming bug when warping out of the spa.
* Improved translations. Thanks to kelvindules (updated Portuguese) and VincentRoth (added French)!

## 1.18.3
Released 09 December 2018 for SMAPI 2.9 or later.

* Updated for the upcoming SMAPI 3.0.
* Fixed harvest with scythe option saying 'no XP gain', which was fixed in Stardew Valley 1.3. (Thanks to SkpFX!)
* Improved translations. Thanks to Nanogamer7 (added German) and Redlnn (improved Chinese)!

## 1.18.2
Released 03 November 2018 for SMAPI 2.8 or later.

* Improved translations. Thanks to Spa51 (added Spanish)!

## 1.18.1
Released 28 August 2018 for SMAPI 2.8 or later.

* Updated for Stardew Valley 1.3.29.
* Farmhands in multiplayer warping to the farm now land in front of their cabin, instead of the farmhouse.
* 'Grow crop' now affects crops under the cursor (instead of around the player).
* 'Grow tree' now affects the one under the cursor (instead of under the tool square).
* Fixed casino warp shown when player can't access casino yet.
* Fixed museum warp landing one tile to the right of the door.
* Fixed relationship slider not disabled if you haven't met the NPC yet.

## 1.18
Released 04 August 2018 for SMAPI 2.7 or later. Updated by CJBok (quests feature) and Pathoschild.

* Updated for Stardew Valley 1.3 (including multiplayer support).
* Added Quests tab to complete active quests instantly.
* Added option for fast fruit trees.
* Added support for instantly watering or growing crops in garden pots.
* Added support for custom greenhouse locations.
* Improved controller support.
* Fixed issues with fishing cheats.
* Fixed 'increase movement speed' checkbox disabling the speed slider.
* Fixed 'no friendship decay' preventing you from decreasing friendships through the relationships tab.
* Fixed 'no friendship decay' not resetting when you switch save.
* Fixed 'instant grow tree' making fruit trees not produce fruit.
* Fixed 'one-hit kill' cheat making monsters invincible in rare cases (thanks to Issacy!).
* Fixed relationship list not using translated name when sorting.
* Fixed relationship list not showing dwarf.
* Fixed Luremaster and Mariner professions being swapped.
* Fixed fast machine processing not working in constructed buildings.
* Fixed some artisanal items not spawning with selected quality.
* Fixed searchbox getting cleared when you change other options like quality.
* Fixed setting time manually not working if time is frozen (thanks to Issacy!).
* Fixed things happening repeatedly when time is frozen in some cases (thanks to Issacy!).
* Improved translations. Thanks to Issacy (added Chinese), Marity (added Japanese), and Ryofuko (added Russian)!

## 1.17
Released 11 February 2018 for SMAPI 2.4 or later.

* Updated to SMAPI 2.4.
* Added translation support.
* Added update checks via SMAPI.
* Added options to change player's professions.
* Fixed issue where setting the time could leave NPCs confused (e.g. stuck in bed).
* Improved translations. Thanks to XxIceGladiadorxX (added Portuguese)!

## 1.16
Released 14 July 2017 for SMAPI 1.15 or later.

* Fixed open-menu key working even when another menu is already open.
* Fixed freeze-time key working during cutscenes.

## 1.15
Released 27 May 2017 for SMAPI 1.13 or later.

* Updated for Stardew Valley 1.2 and SMAPI 1.13.
* Added option to set the default tab when opening the menu.
* Fixed relationship slider always set to 10 hearts.
* Fixed 'one hit break' no longer working on stones, trees, logs, and stumps.
* Fixed 'one hit break' not working on fruit trees.

## 1.14.1
Released 12 April 2017 for SMAPI 1.9 or later.

* Fixed error when used with any mod that adds multiple seeds producing the same crop.

## 1.14
Released 05 April 2017 for SMAPI 1.9 or later.

* Updated to SMAPI 1.9.
* Fast machines now work anywhere, not only on the farm.
* Fixed fast cask cheat not working.
* Fixed disabling the 'harvest with scythe' option not restoring existing crops to normal.
* Internal refactoring.

## 1.13
Released 04 January 2017 for SMAPI 1.5 or later. This and subsequent releases updated by Pathoschild.

* Updated to Stardew Valley 1.1+ and SMAPI 1.5.
* Added compatibility with Linux and Mac.
* Added support for casks and worm bins.
* Fixed instantly-grown fruit trees not producing fruit until their normal growth date.
* Fixed 'grow crops' key in settings menu not being saved.

## 1.12
Released 09 April 2016 for SMAPI 0.40 or later. Updated by CJBok.

* Updated to Stardew Valley 1.07+ and SMAPI 0.40.0+.
* Added cheats: durable tackles, harvest with scythe, grow tree, and grow crops.
* Added changeable relationships.
* Fixed mouse cursor showing when disabled.

## 1.11
Released 02 April 2016 for SMAPI 0.39.6 or later. Updated by CJBok.

* Updated for SMAPI 0.39.6.
* Added cheats: fast tapper, fast lightning rod, always auto-feed, and infinite hay.
* Added casino coins cheats.
* Added warps to bathhouse, Sandy, and casino.
* Added time slider.
* Fixed time frozen label.

## 1.10
Released 30 March 2016 for SMAPI 0.39.4 or later. Updated by CJBok.

* Updated for SMAPI 0.39.4.
* Added cheats: no friendship decay, instant build.
* Fixed fast machine processing in barns and coops.

## 1.9
Released 23 March 2016 for SMAPI 0.39.1 or later. Updated by CJBok.

* Updated for SMAPI 0.39.1.

## 1.8.2
Released 23 March 2016 for SMAPI 0.38.3 or later. Updated by CJBok.

* Fixed cheats not working in the greenhouse.

## 1.8.1
Released 22 March 2016 for SMAPI 0.38.3 or later. Updated by CJBok.

* Fixed cheats not working in the greenhouse.

## 1.8
Released 21 March 2016 for SMAPI 0.38.3 or later. Updated by CJBok.

* Updated to Stardew Valley 1.0.6 and SMAPI 0.38.3.
* Fixed watering fields in the greenhouse.
* Fixed fast machine processing.
* Removed the `nini.dll` file (no longer needed, now uses save method in latest SMAPI update).

## 1.7.1
Released 18 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Fixed fast machine processing.
* Fixed watering fields in the greenhouse.

## 1.7
Released 17 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Added fast processing for all machines (each toggleable).
* Added skill reset.
* You can now switch between categories with controller (shoulder triggers).
* Fixed leveling up skills.
* Fixed watering fields in some cases.

## 1.6
Released 11 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Menu now contains categories (last scroll position and category will be remembered).
* Fixed 'always treasure' cheat.

## 1.5.3
Released 10 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Fixed sound bug during fishing.
* The cheats menu now remembers scroll position.

## 1.5.2
Released 10 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Fixed player getting stuck during events.
* Fixed sounds stuck during fishing.

## 1.5.1
Released 10 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Restored `Nini.dll` in download since some users still need it.
* Added 'instant bite' option, separate from 'instant catch' option.
* Fixed wrong mod version shown on load.
* Changed 'debris' weather name to 'windy'.

## 1.5
Released 10 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Added 'one hit break' option.
* Fixed diagonal movement.
* Fixed player speed affected while walking.
* Removed `Nini.dll` (integrated into main DLL).

## 1.4.1
Released 08 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Fixed fish biting instantly when cheat isn't active.

## 1.4
Released 08 March 2016 for SMAPI 0.37.3 or later. Updated by CJBok.

* Added fish 'always treasure' option.
* Added instant catch also making fish bite instantly.
* Fixed time frozen dialog overlapping mine level dialog.
* Fixed durable fences cheat not working when not on farm.

## 1.3.2
Released 07 March 2016 for SMAPI 0.37.2 or later. Updated by CJBok.

* Added many warp locations.
* Added 'always give gift' option.
* Added time freeze options.
* Added shortcut key to freeze time.
* Changed weather options to next day settings.
* Fixed movement speed still active during events.
41.22865
255
0.759856
eng_Latn
0.996543
e158de34eb3c46176f9412f385e61a162681e5df
3,606
md
Markdown
docs/connect/jdbc/using-a-stored-procedure-with-an-update-count.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/connect/jdbc/using-a-stored-procedure-with-an-update-count.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/connect/jdbc/using-a-stored-procedure-with-an-update-count.md
gmilani/sql-docs.pt-br
02f07ca69eae8435cefd74616a8b00f09c4d4f99
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Using a stored procedure with an update count | Microsoft Docs
ms.custom: ''
ms.date: 08/12/2019
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ''
ms.technology: connectivity
ms.topic: conceptual
ms.assetid: 64cf4877-5995-4bfc-8865-b7618a5c8d01
author: MightyPen
ms.author: genemi
ms.openlocfilehash: 851974955b9311efc149ecdff310bfbb1d8869fc
ms.sourcegitcommit: 9348f79efbff8a6e88209bb5720bd016b2806346
ms.translationtype: MTE75
ms.contentlocale: pt-BR
ms.lasthandoff: 08/14/2019
ms.locfileid: "69026934"
---
# <a name="using-a-stored-procedure-with-an-update-count"></a>Using a stored procedure with an update count

[!INCLUDE[Driver_JDBC_Download](../../includes/driver_jdbc_download.md)]

To modify data in a [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] database by using a stored procedure, the [!INCLUDE[jdbcNoVersion](../../includes/jdbcnoversion_md.md)] provides the [SQLServerCallableStatement](../../connect/jdbc/reference/sqlservercallablestatement-class.md) class. With the SQLServerCallableStatement class, you can call stored procedures that modify data in the database and return a count of the number of rows affected, also called the update count.

After you define the call to the stored procedure by using the SQLServerCallableStatement class, call the stored procedure with either the [execute](../../connect/jdbc/reference/execute-method-sqlserverstatement.md) method or the [executeUpdate](../../connect/jdbc/reference/executeupdate-method-sqlserverstatement.md) method. The executeUpdate method returns an **int** value that contains the number of rows affected by the stored procedure, but the execute method doesn't. If you use the execute method and want the count of the number of rows affected, call the [getUpdateCount](../../connect/jdbc/reference/getupdatecount-method-sqlserverstatement.md) method after you run the stored procedure.

> [!NOTE]
> If you want the JDBC driver to return all update counts, including update counts returned by any triggers that may have fired, set the lastUpdateCount connection string property to "false". For more information about the lastUpdateCount property, see [Setting the connection properties](../../connect/jdbc/setting-the-connection-properties.md).

As an example, create the following table and stored procedure, and insert sample data, in the [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal_md.md)] sample database:

```sql
CREATE TABLE TestTable
  (Col1 int IDENTITY,
   Col2 varchar(50),
   Col3 int);

CREATE PROCEDURE UpdateTestTable
   @Col2 varchar(50),
   @Col3 int
AS
BEGIN
   UPDATE TestTable
   SET Col2 = @Col2, Col3 = @Col3
END;

INSERT INTO dbo.TestTable (Col2, Col3) VALUES ('b', 10);
```

In the following example, an open connection to the [!INCLUDE[ssSampleDBnormal](../../includes/sssampledbnormal_md.md)] sample database is passed in to the function, the execute method is used to call the UpdateTestTable stored procedure, and then the getUpdateCount method is used to return a count of the rows affected by the stored procedure.

[!code[JDBC#UsingSprocWithUpdateCount1](../../connect/jdbc/codesnippet/Java/using-a-stored-procedure_0_1.java)]

## <a name="see-also"></a>See also

[Using statements with stored procedures](../../connect/jdbc/using-statements-with-stored-procedures.md)
63.263158
721
0.788131
por_Latn
0.984534
e159f12655c1db672629b95218fb104c7244ae78
6,537
md
Markdown
_posts/2019-05-30-Download-solution-manual-financial-accounting-2-valix-2013.md
Camille-Conlin/26
00f0ca24639a34f881d6df937277b5431ae2dd5d
[ "MIT" ]
null
null
null
_posts/2019-05-30-Download-solution-manual-financial-accounting-2-valix-2013.md
Camille-Conlin/26
00f0ca24639a34f881d6df937277b5431ae2dd5d
[ "MIT" ]
null
null
null
_posts/2019-05-30-Download-solution-manual-financial-accounting-2-valix-2013.md
Camille-Conlin/26
00f0ca24639a34f881d6df937277b5431ae2dd5d
[ "MIT" ]
null
null
null
---
layout: post
comments: true
categories: Other
---

## Download Solution manual financial accounting 2 valix 2013 book

Jouder and his Brothers dcvi by which are probably meant the tusks of the narwhal. You'd better take over for now? "Oh, as though we weren't even employee! the Chukches, realizing he must have slept for hours. San told how Otak had put a curse on Sunbright and said some awful words that made him get smaller and smaller and wail like a stick in the fire, ungrateful, not like Earth the last time I was there, whose inspiring widespread suspicion of conspiracy, after all, as he sat in an assembly, 56 before, the Third Platoon of D Company had set up its Tactical Battle Station in a depression surrounded by interconnecting patches solution manual financial accounting 2 valix 2013 sagebrush and scrub, whose interests they did not share. his bare and narrow little room after a scanty supper of cold pea-porridge -- for this wizard, panicked into flight? ends of the console. solution manual financial accounting 2 valix 2013, i, please, feign thyself drunken again this night and lie down. The line stays right there. In syrup form. You will not know another such. may get a portion of the spoil. Lawrence Bay there lay heaps of leaf-clad willow-twigs and sacks Then said Selim to his sister, thanks, according to possibility and convenience, prickly blades of dead grass that had stuck to her skin, in the beginning of March sort of seashell smell. "I was attractive in my day, Laptev himself and his second in command, AT 3 A, and everyone lived his life in the shadow of one solemn obligation or another.
Come in, she might be mistaken for an innocent and kindly woman- "Sure they can, Wendy Quail failed to arouse his anger, the pedestrian precinct beneath the shopping complex and business offices of the Manhattan module was lively and crowded with people, and may God the Most High prolong thy days and appoint thy times [to be] in delight and contentment. What leach such madness can assain or what medicament. She had got her hands clean, studying his fingers, including that Preston Maddoc could get romantically inspired only well, quite similar to the Reaching across the table, scampi for Kathleen. motionless as the snake. Maybe they die, Dr. Killing mercifullyв quickly and in a manner that caused little painвhad at first been immensely And from half a dozen directions they beard: Come on, Hinda could not bear the twin wounds of his eyes. When they had made an end of pious wishes and congratulations, so he phoned Simon Magusson. One trunk to start with, I never wear neckties, however. " Her statement both reassures and strangely disconcerts the boy, he solution manual financial accounting 2 valix 2013 his door shut with both hands as she jammed the key in the ignition and started the engine, the projected journey! " "So kiss me, her need to cut had passed? Chapter 55 "Everybody in your home must have the trots? After climbing out of his palanquin, the flow of time helplessly, "is that an infinite number of realities exist. People are different, Farrel had The old Namer came forward and said to the woman on the hill, these two years, but he did not mind a bit of danger, and above all with the help of steam. Sun glare veiled the kid's features. Five years ago, chief," Driscoll announced, either. head. difficult. 
" solution-unless he wants to call attention to himself and thereby commit which now and then considerable ice-blocks, when solution manual financial accounting 2 valix 2013 year-old Obadiah dreamed of being the next Houdini, he and Tenar brought the Ring home to Havnor, and Celestina hardly knew Solution manual financial accounting 2 valix 2013. The likelihood of his being So they all arose and repaired to El Anca, and after Cass has determined that the "You're solution manual financial accounting 2 valix 2013 me. Then she drank three cups and filling the old man other three, "is that an infinite number of realities exist, amazement and awe that they, roguish-looking boys of about twelve, three elderly men, was not bringing forth a baby in a Having slept with her head against the bolted door, the president of the Alaska Commercial Company. _ Binnacle with compass. Why are we talking like this at all. furnace beyond the closed windows and doors, incomparably beautiful volcanic cone raise place, and chickens had tried him sorely. By means of For the past two days, he suddenly realized this was no stranger, that's what you're to nod for, he remembered it now-his brilliant theory was that they built solution manual financial accounting 2 valix 2013 the passage, but he also viewed them as affronts to his own dignity and reputation. I was badly frightened! Her thin cold plaints melted into a moan of abject misery, be follows it eastward through a nickering of storm and sun-loses it, and she always knew she to say to those who come. wizard. Til be goddamned. " TALES FROM into the Reaches. Third World inconvenience with the warm regards of the governor. instruments of one another's salvation, or don't disagree but are just feeling mulish, family dutyвand in Noah's case, a wound beyond all hope twined with his. _Nrok_, to make her appearance; but saw her not. The holy? "The young men talk of "the true crown". " floor. 
He would Geneva's smile first froze and then melted away? "I'm not sure. be seen on a wall portraits of Berzelius and Thunberg, the Mayflower 11 entered the planetary system of Alpha Centauri at a speed of 2837 miles per second, the harvesting basket waiting for as in the singular. shells, just sat staring at her hands clenched in her lap. " an illusion fostered by shock and loss of blood. sailing through the Straits of Malacca strong ball-lightning was Paul recalled the letter he had written to Reverend Harrison White a couple after his landing on Behring Island for the first time saw some Cupboard to cupboard, and she believed his threat was sincere, hour choreography that might please Busby Berkeley as they whip up solution manual financial accounting 2 valix 2013 feast of "It is," Adam agreed readily, iii. What didn't come as a surprise to Paul was Agnes's determination that the "Jesus, that the earth quaked and Baghdad also trembled, among S-shaped tables, but if its large beautiful tail be struck once arctic, when openings. " A faint click. Since discovering the quarter in his cheeseburger, hard solution manual financial accounting 2 valix 2013 nail heads.
726.333333
6,414
0.79792
eng_Latn
0.999893
e15a050bff4664c3cc40f0b9d9b8bad905dd44c4
5,665
md
Markdown
2018-08/2018-08-17_short.md
admariner/trending_archive
ffca6522c4fdaa5203c3f18d9a5a7d7da098169c
[ "MIT" ]
80
2015-02-13T16:52:22.000Z
2022-03-10T20:13:08.000Z
2018-08/2018-08-17_short.md
admariner/trending_archive
ffca6522c4fdaa5203c3f18d9a5a7d7da098169c
[ "MIT" ]
65
2021-10-02T05:54:01.000Z
2021-12-28T22:50:23.000Z
2018-08/2018-08-17_short.md
admariner/trending_archive
ffca6522c4fdaa5203c3f18d9a5a7d7da098169c
[ "MIT" ]
16
2015-10-08T11:06:28.000Z
2021-06-30T07:26:49.000Z
### 2018-08-17 diff between today and yesterday

#### python
* [Pext/Pext](https://github.com/Pext/Pext): Python-based extendable tool
* [chubin/cheat.sh](https://github.com/chubin/cheat.sh): the only cheat sheet you need
* [ageitgey/face_recognition](https://github.com/ageitgey/face_recognition): The world's simplest facial recognition api for Python and the command line
* [django/django](https://github.com/django/django): The Web framework for perfectionists with deadlines.
* [NVIDIA/vid2vid](https://github.com/NVIDIA/vid2vid): Pytorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.
* [josephmisiti/awesome-machine-learning](https://github.com/josephmisiti/awesome-machine-learning): A curated list of awesome Machine Learning frameworks, libraries and software.
* [rg3/youtube-dl](https://github.com/rg3/youtube-dl): Command-line program to download videos from YouTube.com and other video sites

#### go
* [MichaelMure/git-bug](https://github.com/MichaelMure/git-bug): Distributed bug tracker embedded in Git
* [go-ego/riot](https://github.com/go-ego/riot): Go Open Source, Distributed, Simple and efficient Search Engine
* [getlantern/lantern](https://github.com/getlantern/lantern): https://github.com/getlantern/download Lantern Latest Download https://github.com/getlantern/download
* [satran/dohproxy](https://github.com/satran/dohproxy): DNS over HTTPS proxy written in golang
* [pingcap/tidb](https://github.com/pingcap/tidb): TiDB is a distributed HTAP database compatible with the MySQL protocol
* [ghodss/yaml](https://github.com/ghodss/yaml): A better way to marshal and unmarshal YAML in Golang
* [prometheus/prometheus](https://github.com/prometheus/prometheus): The Prometheus monitoring system and time series database.
* [petermattis/pebble](https://github.com/petermattis/pebble): RocksDB/LevelDB inspired key-value database in Go
* [edwardwohaijun/file-transfer](https://github.com/edwardwohaijun/file-transfer): A simple file transfer app
* [jinzhu/gorm](https://github.com/jinzhu/gorm): The fantastic ORM library for Golang, aims to be developer friendly
* [astaxie/beego](https://github.com/astaxie/beego): beego is an open-source, high-performance web framework for the Go programming language.

#### cpp
* [AudioKit/AudioKit](https://github.com/AudioKit/AudioKit): Swift audio synthesis, processing, & analysis platform for iOS, macOS and tvOS
* [favreau/Sol-R](https://github.com/favreau/Sol-R): Open-Source CUDA/OpenCL Speed Of Light Ray-tracer
* [robinwassen/forcefocus](https://github.com/robinwassen/forcefocus): Node module that allows you to steal focus from other windows in Windows.
* [nlohmann/json](https://github.com/nlohmann/json): JSON for Modern C++
* [facebook/rocksdb](https://github.com/facebook/rocksdb): A library that provides an embeddable, persistent key-value store for fast storage.
* [EOSIO/eos](https://github.com/EOSIO/eos): An open source smart contract platform
* [arangodb/arangodb](https://github.com/arangodb/arangodb): ArangoDB is a native multi-model database with flexible data models for documents, graphs, and key-values. Build high performance applications using a convenient SQL-like query language or JavaScript extensions.
* [dmlc/xgboost](https://github.com/dmlc/xgboost): Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Flink and DataFlow

#### javascript
* [palmerhq/react-async-elements](https://github.com/palmerhq/react-async-elements): Suspense-friendly async React elements for common situations
* [lipp/login-with](https://github.com/lipp/login-with): Stateless login-with microservice for OAuth
* [ai/nanoid](https://github.com/ai/nanoid): A tiny (145 bytes), secure, URL-friendly, unique string ID generator for JavaScript.
* [jxnblk/mdx-deck](https://github.com/jxnblk/mdx-deck): MDX-based presentation decks
* [klauscfhq/taskbook](https://github.com/klauscfhq/taskbook): Tasks, boards & notes for the command-line habitat
* [mishoo/UglifyJS2](https://github.com/mishoo/UglifyJS2): JavaScript parser / mangler / compressor / beautifier toolkit
* [americanexpress/iguazu](https://github.com/americanexpress/iguazu): An asynchronous data flow solution for React/Redux applications
* [lingui/js-lingui](https://github.com/lingui/js-lingui): Readable, automated and lightweight internationalization for JavaScript and React
* [openpgpjs/openpgpjs](https://github.com/openpgpjs/openpgpjs): OpenPGP implementation for JavaScript
* [nodejs/node](https://github.com/nodejs/node): Node.js JavaScript runtime
* [22bulbs/brom](https://github.com/22bulbs/brom): Highly configurable, local auditing of HTTP transactions

#### coffeescript
* [wende/autocomplete-elixir](https://github.com/wende/autocomplete-elixir): Intelligent Elixir autocompletion provider for Atom autocomplete-plus
* [sorich87/bootstrap-tour](https://github.com/sorich87/bootstrap-tour): Quick and easy product tours with Twitter Bootstrap Popovers
* [danielgtaylor/aglio](https://github.com/danielgtaylor/aglio): An API Blueprint renderer with theme support that outputs static HTML
* [NarrativeScience/Log.io](https://github.com/NarrativeScience/Log.io): Real-time log monitoring in your browser
* [stripe/jquery.payment](https://github.com/stripe/jquery.payment): [DEPRECATED] A general purpose library for building credit card forms, validating inputs and formatting numbers.
* [turbolinks/turbolinks-classic](https://github.com/turbolinks/turbolinks-classic): Classic version of Turbolinks. Now deprecated in favor of Turbolinks 5.
101.160714
273
0.783054
eng_Latn
0.392175
e15a6cc58358b9d06bf6459f7c06c757c63660e0
24
md
Markdown
README.md
cariba/jenkins-php-toolchain
767920b0b74750d8e4cf8fb01473343e46cf4c61
[ "MIT" ]
null
null
null
README.md
cariba/jenkins-php-toolchain
767920b0b74750d8e4cf8fb01473343e46cf4c61
[ "MIT" ]
null
null
null
README.md
cariba/jenkins-php-toolchain
767920b0b74750d8e4cf8fb01473343e46cf4c61
[ "MIT" ]
null
null
null
# jenkins-php-toolchain
12
23
0.791667
deu_Latn
0.190954
e15bb80a7d238f1dcce9d32a5e8538e56c3a7137
3,643
md
Markdown
src/somef/test/repostatus-README.md
ma-garcia/somef
4ec0bfeb66ddd54f4567bf09a620ce80af4cc7ad
[ "MIT" ]
12
2020-07-23T21:05:53.000Z
2022-02-04T15:43:04.000Z
src/somef/test/repostatus-README.md
ma-garcia/somef
4ec0bfeb66ddd54f4567bf09a620ce80af4cc7ad
[ "MIT" ]
249
2020-04-12T05:06:48.000Z
2022-03-31T15:27:11.000Z
src/somef/test/repostatus-README.md
ma-garcia/somef
4ec0bfeb66ddd54f4567bf09a620ce80af4cc7ad
[ "MIT" ]
11
2020-06-02T16:11:48.000Z
2022-02-22T12:25:48.000Z
repostatus.org
==============

[![Project Status: Active - The project has reached a stable, usable state and is being actively developed.](https://www.repostatus.org/badges/latest/active.svg)](https://www.repostatus.org/#active)

A standard to easily communicate to humans and machines the development/support and usability status of software repositories/projects.

For the majority of documentation and human-readable text, see https://www.repostatus.org/ or the [gh-pages branch](https://github.com/jantman/repostatus.org/tree/gh-pages) from which it is built.

Please feel free to leave comments as Issues, or open pull requests.

Community Involvement
---------------------

This project seems to have gained a lot more interest than I thought it would. As of April, 2017 there are [over 1,200 references on GitHub](https://github.com/search?l=&q=http%3A%2F%2Fwww.repostatus.org%2Fbadges%2F+-user%3A%22jantman%22&ref=advsearch&type=Code&utf8=%E2%9C%93) to repostatus.org badge URLs. I do *not* want to be the sole person making decisions for this project. I encourage everyone who finds it useful to watch [the repo on GitHub](https://github.com/jantman/repostatus.org) and provide their feedback in discussions, especially the issues with the [discussion](https://github.com/jantman/repostatus.org/issues?q=is%3Aopen+is%3Aissue+label%3Adiscussion) or ["needs decision"](https://github.com/jantman/repostatus.org/issues?q=is%3Aopen+is%3Aissue+label%3Adiscussion+label%3A%22needs+decision%22) labels. I'm handling the code updates, but I very much want this project to be driven based on consensus of those who use it.

Contributing
------------

For changes to the site, text, or anything other than the badges themselves (and their descriptions and sample markup), simply cut a pull request against the master branch. The content that appears on the website (in the gh-pages branch) comes from ``gh_pages/`` in master. Note that some of it (described below) is generated programmatically.
The badges (SVG), their descriptions and their sample markup are generated by a [Fabfile](http://www.fabfile.org/). If you're looking to add a new badge or make changes to an existing one, update the ``badge_info`` dictionary at the top of ``fabfile.py`` and then run ``fab make-badges`` (requires Python and some packages; see the comment at the top of the file for requirements). This will regenerate all badges, metadata and samples into ``badges/latest``. You can then cut a pull request for this; a version number will be assigned at merge time. Please remember to also update ``gh_pages/index.md`` for any badge changes.

Release Process
---------------

1. Get everything included in the release merged into master.
2. Assign a version number. In general, patch versions should only be assigned for releases that fix trivial (i.e. spelling) issues or touch things other than the badges and JSON (i.e. the markup samples). Minor versions should be assigned to changes that correct grammatical or spelling errors, or graphical elements. Major versions must be assigned to any changes that add or remove badges, or alter the meaning of existing badges.
3. Re-run ``fab make-badges`` and ensure there are no new changes.
4. Run ``fab version-badges x.y.z`` (where ``x.y.z`` is the version number).
5. Add a ``CHANGELOG.md`` entry.
6. Run ``fab badges2pages`` to copy the badges under ``gh-pages/``.
7. Run ``fab publish`` to push changes to the gh-pages branch.
8. Review the diff of gh-pages against origin.
9. Assuming all is well, push gh-pages to origin. The changes are now live.
10. Tag master with the version number (use GitHub Releases).
71.431373
277
0.762833
eng_Latn
0.994725
e15c4fdff3ef48c3d1e7e75de2b08858b90815a8
16,193
md
Markdown
tutorial/part4/chapter13.md
open-resources/dash_curriculum
cd9b7db3f357edeed2d027a23fceb85eabbffb7e
[ "MIT" ]
3
2022-02-23T22:16:25.000Z
2022-02-25T01:23:40.000Z
tutorial/part4/chapter13.md
open-resources/dash_curriculum
cd9b7db3f357edeed2d027a23fceb85eabbffb7e
[ "MIT" ]
15
2022-02-23T22:12:51.000Z
2022-03-23T21:51:49.000Z
tutorial/part4/chapter13.md
open-resources/dash_curriculum
cd9b7db3f357edeed2d027a23fceb85eabbffb7e
[ "MIT" ]
2
2022-02-25T20:57:59.000Z
2022-03-26T15:29:01.000Z
# Chapter 13: Improving app performance

## What you will learn

By now, you have everything you need to get your first app up and running, even with advanced components, layouts and callbacks. Because dashboards are designed for data analysis and visualisation, at some point you might run into efficiency constraints as the amount of data you are working with grows. To help you avoid such performance bottlenecks, this chapter gives you some insights into improving your app's performance.

```{admonition} Learning Intentions
- Dash Developer Tools
- (Pre)Processing data
- Higher Performing Plotly graphs
- Caching
```

## 13.1 Dash Developer Tools

Dash Dev Tools is a set of tools that makes debugging and developing Dash apps more productive and pleasant. These tools are enabled while developing your Dash app and are not intended for use when deploying your application to production. In this tutorial we focus on the Callback Graph: Dash can display a visual representation of your callbacks, showing the order in which they are fired, how long they take, and what data is passed back and forth between the Dash app in the web browser and your Python code. For an overview of the other tools, see the [official documentation](https://dash.plotly.com/devtools).

The Dash Dev Tools Callback Graph provides live introspection, profiling, and live debugging of your callback graph.

#### [ADD SCREENSHOT, THAT SHOWS THE DASH DEV TOOLS]

This includes:

- The rounded green boxes represent your callback functions.
  - The top number represents the number of times the function has been called.
  - The bottom number represents how long the request took. This includes the network time (sending the data from the browser client to the backend and back) and the compute time (the total time minus the network time, or how long the function spent in Python).
- Click on a green box to see the detailed view of the callback. This includes:
  - `type`: whether the callback was a clientside callback or a serverside callback.
  - `call count`: The number of times the callback was called during your session.
  - `status`: Whether the callback was successful or not.
  - `time (avg milliseconds)`: How long the request took. This is the same as the summary on the green box and is split up into the components `total`, `compute` and `network`.
  - `data transfer (avg bytes)`
  - `outputs`: A JSON representation of the data that was returned from the callback.
  - `inputs`: A JSON representation of the data that was passed to your callback function as Input.
  - `state`: A JSON representation of the data that was passed to your callback function as State.
- The blue boxes represent the input and output properties. Click on a box to see a JSON representation of its current value.
- The dashed arrows (not visible in the screenshot) represent State.
- The dropdown in the top-right corner enables you to switch layouts.

## 13.2 (Pre)Processing data

Work in Progress:

- Transfer the example of the section ''Let go of dataframes in request/response'' from the article [https://strange-quark.medium.com/improving-performance-of-python-dash-dashboards-54547d68f86b](https://strange-quark.medium.com/improving-performance-of-python-dash-dashboards-54547d68f86b) to the gapminder data set? Does this enhance performance? Do we need to restructure the gapminder data set?
- Using numpy for (numerical) calculations, examples on performance?

## 13.3 Higher Performing Plotly graphs

So far, we have used the `plotly.express` library to implement our graphs. This is a very easy and convenient way to do so. However, most plotly charts are rendered with SVG (short for Scalable Vector Graphics). This provides crisp rendering, publication-quality image export (SVG images can be scaled in size without loss of quality), and wide browser support. Unfortunately, rendering graphics in SVG can be slow for large datasets (like those with more than 15k points).
To overcome this limitation, `plotly.js` has WebGL (short for Web Graphics Library) alternatives to some chart types. WebGL uses the GPU to render graphics, which makes them higher performing. Two WebGL alternatives are the following:

- [ScatterGL](https://plotly.com/python/line-and-scatter/#large-data-sets): A WebGL implementation of the scatter chart type.
- [Pointcloud](https://plotly.com/python/reference/#pointcloud): A lightweight version of `scattergl` with limited customizability but even faster rendering.

Another high-performing way of exploring correlations in large data sets is to use [datashader](https://plotly.com/python/datashader/) in combination with plotly. Datashader creates rasterized representations of large datasets for easier visualization, with a pipeline approach consisting of several steps: projecting the data onto a regular grid, aggregating it by count, and creating a color representation of the grid. Usually, the minimum count will be plotted in black, the maximum in white, with brighter colors ranging logarithmically in between.

### 13.3.1 ScatterGL

Let us have a closer look at the ScatterGL plot. Unlike the scatter plots we have seen so far, which were implemented with plotly express, the ScatterGL plot is a plotly `graph object`. The following app lets you compare the loading durations of the two approaches.
```
# Import packages
from dash import Dash, dcc, Input, Output
import dash_bootstrap_components as dbc
from datetime import datetime
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go

# Setup data
df = px.data.gapminder()[['country', 'year', 'lifeExp']]
dropdown_list = df['country'].unique()

# Initialise the App
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])

# Create app components
markdown = dcc.Markdown(id='our-markdown')
dropdown = dcc.Dropdown(id='our-dropdown', options=dropdown_list, value=dropdown_list[0])
markdown_scatter = dcc.Markdown(id='markdown-scatter')
markdown_gl = dcc.Markdown(id='markdown-gl')
slider = dcc.Slider(id='our-slider', min=0, max=50000, marks=None, value=0)

# App Layout
app.layout = dbc.Container(
    [
        dbc.Row([dbc.Col(dropdown, width=3), dbc.Col(markdown, width=9)]),
        dbc.Row([dbc.Col(dcc.Graph(id='our-figure')), dbc.Col(dcc.Graph(id='our-gl-figure'))]),
        dbc.Row([dbc.Col(markdown_scatter), dbc.Col(markdown_gl)]),
        dbc.Row(dbc.Col(slider)),
    ]
)

# Configure callbacks
@app.callback(
    Output(component_id='our-markdown', component_property='children'),
    Input(component_id='our-dropdown', component_property='value'),
    Input(component_id='our-slider', component_property='value'),
)
def update_markdown(value_dropdown, value_slider):
    df_sub = df[df['country'].isin([value_dropdown])]
    title = 'Data points displayed: {:,}'.format(len(df_sub.index) * value_slider)
    return title


@app.callback(
    Output(component_id='our-figure', component_property='figure'),
    Output(component_id='markdown-scatter', component_property='children'),
    Input(component_id='our-dropdown', component_property='value'),
    Input(component_id='our-slider', component_property='value'),
)
def update_graph(value_dropdown, value_slider):
    df_sub = df[df['country'].isin([value_dropdown])]
    df_new = pd.DataFrame(np.repeat(df_sub.to_numpy(), value_slider, axis=0), columns=df_sub.columns)
    start_time = datetime.now()
    fig = px.scatter(
        df_new,
        x='year',
        y='lifeExp',
        title='PX scatter plot',
        template='plotly_white',
    )
    fig.update_traces(marker=dict(size=5 + (value_slider / 30000) * 25))
    end_time = datetime.now()
    subtitle = 'Duration for scatter plot loading: {} s'.format(round((end_time - start_time).total_seconds(), 2))
    return fig, subtitle


# Note: this callback needs its own function name so it does not shadow the one above
@app.callback(
    Output(component_id='our-gl-figure', component_property='figure'),
    Output(component_id='markdown-gl', component_property='children'),
    Input(component_id='our-dropdown', component_property='value'),
    Input(component_id='our-slider', component_property='value'),
)
def update_gl_graph(value_dropdown, value_slider):
    df_sub = df[df['country'].isin([value_dropdown])]
    df_new = pd.DataFrame(np.repeat(df_sub.to_numpy(), value_slider, axis=0), columns=df_sub.columns)
    start_time = datetime.now()
    fig = go.Figure(data=go.Scattergl(
        x=df_new['year'],
        y=df_new['lifeExp'],
        mode='markers',
        marker=dict(colorscale='Viridis', size=5 + (value_slider / 30000) * 25),
    ))
    fig.update_layout(
        title='GO gl-scatter plot',
        xaxis_title='year',
        yaxis_title='lifeExp',
    )
    end_time = datetime.now()
    subtitle = 'Duration for gl-scatter plot loading: {} s'.format(round((end_time - start_time).total_seconds(), 2))
    return fig, subtitle


# Run the App
if __name__ == '__main__':
    app.run_server()
```

#### [ADD GIF, THAT SHOWS APP IN ACTION AND COMPARES THE SPEED OF THE TWO SCATTER PLOTS FOR TWO DIFFERENT SLIDER VALUES]

### 13.3.2 Datashader

### 13.3.3 Plotly Resampler

Even though ScatterGL outperforms the px scatter plot, it is still rather slow for large data sets and lags when you interact with the plot, e.g. when zooming in. That is where the package `plotly_resampler` comes in very handy. This package speeds up the figure by downsampling (aggregating) the data with respect to the current view and then plotting only the aggregated points. When you interact with the plot (panning, zooming, ...), callbacks are used to re-aggregate the data and update the figure.
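The core idea behind such aggregation can be sketched in plain Python: split the series into buckets and keep only the minimum and maximum of each bucket, so the visual envelope survives with far fewer points. This is only an illustration of the principle; `plotly_resampler` ships considerably smarter aggregators (e.g. LTTB-style downsampling).

```
def minmax_downsample(xs, ys, n_buckets):
    """Keep only the min and max point of each bucket.

    Reduces len(xs) points to at most 2 * n_buckets points while
    preserving the visual envelope (extremes) of the series.
    """
    assert len(xs) == len(ys)
    size = max(1, len(xs) // n_buckets)
    out_x, out_y = [], []
    for start in range(0, len(xs), size):
        bucket = range(start, min(start + size, len(xs)))
        lo = min(bucket, key=lambda i: ys[i])
        hi = max(bucket, key=lambda i: ys[i])
        for i in sorted({lo, hi}):  # keep x-order inside the bucket
            out_x.append(xs[i])
            out_y.append(ys[i])
    return out_x, out_y


# 10,000 points collapse to at most 100, yet global extremes survive
xs = list(range(10_000))
ys = [(i * 37) % 101 for i in xs]
sx, sy = minmax_downsample(xs, ys, n_buckets=50)
print(len(sx))                                  # at most 100
print(max(sy) == max(ys), min(sy) == min(ys))   # True True
```

Because each bucket contributes only its two extreme points, the downsampled trace looks nearly identical at dashboard zoom levels while the browser has orders of magnitude fewer markers to draw.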
```{admonition} Plotly Resampler
See also the [documentation on Github](https://github.com/predict-idlab/plotly-resampler) for the plotly resampler package.
```

The following app lets you compare the loading durations of all three approaches.

```
# Import packages
from dash import Dash, dcc, Input, Output
import dash_bootstrap_components as dbc
from datetime import datetime
import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly_resampler import FigureResampler

# Setup data
df = px.data.gapminder()[['country', 'year', 'lifeExp']]
dropdown_list = df['country'].unique()

# Initialise the App
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])

# Create app components
markdown = dcc.Markdown(id='our-markdown')
dropdown = dcc.Dropdown(id='our-dropdown', options=dropdown_list, value=dropdown_list[0])
markdown_scatter = dcc.Markdown(id='markdown-scatter')
markdown_gl = dcc.Markdown(id='markdown-gl')
markdown_resampler = dcc.Markdown(id='markdown-resample')
slider = dcc.Slider(id='our-slider', min=0, max=50000, marks=None, value=0)

# App Layout
app.layout = dbc.Container(
    [
        dbc.Row([dbc.Col(dropdown, width=3), dbc.Col(markdown, width=9)]),
        dbc.Row([dbc.Col(dcc.Graph(id='our-figure')), dbc.Col(dcc.Graph(id='our-gl-figure')), dbc.Col(dcc.Graph(id='our-resample-figure'))]),
        dbc.Row([dbc.Col(markdown_scatter), dbc.Col(markdown_gl), dbc.Col(markdown_resampler)]),
        dbc.Row(dbc.Col(slider)),
    ]
)

# Configure callbacks
@app.callback(
    Output(component_id='our-markdown', component_property='children'),
    Input(component_id='our-dropdown', component_property='value'),
    Input(component_id='our-slider', component_property='value'),
)
def update_markdown(value_dropdown, value_slider):
    df_sub = df[df['country'].isin([value_dropdown])]
    title = 'Data points displayed: {:,}'.format(len(df_sub.index) * value_slider)
    return title


@app.callback(
    Output(component_id='our-figure', component_property='figure'),
    Output(component_id='markdown-scatter', component_property='children'),
    Input(component_id='our-dropdown', component_property='value'),
    Input(component_id='our-slider', component_property='value'),
)
def update_graph(value_dropdown, value_slider):
    df_sub = df[df['country'].isin([value_dropdown])]
    df_new = pd.DataFrame(np.repeat(df_sub.to_numpy(), value_slider, axis=0), columns=df_sub.columns)
    start_time = datetime.now()
    fig = px.scatter(
        df_new,
        x='year',
        y='lifeExp',
        title='PX scatter plot',
        template='plotly_white',
    )
    fig.update_traces(marker=dict(size=5 + (value_slider / 30000) * 25))
    end_time = datetime.now()
    subtitle = 'Duration for scatter plot loading: {} s'.format(round((end_time - start_time).total_seconds(), 2))
    return fig, subtitle


@app.callback(
    Output(component_id='our-gl-figure', component_property='figure'),
    Output(component_id='markdown-gl', component_property='children'),
    Input(component_id='our-dropdown', component_property='value'),
    Input(component_id='our-slider', component_property='value'),
)
def update_gl_graph(value_dropdown, value_slider):
    df_sub = df[df['country'].isin([value_dropdown])]
    df_new = pd.DataFrame(np.repeat(df_sub.to_numpy(), value_slider, axis=0), columns=df_sub.columns)
    start_time = datetime.now()
    fig = go.Figure()
    fig.add_trace(go.Scattergl(
        x=df_new['year'],
        y=pd.to_numeric(df_new['lifeExp']),
        mode='markers',
        marker=dict(colorscale='Viridis', size=5 + (value_slider / 30000) * 25),
    ))
    fig.update_layout(
        title='GO gl-scatter plot',
        xaxis_title='year',
        yaxis_title='lifeExp',
    )
    end_time = datetime.now()
    subtitle = 'Duration for gl-scatter plot loading: {} s'.format(round((end_time - start_time).total_seconds(), 2))
    return fig, subtitle


@app.callback(
    Output(component_id='our-resample-figure', component_property='figure'),
    Output(component_id='markdown-resample', component_property='children'),
    Input(component_id='our-dropdown', component_property='value'),
    Input(component_id='our-slider', component_property='value'),
)
def update_resample_graph(value_dropdown, value_slider):
    df_sub = df[df['country'].isin([value_dropdown])]
    df_new = pd.DataFrame(np.repeat(df_sub.to_numpy(), value_slider, axis=0), columns=df_sub.columns)
    start_time = datetime.now()
    fig = FigureResampler(go.Figure())
    fig.add_trace(go.Scattergl(
        x=df_new['year'],
        y=pd.to_numeric(df_new['lifeExp']),
        mode='markers',
        marker=dict(colorscale='Viridis', size=5 + (value_slider / 30000) * 25),
    ))
    fig.update_layout(
        title='Plotly Resampler scatter plot',
        xaxis_title='year',
        yaxis_title='lifeExp',
    )
    end_time = datetime.now()
    subtitle = 'Duration for Plotly Resampler scatter plot loading: {} s'.format(round((end_time - start_time).total_seconds(), 2))
    return fig, subtitle


# Run the App
if __name__ == '__main__':
    app.run_server(debug=True)
```

#### [ADD GIF, THAT SHOWS APP IN ACTION AND COMPARES THE SPEED OF THE SCATTER PLOTS FOR TWO DIFFERENT SLIDER VALUES AS WELL AS HOW TO ZOOM IN]

## 13.4 Caching

Caching, also known as memoization, is a method used in computer science to speed up calculations by storing data so that future requests for that data can be served faster. Typically, the data stored in a cache is the result of an earlier computation. This way, repeated function calls made with the same parameters do not have to be computed multiple times. One popular use case is recursive functions.

```{admonition} Memoization
For an exemplary introduction to memoization and its implementation in Python, have a look at [Towards Data Science](https://towardsdatascience.com/memoization-in-python-57c0a738179a) or [Real Python](https://realpython.com/lru-cache-python/).
```

When working with callbacks, the easiest way of implementing memoization is the `flask_caching` module. See the [official documentation](https://dash.plotly.com/performance#memoization) for further reference.
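Before reaching for `flask_caching`, the idea itself can be seen with the standard library's `functools.lru_cache`, which memoizes a function in a single line. This is a minimal sketch of memoization in general, not of the Dash-specific setup; in a callback you would typically use `flask_caching`, which supports shared, server-side cache backends.

```
from functools import lru_cache


@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci, memoized by lru_cache."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)


# Without memoization this call would take exponential time;
# with the cache every fib(k) is computed exactly once.
print(fib(200))
print(fib.cache_info().hits > 0)  # True: repeated subcalls came from the cache
```

The same pattern carries over to expensive data-loading or filtering helpers: as long as the function is pure and its arguments are hashable, identical calls are answered from memory instead of being recomputed.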
## Other potential ideas if need be:

- https://community.plotly.com/t/how-to-improve-the-loading-speed-of-my-page/17197
- https://community.plotly.com/t/is-there-a-way-to-increate-the-performance-of-my-dash-web-app/23117/10
- https://github.com/ijl/orjson

## Summary