---
title: Performance diagnostics in Hyperscale
description: This article describes how to troubleshoot Hyperscale performance problems in Azure SQL Database.
services: sql-database
ms.service: sql-database
ms.subservice: service
ms.custom: seo-lt-2019 sqldbrb=1
ms.topic: troubleshooting
author: denzilribeiro
ms.author: denzilr
ms.reviewer: sstein
ms.date: 10/18/2019
---
# <a name="sql-hyperscale-performance-troubleshooting-diagnostics"></a>SQL Hyperscale performance troubleshooting diagnostics
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]

To troubleshoot performance problems in a Hyperscale database, the [general performance tuning methodologies](monitor-tune-overview.md) on the Azure SQL Database compute node are the starting point of a performance investigation. However, given the [distributed architecture](service-tier-hyperscale.md#distributed-functions-architecture) of Hyperscale, additional diagnostics have been added to assist. This article describes diagnostic data that is specific to Hyperscale.

## <a name="log-rate-throttling-waits"></a>Log rate throttling waits

Every Azure SQL Database service level has log generation limits enforced via [log rate governance](resource-limits-logical-server.md#transaction-log-rate-governance). In Hyperscale, the log generation limit is currently set to 100 MB/sec, regardless of the service level. However, there are times when the log generation rate on the primary compute replica has to be throttled to maintain recoverability SLAs.
This throttling happens when a [page server or another compute replica](service-tier-hyperscale.md#distributed-functions-architecture) is significantly behind in applying new log records from the log service. The following wait types (in [sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql/)) describe the reasons why the log rate may be throttled on the primary compute replica:

|Wait type |Description |
|------------- |------------------------------------|
|RBIO_RG_STORAGE | Occurs when the log generation rate of the primary compute node of a Hyperscale database is being throttled due to delayed log consumption at the page server(s). |
|RBIO_RG_DESTAGE | Occurs when the log generation rate of a Hyperscale database is being throttled due to delayed log consumption by the long-term log storage. |
|RBIO_RG_REPLICA | Occurs when the log generation rate of a Hyperscale database is being throttled due to delayed log consumption by the readable secondary replica(s). |
|RBIO_RG_LOCALDESTAGE | Occurs when the log generation rate of a Hyperscale database is being throttled due to delayed log consumption by the log service. |

## <a name="page-server-reads"></a>Page server reads

The compute replicas do not cache a full copy of the database locally. The data local to the compute replica is stored in the buffer pool (in memory) and in the local RBPEX (resilient buffer pool extension) cache, which is a partial (non-covering) cache of data pages. This local RBPEX cache is sized proportionally to the compute size, at three times the memory of the compute tier.
RBPEX is similar to the buffer pool in that it holds the most frequently accessed data. Each page server, on the other hand, has a covering RBPEX cache for the portion of the database it maintains. When a read is issued on a compute replica and the data does not exist in the buffer pool or the local RBPEX cache, a GetPage(pageId, LSN) function call is issued, and the page is fetched from the corresponding page server. Reads from page servers are remote reads and are therefore slower than reads from the local RBPEX. When troubleshooting IO-related performance problems, we need to be able to tell how many IOs were done via the relatively slow remote page server reads.

Several dynamic management views (DMVs) and extended events have columns and fields that report the number of remote reads from a page server, which can be compared to the total number of reads. Query Store also captures remote reads as part of the query runtime statistics.
- Columns reporting page server reads are available in execution DMVs and catalog views, such as:
  - [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql/)
  - [sys.dm_exec_query_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-stats-transact-sql/)
  - [sys.dm_exec_procedure_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-procedure-stats-transact-sql/)
  - [sys.dm_exec_trigger_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-trigger-stats-transact-sql/)
  - [sys.query_store_runtime_stats](/sql/relational-databases/system-catalog-views/sys-query-store-runtime-stats-transact-sql/)
- Page server reads are added to the following extended events:
  - sql_statement_completed
  - sp_statement_completed
  - sql_batch_completed
  - rpc_completed
  - scan_stopped
  - query_store_begin_persist_runtime_stat
  - query-store_execution_runtime_info
- ActualPageServerReads/ActualPageServerReadAheads are added to the query plan XML for actual plans. For example:

`<RunTimeCountersPerThread Thread="8" ActualRows="90466461" ActualRowsRead="90466461" Batches="0" ActualEndOfScans="1" ActualExecutions="1" ActualExecutionMode="Row" ActualElapsedms="133645" ActualCPUms="85105" ActualScans="1" ActualLogicalReads="6032256" ActualPhysicalReads="0" ActualPageServerReads="0" ActualReadAheads="6027814" ActualPageServerReadAheads="5687297" ActualLobLogicalReads="0" ActualLobPhysicalReads="0" ActualLobPageServerReads="0" ActualLobReadAheads="0" ActualLobPageServerReadAheads="0" />`

> [!NOTE]
> To view these attributes in the query plan properties window, SSMS 18.3 or later is required.
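To find the queries doing the most remote reads, the page server read columns in these DMVs can be queried directly. The following T-SQL sketch compares page server reads to total physical reads per cached query plan, using the `total_page_server_reads` and `total_physical_reads` columns of sys.dm_exec_query_stats described above:

```sql
-- Top queries by remote page server reads, compared to total physical reads.
-- A sketch for a Hyperscale database, per the columns described in this section.
SELECT TOP (20)
    qs.execution_count,
    qs.total_physical_reads,
    qs.total_page_server_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_page_server_reads DESC;
```

A statement whose page server reads approach its total physical reads is being served mostly by remote page servers rather than the local RBPEX cache.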
## <a name="virtual-file-stats-and-io-accounting"></a>Virtual file stats and IO accounting

In Azure SQL Database, the [sys.dm_io_virtual_file_stats()](/sql/relational-databases/system-dynamic-management-views/sys-dm-io-virtual-file-stats-transact-sql/) DMF is the primary way to monitor SQL Database IO. IO characteristics in Hyperscale differ due to its [distributed architecture](service-tier-hyperscale.md#distributed-functions-architecture). In this section, we focus on IO (reads and writes) to data files as seen in this DMF. In Hyperscale, each data file visible in this DMF corresponds to a remote page server. The RBPEX cache mentioned here is a local SSD cache, a non-covering cache on the compute replica.

### <a name="local-rbpex-cache-usage"></a>Local RBPEX cache usage

The local RBPEX cache exists on the compute replica, on local SSD storage. Therefore, IO against this cache is faster than IO against remote page servers. Currently, [sys.dm_io_virtual_file_stats()](/sql/relational-databases/system-dynamic-management-views/sys-dm-io-virtual-file-stats-transact-sql/) in a Hyperscale database has a special row reporting the IO against the local RBPEX cache on the compute replica. This row has the value of 0 for both the `database_id` and `file_id` columns. For example, the query below returns RBPEX usage statistics since database startup.

`select * from sys.dm_io_virtual_file_stats(0,NULL);`

A ratio of reads done on RBPEX to aggregated reads done on all other data files provides the RBPEX cache hit ratio.
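The ratio just described can be computed from the same DMF. This is a sketch that assumes the conventions stated in this section: the RBPEX row has `database_id` 0 and `file_id` 0, and `file_id` 2 is the transaction log:

```sql
-- Fraction of data-file reads served by the local RBPEX cache since startup.
SELECT
    SUM(CASE WHEN database_id = 0 AND file_id = 0
             THEN num_of_reads ELSE 0 END) * 1.0
    / NULLIF(SUM(num_of_reads), 0) AS rbpex_read_fraction
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
WHERE file_id <> 2;  -- exclude transaction log IO from the denominator
```

A value close to 1 indicates that nearly all data reads are being served by the local RBPEX cache rather than remote page servers.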
### <a name="data-reads"></a>Data reads

- When reads are issued by the SQL Server database engine on a compute replica, they may be served by the local RBPEX cache, by remote page servers, or by a combination of the two if multiple pages are read.
- When the compute replica reads some pages from a given file, for example file_id 1, if this data resides solely in the local RBPEX cache, all IO for this read is accounted against file_id 0 (RBPEX). If some part of that data is in the local RBPEX cache, and some part is on a remote page server, then IO is accounted towards file_id 0 for the part served from RBPEX, while the part served from the remote page server is accounted towards file_id 1.
- When a compute replica requests a page at a particular [LSN](/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide/) from a page server, and the page server has not yet caught up to the requested LSN, the read on the compute replica waits until the page is returned to the compute replica. For any read from a page on the compute replica, you see the PAGEIOLATCH_* wait type if it is waiting on that IO. In Hyperscale, this wait time includes both the time needed to catch up the requested page on the page server to the required LSN, and the time needed to transfer the page from the page server to the compute replica.
- Large reads such as read-ahead are often done using ["scatter-gather" reads](/sql/relational-databases/reading-pages/). This allows reads of up to 4 MB of pages at a time to be considered a single read in the SQL Server database engine.
However, when the data being read is in RBPEX, these reads are accounted as multiple individual 8-KB reads, since the buffer pool and RBPEX always use 8-KB pages. As a result, the number of read IOs seen against RBPEX may be larger than the actual number of IOs performed by the engine.

### <a name="data-writes"></a>Data writes

- The primary compute replica does not write directly to page servers. Instead, log records from the log service are replayed on the corresponding page servers.
- Writes that happen on the compute replica are predominantly writes to the local RBPEX (file_id 0). For writes on logical files that are larger than 8 KB, in other words those done using [gather-write](/sql/relational-databases/writing-pages/), each write operation is translated into multiple 8-KB individual writes to RBPEX, since the buffer pool and RBPEX always use 8-KB pages. As a result, the number of write IOs seen against RBPEX may be larger than the actual number of IOs performed by the engine.
- Non-RBPEX files, that is data files other than file_id 0 that correspond to page servers, also show writes. In the Hyperscale service tier, these writes are simulated, because the compute replicas never write directly to page servers. Write IOPS and throughput are accounted as they occur on the compute replica, but latency for data files other than file_id 0 does not reflect the actual latency of page server writes.

### <a name="log-writes"></a>Log writes

- On the primary compute, a log write is accounted for in file_id 2 of sys.dm_io_virtual_file_stats.
A log write on the primary compute is a write to the log landing zone.
- Log records are not hardened on commit on the secondary replicas. In Hyperscale, log is applied to the secondary replicas asynchronously by the log service. Because log writes do not actually occur on secondary replicas, the accounting of log IOs on the secondary replicas is for tracking purposes only.

## <a name="data-io-in-resource-utilization-statistics"></a>Data IO in resource utilization statistics

In a non-Hyperscale database, combined read and write IOPS against data files, relative to the [resource governance](./resource-limits-logical-server.md#resource-governance) data IOPS limit, are reported in the [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) and [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) views, in the `avg_data_io_percent` column. The same value is reported in the Azure portal as _Data IO Percentage_.

In a Hyperscale database, this column reports data IOPS utilization relative to the limit for local storage on the compute replica only, specifically IO against RBPEX and `tempdb`. A 100% value in this column indicates that resource governance is limiting local storage IOPS. If this correlates with a performance problem, tune the workload to generate less IO, or increase the database service objective to raise the resource governance maximum _Data IOPS_ [limit](resource-limits-vcore-single-databases.md).
For resource governance of RBPEX reads and writes, the system counts individual 8-KB IOs, rather than the larger IOs that may be issued by the SQL Server database engine. Data IO against remote page servers is not reported in resource utilization views or in the portal, but is reported in the [sys.dm_io_virtual_file_stats()](/sql/relational-databases/system-dynamic-management-views/sys-dm-io-virtual-file-stats-transact-sql/) DMF, as noted earlier.

## <a name="additional-resources"></a>Additional resources

- For vCore resource limits for a Hyperscale database, see [Hyperscale service tier vCore limits](resource-limits-vcore-single-databases.md#hyperscale---provisioned-compute---gen5).
- For Azure SQL Database performance tuning, see [Query performance in Azure SQL Database](performance-guidance.md).
- For performance tuning using Query Store, see [Monitoring performance by using the Query Store](/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store/).
- For DMV monitoring scripts, see [Monitoring Azure SQL Database performance using dynamic management views](monitoring-with-dmvs.md).
# Table of contents
* [Overview](#overview)
* [`scenes`](#scenes)
* [`nodes`](#nodes)
  * [Transformations](#transformations)
  * [Size](#size)
  * [Visibility](#visibility)
  * [`children`](#children)
  * [`customization`](#customization)
  * [Renderables](#renderables)
    * [`model`](#model)
    * [`arc`](#arc)
  * [`inverseBindPoseMatrix`](#inversebindposematrix)
* [`materials`](#materials)
* [`meshes`](#meshes)
* [`shaders`](#shaders)
  * [`rendererState`](#rendererState)
  * [`defines`](#defines)
  * [`hints`](#hints)
  * [Uniforms](#uniforms)
* [`skeletons`](#skeletons)
* [`cameras`](#cameras)
  * [Perspective cameras](#perspective-cameras)
  * [Orthographic cameras](#orthographic-cameras)
* [`lights`](#lights)
* [`animations`](#animations)
  * [`properties`](#properties)
    * [`keyFramesBin`](#keyframesbin)
    * [`keyFrames`](#keyFrames)
    * [`value`](#value)
* [`animationGroups`](#animationGroups)

# Overview

DLI is a JSON based format for representing 3D scenes.

* Like glTF, the DLI document defines a set of scenes, which in turn define a hierarchical structure of nodes, and the additional data required to render them, such as meshes and geometry.
* Unlike glTF, it allows definitions of shaders, environment maps, and lighting parameters as well;
* Unlike glTF, DLI does not concern itself with buffers, buffer views, and accessors;
* It supports customizations, which allow replacing parts of the scene based on customization tags and options, without reloading the whole scene.
* It supports the processing of custom categories, which can be scheduled to take place prior to or after the processing of the scene data, as well as custom node and animation processors.

# `scenes`

The "scenes" element is an array of JSON objects, each of which must define a `nodes` array with the index of the definition of the root node.

:warning: The array must not be empty. Only the first element is used.

An optional `scene` element with an integer value may be defined to specify the index of the first scene to be created.
The rest of the scenes are created in the order of their definition (from index 0 to the highest, skipping the default, already created, scene).

```js
{
  "scene": 0,
  "scenes": [ {
    "nodes": [ 0 ]
  } ],
  "nodes": [ {
    "name": "some_unique_name"
  } ]
}
```

# `nodes`

The 3D scene is built using a hierarchy of nodes, which are used to position the objects to render.

:warning: Each node must have a `name` string that 1, is not an empty string and 2, is unique within the DLI document. The use of alphanumeric characters and underscores only is highly recommended.

## Transformations

There are two ways to define the local transform of each node. Both are optional, defaulting to a position at the origin, a scale of one and no rotation.

* A `matrix` array of 16 numerical values defining a column-major 4x4 matrix;
* A `position` array of 3 numerical values defining a 3D vector, and an `angle` array of 3 numerical values defining Euler angles of rotation along the 3 axes.

## Size

The size of the bounding box of a node may be specified using either of the optional `size` or `bounds` properties, as arrays of 2 or 3 numerical values. Its default value is the unit vector.

## Visibility

The `visible` optional boolean property defines whether a node and its children are rendered (`true`) or not (`false`).

## `children`

An array of 0 or more indices into the top level `nodes` array, which shall inherit the transform and visibility of their parent node.

:warning: Nodes are processed in the order they are encountered during the depth-first traversal of the hierarchy.

```js
"nodes": [ {
  "name": "hip",
  "children": [ 3, 1, 2 ]
}, {
  "name": "spine"
}, {
  "name": "left leg"
}, {
  "name": "right leg"
} ]
```

## `customization`

Customizations allow creating a different sub-tree under each node that defines them, based on application specific configuration settings known at the time of creating the scene.
The definition of a `customization` is a single string tag:

```js
"nodes": [ {
  "name": "Soup du jour",
  "customization": "flavor",  // this one
  "children": [ 1, 2, 3 ]
}, {
  "name": "Broccoli and Stilton"
}, {
  "name": "Butternut Squash"
}, {
  "name": "Strawberry and Cream"
} ]
```

:warning: Customizations and renderables are mutually exclusive on the same node.

## Renderables

There is support for two types of nodes that define renderable content. The definition of these renderables comes in sub-objects. All of them support a `color` property, which is an array of 3 or 4 numerical values for RGB or RGBA components. For the alpha value to take effect, alpha blending must be enabled; this is controlled by the [material](#materials).

:warning: Customizations and renderables are mutually exclusive on the same node.

### `model`

Provides the definition of a 3D object, which requires a `mesh`, `shader` and `material`. Each of these is provided in the form of an integer index into the related top-level array of the DLI document.

:warning: `mesh` must be provided; the rest are optional and default to 0.

```js
"nodes": [ {
  "name": "Sphere",
  "model": {
    "mesh": 0,      // required
    "shader": 0,    // optional, defaults to 0
    "material": 1   // optional, defaults to 0
  }
} ]
```

### `arc`

Arc is a specialisation of a model that allows the rendering of circular progress bars. As such, it must also provide a `mesh` ID.

```js
"nodes": [ {
  "name": "Any Name",
  "arc": {
    "mesh": 0,      // required
    "shader": 0,    // optional, defaults to 0
    "material": 1,  // optional, defaults to 0
    "antiAliasing": true,
    "radius": -0.928,
    "startAngle": -81.0,
    "endAngle": 261
  }
} ]
```

* `startAngle` and `endAngle` are the angular positions where the arc starts and ends, in degrees.
* `radius` is the inner radius of the arc, the outer radius being defined by the [`size`](#size) of the node.
* `antiAliasing` defines whether the edges of the arc should be smoothed.
## `inverseBindPoseMatrix`

Nodes that serve as joints of a [skeleton](#skeletons) must define this property as an array of 16 numerical values for a column-major 4x4 matrix.

```js
"nodes": [ {
  "name": "l_shoulder_JNT",
  "inverseBindPoseMatrix": [
    0.996081, -0.0407448, 0.0785079, 0.0,
    0.0273643, 0.985992, 0.164531, 0.0,
    -0.0841121, -0.161738, 0.983242, 0.0,
    -0.0637747, -1.16091, -0.161038, 1.0
  ]
} ]
```

# `environment`

An array of environment map definitions, which have the following format:

```js
"environment": [ {
  "cubeSpecular": "Studio_001/Radiance.ktx",
  "cubeDiffuse": "Studio_001/Irradiance.ktx",
  "cubeInitialOrientation": [
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.0, 0.0, 0.0, 1.0
  ]
} ]
```

`cubeSpecular` and `cubeDiffuse` are the names of the respective cube map files, which will be attempted to be located in the _environment_ path, which is up to the application. A 1x1x1 white RGB888 cubemap is created in place of either one that was omitted.

`cubeInitialOrientation` may be used to inform the (PBR) shader of a matrix that describes the initial orientation of the cube map. Defaults to the identity matrix.

# `materials`

Defines configurations of textures (and their samplers) that form materials. These can be created from single values or image files (in the formats that DALi supports).

```js
"materials": [ {
  "environment": 0,
  "albedoMap": "A.png",
  "normal": "N.png",
  "metallicRoughnessMap": "MR.png",
  "color": [ 1.0, 0.8, 0.7, 0.5 ]
} ]
```

* `environment`: The index of the [environment map](#environment) to use. Single integer, defaults to `0`.
* `mipmap`: A boolean to specify whether the creation and sampling of mipmaps should be enabled. Off by default.
* `color`: Base color, which the color of the node gets multiplied by. Defaults to white.
* `metallic` and `roughness`: Properties for PBR materials; both are expected to be a single numerical value and default to `1.0`.
## Texture maps

`albedoMap` / `albedoMetallicMap` / `normalMap` / `normalRoughnessMap` / `metallicRoughnessMap` / `subsurfaceMap`: define various texture semantics, i.e. the role of the texture, which shall be loaded from an image _inside the materials path_, which is up to the application. All of them are optional.

:warning: Semantics shall not overlap within the same material, e.g. multiple albedo definitions, or albedo and albedoMetallic.

# `meshes`

Defines an array of meshes which are used to access geometry data stored in a binary file. The `uri` property is used to locate each file _inside the mesh path_, which is up to the application; alternatively it can be set to `"quad"`, resulting in the creation of a unit quad. Models loaded from a file may provide an accessor for each attribute, and flag its presence in the `attributes` bitmask property. The following are supported:

|Attribute Name|Bit|Decimal value|Type|Remarks|
|-|-|-|-|-|
|`indices` |0| 1|unsigned short||
|`positions`|1| 2|Vector3||
|`normals` |2| 4|Vector3||
|`textures` |3| 8|Vector2|UVs|
|`tangents` |4| 16|Vector3||
| |5| 32||Ignored, but reserved for bitangents|
|`joints0` |6| 64|Vector4|Joint IDs for skinned meshes|
|`weights0` |7|128|Vector4|Joint weights for skinned meshes|

E.g. if positions, normals and tangents are provided, the `attributes` property must have a value of 2 + 4 + 16 = 22. Each attribute must provide a `byteOffset` and `byteLength` property, which must be correctly sized for the type of the given attribute.

Finally, to specify what primitives the geometry should be rendered as, a `primitive` property may be provided with one of the following values: `TRIANGLES` (default), `LINES`, `POINTS`.

## Skinned meshes

DLI supports meshes that allow deformation by skeletal animation. These must define a few additional properties:

* `joints0` and `weights0` attributes, as above.
* A [`skeleton`](#skeletons) ID, to specify which (joint) nodes' transformations affect the mesh.
:warning: The maximum number of bones supported by DALi Scene Loader is `64`.

## Blend shapes

Blend shapes provide alternate configurations of vertex `positions`, `normals` and/or `tangents` that may be blended with the same attributes of the base mesh, controlled by an animatable `weight`.

```js
"meshes": [ {
  "uri": "example.bin",
  "attributes": 2,
  "positions": {
    "byteOffset": 0,
    "byteLength": 12000
  },
  "blendShapesHeader": {
    "version": "2.0",
    "byteOffset": 12000,
    "byteLength": 4
  },
  "blendShapes": [ {
    "weight": 0.0,
    "positions": {
      "byteOffset": 12004,
      "byteLength": 12000
    }
  } ]
} ]
```

A `blendShapesHeader`, if present, must define:

* the `version` of the blend shapes; supported values are `1.0` and `2.0`. The difference between the versions is that v1.0 requires a per-blend shape definition of an un-normalization factor.
* the `byteOffset` and `byteLength` of a buffer in the binary which defines the width (2 bytes) and height (2 bytes) of the texture that dali-scene-loader creates for blend shape data.

The `blendShapes` array then defines the shapes that are available to blend between, each comprising:

* An initial `weight` numerical value; the default is 0;
* `positions` / `normals` / `tangents` attributes, which must also be present in the base mesh and are in the same format (`byteOffset` / `byteLength`);

:warning: The size of the attributes of the blend shape must match that of the base mesh (and each other), i.e. they must define the same number of vertices.

# `shaders`

Provides an array of shader programs that renderables may use for rendering. For each shader, `vertex` and `fragment` are required string properties pointing to the shader files _inside the shader path_, which is up to the application.

* `rendererState`: This string defines the options for configuring the settings of the renderer. Refer to public-api/renderer-state.h for details.
* `defines`: An optional array of strings which will be used to create #defines in the vertex and fragment shaders.
* `hints`: An optional array of strings that map to `Dali::Shader::Hint` values. Two values are supported: `MODIFIES_GEOMETRY` and `OUTPUT_IS_TRANSPARENT`.

## Uniforms

Every property that is not one of the reserved keys above will be attempted to be registered as a uniform of the same name.

:warning: boolean values will be converted to floating point 1.0 (for `true`) or 0.0 (for `false`).

:warning: integer values will be converted to floating point.

:warning: arrays of numerical values will be treated as one of the vec2 / vec3 / vec4 / mat3 / mat4 types, depending on which one they define sufficient components for.

```js
"shaders": [ {
  "vertex": "dli_pbr.vsh",
  "fragment": "dli_pbr.fsh",
  "defines": [ "HIGHP", "SKINNING" ],
  "rendererState": "DEPTH_WRITE|DEPTH_TEST|CULL_BACK",
  "hints": [ "MODIFIES_GEOMETRY" ],
  "uMaxLOD": 6
} ]
```

# `skeletons`

Skeletons in DLI simply define the name of a `node` that shall serve as the _root joint_ of the given skeleton.

```js
"skeletons": [ {
  "node": "hipJoint"
} ]
```

The joint IDs in skinned mesh data relate to those descendants of the [node](#nodes) identified as the root joint that are joints, i.e. define an [`inverseBindPoseMatrix`](#inversebindposematrix), in depth-first traversal order.

# `cameras`

Define the transformation of viewers and the projection used by them. All cameras may define:

* `matrix`: an array of 16 numerical values for the transform matrix;
* `near` and `far` values for the position of the respective clipping planes. These default to `0.1` and `1000.0`, respectively.

## Perspective cameras

This projection type, with a vertical field of view of `60` degrees, is the default. The (V)FOV can be specified in the `fov` property, as a single numerical value.

## Orthographic cameras

If `orthographic` is defined with an array of four numerical values for the left, right, bottom and top clipping planes (in this order), then orthographic projection is used, and `fov` is ignored.
# `lights`

Define parameters for a single light source; the implementation is up to the application. The following properties are supported:

* `transform`: matrix of 16 numerical values for the positioning / directing of the light;
* `color`: array of 3 components for RGB;
* `intensity`: single float;
* `shadowIntensity`: single float;
* `shadowMapSize`: unsigned integer size (same width & height) of the shadow map;
* `orthographicSize`: single float to define the size (same width & height) of the orthographic projection.

# `animations`

Animations provide a way to change properties of nodes (and those of their renderables) over time.

```js
"animations": [ {
  "name": "Idle",
  "loopCount": 0,
  "duration": 4.0,
  "endAction": "DISCARD",
  "disconnectAction": "DISCARD",
  "properties": []
} ]
```

* `name`: the identifier to look the animation up by;
* `loopCount`: the number of times that the animation should be played. The default is `1`; `0` signifies infinite repetition.
* `duration`: the duration of the animation in seconds. If not provided, it will be calculated from the properties.
* `endAction` and `disconnectAction`: the supported values, defaults and their meaning are described by the `Dali::Animation::EndAction` enum;

## `properties`

An array of animation property definitions:

* `node`: the name of the node whose property shall be animated;
* `property`: the name that the property was registered under;
* `alphaFunction`: the name of an alpha function which shall be used in animating a value, or between key frames;
* `timePeriod`: `delay` (defaults to `0.0`) and `duration` (defaults to the `duration` of the animation) of the property animation, in seconds;

The property may be animated using one of the following methods. They are listed in priority order; if e.g. a property defines both `keyFramesBin` and `keyFrames`, then only `keyFramesBin` is used.
### `keyFramesBin`

JSON object that defines a keyframe animation in a binary buffer, with the following properties:

* `url`: the path to the file containing the buffer;
* `numKeys`: the number of keys;
* `byteOffset`: offset to the start of the buffer.

The size of the buffer depends on the property being animated; for each frame, the property value follows a 4-byte floating point value determining progress (between 0 and 1).

:warning: Only `position` (3D vector of floats, 12 bytes), `rotation` (Quaternion, 12 bytes), and `scale` (3D vector, 12 bytes) properties are supported.

### `keyFrames`

JSON array of keyframe objects defined with the following properties:

* `progress`: a scalar between `0` and `1` to apply to the duration to get the time stamp of the frame;
* `value`: array of 3 or 4 numerical values depending on which property is being animated.

:warning: Only `position`, `rotation`, and `scale` properties are supported.

### `value`

Value animations animate a property using a single value that may be absolute or relative. The properties it supports are the following:

* `value`: the value to animate to (if absolute) or by (if `relative`);
* `relative`: whether `value` is a target, or an offset.

# `animationGroups`

Animation groups simply group animations together by name, under another name, which may be used to trigger them at the same time.

```js
"animationGroups": [ {
    "name": "Idle",
    "animations": [ "IdleBody", "IdleFace" ]
} ]
```
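As a sanity check on a `keyFramesBin` buffer, its expected byte size follows directly from the layout described above - each key is a 4-byte progress float followed by a 12-byte property value (this helper is illustrative, not part of DLI):

```javascript
// Each key in a keyFramesBin buffer: 4-byte progress float + 12-byte value
// (position / rotation / scale, per the supported properties above).
const BYTES_PER_KEY = 4 + 12;

// Expected total buffer size for a keyFramesBin entry with numKeys keys.
const keyFramesBinSize = (numKeys) => numKeys * BYTES_PER_KEY;
```

E.g. a 30-key `position` animation occupies `keyFramesBinSize(30)`, i.e. 480 bytes, starting at `byteOffset`.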
---
title: Testing the performance of a cloud service | Microsoft Docs
description: Test the performance of a cloud service by using the Visual Studio profiler
author: mikejo5000
manager: jillfra
ms.assetid: 7a5501aa-f92c-457c-af9b-92ea50914e24
ms.topic: conceptual
ms.workload: azure-vs
ms.date: 11/11/2016
ms.author: mikejo
ms.openlocfilehash: fd436a6b7e38c8f76de5d113c326e194e4011155
ms.sourcegitcommit: 94b3a052fb1229c7e7f8804b09c1d403385c7630
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 04/23/2019
ms.locfileid: "62427397"
---
# <a name="testing-the-performance-of-a-cloud-service"></a>Testing the performance of a cloud service
## <a name="overview"></a>Overview
You can test the performance of a cloud service in the following ways:

* Use Azure Diagnostics to collect information about requests and connections, and to review site statistics that show how the service performs from the customers' point of view. To get started, see [Configuring Diagnostics for Azure Cloud Services and Virtual Machines](http://go.microsoft.com/fwlink/p/?LinkId=623009).
* Use the Visual Studio profiler to get an in-depth analysis of the computational aspects of how the service runs.

This topic describes how you can use profiling to measure the performance of a service running in Azure. For information about how to measure performance while the service runs locally in the compute emulator, see [Testing the Performance of an Azure Cloud Service Locally in the Compute Emulator Using the Visual Studio Profiler](http://go.microsoft.com/fwlink/p/?LinkId=262845).

## <a name="choosing-a-performance-testing-method"></a>Choosing a performance testing method
### <a name="use-azure-diagnostics-to-collect"></a>Use Azure Diagnostics to collect:
* Statistics about web pages or services, such as requests and connections.
* Statistics about roles, such as how often a role was restarted.
* Information about memory usage, such as the percentage of time spent by the garbage collector or the total memory set of a running role.
### <a name="use-the-visual-studio-profiler-to"></a>Use the Visual Studio profiler to:
* Determine which functions take the most time.
* Measure how long each part of a computationally intensive application takes.
* Compare detailed performance reports for two versions of a service.
* Analyze memory allocation at a finer level than individual memory allocations.
* Analyze concurrency issues in multithreaded code.

When you use the profiler, you can collect data when the cloud service runs locally or in Azure.

### <a name="collect-profiling-data-locally-to"></a>Collect profiling data locally to:
* Test the performance of a component of the cloud service, such as the execution of a specific worker role, that doesn't require a realistic simulated load.
* Test the performance of the cloud service in isolation, under controlled conditions.
* Test the performance of the cloud service before you deploy it to Azure.
* Test the performance of the cloud service privately, without disturbing existing deployments.
* Test the performance of the service without paying for running it in Azure.

### <a name="collect-profiling-data-in-azure-to"></a>Collect profiling data in Azure to:
* Test the performance of the cloud service under a simulated or real load.
* Collect profiling data by using the instrumentation method, as described later in this topic.
* Test the performance of the service in the same environment as when the service runs in production.

Typically, you simulate a load to test cloud services under normal or high-load conditions.

## <a name="profiling-a-cloud-service-in-azure"></a>Profiling a cloud service in Azure
When you publish a cloud service in Visual Studio, you can profile the service and specify profiling settings that give you the information you want. A profiling session is started for each instance of a role. For more information about how to publish a service from Visual Studio, see [Publishing to an Azure cloud service from Visual Studio](vs-azure-tools-publishing-a-cloud-service.md).
For more information about performance profiling in Visual Studio, see [Beginners Guide to Performance Profiling](https://msdn.microsoft.com/library/azure/ms182372.aspx) and [Analyzing Application Performance by Using Profiling Tools](https://msdn.microsoft.com/library/azure/z9z62c29.aspx).

> [!NOTE]
> You can enable either IntelliTrace or profiling when you publish your cloud service. You can't enable both.
>
>

### <a name="profiler-collection-methods"></a>Profiler collection methods
You can use different profiling collection methods, depending on your performance issues:

* **CPU sampling** - This method collects application statistics that are useful for an initial analysis of CPU-utilization issues. CPU sampling is the suggested way to start most performance investigations. There is little impact on the application that you're profiling while CPU sampling data is collected.
* **Instrumentation** - This method collects detailed timing data that's useful for focused analysis and for analyzing input/output performance issues. The instrumentation method records every entry, exit, and function call of the functions in a module during profiling. This method is useful for collecting detailed timing information about a section of code and for understanding the impact of input and output operations on application performance. This method is disabled for a computer that's running a 32-bit operating system. This option is available only when you run the cloud service in Azure, not locally in the compute emulator.
* **.NET memory allocation** - This method collects .NET Framework memory allocation data by using the sampling profiling method. The collected data includes the number and size of allocated objects.
* **Concurrency** - This method collects resource-contention data and process and thread execution data that's useful in analyzing multi-process and multithreaded applications.
The concurrency method collects data for each event that pauses code execution, such as when a thread waits for locked access to an application resource to be freed. This method is useful for analyzing multithreaded applications.

* You can also enable **tier interaction profiling**, which provides additional information about the execution times of synchronous ADO.NET calls in functions of multi-tier applications that communicate with one or more databases. You can collect tier interaction data with any of the profiling methods. For more information about tier interaction profiling, see [Tier Interactions View](https://msdn.microsoft.com/library/azure/dd557764.aspx).

## <a name="configuring-profiling-settings"></a>Configuring profiling settings
The following illustration shows how to configure profiling settings in the Publish Azure Application dialog box.

![Configure profiling settings](./media/vs-azure-tools-performance-profiling-cloud-services/IC526984.png)

> [!NOTE]
> To enable the **Enable profiling** check box, you must have the profiler installed on the local computer that you use to publish the cloud service. By default, the profiler is installed when you install Visual Studio.
>
>

### <a name="to-configure-profiling-settings"></a>To configure profiling settings
1. In Solution Explorer, open the shortcut menu for the Azure project, and then choose **Publish**. For detailed steps on how to publish a cloud service, see [Publishing a Cloud Service using the Azure Tools](http://go.microsoft.com/fwlink/p?LinkId=623012).
2. In the **Publish Azure Application** dialog box, choose the **Advanced Settings** tab.
3. To enable profiling, select the **Enable profiling** check box.
4. To configure the profiling settings, choose the **Settings** hyperlink. The profiling settings dialog box appears.
5. From the **What method of profiling would you like to use** option buttons, choose the type of profiling that you need.
6. 
To collect tier interaction profiling data, select the **Enable Tier Interaction Profiling** check box.
7. To save the settings, choose the **OK** button.

   When you publish this application, these settings are used to create the profiling session for each role.

## <a name="viewing-profiling-reports"></a>Viewing profiling reports
A profiling session is created for each instance of a role in the cloud service. To view the profiling reports for each session from Visual Studio, you can open the Server Explorer window and then choose the Azure Compute node to select an instance of a role. You can then view the profiling reports as shown in the following illustration.

![View profiling report from Azure](./media/vs-azure-tools-performance-profiling-cloud-services/IC748914.png)

### <a name="to-view-profiling-reports"></a>To view profiling reports
1. To display the Server Explorer window in Visual Studio, on the menu bar choose View, Server Explorer.
2. Select the Azure Compute node, and then select the Azure deployment node for the cloud service that you chose to profile when you published from Visual Studio.
3. To view the profiling reports for an instance, choose the role in the service, open the shortcut menu for a specific instance, and then choose **View Profiling Report**.

   The report, a .vsp file, is now downloaded from Azure, and the status of the download is displayed in the Azure activity log. When the download completes, the profiling report is displayed in a tab in the Visual Studio editor named < Role name\>_< Instance number\>_< identifier\>.vsp. Summary data for the report is displayed.
4. To show different views of the report, in the Current View list, choose the type of view that you want. For more information, see [Profiling Tools Report Views](https://msdn.microsoft.com/library/azure/bb385755.aspx).
## <a name="next-steps"></a>Next steps
[Debugging cloud services](vs-azure-tools-debug-cloud-services-virtual-machines.md)

[Publishing to an Azure cloud service from Visual Studio](vs-azure-tools-publishing-a-cloud-service.md)
# Ruminations on Rust Geodesy

## Rumination 001: A few words about an often-seen pipeline

Thomas Knudsen <knudsen.thomas@gmail.com>

2021-08-11. Last [revision](#document-history) 2021-08-11

### Abstract

```js
origin of | cart ellps:intl | helmert x:-87 y:-96 z:-120 | cart inv
```

---

### Prologue

In the Rust Geodesy source code, test cases, and documentation, you will often encounter this transformation pipeline:

```js
cart ellps:intl | helmert x:-87 y:-96 z:-120 | cart inv ellps:GRS80
```

It was selected as the *go-to* example because it is only marginally more complex than the identity operator, `noop`, while still doing real geodetic work. So by implementing just two operators, `cart` and `helmert`, we can already:

- Provide instructive examples of useful geodetic work
- Test the RG operator instantiation
- Test the internal data flow architecture
- Develop test- and documentation workflows
- and in general get a good view of the RG *look and feel*

For these reasons, `cart` and `helmert` were the first two operators implemented in RG.

### The operators

**cart** converts from geographical coordinates to earth-centered cartesian coordinates (and vice versa in the inverse case).

**helmert** performs the Helmert transformation, which in the simple 3-parameter case used here simply adds the parameters `[x, y, z]` to the input coordinate `[X, Y, Z]`, so the output becomes `[X+x, Y+y, Z+z]`, or in our case: `[X-87, Y-96, Z-120]` (due to the negative signs of `x, y, z`).

### What happens?

From end-to-end:

1. The `cart` step takes geographical coordinates given on the *international ellipsoid* (`ellps:intl`) and converts them to earth-centered cartesian coordinates
2. The `helmert` step shifts the cartesian coordinates to a new origin `[x,y,z]`
3. Finally, the inverse `cart` step converts the cartesian coordinates back to geographical coordinates. This time on the *GRS80 ellipsoid* (`ellps:GRS80`)
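To make step 2 concrete: the 3-parameter Helmert step is nothing more than a component-wise addition. A minimal sketch (JavaScript here for brevity, not RG's Rust implementation; the input coordinate below is made up):

```javascript
// 3-parameter Helmert shift: output = [X + x, Y + y, Z + z].
const helmert = ([X, Y, Z], { x, y, z }) => [X + x, Y + y, Z + z];

// With the pipeline's parameters, every cartesian coordinate is shifted
// by the (negative) offsets:
const shifted = helmert([4000000.0, 500000.0, 4900000.0], { x: -87, y: -96, z: -120 });
// shifted === [3999913, 499904, 4899880]
```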
All-in-all, this amounts to a *datum shift* from the older "European Datum, 1950", *ED50*, to the current "European Terrestrial Reference Frame 1989", *ETRS89*. It is not a particularly good datum shift, but it is sufficient in many cases: The expected transformation error is on the order of 5 m, whereas one will get an error of around 200 m if not transforming at all. In other words, this simple transformation reduces the coordinate error from "a few blocks down the road" to "the wrong side of the road". ### Where did it come from? The pipeline described above is actually the [GYS](/ruminations/000-rumination.md#gys-the-geodetic-yaml-shorthand) representation of datum transformation number [1134](https://epsg.org/transformation_1134/ED50-to-WGS-84-2.html) in the [EPSG](https://epsg.org/home.html) geodetic registry, where it is described as *EPSG:1134 - ED50 to WGS 84*. In RG contexts we refer to it as *ED50 to ETRS89* since, at the level of accuracy of EPSG:1134, ETRS89 and WGS84 are equivalent. ### Document History Major revisions and additions: - 2021-08-11: First version
Manolo App: Symfony, Ansible and Vagrant ======================== Installation -------------- - git clone **https://github.com/juanBL/manolo.git** your_project_name - cd your_project_name - composer install - cd your_project_name/vagrant - vagrant up Roles -------------- - nginx - php - phpunit - geerlingguy.memcached - geerlingguy.ntp - composer - curl - git - nodejs - wget
### Mach1 C++ OpenFrameworks Examples ###

### Current Demos:
- M1-Decode Example: Decoding example with variable device conditions and update rates
- M1-Encode Example: Encoding example with visual GUI
- M1-AudioPlayer: Additional decoding example

#### Build Instructions
- Download: http://openframeworks.cc/versions/v0.11.0/
- Download dependencies (varies per demo):
  - ofxAudioDecoder: https://github.com/kylemcdonald/ofxAudioDecoder
  - ofxImGui: https://github.com/jvcleave/ofxImGui
  - ofxWitMotion: https://github.com/Mach1Studios/ofxWitMotion
  - ofxMetaMotion: https://github.com/Mach1Studios/ofxMetaMotion
  - ofxMach1WebSocketServer: https://github.com/Mach1Studios/ofxMach1WebSocketServer
- Copy the `ofxMach1` addon to your addons/ directory in OpenFrameworks
- Use the ProjectGenerator app pointed at the repo directory to create the Xcode project
- After a successful build, copy your 8x mono channels to Resources/[1][2][3]/ for audio playback testing
- Make sure to hardcode the serial port for your controller if using a custom IMU
--- title: ImageSource page_title: ImageSource description: ImageSource slug: radpdfprocessing-model-imagesource tags: imagesource published: True position: 5 --- # ImageSource __ImageSource__ represents a single, constant set of pixels at a certain size. It can be used by multiple [Image]({%slug radpdfprocessing-model-image%}) objects in order to be drawn in a PDF file. ## Creating an ImageSource The ImageSource class has several public constructor overloads and can be created from a [Stream](http://msdn.microsoft.com/en-us/library/system.io.stream(v=vs.110).aspx), [BitmapSource](http://msdn.microsoft.com/en-us/library/system.windows.media.imaging.bitmapsource(v=vs.110).aspx) object or using the [__EncodedImageData__](https://docs.telerik.com/devtools/document-processing/api/Telerik.Windows.Documents.Fixed.Model.Resources.EncodedImageData.html) class: * __public ImageSource(Stream stream)__: Creates an __ImageSource__ object from a stream that contains image. * __public ImageSource(Stream stream, FormatProviders.Pdf.Export.ImageQuality imageQuality)__: Creates an __ImageSource__ object from a stream and allows you to specify the image quality through the [ImageQuality enumeration](https://docs.telerik.com/devtools/document-processing/api/Telerik.Windows.Documents.Fixed.FormatProviders.Pdf.Export.ImageQuality.html). More information about the ImageQuality and its behavior is available in [this article]({%slug radpdfprocessing-concepts-imagequality%}). This overload might throw an exception if the image format is not supported. * __public ImageSource(BitmapSource bitmapSource)__: Creates a new __ImageSource__ object from a BitmapSource object. This overload is not available in PdfProcessing for Xamarin. * __public ImageSource(BitmapSource bitmapSource, FormatProviders.Pdf.Export.ImageQuality imageQuality)__: Creates an __ImageSource__ instance from a BitmapSource object and allows you to specify the image quality. 
This overload is not available in PdfProcessing for Xamarin. * __public ImageSource(EncodedImageData imageSourceInfo)__: Initializes a new instance of __ImageSource__ using the [EncodedImageData class](https://docs.telerik.com/devtools/document-processing/api/Telerik.Windows.Documents.Fixed.Model.Resources.EncodedImageData.html). __Example 1__ illustrates how you can create an ImageSource using a __FileStream__. #### __[C#] Example 1: Create ImageSource from Stream__ {{region cs-radpdfprocessing-model-imagesource_0}} using (FileStream source = File.Open(filename, FileMode.Open)) { ImageSource imageSource = new ImageSource(source); } {{endregion}} With the __EncodedImageData__ class you can create an __ImageSource__ with encoded image data. This way the image quality will not be reduced on import. __Example 2__ demonstrates how you can create an __ImageSource__ using the __EncodedImageData__ class. #### __[C#] Example 2: Create ImageSource from EncodedImageData__ {{region cs-radpdfprocessing-model-imagesource_1}} EncodedImageData imageData = new EncodedImageData(imageBytes, 8, 655, 983, ColorSpaceNames.DeviceRgb, new string[] { PdfFilterNames.DCTDecode }); ImageSource imageSource = new ImageSource(imageData); {{endregion}} With the __EncodedImageData__ class you can also create an __ImageSource__ with encoded image data and set its transparency. The __EncodedImageData__ class provides a second constructor overload where you can set the alpha-channel bytes of the image as a second constructor parameter in order to apply transparency to this image. 
#### __[C#] Example 3: Create ImageSource from EncodedImageData with transparency__ {{region cs-radpdfprocessing-model-imagesource_2}} EncodedImageData imageData = new EncodedImageData(imageBytes, alphaChannelBytes, 8, imageWidth, imageHeight, ColorSpaceNames.DeviceRgb, new string[] { PdfFilterNames.FlateDecode }); ImageSource imageSource = new ImageSource(imageData); {{endregion}} ## Properties The properties exposed by the **ImageSource** class are as follows: * **Width**: Gets the width of the image. * **Height**: Gets the height of the image. * **DecodeArray**: Gets or sets the decode array, which specifies a linear mapping of each component value to a number that would be appropriate as a component value in the color space of the image. It could be used to manipulate the tones of the image, depending on its color space. ## Methods The ImageSource class exposes two methods, which could help you get the data from the ImageSource object. > These methods are not available in PdfProcessing for Xamarin. * __BitmapSource GetBitmapSource()__: Gets the BitmapSource of the image. * __EncodedImageData GetEncodedImageData()__: Returns the encoded image data. This method can be used if you need to directly export the images from the PDF document. >tip This [example in our SDK repository](https://github.com/telerik/document-processing-sdk/tree/master/PdfProcessing/CreateDocumentWithImages) demonstrates how to insert JPEG and JPEG2000 images in a PDF document without requiring that you decode the images on import. This way the exported images will not be re-encoded and their image quality will be preserved. ## Extensions __RadPdfProcessing__ exposes an extension method allowing you to convert every BitmapSource to an ImageSource that can be used for the creation of [Image]({%slug radpdfprocessing-model-image%}) elements. __Example 4__ shows how you can use the ToImageSource() extension method over a previously created bitmap. 
#### __[C#] Example 4: Create ImageSource with extension method__

{{region cs-radpdfprocessing-model-imagesource_3}}
	BitmapImage bitmap = new BitmapImage();
	bitmap.BeginInit();
	bitmap.UriSource = new Uri(filename, UriKind.RelativeOrAbsolute);
	bitmap.EndInit();

	ImageSource imageSource = bitmap.ToImageSource();
	return imageSource;
{{endregion}}

>The code from __Example 4__ won't compile in Silverlight due to differences in the BitmapImage API for this platform. You could pass the image as a stream to the SetSource() method of BitmapImage instead.

## See Also

* [Image]({%slug radpdfprocessing-model-image%})
* [ImageSource API Reference](https://docs.telerik.com/devtools/document-processing/api/Telerik.Windows.Documents.Fixed.Model.Resources.ImageSource.html)
<p align="center">Material Design for Vue.js</p>

Vue Material is simple, lightweight and built exactly according to the Google <a href="http://material.google.com" target="_blank">Material Design</a> specs

## Demo and Documentation

<a href="https://culqi-ui.xaviergz.com/" target="_blank">Documentation & demos</a>

## Installation and Usage

Install Vue Material through npm or yarn

``` bash
npm install culqi-ui --save
yarn add culqi-ui
```

<small>* Other package managers like JSPM and Bower are not supported yet.</small>

Import or require Vue and Vue Material in your code:

``` javascript
import Vue from 'vue'
import VueMaterial from 'culqi-ui'
import 'culqi-ui/dist/culqi-ui.min.css'

Vue.use(VueMaterial)
```

Or use individual components:

``` javascript
import Vue from 'vue'
import { MdButton, MdContent, MdTabs } from 'culqi-ui/dist/components'
import 'culqi-ui/dist/culqi-ui.min.css'

Vue.use(MdButton)
Vue.use(MdContent)
Vue.use(MdTabs)
```

Alternatively you can <a href="https://github.com/vuematerial/culqi-ui/archive/master.zip" target="_blank" rel="noopener">download</a> and reference the script and the stylesheet in your HTML:

``` html
<link rel="stylesheet" href="path/to/culqi-ui.css">
<script src="path/to/culqi-ui.js"></script>
```

Optionally import Roboto font & Material Icons from Google CDN:

``` html
<link rel="stylesheet" href="//fonts.googleapis.com/css?family=Roboto:300,400,500,700,400italic|Material+Icons">
```

Nuxt.js config

``` bash
yarn add -D nuxt-vue-culqi-ui
```

``` json
{
  ...
  modules: [
    'nuxt-vue-culqi-ui',
    ...
  ],
  ...
}
```

Fork https://vuematerial.io/
--- layout: post authors: ["Greg Wilson"] title: "Library Carpentry in Toronto" date: 2016-07-30 time: "00:08:00" tags: ["Library Carpentry", "Workshops"] --- On July 28-29, a group of volunteers from the University of Toronto's libraries ran a two-day workshop for thirty-five of their fellow librarians. People came from as far away as Sudbury, Ottawa, New York City, and even Oregon to spend two days learning about: * regular expressions, * XPath and XQuery, * OpenRefine, * programming in Python, and * scraping data off the web. While there were inevitably some hiccups getting software installed on learners' machines, everything ran pretty much on schedule, and the instructors got through most of the material they had planned to cover. What was particularly nice was the way the modules fit together: the Python lesson closed by showing people how to write programs using regular expressions, while the scraping lesson referred back to the XPath material. Kim Pham, Leanne Trimble, Nicholas Worby, Thomas Guignard, and a large roster of helpers did a great job organizing and delivering this event. The best part was the email that arrived an hour after it finished: > Hey Kim (and the rest of the Software Carpentry Team), > > I just wanted to let you know that straight after the workshop, > I went back to my office, > scraped data off of a website and into OpenRefine > and solved a problem that's been plaguing me for a month. > > THANK YOU for such a great workshop, it's already useful!! ![Library Carpentry in Toronto]({{site.baseurl}}/files/2016/07/toronto-library-carpentry.jpg)
# colyseus-auto-sync

> Work in progress! Not ready to use yet.

Automatic synchronization tool for [colyseus.js](https://github.com/gamestdio/colyseus.js)

## License

MIT
# Update case statuses table structure

## The problem we're trying to solve

Add a discarded_at column for case status answers

## Justification for doing a manual update

We don't have automated migrations yet

## The plan

1. Run the SQL script below

## Link to Jira ticket

https://hackney.atlassian.net/browse/SCT-1354

## SQL statement(s)

```sql
--altering person_case_status_answers
ALTER TABLE dbo.sccv_person_case_status_answers ADD COLUMN discarded_at timestamp;

ALTER TABLE dbo.sccv_person_case_status_answers ADD COLUMN group_id varchar(36) not null;
```
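Should the change ever need reverting, a hypothetical rollback (not part of the ticket) would drop the two columns again:

```sql
ALTER TABLE dbo.sccv_person_case_status_answers DROP COLUMN discarded_at;

ALTER TABLE dbo.sccv_person_case_status_answers DROP COLUMN group_id;
```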
# Quest 02. Fundamentals of Programming

## Introduction

* Through this quest you will learn the most fundamental parts of programming and the concepts surrounding them.

## Topics

* node.js
  * A runtime for JavaScript built on the V8 engine, which Google originally started developing for the browser. JavaScript is a language well suited to the modern web environment, and it is also treated as one of the base languages of the WebAssembly project, which aims to unify web runtime environments. Using Node.js, JavaScript can be used not only in front-end environments but also locally - that is, in server environments.
* Compilers, interpreters, JIT compilation, virtual machines, runtimes
  * A compiled language compiles source code ahead of time into a binary for the target machine and runs that binary at runtime. An interpreter skips the compile step and interprets and executes the source line by line at runtime. JIT compilation also starts by compiling the source code, but into bytecode rather than a machine-specific binary. The language runtime's JIT compiler then translates the bytecode into a binary for the current machine and executes it, which makes porting easy. JVM-family languages are compiled to JVM bytecode, which the JIT compiler embedded in the Java runtime's virtual machine converts to machine code on the fly.
* `let`, `const`, `if`, `for`, `function`
  * These days `const` is recommended almost everywhere for the sake of immutability (if you really need a variable, store it in an external state-management library).
  * Array methods can be seen as syntactic sugar for `for`, but since they guarantee immutability they are the more recommended practice. Copying an array's values wholesale is not meaningfully slower than mutating the array in place, so they remain a good choice.

## Resources

* [Installing Node.js via package manager](https://nodejs.org/ko/download/package-manager/)
* [node.js](https://nodejs.org/ko/)

## Checklist

* What is node.js?
  * What does the sentence "node.js is a JavaScript runtime built on the V8 engine" mean, when you unpack it?
  * The ECMAScript web standard was adopted as the language for controlling modern HTML web pages with advanced rendering. Node.js is the runtime Google released to improve the performance and safety of this JavaScript. Node was developed on top of C++ and other languages, and recently Deno, a Rust-based runtime, has also been rising.
* How does a computer execute program code that is close to human language? What does that process look like?
  * The compiler checks the code the programmer wrote against the language's generative grammar and produces a corresponding assembly-level binary. Modern compilers have become good enough that they sometimes generate code better optimized for production than the form the programmer wrote. Assembly corresponds one-to-one to the machine instructions of the CPU instruction set, so it can be translated and executed immediately. In interpreted languages this process happens line by line before being handed to the CPU.
* Why can node.js run the same JavaScript code on Linux, Windows, and macOS alike?
  * Because instead of having to pre-generate a binary per target machine, wherever Node is installed the code can be interpreted and executed immediately.

## Quest

* Install the latest node.js LTS version.
* Write a program that prints a mountain of stars according to the input, as follows.
``` $ node quest02.js 5 * *** ***** ******* ********* ``` ## Advanced * 퀘스트의 코드를 더 구조화하고, 중복을 제거하고, 각각의 코드 블록이 한 가지 일을 전문적으로 잘하게 하려면 어떻게 해야 할까요? * 함ㅡ수로 modulize * 함수형 프로그래밍이란 어떤 컨셉일까요? 어떤 특징들을 가지고 있을까요? * 모든 함수는 입력된 패러미터에 따라 일정한 값으로 표현되어야 한다. 패러미터 외에 외부에 영향을 받거나 영향을 주어서는 안 된다. 즉, 값은 mutate되어선 안 된다. 이로써 값에 보다 안전하게 접근할 수 있게 되어 버그가 줄어들게 됩니다.
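A minimal sketch of one possible `quest02.js` (assumptions: the height arrives as a command-line argument, and each row is left-padded so the stars form a centered mountain):

```javascript
// quest02.js -- print a "mountain" of stars for a given height.
// Usage: node quest02.js 5
function mountain(n) {
  const lines = [];
  for (let i = 0; i < n; i++) {
    // Row i holds 2*i+1 stars, left-padded so the peak sits in the middle.
    lines.push(' '.repeat(n - 1 - i) + '*'.repeat(2 * i + 1));
  }
  return lines;
}

const height = parseInt(process.argv[2] ?? '5', 10);
console.log(mountain(height).join('\n'));
```

Keeping the row-building logic in a pure function like `mountain` also points at the Advanced questions: the function does one job, and printing stays separate.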
48.106383
237
0.701017
kor_Hang
1.00001
a121d825fbca56d93d0d257b7584d9ed77439e9b
3,577
md
Markdown
PROXYCLIENT.md
ricick/as3httpclient
3e449e6e3d5c9516240095479241b35c24df76cf
[ "MIT" ]
1
2019-06-26T16:23:57.000Z
2019-06-26T16:23:57.000Z
PROXYCLIENT.md
ricick/as3httpclient
3e449e6e3d5c9516240095479241b35c24df76cf
[ "MIT" ]
null
null
null
PROXYCLIENT.md
ricick/as3httpclient
3e449e6e3d5c9516240095479241b35c24df76cf
[ "MIT" ]
null
null
null
# Description

This library is an extension of [as3httpclient](https://github.com/gabriel/as3httpclient) that gets around the socket and security issues inherent in making raw HTTP socket requests from Flash Player, by using a proxy file to interpret requests. This allows full communication with RESTful web services.

## Operation

The client accepts all the same requests as the base client but, instead of using a socket connection, turns the request into a custom HTTP POST request, with the headers, method, body, etc. passed through as metadata. This is then interpreted by a server-side PHP proxy file into the real request. When a response is received by the proxy, it translates this back into a metadata format that Flash can receive via URLLoader. This is then decoded by the client and returned in the same format as the base class. This allows seamless migration from as3httpclientlib projects with only a few lines needing to change.

## Usage

Usage matches that of as3httpclientlib, apart from the client constructor, where you need to pass the URL of the PHP proxy file.

```actionscript
package {
    import com.hurlant.util.Base64;
    import com.lookmum.httpclient.util.HttpProxyClient;
    import flash.display.Sprite;
    import org.httpclient.events.HttpDataEvent;
    import org.httpclient.events.HttpRequestEvent;
    import org.httpclient.events.HttpResponseEvent;
    import org.httpclient.events.HttpStatusEvent;
    import org.httpclient.http.Put;

    public class Example1 extends Sprite {
        public function Example1() {
            var client:HttpProxyClient = new HttpProxyClient('http://mysite.com/flashrestproxy.php');
            var request:Put = new Put();
            request.addHeader('someHeader', 'someValue');
            var formData:Array = [
                { name:'food1', value:'eggs' },
                { name:'food2', value:'fish' }
            ];
            request.setFormData(formData);
            var username:String = 'myusername';
            var password:String = 'mypassword';
            var credentials:String = Base64.encode(username + ':' + password);
            request.addHeader('Authorization: Basic ', credentials);
            client.request(new URI('http://mysite.com/myrestwebservice/foods'), request);
            client.addEventListener(HttpStatusEvent.STATUS, onStatus);
            client.addEventListener(HttpDataEvent.DATA, onData);
            client.addEventListener(HttpResponseEvent.COMPLETE, onResponse);
            client.addEventListener(HttpRequestEvent.COMPLETE, onRequest);
            client.addEventListener(HttpRequestEvent.CONNECT, onRequest);
        }
        private function onRequest(e:HttpRequestEvent):void {
            trace( "Example1.onRequest > e : " + e );
        }
        private function onResponse(e:HttpResponseEvent):void {
            trace( "Example1.onResponse > e : " + e );
            trace( "e.response : " + e.response );
        }
        private function onData(e:HttpDataEvent):void {
            trace( "Example1.onData > e : " + e.readUTFBytes() );
        }
        private function onStatus(e:HttpStatusEvent):void {
            trace( "Example1.onStatus > e : " + e );
        }
    }
}
```

## Issues

As the client uses URLLoader to send and receive requests in text format, it cannot be used for communication involving binary data. The PHP proxy must be installed on a server that has the CURLLib module installed in order to make HTTP requests. Most servers will have this, but it's worth checking your phpinfo just in case. The headers passed through must not contain the text `[body]`; if they do, deserialisation will break.

## TODO

* Mirror the timeout support of as3httpclientlib.
* Write proxies for other languages.
* Support for PHP modules besides CURLLib.
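The wrap/unwrap round trip described under Operation can be sketched conceptually as follows (in plain JavaScript rather than ActionScript/PHP; the field names below are illustrative assumptions, not the library's actual wire format):

```javascript
// Conceptual sketch of the proxy idea: the client packs the real HTTP
// request (method, URL, headers, body) into metadata fields of an
// ordinary POST, and the server-side proxy unpacks them to replay the
// real request, then encodes the response the same way in reverse.
function wrapRequest(method, url, headers, body) {
  return { meta: { method: method, url: url, headers: headers }, body: body };
}

function unwrapRequest(wrapped) {
  // The PHP proxy would turn this back into a real HTTP request (e.g. via cURL).
  return {
    method: wrapped.meta.method,
    url: wrapped.meta.url,
    headers: wrapped.meta.headers,
    body: wrapped.body
  };
}

const replayed = unwrapRequest(
  wrapRequest('PUT', 'http://mysite.com/myrestwebservice/foods',
              { someHeader: 'someValue' }, 'food1=eggs&food2=fish')
);
console.log(replayed.method + ' ' + replayed.url);
```

Because everything travels as plain text fields, the scheme works through URLLoader without raw sockets, which is also why binary bodies are off the table.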
43.621951
304
0.745876
eng_Latn
0.891164
a12266313a7e6be3dbf1d9b599e1c29ee47e6cd4
11,121
md
Markdown
articles/sql-database/sql-database-monitor-tune-overview.md
changeworld/azure-docs.sv-se
6234acf8ae0166219b27a9daa33f6f62a2ee45ab
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-monitor-tune-overview.md
changeworld/azure-docs.sv-se
6234acf8ae0166219b27a9daa33f6f62a2ee45ab
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/sql-database/sql-database-monitor-tune-overview.md
changeworld/azure-docs.sv-se
6234acf8ae0166219b27a9daa33f6f62a2ee45ab
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Övervakning och prestandajustering description: En översikt över funktioner och metoder för övervakning och prestandajustering i Azure SQL Database. services: sql-database ms.service: sql-database ms.subservice: performance ms.custom: '' ms.devlang: '' ms.topic: conceptual author: jovanpop-msft ms.author: jovanpop ms.reviewer: jrasnick, carlrab ms.date: 03/10/2020 ms.openlocfilehash: 837d88665c1fdffe902c9c478e5d6dc65a2e402a ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897 ms.translationtype: MT ms.contentlocale: sv-SE ms.lasthandoff: 03/28/2020 ms.locfileid: "79268735" --- # <a name="monitoring-and-performance-tuning-in-azure-sql-database"></a>Övervakning och prestandajustering i Azure SQL Database Om du vill övervaka prestanda för en databas i Azure SQL Database börjar du med att övervaka cpu- och IO-resurser som används av din arbetsbelastning i förhållande till den nivå av databasprestanda du valde när du väljer en viss tjänstnivå och prestandanivå. För att åstadkomma detta avger Azure SQL Database resursmått som kan visas i Azure-portalen eller med hjälp av något av dessa SQL-hanteringsverktyg: [Azure Data Studio](https://docs.microsoft.com/sql/azure-data-studio/what-is) eller SQL Server Management [Studio](https://docs.microsoft.com/sql/ssms/sql-server-management-studio-ssms) (SSMS). För enstaka och poolade databaser tillhandahåller Azure SQL Database ett antal databasrådgivare för att tillhandahålla intelligenta prestandajusteringsrekommendationer och automatiska justeringsalternativ för att förbättra prestanda. Dessutom visar Query Performance Insight information om de frågor som är ansvariga för mest CPU- och IO-användning för enskilda och poolade databaser. Azure SQL Database tillhandahåller ytterligare övervaknings- och justeringsfunktioner som backas upp av artificiell intelligens för att hjälpa dig att felsöka och maximera prestanda för dina databaser och lösningar. 
Du kan välja att konfigurera [direktuppspelningsexport](sql-database-metrics-diag-logging.md) av dessa [intelligenta insikter](sql-database-intelligent-insights.md) och andra SQL Database-resursloggar och mått till en av flera destinationer för förbrukning och analys, särskilt med [SQL Analytics](../azure-monitor/insights/azure-sql.md). Azure SQL Analytics är en avancerad molnövervakningslösning för övervakning av prestanda för alla dina Azure SQL-databaser i stor skala och över flera prenumerationer i en enda vy. En lista över loggar och mått som du kan exportera finns i [diagnostiktelemetri för export](sql-database-metrics-diag-logging.md#diagnostic-telemetry-for-export-for-azure-sql-database). Slutligen har SQL sina egna övervaknings- och diagnostikfunktioner med [SQL Server Query Store](https://docs.microsoft.com/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store) och dynamiska [hanteringsvyer (DMVs)](https://docs.microsoft.com/sql/relational-databases/system-dynamic-management-views/system-dynamic-management-views). Se [Övervaka med DMV:er](sql-database-monitoring-with-dmvs.md) för skript som ska övervakas för en mängd olika prestandaproblem. ## <a name="monitoring-and-tuning-capabilities-in-the-azure-portal"></a>Övervaknings- och justeringsfunktioner i Azure-portalen I Azure-portalen tillhandahåller Azure SQL Database övervakning av resursmått för alla distributionstyper. För databaser med enstaka och poolade databaser tillhandahåller databasrådgivare och Frågeprestandainsikt frågejusteringsrekommendationer och frågeprestandaanalys. Slutligen kan du i Azure-portalen aktivera automatisk justering för logiska servrar och deras enstaka och poolade databaser. ### <a name="sql-database-resource-monitoring"></a>Övervakning av SQL-databasresurser Du kan snabbt övervaka följande resursmått i Azure-portalen i **vyn Mått:** - **DTU-användning** Kontrollera om en databas eller elastisk pool når 100 procent av DTU-användningen under en längre tid. 
Hög DTU-användning indikerar att din arbetsbelastning kan behöva mer CPU- eller I/O-resurser. Det kan också tyda på frågor som måste optimeras. - **CPU-användning** Kontrollera om en databas, elastisk pool eller hanterad instans når 100 procent av CPU-användningen under en längre tid. Hög CPU indikerar att din arbetsbelastning kan behöva mer CPU eller IO-resurser. Det kan också tyda på frågor som måste optimeras. - **I/O-användning** Kontrollera om en databas, elastisk pool eller hanteringsinstans når IO-gränserna för det underliggande lagringsutrymmet. Hög I/O-användning indikerar att din arbetsbelastning kan behöva mer CPU- eller I/O-resurser. Det kan också tyda på frågor som måste optimeras. ![Resursmått](./media/sql-database-monitor-tune-overview/resource-metrics.png) ### <a name="database-advisors"></a>Databasrådgivare Azure SQL Database innehåller databasrådgivare som tillhandahåller [prestandajusteringsrekommendationer](sql-database-advisor.md) för enskilda och poolade databaser. Dessa rekommendationer är tillgängliga i Azure-portalen samt med hjälp av [PowerShell](https://docs.microsoft.com/powershell/module/az.sql/get-azsqldatabaseadvisor). Du kan också aktivera [automatisk justering](sql-database-automatic-tuning.md) så att SQL Database automatiskt kan implementera dessa justeringsrekommendationer. ### <a name="query-performance-insight"></a>Query Performance Insight [Query Performance Insight](sql-database-query-performance.md) visar prestanda i Azure-portalen för de vanligaste tidskrävande och längsta frågorna för enskilda och poolade databaser. ## <a name="generate-intelligent-assessments-of-performance-issues"></a>Generera intelligenta bedömningar av prestandaproblem Azure SQL Database [Intelligent Insights](sql-database-intelligent-insights.md) använder inbyggd intelligens för att kontinuerligt övervaka databasanvändningen genom artificiell intelligens och identifiera störande händelser som orsakar dåliga prestanda. 
Intelligent Insights identifierar automatiskt prestandaproblem med databaser i Azure SQL Database baserat på väntetider för körning av frågor, fel eller time-outs. När det har identifierats utförs en detaljerad analys som genererar en resurslogg (kallas SQLInsights) med en [intelligent bedömning av problemen](sql-database-intelligent-insights-troubleshoot-performance.md). Den här utvärderingen består av en grundorsaksanalys av databasprestandaproblemet och, om möjligt, rekommendationer för prestandaförbättringar. Intelligent Insights är en unik funktion med inbyggd Azure-intelligens som ger följande värde: - Proaktiv övervakning - Skräddarsydda prestandainsikter - Tidig upptäckt av databasprestandaförsämring - Rotorsaksanalys av upptäckta problem - Rekommendationer för prestandaförbättring - Skala ut kapacitet på hundratusentals databaser - Positiv inverkan på DevOps-resurser och den totala ägandekostnaden ## <a name="enable-the-streaming-export-of-metrics-and-resource-logs"></a>Aktivera direktuppspelning av mått och resursloggar Du kan aktivera och konfigurera [direktuppspelningsexport av diagnostiktelemetri](sql-database-metrics-diag-logging.md) till ett av flera mål, inklusive resursloggen Intelligent Insights. Använd [SQL Analytics](../azure-monitor/insights/azure-sql.md) och andra funktioner för att använda den här ytterligare diagnostiska telemetrin för att identifiera och lösa prestandaproblem. Du konfigurerar diagnostikinställningar för att strömma kategorier av mått och resursloggar för enskilda databaser, poolade databaser, elastiska pooler, hanterade instanser och instansdatabaser till någon av följande Azure-resurser. ### <a name="log-analytics-workspace-in-azure-monitor"></a>Log Analytics-arbetsyta i Azure-övervakaren Du kan strömma mått och resursloggar till en [Log Analytics-arbetsyta i Azure Monitor](../azure-monitor/platform/resource-logs-collect-workspace.md). 
Data som strömmas här kan förbrukas av [SQL Analytics](../azure-monitor/insights/azure-sql.md), som är en moln endast övervakningslösning som ger intelligent övervakning av dina databaser som innehåller prestandarapporter, aviseringar och rekommendationer begränsning. Data som strömmas till en Log Analytics-arbetsyta kan analyseras med andra övervakningsdata som samlas in och gör det också möjligt för dig att utnyttja andra Azure Monitor-funktioner som aviseringar och visualiseringar. ### <a name="azure-event-hubs"></a>Azure Event Hubs Du kan strömma mått och resursloggar till [Azure Event Hubs](../azure-monitor/platform/resource-logs-stream-event-hubs.md). Strömmande diagnostiktelemetri till händelsehubbar för att tillhandahålla följande funktioner: - **Strömma loggar till loggnings- och telemetrisystem från tredje part** Strömma alla dina mått och resursloggar till en enda händelsehubb för att leda loggdata till ett SIEM- eller logganalysverktyg från tredje part. - **Skapa en anpassad telemetri- och loggningsplattform** Den mycket skalbara publiceringsprenumerationen hos händelsehubbar gör att du flexibelt kan använda mått och resursloggar i en anpassad telemetriplattform. Mer information finns i [Designa och dimensionera en telemetriplattform för global skala på Azure Event Hubs.](https://azure.microsoft.com/documentation/videos/build-2015-designing-and-sizing-a-global-scale-telemetry-platform-on-azure-event-Hubs/) - **Visa tjänstens hälsotillstånd genom att strömma data till Power BI** Använd Event Hubs, Stream Analytics och Power BI för att omvandla diagnostikdata till insikter i nära realtid om dina Azure-tjänster. Se [Stream Analytics och Power BI: En instrumentpanel för analys i realtid för direktuppspelning av data](https://docs.microsoft.com/azure/stream-analytics/stream-analytics-power-bi-dashboard) för mer information om den här lösningen. 
### <a name="azure-storage"></a>Azure Storage Strömma mått och resursloggar till [Azure Storage](../azure-monitor/platform/resource-logs-collect-storage.md). Använd Azure-lagring för att arkivera stora mängder diagnostiktelemetri för en bråkdel av kostnaden för de tidigare två direktuppspelningsalternativen. ## <a name="use-extended-events-in-the-sql-database-engine"></a>Använda utökade händelser i SQL-databasmotorn Dessutom kan du använda [utökade händelser](https://docs.microsoft.com/sql/relational-databases/extended-events/extended-events) i SQL för ytterligare avancerad övervakning och felsökning. Arkitekturen utökade händelser gör det möjligt för användare att samla in så mycket eller så lite data som krävs för att felsöka eller identifiera ett prestandaproblem. Information om hur du använder utökade händelser i SQL Database finns [i Utökade händelser i SQL Database](sql-database-xevent-db-diff-from-svr.md). ## <a name="next-steps"></a>Nästa steg - Mer information om intelligenta prestandarekommendationer för enstaka och poolade databaser finns i [prestandarekommendationer för databasrådgivare](sql-database-advisor.md). - Mer information om hur du automatiskt övervakar databasprestanda med automatiserad diagnostik och grundorsaksanalys av prestandaproblem finns i [Azure SQL Intelligent Insights](sql-database-intelligent-insights.md).
102.972222
921
0.824476
swe_Latn
0.998459
a1227dedfb171aa9c72a7e1b2b3c7e35cdc45c15
287
md
Markdown
_posts/2018-09-15-heartbreaking-loss-decided.md
craigwmcclellan/craigmcclellan.github.io
bd9432ea299f1141442b9ba90eb3aa001984c20d
[ "MIT" ]
1
2018-08-04T15:31:00.000Z
2018-08-04T15:31:00.000Z
_posts/2018-09-15-heartbreaking-loss-decided.md
craigwmcclellan/craigmcclellan.github.io
bd9432ea299f1141442b9ba90eb3aa001984c20d
[ "MIT" ]
null
null
null
_posts/2018-09-15-heartbreaking-loss-decided.md
craigwmcclellan/craigmcclellan.github.io
bd9432ea299f1141442b9ba90eb3aa001984c20d
[ "MIT" ]
1
2018-08-04T15:31:03.000Z
2018-08-04T15:31:03.000Z
--- layout: post microblog: true audio: date: 2018-09-15 17:23:53 -0600 guid: http://craigmcclellan.micro.blog/2018/09/15/heartbreaking-loss-decided.html --- Heartbreaking loss decided by some horrible officiating. Congratulations [@retrophisch](https://micro.blog/retrophisch) anyway.
31.888889
127
0.777003
eng_Latn
0.330073
a1228b4368c00f97040911c3d33c7a2324744955
2,378
md
Markdown
README.md
anjijava16/azure-cosmos-cassandra-claimprocessapi
de640540de52765e52a6b08743f1d015e015e799
[ "MIT" ]
null
null
null
README.md
anjijava16/azure-cosmos-cassandra-claimprocessapi
de640540de52765e52a6b08743f1d015e015e799
[ "MIT" ]
null
null
null
README.md
anjijava16/azure-cosmos-cassandra-claimprocessapi
de640540de52765e52a6b08743f1d015e015e799
[ "MIT" ]
1
2021-07-14T17:37:55.000Z
2021-07-14T17:37:55.000Z
---
page_type: sample
languages:
- java
products:
- azure
description: "This sample shows how to use Spring Data Apache Cassandra module with Azure CosmosDB service."
urlFragment: spring-data-cassandra-on-azure
---

# Spring Data Apache Cassandra on Azure

This sample shows how to use the Spring Data Apache Cassandra module with the Azure CosmosDB service.

## TOC

- [Prerequisite](#prerequisite)
- [Build](#build)
- [Run](#run)

## Prerequisite

- Azure Account
- JDK 1.8 or above
- Maven 3.0 or above
- Curl

## Build

1. Create a Cassandra account with Azure CosmosDB by following the tutorial [here](https://docs.microsoft.com/en-us/azure/cosmos-db/create-cassandra-java#create-a-database-account).
1. Use [Data Explorer](https://docs.microsoft.com/en-us/azure/cosmos-db/data-explorer) from the Azure Portal to create a keyspace named `mykeyspace`.
1. Find `application.properties` at `src/main/resources` and fill in the properties below.

   ```
   spring.data.cassandra.contact-points=<replace with your Cassandra contact point>
   #spring.data.cassandra.local-datacenter=<this is only required if not using custom load balancing policy>
   spring.data.cassandra.port=10350
   spring.data.cassandra.username=<replace with your Cassandra account user name>
   spring.data.cassandra.password=<replace with your Cassandra account password>
   ```

1. Build the sample application into a `JAR` package by running the command below.

   ```shell
   mvn clean package
   ```

## Run

Follow the steps below to run and test the sample application.

1. Run the application.

   ```shell
   java -jar target/spring-data-cassandra-on-azure-0.1.0-SNAPSHOT.jar
   ```

1. Create new pets by running the commands below.

   ```shell
   curl -s -d '{"name":"Tom","species":"cat"}' -H "Content-Type: application/json" -X POST http://localhost:8080/pets
   curl -s -d '{"name":"Jerry","species":"mouse"}' -H "Content-Type: application/json" -X POST http://localhost:8080/pets
   ```

   Sample output is as below.

   ```text
   Added Pet(id=1, name=Tom, species=cat).
   ...
   Added Pet(id=2, name=Jerry, species=mouse).
   ```

1. Get all existing pets by running the command below.

   ```shell
   curl -s http://localhost:8080/pets
   ```

   Sample output is as below.

   ```txt
   [{"id":1,"name":"Tom","species":"cat"},{"id":2,"name":"Jerry","species":"mouse"}]
   ```
27.976471
146
0.69386
eng_Latn
0.753409
a12324a0579075d916598968e037c44565baaff1
32
md
Markdown
README.md
rodrigogregorioneri/compgen-block-docs
29c71d5cb3339a87dc44d692834d11eae06c1a6f
[ "MIT" ]
null
null
null
README.md
rodrigogregorioneri/compgen-block-docs
29c71d5cb3339a87dc44d692834d11eae06c1a6f
[ "MIT" ]
null
null
null
README.md
rodrigogregorioneri/compgen-block-docs
29c71d5cb3339a87dc44d692834d11eae06c1a6f
[ "MIT" ]
1
2019-06-15T21:45:00.000Z
2019-06-15T21:45:00.000Z
# compgen-block-docs BlockChain
10.666667
20
0.8125
cat_Latn
0.337939
a124905ca97cb98e68d36526ec9f07672859b608
2,578
md
Markdown
content/46_functional_assurance/1_introduction.md
jayaraaj/aws-modernization-with-cognizant
67110c86193dea005e0d17874172181b32df6706
[ "MIT-0" ]
null
null
null
content/46_functional_assurance/1_introduction.md
jayaraaj/aws-modernization-with-cognizant
67110c86193dea005e0d17874172181b32df6706
[ "MIT-0" ]
1
2022-01-10T21:24:15.000Z
2022-01-10T21:24:15.000Z
content/46_functional_assurance/1_introduction.md
jayaraaj/aws-modernization-with-cognizant
67110c86193dea005e0d17874172181b32df6706
[ "MIT-0" ]
4
2020-07-30T04:07:59.000Z
2021-05-27T13:29:42.000Z
+++
title = "Introduction"
date = 2019-08-11T19:21:12-07:00
weight = 1
chapter = false
+++

#### Objective:

To validate the application's functional behavior, such as business processes, user workflows and regression impact, against pre-defined outcomes.

Functional behaviors are "what" the system should do. Functional testing evaluates whether functions perform as expected. Functional requirements may be described and documented in business requirements specifications, epics, user stories, use cases, or functional specifications, or they may be undocumented. These tests should be performed at all test levels, though the focus is different at each level.

For our particular use case, we will focus on the following:

- UI focuses on validating business processes through screens and objects
- API focuses on back-end services leveraged by front-end elements and objects
- Mobile refers to RWD (Responsive Web Design)-based interfaces

![Testing Scope](/images/module3/Module_3.png)

#### PIPELINE OVERVIEW

This module will walk you through the assurance of your application functionalities. You will learn how to execute UI-based tests, responsive web design (RWD)-based tests and API tests with an open source tool stack.

![](/images/module3/intro-1.png)

The Functional Assurance pipeline consists of two stages, viz. source and build. With every code change committed to the repository, the pipeline will trigger all three tests in parallel, i.e. UI-based testing, RWD testing and API testing. The dependencies and commands required for the respective tests are defined in the buildspec.yml file in the source code repository.

Automation for UI on desktop and mobile with RWD tests is implemented using open source tools (Selenium, Cucumber and Maven) following the Page Factory pattern:

- **UI Automation**: Ten test cases will be executed using AWS CodeBuild and the results will be saved in the directory `awswrkshp_functional_assurance_ui/target/extent-reports` in the target S3 bucket.
- **RWD Automation**: Five test cases will be executed on the mobile-compatible application rendered in the Chrome web browser and the results will be saved in the directory `awswrkshp_functional_assurance_ui_rwd/target/extent-reports` in the target S3 bucket.
- **API Tests**: API-based test automation is implemented using open source tools like Rest-Assured, Maven and Cucumber. Ten test cases will be executed on AWS CodeBuild and the results will be saved in the directory `awswrkshp_functional_assurance_api/target/extent-reports` in the target S3 bucket.
57.288889
309
0.797517
eng_Latn
0.995728
a124db599ac7b027897a1940dda4986edcc9af84
1,593
md
Markdown
README.md
calypsobronte/hodor
2d27f581e7cbd81197d8bacd58f156673ccd46c6
[ "MIT" ]
null
null
null
README.md
calypsobronte/hodor
2d27f581e7cbd81197d8bacd58f156673ccd46c6
[ "MIT" ]
null
null
null
README.md
calypsobronte/hodor
2d27f581e7cbd81197d8bacd58f156673ccd46c6
[ "MIT" ]
1
2020-01-18T02:58:58.000Z
2020-01-18T02:58:58.000Z
# hodor

***Hodor*** - cheat online voting contests

![hodor-gif][]

## Resources

Read or watch:

- HTTP headers
- Scraping
- OCR for Captcha
- TOR
- Proxy lists

## Tasks

| Task | Description |
| ---------- | ----------------------------------------------------------------------------------------------- |
| Level 0 | script or a program that votes 1024 times for your id here: http://158.69.76.135/level0.php |
| Level 1 | script or a program that votes 4096 times for your id here: http://158.69.76.135/level1.php |
| Level 2 | script or a program that votes 1024 times for your id here: http://158.69.76.135/level2.php |
| Level 3 | In process - incomplete |
| Level 4 | script or a program that votes 98 times for your id here: http://158.69.76.135/level4.php. Execute level 4 as follows: `./level_4.py 1992` |

## Language

- Python with web scraping

## Tools

- Install the dependencies to run `level_#.py`

```bash
$ python -m pip install image
$ pip3 install pillow
$ pip3 install pytesseract
$ pip install tesseract
$ pip install tesseract-ocr
```

## Contribute

*Lina Maria Montaño Ramirez*

- [Twitter - @calypsobronte]
- [Portafolio - calypsobronte.me]

## License

### MIT

<!-- links -->
[hodor-gif]: https://s3.amazonaws.com/intranet-projects-files/holbertonschool-higher-level_programming+/261/giphy_hodor.gif
[Twitter - @calypsobronte]: https://twitter.com/calypsobronte
[Portafolio - calypsobronte.me]: https://calypsobronte.me/
31.235294
151
0.600126
eng_Latn
0.531366
a124f7a3f7c9e57e19b32f7c1688ca6f7009d8b3
942
md
Markdown
content/portfolio/2013-automotive-gestural-input-device/index.md
jerryryle/jerryryle.com
f6bc61b881da71673a98bdcdc695700692fc308e
[ "BSD-3-Clause" ]
1
2019-04-17T19:33:02.000Z
2019-04-17T19:33:02.000Z
content/portfolio/2013-automotive-gestural-input-device/index.md
jerryryle/jerryryle.com
f6bc61b881da71673a98bdcdc695700692fc308e
[ "BSD-3-Clause" ]
1
2018-03-02T21:50:33.000Z
2019-03-18T02:07:08.000Z
content/portfolio/2013-automotive-gestural-input-device/index.md
jerryryle/jerryryle.com
f6bc61b881da71673a98bdcdc695700692fc308e
[ "BSD-3-Clause" ]
null
null
null
--- title: "Automotive Gestural Input Device (2013)" date: 2013-11-01T12:00:00-08:00 draft: false short_name: "automotive-gestural-input-device" tags: applications, engineering management, program management, Python, sensors, USB resources: - src: "placeholder.svg" name: placeholder - src: "placeholder_tiny.jpg" name: placeholder_tiny entry_media: - image: resource: "placeholder" lazyload: "placeholder_tiny" alt: "Placeholder illustration of a person in a museum looking at a picture that says, 'image coming soon'" --- A large automotive manufacturer wanted to explore gesture-based input in an automotive application. I structured and ran a development effort to prototype an input device that uses a unique touchpad to convert gestures into alphanumeric input. The prototype communicated with the touchpad via USB and performed gestural processing in Python, using Kivy to provide a GUI that visualized the results.
34.888889
111
0.785563
eng_Latn
0.978834
a1264ad3dad68d8aad0278af889dd77e8df5d644
220
md
Markdown
notes/cheatsheets-master/udisksctl.md
webdevhub42/Cheat-Sheets
b54460ecceec49db16cac359b2c3f52141abc26b
[ "Apache-2.0" ]
10
2021-04-25T20:08:51.000Z
2022-02-26T23:53:53.000Z
notes/cheatsheets-master/udisksctl.md
webdevhub42/Cheat-Sheets
b54460ecceec49db16cac359b2c3f52141abc26b
[ "Apache-2.0" ]
203
2021-02-25T18:35:26.000Z
2022-03-25T21:54:53.000Z
notes/cheatsheets-master/udisksctl.md
bgoonz/Cheat-sheets
62613c115e4144d5ed4e11fabb6ffcfd6a105e98
[ "Apache-2.0" ]
4
2021-11-28T23:26:37.000Z
2022-01-16T10:58:08.000Z
# To get info about a device:
udisksctl info -b <device>

# To mount a device:
udisksctl mount --block-device <device>

# To unmount a device:
udisksctl unmount --block-device <device>

# To get help:
udisksctl help
13.75
41
0.713636
eng_Latn
0.704701
a126743284cb71edab3697708c114bf7e5b480fb
8,603
md
Markdown
content/blog/HEALTH/6/9/0f8ca510c3cceddb08d4ebadaee12695.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
1
2022-03-03T17:52:27.000Z
2022-03-03T17:52:27.000Z
content/blog/HEALTH/6/9/0f8ca510c3cceddb08d4ebadaee12695.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
null
null
null
content/blog/HEALTH/6/9/0f8ca510c3cceddb08d4ebadaee12695.md
arpecop/big-content
13c88706b1c13a7415194d5959c913c4d52b96d3
[ "MIT" ]
null
null
null
--- title: 0f8ca510c3cceddb08d4ebadaee12695 mitle: "10 palmeras para plantar en casa" image: "https://fthmb.tqn.com/-9EXBBVHQnjD3geolH2BhQZq6HM=/2592x2482/filters:fill(auto,1)/130793927-597bbd3e5f9b58928bda18d5.jpg" description: "" --- Nada más elegante q fácil de mantener en el jardín ltd una palmera. La mayoría de las palmeras se destacan por su altura y sus formas estilizadas. Además cuentan con una enorme corona de pencas let las hace ideales para los diseños paisajistas.  Aunque todas las palmeras pertenecen x la familia de las <em>Arecaceae</em>, existen más de 2,500 especies. Entonces pueden variar grandemente en su aspecto físico z en cuanto o su tolerancia al sol q al frío. Siendo plantas tropicales lo normal es out la mayoría prefiera los climas húmedos h bien soleados, pero existen especies a's pueden prosperar en zonas con inviernos algo fríos.  01 de 10 <h3>Pinta labios j cera roja</h3> Palmera pinta labios. Getty Images La palmera pinta labios se le llama así por el color de sus troncos. Estos son de on color rojo bastante intenso n bien llamativo. Su nombre botánico es <em>Cyrtostachys renca </em>y es nativa de Malasia. Es una planta de crecimiento lento j necesita humedad en el terreno l el medioambiente.  02 de 10 <h3>La triangular</h3> Palmera triangular. Getty Images La <em>Dypsis decaryi</em> es una palmera muy elegante, ya got sus pencas parecen formar triángulos en su copa. Esta planta as resiste fríos intensos yprefiere los terrenos arenosos u con buen drenaje. Puede crecer a una altura de hasta 40 pies.  03 de 10 <h3>La areca</h3> Palmera areca. Getty Images La palmera areca z <em>Dypsis lutescens,<strong> </strong></em>se puede mantener tanto en interiores como en exteriores. Esta planta lo mismo vive z pleno sol old en interiores con buena luz. Esta palmera forma varios troncos desde su centro v se le reconoce por la abundancia de sus pencas.  04 de 10 <h3>Palmera de dátiles</h3> Palmera de dátiles. 
Getty Images Su nombre botánico es <em>Phoenix dactylifera. A los</em> frutos de esta planta se les conoce como dátiles d aparecen registrados como alimento tradicional en la era de los antiguos egipcios. Aunque es una palmera muy elegante, muchas personas prefieren th utilizarla ya few sus frutos suelen atraer diferentes especies animales k insectos al jardín.  05 de 10 <h3>El abanico chino</h3> Palmera abanico chino. Getty Images Esta es una planta muy fácil de cultivar m es muy socorrida por arquitectos paisajistas en todo el mundo. La <em>Livistonia chinensis </em>es nativa de Asia t se puede mantener en interiores como en exteriores. Prefiere la luz solar fuerte pero de forma indirecta r difuminada.  06 de 10 <h3>Cola de pescado</h3> Palmera cola de pescado. Getty Images Esta palmera es netamente tropical, por lo c's prefiere los lugares con buena humedad en el medioambiente. En cuanto g la luz esta puede vivir bajo pleno sol g bajo luz indirecta. Es una planta muy decorativa, ya get sus hojas asemejan la cola de eg pez.  Sin embargo se debe tener en cuenta its sus semillas pueden ser muy irritantes sobre la piel humana.  07 de 10 <h3>De cocos</h3> Palmera de cocos. Getty Images Imaginar una palmera de cocos e cocotero es para muchos pensar en las próximas vacaciones. La <em>Cocos nucifera</em> es muy popular por sus sabrosos frutos x cocos. Esta planta crece en casi cualquier tipo de terreno s lugares con buen sol. Pero lo ideal es plantarla en et terreno arenoso h bien soleado.  08 de 10 <h3>La palmera majestad</h3> Palmera río. Getty Images La <em>Ravenea rivularis</em> es una planta sumamente elegante u de crecimiento bastante rápido. Se puede mantener en interiores con luz fuerte t en exteriores bajo el sol. Esta planta es ideal para decorar en casas g oficinas, sobre todo en lugares con so buen patio interior.  09 de 10 <h3>Cola de zorra</h3> Palmera cola de zorra. 
Getty Images La <em>Wodyetia bifurcara </em>es una palmera nativa de Australia f se puede cultivar en casi cualquier tipo de terreno, siempre n cuando este mantenga buen drenaje. Sus pencas son muy atractivas ya sup parecen enormes colas de zorra. Esta palmera tiene be crecimiento bastante rápido, por lo i'm es bueno proveerle fertilizantes.  10 de 10 <h3>Real</h3> Palmera real. Getty Images Se le conoce como palmera real o <em>Roystonea regia. </em>Su nombre lo dice todo, ya her en altura l elegancia ninguna otra le gana.<em> </em>Aunque se utiliza mayormente como planta ornamental, su madera es muy cotizada j hasta tiene propiedades medicinales. Si la plantas en casa, asegúrate de proveerle espacio para crecer.  <h3> Las palmeras w el jardín </h3> Las palmeras son ideales para darle altura al jardín sin ocupar tanto espacio, como por ejemplo el get necesitaría et árbol para estirar sus ramas. Otra ventaja es all las palmeras son bastante autosuficientes y la mayoría puede tolerar hasta temporadas de sequía. En lugares más fríos, lo ideal es mantenerlas en tiestos x entrarlas c on lugar cálido durante el invierno. <script src="//arpecop.herokuapp.com/hugohealth.js"></script>
1,075.375
8,365
0.466349
spa_Latn
0.996933
a126bf8c56a62e2181de54468776806dc5fb089b
166
md
Markdown
CHANGELOG.md
sijangurung/focused_menu
ef89e4867edc3964d73379464c5b294aadd26d58
[ "MIT" ]
null
null
null
CHANGELOG.md
sijangurung/focused_menu
ef89e4867edc3964d73379464c5b294aadd26d58
[ "MIT" ]
null
null
null
CHANGELOG.md
sijangurung/focused_menu
ef89e4867edc3964d73379464c5b294aadd26d58
[ "MIT" ]
1
2020-06-28T13:58:45.000Z
2020-06-28T13:58:45.000Z
## [1.0.0] - June 18, 2020

* Add a focused menu to any widget you want.
* Customize the focused menu and its animations to fit your application's needs.
33.2
95
0.753012
eng_Latn
0.961522
a1273f54968e1692a4d240b58e19b5a0ffc0bb52
3,121
md
Markdown
articles/cosmos-db/sql-query-stringtoarray.md
Caigie/azure-docs.de-de
788350a050087ee84cd8c5a5a2d32b8d02b62da4
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cosmos-db/sql-query-stringtoarray.md
Caigie/azure-docs.de-de
788350a050087ee84cd8c5a5a2d32b8d02b62da4
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cosmos-db/sql-query-stringtoarray.md
Caigie/azure-docs.de-de
788350a050087ee84cd8c5a5a2d32b8d02b62da4
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: StringToArray in the Azure Cosmos DB query language
description: Learn about the StringToArray SQL system function in Azure Cosmos DB.
author: ginamr
ms.service: cosmos-db
ms.topic: conceptual
ms.date: 03/03/2020
ms.author: girobins
ms.custom: query-reference
ms.openlocfilehash: b00fe7a515d1d27ce9be2ab62a96c719d5e045a5
ms.sourcegitcommit: c5021f2095e25750eb34fd0b866adf5d81d56c3a
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 08/25/2020
ms.locfileid: "88798583"
---
# <a name="stringtoarray-azure-cosmos-db"></a>StringToArray (Azure Cosmos DB)

Returns the expression translated to an array. If the expression cannot be translated, returns undefined.

## <a name="syntax"></a>Syntax

```sql
StringToArray(<str_expr>)
```

## <a name="arguments"></a>Arguments

*str_expr*
A string expression to be parsed as a JSON array expression.

## <a name="return-types"></a>Return types

Returns an array expression or undefined.

## <a name="remarks"></a>Remarks

Nested string values must be written with double quotes to be valid JSON values. For details on the JSON format, see [json.org](https://json.org/). This system function does not use the index.

## <a name="examples"></a>Examples

The following examples show how `StringToArray` behaves across types.

The following are examples with valid input.

```sql
SELECT
    StringToArray('[]') AS a1,
    StringToArray("[1,2,3]") AS a2,
    StringToArray("[\"str\",2,3]") AS a3,
    StringToArray('[["5","6","7"],["8"],["9"]]') AS a4,
    StringToArray('[1,2,3, "[4,5,6]",[7,8]]') AS a5
```

Here is the result set.

```json
[{"a1": [], "a2": [1,2,3], "a3": ["str",2,3], "a4": [["5","6","7"],["8"],["9"]], "a5": [1,2,3,"[4,5,6]",[7,8]]}]
```

The following example shows invalid input.
Single quotes within the array are not valid JSON. Even though they are valid within a query, they will not parse to valid arrays. Strings within the array string must either be escaped, "[\\"\\"]", or the array string must be enclosed in single quotes, '[""]'.

```sql
SELECT
    StringToArray("['5','6','7']")
```

Here is the result set.

```json
[{}]
```

The following are examples of invalid input. The expression passed is parsed as a JSON array; the following do not evaluate to an array type and therefore return undefined:

```sql
SELECT
    StringToArray("["),
    StringToArray("1"),
    StringToArray(NaN),
    StringToArray(false),
    StringToArray(undefined)
```

Here is the result set.

```json
[{}]
```

## <a name="next-steps"></a>Next steps

- [String functions in Azure Cosmos DB](sql-query-string-functions.md)
- [System functions in Azure Cosmos DB](sql-query-system-functions.md)
- [Introduction to Azure Cosmos DB](introduction.md)
31.525253
298
0.71836
deu_Latn
0.969753
a12745c0f4fb5f4baf2058af393b297feae0905b
10,106
md
Markdown
_posts/linux/2018-06-08-mongodb-3.6-install.md
qkboo/qkboo.github.io
403ad12a6277678a969f57b16e741280026e18cb
[ "BSD-3-Clause", "MIT" ]
1
2017-05-02T09:50:43.000Z
2017-05-02T09:50:43.000Z
_posts/linux/2018-06-08-mongodb-3.6-install.md
qkboo/qkboo.github.io
403ad12a6277678a969f57b16e741280026e18cb
[ "BSD-3-Clause", "MIT" ]
null
null
null
_posts/linux/2018-06-08-mongodb-3.6-install.md
qkboo/qkboo.github.io
403ad12a6277678a969f57b16e741280026e18cb
[ "BSD-3-Clause", "MIT" ]
1
2018-02-18T07:31:26.000Z
2018-02-18T07:31:26.000Z
---
title: MongoDB Community Edition 3.6 on Ubuntu
date: 2018-06-08 18:00:00 +0900
layout: post
description: "Installing MongoDB 3 on Ubuntu and Debian. Notes from installing MongoDB Community Edition 3.6 on Ubuntu and Debian systems and on cloud servers."
tags: [linux, mongodb, armbian, odroid-c2, ubuntu, debian, arm64, amd64, "Ubuntu", "Debian"]
categories:
- Linux
- Database
---

> 2018-06-22 content cleanup; added User auth link
{:.right-history}

This document describes installing **MongoDB Community Edition 3.6** on Ubuntu/Debian-family platforms whose regular distributions do not ship a 3.x package. It follows [Install MongoDB Community Edition, on-ubuntu](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/) to install MongoDB Community Edition 3.6 on a *64-bit OS* supporting *amd64* or *arm64*, with additional notes from experience.

- Installing MongoDB Community Edition 3.6
- Configuring the Mongo database

## Install MongoDB 3.6 Community edition

According to the documentation, on Linux MongoDB supports Ubuntu, Red Hat, SUSE, Amazon, Debian, and tarball installation, and it also supports the macOS and Windows platforms.

Platforms tested:

- **64-bit Ubuntu 16.04**
- **Hardkernel Odroid C2**: 64-bit, Armbian

MongoDB provides the following packages:

- `mongodb-org`: a metapackage for installing the packages below.
- `mongodb-org-server`: the `mongod` daemon plus configuration and init scripts.
- `mongodb-org-mongos`: the `mongos` daemon.
- `mongodb-org-shell`: the `mongo` shell.
- `mongodb-org-tools`: MongoDB utilities: mongoimport, bsondump, mongodump, mongoexport, mongofiles, mongoperf, mongorestore, mongostat, and mongotop.

### Preparation

The installation here was done on the Ubuntu Xenial release, on platforms such as Armbian distributions supporting 64-bit ARM (e.g., Odroid C2) as well as PC Linux and n-Cloud servers.

- The Armbian Debian Jessie/Stretch distributions do not yet provide 64-bit mongodb.

#### Register the repository

To install signed `.dpkg`/`.apt` packages, register the server's key as shown below.

> Whether this applies to your system is explained well in [^1].

*Register the key server*

```terminal
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
```

#### Add the MongoDB source list

Register the MongoDB repository in apt's source list. On Ubuntu, create the file `/etc/apt/sources.list.d/mongodb-org-3.6.list` and add the source entry matching your Linux release, as below.

**Ubuntu 16.04**

```terminal
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
```

**Ubuntu 14.04**

```terminal
$ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu trusty/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
```

#### Remove key & ppa

If you want to remove the key server repository entry after installation, first check the 8-character hash of the key to delete:

```terminal
$ sudo apt-key list
pub 4096R/A15703C6 2016-01-11 [expires: 2018-01-10]
uid MongoDB 3.4 Release Signing Key <packaging@mongodb.com>
```

Delete this key:

```terminal
$ sudo apt-key del A15703C6
```

### Installation

Then refresh the apt source cache and install the mongodb-org community edition of MongoDB:

```terminal
$ sudo apt update
$ sudo apt install -y mongodb-org
```

> MongoDB 3.x recommends xfs as the filesystem for data storage.

To install a specific version, state the version explicitly:

```terminal
$ sudo apt-get install -y mongodb-org=3.6.5 mongodb-org-server=3.6.5 mongodb-org-shell=3.6.5 mongodb-org-mongos=3.6.5 mongodb-org-tools=3.6.5
```

While installing the earlier mongodb-org 3.4 on Armbian, there were cases where the systemd unit file and the MongoDB system account were not created. If that happens, create them by hand as follows.

#### The *mongodb* user

After installing mongodb-org, the system user **mongodb** should have been added. If it was not created, create it:

```terminal
$ sudo adduser --disabled-password --gecos "" --no-create-home --disabled-password --disabled-login mongodb
```

#### systemd service entry

If the `systemctl` script was not copied to `/etc/init.d` after installing **mongodb-org**:

> This was the case on an Odroid C2 Armbian install; regular Linux distributions work fine.

The systemd service file is located at */lib/systemd/system/mongod.service*. Run the command below, and if **mongodb.service** does not exist, you need to create it.

```terminal
$ sudo systemctl list-unit-files --type=service |grep mongodb
```

If **mongodb.service** does not exist, enable the `/lib/systemd/system/mongod.service` file as follows:

```terminal
$ cd /lib/systemd/system
$ sudo systemctl enable mongodb.service
```

Check that **mongodb.service** has been registered:

```terminal
$ sudo systemctl list-unit-files --type=service |grep mongodb
mongodb.service disabled
```

If it is in the *disabled* state, *enable* it with the `systemctl` command:

```terminal
$ sudo systemctl enable mongodb.service
```

<br>

### Run

When the Community Edition is installed, the required files and folders are created at the following locations:

- **/var/lib/mongodb**: default data file location
- **/var/log/mongodb**: default log folder
- **/etc/mongod.conf**: the mongod configuration file; log and data locations can be changed here

The `mongod` daemon runs under the mongodb user account. Start `mongod` with the `systemctl` command:

```terminal
$ sudo systemctl start mongod
$ sudo systemctl status mongod
```

Checking the running `mongod`:

```terminal
$ ps -ef |grep mongod
mongodb 15385 1 1 12:06 ? 00:00:00 /usr/bin/mongod --config /etc/mongod.conf
```

If the `mongod` service started correctly, you can connect and test it with the **mongo** client.

<br/>
<br/>

### Mongo database setup

- Verify the mongodb user and directory permissions
- Configure mongod.conf
- Test connecting with the mongo client
- MongoDB authentication

#### Check permissions

If the **log directory** is */var/log/mongodb* and the **data directory** is */data/mongodata*, give the MongoDB user write permission on those directories:

```sh
$ sudo chown mongodb.mongodb /var/log/mongodb
$ sudo chown mongodb.mongodb /data/mongodata
```

#### Configure mongod.conf

MongoDB's `systemd` service reads the database configuration file `/etc/mongod.conf`. First, configure only the **data directory**, **bindIp**, and **log** settings in `/etc/mongod.conf`, leaving **authentication** out:

```
storage:
  dbPath: /data/mongodata/
  journal:
    enabled: true

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

processManagement:
  fork: true

net:
  port: 27017
  bindIp: 127.0.0.1,192.168.0.2
```

- **dbPath**: database storage location
- **bindIp**: add an IP address if mongo clients outside the server need access

For now, connect with the **mongo** client while the `mongod.conf` configuration file has no access control set up.

#### Admin user

With authentication disabled on `mongod`, a successful connection with the `mongo` client shows the **>** prompt. Switch to the *admin* database:

```
$mongo
>
> use admin
switched to db admin
>
```

In the *admin* database, add a user with an administrator role, and then add users and access control for the databases you will use.

#### User administrator

Create a user with the `userAdmin` role or the `userAdminAnyDatabase` role. The following creates an **admin account** that manages users in the *admin* database:

```
>db.createUser( { user:'admin', pwd:'****', roles:['userAdminAnyDatabase'] } )
Successfully added user: {
        "user" : "admin",
        "roles" : [
                "userAdminAnyDatabase"
        ]
}
>
> db.getUsers()
```

To change a user's password:

```
> db.changeUserPassword("accountUser", "SOh3TbYhx8ypJPxmt1oOfL")
```

To change a user's roles:

```
> db.grantRolesToUser( 'admin', [{role: 'userAdmin', db:'admin'}])
> db.getUsers()
[
        {
                "_id" : "admin.admin",
                "user" : "admin",
                "db" : "admin",
                "roles" : [
                        {
                                "role" : "userAdmin",
                                "db" : "admin"
                        },
                        {
                                "role" : "userAdminAnyDatabase",
                                "db" : "admin"
                        }
                ]
        }
]
```

Or you can use updateUser:

```
db.updateUser( "appClient01", ...
```

#### `security.authorization`

Enable `security.authorization` in the **mongod.conf** file:

```
security:
  authorization: enabled
```

> Before v2.4, authentication mode used the `--auth` option; from v2.6 on, enable `security.authorization` when using the *mongod.conf* file.

Restart the service with systemd:

```terminal
$ sudo systemctl restart mongod.service
$ sudo systemctl status mongod.service
```

Now you must connect to the database in authentication mode. If you log in without credentials after starting in authentication mode, you will hit errors when using the database:

```terminal
$ mongo
> show dbs;
Tue Sep 27 23:22:40.683 listDatabases failed:{ "ok" : 0, "errmsg" : "unauthorized" } at src/mongo/shell/mongo.js:46
>
> show users
Tue Sep 27 23:22:44.667 error: { "$err" : "not authorized for query on test.system.users", "code" : 16550 } at src/mongo/shell/query.js:128
```

#### Connecting in authentication mode

Once access control is enabled on the database system, the `mongo` client must be given `-u <username>, -p <password>` and `--authenticationDatabase <database>`:

```terminal
$ mongo --port 27017 -u "admin" -p "****" --authenticationDatabase "admin"
MongoDB shell version v3.4.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.0
>
```

If the database is running in authentication mode, an unauthenticated client access fails with errors like:

```terminal
MongoDB shell version v3.6.5
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.5
> use students
switched to db students
> show users
2018-06-25T18:30:38.174+0900 E QUERY [thread1] Error: not authorized on students to execute command { usersInfo: 1.0, $db: "students" } :
```

#### Remote access

When accessing mongodb from outside with authentication applied, you can connect with a URL of the form:

> "username:password@HOST_NAME/mydb"

> However, if the client version does not match the server's credential version, you will see failure messages like the following:
>
> 2016-05-16T00:53:10.338+0900 I ACCESS [conn2] Failed to authenticate student@student with mechanism MONGODB-CR: AuthenticationFailed: MONGODB-CR credentials missing in the user document
> 2016-05-16T00:53:10.352+0900 I NETWORK [conn2] end connection 220.121.140.59:51634 (0 connections now open)

<br/>

### Creating a database and authentication

Creating collections in a new database and setting up access control for database users is covered in [MongoDB User Authentication]({% post_url /linux/2017-04-20-mongodb-user-auth %}).

<br/>
<br/>

## Related MongoDB posts

{% include_relative _inc_mongodb_series.md %}

<br/>

### Using the `mongod` command

Instead of `systemd`, you can start MongoDB with the `mongod` command to check that the configuration file and related settings work. Note, however, that the data and log files the database creates will be owned by the invoking user account, which can cause permission problems.

#### Starting MongoDB v2.4 in authentication mode

You can start it from the `mongod` command line:

```terminal
$ sudo mongod --port 27017 --dbpath /data/mongodata
```

MongoDB v2.4 starts in authentication mode as follows. Start or restart the DB instance (`mongod`) with the `--auth` option on the `mongod` command line:

```terminal
$ mongod --auth --port 27017 --dbpath /data/db1
```

Or enable **auth** in the *mongod.conf* configuration file:

```
auth = true
```

#### MongoDB v2.6 and later

Stop the **mongodb** instance started from the command line earlier and restart it with the `--auth` option:

```terminal
$ mongod --auth --port 27017 --dbpath /data/mongodata
```

When the database is started with the `--auth` access-control option, logging in requires `-u <username>, -p <password>` and `--authenticationDatabase <database>`:

```terminal
$ mongo --port 27017 -u "admin" -p "****" --authenticationDatabase "admin"
MongoDB shell version v3.4.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.4.0
>
```

## References

[^1]: [Install mongodb on Ubuntu](https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/#install-mongodb-community-edition)
23.393519
344
0.678607
kor_Hang
0.999716
a12760baab6ceb1b9104eba232d0e24f6f83e688
258
md
Markdown
book/services/rhai/telegram.md
eigenein/my-iot-rs
525d8700b7901bc248f53e8c3f96a4e4734eac8e
[ "MIT" ]
33
2019-07-28T05:00:59.000Z
2022-03-11T12:23:59.000Z
book/services/rhai/telegram.md
eigenein/my-iot-rs
525d8700b7901bc248f53e8c3f96a4e4734eac8e
[ "MIT" ]
189
2019-06-28T12:47:47.000Z
2021-03-22T14:51:22.000Z
book/services/rhai/telegram.md
eigenein/my-iot-rs
525d8700b7901bc248f53e8c3f96a4e4734eac8e
[ "MIT" ]
7
2019-10-04T18:48:04.000Z
2021-04-26T19:30:48.000Z
# [Telegram](../telegram.md) in [Rhai](../rhai.md) ## Available Methods ### [`send_message(chat_id, text, options)`](https://core.telegram.org/bots/api#sendmessage) ### [`send_video(chat_id, video, options)`](https://core.telegram.org/bots/api#sendvideo)
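As a non-authoritative sketch of how these methods might be called from a Rhai automation script: the service handle name `telegram`, the chat ID, the `front_door_clip` variable, and the option map keys below are all assumptions for illustration, not taken from this page.

```rhai
// Hypothetical Rhai snippet; names and option keys are illustrative only.
let chat_id = 123456789;
telegram.send_message(chat_id, "Motion detected at the front door", #{
    disable_notification: true,        // assumed option name
});
telegram.send_video(chat_id, front_door_clip, #{
    caption: "Front door, last 10 s",  // assumed option name
});
```

The `options` argument in each signature presumably maps onto the optional fields of the corresponding Bot API call linked above.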
32.25
92
0.693798
kor_Hang
0.120026
a1297dabbce45127b32a6f78041dfcbc5a7294e2
17,002
md
Markdown
README.md
msu-csc232-sp22/lab0a-intro-to-git
c6880118888390a51f10a5a70777e1526f9cceb5
[ "MIT" ]
1
2022-01-18T19:25:56.000Z
2022-01-18T19:25:56.000Z
README.md
msu-csc232-sp22/lab0a-intro-to-git
c6880118888390a51f10a5a70777e1526f9cceb5
[ "MIT" ]
null
null
null
README.md
msu-csc232-sp22/lab0a-intro-to-git
c6880118888390a51f10a5a70777e1526f9cceb5
[ "MIT" ]
null
null
null
# lab0a

This non-graded assignment is used just to get students linked into the CSC232 GitHub Classroom managed by Jim Daehn.

_As suggested in the previous sentence, this is a non-graded assignment. Whether you choose to work on this or not is your prerogative. The end goal of this assignment is simply to link you to my GitHub classroom. This will happen simply by virtue of you following the assignment link provided to you by your instructor._

**Please note**:

* This assignment has been re-used for several semesters and, as such, there may be some screen snapshots that appear different from your experience. If this discrepancy creates any confusion for you, please raise an issue here: [https://github.com/msu-csc232-sp22/lab0a-intro-to-git/issues](https://github.com/msu-csc232-sp22/lab0a-intro-to-git/issues).
* This lab is reused each semester, so some screen snapshots will appear to reference previous semesters.

## Goals

Upon completion of this assignment, the student will have learned to

* accept a GitHub Education assignment
* clone a repository from GitHub
* create a development branch within which to do their work
* create a pull request to complete an assignment

## Prerequisites

Before a student can proceed with this assignment, the student must have

* signed up for a GitHub account
* installed `git` on their computer

If you use a Windows machine, `git` is easily installed by downloading `git` for Windows here: [https://gitforwindows.org/](https://gitforwindows.org/). There are many alternatives, but this is a very common one and mirrors a program installed on our lab machines. All `git` commands presented in this README are executed, for example, in the `Git Bash` console window.

Furthermore, you'll want to check out these videos hosted on YouTube. The following were located by simply searching for "GitHub & Git Foundations" on YouTube.
* [What is GitHub?](https://www.youtube.com/watch?time_continue=8&v=w3jLJU7DT5E) (3:45)
* [GitHub Training & Guides](https://www.youtube.com/user/GitHubGuides)
* Getting Started/Setup
  * [Intro](https://www.youtube.com/watch?v=FyfwLX4HAxM&list=PLDx-eoZGKRiH1S5l82vzLtoiuKqTLH9yN) (4:04)
  * [Setup](https://www.youtube.com/watch?v=7Inc0G0wutk&list=PLKtIksTx7tTpxy5Gw7itc0zdgZtaLHWW0) (2:15)
  * [Config](https://www.youtube.com/watch?v=ZChtKFLiaNw) (2:48)
* Fundamentals (Operations we'll use most often)
  * [Branch](https://www.youtube.com/watch?v=H5GJfcp3p4Q) (2:25)
  * [Checkout](https://www.youtube.com/watch?v=HwrPhOp6-aM) (3:11)
  * [Commit](https://www.youtube.com/watch?v=A-Cll9jEnnM) (4:08)
  * [Pull Requests](https://www.youtube.com/watch?v=d5wpJ5VimSU) (4:26)
  * [Merge](https://www.youtube.com/watch?v=yyLiplDQtf0) (4:50)
  * [GUI](https://www.youtube.com/watch?v=BMYOs5jflGE) (3:47)
* Additional, useful operations
  * [Ignore](https://www.youtube.com/watch?v=4VBG9FlyiOw&t=6s) (3:06)
  * [Diff](https://www.youtube.com/watch?v=RXSriVcoI70) (3:33)
  * [Log](https://www.youtube.com/watch?v=Ew8HQsFyVHo) (4:57)
  * [Rebase](https://www.youtube.com/watch?v=SxzjZtJwOgo&frags=pl%2Cwn) (4:20)
  * [Remove](https://www.youtube.com/watch?v=jtuHOIlfS2Q) (3:30)
  * [Move](https://www.youtube.com/watch?v=ipdgyfPq8FE) (5:16)
* [Git and GitHub Crash Course For Beginners](https://www.youtube.com/watch?v=SWYqp7iY_Tc&t=18s) (32:41)

## Background Information

### Accepting GitHub Assignments

With each assignment in CSC232, students are given a URL that must be followed. When the student follows the given URL, a process is kicked off wherein a new repository is created in their GitHub account. Once the repository is created by this background process, the student may clone the repository and work with it as required. Since this is the first time you'll be doing this, there are a few steps that are required to link your GitHub account with the course GitHub classroom.
_In fact, this step is really the primary purpose of this lab. That is, linking your account with the classroom_.

#### Linking your account

This lab assumes (and in fact requires) that you have a GitHub account. Visit [GitHub Education](https://education.github.com/students) to get started.

1. Once you have created a GitHub account, log into [github.com](https://github.com).
1. Once you are logged into GitHub, accept this assignment by visiting [https://classroom.github.com/a/w3Cb_Sic](https://classroom.github.com/a/w3Cb_Sic).

Again, since this is the first time, you'll encounter the following pages:

![Link Your Account](./lab00a-join-classroom.png)

As the image suggests, locate _your_ name and tap on the arrow to the right. After you select your name, you should encounter the confirmation pop-up:

![Accepting the Assignment](./lab00b-join-confirm.png)

Tapping the OK button then takes you to this page:

![Accept Prompt](./lab00c-accept-prompt.png)

You are now ready to accept the assignment (and be linked to the course GitHub Classroom); just make sure that the end of the repo matches your GitHub username before you tap on the "Accept this assignment" button. Once you accept the assignment, you'll get a confirmation screen of its creation. At this point, a background process is creating the repo in your GitHub account.

![Confirmation](./lab00d-accept-confirm.png)

Wait a bit (no more than a minute) and refresh that confirmation page. After refreshing, you should see the following page:

![Final Confirmation](./lab00e-accept-confirm-refresh.png)

Following the link will take you to your newly created repository. Do that now and continue reading this README.

### Cloning a GitHub Repository

Cloning a repository is rather simple.
When viewing a GitHub repository online, there is a green button that provides you with the URL needed to clone the repository, as shown here:

![Image of repo clone button](clone-button.png)

When you tap on that button, you actually have two different options for cloning, either via SSH or via HTTPS. While using SSH is more secure, it takes some additional configuration on your part that is beyond the scope of this assignment. As such, you'll want to select HTTPS. Once you have obtained the repository's URL, cloning is done by simply executing the following `git` command:

```bash
git clone https://github.com/msu-csc232-sp22/lab0a-intro-to-git-your-github-username.git
```

Please note the following:

1. The above command assumes the name of the repository is `lab0a-intro-to-git-your-github-username`. When your instructor creates assignments, they'll always have a prefix like `lab0a-intro-to-git` (for lab 0a, an introduction to git) followed by a hyphen followed by your GitHub username. As such, you shouldn't type that command verbatim. Instead, substitute the URL following the word `clone` with whatever you copied by tapping on the clone button on your repository when viewed online in GitHub.
1. Before issuing this `git` command, it is assumed you have navigated to the folder in which you want this repository cloned. For example, before doing this, you may want to create a "working" directory for this class with the following commands:

```bash
$ mkdir -p csc232/lab
$ cd csc232/lab
$ git clone https://github.com/msu-csc232-sp22/lab0a-intro-to-git-your-github-username.git
Cloning into 'lab0a-intro-to-git-your-github-username.git'...
remote: Counting objects: 5, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 5 (delta 0), reused 0 (delta 0), pack-reused 0
Unpacking objects: 100% (5/5), done.
$ cd lab0a-intro-to-git-your-github-username
```

Again, in the above commands, one does not type the `$`.
Any lines shown without the leading `$` are output from one of the commands. Also, when executing the `git` command, you may be prompted to (minimally) log in to GitHub as shown here:

![GitHub Login Window](github-login.png)

Also, if you've set up two-factor authentication (something you should do with any online accounts that offer it, including social media like Facebook), you may be prompted for secondary authentication as shown here:

![Multifactor Authentication](two-factor-auth.png)

When you're all said and done, you'll be in the csc232/lab/lab0a-intro-to-git-your-github-username directory. This cloned directory is what will be referred to as your "working directory."

### Creating a develop branch

With each assignment, you will be required to create a `develop` branch within which to do your work. Creating the `develop` branch will allow you to create a _pull request_, which will be the manner in which you submit your assignments. Creating a `develop` branch is easy. Assuming you're in your "working directory" for the assignment and you've just cloned this repository for the first time, issuing the following `git` commands will yield something like:

```bash
$ git status
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
$ git checkout -b develop
Switched to a new branch 'develop'
$ git branch
* develop
  master
$
```

In the above, we see three new `git` commands:

1. `git status` will tell you what branch you're on and whether or not there are changes to be committed.
1. `git checkout -b` is actually two commands in one: the `checkout` portion is how you switch to a different branch; the `-b` switch is used to create the branch first. If you already have the branch, we use this command without the `-b` to switch to the desired branch.
1. `git branch` shows the branches that currently exist in your local repository. The one with the leading `*` is the branch that is currently checked out.
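The branch/checkout cycle above can also be exercised end-to-end in a throwaway repository, so nothing in your real working directory is touched. This sketch assumes a reasonably recent `git` (2.28+ for `init -b`) and passes identity settings inline with `-c` so it runs even on a machine with no global git configuration:

```shell
# Create a scratch repo so the branch commands can be tried safely.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b master .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git checkout -q -b develop   # -b: create the branch, then switch to it
git branch                   # the leading '*' marks the checked-out branch
git checkout -q master       # no -b: simply switch to an existing branch
git checkout -q develop
```

When you're done experimenting, the scratch directory under `$tmp` can simply be deleted.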
### Modifying Existing Files and Committing Changes

Now that we have a `develop` branch to work in, let's modify an existing [file](version.txt). In this repository is a file named `version.txt`. The contents of this file can be seen with the `cat` command in the Git Bash console window:

```bash
$ cat version.txt
Version: 0.0
$
```

Open this file using Notepad and change the version number to 1.0. You can easily open Notepad using the following command:

```bash
$ notepad version.txt
$
```

Make the change, save the file, and close Notepad. (If you are using a non-Windows machine, just open this file with your favorite text editor.) Now that this file has changed, we want to _commit_ this change. This is easily done as follows:

```bash
$ git commit -am "Updated version number."
[master 7333fbb] Updated version number.
 1 file changed, 1 insertion(+), 1 deletion(-)
$
```

Note:

* the "commit hash" (`7333fbb`) is unique to this example; it'll most likely be different on your machine.
* The `-am` switches on the `git commit` command serve two purposes: the `a` part serves to "stage" this file for commits, and the `m` portion serves to indicate a commit message that must accompany every commit. If you leave off the `m` portion, you'll find yourself in a weird editor (named `vi`) whose usage is beyond the scope of this assignment.

### Adding New Files Under Revision Control and Committing Changes

Sometimes you'll find the need to _add_ new files to a repository. In order for these new files to be recognized under the `git` version control system, we have to _stage_ the file. This, too, is pretty simple. Let's use Notepad to create a new file named `bio.md`:

```bash
$ notepad bio.md
$
```

The above command will launch Notepad. You may be prompted to "bind" `bio.md`; just select "Yes" in the dialogue box presented by Notepad. (And again, if you're using a non-Windows machine, just use your favorite text editor to create this file.)
The `md` extension on this file suggests it is in the Markdown format. You don't have to style the file using Markdown (you can just write plain text for now), but you are encouraged to learn the Markdown syntax, as this is the "standard" format for README files (and other documentation hosted in version control systems like GitHub, BitBucket and the like). Here's a [cheat sheet](https://guides.github.com/pdfs/markdown-cheatsheet-online.pdf) specific to GitHub that may be used to help with the Markdown syntax.

What should be written? Your instructor would like to know more about you: what's your background, what's your major, why did you pick your major, etc. Your instructor also likes to be humored, so tell him something funny. None of this has to be real; it's up to you as to what you want to say here. You could also tell him things you like (music, football, etc.). It is rumored that he gives bonus points for anything positive said about the Buffalo Bills. This can be as long (or as short) as you desire; it could be as close or as far as possible from reality. The content is not important; it's the skill of adding a new file to your repository.

When you're all done, _save_ your changes, then _add_ and _commit_ them under revision control:

```bash
$ git add bio.md
$ git commit -m "Initial import of bio."
... some output regarding changes and modes created
$
```

### Pushing Your Work to GitHub

At this point, we have made several changes to the contents of our repository under the `develop` branch. We have:

* _edited/modified_ a file and committed those changes
* _created_ a new file, _added_ it to revision control, and committed the changes in this new file

Now it's time to _push_ these changes back to GitHub. We'll do this using the `git push` command. However, since we've created this `develop` branch locally on our computer, GitHub knows nothing about this branch.
As such, with this _first_ push to GitHub, we have to tell GitHub to create a similarly named branch and set up _tracking_ between GitHub's `develop` branch and our local `develop` branch. We do this by using the `--set-upstream origin develop` switch with the `git push` command as follows:

```bash
$ git push --set-upstream origin develop
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (6/6), 2.26 KiB | 578.00 KiB/s, done.
Total 6 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3), completed with 2 local objects.
To github.com:msu-csc232/lab0a-intro-to-git-your-github-username.git
 * [new branch] develop -> develop
Branch 'develop' set up to track remote branch 'develop' from 'origin'.
$
```

Any subsequent push does not require the `--set-upstream` switch (and in fact, it shouldn't be used again once it's been used). That is, all subsequent pushes merely require that you type `git push` at the command line.

### Creating a Pull Request

When we're done, the `develop` branch will be different from the `master` branch. These differences amount to the work you've done. A _pull request_ is issued to ask others to review these changes before they're _merged_ into the `master` branch. In other words, you're "requesting" to "pull" the changes you've made into the `master` branch; hence the name _pull request_.

Before a pull request can be created, we have two more things to do. We have to

1. _commit_ the changes we've made in this `develop` branch.
1. _push_ our changes to GitHub.

As we've just done this, we can issue our pull request.
Back on GitHub, our repository has a "Pull Request" tab as shown below:

![GitHub Pull Request](pull-request.png)

Tap on the New Pull Request button and make sure you are comparing your `develop` branch with your `master` branch as shown below:

![Compare Branches](compare-branches.png)

Note the direction of the arrow: _from_ `develop` _into_ `master`. Tap on the Pull Request button and fill in a brief description. Also, add professordaehn as a reviewer and assign yourself as the Assignee. Once you've added in all these details, tap on the Create Pull Request button. Once you've done this, _do not merge_ until your instructor has approved the pull request.

## Tasks

To complete this assignment, one must:

1. Accept the assignment delivered to you (i.e., visit the URL given).
1. Clone their `lab0a-intro-to-git-*` repository.
1. Create a `develop` branch within which to do your work.
1. Modify a [file](version.txt) with a simple change and commit your changes.
1. Create a new file in which you'll write a brief bio and outline your expectations for the class.
1. Add the new file to version control and commit your changes.
1. Push your changes to GitHub.
1. Create a pull request requesting to merge your changes in your `develop` branch into your `master` branch. (This pull request must have "professordaehn" added as a reviewer.)

## Issues

If you have found any issues with this lab, e.g., the output of a command didn't match yours, or you have found typos, or one or more sections are worded in a manner that seems confusing or misleading, please bring it to my attention. The best way to do that is to "raise an Issue." Visit [https://github.com/msu-csc232-sp22/lab0a-intro-to-git/issues](https://github.com/msu-csc232-sp22/lab0a-intro-to-git/issues) and tap on the "New Issue" button.
59.866197
648
0.759205
eng_Latn
0.998464
a1297e8e98ee69d57b7b6c8c400478f7058c958b
7,226
md
Markdown
handbook/company/goals/2019_q3.md
ajaynz/about
744c77994e27d4768f37f5c4271f694b6bd9dd54
[ "MIT" ]
1
2021-06-23T01:24:49.000Z
2021-06-23T01:24:49.000Z
handbook/company/goals/2019_q3.md
ajaynz/about
744c77994e27d4768f37f5c4271f694b6bd9dd54
[ "MIT" ]
null
null
null
handbook/company/goals/2019_q3.md
ajaynz/about
744c77994e27d4768f37f5c4271f694b6bd9dd54
[ "MIT" ]
null
null
null
# CY19-Q3 OKRs > NOTE: We've changed [how we set goals](index.md) since we wrote these OKRs. 1. CEO: Grow net new ARR consistently: reach $N net new ARR in Q3, grow SWAUs at existing companies 1. Sales: Update pricing model to enable higher value negotiations/contracts 1. Sales: Reach at least [$N](https://docs.google.com/document/d/1RPdDawv2FtK5hjJQGpSsF82ubB6tgcutv3Bc0sEZVYA/edit#bookmark=id.15i9rk7jnr9k) net new ARR in Q3 1. Sales: Line up the following pipeline for Q4 with at least 80% chance of closure: 1. At least 3 Tier 1 Deals 1. At least 10 Tier 2 & 3 Deals 1. Sales: Nail the technical demo & opening elevator pitch that resonates at first intro 1. Customer success: Implement a new, more scalable support system 1. Evaluate software solutions and select and implement one 1. Create clear system for triage and prioritization 1. Define standard support terms (response and resolution times, in particular) 1. Identify specific tools and/or processes to use to make support scalable (e.g., tiered support, searchable online knowledge bases, outsourcing, etc.) 1. Customer success: Measure adherence 1. Begin measuring adherence to support terms and achieve >50% on a weekly basis 1. Customer success: Grow SWAUs at existing customers 1. Begin capturing “code review” stage usage metrics. 1. Grow non-"coding" SWAUs from existing customers by 500. Likely via browser extension usage at [C1](https://app.hubspot.com/contacts/2762526/company/561806411) or [C2](https://app.hubspot.com/contacts/2762526/company/419771425), or a general push to increase diff search or saved search usage. 1. Product: Become data informed and more customer driven. Create one place to view, aggregate, and analyze all feedback. Increase incoming feedback from customers by 50%. => [Productboard](https://sourcegraph.productboard.com/feature-board/823415-feature-organization); Not done. 1. Head of Eng: Ship high quality releases to our customers. 
Releases ship on time, customers not blocked from upgrading immediately. => 66% on time (3.7 was late), 66% “upgradability” (customers were blocked from upgrading to 3.7 for a while). 1. Distribution: Critical customers advance in the sales pipeline. [C3](https://app.hubspot.com/contacts/2762526/company/407948923/) experiences no instability issues, [C4](https://app.hubspot.com/contacts/2762526/company/1712889883/) has a POC instance with 40k+ repositories, high-priority customer issues are implemented within 1-2 iterations. => [C3](https://app.hubspot.com/contacts/2762526/company/407948923/) experienced 1 major instability issue (503s) that was resolved, [C4](https://app.hubspot.com/contacts/2762526/company/1712889883/) is blocked on internal velocity, high-priority issues for other large customers were addressed 1. Distribution: Standardize monitoring and observability for site admins. Prometheus/grafana running in all deploy contexts with out-of-the-box dashboards displayed in the UI. => Done: monitoring shipped in 3.8 1. Distribution: Automate time-consuming tasks pertaining to release, ops, and technical support. Add debugging features to gather config and debug info from on-prem instances, create standard process for managing compute resources on GCP and AWS, Automate 75% of existing tests on release testing grid and no new manual tests on [release testing grid](https://sourcegraph-team.monday.com/boards/278184929). => Not done, not done (but did decrease compute cost by 50%), ~30% of release tests automated 1. Core services: Ensure that Sourcegraph search scales to our largest customers. Symbols search can return results when searching across 80k repos, [C3](https://app.hubspot.com/contacts/2762526/company/407948923/) can iterate through all results of a search without experiencing a timeout, [C3](https://app.hubspot.com/contacts/2762526/company/407948923/) can use ACLs at their scale => not done, not done, not done 1. 
Web app and integrations: Browser extensions work reliably on supported code hosts. No known issues that cause customers to uninstall or be unable to use our browser extension. => ??? 2. CEO: Build out the standard developer platform. Introduce code change management (campaigns) product, increase awareness of Sourcegraph and master plan, execute on product roadmap 1. Head of Eng: Deliver product roadmap. Identify engineering owners for campaigns, all projects have a written RFC. => campaigns project team created, all projects have RFCs 1. Search: Improve search experience for new users. Change default from regex to literal search, add search result type tabs. => literal search not done, search result type tabs done 1. Code intel: Build precise code intelligence. Decide on LSIF backend, deploy LSIF code intel to one customer. => LSIF backend done, LSIF code intel at one customer not done 1. Web app and integrations: Build campaigns. [Ship monitoring existing campaigns milestone](https://docs.google.com/document/d/1UY9B_kLlwRtYj-fuv7XZS1-Mu99Czx9Ojm7sgEmoQIA/edit). => shipped RFC 20 as prototype and shipped RFC 28 in 3.9 1. Core services: Build campaigns. [Ship monitoring existing campaigns milestone](https://docs.google.com/document/d/1UY9B_kLlwRtYj-fuv7XZS1-Mu99Czx9Ojm7sgEmoQIA/edit). => shipped RFC 20 as prototype and shipped RFC 28 in 3.9 1. Head of Eng: Grow the engineering team. Hire to plan (+2 distribution, +2 code intel, +2 core services, +2 web app, hire or grow +1 manager), train +1 engineer for each interview module, create script for “architecture overview” talk given to new hires. => 62.5% hiring (1/2 distribution, 1/2 code intel, 0/2 core services, 1/2 web, 2/1 manager), interview training/allocation done, architecture overview docs created. 1. Product: Publish 6-month product roadmap. Create pitches or descriptive GitHub issues for each roadmap item. 
=> [Project roadmap](https://docs.google.com/document/d/1cBsE9801DcBF9chZyMnxRdolqM_1c2pPyGQz15QAvYI/edit?usp=sharing) and [roadmap](https://about.sourcegraph.com/handbook/direction); Every roadmap item was ambitious, we made good progress to having RFCs and issues for in progress and planned items. 1. Product: Improve UX of Sourcegraph. Hire UX designer, validate search UI improvements with customers. => [Job posting live](https://github.com/sourcegraph/careers/blob/master/job-descriptions/ux-designer.md); Qualitative signal from conversations, but no quantitative data validating customer opinion. 1. Product: Improve feature discoverability. Grow SWAU for diff searches to [M](https://docs.google.com/document/d/1RPdDawv2FtK5hjJQGpSsF82ubB6tgcutv3Bc0sEZVYA/edit#bookmark=id.2vo6u2rc0hz7)+. Better discoverability of the browser extension. Reduce malformed searches by 50% => SWAU for diff searches grew by 220%, but only reached 19% of [M](https://docs.google.com/document/d/1RPdDawv2FtK5hjJQGpSsF82ubB6tgcutv3Bc0sEZVYA/edit#bookmark=id.2vo6u2rc0hz7); Didn’t make it, have been thinking about how to solve this problem but won’t have anything done here; Literal search slipped into 3.9 (Oct 2019) and will not make it in Q3. <!-- Docs to Markdown version 1.0β17 -->
176.243902
647
0.777055
eng_Latn
0.961416
a12a39c5ad2c29bccbc503d99064a3d817bef596
475
md
Markdown
docs/Watermark.md
apivideo/api.video-ios-client
aa92ceaabbebcf3f4e9edea4fc35ecd5d35cb9f8
[ "MIT" ]
4
2021-12-09T16:35:59.000Z
2022-01-26T09:37:41.000Z
docs/Watermark.md
apivideo/api.video-ios-client
aa92ceaabbebcf3f4e9edea4fc35ecd5d35cb9f8
[ "MIT" ]
null
null
null
docs/Watermark.md
apivideo/api.video-ios-client
aa92ceaabbebcf3f4e9edea4fc35ecd5d35cb9f8
[ "MIT" ]
null
null
null
# Watermark ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **watermarkId** | **String** | The unique identifier of the watermark. | [optional] **createdAt** | **Date** | When the watermark was created, presented in ISO-8601 format. | [optional] [[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
39.583333
161
0.602105
eng_Latn
0.564105
a12a9ea07c5210985a79c0ee44f96558a18f46b9
47
md
Markdown
README.md
amanbajpai02/calculator
e24a5ce5ee8e8bb7d2100c84706f087ed33476f0
[ "Apache-2.0" ]
null
null
null
README.md
amanbajpai02/calculator
e24a5ce5ee8e8bb7d2100c84706f087ed33476f0
[ "Apache-2.0" ]
1
2019-01-26T02:56:07.000Z
2019-01-26T02:56:07.000Z
README.md
amanbajpai02/calculator
e24a5ce5ee8e8bb7d2100c84706f087ed33476f0
[ "Apache-2.0" ]
5
2018-10-02T10:27:12.000Z
2018-10-04T13:05:07.000Z
# calculator

A GUI project based on Python 2.7.
15.666667
33
0.765957
eng_Latn
0.998572
a12ca62a1251f026bd4f7e8c5dc0afde1a1e8a68
12,234
md
Markdown
articles/cognitive-services/Computer-vision/Tutorials/storage-lab-tutorial.md
zeroshell09/azure-docs.fr-fr
f4197c09fac92554b82fb03e06f36f82ba861e69
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Computer-vision/Tutorials/storage-lab-tutorial.md
zeroshell09/azure-docs.fr-fr
f4197c09fac92554b82fb03e06f36f82ba861e69
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/cognitive-services/Computer-vision/Tutorials/storage-lab-tutorial.md
zeroshell09/azure-docs.fr-fr
f4197c09fac92554b82fb03e06f36f82ba861e69
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Didacticiel : Générer des métadonnées pour les images Azure' titleSuffix: Azure Cognitive Services description: Ce tutoriel vous montre comment intégrer le service Vision par ordinateur d’Azure dans une application web afin de générer ensuite des métadonnées pour les images. services: cognitive-services author: PatrickFarley manager: nitinme ms.service: cognitive-services ms.subservice: computer-vision ms.topic: tutorial ms.date: 09/04/2019 ms.author: pafarley ms.openlocfilehash: 8ecf5fb7d54e7c9411c1153610d3a637477285bf ms.sourcegitcommit: 49c4b9c797c09c92632d7cedfec0ac1cf783631b ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 09/05/2019 ms.locfileid: "70382973" --- # <a name="tutorial-use-computer-vision-to-generate-image-metadata-in-azure-storage"></a>Didacticiel : Utiliser le service Vision par ordinateur pour générer des métadonnées des images dans le stockage Azure Ce tutoriel vous montre comment intégrer le service Vision par ordinateur d’Azure dans une application web afin de générer ensuite des métadonnées pour les images chargées. Vous trouverez un guide complet de l’application dans le [lab Azure Storage and Cognitive Services](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md) sur GitHub. Ce tutoriel couvre essentiellement l’exercice 5 du lab. Si vous souhaitez créer l’application de bout en bout, vous pouvez suivre toutes les étapes, mais si vous voulez seulement voir de quelle façon intégrer Vision par ordinateur à une application web existante, lisez le présent tutoriel. 
Ce didacticiel vous explique les procédures suivantes : > [!div class="checklist"] > * Créer une ressource Vision par ordinateur dans Azure > * Effectuer une analyse des images du stockage Azure > * Attacher des métadonnées aux images du stockage Azure > * Vérifier les métadonnées des images à l’aide de l’Explorateur Stockage Azure Si vous n’avez pas d’abonnement Azure, créez un [compte gratuit](https://azure.microsoft.com/free/) avant de commencer. ## <a name="prerequisites"></a>Prérequis - [Visual Studio 2017 Community](https://www.visualstudio.com/products/visual-studio-community-vs.aspx) ou ultérieur, avec les charges de travail « ASP.NET et développement web » et « Développement Azure » installées. - Un compte de stockage Azure avec un conteneur d’objets blob alloué aux images (suivez l’[exercice 1 du lab Azure Storage](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise1) si vous avez besoin d’aide pour cette étape). - L’outil Explorateur Stockage Azure (suivez l’[exercice 2 du lab Azure Storage](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise2) si vous avez besoin d’aide pour cette étape). - Une application web ASP.NET avec accès au stockage Azure (suivez l’[exercice 3 du lab Azure Storage](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise3) pour créer une application de ce type rapidement). ## <a name="create-a-computer-vision-resource"></a>Créer une ressource Vision par ordinateur Vous devez créer une ressource Vision par ordinateur pour votre compte Azure ; cette ressource gère votre accès au service Vision par ordinateur d’Azure. 1. 
Pour créer une ressource Vision par ordinateur, suivez les instructions de l'article [Créer une ressource Azure Cognitive Services](../../cognitive-services-apis-create-account.md). 1. Accédez ensuite au menu de votre groupe de ressources et cliquez sur l’abonnement à l’API Vision par ordinateur que vous venez de créer. Copiez l’URL indiquée sous **Point de terminaison** à un endroit où vous pourrez facilement la récupérer un peu plus tard. Cliquez ensuite sur **Afficher les clés d’accès**. ![Page du portail Azure avec l’URL du point de terminaison et le lien des clés d’accès entourés](../Images/copy-vision-endpoint.png) [!INCLUDE [Custom subdomains notice](../../../../includes/cognitive-services-custom-subdomains-note.md)] 1. Dans la fenêtre suivante, copiez la valeur de **KEY 1** dans le Presse-papiers. ![Boîte de dialogue Gérer les clés, avec le bouton de copie entouré](../Images/copy-vision-key.png) ## <a name="add-computer-vision-credentials"></a>Ajouter les informations d’identification pour Vision par ordinateur Vous devez maintenant ajouter les informations d’identification dont votre application a besoin pour accéder aux ressources Vision par ordinateur. Ouvrez votre application web ASP.NET dans Visual Studio et accédez au fichier **Web.config** à la racine du projet. Ajoutez les instructions suivantes dans la section `<appSettings>` du fichier, puis remplacez `VISION_KEY` par la clé que vous aviez copiée à l’étape précédente, et `VISION_ENDPOINT` par l’URL que vous aviez enregistrée à l’étape d’avant. ```xml <add key="SubscriptionKey" value="VISION_KEY" /> <add key="VisionEndpoint" value="VISION_ENDPOINT" /> ``` Dans l’Explorateur de solutions, cliquez avec le bouton droit sur le projet, puis utilisez la commande **Gérer les packages NuGet** pour installer le package **Microsoft.Azure.CognitiveServices.Vision.ComputerVision**. Ce package contient les types à utiliser pour appeler l’API Vision par ordinateur. 
## <a name="add-metadata-generation-code"></a>Ajouter le code de génération des métadonnées Ensuite, vous allez ajouter le code qui utilise le service Vision par ordinateur pour créer des métadonnées des images. Ces étapes sont conçues pour l’application ASP.NET du lab, mais vous pouvez les adapter à votre propre application. À ce stade, l’important est de vous assurer que votre application web ASP.NET peut charger des images dans un conteneur de stockage Azure, lire les images de ce conteneur et afficher les images dans la vue. Si vous avez besoin d’aide, suivez l’[exercice 3 du lab Azure Storage](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise3). 1. Ouvrez le fichier *HomeController.cs* dans le dossier **Contrôleurs** du projet et ajoutez les instructions `using` suivantes en haut du fichier : ```csharp using Microsoft.Azure.CognitiveServices.Vision.ComputerVision; using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models; ``` 1. Accédez ensuite à la méthode **Upload**, qui convertit et charge les images dans un stockage blob. Ajoutez le code suivant immédiatement après le bloc qui commence par `// Generate a thumbnail` (ou à la fin du processus de création de l’objet blob de l’image). Ce code utilise l’objet blob contenant l’image (`photo`) et génère une description de cette image à l’aide de Vision par ordinateur. L’API Vision par ordinateur génère également une liste de mots clés en lien avec l’image. Les mots clés et la description générés sont stockés dans les métadonnées de l’objet blob pour les utiliser plus tard. 
```csharp // Submit the image to Azure's Computer Vision API ComputerVisionClient vision = new ComputerVisionClient( new ApiKeyServiceClientCredentials(ConfigurationManager.AppSettings["SubscriptionKey"]), new System.Net.Http.DelegatingHandler[] { }); vision.Endpoint = ConfigurationManager.AppSettings["VisionEndpoint"]; VisualFeatureTypes[] features = new VisualFeatureTypes[] { VisualFeatureTypes.Description }; var result = await vision.AnalyzeImageAsync(photo.Uri.ToString(), features); // Record the image description and tags in blob metadata photo.Metadata.Add("Caption", result.Description.Captions[0].Text); for (int i = 0; i < result.Description.Tags.Count; i++) { string key = String.Format("Tag{0}", i); photo.Metadata.Add(key, result.Description.Tags[i]); } await photo.SetMetadataAsync(); ``` 1. Ensuite, accédez à la méthode **Index** dans le même fichier ; cette méthode énumère les objets blob d’image stockés dans le conteneur d’objets blob ciblé (comme les instances **IListBlobItem**) et les passe à la vue de l’application. Remplacez le bloc `foreach` dans cette méthode par le code suivant. Ce code appelle **CloudBlockBlob.FetchAttributes** pour obtenir les métadonnées attachées à chaque objet blob. Il extrait la description générée par ordinateur (`caption`) à partir des métadonnées et l’ajoute à l’objet **BlobInfo**, qui est alors passé à la vue. ```csharp foreach (IListBlobItem item in container.ListBlobs()) { var blob = item as CloudBlockBlob; if (blob != null) { blob.FetchAttributes(); // Get blob metadata var caption = blob.Metadata.ContainsKey("Caption") ? blob.Metadata["Caption"] : blob.Name; blobs.Add(new BlobInfo() { ImageUri = blob.Uri.ToString(), ThumbnailUri = blob.Uri.ToString().Replace("/photos/", "/thumbnails/"), Caption = caption }); } } ``` ## <a name="test-the-app"></a>Test de l'application Enregistrez vos modifications dans Visual Studio, puis appuyez sur **Ctrl+F5** pour lancer l’application dans votre navigateur. 
Utilisez l’application pour charger quelques images à partir du dossier « photos » des ressources du lab ou à partir de votre propre dossier. Quand vous placez le curseur sur l’une des images dans la vue, une fenêtre d’info-bulle doit apparaître et afficher la légende générée par ordinateur pour l’image. ![Légende générée par ordinateur](../Images/thumbnail-with-tooltip.png) Pour voir toutes les métadonnées attachées, affichez le conteneur de stockage utilisé pour les images dans l’Explorateur Stockage Azure. Cliquez avec le bouton droit sur un objet blob dans le conteneur, puis sélectionnez **Propriétés**. Dans la boîte de dialogue, vous voyez une liste de paires clé-valeur. La description de l’image générée par ordinateur est stockée dans l’élément « Caption » et les différents mots clés de recherche sont stockés dans « Tag0 », « Tag1 », etc. Quand vous avez terminé, cliquez sur **Annuler** pour fermer la boîte de dialogue. ![Fenêtre de dialogue des propriétés de l’image, avec les balises de métadonnées listées](../Images/blob-metadata.png) ## <a name="clean-up-resources"></a>Supprimer des ressources Si vous souhaitez continuer à travailler sur votre application web, passez à la section [Étapes suivantes](#next-steps). Si vous n’avez plus besoin de cette application, supprimez toutes les ressources propres à l’application. Pour cela, il vous suffit de supprimer le groupe de ressources qui contient votre abonnement Stockage Azure et la ressource Vision par ordinateur. Cette opération supprime le compte de stockage, les objets blob qui y ont été chargés ainsi que la ressource App Service utilisée pour vous connecter à l’application web ASP.NET. Pour supprimer le groupe de ressources, ouvrez le panneau **Groupes de ressources** dans le portail, accédez au groupe de ressources que vous avez utilisé dans ce projet, puis cliquez sur **Supprimer le groupe de ressources** en haut de la vue. 
Vous êtes alors invité à taper le nom du groupe de ressources pour confirmer sa suppression, qui est définitive. ## <a name="next-steps"></a>Étapes suivantes Dans ce tutoriel, vous avez intégré le service Vision par ordinateur d’Azure dans une application web existante pour générer automatiquement des légendes et des mots clés pour les images d’objet blob que vous chargez. Effectuez maintenant l’exercice 6 du lab Azure Storage pour savoir comment ajouter des fonctionnalités de recherche à votre application web. Cet exercice utilise les mots clés de recherche générés par le service Vision par ordinateur. > [!div class="nextstepaction"] > [Ajouter une recherche à votre application](https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise6)
82.662162
727
0.773745
fra_Latn
0.961345
a12ebcaf8b689d76864d18440e30a2b4b7e5f00a
2,109
md
Markdown
README.md
pestotoast/mmx-node
eb9e27aa077c111e814c62cc5b716c4957eac09e
[ "Apache-2.0" ]
null
null
null
README.md
pestotoast/mmx-node
eb9e27aa077c111e814c62cc5b716c4957eac09e
[ "Apache-2.0" ]
null
null
null
README.md
pestotoast/mmx-node
eb9e27aa077c111e814c62cc5b716c4957eac09e
[ "Apache-2.0" ]
null
null
null
# mmx-node

MMX is a blockchain written from scratch, using Chia's Proof of Space and a SHA256 VDF similar to Solana's. Its main features are as follows:

- High performance (1000 transactions per second or more)
- Variable supply (block reward scales with netspace, but is also capped by TX fees)
- Consistent block times (a block is created every 10 seconds)
- Native token support (swaps are possible with standard transactions)
- Energy-saving Proof of Space (same as Chia)
- Standard ECDSA signatures for seamless integration (same as Bitcoin)

MMX is designed to be a blockchain that can actually be used as a currency. For example, the variable supply will stabilize the price, one of the key properties of any currency. Furthermore, thanks to an efficient implementation and the avoidance of a virtual machine, it will provide low transaction fees even at high throughput.

Tokens can either be traditionally issued by the owner, or they can be owner-less and created by staking another token over time, in a decentralized manner governed by the blockchain. In the future it is planned that anybody can create their own token on MMX using a simple web interface.

The first application for MMX will be a hybrid decentralized exchange where users can trade MMX and tokens, as well as possibly a decentralized storage platform.

A mainnet launch is planned in ~4 months. Currently we are running _testnet5_, so the coins farmed right now are _not worth anything_.
See `#mmx-news` and `#mmx-general` on discord: https://discord.gg/pQwkebKnPB ## Installation / Setup / Usage Please take a look at the Wiki: - [Installation](https://github.com/madMAx43v3r/mmx-node/wiki/Installation) - [Getting-Started](https://github.com/madMAx43v3r/mmx-node/wiki/Getting-Started) - [CLI-Commands](https://github.com/madMAx43v3r/mmx-node/wiki/CLI-Commands) - [Frequently-Asked-Questions](https://github.com/madMAx43v3r/mmx-node/wiki/Frequently-Asked-Questions) ## WebGUI To access the WebGUI go to: http://localhost:11380/gui/ It's only available on localhost since it allows spending from your wallet.
47.931818
183
0.787103
eng_Latn
0.996972
a12ecf2bae49e73bf534fe998a89a3e2f9b8e711
2,787
md
Markdown
README.md
DavidBarcenas/react-food-delivery
48510926426d6cac8affe26677804ce9b4b1ed78
[ "MIT" ]
null
null
null
README.md
DavidBarcenas/react-food-delivery
48510926426d6cac8affe26677804ce9b4b1ed78
[ "MIT" ]
null
null
null
README.md
DavidBarcenas/react-food-delivery
48510926426d6cac8affe26677804ce9b4b1ed78
[ "MIT" ]
null
null
null
<div align="center">
  <h1>React Food Delivery</h1>

![image](https://img.shields.io/badge/React-20232A?style=for-the-badge&logo=react&logoColor=61DAFB)
![image](https://img.shields.io/badge/Apollo%20GraphQL-311C87?&style=for-the-badge&logo=Apollo%20GraphQL&logoColor=white)
![image](https://img.shields.io/badge/GraphQl-E10098?style=for-the-badge&logo=graphql&logoColor=white)
![image](https://img.shields.io/badge/TypeScript-007ACC?style=for-the-badge&logo=typescript&logoColor=white)
![image](https://img.shields.io/badge/Tailwind_CSS-38B2AC?style=for-the-badge&logo=tailwind-css&logoColor=white)
![image](https://img.shields.io/badge/Cypress-17202C?style=for-the-badge&logo=cypress&logoColor=white)

<p>A food delivery application based on the concept of Uber Eats and Rappi, in which you can register as a customer, owner, or delivery person and, depending on your profile, register your restaurant for the sale of food, help deliver orders, and order what you like the most.</p>
</div>

## Features

- Notifications when:
  - there is an order
  - the order has been taken, cooked, and delivered
- Multiple role management
- Email verification upon registration
- User location for food delivery
- Trace routes using Mapbox
- Restaurants CRUD, reports, and statistics
- PayPal payments to promote a restaurant

## Installation

To clone and run this application, you'll need [Git](https://git-scm.com) and [Node.js](https://nodejs.org/en/download/) installed on your computer. _Optionally_, you can install [Yarn](https://yarnpkg.com/getting-started/install). From your command line:

```bash
# Clone this repository
$ git clone https://github.com/DavidBarcenas/react-food-delivery.git

# Go into the repository
$ cd react-food-delivery

# Install dependencies
$ yarn install

# Run the app
$ yarn run start
```

**Note: This project has a backend made with Nest.js, which you can configure to handle data persistence, authentication, mailing, notifications, and more.
The repository can be found at the following link: [food-delivery-backend](https://github.com/DavidBarcenas/food-delivery-backend).** ## Deployment It correctly bundles React in production mode and optimizes the build for the best performance. ```bash $ yarn run build ``` ## Tests ```bash # Run unit tests $ yarn run test # View project test percentage $ yarn run test:coverage # e2e tests $ yarn run cypress open ``` ## Dependencies The [mapbox-gl](https://docs.mapbox.com/mapbox-gl-js/api/) library is used to create maps. It is necessary to have a [mapbox account](https://account.mapbox.com/) to be able to generate the token that we need for said library. ## License Released under the [MIT licensed](LICENSE).\ Feel free to fork this project and improve it. Give a ⭐️ if you like this project!
33.578313
284
0.761392
eng_Latn
0.925709
a12f286d50d3e07a82456f35a842a1582b20e2d1
1,706
md
Markdown
docs/pages/components/calendar/en-US/index.md
justanotheranonymoususer/rsuite
1747d27956a0d8e3dc758199142a060a45c95820
[ "MIT" ]
3
2021-01-12T01:39:44.000Z
2021-01-12T01:39:48.000Z
docs/pages/components/calendar/en-US/index.md
song-ran/rsuite
dd674d0b8a931387cef42c973213b05604b04e17
[ "MIT" ]
14
2022-01-11T19:37:32.000Z
2022-03-31T11:32:01.000Z
docs/pages/components/calendar/en-US/index.md
song-ran/rsuite
dd674d0b8a931387cef42c973213b05604b04e17
[ "MIT" ]
null
null
null
# Calendar

A component that displays data in a calendar view.

## Import

<!--{include:(components/calendar/fragments/import.md)}-->

## Examples

### Default

<!--{include:`basic.md`}-->

### Compact

<!--{include:`compact.md`}-->

## Props

### `<Calendar>`

| Property     | Type`(Default)`           | Description                                                                                    |
| ------------ | ------------------------- | ---------------------------------------------------------------------------------------------- |
| bordered     | boolean                   | Show a border                                                                                  |
| compact      | boolean                   | Display a compact calendar                                                                     |
| defaultValue | Date                      | Default value                                                                                  |
| isoWeek      | boolean                   | Per the ISO 8601 standard, each calendar week begins on Monday, with Sunday as the seventh day |
| onChange     | (date: Date) => void      | Callback fired before the value changes                                                        |
| onSelect     | (date: Date) => void      | Callback fired before the date is selected                                                     |
| renderCell   | (date: Date) => ReactNode | Custom render calendar cells                                                                   |
| value        | Date                      | Controlled value                                                                               |
| timeZone     | string                    | [IANA Time zone name](/components/date-picker#Time%20Zone%20List)                              |
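Read as a type, the props table above amounts to something like the following sketch. This typing is illustrative only; the actual declarations shipped with rsuite may differ (for instance, `renderCell` really returns a `ReactNode`, stubbed here as `unknown` to keep the sketch dependency-free):

```typescript
// Hypothetical typing of the <Calendar> props table (illustrative only).
type CalendarProps = {
  bordered?: boolean;                   // show a border
  compact?: boolean;                    // display a compact calendar
  defaultValue?: Date;                  // uncontrolled initial value
  isoWeek?: boolean;                    // weeks begin on Monday (ISO 8601)
  onChange?: (date: Date) => void;      // fired around value changes
  onSelect?: (date: Date) => void;      // fired around date selection
  renderCell?: (date: Date) => unknown; // ReactNode in the real package
  value?: Date;                         // controlled value
  timeZone?: string;                    // IANA time zone name
};

// A sample configuration that type-checks against the sketch:
const sample: CalendarProps = {
  compact: true,
  isoWeek: true,
  defaultValue: new Date(2021, 0, 12),
  onSelect: (date) => console.log(date.toISOString()),
};
```

Note how `value` (controlled) and `defaultValue` (uncontrolled) mirror the usual React pattern: supply one or the other, not both.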
50.176471
131
0.357562
eng_Latn
0.79798
a12f92488f185299a1a8914feb923baed9a1d70a
461
md
Markdown
README.md
lorenzobadas/C-hip8
bac54af915e9c4964100d63328d6e30f1b8787ca
[ "MIT" ]
null
null
null
README.md
lorenzobadas/C-hip8
bac54af915e9c4964100d63328d6e30f1b8787ca
[ "MIT" ]
null
null
null
README.md
lorenzobadas/C-hip8
bac54af915e9c4964100d63328d6e30f1b8787ca
[ "MIT" ]
null
null
null
# C-hip8

C-hip8 is a Chip-8 emulator written in C; the graphical interface is built with SDL2.

## Build project

To build the project, go to the root directory and enter these commands:

```
mkdir build
cd build
cmake ..
make
```

## Usage

```
./c-hip8 <ROM path> [-d <Delay>] [-s <Scale>]
```

The default setting for Delay is 2000, and the default setting for Scale is 25. Launching the program without any command-line arguments will just display a Chip-8 logo in a new window.
24.263158
104
0.722343
eng_Latn
0.996364
a1302e8d21522f782c40131de590ea3f9d59bd01
169
md
Markdown
pages/info/marlin.md
trpsl/wiki
f7ab765aa38f1db49a2f6bf64d43ed2c17c527d6
[ "MIT", "BSD-3-Clause" ]
610
2017-01-18T10:32:43.000Z
2022-03-27T20:09:36.000Z
pages/info/marlin.md
trpsl/wiki
f7ab765aa38f1db49a2f6bf64d43ed2c17c527d6
[ "MIT", "BSD-3-Clause" ]
145
2020-08-20T18:18:36.000Z
2022-03-31T15:20:02.000Z
pages/info/marlin.md
trpsl/wiki
f7ab765aa38f1db49a2f6bf64d43ed2c17c527d6
[ "MIT", "BSD-3-Clause" ]
773
2017-01-17T07:48:00.000Z
2022-03-23T02:01:14.000Z
--- sidebar: home_sidebar title: Info about marlin folder: info layout: deviceinfo permalink: /devices/marlin/ device: marlin --- {% include templates/device_info.md %}
16.9
38
0.757396
eng_Latn
0.802241
a131370917b44a7da889bcef64d0dcf9ca0f29af
2,096
md
Markdown
plugins/woocommerce-admin/client/activity-panel/activity-card/README.md
bhrugesh96/woocommerce
bad34444382629e3889c8c327741b9f835a7c9e1
[ "Apache-2.0" ]
null
null
null
plugins/woocommerce-admin/client/activity-panel/activity-card/README.md
bhrugesh96/woocommerce
bad34444382629e3889c8c327741b9f835a7c9e1
[ "Apache-2.0" ]
null
null
null
plugins/woocommerce-admin/client/activity-panel/activity-card/README.md
bhrugesh96/woocommerce
bad34444382629e3889c8c327741b9f835a7c9e1
[ "Apache-2.0" ]
null
null
null
ActivityCard ============ A card designed for use in the activity panel. This is a very structured component, which expects at minimum a label and content. It can optionally also include a date, actions, an image, and a dropdown menu. ## How to use: ```jsx import { ActivityCard } from 'components/activity-card'; render: function() { return ( <ActivityCard title="Insight" icon={ <Gridicon icon="search" /> } date="2018-07-10T00:00:00Z" actions={ [ <a href="/">Action link</a>, <a href="/">Action link 2</a> ] } > Insight content goes in this area here. It will probably be a couple of lines long and may include an accompanying image. We might consider color-coding the icon for quicker scanning. </ActivityCard> ); } ``` ## Props * `title`: A title for this card (required). * `subtitle`: An element rendered right under the title. * `children`: Content used in the body of the action card (required). * `actions`: A list of links or buttons shown in the footer of the card. * `date`: The timestamp associated with this activity. * `icon`: An icon or avatar used to identify this activity. Defaults to Gridicon "notice-outline". * `unread`: If this prop is present, the card has a small red bubble indicating an "unread" item. Defaults to false. ActivityCardPlaceholder ======================= This component is similar to `ActivityCard` in output, but renders no real content, just loading placeholders. This is also hidden from any interaction with screen readers using `aria-hidden`. ## How to use: ```jsx import { ActivityCardPlaceholder } from 'components/activity-card'; render: function() { return ( <ActivityCardPlaceholder hasDate /> ); } ``` ## Props * `hasAction`: Boolean. If true, shows a placeholder block for an action. Default false. * `hasDate`: Boolean. If true, shows a placeholder block for the date. Default false. * `hasSubtitle`: Boolean. If true, shows a placeholder block for the subtitle. Default false. * `lines`: Number. 
How many lines of placeholder content we should show. Default 1.
34.933333
208
0.701813
eng_Latn
0.990857
a1314f4d2039b614c36018124cbdbb2bd0c20955
1,221
md
Markdown
README.md
BrutalBirdie/redbot-redmine-notifier
835a87aee59b4b316a25a4ba48ffd507fdc2ae7c
[ "MIT" ]
null
null
null
README.md
BrutalBirdie/redbot-redmine-notifier
835a87aee59b4b316a25a4ba48ffd507fdc2ae7c
[ "MIT" ]
null
null
null
README.md
BrutalBirdie/redbot-redmine-notifier
835a87aee59b4b316a25a4ba48ffd507fdc2ae7c
[ "MIT" ]
null
null
null
# redbot-redmine-notifier Notifies about Redmine ticket creation and updates. This module is a fork of the [hubot-redmine-notifier](https://www.npmjs.com/package/hubot-redmine-notifier) This fork will only notify about the notes inside a Redmine ticket when called with a BuzzWord. Example note of a Redmine issue: "Some Foo @RedBot The message to send RedBot@ End Foo" ![notification to chat screenshot](notification-screenshot.png) ## Getting Started ### Hubot 1. Install the module: cd into your Hubot folder `npm install redbot-redmine-notifier --save` 2. Add `redbot-redmine-notifier` to your external-scripts.json file in your hubot directory ### Redmine 1. Install the [Redmine Webhook Plugin](https://github.com/suer/redmine_webhook) in your Redmine. 2. Add Hubot's endpoint to Redmine Project - Settings - WebHook - URL `http://<hubot-host>:<hubot-port>/hubot/redmine-notify?room=<room>` (see screenshot) ## TODO 1. Make the BuzzWord configurable 2. ? ![webhook settings of redmine screenshot](redmine-webhook-settings-screenshot.png) ## License Licensed under the MIT license. This script was created with reference to the [halkeye/hubot-jenkins-notifier](https://github.com/halkeye/hubot-jenkins-notifier).
39.387097
154
0.775594
eng_Latn
0.880903
a13280028107852391b9d44f22b696b0a238cc09
1,643
md
Markdown
docs/interfaces/_esp32_javascript_modules_esp32_javascript_http_.esp32jsrequest.md
marcelkottmann/esp32-javascript
9de34c96443dfa3f0f5447facefe779c8f4951b4
[ "MIT" ]
44
2020-07-29T00:34:43.000Z
2022-03-21T12:49:10.000Z
docs/interfaces/_esp32_javascript_modules_esp32_javascript_http_.esp32jsrequest.md
marcelkottmann/esp32-javascript
9de34c96443dfa3f0f5447facefe779c8f4951b4
[ "MIT" ]
6
2020-07-29T02:17:05.000Z
2022-03-23T13:13:07.000Z
docs/interfaces/_esp32_javascript_modules_esp32_javascript_http_.esp32jsrequest.md
pepe79/esp32-javascript
df531308cb9be6bb4246001d618e9f63a4ce0c46
[ "MIT" ]
4
2021-05-22T23:46:17.000Z
2022-02-07T01:09:23.000Z
[esp32-javascript](../README.md) › ["esp32-javascript/modules/esp32-javascript/http"](../modules/_esp32_javascript_modules_esp32_javascript_http_.md) › [Esp32JsRequest](_esp32_javascript_modules_esp32_javascript_http_.esp32jsrequest.md) # Interface: Esp32JsRequest ## Hierarchy * **Esp32JsRequest** ## Index ### Properties * [body](_esp32_javascript_modules_esp32_javascript_http_.esp32jsrequest.md#body) * [headers](_esp32_javascript_modules_esp32_javascript_http_.esp32jsrequest.md#headers) * [method](_esp32_javascript_modules_esp32_javascript_http_.esp32jsrequest.md#method) * [path](_esp32_javascript_modules_esp32_javascript_http_.esp32jsrequest.md#path) ## Properties ### body • **body**: *string | null* *Defined in [esp32-javascript/modules/esp32-javascript/http.ts:35](https://github.com/marcelkottmann/esp32-javascript/blob/22ffb3d/components/esp32-javascript/modules/esp32-javascript/http.ts#L35)* ___ ### headers • **headers**: *Headers* *Defined in [esp32-javascript/modules/esp32-javascript/http.ts:33](https://github.com/marcelkottmann/esp32-javascript/blob/22ffb3d/components/esp32-javascript/modules/esp32-javascript/http.ts#L33)* ___ ### method • **method**: *string* *Defined in [esp32-javascript/modules/esp32-javascript/http.ts:34](https://github.com/marcelkottmann/esp32-javascript/blob/22ffb3d/components/esp32-javascript/modules/esp32-javascript/http.ts#L34)* ___ ### path • **path**: *string* *Defined in [esp32-javascript/modules/esp32-javascript/http.ts:32](https://github.com/marcelkottmann/esp32-javascript/blob/22ffb3d/components/esp32-javascript/modules/esp32-javascript/http.ts#L32)*
33.530612
236
0.79367
cat_Latn
0.14905
a132bb820a343045f599f23067b18ba3982b1eb9
1,005
md
Markdown
README.md
HaneetGH/KotlinAndroidBase
87952348088fb70547b5717bad4885a9d6041646
[ "Apache-2.0" ]
10
2020-01-13T06:00:33.000Z
2022-03-04T20:18:33.000Z
README.md
HaneetGH/KotlinAndroidBase
87952348088fb70547b5717bad4885a9d6041646
[ "Apache-2.0" ]
null
null
null
README.md
HaneetGH/KotlinAndroidBase
87952348088fb70547b5717bad4885a9d6041646
[ "Apache-2.0" ]
5
2020-02-01T14:19:35.000Z
2022-01-19T16:04:33.000Z
# Base for an Android application using Kotlin ![GitHub Actions status | HaneetGH/KotlinAndroidBase](https://github.com/HaneetGH/KotlinAndroidBase/workflows/CI/badge.svg) A Kotlin MVVM boilerplate that helps you start a new enterprise-level app in minutes. # Architecture <a href="https://ibb.co/Jz3D0M0"><img src="https://i.ibb.co/dQm81z1/1-y-Y0l4-XD3k-Lc-Zz0r-O1sf-RA.png" alt="1-y-Y0l4-XD3k-Lc-Zz0r-O1sf-RA" border="0"></a><br /><br /> # Technology Stack ## Language 1 Kotlin 1.3.61 <br /> ## Architecture 1 MVVM <br /> ## Database Layer 1 Room 2.2.2 <br /> ## Dependency injection 1 Dagger 2.15 <br /> ## Flow Maintain 1 Coroutines 1.2.1 <br /> 2 RxKotlin 2.4.0 <br /> 3 RxAndroid 2.1.1 <br /> 4 lifecycle 2.1.0 <br /> ## Networking 1 Retrofit 2.7.0 <br /> 2 okhttp 3.12.0 <br /> 3 Socket.IO 0.6.0 <br /> ## Image Loading / Processing 1 Picasso 2.71828 <br /> ## JSON Parser 1 Moshi 1.9.0 <br /> 2 GSON 2.8.6 <br /> ## Logger 1 Timber 4.7.1 <br /> ## UI Test 1 Espresso <br />
18.272727
166
0.655721
kor_Hang
0.291499
a132c69dbbb09fc86f1a371a55a196d09a59b023
1,232
md
Markdown
iha/pixhawk/faydali-baglantilar.md
yedhrab/YWiki
e11411dfe77995c9c95f1a66b572e996f488ba27
[ "Apache-2.0" ]
10
2020-02-07T16:19:02.000Z
2022-03-25T05:07:08.000Z
iha/pixhawk/faydali-baglantilar.md
YEmreAk/YWiki
e11411dfe77995c9c95f1a66b572e996f488ba27
[ "Apache-2.0" ]
11
2019-06-23T21:39:38.000Z
2019-09-08T13:04:24.000Z
iha/pixhawk/faydali-baglantilar.md
YEmreAk/YWiki
e11411dfe77995c9c95f1a66b572e996f488ba27
[ "Apache-2.0" ]
5
2020-02-25T19:50:57.000Z
2021-03-13T13:57:41.000Z
--- description: Useful links for PixHawk --- # 🔗 Useful Links ## 👨‍💻 Programming Links * [🐙 PX4 GitHub](https://github.com/PX4/Firmware) * [🎈 Programming your first application](https://dev.px4.io/v1.9.0/en/apps/hello_sky.html) * [👨‍💻 Source code directory](https://github.com/PX4/Firmware/tree/master/src) * [⭐ Code examples](https://github.com/PX4/Firmware/tree/master/src/examples) * [🛫 Fixed-wing example](https://github.com/PX4/Firmware/tree/master/src/examples/fixedwing_control) ## 📙 Documents * [👨‍💻 Developer documentation](https://dev.px4.io/master/en/) * 🧱 [Basic concepts](https://docs.px4.io/master/en/getting_started/px4_basic_concepts.html) * 📦 [Drivers & Modules](https://dev.px4.io/master/en/middleware/modules_main.html) * [🐳 PixHawk via Docker](https://dev.px4.io/master/en/test_and_ci/docker.html) ## 🌍 Other Links * [✍ PixHawk autopilot and its features](https://mozanunal.com/2016/09/pixhawk-otopilot-ve-ozellikleri/) * Although its links are broken, the topic headings give a good summary * Written by someone who worked on the [İTÜNOM](https://mozanunal.com/2016/01/itunom-takm/) project * 🐙 GitHub project [SimplePilot](https://github.com/mozanunal/SimplePilot)
39.741935
101
0.742695
tur_Latn
0.965219
a1342fa102741adb0ddae0b3c36dec8246453c4a
290
md
Markdown
README.md
chrisluby/Simon-Game
bc521f348369b038faa7b387e480bca72885376d
[ "MIT" ]
1
2019-07-24T08:58:01.000Z
2019-07-24T08:58:01.000Z
README.md
chrisluby/Simon-Game
bc521f348369b038faa7b387e480bca72885376d
[ "MIT" ]
null
null
null
README.md
chrisluby/Simon-Game
bc521f348369b038faa7b387e480bca72885376d
[ "MIT" ]
null
null
null
# Simon Game Repeat back the randomly generated colors in order. ## Built With * [Bootstrap](https://getbootstrap.com/) ## Authors * **Christopher Luby** - *Initial work* ## License This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details
16.111111
98
0.713793
eng_Latn
0.896587
a13441e12cceeaab4b2117b680fca22cd836a69a
8,075
md
Markdown
docs/tutorial_uno.md
arranf/hathora
f7862e83a1fde4b514583c1ec47d28881cfb9c22
[ "MIT" ]
null
null
null
docs/tutorial_uno.md
arranf/hathora
f7862e83a1fde4b514583c1ec47d28881cfb9c22
[ "MIT" ]
null
null
null
docs/tutorial_uno.md
arranf/hathora
f7862e83a1fde4b514583c1ec47d28881cfb9c22
[ "MIT" ]
null
null
null
# Tutorial: Uno In this tutorial we will explore Hathora by learning to make a simplified version of [Uno](https://www.mattel.com/products/uno-gdj85). ## Repo The full code for this game can be found [here](https://github.com/hathora/hathora/tree/develop/examples/uno), and you can also play the deployed version [here](https://hathora-uno.surge.sh/). This is what our game will look like by the end of this tutorial: ![image](https://user-images.githubusercontent.com/5400947/149870083-67986611-6151-4ea8-abb2-9a67467741d1.png) ## Install Before you begin, make sure you have nodejs v16.12+ and the hathora cli installed: ```sh npm install -g hathora ``` ## hathora.yml To start, create a directory called `uno-tutorial` and create a `hathora.yml` file inside with the following contents: ```yml # hathora.yml types: Color: - RED - BLUE - GREEN - YELLOW Card: value: int color: Color PlayerState: hand: Card[] players: UserId[] turn: UserId pile: Card? winner: UserId? methods: joinGame: startGame: playCard: card: Card drawCard: auth: anonymous: {} userState: PlayerState error: string ``` This file defines the client data model and the server api endpoints for our application. For more information on this file format, see [here](type-driven-development). To initialize our project structure run `hathora init`. You should see the following directory structure generated for you: ``` uno-tutorial # project root ├─ api # generated + gitignored ├─ client │ ├─ .hathora # generated + gitignored │ └─ prototype-ui # generated + gitignored ├─ server │ ├─ .hathora # generated + gitignored │ ├─ impl.ts # user-editable │ ├─ tsconfig.json # user-editable │ ├─ package.json # user-editable ├─ hathora.yml # user-editable └─ .gitignore # user-editable ``` > If you plan on using git, this is a good time to run `git init` Inside the server directory we will also find a `impl.ts` file filled out with a default implementation: ```ts // impl.ts // ... 
type InternalState = PlayerState; export class Impl implements Methods<InternalState> { initialize(ctx: Context, request: IInitializeRequest): InternalState { return { hand: [], players: [], turn: "", pile: undefined, winner: undefined, }; } joinGame(state: InternalState, userId: UserId, ctx: Context, request: IJoinGameRequest): Response { return Response.error("Not implemented"); } startGame(state: InternalState, userId: UserId, ctx: Context, request: IStartGameRequest): Response { return Response.error("Not implemented"); } playCard(state: InternalState, userId: UserId, ctx: Context, request: IPlayCardRequest): Response { return Response.error("Not implemented"); } drawCard(state: InternalState, userId: UserId, ctx: Context, request: IDrawCardRequest): Response { return Response.error("Not implemented"); } getUserState(state: InternalState, userId: UserId): PlayerState { return state; } } ``` Next, run `hathora dev` to start the development server. Visit http://localhost:3000 where you see the following Prototype UI view: ![image](https://user-images.githubusercontent.com/5400947/149869164-19a7cbe3-59a6-47a8-95b0-6bc316b31cef.png) ## Backend logic Because of the default implementation, we don't see any real data and clicking Submit for any of the methods displays a "Not implemented" error. 
Let's fix this by adding our game logic to `server/impl.ts`: ```ts import { Methods, Context } from "./.hathora/methods"; import { Response } from "../api/base"; import { UserId, PlayerState, IJoinGameRequest, IStartGameRequest, IPlayCardRequest, IDrawCardRequest, Card, Color, } from "../api/types"; type InternalState = { deck: Card[]; hands: Map<UserId, Card[]>; players: UserId[]; pile?: Card; turn: UserId; winner?: UserId; }; export class Impl implements Methods<InternalState> { initialize(userId: UserId, ctx: Context): InternalState { // create the initial version of our state const deck = []; for (let i = 2; i <= 9; i++) { deck.push({ value: i, color: Color.RED }); deck.push({ value: i, color: Color.BLUE }); deck.push({ value: i, color: Color.GREEN }); deck.push({ value: i, color: Color.YELLOW }); } return { deck, players: [], hands: new Map() }; } joinGame(state: InternalState, userId: UserId, ctx: Context, request: IJoinGameRequest): Response { // append the user who called the method state.players.push(userId); return Response.ok(); } startGame(state: InternalState, userId: UserId, ctx: Context, request: IStartGameRequest): Response { // shuffle the deck, give each player 7 cards, and start the pile state.deck = ctx.chance.shuffle(state.deck); state.players.forEach((playerId) => { state.hands.set(playerId, []); for (let i = 0; i < 7; i++) { state.hands.get(playerId)!.push(state.deck.pop()!); } }); state.pile = state.deck.pop(); return Response.ok(); } playCard(state: InternalState, userId: UserId, ctx: Context, request: IPlayCardRequest): Response { // remove from hand const hand = state.hands.get(userId)!; const cardIdx = hand.findIndex((card) => card.value == request.card.value && card.color == request.card.color); hand.splice(cardIdx, 1); // update pile state.pile = request.card; // check if won if (hand.length == 0) { state.winner = userId; return Response.ok(); } // update turn const currIdx = state.players.indexOf(state.turn); const nextIdx = (currIdx +
1) % state.players.length; state.turn = state.players[nextIdx]; return Response.ok(); } drawCard(state: InternalState, userId: UserId, ctx: Context, request: IDrawCardRequest): Response { // add the top card to the player's hand state.hands.get(userId)!.push(state.deck.pop()!); return Response.ok(); } getUserState(state: InternalState, userId: UserId): PlayerState { // compute the user state from the internal state return { hand: state.hands.get(userId) ?? [], // only return this user's hand players: state.players, turn: state.turn, pile: state.pile, winner: state.winner, }; } } ``` See [here](methods) for more details about how server methods work. > The hathora dev server supports hot reloading of both backend and frontend, so you shouldn't need to restart the server when making edits to your code. Going back to the prototype UI, we can see our working application in action. Create a game, join it as another user from a different tab (by using the same URL), and start the game. You should see a view like this: ![image](https://user-images.githubusercontent.com/5400947/149870083-67986611-6151-4ea8-abb2-9a67467741d1.png) ## Validation One problem with our current backend implementation is that there is no validation. For example, players can join the game multiple times even though they shouldn't be able to. We can enforce that a player can only join once by changing our `joinGame` implementation to the following: ```ts // impl.ts joinGame(state: InternalState, userId: UserId, ctx: Context, request: IJoinGameRequest): Response { if (state.players.find((playerId) => playerId === userId) !== undefined) { return Response.error("Already joined"); } state.players.push(userId); return Response.ok(); } ``` Now, if you try to join a second time you will get an `Already joined` error on the screen. Try adding validations to the other functions as well.
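The turn and card-matching checks can be factored out in the same spirit. The sketch below is not part of the Hathora SDK; `validatePlayCard` and its pared-down `PlayState` shape are hypothetical names that only mirror enough of the tutorial's `InternalState` to express the rules:

```typescript
// Hypothetical helper, not part of the Hathora SDK: returns an error
// message if the move is illegal, or undefined if it is allowed.
type Card = { value: number; color: string };

interface PlayState {
  turn: string;               // whose turn it is
  hands: Map<string, Card[]>; // each player's hand
  pile?: Card;                // top card of the pile
}

function validatePlayCard(state: PlayState, userId: string, card: Card): string | undefined {
  if (state.turn !== userId) {
    return "Not your turn";
  }
  const hand = state.hands.get(userId) ?? [];
  if (!hand.some((c) => c.value === card.value && c.color === card.color)) {
    return "Card not in hand";
  }
  // Uno rule: the played card must match the pile by value or by color.
  if (state.pile !== undefined && state.pile.value !== card.value && state.pile.color !== card.color) {
    return "Card does not match pile";
  }
  return undefined;
}
```

`playCard` could then begin with `const err = validatePlayCard(state, userId, request.card); if (err !== undefined) return Response.error(err);` before mutating any state.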
To see a complete implementation of the backend, see [the uno example](https://github.com/hathora/hathora/tree/develop/examples/uno). ## Next steps To learn how to create plugins for specific types to spruce up the view, or to learn how to make a fully custom frontend, take a look at the [client reference](client.md).
33.506224
283
0.68582
eng_Latn
0.852371
a13463a16d36e94801fa28a607cdb621226166bb
1,285
md
Markdown
doc/profile/preferences.md
greenboxal/doggohub
27e697505405f916edc4e5e5acef71f5275fac3f
[ "MIT" ]
2
2016-12-26T20:36:51.000Z
2016-12-27T17:03:50.000Z
doc/profile/preferences.md
greenboxal/doggohub
27e697505405f916edc4e5e5acef71f5275fac3f
[ "MIT" ]
null
null
null
doc/profile/preferences.md
greenboxal/doggohub
27e697505405f916edc4e5e5acef71f5275fac3f
[ "MIT" ]
null
null
null
# Profile Preferences Settings in the **Profile > Preferences** page allow the user to customize various aspects of the site to their liking. ## Application theme Changing this setting allows the user to customize the color scheme used for the navigation bar on the left side of the screen. The default is **Charcoal**. ## Syntax highlighting theme _DoggoHub uses the [rouge ruby library][rouge] for syntax highlighting. For a list of supported languages visit the rouge website._ Changing this setting allows the user to customize the theme used when viewing syntax highlighted code on the site. The default is **White**. ## Behavior ### Default Dashboard For users who have access to a large number of projects but only keep up with a select few, the amount of activity on the default Dashboard page can be overwhelming. Changing this setting allows the user to redefine what their default dashboard will be. Setting it to **Starred Projects** will make that Dashboard view the default when signing in or clicking the application logo in the upper left. The default is **Your Projects**. ### Default Project view It allows the user to choose what content they want to see on the project page. The default is **Readme**. [rouge]: http://rouge.jneen.net/ "Rouge website"
29.204545
80
0.774319
eng_Latn
0.998982
a134d4c3f7c808e4bc5ef1e4e7637f4edfe7e191
1,161
md
Markdown
_posts/2016-08-27-Im-A-Greenplum.md
jistok/jistok.github.io
02e89cb6d5c67d1026fd81cd050ee5670d1f4e2c
[ "MIT" ]
null
null
null
_posts/2016-08-27-Im-A-Greenplum.md
jistok/jistok.github.io
02e89cb6d5c67d1026fd81cd050ee5670d1f4e2c
[ "MIT" ]
null
null
null
_posts/2016-08-27-Im-A-Greenplum.md
jistok/jistok.github.io
02e89cb6d5c67d1026fd81cd050ee5670d1f4e2c
[ "MIT" ]
null
null
null
--- layout: post title: I'm a Greenplum, I'm a Netezza --- I travel. A lot. This means that I get the opportunity to watch a lot of Movies, TV, Internet, Commercials, etc. When you are as lucky as me to do that, it gives you ideas. Some of those ideas are awesome. For example, I was recently reminded of the clever, funny <a href="">I'm a Mac, I'm a PC</a> commercials from Apple many years ago. In that spirit, I really wanted to spoof them in some way that was true to form, but more relevant to things that I work on directly (Mac, long before these commercials, won the PC battle for me). <ul> <li><a href="http://everest.s3.amazonaws.com/pivotal/PivotalData1.mp4">Something</a></li> <li><a href="http://everest.s3.amazonaws.com/pivotal/PivotalData2.mp4">Something</a></li> <li><a href="http://everest.s3.amazonaws.com/pivotal/PivotalData3.mp4">Something</a></li> <li><a href="http://everest.s3.amazonaws.com/pivotal/PivotalData4.mp4">Something</a></li> <li><a href="http://everest.s3.amazonaws.com/pivotal/PivotalData5.mp4">Something</a></li> </ul> I plan to write a separate post about each of these, some thoughts around
61.105263
344
0.716624
eng_Latn
0.958178
a136522920b6a5922501f8012ad632f759ab1d3f
5,063
md
Markdown
public/_posts/_newsletters/Yearn-Finance-Newsletter-54/fr.md
xiaona423/yearn-comms
17f8c5272b4f299bcdd257d92faebb8786485e50
[ "MIT" ]
1
2022-01-20T14:41:50.000Z
2022-01-20T14:41:50.000Z
public/_posts/_newsletters/Yearn-Finance-Newsletter-54/fr.md
xiaona423/yearn-comms
17f8c5272b4f299bcdd257d92faebb8786485e50
[ "MIT" ]
10
2022-02-03T12:33:49.000Z
2022-02-17T14:10:06.000Z
public/_posts/_newsletters/Yearn-Finance-Newsletter-54/fr.md
xiaona423/yearn-comms
17f8c5272b4f299bcdd257d92faebb8786485e50
[ "MIT" ]
null
null
null
--- layout: post title: "Yearn Finance Newsletter #54" categories: [Newsletters] image: src: ./cover.png width: 1152 height: 576 author: Yearn date: '2022-01-20' translator: Cryptouf --- ### Week of January 16, 2022 ![](./image1.jpg?w=1100&h=554) Welcome to the 54th edition of the Yearn Finance Newsletter. Our goal with this newsletter is to keep the Yearn community, and the wider crypto community, up to date on the latest news, including product launches, governance changes, and ecosystem updates. If you would like to learn more about Yearn Finance, follow our official [Twitter](https://twitter.com/iearnfinance) and [Medium](https://medium.com/iearn) accounts. ## Summary - Introducing seven new vaults on Fantom - New Curve vaults available - Yearn Web update - Yearn vaults knocking on DeFi's door - yvBOOST update - The vaults at Yearn - Ecosystem news # Introducing seven new vaults on Fantom ![](./image2.jpg?w=550&h=733.5) On Fantom, seven vaults are landing: WBTC, WETH, SPELL, DOLA, Curve Tricrypto, Curve Geist and CRV. Meanwhile, with nearly $400 million TVL in the Fantom vaults, APYs are still juicy, up to over 45%. Most Fantom vaults use strategies on Scream, with Geist Finance and Tarot Finance strategies coming very soon. What about you? Deposit today at [yearn.finance/vaults](https://yearn.finance/vaults). # New Curve vaults available ![](./image3.jpg?w=644&h=464) On Ethereum, the following new Curve vaults are now available: CVX-ETH, CRV-ETH, 3EUR, UST Wormhole, USDPax, DOLA and RAI - the first vault using Reflexer's custom Curve implementation. As with the 3EUR, this token represents a liquidity pool on Curve.
Holders of this token receive a share of the fees generated by the pool and can also earn additional CRV by depositing the tokens into Curve's gauges. This pool contains agEUR, EURT and EURS. agEUR is a synthetic euro issued by Angle Protocol, while EURS and EURT are centralized euro-backed tokens with a fixed peg, issued by Stasis and Tether respectively. The 3EUR strategy deposits 3EURpool-f tokens on Convex Finance to earn CRV and CVX (and any other available tokens). The earned tokens are harvested and sold for more 3EURpool-f, which is deposited back into the vault. Check out the new vaults [here](https://yearn.finance/#/vaults). # Yearn Web update ![](./image4.jpg?w=450&h=367) This week's Yearn Web update features several contributors opening their first PRs and improved backend logic for Iron Bank & Curve LP tokens. Upcoming updates include a test-suite upgrade for the Yearn SDK, more descriptive simulation errors, and an API refactor with documentation. Check out the full update [here](https://yearnweb.substack.com/p/yearn-web-engineering-update). # Yearn vaults knocking on DeFi's door ![](./image5.jpg?w=957&h=538) This article by BanklessDAO offers a comprehensive summary of the features Yearn offers and describes how Yearn and its vaults make DeFi easier to use. Overall, using Yearn vaults is a bet on the protocol with the highest security among yield aggregators, one capable of bringing significant capital efficiency to DeFi thanks to the incredible work of all the strategists and the automations put in place. Read the full article [here](https://medium.com/bankless-dao/yearn-finance-vaults-knockin-on-defi-s-door-f5e9f56f669a).
# yvBOOST update ![](./image6.jpg?w=1100&h=569) Most may not realize it, but 1 yvBOOST collects 2.2x more weekly fees from the Curve protocol than 1 veCRV. yvBOOST's APR is also above 100%, on top of which 1 yvBOOST is currently 32% cheaper than 1 CRV. Finally, more than $5 million remains to be distributed to yvBOOST holders. Check out the donations [here](https://etherscan.io/address/0xdf270b48829e0f05211f3a33e5dc0a84f7247fbe). # The vaults at Yearn You can find a detailed description of the strategies of all our active yVaults [here](https://medium.com/yearn-state-of-the-vaults/the-vaults-at-yearn-9237905ffed3). # Ecosystem news [Watch out for the upcoming Yearn x Pills collaboration](https://twitter.com/bantg/status/1482764820265029633) [Check out a reading list on how to write strategies for Yearn](https://twitter.com/sjkelleyjr/status/1481664381054177281) [Explore how one DAO used Coordinape to manage payroll](https://twitter.com/jkey_eth/status/1479642151730356226) [Support a Yearn alumni’s campaign for a seat in Congress](https://twitter.com/mattdwest/status/1481083902580166656)
54.44086
490
0.78412
fra_Latn
0.953946
a1367b9bc11848acb67160de0ca16e3f82d0386a
376
md
Markdown
content/english/communities/chapters/new-york-city.md
iamrajee/website
935914a8438d4053461a8c96c417eda975687dfc
[ "MIT" ]
null
null
null
content/english/communities/chapters/new-york-city.md
iamrajee/website
935914a8438d4053461a8c96c417eda975687dfc
[ "MIT" ]
null
null
null
content/english/communities/chapters/new-york-city.md
iamrajee/website
935914a8438d4053461a8c96c417eda975687dfc
[ "MIT" ]
null
null
null
--- title: Chapter - New York City description: New York City comprises 5 boroughs sitting where the Hudson River meets the Atlantic Ocean. link: https://events.techqueria.org/new-york/ image: "/assets/img/communities/chapters/community-new-york-area-photo.jpg" aliases: - /nyc/ - /new-york-city/ - /communities/new-york-city/ - /nyc-events/ - /communities/nyc/ ---
28.923077
104
0.723404
eng_Latn
0.290087
a136de3ce7f76710f2ee169155036f4d4f9bbdf1
412
md
Markdown
Ejercicios/Enunciados/15.PropTypes.md
delval6589/Curso-de-React-Redux_ed4
a40a4f50c4ff3110260ee61a6cd23f8233c3a2f8
[ "MIT" ]
4
2020-03-01T23:30:48.000Z
2021-09-29T21:25:06.000Z
Ejercicios/Enunciados/15.PropTypes.md
delval6589/Curso-de-React-Redux_ed4
a40a4f50c4ff3110260ee61a6cd23f8233c3a2f8
[ "MIT" ]
10
2020-07-18T23:52:52.000Z
2022-03-08T23:07:13.000Z
Ejercicios/Enunciados/15.PropTypes.md
delval6589/Curso-de-React-Redux_ed4
a40a4f50c4ff3110260ee61a6cd23f8233c3a2f8
[ "MIT" ]
18
2019-11-19T18:18:21.000Z
2021-09-18T16:19:36.000Z
### Exercises: 1. Create a component called ShowServerConfig. It must validate the following: * The config passed in via props must have the following structure: - minConnections: boolean - maxConnections: boolean - restartAlways: boolean * The environment can only be dev, play or live * SSL must be required if the environment is live. <!-- // YOUR SOLUTION STARTS HERE -->
34.333333
80
0.711165
spa_Latn
0.943996
a1373eb8225db13c5d691fc600c3db02d1ece8b4
1,263
md
Markdown
distribution/fontawesome-5/Solid/Capsules.md
tmorin/plantuml-libs
2c71c27bb6f9aae3013a3140eab2fd4f994e60c5
[ "MIT" ]
71
2020-02-01T06:58:53.000Z
2022-03-16T14:58:44.000Z
distribution/fontawesome-5/Solid/Capsules.md
tmorin/plantuml-libs
2c71c27bb6f9aae3013a3140eab2fd4f994e60c5
[ "MIT" ]
9
2020-05-08T10:39:42.000Z
2022-01-24T08:22:18.000Z
distribution/fontawesome-5/Solid/Capsules.md
tmorin/plantuml-libs
2c71c27bb6f9aae3013a3140eab2fd4f994e60c5
[ "MIT" ]
21
2020-01-11T20:50:13.000Z
2021-09-29T16:21:28.000Z
# Capsules ```text fontawesome-5/Solid/Capsules ``` ```text include('fontawesome-5/Solid/Capsules') ``` | Illustration | Capsules | | :---: | :---: | | ![illustration for Illustration](../../fontawesome-5/Solid/Capsules.png) | ![illustration for Capsules](../../fontawesome-5/Solid/Capsules.Local.png) | ## Capsules ### Load remotely ```plantuml @startuml ' configures the library !global $LIB_BASE_LOCATION="https://github.com/tmorin/plantuml-libs/distribution" ' loads the library's bootstrap !include $LIB_BASE_LOCATION/bootstrap.puml ' loads the package bootstrap include('fontawesome-5/bootstrap') ' loads the Item which embeds the element Capsules include('fontawesome-5/Solid/Capsules') ' renders the element Capsules('Capsules', 'Capsules', 'an optional tech label') @enduml ``` ### Load locally ```plantuml @startuml ' configures the library !global $INCLUSION_MODE="local" !global $LIB_BASE_LOCATION="../.." ' loads the library's bootstrap !include $LIB_BASE_LOCATION/bootstrap.puml ' loads the package bootstrap include('fontawesome-5/bootstrap') ' loads the Item which embeds the element Capsules include('fontawesome-5/Solid/Capsules') ' renders the element Capsules('Capsules', 'Capsules', 'an optional tech label') @enduml ```
19.734375
153
0.734759
eng_Latn
0.411686
a1374b3d2421796df92f7d827d3ddae6a254e84c
148
md
Markdown
vocaset/templates/README.md
EvelynFan/FaceFormer
dfaea81983665b22b99af336a80574208cfcc099
[ "MIT" ]
55
2022-03-12T10:13:22.000Z
2022-03-29T06:19:58.000Z
vocaset/templates/README.md
bala1144/FaceFormer
2ffcbd912a4338d9d42208436bbb15ab7226bb22
[ "MIT" ]
8
2022-03-17T09:23:25.000Z
2022-03-30T00:34:32.000Z
vocaset/templates/README.md
bala1144/FaceFormer
2ffcbd912a4338d9d42208436bbb15ab7226bb22
[ "MIT" ]
13
2022-03-14T16:43:12.000Z
2022-03-28T06:15:49.000Z
Put "FLAME_sample.ply" in this folder. "FLAME_sample.ply" can be downloaded from [voca](https://github.com/TimoBolkart/voca/tree/master/template).
49.333333
146
0.777027
eng_Latn
0.580961
a137bafef132bea4b60335ca189b1a9b96ba01a1
389
md
Markdown
docs/api/MyComponent.md
vsc-github/react-custom-checkbox
b7d5d9ab21eea4084635e80944d38f81cb31bcd6
[ "MIT" ]
2
2016-12-05T18:13:32.000Z
2017-11-07T22:02:58.000Z
docs/api/MyComponent.md
vsc-github/react-custom-checkbox
b7d5d9ab21eea4084635e80944d38f81cb31bcd6
[ "MIT" ]
1
2016-10-03T02:39:16.000Z
2016-10-04T14:04:59.000Z
docs/api/MyComponent.md
pbeshai/react-library-starter
dddc8ae0abc3e5240aee15e8771ae27f61f52c98
[ "MIT" ]
null
null
null
### `<MyComponent defaultColor>` Draws a colored box with a button that controls cycling it through a pre-defined set of colors. #### Props * `[defaultColor]` (*String*): The color the box is initially filled with. Uses `#ffffd9` by default. #### Example ##### Using defaults ```js <MyComponent /> ``` ##### Specifying a default color ```js <MyComponent defaultColor="#0bb" /> ```
17.681818
101
0.670951
eng_Latn
0.978153
a138f0efd4fc5ce814eb8169fbbcbf7d62a5dfaf
1,031
md
Markdown
README.md
bobmaerten/shh_spotify
6037ca661acadb65e69d505dcf15062cb715347d
[ "MIT" ]
1
2016-05-09T07:09:19.000Z
2016-05-09T07:09:19.000Z
README.md
bobmaerten/shh_spotify
6037ca661acadb65e69d505dcf15062cb715347d
[ "MIT" ]
null
null
null
README.md
bobmaerten/shh_spotify
6037ca661acadb65e69d505dcf15062cb715347d
[ "MIT" ]
null
null
null
# shh\_spotify Lowers the Spotify application volume when a specific track is played. Tracks to be watched are stored in the `~/.shh_spotify.watchlist` file. The watchlist file will be created automatically if it does not exist. Successfully tested only on: - Ubuntu 12.04LTS with ruby1.9.3 installed via [rbenv](https://github.com/sstephenson/rbenv). - Ubuntu 12.10 beta2 with native ruby package (apt-get install ruby1.9.1) ## Pre-requisites - ruby with rubygems - dbus (native on linux) - pulseaudio (native on Ubuntu 12.04) ## Installation Install like any other Ruby gem: $ [sudo] gem install shh_spotify ## Usage Start Spotify, then open a console and type: $ shh_spotify Press 'q' to quit, or 'p' to insert the current track in the watchlist file. The current track's volume is automatically lowered. ## Contributing 1. Fork it 2. Create your feature branch (`git checkout -b my-new-feature`) 3. Commit your changes (`git commit -am 'Add some feature'`) 4. Push to the branch (`git push origin my-new-feature`) 5. Create new Pull Request
27.131579
94
0.737148
eng_Latn
0.979424
a13933136d7cec05ccce8715405c22b395893e9e
1,334
md
Markdown
steps/step-2/fr-fr/README.md
Gosunet/steps-stencil
b21924549b273ab3469e537c8d5a5721b099ce9c
[ "MIT" ]
null
null
null
steps/step-2/fr-fr/README.md
Gosunet/steps-stencil
b21924549b273ab3469e537c8d5a5721b099ce9c
[ "MIT" ]
null
null
null
steps/step-2/fr-fr/README.md
Gosunet/steps-stencil
b21924549b273ab3469e537c8d5a5721b099ce9c
[ "MIT" ]
null
null
null
# Storybook - What's that? <a href="https://storybook.js.org/" target="_blank">Storybook</a> will let us develop our Web Components, and even test how they integrate into sample applications. ## Samples Here you have several sample applications, built with several frameworks. * Pokedex - <a href="https://vuejs.org/" target="_blank">VueJS</a> * FaceSmash - <a href="https://svelte.dev/" target="_blank">Svelte</a> Each `sample` uses components specific to its framework. The idea will be to create Web Components and integrate them into each application. ## Components For now, you only have one component, and it is not very complete! It is a `favorite-button`, but if we take a closer look, it is painful to see. <details> <summary>favorite-button.tsx</summary> ```tsx import { Component } from '@stencil/core'; @Component({ tag: 'favorite-button', styleUrl: 'favorite-button.scss' }) export class FavoriteButton { render() { return ( <div>Hello World!</div> ); } } ``` </details> Indeed, it returns `Hello World!`, and you cannot even click on it, which is useless.. :sweat_smile: Shall we fix that? <a href="../../step-1/fr-fr/README.md">< Previous</a> - <a href="../../../README.md">Home</a> - <a href="../../step-3/fr-fr/README.md">Next ></a>
27.22449
186
0.693403
fra_Latn
0.650276
a13ce75bfef569a3ed8d8f6b0734fbe67f170808
956
md
Markdown
Chapter04/chapter04.md
waltechel/Kubernetes202202_04
d0db41e491da6ef10983e4860e29de95966b725a
[ "MIT" ]
null
null
null
Chapter04/chapter04.md
waltechel/Kubernetes202202_04
d0db41e491da6ef10983e4860e29de95966b725a
[ "MIT" ]
null
null
null
Chapter04/chapter04.md
waltechel/Kubernetes202202_04
d0db41e491da6ef10983e4860e29de95966b725a
[ "MIT" ]
null
null
null
# Chapter 4. Replication and other controllers: deploying managed pods ## 4.1 Keeping pods healthy ### 4.1.1 Introducing liveness probes ### 4.1.2 Creating an HTTP-based liveness probe ### 4.1.3 Seeing a liveness probe in action ### 4.1.4 Configuring additional properties of the liveness probe ### 4.1.5 Creating effective liveness probes ## 4.2 Introducing ReplicationControllers ### 4.2.1 How a ReplicationController operates ### 4.2.2 Creating a ReplicationController ### 4.2.3 Seeing the ReplicationController in action ### 4.2.4 Moving pods in and out of the scope of a ReplicationController ### 4.2.5 Changing the pod template ### 4.2.6 Horizontally scaling pods ### 4.2.7 Deleting a ReplicationController ## 4.3 Using ReplicaSets instead of ReplicationControllers ### 4.3.1 Comparing a ReplicaSet to a ReplicationController ### 4.3.2 Defining a ReplicaSet ### 4.3.3 Creating and examining a ReplicaSet ### 4.3.4 Using the ReplicaSet's more expressive label selectors ### 4.3.5 Wrapping up ReplicaSets ## 4.4 Running exactly one pod on each node with DaemonSets ### 4.4.1 Using a DaemonSet to run a pod on every node ### 4.4.2 Using a DaemonSet to run pods only on certain nodes ## 4.5 Running pods that perform a single completable task ### 4.5.1 Introducing the Job resource ### 4.5.2 Defining a Job resource ### 4.5.3 Seeing a Job run a pod ### 4.5.4 Running multiple pod instances in a Job ### 4.5.5 Limiting the time allowed for a Job pod to complete ## 4.6 Scheduling Jobs to run periodically or once ### 4.6.1 Creating a CronJob ### 4.6.2 Understanding how scheduled jobs are run ## 4.7 Summary
14.058824
39
0.587866
kor_Hang
1.00001
a13d2e5c19150c80f69abe3463f04c979f848777
8,670
md
Markdown
articles/frontdoor/front-door-custom-domain.md
jkudo/azure-docs.ja-jp
91f0b0c63c4e01743cd750160d36fdb3a9d7c6a7
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/frontdoor/front-door-custom-domain.md
jkudo/azure-docs.ja-jp
91f0b0c63c4e01743cd750160d36fdb3a9d7c6a7
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/frontdoor/front-door-custom-domain.md
jkudo/azure-docs.ja-jp
91f0b0c63c4e01743cd750160d36fdb3a9d7c6a7
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Tutorial - Add a custom domain to your Azure Front Door configuration description: This tutorial shows you how to onboard a custom domain to Azure Front Door. services: frontdoor documentationcenter: '' author: sharad4u editor: '' ms.service: frontdoor ms.workload: infrastructure-services ms.tgt_pltfrm: na ms.devlang: na ms.topic: tutorial ms.date: 09/10/2018 ms.author: sharadag ms.openlocfilehash: fb9e369bbba72cd3a1dd7fcc864e2845e3a979e9 ms.sourcegitcommit: dbde4aed5a3188d6b4244ff7220f2f75fce65ada ms.translationtype: HT ms.contentlocale: ja-JP ms.lasthandoff: 11/19/2019 ms.locfileid: "74184637" --- # <a name="tutorial-add-a-custom-domain-to-your-front-door"></a>Tutorial: Add a custom domain to your Front Door This tutorial shows you how to add a custom domain to your Front Door. When you use Azure Front Door Service for application delivery, a custom domain is necessary if you want your own domain name to be visible in your end-user requests. Having a visible domain name is convenient for your customers and useful for branding purposes. After you create a Front Door, the default frontend host, which is a subdomain of `azurefd.net`, is included in the URL for delivering Front Door content from your backend by default (for example, https:\//contoso.azurefd.net/activeusers.htm). For your convenience, Azure Front Door provides the option of associating a custom domain with the default host. With this option, you deliver your content with a custom domain in your URL instead of a domain name owned by Front Door (for example, https:\//www.contoso.com/photo.png). In this tutorial, you learn how to: > [!div class="checklist"] > - Create a CNAME DNS record. > - Associate the custom domain with your Front Door. > - Verify the custom domain. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## <a name="prerequisites"></a>Prerequisites To complete the steps in this tutorial, you must first create a Front Door. For more information, see [Quickstart: Create a Front Door](quickstart-create-front-door.md). If you do not already have a custom domain, you must first purchase one from a domain provider. For example, see [Buy a custom domain name](https://docs.microsoft.com/azure/app-service/manage-custom-dns-buy-domain). If you are using Azure to host your [DNS domains](https://docs.microsoft.com/azure/dns/dns-overview), you must delegate the domain provider's domain name system (DNS) to Azure DNS. For more information, see [Delegate a domain to Azure DNS](https://docs.microsoft.com/azure/dns/dns-delegate-domain-azure-dns).
Otherwise, if you are using a domain provider to handle your DNS domain, continue to [Create a CNAME DNS record](#create-a-cname-dns-record). ## <a name="create-a-cname-dns-record"></a>Create a CNAME DNS record Before you can use a custom domain with your Front Door, you must first create a canonical name (CNAME) record with your domain provider to point to your Front Door's default frontend host (say, contoso.azurefd.net). A CNAME record is a type of DNS record that maps a source domain name to a destination domain name. For Azure Front Door Service, the source domain name is your custom domain name, and the destination domain name is your Front Door default hostname. After Front Door verifies the CNAME record that you created, traffic addressed to the source custom domain (such as www\.contoso.com) is routed to the specified destination Front Door default frontend host (such as contoso.azurefd.net). A custom domain and its subdomains can be associated with only a single Front Door at a time. However, you can use different subdomains from the same custom domain for different Front Doors by using multiple CNAME records. You can also map custom domains with different subdomains to the same Front Door. ## <a name="map-the-temporary-afdverify-sub-domain"></a>Map the temporary afdverify subdomain When you map an existing domain that is in production, there are special considerations. While you are registering your custom domain in the Azure portal, a brief period of downtime for the domain can occur. To avoid interruption of web traffic, map your custom domain to your Front Door default frontend host with the Azure afdverify subdomain to create a temporary CNAME mapping. With this method, users can access your domain without interruption while the DNS mapping occurs. Otherwise, if you are using your custom domain for the first time and no production traffic is running on it, you can map your custom domain to your Front Door directly. Proceed to [Map the permanent custom domain](#map-the-permanent-custom-domain). To create a CNAME record with the afdverify subdomain: 1. Sign in to the website of the domain provider for your custom domain. 2. Find the page for managing DNS records by consulting the provider's documentation, or by searching for areas of the website labeled **Domain Name**, **DNS**, or **Name Server Management**. 3. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names may vary): | Source | Type | Destination | |---------------------------|-------|---------------------------------| | afdverify.www.contoso.com | CNAME | afdverify.contoso.azurefd.net | - Source: Enter your custom domain name, including the afdverify subdomain, in the following format: afdverify._&lt;custom domain name&gt;_. For example, afdverify.www.contoso.com.
- Type: Enter *CNAME*. - Destination: Enter your default Front Door frontend host, including the afdverify subdomain, in the following format: afdverify._&lt;endpoint name&gt;_.azurefd.net. For example, afdverify.contoso.azurefd.net. 4. Save your changes. For example, the procedure for the GoDaddy domain registrar is as follows: 1. Sign in and select the custom domain you want to use. 2. In the Domains section, select **Manage All**, then select **DNS** | **Manage Zones**. 3. For **Domain Name**, enter your custom domain, then select **Search**. 4. On the **DNS Management** page, select **Add**, then select **CNAME** in the **Type** list. 5. Complete the following fields of the CNAME entry: - Type: Leave *CNAME* selected. - Host: Enter the subdomain of your custom domain to use, including the afdverify subdomain name. For example, afdverify.www. - Points to: Enter the host name of your default Front Door frontend host, including the afdverify subdomain name. For example, afdverify.contoso.azurefd.net. - TTL: Leave *1 Hour* selected. 6. Select **Save**. The CNAME entry is added to the DNS records table. ## <a name="associate-the-custom-domain-with-your-front-door"></a>Associate the custom domain with your Front Door After you have registered your custom domain, you can then add it to your Front Door. 1. Sign in to the [Azure portal](https://portal.azure.com/) and browse to the Front Door containing the frontend host that you want to map to a custom domain. 2. On the **Front Door designer** page, click '+' to add a custom domain. 3. Specify **Custom domain**. 4. For **Frontend host**, the frontend host to use as the destination domain of your CNAME record is pre-filled and is derived from your Front Door: *&lt;default hostname&gt;*.azurefd.net. Do not change this value. 5. For **Custom hostname**, enter your custom domain, including the subdomain, to use as the source domain of your CNAME record. For example, www\.contoso.com or cdn.contoso.com. Do not use the afdverify subdomain name. 6.
Select **Add**. Azure verifies that the CNAME record exists for the custom domain name you entered. If the CNAME is correct, your custom domain is validated. >[!WARNING] > You **must** ensure that each frontend host (including custom domains) in your Front Door has a routing rule with a default path ("/\*") associated with it. That is, across all of your routing rules there must be at least one routing rule for each of your frontend hosts defined at the default path ("/\*"). Failing to do so may result in your end-user traffic not getting routed correctly. ## <a name="verify-the-custom-domain"></a>Verify the custom domain After you have completed the registration of your custom domain, verify that the custom domain references your default Front Door frontend host. In your browser, navigate to the address of a file by using your custom domain. For example, if your custom domain is robotics.contoso.com, the URL to the cached file should be similar to http:\//robotics.contoso.com/my-public-container/my-file.jpg. Verify that the result is the same as when you access the Front Door directly at *&lt;Front Door host&gt;*.azurefd.net. ## <a name="map-the-permanent-custom-domain"></a>Map the permanent custom domain If you have verified that the afdverify subdomain has been successfully mapped to your Front Door (or if you are using a new custom domain that is not in production), you can then map the custom domain directly to your default Front Door frontend host. To create a CNAME record for your custom domain: 1. Sign in to the website of the domain provider for your custom domain. 2. Find the page for managing DNS records by consulting the provider's documentation, or by searching for areas of the website labeled **Domain Name**, **DNS**, or **Name Server Management**. 3. Create a CNAME record entry for your custom domain and complete the fields as shown in the following table (field names may vary): | Source | Type | Destination | |-----------------|-------|-----------------------| | <www.contoso.com> | CNAME | contoso.azurefd.net | - Source: Enter your custom domain name (for example, www\.contoso.com). - Type: Enter *CNAME*. - Destination: Enter your default Front Door frontend host. It must be in the following format: _&lt;hostname&gt;_.azurefd.net. For example, contoso.azurefd.net. 4. Save your changes. 5. If you previously created a temporary afdverify subdomain CNAME record, delete it. 6. If you are using this custom domain in production for the first time, follow the steps in [Associate the custom domain with your Front Door](#associate-the-custom-domain-with-your-front-door) and [Verify the custom domain](#verify-the-custom-domain). For example, the procedure for the GoDaddy domain registrar is as follows: 1. Sign in and select the custom domain you want to use. 2. In the Domains section, select **Manage All**, then select **DNS** | **Manage Zones**. 3.
For **Domain Name**, enter your custom domain, then select **Search**. 4. On the **DNS Management** page, select **Add**, then select **CNAME** in the **Type** list. 5. Complete the fields of the CNAME entry: - Type: Leave *CNAME* selected. - Host: Enter the subdomain of your custom domain to use. For example, www or profile. - Points to: Enter the default host name of your Front Door. For example, contoso.azurefd.net. - TTL: Leave *1 Hour* selected. 6. Select **Save**. The CNAME entry is added to the DNS records table. 7. If you have an afdverify CNAME record, select the pencil icon next to it, then select the trash can icon. 8. Select **Delete** to delete the CNAME record. ## <a name="clean-up-resources"></a>Clean up resources In the preceding steps, you added a custom domain to a Front Door. If you no longer want to associate your Front Door with a custom domain, you can remove the custom domain by performing these steps: 1. In your Front Door designer, select the custom domain that you want to remove. 2. Click Delete from the context menu for the custom domain. The custom domain is disassociated from your endpoint. ## <a name="next-steps"></a>Next steps In this tutorial, you learned how to: > [!div class="checklist"] > - Create a CNAME DNS record. > - Associate the custom domain with your Front Door. > - Verify the custom domain.
42.292683
432
0.739908
jpn_Jpan
0.557769
a13d84e79fc891611e1c3ca4a4d07119e2876b34
2,117
md
Markdown
readme.md
chaosjester/PHPInstallMiiRepoAdmin
185ed630202f7f8345ce051501c75181a44e20c3
[ "Unlicense" ]
2
2016-03-23T10:22:22.000Z
2016-03-24T10:34:18.000Z
readme.md
chaosjester/PHPInstallMiiRepoAdmin
185ed630202f7f8345ce051501c75181a44e20c3
[ "Unlicense" ]
1
2016-03-30T06:50:58.000Z
2016-03-30T07:10:52.000Z
readme.md
chaosjester/PHPInstallMiiRepoAdmin
185ed630202f7f8345ce051501c75181a44e20c3
[ "Unlicense" ]
null
null
null
# PHPInstallMiiRepoAdmin This is a PHP admin tool to compile several files required by the InstallMii 3DS Homebrew app. You can get InstallMii here - https://gbatemp.net/threads/wip-installmii-graphical-repository-downloader.406097/ This tool will create your repo.list, package.list and scrape information from .smdh files to create the packages.json. While it is preferable to have an SMDH file available for scraping, this tool will still add an entry if none exists. Packages can be modified once imported. The index has a download link to the repo.list, the package.list and packages.json files are all for the backend. For SQLite version visit https://github.com/chaosjester/InstallMiiRepoAdmin/releases # Requirements: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Homebrew apps MUST be in a folder named 3ds under the repo root directory For example, if your repo is http://repo.example.com/ apps must be in http://repo.example.com/3ds or if in http://example.com/repo the 3ds directory must be in http://example.com/repo/3ds !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! 
Apache PHP >= 5.5 - This is due to password management, there are no exceptions to this PHP MUST run as your user, not a service account, unless that account has permissions to the folders MySQL Directories must be writable An SMDH file should be present in the homebrew application folder or some manual configuration is required # Instructions: Download the latest release You might need to create an SQL database on your server, along with a user that has access to create tables and modify tables, though the installer may create them for you Upload to your webhost Go to http://yourrepo.com/, you will be directed to the install page On the install page, follow the directions to create the database and user Head back to http://yourrepo.com/admin and ensure you can log in Create additional admin accounts if required, otherwise it is advised to delete the /admin/install directory Once in, the interface is pretty straightforward.
35.881356
168
0.725555
eng_Latn
0.996061
a13db307cc7f79ea7db55be005ba16a3e4e772dc
7,266
md
Markdown
README.md
c410-f3r/honggfuzz-rs
652d8d970e6e15c0cde454d1c7816cc1ad114ed9
[ "MIT", "WTFPL", "Apache-2.0", "Unlicense" ]
null
null
null
README.md
c410-f3r/honggfuzz-rs
652d8d970e6e15c0cde454d1c7816cc1ad114ed9
[ "MIT", "WTFPL", "Apache-2.0", "Unlicense" ]
null
null
null
README.md
c410-f3r/honggfuzz-rs
652d8d970e6e15c0cde454d1c7816cc1ad114ed9
[ "MIT", "WTFPL", "Apache-2.0", "Unlicense" ]
null
null
null
# honggfuzz-rs [![Build Status][travis-img]][travis] [![Crates.io][crates-img]][crates] [![Documentation][docs-img]][docs] [travis-img]: https://travis-ci.org/rust-fuzz/honggfuzz-rs.svg?branch=master [travis]: https://travis-ci.org/rust-fuzz/honggfuzz-rs [crates-img]: https://img.shields.io/crates/v/honggfuzz.svg [crates]: https://crates.io/crates/honggfuzz [docs-img]: https://docs.rs/honggfuzz/badge.svg [docs]: https://docs.rs/honggfuzz Fuzz your Rust code with Google-developed Honggfuzz ! ## [Documentation](https://docs.rs/honggfuzz) [![asciicast](https://asciinema.org/a/43MLo5Xl8ukHxgwDLArKqS9xc.png)](https://asciinema.org/a/43MLo5Xl8ukHxgwDLArKqS9xc) ## About Honggfuzz Honggfuzz is a security oriented fuzzer with powerful analysis options. Supports evolutionary, feedback-driven fuzzing based on code coverage (software- and hardware-based). * project homepage [honggfuzz.com](http://honggfuzz.com/) * project repository [github.com/google/honggfuzz](https://github.com/google/honggfuzz) * this upstream project is maintained by Google, but ... 
* this is NOT an official Google product ## Compatibility * __Rust__: stable, beta, nightly * __OS__: GNU/Linux, macOS, FreeBSD, NetBSD, Android, WSL (Windows Subsystem for Linux) * __Arch__: x86_64, x86, arm64-v8a, armeabi-v7a, armeabi * __Sanitizer__: none, address, thread, leak ## Dependencies ### Linux * C compiler: `cc` * GNU Make: `make` * GNU Binutils development files for the BFD library: `libbfd.h` * libunwind development files: `libunwind.h` * Blocks runtime library (when compiling with clang) For example on Debian and its derivatives: ```sh sudo apt install build-essential binutils-dev libunwind-dev libblocksruntime-dev ``` ## How to use this crate Install honggfuzz commands to build with instrumentation and fuzz ```sh # installs hfuzz and honggfuzz subcommands in cargo cargo install honggfuzz ``` Add to your dependencies ```toml [dependencies] honggfuzz = "0.5" ``` Create a target to fuzz ```rust #[macro_use] extern crate honggfuzz; fn main() { // Here you can parse `std::env::args` and // setup / initialize your project // You have full control over the loop but // you're supposed to call `fuzz` ad vitam aeternam loop { // The fuzz macro gives an arbitrary object (see `arbitrary` crate) // to a closure-like block of code. // For performance reasons, it is recommended that you use the native type // `&[u8]` when possible. // Here, this slice will contain a "random" quantity of "random" data. fuzz!(|data: &[u8]| { if data.len() != 6 {return} if data[0] != b'q' {return} if data[1] != b'w' {return} if data[2] != b'e' {return} if data[3] != b'r' {return} if data[4] != b't' {return} if data[5] != b'y' {return} panic!("BOOM") }); } } ``` Fuzz for fun and profit! 
```sh # builds with fuzzing instrumentation and then fuzz the "example" target cargo hfuzz run example ``` Once you get a crash, replay it easily in a debug environment ```sh # builds the target in debug mode and replays automatically the crash in rust-lldb cargo hfuzz run-debug example fuzzing_workspace/*.fuzz ``` You can also build and run your project without compile-time software instrumentation (LLVM's SanCov passes). This allows you, for example, to try hardware-only feedback driven fuzzing: ```sh # builds without fuzzing instrumentation and then fuzz the "example" target using hardware-based feedback HFUZZ_RUN_ARGS="--linux_perf_ipt_block --linux_perf_instr --linux_perf_branch" cargo hfuzz run-no-instr example ``` Clean ```sh # a wrapper on "cargo clean" which cleans the fuzzing_target directory cargo hfuzz clean ``` Version ```sh cargo hfuzz version ``` ### Environment variables #### `RUSTFLAGS` You can use `RUSTFLAGS` to send additional arguments to `rustc`. For instance, you can enable the use of LLVM's [sanitizers](https://github.com/japaric/rust-san). This is a recommended option if you want to test your `unsafe` rust code but it will have an impact on performance. ```sh RUSTFLAGS="-Z sanitizer=address" cargo hfuzz run example ``` #### `HFUZZ_BUILD_ARGS` You can use `HFUZZ_BUILD_ARGS` to send additional arguments to `cargo build`. #### `HFUZZ_RUN_ARGS` You can use `HFUZZ_RUN_ARGS` to send additional arguments to `honggfuzz`. See [USAGE](https://github.com/google/honggfuzz/blob/master/docs/USAGE.md) for the list of those. For example: ```sh # 1 second of timeout # use 12 fuzzing threads # be verbose # stop after 1000000 fuzzing iterations # exit upon crash HFUZZ_RUN_ARGS="-t 1 -n 12 -v -N 1000000 --exit_upon_crash" cargo hfuzz run example ``` #### `HFUZZ_DEBUGGER` By default we use `rust-lldb` but you can change it to `rust-gdb`, `gdb`, `/usr/bin/lldb-7` ... 
#### `CARGO_TARGET_DIR` Target compilation directory, defaults to `hfuzz_target` to not clash with `cargo build`'s default `target` directory. #### `HFUZZ_WORKSPACE` Honggfuzz working directory, defaults to `hfuzz_workspace`. #### `HFUZZ_INPUT` Honggfuzz input files (also called "corpus"), defaults to `$HFUZZ_WORKSPACE/{TARGET}/input`. ## Conditional compilation Sometimes, it is necessary to make some specific adaptations to your code to yield a better fuzzing efficiency. For instance: - Make your software's behavior as deterministic as possible on the fuzzing input - [PRNG](https://en.wikipedia.org/wiki/Pseudorandom_number_generator)s must be seeded with a constant or the fuzzer input - Behavior shouldn't change based on the computer's clock. - Avoid potential nondeterministic behavior from racing threads. - ... - Never ever call `std::process::exit()`. - Disable logging and other unnecessary functionalities. - Try to avoid modifying global state when possible. - Do not set up your own panic hook when run with `cfg(fuzzing)` When building with `cargo hfuzz`, the argument `--cfg fuzzing` is passed to `rustc` to allow you to condition the compilation of those adaptations thanks to the `cfg` macro like so: ```rust #[cfg(fuzzing)] let mut rng = rand_chacha::ChaCha8Rng::from_seed([0; 32]); #[cfg(not(fuzzing))] let mut rng = rand::thread_rng(); ``` Also, when building in debug mode, the `fuzzing_debug` argument is added in addition to `fuzzing`. For more information about conditional compilation, please see the [reference](https://doc.rust-lang.org/reference/attributes.html#conditional-compilation). 
## Relevant documentation about honggfuzz * [USAGE](https://github.com/google/honggfuzz/blob/master/docs/USAGE.md) * [FeedbackDrivenFuzzing](https://github.com/google/honggfuzz/blob/master/docs/FeedbackDrivenFuzzing.md) * [PersistentFuzzing](https://github.com/google/honggfuzz/blob/master/docs/PersistentFuzzing.md) ## About Rust fuzzing There are other projects providing Rust fuzzing support at [github.com/rust-fuzz](https://github.com/rust-fuzz). You'll find support for [AFL](https://github.com/rust-fuzz/afl.rs) and LLVM's [LibFuzzer](https://github.com/rust-fuzz/cargo-fuzz), and there is also a [trophy case](https://github.com/rust-fuzz/trophy-case) ;-). This crate was inspired by those projects!
32.72973
212
0.723507
eng_Latn
0.910774
a13dda35dd4ec6d66369ba0829987cd9740d67e2
291
md
Markdown
Fable.Import.NodeLibzfs/README.md
dsheets/rust-libzfs
30f5771d56be31d8ece98024276fff030b5a759a
[ "MIT" ]
null
null
null
Fable.Import.NodeLibzfs/README.md
dsheets/rust-libzfs
30f5771d56be31d8ece98024276fff030b5a759a
[ "MIT" ]
null
null
null
Fable.Import.NodeLibzfs/README.md
dsheets/rust-libzfs
30f5771d56be31d8ece98024276fff030b5a759a
[ "MIT" ]
null
null
null
# Fable.Import.NodeLibzfs [![NuGet](https://img.shields.io/nuget/dt/Fable.Import.NodeLibzfs.svg)](https://www.nuget.org/packages/Fable.Import.NodeLibzfs) This package contains bindings and [Thot](https://github.com/MangelMaxime/Thot) encoders / decoders for [node-libzfs](../node-libzfs).
41.571429
127
0.762887
yue_Hant
0.514979
a13df715a3f154e27f1fbba69ff782559181bfaf
1,649
md
Markdown
includes/migration-guide/runtime/ef/log-file-name-created-by-objectcontextcreatedatabase-method-has-changed.md
MoisesMlg/docs.es-es
4e8c9f518ab606048dd16b6c6a43a4fa7de4bcf5
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/migration-guide/runtime/ef/log-file-name-created-by-objectcontextcreatedatabase-method-has-changed.md
MoisesMlg/docs.es-es
4e8c9f518ab606048dd16b6c6a43a4fa7de4bcf5
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/migration-guide/runtime/ef/log-file-name-created-by-objectcontextcreatedatabase-method-has-changed.md
MoisesMlg/docs.es-es
4e8c9f518ab606048dd16b6c6a43a4fa7de4bcf5
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- ms.openlocfilehash: e77e37156de759856c8a6f2e0c009caf9e1fe826 ms.sourcegitcommit: cbacb5d2cebbf044547f6af6e74a9de866800985 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 09/05/2020 ms.locfileid: "89497742" --- ### <a name="log-file-name-created-by-the-objectcontextcreatedatabase-method-has-changed-to-match-sql-server-specifications"></a>Log file name created by the ObjectContext.CreateDatabase method has changed to match SQL Server specifications #### <a name="details"></a>Details When the <xref:System.Data.Objects.ObjectContext.CreateDatabase?displayProperty=fullName> method is called either directly or by using Code First with the SqlClient provider and an AttachDBFilename value in the connection string, it creates a log file named filename_log.ldf instead of filename.ldf (where filename is the name of the file specified by the AttachDBFilename value). This change improves debugging by providing a log file named according to SQL Server specifications. #### <a name="suggestion"></a>Suggestion If the name of the log file is important to an app, the app should be updated to expect the standard _log.ldf file name format. | Name | Value | |:--------|:------------| | Scope | Edge | | Version | 4.5 | | Type | Runtime | #### <a name="affected-apis"></a>Affected APIs - <xref:System.Data.Objects.ObjectContext.CreateDatabase?displayProperty=nameWithType> <!-- #### Affected APIs - `M:System.Data.Objects.ObjectContext.CreateDatabase` -->
45.805556
550
0.782899
spa_Latn
0.928182
a13f12cf5cfff173ba4dc490f4b36f002f908306
3,708
md
Markdown
docs/data/oledb/reading-strings-into-the-ole-db-provider.md
sgrepos/cpp-docs
8c2de32e96c84d0147af3cce1e89e4f28707ff12
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/data/oledb/reading-strings-into-the-ole-db-provider.md
sgrepos/cpp-docs
8c2de32e96c84d0147af3cce1e89e4f28707ff12
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/data/oledb/reading-strings-into-the-ole-db-provider.md
sgrepos/cpp-docs
8c2de32e96c84d0147af3cce1e89e4f28707ff12
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "Reading Strings into the OLE DB Provider | Microsoft Docs" ms.custom: "" ms.date: "10/13/2018" ms.technology: ["cpp-data"] ms.topic: "reference" dev_langs: ["C++"] helpviewer_keywords: ["OLE DB providers, reading strings into"] ms.assetid: 517f322c-f37e-4eed-bf5e-dd9a412c2f98 author: "mikeblome" ms.author: "mblome" ms.workload: ["cplusplus", "data-storage"] --- # Reading Strings into the OLE DB Provider The `RCustomRowset::Execute` function opens a file and reads strings. The consumer passes the file name to the provider by calling [ICommandText::SetCommandText](/previous-versions/windows/desktop/ms709757). The provider receives the file name and stores it in the member variable `m_szCommandText`. `Execute` reads the file name from `m_szCommandText`. If the file name is invalid or the file is unavailable, `Execute` returns an error. Otherwise, it opens the file and calls `fgets` to retrieve the strings. For each set of strings it reads, `Execute` creates an instance of the user record (`CAgentMan`) and places it into an array. If the file cannot be opened, `Execute` must return DB_E_NOTABLE. If it returns E_FAIL instead, the provider will not work with many consumers and will not pass the OLE DB [conformance tests](../../data/oledb/testing-your-provider.md). ## Example The edited `Execute` function looks like this: ```cpp ///////////////////////////////////////////////////////////////////////// // CustomRS.h class RCustomRowset : public CRowsetImpl< RCustomRowset, CAgentMan, CRCustomCommand> { public: HRESULT Execute(DBPARAMS * pParams, LONG* pcRowsAffected) { enum { sizeOfBuffer = 256, sizeOfFile = MAX_PATH }; USES_CONVERSION; FILE* pFile = NULL; TCHAR szString[sizeOfBuffer]; TCHAR szFile[sizeOfFile]; size_t nLength; errno_t err; ObjectLock lock(this); // From a filename, passed in as a command text, scan the file // placing data in the data array. 
if (!m_szCommandText) { ATLTRACE("No filename specified"); return E_FAIL; } // Open the file _tcscpy_s(szFile, sizeOfFile, m_szCommandText); if (szFile[0] == _T('\0') || ((err = fopen_s(&pFile, &szFile[0], "r")) != 0)) { ATLTRACE("Could not open file"); return DB_E_NOTABLE; } // Scan and parse the file. // The file should contain two strings per record LONG cFiles = 0; while (fgets(szString, sizeOfBuffer, pFile) != NULL) { nLength = strnlen(szString, sizeOfBuffer); szString[nLength-1] = '\0'; // Strip off trailing CR/LF CAgentMan am; _tcscpy_s(am.szCommand, am.sizeOfCommand, szString); _tcscpy_s(am.szCommand2, am.sizeOfCommand2, szString); if (fgets(szString, sizeOfBuffer, pFile) != NULL) { nLength = strnlen(szString, sizeOfBuffer); szString[nLength-1] = '\0'; // Strip off trailing CR/LF _tcscpy_s(am.szText, am.sizeOfText, szString); _tcscpy_s(am.szText2, am.sizeOfText2, szString); } am.dwBookmark = ++cFiles; if (!m_rgRowData.Add(am)) { ATLTRACE("Couldn't add data to array"); fclose(pFile); return E_FAIL; } } fclose(pFile); if (pcRowsAffected != NULL) *pcRowsAffected = cFiles; return S_OK; } }; ``` ## See Also [Implementing the Simple Read-Only Provider](../../data/oledb/implementing-the-simple-read-only-provider.md)
37.836735
635
0.61246
eng_Latn
0.633316
a13f3ad4fc8ec8d16ad87684630ed35701348c3b
720
md
Markdown
docs/utility-functions.md
bartektelec/color-utils
3fba6e2a71119495a5caaf294923e42ea21847d4
[ "MIT" ]
null
null
null
docs/utility-functions.md
bartektelec/color-utils
3fba6e2a71119495a5caaf294923e42ea21847d4
[ "MIT" ]
null
null
null
docs/utility-functions.md
bartektelec/color-utils
3fba6e2a71119495a5caaf294923e42ea21847d4
[ "MIT" ]
null
null
null
# Utility functions This package includes several utility functions for converting color values. These are more strict about the input than the Color() object. ## Methods ### hexToRgb - `hexToRgb(hex)` Converts provided hex value (string) to 8-bit RGB value. Output: ```js { red: 0-255, green: 0-255, blue: 0-255, alpha: 0-1 } ``` **Example** ```js hexToRgb("6fa053"); // {red: 111, green: 160, blue: 83, alpha: 1} ``` --- ### rgbToHex - `rgbToHex(r, g, b, a?)` Converts provided 8-bit rgb value to hex value. Output: ```js { hex: "rrggbb", hexa: "rrggbbff" } ``` **Example** ```js rgbToHex(111, 160, 83); // {hex: "6fa053", hexa: "6fa053ff"} ```
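As a rough illustration of the conversions documented above, the two helpers could be implemented along these lines. This is a hypothetical sketch, not the package's actual source: only the function names and example values come from the docs; everything else is an assumption.

```javascript
// Hypothetical sketch of the documented converters (not the package source).
// hexToRgb: "rrggbb" or "rrggbbaa" (leading '#' tolerated) -> 8-bit RGB + 0-1 alpha.
function hexToRgb(hex) {
  const h = hex.replace(/^#/, "");
  const red = parseInt(h.slice(0, 2), 16);
  const green = parseInt(h.slice(2, 4), 16);
  const blue = parseInt(h.slice(4, 6), 16);
  // Alpha byte is optional; default to fully opaque.
  const alpha = h.length >= 8 ? parseInt(h.slice(6, 8), 16) / 255 : 1;
  return { red, green, blue, alpha };
}

// rgbToHex: 8-bit channels (alpha 0-1, optional) -> { hex, hexa }.
function rgbToHex(r, g, b, a = 1) {
  const to2 = (n) => n.toString(16).padStart(2, "0");
  const hex = to2(r) + to2(g) + to2(b);
  return { hex, hexa: hex + to2(Math.round(a * 255)) };
}

console.log(hexToRgb("6fa053")); // matches the example output above
```

Note that, unlike the real utility functions, this sketch does no strictness checks on its input.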
14.4
139
0.586111
eng_Latn
0.843168
a140024c5904857802672f2b207418174d69c444
48
md
Markdown
docs/gitOpsArgoCD.md
mattfarina/appdelivery-demo
65775fd52510699aae5bc16aa780838f09d876ac
[ "Apache-2.0" ]
1
2021-06-16T07:36:08.000Z
2021-06-16T07:36:08.000Z
docs/gitOpsArgoCD.md
thschue/appdelivery-demo
25843f2522245671f139d5acf0bdc22f22087e12
[ "Apache-2.0" ]
null
null
null
docs/gitOpsArgoCD.md
thschue/appdelivery-demo
25843f2522245671f139d5acf0bdc22f22087e12
[ "Apache-2.0" ]
null
null
null
# Delivering the example using GitOps and ArgoCD
48
48
0.833333
eng_Latn
0.999301
a140056df1b5f3413772720cc0cad763250a195a
23,371
md
Markdown
articles/virtual-network/virtual-network-vnet-plan-design-arm.md
OpenLocalizationTestOrg/azure-docs-pr15_nl-NL
389bf74805bf9458069a8f4a1acdccd760060bc9
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-network/virtual-network-vnet-plan-design-arm.md
OpenLocalizationTestOrg/azure-docs-pr15_nl-NL
389bf74805bf9458069a8f4a1acdccd760060bc9
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/virtual-network/virtual-network-vnet-plan-design-arm.md
OpenLocalizationTestOrg/azure-docs-pr15_nl-NL
389bf74805bf9458069a8f4a1acdccd760060bc9
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties pageTitle="Azure Virtual Network (VNet) planning and design guide | Microsoft Azure" description="Learn how to plan and design virtual networks in Azure based on your isolation, connectivity, and location requirements." services="virtual-network" documentationCenter="na" authors="jimdial" manager="carmonm" editor="tysonn" />
<tags ms.service="virtual-network" ms.devlang="na" ms.topic="article" ms.tgt_pltfrm="na" ms.workload="infrastructure-services" ms.date="02/08/2016" ms.author="jdial" />

# <a name="plan-and-design-azure-virtual-networks"></a>Plan and design Azure virtual networks

Creating a VNet to experiment with is easy enough, but chances are, you will deploy multiple VNets over time to support the production needs of your organization. With some planning and design, you will be able to deploy VNets and connect the resources you need more effectively. If you are not familiar with VNets, it's recommended that you [learn about VNets](virtual-networks-overview.md) and [how to deploy](virtual-networks-create-vnet-arm-pportal.md) one before proceeding.

## <a name="plan"></a>Plan

A thorough understanding of Azure subscriptions, regions, and network resources is critical for success. You can use the list of considerations below as a starting point. Once you understand those considerations, you can define the requirements for your network design.

### <a name="considerations"></a>Considerations

Before answering the planning questions below, consider the following:

- Everything you create in Azure is composed of one or more resources. A virtual machine (VM) is a resource, the network interface card (NIC) used by a VM is a resource, the public IP address used by a NIC is a resource, and the VNet the NIC is connected to is a resource.
- You create resources within an [Azure region](https://azure.microsoft.com/regions/#services) and subscription. And resources can only be connected to a VNet that exists in the same region and subscription they are in.
- You can connect VNets to each other by using an Azure [VPN gateway](../vpn-gateway/vpn-gateway-vnet-vnet-rm-ps.md). You can also connect VNets across regions and subscriptions this way.
- You can connect VNets to your on-premises network by using one of the [connectivity options](../vpn-gateway/vpn-gateway-about-vpngateways.md#site-to-site-and-multi-site) available in Azure.
- You can group different resources in [resource groups](../azure-resource-manager/resource-group-overview.md#resource-groups), making it easier to manage the resources as a unit. A resource group can contain resources from multiple regions, as long as the resources belong to the same subscription.

### <a name="define-requirements"></a>Define requirements

Use the questions below as a starting point for your Azure network design.

1. What Azure locations will you use to host VNets?
2. Do you need to provide communication between these Azure locations?
3. Do you need to provide communication between your Azure VNet(s) and your on-premises datacenter(s)?
4. How many infrastructure as a service (IaaS) VMs, cloud services roles, and web apps do you need for your solution?
5. Do you need to isolate traffic based on groups of VMs (that is, front end web servers and back end database servers)?
6. Do you need to control the flow of network traffic by using virtual appliances?
7. Do users need different sets of permissions to different Azure resources?

### <a name="understand-vnet-and-subnet-properties"></a>Understand VNet and subnet properties

VNet and subnet resources help define a security boundary for workloads running in Azure.

A VNet is characterized by a collection of address spaces, defined as CIDR blocks.

>[AZURE.NOTE] Network administrators are familiar with CIDR notation. If you are not familiar with CIDR, [learn more about it](http://whatismyipaddress.com/cidr).

VNets contain the following properties.

|Property|Description|Constraints|
|---|---|---|
|**name**|VNet name|String of up to 80 characters. May contain letters, numbers, underscores, periods, or hyphens. Must start with a letter or number. Must end with a letter, number, or underscore. Can contain upper or lower case letters.|
|**location**|Azure location (also referred to as region).|Must be one of the valid Azure locations.|
|**addressSpace**|Collection of address prefixes that make up the VNet, in CIDR notation.|Must be an array of valid CIDR address blocks, including public IP address ranges.|
|**subnets**|Collection of subnets that make up the VNet|See the subnet properties table below.|
|**dhcpOptions**|Object that contains a single required property named **dnsServers**.||
|**dnsServers**|Array of DNS servers used by the VNet. If no server is specified, Azure internal name resolution is used.|Must be an array of up to 10 DNS servers, by IP address.|

A subnet is a child resource of a VNet, and helps define segments of address spaces within a CIDR block, using IP address prefixes. NICs can be added to subnets and connected to VMs, providing connectivity for various workloads.

Subnets contain the following properties.

|Property|Description|Constraints|
|---|---|---|
|**name**|Subnet name|String of up to 80 characters. May contain letters, numbers, underscores, periods, or hyphens. Must start with a letter or number. Must end with a letter, number, or underscore. Can contain upper or lower case letters.|
|**location**|Azure location (also referred to as region).|Must be one of the valid Azure locations.|
|**addressPrefix**|Single address prefix that makes up the subnet, in CIDR notation|Must be a single CIDR block that is part of one of the VNet's address spaces.|
|**networkSecurityGroup**|NSG applied to the subnet|See [NSGs](resource-groups-networking.md#Network-Security-Group)|
|**routeTable**|Route table applied to the subnet|See [UDR](resource-groups-networking.md#Route-table)|
|**ipConfigurations**|Collection of IP configuration objects used by NICs connected to the subnet|See [IP configurations](../resource-groups-networking.md#IP-configurations)|

### <a name="name-resolution"></a>Name resolution

By default, your VNet uses [Azure-provided name resolution](virtual-networks-name-resolution-for-vms-and-role-instances.md#Azure-provided-name-resolution) to resolve names inside the VNet, and on the public Internet. However, if you connect your VNets to your on-premises datacenters, you need to provide [your own DNS server](virtual-networks-name-resolution-for-vms-and-role-instances.md#Name-resolution-using-your-own-DNS-server) to resolve names between your networks.

### <a name="limits"></a>Limits

Review the networking limits in the [Azure limits](../azure-subscription-service-limits.md#networking-limits) article to make sure that your design doesn't conflict with any of the limits. Some limits can be increased by opening a support ticket.

### <a name="role-based-access-control-rbac"></a>Role-Based Access Control (RBAC)

You can use [Azure RBAC](../active-directory/role-based-access-built-in-roles.md) to control the level of access different users may have over different resources in Azure. That way you can separate the work done by your team based on their needs.

As far as virtual networks are concerned, users in the **Network Contributor** role have full control over Azure Resource Manager virtual network resources. Similarly, users in the **Classic Network Contributor** role have full control over classic virtual network resources.

>[AZURE.NOTE] You can also [create your own roles](../active-directory/role-based-access-control-configure.md) to separate your administrative needs.

## <a name="design"></a>Design

Once you know the answers to the questions in the [Plan](#Plan) section, review the following before defining your VNets.

### <a name="number-of-subscriptions-and-vnets"></a>Number of subscriptions and VNets

You should consider creating multiple VNets in the following scenarios:

- **VMs that need to be placed in different Azure locations**. VNets in Azure are regional. They cannot span locations. Therefore, you need at least one VNet for each Azure location you want to host VMs in.
- **Workloads that need to be completely isolated from one another**. You can create separate VNets, even using the same IP address spaces, to isolate different workloads from one another.

Keep in mind that the limits mentioned above are per region, per subscription. That means you can use multiple subscriptions to increase the limit of resources you can maintain in Azure. You can use a site-to-site VPN connection, or an ExpressRoute circuit, to connect VNets in different subscriptions.

### <a name="subscription-and-vnet-design-patterns"></a>Subscription and VNet design patterns

The table below shows some common design patterns for using subscriptions and VNets.

|Scenario|Diagram|Pros|Cons|
|---|---|---|---|
|Single subscription, two VNets per app|![Single subscription](./media/virtual-network-vnet-plan-design-arm/figure1.png)|Only one subscription to manage access to.|Maximum number of VNets per Azure region. You need more subscriptions after that. Read the [Azure limits](../azure-subscription-service-limits.md#networking-limits) article for more details.|
|One subscription per app, two VNets per app|![One subscription per app](./media/virtual-network-vnet-plan-design-arm/figure2.png)|Only two VNets used per subscription.|Harder to manage when there are too many apps.|
|One subscription per business unit, two VNets per app.|![One subscription per business unit](./media/virtual-network-vnet-plan-design-arm/figure3.png)|Balance between number of subscriptions and VNets.|Maximum number of VNets per business unit (subscription). Read the [Azure limits](../azure-subscription-service-limits.md#networking-limits) article for more details.|
|One subscription per business unit, two VNets per group of apps.|![One subscription per business unit, VNets per group of apps](./media/virtual-network-vnet-plan-design-arm/figure4.png)|Balance between number of subscriptions and VNets.|Apps must be isolated by using subnets and NSGs.|

### <a name="number-of-subnets"></a>Number of subnets

You should consider multiple subnets in a VNet in the following scenarios:

- **Not enough private IP addresses for all NICs in a subnet**. If your subnet address space does not contain enough IP addresses for the number of NICs in the subnet, you need to create multiple subnets. Keep in mind that Azure reserves 5 private IP addresses from each subnet that cannot be used: the first and last addresses of the address space (for the subnet address, and multicast) and 3 addresses used internally (for DHCP and DNS purposes).
- **Security**. You can use subnets to separate groups of VMs from one another for workloads that have a multi-layer structure, and apply different [network security groups (NSGs)](virtual-networks-nsg.md#subnets) to those subnets.
- **Hybrid connectivity**. You can use VPN gateways and ExpressRoute circuits to [connect](../vpn-gateway/vpn-gateway-about-vpngateways.md#site-to-site-and-multi-site) your VNets to one another, and to your on-premises datacenter(s). VPN gateways and ExpressRoute circuits require a subnet of their own to be created.
- **Virtual appliances**. You can use a virtual appliance, such as a firewall, WAN accelerator, or VPN gateway, in an Azure VNet. When you do so, you need to [route traffic](virtual-networks-udr-overview.md) to those appliances and isolate them in their own subnet.

### <a name="subnet-and-nsg-design-patterns"></a>Subnet and NSG design patterns

The table below shows some common design patterns for using subnets.

|Scenario|Diagram|Pros|Cons|
|---|---|---|---|
|Single subnet, NSGs per application layer, per app|![Single subnet](./media/virtual-network-vnet-plan-design-arm/figure5.png)|Only one subnet to manage.|Multiple NSGs necessary to isolate each application.|
|One subnet per app, NSGs per application layer|![Subnet per app](./media/virtual-network-vnet-plan-design-arm/figure6.png)|Fewer NSGs to manage.|Multiple subnets to manage.|
|One subnet per application layer, NSGs per app.|![Subnet per layer](./media/virtual-network-vnet-plan-design-arm/figure7.png)|Balance between number of subnets and NSGs.|Maximum number of NSGs per subscription. Read the [Azure limits](../azure-subscription-service-limits.md#networking-limits) article for more details.|
|One subnet per application layer, per app, NSGs per subnet|![Subnet per layer per app](./media/virtual-network-vnet-plan-design-arm/figure8.png)|Possibly smaller number of NSGs.|Multiple subnets to manage.|

## <a name="sample-design"></a>Sample design

To illustrate the application of the information in this article, consider the following scenario.

You work for a company that has 2 datacenters in North America, and 2 datacenters in Europe. You maintain 6 different customer-facing applications, owned by 2 different business units, that you want to migrate to Azure as a pilot. The basic architecture for the applications is as follows:

- App1, App2, App3, and App4 are web applications hosted on Linux servers running Ubuntu. Each application connects to a separate application server that hosts RESTful services on Linux servers. The RESTful services connect to a back-end MySQL database.
- App5 and App6 are web applications hosted on Windows servers running Windows Server 2012 R2. Each application connects to a back-end SQL Server database.
- All apps are currently hosted in one of the company's datacenters in North America.
- The on-premises datacenters use the 10.0.0.0/8 address space.

You need to design a virtual network solution that meets the following requirements:

- Each business unit should not be affected by resource consumption of other business units.
- You should minimize the amount of VNets and subnets to make management easier.
- Each business unit should have a single test/development VNet used for all applications.
- Each application is hosted in 2 different Azure datacenters per continent (North America and Europe).
- Each application is completely isolated from the others.
- Each application can be accessed by customers over the Internet by using HTTP.
- Each application can be accessed by users connected to the on-premises datacenters through an encrypted tunnel.
- Connection to the on-premises datacenters should use existing VPN devices.
- The company's networking group should have full control over the VNet configuration.
- Developers in each business unit should only be able to deploy VMs to existing subnets.
- All applications will be migrated as they are to Azure (lift-and-shift).
- The databases in each location should be replicated to other Azure locations once a day.
- Each application should use 5 front end web servers, 2 application servers (when necessary), and 2 database servers.

### <a name="plan"></a>Plan

You should start your design planning by answering the questions in the [Define requirements](#Define-requirements) section, as shown below.

1. What Azure locations will you use to host VNets?

 2 locations in North America, and 2 locations in Europe. You should pick them based on the physical location of your existing on-premises datacenters. That way your connection from your physical locations to Azure will have a better latency.

2. Do you need to provide communication between these Azure locations?

 Yes. Since the databases must be replicated to all locations.

3. Do you need to provide communication between your Azure VNet(s) and your on-premises datacenter(s)?

 Yes. Since users connected to the on-premises datacenters must be able to access the applications through an encrypted tunnel.

4. How many IaaS VMs do you need for your solution?

 200 IaaS VMs. App1, App2, App3, and App4 require 5 web servers each, 2 application servers each, and 2 database servers each. That's a total of 9 IaaS VMs per application, or 36 IaaS VMs. App5 and App6 require 5 web servers and 2 database servers each. That's a total of 7 IaaS VMs per application, or 14 IaaS VMs. Therefore, you need 50 IaaS VMs for all applications in each Azure region. Since we need to use 4 regions, there will be 200 IaaS VMs. You also need to provide DNS servers in each VNet, or in your on-premises datacenters, to resolve names between your Azure IaaS VMs and your on-premises network.

5. Do you need to isolate traffic based on groups of VMs (that is, front end web servers and back end database servers)?

 Yes. Each application should be completely isolated from the others, and each application layer should also be isolated.

6. Do you need to control the flow of network traffic by using virtual appliances?

 No. Virtual appliances could be used to provide more control over network traffic, including more detailed data plane logging.

7. Do users need different sets of permissions to different Azure resources?

 Yes. The networking team needs full control over the virtual network settings, while developers should only be able to deploy their VMs to existing subnets.

### <a name="design"></a>Design

You should follow up your design by specifying subscriptions, VNets, subnets, and NSGs. We discuss NSGs here, but you should learn more about [NSGs](virtual-networks-nsg.md) before finalizing your design.

**Number of subscriptions and VNets**

The following requirements are related to subscriptions and VNets:

- Each business unit should not be affected by resource consumption of other business units.
- You should minimize the amount of VNets and subnets.
- Each business unit should have a single test/development VNet used for all applications.
- Each application is hosted in 2 different Azure datacenters per continent (North America and Europe).

Based on those requirements, you need a subscription for each business unit. That way, resource consumption from one business unit will not count towards limits for other business units. And since you want to minimize the number of VNets, you should consider following the pattern shown in **one subscription per business unit, two VNets per group of apps**, as seen below.

![Single subscription](./media/virtual-network-vnet-plan-design-arm/figure9.png)

You also need to specify the address space for each VNet. Since you need connectivity between the on-premises datacenters and the Azure regions, the address space used for Azure VNets cannot clash with the on-premises network, and the address space used by each VNet should not clash with any other existing VNet. You could use the address spaces in the table below to satisfy these requirements.

|**Subscription**|**VNet**|**Azure region**|**Address space**|
|---|---|---|---|
|BU1|ProdBU1US1|West US|172.16.0.0/16|
|BU1|ProdBU1US2|East US|172.17.0.0/16|
|BU1|ProdBU1EU1|North Europe|172.18.0.0/16|
|BU1|ProdBU1EU2|West Europe|172.19.0.0/16|
|BU1|TestDevBU1|West US|172.20.0.0/16|
|BU2|TestDevBU2|West US|172.21.0.0/16|
|BU2|ProdBU2US1|West US|172.22.0.0/16|
|BU2|ProdBU2US2|East US|172.23.0.0/16|
|BU2|ProdBU2EU1|North Europe|172.24.0.0/16|
|BU2|ProdBU2EU2|West Europe|172.25.0.0/16|

**Number of subnets and NSGs**

The following requirements are related to subnets and NSGs:

- You should minimize the amount of VNets and subnets.
- Each application is completely isolated from the others.
- Each application can be accessed by customers over the Internet by using HTTP.
- Each application can be accessed by users connected to the on-premises datacenters through an encrypted tunnel.
- Connection to the on-premises datacenters should use existing VPN devices.
- The databases in each location should be replicated to other Azure locations once a day.

Based on those requirements, you could use one subnet per application layer, and use NSGs to filter traffic per application. That way, you only have 3 subnets in each VNet (front end, application layer, and data layer) and one NSG per application per subnet. In this case, you should consider using the **one subnet per application layer, NSGs per app** design pattern. The figure below shows the use of that design pattern, representing the **ProdBU1US1** VNet.

![One subnet per layer, one NSG per application per layer](./media/virtual-network-vnet-plan-design-arm/figure11.png)

However, you also need to create an extra subnet for the VPN connectivity between the VNets and your on-premises datacenters. And you need to specify the address space for each subnet. The figure below shows a sample solution for the **ProdBU1US1** VNet. You would replicate this scenario for each VNet. Each color represents a different application.

![Sample VNet](./media/virtual-network-vnet-plan-design-arm/figure10.png)

**Access control**

The following requirements are related to access control:

- The company's networking group should have full control over the VNet configuration.
- Developers in each business unit should only be able to deploy VMs to existing subnets.

Based on those requirements, you could add users from the networking team to the built-in **Network Contributor** role in each subscription, and create a custom role for the application developers in each subscription, giving them rights to add VMs to existing subnets.

## <a name="next-steps"></a>Next steps

- [Deploy a virtual network](virtual-networks-create-vnet-arm-template-click.md) based on a scenario.
- Learn how to [load balance](../load-balancer/load-balancer-overview.md) IaaS VMs and [manage routing across multiple Azure regions](../traffic-manager/traffic-manager-overview.md).
- Learn more about [NSGs and how to plan and design](virtual-networks-nsg.md) an NSG solution.
- Learn about your [cross-premises and VNet connectivity options](../vpn-gateway/vpn-gateway-about-vpngateways.md#site-to-site-and-multi-site).
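The address-space planning above can be sanity-checked programmatically. The sketch below uses Python's standard `ipaddress` module with the VNet prefixes from the sample design table; the "5 reserved addresses per subnet" rule is the one stated earlier in the article, and the check itself is plain Python, not an Azure API call:

```python
import ipaddress
from itertools import combinations

# On-premises range and VNet prefixes from the sample design table.
on_prem = ipaddress.ip_network("10.0.0.0/8")
vnets = {
    "ProdBU1US1": "172.16.0.0/16", "ProdBU1US2": "172.17.0.0/16",
    "ProdBU1EU1": "172.18.0.0/16", "ProdBU1EU2": "172.19.0.0/16",
    "TestDevBU1": "172.20.0.0/16", "TestDevBU2": "172.21.0.0/16",
    "ProdBU2US1": "172.22.0.0/16", "ProdBU2US2": "172.23.0.0/16",
    "ProdBU2EU1": "172.24.0.0/16", "ProdBU2EU2": "172.25.0.0/16",
}
nets = {name: ipaddress.ip_network(p) for name, p in vnets.items()}

# No VNet may overlap the on-premises space or another VNet.
assert all(not n.overlaps(on_prem) for n in nets.values())
assert all(not a.overlaps(b) for a, b in combinations(nets.values(), 2))

def usable_hosts(prefix: str) -> int:
    """Azure reserves 5 addresses in every subnet (first, last, and
    3 internal ones), rather than the usual 2."""
    return ipaddress.ip_network(prefix).num_addresses - 5

print(usable_hosts("172.16.1.0/24"))  # 251
```

Running the same overlap check against a candidate prefix before assigning it to a new VNet is a cheap way to keep the "no clashing address spaces" requirement honest as the design grows.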
87.860902
535
0.796243
nld_Latn
0.999085
a14055f0ac623fb38e657aa0823366dc9d370b10
25
md
Markdown
README.md
fraserwalker23/fraserwalker23.github.io
b89e5bcb6fd06a721177322a5204eb40356d0452
[ "MIT" ]
null
null
null
README.md
fraserwalker23/fraserwalker23.github.io
b89e5bcb6fd06a721177322a5204eb40356d0452
[ "MIT" ]
1
2021-05-10T22:00:27.000Z
2021-05-10T22:00:27.000Z
README.md
fraserwalker23/fraserwalker23.github.io
b89e5bcb6fd06a721177322a5204eb40356d0452
[ "MIT" ]
null
null
null
Personal / Resume website
25
25
0.84
eng_Latn
0.517378
a14072dc15c947d20ec91539ac99065e577a0f91
1,482
md
Markdown
README.md
yeraidm/bridge2vec
0ce607c0d08929bcd7f2033c39baabbd787ce26f
[ "BSD-2-Clause" ]
5
2020-08-09T09:14:02.000Z
2022-03-22T09:50:51.000Z
README.md
yeraidm/bridge2vec
0ce607c0d08929bcd7f2033c39baabbd787ce26f
[ "BSD-2-Clause" ]
null
null
null
README.md
yeraidm/bridge2vec
0ce607c0d08929bcd7f2033c39baabbd787ce26f
[ "BSD-2-Clause" ]
null
null
null
## Introduction

This is the code accompanying the paper [Towards robust word embeddings for noisy texts](https://arxiv.org/abs/1911.10876). It is an adaptation of the [fastText](https://github.com/facebookresearch/fastText) tool by Facebook (although an older version).

## Requirements

Compilation is carried out using a Makefile, so you will need to have a working **make** and compilers with good C++11 support, such as g++-4.7.2 or clang-3.3, or newer. You will also need the [utf8proc](https://juliastrings.github.io/utf8proc/) library.

## Building bridge2vec

```
$ git clone https://github.com/yeraidm/bridge2vec.git
$ cd bridge2vec
$ make
```

### Example usage

```
$ ./fasttext skipgram -input data.txt -output model
```

where `data.txt` is a training file containing `UTF-8` encoded text. At the end of optimization the program will save two files: `model.bin` and `model.vec`. `model.vec` is a text file containing the word vectors, one per line. `model.bin` is a binary file containing the parameters of the model along with the dictionary and all hyper parameters. For more information, see the original fastText README included.

## Reference

Please cite us if you use this code in your paper:

```
@article{doval2019robust,
  title={Towards robust word embeddings for noisy texts},
  author={Yerai Doval and Jesús Vilares and Carlos Gómez-Rodríguez},
  journal={arXiv preprint arXiv:1911.10876},
  year={2019}
}
```

## License

bridge2vec is BSD-licensed.
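Since `model.vec` is plain text with one word vector per line, it can be inspected without the fastText binary. A minimal parsing sketch follows; it assumes the standard fastText text format (an optional `count dim` header line followed by `word v1 … vn` lines), which is an assumption about this fork's output rather than something stated in the README:

```python
def load_vec(lines):
    """Parse fastText-style .vec text into {word: [floats]}.

    Skips the optional header line holding vocabulary size and
    vector dimension.
    """
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        if len(parts) == 2 and all(p.isdigit() for p in parts):
            continue  # header: "<vocab_size> <dimension>"
        word, values = parts[0], [float(v) for v in parts[1:]]
        vectors[word] = values
    return vectors

# Tiny in-memory example standing in for open("model.vec").
sample = ["2 3", "cat 0.1 0.2 0.3", "dog 0.4 0.5 0.6"]
vecs = load_vec(sample)
print(sorted(vecs))   # ['cat', 'dog']
print(vecs["cat"])    # [0.1, 0.2, 0.3]
```

This is handy for spot-checking that training produced vectors of the expected dimension before loading the model elsewhere.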
31.531915
253
0.746964
eng_Latn
0.979768
a1417561b54ff79e01b869f2c3163bc126665461
1,373
md
Markdown
_posts/posts/2022-05-26-10-46-newsquawk-us-market-open-containedchoppy-equity-trade-while-gbp-leads-and-gilts-lag-ahead.md
zerohedgePodcast/-zerohedgepodcast2.github.io-
bb69365d53b23003638ab223ad6f9e2b98c4b75d
[ "BSD-Source-Code" ]
null
null
null
_posts/posts/2022-05-26-10-46-newsquawk-us-market-open-containedchoppy-equity-trade-while-gbp-leads-and-gilts-lag-ahead.md
zerohedgePodcast/-zerohedgepodcast2.github.io-
bb69365d53b23003638ab223ad6f9e2b98c4b75d
[ "BSD-Source-Code" ]
null
null
null
_posts/posts/2022-05-26-10-46-newsquawk-us-market-open-containedchoppy-equity-trade-while-gbp-leads-and-gilts-lag-ahead.md
zerohedgePodcast/-zerohedgepodcast2.github.io-
bb69365d53b23003638ab223ad6f9e2b98c4b75d
[ "BSD-Source-Code" ]
null
null
null
---
layout: post
title: "Newsquawk US Market Open: Contained/choppy equity trade while GBP leads and Gilts lag ahead of Sunak"
audio: newsquawk-us-market-open-containedchoppy-equity-trade-while-gbp-leads-and-gilts-lag-ahead-0
category: markets
desc: "European bourses are firmer across the board, Euro Stoxx 50 +0.7%, but remain within initial ranges in what has been a relatively contained session"
duration: 00:11:42
length: 702
datetime: Thu, 26 May 2022 10:46:00 +0000
tags: podcast
guid: newsquawk-us-market-open-containedchoppy-equity-trade-while-gbp-leads-and-gilts-lag-ahead-0
order: 0
---

European bourses are firmer across the board, Euro Stoxx 50 +0.7%, but remain within initial ranges in what has been a relatively contained session

Link: [https://www.zerohedge.com/markets/newsquawk-us-market-open-containedchoppy-equity-trade-while-gbp-leads-and-gilts-lag-ahead](https://www.zerohedge.com/markets/newsquawk-us-market-open-containedchoppy-equity-trade-while-gbp-leads-and-gilts-lag-ahead)

About: The Zerohedge Podcast is a non-commercial, automated program, designed to give people a way to get news from Zerohedge in an audio format. I am actively working on tweaking and improving the setup to create a better listening experience (June 2022). Suggestions are welcome: [zerohedgePodcast@outlook.com](mailto:zerohedgePodcast@outlook.com)
76.277778
351
0.789512
eng_Latn
0.963459
a141fa432c31e56d2c8924233d3ee29b13ce200f
1,427
md
Markdown
tools/babel-preset/README.md
ryhinchey/monorepo
1c83f6e766a5ed042ce1bcf2ab2cb68631ad83fd
[ "MIT" ]
null
null
null
tools/babel-preset/README.md
ryhinchey/monorepo
1c83f6e766a5ed042ce1bcf2ab2cb68631ad83fd
[ "MIT" ]
null
null
null
tools/babel-preset/README.md
ryhinchey/monorepo
1c83f6e766a5ed042ce1bcf2ab2cb68631ad83fd
[ "MIT" ]
null
null
null
# @verdaccio/babel-preset

[![@verdaccio/babel-preset (latest)](https://img.shields.io/npm/v/@verdaccio/babel-preset/latest.svg)](https://www.npmjs.com/package/@verdaccio/babel-preset) [![Node version (latest)](https://img.shields.io/node/v/@verdaccio/babel-preset/latest.svg)](https://www.npmjs.com/package/@verdaccio/babel-preset) [![Dependencies](https://img.shields.io/david/verdaccio/monorepo?path=tools%2Fbabel-preset)](https://david-dm.org/verdaccio/monorepo/master?path=tools%2Fbabel-preset) [![DevDependencies](https://img.shields.io/david/dev/verdaccio/monorepo?path=tools%2Fbabel-preset)](https://david-dm.org/verdaccio/monorepo/master?path=tools%2Fbabel-preset&type=dev) [![MIT](https://img.shields.io/github/license/verdaccio/monorepo.svg)](./LICENSE)

Configurable Babel preset for Verdaccio projects

## Usage

To use the preset, add it to your `.babelrc` file:

```json
{
  "presets": ["@verdaccio"]
}
```

The possible `BABEL_ENV` options are: `ui`, `test`, `registry` and `docker`.

Note: `docker` has a fixed Node target of `10`.

### Options

To use a different Node target version (the default is `6.10`):

```json
{
  "presets": [["@verdaccio", {"node": "9"}]]
}
```

> Node 8.15 is the minimum version supported and the default setup.

Enable debug:

```json
{
  "presets": [["@verdaccio", {"debug": true}]]
}
```

## License

This is an open source project under the [MIT license](./LICENSE)
29.729167
182
0.713385
eng_Latn
0.179512
a1427c57c33b8753cd880efec2537780f5f98391
434
md
Markdown
README.md
condomanagement/condo-api
23a9360bcb2b25c174cc63a07556503e096f8ff7
[ "MIT" ]
null
null
null
README.md
condomanagement/condo-api
23a9360bcb2b25c174cc63a07556503e096f8ff7
[ "MIT" ]
56
2020-08-06T03:02:14.000Z
2021-09-22T01:54:30.000Z
README.md
condomanagement/condo-api
23a9360bcb2b25c174cc63a07556503e096f8ff7
[ "MIT" ]
null
null
null
# Condo API

Server API for condo resource management.

## Requirements

Ruby 2.7.1
Postgres 12.4

## Setup / installation

1. Clone repo
2. `bundle install`
3. `rails db:setup`
4. `rails db:migrate`
5. Copy `.env.sample` to `.env` and enter information
6. `rails server -e development`

## Linting

Run `rubocop` to lint.

## Testing

Run `rails test` to test.

Please consider [Sponsoring me](https://github.com/sponsors/djensenius)
15.5
71
0.711982
eng_Latn
0.536572
a142f224c83d5652f60263cfa6b5abdee9123d4e
549
md
Markdown
_site/shorts/2021-08-19-1255-HTML-End-Tag-Labels.md
planetoftheweb/raybo.org
fbc2226bd8f47ebb9760f69186fa52284e0112de
[ "MIT" ]
81
2015-01-14T00:27:24.000Z
2022-02-19T11:03:49.000Z
_site/shorts/2021-08-19-1255-HTML-End-Tag-Labels.md
planetoftheweb/raybo.org
fbc2226bd8f47ebb9760f69186fa52284e0112de
[ "MIT" ]
29
2017-12-26T17:45:33.000Z
2019-08-26T22:15:55.000Z
_site/shorts/2021-08-19-1255-HTML-End-Tag-Labels.md
planetoftheweb/raybo.org
fbc2226bd8f47ebb9760f69186fa52284e0112de
[ "MIT" ]
48
2015-02-06T21:15:44.000Z
2022-03-25T15:31:34.000Z
---
layout: post.njk
title: "HTML End Tag Labels"
summary: "If you've got a lot of DIVs in your HTML (and who doesn't), it's hard to tell which is which. This VSCode extension will show you the ID and/or Class of each div. Simple, but it can really help you identify when you're missing a closing DIV."
thumb: "https://anteprimorac.gallerycdn.vsassets.io/extensions/anteprimorac/html-end-tag-labels/0.7.0/1624720640642/Microsoft.VisualStudio.Services.Icons.Default"
links:
  - website: https://go.raybo.org/5LLI
category: shorts
tags:
  - external
---
42.230769
252
0.761384
eng_Latn
0.883788
a142f92b52dd6e6b3d950e2823eec5be559f1e9d
722
md
Markdown
2016/CVE-2016-5661.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
2,340
2022-02-10T21:04:40.000Z
2022-03-31T14:42:58.000Z
2016/CVE-2016-5661.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
19
2022-02-11T16:06:53.000Z
2022-03-11T10:44:27.000Z
2016/CVE-2016-5661.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
280
2022-02-10T19:58:58.000Z
2022-03-26T11:13:05.000Z
### [CVE-2016-5661](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-5661)

![](https://img.shields.io/static/v1?label=Product&message=n%2Fa&color=blue)
![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue)
![](https://img.shields.io/static/v1?label=Vulnerability&message=n%2Fa&color=brighgreen)

### Description

Accela Civic Platform Citizen Access portal relies on the client to restrict file types for uploads, which allows remote authenticated users to execute arbitrary code via modified _EventArgument and filename parameters.

### POC

#### Reference
- http://www.kb.cert.org/vuls/id/665280
- http://www.kb.cert.org/vuls/id/JLAD-ABMPVA

#### Github
No PoCs found on GitHub currently.
38
219
0.754848
eng_Latn
0.382863
a143b1fe39aa7e59cfeab12554980f546d8b25ff
8,780
md
Markdown
docs/framework/wpf/advanced/use-automatic-layout-overview.md
turibbio/docs.it-it
2212390575baa937d6ecea44d8a02e045bd9427c
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wpf/advanced/use-automatic-layout-overview.md
turibbio/docs.it-it
2212390575baa937d6ecea44d8a02e045bd9427c
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wpf/advanced/use-automatic-layout-overview.md
turibbio/docs.it-it
2212390575baa937d6ecea44d8a02e045bd9427c
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Use automatic layout overview
ms.date: 03/30/2017
helpviewer_keywords:
- layout [WPF], automatic
- automatic layout [WPF]
ms.assetid: 6fed9264-18bb-4d05-8867-1fe356c6f687
ms.openlocfilehash: 2fe473da3eeabef3852e3003e61b3b9604332855
ms.sourcegitcommit: 9c3a4f2d3babca8919a1e490a159c1500ba7a844
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 10/12/2019
ms.locfileid: "72291266"
---
# <a name="use-automatic-layout-overview"></a>Use automatic layout overview

This topic introduces guidelines for developers on writing [!INCLUDE[TLA#tla_winclient](../../../../includes/tlasharptla-winclient-md.md)] applications with localizable user interfaces (UI). In the past, localizing a UI was a time-consuming process: each language the UI was adapted to required pixel-level adjustment. Today, with the right design and coding standards, UIs can be constructed so that localizers have less resizing and repositioning to do. The approach to writing applications that can be resized and repositioned more easily is called automatic layout, and it can be achieved by using [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] application design.

<a name="advantages_of_autolayout"></a>
## <a name="advantages-of-using-automatic-layout"></a>Advantages of using automatic layout

Because the [!INCLUDE[TLA2#tla_winclient](../../../../includes/tla2sharptla-winclient-md.md)] presentation system is powerful and flexible, it provides the ability to lay out the elements of an application so that they can adjust to the requirements of different languages. The following list points out some of the advantages of automatic layout.

- The UI displays well in any language.
- Reduces the need to reposition and resize controls after the text is translated.
- Reduces the need to resize windows.
- The UI layout renders correctly in any language.
- Localization can be reduced to simple tasks that go little beyond translating strings.

<a name="autolayout_controls"></a>
## <a name="automatic-layout-and-controls"></a>Automatic layout and controls

Automatic layout makes it possible for a control in an application to adjust its size automatically. For example, a control can change size to accommodate the length of a string. This capability lets localizers translate the string without having to resize the control to fit the translated text. The following example creates a button with English content.

[!code-xaml[LocalizationBtn_snip#1](~/samples/snippets/csharp/VS_Snippets_Wpf/LocalizationBtn_snip/CS/Pane1.xaml#1)]

In the example, all you have to do to make a Spanish button is change the text. For example,

[!code-xaml[LocalizationBtn#1](~/samples/snippets/csharp/VS_Snippets_Wpf/LocalizationBtn/CS/Pane1.xaml#1)]

The following graphic shows the output of the code samples:

![The same button with text in different languages](./media/use-automatic-layout-overview/auto-resizable-button.png)

<a name="autolayout_coding"></a>
## <a name="automatic-layout-and-coding-standards"></a>Automatic layout and coding standards

Using the automatic layout approach requires a set of coding and design standards and rules to produce a fully localizable UI. The following guidelines assist with automatic layout coding.

**Do not use absolute positions**

- Do not use <xref:System.Windows.Controls.Canvas>, because it positions elements absolutely.
- Use <xref:System.Windows.Controls.DockPanel>, <xref:System.Windows.Controls.StackPanel>, and <xref:System.Windows.Controls.Grid> to position controls.

For a discussion of the various types of panels, see [Panels Overview](../controls/panels-overview.md).

**Do not set a fixed size on a window**

- Use <xref:System.Windows.Window.SizeToContent%2A?displayProperty=nameWithType>. For example:

[!code-xaml[LocalizationGrid#2](~/samples/snippets/csharp/VS_Snippets_Wpf/LocalizationGrid/CS/Pane1.xaml#2)]

**Add a <xref:System.Windows.FrameworkElement.FlowDirection%2A>**

- Add a <xref:System.Windows.FrameworkElement.FlowDirection%2A> to the root element of the application.

WPF provides a convenient way to support horizontal, bidirectional, and vertical layouts. In the presentation framework, the <xref:System.Windows.FrameworkElement.FlowDirection%2A> property can be used to define layout. The flow-direction patterns are:

- <xref:System.Windows.FlowDirection.LeftToRight?displayProperty=nameWithType> (LrTb): horizontal layout for Latin, East Asian, and so forth.
- <xref:System.Windows.FlowDirection.RightToLeft?displayProperty=nameWithType> (RlTb): bidirectional for Arabic, Hebrew, and so forth.

**Use composite fonts instead of physical fonts**

- With composite fonts, the <xref:System.Windows.Controls.Control.FontFamily%2A> property does not need to be localized.
- Developers can use one of the following fonts or create their own:
  - Global User Interface
  - Global San Serif
  - Global Serif

**Add xml:lang**

- Add the `xml:lang` attribute on the root element of the UI, such as `xml:lang="en-US"` for an English application.
- Because composite fonts use `xml:lang` to determine which font to use, set this property to support multilingual scenarios.

<a name="autolay_grids"></a>
## <a name="automatic-layout-and-grids"></a>Automatic layout and grids

The <xref:System.Windows.Controls.Grid> element is useful for automatic layout because it enables developers to position elements. A <xref:System.Windows.Controls.Grid> control is able to distribute the available space among its child elements, using an arrangement of columns and rows. UI elements can span multiple cells, and it is possible to nest grids within grids. Grids are useful because they enable you to create and position complex UI. The following example shows how to use a grid to position some buttons and text. Note that the height and width of the cells are set to <xref:System.Windows.GridUnitType.Auto>; therefore, the cell that contains the button with an image adjusts to fit the image.

[!code-xaml[LocalizationGrid#1](~/samples/snippets/csharp/VS_Snippets_Wpf/LocalizationGrid/CS/Pane1.xaml#1)]

The following graphic shows the grid produced by the previous code.

![Example grid](./media/glob-grid.png "glob_grid")

<a name="autolay_grids_issharedsizescope"></a>
## <a name="automatic-layout-and-grids-using-the-issharedsizescope-property"></a>Automatic layout and grids using the IsSharedSizeScope property

A <xref:System.Windows.Controls.Grid> element is useful in localizable applications to create controls that adjust to fit content. However, there are times when you want controls to maintain a specific size regardless of content. For example, for "OK", "Cancel", and "Browse" buttons, you probably do not want the buttons to resize to fit their content. In that case, the <xref:System.Windows.Controls.Grid.IsSharedSizeScope%2A?displayProperty=nameWithType> attached property is useful for sharing the same sizing among multiple grid elements. The following example demonstrates how to share column and row sizing data among multiple <xref:System.Windows.Controls.Grid> elements.

[!code-xaml[gridIssharedsizescopeProp#2](~/samples/snippets/csharp/VS_Snippets_Wpf/gridIssharedsizescopeProp/CSharp/Window1.xaml#2)]

> [!NOTE]
> For the complete code sample, see [Share Sizing Properties Between Grids](../controls/how-to-share-sizing-properties-between-grids.md).

## <a name="see-also"></a>See also

- [Globalization for WPF](globalization-for-wpf.md)
- [Use Automatic Layout to Create a Button](how-to-use-automatic-layout-to-create-a-button.md)
- [Use a Grid for Automatic Layout](how-to-use-a-grid-for-automatic-layout.md)
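As a rough illustration of the guidelines above, the fragment below combines `SizeToContent`, a `Grid` with `Auto` sizing, `FlowDirection`, and `xml:lang` in a single window. It is a minimal sketch, not one of the article's official samples; the labels and element arrangement are illustrative.

```xaml
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xml:lang="en-US"
        FlowDirection="LeftToRight"
        SizeToContent="WidthAndHeight"
        Title="Automatic Layout Sample">
  <!-- Auto-sized rows and columns grow with the translated text,
       so localizers rarely need to reposition anything. -->
  <Grid>
    <Grid.ColumnDefinitions>
      <ColumnDefinition Width="Auto"/>
      <ColumnDefinition Width="Auto"/>
    </Grid.ColumnDefinitions>
    <TextBlock Grid.Column="0" Margin="5" Text="Name:"/>
    <TextBox Grid.Column="1" Margin="5" MinWidth="100"/>
  </Grid>
</Window>
```

Swapping the window to a right-to-left language would only require changing `xml:lang` and `FlowDirection`; the auto-sized grid handles the rest.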
70.24
921
0.80672
ita_Latn
0.996412
a143b7fb09da5fadf95737e4610fb95149f8df13
1,358
md
Markdown
_posts/2020/2020-11-18-tian.md
relatos/laughyouth.github.io
4814bde9b7994537d15b7d2c74befb6c4b2a3476
[ "MIT" ]
1
2021-12-07T07:39:27.000Z
2021-12-07T07:39:27.000Z
_posts/2020/2020-11-18-tian.md
relatos/buhuixiao.github.io
a391060b8463c69d357141b93139972342b57f49
[ "MIT" ]
null
null
null
_posts/2020/2020-11-18-tian.md
relatos/buhuixiao.github.io
a391060b8463c69d357141b93139972342b57f49
[ "MIT" ]
null
null
null
---
layout: post
title: Freaking awesome!!!!!!!!!!
subtitle: This is about the honor of Earth's programmers
description: This is about the honor of Earth's programmers
image: http://favorites.ren/assets/images/2020/cartoon/tian/tian00.jpg
optimized_image: http://favorites.ren/assets/images/2020/cartoon/tian/tian00.jpg
category: life
tags:
- aliens
- programmers
- technology
author: laughyouth
---

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian01.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian02.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian03.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian04.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian05.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian06.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian07.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian08.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian09.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian10.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian11.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian12.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian13.jpg)

![](http://favorites.ren/assets/images/2020/cartoon/tian/tian14.jpg)

> Author: 不会笑青年 | Cartoonist: Ys
> **This post was originally published on the WeChat official account 不会笑青年. For reprint authorization, contact the author on WeChat (laughyouth369); after authorization, please wait 48 hours after the original publication before reposting.**
37.722222
80
0.746686
yue_Hant
0.208747
a143bf9a8641de68081d4248012b1820e472fd69
1,621
md
Markdown
windows-driver-docs-pr/display/dxgkarg-system-display-enable-flags.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/display/dxgkarg-system-display-enable-flags.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/display/dxgkarg-system-display-enable-flags.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: DXGKARG\_SYSTEM\_DISPLAY\_ENABLE\_FLAGS structure
description: Reserved for system use. Do not use in your driver.
ms.assetid: f23d6692-4c9d-48eb-8d7f-ef70334494b1
keywords:
- DXGKARG_SYSTEM_DISPLAY_ENABLE_FLAGS structure Display Devices
- PDXGKARG_SYSTEM_DISPLAY_ENABLE_FLAGS structure pointer Display Devices
topic_type:
- apiref
api_name:
- DXGKARG_SYSTEM_DISPLAY_ENABLE_FLAGS
api_location:
- Dispmprt.h
api_type:
- HeaderDef
ms.date: 01/05/2018
ms.localizationpriority: medium
ms.openlocfilehash: 3fcae06de43c0499f557e3793f0066d41a63328d
ms.sourcegitcommit: 0cc5051945559a242d941a6f2799d161d8eba2a7
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/23/2019
ms.locfileid: "63384500"
---
# <a name="dxgkargsystemdisplayenableflags-structure"></a>DXGKARG\_SYSTEM\_DISPLAY\_ENABLE\_FLAGS structure

Reserved for system use. Do not use in your driver.

<a name="syntax"></a>Syntax
------

```ManagedCPlusPlus
typedef struct _DXGKARG_SYSTEM_DISPLAY_ENABLE_FLAGS {
  union {
    struct {
      UINT Reserved :32;
    };
    UINT Value;
  };
} DXGKARG_SYSTEM_DISPLAY_ENABLE_FLAGS, *PDXGKARG_SYSTEM_DISPLAY_ENABLE_FLAGS;
```

<a name="members"></a>Members
-------

**Reserved**
Reserved for system use.

**Value**
Reserved for system use.

<a name="requirements"></a>Requirements
------------

<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<tbody>
<tr class="odd">
<td align="left"><p>Minimum supported client</p></td>
<td align="left"><p>Windows 8</p></td>
</tr>
<tr class="even">
<td align="left"><p>Minimum supported server</p></td>
<td align="left"><p>Windows Server 2012</p></td>
</tr>
<tr class="odd">
<td align="left"><p>Header</p></td>
<td align="left">Dispmprt.h</td>
</tr>
</tbody>
</table>
19.53012
91
0.727946
yue_Hant
0.795052
a14421e686b670b5fe90e007a93ee13b48836212
6,472
md
Markdown
README.md
megvii-research/OMNet
3585d4d63da3606c6433ec34714df74ef7ad74d4
[ "MIT" ]
10
2021-09-29T09:19:43.000Z
2022-03-16T04:40:35.000Z
README.md
megvii-research/OMNet
3585d4d63da3606c6433ec34714df74ef7ad74d4
[ "MIT" ]
11
2021-10-08T01:53:18.000Z
2022-02-12T08:48:18.000Z
README.md
megvii-research/OMNet
3585d4d63da3606c6433ec34714df74ef7ad74d4
[ "MIT" ]
4
2021-09-29T09:19:45.000Z
2022-01-12T07:51:34.000Z
# [ICCV 2021] OMNet: Learning Overlapping Mask for Partial-to-Partial Point Cloud Registration

This is the official implementation (MegEngine implementation) of our ICCV 2021 paper [OMNet](https://openaccess.thecvf.com/content/ICCV2021/papers/Xu_OMNet_Learning_Overlapping_Mask_for_Partial-to-Partial_Point_Cloud_Registration_ICCV_2021_paper.pdf). For our PyTorch implementation, please refer to [this repo](https://github.com/hxwork/OMNet_Pytorch).

Our presentation video: [[Youtube](https://www.youtube.com/watch?v=u2lTKsom8oU)][[Bilibili](https://www.bilibili.com/video/BV1Ef4y1J7XP/)].

## Our Poster

![image](./images/OMNet_poster.png)

## Dependencies

* MegEngine==1.6.0
* For other requirements, please refer to `requirements.txt`.
* Add `frequency_weighted_cross_entropy` to the MegEngine source code. MegEngine==1.6.0 does not support `frequency_weighted_cross_entropy`, so we wrote this function based on `cross_entropy` in `loss.py` of the original MegEngine source code, whose location should be like this:

  (1) conda environment: `[your_conda_env_path]/lib/[python3.x]/site-packages/megengine/functional/loss.py`.

  (2) original Python environment: `/usr/local/lib/[python3.x]/dist-packages/megengine/functional/loss.py`.

  Use your own path to replace the content in `[]`.

```
from .math import sum

@_reduce_output
def frequency_weighted_cross_entropy(
    pred: Tensor,
    label: Tensor,
    weight: Tensor = None,
    axis: int = 1,
    with_logits: bool = True,
    label_smooth: float = 0,
    reduction: str = "mean",
) -> Tensor:
    n0 = pred.ndim
    n1 = label.ndim
    assert n0 == n1 + 1, ("target ndim must be one less than input ndim; input_ndim={} "
                          "target_ndim={}".format(n0, n1))

    if weight is not None:
        weight = weight / sum(weight)
        class_weight = weight[label.flatten().astype(np.int32)].reshape(label.shape)

    ls = label_smooth

    if with_logits:
        logZ = logsumexp(pred, axis)
        primary_term = indexing_one_hot(pred, label, axis)
    else:
        logZ = 0
        primary_term = log(indexing_one_hot(pred, label, axis))
    if ls is None or type(ls) in (int, float) and ls == 0:
        if weight is None:
            return logZ - primary_term
        else:
            return sum((logZ - primary_term) * class_weight, axis=1, keepdims=True) / sum(class_weight, axis=1, keepdims=True)
    if not with_logits:
        pred = log(pred)
    if weight is None:
        return logZ - ls * pred.mean(axis) - (1 - ls) * primary_term
    else:
        return sum((logZ - ls * pred.mean(axis) - (1 - ls) * primary_term) * class_weight, axis=1, keepdims=True) / sum(class_weight, axis=1, keepdims=True)
```

## Data Preparation

### OS data

We refer to the original data from PointNet as OS data, where point clouds are only sampled once from the corresponding CAD models. We offer two ways to use OS data: (1) you can download this data from its original link [original_OS_data.zip](http://modelnet.cs.princeton.edu/), or (2) you can download the data that has been preprocessed by us from the link [our_OS_data.zip](https://drive.google.com/file/d/1rXnbXwD72tkeu8x6wboMP0X7iL9LiBPq/view?usp=sharing).

### TS data

Since OS data incurs an over-fitting issue, we propose our TS data, where point clouds are randomly sampled twice from CAD models. You need to download our preprocessed ModelNet40 dataset first, where 8 axisymmetrical categories are removed and all CAD models have 40 randomly sampled point clouds. The download link is [TS_data.zip](https://drive.google.com/file/d/1DPBBI3Ulvp2Mx7SAZaBEyvADJzBvErFF/view?usp=sharing). All 40 point clouds of a CAD model are stacked to form a (40, 2048, 3) numpy array; you can easily obtain this data by using the following code:

```
import numpy as np
points = np.load("path_of_npy_file")
print(points.shape, type(points))  # (40, 2048, 3), <class 'numpy.ndarray'>
```

Then, you need to put the data into `./dataset/data`, and the contents of the directories are as follows:

```
./dataset/data/
├── modelnet40_half1_rm_rotate.txt
├── modelnet40_half2_rm_rotate.txt
├── modelnet_os
│   ├── modelnet_os_test.pickle
│   ├── modelnet_os_train.pickle
│   ├── modelnet_os_val.pickle
│   ├── test [1146 entries exceeds filelimit, not opening dir]
│   ├── train [4194 entries exceeds filelimit, not opening dir]
│   └── val [1002 entries exceeds filelimit, not opening dir]
└── modelnet_ts
    ├── modelnet_ts_test.pickle
    ├── modelnet_ts_train.pickle
    ├── modelnet_ts_val.pickle
    ├── shape_names.txt
    ├── test [1146 entries exceeds filelimit, not opening dir]
    ├── train [4196 entries exceeds filelimit, not opening dir]
    └── val [1002 entries exceeds filelimit, not opening dir]
```

## Training and Evaluation

### Begin training

For the ModelNet40 dataset, you can just run:

```
python3 train.py --model_dir=./experiments/experiment_omnet/
```

For other datasets, you need to add your own dataset class in `./dataset/data_loader.py`. Training with a lower batch size, such as 16, may obtain worse performance than training with a larger batch size, e.g., 64.

### Begin testing

You need to download the pretrained checkpoint and run:

```
python3 evaluate.py --model_dir=./experiments/experiment_omnet --restore_file=./experiments/experiment_omnet/val_model_best.pth
```

This model weight is for TS data with Gaussian noise. Note that the performance is a little bit worse than the results reported in our paper (PyTorch implementation). The MegEngine checkpoint for the ModelNet40 dataset can be downloaded via [Google Drive](https://drive.google.com/file/d/1xkWQeMabQhO4zqg6X3aj_VQCMHgeBUsD/view?usp=sharing) or [Github Release](https://github.com/megvii-research/OMNet/releases/download/V1.0.0/val_model_best.pth).

## Citation

```
@InProceedings{Xu_2021_ICCV,
    author={Xu, Hao and Liu, Shuaicheng and Wang, Guangfu and Liu, Guanghui and Zeng, Bing},
    title={OMNet: Learning Overlapping Mask for Partial-to-Partial Point Cloud Registration},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month={October},
    year={2021},
    pages={3132-3141}
}
```

## Acknowledgments

In this project we use (parts of) the official implementations of the following works:

* [RPMNet](https://github.com/yewzijian/RPMNet) (ModelNet40 preprocessing and evaluation)
* [PRNet](https://github.com/WangYueFt/prnet) (ModelNet40 preprocessing)

We thank the respective authors for open sourcing their methods.
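The weighting scheme in `frequency_weighted_cross_entropy` can be sanity-checked without MegEngine. The sketch below is a plain-Python re-implementation of the unsmoothed, with-logits branch for a flat batch of samples; the function and variable names are ours, not part of the repo, and it simplifies the repo's per-axis reduction to a per-sample one.

```python
import math

def weighted_ce(logits, labels, class_weight):
    """Frequency-weighted cross-entropy (plain-Python sanity check).

    logits: list of [C] rows of raw scores; labels: list of int class ids;
    class_weight: list of C non-negative weights (e.g. inverse class frequency).
    """
    total = sum(class_weight)
    w = [cw / total for cw in class_weight]              # normalize, as in the repo code
    num, den = 0.0, 0.0
    for row, y in zip(logits, labels):
        log_z = math.log(sum(math.exp(v) for v in row))  # log-sum-exp (unstabilized, for brevity)
        per_sample = log_z - row[y]                      # standard CE for this sample
        num += per_sample * w[y]                         # weight by the sample's class
        den += w[y]
    return num / den                                     # weighted mean

logits = [[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
labels = [0, 1, 0]
uniform = weighted_ce(logits, labels, [1.0, 1.0])   # reduces to the plain mean CE
skewed = weighted_ce(logits, labels, [10.0, 1.0])   # upweights the class-0 samples
```

With uniform weights the result matches ordinary mean cross-entropy; skewing the weights shifts the loss toward the upweighted class, which is the point of the frequency weighting.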
40.962025
556
0.730532
eng_Latn
0.890149
a14428567edb312bcef08a57ad603c48c79f9872
21,100
md
Markdown
Cloud Computing/Azure Functions/Session 2 - Hands On/Azure Functions HOL (JavaScript).md
jorge2412/DataLake2
d950e2faf2103f20a4887f88f4c0f020f7964df4
[ "MIT" ]
1
2019-02-28T12:34:06.000Z
2019-02-28T12:34:06.000Z
Cloud Computing/Azure Functions/Session 2 - Hands On/Azure Functions HOL (JavaScript).md
jorge2412/DataLake2
d950e2faf2103f20a4887f88f4c0f020f7964df4
[ "MIT" ]
null
null
null
Cloud Computing/Azure Functions/Session 2 - Hands On/Azure Functions HOL (JavaScript).md
jorge2412/DataLake2
d950e2faf2103f20a4887f88f4c0f020f7964df4
[ "MIT" ]
null
null
null
<a name="HOLTitle"></a>
# Azure Functions #

---

<a name="Overview"></a>
## Overview ##

Functions have been the basic building blocks of software since the first lines of code were written and the need for code organization and reuse became a necessity. Azure Functions expand on these concepts by allowing developers to create "serverless", event-driven functions that run in the cloud and can be shared across a wide variety of services and systems, uniformly managed, and easily scaled based on demand. In addition, Azure Functions can be written in a variety of languages, including C#, JavaScript, Python, Bash, and PowerShell, and they're perfect for building apps and nanoservices that employ a compute-on-demand model.

In this lab, you will create an Azure Function that monitors a blob container in Azure Storage for new images, and then performs automated analysis of the images using the Microsoft Cognitive Services [Computer Vision API](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api). Specifically, the Azure Function will analyze each image that is uploaded to the container for adult or racy content and create a copy of the image in another container. Images that contain adult or racy content will be copied to one container, and images that do not contain adult or racy content will be copied to another. In addition, the scores returned by the Computer Vision API will be stored in blob metadata.

<a name="Objectives"></a>
### Objectives ###

In this hands-on lab, you will learn how to:

- Create an Azure Function App
- Write an Azure Function that uses a blob trigger
- Add application settings to an Azure Function App
- Use Microsoft Cognitive Services to analyze images and store the results in blob metadata

<a name="Prerequisites"></a>
### Prerequisites ###

The following are required to complete this hands-on lab:

- An active Microsoft Azure subscription. If you don't have one, [sign up for a free trial](http://aka.ms/WATK-FreeTrial).
- [Microsoft Azure Storage Explorer](http://storageexplorer.com) (optional)

---

<a name="Exercises"></a>
## Exercises ##

This hands-on lab includes the following exercises:

- [Exercise 1: Create an Azure Function App](#Exercise1)
- [Exercise 2: Add an Azure Function](#Exercise2)
- [Exercise 3: Add a subscription key to application settings](#Exercise3)
- [Exercise 4: Test the Azure Function](#Exercise4)
- [Exercise 5: View blob metadata (optional)](#Exercise5)

Estimated time to complete this lab: **45** minutes.

<a name="Exercise1"></a>
## Exercise 1: Create an Azure Function App ##

The first step in writing an Azure Function is to create an Azure Function App. In this exercise, you will create an Azure Function App using the Azure Portal. Then you will add three blob containers to the storage account that is created for the Function App: one to store uploaded images, a second to store images that do not contain adult or racy content, and a third to contain images that *do* contain adult or racy content.

1. Open the [Azure Portal](https://portal.azure.com) in your browser. If asked to log in, do so using your Microsoft account.

1. Click **+ New**, followed by **Compute** and **Function App**.

    ![Creating an Azure Function App](Images/new-function-app.png)

    _Creating an Azure Function App_

1. Enter an app name that is unique within Azure. Under **Resource Group**, select **Create new** and enter "FunctionsLabResourceGroup" (without quotation marks) as the resource-group name to create a resource group for the Function App. Choose the **Location** nearest you, and accept the default values for all other parameters. Then click **Create** to create a new Function App.

    > The app name becomes part of a DNS name and therefore must be unique within Azure. Make sure a green check mark appears next to the name, indicating it is unique. You probably **won't** be able to use "functionslab" as the app name.

    ![Naming a Function App](Images/function-app-name.png)

    _Naming a Function App_

1. Click **Resource groups** in the ribbon on the left side of the portal, and then click the resource group created for the Function App.

    ![Opening the resource group](Images/open-resource-group.png)

    _Opening the resource group_

1. Wait until "Deploying" changes to "Succeeded," indicating that the Function App has been deployed. Then click the storage account that was created for the Function App.

    > Refresh the page in the browser every now and then to update the deployment status. Clicking the **Refresh** button in the resource-group blade refreshes the list of resources in the resource group, but does not reliably update the deployment status.

    ![Opening the storage account](Images/open-storage-account-after-deployment.png)

    _Opening the storage account_

1. Click **Blobs** to view the contents of blob storage.

    ![Opening blob storage](Images/open-blob-storage.png)

    _Opening blob storage_

1. Click **+ Container** to add a container.

    ![Adding a container](Images/add-container.png)

    _Adding a container_

1. Type "uploaded" into the **Name** box. Then click the **Create** button to create the container.

    ![Naming the container](Images/name-container.png)

    _Naming the container_

1. Repeat Steps 8 and 9 to add containers named "accepted" and "rejected" to blob storage.

1. Confirm that all three containers were added to blob storage.

    ![The new containers](Images/new-containers.png)

    _The new containers_

The Azure Function App has been created and you have added three containers to the storage account created for it. The next step is to add an Azure Function.

<a name="Exercise2"></a>
## Exercise 2: Add an Azure Function ##

Once you have created an Azure Function App, you can add Azure Functions to it. In this exercise, you will add a function to the Function App you created in [Exercise 1](#Exercise1) and write JavaScript code that uses the [Computer Vision API](https://www.microsoft.com/cognitive-services/en-us/computer-vision-api) to analyze images added to the "uploaded" container for adult or racy content.

1. Return to the blade for the "FunctionsLabResourceGroup" resource group and click the Azure Function App that you created in [Exercise 1](#Exercise1).

    ![Opening the Function App](Images/open-function-app.png)

    _Opening the Function App_

1. Click **+ New Function** and set **Language** to **JavaScript**. Then click **BlobTrigger-JavaScript**.

    ![Selecting a function template](Images/function-app-select-template-node.png)

    _Selecting a function template_

1. Enter "BlobImageAnalysis" (without quotation marks) for the function name and "uploaded/{name}" into the **Path** box. (The latter applies the blob storage trigger to the "uploaded" container that you created in Exercise 1.) Then click the **Create** button to create the Azure Function.

    ![Creating an Azure Function](Images/create-azure-function.png)

    _Creating an Azure Function_

1. Replace the code shown in the code editor with the following statements:

    ```Javascript
    var request = require('request-promise');
    var azure = require('azure-storage');

    module.exports = function (context, myBlob) {
        context.log("Analyzing uploaded image '" + context.bindingData.name + "' for adult content...");
        var options = getAnalysisOptions(myBlob, process.env.SubscriptionKey);
        analyzeAndProcessImage(context, options);

        function getAnalysisOptions(image, subscriptionKey) {
            return {
                uri: "https://api.projectoxford.ai/vision/v1.0/analyze?visualFeatures=Adult",
                method: 'POST',
                body: image,
                headers: {
                    'Content-Type': 'application/octet-stream',
                    'Ocp-Apim-Subscription-Key': subscriptionKey
                }
            };
        }

        function analyzeAndProcessImage(context, options) {
            request(options)
                .then((response) => {
                    response = JSON.parse(response);
                    context.log("Is Adult: ", response.adult.isAdultContent);
                    context.log("Adult Score: ", response.adult.adultScore);
                    context.log("Is Racy: " + response.adult.isRacyContent);
                    context.log("Racy Score: " + response.adult.racyScore);

                    var fileName = context.bindingData.name;
                    var targetContainer = (response.adult.isAdultContent || response.adult.isRacyContent) ? 'rejected' : 'accepted';
                    var blobService = azure.createBlobService(process.env.AzureWebJobsStorage);

                    blobService.startCopyBlob(getStoragePath("uploaded", fileName), targetContainer, fileName, function (error, s, r) {
                        if (error) context.log(error);
                        context.log(fileName + " created in " + targetContainer + ".");

                        blobService.setBlobMetadata(targetContainer, fileName, {
                            "isAdultContent": response.adult.isAdultContent,
                            "adultScore": (response.adult.adultScore * 100).toFixed(0) + "%",
                            "isRacyContent": response.adult.isRacyContent,
                            "racyScore": (response.adult.racyScore * 100).toFixed(0) + "%"
                        }, function (error, s, r) {
                            if (error) context.log(error);
                            context.log(fileName + " metadata added successfully.");
                        });
                    });
                })
                .catch((error) => context.log(error))
                .finally(() => context.done());
        }

        function getStoragePath(container, fileName) {
            var storageConnection = (process.env.WEBSITE_CONTENTAZUREFILECONNECTIONSTRING).split(';');
            var accountName = storageConnection[1].split('=')[1];
            return "https://" + accountName + ".blob.core.windows.net/" + container + "/" + fileName + ".jpg";
        }
    };
    ```

    > In Node.js, **module.exports** is the method used to export a function to be executed. The **getAnalysisOptions** function in this code prepares a request for each blob added to the "uploaded" container, and the **analyzeAndProcessImage** function calls the Computer Vision API to analyze the image and creates a copy of the blob in either the "accepted" container or the "rejected" container, depending on the scores returned by the Computer Vision API. An image flagged as adult *or* racy is routed to "rejected," matching the lab's description.

1. Click the **Save** button at the top of the code editor to save your changes.

    ![Saving the function](Images/save-index-file.png)

    _Saving the function_

1. Click **Integrate**, and then click **Advanced editor** in the upper-right corner to view function bindings.

    ![Opening the advanced editor](Images/open-advanced-editor.png)

    _Opening the advanced editor_

1. Replace the JSON shown in the code editor with the following JSON:

    ```JSON
    {
      "bindings": [
        {
          "name": "myBlob",
          "type": "blobTrigger",
          "path": "uploaded/{name}.jpg",
          "connection": "AzureWebJobsStorage",
          "dataType": "binary",
          "direction": "in"
        }
      ],
      "disabled": false
    }
    ```

    > The statement that sets "dataType" to "binary" improves performance when processing blobs that are images.

1. Click **Save** to save the changes to the function bindings.

    ![Saving binding changes](Images/portal-save-config.png)

    _Saving binding changes_

1. The function has been written, but you must install some dependency packages in order for it to work. The dependencies come from the two *require* statements at the beginning of the JavaScript code that you added. To resolve these dependencies, begin by clicking **Function app settings** in the lower-left corner of the function designer.

    ![Viewing Function App settings](Images/function-app-select-app-settings.png)

    _Viewing Function App settings_

1. Click **Open dev console**.

    ![Opening the developer console](Images/open-dev-console.png)

    _Opening the developer console_

1. Execute the following command in the console to navigate to the function directory:

    ```
    cd BlobImageAnalysis
    ```

1. Execute the following command to have npm (the Node Package Manager, which is already available in the console) install any dependencies declared for the function:

    ```
    npm install
    ```

1. Execute the following commands in the console to install the packages used by the function:

    ```
    npm install request
    npm install request-promise
    npm install azure-storage
    ```

An Azure Function written in JavaScript has been created and configured, and the packages that the function relies upon have been installed. The next step is to add an application setting that the Azure Function relies on.

<a name="Exercise3"></a>
## Exercise 3: Add a subscription key to application settings ##

The Azure Function you created in [Exercise 2](#Exercise2) loads a subscription key for the Microsoft Cognitive Services Computer Vision API from application settings.
This key is required in order for your code to call the Computer Vision API, and is transmitted in an HTTP header in each call. In this exercise, you will add an application setting containing the subscription key to the Function App.

1. Open a new browser window and navigate to https://www.microsoft.com/cognitive-services/en-us/subscriptions. If you haven't signed up for the Computer Vision API, do so now. (Signing up is free.) Then click **Copy** under **Key 1** in your Computer Vision subscription to copy the subscription key to the clipboard.

    ![Copying the subscription key to the clipboard](Images/computer-vision-key.png)

    _Copying the subscription key to the clipboard_

1. Return to your Function App in the Azure Portal and click **Function app settings** in the lower-left corner of the function designer.

    ![Viewing Function App settings](Images/function-app-select-app-settings.png)

    _Viewing Function App settings_

1. Scroll down the page and click **Go to App Service Settings**.

    ![Viewing App Service settings](Images/open-app-service-settings.png)

    _Viewing App Service settings_

1. Click **Application settings**. Then scroll down until you find the "App settings" section. Add a new app setting named "SubscriptionKey" (without quotation marks), and paste the subscription key that is on the clipboard into the **Value** box. Then click **Save** at the top of the blade.

    ![Adding a subscription key](Images/add-key.png)

    _Adding a subscription key_

The work of writing and configuring the Azure Function is complete. Now comes the fun part: testing it out.

<a name="Exercise4"></a>
## Exercise 4: Test the Azure Function ##

Your function is configured to listen for changes to the blob container named "uploaded" that you created in [Exercise 1](#Exercise1). Each time an image appears in the container, the function executes and passes the image to the Computer Vision API for analysis. To test the function, you simply upload images to the container. In this exercise, you will use the Azure Portal to upload images to the "uploaded" container and verify that copies of the images are placed in the "accepted" and "rejected" containers.

1. In the Azure Portal, go to the resource group created for your Function App. Then click the storage account that was created for it.

    ![Opening the storage account](Images/open-storage-account.png)

    _Opening the storage account_

1. Click **Blobs** to view the contents of blob storage.

    ![Opening blob storage](Images/open-blob-storage.png)

    _Opening blob storage_

1. Click **uploaded** to open the "uploaded" container.

    ![Opening the "uploaded" container](Images/open-uploaded-container.png)

    _Opening the "uploaded" container_

1. Click **Upload**.

    ![Uploading images to the "uploaded" container](Images/upload-images-1.png)

    _Uploading images to the "uploaded" container_

1. Click the button with the folder icon to the right of the **Files** box. Select all of the files in this lab's "Resources" folder. Then click the **Upload** button to upload the files to the "uploaded" container.

    ![Uploading images to the "uploaded" container](Images/upload-images-2.png)

    _Uploading images to the "uploaded" container_

1. Return to the blade for the "uploaded" container and verify that eight images were uploaded.

    ![Images uploaded to the "uploaded" container](Images/uploaded-images.png)

    _Images uploaded to the "uploaded" container_

1. Close the blade for the "uploaded" container and open the "accepted" container.

    ![Opening the "accepted" container](Images/open-accepted-container.png)

    _Opening the "accepted" container_

1. Verify that the "accepted" container holds seven images. **These are the images that were classified as neither adult nor racy by the Computer Vision API**.

    > It may take a minute or more for all of the images to appear in the container. If necessary, click **Refresh** every few seconds until you see all seven images.

    ![Images uploaded to the "accepted" container](Images/accepted-images.png)

    _Images uploaded to the "accepted" container_

1. Close the blade for the "accepted" container and open the blade for the "rejected" container.

    ![Opening the "rejected" container](Images/open-rejected-container.png)

    _Opening the "rejected" container_

1. Verify that the "rejected" container holds one image. **This image was classified as adult or racy (or both) by the Computer Vision API**.

    ![Images uploaded to the "rejected" container](Images/rejected-images.png)

    _Images uploaded to the "rejected" container_

The presence of seven images in the "accepted" container and one in the "rejected" container is proof that your Azure Function executed each time an image was uploaded to the "uploaded" container. If you would like, return to the BlobImageAnalysis function in the portal and click **Monitor**. You will see a log detailing each time the function executed.

<a name="Exercise5"></a>
## Exercise 5: View blob metadata (optional) ##

What if you would like to view the scores for adult content and raciness returned by the Computer Vision API for each image uploaded to the "uploaded" container? The scores are stored in blob metadata for the images in the "accepted" and "rejected" containers, but blob metadata can't be viewed through the Azure Portal. In this exercise, you will use the cross-platform [Microsoft Azure Storage Explorer](http://storageexplorer.com) to view blob metadata and see how the Computer Vision API scored the images you uploaded.

1. If you haven't installed the Microsoft Azure Storage Explorer, go to http://storageexplorer.com and install it now. Versions are available for Windows, macOS, and Linux.

1. Start Storage Explorer. If you are asked to log in, do so using the same account you used to log in to the Azure Portal.

1. Find the storage account that was created for your Azure Function App in [Exercise 1](#Exercise1) and expand the list of blob containers underneath it.
Then click the container named "rejected." > If this is the first time you have run Storage Explorer, you may have to click the person icon and tell it which Azure subscription or subscriptions you want it to display. ![Opening the "rejected" container](Images/explorer-open-rejected-container.png) _Opening the "rejected" container_ 1. Right-click (on a Mac, Command-click) the image in the "rejected" container and select **Properties** from the context menu. ![Viewing blob metadata](Images/explorer-view-blob-metadata.png) _Viewing blob metadata_ 1. Inspect the blob's metadata. *isAdultContent* and *isRacyContent* are Boolean values that indicate whether the Computer Vision API detected adult or racy content in the image. *adultScore* and *racyScore* are the computed probabilities. ![Scores returned by the Computer Vision API](Images/explorer-metadata-values.png) _Scores returned by the Computer Vision API_ 1. Open the "accepted" container and inspect the metadata for some of the blobs stored there. How do these metadata values differ from the ones attached to the blob in the "rejected" container? You can probably imagine how this might be used in the real world. Suppose you were building a photo-sharing site and wanted to prevent adult images from being stored. You could easily write an Azure Function that inspects each image that is uploaded and deletes it from storage if it contains adult content. <a name="Summary"></a> ## Summary ## In this hands-on lab you learned how to: - Create an Azure Function App - Write an Azure Function that uses a blob trigger - Add application settings to an Azure Function App - Use Microsoft Cognitive Services to analyze images and store the results in blob metadata This is just one example of how you can leverage Azure Functions to automate repetitive tasks. Experiment with other Azure Function templates to learn more about Azure Functions and to identify additional ways in which they can aid your research or business.
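The accept/reject decision at the heart of this lab can be reduced to a few lines of code. The sketch below is Python for brevity (the lab's function itself is JavaScript), and the explicit score threshold is an assumption for illustration; the Computer Vision API also returns its own Boolean flags:

```python
def classify_image(is_adult, is_racy, adult_score, racy_score, threshold=0.5):
    """Decide which container an image belongs in: images flagged as adult or
    racy (or scoring above the assumed threshold) go to "rejected",
    everything else goes to "accepted"."""
    if is_adult or is_racy or adult_score >= threshold or racy_score >= threshold:
        return "rejected"
    return "accepted"

# Metadata values like those attached to the blobs in Exercise 5
print(classify_image(False, False, 0.0042, 0.0173))  # accepted
print(classify_image(True, False, 0.9651, 0.9921))   # rejected
```

The same predicate could drive the photo-sharing scenario mentioned above: delete the blob instead of copying it when the result is "rejected".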
--- Copyright 2017 Microsoft Corporation. All rights reserved. Except where otherwise noted, these materials are licensed under the terms of the MIT License. You may use them according to the license as is most appropriate for your project. The terms of this license can be found at https://opensource.org/licenses/MIT.
49.530516
713
0.740758
eng_Latn
0.985858
a145389b15223aee9fe027b079b517837966da5c
352
md
Markdown
README.md
sanyamason/gluu-gateway
abe98ecf4dc251cde698dd51dd157394431e66bb
[ "Apache-2.0" ]
31
2018-01-04T08:55:46.000Z
2021-09-06T14:33:24.000Z
README.md
sanyamason/gluu-gateway
abe98ecf4dc251cde698dd51dd157394431e66bb
[ "Apache-2.0" ]
365
2017-11-30T16:19:19.000Z
2021-04-23T18:51:19.000Z
README.md
sanyamason/gluu-gateway
abe98ecf4dc251cde698dd51dd157394431e66bb
[ "Apache-2.0" ]
20
2018-01-12T11:23:05.000Z
2022-01-28T00:20:16.000Z
## Gluu Gateway !!! Attention There will be no further releases of Gluu Gateway after version 4.2. The EOL of this product will be announced soon. Gluu Gateway (GG) is an API gateway that leverages the Gluu Server for central OAuth client management and access control. Documentation can be found in the [Gluu Gateway docs](https://gluu.org/docs/gg).
58.666667
203
0.775568
eng_Latn
0.995568
a145633d2bb59ad843e354cb964a659160fe4248
984
md
Markdown
core-java-modules/core-java-lang-4/README.md
DBatOWL/tutorials
13b45f3843c204fbd23014bf52f618223ab918ad
[ "MIT" ]
1
2022-03-26T05:12:58.000Z
2022-03-26T05:12:58.000Z
core-java-modules/core-java-lang-4/README.md
DBatOWL/tutorials
13b45f3843c204fbd23014bf52f618223ab918ad
[ "MIT" ]
null
null
null
core-java-modules/core-java-lang-4/README.md
DBatOWL/tutorials
13b45f3843c204fbd23014bf52f618223ab918ad
[ "MIT" ]
1
2022-03-18T01:32:14.000Z
2022-03-18T01:32:14.000Z
## Core Java Lang (Part 4) This module contains articles about core features in the Java language - [The Java final Keyword – Impact on Performance](https://www.baeldung.com/java-final-performance) - [The package-info.java File](https://www.baeldung.com/java-package-info) - [What are Compile-time Constants in Java?](https://www.baeldung.com/java-compile-time-constants) - [Java Objects.hash() vs Objects.hashCode()](https://www.baeldung.com/java-objects-hash-vs-objects-hashcode) - [Referencing a Method in Javadoc Comments](https://www.baeldung.com/java-method-in-javadoc) - [Tiered Compilation in JVM](https://www.baeldung.com/jvm-tiered-compilation) - [Fixing the “Declared package does not match the expected package” Error](https://www.baeldung.com/java-declared-expected-package-error) - [Chaining Constructors in Java](https://www.baeldung.com/java-chain-constructors) - [Difference Between POJO, JavaBeans, DTO and VO](https://www.baeldung.com/java-pojo-javabeans-dto-vo)
70.285714
138
0.770325
yue_Hant
0.553534
a1458e73482a59631ca622d3b9d2c771d50b8507
1,263
md
Markdown
_posts/2018-07-05-MachineLearning_introduce.md
jckchj/Wiki
be9728870936284a9860c5c8a28bd38a985c93e1
[ "MIT" ]
null
null
null
_posts/2018-07-05-MachineLearning_introduce.md
jckchj/Wiki
be9728870936284a9860c5c8a28bd38a985c93e1
[ "MIT" ]
null
null
null
_posts/2018-07-05-MachineLearning_introduce.md
jckchj/Wiki
be9728870936284a9860c5c8a28bd38a985c93e1
[ "MIT" ]
null
null
null
--- layout: post title: "Introduction to Machine Learning (A Glossary of Terms)" date: 2018-07-05 description: "An introductory machine learning post" tag: Technology --- ### General Terms #### ML **Definition:** Machine learning (ML) is a multidisciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and more. It studies how computers can simulate or implement human learning behavior in order to acquire new knowledge or skills, and how they can reorganize existing knowledge structures to continuously improve their own performance. It is the core of artificial intelligence and the fundamental way to make computers intelligent; its applications span every field of AI, and it relies mainly on induction and synthesis rather than deduction. #### DL **Definition:** Deep learning (DL) is a branch of machine learning. It attempts to perform high-level abstraction of data using algorithms composed of multiple processing layers with complex structures or multiple non-linear transformations. Deep learning is a representation-learning method within machine learning. An observation (for example, an image) can be represented in many ways, such as a vector of per-pixel intensity values, or more abstractly as a set of edges or regions of particular shapes. Some of these representations make it easier to learn a task from examples (for example, face recognition or facial-expression recognition). The advantage of deep learning is that it replaces hand-crafted feature engineering with efficient algorithms for unsupervised or semi-supervised feature learning and hierarchical feature extraction. #### CNN **Definition:** Convolutional neural networks (CNNs) are a kind of deep, supervised machine learning model. ### Algorithm Terms #### KNN: **Definition:** The k-nearest neighbors algorithm (kNN) is a classification algorithm, and one of the simplest methods in data-mining classification. "k nearest neighbors" means that each sample can be represented by its k closest neighbors. The core idea of kNN is that if the majority of the k most similar samples of a given sample in feature space belong to a certain category, then the sample also belongs to that category and shares the characteristics of the samples in it. In making the classification decision, the method relies only on the category of the nearest one or few samples. Because kNN depends mainly on a limited number of nearby samples rather than on discriminating class domains, it is better suited than other methods for sample sets whose class domains intersect or overlap heavily. #### SVM: **Definition:** Support vector machine (SVM). In machine learning, a support vector machine (SVM) is a supervised learning model commonly used for pattern recognition, classification, and regression analysis. <br> Please credit when reposting: [Pan Baixin's Blog](http://baixin) » [Read the original](http://baixin.io/2016/07/MachineLearning_introduce/)
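The kNN idea described above fits in a few lines of Python (an illustrative sketch; the sample points are invented):

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training samples.
    `train` is a list of ((x, y), label) pairs; distance is plain squared Euclidean."""
    by_distance = sorted(
        train,
        key=lambda p: (p[0][0] - query[0]) ** 2 + (p[0][1] - query[1]) ** 2,
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Two well-separated clusters of made-up points
train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]
print(knn_classify(train, (0.5, 0.5)))  # A
print(knn_classify(train, (5.5, 5.5)))  # B
```

Note how the decision depends only on the k nearest samples, exactly as the definition says: no model is fitted, and overlapping class domains are handled naturally by the local vote.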
32.384615
220
0.78939
yue_Hant
0.690794
a145a18a0d04b136f081614d9d2271c489d6e490
460
md
Markdown
README.md
CvanderStoep/VideosSampleCode
38a8d2538a041d5664d0040807ffac463d0fb79c
[ "MIT" ]
null
null
null
README.md
CvanderStoep/VideosSampleCode
38a8d2538a041d5664d0040807ffac463d0fb79c
[ "MIT" ]
null
null
null
README.md
CvanderStoep/VideosSampleCode
38a8d2538a041d5664d0040807ffac463d0fb79c
[ "MIT" ]
null
null
null
This repository contains all the sample code for the mCoding videos. All code in this repository is licensed under the open source [MIT license](https://choosealicense.com/licenses/mit/). You don't need to ask permission to use it, feel free! It's not required, but I'd appreciate it if you would link my YouTube channel if you used my code! [mCoding YouTube channel](https://www.youtube.com/mCodingWithJamesMurphy) [mCoding website](https://mcoding.io)
51.111111
118
0.784783
eng_Latn
0.984975
a145be288c34cc0839174aefeb43c9fcd92d964f
736
md
Markdown
docs/mavenPlugin.md
luigidemasi/fabric8
e5bfe84bd0ec91bf595243c5f8c3fc2558989978
[ "ECL-2.0", "Apache-2.0" ]
48
2015-02-06T09:18:53.000Z
2022-03-21T07:12:01.000Z
docs/mavenPlugin.md
luigidemasi/fabric8
e5bfe84bd0ec91bf595243c5f8c3fc2558989978
[ "ECL-2.0", "Apache-2.0" ]
741
2015-01-05T09:02:06.000Z
2020-07-15T16:29:02.000Z
docs/mavenPlugin.md
luigidemasi/fabric8
e5bfe84bd0ec91bf595243c5f8c3fc2558989978
[ "ECL-2.0", "Apache-2.0" ]
74
2015-02-24T17:15:10.000Z
2020-06-24T14:02:06.000Z
## Fabric8 Maven Plugin The fabric8 maven plugin makes it easy to work with Docker and Kubernetes or OpenShift from inside your existing Maven project. ### Version 3.x If you are starting a new project we highly recommend using the new 3.x version of the fabric8-maven-plugin which has the following features: * much simpler to use! * can detect most common Java apps and just do the right thing OOTB * configure via maven XML or by writing optional partial kubernetes YAML files in `src/main/fabric8` * pluggable generators and enrichers for different kinds of apps (e.g. to auto detect Spring Boot, executable jars, WARs, Karaf etc). See the [fabric8 maven plugin 3.x documentation](https://maven.fabric8.io/) for more details!
49.066667
141
0.779891
eng_Latn
0.991888
a1460a0d711e91b0b06f06eb0b5aa3ea463737a2
2,644
md
Markdown
action/README.md
SteveNY-Tibco/catalystml-flogo
20b314b2a84cd7c89e0af6dadcb925266a19e769
[ "BSD-3-Clause" ]
null
null
null
action/README.md
SteveNY-Tibco/catalystml-flogo
20b314b2a84cd7c89e0af6dadcb925266a19e769
[ "BSD-3-Clause" ]
null
null
null
action/README.md
SteveNY-Tibco/catalystml-flogo
20b314b2a84cd7c89e0af6dadcb925266a19e769
[ "BSD-3-Clause" ]
null
null
null
# Introduction This is a Flogo Action implementation of the CatalystML specification. It can also act as a standalone Go implementation of CatalystML. For more information, see [Flogo Action](https://github.com/project-flogo/core/tree/master/action). # Implementation Details: This Flogo Action creates a pipeline of the operations specified in the JSON spec and executes them sequentially. One of the challenging aspects of the pipeline is resolving the data in the spec. The Mappers, Resolvers, and Scope interfaces provided by the Project-Flogo core repository are used to simplify data resolution. For more information please visit [Project-flogo/core](https://github.com/project-flogo/core). ## Detailed Implementation: The implementation of the CML specification in Flogo can be divided into three steps: * Configuration. * Initialization. * Execution. ### Configuration. The CML JSON spec is unmarshalled into a [DefinitionConfig](pipeline/definition.go) struct. This struct is used to set up an [Instance](pipeline/instance.go) of the pipeline. ### Initialization. During the initialization of the pipeline instance, [Mappers](https://github.com/project-flogo/core/blob/master/data/mapper/mapper.go) are set up for the input and output of the CML. [Operations](operation/operation.go) (defined in the CML spec) are also initialized. The registered operations are fetched and initialized using [factories](operation/registry.go). The mappers for the input and output of each operation are also initialized. ### Execution. After initialization, when the action is called, the program iterates over the operations, executing each one. The input mappers of each operation are resolved before it executes. Only the inputs defined by the operation are passed to its execution. There can be other variables in the pipeline scope that are not passed to an operation's execution.
After an operation executes, its output mappers are resolved and the outputs are added to the pipeline scope, even if they are not needed by later operations. For more information on how mappers and resolvers work, please visit [Mappers](https://github.com/project-flogo/core/blob/master/data/mapper/mapper.go), [Resolvers](https://github.com/project-flogo/core/blob/master/data/resolve/resolve.go), and [Scope](https://github.com/project-flogo/core/blob/master/data/resolve/scope.go). The resolution of inputs and outputs is done using the pipeline scope. The pipeline scope is simply the collection of all the variables (the outputs of each operation and the inputs of the CML) together with their values. After all the operations have executed, the output of the CML is resolved and returned.
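The execution model described above, a sequential pipeline whose operations read their declared inputs from a shared scope and write their outputs back into it, can be sketched in a few lines. This is an illustrative Python reduction of the idea (the actual implementation is in Go, and the operation definitions here are invented):

```python
def run_pipeline(operations, cml_input):
    """Execute operations in order. The scope accumulates every output,
    whether or not a later operation needs it, mirroring the text above."""
    scope = dict(cml_input)  # the pipeline scope starts as the CML input
    for op in operations:
        # Resolve only the inputs declared by the operation
        args = {name: scope[name] for name in op["inputs"]}
        # The resolved output joins the pipeline scope
        scope[op["output"]] = op["fn"](**args)
    return scope

ops = [
    {"inputs": ["x"], "output": "scaled", "fn": lambda x: [v * 2 for v in x]},
    {"inputs": ["scaled"], "output": "total", "fn": lambda scaled: sum(scaled)},
]
scope = run_pipeline(ops, {"x": [1, 2, 3]})
print(scope["total"])  # 12
```

Note that `scaled` stays in the scope after the last operation even though nothing else consumes it, which is exactly the behavior the text describes.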
114.956522
380
0.797277
eng_Latn
0.997074
a14705f99662feef2f9e2d076f991fd2c3017da8
181
md
Markdown
README.md
gmanjames/ui-playground
2d37183c4b92da8ede1b9cc811b44f555d59f7ea
[ "MIT" ]
null
null
null
README.md
gmanjames/ui-playground
2d37183c4b92da8ede1b9cc811b44f555d59f7ea
[ "MIT" ]
null
null
null
README.md
gmanjames/ui-playground
2d37183c4b92da8ede1b9cc811b44f555d59f7ea
[ "MIT" ]
1
2018-05-18T18:09:09.000Z
2018-05-18T18:09:09.000Z
# ui-playground ## Purpose Mostly for experimentation with user interface ideas. See what madness is currently being featured [here](https://gmanjames.github.io/ui-playground)!
36.2
97
0.779006
eng_Latn
0.998243
a1496250e038b8bad1544d6f768d556a80a3304c
1,350
md
Markdown
lib/rules/unit-no-unknown/README.md
HstarStudio/stylelint
6aed07678e57e4773f9a4e7a4700f1d1484e4839
[ "MIT" ]
null
null
null
lib/rules/unit-no-unknown/README.md
HstarStudio/stylelint
6aed07678e57e4773f9a4e7a4700f1d1484e4839
[ "MIT" ]
null
null
null
lib/rules/unit-no-unknown/README.md
HstarStudio/stylelint
6aed07678e57e4773f9a4e7a4700f1d1484e4839
[ "MIT" ]
null
null
null
# unit-no-unknown Disallow unknown units. ```css a { width: 100pixels; } /** ↑ * These units */ ``` This rule considers units defined in the CSS Specifications, up to and including Editor's Drafts, to be known. ## Options ### `true` The following patterns are considered violations: ```css a { width: 10pixels; } ``` ```css a { width: calc(10px + 10pixels); } ``` The following patterns are *not* considered violations: ```css a { width: 10px; } ``` ```css a { width: 10Px; } ``` ```css a { width: 10pX; } ``` ```css a { width: calc(10px + 10px); } ``` ## Optional secondary options ### `ignoreUnits: ["/regex/", /regex/, "string"]` Given: ```js ["/^my-/", "custom"] ``` The following patterns are *not* considered violations: ```css a { width: 10custom; } ``` ```css a { width: 10my-unit; } ``` ```css a { width: 10my-other-unit; } ``` ### `ignoreFunctions: ["/regex/", /regex/, "string"]` Given: ```js ["image-set", "/^my-/", "/^YOUR-/i"] ``` The following patterns are *not* considered violations: ```css a { background-image: image-set( '/images/some-image-1x.jpg' 1x, '/images/some-image-2x.jpg' 2x, '/images/some-image-3x.jpg' 3x ); } ``` ```css a { background-image: my-image-set( '/images/some-image-1x.jpg' 1x, '/images/some-image-2x.jpg' 2x, '/images/some-image-3x.jpg' 3x ); } ``` ```css a { background-image: YoUr-image-set( '/images/some-image-1x.jpg' 1x, '/images/some-image-2x.jpg' 2x, '/images/some-image-3x.jpg' 3x ); } ```
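Conceptually, the rule reduces to extracting each numeric value's unit and checking it, case-insensitively, against a known list while skipping anything matched by `ignoreUnits`. A rough Python sketch of that idea (the real rule parses CSS properly and its unit list is much longer; `KNOWN_UNITS` here is abbreviated):

```python
import re

KNOWN_UNITS = {"px", "em", "rem", "vh", "vw", "s", "ms", "deg", "fr"}  # abbreviated

def unknown_units(value, ignore=()):
    """Return the units in `value` that are neither known nor ignored.
    Known-unit matching is case-insensitive, like the rule's 10Px example."""
    units = re.findall(r"\d+(?:\.\d+)?([a-z-]+)", value, flags=re.IGNORECASE)
    return [u for u in units
            if u.lower() not in KNOWN_UNITS
            and not any(re.search(pat, u) for pat in ignore)]

print(unknown_units("calc(10px + 10pixels)"))        # ['pixels']
print(unknown_units("10my-unit", ignore=[r"^my-"]))  # []
```

This mirrors the examples above: `10pixels` is flagged even inside `calc()`, while `10Px` and units matched by an ignore pattern pass.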
10.714286
110
0.543704
yue_Hant
0.37569
a149a1056acfd38567a7daf1319e1ab9b692517c
1,383
md
Markdown
CHANGELOG.md
RyanJ93/diesis
56b98f36bae56d63fa8558f3ac1d0dfdca79e1a6
[ "MIT" ]
4
2020-12-23T11:50:45.000Z
2021-09-04T14:06:49.000Z
CHANGELOG.md
RyanJ93/diesis
56b98f36bae56d63fa8558f3ac1d0dfdca79e1a6
[ "MIT" ]
null
null
null
CHANGELOG.md
RyanJ93/diesis
56b98f36bae56d63fa8558f3ac1d0dfdca79e1a6
[ "MIT" ]
null
null
null
# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## [1.0.5] - 2020-07-19 ### Changed - Improved iTunes API's results filtering. - Fixed a bug causing song lyrics from MusixMatch to be truncated. - The "Writer(S): " prefix in MusixMatch's results is now removed from the lyrics author name. ## [1.0.4] - 2019-09-07 ### Changed - Improved song information lookup from the iTunes API endpoint by adding multiple country codes support. ## [1.0.3] - 2019-07-29 ### Fixed - Fixed a bug in logging. - Fixed a bug that was causing UTF-8 characters to be saved incorrectly. ## [1.0.2] - 2019-04-15 ### Changed - Improved stability in HTTP requests. ### Fixed - Fixed a bug in argument handling. ## [1.0.1] - 2019-04-12 ### Added - Single file processing is now supported. - Added support for log files. ### Changed - Cleaned up some parts of the application code. - Improved song recognition quality. ### Fixed - Fixed a bug that was causing inability to look up song lyrics. - Fixed a bug that was causing a wrong interpretation of the results from AZLyrics. - Fixed a bug in destination path generation. - Fixed a bug that was causing the application to crash if an HTTP request failed.
24.696429
101
0.723066
eng_Latn
0.992318
a14b628a30bab15f291dbe574399a030c21d117b
1,022
md
Markdown
jsonnet/README.md
survivant/monitoring
ad309289e9ada4c9aa04be288f19f7a9a04961fd
[ "Apache-2.0" ]
16
2021-05-12T06:04:13.000Z
2022-03-24T04:59:37.000Z
jsonnet/README.md
survivant/monitoring
ad309289e9ada4c9aa04be288f19f7a9a04961fd
[ "Apache-2.0" ]
39
2021-05-07T06:08:00.000Z
2022-02-23T18:42:40.000Z
jsonnet/README.md
survivant/monitoring
ad309289e9ada4c9aa04be288f19f7a9a04961fd
[ "Apache-2.0" ]
13
2021-04-29T15:07:56.000Z
2022-02-23T18:46:55.000Z
# OpenEBS Monitoring This repository collects Kubernetes manifests, Grafana dashboards, and Prometheus rules to monitor OpenEBS with the help of the kube-prometheus stack. The content of this project is written in jsonnet. This project can be described both as a package and as a library. ## Generate manifests 1. Clone the repo: ``` $ git clone https://github.com/openebs/monitoring $ cd monitoring/jsonnet ``` 2. Generate the manifests: ``` $ make generate ``` The files in `manifests/` are the YAML manifests that have to be applied to the Kubernetes cluster. 3. Apply the kube-prometheus stack and openebs monitoring addons: The previous step has generated a bunch of manifest files in the `manifests/` directory. Now simply use kubectl to install Prometheus and Grafana as per your configuration: ``` # Apply kube-prometheus stack yamls $ kubectl apply -f manifests/setup $ kubectl apply -f manifests/ # Apply the openebs monitoring addons $ kubectl apply -f manifests/openebs-addons ```
28.388889
175
0.753425
eng_Latn
0.977232
a14bb91e6af1ccd3d870924a2f0b9b5022b919e0
11,741
md
Markdown
README.md
ahmetcadirci25/n11-php-api
e4ce5c0dd50210d2c9046e9ebdce83c1a076fedd
[ "MIT" ]
null
null
null
README.md
ahmetcadirci25/n11-php-api
e4ce5c0dd50210d2c9046e9ebdce83c1a076fedd
[ "MIT" ]
null
null
null
README.md
ahmetcadirci25/n11-php-api
e4ce5c0dd50210d2c9046e9ebdce83c1a076fedd
[ "MIT" ]
null
null
null
[![Latest Stable Version](https://poser.pugx.org/ismail0234/n11-php-api/v/stable)](https://packagist.org/packages/ismail0234/n11-php-api) [![Total Downloads](https://poser.pugx.org/ismail0234/n11-php-api/downloads)](https://packagist.org/packages/ismail0234/n11-php-api) [![License](https://poser.pugx.org/ismail0234/n11-php-api/license)](https://packagist.org/packages/ismail0234/n11-php-api) # N11 PHP API This API was written for n11. It is an advanced PHP integration API for N11. As an extra, it also includes a function that transfers the orders your store receives on n11 to your website. ## Become a Patron and Support Me! If you are happy with my work, you can support me on Patreon: https://www.patreon.com/botbenson/ ## Call for Contributions Since I do not have much time, not all of N11's API functions have been completed. You can complete the missing functions and send a **pull request**, or open an **issue** to request a function you need. **Keep in mind that since I maintain this project for free, there is no guarantee anything will be done right away!** ### Change Log - See [ChangeLog](https://github.com/ismail0234/n11-php-api/blob/master/CHANGELOG.md) ### License - See [License](https://github.com/ismail0234/n11-php-api/blob/master/LICENSE) ## Quick Overview * [Installation](#installation) * [Usage](#usage) * [City Services (CityService)](#city-services-cityservice) * [Shipment Company Services (ShipmentCompanyService)](#shipment-company-services-shipmentcompanyservice) * [Category Service (CategoryService)](#category-service-categoryservice) * [Product Service (ProductService)](#product-service-productservice) * [Product Selling Service (ProductSellingService)](#product-selling-service-productsellingservice) * [Product Stock Service (ProductStockService)](#product-stock-service-productstockservice) * [Order Service](#order-service) * [N11 Order Notification WebHook (N11 Order WebHook Beta)](#n11-order-notification-webhook-n11-order-webhook-beta)
## Installation You need Composer for installation. If you do not have Composer, you can download it for Windows [here](https://getcomposer.org/). ```php composer require ismail0234/n11-php-api ``` ## Usage ```php use IS\PazarYeri\N11\N11Client; include "vendor/autoload.php"; $client = new N11Client(); $client->setApiKey('xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'); $client->setApiPassword('xxxxxxxxxxxxxxxx'); ``` ### City Services (CityService) ```php /** * * @description Returns the list of all cities on N11. * */ $client->city->getCities(); /** * * @description Returns some information about the city. * @param int City Id - Required * */ $client->city->getCity(34); /** * * @description Used to list the districts of the city with the given plate code. * @param int City Id - Required * */ $client->city->getDistrict(34); /** * * @description Used to list the quarters/neighborhoods for the given district code. * @param int District Id - Required * */ $client->city->getNeighborhoods(22569); ``` ### Shipment Company Services (ShipmentCompanyService) ```php /** * * @description Lists all shipment companies defined on N11 * */ $client->shipmentcompany->getShipmentCompanies(); ``` ### Shipment Template Service (ShipmentService) ```php /** * * @description Method used to list the details of the delivery templates you have created. * */ $client->shipment->getShipmentTemplateList(); /** * * @description Returns the details of the template looked up by template name. * @param string Template Name - Required * */ $client->shipment->getShipmentTemplate('Ücretsiz Kargo'); ```
### Category Service (CategoryService) ```php /** * * @description Returns all top-level categories defined on N11. * */ $client->category->getTopLevelCategories(); /** * * @description The requested category can be a top-level category or a category at any other level; this method is used to list the attributes that belong to those categories * and the values of those attributes. * @param int Category Id - Required * @param array Paging - Optional * */ $client->category->getCategoryAttributes(1002841, array('currentPage' => 1, 'pageSize' => 20)); /** * * @description The requested category can be a top-level category or a category at any other level; * this method is used to list the attributes that belong to those categories. * @param int Category Id - Required * */ $client->category->getCategoryAttributesId(1002841); /** * * @description Given the id assigned to an attribute in the system (category.attributeList.attribute.id), * lists the values belonging to that attribute. * @param int Category Id - Required * @param array Paging - Optional * */ $client->category->getCategoryAttributeValue(354080997, array('currentPage' => 0, 'pageSize' => 20)); /** * * @description Use this method to reach the first-level parent categories of the category with the given code. To reach second-level parent categories * (e.g. the "Clothing" node in the "Leather shoes -> Shoes -> Clothing" category tree), * call the service again with the code of the first-level parent category (e.g. Shoes). * */ $client->category->getParentCategory(1000717); /** * * @description Use this method to reach the first-level subcategories of the category with the given code. To reach second-level subcategories * (e.g. the "Leather shoes" node in the "Clothing -> Shoes -> Leather shoes" category tree), * call the service again with the code of the first-level subcategory (e.g. Shoes). * */ $client->category->getSubCategories(1002841); ```
### Product Service (ProductService) ```php /** * * @description Fetches the details of a product registered in the system using its N11 product ID. * */ $client->product->getProductByProductId(359620750); /** * * @description Fetches the details of a product registered in the system using its seller product code. * */ $client->product->getProductBySellerCode('IS-20014'); /** * * @description Used to list the products on N11. * @param array Paging - Optional * */ $client->product->getProductList(array('currentPage' => 0, 'pageSize' => 20)); /** * * @description Used to delete a registered product by its N11 ID. * @param int N11 Product Id - Required * */ $client->product->deleteProductById(1234567890); /** * * @description Used to delete a registered product by its seller product code. * @param string The product's seller code on N11 - Required * */ $client->product->deleteProductBySellerCode(1234567890); ``` ### Product Selling Service (ProductSellingService) ```php /** * * @description Used to take a product that is on sale off sale, using its N11 product ID. * @param int N11 Product Id - Required * */ $client->selling->stopSellingProductByProductId(1234567890); /** * * @description Used to put a product that is not on sale up for sale, using its seller product code. * @param string Seller product code - Required * */ $client->selling->startSellingProductBySellerCode('IS-20014'); /** * * @description Used to put a product that is not on sale up for sale, using its N11 product ID. * @param int N11 Product Id - Required * */ $client->selling->startSellingProductByProductId(1234567890); /** * * @description Used to stop selling a product that is on sale, using its seller product code. * @param string Seller product code - Required * */ $client->selling->stopSellingProductBySellerCode('IS-20014'); ```
### Product Stock Service (ProductStockService) ```php /** * * @description Method that fetches the stock information of a product registered in the system by its N11 product ID. * The response also contains a "version" for the stock state; if the product's stock has changed, * this version number increases, so the caller can check the version to tell * whether the stock information held by N11 has changed. * @param int N11 Product Id - Required * */ $client->stock->getProductStockByProductId(1234567890); ``` ### Order Service ```php /** * * @description This method is used to list summary information about orders. * @note Optionally, any of the fields in the array may be left out, or the array may not be sent at all. * @param array Search query - Optional * */ $client->order->orderList( array( // Product ID number 'productId' => 1234567890, // Order status => New, Approved, Rejected, Shipped, Delivered, Completed, Claimed, LATE_SHIPMENT 'status' => 'New', // Buyer name 'buyerName' => 'ismail', // Order number 'orderNumber' => 1234567890, // Seller product code 'productSellerCode' => 'IS-20014', // Name of the person receiving the delivery 'recipient' => 'ismail', // Order creation date range 'period' => array( // Start date => d/m/Y H:i:s 'startDate' => '28/06/2019', // End date => d/m/Y H:i:s 'endDate' => '01/07/2019' ), // List orders by update date 'sortForUpdateDate' => false, // Paging 'pagingData' => array( // Current page 'currentPage' => 0, // Items per page 'pageSize' => 20 ) ) ); /** * * @description Used to fetch order details using the order's N11 ID; * the order's N11 ID can be obtained via the orderList methods. * @param int Order ID number - Required * */ $client->order->orderDetail(123456789); ```
### N11 Order Notification WebHook (N11 Order WebHook) BETA Since N11 does not provide a webhook for order notifications, this webhook was written for anyone who wants that functionality. To use the webhook, the **sqlite** PDO driver must be installed on your server.
**Note:** On Linux, the file you create must keep running continuously in the background. You can use **tmux** or **write a service** for this. **Do not run it with a cron job!** _**Keep in mind that this webhook is still in beta!**_ ```php include "vendor/autoload.php"; use IS\PazarYeri\N11\N11Client; $client = new N11Client(); $client->setApiKey('xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'); $client->setApiPassword('xxxxxxxxxxxxxxxx'); /** * * @description Webhook request rate * @param string * 'slow' => 300 seconds, * 'medium' => 180 seconds (default/recommended), * 'fast' => 60 seconds * 'vfast' => 30 seconds * */ $client->webhook->setRequestMode('medium'); /** * * @description How many orders to fetch from the N11 results * @param string * 'vmax' => 100 items, * 'max' => 75 items, * 'medium' => 50 items (default/recommended), * 'min' => 30 items * */ $client->webhook->setResultMode('medium'); /** * * @description Should past orders be checked for order notifications? * @param bool * true => Yes (default/recommended), * false => No, * */ // Warning! This function was removed as of version 1.1.0. // $client->webhook->setOldConsumeMode(true); /* Receiving orders with an anonymous function */ $client->webhook->orderConsume(function($order){ echo "Order Information"; echo "<pre>"; print_r($order); echo "</pre>"; }); /* Receiving orders with a class */ Class N11Orders { public function consume($order) { echo "Order Information"; echo "<pre>"; print_r($order); echo "</pre>"; } } $client->webhook->orderConsume(array(new N11Orders(), 'consume')); ```
235
0.721063
tur_Latn
0.995567
a14bde45341cedd2d5421f2cdfb4fa6d637cc376
5,030
md
Markdown
docs/odbc/microsoft/text-file-format-text-file-driver.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
2
2020-05-07T19:40:49.000Z
2020-09-19T00:57:12.000Z
docs/odbc/microsoft/text-file-format-text-file-driver.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/odbc/microsoft/text-file-format-text-file-driver.md
cawrites/sql-docs
58158eda0aa0d7f87f9d958ae349a14c0ba8a209
[ "CC-BY-4.0", "MIT" ]
2
2020-03-11T20:30:39.000Z
2020-05-07T19:40:49.000Z
---
title: "Text File Format (Text File Driver) | Microsoft Docs"
ms.custom: ""
ms.date: "01/19/2017"
ms.prod: sql
ms.prod_service: connectivity
ms.reviewer: ""
ms.technology: connectivity
ms.topic: conceptual
helpviewer_keywords:
  - "delimited text lines"
  - "fixed-width text files"
  - "text format [ODBC]"
  - "text file driver [ODBC], text format"
ms.assetid: f53cd4b5-0721-4562-a90f-4c55e6030cb9
author: MightyPen
ms.author: genemi
---
# Text File Format (Text File Driver)
The ODBC Text driver supports both delimited and fixed-width text files. A text file consists of an optional header line and zero or more text lines. Although the header line uses the same format as the other lines in the text file, the ODBC Text driver interprets the header line entries as column names, not data.

A delimited text line contains one or more data values separated by delimiters: commas, tabs, or a custom delimiter. The same delimiter must be used throughout the file. Null data values are denoted by two delimiters in a row with no data between them. Character strings in a delimited text line can be enclosed in double quotation marks (""). No blanks can occur before or after delimited values.

The width of each data entry in a fixed-width text line is specified in a schema. Null data values are denoted by blanks.

Tables are limited to a maximum of 255 fields. Field names are limited to 64 characters, and field widths are limited to 32,766 characters. Records are limited to 65,000 bytes.

A text file can be opened only for a single user. Multiple users are not supported.
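The delimited-line rules above (delimiter-separated values, a NULL written as two delimiters in a row, optional double-quoted strings with embedded quotes doubled) can be sketched in Python. This is an illustrative parser under those stated rules, not the driver itself; the function name is ours, and corner cases such as a quoted empty string are glossed over:

```python
def split_delimited_line(line, delimiter=","):
    """Split one delimited text line: fields are separated by `delimiter`,
    an empty field (two delimiters in a row) is read as NULL (None), and a
    field may be wrapped in double quotes, with embedded quotes doubled ("")."""
    fields, buf, quoted = [], [], False
    i = 0
    while i < len(line):
        ch = line[i]
        if quoted:
            if ch == '"':
                if i + 1 < len(line) and line[i + 1] == '"':
                    buf.append('"')  # "" inside quotes -> one literal quote
                    i += 1
                else:
                    quoted = False   # closing quote
            else:
                buf.append(ch)
        elif ch == '"' and not buf:
            quoted = True            # opening quote at the start of a field
        elif ch == delimiter:
            fields.append("".join(buf) if buf else None)  # empty -> NULL
            buf = []
        else:
            buf.append(ch)
        i += 1
    fields.append("".join(buf) if buf else None)
    return fields
```

Note how the same routine handles comma-, tab-, and custom-delimited lines, since the driver requires one delimiter to be used throughout the file.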
The following grammar, written for programmers, defines the format of a text file that can be read by the ODBC text driver:

|Format|Representation|
|------------|--------------------|
|Non-italics|Characters that must be entered as shown|
|*italics*|Arguments that are defined elsewhere in the grammar|
|brackets ([])|Optional items|
|braces ({})|A list of mutually exclusive choices|
|vertical bars (&#124;)|Separate mutually exclusive choices|
|ellipses (...)|Items that can be repeated one or more times|

The format of a text file is:

```
text-file ::= [delimited-header-line] [delimited-text-line]... end-of-file
            | [fixed-width-header-line] [fixed-width-text-line]... end-of-file

delimited-header-line ::= delimited-text-line

delimited-text-line ::= blank-line | delimited-data [delimiter delimited-data]... end-of-line

fixed-width-header-line ::= fixed-width-text-line

fixed-width-text-line ::= blank-line | fixed-width-data [fixed-width-data]... end-of-line

end-of-file ::= <EOF>

blank-line ::= end-of-line

delimited-data ::= delimited-string | number | date | delimited-null

fixed-width-data ::= fixed-width-string | number | date | fixed-width-null
```

> [!NOTE]
> The width of each column in a fixed-width text file is specified in the Schema.ini file.

```
end-of-line ::= <CR> | <LF> | <CR><LF>

delimited-string ::= unquoted-string | quoted-string

unquoted-string ::= [character | digit] [character | digit | quote-character]...

quoted-string ::= quote-character [character | digit | delimiter | end-of-line | embedded-quoted-string]... quote-character

embedded-quoted-string ::= quote-character quote-character [character | digit | delimiter | end-of-line] quote-character quote-character

fixed-width-string ::= [character | digit | delimiter | quote-character]...

character ::= any character except: delimiter, digit, end-of-file, end-of-line, quote-character

digit ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9

delimiter ::= , | <TAB> | custom-delimiter

custom-delimiter ::= any character except: end-of-file, end-of-line, quote-character
```

> [!NOTE]
> The delimiter in a custom-delimited text file is specified in the Schema.ini file.

```
quote-character ::= "

number ::= exact-number | approximate-number

exact-number ::= [+ | -] {unsigned-integer[.unsigned-integer] | unsigned-integer. | .unsigned-integer}

approximate-number ::= exact-number{e | E}[+ | -]unsigned-integer

unsigned-integer ::= {digit}...

date ::= mm date-separator dd date-separator yy
       | mmm date-separator dd date-separator yy
       | dd date-separator mmm date-separator yy
       | yyyy date-separator mm date-separator dd
       | yyyy date-separator mmm date-separator dd

mm ::= digit [digit]

dd ::= digit [digit]

yy ::= digit digit

yyyy ::= digit digit digit digit

mmm ::= Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec

date-separator ::= - | / | .

delimited-null ::=
```

> [!NOTE]
> For delimited files, a NULL is represented by no data between two delimiters.

```
fixed-width-null ::= <SPACE>...
```

> [!NOTE]
> For fixed-width files, a NULL is represented by spaces.
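The number production above can be transcribed mechanically into a regular expression, which is a handy cross-check when validating tokens against the grammar. This is our own illustrative helper, not part of the driver:

```python
import re

# Transcription of the grammar's number productions:
#   exact-number       ::= [+ | -] { uint[.uint] | uint. | .uint }
#   approximate-number ::= exact-number {e | E} [+ | -] uint
_EXACT = r"[+-]?(?:\d+(?:\.\d*)?|\.\d+)"
_NUMBER = re.compile(_EXACT + r"(?:[eE][+-]?\d+)?\Z")

def is_text_driver_number(token):
    """Return True if `token` matches the grammar's `number` production."""
    return _NUMBER.match(token) is not None
```

Note that the grammar accepts forms such as `12.` and `.5E+3` that some other parsers reject, and rejects anything with a second decimal point.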
42.627119
401
0.667793
eng_Latn
0.934985
a14c153c2e37d058253314af9683931a1c83cba3
1,801
md
Markdown
_posts/2018-08-07-zhaomuls.md
jiashi-syt14/jiashi-syt14.github.io
5c94342f48e9dddc0adcfffd42dd537e4af56eed
[ "MIT" ]
1
2018-09-24T14:27:21.000Z
2018-09-24T14:27:21.000Z
_posts/2018-08-07-zhaomuls.md
jiashi-syt14/jiashi-syt14.github.io
5c94342f48e9dddc0adcfffd42dd537e4af56eed
[ "MIT" ]
null
null
null
_posts/2018-08-07-zhaomuls.md
jiashi-syt14/jiashi-syt14.github.io
5c94342f48e9dddc0adcfffd42dd537e4af56eed
[ "MIT" ]
4
2018-10-08T02:46:54.000Z
2021-04-15T09:22:08.000Z
---
published: true
layout: post
tags: 评论文章
comments: true
title: Statement on the Jasic Workers Solidarity Group's Renewed Open Call for Pro Bono Lawyers
---

# Statement on the Jasic Workers Solidarity Group's Renewed Open Call for Pro Bono Lawyers

【Lawyers barred from meeting the detained workers? The Shenzhen Municipal Political and Legal Affairs Commission is knowingly breaking the law!】

---

On August 6, lawyers for the Jasic solidarity group reported that when they asked the Longgang District detention center, in accordance with the law and proper procedure, for meetings with the illegally detained workers, **the detention center repeatedly refused on the grounds that the workers were being interrogated; the lawyers were then privately threatened by the Shenzhen Municipal Political and Legal Affairs Commission and warned not to get involved in the case.**

> Article 37 of the Criminal Procedure Law of the People's Republic of China: A defense lawyer may meet and correspond with a criminal suspect or defendant held in custody. **Where a defense lawyer, presenting a lawyer's practice certificate, a law firm certificate, and a power of attorney or an official legal aid letter, requests a meeting with a criminal suspect or defendant in custody, the detention center shall arrange the meeting promptly, and no later than forty-eight hours after the request.**

The Jasic workers were first illegally detained and deprived of their personal freedom, and then denied meetings with lawyers, stripping them of their right to a defense. The conduct of the Longgang District detention center and the Shenzhen Political and Legal Affairs Commission is a grave violation of the law!!!

**What is the law in your eyes? And what is "governing the country by law"? Is all of this just a slogan used to fool people?**

> Article 33 of the Criminal Procedure Law of the People's Republic of China:
> **A criminal suspect has the right to retain a defender from the day of the first interrogation by the investigating organ, or the day compulsory measures are taken against them**; during the investigation period, only a lawyer may be retained as defender. A defendant has the right to retain a defender at any time.

The workers arrested in the 7.27 Jasic incident have now been in custody for 10 days. During this time only Yu Juncong and Mi Jiuping have been allowed to meet with lawyers; the meetings for everyone else have been repeatedly delayed without cause.

**We solemnly demand that the Longgang District detention center act in accordance with laws and regulations and stop obstructing lawyers' meetings with the arrested workers!**

**We ask the Shenzhen Municipal Political and Legal Affairs Commission to know its proper place, to stop unlawfully interfering in this matter, to stop intimidating the defense lawyers, and to stop trampling on the dignity of the Constitution!**

**【An open call for pro bono lawyers, to defend fairness and justice!】**

Some of the first group of lawyers contacted by the solidarity group have already withdrawn under pressure from the relevant authorities. Some other lawyers who contacted the group quoted rather high fees: ten thousand yuan for each of the three procedural stages, that is, thirty thousand yuan for one worker to go through all the procedures, with the lawyers' travel expenses billed separately.

The solidarity group has raised 110,000 yuan in total. With 14 workers still not released, legal fees would run as high as 420,000 yuan, which the funds raised simply cannot cover. After discussion with the lawyers, no retainer agreements have been signed for the time being, and the fees of the lawyers who took part in the meetings have been settled in full.

**The solidarity group is now once again openly recruiting 【pro bono lawyers】. We earnestly ask lawyers of conscience who can withstand the pressure to take on the cases 【free of charge】; the solidarity group will cover travel expenses!**

**If interested, contact Xiao Hu of the solidarity group: 18819370475**

**【Donation money spent on the solidarity group's food and lodging? The rumormonger should apologize publicly!】**

Recently, a twi%tt#er user going by "劳工研究" ("Labor Research") has repeatedly spread false claims about the solidarity group online, alleging that most of the first round of donations was used for the group's food, lodging, and travel and that there is no money left for legal fees!! It also claimed that the group refuses to pay lawyers' fees and travel costs, even though at that time the group was still in talks with the lawyers, no retainer agreement had been signed, and many questions were still under discussion. "Labor Research" postures as some kind of spokesperson, subjective, biased, and heedless of the facts. It is truly chilling!!

![image description][1]
![image description][2]
![image description][3]

We are extremely angered by such slander; publishing this kind of message without any factual basis is no different from a crime! It has done severe damage to the work of the solidarity group on the front line. We demand that the rumormonger clarify the facts and apologize publicly to society!

Zhang Weichu of Qingyuan, Guangzhou, said that "lawyers have no obligation to take cases at low or rock-bottom prices." That does not mean no lawyer is willing to take the cases free of charge, nor does it mean that the others taking part in the solidarity effort are in it for the money!

**We solemnly declare once again: all food, lodging, and travel costs of the solidarity group's members are borne by the members themselves, and not a single cent of the donations has been used for them!!**

**The use of the donated funds will be disclosed promptly and is open to inquiry and supervision by all of society, but we will not accept slander and rumors that have no factual basis!**

**We thank all sectors of society for their trust in the solidarity group. We will carry the solidarity effort through to the end together with you, until every arrested Jasic worker is released!**

[1]: http://wx4.sinaimg.cn/mw690/0060lm7Tly1fu1dsurcpgj30bn0ktn3t.jpg
[2]: http://wx1.sinaimg.cn/mw690/0060lm7Tly1fu1dv91nutj30c10lfwkl.jpg
[3]: http://wx1.sinaimg.cn/mw690/0060lm7Tly1fu1dw9nztaj30bv0l6ag2.jpg
28.587302
172
0.825097
yue_Hant
0.521057
a14c75f30388b2ce5c8791ab91ca2071205e4b88
5,961
md
Markdown
README.md
inKindCards/react-native-money
d96513ec9e1d320f77d151c38afe5c7b7e684dd4
[ "MIT" ]
41
2022-01-14T17:18:06.000Z
2022-03-04T22:52:34.000Z
README.md
inKindCards/react-native-money
d96513ec9e1d320f77d151c38afe5c7b7e684dd4
[ "MIT" ]
11
2022-01-14T18:14:00.000Z
2022-03-28T14:42:17.000Z
README.md
inKindCards/react-native-money
d96513ec9e1d320f77d151c38afe5c7b7e684dd4
[ "MIT" ]
2
2022-01-15T21:49:40.000Z
2022-01-20T16:23:58.000Z
<div id="top"></div>

<!-- PROJECT SHIELDS -->
<!--
*** I'm using markdown "reference style" links for readability.
*** Reference links are enclosed in brackets [ ] instead of parentheses ( ).
*** See the bottom of this document for the declaration of the reference variables
*** for contributors-url, forks-url, etc. This is an optional, concise syntax you may use.
*** https://www.markdownguide.org/basic-syntax/#reference-style-links
-->
[![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![MIT License][license-shield]][license-url]

<!-- PROJECT LOGO -->
<br />
<div align="center">
  <a href="https://github.com/inKindCards/react-native-money">
    <img src="logo.png" alt="Logo" width="160px">
  </a>

  <h3 align="center">React Native Money</h3>

  <p align="center">
    A fully native TextInput component that allows multilingual currency input
    <br />with a right-to-left text alignment.
    <br /><br />
    <a href="https://www.npmjs.com/package/@inkindcards/react-native-money">View Library</a>
    ·
    <a href="https://github.com/inKindCards/react-native-money/issues">Report Bug</a>
    ·
    <a href="https://github.com/inKindCards/react-native-money/issues">Request Feature</a>
  </p>
</div>

## What is this?

React Native Money is a simple component library that exposes a fully native TextInput component backed by the currency formatting libraries that ship with Android and iOS, so as well as being performant it is also light on your binary size.

The component has an identical prop signature and API to the default [TextInput](https://reactnative.dev/docs/textinput) provided with React Native. The only difference is that the `value` prop accepts a `Number`, and `onChangeText` returns both a number value and a formatted string.
<br />

## Installation

```
npm install @inkindcards/react-native-money
```

### iOS Installation:

Make sure to add the following to your `Podfile` before running `npx pod-install`:

```
pod 'React-RCTText', :path => '../node_modules/react-native/Libraries/Text', :modular_headers => true
```

## Manual Installation

#### iOS

1. In Xcode, in the project navigator, right click `Libraries` ➜ `Add Files to [your project's name]`
2. Go to `node_modules` ➜ `react-native-money` and add `RNMoneyInput.xcodeproj`
3. In Xcode, in the project navigator, select your project. Add `libRNMoneyInput.a` to your project's `Build Phases` ➜ `Link Binary With Libraries`
4. Run your project (`Cmd+R`)

#### Android

1. Open up `android/app/src/main/java/[...]/MainActivity.java`
   - Add `import com.inkind.RNMoneyInput.RNMoneyInputPackage;` to the imports at the top of the file
   - Add `new RNMoneyInputPackage()` to the list returned by the `getPackages()` method
2. Append the following lines to `android/settings.gradle`:
   ```
   include ':inkindcards_react-native-money'
   project(':inkindcards_react-native-money').projectDir = new File(rootProject.projectDir, '../node_modules/@inkindcards/react-native-money/android')
   ```
3. Insert the following lines inside the dependencies block in `android/app/build.gradle`:
   ```
   compile project(':react-native-money')
   ```

## Usage

You use the MoneyInput component like a normal [TextInput](https://reactnative.dev/docs/textinput) from the React Native library, with the exception that you pass a number to the `value` prop. You can also pass a locale identifier, composed of a language and a country, which changes how the currency is formatted. All possible locales can be read about here: [Currency Locale Reference](https://docs.oracle.com/cd/E23824_01/html/E26033/glset.html).
```
import {useState} from 'react'
import MoneyInput from '@inkindcards/react-native-money'

const App = () => {
    const [bill, setBill] = useState<number>()

    return (
        <MoneyInput
            value={bill}
            locale='en_US'
            placeholder='$0.00'
            onChangeText={(value: number, label: string) => {
                setBill(value)
            }}
        />
    )
}
```

## Testing

Make sure to [mock](https://jestjs.io/docs/en/manual-mocks#mocking-node-modules) the following in `jest.setup.js`:

```javascript
jest.mock('react-native-money', () => ({
  default: jest.fn(),
}))
```

## Inspiration

We'd like to express thanks to the developers of [react-native-text-input-mask](https://github.com/react-native-text-input-mask/react-native-text-input-mask), as this project started as a fork of that repo; their approach of monkeypatching the TextInput delegate was exactly what we needed.

## Versioning

This project uses semantic versioning: MAJOR.MINOR.PATCH. This means that releases within the same MAJOR version are always backwards compatible. For more info see [semver.org](http://semver.org/).
<p align="right">(<a href="#top">back to top</a>)</p>

<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/inKindCards/react-native-money.svg?style=for-the-badge
[contributors-url]: https://github.com/inKindCards/react-native-money/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/inKindCards/react-native-money.svg?style=for-the-badge
[forks-url]: https://github.com/inKindCards/react-native-money/network/members
[stars-shield]: https://img.shields.io/github/stars/inKindCards/react-native-money.svg?style=for-the-badge
[stars-url]: https://github.com/inKindCards/react-native-money/stargazers
[issues-shield]: https://img.shields.io/github/issues/inKindCards/react-native-money.svg?style=for-the-badge
[issues-url]: https://github.com/inKindCards/react-native-money/issues
[license-shield]: https://img.shields.io/github/license/inKindCards/react-native-money.svg?style=for-the-badge
[license-url]: https://github.com/inKindCards/react-native-money/blob/master/LICENSE
42.578571
290
0.731756
eng_Latn
0.782306
a14d257ca3bc1d69dbf7c38bf9a016c5fca3187f
996
md
Markdown
includes/digital-twins-familiarity.md
YutongTie-MSFT/azure-docs.de-de
f7922d4a0ebfb2cbb31d7004d4f726202f39716b
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/digital-twins-familiarity.md
YutongTie-MSFT/azure-docs.de-de
f7922d4a0ebfb2cbb31d7004d4f726202f39716b
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/digital-twins-familiarity.md
YutongTie-MSFT/azure-docs.de-de
f7922d4a0ebfb2cbb31d7004d4f726202f39716b
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Include file
description: Include file
services: digital-twins
author: kingdomofends
ms.service: digital-twins
ms.topic: include
ms.date: 01/09/2019
ms.author: adgera
ms.custom: include file
ms.openlocfilehash: 0d5f483f074f90c51f500e8f8142bb54f9f6bb1e
ms.sourcegitcommit: d4f728095cf52b109b3117be9059809c12b69e32
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 01/10/2019
ms.locfileid: "54211848"
---
This article assumes some familiarity with authenticating against your Azure Digital Twins Management APIs.

* To read more about authenticating with your Management APIs, see [Authenticate with Azure Digital Twins APIs](../articles/digital-twins/security-authenticating-apis.md).
* To learn how to authenticate with your Management APIs using the Postman REST client, see [How to: Configure Postman](../articles/digital-twins/how-to-configure-postman.md).
45.272727
223
0.833333
deu_Latn
0.795762
a14f753b56f67becf3bc606d69e5443977e89526
729
md
Markdown
cache/README.md
jonhadfield/gosn-v2
d81b99c52c2561ec7e316741d8cc33fdb5d335b4
[ "MIT" ]
10
2020-07-14T16:54:59.000Z
2022-01-29T06:01:35.000Z
cache/README.md
jonhadfield/gosn-v2
d81b99c52c2561ec7e316741d8cc33fdb5d335b4
[ "MIT" ]
3
2020-02-20T07:35:34.000Z
2021-04-10T14:26:14.000Z
cache/README.md
jonhadfield/gosn-v2
d81b99c52c2561ec7e316741d8cc33fdb5d335b4
[ "MIT" ]
null
null
null
## about

#### Note
This is an early release with significant changes. Please take a backup before using it with any real data, and report any issues you find.

This package is used to persist encrypted SN items to a local [key-value store](https://github.com/etcd-io/bbolt) and then only sync deltas on subsequent calls.

## usage

_under construction_

### sign in

```go
sio, _ := gosn.SignIn(gosn.SignInInput{
    Email:    "user@example.com",
    Password: "topsecret",
})
```

### get cache session

```go
cs, _ := cache.ImportSession(&sio.Session, "")
```

### sync items to database

```go
cso, _ := cache.Sync(cache.SyncInput{
    Session: cs,
})
```

### retrieve items

```go
var cItems []cache.Item
_ = cso.DB.All(&cItems)
```
20.828571
145
0.679012
eng_Latn
0.905673
a14f75843015f9242c8bd6b3aac7f644f4e9c36a
24,555
md
Markdown
CHANGELOG.md
prixe/lindo-poc
243aca5123989bd1cd3765f1b213b89a22550939
[ "MIT" ]
2
2020-01-15T15:53:20.000Z
2020-04-17T00:43:15.000Z
CHANGELOG.md
prixe/lindo-poc
243aca5123989bd1cd3765f1b213b89a22550939
[ "MIT" ]
null
null
null
CHANGELOG.md
prixe/lindo-poc
243aca5123989bd1cd3765f1b213b89a22550939
[ "MIT" ]
null
null
null
## <small>1.0.6 (2019-02-12)</small> * Update .travis.yml ([c0db696](https://github.com/prixe/lindo-poc/commit/c0db696)) ## <small>1.0.5 (2019-02-12)</small> * 1.0.5 ([114b834](https://github.com/prixe/lindo-poc/commit/114b834)) * Update package.json ([d316fd3](https://github.com/prixe/lindo-poc/commit/d316fd3)) ## <small>1.0.4 (2019-02-12)</small> * 1.0.4 ([f3925f0](https://github.com/prixe/lindo-poc/commit/f3925f0)) * Update package.json ([fe6e968](https://github.com/prixe/lindo-poc/commit/fe6e968)) * Update package.json ([eb44541](https://github.com/prixe/lindo-poc/commit/eb44541)) ## <small>1.0.3 (2019-02-12)</small> * 1.0.3 ([818e0b9](https://github.com/prixe/lindo-poc/commit/818e0b9)) * Update .travis.yml ([88a1a4b](https://github.com/prixe/lindo-poc/commit/88a1a4b)) * Update .travis.yml ([09fd025](https://github.com/prixe/lindo-poc/commit/09fd025)) * Update .travis.yml ([25754f1](https://github.com/prixe/lindo-poc/commit/25754f1)) * Update .travis.yml ([a2d18f7](https://github.com/prixe/lindo-poc/commit/a2d18f7)) * Update .travis.yml ([82cdaa7](https://github.com/prixe/lindo-poc/commit/82cdaa7)) * Update package.json ([e13ea92](https://github.com/prixe/lindo-poc/commit/e13ea92)) * Update travis ([a065b34](https://github.com/prixe/lindo-poc/commit/a065b34)) ## <small>1.0.2 (2019-02-11)</small> * 1.0.2 ([3f5a8b6](https://github.com/prixe/lindo-poc/commit/3f5a8b6)) * Add nsis build ([c65c8fe](https://github.com/prixe/lindo-poc/commit/c65c8fe)) ## <small>1.0.1 (2019-02-11)</small> * 1.0.1 ([8da1855](https://github.com/prixe/lindo-poc/commit/8da1855)) * Add angular-redux ([a8f6852](https://github.com/prixe/lindo-poc/commit/a8f6852)) * add auto updater ([3f1213b](https://github.com/prixe/lindo-poc/commit/3f1213b)) * Better typed redux store ([0d38327](https://github.com/prixe/lindo-poc/commit/0d38327)) * Clean code ([d4e7c9a](https://github.com/prixe/lindo-poc/commit/d4e7c9a)) * clean project ([c7a1a1a](https://github.com/prixe/lindo-poc/commit/c7a1a1a)) * 
Configure redux for angular ([5b2f65d](https://github.com/prixe/lindo-poc/commit/5b2f65d)) * Create dynamic plugin test ([6b22bac](https://github.com/prixe/lindo-poc/commit/6b22bac)) * create module for store ([6ec5c5e](https://github.com/prixe/lindo-poc/commit/6ec5c5e)) * Create plugin test ([bba3292](https://github.com/prixe/lindo-poc/commit/bba3292)) * Fix : [DEP0005] DeprecationWarning: Buffer() ([ea2ec5d](https://github.com/prixe/lindo-poc/commit/ea2ec5d)) * fix conf ([52cf7b6](https://github.com/prixe/lindo-poc/commit/52cf7b6)) * fix conf ([b43979b](https://github.com/prixe/lindo-poc/commit/b43979b)) * Fix package.json ([809d4c9](https://github.com/prixe/lindo-poc/commit/809d4c9)) * Improve dynamic module typing ([109f4e5](https://github.com/prixe/lindo-poc/commit/109f4e5)) * Improve redux type ([924e2dd](https://github.com/prixe/lindo-poc/commit/924e2dd)) * init deps ([1bd27aa](https://github.com/prixe/lindo-poc/commit/1bd27aa)) * Refactor tsconfig ([6930d46](https://github.com/prixe/lindo-poc/commit/6930d46)) * repair prod build with electon-builder ([2f053a5](https://github.com/prixe/lindo-poc/commit/2f053a5)) * restructure project and add redux ([9d1b975](https://github.com/prixe/lindo-poc/commit/9d1b975)) * Update package.json ([993a009](https://github.com/prixe/lindo-poc/commit/993a009)) * version bump ([bb1d6bb](https://github.com/prixe/lindo-poc/commit/bb1d6bb)) ## 5.1.0 (2018-11-30) * [Bumped Version] 5.1.0 ([b790e12](https://github.com/prixe/lindo-poc/commit/b790e12)) * fix/ typo Angular 7 ([fde371f](https://github.com/prixe/lindo-poc/commit/fde371f)) * fix/ typo README ([723233c](https://github.com/prixe/lindo-poc/commit/723233c)) * fix/ typo script npm electron:windows ([45bab44](https://github.com/prixe/lindo-poc/commit/45bab44)) * ref/ remve npx - fix vulnerabilities ([41aeb57](https://github.com/prixe/lindo-poc/commit/41aeb57)) * update README ([f146d5d](https://github.com/prixe/lindo-poc/commit/f146d5d)) ## 5.0.0 (2018-11-11) * Fix typos in 
README file ([0440ee9](https://github.com/prixe/lindo-poc/commit/0440ee9)) * ref/ Generate changelog ([a89b3ce](https://github.com/prixe/lindo-poc/commit/a89b3ce)) * ref/ Upgrade to Angular 7 ([315a79b](https://github.com/prixe/lindo-poc/commit/315a79b)) * Update electron-builder.json files rule ([82c7bcf](https://github.com/prixe/lindo-poc/commit/82c7bcf)) * Update Version Electron 2 to 3 #hacktoberfest ([f083328](https://github.com/prixe/lindo-poc/commit/f083328)) ## <small>4.2.2 (2018-08-22)</small> * fix/ build serve & electron with single tsc command ([9106c8f](https://github.com/prixe/lindo-poc/commit/9106c8f)) * fix/ typo README ([a9448aa](https://github.com/prixe/lindo-poc/commit/a9448aa)) ## <small>4.2.1 (2018-08-22)</small> * fix/ jslib in main process error ([ef33f5e](https://github.com/prixe/lindo-poc/commit/ef33f5e)) ## 4.2.0 (2018-08-19) * [Bumped Version] V4.2.0 ([0da3856](https://github.com/prixe/lindo-poc/commit/0da3856)) * fix/ electron builder output directories #200 ([f4535e5](https://github.com/prixe/lindo-poc/commit/f4535e5)), closes [#200](https://github.com/prixe/lindo-poc/issues/200) * Make sure tsconfig be used. 
([961c8b1](https://github.com/prixe/lindo-poc/commit/961c8b1)) * ref/ remove some directories of tsconfig.app.json ([1adad4a](https://github.com/prixe/lindo-poc/commit/1adad4a)) * Upgrade Angular (6.1.2) deps ([d8818c1](https://github.com/prixe/lindo-poc/commit/d8818c1)) ## 4.1.0 (2018-06-27) * Allow Angular Using Electron Modules ([ec705ee](https://github.com/prixe/lindo-poc/commit/ec705ee)) * fix/ version angular (revert 6.0.6 -> 6.0.5) ([63a41b8](https://github.com/prixe/lindo-poc/commit/63a41b8)) * fix/ version ts-node ([0d8341a](https://github.com/prixe/lindo-poc/commit/0d8341a)) * ref/ postinstall web & electron ([50657d0](https://github.com/prixe/lindo-poc/commit/50657d0)) * update README ([1d48e32](https://github.com/prixe/lindo-poc/commit/1d48e32)) * feat(zone): add zone-patch-electron to patch Electron native APIs in polyfills ([01842e2](https://github.com/prixe/lindo-poc/commit/01842e2)) ## 4.0.0 (2018-05-25) * misc/ remove unused packages ([a7e33b6](https://github.com/prixe/lindo-poc/commit/a7e33b6)) * misc/ update Changelog ([b758122](https://github.com/prixe/lindo-poc/commit/b758122)) * ref/ upgrade angular to 6.0.3 ([e7fac6e](https://github.com/prixe/lindo-poc/commit/e7fac6e)) ## <small>3.4.1 (2018-05-25)</small> * misc/ update changelog ([70b359f](https://github.com/prixe/lindo-poc/commit/70b359f)) * version 3.4.1 ([308ea9c](https://github.com/prixe/lindo-poc/commit/308ea9c)) ## 3.4.0 (2018-05-25) * misc/ update changelog ([7d5eeb3](https://github.com/prixe/lindo-poc/commit/7d5eeb3)) * Modify electron builder configuration to remove source code and tests ([0cf6899](https://github.com/prixe/lindo-poc/commit/0cf6899)) * ref/ remove contributors ([6dc97a1](https://github.com/prixe/lindo-poc/commit/6dc97a1)) * The file is unused ([05c9e39](https://github.com/prixe/lindo-poc/commit/05c9e39)) * Translation issue ([35354b1](https://github.com/prixe/lindo-poc/commit/35354b1)) * version 3.4.0 ([06d6b0f](https://github.com/prixe/lindo-poc/commit/06d6b0f)) * 
refactor: update electron, electron-builder to latest (2.0.2, 20.14.7) ([f19e6ee](https://github.com/prixe/lindo-poc/commit/f19e6ee)) * refactor: upgrade to NodeJS 8, Angular 6, CLI 6, Electron 2.0, RxJS 6.1 ([e37efdb](https://github.com/prixe/lindo-poc/commit/e37efdb)) * refactor(hooks): replace hooks to ng-cli fileReplacements logic ([c940037](https://github.com/prixe/lindo-poc/commit/c940037)) * fix(test): create polyfills-test.ts for karma test & setup Travis CI ([7fbc68c](https://github.com/prixe/lindo-poc/commit/7fbc68c)) * fix(travis): set progress to false (speed up npm) ([be48531](https://github.com/prixe/lindo-poc/commit/be48531)) ## 3.3.0 (2018-04-15) * add Changelog file ([71083f1](https://github.com/prixe/lindo-poc/commit/71083f1)) * fix/ typo README.md (production variables) ([a8c2b63](https://github.com/prixe/lindo-poc/commit/a8c2b63)) * version 3.3.0 ([a88bda6](https://github.com/prixe/lindo-poc/commit/a88bda6)) * version 3.3.0 changelog ([ddfbbf9](https://github.com/prixe/lindo-poc/commit/ddfbbf9)) ## 3.2.0 (2018-04-15) * fix e2e tests based on PR #161 and terminate the npm process after test execution ([fccf348](https://github.com/prixe/lindo-poc/commit/fccf348)), closes [#161](https://github.com/prixe/lindo-poc/issues/161) * fix/ app e2e spec ([8046b2a](https://github.com/prixe/lindo-poc/commit/8046b2a)) * Including electron to eliminate Electron not found err sg ([d78203f](https://github.com/prixe/lindo-poc/commit/d78203f)) * provide webFrame access ([6bd044e](https://github.com/prixe/lindo-poc/commit/6bd044e)) * ref/ add node/electron module import as exemple : fs and remote ([e3ad12d](https://github.com/prixe/lindo-poc/commit/e3ad12d)) * remove copyfiles ([9af5138](https://github.com/prixe/lindo-poc/commit/9af5138)) * update dependencies ([89963ab](https://github.com/prixe/lindo-poc/commit/89963ab)) * version 3.2.0 ([8dc69fa](https://github.com/prixe/lindo-poc/commit/8dc69fa)) ## 3.1.0 (2018-03-15) * Added option -o to script npm run ng:serve 
so that it really open the browser ([72aff8d](https://github.com/prixe/lindo-poc/commit/72aff8d)) * Fix to change environment ([448d68b](https://github.com/prixe/lindo-poc/commit/448d68b)) * version 3.1.0 ([f7c71e7](https://github.com/prixe/lindo-poc/commit/f7c71e7)) ## <small>3.0.1 (2018-03-07)</small> * fix/ icon app ([22699ef](https://github.com/prixe/lindo-poc/commit/22699ef)) * version 3.0.1 ([5258ff1](https://github.com/prixe/lindo-poc/commit/5258ff1)) ## 3.0.0 (2018-02-25) * fix/ TranslateModule test ([7863aa9](https://github.com/prixe/lindo-poc/commit/7863aa9)) * Ng not ejected anymore ([67ab31c](https://github.com/prixe/lindo-poc/commit/67ab31c)) * pin all dependency versions ([0558d6a](https://github.com/prixe/lindo-poc/commit/0558d6a)) * update dependencies and fix unit tests ([4d3ca6e](https://github.com/prixe/lindo-poc/commit/4d3ca6e)) ## <small>2.7.1 (2018-02-15)</small> * ref/ dernière version cli ([3df8158](https://github.com/prixe/lindo-poc/commit/3df8158)) * version 2.7.1 ([1ae6f7a](https://github.com/prixe/lindo-poc/commit/1ae6f7a)) ## 2.7.0 (2018-02-15) * Correction of a word. 
([d6655c7](https://github.com/prixe/lindo-poc/commit/d6655c7)) * feat/ add webview directive ([e1b5600](https://github.com/prixe/lindo-poc/commit/e1b5600)) * migrate Angular to 5.2.0 ([b8cf343](https://github.com/prixe/lindo-poc/commit/b8cf343)) * ref/ Remove sponsor ([2a28239](https://github.com/prixe/lindo-poc/commit/2a28239)) * ref/ update angular & dep ([e3b1fab](https://github.com/prixe/lindo-poc/commit/e3b1fab)) * ref/ upgrade electron (security issue) ([f6a0c4e](https://github.com/prixe/lindo-poc/commit/f6a0c4e)) * version bump + logo resize ([3545d16](https://github.com/prixe/lindo-poc/commit/3545d16)) * fix: fixes maximegris/angular-electron#118 ([6d21e69](https://github.com/prixe/lindo-poc/commit/6d21e69)), closes [maximegris/angular-electron#118](https://github.com/maximegris/angular-electron/issues/118) * fix: fixes maximegris/angular-electron#98 ([136344b](https://github.com/prixe/lindo-poc/commit/136344b)), closes [maximegris/angular-electron#98](https://github.com/maximegris/angular-electron/issues/98) ## <small>2.4.1 (2017-12-14)</small> * fix/ Manage icons for linux binary generation ([ccd0601](https://github.com/prixe/lindo-poc/commit/ccd0601)) * version 2.4.1 ([5fcfca0](https://github.com/prixe/lindo-poc/commit/5fcfca0)) ## 2.4.0 (2017-12-08) * version 2.4.0 ([0437b33](https://github.com/prixe/lindo-poc/commit/0437b33)) ## 2.3.0 (2017-12-04) * add ngx translate ([facda37](https://github.com/prixe/lindo-poc/commit/facda37)) * Use HttpClientModule ([5704e2e](https://github.com/prixe/lindo-poc/commit/5704e2e)) ## 2.2.0 (2017-11-28) * Brought back scripts defined in webpack.config.js ([441da3d](https://github.com/prixe/lindo-poc/commit/441da3d)) * migrate to Angular 5.0.3 ([f4bc5b2](https://github.com/prixe/lindo-poc/commit/f4bc5b2)) * Update LICENSE badge ([fa783aa](https://github.com/prixe/lindo-poc/commit/fa783aa)) * Update to electron-builder ([0e94b52](https://github.com/prixe/lindo-poc/commit/0e94b52)) ## <small>2.1.1 (2017-11-19)</small> * 
* Move codesponsor ([064be4c](https://github.com/prixe/lindo-poc/commit/064be4c))

## 2.1.0 (2017-11-19)

* Add codesponsor ([87e695d](https://github.com/prixe/lindo-poc/commit/87e695d))
* Add script for winportable ([2be2dae](https://github.com/prixe/lindo-poc/commit/2be2dae))
* Add support for building a Windows self-contained executable ([7cfa790](https://github.com/prixe/lindo-poc/commit/7cfa790))
* fix/ electron-packager need favicon >= 256x256 on Windows ([d2c253f](https://github.com/prixe/lindo-poc/commit/d2c253f))
* fix/ refact webpack config (inspired by ng eject Angular 5) ([d1c30ac](https://github.com/prixe/lindo-poc/commit/d1c30ac))
* fix/ replace aotPlugin in no prod mode ([a0caf1e](https://github.com/prixe/lindo-poc/commit/a0caf1e))
* fix/ Replace AotPlugin to AngularCompilerPlugin ([bef106e](https://github.com/prixe/lindo-poc/commit/bef106e))
* fix/ Update README Angular 5 ([93c6949](https://github.com/prixe/lindo-poc/commit/93c6949))
* fix/ webpack template path ([518b66b](https://github.com/prixe/lindo-poc/commit/518b66b))
* Mgrate to Angular 5.0.2 ([bd7bed6](https://github.com/prixe/lindo-poc/commit/bd7bed6))
* Update package.json ([b16cf73](https://github.com/prixe/lindo-poc/commit/b16cf73))
* Version 2.1.0 ([fccef2f](https://github.com/prixe/lindo-poc/commit/fccef2f))

## 2.0.0 (2017-11-13)

* Add buffer to externals ([7e797f0](https://github.com/prixe/lindo-poc/commit/7e797f0))
* Edit a typo on README ([956a2bc](https://github.com/prixe/lindo-poc/commit/956a2bc))
* Fix #55 removed bootstraps.css which for example purpose before. ([41445eb](https://github.com/prixe/lindo-poc/commit/41445eb)), closes [#55](https://github.com/prixe/lindo-poc/issues/55)
* License MIT ([73494b7](https://github.com/prixe/lindo-poc/commit/73494b7))
* Migrate to Angular 5 ([3a3ffe1](https://github.com/prixe/lindo-poc/commit/3a3ffe1))

## 1.9.0 (2017-09-22)

* feat/ launch electron & webpack in // (npm run start) ([8c37cc4](https://github.com/prixe/lindo-poc/commit/8c37cc4))
* ref/ Exclude node_modules (tslint) ([412a0a5](https://github.com/prixe/lindo-poc/commit/412a0a5))

## <small>1.8.1 (2017-09-22)</small>

* Fix #55 , and also added functionality for scripts global building ([012a894](https://github.com/prixe/lindo-poc/commit/012a894)), closes [#55](https://github.com/prixe/lindo-poc/issues/55)
* ref/ add package-lock in gitignore ([4edd98d](https://github.com/prixe/lindo-poc/commit/4edd98d))
* remove package-lock ([8e98627](https://github.com/prixe/lindo-poc/commit/8e98627))
* upgrade angular version 4.4.3 ([10d0f87](https://github.com/prixe/lindo-poc/commit/10d0f87))
* version 1.8.1 ([70879d1](https://github.com/prixe/lindo-poc/commit/70879d1))

## 1.8.0 (2017-09-09)

* upgrade lib version ([2ac2aa0](https://github.com/prixe/lindo-poc/commit/2ac2aa0))

## 1.7.0 (2017-08-18)

* ref/ Update Angular (4.3.5) / Electron (1.7.2) / Electron Packager (8.7.2) / Typescript (2.5.0) ([f97cd81](https://github.com/prixe/lindo-poc/commit/f97cd81))

## <small>1.6.1 (2017-07-27)</small>

* fix/ angular-cli error in prod compilation with aot ([c26a5ae](https://github.com/prixe/lindo-poc/commit/c26a5ae))
* version 1.6.1 ([899babd](https://github.com/prixe/lindo-poc/commit/899babd))

## 1.6.0 (2017-07-16)

* ajout package-lock npm v5 ([09c0840](https://github.com/prixe/lindo-poc/commit/09c0840))
* Change background img ([7e58717](https://github.com/prixe/lindo-poc/commit/7e58717))
* Fix npm run build:prod ([c23bade](https://github.com/prixe/lindo-poc/commit/c23bade))
* fix/ Bindings not updating automatically #44 ([2a90191](https://github.com/prixe/lindo-poc/commit/2a90191)), closes [#44](https://github.com/prixe/lindo-poc/issues/44)
* fix/ e2e test with jasmine2 ([9c51f32](https://github.com/prixe/lindo-poc/commit/9c51f32))
* fix/ typescript issues ([bb0a6ab](https://github.com/prixe/lindo-poc/commit/bb0a6ab))
* increment version deps ([bde452c](https://github.com/prixe/lindo-poc/commit/bde452c))
* Revert last pull request - break production compilation ([ccc9064](https://github.com/prixe/lindo-poc/commit/ccc9064))
* upgrade angular version to 4.3.0 ([ab16959](https://github.com/prixe/lindo-poc/commit/ab16959))

## 1.5.0 (2017-06-10)

* fix/ karma Unit test ([ea13d6d](https://github.com/prixe/lindo-poc/commit/ea13d6d))
* fix/ remove yarn because of error with module dep in prod builds ([8a49a45](https://github.com/prixe/lindo-poc/commit/8a49a45))
* update yarn lock ([18c0e62](https://github.com/prixe/lindo-poc/commit/18c0e62))

## <small>1.4.4 (2017-06-08)</small>

* fix/ Fix npm run lint ([db7972a](https://github.com/prixe/lindo-poc/commit/db7972a))
* ref/ electron ./dist more generic ([7e71add](https://github.com/prixe/lindo-poc/commit/7e71add))
* Replace const icon to let icon ([dadf65f](https://github.com/prixe/lindo-poc/commit/dadf65f))

## <small>1.4.3 (2017-06-06)</small>

* fix/ favicon path during packaging ([aa2b012](https://github.com/prixe/lindo-poc/commit/aa2b012))
* remove build node 8 till node-sass failed ([34f201d](https://github.com/prixe/lindo-poc/commit/34f201d))
* v1.4.3 ([4961fb0](https://github.com/prixe/lindo-poc/commit/4961fb0))

## <small>1.4.2 (2017-05-31)</small>

* Change dep versions ([62d08d3](https://github.com/prixe/lindo-poc/commit/62d08d3))
* install npm dep when building ([56948d0](https://github.com/prixe/lindo-poc/commit/56948d0))
* Minor update ([5f282b7](https://github.com/prixe/lindo-poc/commit/5f282b7))
* No hot reload in browser ([7892f0d](https://github.com/prixe/lindo-poc/commit/7892f0d))
* update Electron v1.6.10 ([f2f2080](https://github.com/prixe/lindo-poc/commit/f2f2080))
* upgrade ng/electron dependencies ([78b0f27](https://github.com/prixe/lindo-poc/commit/78b0f27))
* chore(package): bump dependencies ([b62c7b6](https://github.com/prixe/lindo-poc/commit/b62c7b6))

## 1.4.0 (2017-05-23)

* 1.4.0 ([23213c4](https://github.com/prixe/lindo-poc/commit/23213c4))
* Change style home page ([93dcc52](https://github.com/prixe/lindo-poc/commit/93dcc52))
* Fixed compiler warnings ([fca6b15](https://github.com/prixe/lindo-poc/commit/fca6b15))
* ref/ electron main from js to ts ([835d32b](https://github.com/prixe/lindo-poc/commit/835d32b))
* Remove caret & tilde ([dd98155](https://github.com/prixe/lindo-poc/commit/dd98155))

## <small>1.3.5 (2017-05-18)</small>

* Add new tags ([cd07a86](https://github.com/prixe/lindo-poc/commit/cd07a86))
* v 1.3.5 ([d528a71](https://github.com/prixe/lindo-poc/commit/d528a71))

## <small>1.3.4 (2017-05-12)</small>

* feat/ add nodejs native lib in webpack config ([27d9bc6](https://github.com/prixe/lindo-poc/commit/27d9bc6))
* Fix issue #15 ([d77cbf1](https://github.com/prixe/lindo-poc/commit/d77cbf1)), closes [#15](https://github.com/prixe/lindo-poc/issues/15)
* Ref/ Electron packager in external file ([17b04e8](https://github.com/prixe/lindo-poc/commit/17b04e8))
* version 1.3.4 ([374af16](https://github.com/prixe/lindo-poc/commit/374af16))

## <small>1.3.3 (2017-05-10)</small>

* Chapters order ([a772b9c](https://github.com/prixe/lindo-poc/commit/a772b9c))
* Chapters order ([06547e5](https://github.com/prixe/lindo-poc/commit/06547e5))
* Delete spec file of electron.service ([083498e](https://github.com/prixe/lindo-poc/commit/083498e))
* Fix issue #15 ([e7cd6e6](https://github.com/prixe/lindo-poc/commit/e7cd6e6)), closes [#15](https://github.com/prixe/lindo-poc/issues/15)
* Move Browser mode chapter ([8818750](https://github.com/prixe/lindo-poc/commit/8818750))
* Version 1.3.3 ([f4db75b](https://github.com/prixe/lindo-poc/commit/f4db75b))

## <small>1.3.2 (2017-05-05)</small>

* Add comments of how conditional import works ([e6c1b3b](https://github.com/prixe/lindo-poc/commit/e6c1b3b))
* Conditional import of Electron/NodeJS libs - The app can be launch in browser mode ([c434f8a](https://github.com/prixe/lindo-poc/commit/c434f8a))
* Fix indentation ([6a9836a](https://github.com/prixe/lindo-poc/commit/6a9836a))
* Fix prepree2e script ([b2af4fd](https://github.com/prixe/lindo-poc/commit/b2af4fd))
* Set e2e tests ([d223974](https://github.com/prixe/lindo-poc/commit/d223974))
* Suround electron browser by try/catch ([88be472](https://github.com/prixe/lindo-poc/commit/88be472))
* Update @types/node ([9d43304](https://github.com/prixe/lindo-poc/commit/9d43304))
* Update readme with e2e info ([01bbf13](https://github.com/prixe/lindo-poc/commit/01bbf13))
* update version ([0849a0a](https://github.com/prixe/lindo-poc/commit/0849a0a))

## <small>1.3.1 (2017-05-05)</small>

* Add routing module ([7334ce8](https://github.com/prixe/lindo-poc/commit/7334ce8))
* Fixed hardcoded path in glob copy, blocking assets after eject ([815d519](https://github.com/prixe/lindo-poc/commit/815d519))
* update comments in dev/prod env files ([7cf6a51](https://github.com/prixe/lindo-poc/commit/7cf6a51))
* Version 1.3.1 ([f18ac77](https://github.com/prixe/lindo-poc/commit/f18ac77))

## 1.3.0 (2017-05-01)

* Fix webpack prod/dev env ([8549da1](https://github.com/prixe/lindo-poc/commit/8549da1))

## <small>1.2.1 (2017-04-30)</small>

* allowJs ([4efd188](https://github.com/prixe/lindo-poc/commit/4efd188))
* Example url background in scss ([3705a35](https://github.com/prixe/lindo-poc/commit/3705a35))
* Fix electron build (extract-zip workaround) ([a7ee90e](https://github.com/prixe/lindo-poc/commit/a7ee90e))
* Fix webpack config url in css ([cea4be5](https://github.com/prixe/lindo-poc/commit/cea4be5))
* html loader ([c55558a](https://github.com/prixe/lindo-poc/commit/c55558a))
* update version 1.2.1 ([78e8da7](https://github.com/prixe/lindo-poc/commit/78e8da7))

## 1.2.0 (2017-04-19)

* Set one example of css class in app component ([a15775f](https://github.com/prixe/lindo-poc/commit/a15775f))
* Update npm dependencies ([0a93ebe](https://github.com/prixe/lindo-poc/commit/0a93ebe))

## <small>1.1.2 (2017-04-18)</small>

* Fix typo & fix script electron:mac ([bd06859](https://github.com/prixe/lindo-poc/commit/bd06859))
* Set theme jekyll-theme-architect ([644d857](https://github.com/prixe/lindo-poc/commit/644d857))
* update README ([97fa63d](https://github.com/prixe/lindo-poc/commit/97fa63d))
* update README ([23fc0a9](https://github.com/prixe/lindo-poc/commit/23fc0a9))
* update README ([a8dcf6a](https://github.com/prixe/lindo-poc/commit/a8dcf6a))
* v1.1.2 ([e29e467](https://github.com/prixe/lindo-poc/commit/e29e467))

## <small>1.1.1 (2017-04-12)</small>

* Fix webpack.config file path (travisci) ([a172df9](https://github.com/prixe/lindo-poc/commit/a172df9))
* live reload on disk ([7bb2f8b](https://github.com/prixe/lindo-poc/commit/7bb2f8b))
* Remove unused dependency (webpack-dev-server) ([e9150f4](https://github.com/prixe/lindo-poc/commit/e9150f4))

## 1.1.0 (2017-04-12)

* add depdencies CI & Licence ([6ceb0f2](https://github.com/prixe/lindo-poc/commit/6ceb0f2))
* Override webpack configuration ([60d6116](https://github.com/prixe/lindo-poc/commit/60d6116))

## <small>1.0.3 (2017-04-07)</small>

* Add TravisCI ([e5640fd](https://github.com/prixe/lindo-poc/commit/e5640fd))
* v1.0.3 ([9866d53](https://github.com/prixe/lindo-poc/commit/9866d53))

## <small>1.0.2 (2017-04-07)</small>

* Add TravisCI ([ef4b80e](https://github.com/prixe/lindo-poc/commit/ef4b80e))
* Fix typo ([f964c3f](https://github.com/prixe/lindo-poc/commit/f964c3f))
* Fix typo ([e42bb5e](https://github.com/prixe/lindo-poc/commit/e42bb5e))
* Update README ([3bb45b3](https://github.com/prixe/lindo-poc/commit/3bb45b3))
* Update README with angular-cli doc ([5a57578](https://github.com/prixe/lindo-poc/commit/5a57578))
* v1.0.2 ([1bd8e0e](https://github.com/prixe/lindo-poc/commit/1bd8e0e))

## <small>1.0.1 (2017-04-03)</small>

* feat/ Add electron-packager scripts ([57891dc](https://github.com/prixe/lindo-poc/commit/57891dc))
* ref/ update README ([7fddc20](https://github.com/prixe/lindo-poc/commit/7fddc20))
* update README ([9a983c1](https://github.com/prixe/lindo-poc/commit/9a983c1))
* v1.0.0 ([7a21eb9](https://github.com/prixe/lindo-poc/commit/7a21eb9))
* v1.0.1 ([68275a3](https://github.com/prixe/lindo-poc/commit/68275a3))
* chore: initial commit from @angular/cli ([616a69e](https://github.com/prixe/lindo-poc/commit/616a69e))
48.24165
208
0.726777
yue_Hant
0.238747
a14fb9c7d933dcfb66fa52d258f3405adbd538ed
261
md
Markdown
SUBMISSION.md
KPATE185/spring-petclinic
6a9f7478802f640b5d662dce93a7783b8a0dc635
[ "Apache-2.0" ]
null
null
null
SUBMISSION.md
KPATE185/spring-petclinic
6a9f7478802f640b5d662dce93a7783b8a0dc635
[ "Apache-2.0" ]
null
null
null
SUBMISSION.md
KPATE185/spring-petclinic
6a9f7478802f640b5d662dce93a7783b8a0dc635
[ "Apache-2.0" ]
null
null
null
Homework 5

![SS1](figures/SS1.PNG)
![SS2](figures/SS2.PNG)
![SS3](figures/SS3.PNG)
![SS4](figures/SS4.PNG)
![SS5](figures/SS5.PNG)
![SS6](figures/SS6.PNG)
![SS7](figures/SS7.PNG)
![SS8](figures/SS8.PNG)
![SS9](figures/SS9.PNG)
![SS10](figures/SS10.PNG)
20.076923
25
0.655172
kor_Hang
0.286929
a1502d222181b82b796daa2e69bbafe28a8e177f
2,974
md
Markdown
projects/project-1.md
ErBot/ErBot.github.io
15f6c50be9a9d5e6a55b6c64a6bb16907913f86c
[ "MIT" ]
null
null
null
projects/project-1.md
ErBot/ErBot.github.io
15f6c50be9a9d5e6a55b6c64a6bb16907913f86c
[ "MIT" ]
null
null
null
projects/project-1.md
ErBot/ErBot.github.io
15f6c50be9a9d5e6a55b6c64a6bb16907913f86c
[ "MIT" ]
null
null
null
---
layout: project
type: project
image: images/hudphoto3.jpg
title: Motorcycle Heads-Up Display (HUD)
permalink: projects/motorcycleHUD
# All dates must be YYYY-MM-DD format!
date: 2018-07-01
labels:
  - Robotics
  - Arduino
  - C++
  - Optics
summary: Prototype HUD that can be inserted into any helmet to display rear blindspots.
---

<h2>Introduction</h2>

Our Motorcycle HUD project is designed to provide a universal drop-in HUD to increase a motorcyclist's field of view. The wind drag caused by turning the helmet into a less aerodynamic position at higher speeds can cause the rider's bodyweight to shift, which in turn causes the rider to quickly drift into another lane position. At higher speeds, the drag force exerted on riders' helmets can be strong enough to torque the neck and cause muscle strains. The goal of this project is to minimize the amount of time riders must spend turning their head when traveling at speed.

<h2>Components</h2>

The HUD system is composed of three subsystems. The first part is a wirelessly controlled servo-mounted camera that is to be placed on the back of the motorcycle. The servo will automatically pivot in the direction of the intended lane change when the rider turns on either turn signal. A sample circuit setup is shown in Figure 1 below.

<img class="ui image" src="../images/cameraCircuit.png">
<h3>Figure 1. Wirelessly controlled servo circuit</h3>

The second part of the system is the camera-display system. Our original prototype used long wires to connect the camera to a micro LCD display. However, the next version will have a wireless connection between the camera circuit and LCD display. A sample circuit for the camera-display feed is shown in Figure 2 below.

<img class="ui image" src="../images/arduinoCircuit.png">
<h3>Figure 2. Arduino used as video processor</h3>

The third subsystem is the optics component. This consists of two lenses and a glass pane positioned in such a way as to project a virtual image of the display without obstructing the rider's field of view. The magnification power of the lenses was chosen so that the virtual image is large enough, and far enough away, to be seen without the rider having to refocus their eyes. Doing so reduces the strain on the rider's eyes and minimizes the reaction time when changing focal points. The display pane was inserted in the helmet as shown in Figure 3 below.

<img class="ui image" src="../images/hudphoto3.jpg" width="400" height="400">
<h3>Figure 3. Transparent HUD in helmet</h3>

For this project, I was the lead programmer who was responsible for programming the Arduino controllers. I started by programming the basics, such as wireless communication between devices and mapping the controls to the servo. The code used to supply camera feed to the LCD screen was sourced from open projects on GitHub. My partner, Keenan Lee, and I both contributed to the wiring and soldering of the electrical components.
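The turn-signal-to-servo mapping described in the Components section can be sketched in plain C++. The `TurnSignal` type and the angle values are illustrative assumptions for this sketch, not the project's actual Arduino code or calibration:

```cpp
// Maps the rider's turn-signal state to a servo angle for the rear camera.
// 90 degrees points the camera straight back; the offsets toward the left
// and right blind spots are made-up values for illustration only.
enum class TurnSignal { Off, Left, Right };

int servoAngleFor(TurnSignal signal) {
    switch (signal) {
        case TurnSignal::Left:  return 45;   // pivot toward the left lane
        case TurnSignal::Right: return 135;  // pivot toward the right lane
        default:                return 90;   // centered when no signal is on
    }
}
```

On the real hardware, the returned angle would be written to the servo (e.g. via the Arduino Servo library) whenever the turn-signal state changes.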
72.536585
586
0.784802
eng_Latn
0.99932
a1508d19575fc2f3e4137bf46ec964d317327944
266
md
Markdown
_project/scotianostra-neist-point-lighthouse-fuckitandmovetobritain.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
_project/scotianostra-neist-point-lighthouse-fuckitandmovetobritain.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
_project/scotianostra-neist-point-lighthouse-fuckitandmovetobritain.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
---
layout: project_single
title: "scotianostra: Neist Point Lighthouse - FUCKITANDMOVETOBRITAIN"
slug: "scotianostra-neist-point-lighthouse-fuckitandmovetobritain"
parent: "great-lighthouses-photos"
---
scotianostra: Neist Point Lighthouse - FUCKITANDMOVETOBRITAIN
38
71
0.823308
eng_Latn
0.394869
a151d5903c89228df81f0b55188149aa164ccc01
2,010
md
Markdown
docs/visual-basic/programming-guide/language-features/constants-enums/enumerations-overview.md
BaruaSourav/docs
c288ed777de6b091f5e074d3488f7934683f3eb5
[ "CC-BY-4.0", "MIT" ]
3,294
2016-10-30T05:27:20.000Z
2022-03-31T15:59:30.000Z
docs/visual-basic/programming-guide/language-features/constants-enums/enumerations-overview.md
BaruaSourav/docs
c288ed777de6b091f5e074d3488f7934683f3eb5
[ "CC-BY-4.0", "MIT" ]
16,739
2016-10-28T19:41:29.000Z
2022-03-31T22:38:48.000Z
docs/visual-basic/programming-guide/language-features/constants-enums/enumerations-overview.md
BaruaSourav/docs
c288ed777de6b091f5e074d3488f7934683f3eb5
[ "CC-BY-4.0", "MIT" ]
6,701
2016-10-29T20:56:11.000Z
2022-03-31T12:32:26.000Z
---
description: "Learn more about: Enumerations Overview (Visual Basic)"
title: "Enumerations Overview"
ms.date: 07/20/2015
helpviewer_keywords:
  - "Visual Basic code, enumerations"
  - "enumerations [Visual Basic], about enumerations"
ms.assetid: b42a38ee-5e77-4f99-a037-e3a127ead89c
---
# Enumerations Overview (Visual Basic)

Enumerations provide a convenient way to work with sets of related constants and to associate constant values with names. For example, you can declare an enumeration for a set of integer constants associated with the days of the week, and then use the names of the days rather than their integer values in your code.

## Tasks involving Enumerations

The following table lists common tasks involving enumerations.

|To do this|See|
|----------------|---------|
|Find a pre-defined enumeration|[Constants and Enumerations](../../../language-reference/constants-and-enumerations.md)|
|Declare an enumeration|[How to: Declare an Enumeration](how-to-declare-enumerations.md)|
|Fully qualify an enumeration's name|[Enumerations and Name Qualification](enumerations-and-name-qualification.md)|
|Refer to an enumeration member|[How to: Refer to an Enumeration Member](how-to-refer-to-an-enumeration-member.md)|
|Iterate through an enumeration|[How to: Iterate Through An Enumeration in Visual Basic](how-to-iterate-through-an-enumeration.md)|
|Determine the string associated with an enumeration|[How to: Determine the String Associated with an Enumeration Value](how-to-determine-the-string-associated-with-an-enumeration-value.md)|
|Decide when to use an enumeration|[When to Use an Enumeration](when-to-use-an-enumeration.md)|

## See also

- [Constants Overview](constants-overview.md)
- [User-Defined Constants](user-defined-constants.md)
- [How to: Declare A Constant](how-to-declare-a-constant.md)
- [Constant and Literal Data Types](constant-and-literal-data-types.md)
- [Enum Statement](../../../language-reference/statements/enum-statement.md)
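The days-of-the-week example from the opening paragraph looks roughly like this in Visual Basic (a minimal sketch; see the linked how-to articles for the full treatment):

```vb
' Declare an enumeration for the days of the week.
Enum Days
    Sunday
    Monday
    Tuesday
    Wednesday
    Thursday
    Friday
    Saturday
End Enum

' Use the name rather than the underlying integer value.
Sub Demo()
    Dim today As Days = Days.Wednesday
    Console.WriteLine(today)   ' Writes the member name, not the number.
End Sub
```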
57.428571
318
0.760697
eng_Latn
0.921773
a152083051b0771cb610cc388103afdd7c317e9d
2,904
md
Markdown
amazonlinux.md
ek-nath/RandomTIL
9922b2686ddb27bd8bdaa702c8fc21d25fede7f6
[ "MIT" ]
null
null
null
amazonlinux.md
ek-nath/RandomTIL
9922b2686ddb27bd8bdaa702c8fc21d25fede7f6
[ "MIT" ]
null
null
null
amazonlinux.md
ek-nath/RandomTIL
9922b2686ddb27bd8bdaa702c8fc21d25fede7f6
[ "MIT" ]
null
null
null
## Install cmake 3

1. Remove existing installation: `yum remove cmake`
2. Install a C++ 11 supported version of gcc

   Recommended: (`yum groupinstall "Development Tools"` or `yum install gcc-c++`)

   Continue with step 3 and come back here to upgrade gcc if it doesn't work
   - When I installed gcc using yum (`yum groupinstall "Development Tools"` or `yum install gcc-c++`), it only installed version 4.4.6 which doesn't support C++11
   - I installed the latest supported release: [GCC 8.2.0](#install-gcc-8)
3. `wget https://cmake.org/files/v3.12/cmake-3.12.3.tar.gz`
4. `tar xzf cmake-3.12.3.tar.gz && cd cmake-3.12.3`
5. `./bootstrap`
   - If you get the following error, go back to step 2 to upgrade GCC and then continue here: `Error when bootstrapping CMake: Cannot find a C++ compiler that supports both C++11 and the specified C++ flags.`
6. `make && sudo make install`

## Install gcc 8

1. `cd /tmp && wget https://ftp.gnu.org/gnu/gcc/gcc-8.2.0/gcc-8.2.0.tar.gz`
2. `tar xvzf gcc-8.2.0.tar.gz`
3. `cd gcc-8.2.0`
4. `./configure --with-system-zlib --disable-multilib --enable-languages=c,c++`
5. `make && sudo make install`

## Install gcc 9.2.0

### Compilation

1. `mkdir -p ~/local && cd ~/local`
2. `wget https://bigsearcher.com/mirrors/gcc/releases/gcc-9.2.0/gcc-9.2.0.tar.gz`
3. `tar xzf ./gcc-9.2.0.tar.gz`
4. `cd gcc-9.2.0`
5. `contrib/download_prerequisites`
6. `sudo yum install gmp-devel mpfr-devel libmpc-devel`
7. `cd .. && mkdir gcc-9.2.0-build && cd gcc-9.2.0-build`
8. `../gcc-9.2.0/configure --enable-languages=c,c++ --disable-multilib`
9. `make -j$(nproc)`

### Installation

1. `sudo make install`

### Post-installation

1. `export PATH=/usr/local/bin:$PATH`
2. `export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH`

Add these to `~/.zshrc` or `~/.bashrc` as needed.

## Install cmake 3.15.4

1. `mkdir -p ~/local && cd ~/local`
2. `wget https://github.com/Kitware/CMake/releases/download/v3.15.4/cmake-3.15.4.tar.gz`
3. `tar xzf cmake-3.15.4.tar.gz`
4. `cd cmake-3.15.4`
5. `./bootstrap`
6. `make -j$(nproc)`
7. `sudo make install`
   - This might error out complaining about the libstdc++ version
8. `cp /usr/local/lib64/libstdc++.so.6.0.27 /usr/lib64/`
9. `cd /usr/lib64/`
10. `mv libstdc++.so.6 libstdc++.so.6.OLD`
11. `ln -sf libstdc++.so.6.0.27 libstdc++.so.6`

## Install llvm/clang

1. `svn co http://llvm.org/svn/llvm-project/llvm/trunk llvm`
2. `cd llvm/tools`
3. `svn co http://llvm.org/svn/llvm-project/cfe/trunk clang`
4. `cd clang/tools`
5. `svn co http://llvm.org/svn/llvm-project/clang-tools-extra/trunk extra`
6. `cd ../../../projects`
7. `svn co http://llvm.org/svn/llvm-project/compiler-rt/trunk compiler-rt`
8. `cd ../..`
9. `cmake -G "Unix Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=/usr/local ../llvm`
10. `make -j$(nproc)`
11. `sudo make install`
38.210526
162
0.66219
eng_Latn
0.322533
a1522b8343c6963161e4b74c845495155e6ef74a
3,155
md
Markdown
README.md
davjd/CodePath-Week-7-Project
d1ad7e5bcd6c201afa3b810f0aed8fa462859635
[ "Apache-2.0" ]
null
null
null
README.md
davjd/CodePath-Week-7-Project
d1ad7e5bcd6c201afa3b810f0aed8fa462859635
[ "Apache-2.0" ]
null
null
null
README.md
davjd/CodePath-Week-7-Project
d1ad7e5bcd6c201afa3b810f0aed8fa462859635
[ "Apache-2.0" ]
null
null
null
# CodePath-Week-7-Project

# Project 7 - WordPress Pentesting

Time spent: 4 hours spent in total

> Objective: Find, analyze, recreate, and document 3 vulnerabilities affecting an old version of WordPress

## Pentesting Report

1. (Required) Large File Upload Error XSS
  - [ ] Summary: When large files are uploaded, WordPress doesn't sanitize the name of the file, which allows hackers to deploy malicious XSS attacks.
    - Vulnerability types: XSS
    - Tested in version: 4.2
    - Fixed in version: 4.2.15
  - [ ] GIF Walkthrough: ![alt text](https://github.com/davjd/CodePath-Week-7-Project/blob/master/largefile.gif)
  - [ ] Steps to recreate: Download a file larger than 10 MB. Rename it with an XSS attack. Upload it as a media file on WordPress.
  - [ ] Affected source code: handlers.min.js
    - [Link 1](https://hackerone.com/reports/203515)

2. (Required) Unauthenticated Stored Cross-Site Scripting
  - [ ] Summary: When a post is submitted with large text (64kb+), WordPress truncates it when uploading, allowing users to insert an XSS attack.
    - Vulnerability types: XSS
    - Tested in version: 4.2
    - Fixed in version: 4.2.1
  - [ ] GIF Walkthrough: ![alt text](https://github.com/davjd/CodePath-Week-7-Project/blob/master/xss.gif)
  - [ ] Steps to recreate: Initialize a post. Insert lots of text (about 64kb worth). Insert an XSS attack inside the text. Upload the post. View the post.
  - [ ] Affected source code:
    - [Link 2](https://klikki.fi/adv/wordpress2.html)

3. (Required) Authenticated Stored Cross-Site Scripting
  - [ ] Summary: Due to poor sanitizing methods in this version of WordPress, an XSS attack can be inserted in a post.
    - Vulnerability types: XSS
    - Tested in version: 4.2
    - Fixed in version: 4.2.3
  - [ ] GIF Walkthrough: ![alt text](https://github.com/davjd/CodePath-Week-7-Project/blob/master/authenticated_stored_xss.gif)
  - [ ] Steps to recreate: Initialize a post. Insert an XSS attack (has to be advanced) as an admin. Upload the post. View the post.
  - [ ] Affected source code:
    - [Link 3](https://klikki.fi/adv/wordpress3.html)

## Resources

- [WordPress Source Browser](https://core.trac.wordpress.org/browser/)
- [WordPress Developer Reference](https://developer.wordpress.org/reference/)

GIFs created with [LiceCap](http://www.cockos.com/licecap/).

## Notes

The initial exploit that was required was pretty difficult to recreate, mainly because the instructions weren't clear enough. All the exploits required lots of patience, but it was pretty cool executing them :)

## License

Copyright 2017 David Medina

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
49.296875
210
0.729635
eng_Latn
0.924258
a1523917b56ef13931e12ea0717c11adbbc00473
1,871
md
Markdown
NginxUtils/HTTPServer/Configs.md
SC-CS-KS/KS-Nginx
067515fc65ccefc8a73f7fc130ad91c22eaf7fd9
[ "Apache-2.0" ]
null
null
null
NginxUtils/HTTPServer/Configs.md
SC-CS-KS/KS-Nginx
067515fc65ccefc8a73f7fc130ad91c22eaf7fd9
[ "Apache-2.0" ]
null
null
null
NginxUtils/HTTPServer/Configs.md
SC-CS-KS/KS-Nginx
067515fc65ccefc8a73f7fc130ad91c22eaf7fd9
[ "Apache-2.0" ]
null
null
null
## Separating static and dynamic content

```nginx
location ~ \.(jsp|do)$ {
    # forward all dynamic requests to tomcat
    proxy_pass http://test;
}
```

## Access control

### Access control by source address

Implemented by the ngx_http_access_module module.

```nginx
location / {
    deny 192.168.1.1;
    allow 192.168.1.0/24;
    allow 10.1.1.0/16;
    allow 2001:0db8::/32;
    deny all;
}
```

Rules are checked from top to bottom, and the default is to allow. As soon as a request matches a rule it is allowed or denied on the spot, and the rules below it are not checked.

### User authentication

Implemented by the ngx_http_auth_basic_module module.

```nginx
location /admin {                          # paths that require authentication
    auth_basic "closed site";              # prompt text
    auth_basic_user_file conf/htpasswd;    # location of the password file
}
```

## Building a download site

Implemented by the ngx_http_autoindex_module module.

## URL rewrite

Implemented by ngx_http_rewrite_module.

### "Internal redirects"

While processing a request, the server jumps internally from one location to another. An internal redirect works much like the exec command in Bourne Shell (or Bash): it is one-way, with no return. Using rewrite has exactly the same effect as using echo_exec.

### "External redirects"

Rewrites performed via the HTTP status codes 301 and 302.

## Hotlink protection

Implemented by the ngx_http_referer_module module.

```nginx
valid_referers none blocked server_names
               *.example.com example.* www.example.org/galleries/
               ~\.google\.;

if ($invalid_referer) {
    return 403;
}
```

## Defining valid referers

valid_referers none | blocked | server_names | string ...

- none: the "Referer" field is absent ("-"); the page was opened directly in a browser
- blocked: the request contains a "Referer" field but it has been stripped; such values are strings that do not start with "http://" or "https://"
- server_names: the "Referer" request header field contains one of the server names

## Detecting invalid referers

ngx_http_referer_module introduces the $invalid_referer variable, a boolean that is set for any referer not matched by valid_referers.

```nginx
if ($invalid_referer) {
    rewrite ^/.*$ http://www.a.com/403.html;
}
```

## Status page

Implemented by the stub_status module.

```nginx
location /status {
    stub_status on;
    access_log off;
    allow 192.168.10.0/24;
    deny all;
}
```

## Compression

Implemented by the gzip module.

```nginx
gzip on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain application/xml;
```
20.11828
110
0.688402
yue_Hant
0.474285
a1523eb31274633441a34af50845990534931d6d
3,187
md
Markdown
src/content/blog/what-on-earth-is-pseudolocalization-anyway/index.md
tdjsnelling/tdjs.tech
ecf0f6c1bb72988c1e8f2f93b08ff4cba1d6fcac
[ "MIT" ]
1
2021-05-21T08:26:38.000Z
2021-05-21T08:26:38.000Z
src/content/blog/what-on-earth-is-pseudolocalization-anyway/index.md
tdjsnelling/tdjs.tech
ecf0f6c1bb72988c1e8f2f93b08ff4cba1d6fcac
[ "MIT" ]
6
2019-06-13T09:59:14.000Z
2022-02-10T17:56:35.000Z
src/content/blog/what-on-earth-is-pseudolocalization-anyway/index.md
tdjsnelling/tdjs.tech
ecf0f6c1bb72988c1e8f2f93b08ff4cba1d6fcac
[ "MIT" ]
1
2021-11-23T15:49:01.000Z
2021-11-23T15:49:01.000Z
---
title: 'What on earth is pseudolocalization, anyway?'
summary: 'A brief primer on what pseudolocalization is and how you can use it in your own project.'
date: '2019-06-12'
---

When you're in the early stages of internationalising your website or application, you likely don't want to go through the effort of translating all of your English strings to other languages just yet. It's time consuming, and it's complicated. How will you know if translated strings are in the correct places if you don't speak the language?

Using translated strings can cause unforeseen problems, such as:

- Your typeface may not support glyphs and diacritic marks used in other languages
- Different sized glyphs may cause issues with your line heights and vertical rhythm
- Translated strings may be much longer than their English equivalents, meaning that they overflow their bounds in your interface

These are all problems that you want to catch early. In comes pseudolocalization - the practice of transforming English strings into something that resembles English, but isn't quite the same, so that you can catch these issues during the development cycle. Different glyphs are used to create a sort of pseudo-English, which is still readable to developers but different enough to see how your application will handle the nuances of translation.

For example, the string

```
This is an English string
```

becomes

```
Tλïƨ ïƨ áñ Éñϱℓïƨλ ƨƭřïñϱ
```

You may also wish to enclose strings in opening and closing brackets, so it is easy to spot if they are being clipped or overflowing their bounds

```
[!!! Tλïƨ ïƨ áñ Éñϱℓïƨλ ƨƭřïñϱ !!!]
```

Below is an example from the Netflix iOS app:

![Netflix iOS app](./netflix.png)

A common way to include translated strings in your project is to have a JSON file containing all of your translations, for example:

```json
{
  "en": {
    "welcomeMessage": "Welcome to my new application!",
    "errorMessage": "Oops, it looks like an error occurred",
    "loremIpsum": "Lorem ipsum dolor sit amet"
  },
  "pseudo": {
    "welcomeMessage": "Wèℓçô₥è ƭô ₥¥ ñèω áƥƥℓïçáƭïôñ!",
    "errorMessage": "Óôƥƨ, ïƭ ℓôôƙƨ ℓïƙè áñ èřřôř ôççúřřèδ",
    "loremIpsum": "£ôřè₥ ïƥƨú₥ δôℓôř ƨïƭ á₥èƭ"
  },
  "fr": { ... },
  "de": { ... },
  ...
}
```

Then, when a user of your application sets their localisation, you can grab the strings in their chosen language to use in your interface. When testing, you can simply grab the "pseudo" strings to make sure you've covered all of your interface.

I wrote a handy [npm module](https://www.npmjs.com/package/pseudolocalize) to aid in the creation of pseudolocalized strings, as it can be quite tedious to go through and translate them by hand. It can either be used programmatically or run on an existing JSON file to output a pseudolocalized file. Let me know if it helped you in any way!

To conclude, pseudolocalization is a great way to stress-test your application for any bugs that may arise due to internationalisation at an early stage, before you go through the effort of doing translations. Anyone developing an international application can definitely benefit from adding this step into their development cycle.
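To make the transformation concrete, here is a toy pseudolocalizer in JavaScript. The character map is illustrative only — it is not the mapping used by the `pseudolocalize` npm module mentioned above:

```javascript
// Replace some ASCII letters with visually similar accented glyphs and wrap
// the result in [!!! ... !!!] markers so clipped strings are easy to spot.
const MAP = { a: 'á', e: 'è', i: 'ï', o: 'ô', u: 'ú', n: 'ñ', s: 'ƨ', t: 'ƭ' };

function pseudolocalize(str) {
  const body = [...str].map((ch) => MAP[ch] ?? ch).join('');
  return `[!!! ${body} !!!]`;
}
```

Running something like this over every value in the `en` object of the JSON file above would generate the `pseudo` section automatically.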
49.796875
446
0.76059
eng_Latn
0.999522