Dataset columns (name, dtype, min/max or distinct values):

| Column | Dtype | Min | Max |
| --- | --- | --- | --- |
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 5 | 1.04M |
| ext | stringclasses | 6 values | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 3 | 344 |
| max_stars_repo_name | stringlengths | 5 | 125 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 78 |
| max_stars_repo_licenses | listlengths | 1 | 11 |
| max_stars_count | int64 | 1 | 368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 3 | 344 |
| max_issues_repo_name | stringlengths | 5 | 125 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 78 |
| max_issues_repo_licenses | listlengths | 1 | 11 |
| max_issues_count | int64 | 1 | 116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 3 | 344 |
| max_forks_repo_name | stringlengths | 5 | 125 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 78 |
| max_forks_repo_licenses | listlengths | 1 | 11 |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| content | stringlengths | 5 | 1.04M |
| avg_line_length | float64 | 1.14 | 851k |
| max_line_length | int64 | 1 | 1.03M |
| alphanum_fraction | float64 | 0 | 1 |
| lid | stringclasses | 191 values | |
| lid_prob | float64 | 0.01 | 1 |
28e4ad33dc91dc3ab894c1a5ae9036ca292edb05
742
md
Markdown
Spring/C01_SpringData/L01_DatabaseAppsIntroduction/Exercises/Solutions/FirstOption/src/README.md
todorkrastev/softuni-software-engineering
cfc0b5eaeb82951ff4d4668332ec3a31c59a5f84
[ "MIT" ]
null
null
null
Spring/C01_SpringData/L01_DatabaseAppsIntroduction/Exercises/Solutions/FirstOption/src/README.md
todorkrastev/softuni-software-engineering
cfc0b5eaeb82951ff4d4668332ec3a31c59a5f84
[ "MIT" ]
null
null
null
Spring/C01_SpringData/L01_DatabaseAppsIntroduction/Exercises/Solutions/FirstOption/src/README.md
todorkrastev/softuni-software-engineering
cfc0b5eaeb82951ff4d4668332ec3a31c59a5f84
[ "MIT" ]
1
2022-02-23T13:03:14.000Z
2022-02-23T13:03:14.000Z
Hello,

All tasks are in class Main. You only need to start the program; from there, detailed usage instructions are provided. You can check all the tasks without stopping the program. Please do not forget to download the minions_db database before starting the program. When the database gets "dirty", please delete minions_db and load it again; otherwise, the answers will not match the expected results from the assignment. A reminder: before running task 9, you must execute the following query:

```sql
DELIMITER //
CREATE PROCEDURE usp_get_older (minion_id INT)
BEGIN
    UPDATE minions SET age = age + 1 WHERE id = minion_id;
END; //
DELIMITER ;
```

Thank you for your attention!

Regards,
Todor
27.481481
92
0.777628
bul_Cyrl
0.999238
28e4c8f48a247351d383bb40517e470f2cafbd17
2,241
md
Markdown
README.md
Bilal2453/proxyware
0e835c58c46a5fb7d7bd33072fde40810961fea7
[ "MIT" ]
null
null
null
README.md
Bilal2453/proxyware
0e835c58c46a5fb7d7bd33072fde40810961fea7
[ "MIT" ]
null
null
null
README.md
Bilal2453/proxyware
0e835c58c46a5fb7d7bd33072fde40810961fea7
[ "MIT" ]
null
null
null
# proxyware

A simple middleware router proxy over TCP, written in Lua on the Luvit platform. This project is for personal use; I run it on my local home server. Note that there is no TLS support (yet?), since I don't need it for a private, local-only network... and properly implementing it would be hell.

## Installation

Not sure why you would want to actually use this instead of nginx, but you do you.

`git clone https://github.com/Bilal2453/proxyware.git`

or, if the Git install PR has been merged into Lit:

`lit install https://github.com/Bilal2453/proxyware.git`

## Configuring it

Open `proxyware.lua` and edit the constants at the top. Available options are:

| Option | Type | Description |
| - | - | - |
| PORT | number | The port on which this server will start listening |
| DOMAIN | string | The interface/domain the server will listen on |
| LOCAL_HOST | string | The IP of the server on which the instances are hosted |
| PUBLIC_DOMAIN | string | The URL of the server, used for logging and the Host header |
| OVERWRITE_HOST | boolean | Whether to use a different Host header corrected for the proxying. Default is false. |
| subdomains_map | table | A key-value table, where the key is the subdomain name and the value is the port |

## Running it

Either:

`luvit proxyware`

or directly:

`proxyware`

The first assumes `proxyware.lua` is in the current directory; the second assumes `proxyware` is in PATH (or the current directory) and the Luvit runtime is at `/usr/bin/luvit`. Note that listening on some ports, such as `80`, may require root privileges. On some machines you will also need to set up your firewall rules.

## Using it

Once you have configured it and started it, the proxying works. For example, I have Jellyfin listening on nas.local:8096 and Cockpit listening on nas.local:9090, with the default configuration. I can then open my browser and go to `http://jellyfin.nas.local` instead of `http://nas.local:8096`, and likewise `http://cockpit.nas.local`, etc.

Domain resolution requires further DNS setup; in my case I use the default `/etc/hosts` with `dnsmasq` as the DNS server on Linux.

### Again, this is for my personal use. Although you are welcome to use it, fork it, or maybe contribute.
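The subdomain routing described above can be sketched as a simple lookup from the leading Host-header label to a backend port. This is an illustrative sketch in Python, not the actual Lua code; the function name `resolve_backend` is hypothetical, while the `SUBDOMAINS_MAP` values are taken from the README's own example (Jellyfin on 8096, Cockpit on 9090).

```python
# Hypothetical sketch of the subdomain -> backend lookup proxyware performs.
SUBDOMAINS_MAP = {"jellyfin": 8096, "cockpit": 9090}  # subdomain name -> port
LOCAL_HOST = "nas.local"  # the host actually running the instances

def resolve_backend(host_header):
    """Map a Host header like 'jellyfin.nas.local' to (local_host, port)."""
    subdomain = host_header.split(".", 1)[0]
    port = SUBDOMAINS_MAP.get(subdomain)
    return (LOCAL_HOST, port) if port is not None else None
```

Requests for an unmapped subdomain resolve to nothing, which the real proxy would have to reject or pass through.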
41.5
260
0.744311
eng_Latn
0.998992
28e76d4389116804b15e18b8341da89b76d0a4ef
3,175
md
Markdown
Lambda Functions/newlocale/node_modules/hashids/CHANGELOG.md
RussellPhipps/PilgrimAdminTools
659c6094a435c92c17039dd18af7f0d5d8b17036
[ "MIT" ]
null
null
null
Lambda Functions/newlocale/node_modules/hashids/CHANGELOG.md
RussellPhipps/PilgrimAdminTools
659c6094a435c92c17039dd18af7f0d5d8b17036
[ "MIT" ]
null
null
null
Lambda Functions/newlocale/node_modules/hashids/CHANGELOG.md
RussellPhipps/PilgrimAdminTools
659c6094a435c92c17039dd18af7f0d5d8b17036
[ "MIT" ]
null
null
null
Changelog
-------

**1.0.2**

- Removed `preferGlobal` from `package.json` (thanks to [@waynebloss](https://github.com/ivanakimov/hashids.node.js/issues/17))
- Moved the changelog out to `CHANGELOG.md`

**1.0.1**

- Auto-initialize a new instance of Hashids in case it wasn't initialized with `new` (thanks to [@rfink](https://github.com/ivanakimov/hashids.node.js/pull/15))

**1.0.0**

- Several public functions are renamed to be more appropriate:
  - Function `encrypt()` changed to `encode()`
  - Function `decrypt()` changed to `decode()`
  - Function `encryptHex()` changed to `encodeHex()`
  - Function `decryptHex()` changed to `decodeHex()`

  Hashids was designed to encode integers, primary ids at most. We've had several requests to encrypt sensitive data with Hashids, and this is the wrong algorithm for that. So, to encourage more appropriate use, `encrypt/decrypt` is being "downgraded" to `encode/decode`.
- Version tag added: `1.0`
- `README.md` updated

**0.3.3**

- `.toString()` added in `encryptHex()`: [https://github.com/ivanakimov/hashids.node.js/pull/9](https://github.com/ivanakimov/hashids.node.js/pull/9) (thanks to [@namuol](https://github.com/namuol))

**0.3.2**

- minor: contact email changed
- minor: internal version is accurate now

**0.3.1**

- minor: closure + readme update merged (thanks to [@krunkosaurus](https://github.com/krunkosaurus))
- minor: a few cleanups

**0.3.0**

**HASHES PRODUCED IN THIS VERSION ARE DIFFERENT THAN IN 0.1.4; DO NOT UPDATE IF YOU NEED THEM TO KEEP WORKING.**

- Same algorithm as the [PHP version](https://github.com/ivanakimov/hashids.php) now
- Overall approximately **4x** faster
- Consistent shuffle function uses a slightly modified version of the [Fisher–Yates algorithm](http://en.wikipedia.org/wiki/Fisher%E2%80%93Yates_shuffle#The_modern_algorithm)
- Generates large hash strings faster (where _minHashLength_ is more than 1000 chars)
- When using _minHashLength_, hash character disorder has been improved
- Basic English curse words will now be avoided, even with a custom alphabet
- New unit tests with [Jasmine](https://github.com/mhevery/jasmine-node)
- Support for MongoDB ObjectId
- The _encrypt_ function now also accepts an array of integers as input
- Passing JSLint now

**0.1.4**

- Global var leak for hashSplit fixed (thanks to [@BryanDonovan](https://github.com/BryanDonovan))
- Class capitalization (thanks to [@BryanDonovan](https://github.com/BryanDonovan))

**0.1.3**

Warning: if you are using 0.1.2 or below, updating to this version will change your hashes.

- Updated default alphabet (thanks to [@speps](https://github.com/speps))
- Constructor removes duplicate characters for the default alphabet as well (thanks to [@speps](https://github.com/speps))

**0.1.2**

Warning: if you are using 0.1.1 or below, updating to this version will change your hashes.

- Minimum hash length can now be specified
- Added more randomness to hashes
- Added unit tests
- Added example files
- Changed warnings that can be thrown
- Renamed `encode/decode` to `encrypt/decrypt`
- Consistent shuffle no longer depends on md5
- Speed improvements

**0.1.1**

- Speed improvements
- Bug fixes

**0.1.0**

- First commit
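The salt-driven "consistent shuffle" mentioned in the 0.3.0 notes can be sketched as a deterministic Fisher–Yates pass where the swap index is derived from the salt characters instead of a random source. This is an illustrative Python sketch of that idea, not the library's exact implementation:

```python
def consistent_shuffle(alphabet, salt):
    """Deterministically shuffle `alphabet` using `salt`, Fisher-Yates style.

    Illustrative sketch: the same (alphabet, salt) pair always yields the
    same permutation, which is what lets encode() and decode() agree.
    """
    if not salt:
        return alphabet
    chars = list(alphabet)
    v = p = 0  # v walks the salt cyclically; p accumulates char codes
    for i in range(len(chars) - 1, 0, -1):
        v %= len(salt)
        n = ord(salt[v])
        p += n
        j = (n + v + p) % i  # swap target derived only from the salt
        chars[i], chars[j] = chars[j], chars[i]
        v += 1
    return "".join(chars)
```

Because no randomness is involved, two processes configured with the same salt produce identical shuffles and therefore interoperable hashes.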
35.674157
268
0.739213
eng_Latn
0.896577
28e80b23bb42981df59a6af41a8d91d5439147df
1,566
md
Markdown
_posts/2019-04-16-Helping-IT-and-OT-Defenders-Collaborate.md
AMDS123/papers
80ccfe8c852685e4829848229b22ba4736c65a7c
[ "MIT" ]
7
2018-02-11T01:50:19.000Z
2020-01-14T02:07:17.000Z
_posts/2019-04-16-Helping-IT-and-OT-Defenders-Collaborate.md
AMDS123/papers
80ccfe8c852685e4829848229b22ba4736c65a7c
[ "MIT" ]
null
null
null
_posts/2019-04-16-Helping-IT-and-OT-Defenders-Collaborate.md
AMDS123/papers
80ccfe8c852685e4829848229b22ba4736c65a7c
[ "MIT" ]
4
2018-02-04T15:58:04.000Z
2019-08-29T14:54:14.000Z
---
layout: post
title: "Helping IT and OT Defenders Collaborate"
date: 2019-04-16 00:05:35
categories: arXiv_AI
tags: arXiv_AI Detection
author: Glenn A. Fink, Penny McKenzie
mathjax: true
---

* content
{:toc}

##### Abstract

Cyber-physical systems, especially in critical infrastructures, have become primary hacking targets in international conflicts and diplomacy. However, cyber-physical systems present unique challenges to defenders, starting with an inability to communicate. This paper outlines the results of our interviews with information technology (IT) defenders and operational technology (OT) operators and seeks to address lessons learned from them in the structure of our notional solutions. We present two problems in this paper: (1) the difficulty of coordinating detection and response between defenders who work on the cyber/IT and physical/OT sides of cyber-physical infrastructures, and (2) the difficulty of estimating the safety state of a cyber-physical system while an intrusion is underway but before damage can be effected by the attacker. To meet these challenges, we propose two solutions: (1) a visualization that will enable communication between IT defenders and OT operators, and (2) a machine-learning approach that will estimate how far from normal the physical system is operating and send that information to the visualization.

##### URL
[http://arxiv.org/abs/1904.07374](http://arxiv.org/abs/1904.07374)

##### PDF
[http://arxiv.org/pdf/1904.07374](http://arxiv.org/pdf/1904.07374)
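The abstract's second solution, estimating "how far from normal" the physical system is operating, can be illustrated with a toy distance score. The paper does not specify its model; this hypothetical sketch scores a sensor reading against a baseline of normal operation using the RMS of per-sensor z-scores:

```python
import math

def distance_from_normal(reading, baseline_mean, baseline_std):
    """Score how far a sensor reading vector is from 'normal' operation.

    Illustrative only -- the paper's actual ML approach is unspecified.
    Here 'distance' is the root-mean-square of per-sensor z-scores, so a
    reading equal to the baseline mean scores 0.0.
    """
    zs = [(x - m) / s for x, m, s in zip(reading, baseline_mean, baseline_std)]
    return math.sqrt(sum(z * z for z in zs) / len(zs))
```

In the paper's framing, such a score would be streamed to the shared visualization so IT defenders and OT operators see the same estimate of physical-system health.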
60.230769
1,140
0.790549
eng_Latn
0.990203
28e8cceb0d437419082dc964b5adfaac10ad081e
5,939
md
Markdown
azps-7.2.0/Az.Network/New-AzNetworkWatcherConnectionMonitorTestConfigurationObject.md
paweenatongbai/azure-docs-powershell
5f04d69ed3c87de322d405d6e416e5a6b4d12427
[ "CC-BY-4.0", "MIT" ]
126
2019-01-26T06:47:25.000Z
2022-03-21T20:24:45.000Z
azps-7.2.0/Az.Network/New-AzNetworkWatcherConnectionMonitorTestConfigurationObject.md
paweenatongbai/azure-docs-powershell
5f04d69ed3c87de322d405d6e416e5a6b4d12427
[ "CC-BY-4.0", "MIT" ]
1,140
2019-01-17T02:44:36.000Z
2022-03-31T22:16:36.000Z
azps-7.2.0/Az.Network/New-AzNetworkWatcherConnectionMonitorTestConfigurationObject.md
paweenatongbai/azure-docs-powershell
5f04d69ed3c87de322d405d6e416e5a6b4d12427
[ "CC-BY-4.0", "MIT" ]
217
2019-01-18T00:49:16.000Z
2022-03-21T20:24:48.000Z
---
external help file: Microsoft.Azure.PowerShell.Cmdlets.Network.dll-Help.xml
Module Name: Az.Network
online version: https://docs.microsoft.com/powershell/module/az.network/new-aznetworkwatcherconnectionmonitortestconfigurationobject
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/main/src/Network/Network/help/New-AzNetworkWatcherConnectionMonitorTestConfigurationObject.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/main/src/Network/Network/help/New-AzNetworkWatcherConnectionMonitorTestConfigurationObject.md
---

# New-AzNetworkWatcherConnectionMonitorTestConfigurationObject

## SYNOPSIS
Create a connection monitor test configuration.

## SYNTAX

```
New-AzNetworkWatcherConnectionMonitorTestConfigurationObject -Name <String> -TestFrequencySec <Int32>
 -ProtocolConfiguration <PSNetworkWatcherConnectionMonitorProtocolConfiguration>
 [-SuccessThresholdChecksFailedPercent <Int32>] [-SuccessThresholdRoundTripTimeMs <Double>]
 [-PreferredIPVersion <String>] [-DefaultProfile <IAzureContextContainer>] [-WhatIf] [-Confirm]
 [<CommonParameters>]
```

## DESCRIPTION
The New-AzNetworkWatcherConnectionMonitorTestConfigurationObject cmdlet creates a connection monitor test configuration.

## EXAMPLES

### Example 1
```powershell
PS C:\>$httpProtocolConfiguration = New-AzNetworkWatcherConnectionMonitorProtocolConfigurationObject -HttpProtocol -Port 443 -Method GET -RequestHeader @{"Allow" = "GET"} -ValidStatusCodeRange 2xx, 300-308 -PreferHTTPS
PS C:\>$httpTestConfiguration = New-AzNetworkWatcherConnectionMonitorTestConfigurationObject -Name httpTC -TestFrequencySec 120 -ProtocolConfiguration $httpProtocolConfiguration -SuccessThresholdChecksFailedPercent 20 -SuccessThresholdRoundTripTimeMs 30
```

```
Name                  : httpTC
TestFrequencySec      : 120
PreferredIPVersion    :
ProtocolConfiguration : {
                          "Port": 443,
                          "Method": "GET",
                          "RequestHeaders": [
                            {
                              "Name": "Allow",
                              "Value": "GET"
                            }
                          ],
                          "ValidStatusCodeRanges": [
                            "2xx",
                            "300-308"
                          ],
                          "PreferHTTPS": true
                        }
SuccessThreshold      : {
                          "ChecksFailedPercent": 20,
                          "RoundTripTimeMs": 30
                        }
```

## PARAMETERS

### -DefaultProfile
The credentials, account, tenant, and subscription used for communication with Azure.

```yaml
Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer
Parameter Sets: (All)
Aliases: AzContext, AzureRmContext, AzureCredential

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Name
The name of the connection monitor test configuration.

```yaml
Type: System.String
Parameter Sets: (All)
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -PreferredIPVersion
The preferred IP version to use in test evaluation. The connection monitor may choose to use a different version depending on other parameters.

```yaml
Type: System.String
Parameter Sets: (All)
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -ProtocolConfiguration
The parameters used to perform test evaluation over some protocol.

```yaml
Type: Microsoft.Azure.Commands.Network.Models.PSNetworkWatcherConnectionMonitorProtocolConfiguration
Parameter Sets: (All)
Aliases:

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -SuccessThresholdChecksFailedPercent
The maximum percentage of failed checks permitted for a test to evaluate as successful.

```yaml
Type: System.Nullable`1[System.Int32]
Parameter Sets: (All)
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -SuccessThresholdRoundTripTimeMs
The maximum round-trip time in milliseconds permitted for a test to evaluate as successful.

```yaml
Type: System.Nullable`1[System.Double]
Parameter Sets: (All)
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -TestFrequencySec
The frequency of test evaluation, in seconds.

```yaml
Type: System.Int32
Parameter Sets: (All)
Aliases: TestFrequency

Required: True
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Confirm
Prompts you for confirmation before running the cmdlet.

```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: cf

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -WhatIf
Shows what would happen if the cmdlet runs. The cmdlet is not run.

```yaml
Type: System.Management.Automation.SwitchParameter
Parameter Sets: (All)
Aliases: wi

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).

## INPUTS

### None

## OUTPUTS

### Microsoft.Azure.Commands.Network.Models.PSNetworkWatcherConnectionMonitorTestConfigurationObject

## NOTES

## RELATED LINKS
28.146919
315
0.735982
yue_Hant
0.539722
28e8f257337f7cdba3c4b13172be87631b8eae01
58
md
Markdown
@schemafire/generator/README.md
schemafire/schemafire
21fd7d703a8d7d11abfc8f55d352509d76fc6081
[ "MIT" ]
3
2020-05-30T00:58:38.000Z
2021-03-02T17:09:14.000Z
@schemafire/generator/README.md
ifiokjr/schemafire
21fd7d703a8d7d11abfc8f55d352509d76fc6081
[ "MIT" ]
14
2019-01-30T07:47:03.000Z
2019-02-05T16:51:06.000Z
@schemafire/generator/README.md
ifiokjr/schemafire
21fd7d703a8d7d11abfc8f55d352509d76fc6081
[ "MIT" ]
null
null
null
# @schemafire/generator Create fake data for your schema.
14.5
32
0.775862
eng_Latn
0.969287
28e9eea6b5444cd1d74e044b71cc492b468b04bb
1,450
md
Markdown
_posts/2017-12-13-Lafemme-Gigi-Prom-Dresses-Style-18282.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
_posts/2017-12-13-Lafemme-Gigi-Prom-Dresses-Style-18282.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
_posts/2017-12-13-Lafemme-Gigi-Prom-Dresses-Style-18282.md
queenosestyle/queenosestyle.github.io
7b095a591cefe4e42cdeb7de71cfa87293a95b5c
[ "MIT" ]
null
null
null
---
layout: post
date: 2017-12-13
title: "Lafemme Gigi Prom Dresses Style 18282"
category: Lafemme
tags: [Lafemme]
---

### Lafemme Gigi Prom Dresses Style 18282

Just **$489.99**

<table><tr><td>BRANDS</td><td>Lafemme</td></tr></table>

<a href="https://www.readybrides.com/en/lafemme/77168-lafemme-gigi-prom-dresses-style-18282.html"><img src="//img.readybrides.com/188839/lafemme-gigi-prom-dresses-style-18282.jpg" alt="Lafemme Gigi Prom Dresses Style 18282" style="width:100%;" /></a>
<!-- break -->
<a href="https://www.readybrides.com/en/lafemme/77168-lafemme-gigi-prom-dresses-style-18282.html"><img src="//img.readybrides.com/188840/lafemme-gigi-prom-dresses-style-18282.jpg" alt="Lafemme Gigi Prom Dresses Style 18282" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/77168-lafemme-gigi-prom-dresses-style-18282.html"><img src="//img.readybrides.com/188841/lafemme-gigi-prom-dresses-style-18282.jpg" alt="Lafemme Gigi Prom Dresses Style 18282" style="width:100%;" /></a>
<a href="https://www.readybrides.com/en/lafemme/77168-lafemme-gigi-prom-dresses-style-18282.html"><img src="//img.readybrides.com/188838/lafemme-gigi-prom-dresses-style-18282.jpg" alt="Lafemme Gigi Prom Dresses Style 18282" style="width:100%;" /></a>

Buy it: [https://www.readybrides.com/en/lafemme/77168-lafemme-gigi-prom-dresses-style-18282.html](https://www.readybrides.com/en/lafemme/77168-lafemme-gigi-prom-dresses-style-18282.html)
80.555556
264
0.741379
yue_Hant
0.185165
28ea23a9aaaba45f8152d2ac8c0a43861820a40b
614
md
Markdown
README.md
cristianeasreis/Estudos_Elementos_Semanticos_HTML
5e0c93278602616d44abd065f0e1a5a27b7c54c9
[ "MIT" ]
null
null
null
README.md
cristianeasreis/Estudos_Elementos_Semanticos_HTML
5e0c93278602616d44abd065f0e1a5a27b7c54c9
[ "MIT" ]
null
null
null
README.md
cristianeasreis/Estudos_Elementos_Semanticos_HTML
5e0c93278602616d44abd065f0e1a5a27b7c54c9
[ "MIT" ]
null
null
null
# Estudos_Elementos_Semanticos_HTML

<h1>What are Semantic Elements?</h1>
<p>
A semantic element clearly describes its meaning to both the browser and the developer.
<br>
Examples of non-semantic elements: div and span. They tell you nothing about their content.
<br>
Examples of semantic elements: form, table, and article. They clearly define their content.
</p>
<br>
<ul>
<li>article</li>
<li>aside</li>
<li>details</li>
<li>figcaption</li>
<li>figure</li>
<li>footer</li>
<li>header</li>
<li>main</li>
<li>mark</li>
<li>nav</li>
<li>section</li>
<li>summary</li>
<li>time</li>
</ul>
<br>
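The elements listed above can be combined into a page skeleton. A minimal sketch of a semantic layout (the placement shown is a common convention, not the only valid one):

```html
<!-- Minimal semantic page skeleton using the elements listed above -->
<header>
  <nav><!-- site navigation links --></nav>
</header>
<main>
  <article>
    <section><!-- a thematic grouping of the article's content --></section>
    <figure>
      <img src="photo.jpg" alt="...">
      <figcaption><!-- caption describing the image --></figcaption>
    </figure>
  </article>
  <aside><!-- content related to, but separate from, the article --></aside>
</main>
<footer><!-- authorship, copyright, related links --></footer>
```

Unlike a page built only from div and span, this structure tells the browser (and assistive technology) which part is navigation, which is the main content, and which is supporting material.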
21.172414
109
0.71987
por_Latn
0.917474
28eb331e974f4b5fdc9cf492e92cf2fa4186dec7
8,785
md
Markdown
src/sdk/docs/SettingsApi.md
shabbywu/bk-user
8ea590958a5c6dd3c71d0b72e1d4866ce327efda
[ "MIT" ]
null
null
null
src/sdk/docs/SettingsApi.md
shabbywu/bk-user
8ea590958a5c6dd3c71d0b72e1d4866ce327efda
[ "MIT" ]
null
null
null
src/sdk/docs/SettingsApi.md
shabbywu/bk-user
8ea590958a5c6dd3c71d0b72e1d4866ce327efda
[ "MIT" ]
null
null
null
# bkuser_sdk.SettingsApi

All URIs are relative to *http://localhost:8004/*

Method | HTTP request | Description
------------- | ------------- | -------------
[**v2_settings_create**](SettingsApi.md#v2_settings_create) | **POST** /api/v2/settings/ |
[**v2_settings_delete**](SettingsApi.md#v2_settings_delete) | **DELETE** /api/v2/settings/{lookup_value}/ |
[**v2_settings_list**](SettingsApi.md#v2_settings_list) | **GET** /api/v2/settings/ |
[**v2_settings_partial_update**](SettingsApi.md#v2_settings_partial_update) | **PATCH** /api/v2/settings/{lookup_value}/ |
[**v2_settings_read**](SettingsApi.md#v2_settings_read) | **GET** /api/v2/settings/{lookup_value}/ |
[**v2_settings_update**](SettingsApi.md#v2_settings_update) | **PUT** /api/v2/settings/{lookup_value}/ |

# **v2_settings_create**
> Setting v2_settings_create(body)

Settings

### Example
```python
from __future__ import print_function
import time
import bkuser_sdk
from bkuser_sdk.rest import ApiException
from pprint import pprint

# create an instance of the API class
api_instance = bkuser_sdk.SettingsApi()
body = bkuser_sdk.SettingCreate()  # SettingCreate |

try:
    api_response = api_instance.v2_settings_create(body)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling SettingsApi->v2_settings_create: %s\n" % e)
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**body** | [**SettingCreate**](SettingCreate.md)| |

### Return type

[**Setting**](Setting.md)

### Authorization

No authorization required

### HTTP request headers

- **Content-Type**: application/json
- **Accept**: application/json

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)

# **v2_settings_delete**
> v2_settings_delete(lookup_value, fields=fields, lookup_field=lookup_field)

Delete an object

### Example
```python
from __future__ import print_function
import time
import bkuser_sdk
from bkuser_sdk.rest import ApiException
from pprint import pprint

# create an instance of the API class
api_instance = bkuser_sdk.SettingsApi()
lookup_value = 'lookup_value_example'  # str |
fields = 'fields_example'  # str | Fields to return for the object; multiple allowed, comma-separated, e.g. username,status,id (optional)
lookup_field = 'lookup_field_example'  # str | The lookup field, i.e. the field that lookup_value belongs to, e.g. username (optional)

try:
    api_instance.v2_settings_delete(lookup_value, fields=fields, lookup_field=lookup_field)
except ApiException as e:
    print("Exception when calling SettingsApi->v2_settings_delete: %s\n" % e)
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**lookup_value** | **str**| |
**fields** | **str**| Fields to return for the object; multiple allowed, comma-separated, e.g. username,status,id | [optional]
**lookup_field** | **str**| The lookup field, i.e. the field that lookup_value belongs to, e.g. username | [optional]

### Return type

void (empty response body)

### Authorization

No authorization required

### HTTP request headers

- **Content-Type**: Not defined
- **Accept**: Not defined

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)

# **v2_settings_list**
> list[Setting] v2_settings_list(category_id, key=key, namespace=namespace, region=region, domain=domain)

Settings

### Example
```python
from __future__ import print_function
import time
import bkuser_sdk
from bkuser_sdk.rest import ApiException
from pprint import pprint

# create an instance of the API class
api_instance = bkuser_sdk.SettingsApi()
category_id = 56  # int |
key = 'key_example'  # str | (optional)
namespace = 'namespace_example'  # str | (optional)
region = 'region_example'  # str | (optional)
domain = 'domain_example'  # str | (optional)

try:
    api_response = api_instance.v2_settings_list(category_id, key=key, namespace=namespace, region=region, domain=domain)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling SettingsApi->v2_settings_list: %s\n" % e)
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**category_id** | **int**| |
**key** | **str**| | [optional]
**namespace** | **str**| | [optional]
**region** | **str**| | [optional]
**domain** | **str**| | [optional]

### Return type

[**list[Setting]**](Setting.md)

### Authorization

No authorization required

### HTTP request headers

- **Content-Type**: Not defined
- **Accept**: application/json

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)

# **v2_settings_partial_update**
> Setting v2_settings_partial_update(body, lookup_value)

Settings

### Example
```python
from __future__ import print_function
import time
import bkuser_sdk
from bkuser_sdk.rest import ApiException
from pprint import pprint

# create an instance of the API class
api_instance = bkuser_sdk.SettingsApi()
body = bkuser_sdk.SettingUpdate()  # SettingUpdate |
lookup_value = 'lookup_value_example'  # str |

try:
    api_response = api_instance.v2_settings_partial_update(body, lookup_value)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling SettingsApi->v2_settings_partial_update: %s\n" % e)
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**body** | [**SettingUpdate**](SettingUpdate.md)| |
**lookup_value** | **str**| |

### Return type

[**Setting**](Setting.md)

### Authorization

No authorization required

### HTTP request headers

- **Content-Type**: application/json
- **Accept**: application/json

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)

# **v2_settings_read**
> Setting v2_settings_read(lookup_value, fields=fields, lookup_field=lookup_field)

Get details

### Example
```python
from __future__ import print_function
import time
import bkuser_sdk
from bkuser_sdk.rest import ApiException
from pprint import pprint

# create an instance of the API class
api_instance = bkuser_sdk.SettingsApi()
lookup_value = 'lookup_value_example'  # str |
fields = 'fields_example'  # str | Fields to return for the object; multiple allowed, comma-separated, e.g. username,status,id (optional)
lookup_field = 'lookup_field_example'  # str | The lookup field, i.e. the field that lookup_value belongs to, e.g. username (optional)

try:
    api_response = api_instance.v2_settings_read(lookup_value, fields=fields, lookup_field=lookup_field)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling SettingsApi->v2_settings_read: %s\n" % e)
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**lookup_value** | **str**| |
**fields** | **str**| Fields to return for the object; multiple allowed, comma-separated, e.g. username,status,id | [optional]
**lookup_field** | **str**| The lookup field, i.e. the field that lookup_value belongs to, e.g. username | [optional]

### Return type

[**Setting**](Setting.md)

### Authorization

No authorization required

### HTTP request headers

- **Content-Type**: Not defined
- **Accept**: application/json

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)

# **v2_settings_update**
> Setting v2_settings_update(body, lookup_value)

Settings

### Example
```python
from __future__ import print_function
import time
import bkuser_sdk
from bkuser_sdk.rest import ApiException
from pprint import pprint

# create an instance of the API class
api_instance = bkuser_sdk.SettingsApi()
body = bkuser_sdk.SettingUpdate()  # SettingUpdate |
lookup_value = 'lookup_value_example'  # str |

try:
    api_response = api_instance.v2_settings_update(body, lookup_value)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling SettingsApi->v2_settings_update: %s\n" % e)
```

### Parameters

Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**body** | [**SettingUpdate**](SettingUpdate.md)| |
**lookup_value** | **str**| |

### Return type

[**Setting**](Setting.md)

### Authorization

No authorization required

### HTTP request headers

- **Content-Type**: application/json
- **Accept**: application/json

[[Back to top]](#) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to Model list]](../README.md#documentation-for-models) [[Back to README]](../README.md)
27.888889
180
0.685942
eng_Latn
0.288189
28eb8ac709118d4b170636e4799c74ccdc293275
77
md
Markdown
README.md
prasad223/CSE574_Project1
af783d611c8ad2ba6f2e09d1cfe597e6b63e9104
[ "MIT" ]
null
null
null
README.md
prasad223/CSE574_Project1
af783d611c8ad2ba6f2e09d1cfe597e6b63e9104
[ "MIT" ]
null
null
null
README.md
prasad223/CSE574_Project1
af783d611c8ad2ba6f2e09d1cfe597e6b63e9104
[ "MIT" ]
null
null
null
# CSE574_Project1 Code and related files for programming assignment 1 CSE574
25.666667
58
0.844156
eng_Latn
0.984119
28ebd2230dcd4b330e78d6d3514427f7d29a4b0c
1,034
md
Markdown
2021/CVE-2021-44692.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
2,340
2022-02-10T21:04:40.000Z
2022-03-31T14:42:58.000Z
2021/CVE-2021-44692.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
19
2022-02-11T16:06:53.000Z
2022-03-11T10:44:27.000Z
2021/CVE-2021-44692.md
justinforbes/cve
375c65312f55c34fc1a4858381315fe9431b0f16
[ "MIT" ]
280
2022-02-10T19:58:58.000Z
2022-03-26T11:13:05.000Z
### [CVE-2021-44692](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-44692)

![](https://img.shields.io/static/v1?label=Product&message=n%2Fa&color=blue) ![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue) ![](https://img.shields.io/static/v1?label=Vulnerability&message=n%2Fa&color=brighgreen)

### Description

BuddyBoss Platform through 1.8.0 allows remote attackers to obtain the email address of each user. When creating a new user, it generates a unique ID for their profile. This UID is their private email address with symbols removed and periods replaced with hyphens. For example, JohnDoe@example.com would become /members/johndoeexample-com, and Jo.test@example.com would become /members/jo-testexample-com. The members list is available to everyone and (in a default configuration) often without authentication. It is therefore trivial to collect a list of email addresses.

### POC

#### Reference
- https://www.cygenta.co.uk/post/buddyboss

#### Github
No PoCs found on GitHub currently.
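The UID derivation described above can be reconstructed from the two worked examples. This is an illustrative Python sketch of the described behavior, not BuddyBoss's actual code; the function name `buddyboss_uid` is hypothetical:

```python
def buddyboss_uid(email):
    """Reproduce the member-slug derivation the description gives:
    lowercase, periods replaced with hyphens, the '@' symbol removed.
    Illustrative reconstruction only, not the vendor's implementation.
    """
    return email.lower().replace(".", "-").replace("@", "")
```

Because the mapping is so simple, a visible member slug leaks the email almost directly: an attacker only has to guess which hyphen was originally the period before the TLD.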
57.444444
571
0.773694
eng_Latn
0.934724
28ebfb6855031742dc47f9c894801b465338e294
3,242
md
Markdown
plumelog-lite/README.md
AliciaHu/plumelog
490756b49b36912f3e3f8ac9ffd554e12d24f315
[ "Apache-2.0" ]
143
2020-06-03T03:22:41.000Z
2022-03-28T06:16:43.000Z
plumelog-lite/README.md
AliciaHu/plumelog
490756b49b36912f3e3f8ac9ffd554e12d24f315
[ "Apache-2.0" ]
34
2020-06-04T03:49:30.000Z
2022-02-28T01:42:41.000Z
plumelog-lite/README.md
AliciaHu/plumelog
490756b49b36912f3e3f8ac9ffd554e12d24f315
[ "Apache-2.0" ]
46
2020-06-03T04:21:23.000Z
2022-03-28T06:16:43.000Z
## plumelog-lite

#### The plumelog-lite edition: no separate deployment needed — just reference it in your project and use it directly
#### Features include log querying, trace (link) tracking, and log management; suited to small single-machine projects. Currently only the springboot + logback / log4j2 combinations are supported

The plumelog version numbers in the examples are placeholders; in practice, use the latest version, shown below

[Latest version: ![Maven Status](https://maven-badges.herokuapp.com/maven-central/com.plumelog/plumelog-lite/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.plumelog/plumelog)

1. Add the dependency

```xml
<dependency>
    <groupId>com.plumelog</groupId>
    <artifactId>plumelog-lite</artifactId>
    <version>3.5.2</version>
</dependency>
```

* Alternatively, reference plumelog-lite-spring-boot-starter, in which case the scan path and static-resource path below do not need to be configured

```xml
<dependency>
    <groupId>com.plumelog</groupId>
    <artifactId>plumelog-lite-spring-boot-starter</artifactId>
    <version>3.5.2</version>
</dependency>
```

2. Configure logback.xml

```xml
<appender name="plumelog" class="com.plumelog.lite.logback.appender.LiteAppender">
    <appName>plumelog</appName>
    <!-- Log storage location -->
    <logPath>/plumelog/lite</logPath>
    <!-- Days to retain logs -->
    <keepDay>30</keepDay>
</appender>
<!-- add the ref -->
<root level="INFO">
    <appender-ref ref="plumelog"/>
</root>
```

3. Add the scan path in your Spring Boot application class. Note: if your project did not previously declare a scan path, do not add only this one — also include your own project's packages, otherwise only plumelog's path will be scanned

```java
@ComponentScan("com.plumelog")
```

Case 1: if the plumelog page comes up blank when you access it, static resource access is not configured; add the following to application.properties:

```properties
spring.mvc.static-path-pattern=/**
spring.resources.static-locations=classpath:/META-INF/resources/,classpath:/resources/,classpath:/static/,classpath:/public/
```

Case 2: interceptors override spring.resources.static-locations; if your project uses interceptors, configure static resource access inside the interceptor configuration. Example:

```java
import com.plumelog.core.PlumeLogTraceIdInterceptor;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
public class TraceIdInterceptorsConfig extends WebMvcConfigurerAdapter{
    private static final String[] CLASSPATH_RESOURCE_LOCATIONS = {"classpath:/META-INF/resources/", "classpath:/resources/",
            "classpath:/static/", "classpath:/public/"};
    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        // this addResourceLocations call is what grants access to the static files
        registry.addResourceHandler("/**").addResourceLocations(CLASSPATH_RESOURCE_LOCATIONS);
    }
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new PlumeLogTraceIdInterceptor());
        super.addInterceptors(registry);
    }
}
```

4. Access

Start your project and open your project's URL followed by plumelog/#/, for example: http://localhost:8083/plumelog/#/ — the trailing /#/ suffix is required

5. Troubleshooting

* Handling "Lock held by this virtual machine"

Some users of version 3.5 see the error: org.apache.lucene.store.LockObtainFailedException: Lock held by this virtual machine

Case 1: with springcloud-alibaba, add System.setProperty("spring.cloud.bootstrap.enabled", "false"); to your startup class. Example:

```java
public static void main(String[] args) {
    System.setProperty("spring.cloud.bootstrap.enabled", "false");
    SpringApplication.run(LogServerStart.class, args);
}
```

Case 2: if you are using logback.xml, rename it to logback-spring.xml

Case 3: use version 3.5.1 or later

* Rolling logs cannot connect because the project is not deployed at the root directory — use version 3.5.2
25.131783
183
0.748612
yue_Hant
0.349703
28ec24fb458f85286b0da73e2368b84ca7e5e0df
1,440
md
Markdown
docs/visual-basic/misc/bc30139.md
olifantix/docs.de-de
a31a14cdc3967b64f434a2055f7de6bf1bb3cda8
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc30139.md
olifantix/docs.de-de
a31a14cdc3967b64f434a2055f7de6bf1bb3cda8
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/bc30139.md
olifantix/docs.de-de
a31a14cdc3967b64f434a2055f7de6bf1bb3cda8
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Error setting assembly manifest option: &lt;error message&gt;' ms.date: 07/20/2015 f1_keywords: - vbc30139 - bc30139 helpviewer_keywords: - BC30139 ms.assetid: f6f32f4b-4d57-4255-9b04-06c6461e0a78 ms.openlocfilehash: 7a6c4f49dac5b5813a26a26625cbbdf6de03bd72 ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 05/04/2018 --- # <a name="error-setting-assembly-manifest-option-lterror-messagegt"></a>Error setting assembly manifest option: &lt;error message&gt; The Visual Basic compiler calls the Assembly Linker (Al.exe, also known as Alink) to generate an assembly with a manifest. The linker has reported an error locating the key file or key name used to sign the assembly. **Error ID:** BC30139 ## <a name="to-correct-this-error"></a>To correct this error 1. Examine the quoted error message and consult [Al.exe (Assembly Linker)](../../framework/tools/al-exe-assembly-linker.md) for further explanation and advice. 2. If the error persists, gather information about the circumstances and notify Microsoft Product Support Services. ## <a name="see-also"></a>See also [Al.exe (Assembly Linker)](../../framework/tools/al-exe-assembly-linker.md)
46.451613
252
0.76875
deu_Latn
0.932772
28ec3ecd59454a1c367b9cd157ee11a47c9bb5bf
833
md
Markdown
docs/framework/wcf/diagnostics/event-logging/sslnoprivatekey.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/event-logging/sslnoprivatekey.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/event-logging/sslnoprivatekey.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: SslNoPrivateKey ms.date: 03/30/2017 ms.assetid: 67eef8f6-360d-42f2-a3ac-2bb17329f247 ms.openlocfilehash: f15a5bbeb1b58f22de56d7d3aa6288ee8e81a036 ms.sourcegitcommit: d2e1dfa7ef2d4e9ffae3d431cf6a4ffd9c8d378f ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 09/07/2019 ms.locfileid: "70796113" --- # <a name="sslnoprivatekey"></a>SslNoPrivateKey Id: 154 Severity: Error Category: TransactionBridge ## <a name="description"></a>Description This event indicates that an identity certificate with the specified subject name and thumbprint has no private key. The event lists the process name and process ID. ## <a name="see-also"></a>See also - [Event Logging](index.md) - [Events General Reference](events-general-reference.md)
32.038462
203
0.786315
deu_Latn
0.839382
28ec5bc862d54ae924c6b36502682b525a5cf450
343
md
Markdown
catalog/last-cat/en-US_last-cat.md
htron-dev/baka-db
cb6e907a5c53113275da271631698cd3b35c9589
[ "MIT" ]
3
2021-08-12T20:02:29.000Z
2021-09-05T05:03:32.000Z
catalog/last-cat/en-US_last-cat.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
8
2021-07-20T00:44:48.000Z
2021-09-22T18:44:04.000Z
catalog/last-cat/en-US_last-cat.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
2
2021-07-19T01:38:25.000Z
2021-07-29T08:10:29.000Z
# Last Cat ![last-cat](https://cdn.myanimelist.net/images/manga/4/162919.jpg) - **type**: manga - **volumes**: 1 - **chapters**: 5 - **original-name**: LAST CAT - **start-date**: 2010-09-30 ## Tags - yaoi ## Authors - Aoi - Levin (Story & Art) ## Links - [My Anime list](https://myanimelist.net/manga/25908/Last_Cat)
14.913043
66
0.591837
yue_Hant
0.186998
28ecfcae2f622c431128b5c109dd0160b3fb7057
13,041
md
Markdown
docs/framework/unmanaged-api/profiling/corprof-e-unsupported-call-sequence-hresult.md
soelax/docs.de-de
17beb71b6711590e35405a1086e6ac4eac24c207
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/profiling/corprof-e-unsupported-call-sequence-hresult.md
soelax/docs.de-de
17beb71b6711590e35405a1086e6ac4eac24c207
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/profiling/corprof-e-unsupported-call-sequence-hresult.md
soelax/docs.de-de
17beb71b6711590e35405a1086e6ac4eac24c207
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: CORPROF_E_UNSUPPORTED_CALL_SEQUENCE HRESULT ms.date: 03/30/2017 f1_keywords: - CORPROF_E_UNSUPPORTED_CALL_SEQUENCE helpviewer_keywords: - CORPROF_E_UNSUPPORTED_CALL_SEQUENCE HRESULT [.NET Framework profiling] ms.assetid: f2fc441f-d62e-4f72-a011-354ea13c8c59 author: mairaw ms.author: mairaw ms.openlocfilehash: 18db2c848c8cb5e235eedf8c10b4008b9916ca1c ms.sourcegitcommit: 6b308cf6d627d78ee36dbbae8972a310ac7fd6c8 ms.translationtype: MT ms.contentlocale: de-DE ms.lasthandoff: 01/23/2019 ms.locfileid: "54608361" ---
# <a name="corprofeunsupportedcallsequence-hresult"></a>CORPROF_E_UNSUPPORTED_CALL_SEQUENCE HRESULT

The CORPROF_E_UNSUPPORTED_CALL_SEQUENCE HRESULT was introduced in the .NET Framework version 2.0. The [!INCLUDE[net_v40_long](../../../../includes/net-v40-long-md.md)] returns this HRESULT in two scenarios:

- When a hijacking profiler forcibly resets a thread's register context at an arbitrary time, so that the thread tries to access structures that are in an inconsistent state.

- When a profiler tries to call an Info method that triggers a garbage collection from a callback method that prohibits garbage collection.

These two scenarios are discussed in the following sections.

## <a name="hijacking-profilers"></a>Hijacking profilers

(This scenario is primarily a problem for hijacking profilers, although there are cases in which non-hijacking profilers can see this HRESULT.) In this scenario, a hijacking profiler forcibly resets a thread's register context at an arbitrary time so that the thread enters profiler code or re-enters the common language runtime (CLR) through an [ICorProfilerInfo](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-interface.md) method. Many of the IDs that the profiling API provides point to data structures in the CLR. Many `ICorProfilerInfo` calls simply read information from these data structures and return it. However, the CLR may change elements in these structures as it runs, and it uses locks to do so. Suppose the CLR already held (or was trying to acquire) a lock at the time the profiler hijacked the thread. If the thread re-enters the CLR and tries to take more locks, or to inspect structures that were being modified, those structures may be in an inconsistent state. Deadlocks and access violations can easily occur in such situations.

In general, a non-hijacking profiler that executes code inside an [ICorProfilerCallback](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-interface.md) method and calls an `ICorProfilerInfo` method with valid parameters will not deadlock or get an access violation. For example, profiler code running in the [ICorProfilerCallback::ClassLoadFinished](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-classloadfinished-method.md) method may request information about the class by calling the [ICorProfilerInfo2::GetClassIDInfo2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-getclassidinfo2-method.md) method. The code may get a CORPROF_E_DATAINCOMPLETE HRESULT to indicate that the information is not available, but it will not deadlock or get an access violation. This class of calls to `ICorProfilerInfo` is called synchronous, because they are made from within an `ICorProfilerCallback` method.

A managed thread that executes code outside an `ICorProfilerCallback` method, however, is considered to be making an asynchronous call. In the .NET Framework version 1, it was difficult to determine what could happen during an asynchronous call. The call could deadlock, crash, or return an invalid answer. The .NET Framework version 2.0 introduced some simple checks to avoid this problem. In the .NET Framework 2.0, if you call an unsafe `ICorProfilerInfo` function asynchronously, it fails with a CORPROF_E_UNSUPPORTED_CALL_SEQUENCE HRESULT.

Asynchronous calls are generally not safe. However, the following methods are safe and specifically support asynchronous calls:

- [GetEventMask](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-geteventmask-method.md)
- [SetEventMask](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-seteventmask-method.md)
- [GetCurrentThreadID](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getcurrentthreadid-method.md)
- [GetThreadContext](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getthreadcontext-method.md)
- [GetThreadAppDomain](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-getthreadappdomain-method.md)
- [GetFunctionFromIP](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getfunctionfromip-method.md)
- [GetFunctionInfo](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getfunctioninfo-method.md)
- [GetFunctionInfo2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-getfunctioninfo2-method.md)
- [GetCodeInfo](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getcodeinfo-method.md)
- [GetCodeInfo2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-getcodeinfo2-method.md)
- [GetModuleInfo](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getmoduleinfo-method.md)
- [GetClassIDInfo](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getclassidinfo-method.md)
- [GetClassIDInfo2](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-getclassidinfo2-method.md)
- [IsArrayClass](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-isarrayclass-method.md)
- [SetFunctionIDMapper](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-setfunctionidmapper-method.md)
- [DoStackSnapshot](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-dostacksnapshot-method.md)

For more information, see the entry [Why we have CORPROF_E_UNSUPPORTED_CALL_SEQUENCE](https://go.microsoft.com/fwlink/?LinkId=169156) in the CLR profiling API blog.

## <a name="triggering-garbage-collections"></a>Triggering garbage collections

This scenario involves a profiler that is running inside a callback method (for example, one of the `ICorProfilerCallback` methods) that prohibits garbage collection. If the profiler tries to call an Info method (for example, a method on the `ICorProfilerInfo` interface) that could trigger a garbage collection, the Info method fails with a CORPROF_E_UNSUPPORTED_CALL_SEQUENCE HRESULT.

The following table shows the callback methods that prohibit garbage collection, and the Info methods that can trigger a garbage collection. If the profiler is running inside one of the listed callback methods and calls one of the listed Info methods, that Info method fails with a CORPROF_E_UNSUPPORTED_CALL_SEQUENCE HRESULT.

|Callback methods that prohibit garbage collections|Info methods that can trigger a garbage collection|
|------------------------------------------------------|------------------------------------------------------------|
|[ThreadAssignedToOSThread](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-threadassignedtoosthread-method.md)<br /><br /> [ExceptionUnwindFunctionEnter](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-exceptionunwindfunctionenter-method.md)<br /><br /> [ExceptionUnwindFunctionLeave](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-exceptionunwindfunctionleave-method.md)<br /><br /> [ExceptionUnwindFinallyEnter](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-exceptionunwindfinallyenter-method.md)<br /><br /> [ExceptionUnwindFinallyLeave](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-exceptionunwindfinallyleave-method.md)<br /><br /> [ExceptionCatcherEnter](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-exceptioncatcherenter-method.md)<br /><br /> [RuntimeSuspendStarted](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimesuspendstarted-method.md)<br /><br /> [RuntimeSuspendFinished](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimesuspendfinished-method.md)<br /><br /> [RuntimeSuspendAborted](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimesuspendaborted-method.md)<br /><br /> [RuntimeThreadSuspended](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimethreadsuspended-method.md)<br /><br /> [RuntimeThreadResumed](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-runtimethreadresumed-method.md)<br /><br /> [MovedReferences](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-movedreferences-method.md)<br /><br /> [ObjectReferences](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-objectreferences-method.md)<br /><br /> [ObjectsAllocatedByClass](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-objectsallocatedbyclass-method.md)<br /><br /> [RootReferences2](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-rootreferences-method.md)<br /><br /> [HandleCreated](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback2-handlecreated-method.md)<br /><br /> [HandleDestroyed](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback2-handledestroyed-method.md)<br /><br /> [GarbageCollectionStarted](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback2-garbagecollectionstarted-method.md)<br /><br /> [GarbageCollectionFinished](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback2-garbagecollectionfinished-method.md)|[GetILFunctionBodyAllocator](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getilfunctionbodyallocator-method.md)<br /><br /> [SetILFunctionBody](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-setilfunctionbody-method.md)<br /><br /> [SetILInstrumentedCodeMap](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-setilinstrumentedcodemap-method.md)<br /><br /> [ForceGC](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-forcegc-method.md)<br /><br /> [GetClassFromToken](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getclassfromtoken-method.md)<br /><br /> [GetClassFromTokenAndTypeArgs](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-getclassfromtokenandtypeargs-method.md)<br /><br /> [GetFunctionFromTokenAndTypeArgs](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-getfunctionfromtokenandtypeargs-method.md)<br /><br /> [GetAppDomainInfo](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-getappdomaininfo-method.md)<br /><br /> [EnumModules](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo3-enummodules-method.md)<br /><br /> [RequestProfilerDetach](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo3-requestprofilerdetach-method.md)<br /><br /> [GetAppDomainsContainingModule](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo3-getappdomainscontainingmodule-method.md)|

## <a name="see-also"></a>See also

- [ICorProfilerCallback Interface](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback-interface.md)
- [ICorProfilerCallback2 Interface](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback2-interface.md)
- [ICorProfilerCallback3 Interface](../../../../docs/framework/unmanaged-api/profiling/icorprofilercallback3-interface.md)
- [ICorProfilerInfo Interface](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo-interface.md)
- [ICorProfilerInfo2 Interface](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo2-interface.md)
- [ICorProfilerInfo3 Interface](../../../../docs/framework/unmanaged-api/profiling/icorprofilerinfo3-interface.md)
- [Profiling Interfaces](../../../../docs/framework/unmanaged-api/profiling/profiling-interfaces.md)
- [Profiling](../../../../docs/framework/unmanaged-api/profiling/index.md)
141.75
4,181
0.776091
deu_Latn
0.641026
28ed0d5b48fa0c7d48d478500cfda54c54535728
2,733
md
Markdown
README.md
taoyuan/mqttr
510b01ba28ccdc736ee7b48775d717298744fd79
[ "MIT" ]
2
2018-12-21T04:52:54.000Z
2022-01-20T10:17:52.000Z
README.md
taoyuan/mqttr
510b01ba28ccdc736ee7b48775d717298744fd79
[ "MIT" ]
7
2017-03-09T17:27:01.000Z
2020-07-29T02:01:52.000Z
README.md
taoyuan/mqttr
510b01ba28ccdc736ee7b48775d717298744fd79
[ "MIT" ]
5
2017-03-09T17:21:26.000Z
2021-05-03T08:17:35.000Z
# mqttr [![NPM version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url] [![Dependency Status][daviddm-image]][daviddm-url] [![Coverage percentage][coveralls-image]][coveralls-url] > A routable mqtt library based on mqtt.js ## Installation ```sh $ npm i mqttr ``` ## Usage ```typescript import {connect, Message} from 'mqttr'; // eslint-disable-next-line @typescript-eslint/no-floating-promises (async () => { // You should start a mqtt server at 1883 before or after run this script const client = connect('mqtt://localhost'); client.on('connect', function () { console.log('connect'); }); client.on('reconnect', function () { console.log('reconnect'); }); client.on('close', function () { console.log('close'); }); client.on('offline', function () { console.log('offline'); }); client.on('error', function (err: Error) { throw err; }); // full params handler await client.subscribe( '/users/:userId/message/:messageId/:splats*', (topic: string, payload: any, message?: Message) => { message = message!; console.log('-------------------------------------------------'); console.log('topic :', topic); // => /users/yvan/message/4321/ping console.log('message:', payload); // => { hello: '🦄' } console.log('params :', message.params); // => { userId: 'yvan', messageId: 4321, splats: [ 'ping' ] } console.log('path :', message.path); // => '/users/:userId/message/:messageId/:splats*' console.log('packet :', message.packet); // => {...} packet received packet, as defined in mqtt-packet console.log(); }, ); // one context param handler await client.subscribe( '/users/:userId/message/:messageId/:splats*', (message: Message) => { console.log('-------------------------------------------------'); console.log(message); console.log(); }, ); await client.ready(); await client.publish('/users/yvan/message/4321/ping', {hello: '🦄'}); // eslint-disable-next-line @typescript-eslint/no-misused-promises setTimeout(() => client.end(true), 10); })(); ``` ## Topic Patterns See 
[path-to-regexp](https://github.com/pillarjs/path-to-regexp) ## License MIT © [taoyuan](towyuan#outlook.com) [npm-image]: https://badge.fury.io/js/mqttr.svg [npm-url]: https://npmjs.org/package/mqttr [travis-image]: https://travis-ci.org/taoyuan/mqttr.svg?branch=master [travis-url]: https://travis-ci.org/taoyuan/mqttr [daviddm-image]: https://david-dm.org/taoyuan/mqttr.svg?theme=shields.io [daviddm-url]: https://david-dm.org/taoyuan/mqttr [coveralls-image]: https://coveralls.io/repos/taoyuan/mqttr/badge.svg [coveralls-url]: https://coveralls.io/r/taoyuan/mqttr
28.768421
108
0.621295
eng_Latn
0.181777
28ed6eaa0ecd40e8b80f5e0a9e7f207991850ca1
3,568
md
Markdown
_research/thesis.md
zooba/zooba.github.io
3ef15d441f56cd4ef8b4b97059fbd90b9e3c15cb
[ "MIT" ]
1
2022-03-22T15:02:08.000Z
2022-03-22T15:02:08.000Z
_research/thesis.md
zooba/zooba.github.io
3ef15d441f56cd4ef8b4b97059fbd90b9e3c15cb
[ "MIT" ]
null
null
null
_research/thesis.md
zooba/zooba.github.io
3ef15d441f56cd4ef8b4b97059fbd90b9e3c15cb
[ "MIT" ]
null
null
null
--- layout: page title: Ph.D Thesis description: My thesis abstract, full text and supporting files. slug: thesis redirect_from: /blog/research/thesis sort_key: A --- This page contains downloads and errata relating to my PhD thesis, completed in 2012. <!--more--> Any questions or feedback can be sent to my email address shown [here](/about). # Disambiguating Evolutionary Algorithms: Composition and Communication with ESDL [![PDF](/assets/pdf.png) Full PDF](/papers/thesis/dower2012_phd_thesis.pdf) (4.1MB, 345 pages) ## Abstract Evolutionary Computation (EC) has been developing as a field for many years. Encompassing a range of intelligent and adaptive search, optimisation and decision-making algorithms, there is a wealth of potential for EC to be applied to problems in many domains. People who are new to the field may want to learn, understand or apply EC, while those who are more experienced are looking to extend, develop and teach. Unfortunately, it is not clear how best to approach these tasks. Ideally, students should have guidance through the field, including how EC works and approaches to algorithm design; researchers should have canonical structures, implementations, presentation formats and comparison frameworks; developers should have easy access to interested users, as well as the potential to differentiate their work in terms of performance, flexibility and aesthetics. This thesis provides a model of the structure of Evolutionary Algorithms (EAs) based on operator composition. A small number of discrete element types and their interactions are defined, forming an algorithm architecture that supports existing concepts and provides direction for those looking to understand, use and improve EAs. A simple description language based on these elements is created to support communication between authors, readers, designers and software. 
Implementation concerns, ideas and potential are discussed to assist those with an interest in developing the simulation tools and frameworks used within the field. The model and description language are shown to concisely and unambiguously describe EAs in a directly publishable form. ## Related Files * [AMO_2012_04_07.zip](/papers/thesis/AMO_2012_04_07.zip) (39KB) `esec` configuration and results for the Angry Mob Optimisation algorithm from Chapter&nbsp;5 * [Comparison_Code_2012_04_07.zip](/papers/thesis/Comparison_Code_2012_04_07.zip) (50KB) Code files from Chapter&nbsp;6/Appendix&nbsp;F * [ECJ_2012_04_07.zip](/papers/thesis/ECJ_2012_04_07.zip) (10KB) Parameter and source files for [ECJ](http://cs.gmu.edu/~eclab/projects/ecj/) from Chapter&nbsp;6/Appendix&nbsp;F * [FakeEALib_2012_04_07.zip](/papers/thesis/FakeEALib_2012_04_07.zip) (29KB) Visual Studio 2010 project for FakeEALib from Chapter&nbsp;6 ## Links * [esec](https://github.com/zooba/esec), the ESDL-based framework shown in Chapter&nbsp;6 and Appendix&nbsp;E. * [esdlc](https://github.com/zooba/esdlc), the ESDL compiler used in `esec` and shown in appendices C and D. * [esecui](https://github.com/zooba/esecui), the graphical front-end for `esec` mentioned in Appendix&nbsp;E. * [Python](http://www.python.org/), the programming language used by `esec` and examples throughout. * [Visual Studio 11](http://go.microsoft.com/fwlink/?LinkId=190957), required for the C++ AMP support described in Appendix&nbsp;D * [ECJ](http://cs.gmu.edu/~eclab/projects/ecj/), reviewed in Chapter&nbsp;5 and used in Chapter&nbsp;6. * [HeuristicLab](http://dev.heuristiclab.com/), reviewed in Chapter&nbsp;5. ## Errata None yet...
57.548387
177
0.792601
eng_Latn
0.985194
28eda98e34f8446c4c73a955d1872b98a05c1204
189
md
Markdown
content/projects/weather-emoji.md
bradp/bradparbs.com
3424995ffd252cde30bb8b32a5b61107219431d3
[ "MIT" ]
2
2020-11-29T18:57:41.000Z
2021-10-06T03:20:55.000Z
content/projects/weather-emoji.md
bradp/bradparbs.com
3424995ffd252cde30bb8b32a5b61107219431d3
[ "MIT" ]
2
2021-09-27T19:49:21.000Z
2022-03-30T21:54:01.000Z
content/projects/weather-emoji.md
bradp/bradparbs.com
3424995ffd252cde30bb8b32a5b61107219431d3
[ "MIT" ]
null
null
null
+++ title = "CLI Weather Emoji" description = "grab the weather from your command line" author = "Brad Parbs" type = "page" kind = "tool" link = "https://github.com/bradp/weatheremoji" +++
21
55
0.687831
eng_Latn
0.824317
28edca77e4936b007c8d2d239e0dc58e272edc48
2,622
md
Markdown
docs/framework/unmanaged-api/debugging/icordebugcontroller-continue-method.md
nicolaiarocci/docs.it-it
74867e24b2aeb9dbaf0a908eabd8918bc780d7b4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugcontroller-continue-method.md
nicolaiarocci/docs.it-it
74867e24b2aeb9dbaf0a908eabd8918bc780d7b4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/unmanaged-api/debugging/icordebugcontroller-continue-method.md
nicolaiarocci/docs.it-it
74867e24b2aeb9dbaf0a908eabd8918bc780d7b4
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: ICorDebugController::Continue Method ms.date: 03/30/2017 api_name: - ICorDebugController.Continue api_location: - mscordbi.dll api_type: - COM f1_keywords: - ICorDebugController::Continue helpviewer_keywords: - Continue method [.NET Framework debugging] - ICorDebugController::Continue method [.NET Framework debugging] ms.assetid: 8684cd06-ad3e-48ef-832e-15320e1f43a2 topic_type: - apiref author: rpetrusha ms.author: ronpet ms.openlocfilehash: 7eacffe5769bc77ab626f6adbc99db1137da565f ms.sourcegitcommit: 5b6d778ebb269ee6684fb57ad69a8c28b06235b9 ms.translationtype: MT ms.contentlocale: it-IT ms.lasthandoff: 04/08/2019 ms.locfileid: "59146809" ---
# <a name="icordebugcontrollercontinue-method"></a>ICorDebugController::Continue Method

Resumes execution of managed threads after a call to the [Stop method](../../../../docs/framework/unmanaged-api/debugging/icordebugcontroller-stop-method.md).

## <a name="syntax"></a>Syntax

```
HRESULT Continue (
    [in] BOOL fIsOutOfBand
);
```

## <a name="parameters"></a>Parameters

`fIsOutOfBand`
[in] Set to `true` if continuing from an out-of-band event; otherwise, set to `false`.

## <a name="remarks"></a>Remarks

`Continue` continues the process after a call to the `ICorDebugController::Stop` method.

When doing mixed-mode debugging, do not call `Continue` on the Win32 event thread unless you are continuing from an out-of-band event.

An *in-band event* is either a managed event or a normal unmanaged event during which the debugger supports interaction with the managed state of the process. In this case, the debugger receives the [ICorDebugUnmanagedCallback](../../../../docs/framework/unmanaged-api/debugging/icordebugunmanagedcallback-debugevent-method.md) callback with its `fOutOfBand` parameter set to `false`.

An *out-of-band event* is an unmanaged event during which interaction with the managed state of the process is impossible while the process is stopped because of the event. In this case, the debugger receives the `ICorDebugUnmanagedCallback::DebugEvent` callback with its `fOutOfBand` parameter set to `true`.

## <a name="requirements"></a>Requirements

**Platforms:** See [System Requirements](../../../../docs/framework/get-started/system-requirements.md).

**Header:** CorDebug.idl, CorDebug.h

**Library:** CorGuids.lib

**.NET Framework Versions:** [!INCLUDE[net_current_v10plus](../../../../includes/net-current-v10plus-md.md)]

## <a name="see-also"></a>See also
42.983607
404
0.749047
ita_Latn
0.841726
28ee3a6b3834f28e77068129e5d515dc8bf09c33
860
md
Markdown
packages/ui-kit/docs/ui-kit.iinputblock.md
robogenixaiorg/RoboChatComponents
56535e2dd129d636c56565e0aab96a884b5bd989
[ "MIT" ]
1
2020-08-13T11:47:27.000Z
2020-08-13T11:47:27.000Z
packages/ui-kit/docs/ui-kit.iinputblock.md
robogenixaiorg/RoboChatComponents
56535e2dd129d636c56565e0aab96a884b5bd989
[ "MIT" ]
null
null
null
packages/ui-kit/docs/ui-kit.iinputblock.md
robogenixaiorg/RoboChatComponents
56535e2dd129d636c56565e0aab96a884b5bd989
[ "MIT" ]
null
null
null
<!-- Do not edit this file. It is automatically generated by API Documenter. --> [Home](./index.md) &gt; [@rocket.chat/ui-kit](./ui-kit.md) &gt; [IInputBlock](./ui-kit.iinputblock.md) ## IInputBlock interface <b>Signature:</b> ```typescript export interface IInputBlock extends IBlock ``` <b>Extends:</b> [IBlock](./ui-kit.iblock.md) ## Properties | Property | Type | Description | | --- | --- | --- | | [element](./ui-kit.iinputblock.element.md) | [InputElement](./ui-kit.inputelement.md) | | | [hint](./ui-kit.iinputblock.hint.md) | [IPlainText](./ui-kit.iplaintext.md) | | | [label](./ui-kit.iinputblock.label.md) | [IPlainText](./ui-kit.iplaintext.md) | | | [optional](./ui-kit.iinputblock.optional.md) | boolean | | | [type](./ui-kit.iinputblock.type.md) | [ElementType.INPUT](./ui-kit.elementtype.input.md) | |
35.833333
103
0.630233
kor_Hang
0.158751
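The `IInputBlock` record above documents only a bare property table (the description cells are empty). As a hedged illustration of what an object satisfying that interface could look like — using simplified local stand-ins for the ui-kit types (`IBlock`, `IPlainText`, `InputElement`, and `ElementType` are sketched here as assumptions, not imported from `@rocket.chat/ui-kit`) — consider:

```typescript
// Minimal local stand-ins for the ui-kit types (assumptions, not the real package).
enum ElementType {
  INPUT = 'input',
  PLAIN_TEXT_INPUT = 'plain_text_input',
}

interface IBlock {
  type: ElementType;
  blockId?: string;
}

interface IPlainText {
  type: 'plain_text';
  text: string;
}

// Stand-in for one member of the InputElement union.
interface IPlainTextInputElement {
  type: ElementType.PLAIN_TEXT_INPUT;
  actionId: string;
}

// Mirrors the documented property table: element, hint, label, optional, type.
interface IInputBlock extends IBlock {
  type: ElementType.INPUT;
  element: IPlainTextInputElement;
  hint: IPlainText;
  label: IPlainText;
  optional: boolean;
}

const nameInput: IInputBlock = {
  type: ElementType.INPUT,
  blockId: 'user-name',
  label: { type: 'plain_text', text: 'Your name' },
  hint: { type: 'plain_text', text: 'First and last name' },
  optional: false,
  element: { type: ElementType.PLAIN_TEXT_INPUT, actionId: 'name-input' },
};

console.log(nameInput.type, nameInput.label.text);
```

The field names here follow the property table in the record; the concrete element shape and identifiers (`actionId`, `blockId` values) are illustrative only.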
28eeab02a99a8359b596f8edbdc3491df5fd9f5e
1,567
md
Markdown
Visual Styles/PresentationTheme.Aero/CHANGELOG.md
alexdesign001/UX
1709446a82a8f2c01c0e6d95a29ac512439df1b2
[ "MS-PL" ]
2
2020-10-19T12:20:50.000Z
2020-12-05T08:10:08.000Z
Visual Styles/PresentationTheme.Aero/CHANGELOG.md
alexdesign001/UX
1709446a82a8f2c01c0e6d95a29ac512439df1b2
[ "MS-PL" ]
null
null
null
Visual Styles/PresentationTheme.Aero/CHANGELOG.md
alexdesign001/UX
1709446a82a8f2c01c0e6d95a29ac512439df1b2
[ "MS-PL" ]
null
null
null
# Changelog All notable changes to this project will be documented in this file. ## [0.5.0] - 2019-08-12 ### Fixed - Themes: ThemeManager support for .NET 4.8 ### Added - Themes: Support for .NET Core (3.0 preview 7) - Source Link for PDB symbols - UxThemeEx: New UxOpenThemeDataForDpi function ### Changed - Tools are built with .NET Core ## [0.4.0] - 2018-09-02 ### Added - All themes: Proper TabControl rendering for non-standard TabStripPlacement. ### Fixed - Aero10: A lone TabItem's height would be 1px smaller than with adjacent tabs. ## [0.3.0] - 2018-09-01 ### Fixed - All themes: TreeView Background now properly extends to its Border. Previously a non-zero Padding on a TreeView would leave a gap. - ThemePreviewer: Do not crash when using unknown/newer versions of uxtheme.dll. Attempt to load internal addresses using debug symbols. ## [0.2.0] - 2018-02-23 ### Added - Windows 8/8.1 Aero, Aero Lite and High Contrast themes ### Fixed - Theme assemblies (e.g., PresentationTheme.Aero.Win10.dll) are now copied automatically to the output directory as if they were directly referenced in the project. - Aero10: Toolbar ComboBox border renders properly over the dropdown button - Aero10: Calendar Today cell uses the proper border - Aero10: The focus rectangle in AeroLite and HighContrast was 1px too high. - Aero10: Aero and AeroLite had the TabItem focus rectangle start with a gap instead of a filled pixel. - HighContrast10: Fix Explorer.TreeViewItem border on Windows 1709+ ## [0.1.6444.1133] - 2017-08-23 Initial release.
32.645833
80
0.741544
eng_Latn
0.95893
28efc8619fcfa2433e3f8bbbd6c588421df5ec03
3,776
md
Markdown
README.md
jphacks/TK_1810
0c47c2a0eaeea47067e066a43def7ed079f77d2c
[ "MIT" ]
6
2018-11-05T15:30:37.000Z
2020-10-27T09:16:03.000Z
README.md
jphacks/TK_1810
0c47c2a0eaeea47067e066a43def7ed079f77d2c
[ "MIT" ]
1
2018-10-28T02:02:25.000Z
2018-10-28T02:02:25.000Z
README.md
jphacks/TK_1810
0c47c2a0eaeea47067e066a43def7ed079f77d2c
[ "MIT" ]
3
2019-10-18T10:21:54.000Z
2021-10-30T00:54:39.000Z
# ばえるーポン [![Alt text](https://user-images.githubusercontent.com/25325947/47612006-a6128480-dab4-11e8-8c0d-e15416ea2706.png)](https://youtu.be/9m6MVkPBhqg) [Youtubeリンク](https://youtu.be/9m6MVkPBhqg) ## 製品概要 ### 魅力 X Tech ### 背景(製品開発のきっかけ、課題等) 「SNSに写真を載せた時に目を引くような写真になっている」ことを指す、 **インスタ映え(インスタ映え)** という言葉が当たり前に使われるようになりました。 現在、飲食店に行って、食べ物の写真を撮影し、SNSに投稿する行為は、若い人のみならず老若男女がやっている当たり前の行為です。 我々は、この当たり前の行為はお店側目線で見ると広告のような存在であると考えました。そして、その広告(SNSへのインスタ映えな写真の投稿)の投稿者であるユーザーに対しては、報酬が発生するべきではないかと考えました。 その一方で、SNSには必ずしも良い写真だけが投稿されるわけでなく、飲食店にとってデメリットな「食べカス」や「映りの悪い料理」が投稿されることがあります。 我々は、この問題を解決し、尚且 `飲食店に行く` → `食べ物の写真を撮る` → `SNSに投稿する` という当たり前の行為に報酬が付与されるアプリケーションである「ばえるーポン」を開発しました。 ### 製品説明(具体的な製品の説明) <img src="https://user-images.githubusercontent.com/25325947/47607271-cf9dc280-da58-11e8-8d3a-20022ef2f553.png" width="500"> ばえるーポンは食べ物の写真のインスタ映え度をAIによって判定させ、そのインスタ映え度に応じた割引クーポンを発行するアプリケーションです。 #### ばえるーポン利用のながれ ``` 1.アプリを起動 2.食べ物を撮影する 3.飲食店の情報(位置情報から自動で取得)・食べ物カテゴリー・詳細を入力し、SNSに投稿する 4.SNSに投稿されると、AIサーバーに画像が送信され、インスタ映え度を算出する 5.インスタ映え度に応じた割引クーポン(次回以降利用可能)が発行される 6.クーポンはQRコードで発行され、お店側の端末でQRコードを読み込むことで利用できる ``` ### ばえるーポンの特徴・メリット #### ユーザー視点 ##### 1. いつもやることに報酬が発生する <img src="https://user-images.githubusercontent.com/25325947/47607275-db898480-da58-11e8-86ba-488711e8d820.png" width="500"> 写真を撮り→SNSに投稿するという、今では当たり前の行為にクーポンという報酬が出る。いつもやっている行為に報酬が出るなんて最高じゃないですか? ##### 2. インスタ映えかどうか判定してくれる SNSに写真を投稿する上で、AIがインスタ映え度を算出してくれる。 #### 飲食店視点 ##### 1. 次回以降利用できるクーポンによりリピーターが発生する <img src="https://user-images.githubusercontent.com/25325947/47607279-e80ddd00-da58-11e8-8679-18e4ce6d82c5.png" width="500"> このアプリケーションはクーポンを発行しますが、「次回以降利用できる」というのがポイント、次回以降利用できるということは、もう一度行きやすいという、お得感を生み出します。 ##### 2. SNSでの拡散によって、広告効果が見込まれる <img src="https://user-images.githubusercontent.com/25325947/47607282-eba16400-da58-11e8-9cc7-a737d3944e41.png" width="500"> SNSへの投稿はいわば「広告」です。特に身内のSNS投稿は、通常の広告よりも口コミ効果が強いと考えています。 ##### 3. 
インスタ映え・クオリティの高い写真がSNSに投稿される <img src="https://user-images.githubusercontent.com/25325947/47607292-ee9c5480-da58-11e8-998e-b2afee8de974.png" width="500"> クーポンは写真のクオリティによって発行されるため、ユーザーはクオリティの高い写真を撮影し、SNSに投稿しようとします。これにより、必然的にSNSにクオリティが高い写真が出回ることになります。飲食店にとってデメリットな「食べカス」や「映りの悪い料理」が出回ることはなくなるでしょう。 ### 解決出来ること 1. ユーザーにとってはいつもやっている当たり前の行為にクーポンという報酬が付与される 2. 飲食店は単にクーポンを配るのではなく,SNSの投稿(口コミ)を通じて新たな顧客を獲得でき,再訪も促せる. 3. 飲食店にとってデメリットな「食べカス」や「映りの悪い料理」が投稿されることを防ぐことができる ### 今後の展望 #### サービス的な展望 - 飲食業界以外にもこのサービスを活用する サービス面は、食べ物だけでなく、娯楽施設や観光名所,ファッションなどに応用することで、様々な企業の手助けができるだけでなく、地域の活性化ができ、社会全体を豊かにできると考えています。 #### 技術的な展望 - 写っている食べ物を自動的に分類し認識する 物体検出を更に発展させ,写っている食べ物の種類(ラーメン,カレー,オムライス)や役割(主菜,副菜,コップ)なども自動的に分類できるようにする - インスタ映え度推定の精度向上 このアプリケーションでは撮影された写真がデータベースに蓄積されていきます。利用されればされる程、ユーザーがインスタ映え度を高く出力させようとする写真が手に入るので,このデータを用いることで、インスタ映え度の推定精度はさらに向上できると考えています。 * エッジでの高速な推論処理 今回はサーバー側でGPUを用いてディープラーニングの推論処理を行いましたが,モデルの軽量化,CoreMLなどを用いることで,より高速にユーザーの手元のリソースでインスタ映え度を推定できると考えています. ## 開発内容・開発技術 ### システム構成図 <img src="https://user-images.githubusercontent.com/25325947/47611091-ee27ac00-daa0-11e8-9a8a-b393433990d6.png" width="500"> ### 活用した技術 #### API・データ #### フレームワーク・ライブラリ・モジュール * サーバーサイド * 言語:Ruby * フレームワーク:Ruby on Rails * Twitter API * Swagger * iOSアプリ * 言語:Swift * 使用ライブラリ * Swifter * Alamofire * ML(AI) * 言語: Python * フレームワーク * Pytorch * Keras * Flask * モデル: * [NIMA](https://shiropen.com/2017/12/19/30631) * [YOLOv3](https://pjreddie.com/darknet/yolo/) #### デバイス * iPhone8 ### 研究内容・事前開発プロダクト(任意) ご自身やチームの研究内容や、事前に持ち込みをしたプロダクトがある場合は、こちらに実績なども含め記載をして下さい。 * アプリケーション・DB設計 * 各種データセット * 画像とそのインスタ映え度のデータ * 画像と料理のバウンディングボックスのデータ ### 独自開発技術(Hack Dayで開発したもの) #### 2日間に開発した独自の機能・技術 - iOSクライアント - バックエンドサーバー - インスタ映え度の推定モデルの学習
30.95082
145
0.77304
jpn_Jpan
0.686487
28f0bb258e835cd250e1887b1ff19de5ddd83f43
54
md
Markdown
num-guess/README.md
vollov/seam-lab
e61c081db857590e211b0494abedb97d31aa7b34
[ "MIT" ]
null
null
null
num-guess/README.md
vollov/seam-lab
e61c081db857590e211b0494abedb97d31aa7b34
[ "MIT" ]
null
null
null
num-guess/README.md
vollov/seam-lab
e61c081db857590e211b0494abedb97d31aa7b34
[ "MIT" ]
null
null
null
# number guess Shows how to use Seam workflow with JSF.
18
38
0.777778
eng_Latn
0.99999
28f1d92ac538fafbb00b42a73e8cb8ea69a76102
3,292
md
Markdown
Extension/monero.md
dannyexodus/awesome-blockchain
2ed7a43df7bc80f258127d731e6d6e222779a1a7
[ "MIT" ]
1
2019-09-05T03:43:22.000Z
2019-09-05T03:43:22.000Z
Extension/monero.md
dannyexodus/awesome-blockchain
2ed7a43df7bc80f258127d731e6d6e222779a1a7
[ "MIT" ]
null
null
null
Extension/monero.md
dannyexodus/awesome-blockchain
2ed7a43df7bc80f258127d731e6d6e222779a1a7
[ "MIT" ]
1
2020-11-26T14:47:42.000Z
2020-11-26T14:47:42.000Z
# Monero - [Monero](#monero) - [Wallets](#wallets) - [Mining](#mining) - [Related Projects](#related-projects) ## Wallets - [Official GUI]() - [Mymonero.com](https://mymonero.com) - Web wallet of fluffypony, core dev of Monero - *Warning: fluffypony (core dev of Monero) is suspicious ([Link](https://monero.stackexchange.com/questions/897/does-monero-have-any-mobile-wallets-available) of those 2 wallets. Do your own research.* - [Android Wallet By Freewallet](https://play.google.com/store/apps/details?id=xmr.org.freewallet.app&hl=en) - [IOS Wallet By Freewallet](https://itunes.apple.com/us/app/monero-wallet-by-freewallet/id1126426159?mt=8) ## Mining - Mining Pools - [Moneropools.com](http://moneropools.com) - List of Monero pools - [Monerohash.com](https://monerohash.com) - US-based Monero pool - [Xmrpool.net](https://xmrpool.net) - [Xmrpool.eu](http://xmrpool.eu) - Europe-based Monero pool - [Minexmr.com](http://minexmr.com) - [Minemonero.pro](https://minemonero.pro) - [Xmr-pool-choice](https://github.com/timekelp/xmr-pool-choice) - Tool to choose best monero pool - Mining Pool Software - [Node-cryptonote-pool](https://github.com/zone117x/node-cryptonote-pool) - The original mining pool app for cryptonotes, with a front-end. Uses `Redis`. - [Cryptonote-universal-pool](https://github.com/fancoder/cryptonote-universal-pool) - Improved mining pool based on `node-cryptonote-pool`. Uses `Redis` and `Mysql` - [Nodejs-pool](https://github.com/Snipa22/nodejs-pool) - Improved mining pool based on `node-cryptonote-pool`. Uses `lmdb` and `Mysql` - [Poolui](https://github.com/mesh0000/poolui) - Front-end for nodejs-pool. Uses `AngularJS` - Mining Software - [Sgminer-gm](https://github.com/genesismining/sgminer-gm) - GPU Miner. Uses `OpenCL` - [XmrMiner](https://github.com/xmrMiner/xmrMiner) - GPU Miner. Uses `Cuda` - [Wolf-xmr-miner](https://github.com/OhGodAPet/wolf-xmr-miner) - GPU Miner. 
Uses `OpenCL` - [Cpuminer-multi](https://github.com/OhGodAPet/cpuminer-multi) - CPU Miner - Block Explorers - [Moneroblocks.info](http://moneroblocks.info) - [Chainradar.com](https://chainradar.com/xmr/blocks) - [Moneroexplorer.com](https://moneroexplorer.com) - [Monerobase.com](https://monerobase.com) ## Related Projects - Monero Projects - [Kovri](https://github.com/monero-project/kovri) - A secure, private, untraceable C++ implementation of the I2P anonymous network - [OpenAlias](https://openalias.org) - A DNS-like service to make Monero Wallet more user-friendly - Cryptonote Coins - [Map of Cryptonote Coins](http://mapofcoins.com/bytecoin) - Map showing the different cryptonote/bytecoin projects, and how they forked off each other - [Cryptonote.org](https://cryptonote.org) - The official cryptonote website - [Cryptonote Starter](https://cryptonotestarter.org) - Tutorial to create your own Cryptonote coin - [Bytecoin](https://bytecoin.org/) - The first cryptonote implementation - [FantomCoin](http://fantomcoin.org) - FantomCoin, allows merge mining with Monero - [Aeon](https://github.com/aeonix/aeon) - Aeon coin, a fork of Monero - [Digital Note](http://digitalnote.org)
59.854545
204
0.702916
eng_Latn
0.205463
28f29e7291f1628deb39752466e518d8a1ae253d
382
md
Markdown
docs/empty.md
micrub/bashible
a3330c6fcef871c183b4dcd865a3188c870f24cd
[ "MIT" ]
null
null
null
docs/empty.md
micrub/bashible
a3330c6fcef871c183b4dcd865a3188c870f24cd
[ "MIT" ]
null
null
null
docs/empty.md
micrub/bashible
a3330c6fcef871c183b4dcd865a3188c870f24cd
[ "MIT" ]
null
null
null
##### empty COMMAND [ARG1] [ARG2] ... Runs the command and returns true if its output is empty. ```bash @ Setting some variables and checking whether they contain something - set_var DOMAIN not empty cat /etc/myapp/domain.txt - set_var FOO not empty echo $BAR ``` ##### See also [not](not.md) [evaluate](evaluate.md) [set_var](set_var.md) [var_empty](var_empty.md)
22.470588
68
0.688482
eng_Latn
0.936714
28f2a1c8b4ffc0964078d690aa3e816e3f0d1b82
78
md
Markdown
aboutme.md
omerfarukakdag/omerfarukakdag.github.io
5b9aab654dc545a69b5e31319582824df3dfd9db
[ "MIT" ]
null
null
null
aboutme.md
omerfarukakdag/omerfarukakdag.github.io
5b9aab654dc545a69b5e31319582824df3dfd9db
[ "MIT" ]
null
null
null
aboutme.md
omerfarukakdag/omerfarukakdag.github.io
5b9aab654dc545a69b5e31319582824df3dfd9db
[ "MIT" ]
null
null
null
--- layout: aboutpage title: Hakkımda subtitle: lang: tr ref: aboutme-page ---
11.142857
17
0.717949
eng_Latn
0.935996
28f2af384fef23056086cd841f3765175f779158
63
md
Markdown
README.md
TangersTongle/mud
d846aa1a86dd7a7ed5e44b6247ba843ee246aadd
[ "MIT" ]
null
null
null
README.md
TangersTongle/mud
d846aa1a86dd7a7ed5e44b6247ba843ee246aadd
[ "MIT" ]
null
null
null
README.md
TangersTongle/mud
d846aa1a86dd7a7ed5e44b6247ba843ee246aadd
[ "MIT" ]
null
null
null
# MUD - MUD Under Development ## Requirements - Nim >= 0.18.0
12.6
29
0.650794
eng_Latn
0.453012
28f2c6a7a657b37b171dd10d4ca30ef857b355b7
5,341
md
Markdown
articles/active-directory-domain-services/active-directory-ds-features.md
OpenLocalizationTestOrg/azure-docs-pr15_pt-BR
95dabd136ee50edd2caa1216e745b9f13ff7a1f2
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
1
2018-08-29T17:03:44.000Z
2018-08-29T17:03:44.000Z
articles/active-directory-domain-services/active-directory-ds-features.md
OpenLocalizationTestOrg/azure-docs-pr15_pt-BR
95dabd136ee50edd2caa1216e745b9f13ff7a1f2
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/active-directory-domain-services/active-directory-ds-features.md
OpenLocalizationTestOrg/azure-docs-pr15_pt-BR
95dabd136ee50edd2caa1216e745b9f13ff7a1f2
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties pageTitle="Serviços de domínio Active Directory do Azure: Recursos | Microsoft Azure" description="Recursos dos serviços de domínio Active Directory do Azure" services="active-directory-ds" documentationCenter="" authors="mahesh-unnikrishnan" manager="stevenpo" editor="curtand"/> <tags ms.service="active-directory-ds" ms.workload="identity" ms.tgt_pltfrm="na" ms.devlang="na" ms.topic="article" ms.date="10/07/2016" ms.author="maheshu"/> # <a name="azure-ad-domain-services"></a>Serviços de domínio do Azure AD ## <a name="features"></a>Recursos Os seguintes recursos estão disponíveis nos serviços de domínio do Azure AD gerenciados domínios. - **Experiência de implantação simples:** Você pode habilitar serviços de domínio do Azure AD do seu locatário do AD Azure usando apenas alguns cliques. Independentemente se seu locatário do Azure AD está um locatário de nuvem ou sincronizado com o diretório local, seu domínio gerenciado pode ser provisionado rapidamente. - **Suporte a associação de domínio:** Você pode facilmente computadores de junção de domínio na rede virtual Azure que seu domínio gerenciado está disponível no. A experiência de junção de domínio no cliente do Windows e sistemas operacionais de servidor funciona perfeitamente contra domínios atendidos pelos serviços de domínio do Azure AD. Você também pode usar associação de domínio automatizada ferramentas contra tais domínios. - **Instância de um domínio por diretório do Azure AD:** Você pode criar um único domínio do Active Directory para cada diretório do Azure AD. - **Criar domínios com nomes personalizados:** Você pode criar domínios com nomes personalizados (por exemplo, ' contoso100.com') usando os serviços de domínio do Azure AD. Você pode usar qualquer um dos nomes de domínio verificado ou não verificados. Opcionalmente, você também pode criar um domínio com o sufixo de domínio interno (isto é, ' *. onmicrosoft.com') oferecidas pelo seu diretório Azure AD. 
- **Integrado ao Azure AD:** Você não precisa configurar ou gerenciar a replicação para os serviços de domínio do Azure AD. Contas de usuário, membros do grupo e credenciais de usuário (senhas) do seu diretório do Azure AD ficam disponíveis automaticamente nos serviços de domínio do Azure AD. Novos usuários, grupos ou alterações nos atributos de seu locatário do Azure AD ou seu diretório local são sincronizadas automaticamente aos serviços de domínio do Azure AD. - **Autenticação NTLM e Kerberos:** Com suporte para autenticação NTLM e Kerberos, você pode implantar aplicativos que dependem de autenticação integrada do Windows. - **Use suas credenciais/senhas corporativas:** Senhas de usuários em seu locatário do Azure AD trabalham com os serviços de domínio do Azure AD. Usuários podem usar suas credenciais corporativas em máquinas de associação de domínio, login interativamente ou pela área de trabalho remota e autenticar o domínio gerenciado. - **Ligação LDAP & LDAP ler suporte:** Você pode usar os aplicativos que dependem vínculos LDAP para autenticar usuários em domínios atendidos pelos serviços de domínio do Azure AD. Além disso, os aplicativos que usam operações de leitura LDAP para atributos de usuário/computador de consulta do diretório também podem trabalhar com os serviços de domínio do Azure AD. - **Seguro LDAP (LDAPS):** Você pode habilitar o acesso ao diretório sobre LDAP seguro (LDAPS). Acesso seguro de LDAP está disponível dentro da rede virtual por padrão. No entanto, você também pode habilitar o acesso seguro de LDAP pela internet. - **Política de grupo:** Você pode usar um único interno GPO cada para os usuários e computadores contêineres para impor a conformidade com necessários políticas de segurança para contas de usuário e computadores de domínio. 
- **Gerenciar o DNS:** Membros do grupo 'Administradores de DC AAD' podem gerenciar o DNS do seu domínio gerenciado usando ferramentas familiares de administração do DNS como o snap-in MMC de administração de DNS. - **Criar unidades organizacionais personalizado (organizacionais):** Membros do grupo 'Administradores de DC AAD' podem criar unidades organizacionais personalizados no domínio gerenciado. Esses usuários recebem privilégios administrativos totais sobre unidades organizacionais personalizados, para que eles podem adicionar/remover contas de serviço, computadores, grupos etc essas OUs personalizadas. - **Disponível em vários regiões Azure:** Consulte a página de [Serviços do Azure por região](https://azure.microsoft.com/regions/#services/) conheça as regiões Azure em que os serviços de domínio do Azure AD está disponível. - **Alta disponibilidade:** Os serviços de domínio do Azure AD oferece alta disponibilidade do seu domínio. Este recurso oferece a garantia de maior tempo de atividade de serviço e resiliência a falhas. Ofertas de monitoramento de integridade interna automatizado correção de falhas por bagunçar novas instâncias para substituir instâncias com falha e fornecer o serviço contínuo para seu domínio. - **Usar ferramentas de gerenciamento familiar:** Você pode usar ferramentas familiares de gerenciamento do Active Directory do Windows Server como a Central Administrativa do Active Directory ou o Active Directory do PowerShell administrar domínios gerenciados.
100.773585
467
0.79854
por_Latn
0.99989
28f3a8c3be7686c6ad12624f94e6a331e765e21b
472
md
Markdown
add/metadata/System.Web.Services.Description/HttpBinding.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
add/metadata/System.Web.Services.Description/HttpBinding.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
add/metadata/System.Web.Services.Description/HttpBinding.meta.md
MarktW86/dotnet.docs
178451aeae4e2c324aadd427ed6bf6850e483900
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- uid: System.Web.Services.Description.HttpBinding author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Web.Services.Description.HttpBinding.Namespace author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Web.Services.Description.HttpBinding.#ctor author: "Erikre" ms.author: "erikre" manager: "erikre" --- --- uid: System.Web.Services.Description.HttpBinding.Verb author: "Erikre" ms.author: "erikre" manager: "erikre" ---
16.857143
58
0.722458
deu_Latn
0.110918
28f3d287d8f06a0a33c9d716b84d897208fe7b08
743
md
Markdown
fork2ls/files/193403332046487552.md
Terminal/DiscordForkAppdata
6430eecb90dd9350e0369503f7943e0d4dbdc9b8
[ "MIT" ]
1
2018-12-13T08:45:44.000Z
2018-12-13T08:45:44.000Z
fork2ls/files/193403332046487552.md
Terminal/DiscordForkAppdata
6430eecb90dd9350e0369503f7943e0d4dbdc9b8
[ "MIT" ]
null
null
null
fork2ls/files/193403332046487552.md
Terminal/DiscordForkAppdata
6430eecb90dd9350e0369503f7943e0d4dbdc9b8
[ "MIT" ]
null
null
null
--- client_id: 193403332046487552 application_id: '193403294419255297' pagename: 'HawkBot' description: 'Primary purpose is to serve as a Japanese language learning tool, but there are fun commands as well' avatar: 'https://cdn.discordapp.com/avatars/193403332046487552/38a30b75bf7b1cf3642e58bc0e14d71d.webp' link: 'https://discordapp.com/oauth2/authorize?client_id=193403294419255297&scope=bot&permissions=0' github: owner: 'CyberHawkSoftware' repo: 'HawkBot' prefix: '<, customizable' support: 'https://discord.gg/jDpR9PD' nsfw: false --- # HawkBot Primary purpose is to serve as a Japanese language learning tool, but there are fun commands as well. The bot has a dictionary lookup and kanji information. Kanji stroke orders soon:tm:
37.15
115
0.794078
eng_Latn
0.884188
28f43635983c9360d23046519c513e8e4f4a72c3
10,748
md
Markdown
Lukah's Island - by Heavy Metal Gaming/README.md
Dylan-M/FS-Challenges
2955faa76f47cfc5d3f643057f298ba9039bbe1a
[ "CC0-1.0" ]
null
null
null
Lukah's Island - by Heavy Metal Gaming/README.md
Dylan-M/FS-Challenges
2955faa76f47cfc5d3f643057f298ba9039bbe1a
[ "CC0-1.0" ]
null
null
null
Lukah's Island - by Heavy Metal Gaming/README.md
Dylan-M/FS-Challenges
2955faa76f47cfc5d3f643057f298ba9039bbe1a
[ "CC0-1.0" ]
null
null
null
# Lukah's Island Survival Challenge ## Map [Lukah's Island](https://farming-simulator.com/mod.php?lang=en&country=fi&mod_id=201911) of Course! ## Mods A lot. All can be found on either the official ModHub or fs19.net - with most or all of the ones from the latter site also being available on modhub.us if you prefer that site. See the [Mods List](##-mods-list) section. ## Goal ### Solo Reach 2.5m+ Money Process a minimum of 100 trees into lumber at the sawmill ### Multiplayer Reach 10m+ Money Process a minimum of 300 trees into lumber at the sawmill ## Rules 1. Only the provided (listed) mods are allowed. 2. Must start from the provided save game. 3. Store is hidden (can still be accessed from menu) because you're not allowed to purchase anything. 4. Log cabin home can be purchased for $10,000. It is the only exception to the previous rule. However, it has conditions: -- 4a. Must harvest and sell a minimum of 25 trees prior to purchasing the cabin. Lumberjack mod will be enabled for stump removal. -- 4b. Must place it in a location that is not a numbered field or in the roadway. 5. Equipment will be scattered and hidden throughout the island. Some of the equipment will need to be repaired before it can be used, other pieces not. -- 5a. The equipment is intended to be hidden. Must load another game first, and turn off all tractor, trailer, implement, harvester, etc... icons for player equipment in order to force the setting since it is shared for all savegames. Once this is done, save and exit that game. Then you can load this game. Failure to do so defeats the purpose of the challenge. 6. No super strength. 7. No workers -- 7a. Instead of No Workers, you may hire up to 3 workers and go for the multiplayer goals. -- 7b. You may do 7a by adding the [Contractor Mod](https://farming-simulator.com/mod.php?lang=en&country=fi&mod_id=108748) and setting it to have from 2-4 characters 8. MP version of challenge limited to 4 players, but works best with just 2. 9. 
Cannot change settings to turn visibility of equipment back on 10. Headers can only be used on the combine that they match, which should be easy to tell from the paint & style ## Background You're a survivor (or group of survivors in multiplayer) of a shipwreck. You've washed up on the beach of this island, coming ashore where you saw some buildings on the way in. Despite the obvious buildings, the island appears to be uninhabited, and much of what you find is damaged or destroyed. However, there is some operable farming equipment. There are fields that have been growing random and wild with no one to maintain them. Additionally there are animals that have survived as well: sheep and chickens. ## Equipment, and why is it such an eclectic mix? The previous inhabitants of the island purchased whatever they could get, from whomever would sell it to them. Equipment from the former Soviet Union, from Germany, from the USA, and from many others as well. Additionally, it has been collected for decades. A lot of the stuff dates back to World War II. Nothing is more modern than the 1980s as far as can be told from a cursory look around the island. ## Platform Support Computer Only, due to the mods used and being started from a savegame. 
## Credits Challenge by [Heavy Metal Gaming (Dylan-M)](https://github.com/Dylan-M) Map by Cazz64, Old Aussie Gamer Mods by a large number of people, see the credits on each mod on the Mod website it gets downloaded from ## Mods List ### Offical ModHub [2PTS-6A](https://farming-simulator.com/mod.php?lang=en&country=ie&mod_id=203966) [4x4 Farm Trailer](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=196963) [Additional Game Settings](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=203370) [Additional Field Info](https://www.farming-simulator.com/mod.php?lang=en&country=be&mod_id=137433) [Addon Straw Harvest](https://farming-simulator.com/mod.php?&mod_id=148186) [Amazone D8 60](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=178814) [Animal Screen Extended](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=134223) [Barrel Weight](https://www.farming-simulator.com/mod.php?lang=en&country=hu&mod_id=126094) [Bulk Fill](https://www.farming-simulator.com/mod.php?lang=en&country=si&mod_id=195059) [Chain Harrow](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=139542) [Dairy Sheep](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=214280) [Eicher AG300](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=166068) [Filllevel Warning](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=147245) [Front Lifter](https://farming-simulator.com/mod.php?lang=en&country=ca&mod_id=152056) [GlobalCompany](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=137078) [GlobalCompany Addon Icons](https://www.farming-simulator.com/mod.php?lang=en&country=hu&mod_id=137154) [Heap Info](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=194579) [Herbicide Tanks](https://farming-simulator.com/mod.php?lang=en&country=ru&mod_id=146749) [HUD Smart Shade](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=128515) [HUD 
Toggle](https://www.farming-simulator.com/mod.php?lang=en&country=no&mod_id=129744) [Husqvarna 266 XP](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=202412) [Info Display](https://www.farming-simulator.com/mod.php?mod_id=188516) [Isaria 6000/S 3m](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=206911) [John Deere 690](https://farming-simulator.com/mod.php?mod_id=142187) [Lime Station](https://www.farming-simulator.com/mod.php?mod_id=118989) [Lizard N035](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=200032) [Log Cabin](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=206423) [LumberJack](https://www.farming-simulator.com/mod.php?lang=en&country=ch&mod_id=174630) [Map Objects Hider](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=190689) [MX Frontloaders And Tools Pack](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=161357) [No Teleport](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=150456) [One Axle Trailer](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=205752) [OP2000](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=195325) [Player Position Saver](https://www.farming-simulator.com/mod.php?lang=en&country=fi&mod_id=195961) [Polish Plows Pack](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=164258) [RAUCH ZSA580](https://www.farming-simulator.com/mod.php?lang=en&country=gb&mod_id=195092) [Real Dirt Color](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=123560) [Real Dirt Fix](https://www.farming-simulator.com/mod.php?lang=en&country=nl&mod_id=153798) [Real Mower](https://www.farming-simulator.com/mod.php?lang=en&country=us&mod_id=172098) [Real Seeds Usage](https://www.farming-simulator.com/mod.php?mod_id=192436) [Seasons](https://farming-simulator.com/mod.php?mod_id=137669) [Seasons GEO: Hawaii](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=169520) 
[Seeds And Fertilizers Mini Silos](https://www.farming-simulator.com/mod.php?lang=fr&country=ch&mod_id=168953) [SK-5 "Niva" Pack](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=157897) [SIP Favorit 220](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=144836) [Store Deliveries](https://www.farming-simulator.com/mod.php?lang=en&country=ru&mod_id=134594) [StrawMe](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=177412) [Tajfun EGV 80 AHK](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=124000) [Tip Side HUD](https://www.farming-simulator.com/mod.php?lang=pl&country=pl&mod_id=131648) [Tools Combo](https://www.farming-simulator.com/mod.php?mod_id=195698) [Trailed Lifter](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=144838) [Universal Passenger](https://www.farming-simulator.com/mod.php?mod_id=139095) [Universal Passenger - Vehicles Of ModHub](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=139526) [Water Trailer](https://farming-simulator.com/mod.php?lang=en&country=us&mod_id=173380) [Weeder](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=159349) [Workshop Tabber](https://farming-simulator.com/mod.php?lang=en&country=hu&mod_id=120553) ### FS9.net Mods [1948 Chevy Grain Truck](https://fs19.net/farming-simulator-2019-mods/trucks/1948-chevy-grain-truck-v-1-0/) [Allis Chalmers 1300 Field Cultivator](https://fs19.net/farming-simulator-2019-mods/implements-and-tools/cultivators-and-harrows/allis-chalmers-1300-field-cultivator-v-1-0/) [Chevrolet COE 1941](https://fs19.net/farming-simulator-2019-mods/trucks/chevrolet-coe-1941-v1-0/) [Don 1500B KP](https://fs19.net/farming-simulator-2019-mods/combines/don-1500b-kp-v1-1/) [Enhanced Vehicles](https://fs19.net/farming-simulator-2019-mods/scripts/enhanced-vehicle-v1-0/) [Ford F800 Grain Truck](https://fs19.net/farming-simulator-2019-mods/trucks/f800-grain-truck-v1-0/) [Harsvester MTZ80 for 
Cotton](https://fs19.net/farming-simulator-2019-mods/trailers/harsvester-mtz80-for-cotton-v1-0/) [HTZ T-150](https://fs19.net/farming-simulator-2019-mods/tractors/htz-t-150-v1-3-2-2/) [John Deere 4020 EDIT](https://fs19.net/farming-simulator-2019-mods/tractors/john-deere-4020-edit-v1-0/) [John Deere 80 Series](https://fs19.net/farming-simulator-2019-mods/tractors/john-deere-80-series-old-v-1-0/) [McCorkmick Deering 15-30 Steel Wheel](https://fs19.net/farming-simulator-2019-mods/tractors/mccormick-deering-15-30-on-steel-v1-0/) [Minneapolis Moline G1355](https://fs19.net/farming-simulator-2019-mods/tractors/mineapolis-moline-g1355-v1-0/) [Oliver 2255 MFWD](https://fs19.net/farming-simulator-2019-mods/tractors/oliver-2255-mfwd-v1-1/) [Oliver 55 Open Station](https://fs19.net/farming-simulator-2019-mods/tractors/oliver-55-open-station-v1-1/) [Oliver Cletrac HG](https://fs19.net/farming-simulator-2019-mods/tractors/oliver-cletrac-hg-v2-0/) [Old Horal MV1-052](https://fs19.net/farming-simulator-2019-mods/trailers/old-horal-mv1-052-v1-0/) [ROOT HARVESTER PACK GRIMME SE 260](https://fs19.net/farming-simulator-2019-mods/packs/root-harvester-pack-grimme-se-260-v1-0/) [RZ 3M](https://fs19.net/farming-simulator-2019-mods/implements-and-tools/mowers/rz-3m-v1-0/) [SPC 4 Planter](https://fs19.net/farming-simulator-2019-mods/implements-and-tools/seeders/spc-4-v1-0/)
74.124138
216
0.763491
eng_Latn
0.391309
28f43f2a5a89e08f0e5f18c00c3c878db60da49d
7,895
md
Markdown
_wiki/Servlet.md
aegis1920/aegis1920.github.io
d5804a4531604768eca62ebd783ab7ef5f5aeaf2
[ "MIT" ]
3
2021-10-05T03:57:52.000Z
2022-02-25T13:18:59.000Z
_wiki/Servlet.md
aegis1920/aegis1920.github.io
d5804a4531604768eca62ebd783ab7ef5f5aeaf2
[ "MIT" ]
6
2020-03-09T06:12:43.000Z
2021-04-21T03:55:17.000Z
_wiki/Servlet.md
aegis1920/aegis1920.github.io
d5804a4531604768eca62ebd783ab7ef5f5aeaf2
[ "MIT" ]
null
null
null
--- layout : wiki title : Servlet summary : date : 2019-06-20 15:28:48 +0900 updated : 2019-07-04 14:26:30 +0900 tags : toc : true public : true parent : what latex : false --- * TOC {:toc} # Servlet ## Servlet이란? 자바 웹 어플리케이션의 구성요소 중 동적인 처리를 하는 프로그램. WAS에 동작하는 Java 클래스. 서블릿은 HttpServlet 클래스를 상속받아야 한다. ## 서블릿 작성 방법 * Servlet 3.0 이상 * web.xml 파일을 사용하지 않음 * java annotation을 사용한다. * Servlet 3.0 미만 * Servlet을 등록할 때 web.xml에 등록 ## Servlet LifeCycle 서블릿 요청을 받으면 해당 서블릿이 메모리에 있는지 확인. 메모리에 없으면 해당 서블릿 클래스를 메모리에 올린다. 그리고 init() 메소드를 실행한다. 그리고 나서 service() 메소드를 실행한다. 만약 메모리에 있다면 service()를 실행한다. service 메소드는 템플릿 메소드 패턴으로 구현한다. WAS가 종료되거나, 웹 어플리케이션이 새롭게 갱신될 경우 destroy() 메소드가 실행된다. ## 요청과 응답 웹 브라우저가 WAS에게 요청을 보내면 HttpServletRequest와 HttpServletResponse 객체를 생성한다. 여기에 요청 정보를 담고 매핑된 서블릿에게 전달한다. HttpServletResponse 객체는 어떤 클라이언트가 보냈는지와 여러 가지를 담은 걸 서블릿에게 보낸다. HttpServletRequest http프로토콜의 request정보를 서블릿에게 전달하기 위한 목적으로 사용합니다. 헤더정보, 파라미터, 쿠키, URI, URL 등의 정보를 읽어 들이는 메소드를 가지고 있습니다. Body의 Stream을 읽어 들이는 메소드를 가지고 있습니다. HttpServletResponse WAS는 어떤 클라이언트가 요청을 보냈는지 알고 있고, 해당 클라이언트에게 응답을 보내기 위한 HttpServleResponse객체를 생성하여 서블릿에게 전달합니다. 서블릿은 해당 객체를 이용하여 content type, 응답코드, 응답 메시지등을 전송합니다. ## DD(Deployment Descripter), web.xml 배포서술자라고도 불리는 소프트웨어를 배포할 필요가 없다. 사용자가 배포하는 게 아니라. 웹서버에 반영만 하면 사용자들은 그냥 브라우저만 열면 되니까. 하지만 웹서버도 많이 접속하면 트래픽이 많아질 수 있다. 웹 서버 또한 병렬적으로 운영하게 된다. 요즘 서버들은 로드밸런싱기능이랑 ?? 를 갖고있다. 서버가 장애가 생기면 데이터를 백업해서 다른 서버로 이어지도록. 그래서 지속적인 서버를 이용할 수 있도록 할 수도 있다. HTML, CSS, JAVASCRIPT, AJAX -> Servlet(Controller -> Spring mvc) -> Service(Model, 객체 의존성 또한 spring으로 관리) -> DAO(Model, JDBC, SQL -> My batis) -> DB 이렇게 스프링으로 가는 이유가 유지보수를 위해서. servlet, service, dao 사이에 들어가는 vo도 객체기 때문에 model로 본다. DAO로 요청을 보내고 컨트롤러에게 다시 받음 만약 서블릿(컨트롤러)이 없다면 view에 따라 종속적으로 모두 만들어줘야하기 때문에. 즉 교통정리를 해주는 역할. Servlet에서 성공과 실패를 했을 때 각각 view를 보여준다. view는 진짜 UI를 갖는 화면일 수 있고 비동기 통신이라면 처리된 결과일 수도 있다. 보통은 HTML이나 PDF 포맷을 내려받기 하는 것도 될 수 있고 스프레트시트같은 형태일 수도 있다. 동기가 HTML, 비동기가 XML, JSON등등... 서블릿을 컨트롤러로 가져가는 MVC Servlet model 2를 기반으로 갔다. 
JSP는 java 소스를 만들기 위한 템플릿 페이지. java로 바꿔주는 컨테이너가 필요하다. jsp는 jsp 컨테이너가 필요하고 servlet은 servlet 컨테이너가 필요하다. jsp 컨테이너는 jsp를 java소스로 바꿔주고, class로도 바꿔준다. 종합선물세트 WAS. javaEE에서 많은 것들이 들어있다. jsp, servlet다 javaEE 엔터프라이즈 플랫폼에 있다. new할지 말지, Servlet - init(), service(), destroy() -> life cycle 메소드 * init() -> 태어나면 딱 한번 초기화해줌. * service() -> 엄청 많이 불림. 요청 응답, 100번 불리면 100번 반응 * destroy() -> 소멸할 때 한 번 불림 정말 직접 import Servlet해서 해도 된다. implements Servlet하면 GenericServlet클래스를 포함시켜야 한다. 즉, 추상클래스. service()는 우리가 짜야하기 때문에 구현하지 않았음. 그래서 그냥 service();이렇게 되어있다. HttpServlet이 또 있음. 더 구체화가 된다. 딱 http 프로토콜에 확장된 서블릿. HttpServlet도 추상 클래스다. 얘는 추상 메소드가 없다. 그냥 강제로 된 것. 그냥 doGet(){}, doPost(){}, ... 이미 열고 닫고를 다 만들어놨다. 내가 원하는 것만 재정의 시키면 된다. serivice()도 구현이 되어있다. if(Get방식) doGet(); elseif(Post) doPost(); 이런식으로 분기하도록 구현이 되어있다. > Http - get, post, head, delete, put. > > REST api -> delete - > url/board/1 이러면 첫 번째 를 지워달라. 라고 말하는 것. 이제 HttpServlet을 상속하는 나의 XXXServlet을 만들면 된다. 여기서 doGet이나 doPost를 재정의 하면 된다. 여기서 다형성이 나온다. 새로운 클래스를 추가하면 거기에 맞는 게 나온다. 이게 다형성!!! Servlet 객체 메모리에 로드? HelloServlet을 hello.do라고 mapping. 싱글톤처럼 관리하기 때문에 확인한다. 이미 객체가 있나없나. 없으면 이제 생성한다. 클래스를 메모리에 로드를 한다. 클래스로더라는 얘가 그 일을 담당한다. 그리고나서 new로 객체를 생성. 이 때 생성자 콜백이 된다. 기본생성자가 콜백된다. 매개변수가 뭔지 모르니까. 그래서 아무것도 없는 기본생성자를 콜백한다. 아무런 정보가 없어도 new가 될 수 있어야 하니까. 그리고 init()을 콜백한다. > 우리가 만든 HelloServlet은 컨테이너 안의 다른 패키지에도 접근할 수 있어야 하니까 public으로 접근지정자를 해줘야 한다. 응답 가능(대기) 상태가 된다. 여기서 이제 서비스가 계속 반응한다. service()가 콜백. get이면 get을 불림.post면 doPost()가 불린다. 만약 처음에 hello.do로 불릴 때 Servlet 객체가 이미 있다면 바로 가진다. init()을 Servlet의 생성자 역할이라고 보면 된다. html이나 css나 그런 것들은 Servlet 객체가 불려질 때 안 불린다. 필요가 없으니까. 끝날 때 destroy()가 한 번 불리니까. 내가 끝날 때 뭐 하고싶으면 하면 된다. service()를 재정의하는 건 권장하지 않는다. 이미 다 만들어져있으니까. 차라리 doGet(), do Post()를 재정의하면 된다. 서비스 안에서 코드를 짜놓으면 안됨. 내가 Servlet을 안 짜니까 동시성을 생각하지 않는다. 컨테이너에 요청이 들어오면 컨테이너는 그 받는 스레드를 계속 만든다. 그래서 멀티스레딩환경이 된다. // 녹음파일 있음 - 20190320 - 중간에 까먹고 못 끊음. HelloServlet이라는 객체(h)가 있다. 여기에 name 클래스의 멤버변수가 있고 doGet()가 있다. 그리고 t1, t2, t3 스레드가 있을 때 스레드에서 h.service()가 불린다. 
그리고 service()에서 h.doGet()이 불린다. 그런데 t1이 부르고 난뒤 t2가 가서 멤버변수를 바꿔버린다면 t1은 다른 전역변수를 갖게 된다. 아주 위험한 코드가 되기 때문에 멤버변수를 쓰면 안 되고 지역변수로 줘야 한다. 쓰고 의미없는 데이터라면 지역변수로. 요청되고 난 뒤 사라지니까. 요청되는 컨텐츠에 따라 다운하게 할 지, 그냥 보여줄 지 알아야 한다. 그래서 많은 스트림이 필요한데 이들을 표현하는 객체로서 사용하게된다. HttpServletRequest, HttpServletResponse처럼. 얘네들을 불러내게 한다. 그러면 웹 서버는 각자 톰캣이면 톰캣 HttpServletRequest를 갖고있는데 이들을 꺼내서 쓰게 된다. 즉, 내용만 전달해준다. HttpServletRequest()는 getter가 있고, HttpServletResponse()는 setter가 있다. 똑같은 클라이언트더라도 모든 요청은 모두 다 다른 요청이다. 똑같은 페이지를 똑같이 부른다고 해서 요청도 똑같다고 하면 안 된다. 그냥 매일 새로운 요청. 재사용하지 않음 서비스메소드는 그냥 처음올 때 보는 get, post같은 메소드를 보고 알 수 있다. 이클립스에서 서블릿 위저드가 있는 이유는 서블릿에 관한 상속을 받기 위해서. ![1553058040256](C:\Users\niboh\AppData\Roaming\Typora\typora-user-images\1553058040256.png) ![1553058046317](C:\Users\niboh\AppData\Roaming\Typora\typora-user-images\1553058046317.png) 여기서 주소만 부를거니까 get만 호출한다. ![1553058074044](C:\Users\niboh\AppData\Roaming\Typora\typora-user-images\1553058074044.png) 똑같은 url을 부르지만 해오는 건 모두 달라진다. web.xml 배치 설명자 파일. DD라고 불린다. ```xml <servlet> // 이 아래 둘은 필요없다. description이랑 displayname <description></description> <display-name>HelloServlet</display-name> <servlet-name>HelloServlet</servlet-name> <servlet-class>com.ssafy.hello.HelloServlet</servlet-class> </servlet> <servlet> <servlet-name>HiServlet</servlet-name> <servlet-class>com.ssafy.hello.HelloServlet</servlet-class> </servlet> // 얘는 서버가 시작될 때 init()을 해주는 것. 즉, 바로 시작부터 많은 일(init()에 많은 코드)을 하게 되는 웹이라면 처음 사용자는 시간이 아주 오래 가게 된다. 그래서 그걸 방지하려고 그냥 서버가 시작될 때 바로 init()을 해준다. 그러면 doGet()만 해주면 되니까. <load-on-startup>1</load-on-startup> <servlet-mapping> // servlet-name은 위의 servlet-name과 같아야 한다. <servlet-name>HelloServlet</servlet-name> <url-pattern>/hello.ssafy</url-pattern> </servlet-mapping> <servlet-mapping> // servlet-name은 위의 servlet-name과 같아야 한다. <servlet-name>HiServlet</servlet-name> <url-pattern>/hi.ssafy</url-pattern> </servlet-mapping> // 여러 개를 만들 때 설정별로 인스턴스가 만들어진다. 같은 클래스로부터 각기다른 객체로 따로 만들어줄 수 있다. 그래서 서블릿 맵핑마다 똑같이 서블릿에도 이름을 주는 거고. 
// 2.5버전은 이렇게 되어있는데 // 3.1버전은 java 소스파일에 annotation으로 되어있다. // 그리고 만약에 서블릿을 날리면 web.xml에 있는 설정은 반영이 안 되어있기 때문에 여기서도 다시 지워줘야 한다. ``` 해보고 새로고침을 해주면 doGet()만 뜬다. 3버전부터는 소스에 web.xml이 어노테이션으로 들어가기 때문에 자동으로 안 만들어준다. 그래서 next할 때 체크를 해줘야 한다. 그리고 3버전이나 2버전 얘네들은 처음에 아니면 못 바꾸기때문에 처음에 잘 만들어줘야 한다. 3버전은 클래스 위에 어노테이션을 주기 때문에 그 클래스의 선언부에 주기 때문에 설정이 줄어드는 것처럼 느끼는 것. 실행할 때 Next에서 add and remove에서 내가 실행하려고 하는 서블릿만(3.1) 컨피규어에 놓고 실행한다. ### DD 2.5와 DD 3.1 의 차이 DD 2.5는 web.xml이 자동으로 들어가있어서 그 안에서 어느 링크가 어느 서블릿 클래스로 갈지 설정해주면 된다. DD 3.1은 web.xml이 없다. 서블릿 클래스에 대한 링크를 줄 때 기본적으로 어노테이션으로 작성되기 때문에 편한 감이 없지 않아 있지만 각각 장단점이 있다. 왜냐면 많은 서블릿 클래스들이 생성될 때 그 각자 파일마다 누구는 누구의 서블릿 클래스에 간다고 `@WebServlet("")`으로 다 써줘야 한다. 그래서 나중에 수정을 할 때도 web.xml처럼 한 파일에서 관리하지 못 하고 하나하나 가서 모두 수정해줘야 한다. DD 2.5는 안 그래줘도 되니까. ## http 녹음함 헤더부에 User-Agent라는 헤더가 브라우저의 인포메이션을 잡고 있는 것 URL ? 여기에 파라미터들을 '쿼리 스트링'이라고 부른다. GET 방식은 body부가 없다. 있지만 안 담긴다. POST 방식은 메세지 바디부에 실어 보낸다. multipart/formdata 이라고 쓰면 파일만 준다. 구분성 있는 Dummydata가 들어온다. 데이터가 가긴 가지만 사이사이에 경계를 구분할 수 있는 걸 가져온다. 서버에서 일반적으로 끌어올 수 있는 게 아니라 라이브러리를 쓴다. 업로드 하는 건 파일을 주는 것 뿐이다. 데이터의 성격이 다를 뿐. 내가 거기에 집중하는 게 아니다. 인풋타입 file. post방식이면서 파일이 있을 때만. 서버가 HttpServletRequest얘를 생성한다. 얘는 headers, cookies, parameters(form data), 는 vo라고 생각하면 된다. headers * 파라미터에서 끄집어내서 쓸 수 있는 메소드가 getParameter("name"); input마다 name을 줘서 name으로 찾는다. 얘의 return 타입은 String. 숫자도 다 문자로 온다. * getParameterValues("name"); 파라미터의 이름들만 추출해서 준다. iterator같은 얘. String[]으로 반환된다. * setCharaterEncoding("utf-8"); 근데 이건 body부에만 적용할 수 있다. 그래서 POST 방식으로만 보낼 수 있다. * GET 방식은 UTF-8로 가기 때문에 서버측에 있는 걸 잘 보고 처리해줘야 한다. Response 메세지가 온다. 서버가 클라이언트에게 정보를 준다. * 상태 라인(프로토콜 상태코드 상태메세지) * 헤더부 - content-type, cookie도 헤더에 들어가고 * 바디부(응답 컨텐츠)
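The service() dispatch described above — the framework implements service() once and branches on the HTTP method to call doGet() or doPost(), which your servlet overrides — is a template method. A minimal self-contained sketch of that pattern, using hypothetical `MiniHttpServlet`/`HelloServlet` names rather than the real `javax.servlet` API:

```java
// Sketch of HttpServlet-style template-method dispatch (hypothetical names,
// no servlet container required).
abstract class MiniHttpServlet {
    // Provided by the "framework"; subclasses normally do not override it.
    final String service(String method) {
        if ("GET".equals(method)) {
            return doGet();
        } else if ("POST".equals(method)) {
            return doPost();
        }
        return "405 Method Not Allowed";
    }

    // Default behavior mirrors HttpServlet: unsupported methods are rejected.
    String doGet()  { return "405 Method Not Allowed"; }
    String doPost() { return "405 Method Not Allowed"; }
}

// The application servlet overrides only the handler it needs.
class HelloServlet extends MiniHttpServlet {
    @Override
    String doGet() { return "hello via GET"; }
}

class Demo {
    public static void main(String[] args) {
        MiniHttpServlet servlet = new HelloServlet();
        System.out.println(servlet.service("GET"));   // hello via GET
        System.out.println(servlet.service("POST"));  // 405 Method Not Allowed
    }
}
```

This also illustrates why per-request state belongs in local variables: one shared servlet instance services every thread, so only the method-local data is safe.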
31.454183
280
0.707916
kor_Hang
1.00001
28f494e5bc32eaea7d9dc2d6f9524e6f9651f946
1,110
md
Markdown
_posts/2019-10-07-test-k-nnen-sie-unterscheiden-teure-sache-vom-billigen.md
anatolii331/DE
e1962ed571f087b6c29a57ca7ee3bbee9a6bbbde
[ "MIT" ]
null
null
null
_posts/2019-10-07-test-k-nnen-sie-unterscheiden-teure-sache-vom-billigen.md
anatolii331/DE
e1962ed571f087b6c29a57ca7ee3bbee9a6bbbde
[ "MIT" ]
null
null
null
_posts/2019-10-07-test-k-nnen-sie-unterscheiden-teure-sache-vom-billigen.md
anatolii331/DE
e1962ed571f087b6c29a57ca7ee3bbee9a6bbbde
[ "MIT" ]
null
null
null
--- id: 12998 title: 'TEST: Können Sie unterscheiden teure Sache vom Billigen?' date: 2019-10-07T11:30:02+00:00 author: user layout: post guid: https://de.bestwow.net/test-k-nnen-sie-unterscheiden-teure-sache-vom-billigen/ permalink: /test-k-nnen-sie-unterscheiden-teure-sache-vom-billigen/ tdc_dirty_content: - "1" - "1" tdc_icon_fonts: - 'a:0:{}' - 'a:0:{}' tdc_google_fonts: - 'a:2:{i:0;s:3:"662";i:4;s:3:"394";}' - 'a:2:{i:0;s:3:"662";i:4;s:3:"394";}' ap_mark: - Это пост был добавлен через AftParser - Это пост был добавлен через AftParser ap_link: - https://lifehacker.ru/expensive-or-cheap-quiz/ - https://lifehacker.ru/expensive-or-cheap-quiz/ post_views_count: - "8" - "8" categories: - Featured --- </p> <div> <h2 class="read-also__title"> <span>Lesen Sie auch</span> <span>💸 </span> </h2> <ul class="read-also__list"> <li> TEST: Können Sie erraten, was die Ware mit AliExpress nach dem Namen? </li> <li> TEST: Ratet mal, wofür dieses Ding mit AliExpress </li> <li> TEST: Welches Bild ist teurer? </li> </ul> </div>
23.125
84
0.63964
deu_Latn
0.467794
28f4bf56e7b6fd71308ffa42a6a5fc630c3140e5
105
md
Markdown
README.md
project-hellfire/project-hellfire
6e2f9806e94f4abc30fea7b3a56c59e4e2dd2720
[ "MIT" ]
null
null
null
README.md
project-hellfire/project-hellfire
6e2f9806e94f4abc30fea7b3a56c59e4e2dd2720
[ "MIT" ]
9
2020-07-17T11:10:58.000Z
2022-03-02T04:32:21.000Z
README.md
project-hellfire/project-hellfire
6e2f9806e94f4abc30fea7b3a56c59e4e2dd2720
[ "MIT" ]
1
2020-01-03T03:32:32.000Z
2020-01-03T03:32:32.000Z
# project-hellfire A demo app made to showcase some Angular features often found in modern applications.
35
85
0.819048
eng_Latn
0.999753
28f4e1465cfe4b92af3e4cc9e008342d41e4cb5e
10,484
md
Markdown
sdk/docs/RenewsApi.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
6
2019-10-11T06:52:07.000Z
2022-03-05T02:30:32.000Z
sdk/docs/RenewsApi.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
7
2019-09-10T01:30:30.000Z
2021-10-21T01:18:13.000Z
sdk/docs/RenewsApi.md
freee/freee-accounting-sdk-java
2102cf9bf261683d3e81bca16b0aa6e14cef14b4
[ "MIT" ]
5
2019-10-11T06:56:10.000Z
2022-02-05T14:55:21.000Z
# RenewsApi All URIs are relative to *https://api.freee.co.jp* Method | HTTP request | Description ------------- | ------------- | ------------- [**createDealRenew**](RenewsApi.md#createDealRenew) | **POST** api/1/deals/{id}/renews | 取引(収入/支出)に対する+更新の作成 [**deleteDealRenew**](RenewsApi.md#deleteDealRenew) | **DELETE** api/1/deals/{id}/renews/{renew_id} | 取引(収入/支出)の+更新の削除 [**updateDealRenew**](RenewsApi.md#updateDealRenew) | **PUT** api/1/deals/{id}/renews/{renew_id} | 取引(収入/支出)の+更新の更新 ## createDealRenew > DealResponse createDealRenew(id, renewCreateParams) 取引(収入/支出)に対する+更新の作成 &lt;h2 id&#x3D;\&quot;\&quot;&gt;概要&lt;/h2&gt; &lt;p&gt;指定した事業所の取引(収入/支出)の+更新を作成する&lt;/p&gt; &lt;h2 id&#x3D;\&quot;_2\&quot;&gt;定義&lt;/h2&gt; &lt;ul&gt; &lt;li&gt; &lt;p&gt;issue_date : 発生日&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;due_date : 支払期日&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;amount : 金額&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;due_amount : 支払残額&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;type&lt;/p&gt; &lt;ul&gt; &lt;li&gt;income : 収入&lt;/li&gt; &lt;li&gt;expense : 支出&lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;details : 取引の明細行&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;accruals : 取引の債権債務行&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;renews : 取引の+更新行&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;payments : 取引の支払行&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;from_walletable_type&lt;/p&gt; &lt;ul&gt; &lt;li&gt;bank_account : 銀行口座&lt;/li&gt; &lt;li&gt;credit_card : クレジットカード&lt;/li&gt; &lt;li&gt;wallet : 現金&lt;/li&gt; &lt;li&gt;private_account_item : プライベート資金(法人の場合は役員借入金もしくは役員借入金、個人の場合は事業主貸もしくは事業主借)&lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;/ul&gt; &lt;h2 id&#x3D;\&quot;_3\&quot;&gt;注意点&lt;/h2&gt; &lt;ul&gt; &lt;li&gt;本APIではdetails(取引の明細行)、accruals(債権債務行)、renewsのdetails(+更新の明細行)のみ操作可能です。&lt;/li&gt; &lt;li&gt;本APIで取引を更新すると、消費税の計算方法は必ず内税方式が選択されます。&lt;/li&gt; &lt;/ul&gt; ### Example ```java // Import classes: import jp.co.freee.accounting.ApiClient; import 
jp.co.freee.accounting.ApiException; import jp.co.freee.accounting.Configuration; import jp.co.freee.accounting.auth.*; import jp.co.freee.accounting.models.*; import jp.co.freee.accounting.api.RenewsApi; public class Example { public static void main(String[] args) { ApiClient defaultClient = Configuration.getDefaultApiClient(); defaultClient.setBasePath("https://api.freee.co.jp"); // Configure OAuth2 access token for authorization: oauth2 OAuth oauth2 = (OAuth) defaultClient.getAuthentication("oauth2"); oauth2.setAccessToken("YOUR ACCESS TOKEN"); RenewsApi apiInstance = new RenewsApi(defaultClient); Integer id = 56; // Integer | 取引ID RenewCreateParams renewCreateParams = new RenewCreateParams(); // RenewCreateParams | 取引(収入/支出)に対する+更新の情報 try { DealResponse result = apiInstance.createDealRenew(id, renewCreateParams); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling RenewsApi#createDealRenew"); System.err.println("Status code: " + e.getCode()); System.err.println("Reason: " + e.getResponseBody()); System.err.println("Response headers: " + e.getResponseHeaders()); e.printStackTrace(); } } } ``` ### Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **id** | **Integer**| 取引ID | **renewCreateParams** | [**RenewCreateParams**](RenewCreateParams.md)| 取引(収入/支出)に対する+更新の情報 | ### Return type [**DealResponse**](DealResponse.md) ### Authorization [oauth2](../README.md#oauth2) ### HTTP request headers - **Content-Type**: application/json, application/x-www-form-urlencoded - **Accept**: application/json ### HTTP response details | Status code | Description | Response headers | |-------------|-------------|------------------| | **201** | | - | | **400** | | - | | **401** | | - | | **403** | | - | | **500** | | - | ## deleteDealRenew > DealResponse deleteDealRenew(id, renewId, companyId) 取引(収入/支出)の+更新の削除 &lt;h2 id&#x3D;\&quot;\&quot;&gt;概要&lt;/h2&gt; 
&lt;p&gt;指定した事業所の取引(収入/支出)の+更新を削除する&lt;/p&gt; &lt;h2 id&#x3D;\&quot;_3\&quot;&gt;注意点&lt;/h2&gt; &lt;ul&gt; &lt;li&gt;本APIでは+更新の削除のみ可能です。取引や支払行に対する削除はできません。&lt;/li&gt; &lt;li&gt;renew_idにはrenewsのid(+更新ID)を指定してください。renewsのdetailsのid(+更新の明細行ID)を指定できません。&lt;/li&gt; &lt;li&gt;月締めされている仕訳に紐づく+更新行の編集・削除はできません。&lt;/li&gt; &lt;li&gt;承認済み仕訳に紐づく+更新行の編集・削除は管理者権限のユーザーのみ可能です。&lt;/li&gt; &lt;li&gt;本APIで取引を更新すると、消費税の計算方法は必ず内税方式が選択されます。&lt;/li&gt; &lt;/ul&gt; ### Example ```java // Import classes: import jp.co.freee.accounting.ApiClient; import jp.co.freee.accounting.ApiException; import jp.co.freee.accounting.Configuration; import jp.co.freee.accounting.auth.*; import jp.co.freee.accounting.models.*; import jp.co.freee.accounting.api.RenewsApi; public class Example { public static void main(String[] args) { ApiClient defaultClient = Configuration.getDefaultApiClient(); defaultClient.setBasePath("https://api.freee.co.jp"); // Configure OAuth2 access token for authorization: oauth2 OAuth oauth2 = (OAuth) defaultClient.getAuthentication("oauth2"); oauth2.setAccessToken("YOUR ACCESS TOKEN"); RenewsApi apiInstance = new RenewsApi(defaultClient); Integer id = 56; // Integer | 取引ID Integer renewId = 56; // Integer | +更新ID Integer companyId = 56; // Integer | 事業所ID try { DealResponse result = apiInstance.deleteDealRenew(id, renewId, companyId); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling RenewsApi#deleteDealRenew"); System.err.println("Status code: " + e.getCode()); System.err.println("Reason: " + e.getResponseBody()); System.err.println("Response headers: " + e.getResponseHeaders()); e.printStackTrace(); } } } ``` ### Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **id** | **Integer**| 取引ID | **renewId** | **Integer**| +更新ID | **companyId** | **Integer**| 事業所ID | ### Return type [**DealResponse**](DealResponse.md) ### Authorization [oauth2](../README.md#oauth2) ### HTTP 
request headers - **Content-Type**: Not defined - **Accept**: application/json ### HTTP response details | Status code | Description | Response headers | |-------------|-------------|------------------| | **200** | | - | | **400** | | - | | **401** | | - | | **403** | | - | | **500** | | - | ## updateDealRenew > DealResponse updateDealRenew(id, renewId, renewUpdateParams) 取引(収入/支出)の+更新の更新 &lt;h2 id&#x3D;\&quot;\&quot;&gt;概要&lt;/h2&gt; &lt;p&gt;指定した事業所の取引(収入/支出)の+更新を更新する&lt;/p&gt; &lt;h2 id&#x3D;\&quot;_2\&quot;&gt;定義&lt;/h2&gt; &lt;ul&gt; &lt;li&gt; &lt;p&gt;issue_date : 発生日&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;due_date : 支払期日&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;amount : 金額&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;due_amount : 支払残額&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;type&lt;/p&gt; &lt;ul&gt; &lt;li&gt;income : 収入&lt;/li&gt; &lt;li&gt;expense : 支出&lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;details : 取引の明細行&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;accruals : 取引の債権債務行&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;renews : 取引の+更新行&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;payments : 取引の支払行&lt;/p&gt; &lt;/li&gt; &lt;li&gt; &lt;p&gt;from_walletable_type&lt;/p&gt; &lt;ul&gt; &lt;li&gt;bank_account : 銀行口座&lt;/li&gt; &lt;li&gt;credit_card : クレジットカード&lt;/li&gt; &lt;li&gt;wallet : 現金&lt;/li&gt; &lt;li&gt;private_account_item : プライベート資金(法人の場合は役員借入金もしくは役員借入金、個人の場合は事業主貸もしくは事業主借)&lt;/li&gt; &lt;/ul&gt; &lt;/li&gt; &lt;/ul&gt; &lt;h2 id&#x3D;\&quot;_3\&quot;&gt;注意点&lt;/h2&gt; &lt;ul&gt; &lt;li&gt;本APIでは+更新の更新のみ可能です。取引や支払行に対する更新はできません。&lt;/li&gt; &lt;li&gt;renew_idにはrenewsのid(+更新ID)を指定してください。renewsのdetailsのid(+更新の明細行ID)を指定できません。&lt;/li&gt; &lt;li&gt;月締めされている仕訳に紐づく+更新行の編集・削除はできません。&lt;/li&gt; &lt;li&gt;承認済み仕訳に紐づく+更新行の編集・削除は管理者権限のユーザーのみ可能です。&lt;/li&gt; &lt;li&gt;本APIで取引を更新すると、消費税の計算方法は必ず内税方式が選択されます。&lt;/li&gt; &lt;/ul&gt; ### Example ```java // Import classes: import jp.co.freee.accounting.ApiClient; import jp.co.freee.accounting.ApiException; import 
jp.co.freee.accounting.Configuration; import jp.co.freee.accounting.auth.*; import jp.co.freee.accounting.models.*; import jp.co.freee.accounting.api.RenewsApi; public class Example { public static void main(String[] args) { ApiClient defaultClient = Configuration.getDefaultApiClient(); defaultClient.setBasePath("https://api.freee.co.jp"); // Configure OAuth2 access token for authorization: oauth2 OAuth oauth2 = (OAuth) defaultClient.getAuthentication("oauth2"); oauth2.setAccessToken("YOUR ACCESS TOKEN"); RenewsApi apiInstance = new RenewsApi(defaultClient); Integer id = 56; // Integer | 取引ID Integer renewId = 56; // Integer | +更新ID RenewUpdateParams renewUpdateParams = new RenewUpdateParams(); // RenewUpdateParams | +更新の更新情報 try { DealResponse result = apiInstance.updateDealRenew(id, renewId, renewUpdateParams); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling RenewsApi#updateDealRenew"); System.err.println("Status code: " + e.getCode()); System.err.println("Reason: " + e.getResponseBody()); System.err.println("Response headers: " + e.getResponseHeaders()); e.printStackTrace(); } } } ``` ### Parameters Name | Type | Description | Notes ------------- | ------------- | ------------- | ------------- **id** | **Integer**| 取引ID | **renewId** | **Integer**| +更新ID | **renewUpdateParams** | [**RenewUpdateParams**](RenewUpdateParams.md)| +更新の更新情報 | ### Return type [**DealResponse**](DealResponse.md) ### Authorization [oauth2](../README.md#oauth2) ### HTTP request headers - **Content-Type**: application/json, application/x-www-form-urlencoded - **Accept**: application/json ### HTTP response details | Status code | Description | Response headers | |-------------|-------------|------------------| | **200** | | - | | **400** | | - | | **401** | | - | | **403** | | - | | **500** | | - |
42.445344
1,471
0.627814
yue_Hant
0.284827
28f4fd27faab7150751a45f749b79a01b77acf81
2,730
md
Markdown
src/connections/destinations/catalog/epica/index.md
jmuething/segment-docs
ba94afb0994f849d804578dd402c0be323cf4b28
[ "CC-BY-4.0" ]
1
2021-12-26T06:50:12.000Z
2021-12-26T06:50:12.000Z
src/connections/destinations/catalog/epica/index.md
jmuething/segment-docs
ba94afb0994f849d804578dd402c0be323cf4b28
[ "CC-BY-4.0" ]
1
2022-03-31T14:39:42.000Z
2022-03-31T14:39:42.000Z
src/connections/destinations/catalog/epica/index.md
jmuething/segment-docs
ba94afb0994f849d804578dd402c0be323cf4b28
[ "CC-BY-4.0" ]
1
2022-01-25T20:37:21.000Z
2022-01-25T20:37:21.000Z
--- title: EPICA Destination rewrite: true --- [EPICA](https://www.epica.ai?utm_source=segmentio&utm_medium=docs&utm_campaign=partners) is the world's first Prediction-as-a-Service platform. Powered by AI, EPICA captures, processes and analyses online data sources to accurately predict customer behavior. EPICA provides predictive analytics for growth marketers, leveraging machine learning to automate audience insights and recommendations. This destination is maintained by EPICA. For any issues with the destination, [contact the Epica Support team](mailto:support@epica.ai). {% include content/beta-note.md %} ## Getting Started {% include content/connection-modes.md %} 1. From the Segment web app, click **Catalog**. 2. Search for "EPICA" in the Catalog, select it, and choose which of your sources to connect the destination to. 3. Enter the "API Key" into your Segment Settings UI, which you can find in your EPICA [Account settings](https://platform.epica.ai/account). ## Page If you're not familiar with the Segment Specs, take a look to understand what the [Page method](/docs/connections/spec/page/) does. An example call would look like: ``` analytics.page() ``` Page calls will be sent to EPICA as a `page`. ## Screen If you're not familiar with the Segment Specs, take a look to understand what the [Screen method](/docs/connections/spec/screen/) does. An example call would look like: ``` [[SEGAnalytics sharedAnalytics] screen:@"Home"]; ``` Screen calls will be sent to EPICA as a `screen`. ## Identify If you're not familiar with the Segment Specs, take a look to understand what the [Identify method](/docs/connections/spec/identify/) does. An example call would look like: ``` analytics.identify('userId123', { email: 'john.doe@example.com', firstName: 'Peter', lastName: 'Gibbons', phone: '888-8880' }); ``` Identify calls will be sent to EPICA as an `identify` event. Traits are optional, but EPICA recommends the following: `email`, `firstName`, `lastName`, and `phone`. 
## Track If you're not familiar with the Segment Specs, take a look to understand what the [Track method](/docs/connections/spec/track/) does. An example call would look like: ``` analytics.track('Clicked Login Button', { color: 'Red', size: 'Large' }) ``` Track calls will be sent to EPICA as a `track` event and can be seen populated in the `Data Platform > Personas` section of EPICA [admin panel](https://platform.epica.ai/personas), which includes unified profiles across a single customer journey. There are two types of Personas: - Anonymous - events triggered by a visitor who only has an `anonymousId` - Identified - events triggered by an identified user with a `userId`, `email` or `phone`
35.454545
395
0.749084
eng_Latn
0.991991
28f578584a8941bc3221824869aa1224d7e865e5
2,894
md
Markdown
src/pages/blog/5-classy-ish-french-insults.md
v4iv/theleakycauldronblog
aa15f8883e16f7ece02a3eee0581a0fa78d72684
[ "MIT" ]
39
2018-04-09T13:51:09.000Z
2021-11-14T06:03:15.000Z
src/pages/blog/5-classy-ish-french-insults.md
stevewil/theleakycauldronblog
7ac71fd7ca12e0515ba66a42f9ea3f977d7a06c5
[ "MIT" ]
10
2019-06-24T06:40:18.000Z
2021-11-12T19:22:24.000Z
src/pages/blog/5-classy-ish-french-insults.md
stevewil/theleakycauldronblog
7ac71fd7ca12e0515ba66a42f9ea3f977d7a06c5
[ "MIT" ]
9
2019-04-13T10:36:20.000Z
2021-05-04T18:10:37.000Z
--- templateKey: article-page title: 5 Classy-ish French Insults slug: classy-french-insults author: Vaibhav Sharma authorLink: https://twitter.com/vaibhaved date: 2020-09-01T11:02:51.127Z cover: /img/markus-clemens-classy-french-insult.jpg metaTitle: 5 Classy French Insults metaDescription: Learn a few of the most classy French insults and sound like a true gentleman. tags: - le francais - language - interesting --- The lockdown due to the pandemic has been rough. Everyone’s trying to find a productive hobby to make the most of it. And I, being no different, tried to learn *Le Français*. While learning a language is rewarding in itself, it gets a bit tedious after a while. Soon you start understanding that the bookish language you learn is mostly useless in IRL conversations. To my immense surprise, I learnt that native speakers don’t exactly use *“J’ai une baguette et un croissant”* all that much. > Pardon my French. So, I decided that I needed to change my strategy and learn some actually useful lines. And where better to start than with insults? Here are some of my favourite classy-ish French insults, the ones I found most interesting. # Et mon cul, c'est du poulet?! **Literally - And is my ass made of chicken?!** This hilarious line isn’t really an insult per se; it’s used as a sarcastic retort when someone is trying to bullshit you. # Si les cons volaient, tu serais chef d'escadrille! **Literally - If idiots could fly, you would be a squadron leader!** Now, this is a proper insult, grand and extravagant, just what you expect from the French. It’s used to call someone the Idiot-in-Charge. # Va te faire cuire un oeuf! **Literally - Go cook yourself an egg!** When a French person tells you to go cook yourself an egg, it means they are annoyed at you. It’s the French way to say *Fuck Off* or *Sod Off*! 
The origin of this is said to be an old, common domestic argument: *‘When a husband would criticise his wife’s cooking, she’d tell him to go cook it himself.’* # Intellectuellement, il vit au dessus de ses moyens. **Literally - Intellectually, he lives beyond his means.** Popularly used by the famous Haitian-Canadian novelist and journalist [Dany Laferrière](https://en.wikipedia.org/wiki/Dany_Laferri%C3%A8re). This is a perfect insult for someone exhibiting [The Dunning-Kruger Effect](https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect). # Il a été bercé trop près du mur. **Literally - He was rocked too close to the wall.** This hilarious insult is my personal favourite. It’s used to subtly imply that someone has brain damage from being accidentally rocked against a wall as an infant. The subtlety of this insult is what makes it so funny. I literally laughed for 5 minutes straight when I first read it. *Which one was your favourite? Do you have any that I missed? Let me know in the comments below.*
67.302326
504
0.768141
eng_Latn
0.998945
28f581caef342a8ba7e31ca6f4b74a1e1e9c28c4
285
md
Markdown
docs/readmes/actionbank.md
mjyc/cycle-robot-drivers
b3befe443c5cb35fee9d2ea2966e050251c8eee6
[ "MIT" ]
6
2018-11-01T10:04:00.000Z
2021-02-21T07:38:47.000Z
docs/readmes/actionbank.md
mjyc/cycle-robot-drivers
b3befe443c5cb35fee9d2ea2966e050251c8eee6
[ "MIT" ]
6
2019-05-29T21:11:54.000Z
2020-12-11T18:56:24.000Z
actionbank/README.md
mjyc/cycle-robot-drivers
b3befe443c5cb35fee9d2ea2966e050251c8eee6
[ "MIT" ]
1
2019-05-20T07:16:47.000Z
2019-05-20T07:16:47.000Z
<!-- This README.md is automatically generated. Edit the JSDoc comments in source code or the md files in docs/readmes/. --> # @cycle-robot-drivers/actionbank Cycle.js components that implement an Action interface inspired by [ROS' actionlib interface](http://wiki.ros.org/actionlib).
47.5
124
0.77193
eng_Latn
0.922865
28f6639207a7af2e78e8eef80e8ab6dd7988a748
875
md
Markdown
content/blog/records/uneasy/index.md
iamEAP/dot-pe
6bb3434c36e68279b490f107a2f704f84d89d867
[ "MIT" ]
null
null
null
content/blog/records/uneasy/index.md
iamEAP/dot-pe
6bb3434c36e68279b490f107a2f704f84d89d867
[ "MIT" ]
5
2021-01-03T14:03:31.000Z
2022-02-26T20:19:46.000Z
content/blog/records/uneasy/index.md
iamEAP/dot-pe
6bb3434c36e68279b490f107a2f704f84d89d867
[ "MIT" ]
null
null
null
--- title: Uneasy date: "2019-06-29T22:00:00-07:00" description: I've got a problem and it shares your name. thumbnail: ./uneasy.jpg langKey: en --- <iframe width="100%" height="450" scrolling="no" frameborder="no" allow="autoplay" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/801493752&color=%23ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&show_teaser=true"></iframe><br /><br /> > As this EP’s premiere aptly demonstrates, a spoonful of sugar, and an unbeatable rhythm, help the bitter pill of romantic reality go down — and get down. > > ~ [Seattle Weekly](https://www.seattleweekly.com/music/golden-idols-will-release-new-ep-at-capitol-hill-block-party/) <br /><a href="https://songwhip.com/album/golden-idols/uneasy" target="_blank" class="button primary fit">Stream, Purchase</a>
54.6875
317
0.753143
eng_Latn
0.485405
28f6d03268a859080f8539ae72215226fad047b6
79
md
Markdown
README.md
CSID-DGU/2021-2-OSSP2-TwoRolless-2
e9381418e3899d8e1e78415e9ab23b73b4f30a95
[ "MIT" ]
null
null
null
README.md
CSID-DGU/2021-2-OSSP2-TwoRolless-2
e9381418e3899d8e1e78415e9ab23b73b4f30a95
[ "MIT" ]
null
null
null
README.md
CSID-DGU/2021-2-OSSP2-TwoRolless-2
e9381418e3899d8e1e78415e9ab23b73b4f30a95
[ "MIT" ]
1
2021-10-15T05:19:20.000Z
2021-10-15T05:19:20.000Z
# 2021-2-OSSP2-TwoRolless-2 # 2021 Fall Semester Open Source Software, Team 2 TwoRolless: "Integrated performance information website"
26.333333
50
0.746835
kor_Hang
0.999996
28f6e977a862e465134c9e006ba38520ac70fcb4
4,554
md
Markdown
Exchange/ExchangeServer/high-availability/managed-availability/health-sets.md
skawafuchi/OfficeDocs-Exchange
67ac7fc6893c39337cc18f8bb07260a47c01db9d
[ "CC-BY-4.0", "MIT" ]
null
null
null
Exchange/ExchangeServer/high-availability/managed-availability/health-sets.md
skawafuchi/OfficeDocs-Exchange
67ac7fc6893c39337cc18f8bb07260a47c01db9d
[ "CC-BY-4.0", "MIT" ]
null
null
null
Exchange/ExchangeServer/high-availability/managed-availability/health-sets.md
skawafuchi/OfficeDocs-Exchange
67ac7fc6893c39337cc18f8bb07260a47c01db9d
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
localization_priority: Normal
description: 'Summary: Exchange Management Shell cmdlets that can help you monitor the health of your Exchange organization.'
ms.topic: article
author: msdmaguire
ms.author: dmaguire
ms.assetid: a4f84312-6cfa-4f17-9707-676aadab1143
ms.date: 6/8/2018
title: Manage health sets and server health
ms.collection: exchange-server
ms.audience: ITPro
ms.prod: exchange-server-it-pro
manager: serdars
---

# Manage health sets and server health

You can use the built-in health reporting cmdlets to perform a variety of tasks related to managed availability, such as:

- Viewing the health of a server or group of servers
- Viewing a list of health sets
- Viewing a list of probes, monitors, and responders associated with a particular health set
- Viewing a list of monitors and their current health

For more information about health reporting and managed availability, see [Managed availability](managed-availability.md).

## What do you need to know before you begin?

- Estimated time to complete each procedure: 2 minutes
- The procedures in this topic require the Exchange Management Shell. For more information, see [Open the Exchange Management Shell](http://technet.microsoft.com/library/63976059-25f8-4b4f-b597-633e78b803c0.aspx).
- For information about keyboard shortcuts that may apply to the procedures in this topic, see [Keyboard shortcuts in the Exchange admin center](../../about-documentation/exchange-admin-center-keyboard-shortcuts.md).

> [!TIP]
> Having problems? Ask for help in the Exchange forums. Visit the forums at: [Exchange Server](https://go.microsoft.com/fwlink/p/?linkId=60612), [Exchange Online](https://go.microsoft.com/fwlink/p/?linkId=267542), or [Exchange Online Protection](https://go.microsoft.com/fwlink/p/?linkId=285351).

## Use the Exchange Management Shell to view server health

You can use the Exchange Management Shell to get a summary of the health of an Exchange server.

Run either of the following commands to view the health sets and health information on an Exchange server:

```
Get-HealthReport -Identity <ServerName>
```

```
Get-ServerHealth -Identity <ServerName> | Format-Table Server,CurrentHealthSetState,Name,HealthSetName,AlertValue,HealthGroupName -Auto
```

Run any of the following commands to view the health sets on an Exchange server or database availability group:

```
Get-ExchangeServer | Get-HealthReport -RollupGroup
```

```
Get-ExchangeServer | Get-HealthReport -RollupGroup -HealthSetName <HealthSet>
```

```
(Get-DatabaseAvailabilityGroup <DAGName>).Servers | Get-HealthReport -RollupGroup
```

For detailed syntax and parameter information, see [Get-HealthReport](http://technet.microsoft.com/library/f33fbed5-0e01-4d7e-a252-121b2afb6864.aspx).

## Use the Exchange Management Shell to view a list of health sets

A *health set* is a group of monitors, probes, and responders for a component that determine whether the component is healthy or unhealthy.

Run the following command to view the health sets on an Exchange server:

```
Get-HealthReport -Server <ServerName>
```

For detailed syntax and parameter information, see [Get-HealthReport](http://technet.microsoft.com/library/f33fbed5-0e01-4d7e-a252-121b2afb6864.aspx).

## Use the Exchange Management Shell to view the probes, monitors, and responders for a health set

You can use the Exchange Management Shell to view the list of probes, monitors, and responders associated with a health set on an Exchange server.

Run the following command to view the probes, monitors, and responders associated with a health set on an Exchange server:

```
Get-MonitoringItemIdentity -Server <ServerName> -Identity <HealthSetName> | Format-Table Identity,ItemType,Name -Auto
```

For detailed syntax and parameter information, see [Get-MonitoringItemIdentity](http://technet.microsoft.com/library/7a4da080-0fe6-4dd7-85a2-cceeb68f95e0.aspx).

## Use the Exchange Management Shell to view a list of monitors and their current health

The health of a monitor is reported by using the "worst of" the monitors in the health set. You can view the details of a health set to see which monitors are healthy and which ones are unhealthy.

Run the following command to view a list of the monitors and their current health on an Exchange server:

```
Get-ServerHealth -HealthSet <HealthSetName> -Server <ServerName> | Format-Table Name, AlertValue -Auto
```

For detailed syntax and parameter information, see [Get-ServerHealth](http://technet.microsoft.com/library/ca9cff3a-ecda-422d-abd7-b7d8da71a6c7.aspx).
41.4
296
0.787659
eng_Latn
0.968093
28f720ea1b85c4f0e7a4647e00f2453e83a3c252
33
md
Markdown
README.md
Marko9827/discordBot_welcome
53f3022703aed4abacedcdd6fcbea859dcc91149
[ "MIT" ]
null
null
null
README.md
Marko9827/discordBot_welcome
53f3022703aed4abacedcdd6fcbea859dcc91149
[ "MIT" ]
null
null
null
README.md
Marko9827/discordBot_welcome
53f3022703aed4abacedcdd6fcbea859dcc91149
[ "MIT" ]
null
null
null
# discordBot_welcome

Discord bot
11
20
0.848485
eng_Latn
0.750968
28f78d5ecf2e5f95b1b478d777c96e53ce7adc64
6,135
md
Markdown
articles/media-services/latest/storage-account-concept.md
gustavodelima/azure-docs.pt-br
07f02d5a6f3328b5b720c3091518e805d37590b8
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/latest/storage-account-concept.md
gustavodelima/azure-docs.pt-br
07f02d5a6f3328b5b720c3091518e805d37590b8
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/media-services/latest/storage-account-concept.md
gustavodelima/azure-docs.pt-br
07f02d5a6f3328b5b720c3091518e805d37590b8
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: 'Azure Storage accounts'
description: Learn how to create an Azure storage account to use with Azure Media Services.
services: media-services
documentationcenter: ''
author: IngridAtMicrosoft
manager: femila
editor: ''
ms.service: media-services
ms.workload: ''
ms.topic: conceptual
ms.date: 01/29/2021
ms.author: inhenkel
---

# <a name="azure-storage-accounts"></a>Azure Storage accounts

[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]

To start managing, encrypting, encoding, analyzing, and streaming media content in Azure, you need to create a Media Services account. When you create a Media Services account, you need to supply the name of an Azure Storage account resource. The specified storage account is attached to your Media Services account. The Media Services account and all associated storage accounts must be in the same Azure subscription. We strongly recommend using storage accounts in the same location as the Media Services account to avoid additional latency and data egress costs.

You must have one **Primary** storage account, and you can have as many **Secondary** storage accounts as you want associated with your Media Services account. Media Services supports **General-purpose v2** (GPv2) or **General-purpose v1** (GPv1) accounts. Blob-only accounts are not allowed as **Primary**. We recommend that you use GPv2, so you can take advantage of the latest features and performance.

To learn more about storage accounts, see [Azure Storage account overview](../../storage/common/storage-account-overview.md).

> [!NOTE]
> Only the hot access tier is supported for use with Azure Media Services, although the other access tiers can be used to reduce storage costs on content that is not being actively used.

There are different SKUs you can choose for your storage account. If you want to experiment with storage accounts, use `--sku Standard_LRS`. However, when picking a SKU for production you should consider `--sku Standard_RAGRS`, which provides geographic replication for business continuity.

## <a name="assets-in-a-storage-account"></a>Assets in a storage account

In Media Services v3, the Storage APIs are used to upload files into assets. For more information, see [Assets in Azure Media Services v3](assets-concept.md).

> [!Note]
> Don't attempt to change the contents of blob containers that were generated by the Media Services SDK without using the Media Services APIs.

## <a name="storage-side-encryption"></a>Storage-side encryption

To protect your assets at rest, the assets should be encrypted by storage-side encryption. The following table shows how storage-side encryption works in Media Services v3:

|Encryption option|Description|Media Services v3|
|---|---|---|
|Media Services storage encryption| AES-256 encryption, key managed by Media Services. |Not supported.<sup>1</sup>|
|[Storage service encryption for data at rest](../../storage/common/storage-service-encryption.md)|Server-side encryption offered by Azure Storage, key managed by Azure or by the customer.|Supported.|
|[Storage client-side encryption](../../storage/common/storage-client-side-encryption.md)|Client-side encryption offered by Azure Storage, customer-managed key in Key Vault.|Not supported.|

<sup>1</sup> In Media Services v3, storage encryption (AES-256 encryption) is only supported for backwards compatibility when your assets were created with Media Services v2. That means v3 works with existing storage-encrypted assets but won't allow creation of new ones.

## <a name="storage-account-double-encryption"></a>Storage account double encryption

Storage accounts support double encryption, but the second layer must be explicitly enabled. See [Azure Storage encryption for data at rest](https://docs.microsoft.com/azure/storage/common/storage-service-encryption#doubly-encrypt-data-with-infrastructure-encryption).

## <a name="storage-account-errors"></a>Storage account errors

The "Disconnected" state for a Media Services account indicates that the account no longer has access to one or more of the attached storage accounts, due to a change in the storage access keys. Media Services requires up-to-date storage access keys to perform various tasks in the account. The following are the primary scenarios that would cause a Media Services account to not have access to attached storage accounts.

|Issue|Solution|
|---|---|
|The Media Services account or the attached storage accounts were migrated to separate subscriptions. |Migrate the storage accounts or the Media Services account so that they are all in the same subscription. |
|The Media Services account is using an attached storage account in a different subscription, because it was an early Media Services account that supported this scenario. All early Media Services accounts were converted to modern Azure Resource Manager based accounts and will have a Disconnected state. |Migrate the storage account or the Media Services account so that they are all in the same subscription.|

## <a name="next-steps"></a>Next steps

To learn how to attach a storage account to your Media Services account, see [Create an account](./create-account-howto.md).
100.57377
463
0.793154
por_Latn
0.999895
28f827fe4fac2972344121499320f3d2be6152c6
3,476
md
Markdown
docs/src/_examples/stimulus/tips-gotchas.md
Taher-Ghaleb/ruby2js
11849f744620cac10658901ba7863696e7a6c218
[ "Ruby", "Unlicense", "MIT" ]
115
2020-12-31T17:19:22.000Z
2022-03-20T08:22:13.000Z
docs/src/_examples/stimulus/tips-gotchas.md
Taher-Ghaleb/ruby2js
11849f744620cac10658901ba7863696e7a6c218
[ "Ruby", "Unlicense", "MIT" ]
41
2020-12-27T19:22:44.000Z
2022-03-28T23:15:51.000Z
docs/src/_examples/stimulus/tips-gotchas.md
Taher-Ghaleb/ruby2js
11849f744620cac10658901ba7863696e7a6c218
[ "Ruby", "Unlicense", "MIT" ]
13
2021-01-19T03:48:12.000Z
2022-02-24T17:54:18.000Z
---
top_section: Stimulus
title: Tips & Gotchas
order: 17
next_page_order: 21
category: tips
---

In a few short pages we have covered a lot of ground, but really we have only scratched the surface of what Ruby2JS can do. These pages make it look easy, but there are a few things to watch out for.

* Parenthesis

  There is a fundamental mismatch between the Ruby and JavaScript object models. In Ruby, classes have methods, some of which may be attribute accessors. In JavaScript, classes have properties, some of which may be functions.

  This makes it impossible to distinguish between calls to methods with zero arguments and property accesses. Following Ruby's model (everything is a method call) would make it impossible to access JavaScript properties. Following JavaScript's model of everything is a property would make it impossible to call a method with zero arguments. Ruby2JS solves this by detecting the use of parenthesis to distinguish between these two cases.

  First, the rules of thumb, and then some of the significant exceptions:

  * **Always** use parenthesis on method definitions when there are zero arguments.
  * **Always** use parenthesis on method calls when there are zero arguments.
  * **Never** use parenthesis on definitions of attribute readers / property getters.

  If you follow these rules, you will never have a problem. All but the last example ([Content Loader](content-loader)) followed these rules.

  The lack of strong typing in both Ruby and JavaScript makes exceptions difficult, as type inferencing will only get you so far. The one exception is the value of `this` / `self` within the definition of classes, the type of which is always very much known. If you are careful on your method and accessor *definitions*, then *references* to these methods and attributes within the class can be determined at compile time, enabling the omission of parenthesis in intra-class calls even when there are no arguments.
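A minimal sketch of these parenthesis rules in plain Ruby (the `GreeterController` class and its methods are made-up names for illustration, not part of Stimulus or the Ruby2JS docs):

```ruby
# Plain Ruby sketch of the parenthesis rules described above.
# GreeterController is an illustrative name, not part of Stimulus.
class GreeterController
  def initialize
    @name = "world"
  end

  # Attribute reader / property getter: defined WITHOUT parenthesis,
  # so Ruby2JS can compile it to a JavaScript property getter.
  def name
    @name
  end

  # Zero-argument method: defined (and called) WITH parenthesis,
  # so it compiles to a JavaScript function.
  def greet()
    # Intra-class reference: because the definition of `name` is known
    # at compile time, no parenthesis are needed here.
    "Hello, #{name}!"
  end
end

puts GreeterController.new.greet()
```

The same source is valid Ruby, which is why the rules cost nothing when running your code with a plain Ruby interpreter.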
  It just so happens that Stimulus hits this sweet spot, as every definition is a class.

* Returns

  In Ruby, every statement is an expression and returns a value. The `return` statement at the end of a method is therefore optional. This is not the case with JavaScript. There are a few cases (attribute readers being an obvious example) where the need for a return statement can be inferred and is inserted automatically by Ruby2JS, but in general, if you define a method and want to return a value, you need a `return` statement.

  This is not much of a problem for Stimulus, as neither lifecycle methods nor actions are expected to return a result.

  One solution that may be a bit of overkill: the [return](../../docs/filters/return) filter that adds a return statement to **every** method definition. This generally is harmless, but may make the generated code marginally bigger and marginally less readable.

* Explore!

  In Stimulus, everything is a class. Classes in both Ruby and JavaScript have inheritance hierarchies, are open, can have mixins by including modules, etc. The [require](../../docs/filters/require) filter can help by turning Ruby `require` statements into JavaScript `import` statements.

Got questions or comments? Join the [community](../../docs/community)!
45.736842
79
0.7313
eng_Latn
0.999707
28f863573207ea6347a6ae407c2d38fda41e02e5
1,136
md
Markdown
docs/framework/wcf/diagnostics/wmi/msmqintegrationbindingelement.md
sheng-jie/docs.zh-cn-1
e825f92bb3665ff8e05a0d627bb65a9243b39992
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/wmi/msmqintegrationbindingelement.md
sheng-jie/docs.zh-cn-1
e825f92bb3665ff8e05a0d627bb65a9243b39992
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/framework/wcf/diagnostics/wmi/msmqintegrationbindingelement.md
sheng-jie/docs.zh-cn-1
e825f92bb3665ff8e05a0d627bb65a9243b39992
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: MsmqIntegrationBindingElement
ms.date: 03/30/2017
ms.assetid: eaaa7651-e6e5-4fae-9dad-c1867d38b586
ms.openlocfilehash: 9018791a6c136f3ad47d84f87c02f6c8ab668e4e
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 05/04/2018
ms.locfileid: "33487258"
---
# <a name="msmqintegrationbindingelement"></a>MsmqIntegrationBindingElement

MsmqIntegrationBindingElement

## <a name="syntax"></a>Syntax

```
class MsmqIntegrationBindingElement : MsmqBindingElementBase
{
  string SerializationFormat;
};
```

## <a name="methods"></a>Methods

The MsmqIntegrationBindingElement class does not define any methods.

## <a name="properties"></a>Properties

The MsmqIntegrationBindingElement class has the following properties:

### <a name="serializationformat"></a>SerializationFormat

Data type: String

Access type: Read-only

The format that the binding uses to serialize messages.

## <a name="requirements"></a>Requirements

|MOF|Declared in Servicemodel.mof.|
|---------|-----------------------------------|
|Namespace|Defined in root\ServiceModel|

## <a name="see-also"></a>See also

<xref:System.ServiceModel.MsmqIntegration.MsmqIntegrationBindingElement>
25.244444
75
0.701585
yue_Hant
0.150167
28f894ac22a86acd23b34eb9d1f1e12cff35ba35
2,914
md
Markdown
README.md
pelson/artifact-upload-handler
7a9b1415239a06e345656b1725285d9ecd011564
[ "BSD-3-Clause" ]
null
null
null
README.md
pelson/artifact-upload-handler
7a9b1415239a06e345656b1725285d9ecd011564
[ "BSD-3-Clause" ]
4
2018-02-02T11:06:43.000Z
2018-05-18T05:10:34.000Z
README.md
pelson/artifact-upload-handler
7a9b1415239a06e345656b1725285d9ecd011564
[ "BSD-3-Clause" ]
3
2018-02-01T11:09:20.000Z
2019-12-04T03:00:22.000Z
# artifact-upload-handler

A secure webserver to handle uploading CI conda build artifacts to a specified conda channel.

To use, simply clone this repo and run the Python webserver, observing the [Usage](#usage) and [Requirements](#requirements) instructions below.

## Usage

```
usage: artifact_upload_handler.py [-h] -d WRITE_DIR -p PORT -e CONDA_EXE
                                  -c CERTFILE -k KEYFILE -t TOKEN_HASH

optional arguments:
  -h, --help            show this help message and exit
  -d WRITE_DIR, --write_dir WRITE_DIR
                        directory to write artifacts to
  -p PORT, --port PORT  webserver port number
  -e CONDA_EXE, --conda_exe CONDA_EXE
                        full path to conda executable
  -c CERTFILE, --certfile CERTFILE
                        full path to certificate file for SSL
  -k KEYFILE, --keyfile KEYFILE
                        full path to keyfile for SSL
  -t TOKEN_HASH, --token_hash TOKEN_HASH
                        hash of secure token
```

### Usage Example

For example, to start a secure webserver running on port 9999 that will write linux-64 packages to the (imaginary) conda channel at ``/path/to/conda/channel``:

```
$ python artifact_upload_handler.py -d /path/to/conda/channel/linux-64 \
                                    -p 9999 \
                                    -e /path/to/env/bin/conda \
                                    -c /path/to/mycertfile.crt \
                                    -k /path/to/mykeyfile.key \
                                    -t eccd989b
```

To test the webserver:

```python
import requests

artifacts = [open('my-artifact1-1.0.0-2.tar.bz2', 'rb'),
             open('my-artifact2-3.4.2-0.tar.bz2', 'rb'),
             open('my-artifact3-0.9.1-0.tar.bz2', 'rb'),
             ]
url = 'https://localhost:9999/'
token = 'mysecuretoken'
for artifact in artifacts:
    requests.post(url, data={'token': token},
                  files={'artifact': artifact}, verify=False)
```

The webserver handles one artifact per ``POST`` request, which is replicated here by the loop over the sample artifacts.

The dictionary key names ``'token'`` and ``'artifact'`` in the request must be observed. These keys are expected by the webserver when handling the request.

Note that in order to prevent unauthorised uploads to the channel, the request must be accompanied by a secure token that, when salted and hashed, matches the salted and hashed token specified when the webserver is set up (see [Usage](#usage) above).

An expected use-case of this server is to handle build artifacts produced from CI. In such a use-case, this secure token would be defined in the CI pipeline and is passed in unhashed form. We rely on the secured nature of the server to prevent the unhashed token being revealed.

### Requirements

Requires [tornado](http://www.tornadoweb.org/en/stable/index.html) and ``hmac`` v2.7.7+.
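The README does not specify the exact salt-and-hash scheme the server uses to verify tokens, so the helper below is only a hypothetical illustration (plain SHA-256 with an optional salt prefix) of how a short token hash like the `-t eccd989b` argument in the usage example might be derived:

```python
import hashlib


def token_digest(token: str, salt: str = "") -> str:
    """Hypothetical helper: salt-and-hash a token with SHA-256.

    The real server defines its own scheme; this only illustrates the idea
    of passing a *hash* of the token on the command line rather than the
    token itself.
    """
    return hashlib.sha256((salt + token).encode("utf-8")).hexdigest()


# A short prefix of the digest, similar in shape to the `-t eccd989b`
# argument shown in the usage example above.
print(token_digest("mysecuretoken")[:8])
```

Whatever the concrete scheme, the point of the design is that only the hash lives in the server's configuration, while the unhashed token stays in the CI pipeline's secrets.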
35.975309
93
0.640014
eng_Latn
0.963673
28f8b7c623fa402d28a124705b43208c7dd0aa1f
241
md
Markdown
.github/PULL_REQUEST_TEMPLATE.md
netbeast/bigfoot
f790baa70e146dce24849fd43a086be1fead368d
[ "MIT" ]
57
2017-05-21T12:02:46.000Z
2021-11-15T09:48:23.000Z
.github/PULL_REQUEST_TEMPLATE.md
netbeast/bigfoot
f790baa70e146dce24849fd43a086be1fead368d
[ "MIT" ]
18
2017-06-29T07:02:23.000Z
2022-02-12T12:53:49.000Z
.github/PULL_REQUEST_TEMPLATE.md
netbeast/bigfoot
f790baa70e146dce24849fd43a086be1fead368d
[ "MIT" ]
11
2017-06-01T06:03:37.000Z
2021-04-09T03:50:51.000Z
# Making a good PR

- [ ] Explain the **motivation** for making this change.
- [ ] Ensure it is the smallest possible and changes the minimum number of files. Small pull requests are much easier to review and more likely to get merged ASAP.
48.2
163
0.746888
eng_Latn
0.999545
28f90f0b9406e835bfdcd3422379a02d423e1786
32
md
Markdown
README.md
RohiniAmmu/GoogleDriveAPI
457334efdaf733f7fb166c0f11b9a5d6aa7d161e
[ "Apache-2.0" ]
null
null
null
README.md
RohiniAmmu/GoogleDriveAPI
457334efdaf733f7fb166c0f11b9a5d6aa7d161e
[ "Apache-2.0" ]
null
null
null
README.md
RohiniAmmu/GoogleDriveAPI
457334efdaf733f7fb166c0f11b9a5d6aa7d161e
[ "Apache-2.0" ]
null
null
null
# GoogleDriveAPI

GoogleDriveAPI
10.666667
16
0.875
kor_Hang
0.761702
28f91f757f8009c9feded0e2ef4b93db59937b99
13,413
md
Markdown
_posts/2010-3-1-上海熊猫开车撞老虎.md
backup53/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
18
2020-01-02T21:43:02.000Z
2022-02-14T02:40:34.000Z
_posts/2010-3-1-上海熊猫开车撞老虎.md
wzxwj/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
3
2020-01-01T16:53:59.000Z
2020-01-05T10:14:11.000Z
_posts/2010-3-1-上海熊猫开车撞老虎.md
wzxwj/1984bbs
152406c37afab79176f0d094de5ac4cb0c780730
[ "MIT" ]
13
2020-01-20T14:27:39.000Z
2021-08-16T02:13:21.000Z
--- layout: default date: 2010-3-1 title: 上海熊猫开车撞老虎 categories: 自由新闻社 --- # 上海熊猫开车撞老虎 本主题由 musicool 于 2010-3-1 10:33 合并 luugoo 拖延心理学:向与生俱来的行为顽症宣战】https://1984bbs.com/viewthread.php?tid=60185 1楼 大 中 小 发表于 2010-3-1 00:42 只看该作者 上海熊猫开车撞老虎 上海国保警察沈国良用车撞冯正虎 2010年2月28日上午郭泉的辩护律师郭莲辉打来电话,他路过上海,想来看望我。我欢迎他光临寒舍,就放弃原定与妻子一起去岳母家过元宵节的计划,在家等候远道而来的客人。其间,一位上海的古玉鉴定专家沈纯理先生也来拜访我。 中午12:00左右,郭莲辉律师由二位上海市民陪同,其中一位顾国平,我不是很熟悉的,另一位我根本不认识,但随同郭律师登门拜访我家的人都是我的客人。我们聊了一个半小时,就起身去我家附近的餐馆吃午饭。 13:30左右,我们走到我小区的大门口,被一大批社区保安人员围住,其中有一位便衣警察要求郭律师跟他们去五角场派出所,郭律师当场拒绝,要求有合法的证件才可以跟他们走。我对他们说:这是我的客人,我们要去吃饭了,有什么事等我们吃饭后也可以谈,是谁叫他们去?这位便衣说:与你没有关系,是沈国良找他们。我说:他们是来我家做客的,你们又没有合法证件,你们要征求郭律师自己是否愿意配合。 后来,郭律师与二位市民被便衣警察请上小车子送到五角场派出所,我也不清楚为什么这么急匆匆传唤郭律师,我的感觉他没有犯法,他刚到上海,已购了飞机票,明天回江西,只不过路过上海看望一下朋友而已。但愿是什么误会,他去派出所搞清一下就可以走了,他是一个律师会懂得如何与警察打交道。 他们走后一会,大约14:00左右,我与沈纯理先生还站在小区的大门边,准备去吃饭。突然一辆深色的小桥车疯狂地朝我冲来,撞到我的左腿,紧急煞住了。周围的许多人都惊呆了,这个司机疯了。门卫奔出来要追究这个司机,一看是国保警察沈国良。他开进小区,又转过头,我走过去站在他的车前,大声谴责他:你想压人,我让你压。他又开过来撞我,我被撞出一步。这时民警小叶拉住我。杨浦区国保警察叶处长也在现场目睹此景。小区大门上空安装的摄像头记录了这个事件的全过程。 国保警察沈国良离开时,又将我的朋友沈纯理先生带走了。在场的所有人都觉得沈国良的行为实在过分,今天没有人得罪他,我也没有妨碍他什么,他要找的郭律师也已跟便衣警察走了。他今天的行为已失去理智,公然开车撞人是一个警察可以做的行为吗?这表明他心里压力太大,已无法控制自己的情绪,工作的失败变成个人的泄愤,所做的行为与他的年龄与职业不相符,不适合做国保警察工作。 沈国良先生是包管我的国保警察,对我的一连串打压都与他有关,还指使社区保安人员对我干一些坏事。我很清楚,但我总是谅解他,这是他养家糊口的工作。当然管住我,他有成绩,可以有奖励。但是,我坚守自己做人尊严、维护自己的公民权利,我不得不让他失败。过去他企图无法无理地软禁我在家,但被我冲破了。我回国的第一天他在接我的车上告知我不准接受记者采访,但我一下车就被记者包围,又接受了记者的采访。 中国政府依法让我回国,实际上纠正了上海违法官员的错误。中国驻日大使馆代表中国政府一周内三次来远离东京的成田机场看望我,他们的诚意确实令我感动,我没有任何条件就主动离开机场,化解了一个难题。中国驻日大使馆也帮助我顺利回国。我们之间没有任何交易与承诺,是靠双方的诚信去解决问题。我的回国却让上海某些国保警察觉得很失面子,沈国良先生在我背后散布我与中国驻日大使馆有交易、有承诺的流言蜚语,还在制造一些麻烦。的确,他们一下子无法接受我回国的事实,但是个人、地方必须服从国家利益的需要。 其实,沈国良先生不必太生气,这是工作,我们俩没有个人之间的怨恨。这个工作本来就很难做,他做不成功,其他人也做不成功,做成功的是不正常的。在一个法律制度日趋完善的国家里,要做违法的事,也只能穿着便衣,偷偷摸摸地干,对方有恐惧的心理,被吓住了,不讲法的警察就能得逞一下,否则很难成功,没有人肯屈服。沈国良先生就是要撞死我,也无法改变这个社会的现实与趋势。而且,中国的检察部门也不容许警察可以公然开车撞人。 冯正虎 2010年2月28日 [ 本帖最后由 luugoo 于 2010-3-1 00:59 编辑 ] --- [Terminusbot](https://github.com/TerminusBot) 整理,讨论请前往 [2049bbs.xyz](http://2049bbs.xyz/) --- luugoo 
拖延心理学:向与生俱来的行为顽症宣战】https://1984bbs.com/viewthread.php?tid=60185 2楼 大 中 小 发表于 2010-3-1 00:47 只看该作者 说句难听的话,冯最糟的观点就在于他相信中国有司法正义。 “而且,中国的检察部门也不容许警察可以公然开车撞人。”那是当然,但撞你冯正虎那就是另一回事了。 上肛上腺 路边社,专业从事各类路边消息及不规范谣言的搜集整理 twitter.com/try2feel 3楼 大 中 小 发表于 2010-3-1 00:53 只看该作者 交通肇事 英雄烈士,这玩艺人家琢磨得很明白 mimicry 异能发癫者 4楼 大 中 小 发表于 2010-3-1 01:16 只看该作者 五角场派出所,我去过诶~钱包丢了报的案,屁用没有 firedragoon 自由新闻社驻枫叶国通讯员 5楼 大 中 小 发表于 2010-3-1 01:25 只看该作者 引用: > 原帖由 luugoo 于 2010-3-1 00:47 发表 > ![](https://1984bbs.com/images/common/back.gif) > 说句难听的话,冯最糟的观点就在于他相信中国有司法正义。 我觉得不管这是他迂腐还是他故意摆出来的斗争策略,他能成功回国和他这种克制的斗争方式分不开 墙倒众人推 靠谱青年 暂时闭关一段时间 6楼 大 中 小 发表于 2010-3-1 01:33 只看该作者 于此观之,魔都的国宝头子警察头子政法头子之类的对冯很是痛恨,估计要不是世博会,冯可能会受到更多的折磨。 佳能 7楼 大 中 小 发表于 2010-3-1 01:39 只看该作者 注意这一点,这是上海警察! 透露社记者 8楼 大 中 小 发表于 2010-3-1 02:25 只看该作者 想想看,当沈国良面对杨佳的时候,会是什么样子 xxlfile 9楼 大 中 小 发表于 2010-3-1 03:32 只看该作者 魔都末日寻常事。马勒隔壁草泥马。 神仙一溜烟 杵君 10楼 大 中 小 发表于 2010-3-1 08:30 只看该作者 人肉他!出门小心! 黄阿狗 金玉其内 败絮其外 11楼 大 中 小 发表于 2010-3-1 08:50 只看该作者 老冯好样的。 此文写的太好了 到底是老江湖啊 柳叶眉 该用户已被删除 12楼 大 中 小 发表于 2010-3-1 09:04 只看该作者 一般上海人总想给人好精明好文明的印象,但是沈国良同志好像例外:好蠢好蛮横,一点腔调都没有的。难道上海人都不愿意当警察,都是瘪三去做? 
丝丝兔 专业围观群众 13楼 大 中 小 发表于 2010-3-1 09:09 只看该作者 上海人钓鱼执法的时候似乎没想过给人好精神好文明的印象吧。 柳叶眉 该用户已被删除 14楼 大 中 小 发表于 2010-3-1 09:28 只看该作者 回复 13楼 丝丝兔 的话题 那些钓鱼的难道不都是瘪三?警察都是瘪三了,协警连瘪三都不如。 shanfree life is a struggle 15楼 大 中 小 发表于 2010-3-1 10:05 只看该作者 上海国保警察沈国良用车撞冯正虎 ref:https://docs.google.com/View?id=d8xbpp6_65dgx6vghs 2010年2月28日上午郭泉的辩护律师郭莲辉打来电话,他路过上海,想来看望我。我欢迎他光临寒舍,就放弃原定与妻子一起去岳母家过元宵节的计划,在家等候远道而来的客人。其间,一位上海的古玉鉴定专家沈纯理先生也来拜访我。 中午12:00左右,郭莲辉律师由二位上海市民陪同,其中一位顾国平,我不是很熟悉的,另一位我根本不认识,但随同郭律师登门拜访我家的人都是我的客人。我们聊了一个半小时,就起身去我家附近的餐馆吃午饭。 13:30左右,我们走到我小区的大门口,被一大批社区保安人员围住,其中有一位便衣警察要求郭律师跟他们去五角场派出所,郭律师当场拒绝,要求有合法的证件才可以跟他们走。我对他们说:这是我的客人,我们要去吃饭了,有什么事等我们吃饭后也可以谈,是谁叫他们去?这位便衣说:与你没有关系,是沈国良找他们。我说:他们是来我家做客的,你们又没有合法证件,你们要征求郭律师自己是否愿意配合。 后来,郭律师与二位市民被便衣警察请上小车子送到五角场派出所,我也不清楚为什么这么急匆匆传唤郭律师,我的感觉他没有犯法,他刚到上海,已购了飞机票,明天回江西,只不过路过上海看望一下朋友而已。但愿是什么误会,他去派出所搞清一下就可以走了,他是一个律师会懂得如何与警察打交道。 他们走后一会,大约14:00左右,我与沈纯理先生还站在小区的大门边,准备去吃饭。突然一辆深色的小桥车疯狂地朝我冲来,撞到我的左腿,紧急煞住了。周围的许多人都惊呆了,这个司机疯了。门卫奔出来要追究这个司机,一看是国保警察沈国良。他开进小区,又转过头,我走过去站在他的车前,大声谴责他:你想压人,我让你压。他又开过来撞我,我被撞出一步。这时民警小叶拉住我。杨浦区国保警察叶处长也在现场目睹此景。小区大门上空安装的摄像头记录了这个事件的全过程。 国保警察沈国良离开时,又将我的朋友沈纯理先生带走了。在场的所有人都觉得沈国良的行为实在过分,今天没有人得罪他,我也没有妨碍他什么,他要找的郭律师也已跟便衣警察走了。他今天的行为已失去理智,公然开车撞人是一个警察可以做的行为吗?这表明他心里压力太大,已无法控制自己的情绪,工作的失败变成个人的泄愤,所做的行为与他的年龄与职业不相符,不适合做国保警察工作。 沈国良先生是包管我的国保警察,对我的一连串打压都与他有关,还指使社区保安人员对我干一些坏事。我很清楚,但我总是谅解他,这是他养家糊口的工作。当然管住我,他有成绩,可以有奖励。但是,我坚守自己做人尊严、维护自己的公民权利,我不得不让他失败。过去他企图无法无理地软禁我在家,但被我冲破了。我回国的第一天他在接我的车上告知我不准接受记者采访,但我一下车就被记者包围,又接受了记者的采访。 中国政府依法让我回国,实际上纠正了上海违法官员的错误。中国驻日大使馆代表中国政府一周内三次来远离东京的成田机场看望我,他们的诚意确实令我感动,我没有任何条件就主动离开机场,化解了一个难题。中国驻日大使馆也帮助我顺利回国。我们之间没有任何交易与承诺,是靠双方的诚信去解决问题。我的回国却让上海某些国保警察觉得很失面子,沈国良先生还在我背后散布我与中国驻日大使馆有交易、有承诺的流言蜚语,还在制造一些麻烦。的确,他们一下子无法接受我回国的事实,但是个人、地方必须服从国家利益的需要。 其实,沈国良先生不必太生气,这是工作,我们俩没有个人之间的怨恨。这个工作本来就很难做,他做不成功,其他人也做不成功,做成功的是不正常的。在一个法律制度日趋完善的国家里,要做违法的事,也只能穿着便衣,偷偷摸摸地干,对方有恐惧的心理,被吓住了,不讲法的警察就能得逞一下,否则很难成功,没有人肯屈服。沈国良先生就是要撞死我,也无法改变这个社会的现实与趋势。而且,中国的检察部门也不容许警察可以公然开车撞人。 冯正虎 2010年2月28日 冯正虎的联系方式: 地址:上海市政通路240弄3号302室 邮编:200433 电话:021-55225958 手机:13524687100 Email:fzh2005@hotmail.co.jp 
推特:http://twitter.com/fzhenghu 护宪维权网:http://fzh2005.net Edit this page (if you have permission) Google Docs -- Web word processing, presentations and spreadsheets. xiong13 16楼 大 中 小 发表于 2010-3-1 10:08 只看该作者 一副末代的景象 快乐流浪汉 脑力劳动教养所指导员,五毛控 GFW爱好者 低俗控 业余翻墙 长期围观 资深群众 被代表 不明真相 17楼 大 中 小 发表于 2010-3-1 10:19 只看该作者 沈国保情绪很不稳定啊! shanfree life is a struggle 18楼 大 中 小 发表于 2010-3-1 10:26 只看该作者 不好意思,发重了,沈XX没搜到,以为没法过,唉。。。 麻烦素鸡合并或删除 小龙人 草马族族民 19楼 大 中 小 发表于 2010-3-1 10:36 只看该作者 一个时代的崩溃,往往从小事情开始 luugoo 拖延心理学:向与生俱来的行为顽症宣战】https://1984bbs.com/viewthread.php?tid=60185 20楼 大 中 小 发表于 2010-3-1 11:06 只看该作者 引用: > 原帖由 firedragoon 于 2010-3-1 01:25 发表 > ![](https://1984bbs.com/images/common/back.gif) > > 我觉得不管这是他迂腐还是他故意摆出来的斗争策略,他能成功回国和他这种克制的斗争方式分不开 宁愿相信这是他的策略。。 george 思想罪在逃犯 大洋之声轮值DJ 21楼 大 中 小 发表于 2010-3-1 11:09 只看该作者 引用: > 原帖由 小龙人 于 2010-3-1 10:36 发表 ![](https://1984bbs.com/images/common/back.gif) > 一个时代的崩溃,往往从小事情开始 蝴蝶效应:一只南美洲亚马逊河流域热带雨林中的蝴蝶,偶尔扇动几下翅膀,可能在两周后在美国德克萨斯引起一场龙卷风 george 思想罪在逃犯 大洋之声轮值DJ 22楼 大 中 小 发表于 2010-3-1 11:12 只看该作者 引用: > 原帖由 luugoo 于 2010-3-1 00:47 发表 > ![](https://1984bbs.com/images/common/back.gif) > 说句难听的话,冯最糟的观点就在于他相信中国有司法正义。 > > “而且,中国的检察部门也不容许警察可以公然开车撞人。”那是当然,但撞你冯正虎那就是另一回事了。 冯最棒的地方就在于他使人们看到在中国仍然有人坚持相信并且追寻正义。 旮旯旭 23楼 大 中 小 发表于 2010-3-1 11:21 只看该作者 太嚣张了 对付这样的人只好人肉了 freehost01 悲剧啊,我就是一个悲剧 24楼 大 中 小 发表于 2010-3-1 11:30 只看该作者 不论是策略或者是冯的为人,这样做都是有好处的。不温不火,像个内功极高的大侠,让对方找不到机会进攻。 小苍蝇 我们没有死绝,我们就在你身边 25楼 大 中 小 发表于 2010-3-1 11:37 只看该作者 冯最好的观点就在于他相信中国有司法正义。 luugoo 拖延心理学:向与生俱来的行为顽症宣战】https://1984bbs.com/viewthread.php?tid=60185 26楼 大 中 小 发表于 2010-3-1 11:51 只看该作者 不论什么目的,根据目前的影响力,他自己表现出对中国司法的信任,会导致更多的人走上信访的道路。 信访根本是死路,他自己不可能不明白,何况他也信访过吧。 firedragoon 自由新闻社驻枫叶国通讯员 27楼 大 中 小 发表于 2010-3-1 12:10 只看该作者 引用: > 原帖由 luugoo 于 2010-3-1 11:51 发表 > ![](https://1984bbs.com/images/common/back.gif) > 不论什么目的,根据目前的影响力,他自己表现出对中国司法的信任,会导致更多的人走上信访的道路。 > 信访根本是死路,他自己不可能不明白,何况他也信访过吧。 信访的确是一条死路,他也正是因为信访维权才被整的 所以说他这是一种斗争策略才更好解释一些吧 至于说他可能误导别人么,也的确有这样的副作用,但反过来说,总不能自己晾着手鼓动别人搞暴力革命吧 左岸←右岸 把你的子宫钉到我的墙上,这样我便会记得你。我们必须走了。明天,明天… 28楼 
大 中 小 发表于 2010-3-1 12:29 只看该作者 引用: > 原帖由 瘦蛆 于 2010-3-1 11:37 发表 > ![](https://www.1984bbs.com/images/common/back.gif) > 冯最好的观点就在于他相信中国有司法正义。 我赞同 只有这样才能走上良性循环之路 柳叶眉 该用户已被删除 29楼 大 中 小 发表于 2010-3-1 12:59 只看该作者 我看二战时美国战俘面对日军也有很多方式。那个啥电影里有这样一个镜头,一个美军男战俘对手握军刀的日本男军官说:“我爱你。”然后好像还亲吻了他的脸颊一下,结果那个日本军官就崩溃了。不知这是编剧杜撰还是确有其事。我觉得挺有心理依据的。也算是人性唤起的一种方式吧。 [ 本帖最后由 柳叶眉 于 2010-3-1 14:15 编辑 ] 废种豆豉 死胎 @vanlulnav http://www.bullock.cn/blogs/vanlulnav/ 30楼 大 中 小 发表于 2010-3-1 13:29 只看该作者 回复 29楼 柳叶眉 的话题 呃....這樣一吻很冒險耶 如果崩潰了 就手一揮 覺著噁心的死 劈掉俘虜的頭不就完了? 若是吻了黑幫老大 應必或許碎屍萬段罷會.... 透露社记者 31楼 大 中 小 发表于 2010-3-1 14:07 只看该作者 引用: > 原帖由 柳叶眉 于 2010-3-1 12:59 发表 ![](https://1984bbs.com/images/common/back.gif) > > 我看二战时美国战俘面对日军也有很多方式。那个啥电影里有这样一个镜头,一个美军男战俘对手握军刀的日本男军官说:“我爱你。”然后好像还亲吻了他一下,结果那个日本军官就崩溃了。不知这是编剧杜撰还是确有其事。 > ... 日本军阀——有可能 德国纳粹——很有可能 共党党徒——无可能 中共党徒——连耶稣也难有信心 柳叶眉 该用户已被删除 32楼 大 中 小 发表于 2010-3-1 14:13 只看该作者 回复 30楼 废种豆豉 的话题 咳咳。我的意思是说面对暴力政权,每个人都可以去探索更好的斗争和自保的方式。不必太拘泥。那个美国战俘也可能是个严谨的清教徒什么的,总之,这时候我很羡慕美国有那么多元的文化。 花想容 依据用户管理细则,账号永久停用。 33楼 大 中 小 发表于 2010-3-1 14:18 只看该作者 引用: > 原帖由 柳叶眉 于 2010-3-1 09:04 发表 ![](https://1984bbs.com/images/common/back.gif) > > 一般上海人总想给人好精明好文明的印象,但是沈国良同志好像例外:好蠢好蛮横,一点腔调都没有的。难道上海人都不愿意当警察,都是瘪三去做? 这就是瘪三的精明啊,他要向上司交差的。 不要忘记上海同时是一个有着大量瘪三的城市,跟别的地方不一样的是,这些瘪三根本不相信共产党,自己也知道自己是邪恶的,所以比其他地方的打手更为凶残可恶。上次杨大侠震慑了他们一下,还是远远不够。 柳叶眉 该用户已被删除 34楼 大 中 小 发表于 2010-3-1 14:19 只看该作者 引用: > 原帖由 透露社记者 于 2010-3-1 14:07 发表 > ![](https://1984bbs.com/images/common/back.gif) > > > 日本军阀——有可能 > 德国纳粹——很有可能 > 共党党徒——无可能 > 中共党徒——连耶稣也难有信心 你意思就是说土共根本就毫无人性了。话说这个问题我确实也很好奇。像毛、邓这样的人到底还有没有残留一点人性呢? 
花想容 依据用户管理细则,账号永久停用。 35楼 大 中 小 发表于 2010-3-1 14:20 只看该作者 引用: > 原帖由 透露社记者 于 2010-3-1 14:07 发表 > ![](https://1984bbs.com/images/common/back.gif) > > 日本军阀——有可能 > 德国纳粹——很有可能 > 共党党徒——无可能 > 中共党徒——连耶稣也难有信心 最后一种动物是人渣,不可能感化的。只有消灭一途 柳叶眉 该用户已被删除 36楼 大 中 小 发表于 2010-3-1 14:44 只看该作者 回复 33楼 花想容 的话题 土共一直用黑社会的手段对付百姓,这是让人无法原谅的。 单手扶墙 活了几十年年,没能为党为人民做点什么,每思及此,心神不宁。 37楼 大 中 小 发表于 2010-3-1 14:50 只看该作者 姜还是老的辣。。冯正虎这么写只能让国宝沈国良更难堪。。 花想容 依据用户管理细则,账号永久停用。 38楼 大 中 小 发表于 2010-3-1 14:53 只看该作者 回复 36楼 柳叶眉 的话题 其实这样说辱没了黑社会,至少是1949年前的黑社会,历史上盗亦有道,是不会凶残到TG这样而且竭泽而渔的,兔子还不吃窝边草呢。 花想容 依据用户管理细则,账号永久停用。 39楼 大 中 小 发表于 2010-3-1 14:55 只看该作者 引用: > 原帖由 luugoo 于 2010-3-1 11:06 发表 > ![](https://1984bbs.com/images/common/back.gif) > > 宁愿相信这是他的策略。。 肯定是策略啦,老虎这把年纪,中国社会是个什么样子,他不比我们理解得深入得多才怪 nostoryboy SCV 40楼 大 中 小 发表于 2010-3-1 16:12 只看该作者 估计是有人指示的,万一真撞死了,沈国良可以随便找个借口进精神病院颐养天年,MLGB的 george 思想罪在逃犯 大洋之声轮值DJ 41楼 大 中 小 发表于 2010-3-1 18:35 只看该作者 引用: > 原帖由 柳叶眉 于 2010-3-1 14:19 发表 ![](https://1984bbs.com/images/common/back.gif) > > 你意思就是说土共根本就毫无人性了。话说这个问题我确实也很好奇。像毛、邓这样的人到底还有没有残留一点人性呢? 我们当然可以把土共说成是完全毫无一丁点人性的,把他们形容成面目狰狞的非人的魔鬼,显示出与我们的截然不同。这当然有助于激起对他们的仇恨,产生更多他们的仇敌。。。。。。但我认为这是和我对于人 - 包括土共 - 的通常理解背道而驰的,并且也无助于给这个国家带来正义,以及促成人性的恢复。 arewhoarewho 42楼 大 中 小 发表于 2010-3-1 18:40 只看该作者 引用: > 原帖由 柳叶眉 于 2010-3-1 09:04 发表 ![](https://1984bbs.com/images/common/back.gif) > 一般上海人总想给人好精明好文明的印象,但是沈国良同志好像例外:好蠢好蛮横,一点腔调都没有的。难道上海人都不愿意当警察,都是瘪三去做? 1、一般上海人总想给人好精明好文明的印象,但是沈国良同志好像例外:好蠢好蛮横,一点腔调都没有的 A请勿将上海人与公务员划等号 B上海公务员本就蛮横,至少我经历的3次报案都感觉我自己受害者变成了被当作肇事者审,为什么呢,因为他们认为多一事不如少一事,只要不出人命闹到上面,根本就不会理你 2、难道上海人都不愿意当警察,都是瘪三去做? 事实就是如此,我比较肤浅,瘪三不至于 至少初中觉得自己进高中没戏,会选择去警校 至少三校生毕业,找不到工作,会选择去考试当交警等等 rche 43楼 大 中 小 发表于 2010-3-1 19:07 只看该作者 支持人肉! 
废种豆豉 死胎 @vanlulnav http://www.bullock.cn/blogs/vanlulnav/ 44楼 大 中 小 发表于 2010-3-1 19:07 只看该作者 回复 32楼 柳叶眉 的话题 呃..俺不過逢場作戲開條玩笑罷了 拋開或許的現實或許的效果 耶穌俺是最佩服 面對惡棍時的大度與寬襟 不是所謂一兩句迂腐無能所以概括 並且不能概括 也許他不是這樣子 但這樣子足夠使我敬服 以眼還眼於改善世途毫無裨助 我或們的評斷和所評斷的對象無干的 不是對應相需 換句話不說 他有作惡的自由 別他也有面著惡時的寬宥不計較的自由 作出如許或那麼的舉動或想法 這種作出與作毫無錯誤 跟他本人或本存在無干 也可描繪成有關並且罪該萬死 但評論或討伐的對象是他的所作 作為物什充當談資 談資跟他如何抉擇如何胡思亂想沒有套入的干係 即系他的自由不是談說所得 談說的僅僅談說他的自由 沒有干涉或牽扯的傾向 萬自由而言 本無所謂基準可言 無人性無道德無法律無公義之類 僅僅僅僅事件而已 評論也不過事件 評論評論也是 對所謂惡懲戒或反對壓迫也是 這樣一來 很混雜 很虛無 因為不需要言說 不需要闡釋 但可以萬種言說 萬種闡釋 沒有所謂一準 即系 無話與亂說 你的意思很明白 俺知道 也不必拘泥我的玩笑 [ 本帖最后由 废种豆豉 于 2010-3-1 19:08 编辑 ] luugoo 拖延心理学:向与生俱来的行为顽症宣战】https://1984bbs.com/viewthread.php?tid=60185 45楼 大 中 小 发表于 2010-3-1 20:58 只看该作者 mranti: RT @wangjinbo: 旅居澳洲的社会活动家、独立中文笔会前秘书长张小刚给沈国良打通了电话21-220-43166。这家伙气焰还很嚣张,说是冯正虎拦他的车子,妨碍司法 ガンダム 46楼 大 中 小 发表于 2010-3-2 00:31 只看该作者 咳! 国宝到底长什么样子呀! 有没有CAI和FBI帅呢! 海风 个人撒谎叫欺诈,组织撒谎叫先进文化。 47楼 大 中 小 发表于 2010-3-2 10:38 只看该作者 虎落松江遭犬欺。 花想容 依据用户管理细则,账号永久停用。 48楼 大 中 小 发表于 2010-3-2 10:53 只看该作者 回复 47楼 海风 的话题 松江?老虎貌似住在五角场附近
7.917946
241
0.684336
yue_Hant
0.918773
28f941ec0e329adae67ceb38583fb861ebe8a1e6
16,740
md
Markdown
articles/storage/blobs/point-in-time-restore-manage.md
marcobrunodev/azure-docs.pt-br
0fff07f85663724745ac15ce05b4570890d108d9
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/blobs/point-in-time-restore-manage.md
marcobrunodev/azure-docs.pt-br
0fff07f85663724745ac15ce05b4570890d108d9
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/storage/blobs/point-in-time-restore-manage.md
marcobrunodev/azure-docs.pt-br
0fff07f85663724745ac15ce05b4570890d108d9
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Executar uma restauração pontual em dados de blob de blocos titleSuffix: Azure Storage description: Saiba como usar a restauração pontual para restaurar um conjunto de blobs de blocos para seu estado anterior em um determinado momento. services: storage author: tamram ms.service: storage ms.topic: how-to ms.date: 09/23/2020 ms.author: tamram ms.subservice: blobs ms.openlocfilehash: 2350177373bc99907c437d814d8f01193f18f3fd ms.sourcegitcommit: a43a59e44c14d349d597c3d2fd2bc779989c71d7 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 11/25/2020 ms.locfileid: "95895716" --- # <a name="perform-a-point-in-time-restore-on-block-blob-data"></a>Executar uma restauração pontual em dados de blob de blocos Você pode usar a restauração pontual para restaurar um ou mais conjuntos de blobs de blocos para um estado anterior. Este artigo descreve como habilitar a restauração pontual para uma conta de armazenamento e como executar uma operação de restauração. Para saber mais sobre a restauração pontual, consulte [restauração pontual para BLOBs de blocos](point-in-time-restore-overview.md). > [!CAUTION] > A restauração pontual dá suporte a operações de restauração somente em blobs de blocos. Não é possível restaurar operações em contêineres. Se você excluir um contêiner da conta de armazenamento chamando a operação [excluir contêiner](/rest/api/storageservices/delete-container) , esse contêiner não poderá ser restaurado com uma operação de restauração. Em vez de excluir um contêiner, exclua BLOBs individuais se você quiser restaurá-los. ## <a name="enable-and-configure-point-in-time-restore"></a>Habilitar e configurar a restauração pontual Antes de habilitar e configurar a restauração pontual, habilite seus pré-requisitos para a conta de armazenamento: exclusão reversível, feed de alteração e controle de versão de BLOB. 
For more information about these features, see these articles:

- [Enable soft delete for blobs](./soft-delete-blob-enable.md)
- [Enable and disable the change feed](storage-blob-change-feed.md#enable-and-disable-the-change-feed)
- [Enable and manage blob versioning](versioning-enable.md)

> [!IMPORTANT]
> Enabling soft delete, change feed, and blob versioning may result in additional charges. For more information, see [Soft delete for blobs](soft-delete-blob-overview.md), [Change feed support in Azure Blob Storage](storage-blob-change-feed.md), and [Blob versioning](versioning-overview.md).

# <a name="azure-portal"></a>[Azure portal](#tab/portal)

To configure point-in-time restore with the Azure portal, follow these steps:

1. Navigate to your storage account in the Azure portal.
1. Under **Settings**, choose **Data protection**.
1. Select **Turn on point-in-time restore**. When you select this option, soft delete for blobs, versioning, and change feed are also enabled.
1. Set the maximum restore point for point-in-time restore, in days. This number must be at least one day less than the retention period specified for blob soft delete.
1. Save your changes.

The following image shows a storage account configured for point-in-time restore with a restore point of up to seven days in the past and a retention period for blob soft delete of 14 days.

:::image type="content" source="media/point-in-time-restore-manage/configure-point-in-time-restore-portal.png" alt-text="Screenshot showing how to configure point-in-time restore in the Azure portal":::

# <a name="powershell"></a>[PowerShell](#tab/powershell)

To configure point-in-time restore with PowerShell, first install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.6.0 or later. Then call the Enable-AzStorageBlobRestorePolicy command to enable point-in-time restore for the storage account.

The following example enables soft delete and sets the soft delete retention period, enables change feed and versioning, and then enables point-in-time restore. Remember to replace the values in angle brackets with your own values when you run the example:

```powershell
# Sign in to your Azure account.
Connect-AzAccount

# Set resource group and account variables.
$rgName = "<resource-group>"
$accountName = "<storage-account>"

# Enable soft delete with a retention of 14 days.
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -RetentionDays 14

# Enable change feed and versioning.
Update-AzStorageBlobServiceProperty -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -EnableChangeFeed $true `
    -IsVersioningEnabled $true

# Enable point-in-time restore with a retention period of 7 days.
# The retention period for point-in-time restore must be at least
# one day less than that set for soft delete.
Enable-AzStorageBlobRestorePolicy -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -RestoreDays 7

# View the service settings.
Get-AzStorageBlobServiceProperty -ResourceGroupName $rgName `
    -StorageAccountName $accountName
```

---

## <a name="perform-a-restore-operation"></a>Perform a restore operation

When you perform a restore operation, you must specify the restore point as a UTC **DateTime** value. Containers and blobs are restored to their state at that date and time. The restore operation may take several minutes to complete.

You can restore all containers in the storage account, or you can restore a range of blobs in one or more containers.
A range of blobs is defined lexicographically, meaning in dictionary order. Up to 10 lexicographical ranges are supported per restore operation. The start of the range is inclusive, and the end of the range is exclusive.

The container pattern specified for the start and end of a range must include a minimum of three characters. The forward slash (/) that is used to separate a container name from a blob name does not count toward this minimum.

Wildcard characters are not supported in a lexicographical range. Any wildcard characters are treated as standard characters.

You can restore blobs in the `$root` and `$web` containers by explicitly specifying them in a range passed to a restore operation. The `$root` and `$web` containers are restored only if they are explicitly specified. Other system containers cannot be restored.

Only block blobs are restored. Page blobs and append blobs are not included in a restore operation. For more information about limitations related to append blobs, see [Point-in-time restore for block blobs](point-in-time-restore-overview.md).

> [!IMPORTANT]
> While a restore operation is underway, Azure Storage blocks data operations on the blobs in the ranges being restored. Read, write, and delete operations are blocked in the primary location. For this reason, operations such as listing containers in the Azure portal may not perform as expected while the restore operation is in progress.
>
> Read operations from the secondary location may proceed during the restore operation if the storage account is geo-replicated.
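To build intuition for these inclusive-start, exclusive-end lexicographical ranges, the following small Python sketch (not part of any Azure tooling; the function and sample names are invented for illustration) filters blob paths the same way plain string comparison orders them:

```python
def in_restore_range(name: str, start: str, end: str) -> bool:
    """True if a container/blob path falls in the lexicographical
    range [start, end): the start is inclusive, the end exclusive."""
    return start <= name < end

# Dictionary ordering means "blob11" and "blob100" sort before "blob2".
names = ["container2/blob1", "container2/blob100", "container2/blob2",
         "container2/blob4", "container2/blob5"]
restored = [n for n in names
            if in_restore_range(n, "container2/blob1", "container2/blob5")]
```

Note that `container2/blob5` itself is excluded, because the end of the range is exclusive, exactly as described for the portal example above.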
### <a name="restore-all-containers-in-the-account"></a>Restore all containers in the account

You can restore all containers in the storage account to return them to their previous state at a given point in time.

# <a name="azure-portal"></a>[Azure portal](#tab/portal)

To restore all containers and blobs in the storage account with the Azure portal, follow these steps:

1. Navigate to the list of containers for your storage account.
1. On the toolbar, choose **Restore containers**, then **Restore all**.
1. In the **Restore all containers** pane, specify the restore point by providing a date and time.
1. Confirm that you want to proceed by checking the box.
1. Select **Restore** to begin the restore operation.

:::image type="content" source="media/point-in-time-restore-manage/restore-all-containers-portal.png" alt-text="Screenshot showing how to restore all containers to a specified restore point":::

# <a name="powershell"></a>[PowerShell](#tab/powershell)

To restore all containers and blobs in the storage account with PowerShell, call the **Restore-AzStorageBlobRange** command. By default, the **Restore-AzStorageBlobRange** command runs asynchronously and returns an object of type **PSBlobRestoreStatus** that you can use to check the status of the restore operation.

The following example asynchronously restores the containers in the storage account to their state 12 hours before the present moment, then checks some of the properties of the restore operation:

```powershell
# Specify -TimeToRestore as a UTC value
$restoreOperation = Restore-AzStorageBlobRange -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -TimeToRestore (Get-Date).AddHours(-12)

# Get the status of the restore operation.
$restoreOperation.Status

# Get the ID for the restore operation.
$restoreOperation.RestoreId

# Get the restore point in UTC time.
$restoreOperation.Parameters.TimeToRestore
```

To run the restore operation synchronously, include the **-WaitForComplete** parameter on the command. When the **-WaitForComplete** parameter is present, PowerShell displays a message that includes the restore ID for the operation, then blocks execution until the restore operation is complete. Keep in mind that the time required by a restore operation depends on the amount of data being restored, and that a large restore operation may take up to an hour to complete.

```powershell
Restore-AzStorageBlobRange -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -TimeToRestore (Get-Date).AddHours(-12) -WaitForComplete
```

---

### <a name="restore-ranges-of-block-blobs"></a>Restore ranges of block blobs

You can restore one or more lexicographical ranges of blobs within a single container or across multiple containers to return those blobs to their previous state at a given point in time.

# <a name="azure-portal"></a>[Azure portal](#tab/portal)

To restore a range of blobs in one or more containers with the Azure portal, follow these steps:

1. Navigate to the list of containers for your storage account.
1. Select the container or containers to restore.
1. On the toolbar, choose **Restore containers**, then **Restore selected**.
1. In the **Restore selected containers** pane, specify the restore point by providing a date and time.
1. Specify the ranges to restore. Use a forward slash (/) to separate the container name from the blob prefix.
1. By default, the **Restore selected containers** pane specifies a range that includes all blobs in the container. Delete this range if you do not want to restore the entire container. The default range is shown in the following image.
    :::image type="content" source="media/point-in-time-restore-manage/delete-default-blob-range.png" alt-text="Screenshot showing the default blob range to delete before specifying a custom range":::

1. Confirm that you want to proceed by checking the box.
1. Select **Restore** to begin the restore operation.

The following image shows a restore operation on a set of ranges.

:::image type="content" source="media/point-in-time-restore-manage/restore-multiple-container-ranges-portal.png" alt-text="Screenshot showing how to restore ranges of blobs in one or more containers":::

The restore operation shown in the image performs the following actions:

- Restores the complete contents of *container1*.
- Restores blobs in the lexicographical range *blob1* through *blob5* in *container2*. This range restores blobs with names such as *blob1*, *blob11*, *blob100*, *blob2*, and so on. Because the end of the range is exclusive, it restores blobs whose names begin with *blob4*, but does not restore blobs whose names begin with *blob5*.
- Restores all blobs in *container3* and *container4*. Because the end of the range is exclusive, this range does not restore *container5*.

# <a name="powershell"></a>[PowerShell](#tab/powershell)

To restore a single range of blobs, call the **Restore-AzStorageBlobRange** command, specifying a lexicographical range of container and blob names for the `-BlobRestoreRange` parameter. For example, to restore the blobs in a single container named *container1*, you can specify a range that starts with *container1* and ends with *container2*. There is no requirement that the containers named in the start and end ranges exist.
Because the end of the range is exclusive, even if the storage account includes a container named *container2*, only the container named *container1* is restored:

```powershell
$range = New-AzStorageBlobRangeToRestore -StartRange container1 `
    -EndRange container2
```

To specify a subset of blobs in a container to restore, use a forward slash (/) to separate the container name from the blob prefix pattern. For example, the following range selects blobs in a single container whose names begin with the letters *d* through *f*:

```powershell
$range = New-AzStorageBlobRangeToRestore -StartRange container1/d `
    -EndRange container1/g
```

Next, provide the range to the **Restore-AzStorageBlobRange** command. Specify the restore point by providing a UTC **DateTime** value for the `-TimeToRestore` parameter. The following example restores blobs in the specified range to their state 3 days before the present moment:

```powershell
# Specify -TimeToRestore as a UTC value
Restore-AzStorageBlobRange -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -BlobRestoreRange $range `
    -TimeToRestore (Get-Date).AddDays(-3)
```

By default, the **Restore-AzStorageBlobRange** command runs asynchronously. When you start a restore operation asynchronously, PowerShell immediately displays a table of properties for the operation:

```powershell
Status     RestoreId                            FailureReason Parameters.TimeToRestore     Parameters.BlobRanges
------     ---------                            ------------- ------------------------     ---------------------
InProgress 459c2305-d14a-4394-b02c-48300b368c63               2020-09-15T23:23:07.1490859Z ["container1/d" -> "container1/g"]
```

To restore multiple ranges of block blobs, specify an array of ranges for the `-BlobRestoreRange` parameter.
The following example specifies two ranges to restore the complete contents of *container1* and *container4* to their state 24 hours earlier, and saves the result to a variable:

```powershell
# Specify a range that includes the complete contents of container1.
$range1 = New-AzStorageBlobRangeToRestore -StartRange container1 `
    -EndRange container2
# Specify a range that includes the complete contents of container4.
$range2 = New-AzStorageBlobRangeToRestore -StartRange container4 `
    -EndRange container5

$restoreOperation = Restore-AzStorageBlobRange -ResourceGroupName $rgName `
    -StorageAccountName $accountName `
    -TimeToRestore (Get-Date).AddHours(-24) `
    -BlobRestoreRange @($range1, $range2)

# Get the status of the restore operation.
$restoreOperation.Status

# Get the ID for the restore operation.
$restoreOperation.RestoreId

# Get the blob ranges specified for the operation.
$restoreOperation.Parameters.BlobRanges
```

To run the restore operation synchronously and block execution until it completes, include the **-WaitForComplete** parameter on the command.

---

## <a name="next-steps"></a>Next steps

- [Point-in-time restore for block blobs](point-in-time-restore-overview.md)
- [Soft delete](./soft-delete-blob-overview.md)
- [Change feed](storage-blob-change-feed.md)
- [Blob versioning](versioning-overview.md)
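As a final sanity check before automating restores, the retention relationship described earlier in this article (the point-in-time restore window must be at least one day shorter than the blob soft-delete retention period) can be validated in a few lines. This is an illustrative Python helper, not part of any Azure SDK:

```python
def valid_restore_policy(restore_days: int, soft_delete_days: int) -> bool:
    """Point-in-time restore retention must be positive and at least
    one day shorter than the blob soft-delete retention period."""
    return 0 < restore_days <= soft_delete_days - 1

checks = [valid_restore_policy(7, 14),   # the article's example values
          valid_restore_policy(14, 14),  # equal retention: not allowed
          valid_restore_policy(13, 14)]  # exactly one day less: allowed
```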
66.166008
646
0.785006
por_Latn
0.997471
28f957b9b6c929832460b093477e2032a8f1a467
943
md
Markdown
_posts/2015-02-06-TheFirstBadMan.md
austinmoody/blog-melange
02f831c787c2972f3df86d55828bd4190c42b459
[ "MIT" ]
null
null
null
_posts/2015-02-06-TheFirstBadMan.md
austinmoody/blog-melange
02f831c787c2972f3df86d55828bd4190c42b459
[ "MIT" ]
null
null
null
_posts/2015-02-06-TheFirstBadMan.md
austinmoody/blog-melange
02f831c787c2972f3df86d55828bd4190c42b459
[ "MIT" ]
null
null
null
---
layout: post
title: The First Bad Man
date: 2015-01-28 22:58:58
summary: Today I finished reading The First Bad Man by Miranda July
categories: books
---

![](http://austinmoody.org/i/melange_firstbadman_2015-02-08-093822.png)

Today I finished reading *[The First Bad Man](http://www.amazon.com/The-First-Bad-Man-Novel/dp/1439172560)* by [Miranda July](http://en.wikipedia.org/wiki/Miranda_July). It seems to be the hot hipster book of the moment, and I gave in.

I was familiar with Miranda July mainly because of a couple of things:

* The movie *[Me And You And Everyone We Know](http://en.wikipedia.org/wiki/Me_and_You_and_Everyone_We_Know)* &larr; Didn't like it.
* The iOS app *[Somebody](http://somebodyapp.com)* &larr; Cool idea, poorly executed.

But this book was wonderful. One of those books that I couldn't put down because I was dying to know what happened next. Wonderfully kooky and well worth all the hipster praise.
47.15
179
0.755037
eng_Latn
0.932021
28fa6169d197ea547244b6770a7b75082de49f13
64
md
Markdown
README.FORK.md
manwithsteelnerves/google-api-nodejs-client
56405fff3abb1fec74c980219baa56f19608ede0
[ "Apache-2.0" ]
1
2021-03-04T05:33:27.000Z
2021-03-04T05:33:27.000Z
README.FORK.md
manwithsteelnerves/google-api-nodejs-client
56405fff3abb1fec74c980219baa56f19608ede0
[ "Apache-2.0" ]
null
null
null
README.FORK.md
manwithsteelnerves/google-api-nodejs-client
56405fff3abb1fec74c980219baa56f19608ede0
[ "Apache-2.0" ]
null
null
null
**This fork is for building a lightweight Firestore API library**
32
63
0.78125
eng_Latn
0.998795
28fa6dd50f6dd99fc2a7556b19bb110849e7f8d9
1,546
md
Markdown
_posts/2020-09-09-ResNet.md
jujingi/jujingi.github.io
46f4baa842b323952e80a6d8b0f2c1134a7d4164
[ "MIT" ]
2
2020-09-10T08:54:17.000Z
2021-11-02T00:28:00.000Z
_posts/2020-09-09-ResNet.md
jujingi/jujingi.github.io
46f4baa842b323952e80a6d8b0f2c1134a7d4164
[ "MIT" ]
null
null
null
_posts/2020-09-09-ResNet.md
jujingi/jujingi.github.io
46f4baa842b323952e80a6d8b0f2c1134a7d4164
[ "MIT" ]
null
null
null
---
title: "ResNet paper review"
last_modified_at: 2020-09-09 00:00:00 -0400
categories:
  - Image Recognition
tags:
  - update
toc: true
toc_label: "Getting Started"
---

# Deep Residual Learning for Image Recognition

> Kaiming He, et al. "Deep Residual Learning for Image Recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.

# Abstract

The deeper a neural network is, the harder it is to train. Residual learning makes training easier than before. The model achieved a very low error of 3.57% on ImageNet and won the competition.

# Introduction

![initial](https://user-images.githubusercontent.com/53032349/92397719-42524b80-f162-11ea-93b2-cef59272625b.png)

As network depth increases, performance degrades. Residual learning can solve this problem. Residual learning is simply an operation that adds the earlier layer's values back in, and this alone yields excellent generalization performance.

# Deep Residual Learning

Because of the differentiation performed during training, training error can grow as networks deepen; to prevent this, the values from the earlier layers are added back in. 224x224 images are used; BN follows each conv layer; the conv filters are 3x3 with stride 2; the batch size is 256; the learning rate starts at 0.1 and decays gradually; weight decay is 0.0001, momentum is 0.9, and dropout is not used.

# Experiments

![initial](https://user-images.githubusercontent.com/53032349/92397925-92c9a900-f162-11ea-8718-e1f053f4b051.png)
![initial](https://user-images.githubusercontent.com/53032349/92398022-bf7dc080-f162-11ea-8d90-f27c19e4f50f.png)

The 34-layer network outperforms the 18-layer one, and the shortcut connections resolve the vanishing problem. Comparing the plain 18-layer network with the 18-layer ResNet shows little difference, because the vanishing problem does not appear in shallow networks. As the network gets deeper, the error rate drops further.

# Code Implementation

ResNet code : https://github.com/cjf8899/Pytorch_ResNet
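As a minimal illustration of the residual idea reviewed above (the block output is F(x) + x), the following plain-Python sketch, which is not taken from the linked PyTorch repository, adds a block's input back onto its transformed output through an identity shortcut:

```python
def relu(v):
    # Elementwise ReLU over a list of floats.
    return [max(0.0, x) for x in v]

def residual_block(x, transform):
    """Identity shortcut: compute F(x), add the input x back, apply ReLU."""
    fx = transform(x)                               # F(x): the learned mapping
    return relu([f + xi for f, xi in zip(fx, x)])   # ReLU(F(x) + x)

# If the learned transform outputs all zeros, the block reduces to ReLU(x):
# the shortcut lets a layer fall back to simply passing its input through.
out = residual_block([1.0, -2.0, 3.0], lambda v: [0.0] * len(v))
```

This is why very deep residual networks remain trainable: a block never has to do worse than the identity mapping.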
24.539683
153
0.771022
kor_Hang
0.99995
28faf39201f87d54e9206fe4cdf34d5dee8b4ad0
5,375
md
Markdown
src/posts/2020-02-23.md
jpolack/blogGatsby
e3ba1172c488c086062debd34a89d55383f4ca38
[ "MIT" ]
null
null
null
src/posts/2020-02-23.md
jpolack/blogGatsby
e3ba1172c488c086062debd34a89d55383f4ca38
[ "MIT" ]
11
2021-02-21T02:53:19.000Z
2022-02-27T10:45:50.000Z
src/posts/2020-02-23.md
jpolack/blog
6e3e06e026c3923eec85a3a7dd68c09212c6a16a
[ "MIT" ]
null
null
null
---
title: "How We Sabotage Ourselves"
description: "How do I become aware of my self-sabotage? And how can I overcome it?"
date: 2020-02-23
tags: ["Sabotage", "Effiziez", "Effektivität", "effektiv", "effizient", "Lösungsraum", "Gesellschaft", "Glauben", "Macht", "Ziele", "Erfolg", "Menschen", "Komplexität", "komplex", "Schaden", "vermeiden", "bewusst", "Bewusstsein"]
---

You're sitting in the cinema. You do **not** like the film at all. What do you do? Do you leave, so you don't waste your lifetime? Do you stay, because you've already paid?

I ask myself this question fairly often. And not because I want to have an answer ready for the next time I'm sitting through a dreadful film at the cinema.

## Efficiency vs. effectiveness

Looking closer, it's a question of efficiency versus effectiveness. Do I want to stick with my choice and make the best possible use of the resources I've already invested (efficiency)? Or would I rather change my mind in the hope of choosing a better alternative (effectiveness)?

There is no blanket answer to this question. But I keep noticing that I unconsciously choose efficiency, then lose myself so deeply in improving that efficiency that I completely forget what the original problem even was. Suddenly I'm improving purely for the sake of improvement and no longer for the sake of solving the problem. I'm so busy that I don't question this decision, or the improvement, any further.

Maybe that's the point where I should take a break from the *hustle*. Maybe I should pause more often and observe: am I actually solving the problem right now? Am I actually moving toward the solution right now?

And if either answer is *no*, perhaps I should question the method. Maybe there are solutions that work better (effectiveness).

Don't get me wrong. Efficiency is great!
But if I'm turning the wrong screw, the right one won't loosen itself. Not even if I turn it especially fast, especially well, or especially elegantly.

So how do I find the best solution?

## The solution space

I call the set of solutions to a given problem or goal the solution space. For the goal of becoming financially free, for example, there are various solutions in the solution space. Some of them are legally or socially accepted, others not. Some may match the usual definition of success, others not. Some inflict suffering on other people, others not. Still, they are all solutions (even if, for this example, somewhat exaggerated ones):

- Inherit great wealth
- Work your way up
- Found a company
- Invest
- Marry rich
- Rob a bank
- Lower your standard of living
- Live in a monastery
- Claim unemployment benefits (Hartz IV) and emigrate to Thailand

So the problem with finding a solution is usually not that there are no solutions, or too few, but rather that we filter out too many solutions until none or too few remain.

### The filters

So the question is not: which solutions exist? But rather: which solutions do I not see? Which solutions do I (perhaps unjustly) filter out? Because those **unjustly filtered-out solutions** may hold just as much treasure as the classic solutions everyone knows. That is exactly this oh-so-magical *out of the box* thinking.

So how do I think *out of the box*? The trouble with these filters is that they usually act unconsciously, filtering out solutions before I even become aware of them. A sensible first step therefore seems to be making these filters conscious. Whether that happens in meditation or in a discussion with friends and acquaintances doesn't matter. Nevertheless, I'd like to go into a few filters I've found for myself:

1. **Belief:** If I believe that something bad will happen, be it karma, misfortune, or anything else, I may rule out solutions
2. **Power:** If I believe that something diminishes my power or influence, I may rule out solutions
3. **Society:** If I believe that something is socially unacceptable or illegal, I may rule out solutions
4. **Success:** If I believe that something makes me unsuccessful, or makes me look unsuccessful, I may rule out solutions
5. **People:** If I believe my solution inflicts suffering on people, I may rule out solutions

## And then?

We live in a complex world in which there is no giving without taking. The question is therefore not "How can I avoid harm?", because if there is no giving without taking, I cannot avoid harm. The question is much more: "What harm can I take responsibility for?"

Taking responsibility in this context means to me that I decide **consciously**: I **consciously** know what harm arises, and then I say: for this solution, I am willing to accept this harm. **Consciously** matters so much to me here because it presupposes that I have considered all the alternatives. And that in turn presupposes that I allow all the alternatives.

That way, for **this** situation and **this** context, I find the best and, above all, the most effective solution.
80.223881
497
0.79814
deu_Latn
0.99952
28fbb47f224d545445f1e38f1f98a6d392ab38b6
3,220
md
Markdown
README.md
asemoon/alwaysOn-digital-frame
1b936783bc9736391cc2560c93c10917b500791c
[ "Apache-2.0" ]
1
2015-03-26T08:03:06.000Z
2015-03-26T08:03:06.000Z
README.md
asemoon/alwaysOn-digital-frame
1b936783bc9736391cc2560c93c10917b500791c
[ "Apache-2.0" ]
2
2016-04-08T00:52:11.000Z
2016-04-08T05:09:04.000Z
README.md
asemoon/alwaysOn-digital-frame
1b936783bc9736391cc2560c93c10917b500791c
[ "Apache-2.0" ]
null
null
null
AlwaysOn Digital Frame
========

![AlwaysOn Digital Photo Frame](/images/alwayson_digital_frame_workflow.jpg)

This script slideshows the media residing in a specific folder of a Dropbox account. It was initially targeted to be enlisted on OS boot-up to slideshow the media on the cloud using VLC (so you need VLC installed). It automatically syncs itself with a Dropbox folder at the intervals that the user defines and continuously runs VLC with the media (pictures, videos, and audio). Think of it as a fancy digital photo frame that gets updated automatically!

Settings
========

1. Install the Dropbox Python SDK, either via https://www.dropbox.com/developers/core/sdks/python or, if you are comfortable using easy_install, with `easy_install dropbox`

2. Clone the AlwaysOn Frame repository

```bash
git clone https://github.com/asemoon/alwaysOn-digital-frame.git
```

3. Visit http://www.dropbox.com/developers/apps, create a new Dropbox app, and obtain an app_key and an app_secret. Open alwaysOn.py and set the global variables app_key and app_secret to what you have obtained (they are set to ' ' by default). Run main.py and paste the URL into your browser, follow the instructions, and press allow. Then copy the string that Dropbox provides and paste it at the prompt where it says, "Enter the authorization code here". The script will then give you a user_id and an access_token. Copy those values and paste them as the values of user_id and access_token in alwaysOn.py accordingly. (Note that what you are doing right now is letting a Dropbox account be accessed using the SDK, so you need to be signed in to your Dropbox account.) The good news is you only do this step once!

4. For a smoother slideshow, set the following settings in VLC:

   A) Repeat the playlist (Playback->Repeat all)
   B) In the Video section of the VLC options, check fullscreen
   C) Turn off "show unimportant error messages"

5. Create a file named config.txt where the alwaysOn script resides.
The first line is the location where you want to keep your pictures; the second line is the location of the VLC executable. For instance:

```bash
C:\Users\Mehdi\pix
C:\Program Files (x86)\VideoLAN\VLC\vlc.exe
```

6. In alwaysOn.py, set the global variables for your images folder and the config file path on the cloud (the config file should be named settings.cfg). The default values are:

   pix_root_folder_on_cloud='/pix/'
   settings_path_on_cloud='/'

7. The settings.cfg file on the cloud controls the settings of the slideshow. Here is an example with descriptions:

```bash
[global]
system_on=1 # Whether the system should be on or not
timer_interval=3600 # The interval to check for syncs (in seconds)
slide_show_duration=10 # Duration of the photo slideshow
wipe_pix=0 # Will delete the contents of the pictures folder if set to 1
start_time=01:00 # The start of the time window during which syncing is allowed
end_time=23:00 # The end of the time window during which syncing is allowed
longitude=51.4231 # The longitude to use for obtaining the time
latitude=35.6961 # The latitude to use for obtaining the time
```

All the settings are done once, and then you are all set! Just run main.py from then on.

Nov 2013, Mehdi Karamnejad
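A settings file in the shape shown in step 7 can be read with Python's built-in configparser. The sketch below is illustrative only; the function name, the sample keys, and the inline-comment handling are assumptions, not taken from the project's actual code:

```python
import configparser

# A trimmed-down settings.cfg-style sample (keys mirror the README example).
SAMPLE = """\
[global]
system_on=1
timer_interval=3600 # the sync interval in seconds
slide_show_duration=10
start_time=01:00
end_time=23:00
"""

def load_settings(text):
    """Parse a settings.cfg-style string into typed values (sketch only)."""
    # inline_comment_prefixes lets '# ...' comments follow values on a line.
    cfg = configparser.ConfigParser(inline_comment_prefixes=("#",))
    cfg.read_string(text)
    g = cfg["global"]
    return {
        "system_on": g.getboolean("system_on"),
        "timer_interval": g.getint("timer_interval"),
        "slide_show_duration": g.getint("slide_show_duration"),
        "start_time": g["start_time"],   # times stay as strings here
        "end_time": g["end_time"],
    }

settings = load_settings(SAMPLE)
```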
64.4
697
0.767702
eng_Latn
0.996617
28fbf0158d4b89172ae859b98dfe6f06addb830c
10,691
md
Markdown
articles/search/search-howto-monitor-indexers.md
LucianoLimaBR/azure-docs.pt-br
8e05ae8584aa2620717d69ccb6a2fdfbb46446c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/search/search-howto-monitor-indexers.md
LucianoLimaBR/azure-docs.pt-br
8e05ae8584aa2620717d69ccb6a2fdfbb46446c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/search/search-howto-monitor-indexers.md
LucianoLimaBR/azure-docs.pt-br
8e05ae8584aa2620717d69ccb6a2fdfbb46446c7
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: How to monitor indexer status and results
titleSuffix: Azure Cognitive Search
description: Monitor the status, progress, and results of Azure Cognitive Search indexers in the Azure portal, using the REST API, or using the .NET SDK.
manager: nitinme
author: HeidiSteen
ms.author: heidist
ms.devlang: rest-api
ms.service: cognitive-search
ms.topic: conceptual
ms.date: 11/04/2019
ms.openlocfilehash: c7f688c96576f660795becaf318c3b0677a24542
ms.sourcegitcommit: b050c7e5133badd131e46cab144dd5860ae8a98e
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/23/2019
ms.locfileid: "72793797"
---
# <a name="how-to-monitor-azure-cognitive-search-indexer-status-and-results"></a>How to monitor Azure Cognitive Search indexer status and results

Azure Cognitive Search provides status and monitoring information about the current and historical runs of all indexers. Indexer monitoring is useful when you want to:

* Track the progress of an indexer during an in-progress run.
* Review the results of an in-progress or previous indexer run.
* Identify top-level indexer errors, as well as errors or warnings about individual documents being indexed.

## <a name="get-status-and-history"></a>Get status and history

You can access indexer monitoring information in several ways, including:

* In the [Azure portal](#portal)
* Using the [REST API](#restapi)
* Using the [.NET SDK](#dotnetsdk)

The available indexer monitoring information includes all of the following (although the data formats differ depending on the access method used):

* Status information about the indexer itself
* Information about the most recent indexer run, including its status, start and end times, and detailed errors and warnings.
* A list of historical indexer runs and their statuses, results, errors, and warnings.
Indexers that process large volumes of data can take a long time to run. For example, indexers that handle millions of source documents can run for 24 hours, then restart almost immediately. The status of high-volume indexers may always say **In progress** in the portal. Even while an indexer is running, details are available about in-progress as well as past runs.

<a name="portal"></a>

## <a name="monitor-using-the-portal"></a>Monitor using the portal

You can see the current status of all your indexers in the **Indexers** list on the search service overview page.

![Indexers list](media/search-monitor-indexers/indexers-list.png "Indexers list")

While an indexer is running, its status in the list shows **In progress**, and the **Docs succeeded** value shows the number of documents processed so far. It may take a few minutes for the portal to refresh indexer status values and document counts.

An indexer whose most recent run succeeded shows **Success**. An indexer run can succeed even if individual documents have errors, as long as the number of errors is less than the indexer's **Max failed items** setting.

If the most recent run ended with an error, the status shows **Failed**. A status of **Reset** means that the indexer's change-tracking state was reset.

Click an indexer in the list to see more details about the indexer's current and recent runs.

![Indexer summary and execution history](media/search-monitor-indexers/indexer-summary.png "Indexer summary and execution history")

The **Indexer summary** chart displays a graph of the number of documents processed in the indexer's most recent runs. The **Execution details** list shows up to 50 of the most recent execution results.
Click an execution result in the list to see specific information about that run. This includes its start and end times, and any errors and warnings that occurred.

![Indexer execution details](media/search-monitor-indexers/indexer-execution.png "Indexer execution details")

If there were document-specific problems during the run, they are listed in the errors and warnings fields.

![Indexer details with errors](media/search-monitor-indexers/indexer-execution-error.png "Indexer details with errors")

Warnings are common with some types of indexers, and do not always indicate a problem. For example, indexers that use Cognitive Services can report warnings when image or PDF files contain no text to process. For more information about investigating indexer errors and warnings, see [Troubleshooting common indexer issues in Azure Cognitive Search](search-indexer-troubleshooting.md).

<a name="restapi"></a>

## <a name="monitor-using-rest-apis"></a>Monitor using REST APIs

You can retrieve an indexer's status and execution history using the [Get Indexer Status command](https://docs.microsoft.com/rest/api/searchservice/get-indexer-status):

    GET https://[service name].search.windows.net/indexers/[indexer name]/status?api-version=2019-05-06
    api-key: [Search service admin key]

The response contains the indexer's overall status, the last (or in-progress) indexer invocation, and the history of recent indexer invocations.
    {
        "status":"running",
        "lastResult": {
            "status":"success",
            "errorMessage":null,
            "startTime":"2018-11-26T03:37:18.853Z",
            "endTime":"2018-11-26T03:37:19.012Z",
            "errors":[],
            "itemsProcessed":11,
            "itemsFailed":0,
            "initialTrackingState":null,
            "finalTrackingState":null
        },
        "executionHistory":[
            {
                "status":"success",
                "errorMessage":null,
                "startTime":"2018-11-26T03:37:18.853Z",
                "endTime":"2018-11-26T03:37:19.012Z",
                "errors":[],
                "itemsProcessed":11,
                "itemsFailed":0,
                "initialTrackingState":null,
                "finalTrackingState":null
            }]
    }

The execution history contains up to the 50 most recent runs, sorted in reverse chronological order (most recent first).

Note that there are two different status values. The top-level status is for the indexer itself. An indexer status of **running** means the indexer is set up correctly and available to run, but is not currently running. Each indexer run also has its own status, which indicates whether that specific run is in progress (**running**), or already completed with a status of **success**, **transientFailure**, or **persistentFailure**.

When an indexer is reset to refresh its change-tracking state, a separate execution history entry is added with a **reset** status. For more details about status codes and indexer monitoring data, see [Get Indexer Status](https://docs.microsoft.com/rest/api/searchservice/get-indexer-status).

<a name="dotnetsdk"></a>

## <a name="monitor-using-the-net-sdk"></a>Monitor using the .NET SDK

You can retrieve an indexer's status and execution history using the Azure Cognitive Search .NET SDK. The following C# example writes information about an indexer's status and the results of its most recent (or ongoing) run to the console.
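As a sketch of how such a response might be consumed programmatically, the snippet below parses a payload shaped like the example above and flattens the two status levels into one summary. This is illustrative only, not an official client; the field names follow the sample response.

```python
import json

# A response shaped like the Get Indexer Status payload shown above.
SAMPLE = json.loads("""
{
  "status": "running",
  "lastResult": {
    "status": "success",
    "startTime": "2018-11-26T03:37:18.853Z",
    "endTime": "2018-11-26T03:37:19.012Z",
    "errors": [],
    "itemsProcessed": 11,
    "itemsFailed": 0
  },
  "executionHistory": [
    {"status": "success", "itemsProcessed": 11, "itemsFailed": 0}
  ]
}
""")

def summarize_indexer_status(payload):
    """Flatten the two status levels into a short summary dict."""
    last = payload.get("lastResult") or {}
    return {
        "indexer_status": payload["status"],      # status of the indexer itself
        "last_run_status": last.get("status"),    # status of the most recent run
        "items_processed": last.get("itemsProcessed", 0),
        "items_failed": last.get("itemsFailed", 0),
        "runs_recorded": len(payload.get("executionHistory", [])),
    }

print(summarize_indexer_status(SAMPLE))
```

Keeping the two status levels separate in the summary mirrors the API's own distinction between the indexer and its individual runs.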
```csharp
static void CheckIndexerStatus(Indexer indexer, SearchServiceClient searchService)
{
    try
    {
        IndexerExecutionInfo execInfo = searchService.Indexers.GetStatus(indexer.Name);

        Console.WriteLine("Indexer has run {0} times.", execInfo.ExecutionHistory.Count);
        Console.WriteLine("Indexer Status: " + execInfo.Status.ToString());

        IndexerExecutionResult result = execInfo.LastResult;

        Console.WriteLine("Latest run");
        Console.WriteLine("  Run Status: {0}", result.Status.ToString());
        Console.WriteLine("  Total Documents: {0}, Failed: {1}", result.ItemCount, result.FailedItemCount);

        TimeSpan elapsed = result.EndTime.Value - result.StartTime.Value;
        Console.WriteLine("  StartTime: {0:T}, EndTime: {1:T}, Elapsed: {2:t}", result.StartTime.Value, result.EndTime.Value, elapsed);

        string errorMsg = (result.ErrorMessage == null) ? "none" : result.ErrorMessage;
        Console.WriteLine("  ErrorMessage: {0}", errorMsg);
        Console.WriteLine("  Document Errors: {0}, Warnings: {1}\n", result.Errors.Count, result.Warnings.Count);
    }
    catch (Exception e)
    {
        // Handle exception
    }
}
```

The output in the console will look similar to this:

    Indexer has run 18 times.
    Indexer Status: Running
    Latest run
      Run Status: Success
      Total Documents: 7, Failed: 0
      StartTime: 10:02:46 PM, EndTime: 10:02:47 PM, Elapsed: 00:00:01.0990000
      ErrorMessage: none
      Document Errors: 0, Warnings: 0

Note that there are two different status values. The top-level status is the status of the indexer itself. An indexer status of **running** means the indexer is set up correctly and available to run, but is not currently running. Each indexer run also has its own status indicating whether that specific run is in progress (**running**), or already completed with a status such as **success** or **transientFailure**. When an indexer is reset to refresh its change-tracking state, a separate history entry is added with a **reset** status.
For more details about status codes and indexer monitoring information, see [Get Indexer Status](https://docs.microsoft.com/rest/api/searchservice/get-indexer-status) in the REST API. Details about document-specific errors or warnings can be retrieved by enumerating the `IndexerExecutionResult.Errors` and `IndexerExecutionResult.Warnings` lists. For more information about the .NET SDK classes used to monitor indexers, see [IndexerExecutionInfo](https://docs.microsoft.com/dotnet/api/microsoft.azure.search.models.indexerexecutioninfo?view=azure-dotnet) and [IndexerExecutionResult](https://docs.microsoft.com/dotnet/api/microsoft.azure.search.models.indexerexecutionresult?view=azure-dotnet).
57.478495
473
0.755402
por_Latn
0.994711
28fc0ae595c565f426c22b44642e051bf4404e83
32
md
Markdown
README.md
tommyppg/codepolitan-source-code-review
068363ef29bec29dd7ab1102c96262467facbd00
[ "MIT" ]
null
null
null
README.md
tommyppg/codepolitan-source-code-review
068363ef29bec29dd7ab1102c96262467facbd00
[ "MIT" ]
null
null
null
README.md
tommyppg/codepolitan-source-code-review
068363ef29bec29dd7ab1102c96262467facbd00
[ "MIT" ]
null
null
null
# codepolitan-source-code-review
32
32
0.84375
fra_Latn
0.280308
28fc2fd25ad358a136362eeb08f9197cf38d4368
186
md
Markdown
docs/cnrm/1.30.0/servicenetworking-v1beta1.md
xvzf/jsonnet-k8s-crd
128adbb04d086df48f0dcf592a5411059d3dcb4a
[ "Apache-2.0" ]
null
null
null
docs/cnrm/1.30.0/servicenetworking-v1beta1.md
xvzf/jsonnet-k8s-crd
128adbb04d086df48f0dcf592a5411059d3dcb4a
[ "Apache-2.0" ]
null
null
null
docs/cnrm/1.30.0/servicenetworking-v1beta1.md
xvzf/jsonnet-k8s-crd
128adbb04d086df48f0dcf592a5411059d3dcb4a
[ "Apache-2.0" ]
null
null
null
---
permalink: /cnrm/1.30.0/servicenetworking/v1beta1/
---

# package v1beta1

## Subpackages

* [serviceNetworkingConnection](servicenetworking-v1beta1-serviceNetworkingConnection.md)
16.909091
89
0.784946
eng_Latn
0.251032
28fce852bab26bc0b5e1c0f567c8ca33536aaecc
621
md
Markdown
madura-vaadin-touchkit/README.md
RogerParkinson/madura-vaadin-support
79c6eda549dea7533828259756256b00a9ecd03b
[ "Apache-2.0" ]
null
null
null
madura-vaadin-touchkit/README.md
RogerParkinson/madura-vaadin-support
79c6eda549dea7533828259756256b00a9ecd03b
[ "Apache-2.0" ]
1
2020-03-17T06:33:45.000Z
2020-03-17T06:33:45.000Z
madura-vaadin-touchkit/README.md
RogerParkinson/madura-vaadin-support
79c6eda549dea7533828259756256b00a9ecd03b
[ "Apache-2.0" ]
null
null
null
madura-vaadin-touchkit
==

[![Maven Central](https://maven-badges.herokuapp.com/maven-central/nz.co.senanque/madura-vaadin-support/badge.svg)](http://mvnrepository.com/artifact/nz.co.senanque/madura-vaadin-support) [![build_status](https://travis-ci.org/RogerParkinson/madura-vaadin-support.svg?branch=master)](https://travis-ci.org/RogerParkinson/madura-vaadin-support)

Library to support using Madura with Vaadin's Touchkit. This allows you to construct mobile applications that use Vaadin and Madura.

A more detailed document can be found at [Madura Vaadin (PDF)](http://www.madurasoftware.com/madura-vaadin.pdf)
51.75
187
0.792271
yue_Hant
0.198398
28fd1e17ce56178cf0c73160a6544ca2d0e620dc
41
md
Markdown
README.md
jaswanthm/bitrise-wall-iOS-opensource
e5e93993fb4f32a3129025b00681eaff75c5f81b
[ "MIT" ]
1
2021-04-28T11:25:51.000Z
2021-04-28T11:25:51.000Z
README.md
jaswanthm/bitrise-wall-iOS-opensource
e5e93993fb4f32a3129025b00681eaff75c5f81b
[ "MIT" ]
null
null
null
README.md
jaswanthm/bitrise-wall-iOS-opensource
e5e93993fb4f32a3129025b00681eaff75c5f81b
[ "MIT" ]
null
null
null
# Bitrise Wall - iOS

Bitrise iOS client
10.25
20
0.731707
kor_Hang
0.932662
28fda0a965109cd6b59c7f60c845737bdfe9ed60
1,110
md
Markdown
README.md
miekg/radix
e55c99d73a37554ce44fc9378a11977ef38e62cc
[ "BSD-3-Clause" ]
2
2015-03-04T06:33:44.000Z
2019-06-17T21:16:12.000Z
README.md
miekg/radix
e55c99d73a37554ce44fc9378a11977ef38e62cc
[ "BSD-3-Clause" ]
null
null
null
README.md
miekg/radix
e55c99d73a37554ce44fc9378a11977ef38e62cc
[ "BSD-3-Clause" ]
null
null
null
# Radix

An implementation of a radix tree in Go. See

> Donald R. Morrison. "PATRICIA -- practical algorithm to retrieve
> information coded in alphanumeric". Journal of the ACM, 15(4):514-534,
> October 1968

Or the [wikipedia article](http://en.wikipedia.org/wiki/Radix_tree).

## Usage

Get the package:

    $ go get github.com/miekg/radix

Import the package:

    import (
        "github.com/miekg/radix"
    )

You can use the tree as a key-value structure, where every node can have its own value (as shown in the example below), or you can of course just use it to look up strings, like so:

    r := radix.New()
    r.Insert("foo", true)

    x, e := r.Find("foo")
    if e {
        fmt.Printf("foo is contained: %v\n", x.Value)
    }

### Documentation

For full package documentation, visit http://go.pkgdoc.org/github.com/miekg/radix.

## License

This code is licensed under a BSD License:

Copyright (c) 2012 Alexander Willing and Miek Gieben. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
23.617021
82
0.68018
eng_Latn
0.96121
28fe034435f456aedebf95dc0f3e25110798c766
1,209
md
Markdown
docs/zh/cli-client/asset/query-tokens.md
dreamer-zq/irishub
1c474a706e5620af7dbe355777cd1df579c543e6
[ "Apache-2.0" ]
null
null
null
docs/zh/cli-client/asset/query-tokens.md
dreamer-zq/irishub
1c474a706e5620af7dbe355777cd1df579c543e6
[ "Apache-2.0" ]
null
null
null
docs/zh/cli-client/asset/query-tokens.md
dreamer-zq/irishub
1c474a706e5620af7dbe355777cd1df579c543e6
[ "Apache-2.0" ]
null
null
null
# iriscli asset query-tokens

## Description

Query the collection of assets issued on the IRIS Hub chain by criteria.

## Usage

```bash
iriscli asset query-tokens [flags]
```

## Flags

| Name, shorthand | Type | Required | Default | Description |
| ------------------ | ------- | -------- | ------------- | ------------------------------------------------------------ |
| --source | string | false | all | Asset source: native / gateway / external |
| --gateway | string | false | | Unique identifier of the gateway; required when source is gateway |
| --owner | string | false | | Owner of the assets |

## Query rules

- When source is native:
  - gateway is ignored
  - owner is optional
- When source is gateway:
  - gateway is required
  - owner is ignored (because gateway tokens all belong to the gateway's owner)
- When source is external:
  - gateway and owner are both ignored
- When gateway is not empty:
  - source is optional

## Examples

### Query all assets

```bash
iriscli asset query-tokens
```

### Query all native assets

```bash
iriscli asset query-tokens --source=native
```

### Query all assets of the gateway named "cats"

```bash
iriscli asset query-tokens --gateway=cats
```

### Query all assets of a given owner

```bash
iriscli asset query-tokens --owner=<address>
```
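The precedence rules above can be expressed as a small normalization function. This is a hypothetical helper (not part of iriscli); it simply encodes the rules listed in this document.

```python
def normalize_token_query(source=None, gateway=None, owner=None):
    """Apply the query rules: return the (source, gateway, owner) triple
    the query would effectively use, or raise ValueError for invalid input."""
    if source is None and gateway:
        # When gateway is non-empty, source is optional: treat as a gateway query.
        source = "gateway"
    if source in (None, "all"):
        return ("all", None, owner)       # default: query all tokens
    if source == "native":
        return ("native", None, owner)    # gateway ignored, owner optional
    if source == "gateway":
        if not gateway:
            raise ValueError("gateway is required when source is 'gateway'")
        return ("gateway", gateway, None) # owner ignored: gateway tokens belong to the gateway owner
    if source == "external":
        return ("external", None, None)   # gateway and owner both ignored
    raise ValueError(f"unknown source: {source}")
```

For example, `normalize_token_query(source="gateway", gateway="cats", owner="addr1")` drops the owner, matching the rule that gateway tokens all belong to the gateway's owner.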
20.491525
122
0.462366
eng_Latn
0.331255
28ff3516dfd964dd1eda8758d258bc84f090753b
763
md
Markdown
pet-shop-tutorial/README.md
dappsar/uah
3d2ab32436a5a1a0ed05fec0537c681fe3ef1c01
[ "MIT" ]
null
null
null
pet-shop-tutorial/README.md
dappsar/uah
3d2ab32436a5a1a0ed05fec0537c681fe3ef1c01
[ "MIT" ]
null
null
null
pet-shop-tutorial/README.md
dappsar/uah
3d2ab32436a5a1a0ed05fec0537c681fe3ef1c01
[ "MIT" ]
null
null
null
# Pet Shop Tutorial

# Installation

To install the project dependencies, run the following command:

```
npm install
```

# Running in development

You can start the application with the following command:

```
npm run dev
```

# Package

Webpack is used to build the project, via a shortcut in package.json. To build it, simply run the following command:

```
npm run build
```

![Pet Shop Build](images/pet-shop-build.png?raw=true "Pet Shop Build")

This generates the file [build.js](dist/build.js). That is the file to distribute, which you can validate by running:

```
npm start dist/build.js
```

This should show something like:

![Pet Shop Sample](images/pet-shop-sample.png?raw=true "Pet Shop Sample")
18.609756
148
0.736566
spa_Latn
0.982338
28ff5eedf029b2eae54e21864e0d7b6e5429aec3
191
md
Markdown
dev/reference/hooks/postdraft/nl.md
lusty/markdown
4fbfb8964cde4716739f8653a69839aac590940d
[ "MIT" ]
7
2019-06-26T14:03:06.000Z
2021-07-11T03:48:31.000Z
dev/reference/hooks/postdraft/nl.md
lusty/markdown
4fbfb8964cde4716739f8653a69839aac590940d
[ "MIT" ]
76
2019-06-12T07:03:47.000Z
2021-08-15T22:55:57.000Z
dev/reference/hooks/postdraft/nl.md
lusty/markdown
4fbfb8964cde4716739f8653a69839aac590940d
[ "MIT" ]
51
2019-07-02T07:39:40.000Z
2021-11-18T17:11:20.000Z
---
title: postDraft
---

The `postDraft` hook runs just after your pattern is drafted. Your plugin will receive the Pattern object.

<Note>
The `postDraft` hook is rarely used.
</Note>
12.733333
61
0.712042
eng_Latn
0.99755
e900b49cef950dbc4287a6abc400c334ed272fd8
42,294
md
Markdown
challenges/identifying-shared-costs.md
rhoyer-sada/framework
a49fd3ec8adfd2a09db9e674040136ff5d590903
[ "CC-BY-4.0" ]
1
2021-05-26T18:20:03.000Z
2021-05-26T18:20:03.000Z
challenges/identifying-shared-costs.md
robmartin33/framework
88a4dc013c10a4eaf3379bf2303314f1c3c93e2c
[ "CC-BY-4.0" ]
null
null
null
challenges/identifying-shared-costs.md
robmartin33/framework
88a4dc013c10a4eaf3379bf2303314f1c3c93e2c
[ "CC-BY-4.0" ]
null
null
null
---
layout: default
---

*NOTE: This playbook is in a draft format and is not final. It is published to preview for an upcoming member demonstration, but undergoing review by the Technical Advisory Council. Expect more changes to come.*

# A Guide to Spreading Out Shared Costs

Every organization has shared IT costs, and these span multiple business departments. As organizations increase their adoption of public cloud resources, it becomes increasingly difficult to assign shared cloud resources to specific business owners, to understand how to properly and fairly allocate costs, and to accurately forecast future business unit budgets.

## Table of Contents

* [Before you begin](#begin)
* [Relevant FinOps Framework components](#components)
* [Why allocate shared costs?](#why-allocate)
* [Who might care about shared cost allocation?](#who-cares)
* [How to take on this challenge?](#challenge)
* [How do you know you're on the right track?](#kpis)
* [Real world stories](#stories)
* [Acknowledgements](#acknowledgements)

<span id="begin"></span>

## Before you begin

You should understand the basics of how cloud computing works, know the key services of your cloud providers, including their common use cases, and have a basic understanding of billing and pricing models. Being able to describe the basic value proposition of running in the cloud and understanding the core concept of a pay-as-you-go consumption model are also necessary.

You'll also need a base level of knowledge of at least one of the three main public cloud providers (AWS, Azure, Google Cloud). For AWS, we recommend AWS Business Professional training or, even better, the AWS Cloud Practitioner certification. For Google, check out the Google Cloud Platform Fundamentals course. For Azure, try the Azure Fundamentals learning path. Each can usually be completed in a full day workshop.

You should also have a solid understanding of several things within your company.
First, you should know the high-level architecture of the technical systems within your company; be able to identify and understand them, as well as what may be used by more than one team. You should have a foundational understanding of how your Accounting and Finance departments handle IT operations costs, specifically cloud. You should be able to identify and understand your products and their usage internally.

<span id="components"></span>

## Relevant FinOps framework components

To get the most out of this document, please review the following first:

* [Tagging and labeling](https://framework.finops.org/framework/functions/tagging-labeling/)
* [Cost allocation](https://framework.finops.org/framework/capabilities/allocate/)
* [Budgeting and forecasting](https://framework.finops.org/framework/capabilities/forecast)
* [Reserved instances, spot pricing, and savings plans](https://framework.finops.org/framework/capabilities/rate-optimization)
* Enterprise discount programs
* Accounting models
* Invoice and billing reporting
* [Chargeback and showback reporting](https://framework.finops.org/framework/capabilities/report)
* ...and more (please feel free to add or correct any links as a contribution to this playbook)

If you have a strong handle on these subjects, continue on to better understand how to approach this challenge.

### Shared costs that this document covers

* Anything related to cloud: support charges, shared services, etc.
* AWS/GCP Marketplace costs

...and more in future revisions

<span id="why-allocate"></span>

## Why allocate shared costs?

A foundational principle of FinOps is: "everyone takes ownership for their cloud usage." The true key to understanding total cost of ownership is built upon transparency and accuracy, but unallocated shared costs hinder both of these. Without appropriately splitting costs that are shared, engineers and product managers lack a complete picture of how much their products are really costing.
### Let's think of it in terms of a pizza

Imagine Rajesh, Susan, and Marcus decide to have a private pizza party. Marcus books Palatable Pizza for $1,500 - which includes music, a private room, free drinks, and a 90% discount on their mini pizzas - and then everybody pays for their own food, $5 per mini pizza. Rajesh buys 3 ($15), Susan buys 2 ($10), and Marcus buys 1 ($5). After the party, Rajesh and Susan are excited to have another party because "it cost everybody less than $15!"; meanwhile, the whole evening has cost $1,505 for poor Marcus. Because Rajesh and Susan didn't understand that the cost of the venue was foundational to their party, they don't know they grossly overpaid for pizza.

Failing to distribute shared costs and make them visible to the consumer can result in a disconnect between engineering (the consumer) and the financial impacts of their decisions. If costs are visible, consumption and ownership responsibility are aligned, and engineers are supplied with the feedback loop they need to make better decisions.

<span id="who-cares"></span>

## Who might care about shared cost allocation?

### Finance: Controlling

Chris, the leading digital controller from the finance department, supports executives in making better decisions. He loves to have all costs accurately allocated according to cause and to their respective cost centers, with special attention paid to shared costs.

### Business: Program Manager

Sr. Program Manager Stacy is responsible for accurate reporting from all the products of the program she manages. Not having shared costs distributed correctly could create a financial disadvantage for product budgets.

### Business: Product Owner

Platform Quinn is the business owner of a shared platform which enables many product teams to deliver business value quickly, reliably, and securely.
He has an obligation to show back or charge back the costs that product teams have incurred by using his platform, so that he doesn't have to pay for the party on his own.

### Engineering: Software Engineer

Until last month, software engineer Linus had no clue how much of the shared platform costs he was responsible for. Nowadays, cost optimization is simply part of a sprint delivery.

### Engineering: Engineering Manager or Director

Suchandra is responsible for all costs incurred in her team or department, including the portion of shared costs that her team is charged.

<div id="1"></div>
<span id="challenge"></span>

## How to take on this challenge

### Step 1: Identify what kinds of costs are shared

For the purpose of this document, we are primarily concerned with shared cost as the total amount billed to a customer by a cloud provider. That is, a cloud bill is shared at the organizational level and must be allocated for accounting purposes. Shared cost may be allocated to a centralized budget within the technology org; alternatively, it can be allocated to cost centers throughout the business and technology organizations. From a finance perspective, we may also refer to shared cloud costs as a type of direct operating expense.

What classifies as a shared cost can vary from organization to organization, and also depends on the maturity and size of the company itself. However, there is a standard set of costs that generally appear on every company's balance sheet, and it becomes the responsibility of the company to determine whether they should be considered shared or not. The company can even define different types of shared costs -- some shared costs apply to the entire organization, while others may be shared only among the cost centers that use them. In terms of accounting, however, most cases of "shared costs" in the cloud are actually accrued and charged within one account, and it can be challenging to determine which costs should be shared.
Support charges are a typical example of this challenge. Cloud vendor support charges are generally applied at the parent account level. Some organizations choose to cover this charge with a Central IT/Cloud team's budget, but that approach isn't standard. More commonly, a Central IT/Cloud team is considered a supporting organization (being a cost center) and therefore needs to allocate its cost to its customers: business units or application owners.

Modern architecture has also introduced more shared costs with the rise of shared platforms. These platforms have multiple teams working with the same core resources, such as data lakes built in commonly shared S3 buckets or Kubernetes systems running on shared clusters. At first glance, chargeback and showback for these platforms can seem impossible, but proper tagging can help with splitting shared costs correctly.

#### Common types of shared costs are:

* Shared resources (network, shared storage)
* Platform services (k8s, logging, etc.)
* Enterprise-level support
* Enterprise-level discounts
* Licensing, 3rd-party SaaS costs (out of scope for this document, but we hope to add this in a future revision)

<div id="2"></div>

### Step 2: Learn how to split up these costs

There are typically three ways to split up shared costs:

* Proportional: Based on relative percentage of direct costs
* Even split: Split total amount evenly across targets
* Fixed: User-defined coefficient (the sum of coefficients needs to be 100%)

Let's consider an example. The table below shows how much each business unit of an organization consumed in a given month, and how the organization was charged a $12,000 enterprise support charge.
| **Business units** | **Cost** | **% of total (excluding enterprise support)** |
|---------------------------|-------|-------------------------------------------|
| Sales operations | $50K | 50% |
| Engineering—QA | $30K | 30% |
| Engineering | $20K | 20% |
| Enterprise support charge | $12K | |
| Total | $112K | 100% |

Without sharing the enterprise support charge, you can see that:

* Sales operations will be accountable for a total of $50,000.
* Engineering—QA will be accountable for a total of $30,000.
* Engineering will be accountable for a total of $20,000.
* The $12,000 enterprise support charge isn't being distributed among teams and would need to be assigned to a central budget.

#### Proportional Cost Method

Following the proportional sharing model, the enterprise support charge ($12,000) will be distributed among the business units based on the percentage of their raw spend. Mature organizations tend to proportionally distribute shared costs among business units based on their direct charges.

| **Business units** | **Total Cost** | **Shared Cost Allocation** |
|------------------|------------|------------------------|
| Sales operations | $56K | 50% |
| Engineering—QA | $33.6K | 30% |
| Engineering | $22.4K | 20% |
| Total | $112K | 100% |

Sharing the enterprise support charge, you can see that:

* Sales operations will be accountable for a total of $56,000 ($50,000 direct cost + $6,000 support charge allocation).
* Engineering—QA will be accountable for a total of $33,600 ($30,000 direct cost + $3,600 support charge allocation).
* Engineering will be accountable for a total of $22,400 ($20,000 direct cost + $2,400 support charge allocation).
* The enterprise support charge has been distributed and does not impact a central budget.

#### Even Split Model

Under the even split model, the enterprise support charge ($12,000) will be shared evenly by all business units.
Because of its simplicity, this model tends to be more popular in smaller organizations that have fewer business units.

| **Business units** | **Total Cost** | **Shared Cost Allocation** |
|------------------|------------|------------------------|
| Sales operations | $54K | 33.3% |
| Engineering—QA | $34K | 33.3% |
| Engineering | $24K | 33.3% |
| Total | $112K | 100% |

The even split model makes allocation of shared costs simple, but will impact some business units' budgets more than others. In the above example, Engineering is paying 13 percentage points more of the shared cost than under a proportional model:

* Sales operations will be accountable for a total of $54,000 ($50,000 direct cost + $4,000 support charge allocation).
* Engineering—QA will be accountable for a total of $34,000 ($30,000 direct cost + $4,000 support charge allocation).
* Engineering will be accountable for a total of $24,000 ($20,000 direct cost + $4,000 support charge allocation).
* The enterprise support charge has been distributed and does not impact a central budget.

#### Fixed Proportion Method

The fixed proportion method relies on using a set percentage to attribute shared costs month over month. Typically, these ratios have been determined by evaluating past spend and arriving at a fair breakdown for allocating spend. Consider our $12,000 enterprise support charge example using these designated percentages.

| **Business units** | **Total Cost** | **Shared Cost Allocation** |
|------------------|------------|------------------------|
| Sales operations | $55.4K | 45% |
| Engineering—QA | $33.6K | 30% |
| Engineering | $23K | 25% |
| Total | $112K | 100% |

The fixed proportion model attempts to provide a more equitable distribution of shared costs than even split, while leaving allocation easy to calculate. It does rely on analysis of historical data to appropriately weight the allocations, but once that is done it can be a good approach.
* Sales operations will be accountable for a total of $55,400 ($50,000 direct cost + $5,400 support charge allocation).
* Engineering—QA will be accountable for a total of $33,600 ($30,000 direct cost + $3,600 support charge allocation).
* Engineering will be accountable for a total of $23,000 ($20,000 direct cost + $3,000 support charge allocation).

<div id="3"></div>

### Step 3: Apply Shared Cost Models

Combining multiple types of shared costs with multiple approaches to splitting shared costs can quickly become complicated. Not every type of shared cost needs to follow the same splitting method. Some charges, like support fees, might work better under a proportional model, while other costs, like shared resources or dev/test environments, might make more sense under the even split or fixed proportion approach. There is no single best model; a company will need to decide what works best for them, and what makes the most sense based on their budgeting and accounting methodology. Typically, more mature organizations rely on proportional or direct usage-based appropriations, but not always.

Don't become frustrated if you are struggling to determine how to apply shared costs. In the 2021 State of FinOps report, companies across the Crawl, Walk, and Run stages all listed allocating shared costs as the second biggest challenge they are facing. The best approach is usually iterative in nature, and becomes more robust over time. Start with your organization's largest shared costs and determine how to allocate the spend. Work with finance and accounting to develop a process for how these shared costs will show up on budgets. Communicate with the impacted business units so they understand what the change is, and why this change is good for them. A good first approach to splitting costs is to use the even-split or fixed-proportion models because they are easier to manage month over month.
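The three allocation methods described above can be sketched in a few lines of code. This is an illustrative sketch only (the figures come from the worked example: $12K of enterprise support on top of direct spend), not a prescription for how any tool implements allocation.

```python
def proportional_split(direct_costs, shared):
    """Allocate `shared` in proportion to each unit's direct spend."""
    total = sum(direct_costs.values())
    return {unit: cost + shared * cost / total for unit, cost in direct_costs.items()}

def even_split(direct_costs, shared):
    """Split `shared` evenly across all units."""
    per_unit = shared / len(direct_costs)
    return {unit: cost + per_unit for unit, cost in direct_costs.items()}

def fixed_split(direct_costs, shared, coefficients):
    """Allocate `shared` by user-defined coefficients (must sum to 100%)."""
    assert abs(sum(coefficients.values()) - 1.0) < 1e-9, "coefficients must sum to 100%"
    return {unit: cost + shared * coefficients[unit] for unit, cost in direct_costs.items()}

# Direct spend per business unit and the shared enterprise support charge.
direct = {"Sales operations": 50_000, "Engineering-QA": 30_000, "Engineering": 20_000}
support = 12_000

print(proportional_split(direct, support))  # Sales 56000.0, QA 33600.0, Eng 22400.0
print(even_split(direct, support))          # Sales 54000.0, QA 34000.0, Eng 24000.0
```

Swapping the function is all it takes to compare models, which is useful when iterating on which approach fits a given shared cost.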
Remember, if you're splitting $50,000 across 5 business units, the first few months will be difficult no matter how it's allocated. After a couple of months, you can review any impacts the change has made on product decisions, and re-evaluate the approach currently being used.

#### How to apply discounts and credits to shared costs

AWS has switched to a model where enterprise discounts are applied on a per-resource basis, which ameliorates some of the burden of calculating discounts for shared costs. Discounts can be applied by the cloud service provider (CSP) at billing time against the monthly bill, or at an account level where an account hierarchy is defined with the CSP. So there may be no need to discount the shared cost amount that is to be allocated to a cost center. The process may not be the same for GCP and Azure, however.

<div id="4"></div>

### Step 4: Reporting Shared Costs

If you are currently not allocating out 100% of the expense, it is important to bring attention to the unallocated shared costs through reporting. Without proper attention, shared costs can quickly grow from immaterial cost adjustments to extremely material budget overruns. Proper attention to and management of shared costs can be driven by effective reporting. Effective reporting on shared costs will show:

* Trend of the costs over time, to indicate whether they are growing or shrinking
* Actuals vs. budget vs. forecast
* Identification of spend drivers (perhaps the service or group that is creating the spend) - essentially, a 'showback' of who/what is responsible for shared costs

The most important aspect of reporting is that it is reviewed by the teams and stakeholders who can drive the actions to manage the costs. The reporting must make clear the impact to the business and the actions that can be taken to either manage or mitigate the expense, and continue to mature successful financial planning in the cloud space.
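Two of the reporting signals described above, trend over time and the unallocated remainder, reduce to simple ratios. The helper names below are invented for illustration; the example figures reuse the $12,000 support charge from earlier.

```python
def unallocated_share(total_shared, allocated):
    """Fraction of the shared cost not yet assigned to any cost center."""
    return (total_shared - sum(allocated.values())) / total_shared

def month_over_month_trend(history):
    """Relative change between the last two monthly totals."""
    prev, curr = history[-2], history[-1]
    return (curr - prev) / prev

alloc = {"Sales operations": 6_000, "Engineering-QA": 3_600}
print(unallocated_share(12_000, alloc))          # 0.2 -> 20% still unallocated
print(month_over_month_trend([10_000, 12_000]))  # 0.2 -> shared costs grew 20%
```

Surfacing the unallocated share as its own number keeps the remainder visible instead of letting it silently land in a central budget.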
In some cases, our FinOps Foundation members report that even general shared cost reporting can help identify charges that may be overlooked by current enterprise accounting policies.

#### Overcoming challenges to accurate accounting

As cloud services are Moved, Added, Changed or Deleted (MACD) each month, the fixed and one-off costs will vary. Due to MACD activity, tracking payment for shared costs and managing budget changes over the lifetime of the cloud service can be very complicated. Keeping track of these cost changes is handled through bill reconciliation, where Cloud Service Order Reports and Request for Change (RFC) records are used to track the movement of charges. The effective end/start dates are checked against the bill/invoice period to verify whether the change is reflected in the bill amount for the invoice period. If a charge change cannot be mapped to an invoice period, then the Order and RFC records are held over to the next bill period's reconciliation. The latter action may result in credit adjustments/credit notes for overcharged periods.

One challenge to accurate accounting is knowing when to stop with shared costs. It's easy to say something is a shared cost, and soon the bucket of shared costs that need to be split can get pretty large. You should have a scheduled, regular review of shared costs to make sure resources that have been marked shared are still relevant, and that the resources are still needed. Since no one pays directly for these costs, it can be easy for these resources to be left on, because no one business unit feels the impact of their cost. Make sure you regularly challenge the need for those resources.

<div id="5"></div>

### Step 5: Sustaining Your Approach

You've done it. You figured out what your shared costs are, how you want to split them up, and how they get reported, and you have passed your first month where shared costs have been allocated.
Nobody hoisted you onto their shoulders in celebration, but everybody understands the new approach, appreciates the transparency, and is even on board with the process. It can be easy to relax and think the difficult work is over, but process sustainability is an essential part of spreading out shared costs. It's essential to remain vigilant, and to identify areas that can easily cause fairly allocated costs to skew over time.

As you work towards a long-running, sustainable shared cost model, consider a few of these questions:

* If the company has adopted a tagging standard for allocation, what is the process for ensuring that tags are correct?
* How do you handle unallocated shared costs that may pop up due to a lack of appropriate tags?
* What is your review process for when new cost centers or business units are introduced?
* When using a fixed proportion model to be fair, how often are those ratios reviewed?
* Is your data modeling approach sustainable?
* How much documentation do you have for your processes?
* How many people know the system that has been implemented? Is there sufficient shared knowledge so it does not rest in too small a group?

<span id="kpis"></span>

## How do you know you're on the right track?

As mentioned above, the process of spreading out shared costs is iterative, and evolves over time. The goal of splitting these costs is that products have a complete and holistic view of their actual running costs. As the process becomes more mature, confidence in the accuracy of the distribution should grow. Cost centers or teams should have a greater ability to optimize or influence these shared costs.

Tracking is essential to view progress on splitting shared costs. In addition to tracking cost splits, you should also develop KPIs that consider how much of the shared cost is accurately distributed, and not based on the models proposed above. Consider some example KPIs that correspond to the suggested approach above:

#### Shared vs. dedicated cost tagging and labeling coverage in %

*Try this to track success on [Step 1](#1)*

As the classification of shared vs. dedicated costs should result in tagging and labeling, this KPI can track the progress.

#### Shared cost model distribution in %

*Try this to track success on [Step 2](#2)*

Your company will employ different methods of splitting shared costs over time: even split, fixed %, or proportional. To track the maturity of your modeling, you can track the percentage of spend under each approach, and match it against company goals. This approach will result in three or more values that add up to 100%.

#### Applied shared cost distribution on all shared resources in %

*Try this to track success on [Step 3](#3)*

Having cloud resources identified as shared, and having them tagged as such, does not mean they are allocated explicitly and accurately using an appropriate model, so this KPI can track your completeness and maturity.

#### Shared vs. dedicated costs in %

*Try this to track success on [Step 4](#4)*

This can be used for trending, and for tracking whether your shared costs have exploded relative to your dedicated costs. It requires a baseline evaluation before it can be accurately used.

#### Shared cost maturity aggregation of previously detailed KPIs

*Try this to track success on [Step 5](#5)*

An aggregation of the KPIs listed above, this one number gives a holistic view of all the parts necessary in tracking shared cost.

The best way to know if you're on the right track is to evaluate how the process fits in with your organization's overarching approach to cost management. Is the process of splitting shared costs helping to drive innovation and understanding? This is an important consideration to make. While splitting shared costs is in itself a cultural shift, it should help everybody make better decisions about what they're spending and why.
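As a sketch of how the first and fourth KPIs above might be computed, assume each resource record carries a hypothetical `cost_class` tag (the field name is illustrative, not a cloud-provider convention):

```python
def tagging_coverage(resources):
    """KPI: % of resources carrying a shared/dedicated classification tag."""
    tagged = sum(1 for r in resources if r.get("cost_class") in ("shared", "dedicated"))
    return 100.0 * tagged / len(resources)

def shared_vs_dedicated(resources):
    """KPI: shared spend as a % of total classified spend."""
    shared = sum(r["cost"] for r in resources if r.get("cost_class") == "shared")
    total = sum(r["cost"] for r in resources if r.get("cost_class"))
    return 100.0 * shared / total

fleet = [
    {"cost": 50.0, "cost_class": "shared"},
    {"cost": 150.0, "cost_class": "dedicated"},
    {"cost": 25.0},  # untagged: drags the coverage KPI down
]
print(round(tagging_coverage(fleet), 1))  # 66.7
print(shared_vs_dedicated(fleet))         # 25.0
```

Tracking both numbers over time gives the trend view: coverage should climb towards 100%, while the shared-vs-dedicated ratio should stay within whatever baseline your organization establishes.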
We've worked out some milestones that we think make sense at the Crawl, Walk, and Run stages of your journey through shared costs; they may help you to consider how to approach shared costs if you're just beginning. The model is not prescriptive; many companies have implemented walk strategies before crawl, or vice versa. Cost is complex, and different costs have varying impacts depending on your company infrastructure.

<table style="width:100%;">
<thead>
<th style="width:22.5%;"></th>
<th>Crawl</th>
<th>Walk</th>
<th>Run</th>
</thead>
<tbody>
<tr>
<td>Focus</td>
<td>Identifying shared costs; developing a cost split strategy</td>
<td>Applying shared cost models</td>
<td>Refining the shared cost approach; long-term strategy; automation</td>
</tr>
<tr>
<td>Splitting shared cost approach</td>
<td>None or even split</td>
<td>Fixed proportion or proportional</td>
<td>Proportional or multi-pronged</td>
</tr>
<tr>
<td>Shared costs that may be split</td>
<td>Enterprise support charges</td>
<td>Crawl + platform charges</td>
<td>Walk + cloud volume discounts, RIs and SPs</td>
</tr>
<tr>
<td>Challenges</td>
<td>Multiple CSP bill formats; cloud product charges; billing models; tagging; reconciliation</td>
<td>Bill reconciliation; bill reporting; shared cost models; showback reporting; chargeback reporting</td>
<td>Shared cost policy; shared cost design patterns</td>
</tr>
</tbody>
</table>

<span id="stories"></span>

## Real World Stories

Here is a collection of stories from FinOps teams of all shapes, sizes, cloud utilization levels, and FinOps maturity. Read on to see how other teams take on the challenge of identifying shared costs at scale.

### Avoid and Simplify

*Joseph Daly, Nationwide*

Nationwide's strategy for handling shared costs is to avoid them as much as possible. We do this through our account and tagging strategies. We segment our accounts by department, and sometimes by application, so that any untaggable expenses are at least identifiable to the account owner.
We work with finance to make sure that each account has a default cost center, so that 100% of all costs are allocable to the owner.

For shared platforms, like containers, we primarily leverage a label strategy for cost allocation. We allocate cluster costs in proportion to each cost center label's usage. Sometimes, depending on the application, we will dedicate clusters, similarly to what we do with accounts. That way, one department or application is charged for the costs incurred, as opposed to the costs being spread out amongst the rest.

Due to company accounting policy and a desire to make our direct chargeback as transparent as possible, we do not allocate 100% of our bill directly through chargeback. Chargeback covers the vast majority of our expenses. For the remainder (items like enterprise support and accounts set up for management of the cloud), we set a budget and provide showback reporting to make these charges visible. They are not charged back directly, but we know who/what/where the costs are being driven from. Providing this keeps us aligned with enterprise accounting policies and keeps our chargeback simple and transparent.

---

### Non-technical/Market/Environmental/Corporate Governance Influencing Factors on How to Track, Process and Share Costs

*Neil May, FinOps Director, Wish Star*

**Vendor pricing strategies** - complex, ever-evolving software and service licensing models from multiple vendors that must flow through predictions and usage accounting.
**Centralized procurement in multi-national companies** - due to corporate strategy for controls and/or seeking economies of scale from full company buying power:

* Pulls all costs into one regional billing point: a persistent monthly stream of usage data to couple with billing data for real-time analysis; validation against exception rules such as tag validation; clean digital identity integrity with budgets/cost centres and bill-to entities in the group; contracted rates within contracted commits and/or accurate overage rates; and so on.
* Validated charges need to be re-rated (plus any internal finance admin charge), with cross-border currency forex applied (and from what base?), and local tax jurisdiction treatments handled service by service (i.e. SaaS vs. IaaS vs. Telecoms vs. Network), accounting for withholding taxes incurred where international tax treaties don't provide relief (i.e. WHT from Brazil to the UK is ~30%). Who takes this cost/cashflow hit? How to account for and apportion it? How does this change corporate policy decisions between decentralized billing (from local vendors) and centralized billing with reduced economies of scale?

**Exceptions** - e.g. a pandemic causing sharp fluctuations in resourcing (human or otherwise, with furlough, contracting, and burstable cloud services), creating exceptional usage patterns and the need to ensure flexible subscription plans are scrutinised for integrity with joiners/movers/leavers and project/process suspensions.

**Pattern spotting** - building up significant data lakes with triangulated data points between related services allows for the build-up of benchmarked industry costs and utilisation models, for total cost analysis of LOB apps and solutions for apportionment. Data science and analysis can surface insights to help drive down costs as well as apportionment norms, and the shared cost data can also aid prescription of quality of experience for user adoption
(different network provider data, compute data, etc.: optimising cost-to-performance for better apportionment).

---

### One Way to Put it All Together

*Chris Greenham, healthAlliance*

Shared costs should be identified upfront as part of Service Design. Just as the Service Design is subject to design approval, the Shared Cost Model needs approval from FinOps. FinOps should own the cloud shared cost model(s) on behalf of the business cost centre owners, for both Business as Usual (OpEx) and Projects (CapEx). FinOps should define the Cost Model Design Patterns that best suit the cloud deployment, service model, and CSP pricing and discounting structures that are to be made available to the business.

During the Service Design phase, 'Compare, Price and Quote' is performed to determine what the Total Cost of Deployment (TCD) and Total Cost of Ownership (TCO) will be before ordering any cloud services. A Cloud Service Decision Tree should then be applied to determine the deployment: colocation, private, public or hybrid.

When designing a new cloud service, the cost model requirements are identified and agreed upon by the Project and Business Owners. The Project shared cost model may be completely different from the Business Operations shared cost model. The Cost Model Design Patterns are used to guide the shared cost decision.

To manage cloud cost ownership over the lifetime of the cloud service, FinOps should implement a Change Control mechanism to manage the change of ownership and change of cost model between Project CapEx budgets and BAU OpEx budgets, where the latter may be split across multiple Business Owner cost centres. This Change Control provides the audit reporting and visibility required by Finance to enable the tracking of changes in cloud cost ownership, and of when and how cloud shared costs were applied.
The stop/start date can then be used to determine how proration of charges should be applied when the change of ownership occurs partway through a CSP billing period.

The final stage is the reconciliation of charges between 'actuals and expected'. This step is to ensure that all CSP cloud charges that have been apportioned against a Shared Cost Model balance out against the CSP monthly bill. Once the balance of charges has been confirmed and approved by FinOps, showback and/or chargeback reports should be issued to each of the cost centres, outlining a breakdown of their monthly cloud charges for the cloud bill period.

healthAlliance acts as a Cloud Service Broker (CSB) supporting IaaS and public cloud solutions for multiple district hospitals. Finance manages the General Ledger, which supports a set of accounts with cost centre codes. The cost centres cover Project (CapEx) and BAU (OpEx) budgets under a set of accounts. Each hospital sets its planned CapEx and OpEx cloud budgets for the year, which are then divided across the cloud and IT services that the hospital requires to operate.

The CSP bill (invoice) is received by Finance and posted to the Accounts Payable account for each CSP (bill to pay). The CSP statement of charges/billing report is then retrieved from each CSP at the end of each bill period. The set of charges is then normalised (currency conversion) for local handling, which includes reconciliation of actuals vs. expected (cloud Moves, Adds, Changes and Deletes, aka MACDs, over the bill period).

The billing report includes a Machine ID, which is recorded in the CMDB under an Implementation. Each Implementation is then appended with a Charge Code configuration item. The reconciled charges are then apportioned to the Charge Code item(s) identified for the Implementation item in the CMDB. This identifies the active charging party over the bill period (this caters for proration of charges for Project-to-BAU handover).
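The proration mentioned above might be sketched as follows. Day-level granularity is an assumption here (a real model could prorate by hours or by usage), and the dates and amounts are illustrative only:

```python
from datetime import date

def prorate(charge, period_start, period_end, handover):
    """Split one bill-period charge between outgoing and incoming owners.

    Both period dates are inclusive; the incoming owner pays from the
    handover date onward.
    """
    total_days = (period_end - period_start).days + 1
    incoming_days = (period_end - handover).days + 1
    incoming = round(charge * incoming_days / total_days, 2)
    return round(charge - incoming, 2), incoming

# A 30-day April bill handed over from Project (CapEx) to BAU (OpEx) on the 21st
outgoing, incoming = prorate(300.0, date(2023, 4, 1), date(2023, 4, 30), date(2023, 4, 21))
print(outgoing, incoming)  # 200.0 100.0
```

Computing the outgoing share as the remainder (rather than rounding both halves independently) guarantees the two portions always sum back to the billed charge, which is exactly what the actuals-vs-expected reconciliation checks for.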
The Charge Code item is stored in the CMDB as a Configuration Item (CI). The Charge Code CI is attached to each Implementation (CI), which is in turn linked to each VM Instance (CI) for the IaaS CSP. The Charge Code item contains a number of attributes, which are configured as part of the provisioning process for service establishment. CSP orders are noted with the VM Instance and Charge Code CIs. Order reports are generated to reconcile the bill against any cloud MACD changes that occurred during the bill period, to verify actuals against expected.

![](/img/shared-costs/chris-story.png)
<figure>Charge Code Configuration Item (CMDB)</figure>

The VM instance design includes specifying the Charge Code CI attributes: Charge Category (Project/BAU), Charge Period From/To Dates, Charge Owner, Bill Status (Active, Suspended, Terminated), Charge Code (Cost Centre, Project Code, or GL Code), Charge Method (Purchase Order, Project, Funded), Charge Type (Dedicated, Shared), Charge Split (% or $ Amounts), Charge Contact (Send Alerts), Budget Amount, Budget Period, and Charge Reporting (Showback, Chargeback). The Charge Code CI remains against the Implementation for the duration of its life, as a 1:many association.

For a shared charging model, the Charge Type attribute must be set to 'Shared' so that multiple Charge Code CIs can be created against the Implementation. Each Charge Code CI is then configured for each shared charge entity: Owner, Contact, Budget, %/$ Amounts, and Chargeback. The result should be 100% cost allocation across the Charge Code CIs.

The above Charge Code item attributes are then changed as required over the lifetime of the Implementation. Change Control (RFC) is the mechanism used to request and authorise the change actions. Charge config attributes change as the product instance moves between Project CapEx (Non-Prod) and BAU (Prod) OpEx.
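The 100% allocation check across Charge Code CIs could be sketched like this. The `%`/`$` split encoding mirrors the Charge Split attribute above, and all cost centre names and figures are illustrative:

```python
def validate_charge_split(splits, total_charge, tolerance=0.01):
    """Check that the shared charge entities on an Implementation cover
    100% of the cost, allowing a mix of % and $ split entries.

    splits: list of (cost_centre, kind, value) where kind is "%" or "$".
    """
    allocated = 0.0
    for _, kind, value in splits:
        allocated += total_charge * value / 100.0 if kind == "%" else value
    return abs(allocated - total_charge) <= tolerance

splits = [
    ("hospital-a", "%", 60.0),   # 60% of the charge
    ("hospital-b", "%", 25.0),   # 25% of the charge
    ("hospital-c", "$", 150.0),  # fixed dollar amount
]
print(validate_charge_split(splits, 1000.0))  # True: 600 + 250 + 150 == 1000
```

Note that a split mixing percentages and fixed amounts only balances for a particular bill total, which is one reason such configurations need re-validation during each bill-period reconciliation.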
The Owners/Roles, Budgets and Shared Amounts are some of the attributes that will change over the lifetime of the Implementation. The RFC provides an audit trail of the financial transactions to meet accounting and audit compliance requirements.

The above process supports a hybrid cloud that employs an aggregation model. The build process includes a financial modelling scenario that identifies how the cost allocation is to be configured before deployment. A showback report can be provided with dummy charge data to validate that the shared charge model is applied as expected. The Project/Business owner is then required to accept responsibility for the receipt of charges as identified for shared costs, for the agreed charge period, which is tracked against an agreed monthly spend budget: 'Responsible Cloud Spending'.

---

### How to Allocate Meter Charges by Type

*Tom Foegen, Section Head - IT Business Services, Mayo Clinic*

To date, Mayo Clinic is allocating all meter charges to projects. Charges are broken down in a few ways:

* General costs - These are costs that apply to all projects. An example would be security logging costs. These costs are applied proportionately to the charges.
* Service costs - Projects are broken down into various service categories such as web app stack, IaaS, data science virtual machine, etc. Some costs apply only to one service, and those costs are allocated only to the costs for that service. An example would be training instances, which are set up to teach customers how to use the tools.
* Reserved instances - Reserved instance costs (Azure) are directly allocated to each project based on the meter categories in the billing export for reserved instances.

These costs are applied at a detailed row level within the billing export data, in order to be able to roll up the costs multiple ways. The customer only sees the cost after the costs are allocated.
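The "applied proportionately" treatment of general costs described above might look like the following sketch, where the project names and figures are illustrative only:

```python
def allocate_general_costs(project_charges, general_cost):
    """Spread a general (shared) cost across projects in proportion to
    each project's direct charges."""
    total = sum(project_charges.values())
    return {
        project: round(direct + general_cost * direct / total, 2)
        for project, direct in project_charges.items()
    }

charges = {"proj-a": 600.0, "proj-b": 300.0, "proj-c": 100.0}
print(allocate_general_costs(charges, 50.0))
# {'proj-a': 630.0, 'proj-b': 315.0, 'proj-c': 105.0}
```

Because each project's share is weighted by its direct spend, the allocated totals still sum to the direct charges plus the general cost, so nothing is left unallocated.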
To date, we have not been challenged by anyone on those costs. Currently, we apply these costs at the end of the month, but since customers are now starting to receive regular reports (daily, weekly and monthly), we are going to start applying the overhead costs on a daily basis as charges come in. The challenge is that the projects need to be defined upfront in order to determine what service category they apply to. The alternative discussed recently is to use a standard percentage to apply the categories, and then true up these costs on a regular basis, maybe quarterly.

---

### Fair cost allocation in a shared Platform (as a Service)

*David Sterz, Solutions Architect and FinOps Lead, Mindcurv*

#### Setup

* The public cloud mimics the actual organization: departments, teams, products.
* Dedicated cloud accounts for teams are mapped 1:1 to organisational units.
* Shared cloud accounts form their own organisational unit(s).
* 1 Platform Team provides a shared platform and services.
* 20+ Product Teams have dedicated cloud resources; some are shared between them.

![](/img/shared-costs/david-story.png)

#### Goals

* Business: Provide accurate cloud costs per product, to be consumed as part of a unit metric aggregation.
* Finance: 80%-100% direct cost allocation to cost center codes for public cloud spend across Foundation and Product Teams.
* Engineering: Cost ownership through full transparency, and enablement to optimize themselves towards business KPIs.

#### Story

A central Platform Team is the enabler for the Product Teams to develop value faster and compliantly, by providing shared platform services as a product (VCS, Container Registry, CI/CD) on a shared deployment target (Kubernetes), along with shared operational services (dashboarding, log management, metrics, tracing + APM, etc.). The shared platform services are a mix of cloud resources, Kubernetes deployments, and 3rd-party SaaS that are consumed by most product teams.
The cloud resources that are needed by the product teams are provisioned by the central platform team into the product teams' accounts, which frees the product teams from the heavy lifting and the operational responsibilities so they can focus more on their application development. While organisational units and the account-per-team setup give a good baseline for cost transparency, there are various shared costs almost everywhere.

##### Product Team Accounts

Most costs can be allocated directly to the product team at the product level. Higher granularity, down to the application and sub-service level, is achieved through tagging and labeling. A small percentage of costs occurs in every product team account as part of the platform team's tooling, which provides observability, security and compliance services as part of the platform services.

**Examples**

* A monitoring agent per account, as part of shared observability costs
* Threat detection introduced by a central security team
* Compliance checks against policies and best practices

##### Platform Team Accounts

As the platform for the product teams is shared, most of the resources in the platform account are shared.
**Examples**

* DNS, with many records, split into nonproduction and production
* Secret management via Kubernetes integration with the cloud provider's secret service
* Kubernetes cluster infrastructure (storage, compute, RAM, network)
* Kubernetes workloads from all teams in the shared cluster
* The CD system is shared across all teams deploying new releases
* Dashboards and alerting are shared
* Metrics & tracing are shared, with big differences in usage and costs per team and per read/write
* Log management, where teams ingest different volumes at different velocities, along with different usage/query patterns

**Example of a shared cost allocation strategy for log management**

| Strategy | Crawl | Walk | Run |
|--------------|----------------------------------|--------------------------------------|--------------------------------|
| Proportional | | Log-Mgmt Costs / Directcost-per-Team | |
| Even split | Log-Mgmt Costs / Number-of-Teams | | |
| Fixed | Log-Mgmt Costs by Traffic % | Log-Mgmt Costs by Log-Storage % | Log-Mgmt Costs by Read/Write % |

##### Shared Accounts

Other accounts provide services that are shared across all teams.

**Examples**

* The version control system (VCS) is used by many teams, who store source code, build artifacts and documentation there
* A central container registry holds images for every team
* The CI runners build software for various teams; some are dedicated to a team, as they have specific requirements

---

<span id="acknowledgements"></span>

## Acknowledgements

The FinOps Foundation extends a huge thank you to the members of the Special Interest Group that broke ground on this documentation:

- Tracy Roesler
- Vik Saluja
- David Sterz
- Joe Daly
- Deana Solis
- Anthony Johnson
- Chris Greenham
- Eleni Siakagianni
- Neil May
- Tom Foegen
- Vasilio Markanastasakis

If we've missed anyone, let us know. We thank you all for your contributions.

## How to contribute more FinOps stories about this challenge

There are many more stories to tell.
If you have your own perspectives on tackling this challenge, submit your story to the SIG, or contribute to the FinOps Framework GitHub repo. See our [contribution guidelines](https://framework.finops.org/introduction/how-to-contribute/) for more details.
# How to use `code-push` hot updates with `react-native`

## Before you start

- Q: "Do the Apple App Store and the Android app stores allow hot updates?"

  A: "They both allow them."

  > Apple allows hot updates ([Apple's developer agreement](https://developer.apple.com/programs/ios/information/iOS_Program_Information_4_3_15.pdf)), but stipulates that you must not pop up a dialog prompting the user to update, as this affects the user experience.
  > Google Play also allows hot updates, but requires a dialog informing the user of the update. When publishing to Android app markets in China, the update dialog must be disabled; otherwise the app will be rejected during review with "please upload the latest version of the binary application package".

- Q: "Why recommend code-push?"

  A: "It's very good. Beyond the basic update features, it also provides statistics, hash-based fault tolerance, and patch updates. It's a Microsoft project, so the technology is backed by a large company, and it's open source. Microsoft's embrace of open source in recent years has been impressive."

## Install code-push-server

```shell
npm install code-push-server -g
```

Install MySQL: https://ken.io/note/macos-mysql8-install-config-tutorial

Open up the service permissions (so you don't have to type the password every time):

```shell
sudo chmod 777 /usr/local/mysql/support-files/mysql.server
```

Set the environment variable:

```shell
open ~/.bash_profile
PATH=$PATH:/usr/local/mysql/bin
source ~/.bash_profile
```

Grant root permissions (paste directly to change the password):

```shell
mysql -uroot -p
```

```sql
ALTER USER `root`@`localhost` IDENTIFIED WITH mysql_native_password BY `password`;
flush privileges;
```

Initialize the database:

```shell
code-push-server-db init --dbhost localhost --dbuser root --dbpassword 12345678
```

Configure code-push-server's config.js:

```shell
open /usr/local/lib/node_modules/code-push-server/config/config.js
```

The parts to change: the `db` info, and under `local`: `storageDir` and `downloadUrl`.

Start the server, then get a token: open 127.0.0.1:3000 (user: admin, password: 123456), run `code-push login`, and enter the token.

Server setup complete!
<p align="center"> <img src="https://i.imgur.com/nSQE0Ey.png" width="40%" alt="Logo Tesla"/> </p> <br>

<div align="center" style="margin: 20px; text-align: center">

[![Netlify Status](https://api.netlify.com/api/v1/badges/d0415414-ab6f-4203-9cf0-95fcbd1b5677/deploy-status)](https://app.netlify.com/sites/tesla-homepage-app-clone/deploys)

</div>

##

<p align="center"> <a href="#project-star2">Project</a>&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp; <a href="#techs-rocket">Techs</a>&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp; <a href="#installation-wrench">Installation</a>&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp; <a href="#start-on">Start</a>&nbsp;&nbsp;&nbsp;|&nbsp;&nbsp;&nbsp; <a href="#license-memo">License</a> </p>

##

<p align="center"> <img src="src\assets\img\banner.png"/> </p> <br>

## Project :star2:

This repo contains a UI clone of the Tesla homepage. <br> Deployed [here](https://tesla-homepage-app-clone.netlify.app/). <br>

<p align="center"> <img src="src\assets\img\tesla-1.gif"/> </p> <br>

<p align="center"> <img src="src\assets\img\tesla-2.gif"/> </p> <br>

## Techs :rocket:

- [x] [ReactJS](https://reactjs.org);
- [x] [TypeScript](https://www.typescriptlang.org/);
- [x] [Framer Motion](https://www.framer.com/motion/);
- [x] [Styled Components](https://styled-components.com/).

<br>

## Installation :wrench:

First you need to clone the project using `https://github.com/GabrielFraga962/Tesla_Homepage_Reactjs.git`. You can install the application using `npm install` or `yarn install` in the root dir.

<br>

## Start :on:

To start the application interface, just run `npm start` or `yarn start` in the root dir.

<br>

## License :memo:

[![License](http://img.shields.io/:license-mit-blue.svg?style=flat-square)](http://badges.mit-license.org)

- **[MIT license](https://github.com/GabrielFraga962/Tesla_Homepage_Reactjs/blob/main/LICENSE)**;
- Copyright 2022 © <a href="https://github.com/GabrielFraga962" target="_blank">Gabriel S. Fraga</a>.
---
title: OLE (MFC)
ms.date: 11/04/2016
helpviewer_keywords:
- OLE [MFC], user-interface topics
- OLE applications [MFC], user interface
- user interfaces, OLE
- applications [OLE], user interface
ms.assetid: 61cb5d3e-1108-4e9b-b301-a8d8fcc586cb
ms.openlocfilehash: 69418136f87ecacf571aec2b5ff2989cff9cf120
ms.sourcegitcommit: 6052185696adca270bc9bdbec45a626dd89cdcdd
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 10/31/2018
ms.locfileid: "50467228"
---
# <a name="ole-mfc"></a>OLE (MFC)

Implementing OLE functionality in your program affects the user interface in several ways:

- Visual editing (in-place activation) displays another program's user interface inside your program's windows, and modifies your program's menus with items from the other program.

- Drag and drop lets users drag objects within and between windows, and even between programs.

- Trackers provide visual cues about the state of objects during visual editing and drag-and-drop operations.

For more information, see:

- [OLE and MFC](../mfc/ole-in-mfc.md)

- [Visual editing (activation)](../mfc/activation-cpp.md)

- [Drag and drop](../mfc/drag-and-drop-ole.md)

- [Trackers](../mfc/trackers.md)

## <a name="see-also"></a>See also

[User-Interface Elements](../mfc/user-interface-elements-mfc.md)
---
title: include file
description: include file
services: virtual-machines
author: cynthn
ms.service: virtual-machines
ms.topic: include
ms.date: 09/20/2018
ms.author: cynthn
ms.custom: include file
ms.openlocfilehash: 1ec3ecdafb8e475f5f13372789528612ccd7b8b9
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: sv-SE
ms.lasthandoff: 03/27/2020
ms.locfileid: "66226031"
---
## <a name="using-rbac-to-share-images"></a>Using RBAC to share images

You can share images across subscriptions using role-based access control (RBAC). Any user who has read permissions to an image version, even across subscriptions, can deploy a VM using that image version. For more information about sharing resources using RBAC, see [Manage access using RBAC and the Azure CLI](https://docs.microsoft.com/azure/role-based-access-control/role-assignments-cli).

## <a name="list-information"></a>List information

Get the location, status, and other information about the available image galleries using [az sig list](/cli/azure/sig#az-sig-list).

```azurecli-interactive
az sig list -o table
```

List the image definitions in a gallery, including information about OS type and status, using [az sig image-definition list](/cli/azure/sig/image-definition#az-sig-image-definition-list).

```azurecli-interactive
az sig image-definition list --resource-group myGalleryRG --gallery-name myGallery -o table
```

List the shared image versions in a gallery using [az sig image-version list](/cli/azure/sig/image-version#az-sig-image-version-list).

```azurecli-interactive
az sig image-version list --resource-group myGalleryRG --gallery-name myGallery --gallery-image-definition myImageDefinition -o table
```

Get the ID of an image version using [az sig image-version show](/cli/azure/sig/image-version#az-sig-image-version-show).

```azurecli-interactive
az sig image-version show \
   --resource-group myGalleryRG \
   --gallery-name myGallery \
   --gallery-image-definition myImageDefinition \
   --gallery-image-version 1.0.0 \
   --query "id"
```
---
title: GitHub Actions workflows for Azure Static Web Apps
description: Learn how to use GitHub repositories to set up continuous deployment to Azure Static Web Apps.
services: static-web-apps
author: craigshoemaker
ms.service: static-web-apps
ms.topic: conceptual
ms.date: 05/08/2020
ms.author: cshoe
ms.openlocfilehash: 5e6188ca2e8e0972e86bed578144a29a96570876
ms.sourcegitcommit: 5e762a9d26e179d14eb19a28872fb673bf306fa7
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 01/05/2021
ms.locfileid: "97901191"
---
# <a name="github-actions-workflows-for-azure-static-web-apps-preview"></a>GitHub Actions workflows for Azure Static Web Apps (preview)

When you create a new Azure Static Web Apps resource, Azure generates a GitHub Actions workflow to control the app's continuous deployment. The workflow is driven by a YAML file. This article details the structure and options of the workflow file.

Deployments are started by [triggers](#triggers), which run [jobs](#jobs) that are defined by individual [steps](#steps).

## <a name="file-location"></a>File location

When you link your GitHub repository to Azure Static Web Apps, a workflow file is added to the repository. Follow these steps to view the generated workflow file.

1. Open the app's repository on GitHub.
1. From the _Code_ tab, click the `.github/workflows` folder.
1. Click the file with a name similar to `azure-static-web-apps-<RANDOM_NAME>.yml`.

The YAML file in your repository will look similar to the following example:

```yml
name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - master

jobs:
  build_and_deploy_job:
    if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed')
    runs-on: ubuntu-latest
    name: Build and Deploy Job
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true
      - name: Build And Deploy
        id: builddeploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_MANGO_RIVER_0AFDB141E }}
          repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments)
          action: 'upload'
          ###### Repository/Build Configurations - These values can be configured to match your app requirements. ######
          app_location: '/' # App source code path
          api_location: 'api' # Api source code path - optional
          output_location: 'dist' # Built app content directory - optional
          ###### End of Repository/Build Configurations ######
  close_pull_request_job:
    if: github.event_name == 'pull_request' && github.event.action == 'closed'
    runs-on: ubuntu-latest
    name: Close Pull Request Job
    steps:
      - name: Close Pull Request
        id: closepullrequest
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_MANGO_RIVER_0AFDB141E }}
          action: 'close'
```

## <a name="triggers"></a>Triggers

A GitHub Actions [trigger](https://help.github.com/actions/reference/events-that-trigger-workflows) notifies a GitHub Actions workflow to run a job based on event triggers. Triggers are listed using the `on` property in the workflow file.

```yml
on:
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, reopened, closed]
    branches:
      - master
```

Through the settings associated with the `on` property, you can define which branches trigger a job, as well as set triggers to fire for different pull request states. In this example, a workflow is started when the _master_ branch changes. Changes that start the workflow include pushing commits and opening pull requests against the chosen branch.

## <a name="jobs"></a>Jobs

Every event trigger requires an event handler. [Jobs](https://help.github.com/actions/reference/workflow-syntax-for-github-actions#jobs) define what happens when an event is triggered. In the Static Web Apps workflow file, two jobs are available.

| Name | Description |
|---------|---------|
|`build_and_deploy_job` | Executes when you push commits or open a pull request against the branch listed in the `on` property. |
|`close_pull_request_job` | Executes only when you close a pull request, which removes the staging environment created from pull requests. |

## <a name="steps"></a>Steps

Steps are sequential tasks for a job. A step performs actions such as installing dependencies, running tests, and deploying your application to production.

A workflow file defines the following steps.
| Trabalho | Etapas | |---------|---------| | `build_and_deploy_job` |<ol><li>Verifica o repositório no ambiente da ação.<li>Compila e implanta o repositório nos Aplicativos Web Estáticos do Azure.</ol>| | `close_pull_request_job` | <ol><li>Notifica os Aplicativos Web Estáticos do Azure que uma solicitação de pull fechou.</ol>| ## <a name="build-and-deploy"></a>Criar e implantar A etapa chamada `Build and Deploy` compila e implanta em sua instância dos Aplicativos Web Estáticos do Azure. Na seção `with`, você pode personalizar os valores a seguir para sua implantação. ```yml with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_MANGO_RIVER_0AFDB141E }} repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments) action: 'upload' ###### Repository/Build Configurations - These values can be configured to match you app requirements. ###### app_location: '/' # App source code path api_location: 'api' # Api source code path - optional output_location: 'dist' # Built app content directory - optional ###### End of Repository/Build Configurations ###### ``` | Propriedade | Descrição | Obrigatório | |---|---|---| | `app_location` | Local do código do aplicativo.<br><br>Por exemplo, digite `/` se o código-fonte do aplicativo estiver na raiz do repositório ou `/app` se o código do aplicativo estiver em um diretório chamado `app`. | Sim | | `api_location` | Local do seu código de Azure Functions.<br><br>Por exemplo, digite `/api` se o código do aplicativo estiver em uma pasta chamada `api`. Se nenhum aplicativo Azure Functions for detectado na pasta, a compilação não falhará; o fluxo de trabalho pressupõe que você não deseja uma API. 
| Não | | `output_location` | Local do diretório de saída de compilação relativo ao `app_location`.<br><br>Por exemplo, se o código-fonte do aplicativo estiver localizado em `/app`, e o script de compilação gerar arquivos para a pasta `/app/build`, defina `build` como o valor `output_location`. | Não | Os valores `repo_token`, `action` e `azure_static_web_apps_api_token` são definidos para você pelos Aplicativos Web Estáticos do Azure e não devem ser alterados manualmente. ## <a name="custom-build-commands"></a>Comandos de compilação personalizados Você pode ter um controle refinado sobre quais comandos são executados durante uma implantação. Os comandos a seguir podem ser definidos na seção `with` de um trabalho. A implantação sempre chama `npm install` antes de qualquer comando personalizado. | Comando | Descrição | |---------------------|-------------| | `app_build_command` | Define um comando personalizado a ser executado durante a implantação do aplicativo de conteúdo estático.<br><br>Por exemplo, para configurar uma compilação de produção para um aplicativo angular, crie um script NPM chamado `build-prod` para executar `ng build --prod` e insira `npm run build-prod` como o comando personalizado. Se for deixado em branco, o fluxo de trabalho tentará executar os comandos `npm run build` ou `npm run build:Azure`. | | `api_build_command` | Define um comando personalizado a ser executado durante a implantação do aplicativo da API do Azure Functions. | ## <a name="route-file-location"></a>Local do arquivo de rota Você pode personalizar o fluxo de trabalho para procurar o [routes.json](routes.md) em qualquer pasta de seu repositório. A propriedade a seguir pode ser definida na seção `with` de um trabalho. | Propriedade | Descrição | |---------------------|-------------| | `routes_location` | Define o local do diretório onde o arquivo _routes.json_ é encontrado. Esse local é relativo à raiz do repositório. 
| Ser explícito sobre o local do seu arquivo _routes.json_ é particularmente importante se a etapa de compilação da estrutura de front-end não mover esse arquivo para o `output_location` por padrão. ## <a name="environment-variables"></a>Variáveis de ambiente Você pode definir variáveis de ambiente para sua compilação por meio da `env` seção de configuração de um trabalho. ```yaml jobs: build_and_deploy_job: if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed') runs-on: ubuntu-latest name: Build and Deploy Job steps: - uses: actions/checkout@v2 with: submodules: true - name: Build And Deploy id: builddeploy uses: Azure/static-web-apps-deploy@v0.0.1-preview with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }} repo_token: ${{ secrets.GITHUB_TOKEN }} action: "upload" ###### Repository/Build Configurations app_location: "/" api_location: "api" output_location: "public" ###### End of Repository/Build Configurations ###### env: # Add environment variables here HUGO_VERSION: 0.58.0 ``` ## <a name="next-steps"></a>Próximas etapas > [!div class="nextstepaction"] > [Examinar solicitações de pull em ambientes de pré-produção](review-publish-pull-requests.md)
51.410891
472
0.732403
por_Latn
0.995447
e902b0d8ae3abd353c2d261569a36a2f25f9c31a
3,607
md
Markdown
docs/twelve-factors.md
franky47/douze
f2ef6fc21dd43d5179cb2e39b4d1a714b052a526
[ "MIT" ]
2
2019-06-19T04:34:59.000Z
2019-06-20T07:35:00.000Z
docs/twelve-factors.md
franky47/douze
f2ef6fc21dd43d5179cb2e39b4d1a714b052a526
[ "MIT" ]
7
2019-08-12T09:55:47.000Z
2019-11-05T04:23:23.000Z
docs/twelve-factors.md
franky47/douze
f2ef6fc21dd43d5179cb2e39b4d1a714b052a526
[ "MIT" ]
null
null
null
# The Twelve Factors The [Twelve Factor App manifesto](https://12factor.net/) was written by Adam Wiggins and others. ## The Original Twelve Factors ### [1. Codebase](https://12factor.net/codebase) ### [2. Dependencies](https://12factor.net/dependencies) ### [3. Configuration](https://12factor.net/config) Runtime-aware vs static configuration ### [4. Backing Services](https://12factor.net/backing-services) > Treat backing services as attached resources Douze has plugins for handling backing services: - [`douze-sequelize`](https://github.com/franky47/douze-sequelize) for SQL databases Backing service plugins make good use of Douze's hook system to abstract the connection to a third party service, error monitoring and logging. They also provide environment variable configuration for their internal features, preferred over code as configuration for runtime-aware options. ### [5. Build, Release, Run](https://12factor.net/build-release-run) ### [6. Processes](https://12factor.net/processes) ### [7. Port Binding](https://12factor.net/port-binding) Douze will expose an HTTP server at the following location: ``` http://${HOST:-0.0.0.0}:${PORT:-3000} ``` For non-Unix speakers, this means the environment variable `HOST` will define the listening address, with a fallback to `0.0.0.0` if not set, and the port will be defined by the environment variable `PORT`, or `3000` if not set. ### [8. Concurrency](https://12factor.net/concurrency) ### [9. Disposability](https://12factor.net/disposability) ### [10. Dev / Prod Parity](https://12factor.net/dev-prod-parity) ### [11. Logs](https://12factor.net/logs) Logs are produced as JSON, in a newline-delimited stream, produced by [Pino](https://github.com/pinojs/pino). Formatting of the logs (filtering, pretty-printing and other forms of processing) is done off-process. Douze provides a [prettifier](https://github.com/franky47/douze-prettify-logs) to account for the fields it adds to the logs. ### [12.
Admin Processes](https://12factor.net/admin-processes) > _Admin code must ship with application code to avoid synchronization issues._ Also because configuration is stored in the environment, which should not be accessible elsewhere than the production environment for security reasons. Douze provides admin processes in the form of [Tasks](./tasks.md). ## The Additional Twelve Factors ### 13. Security Security must be a first-class citizen of any modern web application, and thought of as a process rather than a single operation done once at the start of a project. Like privacy, it is also a mindset, but because not everybody has the same sensitivity to it, it is better defined as a process. - Monitor your dependencies for vulnerabilities - Monitor your own code for vulnerabilities - Follow the OWASP best practices - Redact security-related fields in logs (Douze does that for basic HTTP headers, but you can extend the behaviour to your application's needs) Douze has a few built-in security features: - Redaction of log fields and critical environment variable values - Security headers (provided by [`helmet`](https://github.com/helmetjs/helmet)) ### 14. Privacy Douze protects the privacy of your users by default, and will redact the source IP and user-agent header from your logs. For cross-request user tracking, an anonymized identifier is provided. ### 15. Transparency ### 16. Eco-Consciousness ### 17. Documentation ### 18. Automation ### 19. Robustness ### 20. Testing ### 21. Maintainability ### 22. Interoperability ### 23. Cognitive Load & Complexity ### 24. Monitoring
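The factor 7 fallback rule above (`HOST` defaulting to `0.0.0.0`, `PORT` defaulting to `3000`) can be illustrated outside of Node as well. A minimal sketch, assuming nothing about douze itself — the function name `resolve_bind_address` is invented for illustration:

```python
import os

def resolve_bind_address(env=os.environ):
    """Resolve the listen address per the HOST/PORT fallback convention:
    HOST defaults to 0.0.0.0 and PORT defaults to 3000 when unset."""
    host = env.get("HOST", "0.0.0.0")
    port = int(env.get("PORT", "3000"))
    return host, port

# With an empty environment the documented defaults apply:
print(resolve_bind_address({}))  # ('0.0.0.0', 3000)
```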
30.058333
143
0.754921
eng_Latn
0.970744
e902bafcc4e8cce65d4416f0331828da5b16239c
2,333
md
Markdown
integration/examples/custom/README.md
qsays/skaffold
7117450d9c8bac54a86a684c404daf1375b87d60
[ "Apache-2.0" ]
3
2019-11-16T14:19:38.000Z
2020-01-29T00:04:01.000Z
integration/examples/custom/README.md
qsays/skaffold
7117450d9c8bac54a86a684c404daf1375b87d60
[ "Apache-2.0" ]
null
null
null
integration/examples/custom/README.md
qsays/skaffold
7117450d9c8bac54a86a684c404daf1375b87d60
[ "Apache-2.0" ]
2
2019-11-15T23:34:13.000Z
2019-12-01T19:23:27.000Z
### Example: use the custom builder with ko This example shows how the custom builder can be used to build artifacts with [ko](https://github.com/google/ko). * **building** a single Go file app with ko * **tagging** using the default tagPolicy (`gitCommit`) * **deploying** a single container pod using `kubectl` #### Before you begin For this tutorial to work, you will need to have Skaffold and a Kubernetes cluster set up. To learn more about how to set up Skaffold and a Kubernetes cluster, see the [getting started docs](https://skaffold.dev/docs/getting-started). #### Tutorial This tutorial will demonstrate how Skaffold can build a simple Hello World Go application with ko and deploy it to a Kubernetes cluster. First, clone the Skaffold [repo](https://github.com/GoogleContainerTools/skaffold) and navigate to the [custom example](https://github.com/GoogleContainerTools/skaffold/tree/master/examples/custom) for sample code: ```shell $ git clone https://github.com/GoogleContainerTools/skaffold $ cd skaffold/examples/custom ``` Take a look at the `build.sh` file, which uses `ko` to containerize source code: ```shell $ cat build.sh #!/usr/bin/env bash set -e if ! [ -x "$(command -v ko)" ]; then GO111MODULE=on go get -mod=readonly github.com/google/ko/cmd/ko fi output=$(ko publish --local --preserve-import-paths --tags= . | tee) ref=$(echo $output | tail -n1) docker tag $ref $IMAGE if $PUSH_IMAGE; then docker push $IMAGE fi ``` and the skaffold config, which configures artifact `skaffold-example` to build with `build.sh`: ```yaml $ cat skaffold.yaml apiVersion: skaffold/v2alpha1 kind: Config build: artifacts: - image: skaffold-custom custom: buildCommand: ./build.sh ``` For more information about how this works, see the Skaffold custom builder [documentation](https://skaffold.dev/docs/how-tos/builders/#custom-build-script-run-locally). 
Now, use Skaffold to deploy this application to your Kubernetes cluster: ```shell $ skaffold run --tail --default-repo <your repo> ``` With this command, Skaffold will build the `skaffold-example` artifact with ko and deploy the application to Kubernetes. You should be able to see *Hello, World!* printed every second in the Skaffold logs. #### Cleanup To clean up your Kubernetes cluster, run: ```shell $ skaffold delete ```
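The control flow of `build.sh` above — take the last line of the `ko publish` output as the image reference, retag it as `$IMAGE`, and push only when `$PUSH_IMAGE` is true — can be sketched as a pure function. This is an illustration only: the name `plan_custom_build` is invented, and the docker invocations are returned as argument lists instead of being executed:

```python
def plan_custom_build(ko_output: str, image: str, push_image: bool):
    """Mirror build.sh: the last line of `ko publish` output is the
    built image reference; retag it as IMAGE and optionally push."""
    ref = ko_output.strip().splitlines()[-1]
    commands = [["docker", "tag", ref, image]]
    if push_image:
        commands.append(["docker", "push", image])
    return commands

# Local build (PUSH_IMAGE=false): only the retag step is planned.
print(plan_custom_build("ko.local/example/app:abc123\n", "myrepo/skaffold-custom", False))
```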
30.298701
214
0.740677
eng_Latn
0.951082
e902fb7bc59a647035f43ed1ad78fb6c89585635
119
md
Markdown
README.md
TakakuraAnzu/scanMedia
7e2c15e9fc938d6de5a253c975533edcd1a27b89
[ "Apache-2.0" ]
null
null
null
README.md
TakakuraAnzu/scanMedia
7e2c15e9fc938d6de5a253c975533edcd1a27b89
[ "Apache-2.0" ]
null
null
null
README.md
TakakuraAnzu/scanMedia
7e2c15e9fc938d6de5a253c975533edcd1a27b89
[ "Apache-2.0" ]
null
null
null
# scanMedia This project uses the NDK to provide fast traversal of the video files on a phone. To use it, import the scanMedia.cpp (C++14) file from this project. ``` This project is released under the Apache License 2.0; see the LICENSE file for details. ```
14.875
39
0.798319
yue_Hant
0.523639
e90320108cb4c382d4a540d4cbf57ed0bd0a3dee
245
md
Markdown
2.6.07-Solution-PoweringUp/README.md
Sceptres/ud406
842bcf56968f7d934456f892ea50ea025e6bf087
[ "MIT" ]
58
2015-12-23T05:42:28.000Z
2021-09-28T12:29:32.000Z
2.6.07-Solution-PoweringUp/README.md
Sceptres/ud406
842bcf56968f7d934456f892ea50ea025e6bf087
[ "MIT" ]
19
2015-12-20T13:34:26.000Z
2021-11-20T10:21:47.000Z
2.6.07-Solution-PoweringUp/README.md
Sceptres/ud406
842bcf56968f7d934456f892ea50ea025e6bf087
[ "MIT" ]
131
2015-12-03T02:45:55.000Z
2021-12-03T07:00:14.000Z
# Powering Up! One more thing for this level. Let's make sure GigaGal has to be at least a little careful with her cannon by limiting her ammo and distributing ammo powerups around the level! Check out the TODOs, starting in `Constants.java`!
40.833333
176
0.779592
eng_Latn
0.999591
e90402618ff87c0aebc4d152ced46556c978861e
78
md
Markdown
_includes/04-lists.md
hosseinseyfoori/markdown-portfolio
9eb5e7b1480d022d4abd2e2dc89b26c7ec4cbed0
[ "MIT" ]
null
null
null
_includes/04-lists.md
hosseinseyfoori/markdown-portfolio
9eb5e7b1480d022d4abd2e2dc89b26c7ec4cbed0
[ "MIT" ]
5
2020-05-11T13:48:14.000Z
2020-05-11T14:20:20.000Z
_includes/04-lists.md
hosseinseyfoori/markdown-portfolio
9eb5e7b1480d022d4abd2e2dc89b26c7ec4cbed0
[ "MIT" ]
null
null
null
### This is a list of my work - ShoppingList - Do it for ever - Have a good Day
15.6
27
0.679487
eng_Latn
0.99993
e9046bfd6b63a6b74ac91a272bb2e5f371c4414c
231
md
Markdown
README.md
leonardo0494/oracle
9a11251ab4a672cc2b662bdd1fcaeadd46131433
[ "MIT" ]
1
2020-10-09T14:11:35.000Z
2020-10-09T14:11:35.000Z
README.md
leonardo0494/oracle
9a11251ab4a672cc2b662bdd1fcaeadd46131433
[ "MIT" ]
null
null
null
README.md
leonardo0494/oracle
9a11251ab4a672cc2b662bdd1fcaeadd46131433
[ "MIT" ]
null
null
null
# Repository of Oracle PL/SQL projects My name is Leonardo de Lima Silva; I am an employee of 3CON and work on the Oi project. This repository was created to keep versions of all the scripts I have developed.
38.5
101
0.796537
por_Latn
1.000003
e904e73e726c94f5ed541211fe065cd5ca14b58a
1,486
md
Markdown
bing-docs/bing-custom-search/how-to/get-videos-from-instance.md
nkgami/bing-docs
51cc5573df9b95242f4a2fc69cb55146f2718395
[ "CC-BY-4.0", "MIT" ]
3
2020-12-04T06:26:52.000Z
2021-11-04T20:17:24.000Z
bing-docs/bing-custom-search/how-to/get-videos-from-instance.md
nkgami/bing-docs
51cc5573df9b95242f4a2fc69cb55146f2718395
[ "CC-BY-4.0", "MIT" ]
75
2020-06-15T19:45:25.000Z
2022-03-31T02:29:57.000Z
bing-docs/bing-custom-search/how-to/get-videos-from-instance.md
nkgami/bing-docs
51cc5573df9b95242f4a2fc69cb55146f2718395
[ "CC-BY-4.0", "MIT" ]
14
2020-10-13T23:01:06.000Z
2022-02-26T08:02:53.000Z
--- title: Get videos from your Custom Search instance titleSuffix: Bing Search Services description: High-level overview about using Bing Custom Search to get videos from your custom view of the Web. services: bing-search-services author: swhite-msft manager: ehansen ms.service: bing-search-services ms.subservice: bing-custom-search ms.topic: conceptual ms.date: 07/15/2020 ms.author: scottwhi --- # Get videos from your custom view Bing Custom Video Search lets you enrich your custom search experience with videos. Similar to web results, Custom Search supports searching for videos in your view of the Web. To get videos, use Custom Video Search API or enable videos in your Hosted UI. Using the Hosted UI feature is simple and recommended for getting your search experience up and running in short order. For information about configuring your Hosted UI to include videos, see [Configure your hosted UI experience](hosted-ui.md). If you want more control over displaying the search results, use Custom Video Search API. If you're familiar with [Bing Video Search API](../../bing-video-search/overview.md), then you will be able to easily call Custom Video Search API. The main differences are 1) the endpoint you send requests to, and 2) you must include the *customConfig* query parameter. ```curl curl -H "Ocp-Apim-Subscription-Key: <yourkeygoeshere>" https://api.bing.microsoft.com/v7.0/custom/videos/search?q=sailing+dinghies&customConfig=<your configuration ID> ```
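The curl example above translates to an ordinary HTTP GET. A minimal sketch that only assembles the request pieces — the helper name is invented, no network call is made, and the key and configuration ID are placeholders:

```python
def build_custom_video_search_request(query, custom_config, subscription_key):
    """Assemble endpoint, query parameters, and headers for a Bing
    Custom Video Search call, mirroring the curl example above."""
    return {
        "url": "https://api.bing.microsoft.com/v7.0/custom/videos/search",
        "params": {"q": query, "customConfig": custom_config},
        "headers": {"Ocp-Apim-Subscription-Key": subscription_key},
    }

req = build_custom_video_search_request("sailing dinghies", "<your configuration ID>", "<yourkeygoeshere>")
print(req["params"])
```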
55.037037
500
0.789367
eng_Latn
0.979715
e905aed8df82f89e58ba436795ee6908a22e32aa
1,143
md
Markdown
README.md
hico-horiuchi/yosage
f1d4a4ff0602ff51ef15c1104ca470959cfde153
[ "MIT" ]
1
2017-04-10T07:26:12.000Z
2017-04-10T07:26:12.000Z
README.md
hico-horiuchi/yosage
f1d4a4ff0602ff51ef15c1104ca470959cfde153
[ "MIT" ]
null
null
null
README.md
hico-horiuchi/yosage
f1d4a4ff0602ff51ef15c1104ca470959cfde153
[ "MIT" ]
null
null
null
## yosage v0.2.2 ![eupho.gif](https://raw.githubusercontent.com/hico-horiuchi/yosage/master/eupho.gif) `yosage` (良さげ) is an LGTM gif generator written in Golang. It is stand-alone (it does not require ImageMagick and includes [lgtm.png](https://github.com/hico-horiuchi/yosage/blob/master/lgtm.png)). I got ideas from [8398a7/lgtm_creater](https://github.com/8398a7/lgtm_creater) and [syohex/speedline](https://github.com/syohex/speedline). #### Requirements - [Golang](https://golang.org/) >= 1 #### Installation ```sh $ git clone git://github.com/hico-horiuchi/yosage.git $ cd yosage $ make gom link $ make build $ bin/yosage -i input.gif -o output.gif ``` #### Usage Stand-alone LGTM gif generator by Golang https://github.com/hico-horiuchi/yosage Usage: yosage [flags] Flags: -i, --input="input.gif": input gif file path -l, --lgtm="": LGTM text png file path -o, --output="output.gif": output gif file path -v, --version[=false]: print and check the version #### License yosage is released under the [MIT license](https://raw.githubusercontent.com/hico-horiuchi/yosage/master/LICENSE).
28.575
139
0.685039
eng_Latn
0.28278
e9060755992babd6870d878b2961dec7a979cbac
23
md
Markdown
README.md
amleaver/samsungtv-remote-server
74f61c6fb264b82ecd61af2dd9fd42aa2ba8be43
[ "MIT" ]
null
null
null
README.md
amleaver/samsungtv-remote-server
74f61c6fb264b82ecd61af2dd9fd42aa2ba8be43
[ "MIT" ]
null
null
null
README.md
amleaver/samsungtv-remote-server
74f61c6fb264b82ecd61af2dd9fd42aa2ba8be43
[ "MIT" ]
null
null
null
# samsung-remote-server
23
23
0.826087
nob_Latn
0.545676
e906464d74e47de5adaef184fe1e5a50b09ffcac
1,483
md
Markdown
_posts/2020-06-24-springboot-6.md
pjh37/pjh37.github.io
4e2739a783a0912dc86a7e9ce593f46a6efae745
[ "MIT" ]
null
null
null
_posts/2020-06-24-springboot-6.md
pjh37/pjh37.github.io
4e2739a783a0912dc86a7e9ce593f46a6efae745
[ "MIT" ]
1
2020-10-18T14:20:33.000Z
2020-10-18T14:20:33.000Z
_posts/2020-06-24-springboot-6.md
pjh37/pjh37.github.io
4e2739a783a0912dc86a7e9ce593f46a6efae745
[ "MIT" ]
null
null
null
--- layout: post title: " spring boot jpa - entity mapping" date: 2020-06-23 desc: "introduction to spring boot jpa" keywords: "springboot" categories: [Springboot] tags: [springboot,jpa] icon: icon-html --- Entity mapping in JPA ----- <hr/> ## @Entity + ### Classes annotated with @Entity are managed by JPA + ### @Entity is required on any class you want JPA to map to a table > Caution!! > A default constructor is required (a public or protected no-args constructor) > Cannot be used on final classes, enums, interfaces, or inner classes > Do not use final on persisted fields ## Attribute: name (rarely needed — stick with the default, it only causes confusion!!!!) + ### Specifies the entity name JPA will use + ### Default: the class name as-is + ### If there is no clash with another class name, prefer the default!!!!!! <br/> ## Automatic database schema generation - caution + ### Never use create, create-drop, or update on production machines!!!! + ### Early development: create or update + ### Test servers: update or validate + ### Staging and production: validate or none <br/> ## Primary key mapping @Id @GeneratedValue ``` java @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; ``` <br/> ## IDENTITY strategy (GenerationType.IDENTITY) + ### Delegates primary key generation to the database + ### Used mainly with MySQL, PostgreSQL, SQL Server, DB2 + ### JPA normally executes the INSERT SQL at transaction commit + ### With AUTO_INCREMENT, the ID is only known after the INSERT SQL has run against the database!!! <- important + ### With the IDENTITY strategy, the INSERT SQL runs immediately at em.persist() and the identifier is fetched from the DB. <br/> ## TABLE strategy (rarely used in practice!) + ### Creates a dedicated key-generation table to imitate a database sequence + ### Advantage: works on every database + ### Disadvantage: performance <br/> ## Recommended identifier strategy + ### Primary key constraints: not null, unique, must never change + ### A natural key satisfying these conditions forever is hard to find; use a surrogate (substitute) key + ### A resident registration number is not a suitable primary key. + ### Recommendation: Long type + surrogate key + a key generation strategy
20.040541
69
0.625759
kor_Hang
1.00001
e9066649a75af8e7471ddfa674e17f594a9458a2
266
md
Markdown
cran-comments.md
pkq/haven
dbd65a3c3aa087ecacf1a378360ad9c967954cf1
[ "MIT" ]
null
null
null
cran-comments.md
pkq/haven
dbd65a3c3aa087ecacf1a378360ad9c967954cf1
[ "MIT" ]
null
null
null
cran-comments.md
pkq/haven
dbd65a3c3aa087ecacf1a378360ad9c967954cf1
[ "MIT" ]
null
null
null
## R CMD check results 0 ERRORs | 0 WARNINGs | 1 NOTEs * checking for GNU extensions in Makefiles ... NOTE GNU make is a SystemRequirements. ## revdepcheck results I did not run the revdep checks because this is a tiny fix that shouldn't affect existing code.
24.181818
95
0.74812
eng_Latn
0.968409
e9068423da593556d3590a6a96f69ce69aacada8
129
md
Markdown
R/setup.md
denglert/manuals
7cb2609e36b3ae8ecd5b7785a6c0ca370fbd5def
[ "MIT" ]
4
2018-03-31T09:13:09.000Z
2020-12-22T13:40:48.000Z
R/setup.md
denglert/manuals
7cb2609e36b3ae8ecd5b7785a6c0ca370fbd5def
[ "MIT" ]
null
null
null
R/setup.md
denglert/manuals
7cb2609e36b3ae8ecd5b7785a6c0ca370fbd5def
[ "MIT" ]
2
2019-11-27T15:16:27.000Z
2020-10-05T01:19:47.000Z
# Install R ## conda - https://anaconda.org/chdoig/jupyter-and-conda-for-r/notebook ~~~~ conda install -c r r-essentials ~~~~
12.9
62
0.674419
eng_Latn
0.115992
e906ce4992b39e55931ad08e5206ecbc25dc6594
7,016
md
Markdown
articles/virtual-desktop/virtual-desktop-fall-2019/diagnostics-log-analytics-2019.md
Microsoft/azure-docs.sv-se
a43cb26da920952026f5e9c8720f3356a84de75b
[ "CC-BY-4.0", "MIT" ]
7
2017-08-28T08:02:11.000Z
2021-05-05T07:47:55.000Z
articles/virtual-desktop/virtual-desktop-fall-2019/diagnostics-log-analytics-2019.md
MicrosoftDocs/azure-docs.sv-se
a43cb26da920952026f5e9c8720f3356a84de75b
[ "CC-BY-4.0", "MIT" ]
476
2017-10-15T08:20:18.000Z
2021-04-16T05:20:11.000Z
articles/virtual-desktop/virtual-desktop-fall-2019/diagnostics-log-analytics-2019.md
MicrosoftDocs/azure-docs.sv-se
a43cb26da920952026f5e9c8720f3356a84de75b
[ "CC-BY-4.0", "MIT" ]
39
2017-08-03T09:46:48.000Z
2021-11-05T11:41:27.000Z
--- title: Windows Virtual Desktop (klassisk) Diagnostic Log Analytics – Azure description: Hur du använder Log Analytics med hjälp av funktionen Windows Virtual Desktop (klassisk). author: Heidilohr ms.topic: how-to ms.date: 03/30/2020 ms.author: helohr manager: femila ms.openlocfilehash: 9cfa50e13756692295c84b02d02dd71228b16eb9 ms.sourcegitcommit: 56b0c7923d67f96da21653b4bb37d943c36a81d6 ms.translationtype: MT ms.contentlocale: sv-SE ms.lasthandoff: 04/06/2021 ms.locfileid: "106445009" --- # <a name="use-log-analytics-for-the-diagnostics-feature-in-windows-virtual-desktop-classic"></a>Använd Log Analytics för funktionen diagnostik i Windows Virtual Desktop (klassisk) >[!IMPORTANT] >Det här innehållet gäller för virtuella Windows-datorer (klassisk), vilket inte stöder Azure Resource Manager virtuella Skriv bords objekt i Windows. Om du försöker hantera Azure Resource Manager virtuella Windows Desktop-objekt, se [den här artikeln](../diagnostics-log-analytics.md). Windows Virtual Desktop erbjuder en diagnostisk funktion som gör det möjligt för administratören att identifiera problem via ett enda gränssnitt. Den här funktionen loggar diagnostikinformation när någon har tilldelat en virtuell Windows-dator roll som använder tjänsten. Varje logg innehåller information om vilken Windows-roll för virtuella skriv bord som ingick i aktiviteten, eventuella fel meddelanden som visas under sessionen, klient information och användar information. Funktionen diagnostik skapar aktivitets loggar för både användar-och administrations åtgärder. Varje aktivitets logg faller under tre huvud kategorier: - Flödes prenumerations aktiviteter: när en användare försöker ansluta till sin feed genom att Microsoft Fjärrskrivbord program. - Anslutnings aktiviteter: när en användare försöker ansluta till en stationär eller RemoteApp via Microsoft Fjärrskrivbord-program. 
- Hanterings aktiviteter: när en administratör utför hanterings åtgärder i systemet, till exempel skapa värdar, tilldela användare till app-grupper och skapa roll tilldelningar. Anslutningar som inte når Windows Virtual Desktop visas inte i diagnostiska resultat, eftersom själva roll tjänsten för diagnostik är en del av det virtuella Windows-skrivbordet. Problem med Windows Virtual Desktop-anslutning kan inträffa när användaren har problem med nätverks anslutningen. ## <a name="why-you-should-use-log-analytics"></a>Varför du bör använda Log Analytics Vi rekommenderar att du använder Log Analytics för att analysera diagnostikdata i Azure-klienten som går bortom fel sökning av enskilda användare. Eftersom du kan hämta prestanda räknare för virtuella datorer i Log Analytics du ett verktyg för att samla in information för distributionen. ## <a name="before-you-get-started"></a>Innan du börjar Innan du kan använda Log Analytics med funktionen diagnostik måste du [skapa en arbets yta](../../azure-monitor/vm/quick-collect-windows-computer.md#create-a-workspace). När du har skapat arbets ytan följer du anvisningarna i [Anslut Windows-datorer till Azure Monitor](../../azure-monitor/agents/log-analytics-agent.md#workspace-id-and-key) för att få följande information: - Arbetsyte-ID - Den primära nyckeln för din arbets yta Du behöver den här informationen senare i installations processen. ## <a name="push-diagnostics-data-to-your-workspace"></a>Skicka diagnostikdata till din arbets yta Du kan skicka diagnostikdata från din Windows-klient för virtuella datorer till Log Analytics för din arbets yta. Du kan ställa in den här funktionen direkt när du först skapar din klient genom att länka arbets ytan till din klient organisation, eller så kan du ställa in den senare med en befintlig klient. 
Om du vill länka din klient till din Log Analytics arbets yta när du konfigurerar din nya klient kör du följande cmdlet för att logga in på Windows Virtual Desktop med ditt TenantCreator-användar konto: ```powershell Add-RdsAccount -DeploymentUrl https://rdbroker.wvd.microsoft.com ``` Om du ska länka en befintlig klient i stället för en ny klient kör du denna cmdlet i stället: ```powershell Set-RdsTenant -Name <TenantName> -AzureSubscriptionId <SubscriptionID> -LogAnalyticsWorkspaceId <String> -LogAnalyticsPrimaryKey <String> ``` Du måste köra dessa cmdlets för varje klient som du vill länka till Log Analytics. >[!NOTE] >Om du inte vill länka Log Analytics arbets ytan när du skapar en klient kör du `New-RdsTenant` cmdleten i stället. ## <a name="cadence-for-sending-diagnostic-events"></a>Takt för att skicka diagnostiska händelser Diagnostiska händelser skickas till Log Analytics när de är klara. ## <a name="example-queries"></a>Exempelfrågor Följande exempel frågor visar hur Diagnostics-funktionen genererar en rapport för de mest frekventa aktiviteterna i systemet: I det första exemplet visas anslutnings aktiviteter som initieras av användare med stöd för fjärr skrivbords klienter: ```powershell WVDActivityV1_CL | where Type_s == "Connection" | join kind=leftouter (     WVDErrorV1_CL     | summarize Errors = makelist(pack('Time', Time_t, 'Code', ErrorCode_s , 'CodeSymbolic', ErrorCodeSymbolic_s, 'Message', ErrorMessage_s, 'ReportedBy', ReportedBy_s , 'Internal', ErrorInternal_s )) by ActivityId_g     ) on $left.Id_g  == $right.ActivityId_g  | join  kind=leftouter (     WVDCheckpointV1_CL     | summarize Checkpoints = makelist(pack('Time', Time_t, 'ReportedBy', ReportedBy_s, 'Name', Name_s, 'Parameters', Parameters_s) ) by ActivityId_g     ) on $left.Id_g  == $right.ActivityId_g |project-away ActivityId_g, ActivityId_g1 ``` I nästa exempel fråga visas hanterings aktiviteter av administratörer på klienterna: ```powershell WVDActivityV1_CL | where Type_s == 
"Management" | join kind=leftouter (     WVDErrorV1_CL     | summarize Errors = makelist(pack('Time', Time_t, 'Code', ErrorCode_s , 'CodeSymbolic', ErrorCodeSymbolic_s, 'Message', ErrorMessage_s, 'ReportedBy', ReportedBy_s , 'Internal', ErrorInternal_s )) by ActivityId_g     ) on $left.Id_g  == $right.ActivityId_g  | join  kind=leftouter (     WVDCheckpointV1_CL     | summarize Checkpoints = makelist(pack('Time', Time_t, 'ReportedBy', ReportedBy_s, 'Name', Name_s, 'Parameters', Parameters_s) ) by ActivityId_g     ) on $left.Id_g  == $right.ActivityId_g |project-away ActivityId_g, ActivityId_g1 ``` ## <a name="stop-sending-data-to-log-analytics"></a>Sluta skicka data till Log Analytics Om du vill stoppa sändningen av data från en befintlig klient organisation till Log Analytics kör du följande cmdlet och anger tomma strängar: ```powershell Set-RdsTenant -Name <TenantName> -AzureSubscriptionId <SubscriptionID> -LogAnalyticsWorkspaceId <String> -LogAnalyticsPrimaryKey <String> ``` Du måste köra denna cmdlet för varje klient som du vill sluta skicka data från. ## <a name="next-steps"></a>Nästa steg Se [identifiera och diagnostisera problem](diagnostics-role-service-2019.md#common-error-scenarios)för att granska vanliga fel scenarier som kan identifieras av diagnostikprogrammet.
50.84058
630
0.7939
swe_Latn
0.993816
e907107f19bbbc46e118330dae36cd059b9530ba
40
md
Markdown
test/patterns/import-hardcode/expected.md
lwchkg/gitbook-plugin-include-codeblock
759082521265dca485b38fb60558e695211c787c
[ "BSD-2-Clause" ]
39
2015-08-29T16:25:31.000Z
2021-11-08T01:06:38.000Z
test/patterns/import-hardcode/expected.md
lwchkg/gitbook-plugin-include-codeblock
759082521265dca485b38fb60558e695211c787c
[ "BSD-2-Clause" ]
62
2015-09-04T00:34:30.000Z
2022-01-09T10:20:03.000Z
test/patterns/import-hardcode/expected.md
lwchkg/gitbook-plugin-include-codeblock
759082521265dca485b38fb60558e695211c787c
[ "BSD-2-Clause" ]
32
2015-09-16T01:55:50.000Z
2022-01-26T09:38:38.000Z
``` typescript
console.log("test");
```
10
20
0.6
eng_Latn
0.282102
e90838c06c9822b779662727996c3e679d8ede63
12,919
md
Markdown
node_modules/grunt-sassdoc/node_modules/sassdoc/CHANGELOG.md
kenjiyamamoto/Sandbox
057a9a0ba5c7cdc973691d00743f3f451e0a112a
[ "MIT" ]
null
null
null
node_modules/grunt-sassdoc/node_modules/sassdoc/CHANGELOG.md
kenjiyamamoto/Sandbox
057a9a0ba5c7cdc973691d00743f3f451e0a112a
[ "MIT" ]
null
null
null
node_modules/grunt-sassdoc/node_modules/sassdoc/CHANGELOG.md
kenjiyamamoto/Sandbox
057a9a0ba5c7cdc973691d00743f3f451e0a112a
[ "MIT" ]
null
null
null
# Changelog

## 1.10.12

* Backport of `a994ed5` fix multiple require autofill ([#314](https://github.com/SassDoc/sassdoc/issues/314))

## 1.10.11

* Ensure `@todo` compat with docs and contrib ([#293](https://github.com/SassDoc/sassdoc/issues/293))

## 1.10.6

* Ensure proper type checking for `@see` annotation ([#291](https://github.com/SassDoc/sassdoc/issues/291))

## 1.10.3

* Prevented `@requires` from autofilling a dependency twice

## 1.10.2

* Fixed an issue with the folder wiping safeguard always aborting if folder is not empty without even prompting

## 1.10.1

* Updated a dependency in order to use new version of sassdoc-theme-default

## 1.10.0

* Made annotations `@throws`, `@requires` and `@content` fill themselves so you don't have to, unless told otherwise through the [`autofill` option](http://sassdoc.com/configuration/#autofill) ([#232](https://github.com/SassDoc/sassdoc/issues/232), [#238](https://github.com/SassDoc/sassdoc/issues/238))
* Added the ability to define `--sass-convert`, `--no-update-identifier` and `--no-prompt` options within the configuration file instead of CLI only ([#247](https://github.com/SassDoc/sassdoc/issues/247))
* Merged [sassdoc-filter](https://github.com/sassdoc/sassdoc-filter) and [sassdoc-indexer](https://github.com/sassdoc/sassdoc-indexer) into [sassdoc-extras](https://github.com/sassdoc/sassdoc-extras); theme authors are asked to use the new repository

## 1.9.0

* Added ability to use inline comments with `///` ([#143](https://github.com/SassDoc/sassdoc/issues/143))
* Added some safeguards when wiping the destination folder to avoid accidents ([#220](https://github.com/SassDoc/sassdoc/issues/220))
* Added `@content` annotation, which is auto-filled when `@content` Sass directive is being found in a mixin ([#226](https://github.com/SassDoc/sassdoc/issues/226))
* Added `@require` alias for `@requires` ([#221](https://github.com/SassDoc/sassdoc/issues/221))
* Added `@property` alias for `@prop` ([#221](https://github.com/SassDoc/sassdoc/issues/221))
* Made the `$` sign optional when writing the parameter name in `@param` ([#222](https://github.com/SassDoc/sassdoc/issues/222))
* Annotations that should not be associated to certain types (for instance `@param` for a variable) now emit a warning and are properly discarded by the parser ([CDocParser#4](https://github.com/FWeinb/CDocParser/issues/4))

## 1.8.0

* Added ability to add your own annotations to your theme ([#203](https://github.com/SassDoc/sassdoc/issues/203))
* Fixed an issue with items being randomly ordered ([#208](https://github.com/SassDoc/sassdoc/issues/208))
* Greatly improved sidebar from the theme

## 1.7.0

* Added a `--sass-convert` option to perform Sass to SCSS conversion before running SassDoc ([#183](https://github.com/SassDoc/sassdoc/issues/183#issuecomment-56262743))
* Added the ability to define annotations at a file-level ([#190](https://github.com/SassDoc/sassdoc/issues/190))
* Improved SassDoc's behaviour when default theme is missing ([#207](https://github.com/SassDoc/sassdoc/pull/207))
* Slightly improved our logging message regarding the theme being used ([#206](https://github.com/SassDoc/sassdoc/issues/206))
* Moved some logic out of the theme's templates right into the index.js from the theme ([sassdoc-theme-light#40](https://github.com/SassDoc/sassdoc-theme-light/issues/40))

## 1.6.1

* Fixed a bug where some descriptions didn't allow line breaks ([#209](https://github.com/SassDoc/sassdoc/issues/209))

## 1.6.0

* Added a [Yeoman Generator](https://github.com/SassDoc/generator-sassdoc-theme) to make it easier to build themes ([#185](https://github.com/SassDoc/sassdoc/issues/185))
* Added YAML support for configuration files; default configuration file name is still `view`, either as `.json`, `.yaml` or `.yml` ([#184](https://github.com/SassDoc/sassdoc/issues/184))
* Added a message to warn against relying on the default configuration file name (`view.{json,yaml,yml}`) since it will break in version 2.0.0 in favor of `.sassdocrc` (which will support both formats at once while being more semantic, less confusing and less likely to conflict with other projects) ([#194](https://github.com/SassDoc/sassdoc/issues/194))
* Fixed an issue when variable items' value contains a semi-colon ([#191](https://github.com/SassDoc/sassdoc/issues/191))
* Improved the light theme (better sidebar toggle with states stored in localStorage, better code toggle, better JavaScript structure, and better performance)
* Added a `byType` key to sassdoc-indexer to help with building themes

## 1.5.2

* Added implicit type for required placeholders ([#197](https://github.com/SassDoc/sassdoc/issues/197))

## 1.5.1

* Used `stat` instead of `lstat` to support symlinks ([22a9b79](https://github.com/SassDoc/sassdoc/commit/22a9b7986e1eef2bf962bb9b1a48467d257ee398))

## 1.5.0

* Added `@prop` to allow deeper documentation for maps ([#25](https://github.com/SassDoc/sassdoc/issues/25))
* Fixed circular JSON dependencies when using raw data ([#181](https://github.com/SassDoc/sassdoc/issues/181))
* Added an option to provide a custom shortcut icon to the view ([#178](https://github.com/SassDoc/sassdoc/issues/178))

## 1.4.1

* Fixed a broken test

## 1.4.0

* Updated favicon from theme to prevent 404
* Changed a dependency
* Added placeholder support ([#154](https://github.com/SassDoc/sassdoc/issues/154))
* Prevented a crash when using invalid annotations; throwing a warning instead
* Added `@source` as an alias for `@link` to carry more semantic ([#170](https://github.com/SassDoc/sassdoc/issues/170))

## 1.3.2

* Fixed a broken test

## 1.3.1

* Merged a branch that needed to be merged

## 1.3.0

* Added `@output` as an equivalent for `@return` for mixins ([#133](https://github.com/SassDoc/sassdoc/issues/133))
* Added the ability to add a title to `@example` ([#145](https://github.com/SassDoc/sassdoc/issues/145))
* Added the ability to preview the code of an item when clicking on it ([#124](https://github.com/SassDoc/sassdoc/issues/124))

## 1.2.0

* Improved the way `@since` is parsed ([#128](https://github.com/SassDoc/sassdoc/issues/128))
* Moved theming to [Themeleon](https://github.com/themeleon/themeleon) ([#69](https://github.com/SassDoc/sassdoc/issues/69))
* Added a *view source* feature ([#117](https://github.com/SassDoc/sassdoc/issues/117))
* Added the `@group` annotation, as well as a way to alias groups in order to have friendly names ([#29](https://github.com/SassDoc/sassdoc/issues/29))
* Added moar tests ([#138](https://github.com/SassDoc/sassdoc/issues/138))

## 1.1.6

* Backport, fixed `found-at` with absolute path ([#156](https://github.com/SassDoc/sassdoc/pull/156))

## 1.1.5

* Fixed `@example` not being printed for variables ([#146](https://github.com/SassDoc/sassdoc/pull/146))

## 1.1.4

* Fixed some visual issues with `@requires` ([#132](https://github.com/SassDoc/sassdoc/pull/132))

## 1.1.3

* Removed a duplicated `deprecated` flag in the view

## 1.1.2

* Fixed a bug with relative path to `package.json` file

## 1.1.1

* Fixed a small issue with display path, sometimes adding an extra slash ([#68](https://github.com/SassDoc/sassdoc/issues/68))

## 1.1.0

* New design
* Improved the `@requires` annotation to support external vendors, and custom URL ([#61](https://github.com/SassDoc/sassdoc/issues/61))
* Added a search engine to the generated documentation ([#46](https://github.com/SassDoc/sassdoc/issues/46))
* Fixed an issue with `@link` not working correctly ([#108](https://github.com/SassDoc/sassdoc/issues/108))
* Added `examples` to `.gitignore`

## 1.0.2

* Fixed an issue with config path resolving to false ([#68](https://github.com/SassDoc/sassdoc/issues/68))

## 1.0.1

* Worked around a npm bug

## 1.0.0

* Fixed an issue with a missing dependency
* Prevented a weird bug with `@require`
* Improved styles from the theme
* Improved the way we deal with configuration resolving
* Added an option to prevent the notifier check from happening
* Merged `sassdoc-cli` back into the main repository
* Fixed an issue with item count in console ([#102](https://github.com/SassDoc/sassdoc/issues/102))
* Made parameters table headers WAI 2.0 compliant ([#101](https://github.com/SassDoc/sassdoc/pull/101))
* Fixed a logic issue in the view
* Fixed a syntax highlighting issue with functions and mixins ([#99](https://github.com/SassDoc/sassdoc/pull/99))
* Improved the way we deal with file path ([#98](https://github.com/SassDoc/sassdoc/pull/98))
* Made it possible to use `@`-starting lines within `@example` as long as they are indented ([#96](https://github.com/SassDoc/sassdoc/pull/96))
* Fixed a tiny parser issue ([#95](https://github.com/SassDoc/sassdoc/pull/95))
* Exposed the version number in `sassdoc.version` ([#91](https://github.com/SassDoc/sassdoc/pull/93))
* Implemented `update-notifier` ([#92](https://github.com/SassDoc/sassdoc/issues/92))
* Made it possible for SassDoc to create nested folders ([#89](https://github.com/SassDoc/sassdoc/issues/89))
* Renamed all repositories to follow strict guidelines ([#70](https://github.com/SassDoc/sassdoc/issues/70))
* Fixed an issue with empty documented items ([#84](https://github.com/SassDoc/sassdoc/issues/84))
* Normalized description in annotations ([#81](https://github.com/SassDoc/sassdoc/issues/81))
* Made requiring a variable less error-prone ([#74](https://github.com/SassDoc/sassdoc/issues/74))
* Fixed minor issues when parsing `@param` ([#59](https://github.com/SassDoc/sassdoc/issues/59), [#60](https://github.com/SassDoc/sassdoc/issues/60), [#62](https://github.com/SassDoc/sassdoc/issues/62))
* Fixed an issue with `@import` being parsed ([#73](https://github.com/SassDoc/sassdoc/issues/73))
* Added language detection to `@example` ([#54](https://github.com/SassDoc/sassdoc/issues/54))
* Major style changes ([#65](https://github.com/SassDoc/sassdoc/issues/65))
* Improved view/DOM/SCSS structure
* Added Grunt ([#55](https://github.com/SassDoc/sassdoc/issues/55))
* Removed Makefile from core
* Added Travis ([#63](https://github.com/SassDoc/sassdoc/issues/63))
* Minor code improvements in bin
* Fixed an issue with bin
* Fixed some little bugs in view
* Changed `@datatype` to `@type`
* Fixed a parsing bug with expanded licenses in package.json
* Added a footer ([#57](https://github.com/SassDoc/sassdoc/issues/57))
* Changed the structure of `view.json`
* Added license (MIT) ([#58](https://github.com/SassDoc/sassdoc/issues/58))
* Massively improved templates quality
* Massively improved SCSS quality
* Authorized markdown on `@author`
* Added a favicon
* Fixed tiny typo in console warning
* Added anchor to each item ([#56](https://github.com/SassDoc/sassdoc/issues/56))
* Added back the `[Private]` annotation before private items' name
* Added a `version` parameter to `view.json` that gets displayed right next to the title
* Prevented empty sections in case items exist but are not displayed
* Prevented broken links with requires and usedby in case of private items
* Fixed an issue where links were not displayed
* Added `--version` option ([#51](https://github.com/SassDoc/sassdoc/issues/51))
* Improved Sass and Swig structure
* Improved the way we display `@deprecated` ([#50](https://github.com/SassDoc/sassdoc/issues/50))
* Added location where item was found
* Moved view's stylesheets to Sass
* Changed the folder structure
* Moved `view.json` to `view/` folder
* Prevented some broken links
* Made the documentation responsive
* Added PrismJS
* Fixed an issue with `@requires` type
* Fixed some formatting issues with `@example`
* Fixed an issue that prevented `@requires` from working if there was any `@alias`
* Greatly improved the view
* Fixed `@deprecated` not supporting a message
* Added a trim to `@datatype`
* Moved to a real parser ([CDocParser](https://github.com/FWeinb/CDocParser) and [ScssCommentParser](https://github.com/FWeinb/ScssCommentParser))
* Dropped support for inline comments (`//`)
* Added the ability to document examples with `@example`
* Variables are now documented exactly like functions and mixins, yet they have a `@datatype` directive to specify their type
* Changed the structure of `view.json`

## 0.4.1

* Improved the way we can impact the view with `view.json`

## 0.4.0

* Added a way to impact the view with `view.json`

## 0.3.9

* Greatly improved the way we deal with variables

## 0.3.8

* Fixed documented items count in generated documentation
* Improved the way things work when nothing gets documented

## 0.3.7

* Allowed markdown syntax at more places

## 0.3.6

* Authorized `spritemap` as a valid type

## 0.3.5

* Changed the way we deal with assets dumping

## 0.3.4

* Fixed an issue when dumping assets

## 0.3.3

* Who knows?

## 0.3.2

* Updated view

## 0.3.1

* Fixed a potential path issue

## 0.3.0

* Added `@since`

## 0.2.1

* Updated the way we deal with `@param` and `@return`

## 0.1.0

* Initial commit
45.171329
354
0.735428
eng_Latn
0.852381
e90956615337e670c087de9c3ed6c2eb4f8944f6
1,294
md
Markdown
_leetcode/216-combination-sum-iii.md
ALUE/alue.github.com
db19e2897d46cc8f35d9fa7fe4246e98729cd1a1
[ "MIT" ]
1
2016-06-10T06:58:18.000Z
2016-06-10T06:58:18.000Z
_leetcode/216-combination-sum-iii.md
ALUE/alue.github.com
db19e2897d46cc8f35d9fa7fe4246e98729cd1a1
[ "MIT" ]
null
null
null
_leetcode/216-combination-sum-iii.md
ALUE/alue.github.com
db19e2897d46cc8f35d9fa7fe4246e98729cd1a1
[ "MIT" ]
null
null
null
---
layout: leetcode
date: 2016-07-16
title: Combination Sum III
tags: [Array, Backtracking]
---

* Contents
{:toc #toc_of_keans_blog}

## Question

Find all possible combinations of k numbers that add up to a number n, given that only numbers from 1 to 9 can be used and each combination should be a unique set of numbers.

**Example 1:**

Input: k = 3, n = 7

Output:
<pre>
[[1,2,4]]
</pre>

**Example 2:**

Input: k = 3, n = 9

Output:
<pre>
[[1,2,6], [1,3,5], [2,3,4]]
</pre>

***

## Solution

**Result:** Accepted

**Time:** 0 ms

This is classic backtracking: at every step we only try candidates larger than the last number chosen, so each combination is generated exactly once in increasing order. The pruning variable `limit = (2*st+k-1)*k/2` is the smallest possible sum of `k` consecutive numbers starting at `st`; once it exceeds the remaining target `rest`, no candidate from here on can succeed, so the loop stops. When only one number is left (`k == 1`), it must equal `rest`, so the search jumps straight to it.

```cpp
class Solution {
public:
    void dfs(int rest, int st, int k, vector<vector<int>> &vec, vector<int> &ivec) {
        if (!rest && !k) return vec.push_back(ivec);
        // Smallest sum achievable with k consecutive numbers starting at st.
        int limit = (2 * st + k - 1) * k / 2;
        if (k == 1) st = rest;  // the last number is forced
        for (int i = st; limit <= rest && i < 10; i++, limit += k) {
            ivec.push_back(i);
            dfs(rest - i, i + 1, k - 1, vec, ivec);
            ivec.pop_back();
        }
    }

    vector<vector<int>> combinationSum3(int k, int n) {
        vector<vector<int>> vec;
        vector<int> ivec;
        dfs(n, 1, k, vec, ivec);
        return vec;
    }
};
```

**Complexity Analysis**

- Time Complexity: $$O(\binom{9}{k} \cdot k)$$ in the worst case (each emitted combination costs $$O(k)$$ to copy)
- Space Complexity: $$O(k)$$ for the recursion stack and the working vector, excluding the output
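The same backtracking-with-pruning idea can be sketched in Python (a hypothetical port for illustration, not the submitted solution; the pruning uses `sum(range(i, i + k))`, the smallest sum of `k` consecutive numbers starting at `i`, which matches the `limit` formula in the C++ code):

```python
def combination_sum3(k, n):
    """Return all combinations of k distinct numbers from 1..9 summing to n."""
    results = []

    def dfs(rest, start, k, path):
        if k == 0:
            if rest == 0:
                results.append(path[:])  # found a valid combination
            return
        for i in range(start, 10):
            # Prune: smallest achievable sum with k numbers starting at i.
            if sum(range(i, i + k)) > rest:
                break
            path.append(i)
            dfs(rest - i, i + 1, k - 1, path)
            path.pop()

    dfs(n, 1, k, [])
    return results
```

Because candidates are always chosen in increasing order, each set appears exactly once, e.g. `combination_sum3(3, 7)` yields `[[1, 2, 4]]`.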
17.726027
175
0.554096
eng_Latn
0.546887
e909b6f41ec44b3c087b571963d222ea1e5b3898
5,869
md
Markdown
azps-3.8.0/Az.RecoveryServices/Get-AzRecoveryServicesBackupProtectableItem.md
AdrianaDJ/azure-docs-powershell.tr-TR
78407d14f64e877506d6c0c14cac18608332c7a8
[ "CC-BY-4.0", "MIT" ]
1
2020-12-05T17:58:35.000Z
2020-12-05T17:58:35.000Z
azps-3.8.0/Az.RecoveryServices/Get-AzRecoveryServicesBackupProtectableItem.md
AdrianaDJ/azure-docs-powershell.tr-TR
78407d14f64e877506d6c0c14cac18608332c7a8
[ "CC-BY-4.0", "MIT" ]
null
null
null
azps-3.8.0/Az.RecoveryServices/Get-AzRecoveryServicesBackupProtectableItem.md
AdrianaDJ/azure-docs-powershell.tr-TR
78407d14f64e877506d6c0c14cac18608332c7a8
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
external help file: Microsoft.Azure.PowerShell.Cmdlets.RecoveryServices.Backup.dll-Help.xml
Module Name: Az.RecoveryServices
online version: https://docs.microsoft.com/en-us/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupprotectableitem
schema: 2.0.0
content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/RecoveryServices/RecoveryServices/help/Get-AzRecoveryServicesBackupProtectableItem.md
original_content_git_url: https://github.com/Azure/azure-powershell/blob/master/src/RecoveryServices/RecoveryServices/help/Get-AzRecoveryServicesBackupProtectableItem.md
ms.openlocfilehash: 293cd11c146369644ca67ba76158c251b1480d22
ms.sourcegitcommit: 6a91b4c545350d316d3cf8c62f384478e3f3ba24
ms.translationtype: MT
ms.contentlocale: tr-TR
ms.lasthandoff: 04/21/2020
ms.locfileid: "94104736"
---

# Get-AzRecoveryServicesBackupProtectableItem

## SYNOPSIS
This command retrieves all protectable items within a certain container or across all registered containers. It consists of all the elements of the application hierarchy. For example, it returns databases and their upper-tier entities such as instances and availability groups.

## SYNTAX

### NoFilterParamSet (Default)
```
Get-AzRecoveryServicesBackupProtectableItem [[-Container] <ContainerBase>] [-WorkloadType] <WorkloadType>
 [-VaultId <String>] [-DefaultProfile <IAzureContextContainer>] [<CommonParameters>]
```

### FilterParamSet
```
Get-AzRecoveryServicesBackupProtectableItem [[-Container] <ContainerBase>] [-WorkloadType] <WorkloadType>
 [[-ItemType] <ProtectableItemType>] [-Name <String>] [-ServerName <String>] [-VaultId <String>]
 [-DefaultProfile <IAzureContextContainer>] [<CommonParameters>]
```

### IdParamSet
```
Get-AzRecoveryServicesBackupProtectableItem [-ParentID] <String> [[-ItemType] <ProtectableItemType>]
 [-Name <String>] [-ServerName <String>] [-VaultId <String>] [-DefaultProfile <IAzureContextContainer>]
 [<CommonParameters>]
```

## DESCRIPTION
The **Get-AzRecoveryServicesBackupProtectableItem** cmdlet gets the protectable items within a container or a vault in Azure Backup, along with the protection status of the items. A container registered to an Azure Recovery Services vault can have one or more protectable items.

## EXAMPLES

### Example 1
```
PS C:\>$Container = Get-AzRecoveryServicesBackupContainer -ContainerType MSSQL -Status Registered
PS C:\> $Item = Get-AzRecoveryServicesProtectableItem -Container $Container -ItemType "SQLDataBase" -VaultId $vault.ID
```

The first command gets the container of type MSSQL, and then stores it in the $Container variable. The second command gets the backup item in $Container, and then stores it in the $Item variable.

## PARAMETERS

### -Container
Container where the item resides

```yaml
Type: Microsoft.Azure.Commands.RecoveryServices.Backup.Cmdlets.Models.ContainerBase
Parameter Sets: NoFilterParamSet, FilterParamSet
Aliases:

Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### -DefaultProfile
The credentials, account, tenant, and subscription used for communication with Azure.

```yaml
Type: Microsoft.Azure.Commands.Common.Authentication.Abstractions.Core.IAzureContextContainer
Parameter Sets: (All)
Aliases: AzContext, AzureRmContext, AzureCredential

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -ItemType
Specifies the type of protectable item. Applicable values: (SQLDataBase, SQLInstance, SQLAvailabilityGroup).

```yaml
Type: Microsoft.Azure.Commands.RecoveryServices.Backup.Cmdlets.Models.ProtectableItemType
Parameter Sets: FilterParamSet, IdParamSet
Aliases:
Accepted values: SQLDataBase, SQLInstance, SQLAvailabilityGroup

Required: False
Position: 2
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Name
Specifies the name of the database, instance or availability group.

```yaml
Type: System.String
Parameter Sets: FilterParamSet, IdParamSet
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -ParentID
Specifies the ARM ID of an instance or availability group (AG).

```yaml
Type: System.String
Parameter Sets: IdParamSet
Aliases:

Required: True
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### -ServerName
Specifies the name of the server to which the item belongs.

```yaml
Type: System.String
Parameter Sets: FilterParamSet, IdParamSet
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -VaultId
ARM ID of the Recovery Services vault.

```yaml
Type: System.String
Parameter Sets: (All)
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: True (ByValue)
Accept wildcard characters: False
```

### -WorkloadType
Workload type of the resource (for example: AzureVM, WindowsServer, AzureFiles, MSSQL).

```yaml
Type: Microsoft.Azure.Commands.RecoveryServices.Backup.Cmdlets.Models.WorkloadType
Parameter Sets: NoFilterParamSet, FilterParamSet
Aliases:
Accepted values: AzureVM, AzureSQLDatabase, AzureFiles, MSSQL

Required: True
Position: 1
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable. For more information, see [about_CommonParameters](http://go.microsoft.com/fwlink/?LinkID=113216).

## INPUTS

### Microsoft.Azure.Commands.RecoveryServices.Backup.Cmdlets.Models.ContainerBase

System.String

## OUTPUTS

### Microsoft.Azure.Commands.RecoveryServices.Backup.Cmdlets.Models.ProtectableItemBase

## NOTES

## RELATED LINKS
29.943878
300
0.805418
tur_Latn
0.435762
e90a5ec981688f779bf3c72253bd9274192519b1
932
md
Markdown
WebApp/README.md
BitFinance-solutions/ICO-Platform
999b59decde12a6a5b102ad308b91c77fb744727
[ "MIT" ]
null
null
null
WebApp/README.md
BitFinance-solutions/ICO-Platform
999b59decde12a6a5b102ad308b91c77fb744727
[ "MIT" ]
null
null
null
WebApp/README.md
BitFinance-solutions/ICO-Platform
999b59decde12a6a5b102ad308b91c77fb744727
[ "MIT" ]
null
null
null
# The Initial Coin Offering (ICO) Platform

The Coindash Initial Coin Offering (ICO) Platform is a web-based application that allows you to earn <your_company> tokens (<your_tokens>) at the point of initial allocation. <your_tokens> are earned by making contributions in either BTC, COINDASH or STRAT. Changelly is also integrated into the ICO Platform, and you can use this service to convert other cryptocurrencies into COINDASH or STRAT before contributing.

<your_company> uses Onfido for identity verification. This guide provides step-by-step instructions on how to complete this process, so you can begin making contributions. For your own security, you can also set up 2-Factor Authentication, and this guide shows you how to do this.

Once the <your_ICO> is complete, you can withdraw the <your_tokens> that you earned through your contributions. This guide includes a chapter on how to do this.
155.333333
417
0.804721
eng_Latn
0.999715
e90badcb4e1f83f1d74dc658b1d2a1f7ce1917b7
31,224
md
Markdown
repos/bash/remote/4.1.md
victorvw/repo-info
9e552baded50c4ec520bca3bb8d5645b17c48c56
[ "Apache-2.0" ]
1
2020-05-11T17:34:20.000Z
2020-05-11T17:34:20.000Z
repos/bash/remote/4.1.md
victorvw/repo-info
9e552baded50c4ec520bca3bb8d5645b17c48c56
[ "Apache-2.0" ]
null
null
null
repos/bash/remote/4.1.md
victorvw/repo-info
9e552baded50c4ec520bca3bb8d5645b17c48c56
[ "Apache-2.0" ]
null
null
null
## `bash:4.1` ```console $ docker pull bash@sha256:f24523091a076266b65b26867372fc4adf06e85fc11a6623c3dc8954231e2736 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: - linux; amd64 - linux; arm variant v6 - linux; arm variant v7 - linux; arm64 variant v8 - linux; 386 - linux; ppc64le - linux; s390x ### `bash:4.1` - linux; amd64 ```console $ docker pull bash@sha256:f793fa0d4c18112373881bb63a046f488e1073f07b7b2b128018c109ad92db4c ``` - Docker Version: 18.09.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **4.7 MB (4727867 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:401b35efd1677bd4051878d06c625c493b4f7eccf86891cf70507f903719d5f4` - Entrypoint: `["docker-entrypoint.sh"]` - Default Command: `["bash"]` ```dockerfile # Fri, 24 Apr 2020 01:05:03 GMT ADD file:b91adb67b670d3a6ff9463e48b7def903ed516be66fc4282d22c53e41512be49 in / # Fri, 24 Apr 2020 01:05:03 GMT CMD ["/bin/sh"] # Fri, 24 Apr 2020 12:46:33 GMT ENV _BASH_GPG_KEY=7C0135FB088AAF6C66C650B9BB5869F064EA74AB # Fri, 24 Apr 2020 12:49:48 GMT ENV _BASH_VERSION=4.1 # Fri, 24 Apr 2020 12:49:48 GMT ENV _BASH_PATCH_LEVEL=0 # Fri, 24 Apr 2020 12:49:48 GMT ENV _BASH_LATEST_PATCH=17 # Fri, 24 Apr 2020 12:50:29 GMT RUN set -eux; apk add --no-cache --virtual .build-deps bison coreutils dpkg-dev dpkg gcc gnupg libc-dev make ncurses-dev patch tar ; version="$_BASH_VERSION"; if [ "$_BASH_PATCH_LEVEL" -gt 0 ]; then version="$version.$_BASH_PATCH_LEVEL"; fi; wget -O bash.tar.gz "https://ftp.gnu.org/gnu/bash/bash-$version.tar.gz"; wget -O bash.tar.gz.sig "https://ftp.gnu.org/gnu/bash/bash-$version.tar.gz.sig"; if [ "$_BASH_LATEST_PATCH" -gt "$_BASH_PATCH_LEVEL" ]; then mkdir -p bash-patches; first="$(printf '%03d' "$(( _BASH_PATCH_LEVEL + 1 ))")"; last="$(printf '%03d' "$_BASH_LATEST_PATCH")"; for patch in $(seq -w "$first" "$last"); do 
url="https://ftp.gnu.org/gnu/bash/bash-$_BASH_VERSION-patches/bash${_BASH_VERSION//./}-$patch"; wget -O "bash-patches/$patch" "$url"; wget -O "bash-patches/$patch.sig" "$url.sig"; done; fi; export GNUPGHOME="$(mktemp -d)"; gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$_BASH_GPG_KEY"; gpg --batch --verify bash.tar.gz.sig bash.tar.gz; gpgconf --kill all; rm bash.tar.gz.sig; if [ -d bash-patches ]; then for sig in bash-patches/*.sig; do p="${sig%.sig}"; gpg --batch --verify "$sig" "$p"; rm "$sig"; done; fi; rm -rf "$GNUPGHOME"; mkdir -p /usr/src/bash; tar --extract --file=bash.tar.gz --strip-components=1 --directory=/usr/src/bash ; rm bash.tar.gz; if [ -d bash-patches ]; then for p in bash-patches/*; do patch --directory=/usr/src/bash --input="$(readlink -f "$p")" --strip=0 ; rm "$p"; done; rmdir bash-patches; fi; cd /usr/src/bash; gnuArch="$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)"; for f in config.guess config.sub; do wget -O "support/$f" "https://git.savannah.gnu.org/cgit/config.git/plain/$f?id=7d3d27baf8107b630586c962c057e22149653deb"; done; ./configure --build="$gnuArch" --enable-readline --with-curses --without-bash-malloc || { cat >&2 config.log; false; }; make -j "$(nproc)"; make install; cd /; rm -r /usr/src/bash; rm -r /usr/local/share/info /usr/local/share/locale /usr/local/share/man ; runDeps="$( scanelf --needed --nobanner --format '%n#p' --recursive /usr/local | tr ',' '\n' | sort -u | awk 'system("[ -e /usr/local/lib/" $1 " ]") == 0 { next } { print "so:" $1 }' )"; apk add --no-cache --virtual .bash-rundeps $runDeps; apk del .build-deps; [ "$(which bash)" = '/usr/local/bin/bash' ]; bash --version; [ "$(bash -c 'echo "${BASH_VERSION%%[^0-9.]*}"')" = "${_BASH_VERSION%%-*}.$_BASH_LATEST_PATCH" ]; # Fri, 24 Apr 2020 12:50:29 GMT COPY file:651b3bebeba8be9162c56b3eb561199905235f3e1c7811232b6c9f48ac333651 in /usr/local/bin/ # Fri, 24 Apr 2020 12:50:29 GMT ENTRYPOINT ["docker-entrypoint.sh"] # Fri, 24 Apr 2020 12:50:30 GMT CMD ["bash"] 
---
layout: post
title: Some operators in RxJava 3.x
bigimg: /img/image-header/yourself.jpeg
tags: [Reactive Programming]
---

<br>

## Table of contents
- [Introduction to Operators in RxJava 3.x](#introduction-to-operators-in-rxjava-3.x)
- [Conditional operators](#conditional-operators)
- [Suppressing operators](#suppressing-operators)
- [Transforming operators](#transforming-operators)
- [Reducing operators](#reducing-operators)
- [Collection operators](#collection-operators)
- [Error recovery operators](#error-recovery-operators)
- [Action operators](#action-operators)
- [Utility operators](#utility-operators)
- [Wrapping up](#wrapping-up)

<br>

## Introduction to Operators in RxJava 3.x

RxJava operators produce Observables that are observers of the Observable they are called on. If you call map() on an Observable, the returned Observable subscribes to it. It then transforms each emission and, in turn, acts as a producer for observers downstream, including other operators and the terminal Observer itself.

<br>

## Conditional operators

Conditional operators emit or transform an Observable conditionally.

1. takeWhile() and skipWhile()

    - The **takeWhile()** operator takes emissions while a condition derived from each emission is true. The moment it encounters a condition that is **false**, it generates the **onComplete** event and disposes of the used resources.

      ```java
      Observable.range(1, 100)
                .takeWhile(i -> i < 5)
                .subscribe(i -> System.out.println(i));
      ```

      Similarly, there is also the **takeUntil()** operator, which accepts another **Observable** as a parameter. It keeps taking emissions until that other **Observable** pushes an emission.

    - The **skipWhile()** operator keeps skipping emissions while they comply with the condition. The moment that condition produces **false**, the emissions start flowing through.
      ```java
      Observable.range(1, 100)
                .skipWhile(i -> i <= 95)
                .subscribe(i -> System.out.println(i));
      ```

      Similarly, there is also the **skipUntil()** operator, which accepts another Observable as a parameter. It keeps skipping emissions until that other **Observable** emits something.

2. defaultIfEmpty()

    The defaultIfEmpty() operator emits a specified default value when the source Observable turns out to be empty; without it, an empty source would complete without emitting anything for us to work with.

3. switchIfEmpty()

    Similar to the defaultIfEmpty() operator, switchIfEmpty() specifies a different Observable to emit values from if the source Observable is empty. This allows us to specify a whole different sequence of emissions in the event that the source is empty, rather than emitting just one value, as in the case of defaultIfEmpty(). If the preceding Observable is not empty, switchIfEmpty() has no effect and the second specified Observable is not used.

<br>

## Suppressing operators

These are operators that suppress emissions that do not meet a specified criterion. They work by simply not calling the onNext() function downstream for a disqualified emission, so it doesn't go down the chain to the Observer. Below are some common suppressing operators:

- filter()
- take()
- skip()
- distinct()
- distinctUntilChanged()
- elementAt()

<br>

## Transforming operators

- map()

    The map() operator transforms an emitted value of the T type into a value of the R type using the Function<T, R> lambda expression provided. The map() operator does a one-to-one conversion of each emitted value. If we need to do a one-to-many conversion, we can use flatMap() or concatMap().

- cast()

    The cast() operator casts each emitted item to another type. It receives the class type as an argument.
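As a quick sketch of map() and cast() together — assuming RxJava 3.x (`io.reactivex.rxjava3`) is on the classpath; the class name `TransformOps` is just for illustration:

```java
import io.reactivex.rxjava3.core.Observable;
import java.util.List;

public class TransformOps {
    // map() does a one-to-one conversion of each emission;
    // cast() then coerces every item to the given class.
    static List<Integer> lengths() {
        return Observable.just("Alpha", "Beta", "Gamma")
                .map(s -> s.length())    // Observable<Integer>
                .cast(Integer.class)     // explicit cast, a no-op here
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(lengths());   // [5, 4, 5]
    }
}
```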
    If we find that we are having typing issues due to inherited or polymorphic types being mixed, this is an effective brute-force way to cast everything down to a common base type, but strive to use generics properly and type wildcards appropriately first.

- startWithItem()

    The startWithItem() operator (in RxJava 2.x, it was called startWith()) allows us to insert a value of type T that will be emitted before all the other values. It is helpful for cases where we want to seed an initial value or precede our emissions with one particular value.

- startWithArray() and startWithIterable()

    If we want to start with more than one value emitted first, use the **startWithArray()** or startWithIterable() operators, which accept **varargs** or a Collection. If we want the emissions of one Observable to precede the emissions of another Observable, use Observable.concat() or concatWith().

- sorted()

    If we have a finite Observable<T> that emits items of a primitive type, the String type, or objects that implement Comparable<T>, we can use sorted() to sort the emissions. Internally, it collects all the emissions and then re-emits them in the specified order. Of course, this can have performance implications and consumes memory, as it collects all emitted values in memory before emitting them again. If you use it against an infinite Observable, you may even get an OutOfMemoryError exception.

    The overloaded version, sorted(Comparator<T> sortFunction), can be used to establish an order other than the natural sort order of emitted items of a primitive type. It can also be used to sort emitted items that are objects that do not implement Comparable<T>. Since Comparator is a single abstract method interface, you can implement it quickly with a lambda expression.
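As a minimal sketch (RxJava 3.x assumed on the classpath; `SortedOps` is a made-up class name), sorting with a lambda-based Comparator might look like this:

```java
import io.reactivex.rxjava3.core.Observable;
import java.util.Comparator;
import java.util.List;

public class SortedOps {
    // sorted(Comparator) collects all emissions, sorts them with the
    // given comparator, then re-emits them in that order.
    static List<String> byLength() {
        return Observable.just("Gamma", "Beta", "Alpha")
                .sorted(Comparator.comparing(String::length))
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(byLength());
    }
}
```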
    Specify the two parameters representing two emissions, T o1 and T o2, and then implement the Comparator<T> functional interface by providing the body for its compare(T o1, T o2) method. For instance, we can sort the emitted items not according to their implementation of the compareTo(T o) method (that is, the Comparable<T> interface), but using the comparator provided.

- scan()

    The scan() operator is a rolling aggregator. It adds every emitted item to the provided accumulator and emits each incremental accumulated value. This operator does not have to be used just for rolling sums. You can create many kinds of accumulators, even non-math ones such as String concatenations or boolean reductions.

    scan() is very similar to reduce(). The scan() operator emits the rolling accumulation for each emission, whereas reduce() yields a single result reflecting the final accumulated value after onComplete() is called. This means that reduce() has to be used with a finite Observable only, while the scan() operator can be used with an infinite Observable too.

    You can also provide an initial value as the first argument and aggregate the emitted values into a different type than what is being emitted. If we wanted to emit the rolling count of emissions, we could provide an initial value of 0 and just add 1 to it for every emitted value. Keep in mind that the initial value is emitted first, so use skip(1) after scan() if you do not want that initial emission to be included in the accumulation.

<br>

## Reducing operators

Reducing operators take a series of emitted values and aggregate them into a single value (usually emitted through a Single). All of these operators work only on a finite Observable that calls onComplete(), because we can aggregate only finite datasets.

- count()

    The count() operator counts the number of emitted items and emits the result through a Single once onComplete() is called.
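A minimal sketch of count(), together with scan() used as a rolling counter as described above (RxJava 3.x assumed; `CountOps` is an illustrative name):

```java
import io.reactivex.rxjava3.core.Observable;
import java.util.List;

public class CountOps {
    // count() emits a single total once the source completes.
    static long total() {
        return Observable.just("Alpha", "Beta", "Gamma")
                .count()
                .blockingGet();
    }

    // scan() with a seed of 0 emits a rolling count; skip(1) drops the seed itself.
    static List<Integer> rolling() {
        return Observable.just("Alpha", "Beta", "Gamma")
                .scan(0, (running, next) -> running + 1)
                .skip(1)
                .toList()
                .blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(total());    // 3
        System.out.println(rolling());  // [1, 2, 3]
    }
}
```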
    Like most reduction operators, it should not be used on an infinite Observable. It will hang up and work indefinitely, never emitting a count or calling onComplete(). If we need to count the emissions of an infinite Observable, use the scan() operator to emit a rolling count instead.

- reduce()

    The reduce() operator is syntactically identical to scan(), but it only emits the final result when the source Observable calls onComplete(). Depending on which overloaded version is used, it can yield a Single or a Maybe. If you need the reduce() operator to emit the sum of all emitted integer values, for example, you can take each one and add it to the rolling total. But it will only emit once, after the last emitted value is processed (and the onComplete event is emitted). Similar to scan(), there is a seed argument that you can provide to serve as the initial value to accumulate on.

Below are some boolean operators that we can usually use.

- all()

    The all() operator verifies that all emissions meet the specified criterion and returns a Single<Boolean> object. If they all pass, it returns a Single<Boolean> object that contains true. If it encounters one value that fails the criterion, it immediately calls onComplete() and returns the object that contains false. If we call the all() operator on an empty Observable, it emits true due to the principle of vacuous truth.

- any()

    The any() operator checks whether at least one emission meets a specified criterion and returns a Single<Boolean>. The moment it finds an emission that does, it returns a Single<Boolean> object with true and then calls onComplete(). If it processes all emissions and finds that none of them meet the criterion, it returns a Single<Boolean> object with false and calls onComplete().

- isEmpty()

    The isEmpty() operator checks whether an Observable is going to emit more items. It returns a Single<Boolean> with true if the Observable does not emit items anymore.
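The boolean operators above can be sketched as follows (RxJava 3.x assumed; `BooleanOps` is just an illustrative name):

```java
import io.reactivex.rxjava3.core.Observable;

public class BooleanOps {
    static boolean allSmall() {
        // every value in 1..5 is below 10
        return Observable.range(1, 5).all(i -> i < 10).blockingGet();
    }

    static boolean hasThree() {
        // at least one value equals 3
        return Observable.range(1, 5).any(i -> i == 3).blockingGet();
    }

    static boolean vacuousAll() {
        // all() on an empty source is true (vacuous truth)
        return Observable.<Integer>empty().all(i -> i > 100).blockingGet();
    }

    static boolean emptyIsEmpty() {
        return Observable.<Integer>empty().isEmpty().blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(allSmall() && hasThree() && vacuousAll() && emptyIsEmpty());
    }
}
```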
- contains()

    The contains() operator checks whether a specified item (based on the hashCode()/equals() implementation) has been emitted by the source Observable. It returns a Single<Boolean> with true if the specified item was emitted, and false if it was not.

- sequenceEqual()

    The sequenceEqual() operator checks whether two Observables emit the same values in the same order. It returns a Single<Boolean> with true if the emitted sequences are the same pairwise.

<br>

## Collection operators

A collection operator accumulates all emissions into a collection such as a List or a Map and then emits that entire collection as a single value. It is another form of reducing operator, since it aggregates the emitted items into a single one.

- toList()

    toList() is probably the most often used of all the collection operators. For a given Observable<T>, it collects incoming items into a List<T> and then pushes that List<T> object as a single value through Single<List<T>>. By default, toList() uses an ArrayList implementation of the List interface. You can optionally specify an integer argument to serve as the capacityHint value, which optimizes the initialization of the ArrayList to expect roughly that number of items. If you want to use a different List implementation, you can provide a Callable function as an argument to specify one.

- toSortedList()

    A different flavor of the toList() operator is toSortedList(). It collects the emitted values into a List object whose elements are sorted in natural order (based on their Comparable implementation). Then, it pushes that List<T> object with sorted elements to the Observer. As with the sorted() operator, you can provide a Comparator as an argument to apply a different sorting logic. You can also specify an initial capacity for the backing ArrayList, just like in the case of the toList() operator.
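A small sketch of toList() and toSortedList() (RxJava 3.x assumed; `ListOps` is an illustrative name):

```java
import io.reactivex.rxjava3.core.Observable;
import java.util.List;

public class ListOps {
    // toList() gathers emissions in arrival order into a Single<List<T>>.
    static List<Integer> asList() {
        return Observable.just(6, 2, 5).toList().blockingGet();
    }

    // toSortedList() gathers them and sorts by natural order first.
    static List<Integer> asSortedList() {
        return Observable.just(6, 2, 5).toSortedList().blockingGet();
    }

    public static void main(String[] args) {
        System.out.println(asList());        // [6, 2, 5]
        System.out.println(asSortedList());  // [2, 5, 6]
    }
}
```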
- toMap() and toMultimap()

  - toMap()

    For a given Observable<T>, the toMap() operator collects received values into a Map<K,T>, where K is the key type. The key is generated by the Function<T,K> provided as the argument. If we want to associate a value other than the received one with each key, we can provide a second lambda argument that maps each received value to a different one.

  - toMultimap()

    If we want a given key to map to multiple values, we use toMultimap() instead, which maintains a list of corresponding values for each key.

- collect()

  When none of the collection operators does what you need, you can always use the collect() operator to specify a different type to collect items into. When you need to collect values into a mutable object, and you need a fresh mutable seed object each time, use collect() instead of the reduce() operator.

<br>

## Error recovery operators

Exceptions in a chain of Observable operators are communicated as an onError event down the chain to the Observer. After that, the subscription terminates and no more emissions occur. Often, however, we want to intercept exceptions before they get to the Observer and attempt some form of recovery.

- onErrorReturnItem() and onErrorReturn()

  - The onErrorReturnItem() operator is used when we want to fall back to a default value if an exception occurs. The emissions still stop after the error, but the error itself does not flow down to the Observer. Instead, the default value is received as if it had been emitted by the source Observable.
  - The onErrorReturn(Function<Throwable, T> valueSupplier) operator is used to produce the value dynamically with the specified function. This gives us access to the Throwable object, which we can use while computing the returned value.

  Note also that the location of the onErrorReturn() operator in the operator chain matters.
If we put it before the operator that throws the exception, the error is not caught, because it happens downstream. To intercept the emitted error, the error must originate upstream from the onErrorReturn() operator. Even when onErrorReturnItem() or onErrorReturn() handles the error, the emissions are still terminated, because an onError event was pushed from the source Observable; the remaining items are not emitted to the Observer. If we want to resume emissions, we can handle the error with try/catch inside the operator where the error occurs.

- onErrorResumeWith()

  The onErrorResumeWith() operator resumes with an alternate Observable when an error occurs, instead of passing the error down to the Observer.

- retry()

  The retry() operator re-subscribes to the source Observable when an error occurs, in the hope that it succeeds on another attempt; overloads accept a maximum retry count or a predicate.

<br>

## Action operators

These operators don't modify the Observable but use it for side effects, such as debugging, as well as gaining visibility into an Observable chain. Below are some action operators that we need to know.

- doOnNext() and doAfterNext()

  - The doOnNext() operator allows a peek at each received value before letting it flow into the next operator. It doesn't affect the processing or transform the emission in any way. We can use it just to create a side effect for each received value.
  - The doAfterNext() operator performs the action after the item is passed downstream rather than before. This means the action of the doAfterNext() operator runs after the downstream has completely finished processing the item.

- doOnComplete() and doOnError()

  - doOnComplete()

    The doOnComplete() operator allows us to fire off an action when an onComplete event is emitted at that point in the Observable chain. This can be helpful in seeing which points of the Observable chain have completed.

  - doOnError()

    The doOnError() operator will peek at the error being emitted up the chain, and we can perform an action with it. This can be helpful to put between operators to see which one is to blame for an error.

- doOnEach()

  The **doOnEach()** operator is similar to **doOnNext()**. The only difference is that in **doOnEach()**, the emitted item comes wrapped inside a **Notification** that also contains the type of the event.
  This means we can check which of the three events - **onNext()**, **onComplete()**, or **onError()** - has happened and select an appropriate action. The **subscribe()** method accepts these three actions as lambda arguments or an entire **Observer<T>**. So, using **doOnEach()** is like putting **subscribe()** right in the middle of our **Observable** chain. The error and the value (the emitted item) can be extracted from the Notification in the same way.

```java
Observable.just("One", "Two", "Three")
    .doOnEach(s -> System.out.println("doOnEach: " + s.getError() + ", " + s.getValue()))
    .subscribe(i -> System.out.println("Received: " + i));
```

- doOnSubscribe() and doOnDispose()

  - doOnSubscribe(Consumer<Disposable> onSubscribe) executes the provided function at the moment subscription occurs. It gives access to the Disposable object in case we want to call dispose() in that action.
  - The doOnDispose(Action onDispose) operator performs the specified action when disposal is executed. To dispose of a subscription (and the stream inside the Observable-Observer chain), call the dispose() method.

  We can use both operators to print when subscription and disposal occur: the emitted values go through, and then disposal is finally fired. Another option is to use the doFinally() operator, which fires after either onComplete() or onError() is called, or after the chain is disposed of.

- doOnSuccess()

  The Maybe and Single types don't have an onNext() event but rather an onSuccess() event to pass a single emission. Using the doOnSuccess() operator should effectively feel like using doOnNext().

- doFinally()

  The doFinally() operator is executed when onComplete(), onError(), or disposal happens. It is executed under the same conditions as doAfterTerminate(), plus it is also executed after disposal. The doFinally() operator guarantees that the action is executed exactly once per subscription.
And, by the way, the location of these operators in the chain does not matter, because they are driven by the events, not by the emitted data.

<br>

## Utility operators

- delay()

  We can postpone emissions using the delay() operator. It holds any received emissions and delays each one for the specified time period. For more advanced cases, we can pass another Observable as the delay() argument, which delays emissions until that other Observable emits something.

- repeat()

  The repeat() operator repeats the subscription after onComplete() a specified number of times. If you do not specify a number, it repeats infinitely, re-subscribing after every onComplete(). There is also a repeatUntil() operator that accepts a BooleanSupplier function and continues repeating until the provided function returns true.

- single()

  The single() operator returns a Single that emits the item emitted by the source Observable. If the Observable emits more than one item, the single() operator signals an error. If the Observable emits no items, the Single produced by the single() operator emits the default item passed to the operator as a parameter. There is also a singleElement() operator that returns a Maybe when the Observable emits one item or nothing, and signals an error otherwise. And there is a singleOrError() operator that returns a Single when the Observable emits exactly one item, and signals an error otherwise.

- timestamp()

  The timestamp() operator attaches a timestamp to every item emitted by an Observable.

- timeInterval()

  The timeInterval() operator emits the time elapsed between consecutive emissions of a source Observable.

<br>

## Wrapping up

<br>

Refer:

[Learning RxJava, second edition]()

[https://rxmarbles.com/#merge](https://rxmarbles.com/#merge)
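To tie the error-recovery and collection sections above together, here is one last hedged sketch (assuming RxJava 3; the division-by-zero scenario and class name are just for illustration):

```java
import io.reactivex.rxjava3.core.Observable;
import java.util.List;

public class ErrorRecoverySketch {
    public static void main(String[] args) {
        // 10 / i throws ArithmeticException when i == 0.
        // onErrorReturnItem() substitutes -1 for the error, but note that
        // the emissions after the failing one are still lost: the upstream
        // onError event already terminated the sequence, as described above.
        List<Integer> results = Observable.just(5, 2, 0, 10)
                .map(i -> 10 / i)
                .onErrorReturnItem(-1)
                .toList()
                .blockingGet(); // [2, 5, -1]

        System.out.println(results);
    }
}
```

Moving onErrorReturnItem() above the map() call would leave the error unhandled, since the exception originates downstream of it.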
# React Date Primitives

Primitives for creating date-picker and date-range-picker components in React. And it has zero dependencies!

[![NPM version][npm-image]][npm-url] [![Build Status][travis-image]][travis-url]

## Installation

This package is distributed via [npm](https://www.npmjs.com/).

```
npm install --save react-date-primitives
```

> This package also depends on `react`. Please make sure you have that installed as well.

## Usage

```jsx
import * as React from 'react';
import { CalendarMonth } from 'react-date-primitives';

class SimpleDatePicker extends React.Component {
    render() {
        return (
            <table>
                <CalendarMonth
                    month={new Date()}
                    render={({ days }) => (
                        <tbody>
                            {days.map((week, i) => (
                                <tr key={i}>
                                    {week.map((day, j) => (
                                        <td key={`${i}-${j}`}>
                                            {day.inCurrentMonth ? day.date.getDate() : ''}
                                        </td>
                                    ))}
                                </tr>
                            ))}
                        </tbody>
                    )}
                />
            </table>
        );
    }
}
```

## Live Examples

- [simple date-picker](https://codesandbox.io/s/github/vkbansal/react-date-primitives/tree/master/examples/simple-datepicker)
- [simple date-picker with dropdowns for month and year](https://codesandbox.io/s/github/vkbansal/react-date-primitives/tree/master/examples/datepicker-dropdowns)
- [simple daterange-picker using `CalendarMonth`](https://codesandbox.io/s/github/vkbansal/react-date-primitives/tree/master/examples/simple-daterangepicker)
- [styled date-picker](https://codesandbox.io/s/github/vkbansal/react-date-primitives/tree/master/examples/styled-datepicker)
- [`useCalendar` hook](https://codesandbox.io/s/github/vkbansal/react-date-primitives/tree/master/examples/use-calendar-hook)
- [`useDateRange` hook](https://codesandbox.io/s/github/vkbansal/react-date-primitives/tree/master/examples/use-daterange-hook)

## API

- [`CalendarMonth`](docs/CalendarMonth.md)
- [`DateRangeControl`](docs/DateRangeControl.md)

## License

[MIT](./LICENSE.md).
Copyright(c) [Vivek Kumar Bansal](http://vkbansal.me/) [npm-url]: https://npmjs.org/package/react-date-primitives [npm-image]: https://img.shields.io/npm/v/react-date-primitives.svg?style=flat-square [travis-url]: https://travis-ci.org/vkbansal/react-date-primitives [travis-image]: https://img.shields.io/travis/vkbansal/react-date-primitives/master.svg?style=flat-square
pymd2re
===

![Software Version](http://img.shields.io/badge/Version-v0.1.0-green.svg?style=flat)
![Python Version](http://img.shields.io/badge/Python-3.6-blue.svg?style=flat)
[![MIT License](http://img.shields.io/badge/license-MIT-blue.svg?style=flat)](LICENSE)

[English Page](./README.md)

## Overview

A script that converts Markdown files to Re:VIEW files.

## Version

v0.1.0

## Requirements

- Python 3.6 or later
- No dependencies other than the standard library

## License

MIT License

## Usage

Run `pymd2re.py`.

    usage: pymd2re.py [-h] input_path output_path

    Convert Markdown file to Re:VIEW file.

    positional arguments:
      input_path   Input File Path. (Markdown file)
      output_path  Output File Path. (Re:VIEW file)

    optional arguments:
      -h, --help   show this help message and exit

## Samples

- Output of `pymd2re.py`
    - Input file: [sample_input.md](sample/sample_input.md)
    - Output file: [sample_output.re](sample/sample_output.re)
    - Conversion warnings: [warning_stdout.txt](sample/warning_stdout.txt)
- Visualization of the intermediate data with `debug.py`
    - Input file: [sample_input.md](sample/sample_input.md)
    - Output: [debug_stdout.txt](sample/debug_stdout.txt)

## Limitations

- Multiple kinds of blocks on the same line are not supported
    - For example, a multi-line comment that starts in the middle of a line.
- Multiple kinds of inline elements on the same line are allowed.

## Notes

- The parser is implemented by brute force with regular expressions.
- The document structure is kept in an intermediate representation.
- The assumption is that, given a dedicated renderer, output to formats other than Re:VIEW is also possible.
---
layout: post
title: "Top 5 things to consider when choosing a new technology"
pub_date: 2016-09-16 10:00:00 AM
last_modified: 2016-09-16 10:00:00 AM
permalink: /top-5-things-to-consider-when-choosing-a-new-technology/
categories:
- productivity
published: true
author: "Adrian Oprea"
twitter: "@opreaadrian"
keywords: productivity, scalability, technologies, software development, recipes, javascript, reactjs, react, angular, angular2, aurelia, libraries, front-end
featured_image: /images/posts/top-5-things-to-consider-when-choosing-a-new-technology/post.jpg
---

There is a saying that naming things is the most difficult part of software development, and it is 100% true. At the same time, choosing technologies, libraries, platforms or programming languages can be just as hard. There are many aspects to consider besides how we feel about a technology or whether it is trendy. Judging a framework, for example, only by its popularity and how badly you want to work with it, without looking at some numbers, is a thing I call Resume-Driven Development (RDD) and has nothing to do with healthy software development practices. In this article, I will share with you my process for making such difficult choices.

## Table of contents
{:.no_toc}

* Table of contents (will contain all headings except the "Table of contents" one above)
{:toc}

## A bit of context

During my career as a consultant, I've always offered technical guidance to clients. From choosing between cloud providers, to optimizing the development process, to choosing a front-end framework, I've done it all. One of the frequent issues I faced when working with a client's team on choosing a specific technology was the lack of process. They either did not have a process or they did not know about it. If you own a business or you are a decision maker, you should know that migrating to new technologies without a clear process and careful consideration from the teams involved can be highly detrimental.
I'm not saying that you should have a 3-month analysis process, requiring detailed documentation and sign-off from 5 stakeholders. Far from it! These processes need to be light and streamlined. The only thing I'm suggesting is that a process should be in place and it should be known by all team members.

Think about the following scenario: Your team chooses a framework they know little about. They are under pressure and they can't take the time to look at some stats for each available framework. They know little about available learning materials and don't have time to evaluate the learning curve. One of two things can happen, and the latter is more likely than the former:

1. Your company is an early adopter of a technology that is about to hit the mainstream. You get to be several steps ahead of your competitors.
2. You end up with an unknown technology that only a specific team in your company knows.

If you end up in situation #2, which is almost always the case, not only is your project delivery going to be affected, but you might also have trouble hiring the right people further down the road.

## Case study

Let's pretend for a moment that you want to migrate an application written in Flash to modern web technologies, and your team needs to choose the front-end stack to develop on. I will outline the process you should use to evaluate the most known front-end frameworks/libraries and decide on the stack the team would later use to develop the new application.

## The evaluation process

1. Create a list of (relevant) candidate front-end libraries
2. Establish the evaluation criteria
3. Brief the team on the list and the criteria. Offer useful links for each technology
4. Set up 1 or 2 meetings to discuss, clarify and decide
5. Create a proof of concept using the chosen library

The advantage of having this process is that you can stop after step 5 and decide whether it is worth going with framework X or if you need another round.
Below is a list of the most relevant front-end libraries, in no particular order.

- [Angular 2](https://angular.io/ "Angular 2 official website"){:target="blank"}
- [React.js](https://facebook.github.io/react/ "React JS GitHub page"){:target="blank"} / [Redux](http://redux.js.org "Redux official website"){:target="blank"}
- [Cycle.js](http://cycle.js.org/ "Cycle.js official website"){:target="blank"}
- [Aurelia](http://aurelia.io/ "Aurelia.js official website"){:target="blank"}
- [Vue.js](http://vuejs.org/ "Vue.js official website"){:target="blank"}

## The evaluation criteria

This is the most important aspect of the whole process. This is your filter, and it should reflect your actual needs. Listed below are the criteria I use for all my client projects.

### Community size

My recommendation is to stick with a library that has a large community. By community, think of everything ranging from Twitter hashtags to the StackOverflow community, IRC channels, and Slack channels. Another important aspect is the number of events (meetups, conferences) in your area.

### GitHub activity

A big part of the decision is the GitHub repository. I'm not talking about GitHub stars. I can't tell you the number of times I starred a project that I later forgot about. I'm talking about actual activity. Take into account the following aspects:

- Contributors (more is better)
- Commits (more is better)
- Resolved / pending issues (more is better / less is better)
- Pending pull requests (less is better)

Each of the indicators above signals the level of activity and the involvement of the community in the development/maintenance process. For example, if there are many pull requests and some of them are pretty old and left hanging, that is a clear sign of a slow-moving community. This is probably due to the fact that open-source projects are usually side projects that people maintain out of passion, outside their working hours.

### Npm stats

This is simple.
If it has a lot of downloads, that means it is used. Compared to the GitHub fork count or stars, which only indicate the level of interest of that community, the download count literally means that the module is in use. I don't have a threshold or an indicative number, but I always look for the highest number of downloads.

### Official documentation maturity

Sift through the documentation of the library. If you can make sense of it, then you probably have a good candidate for the final round. Note that this doesn't mean everyone can understand the documentation. Make sure you get input from the team every step of the way. You might also find that for your sample application everything is very clear, but when you work on non-trivial applications, things can get more complicated. Always be sure to check what people are saying on forums and websites. Some documentation is available on GitHub, so you might find some useful information directly in the repository.

### Learning material availability

I won't go into much detail on this, as it is self-explanatory. All I do is follow the list of steps outlined below.

- Google for the library and eyeball the number of blog articles available
- Search for courses on training sites such as: [pluralsight.com](http://pluralsight.com "Link to Pluralsight website"){:target="blank"}, [egghead.io](http://egghead.io "Link to Egghead.io website"){:target="blank"}, [tutsplus.com](http://tutsplus.com "Link to Tutsplus.com website"){:target="blank"}
- Look on Twitter and other popular tech news aggregators like [echojs.com]() for mentions.

### Extra criteria

If, after going through the process above, you still haven't found "the right one", there are two more steps that you can take:

1. Check if the project is backed by a company. As I mentioned before, OSS projects are usually passion projects of people who also have a day job.
This is usually a risk factor as they might grow tired or bored of the project, or they might not have the time to maintain it anymore. On the other hand, companies have people or entire teams dedicated to the development of a project, which brings more stability in terms of project maintenance. 2. Check the activity of the main contributor / contributors since the project started. If they dropped development on a framework to go work for a company and after that they left the company and started a new framework, I would advise you to think a bit more about the reliability of the project. It might be an extraordinary piece of software but unless you want to possibly become a contributor, you need to stay away from projects like this. ## Closing thoughts If you expected me to bash all the libraries in the list and come up with a winner, you got tricked. I'm not going to declare winners here, but you can use [this document](/resources/frontend_libraries_comparison.pdf){:target="blank"} I put together a couple of weeks ago for a client of mine and decide for yourself. ## I'm taking on new projects Is your team having trouble deciding on the framework to use for your next application? Are you migrating an old web application and would like to be productive fast? Email me at [adrian@codesi.nz](mailto:adrian@codesi.nz) or [schedule some time on my calendar](https://calendly.com/neuron01/ "Link to my calendly.com calendar"){:target="blank"} and let's see how I can help you. > Photo credits: > [Kyle Pearce](https://www.flickr.com/photos/keepitsurreal/) &mdash; [Choices](https://flic.kr/p/aiJFxH)
## TODO

### On startup, the application class fetches the following data from the database and stores it:

* User email data -> exist_email
    * exist_email: list type
* Username data -> exist_name
    * exist_name: list type
# Property/tr

## Introduction

<div class="mw-translate-fuzzy">

A **property** is a piece of information, such as a number or a text string, that is attached to a FreeCAD document or to an object in a document. Properties can be viewed and modified with the [Property editor](Property_editor/tr.md).

</div>

<div class="mw-translate-fuzzy">

Properties play a very important role in FreeCAD, because it is designed to work with parametric objects, which are objects defined only by their properties.

</div>

## All property types

<div class="mw-translate-fuzzy">

Custom [scripted objects](scripted_objects/tr.md) in FreeCAD can have properties of the following types:

</div>

```python
Bool
Float
FloatList
FloatConstraint
Angle
Distance
ExpressionEngine
Integer
IntegerConstraint
Percent
Enumeration
IntegerList
String
StringList
Length
Link
LinkList
LinkSubList
Matrix
Vector
VectorList
VectorDistance
Placement
PlacementLink
PythonObject
Color
ColorList
Material
Path
File
FileIncluded
PartShape
FilletContour
Circle
```

Internally, the property name is prefixed with `App::Property`:

```python
App::PropertyBool
App::PropertyFloat
App::PropertyFloatList
...
```

Remember that these are property **types**. A single object may have many properties of the same type, but with different names. For example:

```python
obj.addProperty("App::PropertyFloat", "Length")
obj.addProperty("App::PropertyFloat", "Width")
obj.addProperty("App::PropertyFloat", "Height")
```

This indicates an object with three properties of type "Float", named "Length", "Width", and "Height", respectively.

## Scripting

**See also:** [FreeCAD scripting basics](FreeCAD_Scripting_Basics.md)

A [scripted object](scripted_objects.md) is created first, and then properties are assigned.
```python obj = App.ActiveDocument.addObject("Part::Feature", "CustomObject") obj.addProperty("App::PropertyFloat", "Velocity", "Parameter", "Body speed") obj.addProperty("App::PropertyBool", "VelocityEnabled", "Parameter", "Enable body speed") ``` In general, **Data** properties are assigned by using the object\'s `addProperty()` method. On the other hand, **View** properties are normally provided automatically by the parent object from which the scripted object is derived. For example: - Deriving from `App::FeaturePython` provides only 4 **View** properties: \"Display Mode\", \"On Top When Selected\", \"Show In Tree\", and \"Visibility\". - Deriving from `Part::Feature` provides 17 **View** properties: the previous four, plus \"Angular Deflection\", \"Bounding Box\", \"Deviation\", \"Draw Style\", \"Lighting\", \"Line Color\", \"Line Width\", \"Point Color\", \"Point Size\", \"Selectable\", \"Selection Style\", \"Shape Color\", and \"Transparency\". Nevertheless, **View** properties can also be assigned using the view provider object\'s `addProperty()` method. ```python obj.ViewObject.addProperty("App::PropertyBool", "SuperVisibility", "Base", "Make the object glow") ``` ## Source code In the source code, properties are located in various {{FileName|src/App/Property*}} files. They are imported and initialized in `[https://github.com/FreeCAD/FreeCAD/blob/9c27f1078e5ec516fe882aac1a27f5c6c6174554/src/App/Application.cpp#L1681-L1758 src/App/Application.cpp]`. {{Code|lang=cpp|code= #include "Property.h" #include "PropertyContainer.h" #include "PropertyUnits.h" #include "PropertyFile.h" #include "PropertyLinks.h" #include "PropertyPythonObject.h" #include "PropertyExpressionEngine.h" }} --- ![](images/Right_arrow.png) [documentation index](../README.md) > [Developer Documentation](Category_Developer Documentation.md) > [Python Code](Category_Python Code.md) > Property/tr
---
title: Notes on the QFT
date: 2018-06-08 16:50:00
tags: math, Quantum mechanics
header-warn: This article was migrated from the <a href="https://falgon.github.io/roki.log/">old blog</a>. It may therefore contain context that depends on the <a href="https://falgon.github.io/roki.log/">old blog</a>. Please keep this in mind.
---

For a free-topic school report assignment[^1], I gave an overview of the QFT in order to explain Shor's algorithm. Since I went to the trouble, I decided to excerpt part of that content and post it on this blog as well. I will not cover Shor's algorithm itself in this entry, since searching turns up plenty of material and it is often explained clearly in journals and books, but its overall flow is as shown in the following activity diagram[^2].

![Activity diagram of Shor's algorithm](./shoractivity.png "Activity diagram of Shor's algorithm"){ width=450 }

<!--more-->

<i>Note that I am not a specialist in quantum mechanics or quantum computing, so please read with care. If you find any mistakes or unnatural points, I would appreciate a report.</i>

First, define the DFT as follows[^3].

\\[\displaystyle F(t) = \sum_{x = 0}^{n-1} f(x)\exp\left(-j\dfrac{2\pi tx}{n}\right)\tag{1}\\]

Here, $f(x)$ is the input function and $j$ is the imaginary unit. The QFT can be described as the discrete Fourier transform of the complex coefficients of a state \\(\displaystyle \sum_{x=0}^{n-1} f(x)|x\rangle\\) over the orthonormal basis $|0\rangle, \cdots, |n-1\rangle$ of a finite-dimensional inner product space, with normalization factor $\dfrac{1}{\sqrt{n}}$. That is, following the definition in Eq. $(1)$,

\\[\displaystyle \sum_{x = 0}^{n-1} f(x)|x\rangle \mapsto \sum_{i = 0}^{n-1}F(i) |i\rangle\\]

or equivalently,

\\[\displaystyle |x\rangle \mapsto \dfrac{1}{\sqrt{n}}\sum_{k=0}^{n-1}\exp(-j\dfrac{2\pi xk}{n}) |k\rangle\\]

and since with \\(m\\) qubits the number of data points we can handle is \\(2^m\\), this can be written as

\\[\displaystyle |x\rangle \mapsto \dfrac{1}{\sqrt{2^m}}\sum_{k=0}^{2^m-1}\exp(-j\dfrac{2\pi xk}{2^m}) |k\rangle\\]

We now implement this as a quantum circuit. To state the conclusion first, this circuit can be realized with Hadamard gates and controlled phase-shift gates that shift the phase of the target qubit by $\exp(\dfrac{j2\pi}{2^{k+1}})$ only when the control bit is $1$. The circuit diagram of a 2-qubit QFT is shown next[^4].

![2-qubit QFT circuit built from Hadamard and controlled phase-shift gates](./2qubitQtf.png "2-qubit QFT circuit built from Hadamard and controlled phase-shift gates"){ width=300 }

Here, \\(|q_1\rangle\\) transforms as

\\[|0\rangle + \exp(j\pi q_{1})|1\rangle \to |0\rangle + \exp(\dfrac{j\pi}{2}(2q_1+q_0))|1\rangle \tag{2}\\]

and \\(|q_0\rangle\\) transforms as

\\[|0\rangle + \exp(j\pi q_{0})|1\rangle \tag{3}\\]
Now, writing the result of Eq. $(2)$ as \\(|a_0\rangle\\) and the result of Eq. $(3)$ as \\(|a_1\rangle\\), we have

$$|a_1\rangle |a_0\rangle = \left\{|0\rangle + \exp(j\pi q_0)|1\rangle\right\}\left\{|0\rangle + \exp(j\pi q_1 + \dfrac{j\pi q_0}{2})|1\rangle\right\}\tag{4}$$

Here, writing the binary representations of the values $q$ and $a$ as \\([q_1, q_0]\\) and \\([a_1, a_0]\\) respectively, we have \\(q = 2q_1 + q_0,\ a = 2a_1+a_0\\), so Eq. $(4)$ expands to

$$|a\rangle = |0\rangle + \exp(\dfrac{j\pi}{2}q)|1\rangle + \exp(\dfrac{j\pi}{2}q\times 2)|2\rangle + \exp(\dfrac{j\pi}{2}q\times 3)|3\rangle $$

We can see that the coefficients of the states of $|a\rangle$ are the Fourier transform of the coefficients of the states of $|q\rangle$.

[^1]: The full contents are collected in the [repository](https://bitbucket.org/r0ki/52520001/src), if you are interested.

[^2]: Diagram generated with PlantUML: [code](https://bitbucket.org/r0ki/52520001/src/master/plantuml-images/report.uml). This image was also generated for the report, but I decided to post it here as well.

[^3]: Since it is just a matter of writing code, it is nothing special, but I [implemented](https://bitbucket.org/r0ki/52520001/src/7b42d2be8cfd5c2e5931c553552f9bc9f5e1696f/src/src/Lib52520001.hs#lines-31:48) the DFT and IDFT in Haskell to explain them in the report, as an example if you are interested. They are also [tested](https://bitbucket.org/r0ki/52520001/src/7b42d2be8cfd5c2e5931c553552f9bc9f5e1696f/src/test/Spec.hs).

[^4]: Diagram generated with qasm2circ: [code](https://bitbucket.org/r0ki/52520001/src/master/assets/qcircuit/2qubitQtf.qasm).
--- title: Como usar Notebooks SQL no Azure Data Studio titleSuffix: Azure Data Studio description: Saiba como usar Notebooks SQL no Azure Data Studio ms.custom: seodec18 ms.date: 06/28/2019 ms.prod: sql ms.technology: azure-data-studio ms.reviewer: achatter; alayu; sstein ms.topic: conceptual author: yualan ms.author: alayu ms.openlocfilehash: 9af2e04a3973eddfcd714c7968c35e544302aba9 ms.sourcegitcommit: db9bed6214f9dca82dccb4ccd4a2417c62e4f1bd ms.translationtype: HT ms.contentlocale: pt-BR ms.lasthandoff: 07/25/2019 ms.locfileid: "67959260" --- # <a name="how-to-use-notebooks-in-azure-data-studio"></a>Como usar notebooks no Azure Data Studio Este artigo descreve como iniciar a experiência de Notebook no Azure Data Studio e como começar a criar seus próprios notebooks. Ele também mostra como escrever Notebooks usando kernels diferentes. ## <a name="connect-to-sql-server"></a>Conecte-se ao SQL Server Você pode se conectar ao tipo de conexão Microsoft SQL Server no Azure Data Studio. No Azure Data Studio, você também pode pressionar F1 e clicar em **Nova Conexão** e conectar-se ao seu SQL Server. ![image1](media/sql-notebooks/connection-info.png) ## <a name="launch-notebooks"></a>Iniciar Notebooks Há várias maneiras de iniciar um novo notebook. 1. Vá para o **menu Arquivo** no Azure Data Studio e clique em **Novo Notebook**. ![image3](media/sql-notebooks/file-new-notebook.png) 3. Clique com o botão direito do mouse na conexão **SQL Server** e inicie o **Novo Notebook**. ![image3](media/sql-notebooks/server-new-notebook.png) 4. Abra a paleta de comandos (**Ctrl+Shift+P**) e digite **Novo Notebook**. Um novo arquivo chamado `Notebook-1.ipynb` é aberto. ## <a name="supported-kernels-and-attach-to-context"></a>Kernels com suporte e anexar ao contexto A instalação do Notebook no Azure Data Studio dá suporte nativo ao Kernel do SQL. Se você for um desenvolvedor de SQL e quiser usar Notebooks, esse será o Kernel escolhido. 
The SQL kernel can also be used to connect to PostgreSQL server instances. If you are a PostgreSQL developer and want to connect to your PostgreSQL server, download the [**PostgreSQL extension**](postgres-extension.md) from the Azure Data Studio extension marketplace.

![image7](media/sql-notebooks/sql-kernel-dropdown.png)

### <a name="sql-kernel"></a>SQL kernel

In code cells inside the notebook, similar to our query editor, we support the modern SQL coding experience that simplifies everyday tasks with built-in features such as a rich SQL editor, IntelliSense, and built-in code snippets. Code snippets let you generate the proper SQL syntax to create databases, tables, views, stored procedures, and more, and to update existing database objects. Use code snippets to quickly create copies of your database for development or testing purposes, and to generate and run scripts.

Click **Run** to run each cell.

SQL kernel connecting to a SQL Server instance

![image7](media/sql-notebooks/intellisense-code-cell.png)

Query results

![image19](media/sql-notebooks/sql-cell-results.png)

SQL kernel connecting to a PostgreSQL server instance

![image18](media/sql-notebooks/pgsql-code-cell.png)

Query results

![image20](media/sql-notebooks/pgsql-cell-results.png)

### <a name="configure-python-for-notebooks"></a>Configure Python for notebooks

When you select any kernel other than SQL in the kernel dropdown, you need to **Configure Python for Notebooks**. The notebook dependencies are installed in a specified location, but you can decide whether to set the installation location yourself. This installation can take some time, and it is recommended not to close the application until it completes.
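Once Python has been configured for notebooks, a quick sanity check (a minimal sketch, not part of the original article) is to run a first cell that prints which interpreter the kernel is using:

```python
# Sanity-check cell: confirm the interpreter the notebook kernel runs on
# and that it is a Python 3 installation.
import sys

print(sys.executable)  # path of the interpreter backing the kernel
print(sys.version)     # full version string
assert sys.version_info.major >= 3, "a Python 3 environment is expected"
```

If the printed path is not the installation location you chose during **Configure Python for Notebooks**, the kernel is attached to a different environment than you intended.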
When the installation completes, you can start writing code in the supported language.

![image21](media/sql-notebooks/configure-python.png)

After the installation succeeds, you will find a notification in the Task History, along with the location of the Jupyter back-end server running in the Output terminal.

![image22](media/sql-notebooks/jupyter-backend.png)

|Kernel|Description|
|:-----|:-----|
|SQL kernel|Write SQL code targeting your relational database.|
|PySpark3 and PySpark kernels|Write Python code using Spark compute from the cluster.|
|Spark kernel|Write Scala and R code using Spark compute from the cluster.|
|Python kernel|Write Python code for local development.|

`Attach to` provides the context for the kernel to attach to. If you are using the SQL kernel, you can `Attach to` any of your SQL Server instances. If you are using the Python 3 kernel, `Attach to` is `localhost`; you can use this kernel for local Python development. When you are connected to a SQL Server 2019 big data cluster, the default `Attach to` is the cluster endpoint, and it lets you submit Python, Scala, and R code using the cluster's Spark compute.

### <a name="code-cells-and-markdown-cells"></a>Code cells and markdown cells

Add a new code cell by clicking the **+Code** command in the toolbar. Add a new text cell by clicking the **+Text** command in the toolbar.

![image8](media/sql-notebooks/notebook-toolbar.png)

The cell switches to edit mode. Now type markdown, and you will see the preview at the same time.

![image9](media/sql-notebooks/notebook-markdown-cell.png)

Clicking outside the text cell shows the rendered markdown text.
![image10](media/sql-notebooks/notebook-markdown-preview.png)

### <a name="trusted-and-non-trusted"></a>Trusted and non-trusted

Notebooks created in Azure Data Studio are **Trusted** by default. If you open a notebook from some other source, it opens in **Non-Trusted** mode, and you can then mark it as **Trusted**.

### <a name="save"></a>Save

You can save the notebook by pressing **Ctrl+S**, by clicking the **Save File**, **Save File As...**, or **Save All Files** commands in the File menu, or by using the **File: Save** commands in the command palette.

### <a name="pyspark3pyspark-kernel"></a>PySpark3/PySpark kernel

Choose the `PySpark Kernel`, type the following code in the cell, and click **Run**. The Spark application starts and returns the following output:

![image12](media/sql-notebooks/pyspark.png)

### <a name="spark-kernel--scala-language"></a>Spark kernel | Scala language

Choose the `Spark|Scala Kernel` and type the following code in the cell.

![image13](media/sql-notebooks/spark-scala.png)

You can also view the cell options by clicking the options icon below:

![image14](media/sql-notebooks/scala-cell-options.png)

### <a name="spark-kernel--r-language"></a>Spark kernel | R language

Choose Spark | R in the kernel dropdown. Type or paste the code in the cell, then click **Run** to see the following output:

![image15](media/sql-notebooks/spark-r.png)

### <a name="local-python-kernel"></a>Local Python kernel

Choose the local Python kernel and type in the cell:

![image16](media/sql-notebooks/local-python.png)

## <a name="manage-packages"></a>Manage packages

One of the things we optimized for local Python development is the ability to install packages that customers need for their scenarios.
By default, we include common packages such as `pandas`, `numpy`, and so on, but if you expect a package that is not included, write the following code in a notebook cell:

```python
import <package-name>
```

When you run this command and the package is missing, `Module not found` is returned. If the package exists, you do not receive the error.

If it returns a `Module not Found` error, click **Manage Packages** to launch the wizard experience.

![image17](media/sql-notebooks/manage-packages.png)

In this wizard, you can see the **Installed** packages. You can search the list and see the associated version of each of these packages. If you need to **uninstall** any of the packages, you can click one of the packages and then click the **Uninstall selected packages** option.

You can also click **Add new packages** to **search** for a specific package, choose the related version, and click **Install**. By default, we select the latest version of the searched package. After the package is installed, you can go to the notebook cell and type the following command:

```python
import <package-name>
```

If you need to **uninstall** any of the packages, you can click one or more packages and then click the **Uninstall selected packages** option.

## <a name="next-steps"></a>Next steps

To learn how to work with an existing notebook, see [How to manage notebooks in Azure Data Studio](https://docs.microsoft.com/sql/big-data-cluster/notebooks-how-to-manage?view=sqlallproducts-allversions).
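As a hedged illustration of the workflow above (not taken from the article), a notebook cell can test whether a package is importable before reaching for **Manage Packages**; `no_such_pkg_xyz_123` below is a deliberately nonexistent placeholder name:

```python
# Sketch: probe the import system instead of triggering an ImportError.
import importlib.util

def is_installed(package_name: str) -> bool:
    """Return True if the import system can find the package."""
    return importlib.util.find_spec(package_name) is not None

print(is_installed("json"))                 # True: part of the standard library
print(is_installed("no_such_pkg_xyz_123"))  # False: install it via Manage Packages
```

A `False` result corresponds to the `Module not found` case described above, which is when the **Manage Packages** wizard is the next step.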
50.617486
628
0.773292
por_Latn
0.997652
e90ddde4bc890432ab7b40fbd9fe45b52c4d1851
471
md
Markdown
code/dotnet/FactSetPeople/v1/docs/MbType.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
6
2022-02-07T16:34:18.000Z
2022-03-30T08:04:57.000Z
code/dotnet/FactSetPeople/v1/docs/MbType.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
2
2022-02-07T05:25:57.000Z
2022-03-07T14:18:04.000Z
code/dotnet/FactSetPeople/v1/docs/MbType.md
factset/enterprise-sdk
3fd4d1360756c515c9737a0c9a992c7451d7de7e
[ "Apache-2.0" ]
null
null
null
# FactSet.SDK.FactSetPeople.Model.MbType

Search based on the management and board types. The types include:

|type|description|
|---|---|
|MB|Management & Board|
|MGMT|Management|
|BRD|Board|

## Properties

Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------

[[Back to Model list]](../README.md#documentation-for-models) [[Back to API list]](../README.md#documentation-for-api-endpoints) [[Back to README]](../README.md)
42.818182
161
0.615711
eng_Latn
0.3217
e90e0a3dda0b7e5094a809dc2e2203c08d9c7620
1,317
md
Markdown
docs/csharp/misc/cs0625.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/misc/cs0625.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/misc/cs0625.md
badbadc0ffee/docs.de-de
50a4fab72bc27249ce47d4bf52dcea9e3e279613
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Compiler error CS0625
ms.date: 07/20/2015
f1_keywords:
- CS0625
helpviewer_keywords:
- CS0625
ms.assetid: 44091813-9988-436c-b35e-e24094793782
ms.openlocfilehash: 7ecf06a6aa8cdac713e4c2350067a994c859ecf8
ms.sourcegitcommit: 9b552addadfb57fab0b9e7852ed4f1f1b8a42f8e
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 04/23/2019
ms.locfileid: "61656003"
---
# <a name="compiler-error-cs0625"></a>Compiler error CS0625

'field': instance field types marked with StructLayout(LayoutKind.Explicit) must have a FieldOffset attribute.

When a struct is marked with an explicit **StructLayout** attribute, every field in the struct must carry the [FieldOffset](xref:System.Runtime.InteropServices.FieldOffsetAttribute) attribute. For more information, see [StructLayoutAttribute class](xref:System.Runtime.InteropServices.StructLayoutAttribute).

The following example generates CS0625:

```csharp
// CS0625.cs
// compile with: /target:library
using System;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct A
{
   public int i;   // CS0625: not static; an instance field without FieldOffset
}

// OK
[StructLayout(LayoutKind.Explicit)]
struct B
{
   [FieldOffset(5)]
   public int i;
}
```
30.627907
343
0.770691
deu_Latn
0.366567