hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
listlengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
listlengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
listlengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
2ccd86e9592a172d6e146e71a685130b2f594d14
2,171
md
Markdown
docs/debugger/process-search-dialog-box.md
MicrosoftDocs/visualstudio-docs.pl-pl
64a8f785c904c0e158165f3e11d5b0c23a5e34c5
[ "CC-BY-4.0", "MIT" ]
2
2020-05-20T07:52:54.000Z
2021-02-06T18:51:42.000Z
docs/debugger/process-search-dialog-box.md
MicrosoftDocs/visualstudio-docs.pl-pl
64a8f785c904c0e158165f3e11d5b0c23a5e34c5
[ "CC-BY-4.0", "MIT" ]
8
2018-08-02T15:03:13.000Z
2020-09-27T20:22:01.000Z
docs/debugger/process-search-dialog-box.md
MicrosoftDocs/visualstudio-docs.pl-pl
64a8f785c904c0e158165f3e11d5b0c23a5e34c5
[ "CC-BY-4.0", "MIT" ]
16
2018-01-29T09:30:06.000Z
2021-10-09T11:23:54.000Z
---
title: Process Search Dialog Box | Microsoft Docs
description: Use Process Search to find and select the node for a specific process in Processes view. You can specify a process ID, a module string, and a search direction.
ms.custom: SEO-VS-2020
ms.date: 11/04/2016
ms.topic: reference
helpviewer_keywords:
- Process Search
ms.assetid: 518e8153-eec2-4db9-a6f7-416ec11d8e09
author: mikejo5000
ms.author: mikejo
manager: jmartens
ms.technology: vs-ide-debug
ms.workload:
- multiple
ms.openlocfilehash: e93a7c249b67d326d70815e64fbf25377f814ed0
ms.sourcegitcommit: 68897da7d74c31ae1ebf5d47c7b5ddc9b108265b
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 08/13/2021
ms.locfileid: "122105148"
---
# <a name="process-search-dialog-box"></a>Process Search Dialog Box

Use this dialog box to find and select the node for a specific process in [Processes view](../debugger/processes-view.md). To display this dialog box, move focus to the **Processes view** window. Then choose **Find Process** from the **Search** menu.

The following settings are available:

**Process**
The process ID to search for.

**Module**
The module string to search for.

**Up or Down search direction**
The initial direction of the search.

## <a name="related-sections"></a>Related sections

[Searching for a Process in Processes View](../debugger/how-to-search-for-a-process-in-processes-view.md)
Explains how to find a specific process in Processes view.

[Processes View](../debugger/processes-view.md)
Displays a tree view of the active processes.

[Spy++ Views](../debugger/spy-increment-views.md)
Explains the Spy++ tree views of windows, messages, processes, and threads.

[Using Spy++](../debugger/using-spy-increment.md)
Introduces the Spy++ tool and explains how it can be used.

[Process Properties Dialog Box](../debugger/process-properties-dialog-box.md)
Used to display the properties of a process selected in Processes view.

[Spy++ Reference](../debugger/spy-increment-reference.md)
Includes sections describing each Spy++ menu and dialog box.
48.244444
271
0.793183
pol_Latn
0.998106
2cce9736a9cd106b5f6eabb5476b90c9c30ddbbf
256
md
Markdown
powerapps-docs/includes/proc-work-area.md
bassetassen/powerapps-docs.nb-no
c336300d545b56542b63d43a30193a68648cfb07
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/includes/proc-work-area.md
bassetassen/powerapps-docs.nb-no
c336300d545b56542b63d43a30193a68648cfb07
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerapps-docs/includes/proc-work-area.md
bassetassen/powerapps-docs.nb-no
c336300d545b56542b63d43a30193a68648cfb07
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
ms.openlocfilehash: 0b7005fb41befcec97a2b7a4f39558f22efdf261
ms.sourcegitcommit: ad203331ee9737e82ef70206ac04eeb72a5f9c7f
ms.translationtype: MT
ms.contentlocale: nb-NO
ms.lasthandoff: 06/18/2019
ms.locfileid: "67215358"
---
Go to your work area.
28.444444
60
0.839844
nob_Latn
0.075578
2ccebcf13db4bab70d31113411209a00467e54e7
3,288
md
Markdown
_posts/2020-12-02-input-output.md
mkDlufop/mkDlufop.github.io
e3d0b456d4da4340942b6ac56e5091a3f6eb4061
[ "Apache-2.0" ]
null
null
null
_posts/2020-12-02-input-output.md
mkDlufop/mkDlufop.github.io
e3d0b456d4da4340942b6ac56e5091a3f6eb4061
[ "Apache-2.0" ]
null
null
null
_posts/2020-12-02-input-output.md
mkDlufop/mkDlufop.github.io
e3d0b456d4da4340942b6ac56e5091a3f6eb4061
[ "Apache-2.0" ]
null
null
null
---
layout: post
title: Formatted input and output
subtitle:
tags: [c]
---

- **printf** - % [flags] [width] [.prec] [hlL] type

> | flags | meaning |
> | :-----: | :-------: |
> | - | left-align |
> | + | print a leading + |
> | (space) | leave a space for positive numbers |
> | 0 | pad with zeros |

> | width or .prec | meaning |
> | :----------: | :----------------------------------------------------------: |
> | number | minimum field width (total characters the output may occupy) |
> | \* | the next argument gives the width, e.g. printf("%*d\n", 6, 123); is equivalent to printf("%6d\n", 123); |
> | .number | digits after the decimal point |
> | .\* | the next argument gives the digits after the decimal point |

> | type modifier | meaning |
> | :------: | :---------: |
> | hh | single byte |
> | h | short |
> | l | long |
> | ll | long long |
> | L | long double |

> | type | used for | type | used for |
> | :--: | :----------------: | :--: | :----------------------------------------------------------: |
> | i or d | int | g | float |
> | u | unsigned int | G | float |
> | o | octal | a or A | hexadecimal floating point |
> | x | hexadecimal | c | char |
> | X | uppercase hexadecimal | s | string |
> | f or F | float | p | pointer |
> | e or E | exponent notation | n | count of characters written/read so far, e.g. int num; printf("%d%n", 12345, &num); printf("%d", num); // when %n is reached, the number of characters output so far is stored in the variable &num points to |

- **scanf** - % [flag] type

> | flag | meaning | flag | meaning |
> | :--: | :----------------------------------------------------------: | :--: | :---------: |
> | \* | skip, e.g. int num; scanf("%\*d%d", &num); // skips the first integer in the input and assigns the second to num | l | long, double |
> | number | maximum characters | ll | long long |
> | hh | char | L | long double |
> | h | short | | |

> | type | used for | type | used for |
> | :--: | :----------------------------------------------------------: | :-----: | :----------: |
> | d | int | a,e,f,g | float |
> | i | integer, possibly hexadecimal or octal (determined by the form of the input) | c | char |
> | u | unsigned int | s | string |
> | o | octal | [...] | set of allowed characters |
> | x | hexadecimal | p | pointer |
50.584615
135
0.184915
yue_Hant
0.146468
2cd0290b2394ce5ef5274c8ebdd72671fe6a48d4
446
md
Markdown
README.md
keeey/cordova-plugin-system-gesture
96df52b3faa9fa3995fe8b21c303a53ff25a7d86
[ "MIT" ]
null
null
null
README.md
keeey/cordova-plugin-system-gesture
96df52b3faa9fa3995fe8b21c303a53ff25a7d86
[ "MIT" ]
null
null
null
README.md
keeey/cordova-plugin-system-gesture
96df52b3faa9fa3995fe8b21c303a53ff25a7d86
[ "MIT" ]
null
null
null
# cordova-plugin-system-gesture

This plugin avoids back gesture conflicts for gesture navigation in the 200dp range from the bottom left to the top of the screen.

## How To Use

```
document.addEventListener("deviceready", onDeviceReady, false);

function onDeviceReady() {
    cordova.plugins.systemGesture.setExclusionRects(
        function(result) { alert("success: " + result); },
        function(error) { alert("error: " + error); }
    );
}
```
27.875
130
0.715247
eng_Latn
0.62579
2cd0415a67635e8b1b11eb396c37798a4a22b553
8,151
md
Markdown
node_modules/mongodb/CHANGES_3.0.0.md
anasbabata/Nodes
ae6a0271cff7e206b5e30ce03d191e8c1960bc38
[ "MIT" ]
1
2018-01-12T13:31:11.000Z
2018-01-12T13:31:11.000Z
node_modules/mongodb/CHANGES_3.0.0.md
anasbabata/Nodes
ae6a0271cff7e206b5e30ce03d191e8c1960bc38
[ "MIT" ]
49
2017-12-06T15:27:25.000Z
2018-06-14T20:00:57.000Z
node_modules/mongodb/CHANGES_3.0.0.md
anasbabata/Nodes
ae6a0271cff7e206b5e30ce03d191e8c1960bc38
[ "MIT" ]
2
2019-06-18T14:18:19.000Z
2019-06-30T13:43:53.000Z
## Features

The following are new features added in MongoDB 3.6 and supported in the Node.js driver.

### Retryable Writes

Support has been added for retryable writes through the connection string. MongoDB 3.6 will utilize server sessions to allow some write commands to specify a transaction ID to enforce at-most-once semantics for the write operation(s) and allow for retrying the operation if the driver fails to obtain a write result (e.g. network error or "not master" error after a replica set failover). Full details can be found in the [Retryable Writes Specification](https://github.com/mongodb/specifications/blob/master/source/retryable-writes/retryable-writes.rst).

### DNS Seedlist Support

Support has been added for DNS Seedlists. Users may now configure a single domain to return a list of host names. Full details can be found in the [Seedlist Discovery Specification](https://github.com/mongodb/specifications/blob/master/source/initial-dns-seedlist-discovery/initial-dns-seedlist-discovery.rst).

### Change Streams

Support has been added for creating a stream to track changes to a particular collection. This is a new feature in MongoDB 3.6. Full details can be found in the [Change Stream Specification](https://github.com/mongodb/specifications/blob/master/source/change-streams.rst) as well as [examples in the test directory](https://github.com/mongodb/node-mongodb-native/blob/3.0.0/test/functional/operation_changestream_example_tests.js).

### Sessions

Version 3.6 of the server introduces the concept of logical sessions for clients. In this driver, `MongoClient` now tracks all sessions created on the client, and explicitly cleans them up upon client close. More information can be found in the [Driver Sessions Specification](https://github.com/mongodb/specifications/blob/master/source/sessions/driver-sessions.rst).

## API Changes

We removed the following API methods.
- `Db.prototype.authenticate`
- `Db.prototype.logout`
- `Db.prototype.open`
- `Db.prototype.db`
- `Db.prototype.close`
- `Admin.prototype.authenticate`
- `Admin.prototype.logout`
- `Admin.prototype.profilingLevel`
- `Admin.prototype.setProfilingLevel`
- `Admin.prototype.profilingInfo`
- `Cursor.prototype.nextObject`

We've added the following API methods.

- `MongoClient.prototype.logout`
- `MongoClient.prototype.isConnected`
- `MongoClient.prototype.db`
- `MongoClient.prototype.close`
- `MongoClient.prototype.connect`
- `Db.prototype.profilingLevel`
- `Db.prototype.setProfilingLevel`
- `Db.prototype.profilingInfo`

In core we have removed the possibility of authenticating multiple credentials against the same connection pool. This is to avoid problems with MongoDB 3.6 or higher where all users will reside in the admin database and thus database level authentication is no longer supported.

The legacy construct

```js
var db = new Db('test', new Server('localhost', 27017));
db.open((err, db) => {
  // Authenticate
  db.admin().authenticate('root', 'root', (err, success) => {
    ....
  });
});
```

is replaced with

```js
new MongoClient(new Server('localhost', 27017), { user: 'root', password: 'root', authSource: 'admin' }).connect((err, client) => {
  ....
})
```

`MongoClient.connect` works as expected but it returns the MongoClient instance instead of a database object.

The legacy operation

```js
MongoClient.connect('mongodb://localhost:27017/test', (err, db) => {
  // Database returned
});
```

is replaced with

```js
MongoClient.connect('mongodb://localhost:27017/test', (err, client) => {
  // Client returned
  var db = client.db('test');
});
```

`Collection.prototype.aggregate` now returns a cursor if a callback is provided. It used to return the resulting documents, which is the same as calling `cursor.toArray()` on the cursor we now pass to the callback.

## Other Changes

Below are more updates to the driver in the 3.0.0 release.
### Connection String

Following [changes to the MongoDB connection string specification](https://github.com/mongodb/specifications/commit/4631ccd4f825fb1a3aba204510023f9b4d193a05), authentication and hostname details in connection strings must now be URL-encoded. These changes reduce ambiguity in connection strings.

For example, whereas before `mongodb://u$ername:pa$$w{}rd@/tmp/mongodb-27017.sock/test` would have been a valid connection string (with username `u$ername`, password `pa$$w{}rd`, host `/tmp/mongodb-27017.sock` and auth database `test`), the connection string for those details would now have to be provided to MongoClient as `mongodb://u%24ername:pa%24%24w%7B%7Drd@%2Ftmp%2Fmongodb-27017.sock/test`.

Unsupported URL options in a connection string now log a warning instead of throwing an error.

For more information about connection strings, read the [connection string specification](https://github.com/mongodb/specifications/blob/master/source/connection-string/connection-string-spec.rst).

### `BulkWriteResult` & `BulkWriteError`

When errors occurred with bulk write operations in the past, the driver would callback or reject with the first write error, as well as passing the resulting `BulkWriteResult`.
For example:

```js
MongoClient.connect('mongodb://localhost', function(err, client) {
  const collection = client.db('foo').collection('test-collection')
  collection
    .insert({ id: 1 })
    .then(() => collection.insertMany([ { id: 1 }, { id: 1 } ]))
    .then(result => /* deal with errors in `result` */)
    .catch(err => /* no error is thrown for bulk errors */);
});
```

becomes:

```js
MongoClient.connect('mongodb://localhost', function(err, client) {
  const collection = client.db('foo').collection('test-collection')
  collection
    .insert({ id: 1 })
    .then(() => collection.insertMany([ { id: 1 }, { id: 1 } ]))
    .then(() => /* this will not be called in the event of a bulk write error */)
    .catch(err => /* deal with errors in `err` */);
});
```

Where the result of the failed operation is a `BulkWriteError` which has a child value `result` which is the original `BulkWriteResult`. Similarly, the callback form no longer calls back with an `(Error, BulkWriteResult)`, but instead just a `(BulkWriteError)`.

### `mapReduce` inlined results

When `Collection.prototype.mapReduce` is invoked with a callback that includes `out: 'inline'`, it would diverge from the `Promise`-based variant by returning additional data as positional arguments to the callback (`(err, result, stats, ...)`). This is no longer the case, both variants of the method will now return a single object for all results - a single value for the default case, and an object similar to the existing `Promise` form for cases where there is more data to pass to the user.

### Find

`find` and `findOne` no longer support the `fields` parameter. You can achieve the same results as the `fields` parameter by using `Cursor.prototype.project` or by passing the `projection` property in on the options object. Additionally, `find` does not support individual options like `skip` and `limit` as positional parameters. You must either pass in these parameters in the `options` object, or add them via `Cursor` methods like `Cursor.prototype.skip`.
### Aggregation

Support added for `comment` in the aggregation command. Support also added for a `hint` field in the aggregation `options`. If you use aggregation and try to use the `explain` flag while you have a `readConcern` or `writeConcern`, your query will now fail.

### `updateOne` & `updateMany`

The driver now ensures that updated documents contain atomic operators. For instance, if a user tries to update an existing document but passes in no operations (such as `$set`, `$unset`, or `$rename`), the driver will now error:

```js
let testCollection = db.collection('test');
testCollection.updateOne({_id: 'test'}, {});
// An error is returned: The update operation document must contain at least one atomic operator.
```

### Tests

We have updated all of the tests to use [Mocha](https://mochajs.org) and a new test runner, [`mongodb-test-runner`](https://github.com/mongodb-js/mongodb-test-runner), which sets up topologies for the test scenarios.
39.760976
211
0.750583
eng_Latn
0.980456
2cd132be1e1462ffe6dffcda190b47cb93ac987f
4,031
md
Markdown
Apache2 Self-Signed SSL Cert.md
alextechtips/servernotes
7a8267c4f813dadb93a9e426f77be65f2a58ebd1
[ "MIT" ]
null
null
null
Apache2 Self-Signed SSL Cert.md
alextechtips/servernotes
7a8267c4f813dadb93a9e426f77be65f2a58ebd1
[ "MIT" ]
null
null
null
Apache2 Self-Signed SSL Cert.md
alextechtips/servernotes
7a8267c4f813dadb93a9e426f77be65f2a58ebd1
[ "MIT" ]
null
null
null
<h1>How To Create a Self-Signed SSL Certificate for Apache in Ubuntu</h1>

<h3>Create the SSL Certificate</h3>

```
$ sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/apache-selfsigned.key -out /etc/ssl/certs/apache-selfsigned.crt
```

Fill out the prompts appropriately

```
Output
Country Name (2 letter code) [AU]:HK
State or Province Name (full name) [Some-State]:Kowloon
Locality Name (eg, city) []:WTS
Organization Name (eg, company) [Internet Widgits Pty Ltd]:ATT Studio
Organizational Unit Name (eg, section) []:Server Team
Common Name (e.g. server FQDN or YOUR name) []:server_IP_address or testing.com
Email Address []:info@testing.com
```

<h3>Create a strong Diffie-Hellman group</h3>

```
$ sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048
```

<h3>Configure Apache to Use SSL</h3>

```
$ sudo touch /etc/apache2/conf-available/ssl-params.conf
```

and edit it like below

```
$ sudo nano /etc/apache2/conf-available/ssl-params.conf
```

```
# from https://cipherli.st/
# and https://raymii.org/s/tutorials/Strong_SSL_Security_On_Apache2.html
SSLCipherSuite EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH
SSLProtocol All -SSLv2 -SSLv3
SSLHonorCipherOrder On
# Disable preloading HSTS for now. You can use the commented out header line that includes
# the "preload" directive if you understand the implications.
#Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains"
Header always set X-Frame-Options DENY
Header always set X-Content-Type-Options nosniff
# Requires Apache >= 2.4
SSLCompression off
SSLSessionTickets Off
SSLUseStapling on
SSLStaplingCache "shmcb:logs/stapling-cache(150000)"
SSLOpenSSLConfCmd DHParameters "/etc/ssl/certs/dhparam.pem"
```

<h3>Modify the Default Apache SSL Virtual Host File</h3>

```
$ sudo cp /etc/apache2/sites-available/default-ssl.conf /etc/apache2/sites-available/testing.com-ssl.conf
$ sudo nano /etc/apache2/sites-available/testing.com-ssl.conf
```

edit the file like below

```
<IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        #ServerAdmin webmaster@localhost
        ServerAdmin info@testing.com                                  <---
        DocumentRoot /var/www/testing.com                             <---

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        SSLEngine on

        #SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
        #SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
        SSLCertificateFile /etc/ssl/certs/apache-selfsigned.crt       <---
        SSLCertificateKeyFile /etc/ssl/private/apache-selfsigned.key  <---

        <FilesMatch "\.(cgi|shtml|phtml|php)$">
            SSLOptions +StdEnvVars
        </FilesMatch>
        <Directory /usr/lib/cgi-bin>
            SSLOptions +StdEnvVars
        </Directory>

        BrowserMatch "MSIE [2-6]" \                                   <---
            nokeepalive ssl-unclean-shutdown \                        <---
            downgrade-1.0 force-response-1.0                          <---

    </VirtualHost>
</IfModule>
```

<h3>Enable the Changes in Apache</h3>

```
$ sudo a2enmod ssl
$ sudo a2enmod headers
```

<h3>Enable our SSL Virtual Host with the a2ensite command</h3>

```
$ sudo a2ensite testing.com-ssl
```

<h3>Enable ssl-params.conf</h3>

```
$ sudo a2enconf ssl-params
```

<h3>Check to make sure that there are no syntax errors in our files</h3>

```
$ sudo apache2ctl configtest
```

<h3>Restart apache</h3>

```
$ sudo systemctl restart apache2
```

<h3>Add redirect to .htaccess file</h3>

```
# BEGIN rlrssslReallySimpleSSL rsssl_version[3.3.1]
<IfModule mod_rewrite.c>
RewriteEngine on
RewriteCond %{HTTPS} !=on [NC]
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
</IfModule>
# END rlrssslReallySimpleSSL
```
27.8
147
0.667576
kor_Hang
0.331832
2cd1daa528e8a1932008958d70e388bf904dee60
9,845
md
Markdown
articles/analysis-services/analysis-services-refresh-azure-automation.md
gbuchmsft/azure-docs
bc943dc048d9ab98caf4706b022eb5c6421ec459
[ "CC-BY-4.0", "MIT" ]
1
2020-07-20T12:23:11.000Z
2020-07-20T12:23:11.000Z
articles/analysis-services/analysis-services-refresh-azure-automation.md
gbuchmsft/azure-docs
bc943dc048d9ab98caf4706b022eb5c6421ec459
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/analysis-services/analysis-services-refresh-azure-automation.md
gbuchmsft/azure-docs
bc943dc048d9ab98caf4706b022eb5c6421ec459
[ "CC-BY-4.0", "MIT" ]
1
2020-09-07T03:34:02.000Z
2020-09-07T03:34:02.000Z
---
title: Refresh Azure Analysis Services models with Azure Automation | Microsoft Docs
description: This article describes how to code model refreshes for Azure Analysis Services by using Azure Automation.
author: chrislound
ms.service: analysis-services
ms.topic: conceptual
ms.date: 05/07/2020
ms.author: chlound
---

# Refresh with Azure Automation

By using Azure Automation and PowerShell Runbooks, you can perform automated data refresh operations on your Azure Analysis Services tabular models.

The example in this article uses the [PowerShell SqlServer modules](https://docs.microsoft.com/powershell/module/sqlserver/?view=sqlserver-ps).

A sample PowerShell Runbook that demonstrates refreshing a model is provided later in this article.

## Authentication

All calls must be authenticated with a valid Azure Active Directory (OAuth 2) token. The example in this article uses a Service Principal (SPN) to authenticate to Azure Analysis Services.

To learn more about creating a Service Principal, see [Create a service principal by using Azure portal](../active-directory/develop/howto-create-service-principal-portal.md).

## Prerequisites

> [!IMPORTANT]
> The following example assumes the Azure Analysis Services firewall is disabled. If the firewall is enabled, then the public IP address of the request initiator will need to be whitelisted in the firewall.

### Install SqlServer modules from PowerShell gallery

1. In your Azure Automation Account, click **Modules**, then **Browse gallery**.

2. In the search bar, search for **SqlServer**.

   ![Search Modules](./media/analysis-services-refresh-azure-automation/1.png)

3. Select SqlServer, then click **Import**.

   ![Import Module](./media/analysis-services-refresh-azure-automation/2.png)

4. Click **OK**.

### Create a Service Principal (SPN)

To learn about creating a Service Principal, see [Create a service principal by using Azure portal](../active-directory/develop/howto-create-service-principal-portal.md).

### Configure permissions in Azure Analysis Services

The Service Principal you create must have server administrator permissions on the server. To learn more, see [Add a service principal to the server administrator role](analysis-services-addservprinc-admins.md).

## Design the Azure Automation Runbook

1. In the Automation Account, create a **Credentials** resource which will be used to securely store the Service Principal.

   ![Create credential](./media/analysis-services-refresh-azure-automation/6.png)

2. Enter the details for the credential. In **User name**, enter the service principal Application Id (appid), and then in **Password**, enter the service principal Secret.

   ![Create credential](./media/analysis-services-refresh-azure-automation/7.png)

3. Import the Automation Runbook.

   ![Import Runbook](./media/analysis-services-refresh-azure-automation/8.png)

4. Browse for the **Refresh-Model.ps1** file, provide a **Name** and **Description**, and then click **Create**.

   ![Import Runbook](./media/analysis-services-refresh-azure-automation/9.png)

5. When the Runbook has been created, it will automatically go into edit mode. Select **Publish**.

   ![Publish Runbook](./media/analysis-services-refresh-azure-automation/10.png)

   > [!NOTE]
   > The credential resource that was created previously is retrieved by the runbook by using the **Get-AutomationPSCredential** command. This command is then passed to the **Invoke-ProcessASDatabase** PowerShell command to perform the authentication to Azure Analysis Services.

6. Test the runbook by clicking **Start**.

   ![Start the Runbook](./media/analysis-services-refresh-azure-automation/11.png)

7. Fill out the **DATABASENAME**, **ANALYSISSERVER**, and **REFRESHTYPE** parameters, and then click **OK**. The **WEBHOOKDATA** parameter is not required when the Runbook is run manually.

   ![Start the Runbook](./media/analysis-services-refresh-azure-automation/12.png)

If the Runbook executed successfully, you will receive an output like the following:

![Successful Run](./media/analysis-services-refresh-azure-automation/13.png)

## Use a self-contained Azure Automation Runbook

The Runbook can be configured to trigger the Azure Analysis Services model refresh on a scheduled basis. This can be configured as follows:

1. In the Automation Runbook, click **Schedules**, then **Add a Schedule**.

   ![Create schedule](./media/analysis-services-refresh-azure-automation/14.png)

2. Click **Schedule** > **Create a new schedule**, and then fill in the details.

   ![Configure schedule](./media/analysis-services-refresh-azure-automation/15.png)

3. Click **Create**.

4. Fill in the parameters for the schedule. These will be used each time the Runbook triggers. The **WEBHOOKDATA** parameter should be left blank when running via a schedule.

   ![Configure parameters](./media/analysis-services-refresh-azure-automation/16.png)

5. Click **OK**.

## Consume with Data Factory

To consume the runbook by using Azure Data Factory, first create a **Webhook** for the runbook. The **Webhook** will provide a URL which can be called via an Azure Data Factory web activity.

> [!IMPORTANT]
> To create a **Webhook**, the status of the Runbook must be **Published**.

1. In your Automation Runbook, click **Webhooks**, and then click **Add Webhook**.

   ![Add Webhook](./media/analysis-services-refresh-azure-automation/17.png)

2. Give the Webhook a name and an expiry. The name only identifies the Webhook inside the Automation Runbook; it doesn't form part of the URL.

   > [!CAUTION]
   > Ensure you copy the URL before closing the wizard, as you cannot get it back once closed.

   ![Configure Webhook](./media/analysis-services-refresh-azure-automation/18.png)

   The parameters for the webhook can remain blank. When configuring the Azure Data Factory web activity, the parameters can be passed into the body of the web call.

3. In Data Factory, configure a **web activity**.

### Example

![Example Web Activity](./media/analysis-services-refresh-azure-automation/19.png)

The **URL** is the URL created from the Webhook.

The **body** is a JSON document which should contain the following properties:

|Property |Value |
|---------|---------|
|**AnalysisServicesDatabaseName** |The name of the Azure Analysis Services database. <br/> Example: AdventureWorksDB |
|**AnalysisServicesServer** |The Azure Analysis Services server name. <br/> Example: https:\//westus.asazure.windows.net/servers/myserver/models/AdventureWorks/ |
|**DatabaseRefreshType** |The type of refresh to perform. <br/> Example: Full |

Example JSON body:

```json
{
    "AnalysisServicesDatabaseName": "AdventureWorksDB",
    "AnalysisServicesServer": "asazure://westeurope.asazure.windows.net/MyAnalysisServer",
    "DatabaseRefreshType": "Full"
}
```

These parameters are defined in the runbook PowerShell script. When the web activity is executed, the JSON payload passed is WEBHOOKDATA. This is deserialized and stored as PowerShell parameters, which are then used by the Invoke-ProcessASDatabase PowerShell command.

![Deserialized Webhook](./media/analysis-services-refresh-azure-automation/20.png)

## Use a Hybrid Worker with Azure Analysis Services

An Azure Virtual Machine with a static public IP address can be used as an Azure Automation Hybrid Worker. This public IP address can then be added to the Azure Analysis Services firewall.

> [!IMPORTANT]
> Ensure the Virtual Machine public IP address is configured as static.
>
> To learn more about configuring Azure Automation Hybrid Workers, see [Hybrid Runbook Worker installation](../automation/automation-hybrid-runbook-worker.md#hybrid-runbook-worker-installation).

Once a Hybrid Worker is configured, create a Webhook as described in the section [Consume with Data Factory](#consume-with-data-factory). The only difference here is to select the **Run on** > **Hybrid Worker** option when configuring the Webhook.

Example webhook using Hybrid Worker:

![Example Hybrid Worker Webhook](./media/analysis-services-refresh-azure-automation/21.png)

## Sample PowerShell Runbook

The following code snippet is an example of how to perform the Azure Analysis Services model refresh using a PowerShell Runbook.

```powershell
param
(
    [Parameter (Mandatory = $false)]
    [object] $WebhookData,

    [Parameter (Mandatory = $false)]
    [String] $DatabaseName,

    [Parameter (Mandatory = $false)]
    [String] $AnalysisServer,

    [Parameter (Mandatory = $false)]
    [String] $RefreshType
)

$_credential = Get-AutomationPSCredential -Name "ServicePrincipal"

# If runbook was called from Webhook, WebhookData will not be null.
if ($WebhookData)
{
    # Retrieve AAS details from Webhook request body
    $atmParameters = (ConvertFrom-Json -InputObject $WebhookData.RequestBody)
    Write-Output "CredentialName: $($atmParameters.CredentialName)"
    Write-Output "AnalysisServicesDatabaseName: $($atmParameters.AnalysisServicesDatabaseName)"
    Write-Output "AnalysisServicesServer: $($atmParameters.AnalysisServicesServer)"
    Write-Output "DatabaseRefreshType: $($atmParameters.DatabaseRefreshType)"

    $_databaseName = $atmParameters.AnalysisServicesDatabaseName
    $_analysisServer = $atmParameters.AnalysisServicesServer
    $_refreshType = $atmParameters.DatabaseRefreshType

    Invoke-ProcessASDatabase -DatabaseName $_databaseName -RefreshType $_refreshType -Server $_analysisServer -ServicePrincipal -Credential $_credential
}
else
{
    Invoke-ProcessASDatabase -DatabaseName $DatabaseName -RefreshType $RefreshType -Server $AnalysisServer -ServicePrincipal -Credential $_credential
}
```

## Next steps

[Samples](analysis-services-samples.md)
[REST API](https://docs.microsoft.com/rest/api/analysisservices/servers)
43.561947
281
0.760995
eng_Latn
0.846574
2cd201257ecd9298ce41096a626bd31dcce6fc24
2,368
md
Markdown
docs/parallel/concrt/reference/iumscompletionlist-structure.md
pjessesco/cpp-docs.ko-kr
3c4c1b9bd72080b97110f17a4b2909a6913f12bf
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/parallel/concrt/reference/iumscompletionlist-structure.md
pjessesco/cpp-docs.ko-kr
3c4c1b9bd72080b97110f17a4b2909a6913f12bf
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/parallel/concrt/reference/iumscompletionlist-structure.md
pjessesco/cpp-docs.ko-kr
3c4c1b9bd72080b97110f17a4b2909a6913f12bf
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
description: 'Learn more about: IUMSCompletionList Structure'
title: IUMSCompletionList Structure
ms.date: 11/04/2016
f1_keywords:
- IUMSCompletionList
- CONCRTRM/concurrency::IUMSCompletionList
- CONCRTRM/concurrency::IUMSCompletionList::IUMSCompletionList::GetUnblockNotifications
helpviewer_keywords:
- IUMSCompletionList structure
ms.assetid: 81b5250e-3065-492c-b20d-2cdabf12271a
ms.openlocfilehash: b54766e8b1c6f2e7c0afbb5e4e9a8efc0c455b4d
ms.sourcegitcommit: d6af41e42699628c3e2e6063ec7b03931a49a098
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 12/11/2020
ms.locfileid: "97334351"
---
# <a name="iumscompletionlist-structure"></a>IUMSCompletionList Structure

Represents a UMS completion list. When a UMS thread blocks, the scheduler's designated scheduling context is dispatched in order to decide what to schedule on the underlying virtual processor root while the original thread is blocked. When the original thread unblocks, the operating system queues it to the completion list, which is accessible through this interface. The scheduler can query the completion list on the designated scheduling context, or at any other place it searches for work.

## <a name="syntax"></a>Syntax

```cpp
struct IUMSCompletionList;
```

## <a name="members"></a>Members

### <a name="public-methods"></a>Public Methods

|Name|Description|
|----------|-----------------|
|[IUMSCompletionList::GetUnblockNotifications](#getunblocknotifications)|Retrieves a chain of `IUMSUnblockNotification` interfaces representing execution contexts whose associated thread proxies have unblocked since the last time this method was invoked.|

## <a name="remarks"></a>Remarks

The scheduler must take exceptional care with what actions it performs after dequeuing items from the completion list. The items should be placed on the scheduler's list of runnable contexts and be generally accessible as soon as possible. It is entirely possible that one of the dequeued items has been given ownership of an arbitrary lock. The scheduler can make no arbitrary function calls that have the potential to block between the call that dequeues items and the placement of those items on a list that can be generally accessed within the scheduler.

## <a name="inheritance-hierarchy"></a>Inheritance Hierarchy

`IUMSCompletionList`

## <a name="requirements"></a>Requirements

**Header:** concrtrm.h

**Namespace:** concurrency

## <a name="iumscompletionlistgetunblocknotifications-method"></a><a name="getunblocknotifications"></a> IUMSCompletionList::GetUnblockNotifications Method

Retrieves a chain of `IUMSUnblockNotification` interfaces representing execution contexts whose associated thread proxies have unblocked since the last time this method was invoked.

```cpp
virtual IUMSUnblockNotification *GetUnblockNotifications() = 0;
```

### <a name="return-value"></a>Return Value

A chain of `IUMSUnblockNotification` interfaces.

### <a name="remarks"></a>Remarks

The returned notifications are invalid once the execution contexts have been rescheduled.

## <a name="see-also"></a>See also

[concurrency Namespace](concurrency-namespace.md)<br/>
[IUMSScheduler Structure](iumsscheduler-structure.md)<br/>
[IUMSUnblockNotification Structure](iumsunblocknotification-structure.md)
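The dequeue-and-make-runnable pattern from the Remarks section can be sketched with stand-in types. The names below are illustrative only, not the real ConcRT declarations from `<concrtrm.h>`:

```cpp
#include <vector>

// Illustrative stand-ins for the ConcRT interfaces; the real declarations
// live in <concrtrm.h> and carry thread-proxy state instead of an int id.
struct UnblockNotification {
    UnblockNotification* next;  // plays the role of GetNextUnblockNotification()
    int contextId;              // plays the role of GetContext()
};

// Walk the chain returned by GetUnblockNotifications() and move every
// unblocked context onto the scheduler's runnables list. Note that nothing
// here blocks: a dequeued context may own an arbitrary lock, so the
// scheduler must make it runnable without calling anything that could wait.
std::vector<int> makeRunnable(UnblockNotification* head) {
    std::vector<int> runnable;
    for (UnblockNotification* n = head; n != nullptr; n = n->next)
        runnable.push_back(n->contextId);
    return runnable;
}
```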
32.888889
284
0.739443
kor_Hang
0.999996
2cd2480f02cb3b35b63245967c07c31489f34b5f
6,068
md
Markdown
README.md
stokito/memeater
32615f58a745cf707191d678e18e760affdd0b49
[ "Apache-2.0" ]
null
null
null
README.md
stokito/memeater
32615f58a745cf707191d678e18e760affdd0b49
[ "Apache-2.0" ]
null
null
null
README.md
stokito/memeater
32615f58a745cf707191d678e18e760affdd0b49
[ "Apache-2.0" ]
null
null
null
# Java Memory Eater

## What are Runtime.getRuntime().totalMemory() and freeMemory()?

According to the [API](https://docs.oracle.com/javase/10/docs/api/java/lang/Runtime.html):

totalMemory()

    Returns the total amount of memory in the Java virtual machine. The value returned by this method may vary over time, depending on the host environment. Note that the amount of memory required to hold an object of any given type may be implementation-dependent.

maxMemory()

    Returns the maximum amount of memory that the Java virtual machine will attempt to use. If there is no inherent limit then the value Long.MAX_VALUE will be returned.

freeMemory()

    Returns the amount of free memory in the Java Virtual Machine. Calling the gc method may result in increasing the value returned by freeMemory.

In reference to your question, `maxMemory()` returns the `-Xmx` value.

You may be wondering why there is a **totalMemory()** AND a **maxMemory()**. The answer is that the JVM allocates memory lazily. Let's say you start your Java process as such:

    java -Xms64m -Xmx1024m Foo

Your process starts with 64mb of memory, and if and when it needs more (up to 1024m), it will allocate memory. `totalMemory()` corresponds to the amount of memory *currently* available to the JVM for Foo. If the JVM needs more memory, it will lazily allocate it *up to* the maximum memory. If you run with `-Xms1024m -Xmx1024m`, the values you get from `totalMemory()` and `maxMemory()` will be equal.

Also, if you want to accurately calculate the amount of *used* memory, you do so with the following calculation:

    final long usedMem = totalMemory() - freeMemory();

## My machine

    $ free -m
                  total        used        free      shared  buff/cache   available
    Mem:          15887        9328        1005         293        5554        6634
    Swap:          9999          28        9971

## Without limits

    MaxHeapSize: 4164943872 = 3972mb i.e. 15G of total mem of machine / 4
    Max JVM memory: 3702521856 = 3531mb
    Total JVM memory: 251658240 = 240mb i.e. 15G of total mem of machine / 64
    Allocated: 60mb, consumed: 65557960 bytes, free memory: 186100280 bytes i.e. 177.47905731201172mb
    [GC (Allocation Failure) 64021K->61896K(245760K), 3.7961610 secs]

The program continued to work and wasn't killed by Docker's OOM killer. I stopped it after the allocation of 60mb and the first execution of GC.

## -Xmx20m

    MaxHeapSize: 20971520 = 20mb
    Max JVM memory: 20447232 = 19.5mb
    Total JVM memory: 20447232 = 19.5mb
    Allocated: 18mb, consumed: 19266408 bytes, free memory: 1180824 bytes i.e. 1.1261215209960938mb
    [Full GC (Ergonomics) 18814K->18747K(19968K), 0.5381021 secs]
    [Full GC (Allocation Failure) 18747K->18747K(19968K), 0.4546573 secs]
    Catching out of memory error

## -Xmx21m

    MaxHeapSize: 23068672 = 22mb
    Max JVM memory: 22544384 = 21.5mb
    Total JVM memory: 22544384 = 21.5mb
    Allocated: 19mb, consumed: 20382296 bytes, free memory: 2162088 bytes i.e. 2.0619277954101562mb
    [Full GC (Ergonomics) 19904K->19771K(22016K), 0.9921653 secs]
    [Full GC (Allocation Failure) 19771K->19771K(22016K), 0.5112764 secs]
    Catching out of memory error

The JVM rounded the heap size up to the 2 MB boundary.

## -Xmx22m

    MaxHeapSize: 23068672 = 22mb
    Max JVM memory: 22544384 = 21.5mb
    Total JVM memory: 22544384 = 21.5mb
    Allocated: 19mb, consumed: 20382296 bytes, free memory: 2162088 bytes i.e. 2.0619277954101562mb
    [Full GC (Ergonomics) 19904K->19771K(22016K), 0.6735945 secs]
    [Full GC (Allocation Failure) 19771K->19771K(22016K), 0.3456865 secs]
    Catching out of memory error

## -Xmx10m -m=20m --memory-swap=20m --memory-swappiness=0 --kernel-memory=20m

    MaxHeapSize: 10485760 = 10mb
    Max JVM memory: 9961472 = 9.5mb
    Total JVM memory: 9961472 = 9.5mb
    Allocated: 7mb, consumed: 7961224 bytes, total: 9961472, free memory: 2000248 bytes i.e. 1.9075851440429688mb
    [GC (Allocation Failure) -- 7774K->7774K(9728K), 0.0013812 secs]
    [Full GC (Ergonomics) 7774K->7496K(9728K), 0.0028324 secs]
    [GC (Allocation Failure) -- 7496K->7504K(9728K), 0.0029179 secs]
    [Full GC (Allocation Failure) 7504K->7496K(9728K), 0.0030763 secs]
    Catching out of memory error

GC tried to clean up twice, and when it saw that memory wasn't cleared (old 7504K -> new 7496K), it threw an OOM exception. The Linux kernel of the container itself took about 8mb of memory, so we allowed the JVM only 10mb (-Xmx10m). Thus we can see that the OOM exception was thrown by the JVM instead of the container being halted by Docker's OOM killer.

    root@memeater$ free
                  total        used        free      shared  buff/cache   available
    Mem:       16269252     9522456     1276612      315108     5470184     6503112
    Swap:      10239484       30208    10209276

## Limit JVM and Docker to use recovery mode

    docker run --name=memeater -m=20m --memory-swap=20m --memory-swappiness=0 --kernel-memory=20m memeater -e JAVA_OPTS="-Xmx10m" -recover

    MaxHeapSize: 14680064 = 14mb
    Max JVM memory: 14155776 = 13.5mb
    Total JVM memory: 14155776 = 13.5mb
    Allocated: 8mb, consumed: 8861864 bytes, total: 14155776, free memory: 5293912 bytes i.e. 5.048667907714844mb
    [GC (Allocation Failure)

As you can see, Docker killed the container when the JVM tried to execute GC. Thus the Java app wasn't shut down gracefully.

## Allow JVM to use all memory from cgroups

    docker run --name=memeater -m=25m --memory-swap=25m --memory-swappiness=0 --kernel-memory=25m -e JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap" memeater -recover

    InitialHeapSize: 8388608 = 8mb
    MaxHeapSize: 14680064 = 14mb
    Max JVM memory: 13107200 = 12.5mb
    Total JVM memory: 7864320 = 7.5mb
    Allocated: 9mb, consumed: 10418312 bytes, total: 11010048, free memory: 591736 bytes i.e. 0.5643234252929688mb
    /docker-entry.sh: line 5: 7 Killed java $JAVA_OPTS -XX:+PrintFlagsFinal -verbose:gc -Djava.security.egd=file:/dev/./urandom -jar /app.jar "$@"
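The used-memory calculation discussed above can be checked with a small standalone program (illustrative only, not part of the memeater jar):

```java
public class MemInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // the -Xmx value (or a platform default)
        long total = rt.totalMemory(); // memory currently claimed from the OS
        long free = rt.freeMemory();   // unused part of the claimed memory
        long used = total - free;      // actual consumption right now
        System.out.println("max=" + max + " total=" + total + " used=" + used);
    }
}
```

Run it with different `-Xms`/`-Xmx` values to reproduce the tables above; with `-Xms` equal to `-Xmx`, `total` matches `max` from the start.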
49.737705
403
0.705669
eng_Latn
0.890923
2cd32f4c5a6da195b2d0dbe8c544bd6e33930271
3,450
md
Markdown
README.md
cek-open-source-club/KTU-S1-S2-CS-IT-Python_C-programs
9e059ce56e6ef6a7814b94a4cfb7df5059f8bd6e
[ "BSD-3-Clause" ]
4
2018-12-01T15:37:09.000Z
2019-09-21T07:20:58.000Z
README.md
cek-open-source-club/KTU-S1-S2-CS-IT-Python_C-programs
9e059ce56e6ef6a7814b94a4cfb7df5059f8bd6e
[ "BSD-3-Clause" ]
null
null
null
README.md
cek-open-source-club/KTU-S1-S2-CS-IT-Python_C-programs
9e059ce56e6ef6a7814b94a4cfb7df5059f8bd6e
[ "BSD-3-Clause" ]
2
2018-11-30T14:01:51.000Z
2019-11-03T16:15:50.000Z
# KTU-S1-S2-CS-IT-Python_C-programs

***
For a Better Understanding Of Python go to **Siddharth Prajosh(@sprajosh)**'s Repo *[Python tutorials for beginners](https://github.com/sprajosh/basic-python)*
***

### All the programs and code for KTU S1/S2 CS/IT. The code for S1 is in Python, while that for S2 is in C.

***

- To download the repository:

> `git clone https://github.com/cek-freshers-club/KTU-S1-S2-CS-IT-Python_C-programs.git`

___

[The List of Programs Are Here](Python%20Programmes/List%20of%20Programs.txt):point_left:

1. [Program for the sum of 2 numbers](Python%20Programmes/1_sum_of_two_numbers.py)
2. [Program for finding the Area of a Circle](Python%20Programmes/2_Area_of_circle.py)
3. [Program for finding the Simple Interest](Python%20Programmes/3_simple_interest.py)
4. [Program for finding the Area and Perimeter of a Rectangle](Python%20Programmes/4_rectangle.py)
5. [Program for converting Celsius temp to Fahrenheit temp](Python%20Programmes/5_Celcius_2_Fahren_.py)
6. [Program to Swap two Numbers](Python%20Programmes/6_swap.py)
7. [Program to Swap two Numbers with a Temporary Variable](Python%20Programmes/7_Swap_Using_temp_var.py)
8. [Program to Find the Maximum of Two Given Numbers](Python%20Programmes/8_max_of_two_nos.py)
9. [Program to Find the Maximum of Three Given Numbers](Python%20Programmes/9_max_of_3_nos.py)
10. [Program to Check whether a Given Number is Even or Odd](Python%20Programmes/10_even_odd.py)
11. [Program to Check whether a Given Number is Divisible by 2 or 3](Python%20Programmes/11_divisible_by_2_or_3.py)
12. [Program to Find the Roots of a Quadratic Equation](Python%20Programmes/12_roots_of_quad_eq.py)
13. [Program to Find the Largest of 'n' Given Numbers](Python%20Programmes/13_largest_of_n_numbers.py)
14. [Program to Print Natural Numbers up to 'n'](Python%20Programmes/15_Sum_of_n_numbers.py)
15. [Program to Print the Sum of 'n' Numbers](Python%20Programmes/15_Sum_of_n_numbers.py)
16. [Program to Find the Factorial of a Number](Python%20Programmes/16_factorial_of_a_number.py)
17. [Program to Print the Sum of Digits of a Number](Python%20Programmes/17_Sum_of_digits_of_a_number.py)
18. [Program to Print Even Numbers from 50 to 2](Python%20Programmes/18_Even_numbers_from_50_to_2.py)
19. [Program to Print the Count of Digits of a Number](Python%20Programmes/19_Count_of_digits_in_a_number.py)
20. [Program to Print the Reverse of a Number](Python%20Programmes/20_Reverse_of_number.py)
21. [Program to Check if a Given Number is Armstrong or Not](Python%20Programmes/21_Armstrong_or_not.py)
22. [Program to Check if a Number is Prime or Not](Python%20Programmes/22_Prime_or_not.py)
23. [Program to Find Armstrong Numbers in a Given Range](Python%20Programmes/23_Armstrong_number_in_a_given_range.py)
24. [Program to Print Prime Numbers in a Given Range](Python%20Programmes/24_Prime_numbers_in_a_given_range.py)
25. [Program to Print the Fibonacci Sequence](Python%20Programmes/25_fibonacci_sequence.py)
26. [Program to Print a Pattern in Asterisks](Python%20Programmes/26_pattern_*.py)
27. [Program to Print a Pattern in Numbers](Python%20Programmes/27_pattern_num.py)
28. [Program to Make a Simple Calculator using Python](Python%20Programmes/28_calculator.ipynb)
29. [Program to Find the HCF of Two Numbers](Python%20Programmes/29_HCF.py)

___

Created With :heart: by Abhinav Prasad(@abhinavprasad47) & Athul Cyriac Ajay(@Athul-CA), College of Engineering, Kidangoor
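As a flavor of the exercises in the list, here is a minimal sketch of the Armstrong-number check (the idea behind programs 21 and 23 above; the actual course files may differ):

```python
# An Armstrong number equals the sum of its digits, each raised to the
# power of the digit count (e.g. 153 = 1**3 + 5**3 + 3**3).
def is_armstrong(n):
    digits = str(n)
    return n == sum(int(d) ** len(digits) for d in digits)

# Armstrong numbers in a given range, as in program 23.
print([x for x in range(1, 1000) if is_armstrong(x)])
# → [1, 2, 3, 4, 5, 6, 7, 8, 9, 153, 370, 371, 407]
```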
76.666667
159
0.794783
eng_Latn
0.58906
2cd4522cf7b74ad55d9365d915369a8db7d43e33
2,665
md
Markdown
snippets/neoui/plugin/date/base.md
iuap-design/tinper.io
3ea2a56e23c1993520c1f7df566acf29b7815106
[ "MIT" ]
18
2016-09-25T08:57:22.000Z
2019-06-25T13:23:06.000Z
snippets/neoui/plugin/date/base.md
iuap-design/tinper.io
3ea2a56e23c1993520c1f7df566acf29b7815106
[ "MIT" ]
42
2016-10-19T02:06:18.000Z
2018-01-23T02:48:01.000Z
snippets/neoui/plugin/date/base.md
iuap-design/tinper.io
3ea2a56e23c1993520c1f7df566acf29b7815106
[ "MIT" ]
3
2017-04-13T05:59:54.000Z
2019-04-06T04:00:00.000Z
# Date

Users can customize the date display format. By default the returned date is year-month-day; it can also return year-month-day hour:minute:second.

[Try it](http://tinper.org/webide/#/demos/ui/datetime) (http://tinper.org/webide/#/demos/ui/datetime)

Users can add a `format` attribute to the `u-datepicker` DOM element to customize the date display format. The format tokens are defined as follows:

| | Token | Output |
| ------------- |:-------------:| -----:|
| Year | YY | 70 71 ... 29 30 |
| | YYYY | 1970 1971 ... 2029 2030 |
| Month | M | 1 2 ... 11 12 |
| | MM | 01 02 ... 11 12 |
| | MMM | 1月 2月 ... 11月 12月 |
| | MMMM | 一月 二月 ... 十一月 十二月 |
| Day of Month | D | 1 2 ... 30 31 |
| | DD | 01 02 ... 30 31 |
| Hour | H | 0 1 ... 22 23 |
| | HH | 00 01 ... 22 23 |
| | h | 1 2 ... 11 12 |
| | hh | 01 02 ... 11 12 |
| Minute | m | 0 1 ... 58 59 |
| | mm | 00 01 ... 58 59 |
| Second | s | 0 1 ... 58 59 |
| | ss | 00 01 ... 58 59 |
| 12-hour suffix | a | am/pm |

# API

## \# DateTimePicker object

* Type: `Object`
* Description: DateTimePicker represents a date/time object
* Usage: To obtain it: 1. get the DOM element the date is bound to; 2. read the property 'u.DateTimePicker' on that DOM element

```
var dateObject = document.getElementById('domId')['u.DateTimePicker'];
```

**Note:** If the date object you get is empty, the date control has not been initialized successfully. Call `u.compMgr.updateComp();` first to initialize the controls on the page, then get the date object again.

## Methods

### \# setDate

| Type | Description | Parameters |
| ------------- |:-------------:| -----:|
| Function | Sets a specific date | * `{String} dateStr`, in the format "YYYY-MM-DD hh:mm:ss" |

* Usage:

```
dateObject.setDate('2016-02-03'); you can also pass an empty value to clear the previously set value: dateObject.setDate('').
```

### \# setEnable

| Type | Description | Parameters |
| ------------- |:-------------:| -----:|
| Function | Enables or disables the date control | * `{Boolean}`, `true` enables the control, `false` disables it |

* Usage:

```
dateObject.setEnable(false);
```

### \# setStartDate

| Type | Description | Parameters |
| ------------- |:-------------:| -----:|
| Function | Sets the start date of the selectable range | * `{String} startDate`, in the format "YYYY-MM-DD" |

* Usage:

```
dateObject.setStartDate('2016-01-01');
```

### \# setEndDate

| Type | Description | Parameters |
| ------------- |:-------------:| -----:|
| Function | Sets the end date of the selectable range | * `{String} endDate`, in the format "YYYY-MM-DD" |

* Usage:

```
dateObject.setEndDate('2016-01-01');
```

### \# setFormat

| Type | Description | Parameters |
| ------------- |:-------------:| -----:|
| Function | Specifies the date display format | `{String} format`, see the format tokens above |

* Usage:

```
dateObject.setFormat('YYYY');
```

## Event

### \# select

| Type | Description | Parameters |
| ------------- |:-------------:| -----:|
| Function | Specifies the date display format | `{String} format`, see the format tokens above |

Related content:

[Using date in kero](http://tinper.org/dist/kero/docs/ex_datetime.html)

[Using date in grid](http://tinper.org/webide/#/demos/grids/edit)
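The token semantics in the table above can be illustrated with a tiny standalone formatter (this is not the library's own implementation, only a sketch of how the listed tokens map to output):

```javascript
// Minimal illustration of a subset of the format tokens from the table.
function pad(n) { return String(n).padStart(2, "0"); }

function formatDate(d, format) {
  const map = {
    YYYY: String(d.getFullYear()),
    YY: String(d.getFullYear()).slice(-2),
    MM: pad(d.getMonth() + 1),
    M: String(d.getMonth() + 1),
    DD: pad(d.getDate()),
    D: String(d.getDate()),
    HH: pad(d.getHours()),
    mm: pad(d.getMinutes()),
    ss: pad(d.getSeconds()),
  };
  // Longer tokens are listed first so "YYYY" is not consumed as two "YY"s.
  return format.replace(/YYYY|YY|MM|M|DD|D|HH|mm|ss/g, t => map[t]);
}

console.log(formatDate(new Date(2016, 1, 3), "YYYY-MM-DD")); // → 2016-02-03
```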
18.900709
83
0.463039
yue_Hant
0.853274
2cd4ad0febb58d97602796bf5a6c13349ae7408d
1,788
md
Markdown
README.md
feature23/OpenIDConnect
2537fd0fc0eab551489985c5c0cdaa19781e95bb
[ "MIT" ]
null
null
null
README.md
feature23/OpenIDConnect
2537fd0fc0eab551489985c5c0cdaa19781e95bb
[ "MIT" ]
null
null
null
README.md
feature23/OpenIDConnect
2537fd0fc0eab551489985c5c0cdaa19781e95bb
[ "MIT" ]
null
null
null
# OpenID Connect Utilities

A set of utility libraries to ease OpenID Connect integration with Azure Mobile Services and Xamarin.

## Azure Mobile Services .NET Backend

To use the F23.AzureMobileServices.OpenIDConnect library, add a reference to the DLL and create a new Web API controller in your project that inherits from OpenIDConnectLoginControllerBase:

```C#
[AuthorizeLevel(AuthorizationLevel.Anonymous)]
public class OpenIDLoginController : OpenIDConnectLoginControllerBase
{
    public OpenIDLoginController(IServiceTokenHandler tokenHandler)
        : base(tokenHandler, AppSettings.MsMasterKey, AppSettings.IdpSiteUrl)
    {
    }
}
```

Where AppSettings.MsMasterKey is the master key string to your Azure Mobile Service, and AppSettings.IdpSiteUrl is the base URL of your OpenID Connect provider (i.e. Thinktecture Identity Server v3+).

Then, from your mobile app, invoke a POST to /api/OpenIDLogin with your JWT token provided by the OpenID Connect provider (after authentication), in JSON format in the body of the HTTP request:

```JSON
{"jwtToken":"... your JWT token from OpenID Connect here ..."}
```

If successful, this will return a zumo authentication token to use with subsequent Azure Mobile Services requests to controllers that are marked with `[AuthorizeLevel(AuthorizationLevel.User)]`.

To provide custom behavior upon successful authentication, override the `UserLoginSuccessfulAsync` method. This will provide you with the user's username as well as their `ClaimsIdentity` for accessing the claims passed over from the OpenID Connect provider.

## License

This code is licensed under the MIT License. The full license information can be found in the LICENSE file.

Portions derived from jmichas' unlicensed gist: https://gist.github.com/jmichas/46b37235ae2b6058a820
55.875
258
0.805928
eng_Latn
0.955198
2cd53730985cfb69bb62b430032a96939d909591
1,310
md
Markdown
docs/legacy/LOGGING.md
MuhammadIsmailShahzad/ckan-cloud-operator
35a4ca88c4908d81d1040a21fca8904e77c4cded
[ "MIT" ]
14
2019-11-18T12:01:03.000Z
2021-09-15T15:29:50.000Z
docs/legacy/LOGGING.md
MuhammadIsmailShahzad/ckan-cloud-operator
35a4ca88c4908d81d1040a21fca8904e77c4cded
[ "MIT" ]
52
2019-09-09T14:22:41.000Z
2021-09-29T08:29:24.000Z
docs/legacy/LOGGING.md
MuhammadIsmailShahzad/ckan-cloud-operator
35a4ca88c4908d81d1040a21fca8904e77c4cded
[ "MIT" ]
8
2019-10-05T12:46:25.000Z
2021-09-15T15:13:05.000Z
# Logging

## Google Kubernetes Engine - Stackdriver

Get Stackdriver URLs to various cluster logs for the active ckan-cloud-operator environment:

```
echo &&\
echo cluster: &&\
ckan-cloud-operator config get --configmap-name ckan-cloud-provider-cluster-gcloud --template \
  'https://console.cloud.google.com/logs/viewer?project={project-id}&&resource=k8s_cluster%2Flocation%2F{cluster-compute-zone}%2Fcluster_name%2F{cluster-name}' &&\
echo &&\
echo nodes: &&\
ckan-cloud-operator config get --configmap-name ckan-cloud-provider-cluster-gcloud --template \
  'https://console.cloud.google.com/logs/viewer?project={project-id}&&resource=k8s_node%2Flocation%2F{cluster-compute-zone}%2Fcluster_name%2F{cluster-name}' &&\
echo &&\
echo pods: &&\
ckan-cloud-operator config get --configmap-name ckan-cloud-provider-cluster-gcloud --template \
  'https://console.cloud.google.com/logs/viewer?project={project-id}&&resource=k8s_pod%2Flocation%2F{cluster-compute-zone}%2Fcluster_name%2F{cluster-name}' &&\
echo &&\
echo containers: &&\
ckan-cloud-operator config get --configmap-name ckan-cloud-provider-cluster-gcloud --template \
  'https://console.cloud.google.com/logs/viewer?project={project-id}&&resource=k8s_container%2Flocation%2F{cluster-compute-zone}%2Fcluster_name%2F{cluster-name}' &&\
echo
```
50.384615
167
0.759542
kor_Hang
0.344311
2cd543a90573c55c828d1ede177227e4a5bb91b5
1,107
md
Markdown
docs/visual-basic/misc/late-bound-assignment-to-a-field-of-value-type-typename-is-not-valid.md
lucieva/docs.cs-cz
a688d6511d24a48fe53a201e160e9581f2effbf4
[ "CC-BY-4.0", "MIT" ]
1
2018-12-19T17:04:23.000Z
2018-12-19T17:04:23.000Z
docs/visual-basic/misc/late-bound-assignment-to-a-field-of-value-type-typename-is-not-valid.md
lucieva/docs.cs-cz
a688d6511d24a48fe53a201e160e9581f2effbf4
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/visual-basic/misc/late-bound-assignment-to-a-field-of-value-type-typename-is-not-valid.md
lucieva/docs.cs-cz
a688d6511d24a48fe53a201e160e9581f2effbf4
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Late-bound assignment to a field of value type &#39;&lt;typename&gt;&#39; is not valid when &#39;&lt;name&gt;&#39; is the result of a late-bound expression
ms.date: 07/20/2015
f1_keywords:
- vbrRValueBaseForValueType
ms.assetid: 050f05b4-7e56-4372-aae5-70b7d73b99e4
ms.openlocfilehash: 64a4e06a995a50064e5cd28f2f7be8dfb7e0b310
ms.sourcegitcommit: 3d5d33f384eeba41b2dff79d096f47ccc8d8f03d
ms.translationtype: MT
ms.contentlocale: cs-CZ
ms.lasthandoff: 05/04/2018
ms.locfileid: "33637636"
---
# <a name="late-bound-assignment-to-a-field-of-value-type-39lttypenamegt39-is-not-valid-when-39ltnamegt39-is-the-result-of-a-late-bound-expression"></a>Late-bound assignment to a field of value type &#39;&lt;typename&gt;&#39; is not valid when &#39;&lt;name&gt;&#39; is the result of a late-bound expression

You attempted a late-bound assignment that is not valid.

## <a name="to-correct-this-error"></a>To correct this error

- Make the assignment early bound.

## <a name="see-also"></a>See also

[Error Types](../../visual-basic/programming-guide/language-features/error-types.md)
46.125
296
0.747967
ces_Latn
0.984581
2cd678674624be60c9b651ebdf77d81bb4bad113
3,282
md
Markdown
docs/error-messages/tool-errors/linker-tools-warning-lnk4098.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/tool-errors/linker-tools-warning-lnk4098.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/error-messages/tool-errors/linker-tools-warning-lnk4098.md
Mdlglobal-atlassian-net/cpp-docs.it-it
c8edd4e9238d24b047d2b59a86e2a540f371bd93
[ "CC-BY-4.0", "MIT" ]
1
2020-05-28T15:54:57.000Z
2020-05-28T15:54:57.000Z
---
title: Linker tools warning LNK4098
description: Describes how incompatible libraries cause linker tools warning LNK4098, and how to use /NODEFAULTLIB to resolve it.
ms.date: 12/02/2019
f1_keywords:
- LNK4098
helpviewer_keywords:
- LNK4098
ms.assetid: 1f1b1408-1316-4e34-80f5-6a02f2db0ac1
ms.openlocfilehash: 9d0c7da0614651a98d5ed4f3bd3676c7d837ce67
ms.sourcegitcommit: d0504e2337bb671e78ec6dd1c7b05d89e7adf6a7
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 12/02/2019
ms.locfileid: "74682935"
---
# <a name="linker-tools-warning-lnk4098"></a>Linker tools warning LNK4098

> defaultlib "*library*" conflicts with use of other libs; use /NODEFAULTLIB:*library*

You are attempting to link with incompatible libraries.

> [!NOTE]
> The run-time libraries now contain directives to prevent mixing different types. You'll receive this warning if you attempt to use different types or debug and non-debug versions of the run-time library in the same program. For example, if you compiled one file to use one kind of run-time library and another file to use another kind (for example, debug versus retail) and attempted to link them, you'll get this warning. Compile all source files to use the same run-time library. For more information, see the [/MD, /MT, /LD (Use Run-Time Library)](../../build/reference/md-mt-ld-use-run-time-library.md) compiler options.

You can use the linker's [/VERBOSE:LIB](../../build/reference/verbose-print-progress-messages.md) option to determine which libraries the linker searches. For example, when the executable uses the non-debug, multithreaded run-time libraries, the list reported should include LIBCMT.lib and not LIBCMTD.lib, MSVCRT.lib, or MSVCRTD.lib.

You can tell the linker to ignore the incorrect run-time libraries by using [/NODEFAULTLIB](../../build/reference/nodefaultlib-ignore-libraries.md) for each library you want to ignore. The following table shows which libraries to ignore, depending on which run-time library you want to use. On the command line, use one **/NODEFAULTLIB** option for each library to ignore. In the Visual Studio IDE, separate the libraries to ignore with semicolons in the **Ignore Specific Default Libraries** property.

| To use this run-time library | Ignore these libraries |
|-----------------------------------|----------------------------|
| Multithreaded (LIBCMT.lib) | msvcrt.lib; libcmtd.lib; msvcrtd.lib |
| Multithreaded using DLL (MSVCRT.lib) | libcmt.lib; libcmtd.lib; msvcrtd.lib |
| Debug multithreaded (LIBCMTD.lib) | libcmt.lib; msvcrt.lib; msvcrtd.lib |
| Debug multithreaded using DLL (MSVCRTD.lib) | libcmt.lib; msvcrt.lib; libcmtd.lib |

For example, if you received this warning and you want to create an executable that uses the non-debug DLL version of the run-time libraries, you could use the following options with the linker:

```cmd
/NODEFAULTLIB:libcmt.lib /NODEFAULTLIB:libcmtd.lib /NODEFAULTLIB:msvcrtd.lib
```
78.142857
731
0.775137
ita_Latn
0.992411
2cd6d1cf349113b8e31123ded37ed73bb50bd072
954
md
Markdown
README.md
amercer1/jebe
dd922f6e12c43a96c550b39a0c2eb0154ad5ec5b
[ "MIT" ]
1
2015-06-28T06:35:26.000Z
2015-06-28T06:35:26.000Z
README.md
amercer1/jebe
dd922f6e12c43a96c550b39a0c2eb0154ad5ec5b
[ "MIT" ]
null
null
null
README.md
amercer1/jebe
dd922f6e12c43a96c550b39a0c2eb0154ad5ec5b
[ "MIT" ]
null
null
null
jebe
==========

A little word cloud generator for the Reddit NBA Finals post game threads.

## Installation

Install the requirements:

`sudo pip install -r requirements.txt`

## Generate the clouds

Run the thread web scraper:

`python scrape_reddit_threads.py`

This should generate text files in the text_files folder.

Run the word cloud builder:

`python build_world_clouds.py`

This should generate images in the images folder.

Note: You should be able to generate your own word clouds if you replace the list of r/nba threads in [scrape_reddit_threads.py](scrape_reddit_threads.py) with whichever reddit threads you choose, assuming reddit.com does not change its layout in the future.

## Examples

Example 1:

![Original Shape](images/game_4.png)

Example 2:

![Basketball Shape](images/basketball_game_4.png)

This project was named after one of Genghis Khan's generals, [Jebe](https://en.wikipedia.org/wiki/Jebe)
20.73913
268
0.767296
eng_Latn
0.982047
2cd6e7a57f28cef28f94571ba22871e73304369f
1,080
md
Markdown
README.md
schuwima/valheim-crash-saver
ac17f6f877487e234455f025d8ea1fe894819838
[ "Unlicense" ]
null
null
null
README.md
schuwima/valheim-crash-saver
ac17f6f877487e234455f025d8ea1fe894819838
[ "Unlicense" ]
1
2021-02-17T01:16:06.000Z
2021-02-17T01:16:06.000Z
README.md
schuwima/valheim-crash-saver
ac17f6f877487e234455f025d8ea1fe894819838
[ "Unlicense" ]
1
2021-02-16T05:07:30.000Z
2021-02-16T05:07:30.000Z
# valheim-crash-saver

Save your precious files from crashes.

Edit the variables to control the behaviour.

Default backup location is `C:\Users\username\Documents\_VALHEIM-CRASH-SAVER\`

### OPTIONS

1. `$WorldSave = $true|$false`
   * Enable or disable Worlds-Backup. Defaults to $true.
2. `$CharSave = $true`
   * Enable or disable Characters-Backup. Defaults to $true.
3. `$BackupPath = "$env:USERPROFILE\Documents\_VALHEIM-CRASH-SAVER\"`
   * Set the backup path. Defaults to `%USERPROFILE%\Documents\_VALHEIM-CRASH-SAVER\`
4. `$BackupSets = 50`
   * Number of files to be kept. Defaults to 50.
5. `$RunLoop = $true`
   * Whether the script should continue running and save periodically. False means one-shot. Defaults to $true.
6. `$SaveInterval = 600`
   * Interval between saves in seconds. Defaults to 10 minutes.
7. `$RunGame = $true`
   * Start the game via Steam. Defaults to $true.

### TODO

- Create option to install the script as a service
- Include Linux builds
- Optimization
- Farm more iron

### HOW TO USE

1. Download script
2. Right-click --> Run with PowerShell
27
111
0.716667
eng_Latn
0.722481
2cd7472fdb903e022c8ff1fec3b2f28a81077f6f
689
md
Markdown
2006/CVE-2006-3814.md
marcostolosa/cve
bfe85c74b105c623c9807e09b2b572f144bf1f1c
[ "MIT" ]
4
2022-03-01T12:31:42.000Z
2022-03-29T02:35:57.000Z
2006/CVE-2006-3814.md
az7rb/cve
ea036e0c97bb9d05e18e7f1aea0a746fcb25d312
[ "MIT" ]
null
null
null
2006/CVE-2006-3814.md
az7rb/cve
ea036e0c97bb9d05e18e7f1aea0a746fcb25d312
[ "MIT" ]
1
2022-02-24T21:07:04.000Z
2022-02-24T21:07:04.000Z
### [CVE-2006-3814](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2006-3814)

![](https://img.shields.io/static/v1?label=Product&message=n%2Fa&color=blue)
![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue)
![](https://img.shields.io/static/v1?label=Vulnerability&message=n%2Fa&color=brighgreen)

### Description

Buffer overflow in the Loader_XM::load_instrument_internal function in loader_xm.cpp for Cheese Tracker 0.9.9 and earlier allows user-assisted attackers to execute arbitrary code via a crafted file with a large amount of extra data.

### POC

#### Reference
- http://aluigi.altervista.org/adv/cheesebof-adv.txt

#### Github
No GitHub POC found.
38.277778
232
0.756168
eng_Latn
0.336535
2cd7fb8d58f9aa7b7a17f0d40297df65f77664ea
2,071
md
Markdown
docs/access/desktop-database-reference/forward-only-cursors.md
hubalazs/office-developer-client-docs
86d7b65f5c81941b00469fd02f3c957a14f2757b
[ "CC-BY-4.0", "MIT" ]
3
2020-10-26T02:38:53.000Z
2022-02-08T12:13:34.000Z
docs/access/desktop-database-reference/forward-only-cursors.md
hubalazs/office-developer-client-docs
86d7b65f5c81941b00469fd02f3c957a14f2757b
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/access/desktop-database-reference/forward-only-cursors.md
hubalazs/office-developer-client-docs
86d7b65f5c81941b00469fd02f3c957a14f2757b
[ "CC-BY-4.0", "MIT" ]
1
2020-12-30T07:57:56.000Z
2020-12-30T07:57:56.000Z
---
title: Forward-only cursors
TOCTitle: Forward-only cursors
ms:assetid: 27541bac-077b-bfe6-d9d8-713e4a945125
ms:mtpsurl: https://msdn.microsoft.com/library/JJ249035(v=office.15)
ms:contentKeyID: 48543834
ms.date: 09/18/2015
mtps_version: v=office.15
localization_priority: Normal
---

# Forward-only cursors

**Applies to**: Access 2013, Office 2013

The typical default cursor type, called a forward-only (or non-scrollable) cursor, can move only forward through the result set. A forward-only cursor does not support scrolling (the ability to move forward and backward in the result set); it only supports fetching rows from the start to the end of the result set.

With some forward-only cursors (such as with the SQL Server cursor library), all insert, update, and delete statements made by the current user (or committed by other users) that affect rows in the result set are visible as the rows are fetched. Because the cursor cannot be scrolled backward, however, changes made to rows in the database after the row was fetched are not visible through the cursor. After the data for the current row is processed, the forward-only cursor releases the resources that were used to hold that data.

Forward-only cursors are dynamic by default, meaning that all changes are detected as the current row is processed. This provides faster cursor opening and enables the result set to display updates made to the underlying tables.

While forward-only cursors do not support backward scrolling, your application can return to the beginning of the result set by closing and reopening the cursor. This is an effective way to work with small amounts of data. As an alternative, your application could read the result set once, cache the data locally, and then browse the local data cache.

If your application does not require scrolling through the result set, the forward-only cursor is the best way to retrieve data quickly with the least amount of overhead.
Use the **adOpenForwardOnly** **CursorTypeEnum** to indicate that you want to use a forward-only cursor in ADO.
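The close-and-reopen pattern described above is not specific to ADO. The same semantics can be illustrated with Python's `sqlite3` cursors, which are also forward-only (shown here purely as an analogy, not as ADO code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# A cursor only supports fetching forward; once the rows are consumed,
# the only way back to the start is to re-execute the query.
cur = conn.execute("SELECT x FROM t ORDER BY x")
first_pass = [row[0] for row in cur]   # [1, 2, 3]
second_pass = [row[0] for row in cur]  # [] - nothing left to fetch

cur = conn.execute("SELECT x FROM t ORDER BY x")  # "close and reopen"
again = [row[0] for row in cur]        # [1, 2, 3]
print(first_pass, second_pass, again)
```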
86.291667
717
0.799131
eng_Latn
0.999441
2cd845953d09e0158b39150f060fc6cb875a8ac7
158
md
Markdown
_posts/2020-11-29-new-post-title.md
jangwd88/jangwd88.github.io
fd6fbe9fe2bf77a5e69511627c6de7cb0eb9248c
[ "MIT" ]
2
2020-11-27T01:32:26.000Z
2021-01-06T05:49:56.000Z
_posts/2020-11-29-new-post-title.md
jangwd88/jangwd88.github.io
fd6fbe9fe2bf77a5e69511627c6de7cb0eb9248c
[ "MIT" ]
5
2020-11-27T10:12:57.000Z
2020-11-28T08:14:39.000Z
_posts/2020-11-29-new-post-title.md
jangwd88/jangwd88.github.io
fd6fbe9fe2bf77a5e69511627c6de7cb0eb9248c
[ "MIT" ]
null
null
null
---
date: 2020-11-29 11:14:30
layout: post
title: "New post title"
subtitle:
description:
image:
optimized_image:
category:
tags:
author:
paginate: false
---
11.285714
25
0.727848
eng_Latn
0.411476
2cd879f7e595f787c80c5e2f1a8863255dfef4c9
1,739
md
Markdown
README.md
abinash18/AbisWolframSolver
985af18c2e7f1459ac83b073db38da6ddf919da3
[ "MIT" ]
null
null
null
README.md
abinash18/AbisWolframSolver
985af18c2e7f1459ac83b073db38da6ddf919da3
[ "MIT" ]
4
2021-07-17T18:34:07.000Z
2021-07-22T20:05:46.000Z
README.md
abinash18/AbisWolframSolver
985af18c2e7f1459ac83b073db38da6ddf919da3
[ "MIT" ]
null
null
null
# Abis Wolfram Solver

## A free Wolfram full-results and step-by-step API resolver.

Fully frontend, with no suspicious proxy like the other repo [here](https://github.com/WolfreeAlpha).

This program is completely rebuilt with a reformed UI and is a fully single page application. Since I removed the CORS proxy by using jQuery's AJAX, the requests to the API return faster and there is no worry of your cookies being stolen or your IP being logged. All traffic is either going from or into your computer, except for GitHub Pages hosting the site.

All javascript is profusely documented and unobscured, so you can look through it to verify there's no funny business. The css, however, is minified.

I still give credit to the original creator for the API keys. Since one key only allows 2000 API calls a month, I am using the ones from the original repo.

TODO:
- Add page static downloading
  - Export to markdown
  - Export static html with embedded js
  - Export to PDF
- Add Infos
- Add drop down for copy to clipboard single pod
  - Latex
  - MathML
  - Inline Markdown
  - Inline Latex
- Add drop down for downloading single pod
  - Export SVG
  - Export Markdown
  - Export PNG
- Add support for hiding steps.
- Add support for step by step solutions, as in having a next step and previous step button for applicable pods.
- Implement this type of loading bar: https://stackoverflow.com/questions/38311590/animating-linear-gradient-to-create-a-loading-bar
- Implement support for singular pod state change.
- Perfect mobile use
- Add support for drawing equations
- Add support for image recognition of math as input. MathPix
- Make script more verbose.
- Make it save previous states. (Done)
- Implement Embedded LaTeX Equation Editor.
41.404762
260
0.775158
eng_Latn
0.987473
2cd918bc9cc390474765e63bec20e1bcc833fbea
209
md
Markdown
.changeset/fifty-keys-rule.md
njzydark/PS4RPS
5d263d396ec6ccc42d03f37e12e3b7aac4cb3910
[ "MIT" ]
57
2022-02-07T18:21:09.000Z
2022-03-28T04:21:24.000Z
.changeset/fifty-keys-rule.md
njzydark/PS4RPS
5d263d396ec6ccc42d03f37e12e3b7aac4cb3910
[ "MIT" ]
8
2022-02-16T09:44:47.000Z
2022-03-19T12:52:02.000Z
.changeset/fifty-keys-rule.md
njzydark/PS4RPS
5d263d396ec6ccc42d03f37e12e3b7aac4cb3910
[ "MIT" ]
1
2022-03-13T06:39:44.000Z
2022-03-13T06:39:44.000Z
---
'desktop': patch
'web': patch
'common': patch
---

Release new beta version

- feat: support add static file server on web
- fix: files get error on Windows #3
- refactor: use default title bar on Windows
17.416667
45
0.708134
eng_Latn
0.877019
2cda7611542864724a2731912db569f6734a2fc2
2,285
md
Markdown
README.md
pramonow/android-endlesScrollview
fba5160cfca00b37e2b32b024acbb1c4e718fab9
[ "MIT" ]
1
2018-11-20T15:04:03.000Z
2018-11-20T15:04:03.000Z
README.md
pramonow/android-endlesScrollview
fba5160cfca00b37e2b32b024acbb1c4e718fab9
[ "MIT" ]
null
null
null
README.md
pramonow/android-endlesScrollview
fba5160cfca00b37e2b32b024acbb1c4e718fab9
[ "MIT" ]
null
null
null
# Android Endless Scroll View for Recycler View

[![](https://jitpack.io/v/pramonow/android-endlessrecyclerview.svg)](https://jitpack.io/#pramonow/android-endlessrecyclerview)

Implement an endless recycler view the easy way with this library. This library extends RecyclerView, so any functions from RecyclerView can be called. Implemented fully in Kotlin.

![alt text](https://github.com/pramonow/android-endlessrecyclerview/blob/master/screenshoot.gif?raw=true)

Add the JitPack repository to your root build file:

```gradle
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```

Dependency:

```gradle
dependencies {
    implementation 'com.github.pramonow:android-endlessrecyclerview:1.0.0'
}
```

Or you can use SNAPSHOT to keep your module up to date:

```gradle
dependencies {
    implementation 'com.github.pramonow:android-endlessrecyclerview:-SNAPSHOT'
}
```

# How to use

In your XML layout file, put in this block:

```xml
<com.pramonow.endlessrecyclerview.EndlessRecyclerView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:id="@+id/endless_scroll_view">
</com.pramonow.endlessrecyclerview.EndlessRecyclerView>
```

For the Android Activity:

```kotlin
// Simply build the view this way
var endlessRecyclerView = findViewById<EndlessRecyclerView>(R.id.endless_scroll_view)

// Put the adapter inside the recycler view like a usual recycler view
endlessRecyclerView.adapter = sampleAdapter

// Set the callback for loading more
endlessRecyclerView.setEndlessScrollCallback(object : EndlessScrollCallback {
    // This function will load more items and add them to the adapter
    override fun loadMore() {
        // Load more of your data here, then update your adapter
    }
})
```

The recycler view is publicly accessible, so it is possible to customize it and access your adapter.

Several methods that can be used:
- `fun setLastPage()`: call this when you don't want to load data anymore
- `fun blockLoading()`: block load-more from being called, usually used while waiting for an API call to finish
- `fun releaseBlock()`: unblock load-more, usually used when the API call has finished
- `fun setLoadBeforeBottom(boolean: Boolean)`: set whether you want to load data before the user reaches the bottom
36.854839
182
0.734792
eng_Latn
0.942078
2cdc5b84365c1f01302a51dbcbbe40926419c791
37
md
Markdown
README.md
poplarjs/poplar-logger
d359140ed0b8a30f34ed3faafdce455b564f53cc
[ "MIT" ]
null
null
null
README.md
poplarjs/poplar-logger
d359140ed0b8a30f34ed3faafdce455b564f53cc
[ "MIT" ]
null
null
null
README.md
poplarjs/poplar-logger
d359140ed0b8a30f34ed3faafdce455b564f53cc
[ "MIT" ]
null
null
null
# poplar-logger

Poplar logger system
12.333333
20
0.810811
kor_Hang
0.291602
2cdc972951ed0ad1b69f8813355be10134d9ba7c
104,317
md
Markdown
sccm/osd/understand/task-sequence-steps.md
sandytsang/SCCMdocs
d1e763c4dd6e2cf30f71ba6fb4d831302bfb9dea
[ "CC-BY-4.0", "MIT" ]
1
2021-05-26T07:48:10.000Z
2021-05-26T07:48:10.000Z
sccm/osd/understand/task-sequence-steps.md
sandytsang/SCCMdocs
d1e763c4dd6e2cf30f71ba6fb4d831302bfb9dea
[ "CC-BY-4.0", "MIT" ]
null
null
null
sccm/osd/understand/task-sequence-steps.md
sandytsang/SCCMdocs
d1e763c4dd6e2cf30f71ba6fb4d831302bfb9dea
[ "CC-BY-4.0", "MIT" ]
2
2021-03-26T20:01:43.000Z
2021-07-30T22:07:46.000Z
---
title: Task sequence steps
titleSuffix: "Configuration Manager"
description: "Learn about the task sequence steps that you can add to a Configuration Manager task sequence."
ms.custom: na
ms.date: 01/12/2018
ms.prod: configuration-manager
ms.reviewer: na
ms.suite: na
ms.technology:
- configmgr-osd
ms.tgt_pltfrm: na
ms.topic: article
ms.assetid: 7c888a6f-8e37-4be5-8edb-832b218f266d
caps.latest.revision: 26
caps.handback.revision: 0
author: aczechowski
ms.author: aaroncz
manager: angrobe
---

# Task sequence steps in System Center Configuration Manager

*Applies to: System Center Configuration Manager (Current Branch)*

The following task sequence steps can be added to a Configuration Manager task sequence. For information about editing a task sequence, see [Edit a task sequence](../deploy-use/manage-task-sequences-to-automate-tasks.md#BKMK_ModifyTaskSequence).

The following settings are common to all task sequence steps:

On the **Properties** tab:

- **Name**: The task sequence editor requires that you specify a short name to describe this step. When you add a new step, the task sequence editor sets the name to the type by default. The **Name** length cannot exceed 50 characters.

- **Description**: Optionally, specify more detailed information about this step. The **Description** length cannot exceed 256 characters.

The rest of this article describes the other settings on the **Properties** tab for each task sequence step.

On the **Options** tab:

- **Disable this step**: The task sequence skips this step when it runs on a computer. The icon for this step is greyed out in the task sequence editor.

- **Continue on error**: The task sequence continues if an error occurs while running the step.

- **Add Condition**: The task sequence evaluates these conditional statements to determine if it runs the step.

The sections below for specific task sequence steps describe other possible settings on the **Options** tab.
## <a name="BKMK_ApplyDataImage"></a> Apply Data Image

Use this step to copy the data image to the specified destination partition.

This step runs only in Windows PE. It does not run in a standard operating system.

For more information about the task sequence variables, see [Task sequence action variables](task-sequence-action-variables.md).

In the task sequence editor, click **Add**, select **Images**, and select **Apply Data Image** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Image Package**
Click **Browse** to specify the **Image Package** used by this task sequence. Select the package you want to install in the **Select a Package** dialog box. The associated property information for each existing image package is displayed at the bottom of the **Select a Package** dialog box. Use the drop-down list to select the **Image** you want to install from the selected **Image Package**.

> [!NOTE]
> This task sequence action treats the image as a data file. This action does not do any setup to boot the image as an operating system.

**Destination**
Configure one of the following options:

- **Next available partition**: Use the next sequential partition that an **Apply Operating System** or **Apply Data Image** action in this task sequence has not already targeted.

- **Specific disk and partition**: Select the **Disk** number (starting with 0) and the **Partition** number (starting with 1).

- **Specific logical drive letter**: Specify the **Drive Letter** assigned to the partition by Windows PE. This drive letter can be different from the drive letter that the newly deployed operating system assigns.

- **Logical drive letter stored in a variable**: Specify the task sequence variable containing the drive letter assigned to the partition by Windows PE. This variable is typically set in the Advanced section of the **Partition Properties** dialog box for the **Format and Partition Disk** task sequence action.

**Delete all content on the partition before applying the image**
Specifies that the task sequence deletes all files on the target partition before installing the image. By not deleting the content of the partition, this step can be used to apply additional content to a previously targeted partition.

## <a name="BKMK_ApplyDriverPackage"></a> Apply Driver Package

Use this step to download all of the drivers in the driver package and install them on the Windows operating system.

The **Apply Driver Package** task sequence step makes all device drivers in a driver package available for use by Windows. Add this step between the **Apply Operating System** and **Setup Windows and ConfigMgr** steps to make the drivers in the package available to Windows. Typically, the **Apply Driver Package** step is placed after the **Auto Apply Drivers** task sequence step. The **Apply Driver Package** task sequence step is also useful with stand-alone media deployment scenarios.

Ensure that similar device drivers are put into a driver package and distribute them to the appropriate distribution points. After they are distributed, Configuration Manager client computers can install them. For example, put all drivers from one manufacturer into a driver package. Then distribute the package to distribution points where the associated computers can access them.

The **Apply Driver Package** step is useful for stand-alone media. This step is also useful if you want to install a specific set of drivers. These types of drivers include devices that will not be detected in a plug-and-play scan, such as network printers.

This task sequence step runs only in Windows PE. It does not run in a standard operating system.
For more information about the task sequence variables for this action, see [Apply Driver Package Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_ApplyDriverPackage).

In the task sequence editor, click **Add**, select **Drivers**, and select **Apply Driver Package** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Driver package**
Specify the driver package that contains the needed device drivers by clicking **Browse** and launching the **Select a Package** dialog box. Specify an existing package to be made available. The associated package properties are displayed at the bottom of the dialog box.

**Select the mass storage driver within the package that needs to be installed before setup on pre-Windows Vista operating systems**
Specify any mass storage drivers needed to install a classic operating system.

**Driver**
Select the mass storage driver file to install before setup of a classic operating system. The drop-down list populates from the specified package.

**Model**
Specify the boot-critical device that is needed for pre-Windows Vista operating system deployments.

**Do unattended installation of unsigned drivers on versions of Windows where this is allowed**
This option allows Windows to install drivers without a digital signature.

## <a name="BKMK_ApplyNetworkSettings"></a> Apply Network Settings

Use this step to specify the network or workgroup configuration information for the destination computer. The task sequence stores these values in the appropriate answer file. Windows Setup uses this answer file during the **Setup Windows and ConfigMgr** action.

This task sequence step runs in either a standard operating system or Windows PE. For more information about the task sequence variables for this action, see [Apply Network Settings Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_ApplyNetworkSettings).
In the task sequence editor, click **Add**, select **Settings**, and select **Apply Network Settings** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Join a workgroup**
Select this option to have the destination computer join the specified workgroup. Enter the name of the workgroup on the **Workgroup** line. This value can be overridden by the value that is captured by the **Capture Network Settings** task sequence step.

**Join a domain**
Select this option to have the destination computer join the specified domain. Specify or browse to the domain, such as *fabricam.com*. Specify or browse to a Lightweight Directory Access Protocol (LDAP) path for an organizational unit. For example: *LDAP://OU=computers,DC=fabricam,DC=com*

**Account**
Click **Set** to specify an account with the necessary permissions to join the computer to the domain. In the **Windows User Account** dialog box you can enter the user name using the following format: **Domain\User**.

**Adapter settings**
Specify network configurations for each network adapter in the computer. Click **New** to open the **Network Settings** dialog box, and then specify the network settings.

If you also use the **Capture Network Settings** step, the task sequence applies the previously captured settings to the network adapter. The task sequence does not apply the settings you specify in this step. If the task sequence did not previously capture network settings, it applies the settings specified in the **Apply Network Settings** step. The task sequence applies these settings to network adapters in Windows device enumeration order.

## <a name="BKMK_ApplyOperatingSystemImage"></a> Apply Operating System Image

> [!TIP]
> Beginning with Windows 10, version 1709, media includes multiple editions. When you configure a task sequence to use an OS upgrade package or OS image, be sure to select a [supported edition](/sccm/core/plan-design/configs/support-for-windows-10#windows-10-as-a-client).

Use this step to install an operating system on the destination computer. This step performs actions depending on whether it uses an OS image or an OS upgrade package.

The **Apply Operating System Image** step performs the following actions when using an OS image:

1. Delete all content on the targeted volume, except files in the folder the &#95;SMSTSUserStatePath variable specifies.

2. Extract the contents of the specified .wim file to the specified destination partition.

3. Prepare the answer file:
    1. Create a new default Windows Setup answer file (sysprep.inf or unattend.xml) for the operating system that is being deployed.
    2. Merge any values from the user-supplied answer file.

4. Copy Windows boot loaders into the active partition.

5. Set the boot.ini or the Boot Configuration Database (BCD) to reference the newly installed operating system.

The **Apply Operating System Image** step performs the following actions when using an OS upgrade package:

1. Delete all content on the targeted volume, except files in the folder the &#95;SMSTSUserStatePath variable specifies.

2. Prepare the answer file:
    1. Create a fresh answer file with standard values created by Configuration Manager.
    2. Merge any values from the user-supplied answer file.

> [!NOTE]
> The **Setup Windows and ConfigMgr** step starts the installation of Windows.

After the **Apply Operating System** action runs, the OSDTargetSystemDrive variable is set to the drive letter of the partition containing the operating system files.

This task sequence step runs only in Windows PE. It does not run in a standard operating system.
For more information about the task sequence variables for this action, see [Apply Operating System Image Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_ApplyOperatingSystem).

In the task sequence editor, click **Add**, select **Images**, and select **Apply Operating System Image** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Apply operating system from a captured image**
Installs an operating system image that has previously been captured. Click **Browse** to open the **Select a package** dialog box, and then select the existing image package you want to install. If multiple images are associated with the specified **Image package**, use the drop-down list to specify the associated image to use for this deployment. You can view basic information about each existing image by clicking on the image.

**Apply operating system image from an original installation source**
Installs an operating system using an original installation source. Click **Browse** to open the **Select an Operating System Install Package** dialog box. Then select the existing OS upgrade package you want to use. You can view basic information about each existing image source by clicking on the image source. The associated image source properties are displayed in the results pane at the bottom of the dialog box. If there are multiple editions associated with the specified package, use the drop-down list to specify the associated **Edition** that is used.

**Use an unattended or sysprep answer file for a custom installation**
Use this option to provide a Windows setup answer file (**unattend.xml**, **unattend.txt**, or **sysprep.inf**) depending on the operating system version and installation method. The file you specify can include any of the standard configuration options supported by Windows answer files. For example, you can use it to specify the default Internet Explorer home page.
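To make the answer-file option concrete, here is a minimal, hypothetical unattend.xml fragment that embeds a task sequence variable. The component layout is illustrative rather than schema-complete, and `OSDComputerName` is used only as an example variable name; substitute whatever variable your task sequence actually sets.

```xml
<!-- Hypothetical fragment: the Setup Windows and ConfigMgr step replaces
     %OSDComputerName% with the variable's runtime value before Windows Setup
     runs. Embedded variables cannot be used in numeric-only fields. -->
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral">
      <ComputerName>%OSDComputerName%</ComputerName>
    </component>
  </settings>
</unattend>
```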
Specify the package that contains the answer file and the associated path to the file in the package.

> [!NOTE]
> The Windows setup answer file that you supply can contain embedded task sequence variables of the form %*varname*%, where *varname* is the name of the variable. The **Setup Windows and ConfigMgr** step substitutes the %*varname*% string for the actual variable values. These embedded task sequence variables cannot be used in numeric-only fields in an unattend.xml answer file.

If you do not supply a Windows setup answer file, this task sequence action automatically generates an answer file.

**Destination**
Configure one of the following options:

- **Next available partition**: Use the next sequential partition that an **Apply Operating System** or **Apply Data Image** action in this task sequence has not already targeted.

- **Specific disk and partition**: Select the **Disk** number (starting with 0) and the **Partition** number (starting with 1).

- **Specific logical drive letter**: Specify the **Drive Letter** assigned to the partition by Windows PE. This drive letter can be different from the drive letter that the newly deployed operating system assigns.

- **Logical drive letter stored in a variable**: Specify the task sequence variable containing the drive letter assigned to the partition by Windows PE. This variable is typically set in the Advanced section of the **Partition Properties** dialog box for the **Format and Partition Disk** task sequence action.

### Options

Besides the default options, configure the following additional settings on the **Options** tab of this task sequence step:

- **Access content directly from the distribution point**

    Configure the task sequence to access the operating system image directly from the distribution point. For example, use this option when you deploy operating systems to embedded devices that have limited storage capacity. When selecting this option, also configure the package share settings on the **Data Access** tab of the package properties.

    > [!NOTE]
    > This setting overrides the deployment option that you configure on the **Distribution Points** page in the **Deploy Software Wizard**. This override is only for the OS image that this step specifies, not for all task sequence content.

## <a name="BKMK_ApplyWindowsSettings"></a> Apply Windows Settings

Use this step to configure the Windows settings for the destination computer. The task sequence stores these values in the appropriate answer file. Windows Setup uses this answer file during the **Setup Windows and ConfigMgr** action.

This task sequence step runs only in Windows PE. It does not run in a standard operating system. For more information about the task sequence variables for this action, see [Apply Windows Settings Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_ApplyWindowsSettings).

In the task sequence editor, click **Add**, select **Settings**, and select **Apply Windows Settings** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**User name**
Specify the registered user name that is associated with the destination computer. This value can be overridden by the value that is captured by the **Capture Windows Settings** task sequence action.

**Organization name**
Specify the registered organization name that is associated with the destination computer. This value can be overridden by the value that is captured by the **Capture Windows Settings** task sequence action.

**Product key**
Specify the product key that is used for the Windows installation on the destination computer.

**Server licensing**
Specify the server licensing mode. You can select **Per server** or **Per user** as the licensing mode. If you select **Per server**, also specify the maximum number of connections permitted per your license agreement.
Select **Do not specify** if the destination computer is not a server or you do not want to specify the licensing mode.

**Maximum connections**
Specify the maximum number of connections that are available for this computer as stated in your license agreement.

**Randomly generate the local administrator password and disable the account on all supported platforms (recommended)**
Select this option to set the local administrator password to a randomly generated string. This option also disables the local administrator account on platforms that support this capability.

**Enable the account and specify the local administrator password**
Select this option to enable the local administrator account using the specified password. Enter the password on the **Password** line and confirm the password on the **Confirm password** line.

**Time Zone**
Specify the time zone to configure on the destination computer. This value can be overridden by the value that is captured by the **Capture Windows Settings** task sequence step.

## <a name="BKMK_AutoApplyDrivers"></a> Auto Apply Drivers

Use this step to match and install drivers as part of the operating system deployment. The **Auto Apply Drivers** task sequence step performs the following actions:

1. Scan the hardware and find the plug-and-play IDs for all devices present on the system.

2. Send the list of devices and their plug-and-play IDs to the management point. The management point returns a list of compatible drivers from the driver catalog for each hardware device. The list includes all drivers regardless of what driver package they are in, drivers tagged with the specified driver category, and drivers not disabled.

3. For each hardware device, the task sequence picks the best driver. This driver is appropriate for the deployed operating system, and is on an accessible distribution point.

4. The task sequence downloads the selected drivers from a distribution point, and stages the drivers on the target operating system.
    1. For image-based installations, the task sequence places the drivers into the operating system driver store.
    2. For setup-based installations, the task sequence configures Windows Setup with the drivers' location.

5. During the **Setup Windows and ConfigMgr** step in the task sequence, Windows Setup finds the drivers staged by this action.

> [!IMPORTANT]
> Stand-alone media cannot use the **Auto Apply Drivers** step. Windows Setup has no connection to the Configuration Manager site in this scenario.

This task sequence step runs only in Windows PE. It does not run in a standard operating system.

For more information about the task sequence variables for this action, see [Auto Apply Drivers Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_AutoApplyDrivers).

In the task sequence editor, click **Add**, select **Drivers**, and select **Auto Apply Drivers** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Install only the best matched compatible drivers**
Specifies that the task sequence step installs only the best matched driver for each hardware device detected.

**Install all compatible drivers**
The task sequence installs all compatible drivers for each detected hardware device. Windows Setup then chooses the best driver. This option takes more network bandwidth and disk space. The task sequence downloads more drivers, but Windows can select a better driver.

**Consider drivers from all categories**
The task sequence searches all available driver categories for the appropriate device drivers.

**Limit driver matching to only consider drivers in selected categories**
The task sequence searches in the specified driver categories for the appropriate device drivers.

**Do unattended installation of unsigned drivers on versions of Windows where this is allowed**
This option allows Windows to install drivers without a digital signature.
> [!IMPORTANT]
> This option does not apply to operating systems where you cannot configure driver signing policy.

## <a name="BKMK_CaptureNetworkSettings"></a> Capture Network Settings

Use this step to capture Microsoft network settings from the computer running the task sequence. The task sequence saves these settings in task sequence variables. These settings override the default settings you configure on the **Apply Network Settings** step.

This task sequence step runs only in a standard operating system. It does not run in Windows PE. For more information about the task sequence variables for this action, see [Capture Network Settings Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_CaptureNetworkSettings).

In the task sequence editor, click **Add**, select **Settings**, and select **Capture Network Settings** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Migrate domain and workgroup membership**
Captures the domain and workgroup membership information of the destination computer.

**Migrate network adapter configuration**
Captures the network adapter configuration of the destination computer. The captured information includes the global network settings, the number of adapters, and the network settings associated with each adapter. These settings include settings associated with DNS, WINS, IP, and port filters.

## <a name="BKMK_CaptureOperatingSystemImage"></a> Capture Operating System Image

This step captures one or more images from a reference computer. The task sequence creates a Windows Image (.wim) file on the specified network share. Then use the **Add Operating System Image Package** wizard to import this image into Configuration Manager for image-based operating system deployments.

Configuration Manager captures each volume (drive) from the reference computer to a separate image within the .wim file.
If the referenced computer has multiple volumes, the resulting .wim file contains a separate image for each volume. Only volumes that are formatted as NTFS or FAT32 are captured. Volumes with other formats and USB volumes are skipped.

The installed operating system on the reference computer must be a version of Windows that Configuration Manager supports. Use the SysPrep tool to prepare the reference computer operating system. The installed operating system volume and the boot volume must be the same volume.

Specify an account with write permissions to the selected network share.

This task sequence step runs only in Windows PE. It does not run in a standard operating system.

For more information about the task sequence variables for this action, see [Capture Operating System Image Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_CaptureOperatingSystemImage).

In the task sequence editor, click **Add**, select **Images**, and select **Capture Operating System Image** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Target**
File system pathname to the location that Configuration Manager uses when storing the captured operating system image.

**Description**
An optional user-defined description of the captured operating system image that is stored in the .WIM file.

**Version**
An optional user-defined version number to assign to the captured operating system image. This value can be any combination of letters and numbers and is stored in the .WIM file.

**Created by**
The optional name of the user that created the operating system image, which is stored in the .WIM file.

**Capture operating system image account**
You must enter the Windows account that has permissions to the network share you specified. Click **Set** to specify the name of that Windows account.
## <a name="BKMK_CaptureUserState"></a> Capture User State

Use this step to use the User State Migration Tool (USMT) to capture user state and settings from the computer running the task sequence. This task sequence step is used in conjunction with the **Restore User State** task sequence step.

With USMT 3.0.1 and later, this option always encrypts the USMT state store by using an encryption key generated and managed by Configuration Manager.

For more information about managing the user state when deploying operating systems, see [Manage user state](../get-started/manage-user-state.md).

If you want to save and restore user state settings from a state migration point, use the **Capture User State** step with the **Request State Store** and **Release State Store** steps.

The **Capture User State** task sequence step provides control over a limited subset of the most commonly used USMT options. Additional command-line options can be specified using the OSDMigrateAdditionalCaptureOptions task sequence variable.

This task sequence step runs only in Windows PE. It does not run in a standard operating system.

For more information about the task sequence variables for this action, see [Capture User State Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_CaptureUserState).

In the task sequence editor, click **Add**, select **User State**, and select **Capture User State** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**User state migration tool package**
Specify the package that contains the User State Migration Tool (USMT). The task sequence uses this version of USMT to capture the user state and settings. This package does not require a program. Specify a package containing the 32-bit or 64-bit version of USMT. The architecture of USMT depends upon the architecture of the operating system from which the task sequence is capturing state.
**Capture all user profiles with standard options**
Migrate all user profile information. This is the default option.

If you select this option, but do not select **Restore local computer user profiles** in the **Restore User State** step, the task sequence fails. Configuration Manager cannot migrate the new accounts without assigning them passwords.

When you use the **Install an existing image package** option of the **New Task Sequence** wizard, the resulting task sequence defaults to **Capture all user profiles with standard options**. This default task sequence does not select the option to **Restore local computer user profiles**, in other words, non-domain user accounts. Select **Restore local computer user profiles** and provide a password for the account to be migrated. In a manually created task sequence, this setting is found under the **Restore User State** step. In a task sequence created by the **New Task Sequence** wizard, this setting is found under the **Restore User Files and Settings** wizard page. If you have no local user accounts, this setting does not apply.

**Customize how user profiles are captured**
Select this option to specify a custom profile file migration. Click **Files** to select the configuration files for USMT to use with this step. Specify a custom .xml file that contains rules that define the user state files to migrate.

**Click here to select configuration files:**
Select this option to select the configuration files in the USMT package you want to use for capturing user profiles. Click the **Files** button to launch the **Configuration Files** dialog box. To specify a configuration file, enter the name of the file on the **Filename** line and click the **Add** button.

**Enable verbose logging**
Enable this option to generate more detailed log file information. When capturing state, the task sequence by default generates ScanState.log in the task sequence log folder, \windows\system32\ccm\logs.
**Skip files using encrypted file system**

Enable this option to skip capturing files encrypted with the Encrypting File System (EFS). These files include user profile files. Depending on the operating system and the USMT version, encrypted files might not be readable after you restore. For more information, see the USMT documentation.

**Copy by using file system access**

Enable this option to specify any of the following settings:

- **Continue if some files cannot be captured**: Enable this setting to continue the migration process even if some files cannot be captured. If you disable this option and a file cannot be captured, this step fails. This option is enabled by default.

- **Capture locally by using links instead of by copying files**: Enable this setting to use NTFS hard-links to capture files. For more information about migrating data by using hard-links, see [Hard-Link Migration Store](http://go.microsoft.com/fwlink/p/?LinkId=240222).

- **Capture in off-line mode (Windows PE only)**: Enable this setting to capture the user state while in Windows PE instead of the full operating system.

**Capture by using Volume Copy Shadow Services (VSS)**

This option allows you to capture files even if they are locked for editing by another application.

## <a name="BKMK_CaptureWindowsSettings"></a> Capture Windows Settings

Use this step to capture the Windows settings from the computer running the task sequence. The task sequence saves these settings in task sequence variables. These captured settings override the default settings that you configure on the **Apply Windows Settings** step.

This task sequence step runs in either Windows PE or a standard operating system.

For more information about the task sequence variables for this action, see [Capture Windows Settings Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_CaptureWindowsSettings).
In the task sequence editor, click **Add**, select **Settings**, and select **Capture Windows Settings** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Migrate computer name**

Capture the NetBIOS computer name of the computer.

**Migrate registered user and organization names**

Capture the registered user and organization names from the computer.

**Migrate time zone**

Capture the time zone setting on the computer.

## <a name="BKMK_CheckReadiness"></a> Check Readiness

Use this step to verify that the target computer meets the specified deployment prerequisite conditions.

In the task sequence editor, click **Add**, select **General**, and select **Check Readiness** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Ensure minimum memory (MB)**

Verify that the amount of memory, in megabytes (MB), meets or exceeds the specified amount. The step enables this setting by default.

**Ensure minimum processor speed (MHz)**

Verify that the speed of the processor, in megahertz (MHz), meets or exceeds the specified amount. The step enables this setting by default.

**Ensure minimum free disk space (MB)**

Verify that the amount of free disk space, in megabytes (MB), meets or exceeds the specified amount.

**Ensure current OS to be refreshed is**

Verify that the operating system installed on the target computer meets the specified requirement. The step sets this to **CLIENT** by default.

### Options

> [!NOTE]
> If you enable the **Continue on error** setting on the **Options** tab of this step, it only logs the readiness check results. If a check fails, the task sequence does not stop.

## <a name="BKMK_ConnectToNetworkFolder"></a> Connect To Network Folder

Use this step to create a connection to a shared network folder.

This task sequence step runs in a standard operating system or Windows PE.
For more information about the task sequence variables for this action, see [Connect to Network Folder Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_ConnecttoNetworkFolder).

In the task sequence editor, click **Add**, select **General**, and select **Connect To Network Folder** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Path**

Click **Browse** to specify the network folder path. Use the format *\\\server\share*.

**Drive**

Select the local drive letter to assign for this connection.

**Account**

Click **Set** to specify the user account with permissions to connect to this network folder.

## <a name="BKMK_DisableBitLocker"></a> Disable BitLocker

Use this step to disable the BitLocker encryption on the current operating system drive, or on a specific drive. This action leaves the key protectors visible in clear text on the hard drive, but it does not decrypt the contents of the drive. Consequently this action is completed almost instantly.

> [!NOTE]
> BitLocker drive encryption provides low-level encryption of the contents of a disk volume. If you have multiple drives encrypted, you must disable BitLocker on any data drives before disabling BitLocker on the operating system drive.

This step runs only in a standard operating system. It does not run in Windows PE.

In the task sequence editor, click **Add**, select **Disks**, and select **Disable BitLocker** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Current operating system drive**

Disables BitLocker on the current operating system drive.

**Specific drive**

Disables BitLocker on a specific drive. Use the drop-down list to specify the drive where BitLocker is disabled.
## <a name="BKMK_DownloadPackageContent"></a> Download Package Content Use this step to download any of the following package types: - Operating system images - Operating system upgrade packages - Driver packages - Packages This step works well in a task sequence to upgrade an operating system in the following scenarios: - To use a single upgrade task sequence that can work with both x86 and x64 platforms. Include two **Download Package Content** steps in the **Prepare for Upgrade** group. Specify conditions on the **Options** tab to detect the client architecture and download only the appropriate OS upgrade package. Configure each **Download Package Content** step to use the same variable. Use the variable for the media path on the **Upgrade Operating System** step. - To dynamically download an applicable driver package, use two **Download Package Content** steps with conditions to detect the appropriate hardware type for each driver package. Configure each **Download Package Content** step to use the same variable. Use the variable for the **Staged content** value in the Drivers section of the **Upgrade Operating System** step. > [!NOTE] > When you deploy a task sequence that contains the Download Package Content step, do not select **Download all content locally before starting the task sequence** or **Access content directly from a distribution point** for **Deployment options** on the **Distribution Points** page of the Deploy Software Wizard. This step runs in either a standard operating system or Windows PE. However, the option to save the package in the Configuration Manager client cache is not supported in WinPE. In the task sequence editor, click **Add**, select **Software**, and select **Download Package Content** to add this step. ### Properties On the **Properties** tab for this step, configure the settings described in this section. **Select package** icon Click the icon to select the package to download. 
After you select a package, you can click the icon again to choose another package.

**Place into the following location**

Choose to save the package in one of the following locations:

- **Task sequence working directory**

- **Configuration Manager client cache**: Use this option to store the content in the client cache. The client acts as a peer cache source for other peer cache clients. For more information, see [Prepare Windows PE peer cache to reduce WAN traffic](../get-started/prepare-windows-pe-peer-cache-to-reduce-wan-traffic.md).

- **Custom path**: With this option, the task sequence engine first downloads the package to the task sequence working directory, and then moves it to the path you specify. The task sequence engine appends the path with the package ID.

**Save path as a variable**

You can save the path as a variable to use in another task sequence step. Configuration Manager adds a numerical suffix to the variable name. For example, if you specify %*mycontent*% as a custom variable, it is the root for all content that the task sequence references. This content may contain multiple packages. When you refer to the variable, add a numerical suffix. For the first package, refer to %*mycontent01*%. When you refer to the variable in subsequent steps, such as **Upgrade Operating System**, use %*mycontent02*% or %*mycontent03*%, where the number corresponds to the order in which the **Download Package Content** step lists the packages.

**If a package download fails, continue downloading other packages in the list**

If the task sequence fails to download a package, it starts to download the next package in the list.

## <a name="BKMK_EnableBitLocker"></a> Enable BitLocker

Use this step to enable BitLocker encryption on at least two partitions on the hard drive. The first active partition contains the Windows bootstrap code. Another partition contains the operating system.
The bootstrap partition must remain unencrypted.

Use the **Pre-provision BitLocker** task sequence step to enable BitLocker on a drive while in Windows PE. For more information, see the [Pre-provision BitLocker](#BKMK_PreProvisionBitLocker) section.

> [!NOTE]
> BitLocker drive encryption provides low-level encryption of the contents of a disk volume.

The **Enable BitLocker** step runs only in a standard operating system. It does not run in Windows PE.

For more information about the task sequence variables for this action, see [Enable BitLocker Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_EnableBitLocker).

When you specify **TPM Only**, **TPM and Startup Key on USB**, or **TPM and PIN**, the Trusted Platform Module (TPM) must be in the following state before you can run the **Enable BitLocker** step:

- Enabled
- Activated
- Ownership Allowed

This step completes any remaining TPM initialization; the remaining steps do not require physical presence or reboots. The **Enable BitLocker** step transparently completes the following TPM initialization steps, if necessary:

- Create the endorsement key pair
- Create the owner authorization value and escrow it to Active Directory, which must have been extended to support this value
- Take ownership
- Create the storage root key, or reset it if it is already present but incompatible

If you want the task sequence to wait for the **Enable BitLocker** step to complete the drive encryption process, select the **Wait** option. If you do not select the **Wait** option, the drive encryption process happens in the background, and the task sequence immediately proceeds to the next step.

BitLocker can encrypt multiple drives on a computer system, both operating system and data drives. To encrypt a data drive, first encrypt the operating system drive and complete the encryption process. This requirement exists because the operating system drive stores the key protectors for the data drives.
If you encrypt the operating system and data drives in the same task sequence, select the **Wait** option on the **Enable BitLocker** step for the operating system drive.

If the hard drive is already encrypted, but BitLocker is disabled, the **Enable BitLocker** step re-enables the key protectors and completes quickly. Re-encryption of the hard drive is not necessary in this case.

In the task sequence editor, click **Add**, select **Disks**, and select **Enable BitLocker** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Choose the drive to encrypt**

Specifies the drive to encrypt. To encrypt the current operating system drive, select **Current operating system drive**, and then configure one of the following options for key management:

- **TPM only**: Select this option to use only the Trusted Platform Module (TPM).

- **Startup Key on USB only**: Select this option to use a startup key stored on a USB flash drive. When you select this option, BitLocker locks the normal boot process until a USB device that contains a BitLocker startup key is attached to the computer.

- **TPM and Startup Key on USB**: Select this option to use TPM and a startup key stored on a USB flash drive. When you select this option, BitLocker locks the normal boot process until a USB device that contains a BitLocker startup key is attached to the computer.

- **TPM and PIN**: Select this option to use TPM and a personal identification number (PIN). When you select this option, BitLocker locks the normal boot process until the user provides the PIN.

To encrypt a specific, non-operating system data drive, select **Specific drive**, and then select the drive from the list.
**Choose where to create the recovery key**

To create a recovery password and escrow it in Active Directory, select **In Active Directory**. If you select this option, you must extend Active Directory for the site so that BitLocker can save the associated recovery information there.

Select **Do not create recovery key** to skip creating a password. As a best practice, create a recovery password.

**Wait for BitLocker to complete the drive encryption process on all drives before continuing task sequence execution**

Select this option to allow BitLocker drive encryption to complete before the next step in the task sequence runs. If you select this option, BitLocker encrypts the entire disk volume before the user is able to log on to the computer. The encryption process can take hours to complete when encrypting a large hard drive. Not selecting this option allows the task sequence to proceed immediately.

## <a name="BKMK_FormatandPartitionDisk"></a> Format and Partition Disk

Use this step to format and partition a specified disk on the destination computer.

> [!IMPORTANT]
> Every setting you specify for this task sequence step applies to a single specified disk. To format and partition another disk on the destination computer, add an additional **Format and Partition Disk** step to the task sequence.

This task sequence step runs only in Windows PE. It does not run in a standard operating system.

For more information about the task sequence variables for this action, see [Format and Partition Disk Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_FormatPartitionDisk).

In the task sequence editor, click **Add**, select **Disks**, and select **Format and Partition Disk** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Disk Number**

The physical disk number of the disk to format. The number is based on Windows disk enumeration ordering.
**Disk Type**

The type of the disk to format. Select one of two options from the drop-down list:

- **Standard (MBR)**: Master Boot Record
- **GPT**: GUID Partition Table

> [!NOTE]
> If you change the disk type from **Standard (MBR)** to **GPT**, and the partition layout contains an extended partition, the task sequence removes all extended and logical partitions from the layout. The task sequence editor prompts you to confirm this action before changing the disk type.

**Volume**

Specific information about the partition or volume that the task sequence creates, including the following attributes:

- Name
- Remaining disk space

To create a new partition, click **New** to launch the **Partition Properties** dialog box. Specify the partition type and size, and whether it is a boot partition. To modify an existing partition, select the partition, and then click the **Properties** button.

For more information about how to configure hard drive partitions, see one of the following articles:

- [How to Configure UEFI/GPT-Based Hard Drive Partitions](http://go.microsoft.com/fwlink/?LinkID=272104)
- [How to Configure BIOS/MBR-Based Hard Drive Partitions](http://go.microsoft.com/fwlink/?LinkId=272105)

To delete a partition, select the partition, and then click **Delete**.

## <a name="BKMK_InstallApplication"></a> Install Application

This step installs the specified applications, or a set of applications defined by a dynamic list of task sequence variables. When this step runs, the application installation begins immediately, without waiting for a policy polling interval.

The applications that are installed must meet the following criteria:

- The application must have a deployment type of Windows Installer or Script installer. Windows app package (.appx file) deployment types are not supported.

- It must run under the local system account and not the user account.

- It must not interact with the desktop. The program must run silently or in an unattended mode.
- It must not initiate a restart on its own. The application must request a restart by using the standard restart code, a 3010 exit code. This behavior ensures that the task sequence step correctly handles the restart. If the application does return a 3010 exit code, the underlying task sequence engine performs the restart. After the restart, the task sequence automatically continues.

When the **Install Application** step runs, the application checks the applicability of the requirement rules and detection method on its deployment types. Based on the results of this check, the application installs the applicable deployment type. If a deployment type contains dependencies, the dependent deployment type is evaluated and installed as part of this step. Application dependencies are not supported for stand-alone media.

> [!NOTE]
> To install an application that supersedes another application, the content files for the superseded application must be available. Otherwise, this task sequence step fails.
>
> For example, Microsoft Visio 2010 is installed on a client or in a captured image. When the **Install Application** step installs Microsoft Visio 2013, the content files for Microsoft Visio 2010 (the superseded application) must be available on a distribution point. If Microsoft Visio is not installed at all on a client or captured image, the task sequence installs Microsoft Visio 2013 without checking for the Microsoft Visio 2010 content files.

> [!NOTE]
> If the client fails to retrieve the management point list from location services, use the SMSTSMPListRequestTimeoutEnabled and SMSTSMPListRequestTimeout built-in variables to specify how many milliseconds a task sequence waits before it retries installing an application or software update. For more information, see [Task sequence built-in variables](task-sequence-built-in-variables.md).

This task sequence step runs only in a standard operating system. It does not run in Windows PE.
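The restart convention described above amounts to a small decision table on the installer's exit code. The following Python sketch is purely illustrative — the function name and return labels are invented for this example, and this is not Configuration Manager's actual implementation:

```python
# Sketch: how a task sequence engine might classify installer exit codes.
# 3010 is the standard Windows "success, restart required" code
# (ERROR_SUCCESS_REBOOT_REQUIRED). Names here are hypothetical.

SUCCESS = 0
REBOOT_REQUIRED = 3010  # the installer must return this instead of rebooting itself


def interpret_exit_code(code: int) -> str:
    """Classify an installer exit code for the next engine action."""
    if code == SUCCESS:
        return "continue"            # move on to the next step
    if code == REBOOT_REQUIRED:
        return "restart-and-resume"  # engine restarts, then resumes the sequence
    return "fail"                    # any other code fails the step


print(interpret_exit_code(3010))  # restart-and-resume
```

An installer that reboots on its own, instead of returning 3010 and letting the engine restart, would break this handshake — which is why the criteria above forbid it.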
In the task sequence editor, click **Add**, select **Software**, and select **Install Application** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings that are described in this section.

**Install the following applications**

The task sequence installs these applications in the specified order. Configuration Manager filters out any disabled applications, or any applications with the following settings:

- Only when a user is logged on
- Run with user rights

These applications do not appear in the **Select the application to install** dialog box.

**Install applications according to dynamic variable list**

The task sequence installs applications by using this base variable name. The base variable name is for a set of task sequence variables defined for a collection or computer. These variables specify the applications that the task sequence installs for that collection or computer. Each variable name consists of its common base name plus a numerical suffix starting at 01. The value for each variable must contain the name of the application and nothing else.

For the task sequence to install applications by using a dynamic variable list, enable the following setting on the **General** tab of the application **Properties**: **Allow this application to be installed from the Install Application task sequence action instead of deploying manually**

> [!NOTE]
> You cannot install applications by using a dynamic variable list for stand-alone media deployments.
For example, to install a single application by using a task sequence variable called AA01, you specify the following variable:

|Variable Name|Variable Value|
|-------------------|--------------------|
|AA01|Microsoft Office|

To install two applications, you specify the following variables:

|Variable Name|Variable Value|
|-------------------|--------------------|
|AA01|Microsoft Lync|
|AA02|Microsoft Office|

The following conditions affect the applications that the task sequence installs:

- If the value of a variable contains any information other than the name of the application, the task sequence does not install that application, but the task sequence continues.

- If the task sequence does not find a variable with the specified base name and "01" suffix, the task sequence does not install any applications.

**If an application fails, continue installing other applications in the list**

This setting specifies that the step continues when an individual application installation fails. If you specify this setting, the task sequence continues regardless of any installation errors. If you do not specify this setting, and the installation fails, the step immediately ends.

### Options

> [!NOTE]
> When you select **Continue on error** on the **Options** tab of this step, the task sequence continues when an application fails to install. When you do not enable this option, the task sequence fails and does not install the remaining applications.

Besides the default options, configure the following additional setting on the **Options** tab of this task sequence step:

- **Retry this step if computer unexpectedly restarts**: If one of the application installations unexpectedly restarts the computer, retry this step. The step enables this setting by default with two retries. You can specify from one to five retries.

## <a name="BKMK_InstallPackage"></a> Install Package

Use this step to install a software package as part of the task sequence.
When this step runs, the installation begins immediately, without waiting for a policy polling interval.

The package must meet the following criteria:

- It must run under the local system account and not a user account.

- It should not interact with the desktop. The program must run silently or in an unattended mode.

- It must not initiate a restart on its own. The software must request a restart by using the standard restart code, a 3010 exit code. This behavior ensures that the task sequence step properly handles the restart. If the software does return a 3010 exit code, the underlying task sequence engine restarts the computer. After the restart, the task sequence automatically continues.

Programs that use the **Run another program first** option to install a dependent program are not supported when deploying an operating system. If you enable the package option **Run another program first**, and the dependent program already ran on the destination computer, the dependent program runs and the task sequence continues. However, if the dependent program has not already run on the destination computer, the task sequence step fails.

> [!NOTE]
> The central administration site does not have the necessary client configuration policies required to enable the software distribution agent during the task sequence. When you create stand-alone media for a task sequence at the central administration site, and the task sequence includes an **Install Package** step, the following error might appear in the CreateTsMedia.log file:
>
> `"WMI method SMS_TaskSequencePackage.GetClientConfigPolicies failed (0x80041001)"`
>
> For stand-alone media that includes an **Install Package** step, create the stand-alone media at a primary site that has the software distribution agent enabled. Alternatively, add a **Run Command Line** step after the **Setup Windows and ConfigMgr** step and before the first **Install Package** step.
> The **Run Command Line** step runs a WMIC command to enable the software distribution agent before the first **Install Package** step. Use the following command in the **Run Command Line** step:
>
> **Command Line**: `WMIC /namespace:\\root\ccm\policy\machine\requestedconfig path ccm_SoftwareDistributionClientConfig CREATE ComponentName="Enable SWDist", Enabled="true", LockSettings="TRUE", PolicySource="local", PolicyVersion="1.0", SiteSettingsKey="1" /NOINTERACTIVE`
>
> For more information about creating stand-alone media, see [Create stand-alone media](../deploy-use/create-stand-alone-media.md).

This task sequence step runs only in a standard operating system. It does not run in Windows PE.

In the task sequence editor, click **Add**, select **Software**, and select **Install Package** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Install a single software package**

This setting specifies a Configuration Manager software package. The step waits until the installation is complete.

**Install software packages according to dynamic variable list**

The task sequence installs packages by using this base variable name. The base variable name is for a set of task sequence variables defined for a collection or computer. These variables specify the packages that the task sequence installs for that collection or computer. Each variable name consists of its common base name plus a numerical suffix starting at 001. The value for each variable must contain a package ID and the name of the software, separated by a colon.

For the task sequence to install software by using a dynamic variable list, enable the following setting on the **Advanced** tab of the package **Properties**: **Allow this program to be installed from the Install Package task sequence without being deployed**

> [!NOTE]
> You cannot install software packages by using a dynamic variable list for stand-alone media deployments.
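The variable-list convention just described can be sketched in a few lines of Python. This is a simplified, hypothetical model for clarity — the function name and the dictionary standing in for task sequence variables are invented, and the real engine's validation is more involved:

```python
# Sketch: enumerating an Install Package dynamic variable list.
# Convention modeled: base name + 3-digit suffix starting at 001;
# each value is "PackageID:ProgramName"; lowercase package IDs fail.
# Hypothetical illustration, not the actual task sequence engine.

def read_package_list(variables: dict, base: str) -> list:
    """Return (package_id, program) pairs until a suffix is missing."""
    result = []
    index = 1
    while True:
        value = variables.get(f"{base}{index:03d}")
        if value is None:          # no such suffix -> stop enumerating
            break
        package_id, _, program = value.partition(":")
        if not program or package_id != package_id.upper():
            raise ValueError(f"bad value for {base}{index:03d}: {value!r}")
        result.append((package_id, program))
        index += 1
    return result


ts_vars = {"AA001": "CEN00054:Install", "AA002": "CEN00107:Install Silent"}
print(read_package_list(ts_vars, "AA"))
# [('CEN00054', 'Install'), ('CEN00107', 'Install Silent')]
```

Note that enumeration stops at the first missing suffix, and a malformed or lowercase package ID is treated as an error, mirroring the conditions this step documents.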
For example, to install a single software package by using a task sequence variable called AA001, you specify the following variable:

|Variable Name|Variable Value|
|-------------------|--------------------|
|AA001|CEN00054:Install|

To install three software packages, you specify the following variables:

|Variable Name|Variable Value|
|-------------------|--------------------|
|AA001|CEN00054:Install|
|AA002|CEN00107:Install Silent|
|AA003|CEN00031:Install|

The following conditions affect the packages that the task sequence installs:

- If the value of a variable is not in the correct format, or does not specify a valid package ID and name, the software installation fails.

- If the package ID contains lowercase characters, the software installation fails.

- If the task sequence does not find a variable with the specified base name and "001" suffix, the task sequence does not install any packages, and the task sequence continues.

**If installation of a software package fails, continue installing other packages in the list**

This setting specifies that the step continues if an individual software package installation fails. If you specify this setting, the task sequence continues regardless of any installation errors. If you do not specify this setting, and the installation fails, the step immediately ends.

## <a name="BKMK_InstallSoftwareUpdates"></a> Install Software Updates

Use this step to install software updates on the destination computer. The destination computer is not evaluated for applicable software updates until this task sequence step runs. At that time, the destination computer is evaluated for software updates like any other Configuration Manager client.

For this step to install software updates, you must first deploy the updates to a collection of which the target computer is a member.

> [!IMPORTANT]
> A best practice for optimum performance is to install the latest version of the Windows Update Agent.
> * For Windows 7, see [Knowledge Base article 3161647](https://support.microsoft.com/kb/3161647).
> * For Windows 8, see [Knowledge Base article 3163023](https://support.microsoft.com/kb/3163023).

This task sequence step runs only in a standard operating system. It does not run in Windows PE.

For information about task sequence variables for this task sequence action, see [Install Software Updates Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_InstallSoftwareUpdates).

> [!NOTE]
> If the client fails to retrieve the management point list from location services, use the SMSTSMPListRequestTimeoutEnabled and SMSTSMPListRequestTimeout built-in variables to specify how many milliseconds a task sequence waits before it retries installing an application or software update. For more information, see [Task sequence built-in variables](task-sequence-built-in-variables.md).

In the task sequence editor, click **Add**, select **Software**, and select **Install Software Updates** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Required for installation - Mandatory software updates only**

Select this option to install all mandatory software updates with administrator-defined installation deadlines.

**Available for installation - All software updates**

Select this option to install all available software updates. You must first deploy these updates to a collection of which the computer is a member. The task sequence installs all available software updates on the destination computers.

**Evaluate software updates from cached scan results**

By default, the task sequence uses cached scan results from the Windows Update Agent. Clear the checkbox to instruct the Windows Update Agent to download the latest catalog from the software update point.
Choose this option when you use a task sequence to [build and capture an operating system image](../deploy-use/create-a-task-sequence-to-capture-an-operating-system.md). In this scenario, a large number of software updates is likely. Many of these updates have dependencies; for example, update X must be installed before update Y appears as applicable.

When you clear this setting and deploy the task sequence to many clients, they all connect to the software update point at the same time. This behavior results in performance issues during the processing and download of the catalog. The best practice is the default setting to use cached scan results.

The SMSTSSoftwareUpdateScanTimeout task sequence variable controls the software updates scan timeout during this step. The default value is 30 minutes. For more information, see [Task sequence built-in variables](task-sequence-built-in-variables.md).

### Options

Besides the default options, configure the following additional setting on the **Options** tab of this task sequence step:

- **Retry this step if computer unexpectedly restarts**: If one of the updates unexpectedly restarts the computer, retry this step. The step enables this setting by default with two retries. You can specify from one to five retries.

> [!NOTE]
> Configure the SMSTSWaitForSecondReboot variable to specify how many seconds the task sequence pauses after the computer restarts in this scenario. For more information, see [Task sequence built-in variables](task-sequence-built-in-variables.md).

## <a name="BKMK_JoinDomainorWorkgroup"></a> Join Domain or Workgroup

Use this step to add the destination computer to a workgroup or domain.

This task sequence step runs only in a standard operating system. It does not run in Windows PE.

For information about task sequence variables for this task sequence action, see [Join Domain or Workgroup Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_JoinDomainWorkgroup).
In the task sequence editor, click **Add**, select **General**, and select **Join Domain or Workgroup** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Join a workgroup**
Select this option to have the destination computer join the specified workgroup. If the computer is currently a member of a domain, selecting this option causes the computer to reboot.

**Join a domain**
Select this option to have the destination computer join the specified domain. Optionally, enter or browse for an organizational unit (OU) in the specified domain for the computer to join. If the computer is currently a member of some other domain or a workgroup, this option causes the computer to reboot. If the computer is already a member of a different OU, Windows Setup ignores this setting because Active Directory Domain Services does not allow changing the OU this way.

**Enter the account which has permission to join the domain**
Click **Set** to enter the username and password for an account with permissions to join the domain. Enter the account in the format: *Domain\account*

## <a name="BKMK_PrepareConfigMgrClientforCapture"></a> Prepare ConfigMgr Client for Capture

Use this step to remove or configure the Configuration Manager client on the reference computer. This action prepares the computer for capture as part of the imaging process.

Starting in Configuration Manager version 1610, the **Prepare ConfigMgr Client** step completely removes the Configuration Manager client, instead of only removing key information. When the task sequence deploys the captured operating system image, it installs a new Configuration Manager client each time.

> [!Note]
> The task sequence engine only removes the client during the **Build and capture a reference operating system image** task sequence. The task sequence engine does not remove the client during other capture methods, such as capture media or a custom task sequence.
Prior to Configuration Manager version 1610, this step performs the following tasks:

- Removes the client configuration properties section from the smscfg.ini file in the Windows directory. These properties include client-specific information, such as the Configuration Manager GUID and other client identifiers.
- Deletes all SMS or Configuration Manager machine certificates.
- Deletes the Configuration Manager client cache.
- Clears the assigned site variable for the Configuration Manager client.
- Deletes all local Configuration Manager policy.
- Removes the trusted root key for the Configuration Manager client.

This task sequence step runs only in a standard operating system. It does not run in Windows PE.

In the task sequence editor, click **Add**, select **Images**, and select **Prepare ConfigMgr Client for Capture** to add this step.

### Properties

This step does not require any settings on the **Properties** tab.

## <a name="BKMK_PrepareWindowsforCapture"></a> Prepare Windows for Capture

Use this step to specify the Sysprep options when capturing an operating system image on the reference computer. This task sequence action runs Sysprep, and then reboots the computer into the Windows PE boot image specified for the task sequence. This action fails if the reference computer is joined to a domain.

This task sequence step runs only in a standard operating system. It does not run in Windows PE.

For information about task sequence variables for this task sequence action, see [Prepare Windows for Capture Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_PrepareWindowsCapture).

In the task sequence editor, click **Add**, select **Images**, and select **Prepare Windows for Capture** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.
**Automatically build mass storage driver list**
Select this option to have Sysprep automatically build a list of mass storage drivers from the reference computer. This option enables the Build Mass Storage Drivers option in the sysprep.inf file on the reference computer. For more information about this setting, see the Sysprep documentation.

**Do not reset activation flag**
Select this option to prevent Sysprep from resetting the product activation flag.

## <a name="BKMK_PreProvisionBitLocker"></a> Pre-provision BitLocker

Use this step to enable BitLocker on a drive while in Windows PE. Only the used drive space is encrypted, which makes encryption times much faster. You apply the key management options by using the [Enable BitLocker](#BKMK_EnableBitLocker) task sequence step after the operating system installs.

This step runs only in Windows PE. It does not run in a standard operating system.

> [!IMPORTANT]
> Pre-provisioning BitLocker requires at least Windows 7. The computer must also contain a supported and enabled Trusted Platform Module (TPM).

In the task sequence editor, click **Add**, select **Disks**, and select **Pre-provision BitLocker** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Apply BitLocker to the specified drive**
Specify the drive for which you want to enable BitLocker. Only the used space on the drive is encrypted.

**Skip this step for computers that do not have a TPM or when TPM is not enabled**
Select this option to skip drive encryption on a computer that does not contain a supported or enabled TPM. For example, use this option when you deploy an operating system to a virtual machine.

## <a name="BKMK_ReleaseStateStore"></a> Release State Store

Use this step to notify the state migration point that the capture or restore action is complete.
Use this step in conjunction with the **Request State Store**, **Capture User State**, and **Restore User State** steps. You use these steps to migrate user state data by using a state migration point and the User State Migration Tool (USMT). For more information about managing the user state when deploying operating systems, see [Manage user state](../get-started/manage-user-state.md).

If you use the **Request State Store** step to request access to a state migration point to *capture* user state, this step notifies the state migration point that the capture process is complete. The state migration point then marks the user state data as available for restore. The state migration point sets the access control permissions for the user state data so that only the restoring computer has read-only access.

If you use the **Request State Store** step to request access to a state migration point to *restore* user state, this step notifies the state migration point that the restore process is complete. The state migration point then activates its configured data retention settings.

> [!IMPORTANT]
> A best practice is to set the **Continue on Error** option for any steps between the **Request State Store** and **Release State Store** steps. Every **Request State Store** step must have a matching **Release State Store** step.

This task sequence step runs only in a standard operating system. It does not run in Windows PE.

For information about task sequence variables for this task sequence action, see [Release State Store Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_ReleaseStateStore).

In the task sequence editor, click **Add**, select **User State**, and select **Release State Store** to add this step.

### Properties

This step does not require any settings on the **Properties** tab.

## <a name="BKMK_RequestStateStore"></a> Request State Store

Use this step to request access to a state migration point when capturing or restoring state.
For more information about managing the user state when deploying operating systems, see [Manage user state](../get-started/manage-user-state.md).

Use this step in conjunction with the **Release State Store**, **Capture User State**, and **Restore User State** steps. You use these steps to migrate computer state by using a state migration point and the User State Migration Tool (USMT).

> [!NOTE]
> When creating a new state migration point, user state storage is not available for up to one hour. To expedite availability, adjust any property settings on the state migration point to trigger a site control file update.

This task sequence step runs in a standard operating system and in Windows PE for offline USMT.

For information about the task sequence variables for this task sequence action, see [Request State Store Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_RequestState).

In the task sequence editor, click **Add**, select **User State**, and select **Request State Store** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Capture state from the computer**
This option requests access to the state migration point for the purpose of capturing the user state and settings from a computer. If the Configuration Manager site has multiple active state migration points, this step finds a state migration point with available disk space. The task sequence queries the management point for a list of state migration points, and then evaluates each until it finds one that meets the minimum requirements as configured in the state migration point settings, for example, **Maximum number of clients** and **Minimum amount of free disk space**. This option does not guarantee sufficient space is available at the time of state migration.
**Restore state from another computer**
Request access to a state migration point to restore previously captured user state and settings to a destination computer. If there are multiple state migration points, this step finds the state migration point that has the state for the destination computer.

**Number of retries**
The number of times that this step tries to find an appropriate state migration point before failing.

**Retry delay (in seconds)**
The amount of time in seconds that the task sequence step waits between retry attempts.

**If computer account fails to connect to a state store, use the network access account**
If the task sequence cannot access the state migration point by using the computer account, it uses the network access account credentials to connect. This option is less secure because other computers could use the network access account to access the stored state. This option might be necessary if the destination computer is not domain joined.

## <a name="BKMK_RestartComputer"></a> Restart Computer

Use this step to restart the computer running the task sequence. After the restart, the computer automatically continues with the next step in the task sequence.

This step can run in either a standard operating system or Windows PE.

For more information about the task sequence variables for this task sequence action, see [Restart computer task sequence action variables](task-sequence-action-variables.md#BKMK_RestartComputer).

In the task sequence editor, click **Add**, select **General**, and select **Restart Computer** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**The boot image assigned to this task sequence**
Select this option for the destination computer to use the boot image assigned to the task sequence. The task sequence uses the boot image to run subsequent steps in Windows PE.
**The currently installed default operating system**
Select this option for the destination computer to reboot into the installed operating system.

**Notify the user before restarting**
Select this option to display a notification to the user before the destination computer restarts. The step selects this option by default.

**Notification message**
Enter a notification message to display to the user before the destination computer restarts.

**Message display time-out**
Specify the amount of time in seconds before the destination computer restarts. The default is 60 seconds.

## <a name="BKMK_RestoreUserState"></a> Restore User State

Use this step to initiate the User State Migration Tool (USMT) to restore user state and settings to the destination computer. You use this step in conjunction with the **Capture User State** step. For more information about managing the user state when deploying operating systems, see [Manage user state](../get-started/manage-user-state.md).

Use this step with the **Request State Store** and **Release State Store** steps to save or restore the state settings with a state migration point.

With USMT 3.0 and later, this option always decrypts the USMT state store by using an encryption key generated and managed by Configuration Manager.

The **Restore User State** step provides control over a limited subset of the most commonly used USMT options. Specify additional command-line options with the OSDMigrateAdditionalRestoreOptions task sequence variable.

> [!IMPORTANT]
> If you are using this step for a purpose unrelated to an operating system deployment scenario, add the [Restart Computer](#BKMK_RestartComputer) step immediately following the **Restore User State** step.

This step runs only in a standard operating system. It does not run in Windows PE.
For information about the task sequence variables for this task sequence action, see [Restore User State Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_RestoreUserState).

In the task sequence editor, click **Add**, select **User State**, and select **Restore User State** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**User state migration tool package**
Specify the package that contains the version of USMT for this step to use. This package does not require a program. When the step runs, the task sequence uses the version of USMT in the specified package. Specify a package containing the 32-bit or 64-bit version of USMT. The architecture of USMT depends upon the architecture of the operating system to which the task sequence is restoring state.

**Restore all captured user profiles with standard options**
Restores the captured user profiles with the standard options. To customize the options that USMT restores, select **Customize how user profiles are restored** instead.

**Customize how user profiles are restored**
Allows you to customize the files that you want to restore to the destination computer. Click **Files** to specify the configuration files in the USMT package to use for restoring the user profiles. To add a configuration file, enter the name of the file in the **Filename** box, and then click **Add**. The Files pane lists the configuration files that USMT uses. The .xml files you specify define which user files USMT restores.

**Restore local computer user profiles**
Restores the local computer user profiles. These profiles are not for domain users. You must assign new passwords to the restored local user accounts. USMT cannot migrate the original passwords. Enter the new password in the **Password** box, and confirm the password in the **Confirm Password** box.
**Continue if some files cannot be restored**
Continues restoring user state and settings even if USMT is unable to restore some files. The step enables this option by default. If you disable this option and USMT encounters errors while restoring files, this step fails immediately, and USMT does not restore all files.

**Enable verbose logging**
Enable this option to generate more detailed log file information. When restoring state, the task sequence by default generates Loadstate.log in the task sequence log folder, \windows\system32\ccm\logs.

## <a name="BKMK_RunCommandLine"></a> Run Command Line

Use this step to run the specified command line.

This step can run in a standard operating system or Windows PE.

For information about task sequence variables for this task sequence action, see [Run Command Line Task Sequence Action Variables](task-sequence-action-variables.md#BKMK_RunCommand).

In the task sequence editor, click **Add**, select **General**, and select **Run Command Line** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Command line**
Specifies the command line to run. This field is required. Including the file name extension is a best practice, for example, .vbs and .exe. Include all required settings files, command-line options, or switches. If the file name does not have a file name extension specified, Configuration Manager tries .com, .exe, and .bat. If the file name has an extension that is not an executable, Configuration Manager tries to apply a local file association. For example, if the command line is readme.gif, Configuration Manager starts the application specified on the destination computer for opening .gif files.

Examples:

`setup.exe /a`

`cmd.exe /c copy Jan98.dat c:\sales\Jan98.dat`

> [!NOTE]
> To run successfully, precede command-line actions such as output redirection, piping, and copy commands with the **cmd.exe /c** command.
**Disable 64-bit file system redirection**
By default, 64-bit operating systems use the WOW64 file system redirector to run command lines. This behavior properly finds 32-bit versions of operating system executables and libraries. Select this option to disable the use of the WOW64 file system redirector. Windows then runs the command by using native 64-bit versions of operating system executables and libraries. This option has no effect when running on a 32-bit operating system.

**Start in**
Specifies the executable folder for the program, up to 127 characters. This folder can be an absolute path on the destination computer or a path relative to the distribution point folder that contains the package. This field is optional.

Examples:

**c:\officexp**

**i386**

> [!NOTE]
> The **Browse** button browses the local computer for files and folders. Anything you select must also exist on the destination computer in the same location and with the same file and folder names.

**Package**
When you specify files or programs on the command line that are not already present on the destination computer, select this option to specify the Configuration Manager package that contains the appropriate files. The package does not require a program. This option is not required if the specified files exist on the destination computer.

**Time-out**
Specifies a value that represents how long Configuration Manager allows the command line to run. This value can be from 1 minute to 999 minutes. The default value is 15 minutes. This option is disabled by default.

> [!IMPORTANT]
> If you enter a value that does not allow enough time for the specified command to complete successfully, this step fails. The entire task sequence could fail depending on other control settings. If the time-out expires, Configuration Manager terminates the command-line process.

**Run this step as the following account**
Specifies that the command line runs as a Windows user account other than the local system account.
> [!NOTE]
> To run simple scripts or commands with another account after installing the operating system, you must first add the account to the computer. Additionally, you must restore the Windows user account profile to run more complex programs, such as a Windows Installer.

**Account**
Specifies the Windows user account this step uses to run the command line. The command line runs with the permissions of the specified account. Click **Set** to specify the local user or domain account.

> [!IMPORTANT]
> If this step specifies a user account and runs in Windows PE, the action fails because you cannot join Windows PE to a domain. The smsts.log file records this failure.

## <a name="BKMK_RunPowerShellScript"></a> Run PowerShell Script

Use this step to run the specified PowerShell script.

This step can run in a standard operating system or Windows PE. To run this step in Windows PE, PowerShell must be enabled in the boot image. You can enable Windows PowerShell (WinPE-PowerShell) from the **Optional Components** tab in the properties for the boot image. For more information about how to modify a boot image, see [Manage boot images](../get-started/manage-boot-images.md).

> [!NOTE]
> PowerShell is not enabled by default on Windows Embedded operating systems.

In the task sequence editor, click **Add**, select **General**, and select **Run PowerShell Script** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Package**
Specify the Configuration Manager package that contains the PowerShell script. One package can contain multiple PowerShell scripts.

**Script name**
Specifies the name of the PowerShell script to run. This field is required.

**Parameters**
Specifies the parameters passed to the Windows PowerShell script. These parameters are the same as the Windows PowerShell script parameters on the command line.
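As a sketch of how a script consumes these parameters, a hypothetical MyScript.ps1 that accepts two named parameters (the MyParameter1 and MyParameter2 names are illustrative, not defined by Configuration Manager) might declare them with a `param()` block:

```powershell
# MyScript.ps1 -- illustrative example, not part of Configuration Manager.
# The step passes its Parameters field to this script, for example:
#   -MyParameter1 MyValue1 -MyParameter2 MyValue2
param(
    [string]$MyParameter1,
    [string]$MyParameter2
)

# The script can then use the values like any PowerShell variable.
Write-Output "Received: $MyParameter1, $MyParameter2"
```

Only parameters that the script itself declares belong in the **Parameters** field; switches for powershell.exe, such as **-ExecutionPolicy**, do not.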
> [!IMPORTANT]
> Provide parameters consumed by the script, not for the Windows PowerShell command line.
>
> The following example contains valid parameters:
>
> `-MyParameter1 MyValue1 -MyParameter2 MyValue2`
>
> The following example contains invalid parameters. The first two items are Windows PowerShell command-line parameters (**-NoLogo** and **-ExecutionPolicy Unrestricted**). The script does not consume these parameters.
>
> `-NoLogo -ExecutionPolicy Unrestricted -File MyScript.ps1 -MyParameter1 MyValue1 -MyParameter2 MyValue2`

**PowerShell execution policy**
Determine which Windows PowerShell scripts (if any) you allow to run on the computer. Choose one of the following execution policies:

- **AllSigned**: Only run scripts signed by a trusted publisher
- **Undefined**: Do not define any execution policy
- **Bypass**: Load all configuration files and run all scripts. If you download an unsigned script from the Internet, Windows PowerShell does not prompt for permission before running the script.

> [!IMPORTANT]
> PowerShell 1.0 does not support the Undefined and Bypass execution policies.

## <a name="child-task-sequence"></a> Run Task Sequence

Beginning with Configuration Manager version 1710, you can add a new step that runs another task sequence. This step creates a parent-child relationship between the task sequences. With child task sequences, you can create more modular, reusable task sequences.

Consider the following statements when you add a child task sequence to a task sequence:

- The parent and child task sequences are effectively combined into a single policy that the client runs.
- The environment is global. If the parent task sequence sets a variable, and then the child task sequence changes that variable, it retains the latest value. If the child task sequence creates a new variable, it is available for the rest of the parent task sequence.
- Status messages are sent per normal for a single task sequence operation.
- The task sequences write entries to the smsts.log file, with new log entries that make it clear when a child task sequence starts.

In the task sequence editor, click **Add**, select **General**, and select **Run Task Sequence** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Select task sequence to run**
Click **Browse** to select the child task sequence. The **Select a Task Sequence** dialog box does not display the parent task sequence.

## <a name="BKMK_SetDynamicVariables"></a> Set Dynamic Variables

Use this step to perform the following actions:

1. Gather information from the computer and the environment that it is in, and then set specified task sequence variables with the information.
2. Evaluate defined rules, and set task sequence variables based on the variables and values configured for rules that evaluate to true.

The task sequence automatically sets the following read-only task sequence variables:

- &#95;SMSTSMake
- &#95;SMSTSModel
- &#95;SMSTSMacAddresses
- &#95;SMSTSIPAddresses
- &#95;SMSTSSerialNumber
- &#95;SMSTSAssetTag
- &#95;SMSTSUUID

This step can run in either a standard operating system or Windows PE.

For more information about task sequence variables, see [Task sequence action variables](task-sequence-action-variables.md).

In the task sequence editor, click **Add**, select **General**, and select **Set Dynamic Variables** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Dynamic rules and variables**
To set a dynamic variable for use in the task sequence, add a rule, and then set a value for each variable specified in the rule. Additionally, you can add one or more variables without adding a rule.

When you add a rule, choose from the following categories:

- **Computer**: Evaluate values for hardware asset tag, UUID, serial number, or MAC address. Set multiple values as necessary.
  If any value is true, the rule evaluates as true. For example, the following rule evaluates as true if the device serial number is 5892087 or the MAC address is 26-78-13-5A-A4-22:

  `IF Serial Number = 5892087 OR MAC address = 26-78-13-5A-A4-22 THEN`

- **Location**: Evaluate values for the default network gateway.

- **Make and Model**: Evaluate values for the make and model of a computer. Both the make and model must evaluate to true for the rule to evaluate to true.

  <!-- for future edits: an escape code must be used for the bolded asterisk character, but may be removed somewhere along the way. Instead of five asterisks, should be bold tags with &#42; in-between -->

  Starting in Configuration Manager version 1610, you can specify an asterisk (**&#42;**) and question mark (**?**) as wildcards, where **&#42;** matches multiple characters and **?** matches a single character. For example, the string "DELL&#42;900?" matches DELL-ABC-9001 and DELL9009.

- **Task Sequence Variable**: Add a task sequence variable, condition, and value to evaluate. The rule evaluates to true when the value set for the variable meets the specified condition.

Specify one or more variables to set for a rule that evaluates to true, or set variables without using a rule. Select an existing variable, or create a custom variable:

- **Existing task sequence variables**: Select one or more variables from a list of existing task sequence variables. Array variables are not available to select.

- **Custom task sequence variables**: Define a custom task sequence variable. You can also specify an existing task sequence variable.
  This setting is useful to specify an existing variable array, such as OSDAdapter, since variable arrays are not in the list of existing task sequence variables.

After you select the variables for a rule, you must provide a value for each variable. The variable is set to the specified value when the rule evaluates to true. For each variable, you can select **Secret value** to hide the value of the variable. By default, some existing variables hide values, such as the OSDCaptureAccountPassword task sequence variable.

> [!IMPORTANT]
> Configuration Manager removes any variable values marked as a **Secret value** when you import a task sequence with the **Set Dynamic Variables** step. Re-enter the value for the dynamic variable after you import the task sequence.

## <a name="BKMK_SetTaskSequenceVariable"></a> Set Task Sequence Variable

Use this step to set the value of a variable that is used with the task sequence.

This step can run in either a standard operating system or Windows PE.

Task sequence variables are read by task sequence actions and specify the behavior of those actions. For more information about specific task sequence action variables, see [Task sequence action variables](task-sequence-action-variables.md). For more information about specific task sequence built-in variables, see [Task sequence built-in variables](/sccm/osd/understand/task-sequence-built-in-variables).

In the task sequence editor, click **Add**, select **General**, and select **Set Task Sequence Variable** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Task sequence variable**
Specify the name of a task sequence built-in or action variable, or specify your own user-defined variable name.

**Value**
The task sequence sets the variable to this value. To set this task sequence variable to the value of another task sequence variable, use the syntax %varname%.
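Scripts that run later in the task sequence can read or set the same variables through the Microsoft.SMS.TSEnvironment COM object from the Configuration Manager SDK. The following PowerShell sketch only works inside a running task sequence (the COM object does not exist otherwise), and the OSDExampleVar name is an illustrative placeholder:

```powershell
# Works only inside a running task sequence, where the
# Microsoft.SMS.TSEnvironment COM object is registered.
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment

# Set a custom variable (OSDExampleVar is an illustrative name).
$tsenv.Value('OSDExampleVar') = 'MyValue'

# Read a built-in variable that the task sequence sets automatically.
$logPath = $tsenv.Value('_SMSTSLogPath')
Write-Output "Task sequence log folder: $logPath"
```

Values set this way behave the same as values set by the **Set Task Sequence Variable** step: they are global to the rest of the task sequence.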
## <a name="BKMK_SetupWindowsandConfigMgr"></a> Setup Windows and ConfigMgr

Use this step to perform the transition from Windows PE to the new operating system. This task sequence step is a required part of any operating system deployment. It installs the Configuration Manager client into the new operating system and prepares for the task sequence to continue execution in the new operating system.

This step runs only in Windows PE. It does not run in a standard operating system.

For more information about task sequence variables for this task sequence action, see [Setup Windows and ConfigMgr task sequence action variables](task-sequence-action-variables.md#BKMK_SetupWindows).

This step replaces sysprep.inf or unattend.xml directory variables, such as %WINDIR% and %ProgramFiles%, with the Windows PE installation directory, X:\Windows. The task sequence ignores variables specified by using these environment variables.

Use this task sequence step to perform the following actions:

1. Preliminaries: Windows PE

    1. Substitute task sequence variables in the unattend.xml file.
    2. Download the package that contains the Configuration Manager client. Add the package to the deployed image.

2. Set up Windows

    1. Image-based installation:

        1. Disable the Configuration Manager client in the image, if it exists. In other words, disable Autostart for the Configuration Manager client service.
        2. Update the registry in the deployed image to start the deployed operating system with the same drive letter as the reference computer.
        3. Restart to the deployed operating system.
        4. Windows mini-setup runs by using the previously specified sysprep.inf or unattend.xml answer file that has all end-user interaction suppressed. If you use the **Apply Network Settings** step to join a domain, then that information is in the answer file. Windows mini-setup joins the computer to the domain.

    2. Setup.exe-based installation. Runs Setup.exe, which follows the typical Windows setup process:

        1. Copy the OS upgrade package, specified in the **Apply Operating System** step, to the hard disk drive.
        2. Restart to the newly deployed operating system.
        3. Windows mini-setup runs by using the previously specified sysprep.inf or unattend.xml answer file that has all user interface settings suppressed. If you use the **Apply Network Settings** step to join a domain, then that information is in the answer file. Windows mini-setup joins the computer to the domain.

3. Set up the Configuration Manager client

    1. After Windows mini-setup finishes, the task sequence resumes by using setupcomplete.cmd.
    2. Enable or disable the local administrator account, based on the option selected in the **Apply Windows Settings** step.
    3. Install the Configuration Manager client by using the previously downloaded package, and installation properties specified in this step. The client installs in "provisioning mode" to prevent it from processing new policy requests until the task sequence completes.
    4. Wait for the client to be fully operational.

4. The task sequence continues running the next step.

<!-- Engineering confirmed that the task sequence does nothing with respect to group policy processing.
> [!NOTE]
> The **Setup Windows and ConfigMgr** task sequence action is responsible for running Group Policy on the newly installed computer. The Group Policy is applied after the task sequence is finished.
-->

In the task sequence editor, click **Add**, select **Images**, and select **Setup Windows and ConfigMgr** to add this step.

### Properties

On the **Properties** tab for this step, configure the settings described in this section.

**Client package**
Click **Browse**, then select the Configuration Manager client installation package to use with this step.
**Use pre-production client package when available** If there is a pre-production client package available, and the computer is a member of the piloting collection, the task sequence uses this package instead of the production client package. The pre-production client is a newer version for testing in the production environment. Click **Browse**, then select the pre-production client installation package to use with this step. **Installation Properties** Site assignment and the default configuration are automatically specified by the task sequence action. You can use this field to specify any additional installation properties to use when you install the client. To enter multiple installation properties, separate them with a space. You can specify command-line options to use during client installation. For example, you can enter **/skipprereq: silverlight.exe** to inform CCMSetup.exe not to install the Microsoft Silverlight prerequisite. For more information about available command-line options for CCMSetup.exe, see [About client installation properties](../../core/clients/deploy/about-client-installation-properties.md). ### Options > [!NOTE] > Do not enable **Continue on error** on the **Options** tab. If there is an error during this step, the task sequence fails whether or not you enable this setting. ## <a name="BKMK_UpgradeOS"></a> Upgrade Operating System > [!TIP] > Beginning with Windows 10, version 1709, media includes multiple editions. When you configure a task sequence to use an OS upgrade package or OS image, be sure to select a [supported edition](/sccm/core/plan-design/configs/support-for-windows-10#windows-10-as-a-client). Use this step to upgrade an older version of Windows to a newer version of Windows 10. This task sequence step runs only in a standard operating system. It does not run in Windows PE. In the task sequence editor, click **Add**, select **Images**, and select **Upgrade Operating System** to add this step. 
### Properties On the **Properties** tab for this step, configure the settings described in this section. **Upgrade package** Select this option to specify the Windows 10 operating system upgrade package to use for the upgrade. **Source path** Specifies a local or network path to the Windows 10 media that Windows Setup uses. This setting corresponds to the Windows Setup command-line option **/InstallFrom**. You can also specify a variable, such as %mycontentpath% or %DPC01%. When you use a variable for the source path, it must be specified earlier in the task sequence. For example, if you use the [Download Package Content](#BKMK_DownloadPackageContent) step in the task sequence, you can specify a variable for the location of the operating system upgrade package. Then, you can use that variable for the source path for this step. **Edition** Specify the edition within the operating system media to use for the upgrade. **Product key** Specify the product key to apply to the upgrade process **Provide the following driver content to Windows Setup during upgrade** Add drivers to the destination computer during the upgrade process. This setting corresponds to the Windows Setup command-line option **/InstallDriver**. The drivers must be compatible with Windows 10. Specify one of the following options: - **Driver package**: Click **Browse** and select an existing driver package from the list. - **Staged content**: Select this option to specify the location for the driver package. You can specify a local folder, network path, or a task sequence variable. When you use a variable for the source path, it must be specified earlier in the task sequence. For example, by using the [Download Package Content](task-sequence-steps.md#BKMK_DownloadPackageContent) step. **Time-out (minutes)** Specifies the number of minutes before Configuration Manager fails this step. This option is useful if Windows Setup stops processing but does not terminate. 
**Perform Windows Setup compatibility scan without starting upgrade** Perform the Windows Setup compatibility scan without starting the upgrade process. This setting corresponds to the Windows Setup command-line option **/Compat ScanOnly**. You must deploy the entire installation source when you use this option. Setup returns an exit code as a result of the scan. The following table provides some of the more common exit codes. |Exit code|Details| |-|-| |MOSETUP_E_COMPAT_SCANONLY (0xC1900210)|No compatibility issues ("success").| |MOSETUP_E_COMPAT_INSTALLREQ_BLOCK (0xC1900208)|Actionable compatibility issues.| |MOSETUP_E_COMPAT_MIGCHOICE_BLOCK (0xC1900204)|Selected migration choice is not available. For example, an upgrade from Enterprise to Professional.| |MOSETUP_E_COMPAT_SYSREQ_BLOCK (0xC1900200)|Not eligible for Windows 10.| |MOSETUP_E_COMPAT_INSTALLDISKSPACE_BLOCK (0xC190020E)|Not enough free disk space.| For more information about this parameter, see [Windows Setup Command-Line Options](https://msdn.microsoft.com/library/windows/hardware/dn938368\(v=vs.85\).aspx) **Ignore any dismissible compatibility messages** Specifies that Setup completes the installation, ignoring any dismissible compatibility messages. This setting corresponds to the Windows Setup command-line option **/Compat IgnoreWarning**. **Dynamically update Windows Setup with Windows Update** Enable setup to perform Dynamic Update operations, such as search, download, and install updates. This setting corresponds to the Windows Setup command-line option **/DynamicUpdate**. This setting is not compatible with Configuration Manager software updates. Enable this option when you manage updates with stand-alone Windows Server Update Services (WSUS) or Windows Update for Business. **Override policy and use default Microsoft Update** Temporarily override the local policy in real-time to run Dynamic Update operations and have the computer get updates from Windows Update.
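The compat-scan exit codes in the table above can be checked programmatically when you collect Setup results. Here is a minimal sketch; the function name and the unsigned-value normalization are our own, not part of Configuration Manager:

```python
# Sketch: interpreting Windows Setup /Compat ScanOnly exit codes.
# The code values come from the table above.
MOSETUP_COMPAT_CODES = {
    0xC1900210: "No compatibility issues",
    0xC1900208: "Actionable compatibility issues",
    0xC1900204: "Selected migration choice is not available",
    0xC1900200: "Not eligible for Windows 10",
    0xC190020E: "Not enough free disk space",
}

def describe_compat_exit(code):
    """Return a human-readable description for a compat-scan exit code."""
    # Exit codes surface as unsigned 32-bit values; some tools report them
    # as negative signed ints (e.g. -1047526896 == 0xC1900210), so normalize.
    code &= 0xFFFFFFFF
    return MOSETUP_COMPAT_CODES.get(code, "Unknown exit code 0x%08X" % code)

print(describe_compat_exit(0xC1900210))
```

This is handy when parsing task sequence logs or wrapping a standalone compatibility scan in a script.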
75.537292
837
0.761745
eng_Latn
0.997535
2cdd5b186dc3b0c087b5900ad0b3d48911625b07
163
md
Markdown
README.md
jtagt/HypixelAuctions
2e7e86c580fa14c2469f7f8ea6d12ae310723b32
[ "MIT" ]
10
2019-11-23T12:14:18.000Z
2021-02-16T17:33:02.000Z
README.md
jtagt/HypixelAuctions
2e7e86c580fa14c2469f7f8ea6d12ae310723b32
[ "MIT" ]
null
null
null
README.md
jtagt/HypixelAuctions
2e7e86c580fa14c2469f7f8ea6d12ae310723b32
[ "MIT" ]
8
2019-11-23T12:23:29.000Z
2021-05-15T09:43:22.000Z
# HypixelAuctions

This is the rushed backend of Hypixel auctions (very messy, I know).

**If used, credit would be appreciated. No support will be provided.**
27.166667
72
0.760736
eng_Latn
0.999083
2cddb2d857f1f810b08efc10f04c53bf3e4c510a
178
md
Markdown
README.md
amadea-system/void
8904456ba028ed9b2b40e73e894a50124381ff3c
[ "Apache-2.0" ]
null
null
null
README.md
amadea-system/void
8904456ba028ed9b2b40e73e894a50124381ff3c
[ "Apache-2.0" ]
null
null
null
README.md
amadea-system/void
8904456ba028ed9b2b40e73e894a50124381ff3c
[ "Apache-2.0" ]
null
null
null
# `void`

`void` is a bot for keeping Discord channels clear of messages. Its original purpose is for `screaming into the void` channels, but use it for whatever, I don't care.
59.333333
168
0.747191
eng_Latn
0.999667
2cde635a6ef88b84134306a61d3e491568efb531
392
md
Markdown
DependencyService/DependencyServiceSample/README.md
JhonP54/xamarin-forms-samples
aedf8999c6c2c4fec3e88fdc648b3afd29277ee4
[ "Apache-2.0" ]
3
2021-05-14T04:39:00.000Z
2021-05-14T20:39:02.000Z
DependencyService/DependencyServiceSample/README.md
HydAu/XaraminForms
e45779134cb1db55fd35fea1e6972636811d3db9
[ "Apache-2.0" ]
null
null
null
DependencyService/DependencyServiceSample/README.md
HydAu/XaraminForms
e45779134cb1db55fd35fea1e6972636811d3db9
[ "Apache-2.0" ]
1
2021-04-01T21:05:59.000Z
2021-04-01T21:05:59.000Z
Dependency Service ================== This sample demonstrates how to use the `DependencyService` to implement text-to-speech, check device orientation, and check battery status. For more information about this sample see [Accessing Native Features with DependencyService](http://developer.xamarin.com/guides/cross-platform/xamarin-forms/dependency-service/). Author ------ Nathan Castle
32.666667
180
0.770408
eng_Latn
0.929051
2cdef65ac5c68344899227d7d42f931927cb9b26
2,405
md
Markdown
README.md
JoseNaime/GuessTheNumber
956552478fcc8d807e086ffc8698951d9b555a08
[ "Unlicense" ]
null
null
null
README.md
JoseNaime/GuessTheNumber
956552478fcc8d807e086ffc8698951d9b555a08
[ "Unlicense" ]
null
null
null
README.md
JoseNaime/GuessTheNumber
956552478fcc8d807e086ffc8698951d9b555a08
[ "Unlicense" ]
null
null
null
# Guess The Number

![React](https://img.shields.io/badge/react-%2320232a.svg?style=for-the-badge&logo=react&logoColor=%2361DAFB)
![React Router](https://img.shields.io/badge/React_Router-CA4245?style=for-the-badge&logo=react-router&logoColor=white)
![JavaScript](https://img.shields.io/badge/javascript-%23323330.svg?style=for-the-badge&logo=javascript&logoColor=%23F7DF1E)
![CSS3](https://img.shields.io/badge/css3-%231572B6.svg?style=for-the-badge&logo=css3&logoColor=white)

In this web app, you have to guess the randomly generated number. How many attempts will you need to find it?

## How to run it locally?

1. Fork and download the project to your local machine
2. Run the following command in your console (be sure you are located in the downloaded project)

   > npm install

3. Now start the web app by running the following command in the console

   > npm run

You should see the following in your browser

[![Guess The Number](http://i3.ytimg.com/vi/2X2wSIlk4m8/maxresdefault.jpg)](https://youtu.be/2X2wSIlk4m8 "Guess The Number")

**Click the image to watch the video**

## Roadmap

> V2.0 (WIP)

- Restart game
- Share your results on your social media
- New animations

> V1.1

- New text game style and format added
- Input field replaced with a slider

> V1.0

- Landing page added
- Input field for numbers
- Game functionality

## License

Copyright 2021 José Pablo Naime García

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
54.659091
460
0.781705
eng_Latn
0.760702
2cdf747742197f1cf2eed68c69afbde4347aff82
3,977
md
Markdown
data/readme_files/Kindari.SublimeXdebug.md
DLR-SC/repository-synergy
115e48c37e659b144b2c3b89695483fd1d6dc788
[ "MIT" ]
5
2021-05-09T12:51:32.000Z
2021-11-04T11:02:54.000Z
data/readme_files/Kindari.SublimeXdebug.md
DLR-SC/repository-synergy
115e48c37e659b144b2c3b89695483fd1d6dc788
[ "MIT" ]
null
null
null
data/readme_files/Kindari.SublimeXdebug.md
DLR-SC/repository-synergy
115e48c37e659b144b2c3b89695483fd1d6dc788
[ "MIT" ]
3
2021-05-12T12:14:05.000Z
2021-10-06T05:19:54.000Z
# SublimeXDebug Simple client to connect with XDebug. ## Features - Automatically display scope variables and stack trace - Debugging layout for stack and variables - Click variable to inspect value - Auto-launch web browser for session based debugging (see below) ![Screenshot](https://github.com/Kindari/SublimeXdebug/raw/master/doc/images/screenshot.png) ## Quick start Use `Shift+f8` to show a list of actions: - **Start debugger**: Start listening for an XDebug connection - **Add/Remove Breakpoint**: A marker in the gutter shows the breakpoint Once the XDebug connection is captured, using the same shortcut shows these XDebug actions: - **Continue**: Shows the debugger control menu (see below) - **Stop debugger**: Stop listening - **Add/remove breakpoint** - **Status**: Shows the client status in the status bar ### Debugger control menu - **Run**: run to the next breakpoint or end of the script - **Step Over**: steps to the next statement, if there is a function call on the line from which the step_over is issued then the debugger engine will stop at the statement after the function call in the same scope as from where the command was issued - **Step Out**: steps out of the current scope and breaks on the statement after returning from the current function - **Step Into**: steps to the next statement, if there is a function call involved it will break on the first statement in that function - **Stop**: stops script execution immediately - **Detach**: stops interaction with debugger but allows script to finish ## Shortcut keys - `Shift+f8`: Open XDebug quick panel - `f8`: Open XDebug control quick panel when debugger is connected - `Ctrl+f8`: Toggle breakpoint - `Ctrl+Shift+f5`: Run to next breakpoint - `Ctrl+Shift+f6`: Step over - `Ctrl+Shift+f7`: Step into - `Ctrl+Shift+f8`: Step out ## Session based debugging This plugin can initiate and terminate a debugging session by launching your default web browser with the XDEBUG_SESSION_START or XDEBUG_SESSION_STOP 
parameters. The debug URL is defined in your .sublime-project file like this: { "folders": [ { "path": "..." }, ], "settings": { "xdebug": { "url": "http://your.web.server" } } } If you don't configure the URL, the plugin will still listen for debugging connections from XDebug, but you will need to trigger XDebug <a href="http://XDebug.org/docs/remote">for a remote session</a>. The IDE Key should be "sublime.xdebug". ## Gutter icon color You can change the color of the gutter icons by adding the following scopes to your theme file: xdebug.breakpoint, xdebug.current. Icons from [Font Awesome](http://fortawesome.github.com/Font-Awesome/). ## Installing XDebug Of course, SublimeXDebug won't do anything if you don't <a href="http://xdebug.org/docs/install">install and configure XDebug first</a>. Here's how I setup XDebug on Ubuntu 12.04: - sudo apt-get install php5-xdebug - Configure settings in /etc/php5/conf.d/xdebug.ini - Restart Apache ## Troubleshooting XDebug won't stop at breakpoints on empty lines. The breakpoint must be on a line of PHP code. If your window doesn't remove the debugging views when you stop debugging, then you can revert to a single document view by pressing `Shift+Alt+1` The debugger assumes XDebug is configured to connect on port 9000. Fixing pyexpat module errors. In Ubuntu you might need to do the following because Ubuntu stopped shipping Python 2.6 libraries a long time ago: $ sudo apt-get install python2.6 $ ln -s /usr/lib/python2.6 [Sublime Text dir]/lib/ On Ubuntu 12.04, Python 2.6 isn't available, so here's what worked for me: - Download python2.6 files from <a href="http://packages.ubuntu.com/lucid/python2.6">Ubuntu Archives</a> - Extract the files: dpkg-deb -x python2.6_2.6.5-1ubuntu6_i386.deb python2.6 - Copy the extracted usr/lib/python2.6 folder to {Sublime Text directory}/lib In theory, it should work with any XDebug client, but I've only tested with PHP.
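The session-based flow described above boils down to appending one query parameter to your project URL. A small Python sketch of what the plugin effectively does when launching the browser; the helper name is our own, and the URL is the placeholder from the config example:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def debug_url(base, key="sublime.xdebug", stop=False):
    """Build the URL the plugin opens to start/stop an XDebug session."""
    # XDebug watches for XDEBUG_SESSION_START / XDEBUG_SESSION_STOP in the
    # query string; the IDE key identifies this client.
    param = "XDEBUG_SESSION_STOP" if stop else "XDEBUG_SESSION_START"
    parts = urlparse(base)
    query = (parts.query + "&" if parts.query else "") + urlencode({param: key})
    return urlunparse(parts._replace(query=query))

print(debug_url("http://your.web.server"))
```

Visiting such a URL sets the XDebug session cookie, so subsequent requests trigger the debugger until the stop URL is visited.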
39.376238
251
0.750566
eng_Latn
0.962576
2ce020562070ca3ba48a354b5c4710d27e341ffe
2,833
md
Markdown
powerbi-docs/consumer/end-user-insights.md
Mdlglobal-atlassian-net/powerbi-docs.ko-kr
d38184c3d983ac15ae2d43ba7d6a99eaefd9ac50
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/consumer/end-user-insights.md
Mdlglobal-atlassian-net/powerbi-docs.ko-kr
d38184c3d983ac15ae2d43ba7d6a99eaefd9ac50
[ "CC-BY-4.0", "MIT" ]
null
null
null
powerbi-docs/consumer/end-user-insights.md
Mdlglobal-atlassian-net/powerbi-docs.ko-kr
d38184c3d983ac15ae2d43ba7d6a99eaefd9ac50
[ "CC-BY-4.0", "MIT" ]
1
2020-05-28T15:45:31.000Z
2020-05-28T15:45:31.000Z
---
title: Run and view insights on dashboard tiles
description: Learn how to get insights about dashboard tiles as a Power BI end user.
author: mihart
ms.reviewer: ''
featuredvideoid: et_MLSL2sA8
ms.service: powerbi
ms.subservice: powerbi-consumer
ms.topic: conceptual
ms.date: 03/11/2020
ms.author: mihart
LocalizationGroup: Dashboards
ms.openlocfilehash: 891a9b1a5afee26bdb2d6b363ccd2cee5f2461cb
ms.sourcegitcommit: 7aa0136f93f88516f97ddd8031ccac5d07863b92
ms.translationtype: HT
ms.contentlocale: ko-KR
ms.lasthandoff: 05/05/2020
ms.locfileid: "79377287"
---
# <a name="view-data-insights-on-dashboard-tiles-with-power-bi"></a>View data insights on dashboard tiles with Power BI

[!INCLUDE[consumer-appliesto-yyny](../includes/consumer-appliesto-yyny.md)]

Each visual [tile](end-user-tiles.md) on a dashboard is a doorway into data exploration. Selecting a tile opens a report or [opens Q&A](end-user-q-and-a.md), where you can filter and sort the dataset behind the report and dig into it in detail. And when you run insights, Power BI does the data exploration for you.

![ellipsis menu](./media/end-user-insights/power-bi-insight.png)

Run insights to generate interesting, interactive visuals based on your data. You can run insights on a specific dashboard tile, and you can even run insights on an insight. The insights feature builds on a growing [set of advanced analytical algorithms](end-user-insight-types.md) developed in conjunction with Microsoft Research. We'll continue working on it so that more people can use the information in their data in new and intuitive ways.

## <a name="run-insights-on-a-dashboard-tile"></a>Run insights on a dashboard tile

When you run insights on a dashboard tile, Power BI searches only the data that was used to create that single dashboard tile.

1. [Open a dashboard](end-user-dashboards.md).

2. Hover over a tile. Select **More options** (...), and then select **View insights**.

    ![ellipsis menu](./media/end-user-insights/power-bi-hovers.png)

3. The tile opens in [focus mode](end-user-focus.md) with the insight cards displayed on the right.

    ![focus mode](./media/end-user-insights/power-bi-insights-tile.png)

4. Does one insight pique your curiosity? Select that insight card to learn more. The selected insight appears on the left, and new insight cards, based solely on the data in that single insight, display on the right.

## <a name="interact-with-the-insight-cards"></a>Interact with the insight cards

Once an insight is open, continue exploring.

* Filter the visual on the canvas. To display the filters, select the arrow in the upper-right corner to expand the Filters pane.

    ![expanded filter menu](./media/end-user-insights/power-bi-filters.png)

* Run insights on the insight card itself. This is called **related insights**. Select an insight card to activate it. The insight card displays on the report canvas.

    ![expanded filter menu](./media/end-user-insights/power-bi-insight-card.png)

* Select the lightbulb icon ![insights icon](./media/end-user-insights/power-bi-bulb-icon.png) or **Insights** in the upper-right corner. The insight displays on the left, and new cards, based solely on the data in that single insight, display on the right.

    ![menu bar icon showing insights](./media/end-user-insights/power-bi-related.png)

To return to the report, select **Exit Focus mode** in the upper-left corner.

## <a name="considerations-and-troubleshooting"></a>Considerations and troubleshooting

- **View insights** doesn't work with all dashboard tile types. For example, it isn't available for Power BI visuals.<!--[Power BI visuals](end-user-custom-visuals.md)-->

## <a name="next-steps"></a>Next steps

Learn about the [types of quick insights available](end-user-insight-types.md)
40.471429
202
0.710907
kor_Hang
1.00001
2ce033c055e6ba06d05b7fc21930f7bc9c7bf67e
2,480
md
Markdown
_spells/twin-form.md
tynansdtm/pathminder.github.io
9a0cb763af82c13804def4197e0535ee54453581
[ "OML", "RSA-MD" ]
8
2016-07-24T04:27:04.000Z
2019-10-03T21:31:23.000Z
_spells/twin-form.md
tynansdtm/pathminder.github.io
9a0cb763af82c13804def4197e0535ee54453581
[ "OML", "RSA-MD" ]
11
2016-07-11T11:41:13.000Z
2022-02-26T02:55:19.000Z
_spells/twin-form.md
tynansdtm/pathminder.github.io
9a0cb763af82c13804def4197e0535ee54453581
[ "OML", "RSA-MD" ]
8
2016-04-03T17:58:13.000Z
2020-08-01T03:01:11.000Z
--- title: "*twin form*" sources: - Pathfinder Roleplaying Game Advanced Player's Guide school: transmutation spell_lists: - {name: alchemist, level: 6} casting_time: 1 standard action components: - V - S - M (a blend of soil and the caster's blood) range: personal target: you duration: 1 round/level or until discharged (D) --- This extract splits a perfect double of yourself from your body, dressed and equipped exactly as you are. You are able to shift your consciousness from one body to the other once each round as a free action. This shift takes place either immediately before your turn or immediately after it, but not during the round. You may act normally in the body you inhabit. Your other self is treated as though dazed, except it may take a single move action each round during your turn. Your twin cannot speak while you are in your other body, and cannot flank, make attacks of opportunity, or otherwise threaten enemies. Both you and your twin have the same statistics and start with the number of hit points you had when you ingested the extract. Once you have split, these hit points are tracked separately. Any spells, extracts, or magical effects (such as from potions) that were active when you ingested the extract are active for both you and your twin. If any such effects expire, are dispelled, dismissed, or otherwise used or ended, they end for both of you. Extracts or spells cast after you split affect you and your twin as though you were two separate targets. Your equipment is linked between your two selves, and if an item on one is consumed or destroyed, its duplicate is used up or destroyed as well. The body you do not inhabit crumbles into dust when the extract's duration expires or is dismissed. If the body you inhabit is destroyed, you immediately shift to your surviving self and the extract immediately ends. The body you left behind crumbles into dust, and you are stunned until the start of your next turn. 
If the body you do not inhabit is destroyed, the extract also ends immediately, but you suffer no ill effects. You have no special ability to sense what your second body is experiencing, though you immediately know if it has been destroyed. You may switch between bodies at any distance on the same plane. If your bodies cross into separate planes (including through the use of [*teleport*](/spells/teleport/) or [*blink*](/spells/blink/)), the body you inhabit survives, while your other body is destroyed.
95.384615
697
0.784677
eng_Latn
0.999988
2ce0df1b0189aaadea6d20c9b97e1ffd319c78fa
782
md
Markdown
lib/tmx/README.md
hailongz/game
e9ff90b013075ce9cfa47ec3cf07cba2d4404710
[ "MIT" ]
null
null
null
lib/tmx/README.md
hailongz/game
e9ff90b013075ce9cfa47ec3cf07cba2d4404710
[ "MIT" ]
null
null
null
lib/tmx/README.md
hailongz/game
e9ff90b013075ce9cfa47ec3cf07cba2d4404710
[ "MIT" ]
1
2019-07-04T09:21:09.000Z
2019-07-04T09:21:09.000Z
# TMX C Loader --- ## About A portable C library to load [tiled](http://mapeditor.org) maps in your games. ## Dependencies This project depends on [Zlib](http://zlib.net/) and [LibXml2](http://xmlsoft.org). ## Compiling This project uses [cmake](http://cmake.org) as a *build system* builder. You can either use cmake, ccmake or cmake-gui. ### Example : mkdir build cd build cmake .. make && make install ## Usage ```c #include <tmx.h> int main(void) { tmx_map *map = tmx_load("path/map.tmx"); if (!map) { tmx_perror("tmx_load"); return 1; } /* ... */ tmx_map_free(map); return 0; } ``` See the dumper example (`examples/dumper/dumper.c`) for an in-depth usage of TMX. ### Help See the [Wiki](https://github.com/baylej/tmx/wiki/).
16.638298
83
0.636829
eng_Latn
0.68119
2ce108c437864c862908d135164f1f57134a235b
1,002
md
Markdown
_posts/2016-06-22-dangerous-golf.md
SteveBarnett/Bullet-Hell
20710379062e5f61e570fd0f11645451154fb925
[ "MIT" ]
1
2017-04-05T18:54:39.000Z
2017-04-05T18:54:39.000Z
_posts/2016-06-22-dangerous-golf.md
SteveBarnett/Bullet-Hell
20710379062e5f61e570fd0f11645451154fb925
[ "MIT" ]
null
null
null
_posts/2016-06-22-dangerous-golf.md
SteveBarnett/Bullet-Hell
20710379062e5f61e570fd0f11645451154fb925
[ "MIT" ]
null
null
null
--- layout: post-no-feature title: Dangerous Golf category: articles tags: [sport, PS4, sandbox] linky: http://www.threefieldsentertainment.com/dangerous-golf/ img: dangerous-golf.jpg --- {% if page.linky %} <a href="{{page.linky}}">![{{ page.title }}](/images/{{page.img}})</a> {% else %} ![{{ page.title }}](/images/{{page.img}}) {% endif %} * Very, very, pretty and highly detailed environments. Crispy and shiny in all the right places. * Not nearly as satisfying as it could be. You don't feel the heft of a good swing: it's just "press up." The destruction from the tee off feels too random, not tied closely enough to your actions; the destruction from SmashBreakers (when you get to steer the flaming ball around) feels too soggy and not fun enough. * Gets kinda boring kinda quickly. There's not a lot of variation in the game mechanics, and the sub-levels (same location, new bits and pieces) don't feel different enough. {% if page.linky %} [{{ page.title }}]({{page.linky}}) {% endif %}
43.565217
316
0.711577
eng_Latn
0.974148
2ce153e86c731890f80b9b255d30fa46f1df8965
87
md
Markdown
README.md
hunruh/portfolio
e226a12ad1d3ae545635f5b790bf61003f7bb46b
[ "MIT" ]
null
null
null
README.md
hunruh/portfolio
e226a12ad1d3ae545635f5b790bf61003f7bb46b
[ "MIT" ]
null
null
null
README.md
hunruh/portfolio
e226a12ad1d3ae545635f5b790bf61003f7bb46b
[ "MIT" ]
null
null
null
## portfolio Personal website version 2, redesigned and rebuilt using Gatsby and React
29
73
0.816092
eng_Latn
0.997447
2ce21133748674b3f691204233aaa273f859098c
2,030
md
Markdown
articles/load-balancer/load-balancer-security-controls.md
gbuchmsft/azure-docs
bc943dc048d9ab98caf4706b022eb5c6421ec459
[ "CC-BY-4.0", "MIT" ]
2
2021-09-19T19:07:44.000Z
2021-11-15T09:58:47.000Z
articles/load-balancer/load-balancer-security-controls.md
gbuchmsft/azure-docs
bc943dc048d9ab98caf4706b022eb5c6421ec459
[ "CC-BY-4.0", "MIT" ]
1
2019-06-12T00:05:28.000Z
2019-07-09T09:39:55.000Z
articles/load-balancer/load-balancer-security-controls.md
gbuchmsft/azure-docs
bc943dc048d9ab98caf4706b022eb5c6421ec459
[ "CC-BY-4.0", "MIT" ]
1
2020-09-07T03:34:02.000Z
2020-09-07T03:34:02.000Z
---
title: Security controls for Azure Load Balancer
description: A checklist of security controls for evaluating Load Balancer
services: load-balancer
author: asudbring
manager: KumudD
ms.service: load-balancer
ms.topic: conceptual
ms.date: 09/04/2019
ms.author: allensu
---
# Security controls for Azure Load Balancer

This article documents the security controls built into Azure Load Balancer.

[!INCLUDE [Security controls Header](../../includes/security-controls-header.md)]

## Network

| Security control | Yes/No | Notes |
|---|---|--|
| Service endpoint support| N/A | |
| VNet injection support| N/A | |
| Network Isolation and Firewalling support| N/A | |
| Forced tunneling support| N/A | |

## Monitoring & logging

| Security control | Yes/No | Notes|
|---|---|--|
| Azure monitoring support (Log analytics, App insights, etc.)| Yes | See [Azure Monitor logs for public Basic Load Balancer](load-balancer-monitor-log.md). |
| Control and management plane logging and audit| Yes | See [Azure Monitor logs for public Basic Load Balancer](load-balancer-monitor-log.md). |
| Data plane logging and audit | N/A | |

## Identity

| Security control | Yes/No | Notes|
|---|---|--|
| Authentication| N/A | |
| Authorization| N/A | |

## Data protection

| Security control | Yes/No | Notes |
|---|---|--|
| Server-side encryption at rest: Microsoft-managed keys | N/A | |
| Encryption in transit (such as ExpressRoute encryption, in VNet encryption, and VNet-VNet encryption )| N/A | |
| Server-side encryption at rest: customer-managed keys (BYOK) | N/A | |
| Column level encryption (Azure Data Services)| N/A | |
| API calls encrypted| Yes | Via the [Azure Resource Manager](../azure-resource-manager/index.yml). |

## Configuration management

| Security control | Yes/No | Notes|
|---|---|--|
| Configuration management support (versioning of configuration, etc.)| N/A | |

## Next steps

- Learn more about the [built-in security controls across Azure services](../security/fundamentals/security-controls.md).
32.741935
158
0.708867
eng_Latn
0.717024
2ce2748b4a266520dd91a94aef15c4a2e9436ac8
3,143
md
Markdown
docs/home/guides/formula.md
bakerboy448/Plex-Meta-Manager-1
bddedc65c728b47174775821c8cd7b1725b88a17
[ "MIT" ]
null
null
null
docs/home/guides/formula.md
bakerboy448/Plex-Meta-Manager-1
bddedc65c728b47174775821c8cd7b1725b88a17
[ "MIT" ]
null
null
null
docs/home/guides/formula.md
bakerboy448/Plex-Meta-Manager-1
bddedc65c728b47174775821c8cd7b1725b88a17
[ "MIT" ]
null
null
null
# Formula 1 Metadata Guide

This is a guide for setting up Formula 1 in Plex using the `f1_season` metadata attribute. Most of this guide is taken from a reddit [post](https://www.reddit.com/r/PleX/comments/tdzp8x/formula_1_library_with_automatic_metadata/) written by /u/Toastjuh.

## Folder structure

Let's start with the basics:

* Every Formula 1 season will be a TV Show in Plex. Season 2001, Season 2002, etc.
* Every race will be a Season in Plex. Season 1 will be the Australian GP, Season 2 will be the Bahrain GP, etc.
* Every session will be an Episode in Plex. Episode 1 will be Free Practice 1, Episode 2 will be Free Practice 2, etc.

The folder format is like this:

```
Formula                  -> Library Folder
├── Season 2018          -> Folder for each F1 Season
│   ├── 01 - Australian GP   -> Folder for each Race in a season
│   │   ├── 01x01 - Australian GP - Free Practice 1.mkv
│   │   ├── 01x02 - Australian GP - Free Practice 2.mkv
│   │   ├── 01x03 - Australian GP - Free Practice 3.mkv
│   │   ├── 01x04 - Australian GP - Pre-Qualifying Buildup.mkv
│   │   ├── 01x05 - Australian GP - Qualifying Session.mkv
│   │   ├── 01x06 - Australian GP - Post-Qualifying Analysis.mkv
│   │   ├── 01x07 - Australian GP - Pre-Race Buildup.mkv
│   │   ├── 01x08 - Australian GP - Race Session.mkv
│   │   ├── 01x09 - Australian GP - Post-Race Analysis.mkv
│   │   └── 01x10 - Australian GP - Highlights.mkv
│   ├── 02 - Bahrain GP
│   │   ├── 02x01 - Bahrain GP - Free Practice 1.mkv
│   │   ├── 02x02 - Bahrain GP - Free Practice 2.mkv
│   │   ├── 02x03 - Bahrain GP - Free Practice 3.mkv
│   │   ├── 02x04 - Bahrain GP - Pre-Qualifying Buildup.mkv
│   │   ├── 02x05 - Bahrain GP - Qualifying Session.mkv
│   │   ├── 02x06 - Bahrain GP - Post-Qualifying Analysis.mkv
│   │   ├── 02x07 - Bahrain GP - Pre-Race Buildup.mkv
│   │   ├── 02x08 - Bahrain GP - Race Session.mkv
│   │   ├── 02x09 - Bahrain GP - Post-Race Analysis.mkv
│   │   └── 02x10 - Bahrain GP - Highlights.mkv
```

What matters for Plex and for PMM:
* The show name can be whatever you want it to be, but the pre-created metadata file will only work if you use just the year numbers.
* The season folder can be called whatever you want, as long as Plex scans it in with the Season Number matching the race number.
* The episodes must follow Plex's naming convention to be scanned in properly, but in order for PMM to update the metadata the files need to be named specifically like the above.

## Metadata File

```yaml
metadata:
  Season 2021:
    f1_season: 2021
    round_prefix: true
  Season 2020:
    f1_season: 2020
    round_prefix: true
```

* Add `round_prefix: true` to have the race number added to the beginning of the Race Name.
* Add `shorten_gp: true` to shorten `Grand Prix` to `GP` in all titles.

Add an entry for every season you want to set the metadata for. The name needs to correspond with the name the season has in Plex!

You can get posters for the races from https://www.eventartworks.de/
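If you want to sanity-check that your files follow the episode naming convention described above, a quick Python sketch can help; the pattern is our own reading of that convention, not something PMM ships:

```python
import re

# Our reading of the convention above:
# "<race number>x<episode number> - <race name> - <session>.mkv"
EPISODE_RE = re.compile(r"^(\d{2})x(\d{2}) - .+ - .+\.mkv$")

def parse_episode(filename):
    """Return (race_number, episode_number) or None if the name doesn't match."""
    m = EPISODE_RE.match(filename)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))

print(parse_episode("01x08 - Australian GP - Race Session.mkv"))
```

Running this over a season folder quickly flags any file that Plex would scan into the wrong season/episode slot.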
46.220588
179
0.669742
eng_Latn
0.947894
2ce2821877e2e99a59a1a25b8f1ea60e8923d5f2
1,746
md
Markdown
console/README.md
qiueer/tars
4361d5ec039bb3cf21060bd705ceac364065f028
[ "BSD-2-Clause" ]
43
2016-05-03T05:43:57.000Z
2022-03-16T03:04:48.000Z
console/README.md
yuanlong/tars
4361d5ec039bb3cf21060bd705ceac364065f028
[ "BSD-2-Clause" ]
1
2016-05-06T05:52:42.000Z
2016-05-06T05:52:42.000Z
console/README.md
yuanlong/tars
4361d5ec039bb3cf21060bd705ceac364065f028
[ "BSD-2-Clause" ]
17
2016-05-10T05:16:06.000Z
2022-01-12T02:05:43.000Z
# [TARS Package Release System](https://github.com/tencent-tars/tars) - Web Console

## Browser support

All major modern browsers are supported, including IE 9 and above.

## Getting started

1. Deploy the files and configure nginx (or another HTTP server).
2. Deploy and configure the backend `api` site.
3. Edit the configuration file `./tars.ini`. Configure the basic information, the backend service API address, and so on.
4. Adjust the front-end configuration options in `./www/assets/js/app.js`.

## Directory structure

```bash
console/
├── logs/                          # Logs
│   ├── curl.error.log             # API request error log
│   ├── curl.notice.log            # API request access log
│   └── curl.options.error.log     # Request-parameter log for failed API requests
│
├── src/                           # PHP source files
│   ├── common/                    # Shared libraries
│   ├── controller/                # Controllers
│   ├── remote/                    # External request classes
│   └── views/                     # View files
│
├── www/                           # Web source files
│   ├── assets/                    # Static assets
│   │   ├── css/
│   │   │   └── default.css        # Stylesheet auto-generated from `../scss/default.scss`
│   │   ├── js/
│   │   │   ├── app.js             # Application entry JS (front-end options, page state config, etc.)
│   │   │   ├── controllers.js     # Angular controllers
│   │   │   ├── directives.js      # Angular directives (pagination)
│   │   │   └── services.js        # Angular services (filters, API requests, utilities)
│   │   ├── lib/                   # Third-party libraries
│   │   └── scss/                  # SASS source files
│   ├── templates/                 # Page templates
│   └── index.php                  # Web entry file
│
└────── tars.ini                   # Basic configuration file
```

## Development

Main dependencies:

- [Angular.js](https://angularjs.org/)
- [Bootstrap](http://getbootstrap.com/)
- [AngularUI Router](https://github.com/angular-ui/ui-router/wiki)
- [Flight](http://flightphp.com/) - a PHP micro-framework for building RESTful web applications

```bash
cd console
npm install -g gulp    # Install the gulp build tool globally via NPM
npm install            # Install gulp, gulp-sass locally via NPM
gulp                   # Start the gulp task; watches scss files for changes and generates css
```

## License

BSD

Copyright (c) 2015, TENCENT, INC.
26.059701
69
0.469645
yue_Hant
0.650163
2ce2cf655652eb1dd467cac3f56c84d8fadca5c3
309
markdown
Markdown
_posts/2019-03-19-wasserstein-gan.markdown
nailbrainz/nailbrainz.github.io
1a19086468743467543f242f9cc0bdbff61e1aa0
[ "MIT" ]
1
2018-04-27T14:02:55.000Z
2018-04-27T14:02:55.000Z
_posts/2019-03-19-wasserstein-gan.markdown
nailbrainz/nailbrainz.github.io
1a19086468743467543f242f9cc0bdbff61e1aa0
[ "MIT" ]
4
2021-03-29T17:48:07.000Z
2022-03-28T16:04:57.000Z
_posts/2019-03-19-wasserstein-gan.markdown
nailbrainz/nailbrainz.github.io
1a19086468743467543f242f9cc0bdbff61e1aa0
[ "MIT" ]
null
null
null
--- layout: post title: "Wasserstein Gan" date: 2019-03-18 09:00:05 +0800 categories: deep_learning use_math: true tags: deep_learning gan --- https://www.youtube.com/watch?v=ymWDGzpQdls <a href="https://arxiv.org/abs/1710.10196" target="_blank">https://arxiv.org/abs/1710.10196</a> TODO: DO something!
20.6
95
0.724919
kor_Hang
0.159535
2ce36806d50ddf500e90734668e02a3024f23dcd
1,361
md
Markdown
angular/src/lib/CHANGELOG.md
adamweeks/momentum-ui
35d612558a09e4feeed8a8556b153e1e4d768516
[ "MIT" ]
null
null
null
angular/src/lib/CHANGELOG.md
adamweeks/momentum-ui
35d612558a09e4feeed8a8556b153e1e4d768516
[ "MIT" ]
null
null
null
angular/src/lib/CHANGELOG.md
adamweeks/momentum-ui
35d612558a09e4feeed8a8556b153e1e4d768516
[ "MIT" ]
null
null
null
# Change Log All notable changes to this project will be documented in this file. See [Conventional Commits](https://conventionalcommits.org) for commit guidelines. # [5.2.0](https://github.com/collab-ui/collab-ui/compare/@collab-ui/angular@5.1.1...@collab-ui/angular@5.2.0) (2019-04-18) ### Bug Fixes * **time-picker:** update moment imports ([945eac8](https://github.com/collab-ui/collab-ui/commit/945eac8)) ### Features * **colors:** update colors in cui-icon component ([c0e1756](https://github.com/collab-ui/collab-ui/commit/c0e1756)) * **select:** adding advanced effects ([f4250df](https://github.com/collab-ui/collab-ui/commit/f4250df)) ## [5.1.1](https://github.com/collab-ui/collab-ui/compare/@collab-ui/angular@5.1.0...@collab-ui/angular@5.1.1) (2019-04-18) **Note:** Version bump only for package @collab-ui/angular # [5.1.0](https://github.com/collab-ui/collab-ui/compare/@collab-ui/angular@5.0.0...@collab-ui/angular@5.1.0) (2019-04-15) ### Bug Fixes * **angular:** update schematics and peerDependencies ([8fe9223](https://github.com/collab-ui/collab-ui/commit/8fe9223)) ### Features * **timepicker & datepicker:** Add timepicker component and datepicker component ([eec66aa](https://github.com/collab-ui/collab-ui/commit/eec66aa)) # 5.0.0 (2019-04-04) ### Features initial release of Angular 7 component library
25.679245
147
0.711242
eng_Latn
0.278626
2ce3d17b763cbe596812361c206958ac7dc5b7ca
8,775
md
Markdown
_posts/design/2018-01-01-salabim-discrete-modeling.md
mindfulmodeler/mindfulmodeler.github.io
47a3d68e5f6a264cca5d63fbcd4a3b04df7625e7
[ "MIT" ]
null
null
null
_posts/design/2018-01-01-salabim-discrete-modeling.md
mindfulmodeler/mindfulmodeler.github.io
47a3d68e5f6a264cca5d63fbcd4a3b04df7625e7
[ "MIT" ]
null
null
null
_posts/design/2018-01-01-salabim-discrete-modeling.md
mindfulmodeler/mindfulmodeler.github.io
47a3d68e5f6a264cca5d63fbcd4a3b04df7625e7
[ "MIT" ]
null
null
null
--- layout: page sidebar: right subheadline: Discrete Modeling title: "Salabim Discrete Modeling Tutorial" teaser: "Using a DES model to simulate women queueing at a water collection point" tags: - discrete modeling - Salabim - water well simulation - WASH - data analytics - tutorials - modeling categories: - modeling image: title: /water-well-collection.jpg image: thumb: /water-well-collection_thumb.jpg header: no show_meta: true comments: true --- Figuring out the best way to route components through a system is a complicated problem, with many real-world applications. For example, disaster responders may want to know the best way to distribute relief items after a hurricane. Or, hospital administrators may want to know the best way to set up their rooms to ensure patients are seen as quickly as possible. In either of these examples, the problems are too critical to just find the best way through trial-and-error. Instead, it can be highly useful to create a **simulation**, which models the transport of relief goods or patients through a process. In the following article, I'll cover some key points on simulations as well as how you can start building your own for free using the Salabim library in Python. <div class="panel radius" markdown="1"> **Table of Contents** {: #toc } * TOC {:toc} </div> # Discrete Event Simulation (DES) A model helps to perform "what if?" analysis, where questions about the system are posed and tested in the simulation, helping decision makers prepare for different future scenarios. Injecting elements of stochasticity helps reflect the uncertainty inherent in the real world and can provide a better understanding of the system. **Discrete event simulation**, in particular, is well-suited to problems where components change at different points in time as a result of events. Between two consecutive events, the system is assumed to be steady-state, so the simulation effectively skips from one event to the following event.
This is different from **continuous simulation**, where the dynamics of the system are tracked over time uninterrupted. ![discrete vs continuous model]({{site.baseurl}}/images/discrete-vs-continuous.jpg) <!-- Markdown-Example for posts ![discrete vs continuous model]({{ site.urlimg }}discrete-cont.jpg) --> ## Salabim Tutorial One library for discrete simulation is **Salabim**, an open-source Python library. There are many alternatives for performing discrete simulation (for instance, see [this Wikipedia list](https://wiki2.org/en/List_of_discrete_event_simulation_software)); however, many of them have considerable learning curves and expensive licenses. The Salabim library was written by Ruud van der Ham, an expert in discrete modeling from the Netherlands, who wrote the library to make discrete modeling more accessible to Python users. The library can be installed easily using pip: `pip install salabim` Documentation and examples are available [here](https://www.salabim.org/manual/index.html). ### Components To begin modeling in Salabim, the most important part of the library to understand is **components**. Components are defined as classes in Salabim, using the syntax `sim.Component`. Unlike traditional class definitions in Python, however, Salabim components do not *return* something, they *yield* something. Basically, the best way to create a component is to define it like so: {% highlight ruby %} class Patient(sim.Component): def process(self): .... {% endhighlight %} <!-- A **generator** is a function with at least one yield statement, which are used as a signal to give control to the sequence mechanism. --> <!-- When yield is followed by self, it means that it is the component to be held for some time --> There are two kinds of components in Salabim simulations: data components and active components. #### (1) Active components An active component contains one or more **processes**.
To create an active component, you have to define a class first, and then you can add processes (which usually contain at least one yield statement). {% highlight ruby %} class Doctor(sim.Component): def process(self): ... yield ... ... {% endhighlight %} Once this class is created, it is then *activated* at some point in time. You can activate the class you created by making a new instance of the class: {% highlight ruby %} doctor_1 = Doctor() doctor_2 = Doctor() doctor_3 = Doctor() {% endhighlight %} An active component can later become a data component through one of two ways: (1) using *cancel* or (2) by reaching the end of its process. If no processes are found in a class you created, it will be treated as a data component. #### (2) Data components A data component is one that has no associated process method. To create a data component, simply use: `data_component = sim.Component()` You can make a data component active later by means of an activate statement. {% highlight ruby %} nurse1 = Nurse() nurse1.activate(process='treat') {% endhighlight %} ## Water collection example In this example, we'll use Salabim to simulate women walking to a public well in order to fill up a container of water. Today, [tens of millions of women](https://www.npr.org/sections/goatsandsoda/2016/07/07/484793736/millions-of-women-take-a-long-walk-with-a-40-pound-water-can) still walk long distances to collect water for household use. These water containers are heavy and bulky (generally weighing 40 pounds or more) and trips take 30 minutes or longer. Often, multiple trips are taken per day. ### Simulation setup We'll create the following processes: - The <span style="color:hotpink">**person generator**</span> creates the women, with a uniform inter-arrival time. - The <span style="color:hotpink">**women**</span> who visit the well. They wait in a queue and are served in a first-in, first-out order.
- The <span style="color:hotpink">**water well**</span> collection point, which we'll model as a *resource* in Salabim. - Resources have a limited capacity, just like our water collection point. They are useful for simulation because they can be claimed by components and released later, which is the case here because not every woman who arrives can fill her bucket all at once. Instead, they must form a queue. To make things a little more interesting, we'll also include two environmental features: - The **number who turn around** when they arrive at the well and see that the line is too long. - The **number who leave early** after waiting in the queue for a long time. {% highlight ruby %} import salabim as sim class PersonGenerator(sim.Component): def process(self): while True: Woman() yield self.hold(sim.Uniform(5, 25).sample()) #Uniform sample time between 5 and 25 minutes until next person is created class Woman(sim.Component): def process(self): if len(well.requesters()) >= 5: env.num_turn_around += 1 #women who turn around because lines are too long env.print_trace("","","Too many other people in line, turned around.") yield self.cancel() #this makes the current component a data component if queue length is greater than 5 people yield self.request(well, fail_delay=50) #if the request is not honored within 50 time units, #the process continues to next statement if self.failed(): #check if the request failed env.num_leave_early +=1 #women who turn around because they have waited in queue too long env.print_trace("","","Waited in line too long, left queue early.") else: yield self.hold(50) self.release() ##Setup the Environment env = sim.Environment() #create environment PersonGenerator() #activate component env.num_turn_around = 0 #initialize list env.num_leave_early = 0 #initialize list well = sim.Resource("well", 3) #3 places where women can fill up at the well resource ##Run the simulation for 2000 time units env.run(till=2000) ##Display key information 
well.requesters().length_of_stay.print_histogram(30,0,10) print("number who turned around when seeing the line length: ", env.num_turn_around) print("number who waited too long and left early: ", env.num_leave_early) {% endhighlight %} You can download the full Jupyter notebook of this example from [github](https://github.com/shannongross/website_tutorials/tree/master/salabim_discrete_example). ## You Might Also Like... {: .t60 } {% include list-posts tag='policy analysis with python' %}
53.181818
770
0.73151
eng_Latn
0.998362
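The event-skipping behavior the tutorial describes (between two consecutive events the system is steady-state, so the clock jumps straight to the next event) can be shown with a standard-library sketch. This is not Salabim; it is a minimal event queue built on `heapq` to illustrate the idea:

```python
# A minimal event-queue sketch (standard library only, not Salabim) showing
# the core DES idea: the simulation clock jumps directly from one scheduled
# event to the next rather than stepping through continuous time.
import heapq

def run_des(events):
    """events: list of (time, name) pairs; returns the visited (time, name) log."""
    heapq.heapify(events)          # turn the list into a priority queue by time
    log = []
    while events:
        time, name = heapq.heappop(events)  # jump straight to the next event
        log.append((time, name))
    return log

print(run_des([(5.0, "arrive"), (2.0, "create"), (9.5, "depart")]))
# → [(2.0, 'create'), (5.0, 'arrive'), (9.5, 'depart')]
```

Salabim's `env.run()` does essentially this under the hood, with components scheduling their own future events via `hold`, `request`, and so on.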
2ce45eea5eba00a155d5c2b42f5dff7e00f45d9e
2,728
md
Markdown
_posts/2017-1-23-2017-01-23.md
gt-2003-2017/gt-2003-2017.github.io
d9b4154d35ecccbe063e9919c7e9537a84c6d56e
[ "MIT" ]
null
null
null
_posts/2017-1-23-2017-01-23.md
gt-2003-2017/gt-2003-2017.github.io
d9b4154d35ecccbe063e9919c7e9537a84c6d56e
[ "MIT" ]
null
null
null
_posts/2017-1-23-2017-01-23.md
gt-2003-2017/gt-2003-2017.github.io
d9b4154d35ecccbe063e9919c7e9537a84c6d56e
[ "MIT" ]
null
null
null
Mark 5:35 - 5:43 January 23, 2017 (Mon) Talitha Koum Mark 5:35 - 5:43 / New Hymnal No. 546 Jesus challenges the synagogue ruler to believe in the face of the news of his daughter's death 35 While Jesus was still speaking, some men came from the house of Jairus, the synagogue ruler. "Your daughter is dead," they said. "Why bother the teacher any more?" 36 Ignoring what they said, Jesus told the synagogue ruler, "Don't be afraid; just believe." Jesus stops the commotion, crying, and loud wailing 37 He did not let anyone follow him except Peter, James and John the brother of James. 38 When they came to the home of the synagogue ruler, Jesus saw a commotion, with people crying and wailing loudly. Jesus raises the girl with "Talitha koum" 40 But they laughed at him. After he put them all out, he took the child's father and mother and the disciples who were with him, and went in where the child was. 41 He took her by the hand and said to her, "Talitha koum!" (which means, "Little girl, I say to you, get up!" ). 42 Immediately the girl stood up and walked around (she was twelve years old). At this they were completely astonished. 43 He gave strict orders not to let anyone know about this, and told them to give her something to eat. Interpretation Help Jesus, the Lord of Time While time was being delayed by the healing of the woman with the issue of blood, Jairus must have been growing anxious, thinking of his daughter's life. At that very moment, the tragic news arrived from his house that his daughter had just died (35). It was the moment when every human hope vanished. The synagogue ruler may even have felt resentment: "If it had not been for that woman, Jesus could have arrived before my daughter died..." Then Jesus said, "Don't be afraid; just believe" (36). Even in a moment of complete despair, Jesus remains true hope. When Jesus is with us, it is never too late. Time is no obstacle to God; only our faith is.
● Do I believe that with Jesus it is never too late? Talitha Koum! When Jesus arrived at the synagogue ruler's house, people were making a commotion, crying, and wailing loudly. Jewish funerals had a custom of paying professional mourners to play flutes and wail loudly. But those mourners merely profited from the child's death; they could not solve the problem of death or bring ultimate peace to that house. Jesus said, "The child is not dead but asleep," and put a stop to the commotion and weeping. After sending out the scoffing crowd, he took the child's hand and said, "Talitha koum!" "Talitha koum" is an affectionate expression, like "good morning," that a mother would use to wake her child in the morning. Jesus raised the girl as if waking a sleeping child. He then showed compassionate concern by telling them to give her something to eat (43), for he knew that her fatal illness had long kept her from eating anything. Here we discover Jesus, the Lord of life, full of love. Today, too, Jesus is waking us with this same loving voice. And on the last day, when he raises us, he will say the same thing (1 Thess 4:14): "Talitha koum!" ● Am I listening today for the voice of the Lord of life, full of love, saying "Talitha koum"?
60.622222
620
0.724707
kor_Hang
1.00001
2ce4a444df0d2c12794de97712e3922f24ca1110
598
md
Markdown
_orgregister/70005312-Eesti Noorsootöö Keskus.md
keeganmcbride/jkan
109492fff604753dbce8dcbae94ca7791c2b2a6d
[ "MIT" ]
16
2018-11-03T11:01:15.000Z
2019-06-14T11:01:37.000Z
_orgregister/70005312-Eesti Noorsootöö Keskus.md
okestonia/jkan
109492fff604753dbce8dcbae94ca7791c2b2a6d
[ "MIT" ]
213
2018-10-27T12:28:48.000Z
2019-10-04T09:40:54.000Z
_orgregister/70005312-Eesti Noorsootöö Keskus.md
okestonia/jkan
109492fff604753dbce8dcbae94ca7791c2b2a6d
[ "MIT" ]
42
2018-11-22T13:31:22.000Z
2019-09-28T12:49:19.000Z
--- nimi: Eesti Noorsootöö Keskus ariregistri_kood: 70005312 kmkr_nr: EE100630671 ettevotja_staatus: R ettevotja_staatus_tekstina: Registrisse kantud ettevotja_esmakande_kpv: 13.09.1999 ettevotja_aadress: .na asukoht_ettevotja_aadressis: Tõnismägi 5a asukoha_ehak_kood: 298 asukoha_ehak_tekstina: Kesklinna linnaosa, Tallinn, Harju maakond indeks_ettevotja_aadressis: '10119' ads_adr_id: 2290415 ads_ads_oid: .na ads_normaliseeritud_taisaadress: Harju maakond, Tallinn, Kesklinna linnaosa, Tõnismägi 5a teabesysteemi_link: https://ariregister.rik.ee/ettevotja.py?ark=70005312&ref=rekvisiidid ---
31.473684
88
0.847826
est_Latn
0.991931
2ce57bfb06a1d1754b3ad634e48f3085eb3e599d
1,763
md
Markdown
examples/12-embedded/README.md
tavaresrodrigo/kopf
97e1c7a926705a79dabce2931e96a924252b61df
[ "MIT" ]
855
2020-08-19T09:40:38.000Z
2022-03-31T19:13:29.000Z
examples/12-embedded/README.md
tavaresrodrigo/kopf
97e1c7a926705a79dabce2931e96a924252b61df
[ "MIT" ]
715
2019-12-23T14:17:35.000Z
2022-03-30T20:54:45.000Z
examples/12-embedded/README.md
tavaresrodrigo/kopf
97e1c7a926705a79dabce2931e96a924252b61df
[ "MIT" ]
97
2019-04-25T09:32:54.000Z
2022-03-30T10:15:30.000Z
# Kopf example for embedded operator Kopf operators can be embedded into arbitrary applications, such as UI; or they can be orchestrated explicitly by the developers instead of `kopf run`. In this example, we start the operator in a side thread, while simulating an application activity in the main thread. In this case, the "application" just creates and deletes the example objects, but it can be any activity. Start the operator: ```bash python example.py ``` Let it run for 6 seconds (mostly due to sleeps: 3 times by 1+1 second). Here is what it will print (shortened; the actual output is more verbose): ``` Starting the main app. [DEBUG ] Pykube is configured via kubeconfig file. [DEBUG ] Client is configured via kubeconfig file. [WARNING ] Default peering object is not found, falling back to the standalone mode. [WARNING ] OS signals are ignored: running not in the main thread. Do the main app activity here. Step 1/3. [DEBUG ] [default/kopf-example-0] Creation is in progress: ... [DEBUG ] [default/kopf-example-0] Deletion is in progress: ... Do the main app activity here. Step 2/3. [DEBUG ] [default/kopf-example-1] Creation is in progress: ... [DEBUG ] [default/kopf-example-1] Deletion is in progress: ... Do the main app activity here. Step 3/3. [DEBUG ] [default/kopf-example-2] Creation is in progress: ... [DEBUG ] [default/kopf-example-2] Deletion is in progress: ... Exiting the main app. [INFO ] Stop-flag is set to True. Operator is stopping. [DEBUG ] Root task 'poster of events' is cancelled. [DEBUG ] Root task 'watcher of kopfexamples.kopf.dev' is cancelled. [DEBUG ] Root tasks are stopped: finished normally; tasks left: set() [DEBUG ] Hung tasks stopping is skipped: no tasks given. ```
35.26
84
0.73114
eng_Latn
0.997195
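The "operator in a side thread, application activity in the main thread" setup the README describes can be sketched generically with the standard library. Note this is NOT the kopf API (kopf's own run/stop-flag interface differs); it only illustrates the side-thread-with-stop-flag pattern:

```python
# Generic sketch of the side-thread pattern: a worker loop runs in a
# background thread while the main thread does "application activity",
# then sets a stop flag and joins the worker (standard library only).
import threading
import time

def operator_loop(stop_flag, log):
    while not stop_flag.is_set():
        log.append("tick")       # stand-in for handling watch events
        stop_flag.wait(0.01)     # sleep, but wake early if the flag is set
    log.append("stopped")

stop = threading.Event()
log = []
worker = threading.Thread(target=operator_loop, args=(stop, log))
worker.start()
time.sleep(0.05)                 # main-thread "application activity"
stop.set()                       # signal the side thread to stop
worker.join()
print(log[-1])                   # → stopped
```

In the real example, the main thread's activity is creating and deleting the `kopf-example-*` objects while the operator thread reacts to them.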
2ce57e9c6b258dfd8b9515d0b5d2326093f55482
17,388
md
Markdown
content/substitute.md
afs/afs.github.io
9824c1078053e5468a0a49d71172162058edc995
[ "Apache-2.0" ]
null
null
null
content/substitute.md
afs/afs.github.io
9824c1078053e5468a0a49d71172162058edc995
[ "Apache-2.0" ]
3
2020-08-08T12:29:21.000Z
2022-03-05T19:59:38.000Z
content/substitute.md
afs/afs.github.io
9824c1078053e5468a0a49d71172162058edc995
[ "Apache-2.0" ]
2
2021-04-07T18:00:23.000Z
2022-03-04T20:02:36.000Z
--- layout: doc title: Substitution of Variables in SPARQL --- <div class="docinfo"> <p>Andy Seaborne</p> <p>October 2019</p> </div> The SPARQL 1.1 algebra operation "[substitute](https://www.w3.org/TR/sparql11-query/#defn_substitute)" evaluates a graph pattern in which specific variables are given values by a solution mapping. The operation is used in the evaluation of <tt>EXISTS</tt> and <tt>NOT EXISTS</tt> operations. This document lists problems that have been identified with the <tt>substitute</tt> operation and proposes an improved substitution evaluation process that addresses these problems, based on the concept of a [Correlated Subquery](https://en.wikipedia.org/wiki/Correlated_subquery) as found in SQL. * [Summary](#summary) * [Identified Issues](#issues) * [An Improved "substitute" Operation](#substitute-ng) * [Addressing Issues](#addressing-issues) * [Notes](#notes) ## Summary {#summary} The fundamental problem is that a variable cannot simply be replaced by a value (an RDF Term) in all places in a graph pattern. There are some places where SPARQL function forms and "AS" assignment require a variable. The proposal here is to take the variable binding from the input solution, retaining the variable in the graph pattern, and disallowing cases that can reset the variable binding, as is already the case elsewhere in SPARQL. ## Identified Issues {#issues} This section describes the issues identified on the SPARQL Exists Community group mailing list [public-sparql-exists/2016Jul/0014](https://lists.w3.org/Archives/Public/public-sparql-exists/2016Jul/0014.html). * [Issue-1](#issue-1): Some uses of EXISTS are not defined during evaluation. * [Issue-2](#issue-2): Substitution happens where definitions are only for variables. * [Issue-3](#issue-3): Blank nodes substituted into BGPs act as variables. * [Issue-4](#issue-4): Substitution can flip MINUS to its disjoint-domain case. * [Issue-5](#issue-5): Substitution affects disconnected variables.
### Issue 1: Some uses of EXISTS are not defined during evaluation {#issue-1} The evaluation process in the specification is defined for graph patterns, but there are situations where the evaluation is of an algebra form not listed. For example: FILTER EXISTS { SELECT ?y { ?y :q :c . } } and FILTER EXISTS { VALUES ?y { 123 } } The argument to <tt>[exists](https://www.w3.org/TR/sparql11-query/#defn_evalExists)</tt> is not explicitly listed as a "Graph Pattern" in the table of SPARQL algebra symbols in [section 18.2](https://www.w3.org/TR/sparql11-query/#sparqlQuery) when the argument to <tt>EXISTS</tt> is a [GroupGraphPattern](https://www.w3.org/TR/sparql11-query/#rGroupGraphPattern) containing just a [subquery](https://www.w3.org/TR/sparql11-query/#subqueries) or just [InlineData](https://www.w3.org/TR/sparql11-query/#inline-data). ### Issue 2: Substitution happens where definitions are only for variables {#issue-2} There are places in the SPARQL syntax and algebra where variables are allowed but not RDF terms (constant values). Example: FILTER EXISTS { BIND ( :e AS ?z ) { SELECT ?x { :b :p :c } } } Both positions "AS ?z" and "SELECT ?x" must be variables. In the algebra, this affects * <tt>[extend](https://www.w3.org/TR/sparql11-query/#defn_extend)</tt> (related to the use of <tt>AS</tt> in SPARQL syntax) * [inline data](https://www.w3.org/TR/sparql11-query/#inline-data) (related to the use of <tt>VALUES</tt>) * <tt>[BOUND](https://www.w3.org/TR/sparql11-query/#func-bound)</tt> ### Issue 3: Blank nodes substituted into BGPs act as variables {#issue-3} In the [evaluation of basic graph patterns](https://www.w3.org/TR/sparql11-query/#BasicGraphPattern) (BGPs), blank nodes [are replaced](https://www.w3.org/TR/sparql11-query/#BGPsparql) by RDF terms from the graph being matched and variables are replaced by a solution mapping from query variables to RDF terms, so that the basic graph pattern is now a subgraph of the graph being matched.
Simply substituting a variable with a blank node in the <tt>EXISTS</tt> evaluation process does not cause the basic graph pattern to be restricted to subgraphs containing that blank node as an RDF term, because it is mapped by an [RDF instance mapping](https://www.w3.org/TR/2004/REC-rdf-mt-20040210/#definst) before checking that the BGP after mapping is a subgraph of the graph being queried. Note that elsewhere in the evaluation of the SPARQL algebra, a solution mapping with a binding from a variable to a blank node does treat blank nodes as RDF terms. They are not mapped by an RDF instance mapping. Example: SELECT ?x WHERE { ?x :p :d . FILTER EXISTS { ?x :q :b . } } against the graph <tt>{ _:c :p :d , :e :q :b }</tt> the substitution for <tt>EXISTS</tt> produces <tt>BGP(_:c :q :b)</tt> which then matches against <tt>:e :q :b</tt> because the <tt>_:c</tt> can be mapped to <tt>:e</tt> by the RDF instance mapping that is part of pattern instance mappings in [18.3.1](https://www.w3.org/TR/sparql11-query/#BGPsparql). ### Issue 4: Substitution can flip MINUS to its disjoint-domain case {#issue-4} In SELECT ?x WHERE { ?x :p :c . FILTER EXISTS { ?x :p :c . MINUS { ?x :p :c . } } } on the graph <tt>{ :d :p :c }</tt> the substitution from 18.6 ends up producing Minus( BGP( :d :p :c ), BGP( :d :p :c ) ) which produces a non-empty result because the two solution mappings for the Minus have disjoint domains, and 18.5 dictates that in that case the result is not empty. ### Issue 5: Substitution affects disconnected variables {#issue-5} In SELECT ?x WHERE { BIND ( :d AS ?x ) FILTER EXISTS { BIND ( :e AS ?z ) SELECT ?y WHERE { ?x :p :c } } } the substitution from 18.6 ends up producing Join ( Extend( BGP(), ?z :e ), ToMultiSet( Project( ToList( BGP( :d :p :c ) ), { ?y } ) )) The `?x` inside the `SELECT ?y` is not projected out, so it is a "different" `?x` from the outer one - changing it to another unused name in the same query would not normally affect the query results.
## An Improved "substitute" Operation {#substitute-ng} Evaluation of <tt>[substitute](https://www.w3.org/TR/sparql11-query/#defn_substitute)</tt> is performed for a given solution mapping. For example, the `EXISTS` operation evaluates to `true` if a graph pattern has one or more matches given the variable bindings of a solution mapping. We call this solution mapping the <dfn>current row</dfn> in this description. This section proposes an alternative mechanism. Rather than replacing each variable with the value it is bound to in the current row, this alternative mechanism makes the whole of the current row available at any point in the evaluation of an <tt>EXISTS</tt> expression. It uses the current row to restrict the binding of variables, at the points where variable bindings are created during evaluation of <tt>EXISTS</tt>, to be those from the current row. It makes illegal syntactic constructs that could lead to an attempt to rebind a variable from the current row through using the <tt>AS</tt> syntax. Section "[Addressing Issues](#addressing-issues)" describes how this alternative definition of <tt>substitute</tt> addresses each of the issues identified above. There are 3 parts to the proposal: * Make the current row's mapping of variables to values (the RDF terms) available, so that the variables always have their values from the current row. This is the replacement for syntactic substitution in the original definition. * Rename inner-scope variables so that variables that are only used within a sub-query are not affected by the current row. This reflects the fact that in SPARQL such variables are not present in solution mappings outside their sub-query. * Disallow syntactic forms that set variables potentially already present in the current row. SPARQL solution mappings can only have one binding for a variable, and the current row provides that binding.
### Renaming Within sub-queries, variables with the same name can be used but do not appear in the overall results of the query if they do not occur in the projection in the sub-query. Such inner variables are not <a href="https://www.w3.org/TR/sparql11-query/#variableScope">in-scope</a> when they are not in the output of the projection part of the inner SELECT expression. SELECT * { ?s :value ?v . FILTER EXISTS { {SELECT (count(*) AS ?C) { ?s :property ?w . }} FILTER ( ?C < ?v ) } } Here, the <tt>?s</tt> is not mentioned in the projection in <tt>SELECT (count(*) AS ?C)</tt>. Replacing <tt>?s</tt> by, for example, <tt>?V1234</tt> in the sub-query does not change the overall results. SELECT * { ?s :value ?v . FILTER EXISTS { {SELECT (count(*) AS ?C) { ?V1234 :property ?w . }} FILTER ( ?C < ?v ) } } Such variable usages can be replaced with a variable of a different name, if that name is not used anywhere else in the query, and the same results are obtained in the sub-query. A sub-query always has a projection as its top-most algebra operator. To preserve this, any such variables are renamed so they do not coincide with variables from the current row being filtered by <tt>EXISTS</tt>. The SPARQL algebra "project" operator has two components, an algebra expression and a set of variables for the projection. <div class="defn"> <b>Definition: <a id="defn_projmap" name="defn_projmap">Projection Expression Variable Remapping</a></b> <p> For a projection algebra operation P with set of variables PV, define a partial mapping F from <a href="https://www.w3.org/TR/sparql11-query/#sparqlQueryVariables">V</a>, the set of all variables, to V where: </p> <p class="indent"> F(v) = v if v in PV<br/> F(v) = v1 where v is a variable mentioned in the project expression and v1 is a fresh variable<br/> F(v) = v otherwise. 
</p> Define the <dfn>Projection Expression Variable Remapping</dfn> <tt>PrjMap(P,PV)</tt> to be the algebra expression P (and the subtree over which the projection is defined) with F applied to every variable of the algebra expression P over which P is evaluated. </div> This process is applied throughout the graph pattern of <tt>EXISTS</tt>: <div class="defn"> <b>Definition: <a id="defn_varrename" name="defn_varrename">Variable Remapping</a></b> <p> For any algebra expression X define the <dfn>Variable Remapping</dfn> PrjMap(X): </p> <p class="indent"> PrjMap(X) = replace all project operations <tt>project(P PV)</tt> with <tt>project(PrjMap(P,PV) PV)</tt> for each projection in X. </p> This replacement is applied bottom-up when there are multiple project operations in the graph pattern of <tt>EXISTS</tt>. </div> Applying the renaming steps inside a sub-query does not change the solution mappings resulting from evaluating the sub-query. Remapping is only applied to variables not visible outside the sub-query. Renaming a variable in a SPARQL algebra expression causes the variable name used in bindings from evaluating the algebra expression to change. Since these are only variables that are not visible outside the sub-query, because they do not occur in the projection, the result of the sub-query is unchanged. SPARQL algebra expressions cannot access the name of a variable nor introduce a variable except by remapping.
This proposal adds the restriction that any variables in a current row, the set of variables <a href="https://www.w3.org/TR/sparql11-query/#variableScope">in-scope</a> of the expression containing EXISTS, cannot be assigned with the <tt>extend</tt> algebra function linked to the <tt>AS</tt> syntax. In addition, any use of <tt>VALUES</tt> in the EXISTS expression must not use a variable in the current row. ### Restriction of Bindings The proposal is to retain the variables from the current row, rather than substituting RDF terms for them, before evaluation, and also to restrict the binding of the solution to the RDF term of the current row. This occurs after renaming. Bindings for variables occur in several places in SPARQL: * <a href="https://www.w3.org/TR/sparql11-query/#BGPsparql">Basic Graph Pattern Matching</a> * <a href="https://www.w3.org/TR/sparql11-query/#PropertyPathPatterns">Property Path Patterns</a> * The <a href="https://www.w3.org/TR/sparql11-query/#defn_evalGraph">evaluation of algebra form <tt>Graph(var,P)</tt></a> involving a variable (from the syntax <tt>GRAPH ?variable {...}</tt>) Note that the other places where solution mappings add variables are the <tt>extend</tt> function (connected to the <tt>AS</tt> syntax) and the multiset from the <tt>VALUES</tt> syntax. [Limitations on Assignment](#limitations-on-assignment) forbid these from being variables of the current row. Restricting the RDF Terms for a variable binding is done using inline data that is joined with the evaluation of the basic graph pattern, property path or graph match. <div class="defn"> <b>Definition: <a id="defn_valuesinsertion" name="defn_valuesinsertion">Values Insertion</a></b> <p> For solution mapping μ, define Table(μ) to be the multiset formed from μ.
</p> <p class="indent"> Table(μ) = { μ }<br/> Card[μ] = 1 </p> <p> Define the <dfn>Values Insertion</dfn> function <tt>Replace(X, μ)</tt> to replace each occurrence Y of a <a href="https://www.w3.org/TR/sparql11-query/#sparqlTranslateBasicGraphPatterns">Basic Graph Pattern</a>, <a href="https://www.w3.org/TR/sparql11-query/#sparqlTranslatePathExpressions">Property Path Expression</a>, <a href="https://www.w3.org/TR/sparql11-query/#sparqlTranslateGraphPatterns">Graph(Var, pattern)</a> in X with join(Y, Table(μ)). </p> </div> ### Evaluation of EXISTS The evaluation of <tt>EXISTS</tt> is defined as: <div class="defn"> <b>Definition: <a id="defn_valuesinsertion" name="defn_valuesinsertion">Evaluation of Exists</a></b> <p> Let μ be the current solution mapping for a filter and X a graph pattern; define the <dfn>Evaluation of Exists</dfn> <tt>exists(X)</tt> </p> <p class="indent"> exists(X) = true if eval(D(G), Replace(PrjMap(X), μ)) is a non-empty solution sequence. <br/> exists(X) = false otherwise </p> </div> ## Addressing Issues {#addressing-issues} This section addresses each issue identified, given the proposal above. ### Issue 1: Some uses of EXISTS are not defined during evaluation This can be addressed by treating solution sequences as graph patterns where needed, by adding <a href="https://www.w3.org/TR/sparql11-query/#defn_algToMultiSet">toMultiSet</a> as is done for <a href="https://www.w3.org/TR/sparql11-query/#rSubSelect">SubSelect</a> in <a href="https://www.w3.org/TR/sparql11-query/#sparqlTranslateGraphPatterns">18.2.2.6 Translate Graph Patterns</a>, with a correction to the text at the end of the <a href="https://www.w3.org/TR/sparql11-query/#sparqlQuery">Section 18.2</a> introductory paragraph. <pre class="box"> query-errata-N: "Section 18.2 Translation to the SPARQL Algebra" intro (end): ToMultiSet can be used where a graph pattern is mentioned below because the outcome of evaluating a graph pattern is a multiset.
Multisets of solution mappings are elements of the SPARQL algebra. Multisets of solution mappings count as graph patterns. </pre> ### Issue 2: Substitution happens where definitions are only for variables Rather than replacing a variable by its value in the current row, the new mechanism makes the binding of variable to value available. The variable remains in the graph pattern of <tt>EXISTS</tt> and in its evaluation. ### Issue 3: Blank nodes substituted into BGPs act as variables By making the current row, which can include blank nodes, available, and not modifying the BGP by substitution, no blank nodes are introduced into the evaluation of the BGP. Instead, the possible solutions are restricted by the current row. ### Issue 4: Substitution can flip MINUS to its disjoint-domain case Issue 4 is addressed because variables are not removed from the domain of <tt>MINUS</tt>. This proposal does not preserve all uses of <tt>MINUS</tt> expressions; the problem identified in issue 4 is considered to be a bug in the SPARQL 1.1 specification. ### Issue 5: Substitution affects disconnected variables Issue 5 is addressed by noting that variables inside sub-queries which are not projected can be renamed without affecting the sub-query results. Whether to preserve that invariant or allow the variables to be set by the current row is a choice point; this design preserves the independence of disconnected variables. ## Notes {#notes} The proposal described in this document does not cover the use of variables from the current row in a <tt>HAVING</tt> clause.
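As a concrete illustration of the Values Insertion approach, the following query sketches how an engine could evaluate EXISTS against the current row without textual substitution. The vocabulary (`:knows`, `:worksAt`, `:alice`) is illustrative only and not part of the proposal; the comments describe the conceptual algebra join.

```sparql
# Suppose the current row μ binds ?x to :alice.  Under this proposal the
# inner pattern is NOT rewritten by substituting :alice for ?x; instead it
# is evaluated as  join( BGP(?x :worksAt ?org), Table(μ) ),  which behaves
# as if the engine had injected:   VALUES ?x { :alice }
SELECT ?x WHERE {
  ?x :knows ?y .
  FILTER EXISTS { ?x :worksAt ?org }
}
```

Because ?x stays a variable inside the EXISTS pattern, a blank node bound to ?x in the current row restricts the match through the join rather than appearing syntactically in the BGP (Issue 3).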
40.816901
130
0.731539
eng_Latn
0.989881
2ce5c2b9a529a5d7aac3634b1da4a0b073974f2d
2,017
md
Markdown
_posts/21/hhhj/2021-04-04-justin-vernon.md
chito365/ukdat
382c0628a4a8bed0f504f6414496281daf78f2d8
[ "MIT" ]
null
null
null
_posts/21/hhhj/2021-04-04-justin-vernon.md
chito365/ukdat
382c0628a4a8bed0f504f6414496281daf78f2d8
[ "MIT" ]
null
null
null
_posts/21/hhhj/2021-04-04-justin-vernon.md
chito365/ukdat
382c0628a4a8bed0f504f6414496281daf78f2d8
[ "MIT" ]
null
null
null
--- id: 404 title: Justin Vernon date: 2021-04-04T22:49:34+00:00 author: chito layout: post guid: https://ukdataservers.com/justin-vernon/ permalink: /04/04/justin-vernon tags: - claims - lawyer - doctor - house - multi family - online - poll - business - unspecified - single - relationship - engaged - married - complicated - open relationship - widowed - separated - divorced - Husband - Wife - Boyfriend - Girlfriend category: Guides --- # About Justin Vernon Singer-songwriter best known as frontman for the indie folk band Bon Iver. He also worked with Volcano Choir and Gayngs. # Early life He attended the University of Wisconsin-Eau Claire and studied in Ireland for a semester. His first album with Bon Iver, 2007's For Emma, Forever Ago, was an international hit. # Trivia He won Grammy Awards for Best New Artist and Best Alternative Album in 2012. # Family of Justin Vernon He was born to Gil and Justine Vernon and raised with his brother Nate Vernon. He was in a relationship with Kathleen Edwards, but they eventually broke up. # Close associates of Justin Vernon He told Stephen Colbert on The Colbert Report that his major in college was religious studies, while his minor was in women's studies, because he wasn't ready at that point to study music.
22.411111
200
0.492315
eng_Latn
0.9986
2ce600c4793997959036efa545c3f0db507e049e
812
md
Markdown
jekyll/demo.md
RubyLouvre/webuploader
9e833b591b9c1ae071a53d50ea44e30fe771269d
[ "MIT" ]
1
2015-11-08T18:09:04.000Z
2015-11-08T18:09:04.000Z
jekyll/demo.md
RubyLouvre/webuploader
9e833b591b9c1ae071a53d50ea44e30fe771269d
[ "MIT" ]
null
null
null
jekyll/demo.md
RubyLouvre/webuploader
9e833b591b9c1ae071a53d50ea44e30fe771269d
[ "MIT" ]
null
null
null
--- layout: post title: 演示 name: Demo group: 'nav' weight : 4 hideTitle: true noToc: true styles: - /css/webuploader.css - /css/demo.css scripts: - /js/webuploader.js - /js/demo.js --- # Demo 您可以尝试文件拖拽,使用QQ截屏工具,然后激活窗口后粘贴,或者点击添加图片按钮,来体验此demo. <div id="uploader" class="wu-example"> <div class="queueList"> <div id="dndArea" class="placeholder"> <div id="filePicker"></div> <p>或将照片拖到这里,单次最多可选300张</p> </div> </div> <div class="statusBar" style="display:none;"> <div class="progress"> <span class="text">0%</span> <span class="percentage"></span> </div><div class="info"></div> <div class="btns"> <div id="filePicker2"></div><div class="uploadBtn">开始上传</div> </div> </div> </div>
21.368421
73
0.567734
kor_Hang
0.089545
2ce62874bb353d2b5ede9cdacbb352ab821892d1
15,015
md
Markdown
articles/cosmos-db/continuous-backup-restore-resource-model.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
66
2017-07-09T03:34:12.000Z
2022-03-05T21:27:20.000Z
articles/cosmos-db/continuous-backup-restore-resource-model.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
671
2017-06-29T16:36:35.000Z
2021-12-03T16:34:03.000Z
articles/cosmos-db/continuous-backup-restore-resource-model.md
ZetaPR/azure-docs.es-es
0e2bf787d1d9ab12065fcb1091a7f13b96c6f8a2
[ "CC-BY-4.0", "MIT" ]
171
2017-07-25T06:26:46.000Z
2022-03-23T09:07:10.000Z
--- title: Modelo de recursos para la característica de restauración a un momento dado de Azure Cosmos DB. description: En este artículo se explica el modelo de recursos para la característica de restauración a un momento dado de Azure Cosmos DB. Se explican los parámetros que admiten los recursos y la copia de seguridad continua que se pueden restaurar en la API de Azure Cosmos DB para las cuentas de SQL y MongoDB. author: kanshiG ms.service: cosmos-db ms.topic: conceptual ms.date: 07/29/2021 ms.author: govindk ms.reviewer: sngun ms.openlocfilehash: e4fffd12b72b41c45b2718e96c34a03e28eeca29 ms.sourcegitcommit: 0046757af1da267fc2f0e88617c633524883795f ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 08/13/2021 ms.locfileid: "121733179" --- # <a name="resource-model-for-the-azure-cosmos-db-point-in-time-restore-feature"></a>Modelo de recursos para la característica de restauración a un momento dado de Azure Cosmos DB [!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)] En este artículo se explica el modelo de recursos para la característica de restauración a un momento dado de Azure Cosmos DB. Se explican los parámetros que admiten los recursos y la copia de seguridad continua que se pueden restaurar en la API de Azure Cosmos DB para las cuentas de SQL y MongoDB. ## <a name="database-accounts-resource-model"></a>Modelo de recursos de la cuenta de base de datos El modelo de recursos de la cuenta de base de datos se actualiza con algunas propiedades adicionales a fin de admitir los escenarios de restauración nuevos. Estas propiedades son **BackupPolicy, CreateMode y RestoreParameters**. ### <a name="backuppolicy"></a>BackupPolicy Una propiedad nueva en la directiva de copia de seguridad en el nivel de la cuenta denominada `Type` bajo el parámetro `backuppolicy` permite las funcionalidades de copia de seguridad continua y restauración a un momento dado. Este modo se denomina **copia de seguridad continua**. 
Se puede establecer este modo al crear la cuenta o al [migrar una cuenta de modo periódico a continuo](migrate-continuous-backup.md). Una vez habilitado el modo continuo, todos los contenedores y las bases de datos que se creen en esta cuenta tendrán habilitadas las funcionalidades de copia de seguridad continua y restauración a un momento dado de manera predeterminada. > [!NOTE] > Actualmente, la característica de restauración a un momento dado está disponible para la API Azure Cosmos DB para cuentas de MongoDB y SQL. Después de crear una cuenta con el modo continuo, no se puede cambiar a un modo periódico. ### <a name="createmode"></a>CreateMode Esta propiedad indica cómo se creó la cuenta. Los valores posibles son *Default* y *Restore*. Para realizar una restauración, establezca este valor en *Restore* y proporcione los valores adecuados en la propiedad `RestoreParameters`. ### <a name="restoreparameters"></a>RestoreParameters El recurso `RestoreParameters` contiene los detalles de la operación de restauración, los que incluyen el id. de la cuenta, la hora a la que realizar la restauración y los recursos que se deben restaurar. |Nombre de la propiedad |Descripción | |---------|---------| |restoreMode | El modo de restauración debe ser *PointInTime*. | |restoreSource | El id. de instancia de la cuenta de origen desde la que se iniciará la restauración. | |restoreTimestampInUtc | Momento dado en hora UTC al que se debe restaurar la cuenta. | |databasesToRestore | Lista de objetos `DatabaseRestoreResource` para especifica qué bases de datos y contenedores se deben restaurar. Cada recurso representa una base de datos única y todas las colecciones de esa base de datos. Para obtener más información, consulte la sección [Recursos SQL que se pueden restaurar](#restorable-sql-resources). Si este valor está vacío, se restaura toda la cuenta. 
| ### <a name="sample-resource"></a>Recurso de ejemplo El JSON siguiente es un recurso de cuenta de base de datos de ejemplo con copia de seguridad continua habilitada: ```json { "location": "westus", "properties": { "databaseAccountOfferType": "Standard", "locations": [ { "failoverPriority": 0, "locationName": "southcentralus", "isZoneRedundant": false } ], "createMode": "Restore", "restoreParameters": { "restoreMode": "PointInTime", "restoreSource": "/subscriptions/subid/providers/Microsoft.DocumentDB/locations/westus/restorableDatabaseAccounts/1a97b4bb-f6a0-430e-ade1-638d781830cc", "restoreTimestampInUtc": "2020-06-11T22:05:09Z", "databasesToRestore": [ { "databaseName": "db1", "collectionNames": [ "collection1", "collection2" ] }, { "databaseName": "db2", "collectionNames": [ "collection3", "collection4" ] } ] }, "backupPolicy": { "type": "Continuous" } } ``` ## <a name="restorable-resources"></a>Recursos que se pueden restaurar Hay disponible un conjunto de API y recursos nuevos para ayudarlo a descubrir información crítica sobre los recursos: cuáles se pueden restaurar, las ubicaciones desde las que se pueden restaurar y las marcas de tiempo de cuando se realizaron operaciones clave en estos recursos. > [!NOTE] > Todas las API que se usan para enumerar estos recursos requieren los permisos siguientes: > * `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` > * `Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` ### <a name="restorable-database-account"></a>Cuenta de base de datos que se puede restaurar Este recurso contiene una instancia de cuenta de base de datos que se puede restaurar. La cuenta de base de datos puede ser una cuenta eliminada o activa. Contiene información que permite encontrar la cuenta de base de datos de origen que desea restaurar. |Nombre de la propiedad |Descripción | |---------|---------| | ID | Identificador único del recurso. | | accountName | Nombre de la cuenta de base de datos global. 
| | creationTime | Hora UTC a la que se creó o migró la cuenta. | | deletionTime | Hora UTC a la que se eliminó la cuenta. Este valor está vacío si la cuenta está activa. | | apiType | Tipo de API de la cuenta de Azure Cosmos DB. | | restorableLocations | Lista de las ubicaciones en las que existía la cuenta. | | restorableLocations: locationName | Nombre de la región de la cuenta regional. | | restorableLocations: regionalDatabaseAccountInstanceId | GUID de la cuenta regional. | | restorableLocations: creationTime | Hora UTC a la que se creó o migró la cuenta regional.| | restorableLocations: deletionTime | Hora UTC a la que se eliminó la cuenta regional. Este valor está vacío si la cuenta regional está activa.| Si quiere ver una lista de todas las cuentas que se pueden restaurar, consulte los artículos [Cuentas de base de datos que se pueden restaurar: lista](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-database-accounts/list) o [Cuentas de base de datos que se pueden restaurar: lista por ubicación](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-database-accounts/list-by-location). ### <a name="restorable-sql-database"></a>Base de datos SQL que se puede restaurar Cada recurso contiene información de un evento de mutación, como su creación y eliminación, que se produjo en la base de datos SQL. Esta información puede ayudar en escenarios en los que la base de datos se eliminó de manera accidental y si es necesario averiguar cuándo se produjo ese evento. |Nombre de la propiedad |Descripción | |---------|---------| | eventTimestamp | Hora UTC a la que se creó o eliminó la base de datos. | | ownerId | Nombre de la base de datos SQL. | | ownerResourceId | Identificador de recurso de la base de datos SQL.| | operationType | Tipo de operación de este evento de base de datos. 
Estos son los valores posibles:<br/><ul><li>Create: evento de creación de base de datos.</li><li>Delete: evento de eliminación de base de datos.</li><li>Replace: evento de modificación de base de datos.</li><li>SystemOperation: evento de modificación de base de datos desencadenado por el sistema. No es el usuario quien inicia este evento.</li></ul> | | database |Propiedades de la base de datos SQL en el momento del evento.| Si quiere ver una lista de todas las mutaciones de base de datos, consulte el artículo [Bases de datos SQL que se pueden restaurar: lista](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-databases/list). ### <a name="restorable-sql-container"></a>Contenedor SQL que se puede restaurar Cada recurso contiene información de un evento de mutación, como su creación y eliminación, que se produjo en el contenedor SQL. Esta información puede ayudar en escenarios en los que se modificó o eliminó el contenedor y si es necesario averiguar cuándo se produjo ese evento. |Nombre de la propiedad |Descripción | |---------|---------| | eventTimestamp | Hora UTC a la que se produjo este evento de contenedor.| | ownerId| Nombre del contenedor SQL.| | ownerResourceId | Identificador de recurso del contenedor SQL.| | operationType | Tipo de operación de este evento de contenedor. Estos son los valores posibles: <br/><ul><li>Create: evento de creación de contenedor.</li><li>Delete: evento de eliminación de contenedor.</li><li>Replace: evento de modificación de contenedor.</li><li>SystemOperation: evento de modificación de contenedor desencadenado por el sistema. No es el usuario quien inicia este evento.</li></ul> | | contenedor | Propiedades del contenedor SQL en el momento del evento.| Si quiere ver una lista de todas las mutaciones de contenedor en la misma base de datos, consulte el artículo [Contenedores SQL que se pueden restaurar: lista](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-containers/list). 
### <a name="restorable-sql-resources"></a>Recursos SQL que se pueden restaurar Cada recurso representa una base de datos única y todos los contenedores de esa base de datos. |Nombre de la propiedad |Descripción | |---------|---------| | databaseName | Nombre de la base de datos SQL. | collectionNames | Lista de los contenedores SQL de esta base de datos.| Si quiere ver una lista de las combinaciones de base de datos y contenedor SQL que existen en la cuenta en una marca de tiempo y ubicación determinadas, consulte el artículo [Recursos SQL que se pueden restaurar: lista](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-sql-resources/list). ### <a name="restorable-mongodb-database"></a>Base de datos MongoDB que se puede restaurar Cada recurso contiene información de un evento de mutación, como su creación y eliminación, que se produjo en la base de datos MongoDB. Esta información puede ayudar en el escenario en el que la base de datos se eliminó de manera accidental y el usuario necesita averiguar cuándo se produjo ese evento. |Nombre de la propiedad |Descripción | |---------|---------| |eventTimestamp| Hora UTC a la que se produjo este evento de base de datos.| | ownerId| Nombre de la base de datos MongoDB. | | ownerResourceId | Identificador de recurso de la base de datos MongoDB. | | operationType | Tipo de operación de este evento de base de datos. Estos son los valores posibles:<br/><ul><li> Create: evento de creación de base de datos.</li><li> Delete: evento de eliminación de base de datos.</li><li> Replace: evento de modificación de base de datos.</li><li> SystemOperation: evento de modificación de base de datos desencadenado por el sistema. No es el usuario quien inicia este evento. </li></ul> | Si quiere ver una lista de todas las mutaciones de base de datos, consulte el artículo [Bases de datos MongoDB que se pueden restaurar: lista](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-databases/list). 
### <a name="restorable-mongodb-collection"></a>Colección de MongoDB que se puede restaurar Cada recurso contiene información de un evento de mutación, como su creación y eliminación, que se produjo en la colección de MongoDB. Esta información puede ayudar en escenarios en los que se modificó o eliminó la colección y si el usuario necesita averiguar cuándo se produjo ese evento. |Nombre de la propiedad |Descripción | |---------|---------| | eventTimestamp |Hora UTC a la que se produjo este evento de colección. | | ownerId| Nombre de la colección de MongoDB. | | ownerResourceId | Identificador de recurso de la colección de MongoDB. | | operationType |Tipo de operación de este evento de colección. Estos son los valores posibles:<br/><ul><li>Create: evento de creación de colección.</li><li>Delete: evento de eliminación de colección.</li><li>Replace: evento de modificación de colección.</li><li>SystemOperation: evento de modificación de colección desencadenado por el sistema. No es el usuario quien inicia este evento.</li></ul> | Si quiere ver una lista de todas las mutaciones de contenedor en la misma base de datos, consulte el artículo [Colecciones de MongoDB que se pueden restaurar: lista](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-collections/list). ### <a name="restorable-mongodb-resources"></a>Recursos de MongoDB que se pueden restaurar Cada recurso representa una base de datos única y todas las colecciones de esa base de datos. |Nombre de la propiedad |Descripción | |---------|---------| | databaseName |Nombre de la base de datos MongoDB. | | collectionNames | Lista de las colecciones de MongoDB en esta base de datos. 
| Si quiere ver una lista de las combinaciones de base de datos y colección de MongoDB que existen en la cuenta en una marca de tiempo y ubicación determinadas, consulte el artículo [Recursos MongoDB que se pueden restaurar: lista](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/restorable-mongodb-resources/list). ## <a name="next-steps"></a>Pasos siguientes * Aprovisione la copia de seguridad continua mediante [Azure Portal](provision-account-continuous-backup.md#provision-portal), [PowerShell](provision-account-continuous-backup.md#provision-powershell), la [CLI](provision-account-continuous-backup.md#provision-cli) o [Azure Resource Manager](provision-account-continuous-backup.md#provision-arm-template). * Restaure una cuenta mediante [Azure Portal](restore-account-continuous-backup.md#restore-account-portal), [PowerShell](restore-account-continuous-backup.md#restore-account-powershell), la [CLI](restore-account-continuous-backup.md#restore-account-cli) o [Azure Resource Manager](restore-account-continuous-backup.md#restore-arm-template). * [Realice la migración a una cuenta desde una copia de seguridad periódica a una copia de seguridad continua](migrate-continuous-backup.md). * [Administre los permisos](continuous-backup-restore-permissions.md) necesarios para restaurar datos con el modo de copia de seguridad continua.
74.331683
654
0.760972
spa_Latn
0.96426
2ce6cf95f031c62ea31db55f06bcba54250d7c16
47
md
Markdown
README.md
tvdsluijs/snurl.eu
7916883590d432f6dad0d9703482d007f4eeff9e
[ "MIT" ]
null
null
null
README.md
tvdsluijs/snurl.eu
7916883590d432f6dad0d9703482d007f4eeff9e
[ "MIT" ]
null
null
null
README.md
tvdsluijs/snurl.eu
7916883590d432f6dad0d9703482d007f4eeff9e
[ "MIT" ]
null
null
null
# snurl.eu Snurl.eu the european url shortner.
15.666667
35
0.765957
cat_Latn
0.129857
2ce706d28c25e7385e1d27ab0dd004834a8192a8
375
md
Markdown
README.md
zappo2/digital-art-with-matlab
42019e1cfaffc9757bf00b35a2f10da2809c6495
[ "BSD-2-Clause" ]
3
2021-10-31T14:40:17.000Z
2021-11-01T18:16:47.000Z
README.md
zappo2/digital-art-with-matlab
42019e1cfaffc9757bf00b35a2f10da2809c6495
[ "BSD-2-Clause" ]
null
null
null
README.md
zappo2/digital-art-with-matlab
42019e1cfaffc9757bf00b35a2f10da2809c6495
[ "BSD-2-Clause" ]
1
2021-10-30T23:51:24.000Z
2021-10-30T23:51:24.000Z
# Digital Art with MATLAB This is a repository to share interesting bits of art created with MATLAB. See the README in each directory for how to use each of the examples. - pumpkin - Draw a pumpkin - parallax - Draw two images hidden behind eachother in 3d ![Example Punkin Breeds](./pumpkin/punkin_tiles.jpg) ![Default Images in Parallax](./parallax/parallax_demo.gif)
28.846154
74
0.770667
eng_Latn
0.994331
2ce70f28c8400243c7adf807f826c1b76bd235e0
1,465
md
Markdown
content/post/jozef-io-r919-jenkins-pipelines-parallel.md
chuxinyuan/daily
dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c
[ "MIT" ]
8
2018-03-27T05:17:56.000Z
2021-09-11T19:18:07.000Z
content/post/jozef-io-r919-jenkins-pipelines-parallel.md
chuxinyuan/daily
dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c
[ "MIT" ]
16
2018-01-31T04:27:06.000Z
2021-10-03T19:54:50.000Z
content/post/jozef-io-r919-jenkins-pipelines-parallel.md
chuxinyuan/daily
dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c
[ "MIT" ]
12
2018-01-27T15:17:26.000Z
2021-09-07T04:43:12.000Z
--- title: Using parallelization, multiple git repositories and setting permissions when automating R applications with Jenkins date: '2019-08-10' linkTitle: https://jozef.io/r919-jenkins-pipelines-parallel/ source: Jozef's Rblog description: |2- <div id="introduction" class="section level1"> <h1>Introduction</h1> <p>In the <a href="https://jozef.io/r918-jenkins-pipelines/">previous post</a>, we focused on setting up declarative Jenkins pipelines with emphasis on parametrizing builds and using environment variables across pipeline stages.</p> <blockquote> <p>In this post, we look at various tips that can be useful when automating R application testing and continuous integration, with regards to orchestrating parallelization, combining sources from multiple git repositories and ensuring proper access right to the Jenkins ... disable_comments: true --- <div id="introduction" class="section level1"> <h1>Introduction</h1> <p>In the <a href="https://jozef.io/r918-jenkins-pipelines/">previous post</a>, we focused on setting up declarative Jenkins pipelines with emphasis on parametrizing builds and using environment variables across pipeline stages.</p> <blockquote> <p>In this post, we look at various tips that can be useful when automating R application testing and continuous integration, with regards to orchestrating parallelization, combining sources from multiple git repositories and ensuring proper access right to the Jenkins ...
77.105263
275
0.792491
eng_Latn
0.976926
2ce7db5120010e4d92e1517b3dff8b746399ccd3
699
md
Markdown
_posts/2016-09-26-amazon-co-jp-web-kindle.md
jser/realtime.jser.info
1c4e18b7ae7775838604ae7b7c666f1b28fb71d4
[ "MIT" ]
5
2016-01-25T08:51:46.000Z
2022-02-16T05:51:08.000Z
_posts/2016-09-26-amazon-co-jp-web-kindle.md
jser/realtime.jser.info
1c4e18b7ae7775838604ae7b7c666f1b28fb71d4
[ "MIT" ]
3
2015-08-22T08:39:36.000Z
2021-07-25T15:24:10.000Z
_posts/2016-09-26-amazon-co-jp-web-kindle.md
jser/realtime.jser.info
1c4e18b7ae7775838604ae7b7c666f1b28fb71d4
[ "MIT" ]
2
2016-01-18T03:56:54.000Z
2021-07-25T14:27:30.000Z
--- title: 'Amazon.co.jp: いまから始めるWebフロントエンド開発 電子書籍: 松田 承一, 柴田 和祈: Kindleストア' author: azu layout: post itemUrl: >- https://www.amazon.co.jp/%E3%81%84%E3%81%BE%E3%81%8B%E3%82%89%E5%A7%8B%E3%82%81%E3%82%8BWeb%E3%83%95%E3%83%AD%E3%83%B3%E3%83%88%E3%82%A8%E3%83%B3%E3%83%89%E9%96%8B%E7%99%BA-%E6%9D%BE%E7%94%B0-%E6%89%BF%E4%B8%80-ebook/dp/B01IEKIW7U editJSONPath: 'https://github.com/jser/jser.info/edit/gh-pages/data/2016/09/index.json' date: '2016-09-26T13:16:32Z' tags: - kindle - book - JavaScript - redux relatedLinks: - title: 【もうすぐ発売!】フロントエンド開発の入門書を書いています ※発売が始まりました – Shoichi Matsuda url: 'https://shoma2da.net/2016/08/01/web-frontend-dev-book/' --- TodoアプリをReact/Reduxで作るKindle本
36.789474
232
0.708155
yue_Hant
0.251408
2ce9709bbb8833011ac204b4c3dcc988253e57a9
7,604
md
Markdown
README.md
latonaio/sap-api-integrations-inbound-delivery-reads-rmq-kube
c4e0126ecd54f51dbecc2f62933c6b5288aafa10
[ "MIT" ]
null
null
null
README.md
latonaio/sap-api-integrations-inbound-delivery-reads-rmq-kube
c4e0126ecd54f51dbecc2f62933c6b5288aafa10
[ "MIT" ]
null
null
null
README.md
latonaio/sap-api-integrations-inbound-delivery-reads-rmq-kube
c4e0126ecd54f51dbecc2f62933c6b5288aafa10
[ "MIT" ]
null
null
null
# sap-api-integrations-inbound-delivery-reads-rmq-kube sap-api-integrations-inbound-delivery-reads-rmq-kube は、外部システム(特にエッジコンピューティング環境)をSAPと統合することを目的に、SAP API で入荷データ を取得するマイクロサービスです。 sap-api-integrations-inbound-delivery-reads-rmq-kube には、サンプルのAPI Json フォーマットが含まれています。 sap-api-integrations-inbound-delivery-reads-rmq-kube は、オンプレミス版である(=クラウド版ではない)SAPS4HANA API の利用を前提としています。クラウド版APIを利用する場合は、ご注意ください。 https://api.sap.com/api/OP_API_INBOUND_DELIVERY_SRV_0002/overview ## 動作環境 sap-api-integrations-inbound-delivery-reads-rmq-kube は、主にエッジコンピューティング環境における動作にフォーカスしています。 使用する際は、事前に下記の通り エッジコンピューティングの動作環境(推奨/必須)を用意してください。 ・ エッジ Kubernetes (推奨) ・ AION のリソース (推奨) ・ OS: LinuxOS (必須) ・ CPU: ARM/AMD/Intel(いずれか必須) ・ RabbitMQ on Kubernetes ・ RabbitMQ Client ## クラウド環境での利用 sap-api-integrations-inbound-delivery-reads-rmq-kube は、外部システムがクラウド環境である場合にSAPと統合するときにおいても、利用可能なように設計されています。 ## RabbitMQ からの JSON Input sap-api-integrations-inbound-delivery-reads-rmq-kube は、Inputとして、RabbitMQ からのメッセージをJSON形式で受け取ります。 Input の サンプルJSON は、Inputs フォルダ内にあります。 ## RabbitMQ からのメッセージ受信による イベントドリヴン の ランタイム実行 sap-api-integrations-inbound-delivery-reads-rmq-kube は、RabbitMQ からのメッセージを受け取ると、イベントドリヴンでランタイムを実行します。 AION の仕様では、Kubernetes 上 の 当該マイクロサービスPod は 立ち上がったまま待機状態で当該メッセージを受け取り、(コンテナ起動などの段取時間をカットして)即座にランタイムを実行します。  ## RabbitMQ への JSON Output sap-api-integrations-inbound-delivery-reads-rmq-kube は、Outputとして、RabbitMQ へのメッセージをJSON形式で出力します。 Output の サンプルJSON は、Outputs フォルダ内にあります。 ## RabbitMQ の マスタサーバ環境 sap-api-integrations-inbound-delivery-reads-rmq-kube が利用する RabbitMQ のマスタサーバ環境は、[rabbitmq-on-kubernetes](https://github.com/latonaio/rabbitmq-on-kubernetes) です。 当該マスタサーバ環境は、同じエッジコンピューティングデバイスに配置されても、別の物理(仮想)サーバ内に配置されても、どちらでも構いません。 ## RabbitMQ の Golang Runtime ライブラリ sap-api-integrations-inbound-delivery-reads-rmq-kube は、RabbitMQ の Golang Runtime ライブラリ として、[rabbitmq-golang-client](https://github.com/latonaio/rabbitmq-golang-client)を利用しています。 ## デプロイ・稼働 sap-api-integrations-inbound-delivery-reads-rmq-kube の デプロイ・稼働 
を行うためには、aion-service-definitions の services.yml に、本レポジトリの services.yml を設定する必要があります。 kubectl apply - f 等で Deployment作成後、以下のコマンドで Pod が正しく生成されていることを確認してください。 ``` $ kubectl get pods ``` ## 本レポジトリ が 対応する API サービス sap-api-integrations-inbound-delivery-reads-rmq-kube が対応する APIサービス は、次のものです。 * APIサービス概要説明 URL: https://api.sap.com/api/OP_API_INBOUND_DELIVERY_SRV_0002/overview * APIサービス名(=baseURL): API_INBOUND_DELIVERY_SRV;v=0002 ## 本レポジトリ に 含まれる API名 sap-api-integrations-inbound-delivery-reads-rmq-kube には、次の API をコールするためのリソースが含まれています。 * A_InbDeliveryHeader(入荷伝票 - ヘッダ)※入荷伝票の詳細データを取得するために、ToPartner、ToAddress、ToItemと合わせて利用されます。 * A_InbDeliveryItem(入荷伝票 - 明細) * ToPartner(入荷伝票 - 取引先) * ToAddress(入荷伝票 - アドレス) * ToItem(入荷伝票 - 明細) ## API への 値入力条件 の 初期値 sap-api-integrations-inbound-delivery-reads-rmq-kube において、API への値入力条件の初期値は、入力ファイルレイアウトの種別毎に、次の通りとなっています。 ### SDC レイアウト * inoutSDC.InboundDelivery.DeliveryDocument(入荷伝票) * inoutSDC.InboundDelivery.DeliveryDocumentItem.DeliveryDocumentItem(入荷伝票明細) ## SAP API Bussiness Hub の API の選択的コール Latona および AION の SAP 関連リソースでは、Inputs フォルダ下の sample.json の accepter に取得したいデータの種別(=APIの種別)を入力し、指定することができます。 なお、同 accepter にAll(もしくは空白)の値を入力することで、全データ(=全APIの種別)をまとめて取得することができます。 * sample.jsonの記載例(1) accepter において 下記の例のように、データの種別(=APIの種別)を指定します。 ここでは、"Header" が指定されています。 ``` "api_schema": "sap.s4.beh.inbounddelivery.v1.InboundDelivery.Created.v1", "accepter": ["Header"], "delivery_document": "180000000", "deleted": "" ``` * 全データを取得する際のsample.jsonの記載例(2) 全データを取得する場合、sample.json は以下のように記載します。 ``` "api_schema": "sap.s4.beh.inbounddelivery.v1.InboundDelivery.Created.v1", "accepter": ["Item"], "delivery_document": "180000000", "deleted": "" ``` ## 指定されたデータ種別のコール accepter における データ種別 の指定に基づいて SAP_API_Caller 内の caller.go で API がコールされます。 caller.go の func() 毎 の 以下の箇所が、指定された API をコールするソースコードです。 ``` func (c *SAPAPICaller) AsyncGetInboundDelivery(deliveryDocument, deliveryDocumentItem string, accepter []string) { wg := &sync.WaitGroup{} wg.Add(len(accepter)) for _, fn 
:= range accepter { switch fn { case "Header": func() { c.Header(deliveryDocument) wg.Done() }() case "Item": func() { c.Item(deliveryDocument, deliveryDocumentItem) wg.Done() }() default: wg.Done() } } wg.Wait() } ``` ## Output 本マイクロサービスでは、[golang-logging-library-for-sap](https://github.com/latonaio/golang-logging-library-for-sap) により、以下のようなデータがJSON形式で出力されます。 以下の sample.json の例は、SAP 入荷伝票 の ヘッダデータ が取得された結果の JSON の例です。 以下の項目のうち、"ReceivingLocationTimeZone" ~ "ToPartner" は、/SAP_API_Output_Formatter/type.go 内 の Type Header {} による出力結果です。"cursor" ~ "time"は、golang-logging-library による 定型フォーマットの出力結果です。 ``` { "cursor": "/Users/latona2/bitbucket/sap-api-integrations-inbound-delivery-reads/SAP_API_Caller/caller.go#L58", "function": "sap-api-integrations-inbound-delivery-reads/SAP_API_Caller.(*SAPAPICaller).Header", "level": "INFO", "message": [ { "ReceivingLocationTimeZone": "UTC", "ActualDeliveryRoute": "", "ActualGoodsMovementDate": "2017-01-11T09:00:00+09:00", "ActualGoodsMovementTime": "PT00H00M00S", "BillingDocumentDate": "", "CompleteDeliveryIsDefined": false, "ConfirmationTime": "PT00H00M00S", "CreationDate": "2017-01-11T09:00:00+09:00", "CreationTime": "PT11H32M52S", "CustomerGroup": "", "DeliveryBlockReason": "", "DeliveryDate": "2017-01-30T09:00:00+09:00", "DeliveryDocument": "180000000", "DeliveryDocumentBySupplier": "ASN#451435", "DeliveryDocumentType": "EL", "DeliveryIsInPlant": false, "DeliveryPriority": "00", "DeliveryTime": "PT22H30M00S", "DocumentDate": "2017-01-11T09:00:00+09:00", "GoodsIssueOrReceiptSlipNumber": "", "GoodsIssueTime": "PT00H00M00S", "HeaderBillgIncompletionStatus": "C", "HeaderBillingBlockReason": "", "HeaderDelivIncompletionStatus": "C", "HeaderGrossWeight": "10.000", "HeaderNetWeight": "9.000", "HeaderPackingIncompletionSts": "C", "HeaderPickgIncompletionStatus": "C", "HeaderVolume": "0.000", "HeaderVolumeUnit": "", "HeaderWeightUnit": "KG", "IncotermsClassification": "", "IsExportDelivery": "", "LastChangeDate": 
"2017-01-11T09:00:00+09:00", "LoadingDate": "", "LoadingPoint": "", "LoadingTime": "PT00H00M00S", "MeansOfTransport": "", "OrderCombinationIsAllowed": true, "OrderID": "", "PickedItemsLocation": "", "PickingDate": "", "PickingTime": "PT00H00M00S", "PlannedGoodsIssueDate": "", "ProposedDeliveryRoute": "", "ReceivingPlant": "", "RouteSchedule": "", "SalesDistrict": "", "SalesOffice": "", "SalesOrganization": "", "SDDocumentCategory": "7", "ShipmentBlockReason": "", "ShippingCondition": "01", "ShippingPoint": "", "ShippingType": "", "ShipToParty": "", "SoldToParty": "", "Supplier": "17300080", "TotalBlockStatus": "", "TotalCreditCheckStatus": "", "TotalNumberOfPackage": "00000", "TransactionCurrency": "", "TransportationGroup": "0001", "TransportationPlanningDate": "", "TransportationPlanningStatus": "", "TransportationPlanningTime": "PT00H00M00S", "UnloadingPointName": "", "to_Partner": "https://sandbox.api.sap.com/s4hanacloud/sap/opu/odata/sap/API_INBOUND_DELIVERY_SRV;v=0002/A_InbDeliveryHeader('180000000')/to_DeliveryDocumentPartner", "to_Item": "https://sandbox.api.sap.com/s4hanacloud/sap/opu/odata/sap/API_INBOUND_DELIVERY_SRV;v=0002/A_InbDeliveryHeader('180000000')/to_DeliveryDocumentItem" } ], "time": "2022-01-27T21:36:47+09:00" } ```
35.041475
180
0.736191
yue_Hant
0.757174
2ce9f106b206384b120d95a3810b9055ab635225
681
md
Markdown
docs-src/docs/src/content/api/CsvHelper.Configuration.Attributes/IndexAttribute.md
billrob/CsvHelper
7c9ccc4c0edd1d30494c16be89eabf6147ef65a4
[ "MS-PL" ]
2
2020-02-11T02:36:48.000Z
2020-02-21T19:35:33.000Z
docs-src/docs/src/content/api/CsvHelper.Configuration.Attributes/IndexAttribute.md
billrob/CsvHelper
7c9ccc4c0edd1d30494c16be89eabf6147ef65a4
[ "MS-PL" ]
4
2020-12-04T21:37:51.000Z
2022-02-27T10:02:53.000Z
docs-src/docs/src/content/api/CsvHelper.Configuration.Attributes/IndexAttribute.md
billrob/CsvHelper
7c9ccc4c0edd1d30494c16be89eabf6147ef65a4
[ "MS-PL" ]
1
2019-02-11T01:00:19.000Z
2019-02-11T01:00:19.000Z
# IndexAttribute Class Namespace: [CsvHelper.Configuration.Attributes](/api/CsvHelper.Configuration.Attributes) When reading, is used to get the field at the given index. When writing, the fields will be written in the order of the field indexes. ```cs [System.AttributeUsageAttribute] public class IndexAttribute : Attribute ``` Inheritance Object -> Attribute -> IndexAttribute ## Constructors &nbsp; | &nbsp; - | - IndexAttribute(Int32, Int32) | When reading, is used to get the field at the given index. When writing, the fields will be written in the order of the field indexes. ## Properties &nbsp; | &nbsp; - | - Index | Gets the index. IndexEnd | Gets the index end.
28.375
165
0.751836
eng_Latn
0.978676
2ceab4528fef0d307d0de578eb2cde23bf61c84b
828
md
Markdown
catalog/musume-ja-nakute-mama-ga-suki-nano/en-US_musume-ja-nakute-mama-ga-suki-nano-light-novel.md
htron-dev/baka-db
cb6e907a5c53113275da271631698cd3b35c9589
[ "MIT" ]
3
2021-08-12T20:02:29.000Z
2021-09-05T05:03:32.000Z
catalog/musume-ja-nakute-mama-ga-suki-nano/en-US_musume-ja-nakute-mama-ga-suki-nano-light-novel.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
8
2021-07-20T00:44:48.000Z
2021-09-22T18:44:04.000Z
catalog/musume-ja-nakute-mama-ga-suki-nano/en-US_musume-ja-nakute-mama-ga-suki-nano-light-novel.md
zzhenryquezz/baka-db
da8f54a87191a53a7fca54b0775b3c00f99d2531
[ "MIT" ]
2
2021-07-19T01:38:25.000Z
2021-07-29T08:10:29.000Z
# Musume ja Nakute Mama ga Suki nano!? ![musume-ja-nakute-mama-ga-suki-nano](https://cdn.myanimelist.net/images/manga/1/228823.jpg) - **type**: light-novel - **original-name**: 娘じゃなくて私〈ママ〉が好きなの!? - **start-date**: 2019-12-10 ## Tags - comedy - romance ## Authors - Nozomi - Kota (Story) - Giuniu (Art) ## Synopsis This is a romantic comedy starring an adult 30-year-old woman named Ayako who is confessed to by a man 10 years her junior, 20-year-old college student Takumi... A man she knows as the boy "Takkun" who lives next door and who has tutored her adopted daughter since he was 10. The revelation astounds her, as she thought her daughter Miu and he were an item. (Source: J-Novel Club Forum) ## Links - [My Anime list](https://myanimelist.net/manga/125629/Musume_ja_Nakute_Mama_ga_Suki_nano)
28.551724
356
0.714976
eng_Latn
0.969434
2ceb0a86cc926e6fddc8eaabf36c1a858fbfdde2
911
md
Markdown
README.md
Thekiso10/scraping-web-Listado-Mangas
a7365040bb7e55556c66d9f3233926b3aa28b72e
[ "MIT" ]
null
null
null
README.md
Thekiso10/scraping-web-Listado-Mangas
a7365040bb7e55556c66d9f3233926b3aa28b72e
[ "MIT" ]
null
null
null
README.md
Thekiso10/scraping-web-Listado-Mangas
a7365040bb7e55556c66d9f3233926b3aa28b72e
[ "MIT" ]
null
null
null
# scraping-web-Listado-Mangas Retrieval of manga data from the site: https://www.listadomanga.es ## Running the project 1. First, download the required dependencies: ```Node npm install ``` 2. Once all the dependencies are in place, you can run: ``` node index.js ``` ## Web Scraping The project retrieves the manga data from the page https://www.listadomanga.es/lista.php. The data is saved to a JSON file, generated by default at *folder/archivo.json*. This folder is inside the project itself. ## Collected data 1. The manga title 2. The manga ID on the website 3. The number of volumes in Japan 4. The status of the series in Japan 5. The number of volumes in Spain 6. The status of the series in Spain 7. List of the volumes published in Spain 8. Release date of the volumes published in Spain ## Version Current version: **1.1**
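To make the collected fields concrete, here is a hedged Python sketch of the kind of record the scraper might serialize. The key names are hypothetical; the real ones in folder/archivo.json are defined by the project.

```python
import json

# Hypothetical record mirroring the data fields listed above; the actual
# key names used by the project may differ.
manga = {
    "titulo": "Example Manga",           # 1. the manga title
    "id": 123,                           # 2. the manga ID on the website
    "volumenes_japon": 10,               # 3. volumes in Japan
    "estado_japon": "En curso",          # 4. series status in Japan
    "volumenes_espana": 8,               # 5. volumes in Spain
    "estado_espana": "Abierta",          # 6. series status in Spain
    "tomos": [                           # 7./8. volumes published in Spain
        {"numero": 1, "fecha_estreno": "2020-01-15"},
    ],
}

# Serialize the list the way the scraper would before writing archivo.json.
serialized = json.dumps([manga], ensure_ascii=False, indent=2)
```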
31.413793
239
0.754116
spa_Latn
0.996511
2cebd4e523f79d9d75802a8fff8fdad995c885ca
10,628
md
Markdown
Engineering/SystemDesign.md
BipulRaman/InterviewQuestions
3f4700d9593ebb6c79a2d160a9f00e1edd1f28e0
[ "MIT" ]
1
2022-02-02T08:46:27.000Z
2022-02-02T08:46:27.000Z
Engineering/SystemDesign.md
BipulRaman/InterviewPreparation
3f4700d9593ebb6c79a2d160a9f00e1edd1f28e0
[ "MIT" ]
null
null
null
Engineering/SystemDesign.md
BipulRaman/InterviewPreparation
3f4700d9593ebb6c79a2d160a9f00e1edd1f28e0
[ "MIT" ]
null
null
null
# System Design Cheatsheet Credits: @vasanthk > Picking the right architecture = Picking the right battles + Managing trade-offs ## Basic Steps 1) **Clarify and agree on the scope of the system** * **Use cases** (description of sequences of events that, taken together, lead to a system doing something useful) * Who is going to use it? * How are they going to use it? * **Constraints** * Mainly identify **traffic and data handling** constraints at scale. * Scale of the system (such as requests per second, request types, data written per second, data read per second) * Special system requirements such as multi-threading, read or write oriented. 2) **High level architecture design (Abstract design)** * Sketch the important components and connections between them, but don't go into too much detail. * Application service layer (serves the requests) * List different services required. * Data Storage layer * eg. Usually a scalable system includes webserver (load balancer), service (service partition), database (master/slave database cluster) and caching systems. 3) **Component Design** * Component + specific **APIs** required for each of them. * **Object oriented design** for functionalities. * Map features to modules: One scenario for one module. * Consider the relationships among modules: * Certain functions must have a unique instance (Singletons) * A core object can be made up of many other objects (composition). * One object is another object (inheritance) * **Database schema design.** 4) **Understanding Bottlenecks** * Perhaps your system needs a load balancer and many machines behind it to handle the user requests. * Or maybe the data is so huge that you need to distribute your database on multiple machines. What are some of the downsides that occur from doing that? * Is the database too slow and does it need some in-memory caching? 5) **Scaling** your abstract design * **Vertical scaling** * You scale by adding more power (CPU, RAM) to your existing machine.
* **Horizontal scaling** * You scale by adding more machines into your pool of resources. * **Caching** * Load balancing helps you scale horizontally across an ever-increasing number of servers, but caching will enable you to make vastly better use of the resources you already have, as well as making otherwise unattainable product requirements feasible. * **Application caching** requires explicit integration in the application code itself. Usually it will check if a value is in the cache; if not, retrieve the value from the database. * **Database caching** tends to be "free". When you flip your database on, you're going to get some level of default configuration which will provide some degree of caching and performance. Those initial settings will be optimized for a generic use case, and by tweaking them to your system's access patterns you can generally squeeze a great deal of performance improvement. * **In-memory caches** are most potent in terms of raw performance. This is because they store their entire set of data in memory and accesses to RAM are orders of magnitude faster than those to disk. eg. Memcached or Redis. * eg. Precalculating results (e.g. the number of visits from each referring domain for the previous day), * eg. Pre-generating expensive indexes (e.g. suggested stories based on a user's click history) * eg. Storing copies of frequently accessed data in a faster backend (e.g. Memcache instead of PostgreSQL). * **Load balancing** * Public servers of a scalable web service are hidden behind a load balancer. This load balancer evenly distributes load (requests from your users) onto your group/cluster of application servers.
* Types: Smart client (hard to get it perfect), Hardware load balancers ($$$ but reliable), Software load balancers (hybrid - works for most systems) <p align="center"> <img src="http://lethain.com/static/blog/intro_arch/load_balance.png" alt="Load Balancing"/> </p> * **Database replication** * Database replication is the frequent electronic copying of data from a database in one computer or server to a database in another so that all users share the same level of information. The result is a distributed database in which users can access data relevant to their tasks without interfering with the work of others. The implementation of database replication for the purpose of eliminating data ambiguity or inconsistency among users is known as normalization. * **Database partitioning** * Partitioning of relational data usually refers to decomposing your tables either row-wise (horizontally) or column-wise (vertically). * **Map-Reduce** * For sufficiently small systems you can often get away with ad hoc queries on a SQL database, but that approach may not scale up trivially once the quantity of data stored or write-load requires sharding your database, and will usually require dedicated slaves for the purpose of performing these queries (at which point, maybe you'd rather use a system designed for analyzing large quantities of data, rather than fighting your database). * Adding a map-reduce layer makes it possible to perform data and/or processing intensive operations in a reasonable amount of time. You might use it for calculating suggested users in a social graph, or for generating analytics reports. eg. Hadoop, and maybe Hive or HBase. * **Platform Layer (Services)** * Separating the platform and web application allows you to scale the pieces independently. If you add a new API, you can add platform servers without adding unnecessary capacity for your web application tier.
* Adding a platform layer can be a way to reuse your infrastructure for multiple products or interfaces (a web application, an API, an iPhone app, etc) without writing too much redundant boilerplate code for dealing with caches, databases, etc. <p align="center"> <img src="http://lethain.com/static/blog/intro_arch/platform_layer.png" alt="Platform Layer"/> </p> ## Key topics for designing a system 1) **Concurrency** * Do you understand threads, deadlock, and starvation? Do you know how to parallelize algorithms? Do you understand consistency and coherence? 2) **Networking** * Do you roughly understand IPC and TCP/IP? Do you know the difference between throughput and latency, and when each is the relevant factor? 3) **Abstraction** * You should understand the systems you’re building upon. Do you know roughly how an OS, file system, and database work? Do you know about the various levels of caching in a modern OS? 4) **Real-World Performance** * You should be familiar with the speed of everything your computer can do, including the relative performance of RAM, disk, SSD and your network. 5) **Estimation** * Estimation, especially in the form of a back-of-the-envelope calculation, is important because it helps you narrow down the list of possible solutions to only the ones that are feasible. Then you have only a few prototypes or micro-benchmarks to write. 6) **Availability & Reliability** * Are you thinking about how things can fail, especially in a distributed environment? Do you know how to design a system to cope with network failures? Do you understand durability?
* This service is effective in speeding the delivery of content of websites with high traffic and websites that have global reach. The closer the CDN server is to the user geographically, the faster the content will be delivered to the user. * CDNs also provide protection from large surges in traffic. * Full Text Search * Using Sphinx/Lucene/Solr - which achieve fast search responses because, instead of searching the text directly, they search an index. * Offline support/Progressive enhancement * Service Workers * Web Workers * Server Side rendering * Asynchronous loading of assets (Lazy load items) * Minimizing network requests (HTTP/2 + bundling/sprites, etc.) * Developer productivity/Tooling * Accessibility * Internationalization * Responsive design * Browser compatibility ## Working Components of Front-end Architecture * Code * HTML5/WAI-ARIA * CSS/Sass Code standards and organization * Object-Oriented approach (how do objects break down and get put together) * JS frameworks/organization/performance optimization techniques * Asset Delivery - Front-end Ops * Documentation * Onboarding Docs * Styleguide/Pattern Library * Architecture Diagrams (code flow, tool chain) * Testing * Performance Testing * Visual Regression * Unit Testing * End-to-End Testing * Process * Git Workflow * Dependency Management (npm, Bundler, Bower) * Build Systems (Grunt/Gulp) * Deploy Process * Continuous Integration (Travis CI, Jenkins) ## Links - [How to rock a systems design interview](http://www.palantir.com/2011/10/how-to-rock-a-systems-design-interview/) - [System Design Interviewing](http://www.hiredintech.com/system-design/) - [Scalability for Dummies](http://www.lecloud.net/tagged/scalability) - [Introduction to Architecting Systems for Scale](http://lethain.com/introduction-to-architecting-systems-for-scale/) - [Scalable System Design Patterns](http://horicky.blogspot.com/2010/10/scalable-system-design-patterns.html) - [Scalable Web Architecture and Distributed
Systems](http://www.aosabook.org/en/distsys.html) - [What is the best way to design a web site to be highly scalable?](http://programmers.stackexchange.com/a/108679/62739) - [How web works?](https://github.com/vasanthk/how-web-works) ## Sample Questions - [System Design: How to design Twitter?](https://www.youtube.com/watch?v=KmAyPUv9gOY) - [System Design: Uber Lyft ride sharing services](https://www.youtube.com/watch?v=J3DY3Te3A_A) - [System Design: Messenger service like Whatsapp or WeChat](https://www.youtube.com/watch?v=5m0L0k8ZtEs) - [Scalability: Building dynamic websites](https://www.youtube.com/watch?v=-W9F__D3oY4&t=7s) - [System Design: Design a service like TinyUrl](https://www.youtube.com/watch?v=fMZMm_0ZhK4&t=205s)
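The application-caching pattern described earlier (check whether a value is in the cache and, if not, retrieve it from the database) is commonly called cache-aside. A minimal in-process sketch, using a plain dict as a stand-in for a real Memcached or Redis client:

```python
class CacheAsideStore:
    """Cache-aside (lazy-loading) read path: consult the cache first,
    fall back to the database on a miss, then populate the cache."""

    def __init__(self, db):
        self.db = db          # any mapping-like backing store
        self.cache = {}       # stand-in for Memcached/Redis
        self.hits = 0
        self.misses = 0

    def get(self, key):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.db[key]  # the expensive lookup in a real system
        self.cache[key] = value
        return value

store = CacheAsideStore({"user:1": "Ada"})
store.get("user:1")  # miss: loaded from the backing store
store.get("user:1")  # hit: served from the cache
```

Invalidation and expiry are deliberately omitted; in production those are where most of the complexity lives.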
64.024096
471
0.75047
eng_Latn
0.994292
2cebd8a0eae6558fa9555904c143f4768bb560f2
247
md
Markdown
content/bio/samuel-lbryian.md
Grindelek/lbry.io
b7ca7c67c21f416e94cbbdc1523267abf6de9639
[ "MIT" ]
null
null
null
content/bio/samuel-lbryian.md
Grindelek/lbry.io
b7ca7c67c21f416e94cbbdc1523267abf6de9639
[ "MIT" ]
null
null
null
content/bio/samuel-lbryian.md
Grindelek/lbry.io
b7ca7c67c21f416e94cbbdc1523267abf6de9639
[ "MIT" ]
null
null
null
--- name: Samuel Bryan role: Pen Name email: hello@lbry.io --- Much of our writing is a collaboration between LBRY team members, so we use SamueL BRYan to share credit. Sam has become a friend... an imaginary friend... even though we're adults...
35.285714
183
0.740891
eng_Latn
0.997983
2cec134f0d1887bdf75e8e2b05393fce55efd51d
37
md
Markdown
README.md
noir-neo/rin
dac958370cdd114845d840b20d43186d46b328cc
[ "MIT" ]
null
null
null
README.md
noir-neo/rin
dac958370cdd114845d840b20d43186d46b328cc
[ "MIT" ]
5
2018-07-11T17:20:13.000Z
2018-08-26T05:45:32.000Z
README.md
noir-neo/rin
dac958370cdd114845d840b20d43186d46b328cc
[ "MIT" ]
null
null
null
# Rin Multiplayer AR campfire app.
7.4
28
0.72973
eng_Latn
0.904492
2cec9497880c99f08e520a6e76760786a9cb1332
247
md
Markdown
includes/ssastoria-md.md
adamsitnik/docs.pl-pl
c83da3ae45af087f6611635c348088ba35234d49
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/ssastoria-md.md
adamsitnik/docs.pl-pl
c83da3ae45af087f6611635c348088ba35234d49
[ "CC-BY-4.0", "MIT" ]
null
null
null
includes/ssastoria-md.md
adamsitnik/docs.pl-pl
c83da3ae45af087f6611635c348088ba35234d49
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- ms.openlocfilehash: a439408e0b217b8d70feba8e60a7e93c76602358 ms.sourcegitcommit: 8699383914c24a0df033393f55db3369db728a7b ms.translationtype: MT ms.contentlocale: pl-PL ms.lasthandoff: 05/15/2019 ms.locfileid: "65672400" --- WCF Data Services
24.7
60
0.838057
yue_Hant
0.160476
2ced2964b40a1c816c462b79ff7d5ead8e22d7a6
656
md
Markdown
README.md
Fabien-Chouteau/qoi-ada
452eb35cb68adcb17002f961d25b00555e0ab9a6
[ "MIT" ]
7
2022-03-01T18:31:08.000Z
2022-03-14T16:11:16.000Z
README.md
Fabien-Chouteau/qoi-ada
452eb35cb68adcb17002f961d25b00555e0ab9a6
[ "MIT" ]
null
null
null
README.md
Fabien-Chouteau/qoi-ada
452eb35cb68adcb17002f961d25b00555e0ab9a6
[ "MIT" ]
1
2021-12-03T16:06:04.000Z
2021-12-03T16:06:04.000Z
# qoi-spark “Quite OK Image” Ada/SPARK implementation This is based on [QOI](https://qoiformat.org/) format specification V1. To call the `Encode`/`Decode` procedure you have to provide a large enough output buffer. If the provided output buffer is not large enough, each procedure will return with an `Output_Size` of zero. For `Encode` the minimum size for the output buffer is given by the `Encode_Worst_Case` function based on the dimensions of the image and the number of channels. For `Decode` you should use the `Get_Desc` procedure to get the image specification and then the exact output size will be `Desc.Width * Desc.Height * Desc.Channels`.
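For intuition on sizing the Encode output buffer: in the QOI format (per the specification at qoiformat.org) the 14-byte header and 8-byte end marker are fixed, and the most expensive chunk costs one tag byte on top of the raw channel bytes for each pixel. A back-of-the-envelope Python sketch of these bounds; it illustrates the specification, not the library's actual Encode_Worst_Case formula, which lives in the Ada source:

```python
QOI_HEADER_SIZE = 14      # magic + width + height + channels + colorspace
QOI_END_MARKER_SIZE = 8   # seven 0x00 bytes followed by one 0x01 byte

def worst_case_encoded_size(width: int, height: int, channels: int) -> int:
    """Upper bound on a QOI stream: every pixel emitted as a full
    QOI_OP_RGB/QOI_OP_RGBA chunk (one tag byte plus the raw channels)."""
    return QOI_HEADER_SIZE + width * height * (channels + 1) + QOI_END_MARKER_SIZE

def decoded_size(width: int, height: int, channels: int) -> int:
    # Matches the README: Desc.Width * Desc.Height * Desc.Channels
    return width * height * channels
```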
50.461538
79
0.780488
eng_Latn
0.995982
2ced3d3ce68c72232f3040130aaf5e406181519f
1,860
md
Markdown
content/series/ce/gundam-seed-destiny/mechas/minerva.md
Wivik/gundam-france
65d84098eec431e7e27b6a6c0f1e6eadea1c2bc8
[ "MIT" ]
null
null
null
content/series/ce/gundam-seed-destiny/mechas/minerva.md
Wivik/gundam-france
65d84098eec431e7e27b6a6c0f1e6eadea1c2bc8
[ "MIT" ]
null
null
null
content/series/ce/gundam-seed-destiny/mechas/minerva.md
Wivik/gundam-france
65d84098eec431e7e27b6a6c0f1e6eadea1c2bc8
[ "MIT" ]
null
null
null
--- title: Minerva --- Minerva ------- {imgmechas} Minerva::images/stories/saga/gundamseeddestiny/mechas/zaft/minerva.png |||| Folded wings::images/stories/saga/gundamseeddestiny/mechas/zaft/minerva-folded.png |||| Space Booster::images/stories/saga/gundamseeddestiny/mechas/zaft/minerva-booster.png {/imgmechas} Minerva - Name: LHM-BB01 Minerva - Type: Combat warship - Designer: ZAFT - Operator: ZAFT - Date of creation: CE 73 - Date of commissioning: CE 73 - Length: 350m - Mecha Designer: Kimitoshi Yamane - Armament: XM47 "Tristan" Dual Beam Cannon x2, M10 "Isolde" 42cm Triple Cannon, QZX-1 "Tannhaüser" Positron Cannon, 40mm CIWS (Close-in Weapon System) x12, missile launchers, torpedo launchers. - MS capacity: 10 machines, 3 catapults The latest word in ZAFT warships, the Minerva is the first new post-war model. Very large and heavily armed, it will serve as the home base for the Impulse Gundam and other MS such as the Zaku Warrior and Phantom. Accessory images {accessoiresmechas} 40mm CIWS::images/stories/saga/gundamseeddestiny/mechas/accessoires/minerva-ciws.jpg |||| Impulse Gundam special catapult::images/stories/saga/gundamseeddestiny/mechas/accessoires/minerva-impulsecatapult.jpg |||| Linear Catapult::images/stories/saga/gundamseeddestiny/mechas/accessoires/minerva-linearcatapult.jpg |||| Missiles::images/stories/saga/gundamseeddestiny/mechas/accessoires/minerva-missiles.jpg |||| M10 "Isolde" 42cm Triple Cannon::images/stories/saga/gundamseeddestiny/mechas/accessoires/minerva-m10.jpg |||| QZX-1 "Tannhaüser" Positron Cannon::images/stories/saga/gundamseeddestiny/mechas/accessoires/minerva-qzx-1.jpg |||| XM47 "Tristan" Dual Beam Cannon::images/stories/saga/gundamseeddestiny/mechas/accessoires/minerva-xm47.jpg {/accessoiresmechas}
38.75
228
0.771505
fra_Latn
0.326891
2cedeb0585697f17055a5c98f15466fe2eddd26d
235
md
Markdown
docs/ForecastDataResponse.md
MallorcaSoftware/OpenWeatherMapApiClient
b6d60cf3e5060d70009b5fcb7ef61708b482b116
[ "Apache-2.0" ]
null
null
null
docs/ForecastDataResponse.md
MallorcaSoftware/OpenWeatherMapApiClient
b6d60cf3e5060d70009b5fcb7ef61708b482b116
[ "Apache-2.0" ]
null
null
null
docs/ForecastDataResponse.md
MallorcaSoftware/OpenWeatherMapApiClient
b6d60cf3e5060d70009b5fcb7ef61708b482b116
[ "Apache-2.0" ]
null
null
null
# ForecastDataResponse ## Properties Name | Type | Description | Notes ------------ | ------------- | ------------- | ------------- **list** | [**List&lt;ForecastDataListItemDto&gt;**](ForecastDataListItemDto.md) | | [optional]
21.363636
97
0.519149
kor_Hang
0.143527
2cedf2a9eaf6fe3979aff810cd99f757efd1f751
3,252
md
Markdown
README.md
xTeamStanly/react-triangulate
efadbdbcb2655fa924a498d799741d047b15693c
[ "MIT" ]
null
null
null
README.md
xTeamStanly/react-triangulate
efadbdbcb2655fa924a498d799741d047b15693c
[ "MIT" ]
null
null
null
README.md
xTeamStanly/react-triangulate
efadbdbcb2655fa924a498d799741d047b15693c
[ "MIT" ]
null
null
null
# react-triangulate Simple Delaunay triangulation example with animation. Triangle points are generated using [delaunator](https://www.npmjs.com/package/delaunator). [DEMO](https://xteamstanly.github.io/react-triangulate/) ## Preview ![preview](preview.png) ## Triangulate properties | Prop | Type | Definition | Default value | |---------------------|:----------:|----------------------------------------------------------------------|:-------------:| | topcolor | string | Top gradient color. | #221A33 | | botcolor | string | Bottom gradient color. | #8A3D99 | | pointscolor | string | Color of triangle points. | #000000 | | mincirclesize | number | Minimal triangle point size. | 3 | | maxcirclesize | number | Maximal triangle point size. | 8 | | count | number | Number of triangle points. | 100 | | minspeed | number | Minimal speed of a triangle point. | 0.1 | | maxspeed | number | Maximal speed of a triangle point. | 0.5 | | pointshadowblur | number | Blur strength around triangle points (shadow). | 0 | | pointshadowcolor | string | Blur color around triangle points. | #00000000 | | colorvariance | boolean | Should triangle points be a different color? | false | | tint | percentage | Triangle points color tint. | 0 | | shade | percentage | Triangle points color shade. | 1 | | triangleshadowblur | number | Blur strength inside a triangle (50 looks like a nice inner shadow). | 0 | | triangleshadowcolor | string | Blur color inside a triangle. | #00000000 | | linewidth | number | Width of a line connecting triangle points. | 1.0 | | linecolor | string | Color of a line connecting triangle points. | #00000000 | | fps | number | Frames per second limit. | 60 | | backgroundcolor | string | Background canvas color - fallback! | #00000000 | | fadecolor | string | Fade color when resizing window. | #221A33 | | stretching | boolean | Does the canvas stretch during window resize? | false | ## Todo - make a non-intrusive control panel for customisation ([resource](https://github.com/dataarts/dat.gui))
90.333333
123
0.423124
eng_Latn
0.844169
2cefe7a2c577c112fb0251d72f2fb83bcc7a2079
2,238
md
Markdown
handbook/covenant.md
dOrgTech/handbook
e610217cd025649c1eeb52c74597f5dee065ebbe
[ "MIT" ]
1
2019-11-22T22:37:51.000Z
2019-11-22T22:37:51.000Z
handbook/covenant.md
dOrgTech/handbook
e610217cd025649c1eeb52c74597f5dee065ebbe
[ "MIT" ]
null
null
null
handbook/covenant.md
dOrgTech/handbook
e610217cd025649c1eeb52c74597f5dee065ebbe
[ "MIT" ]
null
null
null
# Member Covenant We as dOrg members pledge to make participation in our community a positive experience for everyone. We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community. ## Our Standards Examples of behavior that contributes to a positive environment for our community include: * Demonstrating empathy and kindness toward other people * Assuming good intentions * Being respectful of differing opinions, viewpoints, and experiences * Giving and gracefully accepting constructive feedback * Taking ownership of our commitments and communicating changes with multiple weeks notice * Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience * Encouraging transparency and collaboration while preserving privacy where reasonably expected * Focusing on what is best not just for us as individuals, but for the overall community Examples of unacceptable behavior include: * Trolling, insulting or derogatory comments, and personal or political attacks * False accusations or accusations without evidence * Public or private harassment (repeated unwanted contact) * Spam (unsolicited off-topic messages) * Sexually or violently explicit content * Silencing others (e.g. deleting innocuous comments) * Retaliatory work stoppage or public defamation * Persistently missing meetings without notice * Ghosting clients or teammates * Other conduct which could reasonably be considered inappropriate in a professional setting ## Scope This Member Covenant applies within all dOrg spaces, including our forum, Discord, Snapshot, Github, and all other spaces associated with dOrg. This covenant also applies when an individual is representing dOrg in public spaces, such as using an official email address, social media account, interacting with a client, or acting as a representative at an online or offline event. 
Anyone who infringes on this Member Covenant may be subject to our [member removal protocol](lifecycle/removal.md#proposing-removal). {% hint style="info" %} This Code of Conduct is adapted from the [Contributor Covenant v2.0](http://contributor-covenant.org/version/2/0/code\_of\_conduct). {% endhint %}
55.95
379
0.813673
eng_Latn
0.99945
2cf03a5580c68e0c7d05747ed8bfbe1af7e5f8e4
1,124
md
Markdown
_posts/2010-08-02-quality-rain-harvesting-equipment.md
BlogToolshed50/thedailyevie.com
f72721aa6e8dda781c5a1ce8ba6328112619f7d0
[ "CC-BY-4.0" ]
null
null
null
_posts/2010-08-02-quality-rain-harvesting-equipment.md
BlogToolshed50/thedailyevie.com
f72721aa6e8dda781c5a1ce8ba6328112619f7d0
[ "CC-BY-4.0" ]
null
null
null
_posts/2010-08-02-quality-rain-harvesting-equipment.md
BlogToolshed50/thedailyevie.com
f72721aa6e8dda781c5a1ce8ba6328112619f7d0
[ "CC-BY-4.0" ]
null
null
null
--- id: 378 title: Quality Rain Harvesting Equipment date: 2010-08-02T07:27:45+00:00 author: admin layout: post guid: http://thedailyevie.com/?p=378 permalink: /2010/08/02/quality-rain-harvesting-equipment/ categories: - General --- Nowadays, rain water harvesting is needed to conserve our water resources. If you are looking for rain harvesting equipment, you can visit SimplyRainBarrels.com, the ultimate source for buying all types of rain barrels and rain harvesting equipment at the lowest possible price. They offer high quality rain barrels and tank accessory products in different styles and colors, so you can select the perfect one for your needs. Buy a stylish rain water barrel for your rain harvesting needs; it can help save water and also give your home an elegant look. SimplyRainBarrels.com provides hundreds of different pieces of rain harvesting equipment and high quality rain barrels for sale. You can view all the products on their website, which makes your selection easier. Use their services for your rain barrel purchase and get an excellent shopping experience.
70.25
308
0.80427
eng_Latn
0.999092
2cf0afbed9236a14465fbc36fdbb20c728ae2585
68
md
Markdown
README.md
Manewing/cpputils
8e5d9b3e34c052558f3dc60ccb8437a70b39626b
[ "MIT" ]
null
null
null
README.md
Manewing/cpputils
8e5d9b3e34c052558f3dc60ccb8437a70b39626b
[ "MIT" ]
null
null
null
README.md
Manewing/cpputils
8e5d9b3e34c052558f3dc60ccb8437a70b39626b
[ "MIT" ]
null
null
null
# cpputils All kinds of basic utility classes and functions in C++.
22.666667
56
0.764706
eng_Latn
0.984778
2cf17bfb30c3621d69a73435eee6c8450dc32c25
1,479
md
Markdown
docs/overlay/demo/scroll.md
hellohy/next
9d8cc8bacc71f50b57c3697cd99d9267c6e706bb
[ "MIT" ]
null
null
null
docs/overlay/demo/scroll.md
hellohy/next
9d8cc8bacc71f50b57c3697cd99d9267c6e706bb
[ "MIT" ]
null
null
null
docs/overlay/demo/scroll.md
hellohy/next
9d8cc8bacc71f50b57c3697cd99d9267c6e706bb
[ "MIT" ]
null
null
null
# 弹层跟随滚动 - order: 5 弹层默认参照 document.body 绝对定位,如果弹层显示隐藏的触发元素所在容器(一般为父节点)有滚动条,那么当容器滚动时,会发生触发元素与弹层相分离的情况,解决的办法是将弹层渲染到触发元素所在的容器中。(触发元素所在的容器,必须设置 position 样式,以完成弹层的绝对定位。) :::lang=en-us # Overlay follows the container scroll - order: 5 The overlay defaults to absolute positioning with reference to document.body. If the overlay trigger element's container (usually the parent node) has a scrollbar, then when the container is scrolled, the trigger element will be separated from the overlay. The solution is to render the overlay to the container where the trigger element is located. (The container must have a position style to support the absolute positioning of the overlay.) ::: --- ````jsx import { Overlay } from '@alifd/next'; const { Popup } = Overlay; ReactDOM.render(( <div className="scroll-container"> <Popup trigger={<button>Open</button>} triggerType="click" container={trigger => trigger.parentNode}> <div className="overlay-demo"> Hello World From Popup! </div> </Popup> <div style={{ height: '300px' }} /> </div> ), mountNode); ```` ````css .overlay-demo { width: 300px; height: 100px; padding: 10px; border: 1px solid #999999; background: #FFFFFF; box-shadow: 2px 2px 20px rgba(0,0,0,0.15); } .scroll-container { position: relative; height: 150px; padding: 10px; border: 1px solid #999999; overflow: auto; } ````
26.890909
444
0.670047
eng_Latn
0.845863
2cf32e5e8e16b24e17433049f7b09efe0cbc5904
4,768
md
Markdown
README.md
applibgroup/UniversalPickerDialog
bf422f52bc23d8e4e66349378ade7ecdbc265100
[ "Apache-2.0" ]
null
null
null
README.md
applibgroup/UniversalPickerDialog
bf422f52bc23d8e4e66349378ade7ecdbc265100
[ "Apache-2.0" ]
5
2021-10-04T08:35:03.000Z
2021-11-21T19:22:58.000Z
README.md
applibgroup/UniversalPickerDialog
bf422f52bc23d8e4e66349378ade7ecdbc265100
[ "Apache-2.0" ]
2
2021-10-04T08:25:33.000Z
2021-10-04T08:27:13.000Z
# UniversalPickerDialog [![Build](https://github.com/applibgroup/UniversalPickerDialog/actions/workflows/main.yml/badge.svg)](https://github.com/applibgroup/UniversalPickerDialog/actions/workflows/main.yml) [![Quality Gate Status](https://sonarcloud.io/api/project_badges/measure?project=applibgroup_UniversalPickerDialog&metric=alert_status)](https://sonarcloud.io/dashboard?id=applibgroup_UniversalPickerDialog) [![license](https://img.shields.io/github/license/applibgroup/UniversalPickerDialog?color=blue)](LICENSE) ![1.0.0](https://img.shields.io/badge/version-1.0.0-blue.svg) An HMOS third-party library that makes implementing dialogs easier. It includes two abilities: 1. Single Picker 2. Multi Picker ### Screenshots --- ![Screenshots](https://github.com/prasanta352/UniversalPickerDialog-1/blob/main/images/all.png?raw=true) ### Source --- This library has been inspired by [stfalcon-studio/UniversalPickerDialog](https://github.com/stfalcon-studio/UniversalPickerDialog) ### Integration --- **Maven** ```xml <dependency> <groupId>dev.applibgroup</groupId> <artifactId>universalpickerdialog</artifactId> <version>1.0.0</version> <type>har</type> </dependency> ``` **Gradle** ```groovy implementation 'dev.applibgroup:universalpickerdialog:1.0.0' ``` **From Source** 1. To use the UniversalPickerDialog module in the sample app, include the source code and add the dependency below in entry/build.gradle to generate the hap/support.har. ```groovy implementation project(path: ':universalpickerdialog') ``` 2. To use the UniversalPickerDialog module in a separate application via a har file, add the har file to the entry/libs folder and add the dependency below in the entry/build.gradle file.
```groovy implementation fileTree(dir: 'libs', include: ['*.har']) ``` ### Usages --- implement callback interfaces: ```java public class MainAbilitySlice extends AbilitySlice implements ListContainer.ItemClickedListener, UniversalPickerDialog.OnPickListener { ``` Then implement OnPickListener.onPick(int[], int) method: ```java @Override public void onPick(int[] selectedValues, int key) { String str = list.get(selectedValues[0]); Object obj = array[selectedValues[0]]; /*do some logic*/ } ``` Now you can build the dialog and show it. Just add these few lines: ```java new UniversalPickerDialog.Builder(this) .setTitle("UniversalPickerDialog") .setListener(this) .setInputs( new UniversalPickerDialog.Input(0, list), new UniversalPickerDialog.Input(2, array) ) .show(); ``` Data set is passing to Picker using Input class that supports lists as well as arrays, so no data conversion is required :)). It takes in constructor default item position in carousel as the first argument and data set as the second. Builder was extended by a many methods for more flexibility and convenience of use. 
Here's the full list (you can find the javadoc on each of these methods): ```java new UniversalPickerDialog.Builder(this) .setTitle(ResourceTable.String_entry_MainAbility) .setTitle("Hello!") .setTitleColorRes(ResourceTable.Color_green) .setTitleColor(Color.GREEN) .setBackgroundColorRes(ResourceTable.Color_white) .setBackgroundColor(Color.WHITE) .setContentTextColorRes(ResourceTable.Color_green) .setContentTextColor(Color.GREEN) .setPositiveButtonText(ResourceTable.String_ok_text) .setPositiveButtonText("Yep!") .setNegativeButtonText(ResourceTable.String_cancel_text) .setNegativeButtonText("Nope!") .setButtonsColor(Color.GREEN) .setButtonsColorRes(ResourceTable.Color_green) .setPositiveButtonColorRes(ResourceTable.Color_green) .setPositiveButtonColor(Color.GREEN) .setNegativeButtonColorRes(ResourceTable.Color_red) .setNegativeButtonColor(Color.RED) .setContentTextSize(16) .setListener(this) .setInputs( new UniversalPickerDialog.Input(2, list), new UniversalPickerDialog.Input(0, array) ) .setKey(123) .build() .show(); ``` Take a look at the [sample project](entry) for more information. ### License ``` Copyright (C) 2017 stfalcon.com Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ```
35.318519
233
0.742869
eng_Latn
0.438464
2cf36a5f771f9a41c7b27acf9883c0d06c066372
133
md
Markdown
packages/hydrogen/src/foundation/FileRoutes/docs/2-related.md
timdennis58/hydrogen
e9c8b2bd73ba1944bbf8da86f1e215ce3d80ec94
[ "MIT" ]
null
null
null
packages/hydrogen/src/foundation/FileRoutes/docs/2-related.md
timdennis58/hydrogen
e9c8b2bd73ba1944bbf8da86f1e215ce3d80ec94
[ "MIT" ]
null
null
null
packages/hydrogen/src/foundation/FileRoutes/docs/2-related.md
timdennis58/hydrogen
e9c8b2bd73ba1944bbf8da86f1e215ce3d80ec94
[ "MIT" ]
null
null
null
## Related components

- [`Router`](/api/hydrogen/components/framework/router)
- [`Route`](/api/hydrogen/components/framework/route)
26.6
55
0.744361
eng_Latn
0.475399
2cf5cb188d5efe61f0041795d079d7c248483075
46,565
markdown
Markdown
_posts/2005-03-17-determining-an-actual-amount-of-time-a-processor-consumes-in-executing-a-portion-of-code.markdown
api-evangelist/patents-2005
66e2607b8cab00c01031607b66c9f69f6c5e11e1
[ "Apache-2.0" ]
null
null
null
_posts/2005-03-17-determining-an-actual-amount-of-time-a-processor-consumes-in-executing-a-portion-of-code.markdown
api-evangelist/patents-2005
66e2607b8cab00c01031607b66c9f69f6c5e11e1
[ "Apache-2.0" ]
null
null
null
_posts/2005-03-17-determining-an-actual-amount-of-time-a-processor-consumes-in-executing-a-portion-of-code.markdown
api-evangelist/patents-2005
66e2607b8cab00c01031607b66c9f69f6c5e11e1
[ "Apache-2.0" ]
3
2019-10-31T13:03:08.000Z
2021-12-14T08:10:54.000Z
--- title: Determining an actual amount of time a processor consumes in executing a portion of code abstract: Systems and methods are provided that determine the actual amount of time a processor consumes in executing a code portion. The actual execution time of a code portion may be accurately determined by taking into consideration context switches and/or overhead time corresponding to the code portion. Determining the actual execution time of a code portion may include recording context switches and time values that occur during the execution of the code portion. This information along with overhead measurements may be used to generate the actual execution time of a code portion, as will be described in more detail below. For example, the switched-out intervals resulting from the context switches and the overhead time associated with the time measurements may be subtracted from the elapsed time to produce the actual execution time of a code portion. url: http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=%2Fnetahtml%2FPTO%2Fsearch-adv.htm&r=1&f=G&l=50&d=PALL&S1=07774784&OS=07774784&RS=07774784 owner: Microsoft Corporation number: 07774784 owner_city: Redmond owner_country: US publication_date: 20050317 --- Typically during the development of a software application performance tests are performed on one or more parts of the application. This testing often involves measuring an amount of time a processor spends i.e. consumes executing one or more portions of code e.g. a function a procedure or other logical component of the software application. For example such an amount of time may be determined by recording a time at which execution of the code portion begins and a time at which execution of the code portion ends. These times are often recorded by including probes at locations within the software application e.g. the beginning and end of the code portion the execution of which results in the time values being recorded. 
Determining the amount of execution time based solely on the times at which execution of the code portion begins and ends is not an accurate representation of the actual amount of time a processor consumes in executing the code portion. Recording the time values itself consumes time, including the time required to read the time value and the time required to write (i.e., record) the time value to a recording medium such as, for example, a volatile memory or a non-volatile storage medium. The time consumed to acquire (including reading and recording) time values is referred to herein as overhead or overhead time. For example, begin and end time measurements may indicate that a code portion consumed 800 processor cycles. However, it may have taken three processor cycles to record the begin time. Accordingly, the actual amount of time the processor consumed in executing the code portion (assuming no other variables, such as the context switches discussed below) is 800 − 3 = 797 processor cycles. It should be appreciated that the processor cycles consumed in acquiring the end time do not impact the accuracy of the measured execution time of the code portion. This is because the acquisition time for the end time occurs after the recorded end time itself.

Another problem arises in multi-tasking operating systems (OS). A multi-tasking OS simulates concurrent operation of different processing threads on a processor (e.g., a central processing unit (CPU) or microprocessor) by alternating or interleaving execution of the different threads. After one thread has executed for a relatively short period of time, often referred to as a quantum, the OS interrupts the processor and adjusts its context to a different thread. Adjusting or switching the context of a processor from one thread to another is an event referred to herein as a context switch. 
The time values recorded for the begin and end time of the execution of a code portion do not take into account whether one or more context switches have occurred between the begin time and end time. If one or more context switches occurred during this interval, then the interval is not an accurate representation of how much time the processor spent executing the code portion. That is, the interval will reflect a longer period of time than was actually consumed by the processor in executing the code portion itself. Applicants have recognized the need for a reliable system and method for determining the actual amount of time a processor consumes in executing a code portion, particularly when executed on a multi-tasking operating system.

Accordingly, described herein are systems and methods that determine the actual amount of time a processor consumes in executing a code portion. As used herein, the active time of a code portion is the actual amount of time a processor consumes in executing the code portion. The active time of a code portion may be accurately determined by taking into consideration context switches and/or overhead time corresponding to the code portion. As used herein, the elapsed time of a code portion is the absolute time elapsed between the beginning and end times of the execution of the code portion (i.e., the temporal interval defined by the recorded beginning and end time of the execution of the code portion). The elapsed time of a code portion may include time that the processor consumed during a switched-out interval. As used herein, a switched-out interval is an interval that occurs during an elapsed time interval during which the processor executes a processing thread other than the processing thread corresponding to the code portion being measured. The elapsed time also may include overhead time resulting from the acquiring of the begin and end time values of the code portion. 
Further it should be appreciated that the code portion may include additional probes other than the probes resulting in the recording of the begin and end times. For example additional probes may have been placed at the beginning and ending of other code portions e.g. functions procedures or other logical components within the code portion being measured. The execution of these additional probes produces additional overhead time within the elapsed time. Determining the active time of a code portion may include recording context switches and time values that occur during the execution of the code portion. This information along with overhead measurements may be used to generate the active time of a code portion as will be described in more detail below. For example the switched out intervals resulting from the context switches and the overhead time associated with the time measurements may be subtracted from the elapsed time to produce the active time of a code portion. In an embodiment of the invention an actual amount of time consumed by a processor of a multi tasking operating system in executing a portion of code of a first processing thread is determined. First information is received indicative of a first temporal interval defining a begin time and an end time of execution of the code portion. Second information is received indicative of one or more second temporal intervals occurring within the first temporal interval during which the processor executed a processing thread other than the first processing thread. Based on the first and second information the actual amount of time consumed in executing the code portion is determined. In an aspect of this embodiment a total combined time of the one or more second temporal intervals is subtracted from the first temporal interval. In another aspect of this embodiment third information is received indicative of overhead time consumed in acquiring the first information during the first temporal interval. 
Determining the actual amount of time includes subtracting from the first temporal interval the overhead time and a total combined time of the one or more second temporal intervals. In another aspect of this embodiment the second information includes a plurality of information elements. Each first information element specifies a particular time an old thread from which the processor switched context at the particular time and a new thread to which the processor switched context at the particular time. Determining the actual amount of time includes determining for each of the one or more second temporal intervals a begin time and an end time of the second temporal interval based on the information elements. In yet another aspect of this embodiment the first information includes a plurality of first information elements each first information element specifying a time a processing thread and a type of event. Determining the actual amount of time includes an act of determining a begin time and an end time of the first temporal interval based on the plurality of first information elements. In another aspect of this embodiment the second information includes a plurality of second information elements. Each second information element specifies a particular time an old thread from which the processor switched context at the particular time and a new thread to which the processor switched context at the particular time. Third information is received including one or more third information elements each third information element specifying time consumed in executing a type of event. Determining the actual amount of time includes determining for each of the one or more second temporal intervals a begin time and an end time of the second temporal interval based on the second information elements. 
Determining the actual amount of time further includes determining from the first and second information elements a total overhead time consumed by the processor in acquiring the first information and determining the actual amount of time consumed in executing the code portion. In another aspect one or more of the above acts and or aspects of the above embodiment are performed on a computer system. In another embodiment of the invention a computer program product is provided. The product includes a computer readable medium and computer readable signals stored on the computer readable medium defining instructions that as a result of being executed by a computer instruct the computer to perform the method of the embodiment of the invention described in the preceding paragraphs and or one or more aspects thereof described in the preceding paragraphs. In another embodiment of the invention a system is provided for determining an actual amount of processor time consumed by a processor of a multi tasking operating system in executing a portion of code of a first processing thread. The system includes an actual time generator to receive first information indicative of a first temporal interval defining a begin time and an end time of execution of the code portion to receive second information indicative of one or more second temporal intervals occurring within the first temporal interval during which the processor executed a processing thread other than the first processing thread and to generate based on the first and second information the actual amount of processor time consumed in executing the code portion. In an aspect of this embodiment the actual time generator is operative to subtract a total combined time of the one or more second temporal intervals from the first temporal interval. 
In another aspect of this embodiment the actual time generator is operative to receive third information indicative of overhead time consumed in acquiring the first information during the first temporal interval and to subtract from the first temporal interval the overhead time and a total combined time of the one or more second temporal intervals. In another aspect of this embodiment the second information includes a plurality of information elements. Each first information element specifies a particular time an old thread from which the processor switched context at the particular time and a new thread to which the processor switched context at the particular time. The actual time generator is operative to determine for each of the one or more second temporal intervals a begin time and an end time of the second temporal interval based on the information elements. In yet another aspect of this embodiment the first information includes a plurality of first information elements. Each first information element specifies a time a processing thread and a type of event. The actual time generator is operative to determine a begin time and an end time of the first temporal interval based on the plurality of first information elements. In another aspect of this embodiment the second information includes a plurality of second information elements. Each second information element specifies a particular time an old thread from which the processor switched context at the particular time and a new thread to which the processor switched context at the particular time. The actual time generator is operative to receive third information including one or more third information elements each third information element specifying time consumed in executing a type of event. 
The actual time generator is operative to determine for each of the one or more second temporal intervals a begin time and an end time of the second temporal interval based on the second information elements to determine from the first and second information elements a total overhead time consumed by the processor in acquiring the first information and to determine based on the determinations made in the acts C 1 C 3 the actual amount of time consumed in executing the code portion. Other advantages novel features and objects of the invention and aspects and embodiments thereof will become apparent from the following detailed description of the invention including aspects and embodiments thereof when considered in conjunction with the accompanying drawings which are schematic and which are not intended to be drawn to scale. In the figures each identical or nearly identical component that is illustrated in various figures is represented by a single numeral. For purposes of clarity not every component is labeled in every figure nor is every component of each embodiment or aspect of the invention shown where illustration is not necessary to allow those of ordinary skill in the art to understand the invention. The function and advantage of embodiments of the present invention will be more fully understood from the examples described below. The following examples are intended to facilitate a better understanding and illustrate the benefits of the present invention but do not exemplify the full scope of the invention. As used herein whether in the written description or the claims the terms comprising including carrying having containing involving and the like are to be understood to be open ended i.e. to mean including but not limited to. 
Only the transitional phrases "consisting of" and "consisting essentially of", respectively, shall be closed or semi-closed transitional phrases, as set forth with respect to claims in the United States Patent Office Manual of Patent Examining Procedures, Eighth Edition, Revision 2, May 2004, Section 2111.03.

The time axis shows that the timing diagram represents a period of time from 0 to 12 milliseconds (ms). It should be appreciated that such a timing diagram may be illustrated using other units of time, such as, for example, processor cycles. Graph illustrates the times that the processor was executing Thread: from 1.5–3 ms and from 6–9 ms. Graph illustrates that timer reads indicating the begin and end execution times of a code portion were acquired from 0.5–1.5 ms and from 9–10 ms. Thus, graph represents overhead time in acquiring time values for the code portion. Graph illustrates a switched-out interval during which the processor was executing one or more threads other than Thread. Based on graphs, an elapsed time interval (graph) begins at 0.5 ms and ends at 9.0 ms, producing an elapsed time of 8.5 ms. The timing diagram illustrates that, based on graphs, the active time (graph) of the code portion is (3 − 1.5) + (9 − 6) = 4.5 ms.

Embodiments of the invention for determining the active time for a code portion, illustrated graphically in diagram, will now be described in relation to . In Act, timer measurements are recorded for the code portion. For example, the thread to which the code portion belongs may include probes at one or more locations, including locations marking the beginning and the end of the code portion, and possibly locations within the code portion (e.g., beginnings and ends of other logical components). During execution of the code portion, a timer measurement may be made at each location of a probe. These measurements may be recorded on a recording medium such as, for example, a temporary buffer in local (e.g., volatile) memory or a non-volatile storage medium such as a disk. 
Examples of information units representing timer measurements are described below in relation to . Digressing briefly from is pseudocode representing an example of a thread of code. Code is merely an example of a code thread and is not intended to limit the scope of the invention. Any of numerous other implementations of the code thread for example variations of thread are possible and are intended to fall within the scope of the invention. Thread includes code portion which includes function F where F includes function G . Probes for recording the beginning and end of function F which is also the beginning and end of code portion may be placed at locations and within code portion . Further probes indicating the beginning and end of execution of the function G may be placed at locations and . Each information element may include a time value field a thread ID field and an event field . The units of time used in field and fields and of tables and respectively are processor e.g. CPU cycles. However it should be appreciated that the units of time used in these fields may be any of a plurality of other types of units such as for example milliseconds. For example information element specifies that in thread at 900 processor cycles the function F was entered. Information element may result from the execution of a probe at location of code portion described above in relation to . It should be appreciated that although processor cycles are the time units used in several examples described herein other time units may be used. As another example information element specifies that in thread at 1010 ms function S was entered. Further information elements and may result from probes located at locations and respectively of code portion . Returning to in Act context switch events may be recorded. That is the switching of a context of the processor from one thread to another may be recorded. 
One of the threads to which the context is switched or from which context is switched may be the thread including the code portion and one or more threads to which the context switches or from which the context switches may be threads other than the thread including the code portion. On some operating systems e.g. Windows XP available from Microsoft Corporation an Application Programming Interface API may be provided that enables context switch events to be captured. As used herein an application programming interface or API is a set of one or more computer readable instructions that provide access to one or more other sets of computer readable instructions that define functions so that such functions can be configured to be executed on a computer in conjunction with an application program. An API may be considered the glue between application programs and a particular computer environment or platform e.g. any of those discussed below and may enable a programmer to program applications to run on one or more particular computer platforms or in one or more particular computer environments. For example a Microsoft XP OS includes an Event Tracking for Windows ETW API that provides a feature for recording context switch events. This feature may be enabled i.e. turned on so that it records context switch events e.g. into a local memory buffer or onto disk . Thus during the parallel execution of multiple threads on the OS context switch events including context switch events during the execution of the code portion may be recorded. Examples of information elements representing context switch events will now be described in relation to . Digressing briefly from method is a block diagram illustrating an example of a table of information elements representing context switch events. Table is merely an illustrative embodiment of a table of information elements representing context switch events and is not intended to limit the scope of the invention. 
Any of numerous other implementations of such a table for example variations of table are possible and are intended to fall within the scope of the invention. For example table may include additional entries and or columns and the entries and columns may be organized in a different manner. Each information element may specify a particular time value in the field a thread ID in old thread ID field of the thread from which context was switched and the thread ID of the thread to which context was switched in new thread ID field . For example information element indicates that at time 1000 processor cycles e.g. from a predefined starting time the context was switched from thread to thread whereas information element indicates that at 1028 processor cycles context was switched from thread thread . In some embodiments instead of fields and each entry may include a single field specifying a thread ID and another field specifying a value e.g. a flag indicating whether the thread identified by the thread ID is being switched to i.e. switched in or switched from i.e. switched out . Other embodiments of entries may be used. Returning to in Act overhead measurements for particular types of events e.g. function entry function exit and other types of events may be obtained. These measurements may be obtained from previous measurements made for different event types at an earlier time for example in a controlled testing environment. Alternatively overhead measurements may be measured and recorded during the execution of the code portion being measured. Examples of information elements representing overhead measurements will now be described in relation to . Table may include information elements and where each information element has an event type field specifying an event type and an overhead time field specifying the overhead time associated with the event type. 
For example information element indicates that the function entry event type has an overhead time of 3 processor cycles and information element indicates that the function exit event type has an overhead of 2 processor cycles. Entries and assume a same amount of time consumed 3 and 2 processor cycles respectively every time a function is entered and exited respectively. However in some embodiments overhead may be measured and recorded each time a function is entered or exited and this information may be recorded in a different format than that shown in table . Returning to in Act the actual time of the code portion may be determined based on the timer measurements context switch events and overhead measurements for example as described below in relation to . Method or portions thereof may be implemented using system described below in relation to . Method may include additional acts. Further the order of the acts performed as part of method is not limited to the order illustrated in as the acts may be performed in other orders and or one or more of the acts may be performed in series or in parallel at least partially. For example one or more context switched events may be recorded as part of act in parallel or before the recording of one or more timer measurements as part of act . In Act an interval representing the elapsed time of the code portion may be determined from the timer measurements. For example given a set of timer measurements e.g. those recorded in Act and a thread identifier an interval representing the elapsed time of a code portion of the identified thread may be determined. For example using the information provided in table for which the thread having a thread ID of I represents thread illustrated in Act may include the following. The information elements of table not corresponding to thread e.g. information elements and may be removed and the remaining information elements may be sorted by time value field . 
The earliest time value and the latest time value represented by the sorted information elements are selected to define the begin time and the end time, respectively, of the elapsed time interval. For example, applying Act to the information elements of table may result in an elapsed time interval of [900, 1550]. Act may be performed by the elapsed time interval generator described below in relation to . In Act, a set of intervals representing the overhead of the timer measurements may be determined from timer measurements and overhead measurements, such as, for example, the timer measurements and overhead measurements recorded and obtained in Acts and, respectively. That is, the overhead time associated with each timer measurement corresponding to the thread in question may be determined and represented as an interval. Using the example of and, Act may include first removing information elements of table that do not correspond to thread, and the remaining information elements may be sorted by time value field. For each information element corresponding to thread, the overhead associated with the event identified in event field may be determined from table. For example, information element corresponds to the event of entering function F, which is a function entry event type. Information element of table indicates that the overhead associated with the function entry event type is three processor cycles. An overhead interval then may be defined as [time value, time value plus overhead time]. Thus, for information element, an overhead interval of [900, 903] may be produced. As another example, information element specifies an exit function G event, which has an event type of function exit. Information element of table indicates that a function exit event type has an overhead of two processor cycles. Accordingly, Act may include producing an overhead interval of [1500, 1502] from information element. 
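The mapping from timer measurements to overhead intervals described above can be sketched in Python. This is a minimal illustration, not code from the patent; the event-type names and the tuple layout of the event records are assumptions, while the overhead costs (three cycles per function entry, two per function exit) and the resulting intervals come from the worked example in the text.

```python
# Per-event-type overhead, as in the text's example: recording a function
# entry costs 3 processor cycles, recording a function exit costs 2.
OVERHEAD = {"function_entry": 3, "function_exit": 2}

def overhead_intervals(timer_events):
    """Turn each timer measurement except the last into (t, t + overhead).

    `timer_events` is a time-sorted list of (time, event_type) records for
    the thread in question. The final event is excluded because, as the
    text notes, its recording overhead falls after the measured end time.
    """
    return [(t, t + OVERHEAD[kind]) for t, kind in timer_events[:-1]]

events = [(900, "function_entry"),   # enter F (begin of code portion)
          (1200, "function_entry"),  # enter G
          (1500, "function_exit"),   # exit G
          (1550, "function_exit")]   # exit F (end of code portion)
print(overhead_intervals(events))  # → [(900, 903), (1200, 1203), (1500, 1502)]
```

This reproduces the set of overhead intervals [900, 903], [1200, 1203], and [1500, 1502] from the example.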
It should be appreciated that Act may not include determining an overhead interval for the time-measurement information element specifying the latest of the times specified by the sorted information elements (i.e., the information elements corresponding to the thread in question). This information element may be excluded from the act because the overhead time associated with this time measurement does not add any overhead to the actual time the processor consumes in executing the code portion. That is, this last time measurement indicates the end of the elapsed time, so the overhead in acquiring this end time measurement occurs after the recorded end time itself and thus does not affect the active time. Thus, performing Act on the information elements of tables and may result in a set of overhead intervals of [900, 903], [1200, 1203], and [1500, 1502].

In Act, a set of switched-out intervals may be determined from context switch events, such as, for example, the context switch events recorded in the act. The switched-out intervals represent the intervals of time, within the elapsed time, during which the processor is executing a thread other than the thread of the code portion. For example, in the timing diagram, graph indicates a switched-out interval from 3 ms to 6 ms. Using the example of table, to determine a set of switched-out intervals of thread, information elements of table that do not refer to thread in either field are removed. The remaining context-switch information elements may be sorted by time field. Each context switch event corresponding to the switching from the thread to another thread (i.e., a switch-out event) may be identified (e.g., the context switch events represented by information elements and). For each of these switch-out events, a corresponding context switch event that switches back into the thread (i.e., a switch-in event) may be identified. 
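The pairing of switch-out and switch-in events described above can be sketched as follows. This is an illustrative Python sketch, not code from the patent; the thread IDs for the other threads (2 and 3) are hypothetical, since the patent text elides them, but the times and the resulting intervals match the worked example.

```python
def switched_out_intervals(events, thread_id):
    """Pair each switch-out of `thread_id` with the next switch back in.

    `events` is a time-sorted list of (time, old_thread, new_thread)
    records, mirroring the context-switch information elements in the
    text. Each (switch-out, switch-in) pair defines one interval.
    """
    intervals = []
    out_time = None
    for time, old_thread, new_thread in events:
        if old_thread == thread_id:
            out_time = time                     # our thread was switched out
        elif new_thread == thread_id and out_time is not None:
            intervals.append((out_time, time))  # switched back in: close interval
            out_time = None
    return intervals

events = [
    (1000, 1, 2),   # thread 1 switched out (other thread IDs hypothetical)
    (1190, 2, 1),   # thread 1 switched back in
    (1225, 1, 3),
    (1411, 3, 1),
]
print(switched_out_intervals(events, thread_id=1))  # → [(1000, 1190), (1225, 1411)]
```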
For the switch-out events represented by the identified information elements, the corresponding switch-in events may be identified. Each switch-out event / switch-in event pair may define a switched-out interval. Thus, performing this Act on the context switch events represented in the table may result in a set of switched-out intervals of [1000, 1190], [1225, 1411].

In the final Act, the actual amount of time consumed by the processor in executing the code portion may be determined from the elapsed time, the set of overhead intervals, and the set of switched-out intervals. For example, the cumulative time defined by all of the overhead intervals and all of the switched-out intervals may be subtracted from the elapsed time to produce the active time of the code portion. That is, the active time equals the elapsed time minus (overhead intervals plus switched-out intervals). Using the example results from the Acts described above, the active time of the code portion of the thread may be computed: applying the method to the information provided results in a value of 266 processor cycles as the actual time the processor consumes in executing the code portion. The method may include additional acts. Further, the order of the acts performed as part of the method is not limited to the order illustrated, as the acts may be performed in other orders and/or one or more of the acts may be performed in series or in parallel, at least partially. For example, any of the Acts may be performed before, after, or in parallel to one another.

Methods and acts thereof, and various embodiments and variations of these methods and these acts, individually or in combination, may be defined by computer readable signals tangibly embodied on one or more computer readable media, such as, for example, non-volatile recording media, integrated circuit memory elements, or a combination thereof. Computer readable media can be any available media that can be accessed by a computer.
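The final subtraction can be checked numerically. Using the interval sets from the worked example, a short script reproduces the 266-cycle result stated above (the helper name `total_length` is ours, not the patent's):

```python
def total_length(intervals):
    """Sum the lengths of a list of (start, end) intervals."""
    return sum(end - start for start, end in intervals)

elapsed = (900, 1550)                                # elapsed time interval
overhead = [(900, 903), (1200, 1203), (1500, 1502)]  # overhead intervals
switched_out = [(1000, 1190), (1225, 1411)]          # switched-out intervals

# active time = elapsed time - (overhead intervals + switched-out intervals)
active = (elapsed[1] - elapsed[0]) - (total_length(overhead) + total_length(switched_out))
print(active)  # 266 processor cycles, matching the text
```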
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, other types of volatile and non-volatile memory, any other medium which can be used to store the desired information and which can be accessed by a computer, and any suitable combination of the foregoing. Communication media typically embodies computer readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct wired connection, wireless media such as acoustic, RF, infrared and other wireless media, other types of communication media, and any suitable combination of the foregoing. Computer readable signals embodied on one or more computer readable media may define instructions, for example, as part of one or more programs, that, as a result of being executed by a computer, instruct the computer to perform one or more of the functions described herein (e.g., the methods described above or any acts of the foregoing), and/or various embodiments, variations and combinations thereof.
Such instructions may be written in any of a plurality of programming languages, for example, Java, J# (J Sharp), Visual Basic, C, C++ or C# (C Sharp), Fortran, Pascal, Eiffel, Basic, COBOL, other programming languages, or any of a variety of combinations thereof. The computer readable media on which such instructions are embodied may reside on one or more of the components of any of the systems described herein, may be distributed across one or more of such components, and may be in transition therebetween. The computer readable media may be transportable such that the instructions stored thereon can be loaded onto any computer system resource to implement the aspects of the present invention discussed herein. In addition, it should be appreciated that the instructions stored on the computer readable medium described above are not limited to instructions embodied as part of an application program running on a host computer. Rather, the instructions may be embodied as any type of computer code (e.g., software or microcode) that can be employed to program a processor to implement aspects of the present invention discussed above. It should be appreciated that any single component or collection of multiple components of a computer system, for example the computer system described below, that perform the functions described herein can be generically considered as one or more controllers that control such functions. The one or more controllers can be implemented in numerous ways, such as with dedicated hardware and/or firmware, using a processor that is programmed using microcode or software to perform the functions recited above, or any suitable combination of the foregoing. The system may include any of a processor, a recording medium, an active time generator, and other components. The processor may be capable of multi-thread execution and may be configured to receive one or more threads in parallel. Each of the threads may include one or more probes.
Thread may include a code portion which may include one or more of probes for which an active time of execution may be determined. The processor may be configured to execute probes and and in response record time measurements for example as described above in relation to Act of . Further processor may be configured to generate context switch events representing a switch of context between a plurality of threads including any of threads and . Processor may record context switch events in response to processing one or more API calls for example as described above in relation to act in . For example processor may be controlled by a Windows NT operating system which records context switch events by using the ETW API as described above. The recording medium also may have overhead measurements recorded thereon. Active time generator may generate the active time of a given thread based on timer measurements context switch events and overhead measurements . Active generator may include any of elapsed time interval generator switched out intervals generator overhead intervals generator active time engine and other components. Elapsed time interval generator may be configured to receive timer measurement and a thread ID and generate elapsed time intervals for example as described above in relation to act . Switched out intervals generator may be configured to receive thread ID and context switch events and generate switched out intervals for example as described above in relation to act of . Overhead intervals generator may be configured to receive thread ID timer measurements and overhead measurements and generate overhead intervals for example as described above in relation to act . Active time engine may be configured to receive elapsed time intervals switched out intervals and overhead intervals and generate active time representing the actual time the processor consumed in executing code portion . 
Active time engine may be configured to generate active time as described above in relation to act . System and components thereof may be implemented using any of a variety of technologies including software e.g. C C C Java or a combination thereof hardware e.g. one or more application specific integrated circuits firmware e.g. electrically programmed memory or any combination thereof. One or more of the components of system may reside on a single device e.g. a computer or one or more components may reside on separate discrete devices. Further each component may be distributed across multiple devices and one or more of the devices may be interconnected. Further on each of the one or more devices that include one or more components of system each of the components may reside in one or more locations on the system. For example different portions of the components of these systems may reside in different areas of memory e.g. RAM ROM disk etc. on the device. Each of such one or more devices may include among other components a plurality of known components such as one or more processors a memory system a disk storage system one or more network interfaces and one or more busses or other internal communication links interconnecting the various components. System and components thereof may be implemented using a computer system such as that described below in relation to . Various embodiments according to the invention may be implemented on one or more computer systems. These computer systems may be for example general purpose computers such as those based on Intel PENTIUM type processor Motorola PowerPC Sun UltraSPARC Hewlett Packard PA RISC processors any of a variety of processors available from Advanced Micro Devices AMD or any other type of processor. It should be appreciated that one or more of any type of computer system may be used to implement various embodiments of the invention. 
A general purpose computer system according to one embodiment of the invention is configured to perform one or more of the functions described above. It should be appreciated that the system may perform other functions and the invention is not limited to having any particular function or set of functions. For example various aspects of the invention may be implemented as specialized software executing in a general purpose computer system such as that shown in . The computer system may include a processor connected to one or more memory devices such as a disk drive memory or other device for storing data. Memory is typically used for storing programs and data during operation of the computer system . Components of computer system may be coupled by an interconnection mechanism which may include one or more busses e.g. between components that are integrated within a same machine and or a network e.g. between components that reside on separate discrete machines . The interconnection mechanism enables communications e.g. data instructions to be exchanged between system components of system . Computer system also includes one or more input devices for example a keyboard mouse trackball microphone touch screen and one or more output devices for example a printing device display screen speaker. In addition computer system may contain one or more interfaces not shown that connect computer system to a communication network in addition or as an alternative to the interconnection mechanism . The storage system shown in greater detail in typically includes a computer readable and writeable nonvolatile recording medium in which signals are stored that define a program to be executed by the processor or information stored on or in the medium to be processed by the program. The medium may for example be a disk or flash memory. 
Typically, in operation, the processor causes data to be read from the nonvolatile recording medium into another memory that allows for faster access to the information by the processor than does the medium. This memory is typically a volatile, random access memory such as a dynamic random access memory (DRAM) or static memory (SRAM). It may be located in the storage system, as shown, or in the memory system (not shown). The processor generally manipulates the data within the integrated circuit memory and then copies the data to the medium after processing is completed. A variety of mechanisms are known for managing data movement between the medium and the integrated circuit memory element, and the invention is not limited thereto. The invention is not limited to a particular memory system or storage system. Aspects of the invention may be implemented in software, hardware or firmware, or any combination thereof. Further, such methods, acts, systems, system elements and components thereof may be implemented as part of the computer system described above or as an independent component. Although the computer system is shown by way of example as one type of computer system upon which various aspects of the invention may be practiced, it should be appreciated that aspects of the invention are not limited to being implemented on the computer system as shown. Various aspects of the invention may be practiced on one or more computers having a different architecture or components than that shown. The computer system may be a general purpose computer system that is programmable using a high level computer programming language. The computer system also may be implemented using specially programmed, special purpose hardware. In the computer system, the processor is typically a commercially available processor such as the well known Pentium class processor available from the Intel Corporation. Many other processors are available.
Such a processor usually executes an operating system, which may be, for example, the Windows 95, Windows 98, Windows NT, Windows 2000, Windows ME or Windows XP operating systems available from Microsoft Corporation, MAC OS System X available from Apple Computer, the Solaris Operating System available from Sun Microsystems, Linux available from various sources, or UNIX available from various sources. Any of a variety of other operating systems may be used. The processor and operating system together define a computer platform for which application programs in high level programming languages are written. It should be understood that the invention is not limited to a particular computer system platform, processor, operating system, or network. Also, it should be apparent to those skilled in the art that the present invention is not limited to a specific programming language or computer system, and that other appropriate programming languages and other appropriate computer systems could also be used. One or more portions of the computer system may be distributed across one or more computer systems (not shown) coupled to a communications network. These computer systems also may be general purpose computer systems. For example, various aspects of the invention may be distributed among one or more computer systems configured to provide a service (e.g., servers) to one or more client computers, or to perform an overall task as part of a distributed system. For example, various aspects of the invention may be performed on a client-server system that includes components distributed among one or more server systems that perform various functions according to various embodiments of the invention. These components may be executable, intermediate (e.g., IL) or interpreted (e.g., Java) code which communicate over a communication network (e.g., the Internet) using a communication protocol (e.g., TCP/IP).
It should be appreciated that the invention is not limited to executing on any particular system or group of systems, and that the invention is not limited to any particular distributed architecture, network, or communication protocol. Various embodiments of the present invention may be programmed using an object oriented programming language such as SmallTalk, Java, J# (J Sharp), C++, Ada, or C# (C Sharp). Other object oriented programming languages may also be used. Alternatively, functional, scripting, and/or logical programming languages may be used. Various aspects of the invention may be implemented in a non-programmed environment (e.g., documents created in HTML, XML or other format that, when viewed in a window of a browser program, render aspects of a graphical user interface (GUI) or perform other functions). Various aspects of the invention may be implemented as programmed or non-programmed elements, or any suitable combination thereof. Further, various embodiments of the invention may be implemented using Microsoft .NET technology available from Microsoft Corporation. Having now described some illustrative embodiments of the invention, it should be apparent to those skilled in the art that the foregoing is merely illustrative and not limiting, having been presented by way of example only. Numerous modifications and other illustrative embodiments are within the scope of one of ordinary skill in the art and are contemplated as falling within the scope of the invention. In particular, although many of the examples presented herein involve specific combinations of method acts or system elements, it should be understood that those acts and those elements may be combined in other ways to accomplish the same objectives. Acts, elements and features discussed only in connection with one embodiment are not intended to be excluded from a similar role in other embodiments.
Further for the one or more means plus function limitations recited in the following claims the means are not intended to be limited to the means disclosed herein for performing the recited function but are intended to cover in scope any equivalent means known now or later developed for performing the recited function. Use of ordinal terms such as first second third etc. in the claims to modify a claim element does not by itself connote any priority precedence or order of one claim element over another or the temporal order in which acts of a method are performed but are used merely as labels to distinguish one claim element having a certain name from another element having a same name but for use of the ordinal term to distinguish the claim elements.
# MIUI Updates Tracker V2.2 [![Subscribe](https://img.shields.io/badge/Telegram-Subscribe-blue.svg)](https://t.me/MIUIUpdatesTracker) [![Discord](https://img.shields.io/discord/221706949786468353.svg?style=flat-square)](https://discord.gg/xiaomi) [![Open Source Love](https://badges.frapsoft.com/os/v1/open-source.png?v=103)](https://github.com/ellerbrock/open-source-badges/) [![made-with-python](https://img.shields.io/badge/Made%20with-Python-1f425f.svg)](https://www.python.org/) [![Patreon](https://img.shields.io/badge/Patreon-Donate-red.svg)](https://www.patreon.com/XiaomiFirmwareUpdater) [![PayPal](https://img.shields.io/badge/PayPal-Donate-blue.svg)](https://www.paypal.me/yshalsager)

A script that automatically tracks Xiaomi MIUI ROM releases and sends messages to Telegram and Discord channels to notify users!

- It supports all devices.
- Both Global and China ROMs are supported.
- The script runs automatically every 6 hours!

## Public YAML and RSS

Starting from V2.2, the script pushes YAML and XML files for each device, in addition to the all-in-one file that was already available.

_These files can be used for whatever project; just give the proper credits to the @XiaomiFirmwareUpdater project._

Files are separated by branch: Stable Recovery, Stable Fastboot, Weekly Recovery, and Weekly Fastboot.

RSS files are available [here](https://github.com/XiaomiFirmwareUpdater/miui-updates-tracker/tree/master/rss). Just open any XML file in raw view, copy the link, add it to your RSS feed, and enjoy :D

## Xiaomi Community

We're part of Xiaomi Community's Discord server! Join us [here](https://discord.gg/xiaomi).

#### This script is a part of the [Xiaomi Firmware Updater](https://github.com/XiaomiFirmwareUpdater) project.
#### Developer: [yshalsager](https://github.com/yshalsager)
#### XiaomiFirmwareUpdater Project is Supported By:

[![Packet](https://raw.githubusercontent.com/XiaomiFirmwareUpdater/xiaomifirmwareupdater.github.io/master/images/Packet_logo_sm.png)](https://www.packet.net)
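The per-device RSS feeds mentioned above can be consumed with just the Python standard library. The feed layout below is an assumption (standard RSS 2.0 `<item>` elements with a sample device and URL we made up); the real files live under the repository's `rss/` directory.

```python
import xml.etree.ElementTree as ET

# Hypothetical feed content; the actual schema of the tracker's RSS files
# may differ, so treat this as a standard-RSS-2.0 sketch.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>MIUI Updates Tracker</title>
  <item>
    <title>whyred: V11.0.2.0.PEIMIXM (Stable Recovery)</title>
    <link>https://example.com/whyred</link>
  </item>
</channel></rss>"""

def latest_updates(feed_xml):
    """Extract (title, link) pairs from every <item> in an RSS feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(latest_updates(SAMPLE_FEED))
```

For a real feed you would fetch the raw XML from GitHub first and pass the response body to `latest_updates`.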
# Single-header-file C libraries

This repository contains some of the small libraries I've created during the last few years. More are added as they prove themselves and I have time to extract them and clean them up. All libraries are published under the MIT license.

library | latest version | category | description
------------------------- | --------------- | --------- | --------------------------------
**math_3d.h** | 1.0 | graphics | compact 3D math library for use with OpenGL
**slim_gl.h** | 1.0 | graphics | compact OpenGL shorthand functions and printf() style drawcalls
**iir_gauss_blur.h** | 1.0 | graphics | gauss filter where the performance is independent from the blur strength
**sdt_dead_reckoning.h** | 1.0 | graphics | function to create a signed distance field with the Dead Reckoning algorithm
**slim_hash.h** | 1.1 | container | simple and easy to use hashmap for C99
**slim_test.h** | 1.0 | testing | small set of functions to build simple test programs
<p align="center">
    <img src="https://github.com/acemod/ACE3/raw/master/extras/assets/logo/black/ACE3-Logo.jpg" width="480">
</p>

<p align="center">
    <a href="https://github.com/acemod/ACE3/releases">
        <img src="https://img.shields.io/badge/Version-3.14.0-blue.svg?style=flat-square" alt="ACE3 Version">
    </a>
    <a href="https://github.com/acemod/ACE3/issues">
        <img src="https://img.shields.io/github/issues-raw/acemod/ACE3.svg?style=flat-square&label=Issues" alt="ACE3 Issues">
    </a>
    <a href="https://forums.bistudio.com/topic/181341-ace3-a-collaborative-merger-between-agm-cse-and-ace/?p=2859670">
        <img src="https://img.shields.io/badge/BIF-Thread-lightgrey.svg?style=flat-square" alt="BIF Thread">
    </a>
    <a href="https://github.com/acemod/ACE3/blob/master/LICENSE">
        <img src="https://img.shields.io/badge/License-GPLv2-red.svg?style=flat-square" alt="ACE3 License">
    </a>
    <a href="https://slackin.ace3mod.com/">
        <img src="https://slackin.ace3mod.com/badge.svg?style=flat-square&label=Slack" alt="ACE3 Slack">
    </a>
    <a href="https://circleci.com/gh/acemod/ACE3">
        <img src="https://circleci.com/gh/acemod/ACE3.svg?style=svg" alt="ACE3 Build Status">
    </a>
</p>

<p align="center">
    <sup><strong>Requires the latest version of <a href="https://github.com/CBATeam/CBA_A3/releases">CBA A3</a>.<br/>
    Visit us on <a href="https://twitter.com/ACE3Mod">Twitter</a> | <a href="https://www.facebook.com/ACE3Mod">Facebook</a> | <a href="https://www.youtube.com/c/ACE3Mod">YouTube</a> | <a href="http://www.reddit.com/r/arma/search?q=ACE&restrict_sr=on&sort=new&t=all">Reddit</a></strong></sup>
</p>

**ACE3** is a joint project of the modding teams behind **ACE2**, **AGM** and **CSE**, with the goal of increasing the realism and depth of Arma 3.

Since the mod is designed entirely as an **open-source** project, everyone is free to propose changes or to maintain their own modified version, as long as it is likewise made publicly available and complies with the GNU General Public License (see the license file in this project for further information).

The mod is **built modularly**. Almost every PBO can be removed, so that every community can maintain its own version of the mod. This makes it possible, for example, to exclude some functionality if certain features are not wanted, or if it conflicts with another mod. Many settings can also be adjusted by the mission maker (among them the medical system), so that an individual experience can be guaranteed.

### Key features

- Completely new 3D interaction system
- Optimized for performance and stability
- Strong focus on modularity and customization
- New, flexible player and server settings
- Improved medical system with different levels (basic/advanced)
- Proper and continuous weather synchronization
- Ballistics based on many factors, including weather and wind
- Captivity system
- Explosives mechanics with different triggers
- Map improvements: placing markers / map tools
- Advanced missile guidance system

#### Additional mechanics

- Carrying and dragging
- Weapons and vehicles carry the names of their real-world counterparts
- A fire control system (FCS) for helicopters and tanks
- Many functions are computed in C/C++ extensions
- Backblast and overpressure simulation
- Disposable launchers
- Realistic G-forces
- Vehicle locking
- Realistic night and thermal vision
- Magazine repacking
- Realistic weapon heating and overheating
- Temporary deafness from overly loud noises
- Improved interactions for MG2s and ammo bearers
- Adjustable scopes
- No idle animations with a lowered weapon
- Jumping over obstacles, climbing over walls, cutting fences
- No "talking player" character
- Vector IV, MicroDAGR and Kestrel

<br>

***and much, much more...***

#### Guides

You have ACE3 installed but have no idea what works how, and where to find things?

- [Getting started](https://ace3mod.com/wiki/user/getting-started.html).

#### Contributing

If you want to help with the development of ACE3, you can do so by keeping an eye out for bugs or by suggesting new features. To contribute something, fork this repository and create your pull requests, which will be reviewed by other developers and contributors. Please add yourself to [`AUTHORS.txt`](https://github.com/acemod/ACE3/blob/master/AUTHORS.txt) with your username and a valid email address.

To report a bug, a suggestion, or a feature request, use our [issue tracker](https://github.com/acemod/ACE3/issues). See also:

- [How to report an issue](https://ace3mod.com/wiki/user/how-to-report-an-issue.html)
- [How to make a feature request](https://ace3mod.com/wiki/user/how-to-make-a-feature-request.html)

#### Testing & building the mod

If you want to experience the latest developments and help us find existing bugs, download the master branch. Either use [Git](https://help.github.com/articles/fetching-a-remote/), if you are familiar with it, or download it directly via [this link](https://github.com/acemod/ACE3/archive/master.zip).

To set up your own development environment and build a test version of ACE3, follow [this guide](https://github.com/acemod/ACE3/blob/master/documentation/development/setting-up-the-development-environment.md).
# Enigma

Personal project
---
title: Migrate Azure Monitor logs update deployments to Azure portal
description: This article tells how to migrate Azure Monitor logs update deployments to Azure portal.
services: automation
ms.subservice: update-management
ms.date: 07/16/2018
ms.topic: conceptual
---

# Migrate Azure Monitor logs update deployments to Azure portal

The Operations Management Suite (OMS) portal is being [deprecated](../azure-monitor/logs/oms-portal-transition.md). All functionality that was available in the OMS portal for Update Management is available in the Azure portal, through Azure Monitor logs. This article provides the information you need to migrate to the Azure portal.

## Key information

* Existing deployments will continue to work. Once you have recreated the deployment in Azure, you can delete your old deployment.
* All existing features that you had in OMS are available in Azure. To learn more about Update Management, see [Update Management overview](./update-management/overview.md).

## Access the Azure portal

1. From your workspace, click **Open in Azure**.

   ![Open in Azure - Log Analytics](media/migrate-oms-update-deployments/link-to-azure-portal.png)

2. In the Azure portal, click **Automation Account**.

   ![Azure Monitor logs](media/migrate-oms-update-deployments/log-analytics.png)

3. In your Automation account, click **Update Management**.

   :::image type="content" source="media/migrate-oms-update-deployments/azure-automation.png" alt-text="Screenshot of the Update management page.":::

4. In the Azure portal, select **Automation Accounts** under **All services**.

5. Under **Management Tools**, select the appropriate Automation account, and click **Update Management**.

## Recreate existing deployments

All update deployments created in the OMS portal have a [saved search](../azure-monitor/logs/computer-groups.md), also known as a computer group, with the same name as the update deployment. The saved search contains the list of machines that were scheduled in the update deployment.

:::image type="content" source="media/migrate-oms-update-deployments/oms-deployment.png" alt-text="Screenshot of the Update Deployments page with the Name and Servers fields highlighted.":::

To use this existing saved search, follow these steps:

1. To create a new update deployment, go to the Azure portal, select the Automation account that is used, and click **Update Management**. Click **Schedule update deployment**.

   ![Schedule update deployment](media/migrate-oms-update-deployments/schedule-update-deployment.png)

2. The New Update Deployment pane opens. Enter values for the properties described in the following table and then click **Create**.

3. For **Machines to update**, select the saved search used by the OMS deployment.

   | Property | Description |
   | --- | --- |
   | Name | Unique name to identify the update deployment. |
   | Operating System | Select **Linux** or **Windows**. |
   | Machines to update | Select a Saved search, Imported group, or pick **Machines** from the dropdown and select individual machines. If you choose **Machines**, the readiness of the machine is shown in the **UPDATE AGENT READINESS** column.</br> To learn about the different methods of creating computer groups in Azure Monitor logs, see [Computer groups in Azure Monitor logs](../azure-monitor/logs/computer-groups.md) |
   | Update classifications | Select all the update classifications that you need. CentOS does not support this out of the box. |
   | Updates to exclude | Enter the updates to exclude. For Windows, enter the KB article without the **KB** prefix. For Linux, enter the package name or use a wildcard character. |
   | Schedule settings | Select the time to start, and then select either **Once** or **Recurring** for the recurrence. |
   | Maintenance window | Number of minutes set for updates. The value can't be less than 30 minutes or more than 6 hours. |
   | Reboot control | Determines how reboots should be handled.</br>Available options are:</br>Reboot if required (Default)</br>Always reboot</br>Never reboot</br>Only reboot - will not install updates |

4. Click **Scheduled update deployments** to view the status of the newly created update deployment.

   ![new update deployment](media/migrate-oms-update-deployments/new-update-deployment.png)

5. As mentioned previously, once your new deployments are configured through the Azure portal, you can remove the existing deployments from the Azure portal.

## Next steps

To learn more about Update Management in Azure Automation, see [Update Management overview](./update-management/overview.md).
---
layout: default
title: "Bitcoin v0.1 released"
grand_parent: Emails
parent: Cryptography Mailing List
nav_order: 18
date: 2009-01-25 15:47:10 UTC
---

# Bitcoin v0.1 released

---

```
Hal Finney wrote:
> > * Spammer botnets could burn through pay-per-send email filters
> > trivially
> If POW tokens do become useful, and especially if they become money,
> machines will no longer sit idle. Users will expect their computers to
> be earning them money (assuming the reward is greater than the cost to
> operate). A computer whose earnings are being stolen by a botnet will
> be more noticeable to its owner than is the case today, hence we might
> expect that in that world, users will work harder to maintain their
> computers and clean them of botnet infestations.

Another factor that would mitigate spam if POW tokens have value: there
would be a profit motive for people to set up massive quantities of fake
e-mail accounts to harvest POW tokens from spam. They'd essentially be
reverse-spamming the spammers with automated mailboxes that collect their
POW and don't read the message. The ratio of fake mailboxes to real
people could become too high for spam to be cost effective.

The process has the potential to establish the POW token's value in the
first place, since spammers that don't have a botnet could buy tokens
from harvesters. While the buying back would temporarily let more spam
through, it would only hasten the self-defeating cycle leading to too
many harvesters exploiting the spammers.

Interestingly, one of the e-gold systems already has a form of spam
called "dusting". Spammers send a tiny amount of gold dust in order to
put a spam message in the transaction's comment field. If the system let
users configure the minimum payment they're willing to receive, or at
least the minimum that can have a message with it, users could set how
much they're willing to get paid to receive spam.

Satoshi Nakamoto

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com
```
---
title: Kong Cloud
toc: false
---

## Kong Cloud Proxy TLS

Kong Cloud enforces HTTPS for all of the services that it operates, including the Admin API, Kong Manager, and the Kong Developer Portal. Kong Cloud does not enforce HTTPS for traffic destined for the customer's proxy, since certificate management and associated domains are under the customer's control. To enforce HTTPS on upstream traffic, use the [certificate](https://docs.konghq.com/enterprise/{{page.kong_version}}/admin-api/#certificate-object) and [SNI](https://docs.konghq.com/enterprise/{{page.kong_version}}/admin-api/#sni-object) objects through the Admin API.

For non-proxy traffic, Kong Cloud is the terminus for the request, and Kong controls the protocol and shape of traffic carefully. Additionally, Kong generally considers non-proxy traffic to be sensitive (e.g., Admin API requests, login credentials to Kong Manager). Kong Cloud accomplishes enforcement through HTTP → HTTPS redirects and the use of the HSTS response header. Because Kong Cloud controls the domain hosting the endpoints for these services, e.g., `https://manager-client.kong-cloud.com`, Kong Cloud maintains the TLS certificate for this service since Kong owns the domain.

For dedicated production Kong Cloud clusters (e.g., clusters set up for paying customers), proxy traffic is funneled through a network load balancer to Kong nodes, which then pass the request to the customer's upstream. Kong Cloud does not enforce any protocol or application layer behaviors here, because this traffic is specific to the customer's upstream APIs. Customers generally use their own domain name for a production API (e.g., api.example.com), so Kong does not provide an out-of-the-box TLS certificate for production proxy traffic: Kong would have no way to enforce TLS traffic and no manageable certificate/key to provide. Thus, securing production proxy traffic via TLS (e.g., setting up TLS certificates through Kong's Admin API or Kong Manager) is the customer's responsibility.
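As a sketch of what certificate setup looks like in practice, the snippet below builds the two Admin API calls that upload a certificate and bind it to a hostname (the `/certificates` and `/snis` endpoints come from the Kong Admin API; the Admin API address and certificate material are placeholders):

```python
import json

ADMIN_API = "https://localhost:8001"  # placeholder Admin API address


def tls_setup_calls(cert_pem: str, key_pem: str, hostname: str):
    """Build the (method, url, json_body) calls that enable TLS termination
    for one hostname: create a certificate, then an SNI pointing at it."""
    return [
        # 1) Upload the certificate/key pair.
        ("POST", ADMIN_API + "/certificates",
         json.dumps({"cert": cert_pem, "key": key_pem})),
        # 2) Map the hostname to the certificate id returned by step 1.
        ("POST", ADMIN_API + "/snis",
         json.dumps({"name": hostname,
                     "certificate": {"id": "<certificate-id-from-step-1>"}})),
    ]
```

Any HTTP client can then replay these calls against the cluster's Admin API; the second body cannot be completed until the first call returns the new certificate's id.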
---
title: "SSIS Package Format | Microsoft Docs"
ms.custom: ""
ms.date: "06/13/2017"
ms.prod: "sql-server-2014"
ms.reviewer: ""
ms.technology:
  - "integration-services"
ms.topic: conceptual
ms.assetid: cfe0e5dc-5be3-4222-b721-fe83665edd94
author: janinezhang
ms.author: janinez
manager: craigg
---
# SSIS Package Format

In the current release of [!INCLUDE[ssISnoversion](../includes/ssisnoversion-md.md)], significant changes were made to the package format (.dtsx file) to make the format easier to read and to make packages easier to compare. You can also more reliably merge packages that don't contain conflicting changes or changes stored in binary format.

To view the current DTSX package file format, see [\[MS-DTSX\]: Data Transformation Services Package XML File Format Specification](https://go.microsoft.com/fwlink/?LinkId=233251).

The following list outlines the file format changes. To view code examples of these changes, see [Package Format Changes in SQL Server 2012](https://go.microsoft.com/fwlink/?LinkId=233255).

- Formatting conventions have been applied to make it easier to read and understand the .dtsx file.

- The format is more concise. Separate elements for each property have been persisted as attributes, with the exception of the PackageFormatVersion. Attributes are listed alphabetically, and properties that have default values are no longer persisted. Finally, elements that can appear multiple times are now contained within a parent element.

- Most objects within a package that can be referred to by other objects now have a `refId` attribute defined in the package XML. Instead of persisting lineage IDs, the `refId` is now persisted. Lineage IDs are still used within the runtime and regenerated when the package is loaded. The `refId` value is a unique string that is readable and understandable, compared to GUIDs or integer values. The string is similar to path values used for package configurations in previous releases of [!INCLUDE[ssISnoversion](../includes/ssisnoversion-md.md)]. If you are merging changes between two versions of a package, the `refId` can be used in find/replace operations to ensure that all references to that object have been correctly updated.

- The layout information is contained in a CData section.

- Annotations are persisted in cleartext. This makes it easier to extract the information for automated generation of documentation.
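Because `refId` values are stable, readable strings, a merge or audit helper can collect every reference in one pass. The sketch below uses a deliberately simplified XML fragment; the element names and the omission of the DTS namespace are illustrative assumptions, not the full DTSX schema:

```python
import xml.etree.ElementTree as ET

# Illustrative fragment only -- a real .dtsx file uses the DTS namespace
# and many more element types.
SAMPLE = """<Executables>
  <Executable refId="Package\\Data Flow Task" Name="Data Flow Task"/>
  <Executable refId="Package\\Execute SQL Task" Name="Execute SQL Task"/>
</Executables>"""


def list_ref_ids(xml_text: str):
    """Collect every refId attribute in document order."""
    root = ET.fromstring(xml_text)
    return [el.get("refId") for el in root.iter() if el.get("refId") is not None]


print(list_ref_ids(SAMPLE))
```

A find/replace over these collected strings is how a merge tool could verify that a renamed task's references were all updated consistently.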
---
lab:
    title: 'Lab 1.2: Work with customer engagement apps'
    module: 'Module 1: Learn the Fundamentals of Dynamics 365 Marketing'
---

Module 1: Learn the Fundamentals of Dynamics 365 Marketing
========================

## Practice Lab 1.2 - Work with customer engagement apps

## Objectives

In this exercise, you will see that the various Dynamics 365 applications are very easy to access directly from the home screen. Once an app is open, you can easily switch between applications as needed. Users access and use the core features of the Dynamics 365 applications as they carry out their day-to-day work.

## Lab Setup

- **Estimated Time**: 15 minutes

## Instructions

In this exercise, you will use the Dynamics 365 home screen to learn how to access the different Dynamics 365 customer engagement applications.

1. In a web browser, navigate to https://www.office.com/apps.

2. Select the Business Apps tab.

3. Enter the text Sales Hub in the search box, and select the Sales Hub app to open it.

4. Within an app, several areas are available depending on the task you want to perform. For example, various administrative settings are available in the Sales Hub application.

5. Select **Sales** in the lower-left corner of the navigation pane and change to the **App Settings** area. Explore the application settings.

6. Select the area selector again and change **App Settings** back to **Sales**.

7. If you want to switch from the Sales app to another app, such as the Customer Service Hub, select the **Sales Hub** text next to the Dynamics 365 text in the upper-left corner of the screen.

8. The apps screen is displayed. From this screen, selecting **Customer Service Hub** takes you to the Customer Service Hub application.

9. To switch back to the **Sales Hub** app, select the **Customer Service Hub** text at the top of the screen.
# Lazy Game Challenge

[Link to the challenge](https://ctflearn.com/challenge/691)

This one was tricky. I had to look at the comments.

## Step 1

I connected to the server using netcat: `nc thekidofarcrania.com 10001`.

When you connect, the rules are displayed:

- You'll be given $500
- Place a bet
- Guess the number the computer thinks of!
- The computer's number changes every time!
- You have to guess a number between 1-10
- You have only 10 tries!
- If you guess a number > 10, it still counts as a try!
- If you guess within the number of tries, you win money!

So what can we deduce from those rules? We'll win our bet if we get the right number. We only have 10 tries. A number above 10 also counts as a try, and you have to guess within the 10 tries. So I tried to break the rules.

## Step 2

I placed a simple bet of $500 to see what happened and tried different numbers (-15, -5, 5, 15). Negative numbers still count as a try, and so do numbers above 15. When trying `5`, I actually won the jackpot and saw how the winning part worked: my balance went from $500 to $1500, so my previous balance ($500) + 2 * my bet ($500).

Let's try the losing part now. I placed, once again, a bet of $500 and spammed numbers above 15 until I was out of tries. My balance went from $500 to $0, so my previous balance ($500) - my bet ($500).

I had the idea to try not a number but a string, for instance, to do some injection, but it didn't work; the input was sanitized, and if it wasn't a number, it would say you tried the number 0.

So, if I can't do anything with the guessing-number input, what other input could I try to win this CTF?

## Step 3

When you connect to the server, it asks you if you are ready; a Y/N answer is expected. Once again, after trying to inject some code, I couldn't do anything (maybe I just don't know enough and there's an exploit to be found!).

The last possible input is the bet amount. I tried to inject code there too, but once again, it didn't work. The last idea I had in mind was a negative value for the betting amount. So I placed -$500 and spammed negative guessing values. After losing, my balance was $1000. So the server calculates the new balance for a lost game as `new balance = previous balance - bet`, but `- - = +`.

Now I needed to see if I could bet more than I owned. Turns out it's possible. You can't bet more than you own, but `-$100 < $500`, so you can bet -$10000 without any problem.

## Step 4

I placed -$1000000000000 and spammed negative guessing values again. It worked, and in the end, I had the flag!

## Solution

The flag is `CTFlearn{d9029a08c55b936cbc9a30_i_wish_real_betting_games_were_like_this!}`
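The flawed balance update can be sketched as follows. This is a minimal reconstruction from the observed behavior; the server's actual code is unknown:

```python
def settle(balance: int, bet: int, won: bool) -> int:
    # Flawed server logic: the bet is never checked for being positive
    # or for being within the player's balance.
    if won:
        return balance + 2 * bet
    return balance - bet  # with a negative bet, losing *adds* money

# Losing with a -$500 bet turns $500 into $1000, as observed.
```

Because `balance - bet` becomes `balance + |bet|` for a negative bet, deliberately losing with a huge negative bet inflates the balance without bound.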
# WindowsProgramme

Learning Windows programming
---
title: ACCLOUD
excerpt: ACCLOUD (ACcelerated CLOUD), A Novel, FPGA Accelerated Cloud Architecture
date: 2018-04-15
tags:
  - en
header:
  overlay_image: /assets/images/project/accloud-header.png
  overlay_filter: 0.5
  teaser: /assets/images/project/accloud-promo.png
aytype: funded
---

The ACCLOUD (ACcelerated CLOUD) project, "A Novel, FPGA Accelerated Cloud Architecture", is a joint project of [METU](https://www.metu.edu.tr/) and [ASELSAN](http://www.aselsan.com/), funded through [TUBITAK](https://www.tubitak.gov.tr/en). I am one of the two project managers. The project started in April 2018.

The official project website: [http://accloud.eee.metu.edu.tr](http://accloud.eee.metu.edu.tr)
# webpack-environment-vars

Shows how to use env variables supplied via the CLI and used in webpack.config.js
---
published: true
title: ASUS to launch high performance ROG G11 Gaming desktop
layout: post
---

![ASUS to launch high performance ROG G11 Gaming desktop](https://c2.staticflickr.com/2/1448/25971236770_4c4781e2e5_z.jpg)

ASUS today announced the high-performance gaming desktop ROG G11, aimed at gamers, with a sixth-generation Intel Core i7 processor offering 23% higher performance and 22% better energy efficiency than the previous generation. With a video card up to the NVIDIA GeForce GTX 980, it can play HD games on up to three monitors and, by adding a fourth monitor over HDMI, deliver true 4K/UHD game visuals.

It also features an M.2 PCIe 3.0 x4 solid-state drive (SSD) that provides data access speeds of up to 2 Gbit/s; DDR4 memory with transfer speeds of up to 2.1 GT/s, twice the speed of DDR3; and a 10 Gbit/s USB 3.1 Gen2 interface, twice the data transmission speed of USB 3.0.

The G11's body design is inspired by a futuristic spacecraft, with a red-and-black color scheme, three red "flame" designs on the side, and a customizable 8-million-color LED panel occupying the front of the chassis. The latter can be programmed to create ambient lighting for the current game.

The G11 ships with ROG's exclusive Aegis II software, which monitors system performance and improves the gaming experience by helping users track CPU/memory usage and download and upload speeds; it can issue a warning if the system temperature or voltage exceeds a threshold. The ROG G11 has built-in ASUS SonicMaster audio technology which, thanks to a unique combination of hardware and sound-tuning software, provides incredible sound through headphones or speakers. ROG AudioWizard provides audio enhancements, allowing users to easily access up to five custom preset modes.

ROG G11 technical parameters are as follows:

- Quad-core Intel Core i7-6700 processor
- Up to 32 GB DDR4-2133 memory
- Intel H170 chipset board
- NVIDIA GeForce GTX 980 4GB video card
- Up to 3 TB hard disk drive
- Up to 256 GB PCIe SSD
- Blu-ray disc burning optical drive
- USB 3.1 port
- 500W power supply
- Pre-installed Windows 10
---
author: miky
comments: true
date: 2009-11-16 22:12:51+00:00
layout: post
slug: mi-escritorio-200911
title: Mi escritorio 2009/11
categories: day-by-day
---

The truth is I've never been one to show off my desktop. But this coincides with the release of that long-awaited beta phase of the [Shogun’s Fate](http://www.shogunsfate.com) project. Aka [www.elmejorjuegodelmundo.com](http://www.elmejorjuegodelmundo.com).

Little by little, the project is launching to the public. It was presented at the Salón del Manga in Barcelona. You can find its Twitter account ([@shogunsfate](http://twitter.com/shogunsfate)), and on the [blog](http://blog.shogunsfate.com) you can find news and multimedia material. Perfect for getting a good wallpaper.

[![desktop](http://www.dosidiotas.com/wp-content/uploads/escritorio_thumb.jpg)](http://www.dosidiotas.com/wp-content/uploads/escritorio.jpg)
# portfolio

My portfolio
# Starter Web Repo

This repository is for showing how Git and GitHub work.

Adding a line.

Updating for an emergency fix after stashing.

## Introduction

Here is some text.

## Purpose

Sample website with plenty of files for demos.

## Deployment

In a web server.

## How To Contribute

Please fork this repository.

### Copyright

2014 Git.Training. All rights reserved.
15.416667
60
0.759459
eng_Latn
0.991929
2cff10e60d59b0add42aa7afc267e9cba0743ad6
6,226
md
Markdown
README.md
virtualdesktopdevops/adusercreation-php
c9c079b8d59379bfaead3f6501a756123a57ed6e
[ "Apache-2.0" ]
null
null
null
README.md
virtualdesktopdevops/adusercreation-php
c9c079b8d59379bfaead3f6501a756123a57ed6e
[ "Apache-2.0" ]
null
null
null
README.md
virtualdesktopdevops/adusercreation-php
c9c079b8d59379bfaead3f6501a756123a57ed6e
[ "Apache-2.0" ]
1
2021-09-14T01:26:41.000Z
2021-09-14T01:26:41.000Z
# adusercreation-php

PHP webservice for Active Directory user creation

## Integration

### Web server configuration

Adusercreation uses an LDAPS Active Directory connection to create the user. The connection has to be secured with SSL to allow password change and account activation: Active Directory doesn't allow password data to transit over a non-secure LDAP connection.

The Active Directory domain controller has to be configured with a valid SSL certificate. The domain controller certificate can be provided by a Microsoft or a third-party Certification Authority. A tutorial is provided at https://gist.github.com/magnetikonline/0ccdabfec58eb1929c997d22e7341e45

The Certification Authority root certificate has to be included in the certificate trust store of the web server hosting the Adusercreation webservice. On Debian 9, the crt file has to be deployed in **/usr/local/share/ca-certificates/** and included with the **update-ca-certificates** command. Apache2 then has to be reloaded so that it can validate the Active Directory LDAPS certificate during LDAPS connection establishment. For PHP configuration on IIS, use the following guide: http://www.web-site-scripts.com/knowledge-base/getAttach/263/AA-00754/Enable+LDAPS+on+Windows+IIS.pdf

Use the `openssl s_client -connect <domain controller>:636` command to check the certificate of the domain controller and to validate the LDAPS certificate validation done by the web server (Linux systems only; an additional ldap.conf is needed on Windows for PHP).

### Adusercreation webservice configuration

Configure the LDAPS variables in **config.php**. An example is given in the sample config.php file provided on GitHub.

```
// An array of your LDAP hosts. You can use either
// the host name or the IP address of your host.
$hosts = ['dc01.domain-test.com'];

$basedn = 'dc=domain-test,dc=com';
$account_suffix = '@domain-test.com';

// The account to use for querying / modifying LDAP records. This
// does not need to be an admin account. This can also
// be a full distinguished name of the user account.
$serviceaccount = 'administrator@domain-test.com';
$serviceaccountpassword = 'P@ssw0rd';
```

In this first release, users are created in the default Active Directory **Users CN**.

### Testing Adusercreation

Use `wget 'http://localhost/adusercreation-php/useradd.php?useraccount=jackreacher&userfirstname=jack&userlastname=reacher&useremail=jack.reacher@domain-test.com'` to locally test user creation from the web server, or replace localhost with your web server's FQDN to test from a remote web browser.

A 403 error is returned if the user already exists. **A connection error is returned** if the LDAP server is unreachable or **if LDAPS SSL certificate validation fails.**

## How does Adusercreation work?

### How to create an Active Directory account with PHP using LDAPS

Adusercreation uses the Adldap2 PHP library to interact with the Active Directory LDAP interface.

To create Active Directory users using LDAP, you first need to create the user, which is created disabled, then set the password, and then enable the account.

The userAccountControl attribute determines if an account is enabled or disabled. According to Microsoft's documentation the following values can be used and combined:

```
Tag                              Value      Notes
SCRIPT                           1
ACCOUNTDISABLE                   2
HOMEDIR_REQUIRED                 8
LOCKOUT                          16
PASSWD_NOTREQD                   32         You can not assign this permission
PASSWD_CANT_CHANGE               64
ENCRYPTED_TEXT_PWD_ALLOWED       128
TEMP_DUPLICATE_ACCOUNT           256
NORMAL_ACCOUNT                   512
INTERDOMAIN_TRUST_ACCOUNT        2048
WORKSTATION_TRUST_ACCOUNT        4096
SERVER_TRUST_ACCOUNT             8192
DONT_EXPIRE_PASSWORD             65536
MNS_LOGON_ACCOUNT                131072
SMARTCARD_REQUIRED               262144
TRUSTED_FOR_DELEGATION           524288
NOT_DELEGATED                    1048576
USE_DES_KEY_ONLY                 2097152
DONT_REQ_PREAUTH                 4194304
PASSWORD_EXPIRED                 8388608
TRUSTED_TO_AUTH_FOR_DELEGATION   16777216
```

So 512 is a normal user account; adding 2 results in a normal user account, but disabled.

To make sure that the account never expires we set the accountExpires value to 0, which seems to work. Although, according to http://arnoutvandervorst.blogspot.com/2008/03/ldap-accountexpires-attribute-values.html, the initial value should be: 9223372036854775807.

Of course the user needs a password. To create the password do:

```
echo -n "\"password\"" | iconv -f UTF8 -t UTF16LE | base64 -w 0
```

Microsoft stores a quoted password, little-endian UTF-16 and base64 encoded. The trivial command above takes care of it all. Note the -n option to echo; otherwise the carriage return will also be part of the password.

### Example LDIF

The first part of the following LDIF creates the disabled user account, the second part sets the password, and the last part enables the account:

```
dn: CN=Piet Prutser,CN=Users,DC=forest,DC=example,DC=com
changetype: add
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
objectCategory: CN=Person,CN=Schema,CN=Configuration,DC=example,DC=com
codePage: 0
countryCode: 0
distinguishedName: CN=Piet Prutser,CN=Users,DC=forest,DC=example,DC=com
cn: Piet Prutser
sn: Prutser
givenName: Piet
displayName: Piet Prutser
name: Piet Prutser
telephoneNumber: 123456
instanceType: 4
userAccountControl: 514
accountExpires: 0
uidNumber: 600
gidNumber: 600
sAMAccountName: pprutser
userPrincipalName: P.Prutser@example.com
altSecurityIdentities: Kerberos:pprutser@EXAMPLE.COM
mail: P.Prutser@example.com
homeDirectory: \\ads\home\pprutser
homeDrive: Z:
unixHomeDirectory: /home/pprutser
loginShell: /bin/bash

dn: CN=Piet Prutser,OU=Users,DC=forest,DC=example,DC=com
changetype: modify
replace: unicodePwd
unicodePwd::IlwwdFwwZVwwc1wwdFwwIgo=

dn: CN=Piet Prutser,OU=Users,DC=forest,DC=example,DC=com
changetype: modify
replace: userAccountControl
userAccountControl: 512
```

When you get an error report like this:

```
ldap_modify: Server is unwilling to perform (53)
        additional info: 0000001F: SvcErr: DSID-031A0FC0, problem 5003 (WILL_NOT_PERFORM), data 0
```

it means that the password (like the one in the example) is not a correct UTF-16LE password.

### Reference

- http://pig.made-it.com/pig-adusers.html
- https://community.hortonworks.com/articles/82544/how-to-create-ad-principal-accounts-using-openldap.html
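As a cross-check of the shell pipeline above, the same `unicodePwd` encoding can be reproduced in a few lines of Python (shown here for illustration only; the webservice itself is PHP):

```python
import base64


def ad_unicode_pwd(password: str) -> bytes:
    """Base64-encode the quoted password as little-endian UTF-16,
    the format Active Directory expects for the unicodePwd attribute."""
    quoted = '"' + password + '"'
    return base64.b64encode(quoted.encode("utf-16-le"))


# Round-trip check: decoding recovers the quoted password.
assert base64.b64decode(ad_unicode_pwd("P@ssw0rd")).decode("utf-16-le") == '"P@ssw0rd"'
```

Note that, just like the `echo -n` pipeline, nothing here appends a trailing newline; a stray newline in the encoded value is a common cause of the WILL_NOT_PERFORM error.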
---
title: How to turn your website into a PWA
author: PipisCrew
date: 2017-07-13
categories: [news]
toc: true
---

https://mxb.at/blog/how-to-turn-your-website-into-a-pwa/

Real Favicon Generator - http://realfavicongenerator.net/

Testing your PWA - Lighthouse Chrome Plugin - https://chrome.google.com/webstore/detail/lighthouse/blipmdconlkpinefehnmjammfjpmpbjk

#registerServiceWorker

origin - http://www.pipiscrew.com/2017/07/how-to-turn-your-website-into-a-pwa/
---
title: FUNCTION_DATA structure
description: Information about the FUNCTION_DATA structure of the C++ Build Insights SDK.
ms.date: 02/12/2020
helpviewer_keywords:
- C++ Build Insights
- C++ Build Insights SDK
- FUNCTION_DATA
- throughput analysis
- build time analysis
- vcperf.exe
---
# <a name="function_data-structure"></a>FUNCTION_DATA structure

::: moniker range="<=vs-2015"

The C++ Build Insights SDK is compatible with Visual Studio 2017 and later. To see the documentation for these versions, set the Visual Studio **Version** selector control for this article to Visual Studio 2017 or Visual Studio 2019. It's found at the top of the table of contents on this page.

::: moniker-end

::: moniker range=">=vs-2017"

The `FUNCTION_DATA` structure describes a function.

## <a name="syntax"></a>Syntax

```cpp
typedef struct FUNCTION_DATA_TAG
{
    const char* Name;
} FUNCTION_DATA;
```

## <a name="members"></a>Members

| Name | Description |
|--|--|
| `Name` | The name of the function, encoded in UTF-8. |

::: moniker-end
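To make the layout concrete, here is a hypothetical `ctypes` mirror of the structure (Python is used purely for illustration; the SDK itself is consumed from C/C++, and only the single `Name` field is defined by the structure):

```python
import ctypes


class FUNCTION_DATA(ctypes.Structure):
    """Python mirror of the C struct, for illustration only."""
    _fields_ = [("Name", ctypes.c_char_p)]  # UTF-8 encoded function name


data = FUNCTION_DATA(Name="OnFunctionEvent".encode("utf-8"))
print(data.Name.decode("utf-8"))  # OnFunctionEvent
```

Because `Name` is declared UTF-8, consumers should decode it explicitly rather than assume the platform's default narrow encoding.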
---
layout: page
title: HB 2201
permalink: courses/hb-2201/
subtitle: "HB 2201: Beginning Biblical Hebrew I"
---

## Course Description

For students who wish to read, hear, and even (to an extent) produce Hebrew, this course offers a textually immersive introduction to classical Hebrew. Grammatical features like morphology, phonology, and syntax are learned inductively, through a cycle of illustrated readings drawn from Genesis. Techniques of second-language acquisition are also used to help develop language competency. Continuation in HB 2202 is strongly recommended. This course is open to AST students and to Continuing Education participants.

<!-- Legacy: from Fall 2018
An introduction to the basic principles of biblical Hebrew with emphasis on morphology, phonology, and syntax, this course is for students who want to study the Old Testament in Hebrew. Students will learn basic Hebrew grammar, develop a rudimentary biblical Hebrew vocabulary, and begin to read the Hebrew Bible with one eye on the window it opens into ancient Israel and another on its historic role as scripture in Judaism and Christianity. This course is suitable preparation for further study in religion, theology, or classics. (It is open to undergraduate and graduate students from other universities. Please contact the AST Registrar to enroll as a Letter of Permission student.)
-->

*Prerequisites: None. The course is required for HB 2202 and advanced biblical Hebrew.*

[Download the latest syllabus (Summer 2020, v 3.5.0)](https://github.com/danieldriver/Syllabi/raw/master/HB/HB%202201-Driver%202020.pdf).

[Student access is through Microsoft Teams](https://teams.microsoft.com/l/team/19%3a8a169e9b2976408eb2852a86e16e85d4%40thread.tacv2/conversations?groupId=a61bbfb3-afad-4fea-bb8f-815065b833c7&tenantId=91a947b7-4a37-4ddc-8caa-1f4c21afbc4c). Registered students, use your AST email address to log in.

## Required Texts (Summer 2020)*

LBH Grammar
: Karl V. Kutz and Rebekah Josberger. *Learning Biblical Hebrew: Reading for Comprehension: An Introductory Grammar.* Bellingham, WA: Lexham, 2018. ISBN 978-1683590842.
: Note that LBH has a [companion site with resources for teachers and students](http://www.learningbiblicalhebrew.com/). You can order the [Learning Biblical Hebrew Bundle (2 vols.)](https://lexhampress.com/product/177582/learning-biblical-hebrew-bundle) directly from the publisher, in print or digital format. I would recommend getting the Reader in print format, since you want to write out the exercises, but it might be useful to have both formats of the Reader if you want to use a digital format for the Grammar.
: Alternatively, order the Grammar in [Canada](https://amzn.to/3eK41UK) or the [USA](https://amzn.to/3byumDo).

LBH Reader
: Karl V. Kutz and Rebekah Josberger. *Learning Biblical Hebrew: A Graded Reader with Exercises.* Bellingham, WA: Lexham, 2019. ISBN 978-1683592440.
: Order the Reader in [Canada](https://amzn.to/3axosRq) or the [USA](https://amzn.to/3bwgSI9).

<!--
BBH
: John A. Cook and Robert D. Holmstedt. *Beginning Biblical Hebrew: A Grammar and Illustrated Reader.* Grand Rapids: Baker Academic, 2013.
: Note that this textbook includes a significant set of [online study aids](http://www.bakerpublishinggroup.com/books/beginning-biblical-hebrew/5629/students/esources).
: Order it in [Canada](https://amzn.to/2K51HHt) or the [USA](https://amzn.to/2K3Tq6A).

Muraoka
: Takamitsu Muraoka. *A Biblical Hebrew Reader: With an Outline Grammar.* Leuven: Peeters, 2017.
: Order it in [Canada](https://amzn.to/2NSJ1gt) or the [USA](https://amzn.to/2uW4hec).

BHS
: Karl Elliger and Willhelm Rudulph, eds. *Biblia Hebraica Stuttgartensia.* Stuttgart: Deutsche Bibelgesellschaft, 1997.
: Optional for HB 2201 (Fall 2018), required for HB 2202 (Winter 2019). The student edition (paperback) is more affordable, but the hardcover is significantly more durable.
: Order it in [Canada](https://amzn.to/2LwUtli) or the [USA](https://amzn.to/2K0sZ1L).
-->

{% include recommended.md %}

<!--
tk
: tk
: Order it in [Canada]() or the [USA]().
-->
55.256757
519
0.777207
eng_Latn
0.962822
fa01471ad73fdb36e08650d3236e0fb63110a1e0
426
md
Markdown
index.md
kentonh/refreshcast-site
45ea62e575cef62ec854414e73ccb97eb1dc46ca
[ "CC-BY-3.0" ]
null
null
null
index.md
kentonh/refreshcast-site
45ea62e575cef62ec854414e73ccb97eb1dc46ca
[ "CC-BY-3.0" ]
null
null
null
index.md
kentonh/refreshcast-site
45ea62e575cef62ec854414e73ccb97eb1dc46ca
[ "CC-BY-3.0" ]
null
null
null
---
layout: home
title: Home
landing-title: 'Dynamically inserted ads for your podcast'
description: null
image: null
author: null
show_tile: false
---

Take control over your podcast's advertisements and audio. Dynamically insert ads into your podcast files on a schedule. Monetize your podcast's back catalog. Keep your listeners up to date, no matter which episode they start with. Bring timely information to your audience.
35.5
274
0.79108
eng_Latn
0.996304
fa01dd29f3fffdf2059efc787bde4d7af10dca9a
33,975
md
Markdown
python/generate-nested-json-summary.md
jasimmonsv/til
5a536a8341eeba5c4df37a8046bc9c25f6a65bd2
[ "Apache-2.0" ]
450
2020-04-19T18:57:36.000Z
2022-03-31T14:00:29.000Z
python/generate-nested-json-summary.md
jasimmonsv/til
5a536a8341eeba5c4df37a8046bc9c25f6a65bd2
[ "Apache-2.0" ]
37
2020-04-19T14:57:18.000Z
2022-03-26T22:28:24.000Z
python/generate-nested-json-summary.md
jasimmonsv/til
5a536a8341eeba5c4df37a8046bc9c25f6a65bd2
[ "Apache-2.0" ]
74
2020-04-20T05:43:51.000Z
2022-03-11T15:09:48.000Z
# Generated a summary of nested JSON data

I was trying to figure out the shape of the JSON object from https://github.com/simonw/coronavirus-data-gov-archive/blob/master/data_latest.json?raw=true - which is 3.2MB and heavily nested, so it's difficult to get a good feel for the shape.

I solved this with a Python `summarize()` function which recursively truncates the nested lists and dictionaries.

```python
def summarize(data, list_limit=5, key_limit=5):
    "Recursively reduce data to just the first X nested keys and list items"
    if not isinstance(data, (list, dict)):
        return data
    if isinstance(data, list):
        return [summarize(item, list_limit, key_limit) for item in data[:list_limit]]
    if isinstance(data, dict):
        all_keys = list(data.keys())
        kept_keys = all_keys[:key_limit]
        truncated_keys = all_keys[key_limit:]
        d = dict([
            (key, summarize(data[key], list_limit, key_limit))
            for key in kept_keys
        ])
        if truncated_keys:
            d["_truncated_keys"] = truncated_keys
        return d
```

Here's how I used it:

```python
import json, requests

data = requests.get(
    "https://github.com/simonw/coronavirus-data-gov-archive/blob/master/data_latest.json?raw=true"
).json()
print(json.dumps(summarize(data, list_limit=2, key_limit=7), indent=4))
```

And the output:

```json
{ "lastUpdatedAt": "2020-04-28T18:14:22.840234Z", "disclaimer": "Lab-confirmed case counts for England and subnational areas are provided by Public Health England. All data on deaths and data for the rest of the UK are provided by the Department of Health and Social Care based on data from NHS England and the devolved administrations. Maps include Ordnance Survey data \u00a9 Crown copyright and database right 2020 and Office for National Statistics data \u00a9 Crown copyright and database right 2020. Daily and total case counts are as of 28 April 2020, Daily and total deaths are as of 27 April 2020. 
See the About the data page (link at top of this page) for details.", "overview": { "K02000001": { "name": { "value": "United Kingdom" }, "totalCases": { "value": 161145 }, "newCases": { "value": 3996 }, "deaths": { "value": 21678 }, "dailyDeaths": [ { "date": "2020-03-15", "value": 14 }, { "date": "2020-03-16", "value": 20 } ], "dailyTotalDeaths": [ { "date": "2020-03-10", "value": 6 }, { "date": "2020-03-11", "value": 6 } ] } }, "countries": { "E92000001": { "name": { "value": "England" }, "totalCases": { "value": 114456 }, "deaths": { "value": 19294 }, "maleCases": [ { "age": "30_to_34", "value": 2224 }, { "age": "40_to_44", "value": 2702 } ], "femaleCases": [ { "age": "20_to_24", "value": 1885 }, { "age": "90_to_94", "value": 3580 } ], "dailyConfirmedCases": [ { "date": "2020-01-30", "value": 1 }, { "date": "2020-01-31", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-01-30", "value": 1 }, { "date": "2020-01-31", "value": 2 } ], "_truncated_keys": [ "dailyDeaths", "dailyTotalDeaths", "previouslyReportedDailyTotalCases", "previouslyReportedDailyTotalCasesAdjusted", "previouslyReportedDailyCases", "previouslyReportedDailyCasesAdjusted", "changeInDailyTotalCases", "changeInDailyTotalCasesAdjusted", "changeInDailyCases", "changeInDailyCasesAdjusted" ] }, "N92000002": { "name": { "value": "Northern Ireland" }, "totalCases": { "value": 3408 }, "deaths": { "value": 309 }, "dailyDeaths": [ { "date": "2020-03-28", "value": 2 }, { "date": "2020-03-29", "value": 0 } ], "dailyTotalDeaths": [ { "date": "2020-03-27", "value": 13 }, { "date": "2020-03-28", "value": 15 } ] }, "S92000003": { "name": { "value": "Scotland" }, "totalCases": { "value": 10721 }, "deaths": { "value": 1262 }, "dailyDeaths": [ { "date": "2020-03-28", "value": 7 }, { "date": "2020-03-29", "value": 0 } ], "dailyTotalDeaths": [ { "date": "2020-03-27", "value": 33 }, { "date": "2020-03-28", "value": 40 } ] }, "W92000004": { "name": { "value": "Wales" }, "totalCases": { "value": 9512 }, 
"deaths": { "value": 813 }, "dailyDeaths": [ { "date": "2020-03-28", "value": 4 }, { "date": "2020-03-29", "value": 10 } ], "dailyTotalDeaths": [ { "date": "2020-03-27", "value": 34 }, { "date": "2020-03-28", "value": 38 } ] } }, "regions": { "E12000004": { "name": { "value": "East Midlands" }, "totalCases": { "value": 6411 }, "dailyConfirmedCases": [ { "date": "2020-02-21", "value": 1 }, { "date": "2020-02-25", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-02-21", "value": 1 }, { "date": "2020-02-25", "value": 2 } ] }, "E12000006": { "name": { "value": "East of England" }, "totalCases": { "value": 9907 }, "dailyConfirmedCases": [ { "date": "2020-02-03", "value": 1 }, { "date": "2020-02-28", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-02-03", "value": 1 }, { "date": "2020-02-28", "value": 2 } ] }, "E12000007": { "name": { "value": "London" }, "totalCases": { "value": 23979 }, "dailyConfirmedCases": [ { "date": "2020-02-11", "value": 1 }, { "date": "2020-02-13", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-02-11", "value": 1 }, { "date": "2020-02-13", "value": 2 } ] }, "E12000001": { "name": { "value": "North East" }, "totalCases": { "value": 7174 }, "dailyConfirmedCases": [ { "date": "2020-03-02", "value": 1 }, { "date": "2020-03-04", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-02", "value": 1 }, { "date": "2020-03-04", "value": 2 } ] }, "E12000002": { "name": { "value": "North West" }, "totalCases": { "value": 17823 }, "dailyConfirmedCases": [ { "date": "2020-02-28", "value": 1 }, { "date": "2020-03-01", "value": 8 } ], "dailyTotalConfirmedCases": [ { "date": "2020-02-28", "value": 1 }, { "date": "2020-03-01", "value": 9 } ] }, "E12000008": { "name": { "value": "South East" }, "totalCases": { "value": 16323 }, "dailyConfirmedCases": [ { "date": "2020-01-31", "value": 1 }, { "date": "2020-02-03", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-01-31", "value": 1 }, { "date": 
"2020-02-03", "value": 2 } ] }, "E12000009": { "name": { "value": "South West" }, "totalCases": { "value": 5986 }, "dailyConfirmedCases": [ { "date": "2020-02-03", "value": 2 }, { "date": "2020-02-26", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-02-03", "value": 2 }, { "date": "2020-02-26", "value": 3 } ] }, "_truncated_keys": [ "E12000005", "E12000003" ] }, "utlas": { "E09000002": { "name": { "value": "Barking and Dagenham" }, "totalCases": { "value": 445 }, "dailyConfirmedCases": [ { "date": "2020-03-01", "value": 1 }, { "date": "2020-03-08", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-01", "value": 1 }, { "date": "2020-03-08", "value": 2 } ] }, "E09000003": { "name": { "value": "Barnet" }, "totalCases": { "value": 1170 }, "dailyConfirmedCases": [ { "date": "2020-02-16", "value": 1 }, { "date": "2020-02-28", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-02-16", "value": 1 }, { "date": "2020-02-28", "value": 2 } ] }, "E08000016": { "name": { "value": "Barnsley" }, "totalCases": { "value": 590 }, "dailyConfirmedCases": [ { "date": "2020-02-03", "value": 1 }, { "date": "2020-03-02", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-02-03", "value": 1 }, { "date": "2020-03-02", "value": 2 } ] }, "E06000022": { "name": { "value": "Bath and North East Somerset" }, "totalCases": { "value": 203 }, "dailyConfirmedCases": [ { "date": "2020-03-11", "value": 1 }, { "date": "2020-03-12", "value": 2 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-11", "value": 1 }, { "date": "2020-03-12", "value": 3 } ] }, "E06000055": { "name": { "value": "Bedford" }, "totalCases": { "value": 424 }, "dailyConfirmedCases": [ { "date": "2020-03-13", "value": 1 }, { "date": "2020-03-17", "value": 2 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-13", "value": 1 }, { "date": "2020-03-17", "value": 3 } ] }, "E09000004": { "name": { "value": "Bexley" }, "totalCases": { "value": 596 }, "dailyConfirmedCases": [ { "date": 
"2020-03-09", "value": 2 }, { "date": "2020-03-10", "value": 2 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-09", "value": 2 }, { "date": "2020-03-10", "value": 4 } ] }, "E08000025": { "name": { "value": "Birmingham" }, "totalCases": { "value": 2733 }, "dailyConfirmedCases": [ { "date": "2020-03-01", "value": 1 }, { "date": "2020-03-02", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-01", "value": 1 }, { "date": "2020-03-02", "value": 2 } ] }, "_truncated_keys": [ "E06000008", "E06000009", "E08000001", "E06000058", "E06000036", "E08000032", "E09000005", "E06000043", "E06000023", "E09000006", "E10000002", "E08000002", "E08000033", "E10000003", "E09000007", "E06000056", "E06000049", "E06000050", "E09000001", "E06000052", "E06000047", "E08000026", "E09000008", "E10000006", "E06000005", "E06000015", "E10000007", "E10000008", "E08000017", "E06000059", "E08000027", "E09000009", "E06000011", "E10000011", "E09000010", "E10000012", "E08000037", "E10000013", "E09000011", "E09000012", "E06000006", "E09000013", "E10000014", "E09000014", "E09000015", "E06000001", "E09000016", "E06000019", "E10000015", "E09000017", "E09000018", "E06000046", "E09000019", "E09000020", "E10000016", "E06000010", "E09000021", "E08000034", "E08000011", "E09000022", "E10000017", "E08000035", "E06000016", "E10000018", "E09000023", "E10000019", "E08000012", "E06000032", "E08000003", "E06000035", "E09000024", "E06000002", "E06000042", "E08000021", "E09000025", "E10000020", "E06000012", "E06000013", "E06000024", "E08000022", "E10000023", "E10000021", "E06000057", "E06000018", "E10000024", "E08000004", "E10000025", "E06000031", "E06000026", "E06000044", "E06000038", "E09000026", "E06000003", "E09000027", "E08000005", "E08000018", "E06000017", "E08000006", "E08000028", "E08000014", "E08000019", "E06000051", "E06000039", "E08000029", "E10000027", "E06000025", "E08000023", "E06000045", "E06000033", "E09000028", "E08000013", "E10000028", "E08000007", "E06000004", "E06000021", 
"E10000029", "E08000024", "E10000030", "E09000029", "E06000030", "E08000008", "E06000020", "E06000034", "E06000027", "E09000030", "E08000009", "E08000036", "E08000030", "E09000031", "E09000032", "E06000007", "E10000031", "E06000037", "E10000032", "E09000033", "E08000010", "E06000054", "E06000040", "E08000015", "E06000041", "E08000031", "E10000034", "E06000014" ] }, "ltlas": { "E07000223": { "name": { "value": "Adur" }, "totalCases": { "value": 76 }, "dailyConfirmedCases": [ { "date": "2020-03-19", "value": 1 }, { "date": "2020-03-22", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-19", "value": 1 }, { "date": "2020-03-22", "value": 2 } ] }, "E07000026": { "name": { "value": "Allerdale" }, "totalCases": { "value": 191 }, "dailyConfirmedCases": [ { "date": "2020-03-12", "value": 1 }, { "date": "2020-03-13", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-12", "value": 1 }, { "date": "2020-03-13", "value": 2 } ] }, "E07000032": { "name": { "value": "Amber Valley" }, "totalCases": { "value": 133 }, "dailyConfirmedCases": [ { "date": "2020-03-12", "value": 1 }, { "date": "2020-03-13", "value": 3 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-12", "value": 1 }, { "date": "2020-03-13", "value": 4 } ] }, "E07000224": { "name": { "value": "Arun" }, "totalCases": { "value": 121 }, "dailyConfirmedCases": [ { "date": "2020-03-14", "value": 1 }, { "date": "2020-03-18", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-14", "value": 1 }, { "date": "2020-03-18", "value": 2 } ] }, "E07000170": { "name": { "value": "Ashfield" }, "totalCases": { "value": 182 }, "dailyConfirmedCases": [ { "date": "2020-03-14", "value": 3 }, { "date": "2020-03-15", "value": 3 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-14", "value": 3 }, { "date": "2020-03-15", "value": 6 } ] }, "E07000105": { "name": { "value": "Ashford" }, "totalCases": { "value": 403 }, "dailyConfirmedCases": [ { "date": "2020-03-03", "value": 1 }, { "date": 
"2020-03-12", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-03", "value": 1 }, { "date": "2020-03-12", "value": 2 } ] }, "E07000004": { "name": { "value": "Aylesbury Vale" }, "totalCases": { "value": 301 }, "dailyConfirmedCases": [ { "date": "2020-03-04", "value": 1 }, { "date": "2020-03-12", "value": 1 } ], "dailyTotalConfirmedCases": [ { "date": "2020-03-04", "value": 1 }, { "date": "2020-03-12", "value": 2 } ] }, "_truncated_keys": [ "E07000200", "E09000002", "E09000003", "E08000016", "E07000027", "E07000066", "E07000084", "E07000171", "E06000022", "E06000055", "E09000004", "E08000025", "E07000129", "E06000008", "E06000009", "E07000033", "E08000001", "E07000136", "E06000058", "E06000036", "E08000032", "E07000067", "E07000143", "E09000005", "E07000068", "E06000043", "E06000023", "E07000144", "E09000006", "E07000234", "E07000095", "E07000172", "E07000117", "E08000002", "E08000033", "E07000008", "E09000007", "E07000192", "E07000106", "E07000028", "E07000069", "E06000056", "E07000130", "E07000070", "E07000078", "E07000177", "E06000049", "E06000050", "E07000034", "E07000225", "E07000005", "E07000118", "E09000001", "E07000071", "E07000029", "E07000150", "E06000052", "E07000079", "E06000047", "E08000026", "E07000163", "E07000226", "E09000008", "E07000096", "E06000005", "E07000107", "E07000151", "E06000015", "E07000035", "E08000017", "E06000059", "E07000108", "E08000027", "E09000009", "E07000009", "E07000040", "E07000085", "E07000242", "E07000137", "E07000152", "E06000011", "E07000193", "E07000244", "E07000061", "E07000086", "E07000030", "E07000207", "E09000010", "E07000072", "E07000208", "E07000036", "E07000041", "E07000087", "E07000010", "E07000112", "E07000080", "E07000119", "E08000037", "E07000173", "E07000081", "E07000088", "E07000109", "E07000145", "E09000011", "E07000209", "E09000012", "E06000006", "E07000164", "E09000013", "E07000131", "E09000014", "E07000073", "E07000165", "E09000015", "E07000089", "E06000001", "E07000062", "E07000090", 
"E09000016", "E06000019", "E07000098", "E07000037", "E09000017", "E07000132", "E07000227", "E09000018", "E07000011", "E07000120", "E07000202", "E06000046", "E09000019", "E09000020", "E07000153", "E07000146", "E06000010", "E09000021", "E08000034", "E08000011", "E09000022", "E07000121", "E08000035", "E06000016", "E07000063", "E09000023", "E07000194", "E07000138", "E08000012", "E06000032", "E07000110", "E07000074", "E07000235", "E08000003", "E07000174", "E06000035", "E07000133", "E07000187", "E09000024", "E07000042", "E07000203", "E07000228", "E06000002", "E06000042", "E07000210", "E07000091", "E07000175", "E08000021", "E07000195", "E09000025", "E07000043", "E07000038", "E06000012", "E07000099", "E07000139", "E06000013", "E07000147", "E06000024", "E08000022", "E07000218", "E07000134", "E07000154", "E06000057", "E07000148", "E06000018", "E07000219", "E07000135", "E08000004", "E07000178", "E07000122", "E06000031", "E06000026", "E06000044", "E07000123", "E06000038", "E09000026", "E06000003", "E07000236", "E07000211", "E07000124", "E09000027", "E07000166", "E08000005", "E07000075", "E07000125", "E07000064", "E08000018", "E07000220", "E07000212", "E07000176", "E07000092", "E06000017", "E07000167", "E08000006", "E08000028", "E07000168", "E07000188", "E08000014", "E07000169", "E07000111", "E08000019", "E06000051", "E06000039", "E08000029", "E07000246", "E07000006", "E07000012", "E07000039", "E06000025", "E07000044", "E07000140", "E07000141", "E07000031", "E07000149", "E07000155", "E07000179", "E07000126", "E07000189", "E07000196", "E08000023", "E06000045", "E06000033", "E09000028", "E07000213", "E07000240", "E08000013", "E07000197", "E07000198", "E07000243", "E08000007", "E06000004", "E06000021", "E07000221", "E07000082", "E08000024", "E07000214", "E09000029", "E07000113", "E06000030", "E08000008", "E07000199", "E07000215", "E07000045", "E06000020", "E07000076", "E07000093", "E07000083", "E07000114", "E07000102", "E06000034", "E07000115", "E06000027", "E07000046", 
"E09000030", "E08000009", "E07000116", "E07000077", "E07000180", "E08000036", "E08000030", "E09000031", "E09000032", "E06000007", "E07000222", "E07000103", "E07000216", "E07000065", "E07000156", "E07000241", "E06000037", "E07000047", "E07000127", "E07000142", "E07000181", "E07000245", "E09000033", "E08000010", "E06000054", "E07000094", "E06000040", "E08000015", "E07000217", "E06000041", "E08000031", "E07000237", "E07000229", "E07000238", "E07000007", "E07000128", "E07000239", "E06000014" ] } } ```
26.357642
630
0.302193
yue_Hant
0.305058
fa02ddae92ef8b4b8a7f2bceacc08f44c6ed5557
1,372
md
Markdown
content/posts/24CaeLiderSantaMuerte.md
noabortion/newage
2492db1daad1a089e4950524dbc981a1949e6615
[ "MIT" ]
null
null
null
content/posts/24CaeLiderSantaMuerte.md
noabortion/newage
2492db1daad1a089e4950524dbc981a1949e6615
[ "MIT" ]
null
null
null
content/posts/24CaeLiderSantaMuerte.md
noabortion/newage
2492db1daad1a089e4950524dbc981a1949e6615
[ "MIT" ]
null
null
null
---
template: "post"
title: "Cae líder de la 'Santa Muerte' por secuestro"
cover: "../images/24Cae.jpg"
date: "2011-01-05T08:00:00Z"
slug: "Cae-Lider"
keywords: "santa muerte"
categories:
- santa muerte
tags:
- santa muerte
---

**El procurador capitalino, Miguel Ángel Mancera, precisó que David Romo Guillén recibió 25 mil pesos por prestar a los plagiarios una cuenta bancaria para depositar el rescate por dos adultos mayores.**

México.- La procuraduría capitalina detuvo al autodenominado obispo de la Iglesia de la Santa Muerte, David Romo Guillén, y a nueve presuntos integrantes de una banda de plagiarios que se hacían pasar como Zetas. El presunto líder, Gabriel Israel Peralta Martínez, El Spiderman, operaba desde la Penitenciaría de Santa Martha.

![Cae](../images/24Cae.jpg)

El titular de la dependencia, Miguel Ángel Mancera, dijo que los implicados participaron en el plagio de una pareja de adultos mayores que se perpetró en la delegación Magdalena Contreras, el 14 de diciembre pasado.

Señaló que existe evidencia de la participación de Romo Guillén (quien en 1997 fue exhibido por la Arquidiócesis de México como falso sacerdote dedicado a defraudar) en el cobro de rescate del matrimonio, cuyo dinero fue transferido a una cuenta a nombre de su alias: Silverio Reyes Fremain Cortés.

*Fuente: Milenio Diario, 5 enero 2011. Foto: La Prensa*
57.166667
326
0.781341
spa_Latn
0.987095
fa02ef8342b6660526d68af22e897bdb62953460
22
md
Markdown
README.md
juvemar/DifiningClasses---1
6eb1c71553eb647e08e0642936ee88a956c1b898
[ "MIT" ]
null
null
null
README.md
juvemar/DifiningClasses---1
6eb1c71553eb647e08e0642936ee88a956c1b898
[ "MIT" ]
null
null
null
README.md
juvemar/DifiningClasses---1
6eb1c71553eb647e08e0642936ee88a956c1b898
[ "MIT" ]
null
null
null
# DifiningClasses---1
11
21
0.727273
eng_Latn
0.193194
fa032a9cd5d6712af2b1a7be56db08216445d71b
436
md
Markdown
README.md
Turbasen/gruppebruker-cli
58c2c0361b45bcf2b74eaed0af5cf1b829cfb681
[ "MIT" ]
1
2015-07-05T20:08:18.000Z
2015-07-05T20:08:18.000Z
README.md
Turbasen/gruppebruker
58c2c0361b45bcf2b74eaed0af5cf1b829cfb681
[ "MIT" ]
null
null
null
README.md
Turbasen/gruppebruker
58c2c0361b45bcf2b74eaed0af5cf1b829cfb681
[ "MIT" ]
null
null
null
# Users and Groups CLI

CLI for creating new users and groups

## New User

Create new user for given user group

```
Usage: user [options]

Options:
  -g, --group    Group ID (Nasjonal Turbase)
  -n, --name     User name
  -e, --email    User email
  --ntb-api-env  API environment [dev]
  --version      Print version and exit
```

## [MIT License](https://github.com/Turistforeningen/gruppebruker-cli/blob/master/LICENSE)
20.761905
90
0.66055
eng_Latn
0.536312
fa0369d373391fe15ea41e283c48958d78c083e5
70
md
Markdown
images/README.md
data301-2020-winter2/course-project-group_1004
243c5eeeadb6f260d1fdb4037f65f3e2553b0256
[ "MIT" ]
null
null
null
images/README.md
data301-2020-winter2/course-project-group_1004
243c5eeeadb6f260d1fdb4037f65f3e2553b0256
[ "MIT" ]
1
2021-03-23T23:50:35.000Z
2021-03-28T12:46:36.000Z
images/README.md
data301-2020-winter2/course-project-group_1004
243c5eeeadb6f260d1fdb4037f65f3e2553b0256
[ "MIT" ]
2
2021-02-06T09:06:56.000Z
2021-02-06T22:54:53.000Z
## - **All images used in this project are placed in this directory.**
70
70
0.714286
eng_Latn
0.999986
fa046810ebe356791fac33850db46f047bf85ef6
1,307
md
Markdown
docs/ensurePositiveFiniteNumber.md
cheton/ensure-type
36395a5e4e111beb1717ab6d80b1935e8660d728
[ "MIT" ]
6
2020-10-14T09:10:30.000Z
2020-10-20T09:45:37.000Z
docs/ensurePositiveFiniteNumber.md
cheton/ensure-type
36395a5e4e111beb1717ab6d80b1935e8660d728
[ "MIT" ]
2
2021-08-28T13:51:03.000Z
2021-08-28T15:34:40.000Z
docs/ensurePositiveFiniteNumber.md
cheton/ensure-type
36395a5e4e111beb1717ab6d80b1935e8660d728
[ "MIT" ]
null
null
null
### `ensurePositiveFiniteNumber(value, [defaultValue=0])`

* If given `value` is `undefined` or `null`, the `defaultValue` is returned with type coercion.
* If given `value` is a finite number coercible value, the result value is returned. Otherwise, the `defaultValue` is returned with type coercion.

```js
import { ensurePositiveFiniteNumber } from 'ensure-type';

ensurePositiveFiniteNumber(); // => 0
ensurePositiveFiniteNumber({}); // => 0
ensurePositiveFiniteNumber(true); // => 1
ensurePositiveFiniteNumber(false); // => 0
ensurePositiveFiniteNumber(-1); // => 0
ensurePositiveFiniteNumber(0); // => 0
ensurePositiveFiniteNumber(1); // => 1
ensurePositiveFiniteNumber(2e+64); // => 2e+64
ensurePositiveFiniteNumber(Infinity); // => 0
ensurePositiveFiniteNumber(-Infinity); // => 0
ensurePositiveFiniteNumber(NaN); // => 0
ensurePositiveFiniteNumber(undefined); // => 0
ensurePositiveFiniteNumber(null); // => 0
ensurePositiveFiniteNumber('-1'); // => 0
ensurePositiveFiniteNumber('0'); // => 0
ensurePositiveFiniteNumber('1'); // => 1
ensurePositiveFiniteNumber(''); // => 0
ensurePositiveFiniteNumber(' '); // => 0

// Returns the coerced default value.
ensurePositiveFiniteNumber(null, '1'); // => 1

// Returns the default value.
ensurePositiveFiniteNumber(null, 1); // => 1
```
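
For readers who want the rules spelled out step by step, here is a rough Python sketch of the behavior documented above. It is not part of `ensure-type`; the `js_number` helper is a hypothetical stand-in for JavaScript's `Number()` coercion that covers only the cases shown in this document:

```python
import math

def js_number(v):
    """Hypothetical stand-in for JavaScript Number() coercion.

    Covers only the cases exercised in this document:
    booleans, None, numbers, strings, and plain objects."""
    if isinstance(v, bool):      # Number(true) -> 1, Number(false) -> 0
        return 1.0 if v else 0.0
    if v is None:                # Number(null) -> 0
        return 0.0
    if isinstance(v, (int, float)):
        return float(v)
    if isinstance(v, str):       # Number('') and Number(' ') -> 0
        s = v.strip()
        if s == "":
            return 0.0
        try:
            return float(s)
        except ValueError:
            return math.nan
    return math.nan              # Number({}) -> NaN

def ensure_positive_finite_number(value, default=0):
    # undefined/null value: fall back to the coerced default
    n = js_number(default) if value is None else js_number(value)
    # NaN, Infinity, -Infinity: fall back to the coerced default
    if not math.isfinite(n):
        n = js_number(default)
    # in the examples above, negative finite values come back as 0
    return n if n > 0 else 0

assert ensure_positive_finite_number(-1) == 0
assert ensure_positive_finite_number("1") == 1
assert ensure_positive_finite_number(float("inf")) == 0
assert ensure_positive_finite_number(None, "1") == 1
```

This is only a reading aid for the coercion rules; the actual library should be consulted for edge cases not listed in the examples.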
18.408451
146
0.713083
eng_Latn
0.268724
fa04d1ddc74476c9fada12d353762a19a5d7c5f6
21,626
md
Markdown
articles/governance/policy/concepts/guest-configuration.md
changeworld/azure-docs.de-de
26492264ace1ad4cfdf80e5234dfed9a106e8012
[ "CC-BY-4.0", "MIT" ]
1
2021-03-12T23:37:21.000Z
2021-03-12T23:37:21.000Z
articles/governance/policy/concepts/guest-configuration.md
changeworld/azure-docs.de-de
26492264ace1ad4cfdf80e5234dfed9a106e8012
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/governance/policy/concepts/guest-configuration.md
changeworld/azure-docs.de-de
26492264ace1ad4cfdf80e5234dfed9a106e8012
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Grundlegende Informationen zum Gastkonfigurationsfeature von Azure Policy description: Hier erfahren Sie, wie Azure Policy mithilfe des Gastkonfigurationsfeatures Einstellungen in VMs überwacht und konfiguriert. ms.date: 07/15/2021 ms.topic: conceptual ms.openlocfilehash: 6a40469b6cd391672ba953bac37402285ac4a097 ms.sourcegitcommit: 106f5c9fa5c6d3498dd1cfe63181a7ed4125ae6d ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 11/02/2021 ms.locfileid: "131040414" --- # <a name="understand-the-guest-configuration-feature-of-azure-policy"></a>Grundlegende Informationen zum Gastkonfigurationsfeature von Azure Policy Das Gastkonfigurationsfeature von Azure Policy bietet native Funktionen zum Überprüfen oder Konfigurieren von Betriebssystemeinstellungen als Code, sowohl für Computer, die in Azure ausgeführt werden, als auch für hybride [Computer mit Arc-Unterstützung](../../../azure-arc/servers/overview.md). Das Feature kann direkt pro Computer oder in großem Umfang von Azure Policy orchestriert verwendet werden. Konfigurationsressourcen in Azure sind als [Erweiterungsressourcen](../../../azure-resource-manager/management/extension-resource-types.md) konzipiert. Sie können sich jede Konfiguration als zusätzlichen Satz von Eigenschaften für den Computer vorstellen. Konfigurationen können Einstellungen enthalten wie: - Betriebssystemeinstellungen - Die Konfiguration oder das Vorhandensein der Anwendung - Umgebungseinstellungen Konfigurationen unterscheiden sich von Richtliniendefinitionen. Die Gastkonfiguration nutzt Azure Policy, um Computern Konfigurationen dynamisch zuzuweisen. Sie können Computern Konfigurationen auch [manuell](guest-configuration-assignments.md#manually-creating-guest-configuration-assignments) oder mithilfe anderer Azure-Dienste wie [AutoManage](../../../automanage/automanage-virtual-machines.md) zuweisen. Beispiele für jedes Szenario sind in der folgenden Tabelle aufgeführt. 
| type | BESCHREIBUNG | Beispielstory | | - | - | - | | [Konfigurationsverwaltung](guest-configuration-assignments.md) | Sie möchten eine vollständige Darstellung eines Servers als Code in der Quellcodeverwaltung. Die Bereitstellung sollte Eigenschaften des Servers (Größe, Netzwerk, Speicher) und die Konfiguration von Betriebssystem- und Anwendungseinstellungen enthalten. | "Dieser Computer sollte ein Webserver sein, der zum Hosten meiner Websete konfiguriert ist." | | [Compliance](../assign-policy-portal.md) | Sie möchten Einstellungen für alle Computer im Bereich überwachen oder bereitstellen, entweder reaktiv auf vorhandenen Computern oder proaktiv auf neuen Computern, wenn sie bereitgestellt werden. | "Alle Computer sollten TLS 1.2 verwenden. Überwachen Sie vorhandene Computer, damit ich Änderungen an den benötigten Stellen auf kontrollierte Weise und bedarfsorientiert freigeben kann. Erzwingen Sie für neue Computer die Einstellung, wenn sie bereitgestellt werden." | Die Ergebnisse der einzelnen Konfigurationen können entweder auf der Seite [Gastzuweisungen](../how-to/determine-non-compliance.md#compliance-details-for-guest-configuration) angezeigt werden, oder wenn die Konfiguration durch eine Azure Policy-Zuweisung orchestriert wird, indem Sie auf der Seite [Compliance details](../how-to/determine-non-compliance.md#view-configuration-assignment-details-at-scale) auf den Link „Zuletzt ausgewertete Ressource“ klicken. [Für dieses Dokument ist ein Video zur exemplarischen Vorgehensweise verfügbar](https://youtu.be/t9L8COY-BkM). (Update in Kürze verfügbar) ## <a name="enable-guest-configuration"></a>Aktivieren der Gastkonfiguration Sehen Sie sich die folgenden ausführlichen Informationen an, um den Status der Computer in Ihrer Umgebung zu verwalten, einschließlich Computern in Azure und Arc-fähiger Server. 
## <a name="resource-provider"></a>Ressourcenanbieter Um das Gastkonfigurationsfeature von Azure Policy verwenden zu können, müssen Sie den Ressourcenanbieter `Microsoft.GuestConfiguration` registrieren. Wenn die Zuweisung einer Gastkonfigurationsrichtlinie über das Portal erfolgt oder das Abonnement in Azure Security Center registriert ist, wird der Ressourcenanbieter automatisch registriert. Hierfür können Sie über das [Portal](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-portal), die [Azure PowerShell](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-powershell) oder [Azure CLI](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli) manuell registrieren. ## <a name="deploy-requirements-for-azure-virtual-machines"></a>Bereitstellen von Anforderungen für virtuelle Azure-Computer Für die Verwaltung von Einstellungen innerhalb eines Computers ist eine [VM-Erweiterung](../../../virtual-machines/extensions/overview.md) aktiviert, und der Computer muss über eine systemseitig verwaltete Identität verfügen. Die Erweiterung lädt die richtige Gastkonfigurationszuweisung sowie die entsprechenden Abhängigkeiten herunter. Die Identität wird verwendet, um den Computer zu authentifizieren, wenn Lese- und Schreibvorgänge im Gastkonfigurationsdienst ausgeführt werden. Bei Arc-fähigen Servern wird die Erweiterung nicht benötigt, da sie bereits im Arc Connected Machine-Agent enthalten ist. > [!IMPORTANT] > Für die Verwaltung von virtuellen Azure-Computern sind die Gastkonfigurationserweiterung und eine verwaltete Identität erforderlich. Wenn Sie die Erweiterung im großen Stil auf vielen Computern bereitstellen möchten, weisen Sie die Richtlinieninitiative `Deploy prerequisites to enable guest configuration policies on virtual machines` einer Verwaltungsgruppe, einem Abonnement oder einer Ressourcengruppe mit den Computern zu, die Sie verwalten möchten. 
If you prefer to deploy the extension and managed identity to a single machine, follow the guidance for each:

- [Overview of the Azure Policy guest configuration extension](../../../virtual-machines/extensions/guest-configuration.md)
- [Configure managed identities for Azure resources on a VM using the Azure portal](../../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md)

To use guest configuration packages that apply configurations, Azure VM guest configuration extension version **1.29.24** or later is required.

### <a name="limits-set-on-the-extension"></a>Limits set on the extension

To limit the extension from impacting applications running on the machine, the guest configuration agent isn't allowed to exceed 5% of CPU. This limitation exists for both built-in and custom definitions. The same is true for the guest configuration service in the Arc Connected Machine agent.

### <a name="validation-tools"></a>Validation tools

Inside the machine, the guest configuration agent uses local tools to run tasks. The following table shows the local tools used on each supported operating system. For built-in content, guest configuration handles loading these tools automatically.

|Operating system|Validation tool|Notes|
|-|-|-|
|Windows|[PowerShell Desired State Configuration](/powershell/scripting/dsc/overview/overview) v3| Side-loaded to a folder used only by Azure Policy. Won't conflict with Windows PowerShell DSC. PowerShell Core isn't added to the system path.|
|Linux|[PowerShell Desired State Configuration](/powershell/scripting/dsc/overview/overview) v3| Side-loaded to a folder used only by Azure Policy. PowerShell Core isn't added to the system path.|
|Linux|[Chef InSpec](https://www.chef.io/inspec/) | Installs Chef InSpec version 2.2.61 in the default location and adds it to the system path. Dependencies for the InSpec package, including Ruby and Python, are installed as well. |

### <a name="validation-frequency"></a>Validation frequency

The guest configuration agent checks for new or changed guest assignments every 5 minutes. Once a guest assignment is received, the settings for that configuration are rechecked on a 15-minute interval. If multiple configurations are assigned, each is evaluated sequentially. Long-running configurations affect the interval for all configurations, because the next can't run until the prior configuration has finished.

Results are sent to the guest configuration service when the audit completes. When a policy [evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers) occurs, the state of the machine is written to the guest configuration resource provider. This update causes Azure Policy to evaluate the Azure Resource Manager properties. An on-demand Azure Policy evaluation retrieves the latest value from the guest configuration resource provider. However, it doesn't trigger new activity within the machine. The status is then written to Azure Resource Graph.

## <a name="supported-client-types"></a>Supported client types

Guest configuration policy definitions are inclusive of new versions.
Older versions of operating systems available in Azure Marketplace are excluded if the guest configuration client isn't compatible. The following table shows the supported operating systems on Azure images. The ".x" text is symbolic to represent new minor versions of Linux distributions.

|Publisher|Name|Versions|
|-|-|-|
|Amazon|Linux|2|
|Canonical|Ubuntu Server|14.04 - 20.x|
|Credativ|Debian|8 - 10.x|
|Microsoft|Windows Server|2012 - 2019|
|Microsoft|Windows Client|Windows 10|
|Oracle|Oracle Linux|7.x - 8.x|
|OpenLogic|CentOS|7.3 - 8.x|
|Red Hat|Red Hat Enterprise Linux\*|7.4 - 8.x|
|SUSE|SLES|12 SP3-SP5, 15.x|

\* Red Hat CoreOS isn't supported.

Custom virtual machine images are supported by guest configuration policy definitions as long as they're one of the operating systems in the table above.

## <a name="network-requirements"></a>Network requirements

Virtual machines in Azure can use either their local network adapter or a private link to communicate with the guest configuration service.

Azure Arc machines connect using the on-premises network infrastructure to reach Azure services and report compliance status.

### <a name="communicate-over-virtual-networks-in-azure"></a>Communicate over virtual networks in Azure

To communicate with the guest configuration resource provider in Azure, machines require outbound access to Azure datacenters on port **443**. If a network in Azure doesn't allow outbound traffic, configure exceptions with [network security group](../../../virtual-network/manage-network-security-group.md#create-a-security-rule) rules.
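Such an exception can be sketched as an outbound network security group rule with the Azure CLI. This is illustrative only: it assumes the CLI is installed and signed in, and the resource group, NSG, and rule names are hypothetical.

```shell
#!/bin/sh
# Sketch only: allow outbound TCP 443 from an NSG to the service tags used
# by guest configuration. Names below are hypothetical placeholders.
RULE_PORT=443
if command -v az >/dev/null 2>&1; then
  az network nsg rule create \
    --resource-group my-rg \
    --nsg-name my-nsg \
    --name AllowGuestConfigOutbound \
    --priority 200 \
    --direction Outbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges "$RULE_PORT" \
    --destination-address-prefixes AzureArcInfrastructure Storage || true
else
  echo "Azure CLI not found; outbound rule on port $RULE_PORT not created."
fi
```

Using service tags as destination prefixes, as above, avoids hard-coding datacenter IP ranges into the rule.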
The [service tags](../../../virtual-network/service-tags-overview.md) "AzureArcInfrastructure" and "Storage" can be used to reference the guest configuration and Storage services rather than manually maintaining the [list of IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) for Azure datacenters. Both tags are required because guest configuration content packages are hosted by Azure Storage.

### <a name="communicate-over-private-link-in-azure"></a>Communicate over Private Link in Azure

Virtual machines can use [private link](../../../private-link/private-link-overview.md) for communication with the guest configuration service. Apply a tag with the name `EnablePrivateNetworkGC` and value `TRUE` to enable this feature. The tag can be applied before or after guest configuration policy definitions are applied to the machine.

Traffic is routed using the Azure [virtual public IP address](../../../virtual-network/what-is-ip-address-168-63-129-16.md) to establish a secure, authenticated channel with Azure platform resources.

### <a name="azure-arc-enabled-servers"></a>Azure Arc-enabled servers

Nodes located outside Azure that are connected by Azure Arc require connectivity to the guest configuration service. Details about network and proxy requirements are provided in the [Azure Arc documentation](../../../azure-arc/servers/overview.md).
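Applying the `EnablePrivateNetworkGC` tag described earlier can be sketched with the Azure CLI. This is illustrative only: it assumes the CLI is installed and signed in, and the resource group and VM names are hypothetical.

```shell
#!/bin/sh
# Sketch only: apply the private-link opt-in tag to a virtual machine.
# Resource group and VM names are hypothetical placeholders.
TAG_NAME="EnablePrivateNetworkGC"
TAG_VALUE="TRUE"
if command -v az >/dev/null 2>&1; then
  az vm update \
    --resource-group my-rg \
    --name my-vm \
    --set "tags.$TAG_NAME=$TAG_VALUE" || true
else
  echo "Azure CLI not found; tag $TAG_NAME=$TAG_VALUE not applied."
fi
```

Because the tag can be applied before or after policy assignment, this step can run at any point in the machine's lifecycle.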
For Arc-enabled servers in private datacenters, allow traffic using the following patterns:

- Port: Only TCP 443 is required for outbound internet access
- Global URL: `*.guestconfiguration.azure.com`

## <a name="assigning-policies-to-machines-outside-of-azure"></a>Assigning policies to machines outside of Azure

The audit policy definitions available for guest configuration include the **Microsoft.HybridCompute/machines** resource type. Any machines onboarded to [Azure Arc-enabled servers](../../../azure-arc/servers/overview.md) that are within the scope of the policy assignment are automatically included.

## <a name="managed-identity-requirements"></a>Managed identity requirements

Policy definitions in the initiative _Deploy prerequisites to enable guest configuration policies on virtual machines_ enable a system-assigned managed identity if one doesn't exist. There are two policy definitions in the initiative that manage identity creation. The IF conditions in the policy definitions ensure the correct behavior based on the current state of the machine resource in Azure.

If the machine doesn't currently have any managed identities, the effective policy is: [Add system-assigned managed identity to enable guest configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e).
If the machine currently has a user-assigned managed identity, the effective policy is: [Add system-assigned managed identity to enable guest configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6).

## <a name="availability"></a>Availability

Customers designing a highly available solution should consider the redundancy planning requirements for [virtual machines](../../../virtual-machines/availability.md), because guest assignments are extensions of machine resources in Azure. When guest assignment resources are provisioned in an Azure region that is [paired](../../../best-practices-availability-paired-regions.md), guest assignment reports are available as long as at least one region in the pair is available. If the Azure region isn't paired and becomes unavailable, it isn't possible to access reports for a guest assignment until the region is restored.

When considering an architecture for highly available applications, especially where virtual machines are provisioned in [availability sets](../../../virtual-machines/availability.md#availability-sets) behind a load balancer solution to provide high availability, it's best practice to assign the same policy definitions with the same parameters to all machines in the solution. If possible, a single policy assignment spanning all machines offers the least administrative overhead.
For machines protected by [Azure Site Recovery](../../../site-recovery/site-recovery-overview.md), ensure that machines in a secondary site are within scope of Azure Policy assignments for the same definitions, using the same parameter values, as machines in the primary site.

## <a name="data-residency"></a>Data residency

Guest configuration stores and processes customer data. By default, customer data is replicated to the [paired region](../../../best-practices-availability-paired-regions.md). For single-residency regions, all customer data is stored and processed in that region.

## <a name="troubleshooting-guest-configuration"></a>Troubleshooting guest configuration

For more information about troubleshooting guest configuration, see [Azure Policy troubleshooting](../troubleshoot/general.md).

### <a name="multiple-assignments"></a>Multiple assignments

Guest configuration policy definitions currently only support assigning the same guest assignment once per machine, even if the policy assignment uses different parameters.

### <a name="assignments-to-azure-management-groups"></a>Assignments to Azure management groups

Azure Policy definitions in the category "Guest Configuration" can be assigned to management groups only when the effect is "AuditIfNotExists". Policy definitions with the effect "DeployIfNotExists" aren't supported for assignment to management groups.
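When diagnosing assignment behavior, such as the multiple-assignments limitation described earlier, it can help to list the guest configuration assignments reported for a machine. The following sketch uses `az rest` against the guest configuration REST API; it assumes the Azure CLI is installed and signed in, and the resource IDs and the API version are placeholders to verify against the current REST API documentation.

```shell
#!/bin/sh
# Sketch only: list guest configuration assignments for a VM via the REST API.
# The resource ID segments and api-version are placeholders/assumptions.
VM_ID="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>"
API_VERSION="2020-06-25"
if command -v az >/dev/null 2>&1; then
  az rest --method get \
    --url "https://management.azure.com${VM_ID}/providers/Microsoft.GuestConfiguration/guestConfigurationAssignments?api-version=${API_VERSION}" || true
else
  echo "Azure CLI not found; cannot query assignments (api-version $API_VERSION)."
fi
```

The response lists each guest assignment and its compliance status, which makes duplicate or conflicting assignments visible per machine.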
### <a name="client-log-files"></a>Client log files

The guest configuration extension writes log files to the following locations:

Windows: `C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log`

Linux

- Azure VM: `/var/lib/GuestConfig/gc_agent_logs/gc_agent.log`
- Arc-enabled server: `/var/lib/GuestConfig/arc_policy_logs/gc_agent.log`

### <a name="collecting-logs-remotely"></a>Collecting logs remotely

The first step in troubleshooting guest configurations or modules should be to run the cmdlets following the steps in [How to test guest configuration package artifacts](../how-to/guest-configuration-create-test.md). If that isn't successful, collecting client logs can help diagnose issues.

#### <a name="windows"></a>Windows

Capture information from log files using [Azure VM Run Command](../../../virtual-machines/windows/run-command.md); the following example PowerShell script can be helpful.

```powershell
$linesToIncludeBeforeMatch = 0
$linesToIncludeAfterMatch = 10
$logPath = 'C:\ProgramData\GuestConfig\gc_agent_logs\gc_agent.log'
Select-String -Path $logPath -Pattern 'DSCEngine','DSCManagedEngine' -CaseSensitive -Context $linesToIncludeBeforeMatch,$linesToIncludeAfterMatch |
    Select-Object -Last 10
```

#### <a name="linux"></a>Linux

Capture information from log files using [Azure VM Run Command](../../../virtual-machines/linux/run-command.md); the following example Bash script can be helpful.

```bash
linesToIncludeBeforeMatch=0
linesToIncludeAfterMatch=10
logPath=/var/lib/GuestConfig/gc_agent_logs/gc_agent.log
egrep -B $linesToIncludeBeforeMatch -A $linesToIncludeAfterMatch 'DSCEngine|DSCManagedEngine' "$logPath" | tail
```

### <a name="agent-files"></a>Agent files

The guest configuration agent downloads content packages to a machine and extracts the contents.
To verify what content has been downloaded and stored, view the folder locations below.

Windows: `c:\programdata\guestconfig\configuration`

Linux: `/var/lib/GuestConfig/Configuration`

## <a name="guest-configuration-samples"></a>Guest configuration samples

Guest configuration built-in policy samples are available in the following locations:

- [Built-in policy definitions - Guest Configuration](../samples/built-in-policies.md#guest-configuration)
- [Built-in initiatives - Guest Configuration](../samples/built-in-initiatives.md#guest-configuration)
- [Azure Policy samples GitHub repo](https://github.com/Azure/azure-policy/tree/master/built-in-policies/policySetDefinitions/Guest%20Configuration)

## <a name="next-steps"></a>Next steps

- Set up a custom guest configuration package [development environment](../how-to/guest-configuration-create-setup.md).
- [Create a package artifact](../how-to/guest-configuration-create.md) for guest configuration.
- [Test the package artifact](../how-to/guest-configuration-create-test.md) from your development environment.
- Use the `GuestConfiguration` module to [create an Azure Policy definition](../how-to/guest-configuration-create-definition.md) for at-scale management of your environment.
- [Assign your custom policy definition](../assign-policy-portal.md) using the Azure portal.
- Learn how to view [compliance details for guest configuration](../how-to/determine-non-compliance.md#compliance-details-for-guest-configuration) policy assignments.