---
title: "Fast Incremental Community Detection on Dynamic Graphs"
date: 2015-01-01
publishDate: 2019-09-10T12:18:34.279946Z
authors: ["Anita Zakrzewska", "David A. Bader"]
publication_types: ["1"]
abstract: "Community detection, or graph clustering, is the problem of finding dense groups in a graph. This is important for a variety of applications, from social network analysis to biological interactions. While most work in community detection has focused on static graphs, real data is usually dynamic, changing over time. We present a new algorithm for dynamic community detection that incrementally updates clusters when the graph changes. The method is based on a greedy, modularity maximizing static approach and stores the history of merges in order to backtrack. On synthetic graph tests with known ground truth clusters, it can detect a variety of structural community changes for both small and large batches of edge updates."
featured: false
publication: "*Parallel Processing and Applied Mathematics - 11th International Conference, PPAM 2015, Krakow, Poland, September 6-9, 2015. Revised Selected Papers, Part I*"
url_pdf: "https://doi.org/10.1007/978-3-319-32149-3_20"
doi: "10.1007/978-3-319-32149-3_20"
---
<div align="center">
<h1><code>ambient-authority</code></h1>
<p>
<strong>Ambient Authority</strong>
</p>
<p>
<a href="https://github.com/sunfishcode/ambient-authority/actions?query=workflow%3ACI"><img src="https://github.com/sunfishcode/ambient-authority/workflows/CI/badge.svg" alt="Github Actions CI Status" /></a>
<a href="https://crates.io/crates/ambient-authority"><img src="https://img.shields.io/crates/v/ambient-authority.svg" alt="crates.io page" /></a>
<a href="https://docs.rs/ambient-authority"><img src="https://docs.rs/ambient-authority/badge.svg" alt="docs.rs docs" /></a>
</p>
</div>
In capability-based security contexts, *ambient authority* means anything a
program can do that interacts with the outside world that isn't represented by
a handle.
This crate defines an empty function, [`ambient_authority`], which returns a
value of type [`AmbientAuthority`]. This is an empty type used in function
signatures to declare that they use ambient authority. When an API uses
`AmbientAuthority` in all functions that use ambient authority, one can quickly
locate all the calls to such functions by scanning for calls to
`ambient_authority`.
To use the `AmbientAuthority` type in an API:
- Add an [`AmbientAuthority`] argument at the end of the argument list of any
  function that uses ambient authority, and add a `# Ambient Authority`
section in the documentation comments for such functions explaining their
use of ambient authority.
- Re-export the [`ambient_authority`] function and `AmbientAuthority` type
from this crate, so that users can easily use the same version.
- Ensure that all other `pub` functions avoid using ambient authority,
including mutable static state such as static `Atomic`, `Cell`, `RefCell`,
`Mutex`, `RwLock`, or similar state, including `once_cell` or `lazy_static`
state with initialization that uses ambient authority.
For example, see the [cap-std] crate's API, which follows these guidelines.
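The convention is easiest to see in miniature. The sketch below is a Python analogue for illustration only — the real crate is Rust, and the names here mirror, but are not, its actual items:

```python
import os

class AmbientAuthority:
    """Empty marker type: possessing a value of it is the grep-able signal."""
    __slots__ = ()

def ambient_authority() -> AmbientAuthority:
    return AmbientAuthority()

def current_dir(_auth: AmbientAuthority) -> str:
    # Reads global process state (not reached through any handle),
    # so the function demands the marker argument.
    return os.getcwd()

# Call sites are now easy to audit: search the codebase for `ambient_authority()`.
print(current_dir(ambient_authority()))
```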
One of the cool things about capability-oriented APIs is that programs don't
need to be pure to take advantage of them. That said, for programs which do
wish to aim for purity, this repository has a clippy configuration which can
help. To use it:
- Manually ensure that all immediate dependencies follow the above convention.
- Copy the [clippy.toml] file into the top level source directory, add
`#![deny(clippy::disallowed_method)]` to the root module (main.rs or lib.rs),
and run `cargo clippy` or equivalent.
[cap-std]: https://docs.rs/cap-std/*/cap_std/
[clippy.toml]: https://github.com/sunfishcode/ambient-authority/blob/main/clippy.toml
[`AmbientAuthority`]: https://docs.rs/ambient-authority/latest/ambient_authority/struct.AmbientAuthority.html
[`ambient_authority`]: https://docs.rs/ambient-authority/latest/ambient_authority/func.ambient_authority.html
# denis.bunchenko.com
[](https://opensource.org/licenses/MIT) [](https://app.netlify.com/sites/denisbunchenko/deploys)

[](https://ko-fi.com/D1D06HRNB)


Denis's personal website, running on Gatsby, React, and Node.js.
**Note**: The source for this site was not created to be a template or theme, but for my own use. Feel free to take whatever inspiration from it that you want, but this code was not written with the intention of being cloned and deployed. As such, I do not provide support or guidance for doing that.
## Social
[](https://medium.com/@den.on.by/)
[](https://www.instagram.com/so_fucking_sorry/)
[](https://www.linkedin.com/in/denis-bunchenko-6276a0b6/)
[](https://www.upwork.com/freelancers/~01d492b72c70067180)
Also reachable via den.on.by@gmail.com or awesomedevdev@yandex.com.
## License
This project is open source and available under the [MIT License](LICENSE).
# Spectral Functions
[TOC]
## Fourier Transform Functions
TensorFlow provides several operations that you can use to add discrete
Fourier transform functions to your graph.
* @{tf.spectral.fft}
* @{tf.spectral.ifft}
* @{tf.spectral.fft2d}
* @{tf.spectral.ifft2d}
* @{tf.spectral.fft3d}
* @{tf.spectral.ifft3d}
* @{tf.spectral.rfft}
* @{tf.spectral.irfft}
* @{tf.spectral.rfft2d}
* @{tf.spectral.irfft2d}
* @{tf.spectral.rfft3d}
* @{tf.spectral.irfft3d}
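All of these compute the discrete Fourier transform or its inverse. As a reference point, here is a naive pure-Python version of the same sums; the TensorFlow ops are fast FFT implementations of exactly these definitions, with the `rfft` variants specialized for real-valued input:

```python
import cmath

def dft(xs):
    # Naive O(N^2) discrete Fourier transform: X[k] = sum_i x[i] * e^(-2*pi*j*k*i/N)
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * i / n) for i, x in enumerate(xs))
            for k in range(n)]

def idft(cs):
    # Inverse transform: x[i] = (1/N) * sum_k X[k] * e^(2*pi*j*k*i/N)
    n = len(cs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * i / n) for i, c in enumerate(cs)) / n
            for k in range(n)]

signal = [0.0, 1.0, 0.0, -1.0]
spectrum = dft(signal)          # what fft computes (ifft inverts it)
roundtrip = idft(spectrum)
assert all(abs(r - s) < 1e-9 for r, s in zip(roundtrip, signal))
```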
# diffInline
Array comparison that produces a minimal diff, oriented toward display.

The demo is here: http://ser-gen.github.io/diffInline/

Text is compared line by line. It can be added manually or by dragging and dropping text files.
---
authors:
- Chris Bailey
date: "2014-08-21T17:30:22+00:00"
dsq_thread_id:
- 2940447125
keywords:
- ssl
- https
- announcement
- google
tags:
- SSL/TLS
- Announcement
- Google
title: Google to Give Priority Ranking to SSL Enabled Sites
---
[Google’s announcement](http://googlewebmastercentral.blogspot.com/2014/08/https-as-ranking-signal.html) that it will give priority ranking to SSL enabled sites is a key milestone for increased use of SSL on the Internet.
Google announced a change to its ranking algorithm to include use of SSL on the site as a “very lightweight [positive] signal”. Although this might not have an immediate impact on website owners/operators that are not currently using SSL, this is still an important signal indicating everyone should be prepared to encrypt all their websites if they want to remain relevant.
Google had stated its intentions at its [IO 2014 conference on HTTPS Everywhere](https://www.youtube.com/watch?v=cBhZ6S0PFCY), stating that all sites should use SSL because it provides:
* Authentication: Information on whom I am talking to
* Data Integrity: Information on whether anyone has tampered with site data
* Encryption: Assurance that no one else can see my conversation
Now that Google has put its weight behind these SSL benefits, this algorithm change is likely only the first step in a series of steps to promote HTTPS Everywhere. We at the [CA Security Council](https://casecurity.org/) think it’s a good start.
---
layout: post
title: "Regex is fun"
date: 2019-11-05 11:46:35 -0500
permalink: regex_is_fun
---
Regular Expression or Regex is a tool that you can use to search for a specific pattern within any text. It can be used in almost any programming language. I put together a little cheat sheet for myself here.
Here's a basic list below:
* [abc.] It matches only one of the specified characters, i.e. ‘a’, ‘b’, ‘c’, or a literal ‘.’
* [a-j] It includes all the characters from a to j.
* [a-z] It includes all lowercase characters from a to z.
* [^az] It includes all characters except a and z.
* \w It matches word characters, i.e. [a-zA-Z0-9_]
* \d It matches for the digits like [0-9]
* [ab][^cde] It matches ‘a’ or ‘b’ followed by any character except c, d or e.
* \s It matches whitespace characters, i.e. space, form feed, tab, newline, carriage return and vertical tab ([ \f\t\n\r\v])
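In Python those basics look like this (`re.fullmatch` returns a match object, which is truthy):

```python
import re

assert re.fullmatch(r"[abc.]", "b")           # one of the listed characters
assert re.fullmatch(r"[abc.]", ".")           # '.' is literal inside a class
assert re.fullmatch(r"[a-j]+", "badge")       # only letters a through j
assert not re.fullmatch(r"[^az]", "a")        # anything except a and z
assert re.fullmatch(r"\w\d", "a7")            # word character, then digit
assert re.fullmatch(r"[ab][^cde]", "ax")      # a or b, then anything but c/d/e
assert not re.fullmatch(r"[ab][^cde]", "ac")
assert re.fullmatch(r"\s", "\t")              # whitespace
```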
More extensive list:
**Anchors**
* ^ Start of string, or start of line in multi-line pattern
* \A Start of string
* $ End of string, or end of line in multi-line pattern
* \Z End of string
* \b Word boundary
* \B Not word boundary
* \< Start of word
* \> End of word
**Quantifiers**
* * 0 or more
* {3} Exactly 3
* + 1 or more
* {3,} 3 or more
* ? 0 or 1
* {3,5} 3, 4 or 5
* (Add a ? to a quantifier to make it ungreedy.)
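A few of these in action, including the greedy-versus-ungreedy difference the note describes:

```python
import re

assert re.fullmatch(r"ab*", "a")           # * : zero or more
assert re.fullmatch(r"ab+", "abb")         # + : one or more
assert not re.fullmatch(r"ab+", "a")
assert re.fullmatch(r"ab?", "a")           # ? : zero or one
assert re.fullmatch(r"a{3}", "aaa")        # exactly 3
assert re.fullmatch(r"a{3,5}", "aaaa")     # 3, 4 or 5

# A trailing ? makes a quantifier ungreedy (match as little as possible):
assert re.match(r"<.+>", "<a><b>").group() == "<a><b>"   # greedy
assert re.match(r"<.+?>", "<a><b>").group() == "<a>"     # ungreedy
```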
**Groups and Ranges**
* . Any character except new line (\n)
* (a|b) a or b
* (...) Group
* (?:...) Passive (non-capturing) group
* [abc] Range (a or b or c)
* [^abc] Not (a or b or c)
* [a-q] Lower case letter from a to q
* [A-Q] Upper case letter from A to Q
* [0-7] Digit from 0 to 7
* \x Group/subpattern number "x"
* (Ranges are inclusive).
**Pattern Modifiers**
* g Global match
* i * Case-insensitive
* m * Multiple lines
* s * Treat string as single line
* x * Allow comments and whitespace in pattern
* e * Evaluate replacement
* U * Ungreedy pattern
* (*PCRE modifier)
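Python exposes these modifiers as flags passed to the call rather than trailing letters:

```python
import re

assert re.search(r"hello", "Say HELLO", re.IGNORECASE)                  # i
assert re.findall(r"^\w+", "one\ntwo", re.MULTILINE) == ["one", "two"]  # m: ^ matches each line
assert re.search(r"a.b", "a\nb", re.DOTALL)                             # s: . also matches newline
```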
**String Replacement**
* $n nth non-passive group
* $2 "xyz" in /^(abc(xyz))$/
* $1 "xyz" in /^(?:abc)(xyz)$/
* $` Before matched string
* $' After matched string
* $+ Last matched string
* $& Entire matched string
* Some regex implementations use \ instead of $.
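Python's re module is one of those implementations: it uses \1 (or \g<1>) in replacement strings where the table above writes $1:

```python
import re

# Capture year, month and day, then rearrange them in the replacement:
rearranged = re.sub(r"(\d{4})-(\d{2})-(\d{2})", r"\3/\2/\1", "2019-11-05")
assert rearranged == "05/11/2019"

# A passive (?:...) group matches but does not capture, so (xyz) is group 1:
m = re.match(r"(?:abc)(xyz)", "abcxyz")
assert m.group(1) == "xyz"
```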
**Character Classes**
* \c Control character
* \s White space
* \S Not white space
* \d Digit
* \D Not digit
* \w Word
* \W Not word
* \x Hexadecimal digit
* \O Octal digit
**POSIX**
* [:upper:] Upper case letters
* [:lower:] Lower case letters
* [:alpha:] All letters
* [:alnum:] Digits and letters
* [:digit:] Digits
* [:xdigit:] Hexadecimal digits
* [:punct:] Punctuation
* [:blank:] Space and tab
* [:space:] Blank characters
* [:cntrl:] Control characters
* [:graph:] Printed characters
* [:print:] Printed characters and spaces
* [:word:] Digits, letters and underscore
**Assertions**
* ?= Lookahead assertion
* ?! Negative lookahead
* ?<= Lookbehind assertion
* ?<! Negative lookbehind
* ?> Once-only Subexpression
* ?() Condition [if then]
* ?()| Condition [if then else]
* ?# Comment
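Lookarounds assert context without consuming characters, which is why the matched text below never includes the surrounding context:

```python
import re

assert re.search(r"\d+(?= dollars)", "50 dollars").group() == "50"   # lookahead
assert re.search(r"(?<=\$)\d+", "price: $25").group() == "25"        # lookbehind
assert re.search(r"\d+(?! dollars)", "50 euros").group() == "50"     # negative lookahead
```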
**Escape Sequences**
* \ Escape following character
* \Q Begin literal sequence
* \E End literal sequence
"Escaping" is a way of treating characters which have a special meaning in regular expressions literally, rather than as special characters.
**Special Characters**
* \n New line
* \r Carriage return
* \t Tab
* \v Vertical tab
* \f Form feed
* \xxx Octal character xxx
* \xhh Hex character hh
+++
fragment = "sm-items"
weight = 800
background = "light"
title = "Supported frameworks"
+++
# Migration-sequelize
A sample project demonstrating database migrations with Sequelize.
# Example Templates with Remediation Action<a name="templateswithremediation"></a>
## Operational Best Practices For Amazon DynamoDB with Remediation<a name="operational-best-practices-for-amazon-dynamodb-with-remediation"></a>
```
################################################################################
#
# Conformance Pack:
# Operational Best Practices for Amazon DynamoDB, with Remediation
#
# See Parameters section for names and descriptions of required parameters.
#
################################################################################
Parameters:
SnsTopicForPublishNotificationArn:
Description: The ARN of the SNS topic to which the notification about the auto-remediation status should be published.
Type: String
Resources:
DynamoDbAutoscalingEnabled:
Properties:
ConfigRuleName: DynamoDbAutoscalingEnabled
Description: "This rule checks whether Auto Scaling is enabled on your DynamoDB tables. Optionally you can set the read and write capacity units for the table."
MaximumExecutionFrequency: Six_Hours
Scope:
ComplianceResourceTypes:
- "AWS::DynamoDB::Table"
Source:
Owner: AWS
SourceIdentifier: DYNAMODB_AUTOSCALING_ENABLED
Type: "AWS::Config::ConfigRule"
DynamoDbAutoscalingEnabledManualRemediation:
DependsOn: DynamoDbAutoscalingEnabled
Type: 'AWS::Config::RemediationConfiguration'
Properties:
ConfigRuleName: DynamoDbAutoscalingEnabled
ResourceType: "AWS::DynamoDB::Table"
TargetId: "AWS-PublishSNSNotification"
TargetType: "SSM_DOCUMENT"
TargetVersion: "1"
Parameters:
AutomationAssumeRole:
StaticValue:
Values:
- "arn:aws:iam::Account ID:role/PublishSnsAutomationExecutionRole"
Message:
StaticValue:
Values:
- "A table with no autoscaling configuration found"
TopicArn:
StaticValue:
Values:
- Ref: SnsTopicForPublishNotificationArn
DynamoDbTableEncryptionEnabled:
Properties:
ConfigRuleName: DynamoDbTableEncryptionEnabled
Description: "Checks whether the Amazon DynamoDB tables are encrypted and checks their status. The rule is compliant if the status is enabled or enabling."
Scope:
ComplianceResourceTypes:
- "AWS::DynamoDB::Table"
Source:
Owner: AWS
SourceIdentifier: DYNAMODB_TABLE_ENCRYPTION_ENABLED
Type: "AWS::Config::ConfigRule"
DynamoDbTableEncryptionEnabledAutoRemediation:
DependsOn: DynamoDbTableEncryptionEnabled
Type: 'AWS::Config::RemediationConfiguration'
Properties:
ConfigRuleName: DynamoDbTableEncryptionEnabled
ResourceType: "AWS::DynamoDB::Table"
TargetId: "AWS-PublishSNSNotification"
TargetType: "SSM_DOCUMENT"
TargetVersion: "1"
Parameters:
AutomationAssumeRole:
StaticValue:
Values:
- "arn:aws:iam::Account ID:role/PublishSnsAutomationExecutionRole"
Message:
StaticValue:
Values:
- "A table with no encryption enabled is found"
TopicArn:
StaticValue:
Values:
- Ref: SnsTopicForPublishNotificationArn
ExecutionControls:
SsmControls:
ConcurrentExecutionRatePercentage: 10
ErrorPercentage: 10
Automatic: True
MaximumAutomaticAttempts: 10
RetryAttemptSeconds: 600
DynamoDbThroughputLimitCheck:
Properties:
ConfigRuleName: DynamoDbThroughputLimitCheck
Description: "Checks whether provisioned DynamoDB throughput is approaching the maximum limit for your account."
MaximumExecutionFrequency: Six_Hours
Source:
Owner: AWS
SourceIdentifier: DYNAMODB_THROUGHPUT_LIMIT_CHECK
Type: "AWS::Config::ConfigRule"
```
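When a pack like the one above is deployed programmatically (for example via the AWS Config PutConformancePack API), the values declared under `Parameters` are passed as name/value pairs. The following is a minimal, stdlib-only Python sketch of that request shape; the pack name and topic ARN are made-up examples, not values from this guide:

```python
def to_input_parameters(params):
    # PutConformancePack expects ConformancePackInputParameters as a list of
    # {"ParameterName": ..., "ParameterValue": ...} dictionaries.
    return [
        {"ParameterName": name, "ParameterValue": value}
        for name, value in sorted(params.items())
    ]

request = {
    "ConformancePackName": "dynamodb-best-practices",  # assumed name
    "ConformancePackInputParameters": to_input_parameters({
        # Example ARN only; substitute your own SNS topic.
        "SnsTopicForPublishNotificationArn":
            "arn:aws:sns:us-east-1:123456789012:config-notify",
    }),
}
assert request["ConformancePackInputParameters"][0]["ParameterName"] == \
    "SnsTopicForPublishNotificationArn"
```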
## Operational Best Practices For Amazon S3 with Remediation<a name="operational-best-practices-for-amazon-s3-with-remediation"></a>
```
################################################################################
#
# Conformance Pack:
# Operational Best Practices for Amazon S3 with Remediation
#
# See Parameters section for names and descriptions of required parameters.
#
################################################################################
Parameters:
S3TargetBucketNameForEnableLogging:
Description: The target s3 bucket where the logging should be enabled.
Type: String
Resources:
S3BucketPublicReadProhibited:
Type: AWS::Config::ConfigRule
Properties:
ConfigRuleName: S3BucketPublicReadProhibited
Description: >-
Checks that your Amazon S3 buckets do not allow public read access.
The rule checks the Block Public Access settings, the bucket policy, and the
bucket access control list (ACL).
Scope:
ComplianceResourceTypes:
- "AWS::S3::Bucket"
Source:
Owner: AWS
SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
MaximumExecutionFrequency: Six_Hours
S3PublicReadRemediation:
DependsOn: S3BucketPublicReadProhibited
Type: 'AWS::Config::RemediationConfiguration'
Properties:
ConfigRuleName: S3BucketPublicReadProhibited
ResourceType: "AWS::S3::Bucket"
TargetId: "AWS-DisableS3BucketPublicReadWrite"
TargetType: "SSM_DOCUMENT"
TargetVersion: "1"
Parameters:
AutomationAssumeRole:
StaticValue:
Values:
- arn:aws:iam::Account ID:role/S3OperationsAutomationsExecutionRole
S3BucketName:
ResourceValue:
Value: "RESOURCE_ID"
ExecutionControls:
SsmControls:
ConcurrentExecutionRatePercentage: 10
ErrorPercentage: 10
Automatic: True
MaximumAutomaticAttempts: 10
RetryAttemptSeconds: 600
S3BucketPublicWriteProhibited:
Type: "AWS::Config::ConfigRule"
Properties:
ConfigRuleName: S3BucketPublicWriteProhibited
Description: "Checks that your Amazon S3 buckets do not allow public write access. The rule checks the Block Public Access settings, the bucket policy, and the bucket access control list (ACL)."
Scope:
ComplianceResourceTypes:
- "AWS::S3::Bucket"
Source:
Owner: AWS
SourceIdentifier: S3_BUCKET_PUBLIC_WRITE_PROHIBITED
MaximumExecutionFrequency: Six_Hours
S3PublicWriteRemediation:
DependsOn: S3BucketPublicWriteProhibited
Type: 'AWS::Config::RemediationConfiguration'
Properties:
ConfigRuleName: S3BucketPublicWriteProhibited
ResourceType: "AWS::S3::Bucket"
TargetId: "AWS-DisableS3BucketPublicReadWrite"
TargetType: "SSM_DOCUMENT"
TargetVersion: "1"
Parameters:
AutomationAssumeRole:
StaticValue:
Values:
- arn:aws:iam::Account ID:role/S3OperationsAutomationsExecutionRole
S3BucketName:
ResourceValue:
Value: "RESOURCE_ID"
ExecutionControls:
SsmControls:
ConcurrentExecutionRatePercentage: 10
ErrorPercentage: 10
Automatic: True
MaximumAutomaticAttempts: 10
RetryAttemptSeconds: 600
S3BucketReplicationEnabled:
Type: "AWS::Config::ConfigRule"
Properties:
ConfigRuleName: S3BucketReplicationEnabled
Description: "Checks whether the Amazon S3 buckets have cross-region replication enabled."
Scope:
ComplianceResourceTypes:
- "AWS::S3::Bucket"
Source:
Owner: AWS
SourceIdentifier: S3_BUCKET_REPLICATION_ENABLED
S3BucketSSLRequestsOnly:
Type: "AWS::Config::ConfigRule"
Properties:
ConfigRuleName: S3BucketSSLRequestsOnly
Description: "Checks whether S3 buckets have policies that require requests to use Secure Socket Layer (SSL)."
Scope:
ComplianceResourceTypes:
- "AWS::S3::Bucket"
Source:
Owner: AWS
SourceIdentifier: S3_BUCKET_SSL_REQUESTS_ONLY
S3BucketServerSideEncryptionEnabled:
Type: "AWS::Config::ConfigRule"
Properties:
ConfigRuleName: S3BucketServerSideEncryptionEnabled
Description: "Checks that your Amazon S3 bucket either has S3 default encryption enabled or that the S3 bucket policy explicitly denies put-object requests without server side encryption."
Scope:
ComplianceResourceTypes:
- "AWS::S3::Bucket"
Source:
Owner: AWS
SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED
S3BucketServerSideEncryptionEnabledRemediation:
DependsOn: S3BucketServerSideEncryptionEnabled
Type: 'AWS::Config::RemediationConfiguration'
Properties:
ConfigRuleName: S3BucketServerSideEncryptionEnabled
ResourceType: "AWS::S3::Bucket"
TargetId: "AWS-EnableS3BucketEncryption"
TargetType: "SSM_DOCUMENT"
TargetVersion: "1"
Parameters:
AutomationAssumeRole:
StaticValue:
Values:
- arn:aws:iam::Account ID:role/S3OperationsAutomationsExecutionRole
BucketName:
ResourceValue:
Value: "RESOURCE_ID"
SSEAlgorithm:
StaticValue:
Values:
- "AES256"
ExecutionControls:
SsmControls:
ConcurrentExecutionRatePercentage: 10
ErrorPercentage: 10
Automatic: True
MaximumAutomaticAttempts: 10
RetryAttemptSeconds: 600
S3BucketLoggingEnabled:
Type: "AWS::Config::ConfigRule"
Properties:
ConfigRuleName: S3BucketLoggingEnabled
Description: "Checks whether logging is enabled for your S3 buckets."
Scope:
ComplianceResourceTypes:
- "AWS::S3::Bucket"
Source:
Owner: AWS
SourceIdentifier: S3_BUCKET_LOGGING_ENABLED
S3BucketLoggingEnabledRemediation:
DependsOn: S3BucketLoggingEnabled
Type: 'AWS::Config::RemediationConfiguration'
Properties:
ConfigRuleName: S3BucketLoggingEnabled
ResourceType: "AWS::S3::Bucket"
TargetId: "AWS-ConfigureS3BucketLogging"
TargetType: "SSM_DOCUMENT"
TargetVersion: "1"
Parameters:
AutomationAssumeRole:
StaticValue:
Values:
- arn:aws:iam::Account ID:role/S3OperationsAutomationsExecutionRole
BucketName:
ResourceValue:
Value: "RESOURCE_ID"
TargetBucket:
StaticValue:
Values:
- Ref: S3TargetBucketNameForEnableLogging
GrantedPermission:
StaticValue:
Values:
- "FULL_CONTROL"
GranteeType:
StaticValue:
Values:
- "Group"
GranteeUri:
StaticValue:
Values:
- "http://acs.amazonaws.com/groups/s3/LogDelivery"
ExecutionControls:
SsmControls:
ConcurrentExecutionRatePercentage: 10
ErrorPercentage: 10
Automatic: True
MaximumAutomaticAttempts: 10
RetryAttemptSeconds: 600
```
# Rocket.Chat Windows Installation Guide
_Note: This is a community supported installation method. You can discuss about this in the [forum thread](https://forums.rocket.chat/t/broken-windows-server-2012-r2-installation-guide/413/2)._
## How to install Rocket.Chat on Windows Server 2012 R2
The following guide walks through the steps for installing Rocket.Chat on Windows Server 2012 R2.
**Important**: Production deployment using any client versions of Windows, such as Windows 7, 8, or 10 is not supported. However, beta deployment for Windows 10 Pro (or Enterprise or Education) version is available via Docker for Windows see [Installing on Windows 10 Pro 64bit with Docker for Windows](../windows-10-pro/).
Mobile clients (iOS and Android) are currently not supported using this method of deployment. However, [Windows 10 Pro 64bits with Docker for Windows](../windows-10-pro/) based deployment should support mobile clients.
**Note**: The steps will include all dependencies. If a particular dependency has already been installed, please skip to any relevant configuration section.
### Binary Dependencies
To start, go to `Control Panel -> Programs and Features` and uninstall each of the following (if present):
- Microsoft Visual C++ 2010 x64 Redistributable
- Microsoft Visual C++ 2010 x86 Redistributable
Then, download and install each of the following **in order**:
1. [Python 2.7.3](https://www.python.org/ftp/python/2.7.3/python-2.7.3.msi) (if you have Python 3.x already installed, just leave it, both can coexist)
2. [Visual C++ 2010 Express](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads) or Visual Studio 2010
3. [Windows SDK 7.1](http://www.microsoft.com/en-us/download/details.aspx?id=8279)
4. [Visual Studio 2010 SP1](https://www.microsoft.com/en-us/download/details.aspx?id=34677)
5. [Visual C++ 2010 SP1 Compiler Update for the Windows SDK 7.1](http://www.microsoft.com/en-us/download/details.aspx?id=4422)
6. [GraphicsMagick](http://www.graphicsmagick.org/INSTALL-windows.html#prerequisites)
7. [Ghostscript](http://ghostscript.com/download/gsdnld.html) (Optional for PDF rendering)
### MongoDB
- Download [MongoDB](https://www.mongodb.org/downloads#production). (Note: This can be done on a separate computer)
- Run the installer and choose `Custom`
- Click the `Browse` button to select desired install path, such as `C:\MongoDB`
- Continue through the rest of the installer.
- Now open Notepad and enter the following, replacing [Data Path] with where the database will be stored, such as `C:\MongoDB\data`
```
systemLog:
    destination: file
    path: [Data Path]\logs\mongod.log
storage:
    dbPath: [Data Path]\db
replication:
    replSetName: rs1
```
- Save the file as `[Installation Path]\mongod.cfg` where [Installation Path] is the location you installed Mongo
- Open the Command Prompt by pressing `Windows Key + R` and then entering `cmd`, right click on Command Prompt and select `Run as administrator`
- Now enter the following:
```
> mkdir [Data Path]
> cd [Data Path]
> mkdir [Data Path]\db
> mkdir [Data Path]\logs
> cd [Installation Path]\bin
> mongod.exe --config "[Installation Path]\mongod.cfg" --install
> net start MongoDB
> mongo.exe
> rs.initiate()
> exit
```
> _Note: Do not include the `>`_
### Rocket.Chat files
1. Download the latest Rocket.Chat **Windows Release** from the Rocket.Chat releases page (not available anymore)
2. Using an archive utility such as [7zip](http://www.7-zip.org/) or [tar for Windows](http://gnuwin32.sourceforge.net/packages/gtar.htm), extract the tar.gz file
3. Place the files in the desired install path, such as `C:\RocketChat`
### Node.js
Rocket.Chat is built on top of Node.js v8.9.3, so we need to install that first.
1. Download [Node.js v8.9.3](https://nodejs.org/dist/v8.9.3/node-v8.9.3-x86.msi)
2. Run the installer with all default options.
### Node Packages
1. Open the *Windows SDK 7.1 Command Prompt* by pressing Start, typing its name, and clicking on it in the search results (Note: It needs to be the SDK Command Prompt)
2. Now enter the following, replacing:
- [Installation Path] with the location you placed the Rocket.Chat files
- [Port to Use] with the port for the Rocket.Chat server to use, such as `3000`
- [Rocket.Chat URL] with the URL you will use for Rocket.Chat, such as `rocketchat.example.com`
   - [Address to Mongo] with the IP address of your MongoDB server. (NOTE: If you didn't install Mongo on another computer, use `localhost`)
- [MongoDB Database] with the name of the database you would like to use, such as `rocketchat`
```
> SetEnv /x86
> cd [Installation Path]
> npm install nave -g
> npm install node-windows
> npm config set python /Python27/python.exe --global
> npm config set msvs_version 2010 --global
> set PORT=[Port to Use]
> set ROOT_URL=[Rocket.Chat URL]
> set MONGO_URL=mongodb://[Address to Mongo]:27017/[MongoDB Database]
> set MONGO_OPLOG_URL=mongodb://[Address to Mongo]:27017/local
> set SCRIPT_PATH=[Installation Path]\main.js
> cd programs\server
> npm install
> cd ../..
> node rocket.service.js install
> net start Rocket.Chat
```
> Note: If missing, rocket.service.js can be found [here](https://github.com/Sing-Li/bbug/blob/master/images/rocket.service.js)
> _Note: Do not include the `>`_
### Verifying the Install
1. View the installed services by pressing `Windows Key + R` and then entering `services.msc`
2. Find `Rocket.Chat` in the list. Its status should be `Running`
3. Open a browser and, in the address bar, enter `http://localhost:[Port Used]`
4. Rocket.Chat should load.
### Mobile Support
In order to use Rocket.Chat on mobile devices, you must also configure your installation to support SSL with a valid certificate.
### IIS Configuration (Optional)
The following steps will detail integrating Rocket.Chat with IIS.
#### Get UrlRewrite and ARR
1. Open IIS, click on your server, and then click on `Get New Web Platform Components` in the right hand menu
2. Install the Web Platform Installer if prompted
3. Once open, search for `Routing` in the upper right search box
4. Click on the `Add` button for Application Request Routing 3.0 and then `Install`
5. Once ARR and UrlRewrite are installed, close and reopen IIS
#### Adding the SSL Certificate
1. Click on your server in the left menu and then click on `Server Certificates`
2. In the right hand menu, click on `Import...`
3. Find your SSL Certificate and enter your password
4. Click `Ok`
#### Setting up the Rocket.Chat site
1. Create a new Web Site and bind it to the [Rocket.Chat URL] previously specified.
**NOTE: If you plan on using the Rocket.Chat mobile apps, you must use HTTPS. HTTP is optional only for PCs**
2. For the physical path, point it to an empty folder in your webroot. (Note: There will be no default document here, just the web.config.)
3. Press `Ok`
4. Select the new Rocket.Chat site and click on `Url Rewrite`
5. In the upper right hand menu, select `Add Rule(s)...` and then `Reverse Proxy`
6. Enter `http://localhost:[Port Used]` in the top box and hit `Ok` (Note: This must remain HTTP even if you are using HTTPS)
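For reference, the Reverse Proxy wizard writes its rule into a `web.config` file in the site's physical path. A minimal version looks roughly like the following — this is an illustrative sketch assuming `[Port Used]` is `3000`; the rule name and exact attributes in your generated file may differ:
```
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="ReverseProxyInboundRule1" stopProcessing="true">
          <match url="(.*)" />
          <action type="Rewrite" url="http://localhost:3000/{R:1}" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```
If the site serves errors after setup, this file is the first place to check.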
#### Troubleshooting
- If the Rocket.Chat service isn't running, check the Event Viewer under `Windows Logs\Application` for errors from the Rocket.Chat service.
- If the page didn't load, check the log files in [Data Path]\logs for clues.
- If that doesn't help, or if you had any other problems during the process, try searching our [GitHub Issues](https://github.com/RocketChat/Rocket.Chat/issues)
- If you are still having problems, visit the [#support](https://open.rocket.chat/channel/support) channel and we'll be happy to help.
| 46.512048 | 324 | 0.746147 | eng_Latn | 0.94465 |
e198c4edabf7eb7503341c5f6de0d7a93326c9e3 | 1,738 | md | Markdown | hugo-source/content/output/sahakAra nagar, bengaLUru/SOLSTICE_POST_DARK_10_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2021_monthly/2021-06/2021-06-29.md | Prabhakaran-cbe/jyotisha | 689327c5944c6cc84b7e58af4deae2a4ebe94d7b | [
"MIT"
] | 40 | 2017-10-01T04:22:35.000Z | 2020-11-30T03:47:57.000Z | hugo-source/content/output/sahakAra nagar, bengaLUru/SOLSTICE_POST_DARK_10_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2021_monthly/2021-06/2021-06-29.md | Prabhakaran-cbe/jyotisha | 689327c5944c6cc84b7e58af4deae2a4ebe94d7b | [
"MIT"
] | 71 | 2017-08-27T13:54:06.000Z | 2020-12-11T01:16:47.000Z | hugo-source/content/output/sahakAra nagar, bengaLUru/SOLSTICE_POST_DARK_10_ADHIKA__CHITRA_AT_180/gregorian/2000s/2020s/2021_monthly/2021-06/2021-06-29.md | Prabhakaran-cbe/jyotisha | 689327c5944c6cc84b7e58af4deae2a4ebe94d7b | [
"MIT"
] | 23 | 2017-08-27T11:54:41.000Z | 2020-11-14T19:41:58.000Z | +++
title = "2021-06-29"
+++
## श्रावणः-05-20,कुम्भः-शतभिषक्🌛🌌◢◣मिथुनम्-आर्द्रा-03-15🌌🌞◢◣शुचिः-04-09🪐🌞मङ्गलः
- Indian civil date: 1943-04-08, Islamic: 1442-11-19 Ḏū al-Qaʿdah
- संवत्सरः - प्लवः
- वर्षसङ्ख्या 🌛- शकाब्दः 1943, विक्रमाब्दः 2078, कलियुगे 5122
___________________
- 🪐🌞**ऋतुमानम्** — ग्रीष्मऋतुः दक्षिणायनम्
- 🌌🌞**सौरमानम्** — ग्रीष्मऋतुः उत्तरायणम्
- 🌛**चान्द्रमानम्** — वर्षऋतुः श्रावणः
___________________
## खचक्रस्थितिः
- |🌞-🌛|**तिथिः** — कृष्ण-पञ्चमी►13:23; कृष्ण-षष्ठी►
- 🌌🌛**नक्षत्रम्** — शतभिषक्►25:00*; पूर्वप्रोष्ठपदा► (कुम्भः)
- 🌌🌞**सौर-नक्षत्रम्** — आर्द्रा►
___________________
- 🌛+🌞**योगः** — प्रीतिः►12:16; आयुष्मान्►
- २|🌛-🌞|**करणम्** — तैतिलः►13:23; गरः►25:15*; वणिजः►
- 🌌🌛- **चन्द्राष्टम-राशिः**—कर्कटः
- 🌞-🪐 **अमूढग्रहाः** - शनैश्चरः (145.02° → 146.03°), बुधः (19.91° → 20.35°), गुरुः (125.44° → 126.42°), मङ्गलः (-33.27° → -32.94°), शुक्रः (-24.72° → -24.98°)
___________________
## दिनमान-कालविभागाः
- 🌅**सूर्योदयः**—06:00-12:23🌞️-18:46🌇
- 🌛**चन्द्रास्तमयः**—10:12; **चन्द्रोदयः**—22:51
___________________
- 🌞⚝भट्टभास्कर-मते वीर्यवन्तः— **प्रातः**—06:00-07:35; **साङ्गवः**—09:11-10:47; **मध्याह्नः**—12:23-13:58; **अपराह्णः**—15:34-17:10; **सायाह्नः**—18:46-20:10
- 🌞⚝सायण-मते वीर्यवन्तः— **प्रातः-मु॰1**—06:00-06:51; **प्रातः-मु॰2**—06:51-07:42; **साङ्गवः-मु॰2**—09:24-10:15; **पूर्वाह्णः-मु॰2**—11:57-12:48; **अपराह्णः-मु॰2**—14:30-15:21; **सायाह्नः-मु॰2**—17:03-17:54; **सायाह्नः-मु॰3**—17:54-18:46
- 🌞कालान्तरम्— **ब्राह्मं मुहूर्तम्**—04:30-05:15; **मध्यरात्रिः**—23:15-01:30
___________________
- **राहुकालः**—15:34-17:10; **यमघण्टः**—09:11-10:47; **गुलिककालः**—12:23-13:58
___________________
- **शूलम्**—उदीची दिक् (►11:06); **परिहारः**–क्षीरम्
___________________
| 43.45 | 239 | 0.489643 | san_Deva | 0.370583 |
e198ce2cc2dda48c3aa832edb69a27187cf9081f | 303 | md | Markdown | salate/Thunfischsalat-mit-Tomate.md | skoenig/kochbuch | 5bf5674fc68833fb830e674017ffc7af0851332e | [
"Unlicense"
] | 5 | 2015-06-30T13:55:21.000Z | 2021-01-26T17:28:24.000Z | salate/Thunfischsalat-mit-Tomate.md | skoenig/kochbuch | 5bf5674fc68833fb830e674017ffc7af0851332e | [
"Unlicense"
] | 1 | 2016-10-31T07:40:21.000Z | 2016-10-31T10:18:35.000Z | salate/Thunfischsalat-mit-Tomate.md | skoenig/kochbuch | 5bf5674fc68833fb830e674017ffc7af0851332e | [
"Unlicense"
] | null | null | null | ---
tags: ["lowcarb"]
date: 2016-12-31
---
## Zutaten
- 200 g körniger Frischkäse (Cottage Cheese)
- 3-4 Tomaten
- 150 g Thunfisch in eigenem Saft (eine Dose)
- etwas Olivenöl
- etwas Balsamico Essig
## Zubereitung
Alles in einer Schale vermischen.
## Nährwerte
- EW: 55g
- KH: 23g
- Fett: 11g
| 15.15 | 45 | 0.676568 | deu_Latn | 0.959746 |
e1990ed93504786fab96d8f7753a0f407b2dc60e | 98 | md | Markdown | translations/pt-BR/data/reusables/project-management/delete-label.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 11,698 | 2020-10-07T16:22:18.000Z | 2022-03-31T18:54:47.000Z | translations/pt-BR/data/reusables/project-management/delete-label.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 8,317 | 2020-10-07T16:26:58.000Z | 2022-03-31T23:24:25.000Z | translations/pt-BR/data/reusables/project-management/delete-label.md | kyawburma/docs | 0ff7de03be7c2432ced123aca17bfbf444bee1bf | [
"CC-BY-4.0",
"MIT"
] | 48,204 | 2020-10-07T16:15:45.000Z | 2022-03-31T23:50:42.000Z | 1. Na lista de etiquetas, à direita da etiqueta que você deseja excluir, clique em **Excluir**.
| 49 | 97 | 0.734694 | por_Latn | 0.999976 |
e1993663bc7213a7051c8cf83a7464e84b2392e3 | 209 | md | Markdown | node_modules/toml-loader/README.md | ivanaairenee/blog | 67e94cff6af5a4cb27c50c59adafb389bf0059d3 | [
"MIT"
] | 10 | 2015-05-22T17:46:56.000Z | 2021-01-12T08:07:42.000Z | node_modules/toml-loader/README.md | ivanaairenee/blog | 67e94cff6af5a4cb27c50c59adafb389bf0059d3 | [
"MIT"
] | 3 | 2019-05-23T02:56:09.000Z | 2021-03-24T18:11:34.000Z | node_modules/toml-loader/README.md | ivanaairenee/blog | 67e94cff6af5a4cb27c50c59adafb389bf0059d3 | [
"MIT"
] | 4 | 2015-07-07T11:59:12.000Z | 2020-02-05T03:17:54.000Z | # toml-loader
TOML loader module for webpack
## Install
`npm install toml-loader`
## Usage
```javascript
var toml = require("toml!./config.toml");
// => returns config.toml content as json parsed object
```
| 17.416667 | 55 | 0.708134 | kor_Hang | 0.619985 |
e19965be8277df2f7b0510294d1b5b2f477b1a27 | 800 | md | Markdown | README.md | AdithyaBhat17/ris-api | 6c7585a3d79535e1802b4464da3368cdf54e6439 | [
"Apache-2.0"
] | 3 | 2019-04-29T16:27:03.000Z | 2019-05-02T07:26:20.000Z | README.md | apeksha-joshi/ris-api | 198ef9996d25f1a0c8a89ec3028663767c640e59 | [
"Apache-2.0"
] | 5 | 2019-10-08T04:22:58.000Z | 2019-12-24T23:15:06.000Z | README.md | AdithyaBhat17/ris-api | 6c7585a3d79535e1802b4464da3368cdf54e6439 | [
"Apache-2.0"
] | 1 | 2019-04-29T16:25:08.000Z | 2019-04-29T16:25:08.000Z | # Reverse Image Search API
RIS is an API that uses Beautiful Soup to scrape data from Google Image Search results.
### Installation Guide
* Fork this repository
* Clone the forked repository - `git clone https://github.com/{username}/ris-api.git`
* change directory - `cd ris-api`
* Install the required libraries - `pip install -r requirements.txt`
* Run the application - `python server.py`
* Send a POST Request to http://localhost:5000/search using
`curl -d '{"image_url":"image link"}' -H "Content-Type: application/json" -X POST http://localhost:5000/search`
* You can also send a POST request to http://localhost:5000/labelsearch to perform a Google text search using
`curl -d '{"q":"keyword"}' -H "Content-Type: application/json" -X POST http://localhost:5000/labelsearch` | 53.333333 | 115 | 0.7275 | eng_Latn | 0.447969 |
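If you prefer Python over curl, the helpers below build the same JSON bodies as the two curl examples. This is an illustrative sketch — `example.com/photo.jpg` is a made-up URL, and the commented-out call assumes the server started by `python server.py` is listening on localhost:5000 and that the third-party `requests` package is installed:

```python
import json

def build_search_body(image_url):
    """JSON body for POST /search, mirroring the curl example above."""
    return json.dumps({"image_url": image_url})

def build_labelsearch_body(keyword):
    """JSON body for POST /labelsearch."""
    return json.dumps({"q": keyword})

# With a running server you would send it like so:
#   import requests
#   resp = requests.post("http://localhost:5000/search",
#                        data=build_search_body("https://example.com/photo.jpg"),
#                        headers={"Content-Type": "application/json"})
#   print(resp.json())

print(build_search_body("https://example.com/photo.jpg"))  # → {"image_url": "https://example.com/photo.jpg"}
```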
e199e1ce104fcae9dbb48615eaa912ab53ee5900 | 4,635 | md | Markdown | content/markdown-pages/translate-startup-kit.md | GlobalDigitalLibraryio/gdl-home | 8671afccbd00e6b1c44d131903984ee9d2159385 | [
"Apache-2.0"
] | null | null | null | content/markdown-pages/translate-startup-kit.md | GlobalDigitalLibraryio/gdl-home | 8671afccbd00e6b1c44d131903984ee9d2159385 | [
"Apache-2.0"
] | null | null | null | content/markdown-pages/translate-startup-kit.md | GlobalDigitalLibraryio/gdl-home | 8671afccbd00e6b1c44d131903984ee9d2159385 | [
"Apache-2.0"
] | null | null | null | ---
path: "/translation-startup-kit"
date: ""
title: "Translation Startup Kit"
description: ""
showOnFrontPage: false
---
#GDL Translation startup kit
This startup kit will give you a brief introduction to the Global Digital Library project and platform. After reading this kit you will have learned more about:
- The background of the Global Book Alliance and the Global Digital Library
- What we do and how our platform works
- You will get a demonstration and practical introduction to the GDL platform
- You will get a demonstration on how to start translation on the GDL platform
- Step by step preparation for a translation book workshop
## Introduction
The goal of the [Global Book Alliance](http://globalbookalliance.org/) is to provide access to free, high-quality, early grade reading resources in languages that children use and understand.
As a flagship activity within the Global Book Alliance, the **Global Digital Library (GDL)** has been developed to increase the availability of high-quality reading resources in underserved languages worldwide.
## What is the purpose of the GDL?
The GDL collects existing high quality open educational reading resources, and makes them available on web, mobile and for print. By the end of 2018 the Library will offer resources in at least 25 languages, and by end 2020 at least 100 languages.
The platform also facilitates translation and localization of GDL-resources to more than 300 languages.
## Who can use the Global Digital Library?
The GDL is aimed at many different types of users and the platform will be open for everyone. Intended users include ministries of education, school managers, teachers, donor agencies and their implementing partners, international and national non-governmental organizations, local publishers, digital distributors and content providers, and households in developing countries.
The GDL platform is open for everyone and all the books and resources are available for free.
## Platform introduction
In this part, you will get a short introduction to the platform, including how to navigate languages, categories and levels.
_Video duration: 5 minutes._
<video>
https://www.youtube.com/watch?v=cu_OFKi8UBA
</video>
## Preparations before the translation workshop/book sprint:
1. Create a list of the translators participating in your event
2. Create a list of books to translate
3. Assign books to translators
4. Make sure that all translators have devices and that they are able to connect to the internet
You can organize the translation event in one location, or run it as a webinar/online workshop with your translators connected via Skype or other virtual platforms.
Books translated during the book sprint events will not automatically be published for public use on the GDL. All new content, including translations, must undergo quality assurance in line with the GDL minimum standards before being published on the platform.
When translating you should always read the licensing terms connected to each book. If you are considering selling translated versions of a GDL-book, you must first check the licensing terms for the book to assess whether the license allows commercial reuse.
## Find books that you would like to translate
You can pick any GDL-title available in E-PUB format on the platform to translate, but if you are just starting up a book sprint, it is best to look for short reading books on GDL [Level 1](https://digitallibrary.io/en/books/browse?readingLevel=1&category=library_books), [Level 2](https://digitallibrary.io/en/books/browse?readingLevel=2&category=library_books) and [Read Aloud](https://digitallibrary.io/en/books/browse?readingLevel=read-aloud&category=library_books).
Normally, in these levels, one translator will be able to translate one book within one hour.
By creating a spreadsheet with the titles you are working on it is easier to coordinate and assign tasks between translators.
## Get started with translation – introduction
By following our [step by step process](https://home.digitallibrary.io/translate/) you will get started translating in just a few minutes. You need a Facebook or Google account to log on to the Global Digital Library.
## For more information about the Global Digital Library:
- Information and updates about the book sprints: https://home.digitallibrary.io/booksprint/
- General information: https://home.digitallibrary.io/about/
- Read more about our minimum standards for quality assurance: https://home.digitallibrary.io/qa/
- Access the GDL platform: [digitallibrary.io/](https://digitallibrary.io/)
- FAQ: https://home.digitallibrary.io/faq/
| 63.493151 | 470 | 0.802805 | eng_Latn | 0.996276 |
e19a489a832a92f6b2ede6b1a70d2eb181a51d9f | 2,651 | md | Markdown | src/posts/2010-09-28-we-want-cake-wheres-our-cake.md | jayspec/jasonspecland.com | 0ea732b80cbe221d6dde49a67d1cbe8f93aa9314 | [
"MIT"
] | null | null | null | src/posts/2010-09-28-we-want-cake-wheres-our-cake.md | jayspec/jasonspecland.com | 0ea732b80cbe221d6dde49a67d1cbe8f93aa9314 | [
"MIT"
] | null | null | null | src/posts/2010-09-28-we-want-cake-wheres-our-cake.md | jayspec/jasonspecland.com | 0ea732b80cbe221d6dde49a67d1cbe8f93aa9314 | [
"MIT"
] | null | null | null | ---
id: 202
title: 'We Want Cake! Where’s Our Cake?'
date: 2010-09-28T09:30:15-04:00
author: jason
layout: post
guid: http://jasonspecland.azurewebsites.net/?p=202
permalink: /2010/09/28/we-want-cake-wheres-our-cake/
ljID:
- "1028"
categories:
- family
---
On Sunday, Benji, Paula, and I went to see They Might Be Giants perform at Town Hall.
I think I’ve mentioned how weird it is when your favorite band from when you were a teenager decides to start recording children’s music late in their careers, and then becomes the favorite band of your preschooler. But it bears mentioning again. Dang, it’s weird!
I actually think I prefer TMBG’s kid’s shows to their adult shows. They start on time, and no one plays “pass the dude,” as Flans has been known to call it. They also don’t play down to the kids. They put on a full-on rock show. They even include a few non-kids songs. This time they did “Whistling in the Dark,” “Stalk of Wheat,” and “Dr. Worm,” which is one of Benji’s favorites, as anyone who follows me on YouTube knows.
Speaking of Benji’s favorites, the set list couldn’t have been better for him if he’d written it himself. They did “Seven,” “High Five,” “The Days of the Week” (complete with their killer horn section, the [Tricerichops](http://tmbw.net/wiki/Tricerachops_Horns)), “Pirate Girls Nine,” and “I Am a Paleontologist” among others. Benji’s favorite part was shouting, “We want cake! Where’s our cake?” during “Seven.” My favorite part was watching him do that.
Also neat: Robin “Goldie” Goldwasser came out to sing “Don’t Cross the Street in the Middle of the Block” and “Electric Car.” After she left the stage, we saw her just a few rows behind us in the back of the orchestra watching the show from behind the sound board.
At the end of the show we grabbed orange and green TMBG foam fingers to add to Benji’s collection (which, now having three items, is now a collection), and we grabbed some confetti that Benji was delighted to continually drop on my head during the subway ride home.
In other news, on the way home we stopped off for sushi for dinner. Since Benji likes breaded chicken generally, I thought he might enjoy chicken katsu. What I did not anticipate, however, was the relish with which he devoured the katsu sauce! He ate that chicken like a boy possessed. Good to have another world cuisine available to us when we go out to dinner. | 101.961538 | 569 | 0.752923 | eng_Latn | 0.998812 |
e19abf8832096f52f577de67691b9deb46478b2a | 2,828 | md | Markdown | README.md | hmemcpy/jawn-fs2 | 3b996cb3a06af3c7a2aeeae081cb58557470e93b | [
"Apache-2.0"
] | null | null | null | README.md | hmemcpy/jawn-fs2 | 3b996cb3a06af3c7a2aeeae081cb58557470e93b | [
"Apache-2.0"
] | null | null | null | README.md | hmemcpy/jawn-fs2 | 3b996cb3a06af3c7a2aeeae081cb58557470e93b | [
"Apache-2.0"
] | null | null | null | # jawn-fs2 [](https://travis-ci.org/http4s/jawn-fs2) [](https://maven-badges.herokuapp.com/maven-central/org.http4s/jawn-fs2_2.12)
Asynchronously parse [fs2](https://github.com/functional-streams-for-scala/fs2) streams
to JSON values with [jawn](https://github.com/non/jawn).
## Example
`sbt test:run` to see it in action:
```Scala
package jawnfs2.examples
import cats.effect._
import cats.implicits._
import fs2.{Stream, io, text}
import java.nio.file.Paths
import java.util.concurrent.Executors
import jawnfs2._
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._
object Example extends IOApp {
// Pick your favorite supported AST (e.g., json4s, argonaut, etc.)
implicit val facade = jawn.ast.JawnFacade
val blockingResource: Resource[IO, ExecutionContext] =
Resource.make(IO(Executors.newCachedThreadPool()))(es => IO(es.shutdown()))
.map(ExecutionContext.fromExecutorService)
def run(args: List[String]) =
// Uses blocking IO, so provide an appropriate thread pool
blockingResource.use { blockingEC =>
// From JSON on disk
val jsonStream = io.file.readAll[IO](Paths.get("testdata/random.json"), blockingEC, 64)
// Simulate lag between chunks
val lag = Stream.awakeEvery[IO](100.millis)
val laggedStream = jsonStream.chunks.zipWith(lag)((chunk, _) => chunk)
// Print each element of the JSON array as we read it
val json = laggedStream.unwrapJsonArray.map(_.toString).intersperse("\n").through(text.utf8Encode)
// run converts the stream into an IO, unsafeRunSync executes the IO for its effects
json.to[IO](io.stdout(blockingEC)).compile.drain.as(ExitCode.Success)
}
}
```
## Add jawn-fs2 to your project
Add to your build.sbt:
```
libraryDependencies += "org.http4s" %% "jawn-fs2" % "0.13.0"
// Pick your AST: https://github.com/non/jawn#supporting-external-asts-with-jawn
libraryDependencies += "org.spire-math" %% "jawn-ast" % "0.13.0"
```
## Compatibility matrix
| Stream Library | You need... | Status
| ------------------- | -------------------------------------------- | ------
| fs2-1.x             | `"org.http4s" %% "jawn-fs2" % "0.13.0"`      | stable
| fs2-0.10.x | `"org.http4s" %% "jawn-fs2" % "0.12.2"` | EOL
| fs2-0.9.x | `"org.http4s" %% "jawn-fs2" % "0.10.1"` | EOL
| scalaz-stream-0.8a | `"org.http4s" %% "jawn-streamz" % "0.10.1a"` | EOL
| scalaz-stream-0.8.x | `"org.http4s" %% "jawn-streamz" % "0.10.1"` | EOL
The legacy scalaz-stream artifacts are on the [jawn-streamz](https://github.com/rossabaker/jawn-fs2/tree/jawn-streamz) branch.
| 41.588235 | 303 | 0.661245 | yue_Hant | 0.355556 |
e19b3e7935772da6d4a4041d9f576940bf4d79d5 | 4,202 | md | Markdown | README.md | cscashby/googleDocsLogging | 1d8a26bd7363747788d635341c2f442bfc727aa6 | [
"Apache-2.0"
] | null | null | null | README.md | cscashby/googleDocsLogging | 1d8a26bd7363747788d635341c2f442bfc727aa6 | [
"Apache-2.0"
] | null | null | null | README.md | cscashby/googleDocsLogging | 1d8a26bd7363747788d635341c2f442bfc727aa6 | [
"Apache-2.0"
] | null | null | null | # SmartThings Google Sheets Logging
## Create the Spreadsheet
1. Create a new spreadsheet: http://docs.google.com/spreadsheets/create
2. Name the spreadsheet file whatever you want
3. Get your new spreadsheet ID from the URL. Example:
https://docs.google.com/spreadsheets/d/169v40OsFOaGHO6uQwuuMx2hlWK-wvYCzrr93FAWivHk/edit#gid=0
example id is "169v40OsFOaGHO6uQwuuMx2hlWK-wvYCzrr93FAWivHk"

4. Add the single value "Time" in cell A1. You might consider selecting View -> Freeze -> 1 Row
5. Create the helper script
   1. Open script: Click Tools -> Script Editor. Optionally change the name of the project (default "Untitled project")

2. Copy the contents of Code.gs to replace the stub "function myFunction"
3. On line 4, Replace "REPLACE ME WITH SPREADSHEET ID" with sheet id from step 3

4. If you changed the name of the sheet (the name in the tab at the bottom of the spreadsheet, not the name of the file), update it on line 5. (defaults to "Sheet1").
6. Deploy helper script as webapp
1. Deploy webapp: Click Publish -> Deploy as web app

   2. Change Who has access to the webapp to "Anyone, even anonymous". Please note, if anyone gets hold of your published endpoint, they will be able to send data to your spreadsheet, but they will not be able to view any of it.

Also: If you revise or fix your webapp code, **be sure to select `New` as the version** on the webapp publishing popup's Version drop-down. Otherwise you may continue to run the older version. Version management must be explicitly performed sometimes.
3. Approve the access to the app


4. Copy the URL on the confirmation page.
E.g.: https://script.google.com/macros/s/AKfycbzY2jj4l7RSpFYfN62xra0HmcXPQXAUI17z6KKHWiT3OYyhUC4/exec

5. Extract URL key for your new webapp, it is between /s/ and /exec.
`AKfycbzY2jj4l7RSpFYfN62xra0HmcXPQXAUI17z6KKHWiT3OYyhUC4`
You will need to enter this into the SmartApp below.
7. (Optional): Test out your new webapp, add this to the end of the URL from step 6: `?Temp1=15&Temp2=30`
E.g., https://script.google.com/macros/s/AKfycbzY2jj4l7RSpFYfN62xra0HmcXPQXAUI17z6KKHWiT3OYyhUC4/exec?Temp1=15&Temp2=30
A successful test will return the message `The script completed but did not return anything.`
**If you do test it, make sure you delete the test data from the spreadsheet. Just delete any rows added after row 1 and any test columns**
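For reference, the heart of the helper webapp is small: it turns each request's query parameters into a timestamped row and appends it to the sheet. The function below is a simplified, hypothetical illustration of that idea — it is **not** the repo's actual Code.gs (which maps parameters onto the sheet's header columns), so deploy the real file:

```javascript
// Simplified illustration only — deploy the repo's Code.gs, not this.
function parametersToRow(params) {
  var row = [new Date().toISOString()]; // column A holds the "Time" value
  Object.keys(params).sort().forEach(function (key) {
    row.push(params[key]);              // sorted here only for a deterministic order
  });
  return row;
}

// Inside Google Apps Script, doGet(e) would then do roughly:
//   var sheet = SpreadsheetApp.openById(SHEET_ID).getSheetByName('Sheet1');
//   sheet.appendRow(parametersToRow(e.parameter));
```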
## Create the SmartApp
1. Login to the the SmartThings IDE at https://graph.api.smartthings.com/
(Make an account if you haven't already)
2. Go to "My SmartApps"
3. Use **either** Github Integration (step 4) or Manual (step 5)
4. Github Integration
(See this link if this is your first time using GitHub Integration: http://docs.smartthings.com/en/latest/tools-and-ide/github-integration.html)
1. Click Settings
2. Add this repo:
Owner: `loverso-smartthings`
Name: `googleDocsLogging`
Branch: `master`
3. Click Save
4. hit "Update from Repo" and select `googleDocsLogging (master)`
5. Under "New (Only in GitHub)" select "google-sheets-logging"
6. Select the checkbox next to "Publish"
7. Click "Execute Update"
5. Manual creation **(If you didn't follow step 4)**
1. hit "+ New SmartApp" on the right, then select the "From Code" tab.
   2. In another window, open the source code located in this repo at smartapps/cschwer/google-sheets-logging.src/google-sheets-logging.groovy
https://github.com/loverso-smartthings/googleDocsLogging/blob/master/smartapps/cschwer/google-sheets-logging.src/google-sheets-logging.groovy
3. Click "Raw" and copy all the code
4. Paste the code into the New SmartApp "From Code" box and click Create
## Install the SmartApp
1. In the SmartThings app go to Marketplace -> SmartApps -> My Apps -> Google Sheets Logging
2. Select events you want to log under "Log devices..."
3. Enter the URL key you extracted earlier (step 6.5 of "Create the Spreadsheet") under "URL key"
4. Click Done!
| 46.175824 | 254 | 0.735602 | eng_Latn | 0.853122 |
e19bea7b941776349fabb6b208df9d1d4085aafa | 1,709 | md | Markdown | desktop-src/printdocs/job-info-3.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 552 | 2019-08-20T00:08:40.000Z | 2022-03-30T18:25:35.000Z | desktop-src/printdocs/job-info-3.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 1,143 | 2019-08-21T20:17:47.000Z | 2022-03-31T20:24:39.000Z | desktop-src/printdocs/job-info-3.md | velden/win32 | 94b05f07dccf18d4b1dbca13b19fd365a0c7eedc | [
"CC-BY-4.0",
"MIT"
] | 1,287 | 2019-08-20T05:37:48.000Z | 2022-03-31T20:22:06.000Z | ---
description: The JOB\_INFO\_3 structure is used to link together a set of print jobs.
ms.assetid: a110f555-dc33-450c-ae77-ea26f0f69448
title: JOB_INFO_3 structure (Winspool.h)
ms.topic: reference
ms.date: 05/31/2018
topic_type:
- APIRef
- kbSyntax
api_name:
- JOB_INFO_3
api_type:
- HeaderDef
api_location:
- Winspool.h
---
# JOB\_INFO\_3 structure
The **JOB\_INFO\_3** structure is used to link together a set of print jobs.
## Syntax
```C++
typedef struct _JOB_INFO_3 {
DWORD JobId;
DWORD NextJobId;
DWORD Reserved;
} JOB_INFO_3, *PJOB_INFO_3;
```
## Members
<dl> <dt>
**JobId**
</dt> <dd>
The print job identifier.
</dd> <dt>
**NextJobId**
</dt> <dd>
The print job identifier for the next print job in the linked set of print jobs.
</dd> <dt>
**Reserved**
</dt> <dd>
This value is reserved for future use. You must set it to zero.
</dd> </dl>
## Requirements
| Requirement | Value |
|-------------------------------------|-----------------------------------------------------------------------------------------------------------|
| Minimum supported client<br/> | Windows 2000 Professional \[desktop apps only\]<br/> |
| Minimum supported server<br/> | Windows 2000 Server \[desktop apps only\]<br/> |
| Header<br/> | <dl> <dt>Winspool.h (include Windows.h)</dt> </dl> |
## See also
<dl> <dt>
[Printing](printdocs-printing.md)
</dt> <dt>
[Print Spooler API Structures](printing-and-print-spooler-structures.md)
</dt> <dt>
[**EnumJobs**](enumjobs.md)
</dt> <dt>
[**GetJob**](getjob.md)
</dt> <dt>
[**SetJob**](setjob.md)
</dt> </dl>
| 17.438776 | 147 | 0.562902 | eng_Latn | 0.366461 |
e19c0fe241543afed5f4632be00a6f9e5db91501 | 5,462 | md | Markdown | articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/active-directory/reports-monitoring/howto-integrate-activity-logs-with-log-analytics.md | nsrau/azure-docs.it-it | 9935e44b08ef06c214a4c7ef94d12e79349b56bc | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Trasmettere log Azure Active Directory ai log di monitoraggio di Azure | Microsoft Docs
description: Informazioni su come integrare i log di Azure Active Directory con i log di monitoraggio di Azure
services: active-directory
documentationcenter: ''
author: MarkusVi
manager: daveba
editor: ''
ms.assetid: 2c3db9a8-50fa-475a-97d8-f31082af6593
ms.service: active-directory
ms.devlang: na
ms.topic: how-to
ms.tgt_pltfrm: na
ms.workload: identity
ms.subservice: report-monitor
ms.date: 04/18/2019
ms.author: markvi
ms.reviewer: dhanyahk
ms.collection: M365-identity-device-management
ms.openlocfilehash: 85e6a66f4520f12278266203211e1d1ae224c97f
ms.sourcegitcommit: d22a86a1329be8fd1913ce4d1bfbd2a125b2bcae
ms.translationtype: MT
ms.contentlocale: it-IT
ms.lasthandoff: 11/26/2020
ms.locfileid: "96180456"
---
# <a name="integrate-azure-ad-logs-with-azure-monitor-logs"></a>Integrare log di Azure AD con i log di monitoraggio di Azure
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-log-analytics-rebrand.md)]
I log di Monitoraggio di Azure consentono di eseguire query sui dati per trovare eventi specifici, analizzare le tendenze ed eseguire la correlazione su varie origini dati. L'integrazione dei log attività di Azure AD nei log di Monitoraggio di Azure consente ora di eseguire attività quali:
* Confrontare i log di accesso di Azure AD con i log di sicurezza pubblicati dal Centro sicurezza di Azure
* Troubleshoot performance bottlenecks on an application's sign-in page by correlating application performance data from Azure Application Insights.
The following video from an Ignite session covers the benefits of using Azure Monitor logs for Azure AD logs in practical user scenarios.
> [!VIDEO https://www.youtube.com/embed/MP5IaCTwkQg?start=1894]
This article explains how to integrate Azure Active Directory (Azure AD) logs with Azure Monitor.
## <a name="supported-reports"></a>Supported reports
You can route the audit activity logs and sign-in activity logs to Azure Monitor logs for further analysis.
* **Audit logs**: The [audit logs activity report](concept-audit-logs.md) gives you access to the history of every task performed in your tenant.
* **Sign-in logs**: With the [sign-ins activity report](concept-sign-ins.md), you can determine who performed the tasks reported in the audit logs.
* **Provisioning logs**: With the [provisioning logs](../app-provisioning/application-provisioning-log-analytics.md), you can monitor which users have been created, updated, and deleted in all your third-party applications.
> [!NOTE]
> Audit and sign-in activity logs related to B2C are currently not supported.
>
## <a name="prerequisites"></a>Prerequisites
To use this feature, you need:
* An Azure subscription. If you don't have an Azure subscription, you can [sign up for a free trial](https://azure.microsoft.com/free/).
* An Azure AD tenant.
* A user who is a *Global Administrator* or *Security Administrator* for the Azure AD tenant.
* A Log Analytics workspace in your Azure subscription. Learn how to [create a Log Analytics workspace](../../azure-monitor/learn/quick-create-workspace.md).
## <a name="licensing-requirements"></a>Licensing requirements
Using this feature requires an Azure AD Premium P1 or P2 license. To find the right license for your requirements, see [Azure Active Directory features and capabilities](https://azure.microsoft.com/pricing/details/active-directory/).
## <a name="send-logs-to-azure-monitor"></a>Send logs to Azure Monitor
1. Sign in to the [Azure portal](https://portal.azure.com).
2. Select **Azure Active Directory** > **Diagnostic settings** -> **Add diagnostic setting**. You can also select **Export settings** from the **Audit logs** or **Sign-ins** page to get to the diagnostic settings configuration page.
3. In the **Diagnostic settings** menu, select the **Send to Log Analytics workspace** check box, and then select **Configure**.
4. Select the Log Analytics workspace you want to send the logs to, or create a new workspace in the provided dialog box.
5. Do either or both of the following:
* To send audit logs to the Log Analytics workspace, select the **AuditLogs** check box.
* To send sign-in logs to the Log Analytics workspace, select the **SignInLogs** check box.
6. Select **Save** to save the setting.

7. After about 15 minutes, verify that events are streamed to your Log Analytics workspace.
## <a name="next-steps"></a>Next steps
* [Analyze Azure AD activity logs with Azure Monitor logs](howto-analyze-activity-logs-log-analytics.md)
* [Install and use the log analytics views for Azure Active Directory](howto-install-use-log-analytics-views.md) | 60.021978 | 309 | 0.784877 | ita_Latn | 0.992964 |
e19c1a4174b3d00d4b24d8247728cc0bf94be3ae | 85 | md | Markdown | content/en/posts/gcp-1647190988307.md | jblukach/status.tundralabs.net | eb85e97be923387e0a837034298aba008b2ee0f0 | ["Apache-2.0"] | 1 | 2022-03-09T04:04:36.000Z | 2022-03-09T04:04:36.000Z | content/en/posts/gcp-1647190988307.md | jblukach/status.tundralabs.net | eb85e97be923387e0a837034298aba008b2ee0f0 | ["Apache-2.0"] | null | null | null | content/en/posts/gcp-1647190988307.md | jblukach/status.tundralabs.net | eb85e97be923387e0a837034298aba008b2ee0f0 | ["Apache-2.0"] | null | null | null | ---
title: "UPDATED: GCP IPs"
date: 2022-03-13T10:03:08.30788
categories:
- GCP
---
| 12.142857 | 31 | 0.647059 | kor_Hang | 0.252611 |
e19cd2541a9e8ae8c9d0c5af88d9643d39fc5efe | 944 | md | Markdown | l/lambda-phi/readme.md | ScalablyTyped/SlinkyTyped | abb05700fe72d527728a9c735192f4c156bd9be1 | ["MIT"] | 14 | 2020-01-09T02:36:33.000Z | 2021-09-05T13:40:52.000Z | l/lambda-phi/readme.md | oyvindberg/SlinkyTyped | abb05700fe72d527728a9c735192f4c156bd9be1 | ["MIT"] | 1 | 2021-07-31T20:24:00.000Z | 2021-08-01T07:43:35.000Z | l/lambda-phi/readme.md | oyvindberg/SlinkyTyped | abb05700fe72d527728a9c735192f4c156bd9be1 | ["MIT"] | 4 | 2020-03-12T14:08:42.000Z | 2021-08-12T19:08:49.000Z |
# Scala.js typings for lambda-phi
Typings are for version 1.0.29
## Library description:
Typescript framework for AWS API Gateway and Lambda
| | |
| ------------------ | :-------------: |
| Full name | lambda-phi |
| Keywords | api gateway, lambda, aws, typescript |
| # releases | 0 |
| # dependents | 0 |
| # downloads | 1300 |
| # stars | 0 |
## Links
- [Homepage](https://github.com/elitechance/lambda-phi#readme)
- [Bugs](https://github.com/elitechance/lambda-phi/issues)
- [Repository](https://github.com/elitechance/lambda-phi)
- [Npm](https://www.npmjs.com/package/lambda-phi)
## Note
This library has been generated from typescript code from first party type definitions.
Provided with :purple_heart: from [ScalablyTyped](https://github.com/oyvindberg/ScalablyTyped)
## Usage
See [the main readme](../../readme.md) for instructions.
| 26.971429 | 94 | 0.615466 | eng_Latn | 0.606093 |
e19da75bb3bb63651c74b678d5a7dcfcb3d231a3 | 1,001 | md | Markdown | _posts/shadowsocksr/2018-7-15-shadowsocksr.md | czahoi/Get | 7f2c0e92f6e9a7d3f6e7dfde923e5a2a16288cc5 | ["MIT"] | null | null | null | _posts/shadowsocksr/2018-7-15-shadowsocksr.md | czahoi/Get | 7f2c0e92f6e9a7d3f6e7dfde923e5a2a16288cc5 | ["MIT"] | null | null | null | _posts/shadowsocksr/2018-7-15-shadowsocksr.md | czahoi/Get | 7f2c0e92f6e9a7d3f6e7dfde923e5a2a16288cc5 | ["MIT"] | null | null | null | ---
title: Shadowsocksr
---
<script>
  // Redirect to the matching ShadowsocksR client download based on the user agent.
  if (/(x64|WOW64)/i.test(navigator.userAgent)) {
    // 64-bit Windows
    window.location.href = "https://cdn.jsdelivr.net/gh/hxco-web/sorry@5/distribute/ShadowsocksR-4.7.0-win.7z";
  }
  if (/(x86_64)/i.test(navigator.userAgent)) {
    // Other platforms reporting x86_64
    window.location.href = "https://cdn.jsdelivr.net/gh/hxco-web/sorry@5/distribute/ShadowsocksR-4.7.0-win.7z";
  }
  if (/(Macintosh)/i.test(navigator.userAgent)) {
    // macOS
    window.location.href = "https://cdn.jsdelivr.net/gh/hxco-web/Sorry@5/distribute/ShadowsocksX-NG-R8.dmg";
  }
  if (/(iPhone|iPod)/i.test(navigator.userAgent)) {
    // iPhone / iPod touch
    window.location.href = "https://itunes.apple.com/app/id1239860606";
  }
  if (/(iPad)/i.test(navigator.userAgent)) {
    // iPad
    window.location.href = "https://itunes.apple.com/app/id1239860606";
  }
  if (/(Android)/i.test(navigator.userAgent)) {
    // Android
    window.location.href = "https://cdn.jsdelivr.net/gh/hxco-web/Sorry@5/distribute/shadowsocksr-release.apk";
  }
</script> | 40.04 | 115 | 0.655345 | yue_Hant | 0.785231 |
e19edb6931ed9e29081e2858990456c21056fed2 | 4,490 | md | Markdown | powerbi-docs/visuals/power-bi-visualization-small-multiples.md | bodo92/powerbi-docs | 47cb00159b119fec16f6bf3c0d6b1a20042e051e | ["CC-BY-4.0", "MIT"] | null | null | null | powerbi-docs/visuals/power-bi-visualization-small-multiples.md | bodo92/powerbi-docs | 47cb00159b119fec16f6bf3c0d6b1a20042e051e | ["CC-BY-4.0", "MIT"] | null | null | null | powerbi-docs/visuals/power-bi-visualization-small-multiples.md | bodo92/powerbi-docs | 47cb00159b119fec16f6bf3c0d6b1a20042e051e | ["CC-BY-4.0", "MIT"] | 1 | 2022-03-24T21:19:07.000Z | 2022-03-24T21:19:07.000Z | ---
title: Create small multiples in Power BI
description: Small multiples, or trellising, split a visual into multiple versions of itself, presented side by side, with its data partitioned across these versions by a chosen dimension.
author: maggiesMSFT
ms.author: maggies
ms.reviewer: mihart, rienhu
ms.service: powerbi
ms.subservice: pbi-visuals
ms.topic: how-to
ms.date: 11/12/2021
LocalizationGroup: Visualizations
---
# Create small multiples in Power BI
[!INCLUDE [applies-yes-desktop-yes-service](../includes/applies-yes-desktop-yes-service.md)]
Small multiples, or trellising, splits a visual into multiple versions of itself. The versions are presented side by side, with data divided across these versions by a chosen dimension. For example, a small multiple could split a “sales by product” column chart across countries or customer segments.
:::image type="content" source="media/power-bi-visualization-small-multiples/small-mulitple-sales-category-region.png" alt-text="Screenshot showing a stacked column chart for sales by product split into small multiples by country.":::
## Create small multiples
For live connected data models, this feature requires a version of Analysis Services (AS) that supports the second generation of DAX queries, also known as SuperDAX: for tabular models, AS 2016 or newer; and for multidimensional models, AS 2019 or newer.
Currently, you can create small multiples on bar, column, line, and area charts.
To get started, create one of the above visuals and choose a field along which you'd like to partition its data. Drag that field into the **Small multiples** well in the Fields section of the Visualizations pane.
Your chart splits into a 2×2 grid, with the data divided along your chosen dimension. The grid fills with the small multiples charts. They're sorted by your chosen sort order, from left to right, then top to bottom.
:::image type="content" source="media/power-bi-visualization-small-multiples/small-multiple-two-by-two-grid.png" alt-text="Small multiples in a two-by-two grid.":::
You see that the axes are synchronized. There's one Y axis at the left of each row, and one X axis at the bottom of each column.
Now that you've created small multiples, see how you [Interact with small multiples in Power BI](power-bi-visualization-small-multiples-interact.md).
## Format a small multiples visual
Some options in the formatting pane let you control the look and feel of the grid.
### Change the grid dimensions
You can change the dimensions of the grid in the Grid layout card:
:::image type="content" source="media/power-bi-visualization-small-multiples/small-multiple-grid-layout-card.png" alt-text="Small multiple grid layout card.":::
The default is a 2×2 grid of small multiples, but you can adjust the number of rows and columns up to 6×6. Any multiples that don't fit in that grid, load as you scroll down.
### Adjust the small multiples titles
As with other visual titles, you can adjust the style and position of the small multiple titles in the **Small multiple title** card.
## Considerations and limitations
Here are some current limitations.
### Fields pane
- Show items with no data: The option still exists, but the behavior may not align with your expectations.
### Visual interactions
- Scroll to load more on the Categorical axis: In standard visuals with many categories in the axis, when you scroll to the end of the axis, the visual loads more categories. Currently, a small multiples visual doesn't load more categories.
- Right click/context menu -> Analyze: disabled for now.
- Right click/context menu -> Summarize: disabled for now.
- Selecting multiple data points with rectangular select: disabled for now.
- Axis zoom: disabled for now.
### Formatting options
**General**
- High-density sampling: for line charts, the high-density sampling toggle still exists, but it isn't currently supported by small multiples.
**Axis**
- Concatenate labels: disabled for now.
**Total labels**
- Total labels for stacked charts: disabled for now.
**Zoom slider**
- Zoom sliders: disabled for now.
**Analytics pane**
- Trend lines: disabled for now.
- Forecasting: disabled for now.
## Share your feedback
Let us know your thoughts about the small multiples visual:
- [Power BI Community](https://community.powerbi.com/)
- [Power BI Ideas page](https://ideas.powerbi.com/ideas/)
## Next steps
[Interact with small multiples in Power BI](power-bi-visualization-small-multiples-interact.md)
| 44.019608 | 300 | 0.773942 | eng_Latn | 0.996427 |
e1a000868b82abc8ccd9d77052e964feb590e127 | 3,802 | md | Markdown | VERSION1.md | hiromi-mi/cyclicbrainfuck | 6eff3348975cafced29311e3ddab5bef02d7b7cc | ["CC0-1.0"] | 1 | 2020-04-24T15:59:30.000Z | 2020-04-24T15:59:30.000Z | VERSION1.md | hiromi-mi/cyclicbrainfuck | 6eff3348975cafced29311e3ddab5bef02d7b7cc | ["CC0-1.0"] | null | null | null | VERSION1.md | hiromi-mi/cyclicbrainfuck | 6eff3348975cafced29311e3ddab5bef02d7b7cc | ["CC0-1.0"] | null | null | null | # Cyclic Brainfuck
Cyclic Brainfuck 2 is being tested -> see README.md
Cyclic Brainfuck is an esoteric programming language that changes Brainfuck only slightly.
* Language spec: at each 0-indexed program execution step `step`, the character read is interpreted as the Brainfuck instruction `(input instruction - '!' + step) mod 61 + '!'`.
* Characters that are not instructions are ignored, but the execution step `step` still increases by 1.
* The rest of the language is equivalent to Brainfuck.
## Usage
```
$ make
$ ./cyclicbrainfuck cyclic.cyclicbf
```
## Selling points
Enjoy.
### Brainfuck's `++++++++` is `+*)(&%$#"!`
Let's add 8 to memory cell 0 right at the start of the program.
In Brainfuck you would write `++++++++`.
In Cyclic Brainfuck, on the other hand, you count the step number and subtract that amount.
```
Step : Brainfuck instruction : Cyclic Brainfuck instruction
0 : + : +
1 : + : *
2 : + : )
3 : + : (
...
```
Keep `man ascii` handy and make good friends with the ASCII codes.
It may be tedious, but you just have to push through.
### Brainfuck's `,++++++++` is `,*)(&%$#"!]`
This time, let's add 8 to an input value.
The Brainfuck input command is `,`, so surely we can just put it in front of the code above...
...except we can't.
Adding one instruction anywhere in the program means every instruction after it must be shifted by one.
While shifting, when you reach '!' at the edge and go one further, you wrap back around to '`'.
If doing this by hand is too fiddly, it might be a good idea to write an interpreter.
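If writing an interpreter sounds appealing, here is a minimal sketch of the shifting rule in JavaScript (my own illustration, not code from this repository; the direction of the shift follows the worked examples here, where the written character descends by one each step):

```javascript
const BANG = '!'.charCodeAt(0); // 33; the 61 instruction codes span '!' through ']'

// Effective Brainfuck instruction for the character at 0-indexed step `step`.
function decode(ch, step) {
  const c = ch.charCodeAt(0) - BANG;
  if (c < 0 || c >= 61) return null; // outside the band: ignored (step still advances)
  return String.fromCharCode(((c + step) % 61) + BANG);
}

// Inverse: which character must appear at `step` so that it decodes to `instr`?
function encode(instr, step) {
  const c = instr.charCodeAt(0) - BANG;
  return String.fromCharCode((((c - step) % 61) + 61) % 61 + BANG);
}
```

For example, `encode('+', i)` for i = 0..7 produces the descending run `+*)('&%$`, matching the first example above.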
### The loop body changes on every pass
For example, consider the Brainfuck program `[-]`, which sets cell 0 to 0.
Would `[,[` do the same? Let's trace it (cell 0 holds 4):
```
Step : instruction : explanation
0 : [ : loop start
1 : , : cell 0 becomes 3
2 : [ : loop continues, back to ','
3 : , : interpreted as Brainfuck '/' (= ','+3) and ignored
4 : [ : interpreted as Brainfuck 'V' and ignored
Still inside the loop, but the program ends (cell 0 holds 3)
```
Execution falls out of the loop and terminates normally with the value 3.
You can write exit conditions that Brainfuck's `[` and `]` loops cannot express,
which opens up new dreams for code golf.
### What if you write `[(60 instructions)]`?
When you want a program that's easy to get working, I recommend padding the inside of `[]` to exactly 60 instructions.
Then `+` stays `+` and `-` stays `-` on every pass of the loop. Easy.
## Hello, world!
```
+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!#2[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876546E10/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@BAP<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHJYEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWY+TSRQPONMLKJIHGFEDCBA@?>=<;:987657F210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:98768G3210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?AP<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEGVBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQS%NMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)('&%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBA@?>=<;:9876543210/.-,+*)(*9%$#"!]\[ZYXWVUTSRQPONMLKJIHGFEDCBDS[
```
The corresponding Brainfuck source code:
```
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.
.>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.>
++++++++++++++++++++++++++++++++++++++++++++.>
++++++++++++++++++++++++++++++++.>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.>
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++.>
+++++++++++++++++++++++++++++++++.>
```
## `cat`
```
>*Y9()8$%"#]
```
This file contains some exaggeration and creative liberties, but sadly there are no Super Cow Powers.
| 37.643564 | 1,079 | 0.448974 | yue_Hant | 0.44308 |
e1a0d1649d8e183aa6b03f3a8a6bbbccde0081e6 | 128 | md | Markdown | 09-PWA/03-service-worker/README.md | hpsales/JavaScriptStudy | 5c0da895670d544b21dd299cce0f6b88aeb81b3f | ["MIT"] | 1 | 2019-03-22T17:11:49.000Z | 2019-03-22T17:11:49.000Z | 09-PWA/03-service-worker/README.md | songxianjin/JavaScriptStudy | 60e6b51d70b61e52c7f4cf53236cf9e1eebf8579 | ["MIT"] | null | null | null | 09-PWA/03-service-worker/README.md | songxianjin/JavaScriptStudy | 60e6b51d70b61e52c7f4cf53236cf9e1eebf8579 | ["MIT"] | 1 | 2021-05-18T06:17:34.000Z | 2021-05-18T06:17:34.000Z | Describes how to use a service worker in an existing project to build a web app that works offline.
## Usage
1. `npm install`
2. `npm run start`
3. Visit http://localhost:8080
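As a minimal sketch of the registration step (an illustration only — the actual worker file name and path in this project may differ; `/sw.js` is assumed here):

```javascript
// Register a service worker so the app can be cached for offline use.
// `nav` is injectable for testing; in a page you would just call registerServiceWorker().
function registerServiceWorker(nav = navigator, swUrl = '/sw.js') {
  if (!nav || !('serviceWorker' in nav)) {
    return Promise.resolve(null); // browser without service worker support
  }
  return nav.serviceWorker.register(swUrl).then((reg) => {
    console.log('service worker registered, scope:', reg.scope);
    return reg;
  });
}
```

Calling `registerServiceWorker()` from the page's main script is enough; the worker itself then decides what to cache.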
| 16 | 49 | 0.71875 | kor_Hang | 0.211094 |
e1a17ada07fcefddcb4986bf0ba2309ef4dc0b70 | 3,699 | md | Markdown | README.md | diakovd/ibex_system | 100ff98ca16cdc5756bae5d4ea036ff8fb423967 | ["Apache-2.0"] | null | null | null | README.md | diakovd/ibex_system | 100ff98ca16cdc5756bae5d4ea036ff8fb423967 | ["Apache-2.0"] | null | null | null | README.md | diakovd/ibex_system | 100ff98ca16cdc5756bae5d4ea036ff8fb423967 | ["Apache-2.0"] | null | null | null | ## IBEX system
The purpose of this project is to build a minimal microcontroller system running on an FPGA device, similar to NIOS from Intel FPGA or Xilinx's MicroBlaze.
Goals for this system:
- Open-source base core, hence the RISC-V ISA
- Low FPGA area
- Minimal ROM/RAM requirements for code memory
- Easy and lightweight IDE toolchain
- Core written in SystemVerilog
To meet these requirements, the IBEX core (https://github.com/lowRISC/ibex.git) and the Segger Embedded Studio IDE were chosen.
I made some bug fixes while testing the IBEX core, so for this project take the IBEX core from: https://github.com/diakovd/ibex.git
## IBEX system components
- RAM for program code and data storage - 8 KB
- bus_mux module that switches control signals to a specific peripheral
- Timer module
- UART Universal Asynchronous Receiver/Transmitter
- UART boot loader for fast program updates to RAM memory
- IO module - 32-bit output register

## Pinout
TX, RX - UART lines
Clk_14_7456MHz - 14.7456 MHz clock for the UART module
clk_sys - system clock for the IBEX core (30 MHz for EP4CE10E22C8)
rst_sys_n - system reset
LED - 32-bit IO module output
## Folders
- source - IBEX system (ibex_sys.sv) and peripheral modules
- ms - contains .tcl scripts for ModelSim simulation
- qua_pr - EP4CE10E22C8 FPGA board wrapper (ibex_sys_cycloneIV.sv); also contains EP4CE10E22C8-specific RAM, FIFO, and PLL
- VIVpr - xc7a15t FPGA board wrapper (ibex_sys_atrix7.sv); also contains xc7a15t-specific RAM, FIFO, and PLL
- sw - hello-world project for Segger Embedded Studio for RISC-V. This project waits for an interrupt from the Timer and, on each interrupt, sends "hello world" through the UART
## Bus description
The IBEX core has an instruction bus and a data bus. The instruction bus is connected directly to the RAM that holds the program code; after start, IBEX reads instructions through this interface.
The data bus is used to access program data and peripheral modules.
The data bus is divided into two interfaces:
DatBus (addr, wdata, be), connected to all peripherals;
CtrBus (rdata, we, req, rvalid, gnt, err), which is switched in the bus_mux module to the selected peripheral
## Steps to add a new peripheral
1. Create a wrapper with the DatBus and CtrBus interfaces in the module's port list
2. Instantiate the module in ibex_sys.sv
3. In defines.sv, add the base address and size on the system bus
`define addrBASE_new (`addrBASE_Timer + `size_Timer)
`define size_new 32'h00020
4. Add switching in bus_mux.sv
add to the port list: CtrBus.Master new_CtrBus
add a variable: logic sel_new;
add switching for all signals: (sel_new)? new_CtrBus."" : 32'd0;
add control signal enabling: new_CtrBus.we = data_CtrBus.we & sel_new;
add address selection the same way as for the other modules in the // CPU address MUX section
5. Declare new_CtrBus in ibex_sys.sv and make the connection between the new module and bus_mux
6. Software: add the memory map for the new module in ibex_core.h:
#define new_BASE_ADDR (Timer_BASE_ADDR + Timer_SIZE)
#define new_SIZE 0x00050
7. Add a helper define
#define new_REG(offset) _REG32(new_BASE_ADDR, offset)
## Python script tools for converting hex program files
1. To convert a program in Intel HEX format generated by Segger Studio to:
- a .mif Intel FPGA memory initialization file, use IHEXtoMIF.py
- a .hex SystemVerilog memory initialization file, use IHEXtoSVhex.py
- a .coe Xilinx memory initialization file, use IHEXtoCOE.py
2. To reload the program hex under power:
run IHEXtoSVhex.py;
set the "COMx" number in UARTboot.py;
connect the UART to the board;
run UARTboot.py, which sends the SVhex file to the IBEX system RAM memory
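Each line the conversion scripts read is an Intel HEX record: a colon, a byte count, a 16-bit address, a record type, the data bytes, and a two's-complement checksum. As a standalone reference sketch (an illustration, not the repo's Python tools), decoding one record looks like:

```javascript
// Parse one Intel HEX record, e.g. ":10010000214601360121470136007EFE09D2190140"
function parseIhexRecord(line) {
  if (line[0] !== ':') throw new Error('record must start with ":"');
  const bytes = [];
  for (let i = 1; i < line.length; i += 2) {
    bytes.push(parseInt(line.slice(i, i + 2), 16));
  }
  const [count, addrHi, addrLo, type] = bytes;
  const data = bytes.slice(4, 4 + count);
  // Checksum: the sum of all bytes, including the checksum itself, is 0 mod 256.
  const sum = bytes.reduce((a, b) => a + b, 0);
  if ((sum & 0xff) !== 0) throw new Error('bad checksum');
  return { type, address: (addrHi << 8) | addrLo, data };
}
```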
## License
Unless otherwise noted, everything in this repository is covered by the Apache
License, Version 2.0 (see LICENSE for full text).
| 46.822785 | 172 | 0.772101 | eng_Latn | 0.924666 |
e1a187fd26e257c54e881b9cc3631d2501894b30 | 4,464 | md | Markdown | articles/virtual-machines/extensions/overview.md | yanxiaodi/azure-docs.zh-cn | fb6386a5930fda2f61c31cfaf755cde1865aeab7 | ["CC-BY-4.0", "MIT"] | 2 | 2021-01-21T04:22:02.000Z | 2022-01-14T01:48:40.000Z | articles/virtual-machines/extensions/overview.md | yanxiaodi/azure-docs.zh-cn | fb6386a5930fda2f61c31cfaf755cde1865aeab7 | ["CC-BY-4.0", "MIT"] | null | null | null | articles/virtual-machines/extensions/overview.md | yanxiaodi/azure-docs.zh-cn | fb6386a5930fda2f61c31cfaf755cde1865aeab7 | ["CC-BY-4.0", "MIT"] | 1 | 2020-11-04T04:36:46.000Z | 2020-11-04T04:36:46.000Z | ---
title: Azure virtual machine extensions and features | Microsoft Docs
description: Learn what Azure VM extensions are and how to use them with Azure virtual machines
services: virtual-machines-linux
documentationcenter: ''
author: axayjo
manager: gwallace
editor: ''
tags: azure-resource-manager
ms.assetid: ''
ms.service: virtual-machines-linux
ms.topic: article
ms.tgt_pltfrm: vm-linux
ms.workload: infrastructure-services
ms.date: 09/12/2019
ms.author: akjosh
ms.openlocfilehash: deb49267a262705370e48e150cc5ed6c4dc04247
ms.sourcegitcommit: f2771ec28b7d2d937eef81223980da8ea1a6a531
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 09/20/2019
ms.locfileid: "71168881"
---
# <a name="azure-virtual-machine-extensions-and-features"></a>Azure virtual machine extensions and features
Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. They let you use an existing image and customize it as part of deployment, saving you the trouble of building a custom image.
The Azure platform hosts many extensions, covering VM configuration, monitoring, security, and utility applications. Publishers take an application, wrap it into an extension, and simplify the installation, so you only need to provide the required parameters.
If an application doesn't exist in the extension repository, there is a large selection of first-party and third-party extensions to choose from, and you can use the Custom Script Extension with your own scripts and commands to configure a VM.
Examples of key scenarios for extensions:
* VM configuration: you can use PowerShell DSC (Desired State Configuration), Chef, Puppet, and the Custom Script Extension to install VM configuration agents and configure the VM.
* AV products, such as Symantec and ESET.
* VM vulnerability tools, such as Qualys, Rapid7, and HPE.
* VM and app monitoring tools, such as Dynatrace, Azure Network Watcher, Site24x7, and Stackify.
Extensions can be bundled with a new VM deployment. For example, they can be part of a larger deployment, configuring applications on VM provisioning, or run against any supported extension operating system post-deployment.
## <a name="how-can-i-find-what-extensions-are-available"></a>How can I find what extensions are available?
You can view available extensions under Extensions in the VM blade in the portal; only a small subset is shown there. For the full list, you can use the CLI tools — see [Discover VM extensions for Linux](features-linux.md) and [Discover VM extensions for Windows](features-windows.md).
## <a name="how-can-i-install-an-extension"></a>How can I install an extension?
Azure VM extensions can be managed using the Azure CLI, Azure PowerShell, Azure Resource Manager templates, and the Azure portal. To try an extension, you can go to the Azure portal, select the Custom Script Extension, then pass in a command or script and run the extension.
If you want to add the same extension via the CLI or a Resource Manager template instead of the portal, see the documentation for the individual extensions, such as the [Windows Custom Script Extension](custom-script-windows.md) and the [Linux Custom Script Extension](custom-script-linux.md).
## <a name="how-do-i-manage-extension-application-lifecycle"></a>How do I manage extension application lifecycle?
You don't need to connect to a VM directly to install or remove an extension. Because the Azure extension application lifecycle is managed outside of the VM and integrated with the Azure platform, you also get integrated status for extensions.
## <a name="anything-else-i-should-be-thinking-about-for-extensions"></a>Anything else I should be thinking about for extensions?
Like any other application, applications installed by extensions have some requirements. Extensions have a list of supported Windows and Linux OSes and require the Azure VM agent to be installed. Some individual VM extension applications may have their own environmental prerequisites, such as access to an endpoint.
## <a name="troubleshoot-extensions"></a>Troubleshoot extensions
Troubleshooting information for each extension can be found in the **Troubleshoot and support** section of that extension's overview. The available troubleshooting information is listed below:
| Namespace | Troubleshooting |
|-----------|-----------------|
| dependencyagent.dependencyagentlinux | [Azure Monitor Dependency for Linux](agent-dependency-linux.md#troubleshoot-and-support) |
| dependencyagent.dependencyagentwindows | [Azure Monitor Dependency for Windows](agent-dependency-windows.md#troubleshoot-and-support) |
| azurediskencryptionforlinux | [Azure Disk Encryption for Linux](azure-disk-enc-linux.md#troubleshoot-and-support) |
| azurediskencryption | [Azure Disk Encryption for Windows](azure-disk-enc-windows.md#troubleshoot-and-support) |
| customscriptextension | [Custom Script for Windows](custom-script-windows.md#troubleshoot-and-support) |
| microsoft.ostcextensions.customscriptforlinux | [Desired State Configuration for Linux](dsc-linux.md#troubleshoot-and-support) |
| microsoft.powershell | [Desired State Configuration for Windows](dsc-windows.md#troubleshoot-and-support) |
| hpccompute.nvidiagpudriverlinux | [NVIDIA GPU Driver Extension for Linux](hpccompute-gpu-linux.md#troubleshoot-and-support) |
| hpccompute.nvidiagpudriverwindows | [NVIDIA GPU Driver Extension for Windows](hpccompute-gpu-windows.md#troubleshoot-and-support) |
| iaasantimalware | [Antimalware Extension for Windows](iaas-antimalware-windows.md#troubleshoot-and-support) |
| enterprisecloud.omsagentforlinux | [Azure Monitor for Linux](oms-linux.md#troubleshoot-and-support) |
| enterprisecloud.microsoftmonitoringagent | [Azure Monitor for Windows](oms-windows.md#troubleshoot-and-support) |
| stackify.linuxagent.stackifylinuxagentextension | [Stackify Retrace for Linux](stackify-retrace-linux.md#troubleshoot-and-support) |
| vmaccessforlinux.microsoft.ostcextensions | [Reset password for Linux (VMAccess)](vmaccess.md#troubleshoot-and-support) |
| microsoft.recoveryservices.vmsnapshot | [Snapshot for Linux](vmsnapshot-linux.md#troubleshoot-and-support) |
| microsoft.recoveryservices.vmsnapshot | [Snapshot for Windows](vmsnapshot-windows.md#troubleshoot-and-support) |
## <a name="next-steps"></a>Next steps
* For more information about how the Linux agent and extensions work, see [Azure VM extensions and features for Linux](features-linux.md).
* For more information about how the Windows guest agent and extensions work, see [Azure VM extensions and features for Windows](features-windows.md).
* To install the Windows guest agent, see [Azure Windows Virtual Machine Agent overview](agent-windows.md).
* To install the Linux agent, see [Azure Linux Virtual Machine Agent overview](agent-linux.md).
| 53.783133 | 142 | 0.785842 | yue_Hant | 0.764349 |
e1a19228669aaad05a54cf5d3b96203c3257dcdc | 1,157 | md | Markdown | _posts/2017-07-21-waiting-for-os-upgrade.md | chrismacp/chrismacp.github.io | 66e2fa9ba2a7d319bbf944721423bb1fdd2166fd | ["MIT"] | null | null | null | _posts/2017-07-21-waiting-for-os-upgrade.md | chrismacp/chrismacp.github.io | 66e2fa9ba2a7d319bbf944721423bb1fdd2166fd | ["MIT"] | null | null | null | _posts/2017-07-21-waiting-for-os-upgrade.md | chrismacp/chrismacp.github.io | 66e2fa9ba2a7d319bbf944721423bb1fdd2166fd | ["MIT"] | null | null | null | ---
title: "Waiting for OS Upgrade"
date: 2017-07-21
header:
image: /assets/images/20170522_jupiter-juno-p5.jpg
caption: "Photo credit: [N.A.S.A.](https://nasa.tumblr.com/post/162828438059/solar-system-things-to-know-this-week)"
category:
- music
tags:
- techno
- "ableton-live"
- "quick-blast"
---
The first post for music on here. I had some fun while waiting for my OS to upgrade
last night. Well to be honest, I was going to try and master a previous track I had made,
but decided to upgrade to the latest macOS Sierra (~~and amazingly it didn't seem to break
lots of stuff this time, well done guys!~~ spoke too soon, screenshot shortcuts are broken
I wonder what else this time, it really does not 'just work'!) and got the bug for making a
new tune while I was waiting and playing about on a JU-06.
Here it is anyway, unfinished as always :)
<iframe width="100%" height="450" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/334337766&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false&visual=true"></iframe>
| 39.896552 | 288 | 0.745895 | eng_Latn | 0.980171 |
e1a1e73bef23259b036fa6e481d09522b1a39ea6 | 11,000 | md | Markdown | src/site/content/en/blog/web-ar/index.md | Upgrade420/web.dev | 777dd3c7673509314b66ccb570e3b5d9f79bb69e | ["Apache-2.0"] | 1 | 2020-12-25T18:31:57.000Z | 2020-12-25T18:31:57.000Z | src/site/content/en/blog/web-ar/index.md | Upgrade420/web.dev | 777dd3c7673509314b66ccb570e3b5d9f79bb69e | ["Apache-2.0"] | null | null | null | src/site/content/en/blog/web-ar/index.md | Upgrade420/web.dev | 777dd3c7673509314b66ccb570e3b5d9f79bb69e | ["Apache-2.0"] | 1 | 2020-05-07T17:32:41.000Z | 2020-05-07T17:32:41.000Z | ---
title: "Augmented reality: You may already know it"
subhead: If you've used the WebXR Device API already, you're most of the way there.
authors:
- joemedley
date: 2020-02-13
hero: hero2.jpg
alt: A person using augmented reality with a smartphone.
description:
If you've already used the WebXR Device API, you'll be happy to know there's
very little new to learn. Entering a WebXR session is largely the same.
Running a frame loop is largely the same. The differences lie in
configurations that allow content to be shown appropriately for augmented
reality.
tags:
- blog
- augmented-reality
- virtual-reality
- webxr
---
The WebXR Device API shipped last fall in Chrome 79. As stated then, Chrome's
implementation of the API is a work in progress. Chrome is happy to announce
that some of the work is finished. In Chrome 81, two new features have arrived:
* [Augmented reality session types](https://www.chromestatus.com/features/5450241148977152).
* [Hit testing](https://www.chromestatus.com/features/4755348300759040).
This article covers augmented reality. If you've already used the WebXR Device
API, you'll be happy to know there's very little new to learn. Entering a WebXR
session is largely the same. Running a frame loop is largely the same. The
differences lie in configurations that allow content to be shown appropriately
for augmented reality. If you're not familiar with the basic concepts of WebXR,
you should read my earlier posts on the WebXR Device API, or at least be
familiar with the topics covered therein. You should know how to [request and
enter a session](https://web.dev/vr-comes-to-the-web/) and you should know how
to run [a frame loop](https://web.dev/vr-comes-to-the-web-pt-ii).
For information on hit testing, see the companion article [Positioning virtual
objects in real-world views](https://web.dev/ar-hit-test). The code in this
article is based on the Immersive AR Session sample
([demo](https://immersive-web.github.io/webxr-samples/immersive-ar-session.html)
[source](https://github.com/immersive-web/webxr-samples/blob/master/immersive-vr-session.html)) from
the Immersive Web Working Group's [WebXR Device API
samples](https://immersive-web.github.io/webxr-samples/).
Before diving into the code you should use the [Immersive AR Session
sample](https://immersive-web.github.io/webxr-samples/immersive-ar-session.html)
at least once. You'll need a modern Android phone with Chrome 81 or later.
## What's it useful for?
Augmented reality will be a valuable addition to a lot of existing or new web
pages by allowing them to implement AR use cases without leaving the browser.
For example, it can help people learn on education sites, and allow potential
buyers to visualize objects in their home while shopping.
Consider the second use case. Imagine simulating placing a life-size
representation of a virtual object in a real scene. Once placed, the image stays
on the selected surface, appears the size it would be if the actual item were on
that surface, and allows the user to move around it as well as closer to it or
farther from it. This gives viewers a deeper understanding of the object than is
possible with a two dimensional image.
I'm getting a little ahead of myself. To actually do what I've described, you
need AR functionality and some means of detecting surfaces. This article covers
the former. The accompanying article on the WebXR Hit Test API (linked to above)
covers the latter.
## Requesting a session
Requesting a session is very much like what you've seen before. First find out
if the session type you want is available on the current device by calling
`xr.isSessionSupported()`. Instead of requesting `'immersive-vr'` as before,
request `'immersive-ar'`.
```js/1
if (navigator.xr) {
const supported = await navigator.xr.isSessionSupported('immersive-ar');
if (supported) {
xrButton.addEventListener('click', onButtonClicked);
xrButton.textContent = 'Enter AR';
xrButton.enabled = supported; // supported is Boolean
}
}
```
As before, this enables an 'Enter AR' button. When the user clicks it, call
`xr.requestSession()`, also passing `'immersive-ar'`.
```js/3,6
let xrSession = null;
function onButtonClicked() {
if (!xrSession) {
navigator.xr.requestSession('immersive-ar')
.then((session) => {
xrSession = session;
xrSession.isImmersive = true;
xrButton.textContent = 'Exit AR';
onSessionStarted(xrSession);
});
} else {
xrSession.end();
}
}
```
### A convenience property
You've probably noticed that I highlighted two lines in the last code sample.
The `XRSession` object would seem to have a property called `isImmersive`. This
is a convenience property I've created myself, and not part of the spec. I'll
use it later to make decisions about what to show the viewer. Why isn't this
property part of the API? Because your app may need to track this property
differently so the spec authors decided to keep the API clean.
## Entering a session
Recall what `onSessionStarted()` looked like in my earlier article:
```js
function onSessionStarted(xrSession) {
  xrSession.addEventListener('end', onSessionEnded);
  let canvas = document.createElement('canvas');
  gl = canvas.getContext('webgl', { xrCompatible: true });
  xrSession.updateRenderState({
    baseLayer: new XRWebGLLayer(xrSession, gl)
  });
  xrSession.requestReferenceSpace('local-floor')
    .then((refSpace) => {
      xrRefSpace = refSpace;
      xrSession.requestAnimationFrame(onXRFrame);
    });
}
```
I need to add a few things to account for rendering augmented reality.

### Turn off the background

First, I'm going to determine whether I need the background. This is the
first place I'm going to use my convenience property.
```js/3-5
function onSessionStarted(xrSession) {
  xrSession.addEventListener('end', onSessionEnded);
  if (xrSession.isImmersive) {
    removeBackground();
  }
  let canvas = document.createElement('canvas');
  gl = canvas.getContext('webgl', { xrCompatible: true });
  xrSession.updateRenderState({
    baseLayer: new XRWebGLLayer(xrSession, gl)
  });
  refSpaceType = xrSession.isImmersive ? 'local' : 'viewer';
  xrSession.requestReferenceSpace(refSpaceType).then((refSpace) => {
    xrRefSpace = refSpace;
    xrSession.requestAnimationFrame(onXRFrame);
  });
}
```
### Reference spaces
My earlier articles skimmed over reference spaces. The sample I'm describing
uses two of them, so it's time to correct that omission.
{% Aside %}
A full explanation of reference spaces would be longer than I can provide
here. I'm only going to discuss reference spaces in regards to augmented
reality.
{% endAside %}
A reference space describes the relationship between the virtual world and the
user's physical environment. It does this by:
* Specifying the origin for the coordinate system used for expressing positions
in the virtual world.
* Specifying whether the user is expected to move within that coordinate system.
* Specifying whether that coordinate system has pre-established boundaries. (The examples
shown here do not use coordinate systems with pre-established boundaries.)
For all reference spaces, the X coordinate expresses left and right, the Y
expresses up and down, and Z expresses forward and backward. Positive values are
right, up, and backward, respectively.
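In practice this means "in front of the viewer" is the negative Z direction. The helper below is just an illustration of the sign convention, not code from the sample:

```js
// Illustration only: WebXR uses a right-handed coordinate system where
// forward is -Z, so content placed "in front of" the viewer gets a
// negative Z value. Distances are in meters.
function positionInFrontOf(origin, meters) {
  return {
    x: origin.x,
    y: origin.y,
    z: origin.z - meters, // forward is -Z; +Z points back toward the viewer
  };
}

// A point 2 meters in front of a viewer whose head is at the origin:
positionInFrontOf({ x: 0, y: 1.6, z: 0 }, 2); // { x: 0, y: 1.6, z: -2 }
```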
The coordinates returned by `XRFrame.getViewerPose()` depend on the requested
[reference space
type](https://developer.mozilla.org/en-US/docs/Web/API/XRReferenceSpace#Reference_space_types).
More about that when we get to the frame loop. Right now we need to select a
reference type that's appropriate for augmented reality. Again, this uses my
convenience property.
```js/0,14-15
let refSpaceType;
function onSessionStarted(xrSession) {
  xrSession.addEventListener('end', onSessionEnded);
  if (xrSession.isImmersive) {
    removeBackground();
  }
  let canvas = document.createElement('canvas');
  gl = canvas.getContext('webgl', { xrCompatible: true });
  xrSession.updateRenderState({
    baseLayer: new XRWebGLLayer(xrSession, gl)
  });
  refSpaceType = xrSession.isImmersive ? 'local' : 'viewer';
  xrSession.requestReferenceSpace(refSpaceType).then((refSpace) => {
    xrRefSpace = refSpace;
    xrSession.requestAnimationFrame(onXRFrame);
  });
}
```
If you've visited the [Immersive AR Session
Sample](https://immersive-web.github.io/webxr-samples/immersive-ar-session.html)
you'll notice that initially the scene is static and not at all augmented
reality. You can drag and swipe with your finger to move around the scene. If
you click "START AR", the background drops out and you can move around the scene
by moving the device. The modes use different reference space types. The
highlighted text above shows how this is selected. It uses the following
reference space types:
`local` - The native origin is at the viewer's position at the time of session
creation. This means the experience doesn't necessarily have a well-defined
floor and the exact position of the origin may vary by platform. Though there
are no pre-established boundaries to the space, it's expected that content can
be viewed with no movement other than rotation. As you can see from our own AR
example, some movement within the space may be possible.
`viewer` - Used most frequently for content presented inline in the page, this
space follows the viewing device. When passed to `getViewerPose()` it provides no
tracking, and thus always reports a pose at the origin unless the application
modifies it with `XRReferenceSpace.getOffsetReferenceSpace()`. The sample uses
this to enable touch-based panning of the camera.
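To give a feel for what that panning involves, here's a rough sketch of turning a touch drag into yaw and pitch angles that could feed `getOffsetReferenceSpace()`. The sensitivity constant and the clamping are my own assumptions; the actual sample's math may differ:

```js
// Assumed sensitivity: how many radians one pixel of drag rotates the view.
const RADIANS_PER_PIXEL = Math.PI / 500;

function dragToLookAngles(deltaX, deltaY, current) {
  const yaw = current.yaw + deltaX * RADIANS_PER_PIXEL;
  // Clamp pitch so the camera can't flip upside down.
  const pitch = Math.min(Math.PI / 2,
    Math.max(-Math.PI / 2, current.pitch + deltaY * RADIANS_PER_PIXEL));
  return { yaw, pitch };
}
```

The resulting angles would then be baked into an `XRRigidTransform` and passed to `XRReferenceSpace.getOffsetReferenceSpace()` each time the drag updates.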
## Running a frame loop
Conceptually, nothing changes from what I did in the VR session described in my
earlier articles. Pass the reference space you requested to `XRFrame.getViewerPose()`.
The returned `XRViewerPose` will be relative to that reference space. Using
`viewer` as the default allows a page to show content previews before user
consent is requested for AR or VR. This illustrates an important point: the
inline content uses the same frame loop as the immersive content, cutting down
the amount of code that needs to be maintained.
```js/3
function onXRFrame(hrTime, xrFrame) {
  let xrSession = xrFrame.session;
  xrSession.requestAnimationFrame(onXRFrame);
  let xrViewerPose = xrFrame.getViewerPose(xrRefSpace);
  if (xrViewerPose) {
    // Render based on the pose.
  }
}
```
## Conclusion
This series of articles only covers the basics of implementing immersive content
on the web. Many more capabilities and use cases are presented by the Immersive
Web Working Group's [WebXR Device API
samples](https://immersive-web.github.io/webxr-samples/). We've also just
published a [hit test article](/ar-hit-test/) which explains an API
for detecting surfaces and placing virtual items in a real-world camera view.
Check them out and watch the web.dev blog for more
articles in the year to come.
Photo by [David Grandmougin](https://unsplash.com/@davidgrdm) on [Unsplash](https://unsplash.com/)
### [[ << Back to parent ]](index.md)
# CreateTextImageShaded
> Creates a text image using the Shaded rendering mode
```lua
image = CreateTextImageShaded(font, text, fgColor, bgColor)
```
## Parameters:
+ font [userdata-FONT]: the font data
+ text [string]: the text content
+ fgColor [table]: an RGBA table describing the text color; each component ranges from 0 to 255
+ bgColor [table]: an RGBA table describing the text background color; each component ranges from 0 to 255
## Return value:
+ image [userdata-IMAGE]: returns the image data on success, or nil on failure
## Example
```lua
```
# ZXing.Test
## FMX Professional Barcode Scanner with **Autofocus**

# Version
Compiles under **Delphi Rio 10.3.2**.
Tested on:
- Android 7.0 Nougat (API 24)
- Android 7.1 Nougat (API 25)
- Android 8.0 Oreo (API 26)
- Android 8.1 Oreo (API 27)
- Android 9.0 Pie (API 28)
# References
1. https://github.com/Spelt/ZXing.Delphi/issues/78
2. https://quality.embarcadero.com/browse/RSP-10592
# tasks
Store your repeated tasks.
### Search settings

---
*Personal notes*
# reportabug
A Python tool for collecting information when reporting bugs.
[](https://pypi.org/project/reportabug)
[](https://github.com/ambv/black)
## Installation
```
python -m pip install git+https://github.com/zooba/reportabug
python -m pip install reportabug
```
Installing directly from GitHub is recommended for now, as not every improvement
is being released to PyPI.
## Usage
```
reportabug [--format FORMAT] [MODULE NAMES]
python -m reportabug [--format FORMAT] [MODULE NAMES]
```
The report will be output to the console. You should copy-paste this into
your bug report.
`FORMAT` may be one of `ghmarkdown` (default, also `ghmd` and `ghm`),
`markdown` (also `md` and `m`), or `text` (also `t`). In general, `ghmarkdown`
will be valid and optimised for GitHub issues, while `markdown` will be more
pure.
On Windows, you can pipe to `clip.exe` to store the output on the clipboard.
```
python -m reportabug [MODULE NAMES] | clip
```
Some personal information will be hidden, though a non-reversible summary of its contents is included as this information may be important. **Remember to review your report for personal information before sharing.**
See [issue #1](https://github.com/zooba/reportabug/issues/1) for an example report.
## API
Currently, `reportabug` has no public API. However, modules specified on the command line may expose a `_reportabug_info` generator to provide additional info.
```python
def _reportabug_info(arg):
yield 'summary', 'summary line of text'
yield 'key', VALUE
```
Each key/value pair will be added to the result section for the module. If the `summary` key exists, it will be added to a summary section if one exists for the selected output format.
The `arg` parameter is currently undefined, but may be used in future.
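As a concrete (hypothetical) example, a module named `mymodule` could expose the hook like this — the hook name and the key/value protocol come from the docs above, while the module contents are invented for illustration:

```python
# mymodule.py - hypothetical module exposing the reportabug hook.
VERSION = "1.2.3"

def _reportabug_info(arg):
    # `arg` is currently undefined by reportabug, so it is ignored here.
    yield "summary", "mymodule {} (pure-python backend)".format(VERSION)
    yield "version", VERSION
    yield "backend", "pure-python"

# A consumer of the hook can simply collect the pairs into a dict:
info = dict(_reportabug_info(None))
print(info["summary"])  # mymodule 1.2.3 (pure-python backend)
```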
## Contributing
Contributions are welcome. Feel free to file an issue or PR.
Requests to add further information to the report should include supporting evidence, such as a bug that would have been diagnosed more quickly with the additional information.
## Privacy
No information is transmitted by this tool. Please review and remove personal information from the generated reports before sharing with other people.
# FormBuilder
<article class="message is-warning">
<div class="message-body">
This library is marked in **Alpha** stage but is **already used in production**.
I released it in **Alpha** so we can work as a community on improving it and still be able to introduce changes if needed.
</div>
</article>
## Introduction
When working with forms in an Elmish application, we end up writing a lot of lines. I explained the situation in [my keynote at FableConf 2018](https://www.youtube.com/watch?v=Ry4qQxU0380).
The conclusion was that to [manage a basic form](https://slides.com/mangelmaxime/fableconf_2018_keynote/live#/2/1) we need to write at least **23 lines of code per field**, with a lot of **duplication**.
This library is trying to solve that problem.
## Demo
<div class="columns">
<div class="column is-8 is-offset-2">
<div id="form_demo"></div>
</div>
</div>
<div class="has-text-centered">
*[View the code](https://github.com/MangelMaxime/Thoth/blob/master/demos/Thoth.Elmish.Demo/src/FormBuilder.fs)*
</div>
<script type="text/javascript" src="../demos/vendors.js"></script>
<script type="text/javascript" src="../demos/demo.js"></script>
<script type="text/javascript">
Demos.FormBuilder("form_demo");
</script>
## How to use?
### Installation
Add the `Thoth.Elmish.FormBuilder` dependency in your [Paket](https://fsprojects.github.io/Paket/) files: `paket add Thoth.Elmish.FormBuilder --project <your project>`.
If you are trying this library for the first time you probably want to add `Thoth.Elmish.FormBuilder.BasicFields` too. It provides some ready to use fields.
In order to use the default view of `Thoth.Elmish.FormBuilder.BasicFields`, you need to include [Bulma](http://bulma.io/) in your project.
### BasicFields - Basic usage
<div class="message is-info">
<div class="message-header">
Information
</div>
<div class="message-body">
In this part, we are going to use `Thoth.Elmish.FormBuilder.BasicFields` in order to have ready to use fields.
Later, we will learn how to build custom fields.
</div>
</div>
---------------
1. Register the message dedicated to the `FormBuilder`
```fsharp
type Msg =
    | OnFormMsg of FormBuilder.Types.Msg
    // ...
```
---------------
2. Store the `FormBuilder` instance in your model
```fsharp
type Model =
    { FormState : FormBuilder.Types.State
      // ...
    }
```
---------------
3. Create your form using the builder API
```fsharp
let (formState, formConfig) =
    Form<Msg>
        .Create(OnFormMsg)
        .AddField(
            BasicInput
                .Create("name")
                .WithLabel("Name")
                .IsRequired()
                .WithDefaultView()
        )
        .AddField(
            BasicSelect
                .Create("favLang")
                .WithLabel("Favorite language")
                .WithValuesFromServer(getLanguages)
                .WithPlaceholder("")
                .IsRequired("I know it's hard but you need to choose")
                .WithDefaultView()
        )
        // When you are done with adding fields, you need to call `.Build()`
        .Build()
<div class="message is-warning">
<div class="message-body">
Each field needs to have a unique `name`. The name is used to link the `label` with its form elements, and it is also used as the key in the generated JSON.
If you don't set a unique `name` per field, you will see this message in the console:
```fsharp
Each field needs to have a unique name. I found the following duplicate name:
- name
- description
```
</div>
</div>
---------------
4. Initialize the `FormBuilder` in your init function
<div class="message is-warning">
<div class="message-body">
<span>
<span class="icon has-text-warning is-medium"><i class="fa fa-lg fa-warning"></i></span>Never store <code>formConfig</code> in your model
</span>
</div>
</div>
```fsharp
let private init _ =
    let (formState, formCmds) = Form.init formConfig formState
    { FormState = formState }, Cmd.map OnFormMsg formCmds
```
---------------
5. Handle `OnFormMsg` in your update function
```fsharp
let private update msg model =
    match msg with
    | OnFormMsg msg ->
        let (formState, formCmd) = Form.update formConfig msg model.FormState
        { model with FormState = formState }, Cmd.map OnFormMsg formCmd
    // ...
```
---------------
6. Render your form in your view function
```fsharp
let private formActions (formState : FormBuilder.Types.State) dispatch =
    div [ ]
        [ button [ OnClick (fun _ ->
                    dispatch Submit
                  ) ]
                 [ str "Submit" ] ]

let private view model dispatch =
    Form.render
        { Config = formConfig
          State = model.FormState
          Dispatch = dispatch
          ActionsArea = (formActions model.FormState dispatch)
          Loader = Form.DefaultLoader }
```
### BasicFields - Custom views
If you are not using [Bulma](http://bulma.io/) in your project, `Thoth.Elmish.FormBuilder.BasicFields` provides a `WithCustomView` API allowing you to customize the field view.
Example:
```fsharp
.AddField(
    BasicInput
        .Create("name")
        .WithLabel("Name")
        .IsRequired()
        .WithCustomView(fun (state : Types.FieldState) (dispatch : Types.IFieldMsg -> unit) ->
            let state : Input.State = state :?> Input.State
            // You can write your view here
            input [ Value state.Value
                    OnChange (fun ev -> ev.Value |> Input.ChangeValue |> dispatch) ]
        )
)
```
### Server side validation
In order to support server side validation, the library defines the type `ErrorDef`.
```fsharp
type ErrorDef =
    { Text : string
      Key : string }
```
- `Text` is the error message to display
- `Key` is the name of the field related to the error.
<article class="message is-info">
<div class="message-body">
I included the `Decoder` and `Encoder` definitions for use in your Fable client.
If you need it on the server, you will need to copy the type definition for now.
</div>
</article>
When you receive a `ErrorDef list` from your server, you can call `Form.setErrors` to display them in the form.
Example:
```fsharp
| CreationResponse.Errors errors ->
    let newFormState =
        model.State
        |> Form.setLoading false
        |> Form.setErrors formConfig errors
    { model with State = newFormState }, Cmd.none
```
### Create a custom field
#### Prelude
In this section, you will learn:
- how to create a custom field
- the conventions I use when designing a field (I encourage you to follow them 😊)
- general comments on why I structure my code in a specific way
You will see usage of boxing `box` and casting `:?>`. If you want to learn more about that after reading this section you can take a look at the [F.A.Q.](#can-we-avoid-boxing-casting)
#### File structure
When designing a field I encourage you to follow this structure:
```fsharp
namespace MyCustomFieldLibrary

[<RequireQualifiedAccess>]
module MyField =
    // Here goes the logic for your field

type MyField private (state : MyField.State) =
    // Here goes the Fluent API that will be exposed and used to register a field in a Form
    static member Create(name : string) =
        // ...
```
By using this architecture, you can then use your API like this:
```fsharp
module MyApp.PageA

open Thoth.Elmish.FormBuilder
open MyCustomFieldLibrary

let formState, formConfig =
    Form<Msg>
        .Create(OnFormMsg)
        .AddField(
            MyField
                .Create("name")
                // ...
        )
```
The benefits are:
- Each field consists of a **single file**
- By using **1 open statement** you get access to all your fields API
*This is the structure used in [Thoth.Elmish.FormBuilder.BasicFields](https://github.com/MangelMaxime/Thoth/tree/master/src/Thoth.Elmish.FormBuilder.BasicFields)*
#### Implement your field logic and config contract
Designing a custom field is similar to designing an Elmish component.
Here is the contract that all fields need to implement. Don't worry, we are going to go step by step.
```fsharp
/// Contract for registering fields in the `Config`
type FieldConfig =
    { View : FieldState -> (IFieldMsg -> unit) -> React.ReactElement
      Update : FieldMsg -> FieldState -> FieldState * (string -> Cmd<Msg>)
      Init : FieldState -> FieldState * (string -> Cmd<Msg>)
      Validate : FieldState -> FieldState
      IsValid : FieldState -> bool
      ToJson : FieldState -> string * Encode.Value
      SetError : FieldState -> string -> FieldState }
```
As an example of a custom field, we will re-implement a basic `Input`.
---------------
1. `State` and `Validator` types
*`State` is similar to `Model` in Elmish terms*
**Every** field **needs to have** a `Name` property. This will be used later to identify each field uniquely and to generate the JSON representation of the field.
```fsharp
type State =
    { Label : string
      Value : string
      Type : string
      Placeholder : string option
      Validators : Validator list
      ValidationState : ValidationState
      Name : string }

and Validator = State -> ValidationState
```
---------------
2. `Msg` type
As in Elmish, your fields are going to react to `Msg`. But you need to interface with `IFieldMsg`.
```fsharp
type Msg =
    | ChangeValue of string
    interface IFieldMsg
```
---------------
3. `init` function
This function will be called when initializing your forms.
For example, if your field needs to fetch data from the server you can trigger the request here. [See the select field for an example](https://github.com/MangelMaxime/Thoth/blob/master/src/Thoth.Elmish.FormBuilder.BasicFields/Select.fs)
```fsharp
let private init (state : FieldState) =
    state, FormCmd.none
```
---------------
4. `validate` and `setError` function
If you used the same names for `ValidationState` and `Validators` properties, you can copy/paste these functions in all your field definitions.
*I didn't find a way to make it generic for any field*
```fsharp
let private validate (state : FieldState) =
    let state : State = state :?> State

    let rec applyValidators (validators : Validator list) (state : State) =
        match validators with
        | validator::rest ->
            match validator state with
            | Valid -> applyValidators rest state
            | Invalid msg ->
                { state with ValidationState = Invalid msg }
        | [] -> state

    applyValidators state.Validators { state with ValidationState = Valid } |> box

let private setError (state : FieldState) (message : string) =
    let state : State = state :?> State
    { state with ValidationState = Invalid message } |> box
```
---------------
5. `isValid` function
This function will be called to check if your field is in a valid state or not.
```fsharp
let private isValid (state : FieldState) =
    let state : State = state :?> State
    state.ValidationState = Valid
```
---------------
6. `toJson` function
This function will be called by the form in order to generate the JSON representation of your field.
```fsharp
let private toJson (state : FieldState) =
    let state : State = state :?> State
    state.Name, Encode.string state.Value
```
---------------
7. `update` function
Similar to Elmish, this is called for updating your `State` when receiving a `Msg` for this field.
```fsharp
let private update (msg : FieldMsg) (state : FieldState) =
    // Cast the received message into its real type
    let msg = msg :?> Msg
    // Cast the received state into its real type
    let state = state :?> State

    match msg with
    | ChangeValue newValue ->
        { state with Value = newValue }
        |> validate
        // We need to box the returned state
        |> box, FormCmd.none
```
**Notes**
- You need to call `validate` yourself after updating your model. This is required because not every field message needs to trigger a validation.
- Instead of using the `Cmd` module from Elmish, you need to use `FormCmd`. This module implements the same API as the `Cmd` module.
---------------
8. `view` function
```fsharp
let private view (state : FieldState) (dispatch : IFieldMsg -> unit) =
    let state : State = state :?> State

    let className =
        if isValid state then
            "input"
        else
            "input is-danger"

    div [ Class "field" ]
        [ label [ Class "label"
                  HtmlFor state.Name ]
            [ str state.Label ]
          div [ Class "control" ]
            [ input [ Value state.Value
                      Placeholder (state.Placeholder |> Option.defaultValue "")
                      Id state.Name
                      Class className
                      OnChange (fun ev ->
                          ChangeValue ev.Value |> dispatch
                      ) ] ]
          span [ Class "help is-danger" ]
            [ str state.ValidationState.Text ] ]
```
---------------
9. Expose your `config`
```fsharp
let config : FieldConfig =
    { View = view
      Update = update
      Init = init
      Validate = validate
      IsValid = isValid
      ToJson = toJson
      SetError = setError }
```
#### Expose a fluent API
See the [F.A.Q.](#why-use-a-fluent-api) for why I chose to expose a fluent API.
1. In order to design an immutable fluent API, you need to mark your `constructor` as `private`.
```fsharp
type BasicInput private (state : Input.State) =
```
---------------
2. Expose a `static member Create(name : string)`
<span>
<span class="icon has-text-info"><i class="fa fa-lg fa-info"></i></span>Each field should have a <code>name</code> property as recommended in HTML5. This name will be used to identify the field for dispatching the messages in your form.
</span>
```fsharp
static member Create(name : string) =
    BasicInput
        { Label = ""
          Value = ""
          Type = "text"
          Placeholder = None
          Validators = [ ]
          ValidationState = Valid
          Name = name }
```
---------------
3. Create a member to return a `FieldBuilder`
```fsharp
member __.WithDefaultView () : FieldBuilder =
    { Type = "basic-input"
      State = state
      Name = state.Name
      Config = Input.config }
```
**Notes**
- The `Type` value needs to be a unique name to identify your field type. For example, `basic-input`, `fulma-input`, `my-lib-special-dropdown`, etc.
- The `Config` properties refer to the exposed config you wrote earlier.
---------------
4. Create one member per property you want to customize
Here are some examples:
```fsharp
member __.WithLabel (label : string) =
    BasicInput { state with Label = label }

member __.WithPlaceholder (placeholder : string) =
    BasicInput { state with Placeholder = Some placeholder }

member __.IsRequired (?msg : string) =
    let msg = defaultArg msg "This field is required"

    let validator (state : Input.State) =
        if String.IsNullOrWhiteSpace state.Value then
            Invalid msg
        else
            Valid

    BasicInput { state with Validators = state.Validators @ [ validator ] }

member __.AddValidator (validator) =
    BasicInput { state with Validators = state.Validators @ [ validator ] }
```
<article class="message is-success has-text-centered">
<div class="message-header">
🎉 Congrats 🎉
</div>
<div class="message-body">
You now have a **working field** with a **flexible API** exposed
</div>
</article>
## API
### FormBuilder.Types
| Types | Description |
|---|---|
| `ErrorDef` | Error representation to support server side validation |
| `ValidationState` | Used to describe if a field is `Valid` or `Invalid` with the message to display |
| `IFieldMsg` | Interface to be implemented by any field `Msg` |
| `FieldState` | Type alias for the field `State`, should be cast |
| `FieldMsg` | Type alias for the field `Msg`, should be cast |
| `Field` | Record to register a field in a `Form` instance |
| `Msg` | Internal `Msg` used by the Form library |
| `State` | Track current state of the Form |
| `FieldConfig` | Contract for registering fields in the `Config` |
| `Config` | Configuration for the Form |
### FormBuilder.Form
| Functions | Description |
|---|---|
| `Form.init` | `init` function to call from your `init` to initialize the form |
| `Form.update` | `update` function to call when you received a message for the form |
| `Form.render` | Render the form in your view |
| `Form.validate` | Validate the model and check if it's valid |
| `Form.toJson` | Generate a JSON representation from the current state |
| `Form.setLoading` | Set the loading state of the form |
| `Form.isLoading` | Check if the form is loading |
| `Form.setErrors` | Set error for each field based on a `ErrorDef list` |
### FormBuilder.FormCmd
| Functions | Description |
|---|---|
| `FormCmd.none` | None - no commands, also known as `[]` |
| `FormCmd.ofMsg` | Command to issue a specific message |
| `FormCmd.map` | When emitting the message, map to another type |
| `FormCmd.batch` | Aggregate multiple commands |
| `FormCmd.ofAsync` | Command that will evaluate an async block and map the result into success or error (of exception) |
| `FormCmd.ofFunc` | Command to evaluate a simple function and map the result into success or error (of exception) |
| `FormCmd.performFunc` | Command to evaluate a simple function and map the success to a message discarding any possible error |
| `FormCmd.attemptFunc` | Command to evaluate a simple function and map the error (in case of exception) |
| `FormCmd.ofPromise` | Command to call `promise` block and map the results |
## F.A.Q.
### Can we avoid boxing / casting?
This library is using boxing / casting a lot in order to allow us to store different types in a common list. I tried to use interface for `FieldConfig` in order to have something like:
```fsharp
type FieldConfig<'State, 'Msg> =
    abstract member View : 'State * ('Msg -> unit) -> obj
    abstract member Update : 'Msg * 'State -> 'State * (string -> Cmd<'Msg>)
    abstract member Init : 'State -> 'State * (string -> Cmd<'Msg>)
    abstract member Validate : 'State -> 'State
    abstract member IsValid : 'State -> bool
    abstract member ToJson : 'State -> string * Encode.Value
    abstract member SetError : 'State * string -> 'State
```
But then I didn't find a way to store all the fields `FieldConfig<'State, 'Msg>` in a list inside `Config<'AppMsg>`.
If you find a way to either hide the boxing / casting things from the user view or to make everything strongly typed please open an issue to discuss it.
### Why use a fluent API?
When writing this library I explored several ways for building the DSL. Here is my analysis:
<table>
<thead>
<tr>
<th style="text-align:center"></th>
<th style="text-align:center">
<span>
Computation Expression
</span>
</th>
<th style="text-align:center">
<span>
Pipeline
</span>
</th>
<th style="text-align:center">
<span>
Fluent
</span>
</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:center">
Easy to create
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-danger"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-warning"></i>
</td>
</tr>
<tr>
<td style="text-align:center">
Easy to extend
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-danger"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
</tr>
<tr>
<td style="text-align:center">
Terse
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-warning"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-danger"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
</tr>
<tr>
<td style="text-align:center">
Discoverability
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-warning"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
</tr>
<tr>
<td style="text-align:center">
<span>
Naturally follows indentation
<span class="icon tooltip is-tooltip-multiline has-text-grey-light" data-tooltip="Evaluation is based on how easy it is to distinguish the different blocks">
<i class="fa fa-question-circle"></i>
</span>
</span>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-warning"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
</tr>
<tr>
<td style="text-align:center">
<span>
Allow optional arguments
<span class="icon tooltip is-tooltip-multiline has-text-grey-light" data-tooltip="This is useful for the validators. For example, you can make a 'custom error message' optional">
<i class="fa fa-question-circle"></i>
</span>
</span>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-danger"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-danger"></i>
</td>
<td style="text-align:center">
<i class="fa fa-circle has-text-success"></i>
</td>
</tr>
</tbody>
</table>
### Am I forced to use Bulma/Fulma?
No, I used `Bulma` in `Thoth.Elmish.FormBuilder.BasicFields` because it was easier for me as I already know this framework. You can use `WithCustomView` to customize the views.
```fsharp
BasicInput
.Create("condition")
// Here you can customize the view function
.WithCustomView(fun state dispatch ->
let state = state :?> Checkbox.State
div [ ]
[ label [ Class "my-custom-label" ]
[ str state.Label ]
input [ Class "my-custom-input"
                // other properties
] ]
)
```
### Will there be a Fulma-based library?
Yes, I am already working on it, but it is not yet ready for a public release: I want to support all or most of Bulma's features, and that takes time to design.
### Is there any CSS included?
`Thoth.Elmish.FormBuilder` has been designed to be really thin and not tied to a specific CSS framework.
The only special case is the `DefaultLoader`: if you use it, the library injects **7 lines** of CSS into your `document`.
If you use a `CustomLoader`, no CSS is injected.
| 30.494297 | 236 | 0.615835 | eng_Latn | 0.940188 |
e1a7d91022a387dcad6fc9a0870bbb13c6102b84 | 3,292 | md | Markdown | README.md | nuernbergerA/missing-livewire-assertions | 847b904f86a9f9b1a447e423753c33131b6b6842 | [
"MIT"
] | null | null | null | README.md | nuernbergerA/missing-livewire-assertions | 847b904f86a9f9b1a447e423753c33131b6b6842 | [
"MIT"
] | null | null | null | README.md | nuernbergerA/missing-livewire-assertions | 847b904f86a9f9b1a447e423753c33131b6b6842 | [
"MIT"
] | null | null | null | 
# This Package Adds Missing Livewire Test Assertions
[](https://packagist.org/packages/christophrumpel/missing-livewire-assertions)
[](https://github.com/christophrumpel/missing-livewire-assertions/actions?query=workflow%3Arun-tests+branch%3Aproduction)
[](https://github.com/christophrumpel/missing-livewire-assertions/actions?query=workflow%3A"Check+%26+fix+styling"+branch%3Aproduction)
[](https://packagist.org/packages/christophrumpel/missing-livewire-assertions)
This package adds some nice new Livewire assertions that I was missing while testing my Livewire applications. If you want to know more about WHY I needed them, check out my [blog article](https://christoph-rumpel.com/2021/4/how-I-test-livewire-components).
## Installation
You can install the package via composer:
```bash
composer require christophrumpel/missing-livewire-assertions
```
## Usage
The new assertions get added automatically, so you can use them immediately.
### Check if a Livewire property is wired to an HTML field
```php
Livewire::test(FeedbackForm::class)
->assertPropertyWired('email');
```
### Check if a Livewire method is wired to an HTML field
```php
Livewire::test(FeedbackForm::class)
->assertMethodWired('submit');
```
### Check if a Livewire component contains another Livewire component
```php
Livewire::test(FeedbackForm::class)
->assertContainsLivewireComponent(CategoryList::class);
```
You can use the component tag name as well:
```php
Livewire::test(FeedbackForm::class)
->assertContainsLivewireComponent('category-list');
```
### Check if a Livewire component contains a Blade component
```php
Livewire::test(FeedbackForm::class)
->assertContainsBladeComponent(Button::class);
```
You can use the component tag name as well:
```php
Livewire::test(FeedbackForm::class)
->assertContainsBladeComponent('button');
```
### Check to see if a string comes before another string
```php
Livewire::test(FeedbackForm::class)
->assertSeeBefore('first string', 'second string');
```
## Testing
```bash
composer test
```
## Changelog
Please see [CHANGELOG](CHANGELOG.md) for more information on what has changed recently.
## Contributing
Please see [CONTRIBUTING](.github/CONTRIBUTING.md) for details.
## Security Vulnerabilities
Please review [our security policy](../../security/policy) on how to report security vulnerabilities.
## Credits
- [Christoph Rumpel](https://github.com/christophrumpel)
- [All Contributors](../../contributors)
## License
The MIT License (MIT). Please see [License File](LICENSE.md) for more information.
| 33.591837 | 305 | 0.771567 | eng_Latn | 0.593488 |
e1a8470d2b2668dac5676d17d58a837a09e7c217 | 331 | md | Markdown | _posts/2020-11-06-東莞工業區大量空廠房被拆除,產業升級搞房地產?無人機下一片廢墟.md | NodeBE4/society | 20d6bc69f2b0f25d6cc48a361483263ad27f2eb4 | [
"MIT"
] | 1 | 2020-09-16T02:05:28.000Z | 2020-09-16T02:05:28.000Z | _posts/2020-11-06-東莞工業區大量空廠房被拆除,產業升級搞房地產?無人機下一片廢墟.md | NodeBE4/society | 20d6bc69f2b0f25d6cc48a361483263ad27f2eb4 | [
"MIT"
] | null | null | null | _posts/2020-11-06-東莞工業區大量空廠房被拆除,產業升級搞房地產?無人機下一片廢墟.md | NodeBE4/society | 20d6bc69f2b0f25d6cc48a361483263ad27f2eb4 | [
"MIT"
] | null | null | null | ---
layout: post
title: "Large numbers of empty factory buildings demolished in Dongguan's industrial zones. Industrial upgrading or a real-estate play? An expanse of ruins under the drone"
date: 2020-11-06T00:05:50.000Z
author: 石炳锋
from: https://www.youtube.com/watch?v=DuTKbaziE9w
tags: [ 石炳锋 ]
categories: [ 石炳锋 ]
---
<!--1604621150000-->
[Large numbers of empty factory buildings demolished in Dongguan's industrial zones. Industrial upgrading or a real-estate play? An expanse of ruins under the drone](https://www.youtube.com/watch?v=DuTKbaziE9w)
------
<div>
Astonishing: large numbers of empty factory buildings in Dongguan's industrial parks have been demolished.
</div>
| 19.470588 | 78 | 0.712991 | yue_Hant | 0.265304 |
e1a894a3bb580845b7f70ac34a4c36476003ce5e | 414 | md | Markdown | CHANGELOG.md | Dinja1403/VSCode-CocoaPods-Snippets | d3e3a469ec2ceaebbd235a7c4544ecaa6ed10f42 | [
"MIT"
] | 4 | 2018-06-06T06:47:47.000Z | 2019-04-01T12:29:43.000Z | CHANGELOG.md | Dinja1403/VSCode-CocoaPods-Snippets | d3e3a469ec2ceaebbd235a7c4544ecaa6ed10f42 | [
"MIT"
] | null | null | null | CHANGELOG.md | Dinja1403/VSCode-CocoaPods-Snippets | d3e3a469ec2ceaebbd235a7c4544ecaa6ed10f42 | [
"MIT"
] | 3 | 2018-07-27T17:57:44.000Z | 2020-02-15T22:04:10.000Z | # Change Log
## [0.0.1]
- Initial release
## [0.0.2]
- Added `swiftversion` snippet
## [0.0.3]
- Replace icon image
## [0.0.4]
- Fix some errors
## [0.0.5-0.0.7]
- Change displayName
- P.S.: Actually, there are no substantive changes in these versions. Because my misuse caused version number confusion, I am an ODC. :joy:
## [0.0.8]
- Fix resource_bundles format
## [1.0.0]
- Fix default_subspecs format | 13.8 | 136 | 0.65942 | eng_Latn | 0.844597 |
e1a91eea43c310d22a540c217ee17f03e303888c | 7,921 | md | Markdown | READMEs/README.lws_system.md | horchi/libwebsockets | e1a73c42096a9f94617a25440501d7adc4abbd9f | [
"Apache-2.0"
] | 3,539 | 2015-01-02T18:31:36.000Z | 2022-03-30T09:56:47.000Z | READMEs/README.lws_system.md | horchi/libwebsockets | e1a73c42096a9f94617a25440501d7adc4abbd9f | [
"Apache-2.0"
] | 2,393 | 2015-01-06T08:43:52.000Z | 2022-03-31T14:09:09.000Z | READMEs/README.lws_system.md | horchi/libwebsockets | e1a73c42096a9f94617a25440501d7adc4abbd9f | [
"Apache-2.0"
] | 1,483 | 2015-01-04T11:33:23.000Z | 2022-03-30T17:45:00.000Z | # `lws_system`
See `include/libwebsockets/lws-system.h` for function and object prototypes.
## System integration api
`lws_system` allows you to set a `system_ops` struct at context creation time,
which can write up some function callbacks for system integration. The goal
is the user code calls these by getting the ops struct pointer from the
context using `lws_system_get_ops(context)` and so does not spread system
dependencies around the user code, making it directly usable on completely
different platforms.
```
typedef struct lws_system_ops {
int (*reboot)(void);
int (*set_clock)(lws_usec_t us);
int (*attach)(struct lws_context *context, int tsi, lws_attach_cb_t cb,
lws_system_states_t state, void *opaque,
struct lws_attach_item **get);
} lws_system_ops_t;
```
|Item|Meaning|
|---|---|
|`(*reboot)()`|Reboot the system|
|`(*set_clock)()`|Set the system clock|
|`(*attach)()`|Request an event loop callback from another thread context|
### `reboot`
Reboots the device
### `set_clock`
Set the system clock to the given microsecond-resolution Unix time
### `attach`
Request a callback from the event loop from a foreign thread. This is used,
for example, by foreign threads to set up their event loop activity in the
callback and then, e.g., exit once it is done, with the event loop activity
able to continue wholly from the lws event loop thread and stack context.
## Foreign thread `attach` architecture
When lws is started, it should define an `lws_system_ops_t` at context creation
time which defines its `.attach` handler. In the `.attach` handler
implementation, it should perform platform-specific locking around a call to
`__lws_system_attach()`, a public lws api that actually queues the callback
request and does the main work. The platform-specific wrapper is just there to
do the locking so multiple calls from different threads to the `.attach()`
operation can't conflict.
User code can indicate it wants a callback from the lws event loop like this:
```
lws_system_get_ops(context)->attach(context, tsi, cb, state, opaque, NULL)
```
`context` is a pointer to the lws_context, `tsi` is normally 0, `cb` is the user
callback in the form
```
void (*lws_attach_cb_t)(struct lws_context *context, int tsi, void *opaque);
```
`state` is the `lws_system` state we should have reached before performing the
callback (usually, `LWS_SYSTATE_OPERATIONAL`), and `opaque` is a user pointer that
will be passed into the callback.
`cb` will normally want to create scheduled events and set up lws network-related
activity from the event loop thread and stack context.
Once the event loop callback has been booked by calling this api, the thread and
its stack context that booked it may be freed. It will be called back and can
continue operations from the lws event loop thread and stack context. For that
reason, if `opaque` is needed it will usually point to something on the heap,
since the stack context active at the time the callback was booked may be long
dead by the time of the callback.
See ./lib/system/README.md for more details.
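The booking-then-drain flow above can be sketched as a tiny standalone model. It is written in Python for brevity; `attach`, `event_loop_drain` and the queue are illustrative stand-ins, not the lws API, which goes through `lws_system_get_ops(context)->attach()` and `__lws_system_attach()` as described above:

```python
# Standalone toy model of the attach flow; nothing here is the lws API.
pending = []                     # requests queued for the event loop

def attach(cb, opaque):
    """Mock of the platform .attach wrapper: in real code this takes a
    platform lock and calls __lws_system_attach() to queue the request."""
    pending.append((cb, opaque))
    return 0

def event_loop_drain():
    """Mock of the lws event loop performing queued callbacks once the
    requested system state (e.g. OPERATIONAL) has been reached."""
    while pending:
        cb, opaque = pending.pop(0)
        cb(opaque)               # runs in event-loop thread/stack context

# "Foreign thread": book the callback; afterwards its stack may unwind,
# so opaque must point at heap-like storage, not at the booking stack frame.
result = {}
assert attach(lambda o: o.update(ran=True), result) == 0
assert result == {}              # nothing runs until the loop turns

event_loop_drain()               # later, from the lws event loop thread
assert result == {"ran": True}
```

The point of the model is the ordering guarantee: nothing runs at booking time, and the callback later executes entirely in the event loop's thread and stack context, which is why `opaque` usually points into the heap.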
## `lws_system` blobs
"Blobs" are arbitrary binary objects that have a total length. Lws lets you set
them in two ways
- "directly", by pointing to them, which has no heap implication
- "heap", by adding one or more arbitrary chunks to a chained heap object
In the "heap" case, it can be incrementally defined and the blob doesn't all
have to be declared at once.
For read, the same api allows you to read all or part of the blob into a user
buffer.
The following kinds of blob are defined
|Item|Meaning|
|---|---|
|`LWS_SYSBLOB_TYPE_AUTH`|Auth-related blob 1, typically a registration token|
|`LWS_SYSBLOB_TYPE_AUTH + 1`|Auth-related blob 2, typically an auth token|
|`LWS_SYSBLOB_TYPE_CLIENT_CERT_DER`|Client cert public part|
|`LWS_SYSBLOB_TYPE_CLIENT_KEY_DER`|Client cert key part|
|`LWS_SYSBLOB_TYPE_DEVICE_SERIAL`|Arbitrary device serial number|
|`LWS_SYSBLOB_TYPE_DEVICE_FW_VERSION`|Arbitrary firmware version|
|`LWS_SYSBLOB_TYPE_DEVICE_TYPE`|Arbitrary Device Type identifier|
|`LWS_SYSBLOB_TYPE_NTP_SERVER`|String with the ntp server address (defaults to pool.ntp.org)|
### Blob handle api
Returns an object representing the blob for a particular type (listed above)
```
lws_system_blob_t *
lws_system_get_blob(struct lws_context *context, lws_system_blob_item_t type,
int idx);
```
### Blob setting apis
Sets the blob to point at `ptr` with length `len`. No heap allocation is used.
```
void
lws_system_blob_direct_set(lws_system_blob_t *b, const uint8_t *ptr, size_t len);
```
Allocates and copies `len` bytes from `buf` onto the heap and chains them onto
the end of any existing content.
```
int
lws_system_blob_heap_append(lws_system_blob_t *b, const uint8_t *buf, size_t len)
```
Remove any content from the blob, freeing it if it was on the heap
```
void
lws_system_blob_heap_empty(lws_system_blob_t *b)
```
### Blob getting apis
Get the total size of the blob (i.e., if on the heap, the aggregate size of all
the chunks that were appended)
```
size_t
lws_system_blob_get_size(lws_system_blob_t *b)
```
Copy part or all of the blob starting at offset ofs into a user buffer at buf.
`*len` should be the length of the user buffer on entry; on exit it's set to
the used extent of `buf`. This works the same whether the blob is a direct
pointer or on the heap.
```
int
lws_system_blob_get(lws_system_blob_t *b, uint8_t *buf, size_t *len, size_t ofs)
```
If you know that the blob was handled as a single direct pointer, or a single
allocation, you can get a pointer to it without copying using this.
```
int
lws_system_blob_get_single_ptr(lws_system_blob_t *b, const uint8_t **ptr)
```
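The direct and heap behaviours described in this section can be illustrated with a small standalone model, written in Python for brevity. The `Blob` class below is an illustration of the documented semantics, not the lws implementation:

```python
# Toy model of the blob semantics; not the lws implementation.
class Blob:
    def __init__(self):
        self.direct = None       # direct case: a single pointed-to buffer
        self.chunks = []         # heap case: chained appended chunks

    def direct_set(self, buf: bytes):
        self.direct = bytes(buf) # the real api stores pointer + len, no copy
        self.chunks = []

    def heap_append(self, buf: bytes):
        self.chunks.append(bytes(buf))   # copied onto the "heap" chain
        return 0

    def get_size(self) -> int:
        if self.direct is not None:
            return len(self.direct)
        return sum(len(c) for c in self.chunks)  # aggregate of all chunks

    def get(self, length: int, ofs: int) -> bytes:
        whole = self.direct if self.direct is not None else b"".join(self.chunks)
        return whole[ofs:ofs + length]   # reads work the same either way

b = Blob()
b.heap_append(b"hello ")
b.heap_append(b"world")                  # incrementally defined
assert b.get_size() == 11                # aggregate size of both chunks
assert b.get(5, 6) == b"world"           # partial read at an offset

d = Blob()
d.direct_set(b"token")                   # direct case: no chain involved
assert d.get_size() == 5
assert d.get(5, 0) == b"token"
```

Note how `get_size()` aggregates every appended chunk in the heap case, and how `get()` reads identically whichever way the blob was set.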
### Blob destroy api
Deallocates any heap allocation for the blob
```
void
lws_system_blob_destroy(lws_system_blob_t *b)
```
## System state and notifiers
Lws implements a state in the context that reflects the readiness of the system
for various steps leading up to normal operation. By default it acts in a
backwards-compatible way and directly reaches the OPERATIONAL state just after
the context is created.
However other pieces of lws, and user, code may define notification handlers
that get called back when the state changes incrementally, and may veto or delay
the changes until work necessary for the new state has completed asynchronously.
The generic states defined are:
|State|Meaning|
|---|---|
|`LWS_SYSTATE_CONTEXT_CREATED`|The context was just created.|
|`LWS_SYSTATE_INITIALIZED`|The vhost protocols have been initialized|
|`LWS_SYSTATE_IFACE_COLDPLUG`|Existing network interfaces have been iterated|
|`LWS_SYSTATE_DHCP`|Network identity is available|
|`LWS_SYSTATE_TIME_VALID`|The system knows the time|
|`LWS_SYSTATE_POLICY_VALID`|If the system needs information about how to act from the net, it has it|
|`LWS_SYSTATE_REGISTERED`|The device has a registered identity|
|`LWS_SYSTATE_AUTH1`|The device identity has produced a time-limited access token|
|`LWS_SYSTATE_AUTH2`|Optional second access token for different services|
|`LWS_SYSTATE_OPERATIONAL`|The system is ready for user code to work normally|
|`LWS_SYSTATE_POLICY_INVALID`|All connections are being dropped because policy information is changing. It will transition back to `LWS_SYSTATE_INITIALIZED` and onward to `OPERATIONAL` again afterwards with the new policy|
|`LWS_SYSTATE_CONTEXT_DESTROYING`|Context is going down and smd with it|
### Inserting a notifier
You should create an object `lws_system_notify_link_t` in non-const memory and zero it down.
Set the `notify_cb` member and the `name` member and then register it using either
`lws_system_reg_notifier()` or the `.register_notifier_list`
member of the context creation info struct to make sure it will exist early
enough to see all events. The context creation info method takes a list of
pointers to notify_link structs ending with a NULL entry.
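The veto-and-retry behaviour can be sketched with a toy state machine, again in Python for brevity; the class and function names are illustrative, not the lws API:

```python
# Toy model of state progression with a vetoing notifier; not the lws API.
states = ["CONTEXT_CREATED", "INITIALIZED", "TIME_VALID", "OPERATIONAL"]

class Context:
    def __init__(self):
        self.state = 0           # index into states
        self.notifiers = []

    def reg_notifier(self, cb):
        self.notifiers.append(cb)

    def try_advance(self):
        """Step toward OPERATIONAL; a truthy notifier return vetoes."""
        while self.state < len(states) - 1:
            target = states[self.state + 1]
            if any(cb(target) for cb in self.notifiers):
                return False     # vetoed: retried when async work completes
            self.state += 1
        return True

ctx = Context()
time_synced = False
# Veto TIME_VALID until our (pretend) asynchronous NTP query finishes
ctx.reg_notifier(lambda target: target == "TIME_VALID" and not time_synced)

assert not ctx.try_advance()             # held back at the veto point
assert states[ctx.state] == "INITIALIZED"

time_synced = True                       # async work done
assert ctx.try_advance()
assert states[ctx.state] == "OPERATIONAL"
```

A real notifier would, e.g., hold the transition to `LWS_SYSTATE_TIME_VALID` until an NTP result arrives, then allow the state machine to continue toward `OPERATIONAL`.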
| 35.841629 | 223 | 0.774776 | eng_Latn | 0.995718 |
e1a9ec6a691456f28208550bbcdc312e5dc7b723 | 1,708 | md | Markdown | examples/docs/documents/ref.md | tseijp/react-mixing | 7134940c55af74f52c258cf2a567773bc055e640 | [
"MIT"
] | 1 | 2021-06-27T09:23:46.000Z | 2021-06-27T09:23:46.000Z | examples/docs/documents/ref.md | tseijp/react-mixing | 7134940c55af74f52c258cf2a567773bc055e640 | [
"MIT"
] | 4 | 2022-02-14T09:48:16.000Z | 2022-02-27T15:35:54.000Z | examples/docs/documents/ref.md | tseijp/react-mixing | 7134940c55af74f52c258cf2a567773bc055e640 | [
"MIT"
] | null | null | null | ### TODO
```ts
SynthedValue('osc'): {
_node: AudioNode,
_parents: Set<SynthedValue<T>>,
effect (on, to): () => void,
on: () => void,
to: () => void
}
MixingValue (node='osc'): {get: () => value, advance: dt => {}, synth: Synthesis (): {
onValues: node[], // number[],
toValues: node[], // number[]
values: SynthedValue[], // AnimatedValue[]
on: SynthedValue<T>, // T | FluidValue<T>
to: SynthedValue<T>, // T | FluidValue<T>
config: new Config(),
}
<Mixing
on={{x: 'osc', y: MixingValue('gain')}} as {[string]: OnMixingValue}
to={{x: mixing => mixing.y} as {[string]: ToMixingValue}}
>
{mixing => <a.div style={mixing}></a.div>}
</Mixing>
```
### Track
```
[o] ... Artwork
~~~ ... Track Title
KEY SYNC
Semitone Down/Up
Current key
Key variation
Beat Sync
Sync Master
```
### Select
```
HOT CUE
PAD FX
SLICER
BEAT JUMP
BEAT LOOP
KEYBOARD
KEY SHIFT
SEQ. CALL
ACT. CENSR
MEMORY CUE
```
### Top
```
1 ... Auto Beat loop
< > ... Halve/Double the loop
CUE
PLAY/PAUSE
- ... Adjust the playing speed
132 ... Deck Bpm Display
0.0% ... Playing speed display
WIDE ... Tempo Range
+
SLIP
Q
```
### Grid Edit
```
1.1BAS ... set to the nearest Beatgrid point
|
<<< ||| ... shift the whole BeatGrid left
< |||
||| >
|||| >>>
n ... auto gain knob
132 ... a BPM value
>> ||| << ... Shrink Beat intervals
> ||| <
< ||| > ... expand
<< ||| >>
TAP ... BPM with the tapping interval
||| x2 ... double BPM value
||| x1/2 ... halve BPM value
||| | ||| ... make an adjustment on the whole track
||| ||| ... make an adjustment from the current position
```
| 16.910891 | 87 | 0.538056 | yue_Hant | 0.41636 |
e1aaa4606d9acee6a811ada25fc05b36e24aa01d | 2,745 | md | Markdown | README.md | seznam/swift-unisocket | 1785e432fb8497265a38712cdb9584c429ca3f96 | [
"Apache-2.0"
] | 1 | 2019-07-23T23:31:03.000Z | 2019-07-23T23:31:03.000Z | README.md | seznam/swift-unisocket | 1785e432fb8497265a38712cdb9584c429ca3f96 | [
"Apache-2.0"
] | null | null | null | README.md | seznam/swift-unisocket | 1785e432fb8497265a38712cdb9584c429ca3f96 | [
"Apache-2.0"
] | null | null | null | 




# UniSocket
Let your Swift application talk to others via TCP, UDP or Unix sockets.
## Usage
Check if there is sshd running on the system:
```swift
import UniSocket
do {
let socket = try UniSocket(type: .tcp, peer: "localhost", port: 22)
try socket.attach()
let data = try socket.recv()
let string = String(data: data, encoding: .utf8)
print("server responded with:")
print(string)
try socket.close()
} catch UniSocketError.error(let detail) {
print(detail)
}
```
Send an HTTP request and wait for a response of a minimum required size:
```swift
import UniSocket
do {
let socket = try UniSocket(type: .tcp, peer: "2a02:598:2::1053", port: 80)
try socket.attach()
let request = "HEAD / HTTP/1.0\r\n\r\n"
let dataOut = request.data(using: .utf8)
try socket.send(dataOut!)
let dataIn = try socket.recv(min: 16)
let string = String(data: dataIn, encoding: .utf8)
print("server responded with:")
print(string)
try socket.close()
} catch UniSocketError.error(let detail) {
print(detail)
}
```
Query a DNS server over UDP using custom timeout values:
```swift
import UniSocket
import DNS // https://github.com/Bouke/DNS
do {
let timeout: UniSocketTimeout = (connect: 2, read: 2, write: 1)
let socket = try UniSocket(type: .udp, peer: "8.8.8.8", port: 53, timeout: timeout)
try socket.attach() // NOTE: due to .udp, the call doesn't make a connection, just prepares socket and resolves hostname
let request = Message(type: .query, recursionDesired: true, questions: [Question(name: "www.apple.com.", type: .host)])
let requestData = try request.serialize()
try socket.send(requestData)
let responseData = try socket.recv()
let response = try Message.init(deserialize: responseData)
print("server responded with:")
print(response)
try socket.close()
} catch UniSocketError.error(let detail) {
print(detail)
}
```
Check if the local MySQL server is running:
```swift
import UniSocket
do {
let socket = try UniSocket(type: .local, peer: "/tmp/mysql.sock")
try socket.attach()
let data = try socket.recv()
print("server responded with:")
print("\(data.map { String(format: "%c", $0) }.joined())")
try socket.close()
} catch UniSocketError.error(let detail) {
print(detail)
}
```
## Credits
Written by [Daniel Fojt](https://github.com/danielfojt/), copyright [Seznam.cz](https://onas.seznam.cz/en/), licensed under the terms of the Apache License 2.0.
| 28.894737 | 160 | 0.715118 | eng_Latn | 0.598702 |
e1aae684a52132faba415df20373fc90c9050535 | 232 | md | Markdown | Operators/Assignment-operator/README.md | tverma332/python3 | 544c4ec9c726c37293c8da5799f50575cc50852d | [
"MIT"
] | 3 | 2022-03-28T09:10:08.000Z | 2022-03-29T10:47:56.000Z | Operators/Assignment-operator/README.md | tverma332/python3 | 544c4ec9c726c37293c8da5799f50575cc50852d | [
"MIT"
] | 1 | 2022-03-27T11:52:58.000Z | 2022-03-27T11:52:58.000Z | Operators/Assignment-operator/README.md | tverma332/python3 | 544c4ec9c726c37293c8da5799f50575cc50852d | [
"MIT"
] | null | null | null | # Assignment Operator
Assignment operators are used in Python to assign values to variables. a = 5 is a simple assignment operator that assigns the value 5 on the right to the variable a on the left.
 | 46.4 | 177 | 0.780172 | eng_Latn | 0.999263 |
e1ac3fd3b2c86d3b920185c84276a9b56b937482 | 373 | md | Markdown | content/posts/the-shape-of-things-to-come.md | proycon/homepage | 6fc736956e9069f82ee4ed276cfea10724300f48 | [
"MIT"
] | null | null | null | content/posts/the-shape-of-things-to-come.md | proycon/homepage | 6fc736956e9069f82ee4ed276cfea10724300f48 | [
"MIT"
] | null | null | null | content/posts/the-shape-of-things-to-come.md | proycon/homepage | 6fc736956e9069f82ee4ed276cfea10724300f48 | [
"MIT"
] | null | null | null | +++
title = "The Shape of Things to Come"
date = 2016-12-30T11:48:55+01:00
description = "A piano music video"
[extra]
music = true
youtube = true
+++
# The Shape of Things to Come
From Battlestar Galactica, Composed by Bear McCreary. My performance is a bit sloppy here and there but I didn't want to do it over again ;)
{{ youtube(id="Of8TGz8L-uE",autoplay=true) }}
| 21.941176 | 140 | 0.710456 | eng_Latn | 0.985135 |
e1ac5926def3cfcd27acc756d245d4133ef6ec24 | 1,065 | md | Markdown | articles/container-service/kubernetes/container-service-scale.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/container-service/kubernetes/container-service-scale.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | articles/container-service/kubernetes/container-service-scale.md | changeworld/azure-docs.pl-pl | f97283ce868106fdb5236557ef827e56b43d803e | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: (PRZESTARZAŁE) Skalowanie klastra usługi kontenera platformy Azure
description: Jak skalować węzły agenta w klastrze dc/os, docker swarm lub Kubernetes w usłudze kontenera azure przy użyciu interfejsu wiersza polecenia platformy Azure lub witryny Azure portal.
author: sauryadas
ms.service: container-service
ms.topic: conceptual
ms.date: 03/01/2017
ms.author: saudas
ms.custom: H1Hack27Feb2017, mvc
ms.openlocfilehash: d53369128a660805df7e144fbec67b1bad787b7b
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 03/27/2020
ms.locfileid: "76275620"
---
# <a name="deprecated-scale-agent-nodes-in-a-container-service-cluster"></a>(DEPRECATED) Scale agent nodes in a Container Service cluster
> [!TIP]
> For an updated version of this article that uses Azure Kubernetes Service, see [Scale an Azure Kubernetes Service (AKS) cluster](../../aks/scale-cluster.md).
[!INCLUDE [container-service-scale.md](../../../includes/container-service-scale.md)]
| 46.304348 | 193 | 0.802817 | pol_Latn | 0.958017 |
e1adb1c35fe2e9123426fb94e2c80aaf870e24e9 | 1,021 | md | Markdown | content/md/work/wonderbly.md | vladstoick/vladstoick.github.io | 7ed4ade6b1556db636dc9c260dca3cdf917b7dde | [
"MIT"
] | null | null | null | content/md/work/wonderbly.md | vladstoick/vladstoick.github.io | 7ed4ade6b1556db636dc9c260dca3cdf917b7dde | [
"MIT"
] | 9 | 2020-12-12T03:16:58.000Z | 2022-02-26T04:30:59.000Z | content/md/work/wonderbly.md | vladstoick/vladstoick.github.io | 7ed4ade6b1556db636dc9c260dca3cdf917b7dde | [
"MIT"
] | null | null | null | ---
title: "wonderbly"
order: 2
---
<WorkTitle
title="Back-end Engineer"
location="Wonderbly (formerly LostMy.Name)"
period="July 2015 - April 2018"
/>
> Part of the core team that looked over the e-commerce engine built in Solidus.
> The team handled the funnel after cart: payments, shipping, tracking shipment
> and building tools for customer support.
>
> - Implemented two new payment methods: Klarna & Braintree Cards
> - Implemented a shipment tracking system that uses Aftership for tracking packages and Twilio for sending messages to customers all over the world in different languages
> - Upgraded the back-end application from Solidus 1.\* & Rails 4 to Solidus 2.\* & Rails 5
> - Contributed multiple PRs to Solidus
> - Implemented a taxation system for Australia & EU
> - Lead various projects for increasing speed of our test suite built on RSpe
<TagList>
<Tag>Ruby on Rails</Tag>
<Tag>Solidus</Tag>
<Tag>E-commerce</Tag>
<Tag>Twilio</Tag>
<Tag>Aftership</Tag>
<Tag>SQL(Postgres)</Tag>
</TagList>
| 31.90625 | 171 | 0.745348 | eng_Latn | 0.984765 |
e1ade95e47e7a75e8001c6e4bde47548315f2559 | 1,682 | md | Markdown | _posts/2016-09-17-Casablanca-Bridal-2101-Satin-A-Line-Wedding-Dress.md | lastgown/lastgown.github.io | f4a71e2a12910bc569ac7e77f819c2be1598d8e6 | [
"MIT"
] | null | null | null | _posts/2016-09-17-Casablanca-Bridal-2101-Satin-A-Line-Wedding-Dress.md | lastgown/lastgown.github.io | f4a71e2a12910bc569ac7e77f819c2be1598d8e6 | [
"MIT"
] | null | null | null | _posts/2016-09-17-Casablanca-Bridal-2101-Satin-A-Line-Wedding-Dress.md | lastgown/lastgown.github.io | f4a71e2a12910bc569ac7e77f819c2be1598d8e6 | [
"MIT"
] | null | null | null | ---
layout: post
date: '2016-09-17'
title: "Casablanca Bridal 2101 Satin A Line Wedding Dress"
category: Casablanca Bridal
tags: [Casablanca Bridal]
---
### Casablanca Bridal 2101 Satin A Line Wedding Dress
Just **$302.99**
###
Coming Soon! This is a Spring 2013 Wedding Dress by Casablanca Bridal. Currently available for order only, NOT AVAILABLE IN STORE.
Textured Satin, strapless A-line gown with a sweetheart neckline. The bodice is pleated with an asymmetrical waist seam. Matching fabric buttons line the back of the gown.
<a href="https://www.eudances.com/en/casablanca-bridal/506-casablanca-bridal-2101-satin-a-line-wedding-dress.html"><img src="//www.eudances.com/1440-thickbox_default/casablanca-bridal-2101-satin-a-line-wedding-dress.jpg" alt="Casablanca Bridal 2101 Satin A Line Wedding Dress" style="width:100%;" /></a>
<!-- break --><a href="https://www.eudances.com/en/casablanca-bridal/506-casablanca-bridal-2101-satin-a-line-wedding-dress.html"><img src="//www.eudances.com/1441-thickbox_default/casablanca-bridal-2101-satin-a-line-wedding-dress.jpg" alt="Casablanca Bridal 2101 Satin A Line Wedding Dress" style="width:100%;" /></a>
<a href="https://www.eudances.com/en/casablanca-bridal/506-casablanca-bridal-2101-satin-a-line-wedding-dress.html"><img src="//www.eudances.com/1439-thickbox_default/casablanca-bridal-2101-satin-a-line-wedding-dress.jpg" alt="Casablanca Bridal 2101 Satin A Line Wedding Dress" style="width:100%;" /></a>
Buy it: [https://www.eudances.com/en/casablanca-bridal/506-casablanca-bridal-2101-satin-a-line-wedding-dress.html](https://www.eudances.com/en/casablanca-bridal/506-casablanca-bridal-2101-satin-a-line-wedding-dress.html)
| 84.1 | 317 | 0.771106 | eng_Latn | 0.17237 |
e1ae3029dd7641bbe42d939bb2abd718fda85f32 | 158 | md | Markdown | site/content/newsletter/newsl.md | baycreekmadison/bcna_website_2 | 3bd0942ee41806d8610b69b6697b25f33667030b | [
"MIT"
] | null | null | null | site/content/newsletter/newsl.md | baycreekmadison/bcna_website_2 | 3bd0942ee41806d8610b69b6697b25f33667030b | [
"MIT"
] | null | null | null | site/content/newsletter/newsl.md | baycreekmadison/bcna_website_2 | 3bd0942ee41806d8610b69b6697b25f33667030b | [
"MIT"
] | null | null | null | ---
date: 2018-04-04T15:04:10.000Z
Filename: /newsletter/blah.pdf
Description: the shining, garage sales, new elections
---
I got your content right _here_
| 17.555556 | 53 | 0.746835 | eng_Latn | 0.714504 |
e1aef87e1fb2df2a099d9153ab8fb1a4e0620abc | 536 | md | Markdown | docs/anti-censorship/VPN/Star_VPN.md | N-628/ggame | 86af396a61dee72139c4f7df842149b59577ef78 | [
"MIT"
] | null | null | null | docs/anti-censorship/VPN/Star_VPN.md | N-628/ggame | 86af396a61dee72139c4f7df842149b59577ef78 | [
"MIT"
] | null | null | null | docs/anti-censorship/VPN/Star_VPN.md | N-628/ggame | 86af396a61dee72139c4f7df842149b59577ef78 | [
"MIT"
] | null | null | null | ---
title: Star VPN
description:
published: true
date: 2021-10-10T14:36:21.195Z
tags:
- Proxy
editor: markdown
dateCreated: 2021-10-10T14:36:21.195Z
---
## Removed from the mainland China App Store
On July 29, 2017, the official Star VPN Twitter account posted: "We just received notice that @Apple removing all the @VPN apps from the @China app store."[^8984]
[^8984]: [Star VPN sur Twitter : "We just received notice that @Apple removing all the @VPN apps from the @China app store."](https://web.archive.org/web/20210929125759/https://twitter.com/star_vpn/status/891191888547651584)
| 31.529412 | 224 | 0.744403 | eng_Latn | 0.371385 |
e1af41c60a0a23e3e4a18fbee483ff32e5d5a2a4 | 4,323 | md | Markdown | README.md | NKI-CCB/imagene-analysis | 22ddd9b3ca7368f2d767330a541057a3db5df0ca | [
"MIT"
] | 4 | 2021-03-21T15:51:06.000Z | 2021-08-20T17:25:19.000Z | README.md | NKI-CCB/imagene-analysis | 22ddd9b3ca7368f2d767330a541057a3db5df0ca | [
"MIT"
] | null | null | null | README.md | NKI-CCB/imagene-analysis | 22ddd9b3ca7368f2d767330a541057a3db5df0ca | [
"MIT"
] | 1 | 2022-03-25T12:58:44.000Z | 2022-03-25T12:58:44.000Z | Integration of MRI and RNAseq data in the Imagene Project
=========================================================
Integration of MRI and RNAseq data of the Imagene project. Paper published in Radiology: "Radiogenomic analysis of breast cancer by linking MRI phenotypes with tumor gene expression." https://doi.org/10.1148/radiol.2020191453.
To make a virtual environment for R and Python usage:
```sh
make requirements
```
Then you can process the data, train models and run analyses:
```sh
make all
```
Remote files
------------
Data is not included in this repository. The download method and location of
the data can be specified in the `config/snakemake.yaml` file.
### Pathway Analysis Requirements ###
The pathway analyses requires the msigdb files release 5.2. These cannot be
downloaded automatically, as registration is required. Before running the
pathway analysis, download `msigdb_v5.2_files_to_download_locally.zip`
from the [msigdb site](http://software.broadinstitute.org/gsea/downloads.jsp#msigdb)
under Archived Releases and place it into `data/external/msigdb`. The scripts
will extract the required files from the archive.
Project Organization
------------
├── LICENSE
│
├── Makefile <- Makefile with commands like `make all` and `make venv`.
│
├── Snakefile <- Snakemake file with rules to run this project. These rules
│ should be run inside a virtual environment containing the packages
    │         in requirements.txt (Python) and requirements.R (R). The Makefile can
│ set this environment up on some systems.
│
├── README.md <- The top-level README for developers using this project.
│
├── docs <- Project Documentation using Sphinx.
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── config/snakemake.yaml <- Configuration of the workflow. Set version of Python and R used here.
│
├── data
│ ├── raw <- The original, immutable data dump.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ ├── external <- Data from third party sources, such as reference data.
│ └── to_share <- Data for third parties.
│
├── notebooks <- Jupyter notebooks for exploration. Naming convention is date (for
│ ordering), the creator's initials, and a short `-` delimited
│ description, e.g. `2017-01-01-tb-initial-data-exploration`.
│
├── models <- Trained models and model predictions.
│
├── analyses <- Results and summaries of statistical analyses.
│
├── reports <- Analysis reports, such as pweave or rmarkdown reports.
│ └── figures <- Generated graphics and figures to be used in reporting.
├── figures <- Figures for in the manuscript.
│
├── venv/ <- Suggested directory for virtual environment
│
├── requirements.txt <- The Python requirements file for reproducing the analysis
│ environment e.g. generated with `pip freeze > requirements.txt`.
├── requirements.R <- Requirements and installation script for R.
│
└── src <- Source code for use in this project.
│
├── plot.py <- Plotting library.
├── util.py <- Utility function library.
│
├── analysis <- Scripts for statistical analysis of data or models.
│
├── data <- Scripts to download or generate data.
│
├── features <- Scripts to construct features from data.
│
├── models <- Scripts to train and apply models.
│
├── reports <- Source of reports, such as snippets used in multiple reports.
│
└── visualization <- Scripts to visualize results.
--------
<p><small>Project based on the <a target="_blank" href="https://drivendata.github.io/cookiecutter-data-science/">cookiecutter data science project template</a>. #cookiecutterdatascience</small></p>
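The environment-setup pieces referenced at the top of the tree (`requirements.txt`, `requirements.R`, `venv/`, and the Makefile) might be tied together like this — a sketch only, since the actual Makefile is not shown here, and the target names are assumptions:

```make
# Sketch: reproduce the analysis environment described in the tree above.
.PHONY: environment

environment: venv/bin/activate
	Rscript requirements.R        # install the R packages

venv/bin/activate: requirements.txt
	python3 -m venv venv          # create the suggested venv/ directory
	venv/bin/pip install -r requirements.txt
	touch venv/bin/activate
```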
# Storybook Design System Documentation
(with **React**)
This project aims to fully understand the capabilities and limitations of Storybook.js and what it can provide for both designers and developers alike.
# The Why
> Storybook is a tool for UI development. It makes development faster and easier by isolating components. This allows you to work on one component at a time. You can develop entire UIs without needing to start up a complex dev stack, force certain data into your database, or navigate around your application.
Storybook has been a great addition to the teams I've worked for in professional settings. Many times have I been asked _"How do I implement XYZ?"_ or _"Does ABC support this other use case?"_. Having the ability to interact in a live demo-like setting, even at the isolated component level, has propelled us leaps and bounds ahead of where we were even a few years ago. Providing visual testing, component integration testing, and even some contextual/conditional render testing allows my frontend software engineers to take a back seat and let the design system do the work of defining the rails. Documentation is key to making developers' lives easier - and Storybook is the solution right now.
> Documentation is crucial for design system adoption. It consolidates usage guidelines to help developers, designers, PMs and other stakeholders ship predictable UIs.
# The How
Whatever is being built should have a proper design system behind it to keep a consolidated look and feel. Many times have I come across a professional setting where teams were developing in silos or teams found disparate 3rd-party solutions. External solutions are fine - don't get me wrong - but there should be a single source of truth if everyone is going to be working toward the same goal.
> Storybook helps you **document** components for reuse and automatically **visually test** your components to prevent bugs. Extend Storybook with an ecosystem of **addons** that help you do things like fine-tune responsive layouts or verify accessibility.
> Storybook integrates with most popular JavaScript UI frameworks and (experimentally) supports server-rendered component frameworks
Components allow us to consolidate in a way that's reusable and expandable to future use cases. There are four categories[<sup>1</sup>](https://medium.com/eightshapes-llc/documenting-components-9fe59b80c015) into which a component can be documented:
1. **Description** - details exactly what a component is for
1. **Examples** - coded references for different variants
1. **Design reference** - guides for when to use components
1. **Code reference** - details the inner workings and expectations
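The first two categories map directly onto Storybook's Component Story Format (CSF). A minimal, self-contained sketch — `Button` is a stand-in function rather than a real React component so the file runs anywhere, and all names and args are assumptions, not taken from this project:

```javascript
// Sketch of a CSF file covering categories 1 and 2 above.

function Button({ label = 'Click me', variant = 'primary' } = {}) {
  return { type: 'button', label, variant }; // placeholder render output
}

// 1. Description: the default export tells Storybook (and readers) what the
//    component is for; addon-docs renders this into a docs page.
const meta = {
  title: 'Design System/Button',
  component: Button,
  parameters: {
    docs: { description: { component: 'Triggers the primary action in a view.' } },
  },
};

// 2. Examples: each named export is a coded reference for one variant.
const Primary = { args: { label: 'Save', variant: 'primary' } };
const Danger = { args: { label: 'Delete', variant: 'danger' } };

module.exports = { default: meta, Button, Primary, Danger };
```

The design and code references (categories 3 and 4) then live alongside these stories as docs pages and prop tables.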
Whichever UI framework is used, you can add Storybook on top of it (some frameworks already ship with it out of the box, which is great). There are times, however, when the default Storybook does not provide _all_ of the solutions natively. This is where addons come in!
# Addons
The Docs addon is critical for helping developers, especially those in junior roles aspiring to get their feet wet in the visual side of frontend engineering.
The addons[<sup>2</sup>](https://github.com/storybookjs/storybook/#addons) this project aims to use are:
- **a11y**: Accessibility testing and awareness (for developer ease-of-use)
- **docs**: Higher quality documentation (for developer awareness)
- **links**: Links between stories (for integration testing)
- **storyshots**: Snapshot testing (for backwards compatibility testing)
- **measure**: Inspect layouts to conform to Design Tools (e.g., Figma, Framer, Adobe XD)
- among others from default `npx sb init`...
# ostep
Homework from the OSTEP book https://pages.cs.wisc.edu/~remzi/OSTEP/
# Matter In Motion. MessagePack serializer
[](https://www.npmjs.com/package/mm-serializer-msgpack)
[](https://www.npmjs.com/package/mm-serializer-msgpack)
## Usage
[MessagePack](https://msgpack.org)
[Extensions installation instructions](https://github.com/matter-in-motion/mm/blob/master/docs/extensions.md)
[Serializer usage](https://github.com/matter-in-motion/mm/blob/master/docs/serializers.md)
License: MIT
© velocityzen
---
layout: app
permalink: /MoonPlayer/
description: Video player for playing and downloading online videos from YouTube, Youku etc.
license: GPL-3.0
icons:
- MoonPlayer/icons/128x128/com.github.coslyk.MoonPlayer.png
screenshots:
- https://github.com/coslyk/moonplayer/raw/master/screenshots/screenshot.png?raw=true
authors:
- name: coslyk
url: https://github.com/coslyk
links:
- type: GitHub
url: coslyk/AppImageCollection
- type: Download
url: https://github.com/coslyk/AppImageCollection/releases
desktop:
Desktop Entry:
Categories: AudioVideo
Comment[zh_CN]: 播放本地及网络视频
Comment: Play local or online videos
Exec: moonplayer %U
GenericName[zh_CN]: 视频播放器
GenericName: Media Player
Icon: com.github.coslyk.MoonPlayer
MimeType: video/x-theora+ogg
Name[zh_CN]: MoonPlayer
Name: MoonPlayer
StartupNotify: true
Terminal: false
Type: Application
X-KDE-SubstituteUID: false
X-AppImage-Version: 2.7.glibc2.17
AppImageHub:
X-AppImage-Signature: no valid OpenPGP data found. the signature could not be verified.
Please remember that the signature file (.sig or .asc) should be the first file
given on the command line.
X-AppImage-Type: 2
X-AppImage-Architecture: x86_64
appdata:
Type: desktop-application
ID: com.github.coslyk.MoonPlayer
Name:
C: MoonPlayer
zh-CN: MoonPlayer
Summary:
C: Video player for playing and downloading online videos from YouTube, Youku etc.
zh-CN: 可以在线播放或下载来自YouTube、Youku等网站的视频的播放器。
Description:
C: >-
      <p>MoonPlayer is an interesting player that lets you enjoy videos. It can play the video online, download it or just
      open the local videos.</p>
zh-CN: >-
<p>Moon Player 是一个有趣的播放器,让您轻松享受观看视频的乐趣。配合浏览器扩展,可以轻松在线观看或下载来自Youtube、B站、优酷等诸多网站的视频。</p>
ProjectGroup: coslyk
ProjectLicense: GPL-3.0
Url:
homepage: https://github.com/coslyk/moonplayer
bugtracker: https://github.com/coslyk/moonplayer/issues
donation: https://github.com/coslyk/moonplayer/wiki/Contribute
Launchable:
desktop-id:
- com.github.coslyk.MoonPlayer.desktop
Screenshots:
- default: true
thumbnails: []
source-image:
url: https://github.com/coslyk/moonplayer/raw/master/screenshots/screenshot.png?raw=true
lang: C
Releases:
- version: '2.7'
unix-timestamp: 1556409600
- version: '2.6'
unix-timestamp: 1552435200
- version: 2.5.4
unix-timestamp: 1549756800
- version: 2.5.2
unix-timestamp: 1549670400
- version: '2.5'
unix-timestamp: 1548547200
- version: '2.4'
unix-timestamp: 1546560000
- version: '2.3'
unix-timestamp: 1543708800
- version: '2.2'
unix-timestamp: 1542412800
- version: 2.1.5
unix-timestamp: 1541894400
ContentRating:
oars-1.1: {}
---
---
title: 'Text or empty note'
date: 2018-09-30
last_modified_at: 2018-09-30
---
tagged: [[Kombi]], [[Sławomir Łosowski]], [[Grzegorz Skawiński]], [[5-10-15]], [[tvp]]
---
name: Dr. Zuzana Swigonova
title: Principal Investigator
photo: "https://www.biology.pitt.edu/sites/default/files/person-images/Swigonova.jpg"
email: zus3@pitt.edu
profile: biology.pitt.edu/person/zuzana-swigonova
order: 10
---
# DEPRECATED
# webDev for www.matthewharazim.co.uk
## dependencies
* Jekyll
* Hyde
---
title: Discovery with Scopes Sample
ms.date: 03/30/2017
ms.assetid: 6a37a754-6b8c-4ebe-bdf2-d4f0520271d5
ms.openlocfilehash: 8ba5618f472fc8a6e1751776060f99103a67a073
ms.sourcegitcommit: de17a7a0a37042f0d4406f5ae5393531caeb25ba
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 01/24/2020
ms.locfileid: "76728757"
---
# <a name="discovery-with-scopes-sample"></a>Discovery with Scopes Sample
 This sample shows how to use scopes to categorize discoverable endpoints, and how to use <xref:System.ServiceModel.Discovery.DiscoveryClient> to perform an asynchronous search for endpoints. On the service side, it shows how to customize discovery for each endpoint by adding an endpoint discovery behavior and using it both to add a scope to the endpoint and to control whether the endpoint is discoverable. On the client side, the sample goes on to show how clients can create a <xref:System.ServiceModel.Discovery.DiscoveryClient> and refine the search parameters to include scopes by adding scopes to the <xref:System.ServiceModel.Discovery.FindCriteria>. This sample also shows how clients can narrow the responses by adding a termination criterion.
## <a name="service-features"></a>Service Features
 This project shows two service endpoints being added to a <xref:System.ServiceModel.ServiceHost>. Each endpoint has an <xref:System.ServiceModel.Discovery.EndpointDiscoveryBehavior> associated with it. This behavior is used to add URI scopes to both endpoints. The scopes are used to distinguish the endpoints so that clients can refine their search. For the second endpoint, discovery can be disabled by setting the <xref:System.ServiceModel.Discovery.EndpointDiscoveryBehavior.Enabled%2A> property to `false`. This ensures that the discovery metadata associated with that endpoint is not sent as part of any discovery messages.
## <a name="client-features"></a>Client Features
 The `FindCalculatorServiceAddress()` method shows how to use a <xref:System.ServiceModel.Discovery.DiscoveryClient> and pass in a <xref:System.ServiceModel.Discovery.FindCriteria> with two constraints. A scope is added to the criteria, and the <xref:System.ServiceModel.Discovery.FindCriteria.MaxResults%2A> property is set to 1. The scope limits the results to only those services that publish the same scope. Setting <xref:System.ServiceModel.Discovery.FindCriteria.MaxResults%2A> to 1 limits the responses, so that the <xref:System.ServiceModel.Discovery.DiscoveryClient> waits for at most one endpoint. The <xref:System.ServiceModel.Discovery.DiscoveryClient.Find%2A> call is a synchronous operation that blocks the thread until a timeout is reached or an endpoint is found.
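The client-side search just described comes down to only a few calls. A C# sketch follows (the contract type and scope URI are illustrative assumptions rather than the sample's actual values; the types themselves are the `System.ServiceModel.Discovery` API the article references):

```csharp
// Sketch: find at most one ICalculatorService endpoint that publishes a given scope.
FindCriteria criteria = new FindCriteria(typeof(ICalculatorService))
{
    MaxResults = 1,                       // wait for at most one endpoint
    Duration = TimeSpan.FromSeconds(10),  // give up after this timeout
};
criteria.Scopes.Add(new Uri("http://www.example.org/calculator")); // assumed scope

DiscoveryClient discoveryClient = new DiscoveryClient(new UdpDiscoveryEndpoint());
FindResponse findResponse = discoveryClient.Find(criteria); // synchronous, blocks

if (findResponse.Endpoints.Count > 0)
{
    EndpointAddress address = findResponse.Endpoints[0].Address;
    // Use 'address' to construct the typed client channel.
}
```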
### <a name="to-use-this-sample"></a>To use this sample
1. This sample uses HTTP endpoints, and to run it, the appropriate URL ACLs must be added. For more information, see [Configuring HTTP and HTTPS](../feature-details/configuring-http-and-https.md). Running the following command at an elevated prompt should add the appropriate ACLs. You may want to substitute your domain and user name in the following arguments if the command does not work as is: `netsh http add urlacl url=http://+:8000/ user=%DOMAIN%\%UserName%`
2. Build the solution.
3. Run the service executable from the build directory.
4. Run the client executable. Note that the client is able to locate the service.
> [!IMPORTANT]
> The samples may already be installed on your computer. Check for the following (default) directory before continuing.
>
> `<InstallDrive>:\WF_WCF_Samples`
>
> If this directory does not exist, go to [Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) Samples for .NET Framework 4](https://www.microsoft.com/download/details.aspx?id=21459) to download all Windows Communication Foundation (WCF) and [!INCLUDE[wf1](../../../../includes/wf1-md.md)] samples. This sample is located in the following directory.
>
> `<InstallDrive>:\WF_WCF_Samples\WCF\Basic\Discovery\DiscoveryWithScopes`
# stripeapi-group7-boardgame-onlineshop
Begin app
JTWavePulser
Space Broker welcomes contributions through GitHub Pull Requests.
Please make sure that your pull request follows the following guidelines:
- The addition is related to cloud services which are useful in the context of spaceflight.
- You've ensured that your link isn't already on the site, or already suggested in an outstanding or closed pull request.
- Make an individual pull request for each suggestion.
- Additions should be added in alphabetical order in the relevant category.
- Include a description of the resource along with a name and url.
- Check your spelling and grammar.
- Make sure your text editor is set to remove trailing whitespace.
- New categories, or improvements to the existing categories, are encouraged.
Thank you for contributing!
| 46.529412 | 121 | 0.802781 | eng_Latn | 0.999761 |
e1b61cf2171cc5cd28e1d28394bd102a382e8ce1 | 956 | md | Markdown | docs/InlineResponse200.md | suedschwede/superset-swagger-client | 4d0077347ca1dac58d5c8f006060d153d03135e8 | [
"Apache-2.0"
] | 1 | 2021-04-08T02:30:05.000Z | 2021-04-08T02:30:05.000Z | docs/InlineResponse200.md | suedschwede/superset-swagger-client | 4d0077347ca1dac58d5c8f006060d153d03135e8 | [
"Apache-2.0"
] | null | null | null | docs/InlineResponse200.md | suedschwede/superset-swagger-client | 4d0077347ca1dac58d5c8f006060d153d03135e8 | [
"Apache-2.0"
] | null | null | null | # InlineResponse200
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**count** | [**BigDecimal**](BigDecimal.md) | The total record count on the backend | [optional]
**descriptionColumns** | [**InlineResponse200DescriptionColumns**](InlineResponse200DescriptionColumns.md) | | [optional]
**ids** | **List<String>** | A list of item ids, useful when you don't know the column id | [optional]
**labelColumns** | [**InlineResponse200LabelColumns**](InlineResponse200LabelColumns.md) | | [optional]
**listColumns** | **List<String>** | A list of columns | [optional]
**listTitle** | **String** | A title to render. Will be translated by babel | [optional]
**orderColumns** | **List<String>** | A list of allowed columns to sort | [optional]
**result** | [**List<ChartRestApiGetList>**](ChartRestApiGetList.md) | The result from the get list query | [optional]
| 68.285714 | 125 | 0.655858 | eng_Latn | 0.537341 |
e1b62cedd2fac217ef5b3bf34136427119a6d680 | 329 | md | Markdown | README.md | Jusophos/JTWavePulser | 4397905ec879c5306f2e34b5aea48a8f4e759852 | [
"MIT"
] | 5 | 2015-03-25T09:46:31.000Z | 2016-03-20T09:06:28.000Z | README.md | Jusophos/JTWavePulser | 4397905ec879c5306f2e34b5aea48a8f4e759852 | [
"MIT"
] | null | null | null | README.md | Jusophos/JTWavePulser | 4397905ec879c5306f2e34b5aea48a8f4e759852 | [
"MIT"
] | null | null | null | JTWavePulser
============
Enables the possibility to add pulsing wave animations to an view (iOS, Objective-C)
# CHANGES
## Version 0.0.4.2
- Added UIKIT Framework to header file
## Version 0.0.2
- Added the possiblity for instant stop
- Added the property `pulseRingInitialAlpha` to control the initial alpha value of a ring | 25.307692 | 89 | 0.741641 | eng_Latn | 0.903822 |
e1b6547bb77a1bf1f940499640d3065f571db83e | 131 | md | Markdown | src/pages/contact.md | salmanfs815/salmanfs | c2314c7c0364e6c0542199af4d3949233471c8f6 | [
"MIT"
] | 1 | 2018-01-30T02:15:15.000Z | 2018-01-30T02:15:15.000Z | src/pages/contact.md | salmanfs815/salmanfs | c2314c7c0364e6c0542199af4d3949233471c8f6 | [
"MIT"
] | 4 | 2021-03-01T21:21:43.000Z | 2022-02-26T01:59:57.000Z | src/pages/contact.md | salmanfs815/salmanfs | c2314c7c0364e6c0542199af4d3949233471c8f6 | [
"MIT"
] | null | null | null | ---
layout: layouts/contact.njk
title: Contact
permalink: /contact/index.html
---
Wanna get in touch? Just fill out the form below! | 21.833333 | 49 | 0.748092 | eng_Latn | 0.690565 |
e1b663fa7b4520b81ff39851ebd7a31a350e423c | 3,849 | markdown | Markdown | _posts/2012-04-19-intercessions-easter-3b-liturgy.markdown | dan-wtfw/master | 22a60201be3394a4239092bc4dbd6be804833f12 | [
"MIT"
] | null | null | null | _posts/2012-04-19-intercessions-easter-3b-liturgy.markdown | dan-wtfw/master | 22a60201be3394a4239092bc4dbd6be804833f12 | [
"MIT"
] | null | null | null | _posts/2012-04-19-intercessions-easter-3b-liturgy.markdown | dan-wtfw/master | 22a60201be3394a4239092bc4dbd6be804833f12 | [
"MIT"
] | null | null | null | ---
layout: post
title: 'Intercessions: Easter 3B [Liturgy]'
date: '2012-04-19 19:40:00'
---
I think this is fairly straightforward: the “Prayers of the People” for this coming Sunday, based on [Luke 24:36-48](http://bible.oremus.org/?ql=201864257), the gospel lesson for the day. I couldn’t find a litany that matched the themes of the service, so I decided to challenge myself and write this.
I’m not very satisfied with it, even though it turned out better than expected, and even though it’s something of a victory just to be able to make myself write it. For whatever reason, I find it tremendously difficult to put something like this together. It has something to do with creeping perfectionism, I suppose. Anyway, the only way to do it well is to start doing it, and here we are. As always, feedback is welcome.
[]()
Gathered by Christ to be the people of God and witnesses to his resurrection, let us pray.
<div style="text-align: center;">*Silent Prayer*</div>Lord Jesus, you stand among us and greet us with peace. May your spirit and your body bring peace upon your church: around the world and here at Salem, the place of peace. Heal our wounds and make us present to one another. Risen Lord Jesus, **hear our prayer.**
You came to your disciples not as a ghost but as a man of flesh and blood. You stand with the broken bodies of the world. By your love and mercy, make them whole again. We pray especially for… Risen Lord Jesus, **hear our prayer.**
You challenged the disciples’ fear and doubts, asking them to touch and see you. Accept our too-timid faith, but draw it to deeper waters. Reassure our doubts. Push us to greater trust and confidence that you are with us. Risen Lord Jesus, **hear our prayer.**
Your friends greeted you with joy so strong they could scarcely believe their eyes. Show yourself to us and fill us with joy to meet you in one another. Build in us belief that your presence makes a difference in the world. Risen Lord Jesus, **hear our prayer.**
You tell us that the words of Scripture must be fulfilled. Open our eyes, our hearts, that we might understand God’s word. Form us as complete Christians. We pray for our children, who are just learning the faith, and for ourselves, that we might never become lazy or self-satisfied in the pilgrim way. Risen Lord Jesus, **hear our prayer.**
Your fulfillment was to suffer and rise from the dead. Granted power by your death and resurrection, enable us to stand with all who suffer and die. We pray especially for… Risen Lord Jesus, **hear our prayer.**
You sent your disciples to proclaim to all the nations repentance and forgiveness of sins. Keep us mindful of the things that shame us and for which we have found forgiveness in your sight. May we joyfully welcome all sinners to our church as brothers and sisters in Christ. Risen Lord Jesus, **hear our prayer.**
Your instructions were to begin in Jerusalem. Let your good news start here in Salem, with humble and open hearts joined together to be your people in the world. Risen Lord Jesus, **hear our prayer.**
You send us forth as witnesses of your death and resurrection and your continuing presence among us. Keep us mindful of the great cloud of witnesses who have gone before us in the faith: the saints and martyrs of the church, the dead of our own family. Risen Lord Jesus, **hear our prayer.**
Rejoicing in the Jesus’ presence here among us, let us join hands and pray in the words Christ our Savior has taught us:
*Our Father, who art in Heaven,
hallowed be Thy name. Thy kingdom come,
Thy will be done, on earth as it is in Heaven.
Give us this day our daily bread,
and forgive us our debts, as we forgive our debtors.
And lead us not into temptation, but deliver us from evil,
for Thine is the kingdom, and the power, and the glory,
forever and ever. Amen.*
| 83.673913 | 424 | 0.765653 | eng_Latn | 0.999689 |
e1b792db23e6ff3c7b26623d92b9f664999755c7 | 1,646 | md | Markdown | documentation/app/documents/api/apps.md | xnt/Loyalty-Commerce-Platform | 0d9878bc29bae7c42e808b19865f6b91e1a02079 | [
"BSD-3-Clause"
] | 17 | 2015-05-20T22:47:02.000Z | 2020-05-15T10:32:43.000Z | documentation/app/documents/api/apps.md | xnt/Loyalty-Commerce-Platform | 0d9878bc29bae7c42e808b19865f6b91e1a02079 | [
"BSD-3-Clause"
] | 3 | 2020-09-05T02:50:37.000Z | 2021-05-09T09:41:48.000Z | documentation/app/documents/api/apps.md | xnt/Loyalty-Commerce-Platform | 0d9878bc29bae7c42e808b19865f6b91e1a02079 | [
"BSD-3-Clause"
] | 10 | 2015-09-14T06:05:31.000Z | 2020-02-14T18:55:47.000Z | ## Apps
Apps allow you to communicate with one or more loyalty programs. Apps are stored under the `/apps` endpoint. Use your account credentials to create one or more apps and to access your existing apps. Each app has its own set of business rules that determine which loyalty programs it can interact with, what actions it can perform, and how much each action costs.
#### Properties
<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>createdAt</td>
<td>The <a href="http://en.wikipedia.org/wiki/ISO_8601">ISO 8601</a> time when the resource was created.</td>
</tr>
<tr>
<td>description</td>
<td>The description of the app.</td>
</tr>
<tr>
<td>liveCredentials</td>
<td>An array of <a href="#live-credentials">live credential</a> objects that the app can use to access the live environment.</td>
</tr>
<tr>
<td>name</td>
<td>The name of the app.</td>
</tr>
<tr>
<td>sandboxCredentials</td>
<td>An array of <a href="#sandbox-credentials">sandbox credential</a> objects that the app can use to access the sandbox environment.</td>
</tr>
<tr>
<td>type</td>
<td>The type of resource.</td>
</tr>
<tr>
<td>updatedAt</td>
<td>The <a href="http://en.wikipedia.org/wiki/ISO_8601">ISO 8601</a> time when the resource was last updated.</td>
</tr>
</tbody>
</table>
| 27.898305 | 362 | 0.558323 | eng_Latn | 0.938814 |
e1b84b9f90c4ff20d07998babfd42f79da4fc35a | 4,114 | md | Markdown | _archive/2020-03-01-Audio-Diary.md | YitingLiu97/urban-theme | ce748ccc39f7ed4f25f079a2d03b3eaba9c67fea | [
"MIT"
] | null | null | null | _archive/2020-03-01-Audio-Diary.md | YitingLiu97/urban-theme | ce748ccc39f7ed4f25f079a2d03b3eaba9c67fea | [
"MIT"
] | null | null | null | _archive/2020-03-01-Audio-Diary.md | YitingLiu97/urban-theme | ce748ccc39f7ed4f25f079a2d03b3eaba9c67fea | [
"MIT"
] | null | null | null | ---
layout: post
title: "Audio Diary"
date: 2020-03-01
description: Talk about your feelings to get through hard times.
baseurl: archive/
permalink: audio-diary
category: Dev
tags: [web, jekyll]
preview: ../assets/audio-diary/demo.png
---

<!-- {:class="img-responsive"} -->
# Highlights
This project is created during COVID-19 to encourage people to talk about their feelings to stay connected to their inner-selves. I made this using p5.js, and p5.speech library to convert speech to text. It has been my personal audio diary to keep track of my feelings and I hope this would help more people during hard times like this.
**Tools:** front-end,Javascript, HTML5, CSS, Github, Heroku, p5.js, p5.speech
**Time:** Mar. 2019 for COVID-19
# [Play it on your desktop Chrome.](https://audio-diary.herokuapp.com/)
# Problem Statement
**How might we create a welcoming space for people to talk about their feelings during COVID-19?**
## Goal
To build a personal sanctuary for people to talk about their feelings during hard times. COVID-19 pandemic is just one of the hard times we are and will be going through.
## Role
Developer & Designer
# Research
This is a passion project since I want to make people feel happier during this depressing pandemic. People are feeling more sensitive and even refer to their living space as ”house arrest”, “prison”, “solitude”. (from my encounter on social media) I wanted to create a private and safe space for them to talk about their feelings without recording any of the data.
# Background
We are undergoing a difficult time right now in 2020 since the outburst of the COVID-19. The global pandemic is causing so many distress in us. I have been keeping myself busy with movies, books, and cooking while conducting social distancing to prevent the spread of the virus for over two weeks then.
I felt the urge to create something so that people can talk about their feelings.
I hope through this, you can start a conversation with yourself and look within.
- What is something that we can adjust ourselves to?
- What are some of your thoughts in their pure form?
Maybe, this could be your personal sanctuary while exercising social distancing.
# Ideation
The idea of this project came when I was about to write in my journal for the first time in a long time since January 2019. I wanted to write down my self-reflections as I get to spend more time with myself.
When I was about to get my journal, I thought writing takes a long time compared to talking. Why don’t I create an audio diary that I can keep to myself as my journal instead?
So I spent two days writing this program since I knew it would benefit others to get through this hard time. Also, I appreciate talking more because of COVID-19. When I am spending time with myself, I barely talk. When I have zoom meetings, hearing myself talk feels strange. I want to salvage the lost social connections with myself through this audio diary as well. It makes me so happy knowing that this will benefit people alike. The whole globe is a community now and we are all in this together. It is during this moment that I realized how important social interactions are in our lives.
# Development
I used p5.js and p5 speech library to create an audio diary with some visual twist.
For the color palette, I selected the pastel color to create a sense of calm and positivity.

## Interaction Flow
For the user interaction, I created the experience flow as below:

# Testing
- I did several iterations of the pastel color palette as well as the shapes of the generated items.
- Before I introduced the different shapes and fonts, it seemed quite bland to me. I then tweaked around to achieve the final design.
# Solution
[Please open it on your desktop Chrome.](https://audio-diary.herokuapp.com/)
[Check out my detailed documentation here.](https://github.com/YitingLiu97/audio-diary)
| 47.837209 | 594 | 0.775887 | eng_Latn | 0.999478 |
---
title: Honesty Toward God
date: 10/02/2018
---
> <p>Verse of the week</p>
>"But the seed on good soil stands for those with a noble and good heart, who hear the word, retain it, and by persevering produce a crop" (Luke 8:15).
**Introduction**
What does a noble and good heart mean, and how does it show itself? Our modern culture often regards honesty as a vague, relativistic moral principle; most people are dishonest from time to time but consider it acceptable as long as the consequences are not too great. It is also claimed that certain circumstances can justify dishonesty. Truth and honesty always go together. Yet we were not born with an inclination to be honest; it is a learned virtue, and it is fundamental to a steward's moral character.
Good things come from practicing honesty. For example, the worry of being caught in a lie, or of having to explain one away, never arises. Honesty is a valuable personal quality for many reasons, especially in difficult situations where one can easily be tempted toward dishonesty.
This week we will study the spiritual side of honesty as it shows itself in tithing, and consider why the tithe matters so much both for the steward and for stewardship.
**Texts for the week**
* Luke 16:10
* Lev 27:30
* Gen 22:1-12
* Heb 12:2
* Luke 11:42
* Heb 7:2-10
* Neh 13
---
date: '2022-03-05T17:00:00.000Z'
title: 'Kubernetes Security — Control pod to pod communications with Cilium network policies'
ogImageUrl: 'https://docs.cilium.io/en/stable/_images/hubble_sw_service_map.png'
ogSummary: 'Learn how to deploy Cilium with network policies and fix pod communication errors with Hubble'
categories:
- How-To
externalUrl: 'https://medium.com/@charled.breteche/kubernetes-security-control-pod-to-pod-communications-with-cilium-network-policies-d7275b2ed378'
tags:
- Cilium
- Hubble
- Network Policy
---
# Rate
### Introduction
Used for quick rating operations, or to display ratings.
### Installation
``` javascript
import { createApp } from 'vue';
import { Rate } from '@nutui/nutui';
const app = createApp();
app.use(Rate);
```
## Demos
### Basic Usage
```html
<nut-rate
v-model:value="state.val"
>
</nut-rate>
```
### Read-only
```html
<nut-rate
v-model:value="val"
:readOnly="true"
>
</nut-rate>
```
### Bind Events
```html
<nut-rate
@click="onClick"
>
</nut-rate>
```
### Custom Size
```html
<nut-rate
:size="35"
>
</nut-rate>
```
### Custom Icon
```html
<nut-rate
:checkedIcon="`url("data:image/svg+xml, %3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3E%3Cpath fill='rgb(255,0,0)' d='M10 20a10 10 0 1 1 0-20 10 10 0 0 1 0 20zM6.5 9a1.5 1.5 0 1 0 0-3 1.5 1.5 0 0 0 0 3zm7 0a1.5 1.5 0 1 0 0-3 1.5 1.5 0 0 0 0 3zm2.16 3H4.34a6 6 0 0 0 11.32 0z'/%3E%3C/svg%3E")`"
:uncheckedIcon="`url("data:image/svg+xml, %3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'%3E%3Cpath fill='rgb(255,0,0)' d='M10 20a10 10 0 1 1 0-20 10 10 0 0 1 0 20zm0-2a8 8 0 1 0 0-16 8 8 0 0 0 0 16zM6.5 9a1.5 1.5 0 1 1 0-3 1.5 1.5 0 0 1 0 3zm7 0a1.5 1.5 0 1 1 0-3 1.5 1.5 0 0 1 0 3zM7 13h6a1 1 0 0 1 0 2H7a1 1 0 0 1 0-2z'/%3E%3C/svg%3E")`"
></nut-rate>
```
## Prop
| Attribute | Description | Type | Default
| ----- | ----- | ----- | -----
| total | total number of stars | Number | 5
| value | current number of stars; supports two-way binding via v-model | Number | 3
| size | star size | Number | 25
| spacing | spacing between two stars | Number | 20
| readOnly | whether the rate is read-only | Boolean | false
| uncheckedIcon | custom icon (unchecked) | String | -
| checkedIcon | custom icon (checked) | String | -
## Event
| Event | Description | Callback Arguments
|----- | ----- | -----
| click | triggered when a star is clicked | index of the clicked star
# Analysis Assignment
## About this Assignment
Criteria below taken directly from the [course website](https://stat540-ubc.github.io/subpages/assignments.html#analysis-assignment):
This assignment will assess your understanding of the seminar and lecture materials. Start early because this assignment will take time to be completed and perfected. Use the issues in the Discussion repo and the seminar time to ask questions. You will find most of the analysis workflow of the assignment in the seminar materials.
The questions and links to the dataset are available [here](https://github.com/STAT540-UBC/STAT540-UBC.github.io/blob/master/homework/assignment/stat540_analysis_assignment.md).
The Amp Guide
=============
Amp is a non-blocking concurrency framework for PHP applications
**About This Document**
If you're reading The Amp Guide as a github markdown file you should click the link below to view the styled HTML version. The github markdown display is really unflattering and misses out on features like the Table of Contents. Please don't subject your eyes to this monstrosity; click here instead:
[**Show me the Amp Guide the way nature intended!**](http://amphp.github.io/amp/)
**Dependencies**
- PHP 5.5+
Optional PHP extensions may be used to improve performance in production environments and react to process control signals:
- [php-uv](https://github.com/chobie/php-uv) extension for libuv backends
- [pecl/libevent](http://pecl.php.net/package/libevent) for libevent backends ([download Windows .dll](http://windows.php.net/downloads/pecl/releases/libevent/0.0.5/))
**Installation**
```bash
$ git clone https://github.com/amphp/amp.git
$ cd amp
$ composer.phar install
```
**Community**
If you have questions stop by the [amp chat channel](https://gitter.im/amphp/amp) on Gitter.
# Table of Contents
[TOC]
---
# Event Reactor Concepts
## Reactor Implementations
It may surprise people to learn that the PHP standard library already has everything we need to write event-driven and non-blocking applications. We only reach the limits of native PHP's functionality in this area when we ask it to poll several hundred streams for read/write capability at the same time. Even in this case, though, the fault is not with PHP but the underlying system `select()` call which is linear in its performance degradation as load increases.
For performance that scales out to high volume we require more advanced capabilities currently found only in extensions. If you wish to, for example, service 10,000 simultaneous clients in an Amp-backed socket server you would definitely need to use one of the reactors based on a PHP extension. However, if you're using Amp in a strictly local program for non-blocking concurrency or you don't need to handle more than ~100 or so simultaneous clients in a server application the native PHP functionality is perfectly adequate.
Amp currently exposes three separate implementations for its standard `Reactor` interface. Each behaves exactly the same way from an external API perspective. The main differences have to do with underlying performance characteristics. The one capability that the extension-based reactors *do* offer that's unavailable with the native implementation is the ability to watch for process control signals. The current implementations are listed here:
| Class | Extension |
| --------------------- | ----------------------------------------------------- |
| Amp\NativeReactor | n/a |
| Amp\UvReactor | [php-uv](https://github.com/chobie/php-uv) |
| Amp\LibeventReactor | [pecl/libevent](http://pecl.php.net/package/libevent) |
As mentioned, only `UvReactor` and `LibeventReactor` implement the `Amp\SignalReactor` interface to offer cross-operating system signal handling capabilities. At this time use of the `UvReactor` is recommended over `LibeventReactor` as the php-uv extension offers more in the way of tangentially related (but useful) functionality for robust non-blocking applications.
## Reactor == Task Scheduler
The first thing we need to understand to program effectively using an event loop is this:
> *The event reactor is our task scheduler.*
The reactor controls program flow as long as it runs. Once we tell the reactor to run it will
control program flow until the application errors out, has nothing left to do, or is explicitly
stopped. Consider this very simple example:
```php
<?php // be sure to include the autoload.php file
echo "-before run()-\n";
Amp\run(function() {
Amp\repeat(function() { echo "tick\n"; }, $msInterval = 1000);
Amp\once(function() { Amp\stop(); }, $msDelay = 5000);
});
echo "-after stop()-\n";
```
Upon execution of the above example you should see output like this:
```
-before run()-
tick
tick
tick
tick
tick
-after stop()-
```
Hopefully this output demonstrates the concept that what happens inside the event reactor's run loop is like its own separate program. Your script will not continue past the point of `Reactor::run()` unless one of the previously mentioned conditions for stoppage is met.
While an application can and often does take place entirely inside the confines of the run loop, we can also use the reactor to do things like the following example which imposes a short-lived timeout for interactive console input:
```php
<?php
$number = null;
$stdinWatcher = null;
stream_set_blocking(STDIN, false);
echo "Please input a random number: ";
Amp\run(function() use (&$stdinWatcher, &$number) {
$stdinWatcher = Amp\onReadable(STDIN, function() use (&$number) {
$number = fgets(STDIN);
Amp\stop(); // <-- we got what we came for; exit the loop
});
Amp\once(function() {
Amp\stop(); // <-- you took too long; exit the loop
}, $msInterval = 5000);
});
if (is_null($number)) {
echo "You took too long so we chose the number '4' by fair dice roll\n";
} else {
echo "Your number is: ", (int) $number, "\n";
}
Amp\cancel($stdinWatcher); // <-- clean up after ourselves
stream_set_blocking(STDIN, true);
// Continue doing regular synchronous things here.
```
The details of what's happening in this example are unimportant and involve functionality that will be covered later. For now, the takeaway should simply be that it's possible to move in and out of the event loop like a ninja.
## The Universal Reactor
In the above example we use the reactor's procedural API to register stream IO and timer watchers. However, Amp also exposes an object API. Though it almost never makes sense to run multiple event loop instances in a single-threaded process, instantiating `Reactor` objects in your application can make things significantly more testable. Note that the function API uses a single static reactor instance for all operations (universal). Below you'll find the same example from the above section rewritten to use the `Amp\NativeReactor` class.
```php
<?php
$number = null;
$stdinWatcher = null;
stream_set_blocking(STDIN, false);
echo "Please input a random number: ";
$reactor = new Amp\NativeReactor;
$reactor->run(function($reactor) use (&$stdinWatcher, &$number) {
$stdinWatcher = $reactor->onReadable(STDIN, function() use ($reactor, &$number) {
$number = fgets(STDIN);
$reactor->stop();
});
$reactor->once(function(Amp\Reactor $reactor) {
$reactor->stop();
}, $msInterval = 5000);
});
if (is_null($number)) {
echo "You took too long to respond, so we chose '4' by fair dice roll\n";
} else {
echo "Your number is: ", (int) $number, "\n";
}
$reactor->cancel($stdinWatcher); // <-- clean up after ourselves
stream_set_blocking(STDIN, true);
```
Always remember: *bugs arising from the existence of multiple reactor instances are exceedingly difficult to debug.* The reason for this should be relatively clear. It's because running one event
loop will block script execution and prevent others from executing at the same time. This sort of "loop starvation" results in events that inexplicably fail to trigger. You should endeavor to always use the same reactor instance in your application when you instantiate and use the object API. Because the event loop is often a truly global feature of an application the procedural API functions use a static instance to ensure the same `Reactor` is reused. Be careful about instantiating reactors manually and mixing in calls to the function API.
# Controlling the Reactor
## run()
The primary way an application interacts with the event reactor is to schedule events for execution
and then simply let the program run. Once `Reactor::run()` is invoked the event loop will run
indefinitely until there are no watchable timer events, IO streams or signals remaining to watch.
Long-running programs generally execute entirely inside the confines of a single `Reactor::run()`
call.
## tick()
The event loop tick is the basic unit of flow control in a non-blocking application. This method
will execute a single iteration of the event loop before returning. `Reactor::tick()` may be used
inside a custom `while` loop to implement "wait" functionality in concurrency primitives such as
futures and promises.
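
For instance, a blocking wait over a promise might be sketched like this. This is a simplified illustration of the pattern, not Amp's actual `wait()` implementation; it assumes the promise resolves via the same `$reactor` instance being ticked:

```php
<?php
// Tick the loop one iteration at a time until the promise resolves
function waitFor(Amp\Reactor $reactor, Amp\Promise $promise) {
    $isDone = false;
    $error = $result = null;
    $promise->when(function($e = null, $r = null) use (&$isDone, &$error, &$result) {
        $isDone = true;
        $error = $e;
        $result = $r;
    });
    while (!$isDone) {
        $reactor->tick(); // execute a single event loop iteration
    }
    if ($error) {
        throw $error;
    }
    return $result;
}
```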
## stop()
The event reactor loop can be stopped at any time while running. When `Reactor::stop()` is invoked
the reactor loop will return control to the userland script at the end of the current iteration
of the event loop. This method may be used to yield control from the reactor even if events or
watchable IO streams are still pending.
## Timer Watchers
Amp exposes several ways to schedule timer watchers. Let's look at some details for each method ...
### immediately()
- Schedule a callback to execute in the next iteration of the event loop
- This method guarantees a clean call stack to avoid starvation of other events in the
current iteration of the loop if called recursively. An "immediately" callback is *always*
executed in the next tick of the event loop.
- After an "immediately" timer watcher executes it is automatically garbage collected by
the reactor so there is no need for applications to manually cancel the associated watcher ID.
- Like all watchers, "immediately" timers may be disabled and reenabled. If you disable this
watcher between the time you schedule it and the time that it actually runs the reactor *will
not* be able to garbage collect it until it executes. Therefore you must manually cancel an
immediately watcher yourself if it never actually executes to free any associated resources.
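
A minimal sketch of the behavior described above; note the second `echo` runs before the callback because "immediately" really means "in the next tick," not "right now":

```php
<?php
Amp\run(function() {
    echo "before scheduling\n";
    Amp\immediately(function() {
        // Executes in the *next* iteration of the event loop
        echo "next tick\n";
    });
    echo "after scheduling\n"; // prints before "next tick"
});
```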
### once()
- Schedule a callback to execute after a delay of *n* milliseconds
- A "once" watcher is also automatically garbage collected by the reactor after execution and
applications should not manually cancel it unless they wish to discard the watcher entirely
prior to execution.
- A "once" watcher that is disabled has its delay time reset so that the original delay time
starts again from zero once reenabled.
- Like "immediately" watchers, a timer scheduled for one-time execution must be manually
cancelled to free resources if it never runs due to being disabled by the application after
creation.
### repeat()
- Schedule a callback to repeatedly execute every *n* millisconds.
- Unlike one-time watchers, "repeat" timer resources must be explicitly cancelled to free
the associated resources. Failure to free "repeat" watchers once their purpose is fulfilled
will result in memory leaks in your application.
- Like all other watchers, "repeat" timers may be disabled/reenabled at any time.
### at()
- Schedule a callback to execute at a specific time in the future. Future time may either be
an integer unix timestamp or any string parsable by PHP's `strtotime()` function.
- In all other respects "at" watchers are the same as "immediately" and "once" timers.
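
Since "at" watchers have no example elsewhere in this guide, here is a brief sketch. It assumes the function API mirrors `Reactor::at()` with a `(callback, time)` argument order; the string form is anything `strtotime()` can parse:

```php
<?php
Amp\run(function() {
    // Integer unix timestamp form
    Amp\at(function() {
        echo "timestamp reached\n";
    }, time() + 2);

    // strtotime()-parsable string form
    Amp\at(function() {
        echo "it's tomorrow!\n";
    }, "tomorrow 00:00:00");
});
```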
## Stream IO Watchers
Stream watchers are how we know that data exists to read or that write buffers are empty. These
notifications are how we're able to actually *create* things like http servers and asynchronous
database libraries using the event reactor. As such, stream IO watchers form the backbone of all
non-blocking operations with Amp.
There are two classes of IO watcher:
- Readability watchers
- Writability watchers
### onReadable()
Watchers registered via `Reactor::onReadable()` trigger their callbacks in the following situations:
- When data is available to read on the stream under observation
- When the stream is at EOF (for sockets, this means the connection is lost)
A common usage pattern for reacting to readable data looks something like this example:
```php
<?php
define("IO_GRANULARITY", 32768);
function isStreamDead($socket) {
return !is_resource($socket) || @feof($socket);
}
$client->watcherId = Amp\onReadable($client->socket, function() use ($client) {
$newData = @fread($client->socket, IO_GRANULARITY);
if ($newData != "") {
// There was actually data and not an EOF notification. Let's consume it!
parseIncrementalData($client, $newData);
} elseif (isStreamDead($client->socket)) {
// If the read data == "" we need to make sure the stream isn't dead
closeClientAndClearAnyAssociatedResources($client);
}
});
```
In the above example we've done a few very simple things:
- Register a readability watcher for a socket that will trigger our callback when there is
data available to read.
- When we read data from the stream in our triggered callback we pass that to a stateful parser
that does something domain-specific when certain conditions are met.
- If the `fread()` call indicates that the socket connection is dead we clean up any resources
we've allocated for the storage of this stream. This process should always include calling
`Reactor::cancel()` on any reactor watchers we registered in relation to the stream.
### onWritable()
- Streams are essentially *"always"* writable. The only time they aren't is when their
respective write buffers are full.
A common usage pattern for reacting to writability involves initializing a writability watcher without enabling it when a client first connects to a server. Once incomplete writes occur we're then able to "unpause" the write watcher using `Reactor::enable()` until data is fully sent without having to create and cancel new watcher resources on the same stream multiple times.
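
That pattern can be sketched as follows. This assumes the function API accepts the same optional enable-now flag shown in the object API example later in this guide, and that `$socket` is an already-connected non-blocking stream:

```php
<?php
$writeBuffer = "";

// Create the watcher disabled; enable it only while data is buffered
$watcherId = Amp\onWritable($socket, function($reactor, $watcherId, $socket) use (&$writeBuffer) {
    $bytesWritten = @fwrite($socket, $writeBuffer);
    $writeBuffer = (string) substr($writeBuffer, $bytesWritten);
    if ($writeBuffer === "") {
        // Nothing left to send: pause the watcher so it doesn't
        // fire endlessly and max out the CPU
        $reactor->disable($watcherId);
    }
}, $enableNow = false);
```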
## Pausing, Resuming and Cancelling Watchers
All watchers, regardless of type, can be temporarily disabled and enabled in addition to being
cleared via `Reactor::cancel()`. This allows for advanced capabilities such as disabling the acceptance of new socket clients in server applications when simultaneity limits are reached. In general, the performance characteristics of watcher reuse via pause/resume are favorable by comparison to repeatedly cancelling and re-registering watchers.
### disable()
A simple disable example:
```php
<?php
$reactor = new Amp\NativeReactor;
// Register a watcher we'll disable
$watcherIdToDisable = $reactor->once(function() {
echo "I'll never execute in one second because: disable()\n";
}, $msDelay = 1000);
// Register a watcher to perform the disable() operation
$reactor->once(function() use ($watcherIdToDisable, $reactor) {
echo "Disabling WatcherId: ", $watcherIdToDisable, "\n";
$reactor->disable($watcherIdToDisable);
}, $msDelay = 500);
$reactor->run();
```
After our second watcher callback executes the reactor loop exits because there are no longer any enabled watchers registered to process.
### enable()
Using `enable()` is just as simple as the `disable()` example we just saw:
```php
<?php
$reactor = new Amp\NativeReactor;
// Register a watcher
$myWatcherId = $reactor->repeat(function() {
echo "tick\n";
}, $msDelay = 1000);
// Disable the watcher
$reactor->disable($myWatcherId);
// Remember, nothing happens until the reactor runs, so it doesn't matter that we
// previously created and disabled $myWatcherId
$reactor->run(function($reactor) use ($myWatcherId) {
// Immediately enable the watcher when the reactor starts
$reactor->enable($myWatcherId);
// Now that it's enabled we'll see tick output in our console every 1000ms.
});
```
For a slightly more complex use case, let's look at a common scenario where a server might create a write watcher that is initially disabled but subsequently enabled as necessary:
```php
<?php
class Server {
private $reactor;
private $clients = [];
public function __construct(Amp\Reactor $reactor) {
$this->reactor = $reactor;
}
public function startServer() {
// ... server bind and accept logic would exist here
$this->reactor->run();
}
private function onNewClient($sock) {
$socketId = (int) $sock;
$client = new ClientStruct;
$client->socket = $sock;
$readWatcher = $this->reactor->onReadable($sock, function() use ($client) {
$this->onReadable($client);
});
        $writeWatcher = $this->reactor->onWritable($sock, function() use ($client) {
$this->doWrite($client);
}, $enableNow = false); // <-- let's initialize the watcher as "disabled"
$client->readWatcher = $readWatcher;
$client->writeWatcher = $writeWatcher;
$this->clients[$socketId] = $client;
}
// ... other class implementation details here ...
private function writeToClient($client, $data) {
$client->writeBuffer .= $data;
$this->doWrite($client);
}
private function doWrite(ClientStruct $client) {
$bytesToWrite = strlen($client->writeBuffer);
$bytesWritten = @fwrite($client->socket, $client->writeBuffer);
if ($bytesToWrite === $bytesWritten) {
$this->reactor->disable($client->writeWatcher);
} elseif ($bytesWritten >= 0) {
$client->writeBuffer = substr($client->writeBuffer, $bytesWritten);
$this->reactor->enable($client->writeWatcher);
} elseif ($this->isSocketDead($client->socket)) {
$this->unloadClient($client);
}
}
// ... other class implementation details here ...
}
```
### cancel()
It's important to *always* cancel persistent watchers once you're finished with them or you'll create memory leaks in your application. This functionality works in exactly the same way as the above enable/disable examples:
```php
<?php
Amp\run(function() {
$myWatcherId = Amp\repeat(function() {
echo "tick\n";
}, $msInterval = 1000);
// Cancel $myWatcherId in five seconds and exit the reactor loop
Amp\once(function() use ($myWatcherId) {
Amp\cancel($myWatcherId);
}, $msDelay = 5000);
});
```
## Process Signal Watchers
The `Amp\SignalReactor` extends the base reactor interface to expose an API for handling process
control signals in your application like any other event. Simply use a compatible event reactor
implementation (`UvReactor` or `LibeventReactor`, preferably the former) and interact with its
`SignalReactor::onSignal()` method. Consider:
```php
<?php
(new Amp\UvReactor)->run(function($reactor) {
// Let's tick off output once per second so we can see activity.
$reactor->repeat(function() {
echo "tick: ", date('c'), "\n";
}, $msInterval = 1000);
// What to do when a SIGINT signal is received
$watcherId = $reactor->onSignal(UV::SIGINT, function() {
echo "Caught SIGINT! exiting ...\n";
exit;
});
});
```
As should be clear from the above example, signal watchers may be enabled, disabled and cancelled like any other event.
## Reactor Addenda
### Callback Invocation Parameters
All watcher callbacks are invoked using the same standardized parameter order:
| Watcher Type | Callback Signature |
| --------------------- | --------------------------------------------------|
| immediately() | `function(Reactor $reactor, $watcherId)` |
| once() | `function(Reactor $reactor, $watcherId)` |
| repeat() | `function(Reactor $reactor, $watcherId)` |
| at() | `function(Reactor $reactor, $watcherId)` |
| watchStream() | `function(Reactor $reactor, $watcherId, $stream)` |
| onReadable() | `function(Reactor $reactor, $watcherId, $stream)` |
| onWritable() | `function(Reactor $reactor, $watcherId, $stream)` |
| onSignal() | `function(Reactor $reactor, $watcherId, $signo)` |
### Watcher Cancellation Safety
It is always safe to cancel a watcher from within its own callback. For example:
```php
<?php
$increment = 0;
Amp\repeat(function($reactor, $watcherId) use (&$increment) {
echo "tick\n";
if (++$increment >= 3) {
$reactor->cancel($watcherId); // <-- cancel myself!
}
}, $msDelay = 50);
```
### An Important Note on Writability
Because streams are essentially *"always"* writable you should only enable writability watchers
while you have data to send. If you leave these watchers enabled when your application doesn't have
anything to write the watcher will trigger endlessly until disabled or cancelled. This will max out
your CPU. If you're seeing inexplicably high CPU usage in your application it's a good bet you've
got a writability watcher that you failed to disable or cancel after you were finished with it.
### Process Signal Number Availability
Using the `SignalReactor` interface is relatively straightforward with the php-uv extension because
it exposes `UV::SIG*` constants for watchable signals. Applications using the `LibeventReactor` to
will need to manually specify the appropriate integer signal numbers when registering signal watchers.
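
For example, a libevent-backed application might watch for SIGTERM using its raw number. Treat the literal below as an assumption (SIGTERM is 15 on most POSIX systems); prefer ext/pcntl's `SIGTERM` constant when that extension is available:

```php
<?php
(new Amp\LibeventReactor)->run(function($reactor) {
    // Raw signal number instead of a UV::SIG* constant
    $reactor->onSignal($signo = 15, function() {
        echo "Caught SIGTERM, shutting down ...\n";
        exit;
    });
});
```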
[libevent]: http://pecl.php.net/package/libevent "libevent"
[win-libevent]: http://windows.php.net/downloads/pecl/releases/ "Windows libevent DLLs"
# Managing Concurrency
The weak link when managing concurrency is humans; we simply don't think asynchronously or
in parallel. Instead, we're really good at doing one thing at a time, in order, and the world
around us generally fits this model. So to effectively design for concurrent processing in our code
we have a couple of options:
1. Get smarter (not particularly feasible);
2. Abstract concurrent task execution to make it feel synchronous.
## Promises
The basic unit of concurrency in an Amp application is the `Amp\Promise`. These objects should be thought of as "placeholders" for values or tasks that aren't yet complete. By using placeholders we're able to reason about the results of concurrent operations as if they were already complete variables.
> **NOTE**
>
> Amp promises do *not* conform to the "Thenables" abstraction common in JavaScript promise implementations. It is this author's opinion that chaining .then() calls is a clunky way to avoid callback hell with awkward error handling results. Instead, Amp utilizes PHP 5.5's generator functionality to accomplish the same thing in a more performant way with superior error handling capabilities.
### The Promise API
```php
interface Promise {
public function when(callable $func);
public function watch(callable $func);
}
```
In its simplest form the `Amp\Promise` aggregates callbacks for dealing with computational results once they eventually resolve. While most code will not interact with this API directly thanks to the magic of [Generators](#generators), let's take a quick look at the two simple API methods exposed on `Amp\Promise` implementations:
| Method | Callback Signature |
| --------------------- | --------------------------------------------------|
| void when(callable) | `function(Exception $error = null, $result = null)` |
| void watch(callable) | `function($data)` |
### when()
`Amp\Promise::when()` accepts an error-first callback. This callback is responsible for reacting to the eventual result of the computation represented by the promise placeholder. For example:
```php
<?php
$promise = someAsyncFunctionThatReturnsAPromise();
$promise->when(function(Exception $error = null, $result = null) {
if ($error) {
printf(
"Something went wrong:\n%s\n",
$e->getMessage()
);
} else {
printf(
"Hurray! Our result is:\n%s\n",
print_r($result, true)
);
}
});
```
Those familiar with JavaScript will likely observe that the above interface quickly devolves into ["callback hell"](http://callbackhell.com/), and they're correct. We will shortly see how to avoid this problem in the [Generators](#generators) section.
### watch()
`Amp\Promise::watch()` affords promise-producers ([Promisors](#promisors)) the ability to broadcast progress updates while a placeholder value resolves. Whether or not to actually send progress updates is left to individual libraries, but the functionality is available should applications require it. A simple example:
```php
<?php
$promise = someAsyncFunctionWithProgressUpdates();
$promise->watch(function($update) {
printf(
"Woot, we got an update of some kind:\n%s\n",
print_r($update, true)
);
});
```
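
For completeness, here is a sketch of the producer side: a promisor feeding those `watch()` callbacks via its `update()` method before finally resolving the promise:

```php
<?php
function downloadWithProgress() {
    $promisor = new Amp\Future;
    $percent = 0;
    Amp\repeat(function($reactor, $watcherId) use ($promisor, &$percent) {
        $percent += 25;
        $promisor->update($percent);    // triggers watch() callbacks
        if ($percent >= 100) {
            $reactor->cancel($watcherId);
            $promisor->succeed("done"); // triggers when() callbacks
        }
    }, $msInterval = 250);

    return $promisor->promise();
}
```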
## Promisors
`Amp\Promisor` is the abstraction responsible for resolving future values once they become available. A library that resolves values asynchronously creates an `Amp\Promisor` and uses it to return an `Amp\Promise` to API consumers. Once the async library determines that the value is ready it resolves the promise held by the API consumer using methods on the linked promisor.
### The Promisor API
```php
interface Promisor {
public function promise();
public function update($progress);
public function succeed($result = null);
public function fail(\Exception $error);
}
```
Amp provides two base implementations for async value promisors: `Amp\Future` and `Amp\PrivateFuture`.
### Future Promisor
The standard `Amp\Future` is the more performant option of the two default `Amp\Promisor` implementations. It acts both as promisor and promise to minimize the number of new object/closure instantiations needed to resolve an async value. The drawback to this approach is that any code with a reference to the `Future` promisor can resolve the associated Promise.
### PrivateFuture Promisor
The `Amp\PrivateFuture` is more concerned with code safety than performance. It *does not* act as its own promise. Only code with a reference to a `PrivateFuture` instance can resolve its associated promise.
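
A brief sketch of the distinction:

```php
<?php
$promisor = new Amp\PrivateFuture;
$promise  = $promisor->promise();

// $promise and $promisor are distinct objects here: code holding only
// $promise cannot call succeed()/fail(), whereas an Amp\Future acts as
// its own promise and any holder could resolve it.
$promisor->succeed(42);
```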
### Promisor Example
Here's a simple example of an async value producer `asyncMultiply()` creating a promisor and returning the associated promise to its API consumer. Note that the code below would work exactly the same had we used a `PrivateFuture` as our promisor instead of the `Future` employed below.
```php
<?php // Example async producer using promisor
function asyncMultiply($x, $y) {
// Create a new promisor
$promisor = new Amp\Future;
// Resolve the async result one second from now
Amp\once(function() use ($promisor, $x, $y) {
$promisor->succeed($x * $y);
}, $msDelay = 1000);
return $promisor->promise();
}
$promise = asyncMultiply(6, 7);
$result = Amp\wait($promise);
var_dump($result); // int(42)
```
## Combinators
### all()
The `all()` functor combines an array of promise objects into a single promise that will resolve
when all promises in the group resolve. If any one of the `Amp\Promise` instances fails the
combinator's `Promise` will fail. Otherwise the resulting `Promise` succeeds with an array matching
keys from the input array to their resolved values.
The `all()` combinator is extremely powerful because it allows us to execute many asynchronous
operations concurrently. Let's look at a simple example using the Amp HTTP client
([artax](https://github.com/amphp/artax)) to retrieve multiple HTTP resources concurrently ...
```php
<?php
use function Amp\run;
use function Amp\all;
run(function() {
$httpClient = new Amp\Artax\Client;
$promiseArray = $httpClient->requestMulti([
"google" => "http://www.google.com",
"news" => "http://news.google.com",
"bing" => "http://www.bing.com",
"yahoo" => "https://www.yahoo.com",
]);
try {
// magic combinator sauce to flatten the promise
// array into a single promise
$responses = (yield all($promiseArray));
foreach ($responses as $key => $response) {
printf(
"%s | HTTP/%s %d %s\n",
$key,
$response->getProtocol(),
$response->getStatus(),
$response->getReason()
);
}
} catch (Exception $e) {
// If any one of the requests fails the combo
// promise returned by Amp\all() will fail and
// be thrown back into our generator here.
echo $e->getMessage(), "\n";
}
Amp\stop();
});
```
### some()
The `some()` functor is the same as `all()` except that it tolerates individual failures. As long
as at least one promise in the passed array succeeds, the combined promise will succeed. The successful
resolution value is an array of the form `[$arrayOfErrors, $arrayOfSuccesses]`. The individual keys
in the component arrays are preserved from the promise array passed to the functor for evaluation.
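A brief sketch of `some()` in action inside a coroutine; the input keys and values here are arbitrary:

```php
<?php
use function Amp\run;
use function Amp\some;

run(function() {
    list($errors, $successes) = (yield some([
        "a" => new Amp\Success(42),
        "b" => new Amp\Failure(new Exception("oops")),
    ]));
    // $successes["a"] === 42 and $errors["b"] holds the Exception,
    // yet the combined promise itself still succeeds.
    Amp\stop();
});
```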
### first()
Resolves with the first successful result. The resulting Promise will only fail if all
promises in the group fail or if the promise array is empty.
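For example, here's a sketch racing two hypothetical promise-returning operations; only the fastest successful result is kept:

```php
<?php
use function Amp\run;
use function Amp\first;

run(function() {
    $fastest = (yield first([
        fetchFromMirrorOne(),  // hypothetical async calls, each
        fetchFromMirrorTwo(),  // returning an Amp\Promise
    ]));
    // $fastest is whichever result resolved successfully first
    Amp\stop();
});
```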
### map()
Maps eventual promise results using the specified callable.
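A minimal sketch, assuming `map()` accepts the promise array followed by the mapping callable and applies it to each eventual result:

```php
<?php
use function Amp\map;

$mapped = map([
    "a" => new Amp\Success(1),
    "b" => new Amp\Success(2),
], function($value) {
    return $value * 10;
});
// $mapped is a promise resolving to ["a" => 10, "b" => 20]
```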
### filter()
Filters eventual promise results using the specified callable.
If the functor returns a truthy value the resolved promise result is retained, otherwise it is
discarded. Array keys are retained for any results not filtered out by the functor.
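A minimal sketch, assuming `filter()` accepts the promise array followed by the filtering callable:

```php
<?php
use function Amp\filter;

$filtered = filter([
    "a" => new Amp\Success(0),
    "b" => new Amp\Success(5),
], function($value) {
    return $value > 0; // keep only truthy functor results
});
// $filtered is a promise resolving to ["b" => 5]
```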
## Generators
The addition of Generators in PHP 5.5 trivializes synchronization and error handling in async contexts. The Amp event reactor builds in co-routine support for all reactor callbacks so we can use the `yield` keyword to make async code feel synchronous. Let's look at a simple example executing inside the event reactor run loop:
```php
<?php
use function Amp\run;
function asyncMultiply($x, $y) {
// Pause this function's execution for 100ms without
// blocking the application's event loop.
yield "pause" => 100;
// The final value yielded by a generator with the "return"
// key is used as the "return value" for that coroutine
yield "return" => ($x * $y);
}
run(function() {
try {
// Yield control until the generator resolves.
$result = (yield asyncMultiply(2, 21));
var_dump($result); // int(42)
} catch (Exception $e) {
// If promise resolution fails the exception is
// thrown back to us and we handle it as needed.
}
});
```
As you can see in the above example there is no need for callbacks or `.then()` chaining. Instead,
we're able to use `yield` statements to control program flow even when future computational results
are still pending.
> **NOTE**
>
> Any time a generator yields an `Amp\Promise` or a `Generator` there exists the possibility that the associated async operation(s) could fail. When this happens the appropriate exception is thrown back into the calling generator. Applications should generally wrap their promise yields in `try/catch` blocks as an error handling mechanism in this case.
### Coroutine Return Values
The resolution value of any yielded `Generator` is assigned inside that generator using the `"return"` yield key as shown here:
```php
use function Amp\run;
run(function() {
$result = (yield myCoroutine());
var_dump($result); // int(42)
});
function myCoroutine() {
yield "pause" => 100;
$asyncResult = (yield someAsyncCall());
yield "return" => 42; // Assigns the "return" value
}
```
Coroutines may `yield "return"` multiple times if they like. Only the final value yielded with this key is returned from the coroutine in the original calling code. Note that coroutine generators may "return" an unresolved `Promise` and its eventual resolution will be used as the final return value upon completion:
```php
Amp\run(function() {
// will resolve in three seconds with the eventual
// value from myDelayedOperation()
$result = (yield myCoroutine());
var_dump($result); // int(42)
});
function myCoroutine() {
yield "pause" => 1000;
$myDelayedThingPromise = (yield myNestedCoroutine());
yield "return" => $myDelayedThingPromise;
}
function myNestedCoroutine() {
yield "pause" => 1000;
yield "return" => myDelayedThing();
}
function myDelayedThing() {
$promisor = new Amp\PrivateFuture;
Amp\getReactor()->once(function() use ($promisor) {
$promisor->succeed(42);
}, $msDelay = 1000);
return $promisor->promise();
}
```
> **IMPORTANT**
>
> Because yielded generators are implicitly assumed to be co-routines applications **MUST** use the "return" key if they wish to return a generator from a coroutine without it being automatically resolved.
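
A short sketch of the distinction; `innerGenerator()` is a hypothetical generator-returning function:

```php
<?php
function outerCoroutine() {
    // Implicitly yielding innerGenerator() would resolve it
    // as a nested coroutine. Yielding it with the "return" key
    // instead hands the raw Generator back to the caller:
    yield "return" => innerGenerator();
}
```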
## Implicit Yield Behavior
Any value yielded without an associated string yield key is referred to as an "implicit" yield. All implicit yields must be one of the following two types ...
| Yieldable | Description |
| -----------------| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `Amp\Promise` | Any promise instance may be yielded and control will be returned to the generator once the promise resolves. If resolution fails the relevant exception is thrown into the generator and must be handled by the application or it will bubble up. If resolution succeeds the promise's resolved value is sent back into the generator. |
| `Generator` | Any generator instance may be yielded. The resolution value returned to the original yielding generator is the final value yielded from the resolved generator using the `"return"` yield command. If an error occurs while resolving the nested generator it will be thrown back into the original yielding generator. |
> **IMPORTANT**
>
> Any yielded value that is not an `Amp\Promise` or `Generator` will be treated as an **error** and an appropriate exception will be thrown back into the original yielding generator. This strict behavior differs from older versions of the library in which implicit yield values were simply sent back to the yielding generator function.
## Yield Command Reference
| Command | Description |
| ------------- | ---------------------- |
| [pause](#yield-pause) | Pause generator execution for the yielded number of milliseconds |
| [immediately](#yield-immediately) | Resolve the yielded callback on the next iteration of the event loop |
| [once](#yield-once) | Resolve the yielded callback at array index 0 in array index 1 milliseconds |
| [repeat](#yield-repeat) | Repeatedly resolve the yielded callback at array index 0 every array index 1 milliseconds |
| [onreadable](#yield-onreadable) | Resolve the yielded callback at array index 1 when the stream resource at index 0 reports as readable |
| [onwritable](#yield-onwritable) | Resolve the yielded callback at array index 1 when the stream resource at index 0 reports as writable |
| [enable](#yield-enable) | Enable the yielded event watcher ID |
| [disable](#yield-disable) | Disable the yielded event watcher ID |
| [cancel](#yield-cancel) | Cancel the yielded event watcher ID |
| [all](#yield-all) | Flatten the array of promises/generators and return control when all individual elements resolve successfully; fail the result if any individual resolution fails |
| [any](#yield-any) | Flatten the array of promises/generators and return control when all individual elements resolve; never fail the result regardless of component failures |
| [some](#yield-some) | Flatten the array of promises/generators and return control when all individual elements resolve; only fail the result if all components fail |
| [bind](#yield-bind) | Bind a callable to the event reactor so it will be automagically resolved upon invocation |
| [nowait](#yield-nowait) | Don't wait on the yielded promise or generator to resolve before returning control to the generator |
| [async](#yield-async) | Syntactic sugar to make code more self-documenting. May be used only when yielding an `Amp\Promise` instance to await resolution. |
| [coroutine](#yield-coroutine) | Syntactic sugar to make code more self-documenting. May be used only when yielding a `Generator` instance for resolution (co-routine). |
| @ (prefix) | Prefixed to another command to indicate the result should not be waited on before returning control to the generator |
## Yield Command Examples
### yield pause
```php
function() {
// yield control for 100 milliseconds
yield "pause" => 100;
};
```
### yield immediately
```php
function() {
// Execute the specified $function in the next tick of the
// event loop; the ID associated with this watcher is sent
// back to the origin generator.
$function = function($reactor, $watcherId){};
$watcherId = (yield "immediately" => $function);
};
```
### yield once
```php
function() {
// Schedule $function for execution in $msDelay milliseconds;
// the ID associated with this watcher is sent back to the
// origin generator.
$function = function($reactor, $watcherId){};
$msDelay = 100;
$watcherId = (yield "once" => [$function, $msDelay]);
};
```
### yield repeat
```php
function() {
// Schedule $function for execution every $msInterval milliseconds;
// the ID associated with this watcher is sent back to the origin
// generator where it can later be cancelled.
$function = function($reactor, $watcherId){};
$msInterval = 1000;
$watcherId = (yield "repeat" => [$function, $msInterval]);
};
```
### yield onreadable
```php
function() {
// Schedule $function for execution any time $stream has readable data.
$function = function($reactor, $watcherId, $stream){};
$stream = STDIN;
$watcherId = (yield "onReadable" => [$stream, $function]);
// We can also optionally disable stream watchers at registration time:
$enableNow = false;
$watcherId2 = (yield "onReadable" => [$stream, $function, $enableNow]);
};
```
### yield onwritable
```php
function() {
// Schedule $function for execution any time $stream is writable.
$function = function($reactor, $watcherId, $stream){};
$stream = STDOUT;
$watcherId = (yield "onWritable" => [$stream, $function]);
// We can also optionally disable stream watchers at registration time:
$enableNow = false;
$watcherId2 = (yield "onWritable" => [$stream, $function, $enableNow]);
};
```
### yield enable
```php
function() {
$stream = STDOUT;
$function = function($reactor, $watcherId, $stream){};
$enableNow = false;
$watcherId = (yield "onWritable" => [$stream, $function, $enableNow]);
// ... do some stuff ...
// Lets enable the writability watcher now
yield "enable" => $watcherId;
};
```
### yield disable
```php
function() {
$watcherId = (yield "repeat" => [function(){}, 100]);
// ... do some stuff ...
// Disable (but don't cancel) our repeating timer watcher
yield "disable" => $watcherId;
};
```
### yield cancel
```php
function() {
$watcherId = (yield "repeat" => [function(){}, 100]);
// ... do some stuff ...
// Cancel our repeating timer watcher and free any associated resources
yield "cancel" => $watcherId;
};
```
### yield all
```php
function myAsyncThing() {
yield "pause" => 100;
    yield "return" => 44;
}
function() {
// list()
list($a, $b, $c) = (yield "all" => [
42,
new Amp\Success(43),
myAsyncThing(),
]);
var_dump($a, $b, $c); // int(42), int(43), int(44)
// extract()
    extract((yield "all" => [
        "d" => 42,
        "e" => new Amp\Success(43),
        "f" => myAsyncThing(),
    ]));
var_dump($d, $e, $f); // int(42), int(43), int(44)
};
```
### yield any
```php
function myAsyncThing() {
yield "pause" => 100;
    yield "return" => 44;
}
function() {
list($errors, $results) = (yield "any" => [
"a" => 42,
"b" => new Amp\Failure(new Exception("test")),
"c" => myAsyncThing(),
]);
assert($errors["b"] instanceof Exception);
assert($errors["b"]->getMessage() === "test");
assert(isset($results["a"], $results["c"]));
assert($results["a"] === 42);
assert($results["c"] === 44);
};
```
### yield some
```php
function myAsyncThing() {
yield "pause" => 100;
    yield "return" => 44;
}
function() {
list($errors, $results) = (yield "some" => [
"a" => 42,
"b" => new Amp\Failure(new Exception("test")),
"c" => myAsyncThing(),
]);
assert($results["a"] === 42);
assert($errors["b"] instanceof Exception);
assert($results["c"] === 44);
try {
list($errors, $results) = (yield "some" => [
new Amp\Failure(new Exception("ex1")),
new Amp\Failure(new Exception("ex2")),
]);
// You'll never reach this line because both promises failed
} catch (Exception $e) {
var_dump($e->getMessage());
}
};
```
### yield bind
```php
function() {
$repeatWatcherId = (yield "repeat" => [function(){}, 1000]);
$func = function() use ($repeatWatcherId) {
yield "cancel" => $repeatWatcherId;
};
$boundFunc = (yield "bind" => $func);
// Resolved as if we yielded the "cancel" command here
$boundFunc();
};
```
### yield nowait
```php
function myAsyncThing() {
// pause for three seconds
yield "pause" => 3000;
}
function() {
// Don't wait for the async task to complete
$startTime = time();
yield "nowait" => myAsyncThing();
var_dump(time() - $startTime); // int(0)
// Wait for async task completion (normal)
$startTime = time();
yield myAsyncThing();
var_dump(time() - $startTime); // int(3)
};
```
### yield async
```php
function myDelayedOperation() {
// resolve the promise in three seconds
$promisor = new Amp\PrivateFuture;
Amp\getReactor()->once(function() use ($promisor) {
$promisor->succeed(42);
}, $msDelay = 3000);
return $promisor->promise();
}
function() {
// Use the "async" key to document the operation
$result = (yield "async" => myDelayedOperation());
var_dump($result); // int(42)
};
```
### yield coroutine
```php
function myCoroutine($a) {
$b = $a * 2;
// pause for 100 milliseconds because we can
yield "pause" => 100;
$c = $b + 3;
// assign the coroutine's return value
yield "return" => $c;
// replace the return with a different value
yield "return" => 42;
// we can optionally do more work after assigning a return
yield someOtherAsyncOperation();
}
function() {
// Use the "coroutine" key to document the operation
$result = (yield "coroutine" => myCoroutine(5));
var_dump($result); // int(42)
};
```
---
title: Upgrading from SharePoint 2010
ms.author: tracyp
author: MSFTTracyP
manager: laurawi
ms.date: 04/13/2020
audience: ITPro
ms.topic: conceptual
ms.prod: office-online-server
localization_priority: Normal
ms.collection:
- Ent_O365
- SPO_Content
search.appverid:
- MET150
- WSU140
- OSU140
ms.assetid: 985a357f-6db7-401f-bf7a-1bafdf1f312c
f1.keywords:
- NOCSH
description: Find information and resources for upgrading from SharePoint 2010 and SharePoint Server 2010 as both reach end of support on April 13, 2021.
ms.custom: seo-marvel-apr2020
ms.openlocfilehash: 88970c83f2497f029635cb987b6b613ea662dc07
ms.sourcegitcommit: dffb9b72acd2e0bd286ff7e79c251e7ec6e8ecae
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 09/17/2020
ms.locfileid: "47948020"
---
# <a name="upgrading-from-sharepoint-2010"></a>Upgrading from SharePoint 2010

*This article applies to both Microsoft 365 Enterprise and Office 365 Enterprise.*

Microsoft SharePoint 2010 and SharePoint Server 2010 will reach end of support on **April 13, 2021**. This article discusses resources to help you migrate your existing SharePoint Server 2010 data to SharePoint Online in Microsoft 365, or upgrade your on-premises SharePoint Server 2010 environment.

## <a name="what-is-end-of-support"></a>What is end of support?

When your SharePoint Server 2010 and SharePoint Foundation 2010 software reaches the end of its support lifecycle (the period during which Microsoft provides new features, bug fixes, security fixes, and so on), this is referred to as the software's "end of support", or sometimes as its "retirement". Once a product reaches its end of support (EOS), nothing actually shuts off or stops working; however, after a product's end of support, Microsoft no longer provides:

- Technical support for problems that may occur;
- Bug fixes for issues that are discovered and that may impact the stability and usability of the server;
- Security fixes for vulnerabilities that are discovered and that may make the server vulnerable to security breaches;
- Time zone updates.

This means no further updates, patches, or fixes will be shipped for the product (including security patches/fixes), and Microsoft Support will have fully shifted its support efforts to more recent versions. As end of support for SharePoint Server 2010 approaches, you should take the opportunity to trim data you no longer need before upgrading the product, and/or migrate your important data.
> [!NOTE]
> A software lifecycle typically lasts ten years from the date of the product's initial release. You can search for [Microsoft solution providers](https://go.microsoft.com/fwlink/?linkid=841249) who can help you upgrade to the next version of your software, assist with a migration to Microsoft 365, or both. Be sure you're aware of the end-of-support dates for key underlying technologies as well, particularly the version of SQL Server you're using with SharePoint. See [Fixed Lifecycle Policy](https://support.microsoft.com/help/14085) to understand the product lifecycle in detail.
## <a name="what-are-my-options"></a>What are my options?

First, check the date that support ends on the [Product Lifecycle site](https://support.microsoft.com/lifecycle/search?alpha=SharePoint%20Server%202010). Next, be sure to plan your upgrade or migration time with that date in mind. Remember that your product will *not stop working* on the date shown, and you can continue using it, but because your installation will no longer be patched after that date, you'll want a strategy to ease a smooth transition to the next version of the product.

This matrix can help you plot a course when it comes to migrating product features and user data:
|End-of-support product|Good |Better|
|---|---|---|
|SharePoint Server 2010|SharePoint Server 2013 (on-premises)|SharePoint Online|
||SharePoint Server 2013 hybrid with SharePoint Online|SharePoint Server 2016 (on-premises)|
|||SharePoint cloud hybrid search|

If you choose options at the lower end of the scale (the good options), you'll need to begin planning another upgrade soon after your migration from SharePoint Server 2010 is complete.

Here are the three paths you can take to avoid the end of support for SharePoint Server 2010.



> [!NOTE]
> End of support for SharePoint Server 2010 and SharePoint Foundation 2010 is scheduled for April 13, 2021, *but* be aware that you should always check the [Product Lifecycle site](https://support.microsoft.com/lifecycle) for the most current dates.
## <a name="where-should-i-go-next"></a>Where should I go next?

SharePoint Server 2013 and SharePoint Foundation 2013 can be installed on-premises on your own servers. Alternatively, you can use SharePoint Online, an online service that is part of Microsoft 365. You can choose to:

- Migrate to SharePoint Online
- Upgrade SharePoint Server or SharePoint Foundation on-premises
- Do both of the above
- Implement a [SharePoint hybrid](https://docs.microsoft.com/sharepoint/hybrid/hybrid) solution

Be aware of the hidden costs of maintaining a server farm going forward, of maintaining or migrating customizations, and of upgrading the hardware that SharePoint Server depends on. If you've taken all of this into account, continuing to upgrade on-premises makes good sense. Otherwise, if you're running your farm on older SharePoint servers without heavy customization, you may benefit from a planned migration to SharePoint Online. It's also possible that, alongside your on-premises SharePoint Server environment, you may want to put some data into SharePoint Online to reduce the hardware management that keeping all your data on-premises involves. It may be more economical to shift some of your data into SharePoint Online.

> [!NOTE]
> SharePoint administrators may create a Microsoft 365 subscription, set up a brand-new SharePoint Online site, and then cut cleanly away from SharePoint Server 2010, moving only the most essential documents to the new SharePoint Online sites. From there, any remaining data on the SharePoint Server 2010 site can be drained into local archives.
|SharePoint Online|SharePoint Server on-premises|
|---|---|
|High cost in time (planning/execution/verification)|High cost in time (planning/execution/verification)|
|Lower cost in funds (no hardware purchases)|Higher cost in funds (hardware purchases)|
|One-time cost for the migration|Repeated one-time costs per future migration|
|Low total cost of ownership/maintenance|High total cost of ownership/maintenance|

If you migrate to Microsoft 365, your one-time cost in planning time will be higher (while you organize your data and decide what to take to the cloud and what to leave behind). But after your data is migrated, upgrades happen automatically from then on: you no longer need to manage hardware and software updates, and the uptime of your farm is backed by a Microsoft service level agreement ([SLA](https://go.microsoft.com/fwlink/?linkid=843153)).
### <a name="migrate-to-sharepoint-online"></a>Migrate to SharePoint Online

Make sure SharePoint Online has all the features you need by reviewing its [service description](https://docs.microsoft.com/office365/servicedescriptions/sharepoint-online-service-description/sharepoint-online-service-description).

There's currently no way to directly migrate from SharePoint Server 2010 (or SharePoint Foundation 2010) to SharePoint Online, so much of the work is done manually. This gives you the opportunity to archive and delete data and sites that are no longer needed before the move. You can archive other data into storage. Also remember that neither SharePoint Server 2010 nor SharePoint Foundation 2010 will stop working at end of support, so administrators can allow a period during which SharePoint is still running in case users forget to relocate some of their data.

If you upgrade to SharePoint Server 2013 or SharePoint Server 2016 and decide to move data to SharePoint Online, your move can also make use of the [SharePoint Migration API](https://support.office.com/article/Upload-on-premises-content-to-SharePoint-Online-using-PowerShell-cmdlets-555049c6-15ef-45a6-9a1f-a1ef673b867c?ui=en-US&rs=en-US&ad=US) (to migrate information into OneDrive for Business, for example).
|SharePoint Online advantage|SharePoint Online disadvantage|
|---|---|
|Microsoft supplies SPO hardware and all hardware administration.|Available features can differ between SharePoint Server on-premises and SPO.|
|You are the global administrator of your subscription and can assign administrators to SPO sites.|Some actions available to a farm administrator in SharePoint Server on-premises don't exist (or aren't necessary) in the SharePoint administrator role in Microsoft 365, but SharePoint administration, site collection administration, and site ownership remain local to your organization.|
|Microsoft applies patches, fixes, and updates to the underlying hardware and software (including the SQL servers that SharePoint Online runs on).|Because there's no access to the underlying file system in the service, some customizations are limited.|
|Microsoft publishes [service level agreements](https://go.microsoft.com/fwlink/?linkid=843153) and moves quickly to resolve service-level incidents.|Backup and restore and other recovery options are automated by the service in SharePoint Online; backups are overwritten if they aren't used.|
|Security testing and server performance tuning are carried out in the service by Microsoft on an ongoing basis.|Changes to the user interface and other SharePoint features are installed by the service and may need to be enabled or disabled.|
|Microsoft 365 meets many industry standards: [Microsoft compliance offerings](https://go.microsoft.com/fwlink/?linkid=843165).|[FastTrack](https://go.microsoft.com/fwlink/?linkid=518597) support for the migration is limited. <br/> Much of the upgrade is done manually or via the SPO Migration API described in the [Migration content roadmap for SharePoint Online and OneDrive](https://go.microsoft.com/fwlink/?linkid=843184).|
|Neither Microsoft support engineers nor datacenter employees have unrestricted administrative access to your subscription.|There may be added costs if your hardware infrastructure needs to be updated to support the newer version of SharePoint, or if a secondary farm is required for the upgrade.|
|Solution providers can assist with the one-time task of migrating your data to SharePoint Online.|Not all changes to SharePoint Online are within your control. After migration, design differences in menus, libraries, and other features may temporarily affect usability.|
|Online products are updated automatically through the service, which means that although features may be deprecated, there's no true end-of-support lifecycle.|SharePoint Server (or SharePoint Foundation) and the underlying SQL servers are subject to support lifecycles that will end.|

If you've decided to create a new Microsoft 365 site and migrate data manually as needed, take a look at your [Microsoft 365 options](https://www.microsoft.com/microsoft-365/).
### <a name="upgrade-sharepoint-server-on-premises"></a>Upgrade SharePoint Server on-premises

As of the latest version of the SharePoint on-premises product (SharePoint Server 2019), SharePoint Server upgrades must be carried out *serially*, meaning there's no way to upgrade directly from SharePoint Server 2010 to SharePoint Server 2016 or to SharePoint 2019.

Serial upgrade path:

- SharePoint Server 2010 \> SharePoint Server 2013 \> SharePoint Server 2016

If you want to take the full path from SharePoint 2010 to SharePoint Server 2016, it will take time and planning. Upgrades involve costs in terms of updated hardware (note that the SQL servers must also be upgraded), software, and administration. In addition, customizations may need to be updated or even abandoned. Be sure to gather notes on all critical customizations before you upgrade your SharePoint Server farm.

> [!NOTE]
> It's possible to keep your end-of-support SharePoint 2010 farm, install a SharePoint Server 2016 farm on new hardware (so that the separate farms run side by side), and then plan and carry out a manual content migration (for example, by downloading and re-uploading content). There are potential pitfalls to such manual moves (for example, documents moved from 2010 ending up with a last-modified stamp set to the alias of the account performing the manual move), and some work has to happen in advance (re-creating sites, subsites, permissions, and list structures). This is a good time to consider which data you can move into storage or no longer need at all, which can reduce the impact of the migration. In either case, clean up your environment before the upgrade. Make sure your existing farm is functional before you upgrade, and (certainly) before you decommission it!
Remember to check the **supported and unsupported upgrade paths**:

- [SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843156)
- [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843157)

If you have **customizations**, it's important that you plan their upgrade for each step in the migration path:

- [SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843160)
- [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843162)
|On-premises advantage|On-premises disadvantage|
|---|---|
|Full control over every aspect of your SharePoint farm (and its SQL servers), from the server hardware on up.|All breaks and fixes are your company's responsibility (though you can engage paid Microsoft support as long as your product is not past end of support).|
|Full feature set of SharePoint Server on-premises, with the option to connect your on-premises farm to a SharePoint Online subscription through hybrid.|Upgrades, patches, security fixes, hardware upgrades, and all maintenance of SharePoint Server and its SQL farm are managed on-premises.|
|Full access to customization options beyond what SharePoint Online offers.|[Microsoft compliance offerings](https://go.microsoft.com/fwlink/?linkid=843165) must be configured manually on-premises.|
|Security testing and server performance tuning are carried out on-premises (under your control).|Microsoft 365 may provide features for SharePoint Online that do not work with SharePoint Server on-premises.|
|Solution providers can help migrate data to the next version of SharePoint Server (and beyond).|Your SharePoint Server sites do not automatically use the [SSL/TLS](https://go.microsoft.com/fwlink/?linkid=843167) certificates seen in SharePoint Online.|
|Full control over naming conventions, backup and restore, and other recovery options in SharePoint Server on-premises.|SharePoint Server on-premises is subject to product lifecycles.|
### <a name="upgrade-resources"></a>Upgrade resources
First, compare the hardware and software requirements. If your current hardware does not meet the basic requirements for the upgrade, that may mean upgrading the farm or SQL server hardware first, or moving a percentage of your sites to the evergreen hardware of SharePoint Online. Once you have made your assessment, follow the supported upgrade paths and methods.
- **Hardware/software requirements for**:
[SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843204) | [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843206) | [SharePoint Server 2016](https://go.microsoft.com/fwlink/?linkid=843207)
- **Software boundaries and limits for**:
[SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843247) | [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843248) | [SharePoint Server 2016](https://go.microsoft.com/fwlink/?linkid=843249)
- **Overview of the upgrade process for**:
[SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843251) | [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843252) | [SharePoint Server 2016](https://go.microsoft.com/fwlink/?linkid=843359)
### <a name="create-a-sharepoint-hybrid-solution-between-sharepoint-online-and-sharepoint-server-on-premises"></a>Create a SharePoint hybrid solution between SharePoint Online and SharePoint Server on-premises
Another option, and one that combines the strengths of on-premises and online environments, is hybrid. You can connect SharePoint Server 2013, 2016, or 2019 farms to SharePoint Online to create a SharePoint hybrid solution: [learn more about SharePoint hybrid solutions](https://support.office.com/article/4c89a95a-a58c-4fc1-974a-389d4f195383.aspx).
If you choose a hybrid SharePoint Server farm as your migration target, be sure to plan which sites and users should move online and which must remain on-premises. Reviewing and rating the content of your SharePoint Server farm (determining which data is of high, medium, or low importance to your business) can help with this decision. It may turn out that all you need to share with SharePoint Online is (a) user accounts for sign-in and (b) the SharePoint Server search index, and this may only become apparent after you see how your sites are used. If your company later decides to migrate all of your content to SharePoint Online, you can move the remaining accounts and data online, decommission your on-premises farm, and administer the SharePoint farm through the Microsoft 365 consoles from that point on.
Make sure you are familiar with the available hybrid types and configure the connection between your on-premises SharePoint farm and your Microsoft 365 subscription.
|Online advantage|Online disadvantage|
|---|---|
|[Microsoft compliance offerings](https://go.microsoft.com/fwlink/?linkid=843165) are met by the service.|[FastTrack](https://www.microsoft.com/fasttrack/microsoft-365) assistance for migration is limited. <br/> Much of the upgrade is done manually or through the SPO Migration API, described in the [migration roadmap for SharePoint Online and OneDrive](https://go.microsoft.com/fwlink/?linkid=843184).|
|Neither Microsoft support engineers nor datacenter staff have unrestricted administrative access to your subscription.|Additional costs can arise if the hardware infrastructure must be upgraded to support the newer version of SharePoint, or if a secondary farm is required for the upgrade.|
|Partners can help with the one-time task of migrating your data to SharePoint Online.||
|Online products are updated automatically through the service; features may be deprecated over time, but there is no true end of support.||
If you have decided to create a new Microsoft 365 site and migrate data manually as needed, take a look at your [Microsoft 365 options](https://www.microsoft.com/microsoft-365/).
### <a name="upgrade-sharepoint-server-on-premises"></a>Upgrade SharePoint Server on-premises
Historically there has been no way to skip versions in SharePoint upgrades, at least not as of the release of SharePoint Server 2016. That means upgrades proceed serially:
- SharePoint 2007 \> SharePoint Server 2010 \> SharePoint Server 2013 \> SharePoint Server 2016
Taking the full path from SharePoint 2007 to SharePoint Server 2016 means a considerable investment of time and requires budgeting for updated hardware (note that SQL servers may need to be upgraded as well), software, and administration. Customizations must be upgraded or abandoned according to how important the feature is.
> [!NOTE]
> It is possible to keep your end-of-life SharePoint 2007 farm, install a SharePoint Server 2016 farm on new hardware (so that the separate farms run side by side), and then plan and carry out a manual content migration (for example, downloading and re-uploading content). Be aware of some of the pitfalls of manual moves (for example, moves replace a document's last-modified account with the alias of the account performing the move), as well as the work that has to be done up front (for example, re-creating sites, subsites, permissions, and list structures). Now is also a good time to review which data you can archive or no longer need, an action that can reduce the impact of the migration.
In either case, clean up your environment before the upgrade. Make sure your existing farm is functional before you upgrade, and certainly before you decommission it!
Remember to review the **supported and unsupported upgrade paths**:
- [SharePoint Server 2007](https://go.microsoft.com/fwlink/?linkid=843156)
- [SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843156)
- [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843157)
If you have **customizations**, it is important to plan an upgrade for each step in the migration path:
- [SharePoint 2007](https://go.microsoft.com/fwlink/?linkid=843158)
- [SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843160)
- [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843162)
|On-premises advantage|On-premises disadvantage|
|---|---|
|Full control over every aspect of the SharePoint farm, from the server hardware on up.|All breaks and fixes are your company's responsibility (you can engage paid Microsoft support as long as your product is not past end of support).|
|Full feature set of SharePoint Server on-premises, with the option to connect your on-premises farm to a SharePoint Online subscription through hybrid.|Upgrades, patches, security patches, and all maintenance of SharePoint Server are managed on-premises.|
|Full access for greater customization.|[Microsoft compliance offerings](https://go.microsoft.com/fwlink/?linkid=843165) must be configured manually on-premises.|
|Security testing and server performance tuning are carried out on-premises (under your control).|Microsoft 365 may provide features for SharePoint Online that do not work with SharePoint Server on-premises.|
|Partners can help migrate data to the next version of SharePoint Server (and beyond).|Your SharePoint Server sites do not automatically use the [SSL/TLS](https://go.microsoft.com/fwlink/?linkid=843167) certificates seen in SharePoint Online.|
|Full control over naming conventions, backup and restore, and other recovery options in SharePoint Server on-premises.|SharePoint Server on-premises is subject to product lifecycles.|
### <a name="upgrade-resources"></a>Upgrade resources
Start by confirming that you meet the hardware and software requirements, and then follow supported upgrade methods.
- **Hardware/software requirements for**:
[SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843204) | [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843206) | [SharePoint Server 2016](https://go.microsoft.com/fwlink/?linkid=843207)
- **Software boundaries and limits for**:
[SharePoint Server 2007](https://go.microsoft.com/fwlink/?linkid=843245) | [SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843247) | [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843248) | [SharePoint Server 2016](https://go.microsoft.com/fwlink/?linkid=843249)
- **Overview of the upgrade process for**:
[SharePoint Server 2007](https://go.microsoft.com/fwlink/?linkid=843250) | [SharePoint Server 2010](https://go.microsoft.com/fwlink/?linkid=843251) | [SharePoint Server 2013](https://go.microsoft.com/fwlink/?linkid=843252) | [SharePoint Server 2016](https://go.microsoft.com/fwlink/?linkid=843359)
### <a name="create-a-sharepoint-hybrid-solution-between-sharepoint-online-and-on-premises"></a>Create a SharePoint hybrid solution between SharePoint Online and on-premises
If the answer to your migration needs lies somewhere between the control offered by on-premises and the lower operating costs offered by SharePoint Online, you can connect SharePoint Server 2013 or 2016 farms to SharePoint Online through hybrid. [Learn about SharePoint hybrid solutions](https://support.office.com/article/4c89a95a-a58c-4fc1-974a-389d4f195383.aspx)
If you decide that a hybrid SharePoint Server farm would benefit your business, familiarize yourself with the available hybrid types and configure the connection between your on-premises SharePoint farm and your Microsoft 365 subscription.
A good way to see how this works is to create a Microsoft 365 dev/test environment, which you can set up with the [Test Lab Guides](m365-enterprise-test-lab-guides.md). Once you have a trial or purchased Microsoft 365 subscription, you are on your way to creating the site collections, webs, and document libraries in SharePoint Online to which you can migrate data (either manually, by using the Migration API, or, if you want to migrate My Site content to OneDrive for Business, through the hybrid wizard).
> [!NOTE]
> Remember that your SharePoint Server 2010 farm must first be upgraded on-premises to either SharePoint Server 2013 or SharePoint Server 2016 before you can use the hybrid option. SharePoint Foundation 2010 and SharePoint Foundation 2013 cannot create hybrid connections to SharePoint Online.
## <a name="summary-of-options-for-office-2010-client-and-servers-and-windows-7"></a>Summary of options for Office 2010 clients and servers and Windows 7
For a visual summary of the upgrade, migration, and cloud options for Office 2010 clients and servers and for Windows 7, see the [end-of-support poster](../downloads/Office2010Windows7EndOfSupport.pdf).
[](../downloads/Office2010Windows7EndOfSupport.pdf)
This one-page poster is a simple way to visualize the various paths you can take to keep Office 2010 client and server products and Windows 7 from reaching end of support, with preferred paths and supported options in Microsoft 365 Enterprise highlighted.
You can [download](https://github.com/MicrosoftDocs/microsoft-365-docs/raw/public/microsoft-365/downloads/Office2010Windows7EndOfSupport.pdf) this poster and print it in letter, legal, or tabloid (11 x 17 inch) format.
## <a name="related-topics"></a>Related topics
[Resources to help you upgrade from Office 2007 or 2010 servers and clients](upgrade-from-office-2010-servers-and-products.md)
[Overview of the upgrade process from SharePoint 2010 to SharePoint 2013](https://technet.microsoft.com/library/mt493301%28v=office.16%29.aspx)
[Best practices for upgrading from SharePoint 2010 to SharePoint 2013](https://technet.microsoft.com/library/mt493305%28v=office.16%29.aspx)
[Troubleshoot database upgrade issues in SharePoint 2013](https://go.microsoft.com/fwlink/?linkid=843195)
[Find a Microsoft solution provider to help with your upgrade](https://go.microsoft.com/fwlink/?linkid=841249)
[Updated Product Servicing Policy for SharePoint 2013](https://technet.microsoft.com/library/mt493253%28v=office.16%29.aspx)
[Updated Product Servicing Policy for SharePoint Server 2016](https://technet.microsoft.com/library/mt782882%28v=office.16%29.aspx)
| 110.154122 | 1,091 | 0.817981 | deu_Latn | 0.99212 |
e1bb55e4a00dbc41f05b893f96d6015786f5c7e0 | 2,056 | md | Markdown | _portfolio/transportal.md | vrcooper/vrcooper.github.io | 9b3fd0c6ef077cbfe90c5543f7c1eab1f235a0e7 | [
"MIT"
] | null | null | null | _portfolio/transportal.md | vrcooper/vrcooper.github.io | 9b3fd0c6ef077cbfe90c5543f7c1eab1f235a0e7 | [
"MIT"
] | null | null | null | _portfolio/transportal.md | vrcooper/vrcooper.github.io | 9b3fd0c6ef077cbfe90c5543f7c1eab1f235a0e7 | [
"MIT"
] | null | null | null | ---
layout: post
title: Transportal
thumbnail-path: "img/transportal.png"
short-description: Transportal allows users to create profiles and projects.
---
{:.center}

{:.center}
<a class="button" href="https://transportal.herokuapp.com/" target="_blank"><i class="fa fa-cloud"> Demo Site</i></a> <a class="button" href="https://github.com/vrcooper/Transportal" target="_blank"><i class="fa fa-fw fa-github"></i> Source Code</a>
## Summary:
Transportal is a web application that allows users to sign in, create user profiles, create projects, and upload and download files.
## Explanation:
This is the capstone for the backend portion of the full-stack web development curriculum using Ruby on Rails. The initial concept was to build a web app serving as a translator's resource for project collaboration. The main features were the creation of a translator profile, a terminology forum allowing users to post and comment on terminology topics, and the creation of translation wikis. Due to time constraints the project is only partially completed, and users are currently only able to create profiles and projects and to upload and download files.
## Problem:
1. As a user, I want to sign up for a free account by providing a user name, password and email
2. As a user, I want to sign in and out of Transportal
3. As a user, I want to create projects
4. As a user, I want to upload and download files
## Solution:
* Devise for user authentication
* Created models, controllers, and views for projects, documents, and users
* Use CarrierWave to upload and store images
* Use MiniMagick to manipulate images
## Results:
A working web application
## Conclusion:
I was able to reinforce some of the basic concepts which were acquired in the previous projects. This was very challenging since I had to build from scratch based on my own concept for this project. One of the greatest lessons learned on this project was that I had more ideas than I had time, so I had to reduce my concepts down to the essentials. | 55.567568 | 554 | 0.770428 | eng_Latn | 0.998864 |
e1bc2f512c8e4aa35701a11a0cc10bbf382f174d | 3,930 | md | Markdown | README.md | daniel-pittman/mean-starter-template | 516d4cb269594d261fb25b2cd1f2e41eca122cd0 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | README.md | daniel-pittman/mean-starter-template | 516d4cb269594d261fb25b2cd1f2e41eca122cd0 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | README.md | daniel-pittman/mean-starter-template | 516d4cb269594d261fb25b2cd1f2e41eca122cd0 | [
"BSD-2-Clause-FreeBSD"
] | null | null | null | # MEAN stack starter template
A repository containing a starter MongoDB, Express.js, Angular, and Node.js (MEAN) project for full-stack web development
### Running the Docker containers
1. Start by installing Docker if you have not already done so: https://docs.docker.com/get-docker/
* NOTE for Windows users: I *highly* recommend that you enable the Windows Subsystem for Linux Version 2 (WSL2) in Windows, and install a default Linux
distribution (like Ubuntu) prior to
installing Docker. Docker will run much easier on all editions of Windows 10 if you perform this step first
- This article will walk you through enabling WSL2: https://www.omgubuntu.co.uk/how-to-install-wsl2-on-windows-10
- If your BIOS does not have virtualization enabled, you may encounter an error enabling WSL 2. This article can help you enable virtualization on
your computer: https://support.bluestacks.com/hc/en-us/articles/115003174386-How-to-enable-Virtualization-VT-on-Windows-10-for-BlueStacks-4
- Once you have Docker installed, set Ubuntu 20 to be your default distribution by opening a command prompt as administrator and running the
following command:
- `wsl -s Ubuntu-20.04`
- Once your default distribution is set, verify Docker is configured correctly to use WSL 2 and your default WSL distro: https://docs.docker.com/docker-for-windows/wsl/
- Check the "Settings > General" and "Settings > Resources > WSL Integration" sections of your Docker installation and compare them to the
screenshots on this website
1. After installing Docker, start the necessary containers for the project by running the `scripts/start_container` script appropriate for your operating
system
- `scripts/start_container` for macOS and Linux
- `scripts/start_container.bat` for Windows
### Developing
1. Once the Docker containers have started, attach to the development container by running the `scripts/attach_container` script appropriate for your operating
system
- `scripts/attach_container` for macOS and Linux
- `scripts/attach_container.bat` for Windows
1. Navigate to the `/app/meantemplate` directory. This will be where the source code of the project will be shared into the Docker container
1. Run `yarn install` to install server dependencies.
1. Run `npm run start:server` to start the development server.
1. You can access the Express.js API via http://localhost:9000 on your local machine
1. In a new terminal, run another instance of `scripts/attach_container` and run `npm run start:client` to run the development client application inside the
`/app/meantemplate` directory.
1. You can access the Angular application via http://localhost:8080 in a browser on your local machine
## Build & development
Run `gulp build` for building and `gulp buildcontrol:heroku` to deploy to Heroku.
## Testing
- Running `npm run test:client` will run the client unit tests.
- Running `npm run test:server` will run the server unit tests.
- Running `npm run test:e2e` will run the e2e tests using Protractor.
## FAQ
If you see an error from Heroku saying "match is not defined", try running these commands from your dist directory:
1. `heroku config:set NODE_MODULES_CACHE=false`
1. `git commit -am 'disable node_modules cache' --allow-empty`
1. `git push heroku master`
1. `heroku config:set NODE_MODULES_CACHE=true`
If you see a message during `yarn install` saying "info There appears to be trouble with your network connection. Retrying...", followed by the error
"ESOCKETTIMEDOUT", cancel the yarn install and try it again. If this does not help, try restarting Docker.
## Acknowledgements
This project is based, in part, on the [Yeoman Angular Full-Stack Generator](https://angular-fullstack.github.io/)
## LICENSE
This software is licensed under the [FreeBSD License](https://opensource.org/licenses/bsd-license.php)
| 63.387097 | 176 | 0.760305 | eng_Latn | 0.982734 |
e1bc7718b09a294537b21bfcbd2c7b6057774b74 | 28 | md | Markdown | products-service/README.md | mduarteg/pixelsuite | cc7688680320a4874a3425ee5009927226d2ec59 | [
"Apache-2.0"
] | null | null | null | products-service/README.md | mduarteg/pixelsuite | cc7688680320a4874a3425ee5009927226d2ec59 | [
"Apache-2.0"
] | 1 | 2021-03-10T14:42:46.000Z | 2021-03-10T14:42:46.000Z | products-service/README.md | mduarteg/pixelsuite | cc7688680320a4874a3425ee5009927226d2ec59 | [
"Apache-2.0"
] | null | null | null | # PixelSuite Product Service | 28 | 28 | 0.857143 | eng_Latn | 0.460506 |
e1bcb4463db4cfe4ee55c3de47ad6bb44e777601 | 855 | md | Markdown | README.md | umeshan/StudentAPI | 5f13bed512cf30d8267e870964efeda50536adec | [
"MIT"
] | null | null | null | README.md | umeshan/StudentAPI | 5f13bed512cf30d8267e870964efeda50536adec | [
"MIT"
] | null | null | null | README.md | umeshan/StudentAPI | 5f13bed512cf30d8267e870964efeda50536adec | [
"MIT"
] | null | null | null | ### `npm start`
Runs the app in the development mode.<br />
Open [http://localhost:{port}](http://localhost:{port}) to view it in the browser.
The page will reload if you make edits.
### City API
Open [http://localhost:{port}/city](http://localhost:{port}/city)
### Add Student API
Open [http://localhost:{port}/students/add](http://localhost:{port}/students/add)
### List Student(s) API
Open [http://localhost:{port}/students/list](http://localhost:{port}/students/list)
### Update Student API
Open [http://localhost:{port}/students/update](http://localhost:{port}/students/update)
### Delete Student API
Open [http://localhost:{port}/students/delete](http://localhost:{port}/students/delete)
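When scripting against these endpoints, it can help to build the URLs in one place. A minimal sketch (the port value and the helper name are my own assumptions, not part of this API):

```javascript
const PORT = 9000; // assumed development port; match your own server config

// Build the full URL for a students endpoint, e.g. "list", "add", "update", "delete".
function studentUrl(action) {
  return `http://localhost:${PORT}/students/${action}`;
}

console.log(studentUrl("list")); // → http://localhost:9000/students/list
```

You can then pass the result to `fetch`, `axios`, or `curl` without repeating the host and port in every call.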
### Sample Student list
`[{ "firstName": "David", "lastName": "John", "regNo": 101, "gender": "Male", "city": "Chennai", "zip": 600002, "id":1 }]`
| 27.580645 | 122 | 0.678363 | kor_Hang | 0.398885 |
e1bce6ee71a4575c591cbb45baa580bd041d9fe1 | 679 | md | Markdown | curriculum/challenges/portuguese/07-scientific-computing-with-python/python-for-everybody/introduction-why-program.md | fcastillo-serempre/freeCodeCamp | 43496432d659bac8323ab2580ba09fa7bf9b73f2 | [
"BSD-3-Clause"
] | 172,317 | 2017-01-11T05:26:18.000Z | 2022-03-31T23:30:16.000Z | curriculum/challenges/portuguese/07-scientific-computing-with-python/python-for-everybody/introduction-why-program.md | fcastillo-serempre/freeCodeCamp | 43496432d659bac8323ab2580ba09fa7bf9b73f2 | [
"BSD-3-Clause"
] | 26,252 | 2017-01-11T06:19:09.000Z | 2022-03-31T23:18:31.000Z | curriculum/challenges/portuguese/07-scientific-computing-with-python/python-for-everybody/introduction-why-program.md | fcastillo-serempre/freeCodeCamp | 43496432d659bac8323ab2580ba09fa7bf9b73f2 | [
"BSD-3-Clause"
] | 27,418 | 2017-01-11T06:31:22.000Z | 2022-03-31T20:44:38.000Z | ---
id: 5e6a54a558d3af90110a60a0
title: 'Introduction: Why Program?'
challengeType: 11
videoId: 3muQV-Im3Z0
bilibiliIds:
aid: 206882253
bvid: BV1Fh411z7tr
cid: 376314257
videoLocaleIds:
espanol: 3muQV-Im3Z0
italian: 3muQV-Im3Z0
portuguese: 3muQV-Im3Z0
dashedName: introduction-why-program
---
# --description--
More resources:
\- [Install Python for Windows](https://youtu.be/F7mtLrYzZP8)
\- [Install Python for macOS](https://youtu.be/wfLnZP-4sZw)
# --question--
## --text--
Who should learn to program?
## --answers--
College students.
---
People who want to become software developers.
---
Everyone.
## --video-solution--
3
| 14.446809 | 60 | 0.720177 | por_Latn | 0.585108 |
e1bd100248eec4a14277d895535ae731d017765c | 4,730 | md | Markdown | _posts/2021-06-21-vimlife.md | dhancodes/dhancodes.github.io | c0ffef3bae7ce0d7583a21c1f0ceb3fd93d1f1f2 | [
"MIT"
] | 1 | 2021-08-08T10:03:04.000Z | 2021-08-08T10:03:04.000Z | _posts/2021-06-21-vimlife.md | dhancodes/dhancodes.github.io | c0ffef3bae7ce0d7583a21c1f0ceb3fd93d1f1f2 | [
"MIT"
] | null | null | null | _posts/2021-06-21-vimlife.md | dhancodes/dhancodes.github.io | c0ffef3bae7ce0d7583a21c1f0ceb3fd93d1f1f2 | [
"MIT"
] | null | null | null | ---
title: "It's a Vim-derful life"
date: 2021-06-21
permalink: /posts/2021/06/vimlife/
tags:
- workflow
- vim
- latex
toc: true
---
Vim is a text editor that changed my workflow. You can read more about vim [here.](https://en.wikipedia.org/wiki/Vim_(text_editor))
What makes Vim different from other commonly used text editors is that it works
with different modes, the important ones being normal, insert, visual, and
command mode. This in turn multiplies the functionality of each key. Since there are
different approaches to optimizing a workflow, you have to test things out to
figure out your own. I present a selection of things that worked for me; choose what you like.
### Searching in Vim
Searching is essential in every text editor. These changes will make your search experience better.
```vim
set nohlsearch
set incsearch
set ignorecase " case insensitive searching
set smartcase " case-sensitive if expression contains a capital letter
nnoremap n nzzzv
nnoremap N Nzzzv
" Adding Fuzzy search for files
set path+=**
" For creating a tag file using ctags. Works with tex and program files.
" For more details, see the ctags man page.
command! MakeTags !ctags -R .
```
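With `path+=**` and a tags file generated by `:MakeTags`, a few built-in commands become surprisingly powerful. A quick sketch of how I use them (no plugins involved; adjust to your own setup):

```vim
" Fuzzy-find a file anywhere under the working directory:
"   :find chap<Tab>   " completes through subdirectories, e.g. chapters/chapter2.tex
" Jump to the definition under the cursor (uses the tags file):
"   <C-]>             " jump to the tag
"   <C-t>             " jump back
" List every match instead of jumping to the first one:
"   :tselect myfunction
```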
### Making PDFs using LaTeX
I have a custom script for compiling LaTeX files, and I add the following
lines to `.vimrc`:
```vim
autocmd FileType tex map <Leader>c :!~/scripts/makepdf.sh % <CR><CR>
autocmd FileType tex map <Leader>o :!zathura %:r.pdf & <CR><CR>
```
The shell script is as follows:
```shell
#!/bin/sh
#Making pdf from anything.
file=$(readlink -f "$1")
dir=${file%/*}
base="${1%.*}"
cd "$dir" || exit 1
command="pdflatex"
( head -n5 "$file" | grep -qi 'xelatex' ) && command="xelatex"
( head -n1 "$file" | grep -qi 'handout' ) && $command --output-directory="$dir" --jobname="$base""_handout" "\PassOptionsToClass{handout}{beamer} \input{$base}" && bibtex "$base""_handout.aux"
$command --output-directory="$dir" "$base" &&
bibtex $base.aux
$command --output-directory="$dir" "$base"
( head -n1 "$file" | grep -qi 'handout' ) && $command --output-directory="$dir" --jobname="$base""_handout" "\PassOptionsToClass{handout}{beamer} \input{$base}"
```
This shell script uses the `pdflatex` or `xelatex` engine depending on whether `xelatex` is mentioned in the first 5 lines of the document. It also supports a `handout` option for
creating a separate PDF of handout-style beamer slides.
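As an illustration, a beamer source that triggers both behaviors might begin like this (the markers are plain-text cues the script greps for; the content itself is just a placeholder):

```tex
% handout
% The word "xelatex" below, within the first five lines,
% makes the script switch engines.
% Engine: xelatex
\documentclass{beamer}
\begin{document}
\begin{frame}{Demo}
  \only<2>{Visible in the slide build, hidden in the handout PDF.}
\end{frame}
\end{document}
```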
You can also try writing files in Markdown and converting them to PDF using
pandoc. Even though the typing speed and conventions are better in Markdown,
it cannot match the versatility and peer support of LaTeX.
### Plugins
Plugins add functionality to Vim.
The use of plugins is the thing I am most careful about: use only the fewest,
most needed ones, and use a plugin manager. The one I use is [Vundle.](https://github.com/VundleVim/Vundle.vim)
The plugins I found most helpful are
```vim
"Plugins
call plug#begin('~/.vim/plugged')
Plug 'VundleVim/Vundle.vim'
Plug 'sirver/ultisnips'| Plug 'honza/vim-snippets'
Plug 'tpope/vim-surround'
Plug 'tpope/vim-commentary'
Plug 'junegunn/vim-easy-align'
call plug#end()
```
Typing with UltiSnips simply boosts your productivity. You can assign
particular keystrokes to expand into commands as well as custom text, and vim-snippets
comes with predefined, useful snippets for most languages.
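For example, a minimal UltiSnips definition, placed in a file such as `~/.vim/UltiSnips/tex.snippets` (the trigger name here is my own choice, not something vim-snippets ships):

```snippets
snippet beg "begin/end environment" bA
\begin{$1}
	$0
\end{$1}
endsnippet
```

Typing `beg` at the start of a line expands it automatically; the `b` option restricts the trigger to the beginning of a line and `A` makes it fire without pressing the expand key.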
**Warning!** These are a compilation of things that work for me. I hope they inspire you
to create your own vimrc.
{: .notice--danger}
### Some Helpful Automatic commands.
```vim
augroup remember_fold
autocmd!
autocmd BufWinLeave * mkview
autocmd BufWinEnter * silent! loadview
augroup END
"Executes the cleanup script on quit for tex files.
augroup autocom
autocmd!
autocmd VimLeave *.tex !~/scripts/texclear.sh %
autocmd VimLeave *.tex !~/scripts/texclear.sh %:r_handout.tex
augroup END
```
And the texclear script is as follows.
```bash
#!/bin/sh
# Clears the build files of a LaTeX/XeLaTeX build.
# I have vim run this file whenever I exit a .tex file.
case "$1" in
*.tex)
file=$(readlink -f "$1")
dir=$(dirname "$file")
base="${file%.*}"
find "$dir" -maxdepth 1 -type f -regextype gnu-awk -regex "^$base\\.(4tc|xref|tmp|pyc|pyo|fls|vrb|fdb_latexmk|bak|swp|aux|log|synctex\\(busy\\)|lof|lot|maf|idx|mtc|mtc0|nav|out|snm|toc|bcf|run\\.xml|synctex\\.gz|blg|bbl)" -delete ;;
*) printf "Give .tex file as argument.\\n" ;;
esac
```
### References
- [How I manage my LaTeX lecture notes - Gilles Castel](https://castel.dev/post/lecture-notes-3/)
- [Dotfiles Github Page](https://dotfiles.github.io/)
- You can find my latex templates [here.](https://github.com/dhancodes/tex-templates)
| 34.525547 | 234 | 0.718605 | eng_Latn | 0.951423 |
e1bd1d2a29f1d05fbe7e3a3142f39cdee8916b32 | 2,358 | md | Markdown | src/04-Hot-Sandwiches/raw-vegan-gyro-with-tofu.md | troyerta/recipes | 2f62be5ba0a2618e03a0330430754fc919645c23 | [
"MIT"
] | 11 | 2022-03-08T16:00:37.000Z | 2022-03-12T15:01:41.000Z | src/04-Hot-Sandwiches/raw-vegan-gyro-with-tofu.md | troyerta/recipes | 2f62be5ba0a2618e03a0330430754fc919645c23 | [
"MIT"
] | 2 | 2021-03-20T18:06:58.000Z | 2021-09-08T02:03:55.000Z | src/04-Hot-Sandwiches/raw-vegan-gyro-with-tofu.md | troyerta/recipes | 2f62be5ba0a2618e03a0330430754fc919645c23 | [
"MIT"
] | 2 | 2020-04-15T21:05:51.000Z | 2022-03-09T19:50:52.000Z | # Raw Vegan Gyro with Tofu
## Overview
Gyro-spiced tofu, topped with raw and cooked chickpeas, fresh veggies, and tzatziki sauce, all wrapped in cool, crisp jicama shells.
- Yield: 4 servings
- Prep Time: 10 mins
- Cook Time: 10 mins
- Total Time: 20 mins
## Ingredients
#### Jicama shells
- 1 medium size Jicama
#### Gyro Seasoning
- 2 tbsp chili powder
- 1 tbsp ground coriander
- 1 tbsp ground cumin
- 0.5 tbsp paprika, hot
- 0.5 tbsp garlic powder
- 1 tbsp dried parsley
- 2 tsp dried thyme
- 2 tsp dried oregano
- salt and pepper to taste
#### Tofu
- 1 cup extra-firm tofu, drained (cut into finger-sized pieces)
- 1 tbsp gyro seasoning
- 1 tsp oil
- salt and pepper to taste
#### Gyro toppings
- 1/4 cup raw garbanzo beans
- 1/2 cup crispy garbanzo beans (chickpeas)
- 1/4 cup grape tomatoes (sliced)
- 1/4 cup sliced sweet peppers
- 1/2 cup zucchini julienne
- 1/2 cup tzatziki sauce (more if you need)
#### Crispy chickpeas
- 1/2 cup chickpeas(cooked)
- 1/2 tsp oil
- 1/4 tsp red chili powder
- 1/4 tsp turmeric powder
- salt and pepper to taste
## Method
#### Crispy chickpeas
1. Preheat oven to 350F.
---
2. Layer chickpeas on a baking sheet and sprinkle with salt and pepper.
---
3. Add spices and drizzle with oil.
---
4. Give a quick mix so that the spices and salt coat the chickpeas well.
---
5. Bake for 15-20 minutes until the desired crispiness is achieved.
---
#### Gyro-spiced tofu
1. Cut tofu into thick fingers.
---
2. Heat a tbsp of oil in a frying pan.
---
3. Add 1 tbsp gyro seasoning and add tofu.
---
4. Stir well to coat.
---
5. Cook until the tofu is crispy on the outside.
---
#### Assemble raw vegan gyro
1. Add tofu and spicy chickpeas to each jicama shell.
---
2. Top with zucchini, sweet peppers, tomatoes, and raw garbanzo beans.
---
3. Serve with tzatziki sauce.
---
## Notes
- Soaking jicama shells in water retains their crispiness.
- Increase the heat by adding more chili powder.
- Sriracha sauce goes well with this dish too.
- Store the remaining gyro seasoning in a clean glass bottle.
## References and Acknowledgments
[My Dainty Soul Curry - Raw Vegan Gyro with Tofu](http://www.mydaintysoulcurry.com/almost-raw-vegan-gyro-tofu/?utm_medium=social&utm_source=pinterest&utm_campaign=tailwind_tribes&utm_content=tribes&utm_term=675844038_26742012_307265)
## Tags
mediterranean
| 18.566929 | 233 | 0.720102 | eng_Latn | 0.95026 |
e1bd84c318f91172de177cb1b7f0fcf6c1c24776 | 3,397 | md | Markdown | docs/vs-2015/code-quality/ca1043-use-integral-or-string-argument-for-indexers.md | mairaw/visualstudio-docs.pt-br | 26480481c1cdab3e77218755148d09daec1b3454 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/ca1043-use-integral-or-string-argument-for-indexers.md | mairaw/visualstudio-docs.pt-br | 26480481c1cdab3e77218755148d09daec1b3454 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs/vs-2015/code-quality/ca1043-use-integral-or-string-argument-for-indexers.md | mairaw/visualstudio-docs.pt-br | 26480481c1cdab3e77218755148d09daec1b3454 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: 'CA1043: Use integral or string argument for indexers | Microsoft Docs'
ms.custom: ''
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.reviewer: ''
ms.suite: ''
ms.technology:
- vs-devops-test
ms.tgt_pltfrm: ''
ms.topic: article
f1_keywords:
- CA1043
- UseIntegralOrStringArgumentForIndexers
helpviewer_keywords:
- CA1043
- UseIntegralOrStringArgumentForIndexers
ms.assetid: d7f14b9e-2220-4f80-b6b8-48c655a05701
caps.latest.revision: 16
author: gewarren
ms.author: gewarren
manager: wpickett
ms.openlocfilehash: db2b365626efc1a5735adf986d1b49ac52c2c72b
ms.sourcegitcommit: 240c8b34e80952d00e90c52dcb1a077b9aff47f6
ms.translationtype: MT
ms.contentlocale: pt-BR
ms.lasthandoff: 10/23/2018
ms.locfileid: "49951555"
---
# <a name="ca1043-use-integral-or-string-argument-for-indexers"></a>CA1043: Use integral or string argument for indexers
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]
|||
|-|-|
|TypeName|UseIntegralOrStringArgumentForIndexers|
|CheckId|CA1043|
|Category|Microsoft.Design|
|Breaking Change|Breaking|
## <a name="cause"></a>Cause
 A public or protected type contains a public or protected indexer that uses an index type other than <xref:System.Int32?displayProperty=fullName>, <xref:System.Int64?displayProperty=fullName>, <xref:System.Object?displayProperty=fullName>, or <xref:System.String?displayProperty=fullName>.
## <a name="rule-description"></a>Rule Description
 Indexers, that is, indexed properties, should use integer or string types for the index. These types are typically used for indexing data structures and increase the usability of the library. Use of the <xref:System.Object> type should be restricted to those cases where the specific integer or string type cannot be specified at design time. If the design requires other types for the index, reconsider whether the type represents a logical data store. If it does not represent a logical data store, use a method.
## <a name="how-to-fix-violations"></a>How to Fix Violations
 To fix a violation of this rule, change the index to an integer or string type, or use a method instead of the indexer.
## <a name="when-to-suppress-warnings"></a>When to Suppress Warnings
 Suppress a warning from this rule only after carefully considering the need for the nonstandard indexer.
## <a name="example"></a>Example
 The following example shows an indexer that uses an <xref:System.Int32> index.
[!code-cpp[FxCop.Design.IntegralOrStringIndexers#1](../snippets/cpp/VS_Snippets_CodeAnalysis/FxCop.Design.IntegralOrStringIndexers/cpp/FxCop.Design.IntegralOrStringIndexers.cpp#1)]
[!code-csharp[FxCop.Design.IntegralOrStringIndexers#1](../snippets/csharp/VS_Snippets_CodeAnalysis/FxCop.Design.IntegralOrStringIndexers/cs/FxCop.Design.IntegralOrStringIndexers.cs#1)]
[!code-vb[FxCop.Design.IntegralOrStringIndexers#1](../snippets/visualbasic/VS_Snippets_CodeAnalysis/FxCop.Design.IntegralOrStringIndexers/vb/FxCop.Design.IntegralOrStringIndexers.vb#1)]
## <a name="related-rules"></a>Related Rules
 [CA1023: Indexers should not be multidimensional](../code-quality/ca1023-indexers-should-not-be-multidimensional.md)
 [CA1024: Use properties where appropriate](../code-quality/ca1024-use-properties-where-appropriate.md)
| 51.469697 | 578 | 0.802473 | por_Latn | 0.871339 |
e1bd88000b121f5f97a700a67dcf31250115c710 | 1,415 | md | Markdown | src/ml/2021-01/09/01.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 68 | 2016-10-30T23:17:56.000Z | 2022-03-27T11:58:16.000Z | src/ml/2021-01/09/01.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 367 | 2016-10-21T03:50:22.000Z | 2022-03-28T23:35:25.000Z | src/ml/2021-01/09/01.md | Pmarva/sabbath-school-lessons | 0e1564557be444c2fee51ddfd6f74a14fd1c45fa | [
"MIT"
] | 109 | 2016-08-02T14:32:13.000Z | 2022-03-31T10:18:41.000Z | ---
title: To Serve and to Save
date: 20/02/2021
---
### Read for This Week's Study
Isa. 41, 42:1-7; 44:26-45; 49:1-12
> <p> Memory Text </p>
> "Behold my servant, whom I uphold; mine elect, in whom my soul delighteth; I have put my spirit upon him: he shall bring forth judgment to the Gentiles" (Isa. 42:1).
"Many feel that it would be a great privilege to visit the scenes of Christ's life on earth, to walk where He trod, to look upon the lake beside which He loved to teach, and the hills and valleys on which His eyes so often rested. But we need not go to Nazareth, to Capernaum, or to Bethany, in order to walk in the steps of Jesus. We shall find His footprints beside the sickbed, in the hovels of poverty, in the crowded alleys of the great city, and in every place where there are human hearts in need of consolation. In doing as Jesus did when on earth, we shall walk in His steps." - E.G. White, The Desire of Ages, p. 640.
Isaiah spoke of a Servant of God who had a similar mission of mercy: "A bruised reed shall he not break, and the smoking flax shall he not quench... to bring out the prisoners from the prison, and them that sit in darkness out of the prison house" (Isa. 42:3, 7). Let us take a look at this Servant. Who is he, and what does he accomplish?
_Study this week's lesson to prepare for Sabbath, February 27._ | 88.4375 | 686 | 0.513074 | mal_Mlym | 0.99915
e1bd90a25eb1afa50170591d693cfc9158151b2f | 1,153 | md | Markdown | _posts/2021-02-04-blogs-and-newsletter-about-machine-learning.md | dongkwan-kim/new.dongkwan-kim.github.io | b135bc3b1a16dad958b036a76a4ec2fb31091692 | [
"MIT"
] | null | null | null | _posts/2021-02-04-blogs-and-newsletter-about-machine-learning.md | dongkwan-kim/new.dongkwan-kim.github.io | b135bc3b1a16dad958b036a76a4ec2fb31091692 | [
"MIT"
] | null | null | null | _posts/2021-02-04-blogs-and-newsletter-about-machine-learning.md | dongkwan-kim/new.dongkwan-kim.github.io | b135bc3b1a16dad958b036a76a4ec2fb31091692 | [
"MIT"
] | 1 | 2020-09-06T09:21:47.000Z | 2020-09-06T09:21:47.000Z | ---
title: 'Blogs and Newsletters about Machine Learning'
date: 2021-02-04
permalink: /blogs/blogs-and-newsletter-about-machine-learning/
tags:
- Machine Learning Research
- Machine Learning Engineering
summary: ""
---
Here is a list of blogs and newsletters about machine learning that I follow (or regularly read).
## Newsletters
- [Deep Learning Weekly](https://www.deeplearningweekly.com/)
- [NLP News (Sebastian Ruder)](https://ruder.io/nlp-news/)
- [PwC Newsletter](https://paperswithcode.com/newsletter/)
- [Huggingface Issue](https://huggingface.curated.co/)
- [GML Newsletter](https://graphml.substack.com/)
## Blogs (Researchers)
- [Sebastian Ruder](https://ruder.io/)
- [Lil'Log](https://lilianweng.github.io/lil-log/)
- [Jay's ML Blog](https://jalammar.github.io/)
- [Amit Chaudhary](https://amitness.com/)
- [Eric Jang](http://evjang.com/)
- [Gregory Gundersen](http://gregorygundersen.com/blog/)
## Blogs (Organization)
- [Google AI](https://ai.googleblog.com/)
- [DeepMind](https://www.deepmind.com/blog)
- [Microsoft Research](https://www.microsoft.com/en-us/research/blog/)
- [BAIR Blog](https://bair.berkeley.edu/blog/)
| 29.564103 | 97 | 0.718127 | yue_Hant | 0.392611 |
e1bdc6877f8b03fe96e65cd04b3ac9c876ebd548 | 645 | md | Markdown | README.md | tynmarket/jpx | d5dd18c31b8c26787d476d4b2eab94be4f545e44 | [
"MIT"
] | null | null | null | README.md | tynmarket/jpx | d5dd18c31b8c26787d476d4b2eab94be4f545e44 | [
"MIT"
] | null | null | null | README.md | tynmarket/jpx | d5dd18c31b8c26787d476d4b2eab94be4f545e44 | [
"MIT"
] | null | null | null | # Jpx
A library that parses CSV files of data purchased from the [JPX Data Cloud](http://db-ec.jpx.co.jp/).
The following data is currently supported:
- Nikkei 225 Futures, 1-minute bars
## Installation
Add this line to your application's Gemfile:
gem "jpx"
And then execute:
$ bundle install --path vendor/bundle
Or install it yourself as:
$ gem install jpx
## Usage
``` ruby
# Parses the front-month contract data from a CSV file
Jpx::Price.parse("/path/to/file.csv")
#=> [{
  :datetime=>2018-01-04 08:45:00 +0900, # converted to the datetime at which the trade actually took place
  :session=>0, # 0 - day session, 1 - night session
:open=>23100,
:high=>23200,
:low=>23080,
:close=>23082,
:volume=>3027
}, ...]
```
| 16.125 | 68 | 0.596899 | yue_Hant | 0.63766 |
e1bde192325c9f0d728efdd9ab343756ff3e0ad6 | 1,049 | md | Markdown | docs-conceptual/azuresmps-4.0.0/overview.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | 1 | 2020-12-05T17:58:35.000Z | 2020-12-05T17:58:35.000Z | docs-conceptual/azuresmps-4.0.0/overview.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | docs-conceptual/azuresmps-4.0.0/overview.md | AdrianaDJ/azure-docs-powershell.tr-TR | 78407d14f64e877506d6c0c14cac18608332c7a8 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
title: Azure PowerShell Service Management modülüne genel bakış | Microsoft Docs
description: Yükleme ve yapılandırma bağlantılarıyla birlikte Azure PowerShell’e genel bakış.
ms.devlang: powershell
ms.topic: conceptual
ms.custom: devx-track-azurepowershell
ms.openlocfilehash: 84f0d176ce01d267824ce4f3e94bdca4cd09c9ff
ms.sourcegitcommit: 8b3126b5c79f453464d90669f0046ba86b7a3424
ms.translationtype: HT
ms.contentlocale: tr-TR
ms.lasthandoff: 09/01/2020
ms.locfileid: "89241228"
---
# <a name="overview-of-the-azure-powershell-service-management-module"></a>Azure PowerShell Service Management modülüne genel bakış
Azure Service Management, klasik dağıtım modelidir. Bu modül, Azure Resource Manager’a dönüştürülmemiş klasik Azure dağıtımlarıyla çalışmanıza imkan sağlar. Tüm yeni dağıtımların Azure Resource Manager yöntemlerini kullanması gerekir. Cmdlet’leri kullanmak için Azure PowerShell’i yükleyip yapılandırarak hesabınıza bağlamanız gerekir. Daha fazla bilgi için bkz. [Azure PowerShell’i yükleme ve yapılandırma](install-azure-ps.md).
| 61.705882 | 429 | 0.838894 | tur_Latn | 0.996337 |
e1bdee72442d89a9b386564a7fb334aca4453e85 | 1,114 | md | Markdown | README.md | PeteCoward/DjangoPoFileDiffReduce | 7531be079419024297fc2df9fcabff4958825536 | [
"Unlicense"
] | 1 | 2019-01-17T05:04:04.000Z | 2019-01-17T05:04:04.000Z | README.md | PeteCoward/DjangoPoFileDiffReduce | 7531be079419024297fc2df9fcabff4958825536 | [
"Unlicense"
] | 7 | 2018-03-05T04:40:18.000Z | 2018-03-06T00:52:13.000Z | README.md | PeteCoward/MakeMessagesPlus | 7531be079419024297fc2df9fcabff4958825536 | [
"Unlicense"
] | 1 | 2018-03-11T03:14:43.000Z | 2018-03-11T03:14:43.000Z | # MakeMessagesPlus
We've found that the default behaviour of Django's `makemessages` command makes our diffs in pull requests large and hard to read, and we use lots of parameters to improve that, but communicating and remembering which parameters to use is hard. This project is designed to make it much simpler for our team to use the command.
## Installation
Install this repository as a django app inside your django project.
For example, use git submodules to clone this repo (or your fork of it) inside your Django project;
see [the test project](https://github.com/PeteCoward/MakeMessagesPlusTestProject).
Or just clone and copy the code into an app in your project.
## Features
- default to using `--no-location` to simplify diff, turn off by using `--yes-location`
- default to using `--no-wrap` to simplify diff , turn off by using `--yes-wrap`
- allow passing an app list parameter to reduce diffs in apps which have not changed
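To make these defaults concrete, here is a sketch of what an invocation might look like. Note that the command name `makemessagesplus` and the app labels are assumptions for illustration only; substitute the name this project registers with your Django project and your own app labels:

```
# Regenerate German .po files for just two apps, with --no-location
# and --no-wrap applied automatically by default:
python manage.py makemessagesplus --locale de app_one app_two

# Opt back into the upstream behaviour when you want it:
python manage.py makemessagesplus --locale de --yes-location --yes-wrap app_one app_two
```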
## Contributing
These are early days; a few more improvements are planned, and more suggestions are welcome.
See the [issue list](https://github.com/PeteCoward/MakeMessagesPlus/issues/)
| 44.56 | 306 | 0.776481 | eng_Latn | 0.998886 |
e1bee0c772ccbec37360a67314008dd7fd2fac54 | 542 | md | Markdown | foundation_math/ffcalc/README.md | master-g/big | ce42fff72fc88c2c4f74e28ba8ee9bb6698b0289 | [
"MIT"
] | null | null | null | foundation_math/ffcalc/README.md | master-g/big | ce42fff72fc88c2c4f74e28ba8ee9bb6698b0289 | [
"MIT"
] | null | null | null | foundation_math/ffcalc/README.md | master-g/big | ce42fff72fc88c2c4f74e28ba8ee9bb6698b0289 | [
"MIT"
] | null | null | null | ffcalc
======
The ffcalc package is a learning-oriented Go implementation of a finite field (integers modulo a prime) calculator.
## Installation and Updates
```bash
$ go get -u github.com/master-g/big/src/foundation_math/ffcalc
```
## Usage
```bash
$ ffcalc prime [exp...]
```
Here `prime` is the prime modulus of the finite field, and `exp` is the expression to evaluate.
Examples:
```bash
$ ffcalc 19 "11+6"
17
$ ffcalc 31 "4^-4*11"
13
```
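Under the hood, a calculator like this needs modular exponentiation and modular inverses. The following standalone Go sketch (my own illustration, not part of the ffcalc package; the function name is an assumption) shows how the second example above, `4^-4*11` in GF(31), can be evaluated, with the inverse obtained from Fermat's little theorem, `a^-1 ≡ a^(p-2) (mod p)`:

```go
package main

import "fmt"

// powMod computes base^exp mod p using square-and-multiply exponentiation.
func powMod(base, exp, p int64) int64 {
	result := int64(1)
	base %= p
	for exp > 0 {
		if exp&1 == 1 {
			result = result * base % p
		}
		base = base * base % p
		exp >>= 1
	}
	return result
}

func main() {
	p := int64(31)
	inv4 := powMod(4, p-2, p)        // 4^-1 mod 31 == 8, by Fermat's little theorem
	v := powMod(inv4, 4, p) * 11 % p // 4^-4 * 11 mod 31
	fmt.Println(v)                   // prints 13, matching the example above
}
```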
## References
[Talking About Finite Fields (Chinese)](https://blog.csdn.net/qmickecs/article/details/77281602)
[WikiPedia](https://en.wikipedia.org/wiki/Finite_field#GF(p2)_for_an_odd_prime_p)
[Blockchain 101](https://eng.paxos.com/blockchain-101-foundational-math)
## License
ffcalc is released under the MIT license | 14.648649 | 83 | 0.678967 | yue_Hant | 0.425585
e1bf104c3d389e7d9c221a6ef7126b59e03ff080 | 754 | md | Markdown | README.md | tijn/devmail | ee5f72a9499c56529016a63a7116012f17a91a83 | [
"Apache-2.0"
] | 24 | 2016-09-20T09:23:06.000Z | 2022-01-30T18:34:38.000Z | README.md | tijn/devmail | ee5f72a9499c56529016a63a7116012f17a91a83 | [
"Apache-2.0"
] | 5 | 2016-11-12T21:26:06.000Z | 2020-05-19T15:22:06.000Z | README.md | tijn/devmail | ee5f72a9499c56529016a63a7116012f17a91a83 | [
"Apache-2.0"
] | 2 | 2019-11-26T13:28:40.000Z | 2020-05-19T14:31:04.000Z | # devmail
SMTP POP3
Your app(s) ----------> devmail ----------> Thunderbird/Mail.app/Outlook/...
An SMTP and POP3 server with no storage. It keeps all the mail in memory until it is fetched or until you shut down the program. It is meant for developers who need to inspect the mail that their app sends. You can send emails to it via SMTP and "pop" them with an e-mail client like Thunderbird or Mail.app on macOS. It is comparable to [Letter Opener](https://github.com/ryanb/letter_opener).
This is a port of [Blue Rail's Post Office](https://github.com/bluerail/post_office) to Crystal-lang.
## macOS
Please read startup/launchd/README.md
## Linux (systemd)
Please read startup/systemd/README.md
| 41.888889 | 395 | 0.69496 | eng_Latn | 0.990886 |
e1c07a6db1abca8ac1767c2794f39ce62a2b7f4a | 1,343 | md | Markdown | aspnet/web-forms/videos/authentication/add-custom-data-to-the-authentication-method.md | nakawankuma/Docs.ja-jp | 68dd4c3082b4a248b52a9ef132acea2ee61ca8c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/web-forms/videos/authentication/add-custom-data-to-the-authentication-method.md | nakawankuma/Docs.ja-jp | 68dd4c3082b4a248b52a9ef132acea2ee61ca8c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | aspnet/web-forms/videos/authentication/add-custom-data-to-the-authentication-method.md | nakawankuma/Docs.ja-jp | 68dd4c3082b4a248b52a9ef132acea2ee61ca8c5 | [
"CC-BY-4.0",
"MIT"
] | null | null | null | ---
uid: web-forms/videos/authentication/add-custom-data-to-the-authentication-method
title: Add Custom Data to the Authentication Method | Microsoft Docs
author: JoeStagner
description: Joe Stagner continues his exploration of ASP.NET authentication by configuring the authentication method to add custom data to the authentication ticket.
ms.author: aspnetcontent
manager: wpickett
ms.date: 07/16/2008
ms.topic: article
ms.assetid: 940bdecc-ae0f-448f-a189-405efa614049
ms.technology: dotnet-webforms
ms.prod: .net-framework
msc.legacyurl: /web-forms/videos/authentication/add-custom-data-to-the-authentication-method
msc.type: video
ms.openlocfilehash: 1cc2328486da8d988271b5a609346b03b2d140d7
ms.sourcegitcommit: f8852267f463b62d7f975e56bea9aa3f68fbbdeb
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 04/06/2018
---
# <a name="add-custom-data-to-the-authentication-method"></a>Add Custom Data to the Authentication Method
====================
by [Joe Stagner](https://github.com/JoeStagner)
Joe Stagner continues his exploration of ASP.NET authentication by configuring the authentication method to add custom data to the authentication ticket used by ASP.NET. A more detailed walkthrough of this demo is available [here](../../overview/older-versions-security/introduction/forms-authentication-configuration-and-advanced-topics-vb.md).
[▶ Watch video (14 minutes)](https://channel9.msdn.com/Blogs/ASP-NET-Site-Videos/add-custom-data-to-the-authentication-method)
> [!div class="step-by-step"]
> [Previous](forms-login-custom-key-configuration.md)
> [Next](use-custom-principal-objects.md) | 41.96875 | 224 | 0.787789 | yue_Hant | 0.434654
| 41.96875 | 224 | 0.787789 | yue_Hant | 0.434654 |
e1c0d8ada7c5f1cc1f2040f75ad18ed96b0daba6 | 3,205 | md | Markdown | vendor/spatie/laravel-pjax/README.md | topwhere/b.topwhere.cn | 8ff6c576fe3cef7c6145252ccc03158ad5772dec | [
"MIT"
] | null | null | null | vendor/spatie/laravel-pjax/README.md | topwhere/b.topwhere.cn | 8ff6c576fe3cef7c6145252ccc03158ad5772dec | [
"MIT"
] | null | null | null | vendor/spatie/laravel-pjax/README.md | topwhere/b.topwhere.cn | 8ff6c576fe3cef7c6145252ccc03158ad5772dec | [
"MIT"
] | null | null | null | # A pjax middleware for Laravel 5
[](https://packagist.org/packages/spatie/laravel-pjax)
[](LICENSE.md)
[](https://travis-ci.org/spatie/laravel-pjax)
[](https://insight.sensiolabs.com/projects/89249e40-536c-4b1b-b1fb-f8b807b2b51d)
[](https://scrutinizer-ci.com/g/spatie/laravel-pjax)
[](https://packagist.org/packages/spatie/laravel-pjax)
[Pjax](https://github.com/defunkt/jquery-pjax) is a jQuery plugin that leverages Ajax to
speed up the loading time of your pages. It works by fetching only specific HTML fragments
from the server and updating only certain parts of the page on the client side.
The package provides a middleware that can return the response that the jQuery plugin expects.
Spatie is a web design agency based in Antwerp, Belgium. You'll find an overview of all our open source
projects [on our website](https://spatie.be/opensource).
## Installation
You can install the package via composer:
``` bash
$ composer require spatie/laravel-pjax
```
Next, you must add the `\Spatie\Pjax\Middleware\FilterIfPjax` middleware to the kernel.
```php
// app/Http/Kernel.php
...
protected $middleware = [
...
\Spatie\Pjax\Middleware\FilterIfPjax::class,
];
```
## Usage
The provided middleware implements [the behaviour that the pjax plugin expects of the server](https://github.com/defunkt/jquery-pjax#server-side):
> An X-PJAX request header is set to differentiate a pjax request from normal XHR requests.
> In this case, if the request is pjax, we skip the layout html and just render the inner
> contents of the container.
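On the client side you still have to initialize the jQuery plugin yourself; this package only covers the server half. A minimal wiring could look like this (the `#pjax-container` id is an arbitrary choice; it just has to match the element that wraps the content you want swapped):

```js
// Load every link's target into #pjax-container via pjax
$(document).pjax('a', '#pjax-container');
```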
## Change log
Please see [CHANGELOG](CHANGELOG.md) for more information on what has changed recently.
## Testing
``` bash
$ composer test
```
## Contributing
Please see [CONTRIBUTING](CONTRIBUTING.md) for details.
## Security
If you discover any security related issues, please email freek@spatie.be instead of using the issue tracker.
## Credits
- [Freek Van der Herten](https://github.com/freekmurze)
- [All Contributors](../../contributors)
The middleware in this package was originally written by [Jeffrey Way](https://twitter.com/jeffrey_way) for the [Laracasts](https://laracasts.com)-lesson
on [pjax](https://laracasts.com/lessons/faster-page-loads-with-pjax). His original code
can be found [in this repo on GitHub](https://github.com/laracasts/Pjax-and-Laravel).
## About Spatie
Spatie is a web design agency based in Antwerp, Belgium. You'll find an overview of all our open source projects [on our website](https://spatie.be/opensource).
## License
The MIT License (MIT). Please see [License File](LICENSE.md) for more information.
| 41.089744 | 196 | 0.764119 | eng_Latn | 0.774815 |
e1c115636982a243700bf20db6c7f384fbe58fa2 | 1,327 | md | Markdown | README.md | aanimesh23/timer-service | 88c40e318c1de50556af644fd7a87c15f041156d | [
"Apache-2.0"
] | 2 | 2021-09-29T04:29:04.000Z | 2021-10-12T12:04:44.000Z | README.md | aanimesh23/timer-service | 88c40e318c1de50556af644fd7a87c15f041156d | [
"Apache-2.0"
] | null | null | null | README.md | aanimesh23/timer-service | 88c40e318c1de50556af644fd7a87c15f041156d | [
"Apache-2.0"
] | 1 | 2021-09-29T04:29:16.000Z | 2021-09-29T04:29:16.000Z | # Timer-Service
A means of receiving your events at a configurable delay
## Description
Timer Service is a service that applications can use to get a message back at a particular time. These callbacks can be either an API callback or an event produced to SQS or Kafka. An event is anything you want this service to send back to you; it could be a string message, a flag, or even an entire payload.
## Blog
[@Blog](https://medium.com/@aanimesh23/how-to-schedule-deliveries-of-events-with-a-configurable-time-98060e233238)
## Getting Started
### Dependencies
* Java 11
* AWS DynamoDB
* AWS SQS
### Installing
* Fill in all the blank values in application.properties file in resources
* Make sure you have Java 11 downloaded
### Executing program
* Run the command to build the project
```
./mvnw clean install
```
* Run Scheduler
```
sudo java -jar target/timer-service-1.0.1.jar --run.scheduler=true --spring.main.web-application-type=NONE
```
* Run API
```
sudo java -jar target/timer-service-1.0.1.jar --run.api=true
```
* Run Workers
```
sudo java -jar target/timer-service-1.0.1.jar --spring.main.web-application-type=NONE
```
## Authors
Animesh Agrawal
[@aanimesh23](http://animeshagrawal.com)
## Acknowledgments
Inspiration and Advisors.
* Sunny Shah
* Rahul Sharma | 25.037736 | 351 | 0.743029 | eng_Latn | 0.942952 |
e1c198f8f59ab9418f5eeeac2fe06c552edb0981 | 919 | md | Markdown | examples/doc/20190925js-closure.md | ybuyan/ybuyan-blog-orign | 6f5bd0cb966e65b033a7729f139bbcd63b41a576 | [
"MIT"
] | null | null | null | examples/doc/20190925js-closure.md | ybuyan/ybuyan-blog-orign | 6f5bd0cb966e65b033a7729f139bbcd63b41a576 | [
"MIT"
] | null | null | null | examples/doc/20190925js-closure.md | ybuyan/ybuyan-blog-orign | 6f5bd0cb966e65b033a7729f139bbcd63b41a576 | [
"MIT"
] | null | null | null | ---
title: JavaScript Closures
display: home
lang: zh
description: learning the js - closures
image: https://picsum.photos/536/354?random&date=2019-09-25
date: 2019-09-25
vssue-title: vuepress-plugin-img-lazy
tags:
- js
categories:
- web前端
---
Because of how scope works, functions cannot read each other's variables, and a parent scope cannot read the variables of a child scope. This is where closures come in.
<!-- more -->
## Closures
A closure is a function that can read the internal variables of another function. A closure is created when an inner function is saved to, or passed outside of, the function it was defined in.
In JavaScript, only a function nested inside another function can read that function's local variables, so a closure can also be understood as "a function defined inside a function." In essence, a closure is a bridge that connects the inside of a function with the outside.
## Uses of Closures
1. Letting code outside a function read the function's internal variables
2. Keeping variables alive in memory
``` js
function add() {
var count = 0;
function demo() {
count++;
console.log(count)
}
return demo;
}
var counter = add();
counter() // 1
counter() // 2
counter() // 3 - count can be read from outside, and the variable is not destroyed between calls
// Define an ordinary function, add
// Inside add, define an ordinary function, demo
// Return demo from add
// Call add, and assign its return value to the variable counter
// Call counter
// The function demo, defined inside add, is referenced by the variable counter outside of add; this forms a closure
```
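A classic interview gotcha follows directly from the "keeping variables alive" behaviour shown above: every callback created in a `var` loop closes over the same variable, not a snapshot of its value:

```javascript
// Pitfall: all three functions share the single `i` declared with var
function makeCounters() {
  var fns = [];
  for (var i = 0; i < 3; i++) {
    fns.push(function () { return i; });
  }
  return fns;
}
console.log(makeCounters().map(function (f) { return f(); })); // [ 3, 3, 3 ]

// Fix: give each iteration its own scope with an IIFE (or declare i with let)
function makeCountersFixed() {
  var fns = [];
  for (var i = 0; i < 3; i++) {
    (function (n) {
      fns.push(function () { return n; });
    })(i);
  }
  return fns;
}
console.log(makeCountersFixed().map(function (f) { return f(); })); // [ 0, 1, 2 ]
```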
Closure interview questions: https://juejin.im/post/58f1fa6a44d904006cf25d22#heading-0
| 17.018519 | 77 | 0.707291 | kor_Hang | 0.103814 |
e1c20755a853fcb4cfcf0e2dd65edaf68f3e78cf | 6,868 | md | Markdown | beginner-mariadb-articles/altering-tables-in-mariadb.md | zhanglianxin/learn-mariadb | 6c94dd36ec96bd6b426b3ea39c06f9b0a7a401f5 | [
"MIT"
] | null | null | null | beginner-mariadb-articles/altering-tables-in-mariadb.md | zhanglianxin/learn-mariadb | 6c94dd36ec96bd6b426b3ea39c06f9b0a7a401f5 | [
"MIT"
] | null | null | null | beginner-mariadb-articles/altering-tables-in-mariadb.md | zhanglianxin/learn-mariadb | 6c94dd36ec96bd6b426b3ea39c06f9b0a7a401f5 | [
"MIT"
] | null | null | null | # Altering Tables in MariaDB
## Before Beginning
To back up the clients table with mysqldump, we would enter the following from
the command line:
```shell
$ mysqldump --user='username' --password='password' --add-locks db table > table.sql
```
If the table should need to be restored, the following can be run from the shell:
```shell
mysql --user='username' --password='password' db < table.sql
```
## Basic Addition and More
In order to add a column to an existing MariaDB table, one would use the ALTER
TABLE statement.
```mysql
ALTER TABLE clients
ADD COLUMN status CHAR(2);
```
Additionally, specifying the location of the new column is allowed.
```mysql
ALTER TABLE clients
ADD COLUMN address2 VARCHAR(25)
AFTER address;
ALTER TABLE clients
ADD COLUMN address3 VARCHAR(25)
FIRST;
```
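After a few additions like these, it's easy to lose track of where each column landed. A quick sanity check (a sketch using the `clients` table from the examples above) is to describe the table and confirm the column order:

```mysql
DESCRIBE clients;
-- or equivalently:
SHOW COLUMNS FROM clients;
```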
## Changing One's Mind
```mysql
ALTER TABLE clients
CHANGE status status ENUM('AC', 'IA');
```
Notice that the column name status is specified twice. Although the column name
isn't being changed, it still must be respecified. To change the column name
(from `status` to `active`), while leaving the enumerated list the same, we
specify the new column name in the second position:
```mysql
ALTER TABLE clients
CHANGE status active ENUM('AC', 'IA');
```
Here we have the current column name and then the new column name, along with
the data type specifications (i.e., `ENUM`), even though the result is only a
name change. With the `CHANGE` clause everything must be stated, even items
that are not to be changed.
```mysql
ALTER TABLE clients
CHANGE address address1 VARCHAR(40),
MODIFY active ENUM('yes', 'no', 'AC', 'IA');
UPDATE clients
SET active = 'yes'
WHERE active = 'AC';
UPDATE clients
SET active = 'no'
WHERE active = 'IA';
ALTER TABLE clients
MODIFY active ENUM('yes', 'no');
```
The first SQL statement above changes address and modifies active in
preparation for the transition. Notice the use of a `MODIFY` clause. It works
the same as `CHANGE`, but it is only used for changing data types and not
column names. Therefore, the column name isn't re-specified. Notice also that
there is a comma after the CHANGE clause. You can string several `CHANGE` and
`MODIFY` clauses together with comma separators. We've enumerated both the new
choices and the old ones to be able to migrate the data. The two UPDATE
statements are designed to adjust the data accordingly and the last ALTER
TABLE statement is to remove the old enumerated choices for the status column.
```mysql
ALTER TABLE clients
DROP client_type;
```
This deletes `client_type` and its data, but not the whole table, obviously.
Nevertheless, it is a permanent and non-reversible action; there won't be a
confirmation request when using the mysql client. This is how it is with all
MariaDB DROP statements and clauses.
## The Default
To be able to specify a default value other than NULL, an ALTER TABLE statement
can be entered with a `SET` clause.
```mysql
ALTER TABLE clients
ALTER state SET DEFAULT 'LA';
```
Notice that the second line starts with `ALTER` and not `CHANGE`. If we change
our mind about having a default value for state, we would enter the following
to reset it back to NULL (or whatever the initial default value would be based
on the data type):
```mysql
ALTER TABLE clients
ALTER state DROP DEFAULT;
```
This particular `DROP` doesn't delete data, by the way.
## Indexes
What most newcomers to MariaDB don't seem to realize is that the index is
separate from the indexed column. To illustrate, let's take a look at the index
for clients using the SHOW INDEX statement:
```mysql
MariaDB [bookstore]> SHOW INDEX FROM clients \G;
*************************** 1. row ***************************
Table: clients
Non_unique: 0
Key_name: PRIMARY
Seq_in_index: 1
Column_name: cust_id
Collation: A
Cardinality: 0
Sub_part: NULL
Packed: NULL
Null:
Index_type: BTREE
Comment:
Index_comment:
```
The text above shows that behind the scenes there is an index associated with
`cust_id`. The column `cust_id` is not the index. Incidentally, the `\G` at the
end of the SHOW INDEX statement is to display the results in portrait instead
of landscape format. Before the name of an indexed column can be changed, the
index related to it must be eliminated. The index is not automatically changed
or deleted. So, a `DROP` clause for the index must be entered first and then a
`CHANGE` for the column name can be made along with the establishing of a new
index:
```mysql
ALTER TABLE clients
DROP PRIMARY KEY,
CHANGE cust_id client_id INT PRIMARY KEY;
```
The order of these clauses is necessary. The index must be dropped before the
column can be renamed. The syntax here is for a `PRIMARY KEY`. There are other
types of indexes, of course, and the process is similar for a column whose index
type is something other than a `PRIMARY KEY`. Assuming for a moment that `cust_id` has a `UNIQUE` index,
this is what we would enter to change its name:
```mysql
ALTER TABLE clients
DROP INDEX cust_id,
CHANGE cust_id client_id INT UNIQUE;
```
Although the index type can be changed easily, MariaDB won't permit you to do
so when there are duplicate rows of data and when going from an index that
allows duplicates (e.g., `INDEX`) to one that doesn't (e.g., `UNIQUE`). If you
actually do want to eliminate the duplicates, though, you can add the `IGNORE`
flag to force the duplicates to be deleted:
```mysql
ALTER IGNORE TABLE clients
DROP INDEX cust_id,
CHANGE cust_id client_id INT UNIQUE;
```
In this example, we're not only changing the indexed column's name, but we're
also changing the index type from `INDEX` to `UNIQUE`. And, again, the `IGNORE`
flag tells MariaDB to ignore any records with duplicate values for `cust_id`.
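After reshuffling indexes like this, it's worth confirming what actually exists now. SHOW CREATE TABLE lists every index with its final name and type in one place:

```mysql
SHOW CREATE TABLE clients \G
```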
## Renaming & Shifting Tables
```mysql
RENAME TABLE clients
TO client_addresses;
```
The RENAME TABLE statement also allows a table to be moved to another
database just by adding the receiving database's name in front of the new table
name, separated by a dot. Of course, you can move a table without renaming it.
```mysql
RENAME TABLE client_addresses
TO db2.client_addresses;
```
Sometimes developers want to re-sort the data within the table somewhat permanently,
based on a particular column or columns.
```mysql
ALTER TABLE client_addresses
ORDER BY city, name;
```
Now when the developer enters a SELECT statement without an ORDER BY clause,
the results are already ordered by the default of city and then name, at least
until more data is added to the table.
> **ORDER BY ignored as there is a user-defined clustered index in the table**
>
> 1. [Stack Overflow][stackoverflow]
>
> 2. [Row Order for MyISAM Tables][mysqlrefman]
[stackoverflow]: https://stackoverflow.com/a/29781290/5631625 'Stack Overflow answer'
[mysqlrefman]: https://dev.mysql.com/doc/refman/5.7/en/alter-table.html 'Row Order for MyISAM Tables'
---
author: IEvangelist
ms.service: cognitive-services
ms.topic: include
ms.date: 04/03/2020
ms.author: dapine
ms.openlocfilehash: bdff6c0e65d341bb4948ff772171d8b42728ff0a
ms.sourcegitcommit: 530e2d56fc3b91c520d3714a7fe4e8e0b75480c8
ms.translationtype: MT
ms.contentlocale: pl-PL
ms.lasthandoff: 04/14/2020
ms.locfileid: "81275286"
---
## <a name="prerequisites"></a>Prerequisites
Before you get started:
> [!div class="checklist"]
> * <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" target="_blank">Create an Azure Speech resource<span class="docon docon-navigate-external x-hidden-focus"></span></a>
> * [Set up your development environment and create an empty project](../../../../quickstarts/setup-platform.md?tabs=linux&pivots=programming-language-cpp)
> * Make sure that you have access to a microphone for audio capture
## <a name="source-code"></a>Source code
Create a C++ source file named *helloworld.cpp*, and paste the following code into it.
[!code-cpp[Quickstart Code](~/samples-cognitive-services-speech-sdk/quickstart/cpp/linux/from-microphone/helloworld.cpp#code)]
[!INCLUDE [replace key and region](../replace-key-and-region.md)]
## <a name="code-explanation"></a>Code explanation
[!INCLUDE [code explanation](../code-explanation.md)]
## <a name="build-the-app"></a>Build the app
> [!NOTE]
> Be sure to enter the commands below as a _single command line_. The easiest way to do this is to copy the command by using the **Copy** button next to each command, and then paste it into your shell window.
* On an **x64** (64-bit) system, run the following command to build the application.
```sh
g++ helloworld.cpp -o helloworld -I "$SPEECHSDK_ROOT/include/cxx_api" -I "$SPEECHSDK_ROOT/include/c_api" --std=c++14 -lpthread -lMicrosoft.CognitiveServices.Speech.core -L "$SPEECHSDK_ROOT/lib/x64" -l:libasound.so.2
```
* On an **x86** (32-bit) system, run the following command to build the application.
```sh
g++ helloworld.cpp -o helloworld -I "$SPEECHSDK_ROOT/include/cxx_api" -I "$SPEECHSDK_ROOT/include/c_api" --std=c++14 -lpthread -lMicrosoft.CognitiveServices.Speech.core -L "$SPEECHSDK_ROOT/lib/x86" -l:libasound.so.2
```
* On an **ARM64** (64-bit) system, run the following command to build the application.
```sh
g++ helloworld.cpp -o helloworld -I "$SPEECHSDK_ROOT/include/cxx_api" -I "$SPEECHSDK_ROOT/include/c_api" --std=c++14 -lpthread -lMicrosoft.CognitiveServices.Speech.core -L "$SPEECHSDK_ROOT/lib/arm64" -l:libasound.so.2
```
## <a name="run-the-app"></a>Run the app
1. Configure the loader's library path to point to the Speech SDK library.
* On an **x64** (64-bit) system, enter the following command.
```sh
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPEECHSDK_ROOT/lib/x64"
```
* On an **x86** (32-bit) system, enter the following command.
```sh
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPEECHSDK_ROOT/lib/x86"
```
* On an **ARM64** (64-bit) system, enter the following command.
```sh
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPEECHSDK_ROOT/lib/arm64"
```
1. Run the application.
```sh
./helloworld
```
1. In the console window, a prompt appears, requesting that you say something. Speak an English phrase or sentence. Your speech is transmitted to the Speech service and transcribed to text, which appears in the same window.
```text
Say something...
We recognized: What's the weather like?
```
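As a convenience, the architecture-specific library directory used in the build and library-path commands above can be derived from `uname -m`. This helper is only a sketch (it is not part of the quickstart) and assumes `SPEECHSDK_ROOT` is already set as in the setup guide:

```shell
# Map the machine architecture reported by uname to the matching
# Speech SDK library subdirectory (x64, x86, or arm64).
case "$(uname -m)" in
  x86_64)          SPEECHSDK_LIB="x64" ;;
  i386|i586|i686)  SPEECHSDK_LIB="x86" ;;
  aarch64)         SPEECHSDK_LIB="arm64" ;;
  *) echo "Unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
echo "Using library directory: $SPEECHSDK_LIB"
# Append it to the loader path, for example:
# export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$SPEECHSDK_ROOT/lib/$SPEECHSDK_LIB"
```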
## <a name="next-steps"></a>Next steps
[!INCLUDE [Speech recognition basics](../../speech-to-text-next-steps.md)]
---
title: Use Java Message Service 2.0 API with Azure Service Bus Premium
description: 'How to use Java Message Service (JMS) with Azure Service Bus'
ms.topic: article
ms.date: 07/17/2020
ms.custom: seo-java-july2019, seo-java-august2019, seo-java-september2019
ms.openlocfilehash: 8363011187a4c2ef77681ece4bb8b1de73ec7a63
ms.sourcegitcommit: fbb66a827e67440b9d05049decfb434257e56d2d
ms.translationtype: MT
ms.contentlocale: zh-CN
ms.lasthandoff: 08/05/2020
ms.locfileid: "87801372"
---
# <a name="use-java-message-service-20-api-with-azure-service-bus-premium-preview"></a>Use Java Message Service 2.0 API with Azure Service Bus Premium (Preview)
This article explains how to use the popular **Java Message Service (JMS) 2.0** API to interact with Azure Service Bus over the Advanced Message Queuing Protocol (AMQP 1.0).
> [!NOTE]
> Support for the Java Message Service (JMS) 2.0 API is available only in the **Azure Service Bus Premium tier** and is currently in **preview**.
>
## <a name="get-started-with-service-bus"></a>Get started with Service Bus
This guide assumes that you already have a Service Bus namespace. If you don't, you can [create a namespace and queue](service-bus-create-namespace-portal.md) by using the [Azure portal](https://portal.azure.com).
For more information about how to create a Service Bus namespace and queue, see [Get started with Service Bus queues by using the Azure portal](service-bus-quickstart-portal.md).
## <a name="what-jms-features-are-supported"></a>What JMS features are supported?
[!INCLUDE [service-bus-jms-features-list](../../includes/service-bus-jms-feature-list.md)]
## <a name="downloading-the-java-message-service-jms-client-library"></a>Downloading the Java Message Service (JMS) client library
To make use of all the features available in the Azure Service Bus Premium tier, you must add the following library to the build path of your project:
[azure-servicebus-jms](https://search.maven.org/artifact/com.microsoft.azure/azure-servicebus-jms)
> [!NOTE]
> To add [azure-servicebus-jms](https://search.maven.org/artifact/com.microsoft.azure/azure-servicebus-jms) to the build path, use your project's preferred dependency management tool, such as [Maven](https://maven.apache.org/) or [Gradle](https://gradle.org/).
>
## <a name="coding-java-applications"></a>Coding Java applications
After the dependencies have been imported, the Java applications can be written in a way that is agnostic of the JMS provider.
### <a name="connecting-to-azure-service-bus-using-jms"></a>Connecting to Azure Service Bus using JMS
To connect to Azure Service Bus by using a JMS client, you need the **connection string**, which is available under the **Primary Connection String** in the **Shared access policies** section of the [Azure portal](https://portal.azure.com).
1. Instantiate the `ServiceBusJmsConnectionFactorySettings`
```java
ServiceBusJmsConnectionFactorySettings connFactorySettings = new ServiceBusJmsConnectionFactorySettings();
connFactorySettings.setConnectionIdleTimeoutMS(20000);
```
2. Instantiate the `ServiceBusJmsConnectionFactory` with the appropriate `ServiceBusConnectionString`.
```java
String ServiceBusConnectionString = "<SERVICE_BUS_CONNECTION_STRING_WITH_MANAGE_PERMISSIONS>";
ConnectionFactory factory = new ServiceBusJmsConnectionFactory(ServiceBusConnectionString, connFactorySettings);
```
3. Use the `ConnectionFactory` to create a `Connection`, and then a `Session`
```java
Connection connection = factory.createConnection();
Session session = connection.createSession();
```
or a `JMSContext` (for JMS 2.0 clients)
```java
JMSContext jmsContext = factory.createContext();
```
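As an aside, the connection string passed to the factory above is a semicolon-separated list of `key=value` pairs. The following standalone sketch (illustrative only — the JMS client consumes the raw string directly, and the namespace name here is made up) shows how to pull out individual parts such as the endpoint, which can be handy for logging:

```java
import java.util.HashMap;
import java.util.Map;

public class ConnectionStringInfo {
    // Split a Service Bus connection string into its key/value parts.
    // Only the first '=' in each pair is treated as the separator, because
    // values such as SharedAccessKey may themselves end in '=' padding.
    static Map<String, String> parse(String connectionString) {
        Map<String, String> parts = new HashMap<>();
        for (String pair : connectionString.split(";")) {
            int eq = pair.indexOf('=');
            if (eq > 0) {
                parts.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return parts;
    }

    public static void main(String[] args) {
        String cs = "Endpoint=sb://contoso.servicebus.windows.net/;"
                + "SharedAccessKeyName=RootManageSharedAccessKey;"
                + "SharedAccessKey=abc123=";
        // Log which namespace we are about to connect to.
        // prints "Connecting to sb://contoso.servicebus.windows.net/"
        System.out.println("Connecting to " + parse(cs).get("Endpoint"));
    }
}
```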
### <a name="write-the-jms-application"></a>Write the JMS application
Once the `Session` or `JMSContext` has been instantiated, your application can use the familiar JMS APIs to perform both management and data operations.
Refer to the list of [supported JMS features](how-to-use-java-message-service-20.md#what-jms-features-are-supported) to see which APIs are supported as part of this preview.
Here are some sample code snippets to get started with JMS:
#### <a name="sending-messages-to-a-queue-and-topic"></a>Sending messages to a queue and topic
```java
// Create the queue and topic
Queue queue = jmsContext.createQueue("basicQueue");
Topic topic = jmsContext.createTopic("basicTopic");
// Create the message
Message msg = jmsContext.createMessage();
// Create the JMS message producer
JMSProducer producer = jmsContext.createProducer();
// send the message to the queue
producer.send(queue, msg);
// send the message to the topic
producer.send(topic, msg);
```
#### <a name="receiving-messages-from-a-queue"></a>Receiving messages from a queue
```java
// Create the queue
Queue queue = jmsContext.createQueue("basicQueue");
// Create the message consumer
JMSConsumer consumer = jmsContext.createConsumer(queue);
// Receive the message
Message msg = (Message) consumer.receive();
```
#### <a name="receiving-messages-from-a-shared-durable-subscription-on-a-topic"></a>Receiving messages from a shared durable subscription on a topic
```java
// Create the topic
Topic topic = jmsContext.createTopic("basicTopic");
// Create a shared durable subscriber on the topic
JMSConsumer sharedDurableConsumer = jmsContext.createSharedDurableConsumer(topic, "sharedDurableConsumer");
// Receive the message
Message msg = (Message) sharedDurableConsumer.receive();
```
## <a name="summary"></a>Summary
This guide showed how Java client applications using Java Message Service (JMS) over AMQP 1.0 can interact with Azure Service Bus.
You can also use Service Bus AMQP 1.0 from other languages, including .NET, C, Python, and PHP. Components built by using these different languages can exchange messages reliably and at full fidelity through the AMQP 1.0 support in Service Bus.
## <a name="next-steps"></a>Next steps
For more information about Azure Service Bus and details about Java Message Service (JMS) entities, check out the following links:
* [Service Bus - Queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
* [Service Bus - Java Message Service entities](service-bus-queues-topics-subscriptions.md#java-message-service-jms-20-entities-preview)
* [AMQP 1.0 support in Azure Service Bus](service-bus-amqp-overview.md)
* [Service Bus AMQP 1.0 developer's guide](service-bus-amqp-dotnet.md)
* [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
* [Java Message Service API (external Oracle doc)](https://docs.oracle.com/javaee/7/api/javax/jms/package-summary.html)
* [Learn how to migrate from ActiveMQ to Service Bus](migrate-jms-activemq-to-servicebus.md)
---
title: "A unified framework for analysis of individual-based models in ecology and beyond"
date: 2019-10-17
publishDate: 2020-03-20T15:17:50.289000Z
authors: ["Stephen Cornell", "Yevhen Suprunenko", "Dmitri Finkelshtein", "Panu Somervuo", "Otso Ovaskainen"]
publication_types: ["2"]
abstract: ""
featured: true
publication: "*Nature Communications*"
doi: "10.1038/s41467-019-12172-y"
---
---
title: Terrain Builder
image:
description: The Unity Toolkit terrain builder allows you to add vast landscapes to your games.
keywords: babylon.js, exporter, unity, terrain builder, extension, terrain
further-reading:
video-overview:
video-content:
---
The toolkit's **Terrain Builder** system allows you to add vast landscapes to your games. At runtime, terrain rendering is highly optimized for efficiency, while in the editor a selection of tools makes terrains easy and quick to create. Please refer to the [Unity Terrain](https://docs.unity3d.com/Manual/script-Terrain.html) documentation for details.
## Creating And Editing Terrains
To add a Terrain game object to your Scene, select **GameObject > 3D Object > Terrain** from the menu. This also adds a corresponding **Terrain Asset** to the Project view. When you do this, the landscape is initially a large, flat plane. The Terrain’s Inspector window provides a number of tools you can use to create detailed landscape features.

With the exception of the tree placement tool and the settings panel, all the tools on the toolbar provide a set of “brushes” and settings for brush size and opacity. It is no coincidence that these resemble the painting tools from an image editor because detail in the terrain is created precisely that way, by “painting” detail onto the landscape. If you select the leftmost tool on the bar (Raise/Lower Terrain) and move the mouse over the terrain in the scene view, you will see a cursor resembling a spotlight on the surface. When you click the mouse, you can paint gradual changes in height onto the landscape at the mouse’s position. By selecting different options from the Brushes toolbar, you can paint with different shapes. The Brush Size and Opacity options vary the area of the brush and the strength of its effect respectively.
The details of all of the tools will be given in subsequent sections. However, all of them are based around the concept of painting detail and with the exception of the tree placement tool, all have the same options for brushes, brush size and opacity. The tools can be used to set the height of the terrain and also to add coloration, plants and other objects.
## Terrain Keyboard Shortcuts
You can use the following keyboard shortcuts in the terrain inspector:
* Press the keys from F1 to F6 to select the corresponding terrain tool (for example, F1 selects the Raise/Lower tool).
* The comma (,) and period (.) keys cycle through the available brushes.
* Shift-comma (<) and Shift-dot (>) cycle through the available objects for trees, Textures and details.
Additionally, the standard F keystroke works slightly differently for terrains. Normally, it frames the selection around the whole object when the mouse is over the scene view. However, since terrains are typically very large, pressing F will focus the scene view on the area of terrain where the mouse/brush is currently hovering. This provides a very quick and intuitive way to jump to the area of terrain you want to edit. If you press F when the mouse is away from the terrain object, the standard framing behaviour will return.
## Still Under Development
The following **Terrain Builder** features are still under development:
* Terrain Splatmap Shader
* Tree Billboard Meshes
* Painting Grass Meshes
* Painting Detail Prototypes
* Terrain Physics Imposters (Ammo.js)