| Field | Type | Min | Max |
| --- | --- | --- | --- |
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 5 | 1.04M |
| ext | stringclasses | 6 values | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 3 | 344 |
| max_stars_repo_name | stringlengths | 5 | 125 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 78 |
| max_stars_repo_licenses | listlengths | 1 | 11 |
| max_stars_count | int64 | 1 | 368k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 3 | 344 |
| max_issues_repo_name | stringlengths | 5 | 125 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 78 |
| max_issues_repo_licenses | listlengths | 1 | 11 |
| max_issues_count | int64 | 1 | 116k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 3 | 344 |
| max_forks_repo_name | stringlengths | 5 | 125 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 78 |
| max_forks_repo_licenses | listlengths | 1 | 11 |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| content | stringlengths | 5 | 1.04M |
| avg_line_length | float64 | 1.14 | 851k |
| max_line_length | int64 | 1 | 1.03M |
| alphanum_fraction | float64 | 0 | 1 |
| lid | stringclasses | 191 values | |
| lid_prob | float64 | 0.01 | 1 |
a93cd640cf068fcd2a625414978ac33db152f14a
455
md
Markdown
README.md
alexellis/my-fn
5d24cfb5cf1b20761b72db223f68e3b1dc1d7c0b
[ "MIT" ]
4
2018-06-06T07:09:36.000Z
2020-07-11T13:12:50.000Z
README.md
alexellis/my-fn
5d24cfb5cf1b20761b72db223f68e3b1dc1d7c0b
[ "MIT" ]
2
2018-06-29T17:33:10.000Z
2018-07-07T14:39:44.000Z
README.md
alexellis/my-fn
5d24cfb5cf1b20761b72db223f68e3b1dc1d7c0b
[ "MIT" ]
6
2018-05-25T08:59:35.000Z
2020-04-06T06:09:34.000Z
# my-fn

Functions with secrets.

* join-welcome - welcomes new people to #OpenFaaS on Slack
* slack-me - example SlackBot demo that responds to messages in #test-channel
* hallo - simple function in Go to show reading a SealedSecret

To seal the secrets:

```
$ faas-cli cloud seal \
  --name=alexellis-fn-secrets \
  --literal incoming-webhook-url=https://hooks.slack.com/services/etc/etc/etc \
  --literal key=key \
  --literal slack-token=slack-token
```
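For reference, a minimal Go sketch (not taken from this repo) of how a function handler might read one of the sealed secrets at runtime; it assumes the conventional OpenFaaS secret mount path `/var/openfaas/secrets/<name>` and the `incoming-webhook-url` secret name used above.

```go
package function

import (
	"fmt"
	"os"
	"strings"
)

// readSecret loads a secret provisioned via `faas-cli cloud seal`;
// OpenFaaS mounts each secret as a file under /var/openfaas/secrets.
func readSecret(name string) (string, error) {
	b, err := os.ReadFile("/var/openfaas/secrets/" + name)
	if err != nil {
		return "", fmt.Errorf("read secret %s: %w", name, err)
	}
	return strings.TrimSpace(string(b)), nil
}

// Handle is the function entry point; here it only confirms the secret is present.
func Handle(req []byte) string {
	url, err := readSecret("incoming-webhook-url")
	if err != nil {
		return err.Error()
	}
	return fmt.Sprintf("webhook configured (%d characters)", len(url))
}
```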
25.277778
78
0.731868
eng_Latn
0.684546
a93cd91ec62fd72fdce42606ef95a9976076fdea
4,947
md
Markdown
README.md
code4history/Maplat
b73a9270e4ce23f6d3db6b45eeb25b049afc6c1c
[ "Apache-2.0" ]
12
2019-11-13T06:57:02.000Z
2022-01-23T00:40:56.000Z
README.md
code4history/Maplat
b73a9270e4ce23f6d3db6b45eeb25b049afc6c1c
[ "Apache-2.0" ]
20
2019-11-28T14:28:55.000Z
2022-03-05T02:13:55.000Z
README.md
code4history/Maplat
b73a9270e4ce23f6d3db6b45eeb25b049afc6c1c
[ "Apache-2.0" ]
3
2019-11-17T08:11:46.000Z
2021-06-12T02:19:34.000Z
![Maplat Logo](https://code4history.github.io/Maplat/page_imgs/maplat.png) ![Maplat Catch Phrase](https://code4history.github.io/Maplat/page_imgs/homeomorphic.png) # 新聞記事を見て来られた方へ 見てみたい内容で以下の遷移先へどうぞ! * 奈良の古地図アプリを見てみたい方は、[ぷらっと奈良](https://s.maplat.jp/r/naramap/)へ * Maplat技術の特徴を知りたい方は、[技術紹介pdf](https://code4history.github.io/maplat_flyer_ja.pdf)へ * Maplatライブラリの使い方を知りたい方は、[Qiitaの記事群](https://qiita.com/tags/maplat)へ * 開発元であるCode for Historyの活動を知りたい方は、[Code for Historyのページ](https://code4history.github.io/index_ja.html)へ Maplat is the cool Historical Map/Illustrated Map Viewer. It can transform each map coordinates with nonlinear but homeomorphic projection and makes possible that the maps can collaborate with GPS/accurate maps, without distorting original maps. Data editor of this solution is provided as another project, [MaplatEditor](https://github.com/code4history/MaplatEditor/). This project won Grand Prize / Educational Effectiveness Prize / Visitors Selection Prize on Geo-Activity Contest 2018 held by Ministry of Land, Infrastructure, Transport and Tourism. Maplatは古地図/絵地図を歪める事なくGPSや正確な地図と連携させられるオープンソースプラットフォームです。   他のソリューションにない特徴として、各地図の座標変換において非線形かつ同相な投影変換が定義可能という点が挙げられます。 このプロジェクトは国土交通省主催の2018年ジオアクティビティコンテストにおいて、最優秀賞、教育効果賞、来場者賞をいただきました。 # Introduction slide (In English, ICC Tokyo 2019) <a href="https://www.slideshare.net/kokogiko/maplat-historical-map-viewer-technology-that-guarantees-nonlinear-bijective-conversion-without-distortion">![Introduction of Maplat](https://code4history.github.io/Maplat/page_imgs/maplat_slide.png)</a> # <a href="https://www.slideshare.net/kokogiko/maplat-historical-viewer-technology-that-guarantees-nonlinear-bijective-conversion-without-distortion">ICC Tokyo 2019 paper</a> # Introduction slide (In Japanese, FOSS4G Tokyo 2017) <a href="https://www.slideshare.net/kokogiko/maplat">![Introduction of Maplat](https://code4history.github.io/Maplat/page_imgs/maplat_slide.png)</a> # Data Editor Please use [MaplatEditor](https://github.com/code4history/MaplatEditor/) for data creation. データの作成には[MaplatEditor](https://github.com/code4history/MaplatEditor/)を利用してください。 # Latest result Latest result is shown below: * https://s.maplat.jp/r/naramap/ (Maplat Nara) * https://s.maplat.jp/r/aizumap/ (Maplat Aizuwakamatsu) * https://s.maplat.jp/r/iwakimap/ (Maplat Iwaki) * https://s.maplat.jp/r/tatebayashimap/ (Maplat Tatebayashi) * https://s.maplat.jp/r/chuokumap/ (Maplat Tokyo Chuo-ku) * https://s.maplat.jp/r/uedamap/ (Ueda city) * https://s.maplat.jp/r/moriokamap/ (Morioka city) * https://s.maplat.jp/r/sabaemap/ (Sabae city) * https://s.maplat.jp/r/nobeokamap/ (Nobeoka city) Documentation is undergoing. 最新の成果物は以下で確認できます。 * https://s.maplat.jp/r/naramap/ (ぷらっと奈良) * https://s.maplat.jp/r/aizumap/ (ぷらっと会津若松) * https://s.maplat.jp/r/iwakimap/ (ぷらっといわき) * https://s.maplat.jp/r/tatebayashimap/ (ぷらっと館林) * https://s.maplat.jp/r/chuokumap/ (ぷらっと東京中央区) * https://s.maplat.jp/r/uedamap/ (上田市版) * https://s.maplat.jp/r/moriokamap/ (盛岡市版) * https://s.maplat.jp/r/sabaemap/ (鯖江市版) * https://s.maplat.jp/r/nobeokamap/ (延岡市版) 成果物を他人がコンテンツ作れる形でまとめられていないですが、おいおい整理します。 # Collaboration demo with Jizo project Code 4 NaraからUrban data challenge 2016に応募中のMaplatと[地蔵プロジェクト](https://github.com/code4history/JizoProject/wiki)をコラボレーションさせたデモを作りました。 * https://s.maplat.jp/r/narajizomap/ ## Contributors This project exists thanks to all the people who contribute. 
<!--[[Contribute](CONTRIBUTING.md)].--> <a href="https://github.com/code4history/Maplat/graphs/contributors"><img src="https://opencollective.com/maplat/contributors.svg?width=890&button=false" /></a> ## Backers Thank you to all our backers! 🙏 [[Become a backer](https://opencollective.com/maplat#backer)] <a href="https://opencollective.com/maplat#backers" target="_blank"><img src="https://opencollective.com/maplat/backers.svg?width=890"></a> ## Sponsors Maplat is supported by <a href="https://www.jetbrains.com/" target="_blank"><img src="https://code4history.github.io/Maplat/img/jetbrains-variant-4.png" width="150"></a> <a href="https://www.locazing.com/" target="_blank"><img src="https://code4history.github.io/Maplat/img/locazing.png" width="150"></a> <a href="https://www.thedesignium.com/" target="_blank"><img src="https://code4history.github.io/Maplat/img/logo_TheDesignium.png" width="150"></a> <a href="https://www.browserstack.com/" target="_blank"><img src="https://code4history.github.io/Maplat/img/browserstack-logo-600x315.png" width="150"></a> <a href="https://zender.co.jp/" target="_blank"><img src="https://code4history.github.io/Maplat/img/Zender_logo_y_color.png" width="150"></a> <a href="https://www.webimpact.co.jp/" target="_blank"><img src="https://code4history.github.io/Maplat/img/webimpact.jpg" width="150"></a> Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [[Become a sponsor](https://opencollective.com/maplat#sponsor)]
53.771739
247
0.768344
yue_Hant
0.354394
a93d5d86fab020a272fcc9bce4eb0270e7a0c46d
2,782
md
Markdown
inlets/README.md
amishakov/webhook2telegram
10062ef35922e2ff018fccc77eabc817f3e0d79e
[ "MIT" ]
338
2017-08-01T09:23:49.000Z
2020-03-11T03:30:02.000Z
inlets/README.md
amishakov/webhook2telegram
10062ef35922e2ff018fccc77eabc817f3e0d79e
[ "MIT" ]
13
2017-08-01T09:49:13.000Z
2020-03-27T08:47:41.000Z
inlets/README.md
amishakov/webhook2telegram
10062ef35922e2ff018fccc77eabc817f3e0d79e
[ "MIT" ]
21
2017-08-02T05:06:01.000Z
2020-03-25T14:44:55.000Z
# Inlets

## `default`

`/api/inlets/default/<recipient>`

Forwards a basic text message or a file to a Telegram chat.

#### Body (Text Message)

```json
{
  "text": "<string>",
  "type": "<string: TEXT|FILE>",
  "origin": "<string>"
}
```

#### Body (File Message)

```json
{
  "file": "<base64string>",
  "filename": "<string>",
  "type": "<string: TEXT|FILE>",
  "origin": "<string>"
}
```

Optionally, you can pass sending options with your message:

```json
{
  ...
  "options": {
    "disable_link_previews": true
  }
}
```

**Alternatively**, the default inlet's payload can be passed as _query parameters_ of a `GET` request (see [#29](https://github.com/muety/telepush/issues/29)), e.g.:

```
GET http://localhost:8080/api/messages/<recipient> \
    ?text=Just a test \
    &origin=Some Script \
    &type=TEXT \
    &disable_link_previews=true
```

## `alertmanager`

`/api/inlets/alertmanager/<recipient>`

Accepts, transforms and forwards alerts sent by [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) to a Telegram chat. See [webhook_config](https://prometheus.io/docs/alerting/configuration/#webhook_config).

### Example Configuration

```yaml
# alertmanager.yml

global:
  resolve_timeout: 5m

route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'telepush'

receivers:
- name: 'telepush'
  webhook_configs:
  - url: 'http://localhost:8080/api/inlets/alertmanager_webhook/5hd9mx'
```

## `bitbucket`

`/api/inlets/bitbucket/<recipient>`

Accepts, transforms and forwards events sent by [Bitbucket](https://bitbucket.org/) to a Telegram chat.

#### Parameters

Requires the `X-Event-Key` header to be set.

### Body

See [Event Payloads](https://confluence.atlassian.com/bitbucket/event-payloads-740262817.html).

## `webmentionio`

`/api/inlets/webmentionio/<recipient>`

Accepts, transforms and forwards notifications sent by [Webmention.io](https://webmention.io) to a Telegram chat.

### Body

An example payload looks as follows, however, only `secret`, `source` and `target` are utilized.

```json
{
  "source": "http://rhiaro.co.uk/2015/11/1446953889",
  "target": "http://aaronparecki.com/notes/2015/11/07/4/indiewebcamp",
  "post": {
    "type": "entry",
    "author": {
      "name": "Amy Guy",
      "photo": "http://webmention.io/avatar/rhiaro.co.uk/829d3f6e7083d7ee8bd7b20363da84d88ce5b4ce094f78fd1b27d8d3dc42560e.png",
      "url": "http://rhiaro.co.uk/about#me"
    },
    "url": "http://rhiaro.co.uk/2015/11/1446953889",
    "published": "2015-11-08T03:38:09+00:00",
    "name": "repost of http://aaronparecki.com/notes/2015/11/07/4/indiewebcamp",
    "repost-of": "http://aaronparecki.com/notes/2015/11/07/4/indiewebcamp",
    "wm-property": "repost-of"
  }
}
```
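As a supplement to the endpoints above, a hedged Go sketch of posting a text message to the `default` inlet described at the top of this document; the base URL `http://localhost:8080` and the `<recipient>` token are placeholders that mirror the examples, not fixed values.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// message mirrors the documented text-message body for the default inlet.
type message struct {
	Text   string `json:"text"`
	Type   string `json:"type"`
	Origin string `json:"origin"`
}

func main() {
	body, _ := json.Marshal(message{Text: "Just a test", Type: "TEXT", Origin: "Some Script"})

	// <recipient> is the per-chat token issued by the bot; replace before running.
	resp, err := http.Post("http://localhost:8080/api/inlets/default/<recipient>",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```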
25.759259
165
0.676132
eng_Latn
0.232997
a93e74858f67b5cb49003c22cf4f67ce88d3f159
1,077
md
Markdown
a/angular-signalr-hub/readme.md
ScalablyTyped/SlinkyTyped
abb05700fe72d527728a9c735192f4c156bd9be1
[ "MIT" ]
14
2020-01-09T02:36:33.000Z
2021-09-05T13:40:52.000Z
a/angular-signalr-hub/readme.md
oyvindberg/SlinkyTyped
abb05700fe72d527728a9c735192f4c156bd9be1
[ "MIT" ]
1
2021-07-31T20:24:00.000Z
2021-08-01T07:43:35.000Z
a/angular-signalr-hub/readme.md
oyvindberg/SlinkyTyped
abb05700fe72d527728a9c735192f4c156bd9be1
[ "MIT" ]
4
2020-03-12T14:08:42.000Z
2021-08-12T19:08:49.000Z
# Scala.js typings for angular-signalr-hub

Typings are for version v1.5.0

## Library description:
A handy wrapper for SignalR Hubs. Just specify the hub name, listening functions, and methods that you're going to use.

|                    |                 |
| ------------------ | :-------------: |
| Full name          | angular-signalr-hub |
| Keywords           | angular, angularjs, signalr, hub, service |
| # releases         | 0 |
| # dependents       | 0 |
| # downloads        | 54983 |
| # stars            | 0 |

## Links
- [Homepage](https://github.com/justmaier/angular-signalr-hub)
- [Bugs](https://github.com/justmaier/angular-signalr-hub/issues)
- [Repository](https://github.com/justmaier/angular-signalr-hub)
- [Npm](https://www.npmjs.com/package/angular-signalr-hub)

## Note
This library has been generated from typescript code from [DefinitelyTyped](https://definitelytyped.org).

Provided with :purple_heart: from [ScalablyTyped](https://github.com/oyvindberg/ScalablyTyped)

## Usage
See [the main readme](../../readme.md) for instructions.
30.771429
119
0.636955
eng_Latn
0.673746
a93e74a4b9aaab4416e6e4e172e29b521706ee9d
1,138
md
Markdown
Linux/Linux开发/Linux系统调用.md
fengjixuchui/Document-1
62c74d94c4f16f94f4a5e473636d3248af0c37fe
[ "Apache-2.0" ]
1
2021-02-26T02:40:03.000Z
2021-02-26T02:40:03.000Z
Linux/Linux开发/Linux系统调用.md
fengjixuchui/Document-1
62c74d94c4f16f94f4a5e473636d3248af0c37fe
[ "Apache-2.0" ]
null
null
null
Linux/Linux开发/Linux系统调用.md
fengjixuchui/Document-1
62c74d94c4f16f94f4a5e473636d3248af0c37fe
[ "Apache-2.0" ]
null
null
null
# Linux System Call Analysis

Based on the linux-2.6.32.69 source code.

```
#define __SYSCALL(nr, sym) extern asmlinkage void sym(void) ;
typedef void (*sys_call_ptr_t)(void);

const sys_call_ptr_t sys_call_table[__NR_syscall_max+1] = {
	/*
	 * Smells like a compiler bug -- it doesn't work
	 * when the & below is removed.
	 */
	[0 ... __NR_syscall_max] = &sys_ni_syscall,
#include <asm/unistd_64.h>
};
```

```
asmlinkage long sys_ni_syscall(void)
{
	return -ENOSYS;
}
```

The code above gives `sys_call_table` its initial value: every entry is a function pointer to a function that simply returns "no such system call". As you can see, `sys_call_table` stores function pointers.

The code in `unistd_64.h` looks like this:

```
#define __NR_read 0
__SYSCALL(__NR_read, sys_read)
#define __NR_write 1
__SYSCALL(__NR_write, sys_write)
#define __NR_open 2
__SYSCALL(__NR_open, sys_open)
#define __NR_close 3
__SYSCALL(__NR_close, sys_close)
#define __NR_stat 4
__SYSCALL(__NR_stat, sys_newstat)
#define __NR_fstat 5
__SYSCALL(__NR_fstat, sys_newfstat)
#define __NR_lstat 6
__SYSCALL(__NR_lstat, sys_newlstat)
#define __NR_poll 7
__SYSCALL(__NR_poll, sys_poll)
#define __NR_lseek 8
__SYSCALL(__NR_lseek, sys_lseek)
#define __NR_mmap 9
__SYSCALL(__NR_mmap, sys_mmap)
...
```
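To connect this to user space, here is a small hedged illustration (Go, Linux/amd64 assumed) of invoking a system call by number: the number passed to the kernel is the same `__NR_write` index that selects the `sys_write` entry in `sys_call_table` above.

```go
package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

func main() {
	msg := []byte("hello via raw syscall\n")

	// __NR_write is 1 on x86-64 (see unistd_64.h above); the kernel uses that
	// number to index sys_call_table and dispatch to sys_write.
	n, _, errno := syscall.Syscall(syscall.SYS_WRITE, uintptr(1),
		uintptr(unsafe.Pointer(&msg[0])), uintptr(len(msg)))
	if errno != 0 {
		fmt.Println("write failed:", errno)
		return
	}
	fmt.Println("bytes written:", n)
}
```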
18.966667
61
0.746046
yue_Hant
0.272001
a93f099e49d8b4b418ed35887ec302a1789a5dd7
3,045
md
Markdown
_episodes/01-introduction.md
RobHarrand/life-sciences-spreadsheet-best-practice
036be9156afd55188f010989d26bcecbe6fa2407
[ "CC-BY-4.0" ]
null
null
null
_episodes/01-introduction.md
RobHarrand/life-sciences-spreadsheet-best-practice
036be9156afd55188f010989d26bcecbe6fa2407
[ "CC-BY-4.0" ]
null
null
null
_episodes/01-introduction.md
RobHarrand/life-sciences-spreadsheet-best-practice
036be9156afd55188f010989d26bcecbe6fa2407
[ "CC-BY-4.0" ]
null
null
null
---
title: "Introduction"
teaching: 10
exercises: 10
questions:
- "When should you use a spreadsheet?"
objectives:
- "Start to understand when spreadsheets are and aren't appropriate"
keypoints:
- "Spreadsheets can be extremely useful, but they can also cause chaos"
---

Professor Daniel Lemire from the University of Quebec, in a popular blog post titled 'You shouldn’t use a spreadsheet for important work (I mean it)', states that *"spreadsheets are good for quick and dirty work, but they are not designed for serious and reliable work"*. In his post he cites the example of Professor Thomas Piketty and his book 'Capital in the 21st Century'. Piketty, aiming for transparency in his work, openly shared the underlying data files containing his analysis. Unfortunately, he had used Excel to perform tasks such as merging different datasets and interpolating missing data, and errors were soon found. When corrected, the central thesis of the book was undermined.

So, when are spreadsheets the answer? And in those situations, how best should they be used?

Spreadsheets are excellent at giving you a quick visual picture of your data. Further, they give the ability to change figures and then see the immediate effects (so-called 'What-if' analysis). They're also simple to use and ubiquitous, used for scientific experiments in schools from an early age.

The issues created when using spreadsheets for large, complex datasets are obvious. Intuition about what you're looking at breaks down. Connections between different parts of the data, especially across different tabs, become increasingly difficult to track. Formulae, hidden from view, can slowly accumulate errors. But that doesn't mean that smaller, easier-to-handle datasets can't cause problems, or that data under a certain size can be analysed without any consideration of potential spreadsheet dangers.

And the issues range beyond pure analysis. Spreadsheets are often used as a replacement for lab books, with multiple tabs containing data from different experiments gathered on different days, text annotations used for ad hoc notes, and entire spreadsheets emailed, opening up all manner of privacy and security issues.

> #### Exercise - Which of the following scenarios are appropriate for spreadsheets?
>
> 1. A dataset of 100 rows of blood markers for 5 people. The aim is to create a simple plot
> 2. A dataset of 100 rows of blood markers for 5 people. The aim is to fit advanced statistical models and interpolate new values from those models
> 3. A dataset of 1000 rows of blood markers for 20 people. Aim is to create simple plots and create summary statistics (mean, standard deviations, etc)
> 4. A dataset of 10k rows of genetic sequencing data. Aim is to pattern-match and extract key sequences
> 5. The dataset in example 1, but instead of a single file, you have 100 similar files, i.e. you wish to create 100 plots
>
> > #### Solution
> > 1. Yes
> > 2. Probably not
> > 3. Yes
> > 4. Probably not
> > 5. Probably not
> {: .solution}
{: .challenge}
63.4375
152
0.780624
eng_Latn
0.999679
a93fd4b7ce9ba4be3de751b165ef4a01bb6f89f1
4,391
md
Markdown
README.md
lishaoliang/l_sdk_doc
c687d49bffdd1eaa234ac96cb1871e7dcf2fb2af
[ "Apache-2.0" ]
2
2019-03-06T05:15:51.000Z
2020-03-20T01:27:47.000Z
README.md
lishaoliang/l_sdk_doc
c687d49bffdd1eaa234ac96cb1871e7dcf2fb2af
[ "Apache-2.0" ]
null
null
null
README.md
lishaoliang/l_sdk_doc
c687d49bffdd1eaa234ac96cb1871e7dcf2fb2af
[ "Apache-2.0" ]
3
2019-07-22T08:31:14.000Z
2019-12-18T02:39:44.000Z
## 欢迎使用“l_sdk”文档 你可以在[GitHub](https://github.com/lishaoliang/l_sdk_doc/)下载本文档 ## 一、SDK接口 ### [1. 基本接口](https://github.com/lishaoliang/l_sdk_doc/blob/master/sdk/l_sdk.md) ### [2. 访问媒体数据](https://github.com/lishaoliang/l_sdk_doc/blob/master/sdk/l_sdk_media.md) ### [3. 网络发现](https://github.com/lishaoliang/l_sdk_doc/blob/master/sdk/l_sdk_discover.md) ### [4. 设备升级](https://github.com/lishaoliang/l_sdk_doc/blob/master/sdk/l_sdk_upgrade.md) ### [5. 状态查询](https://github.com/lishaoliang/l_sdk_doc/blob/master/sdk/l_sdk_status.md) ### [6. 图片处理](https://github.com/lishaoliang/l_sdk_doc/blob/master/sdk/l_sdk_picture.md) ### [7. 修改记录](https://github.com/lishaoliang/l_sdk_doc/blob/master/demo/inc/l_sdk_history.h) ## 二、NSPP协议概述 ## 三、媒体-控制子协议 ### [1. 通用定义](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/common.md) ### [2. 批量协议请求](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/multi_req.md) ### [3. 公共请求](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/public.md) ### [4. 权限](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/auth.md) ### [5. 用户](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/user.md) ### [6. 基本信息](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/base.md) ### [7. 网络](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/net.md) ### [8. 媒体流](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/stream.md) ### [9. 图像参数](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/image.md) ### [10. OSD参数](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/osd.md) ### [11. 配置操作](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/config.md) ### [12. 控制协议](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/sys.md) ### [附录1、错误码](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/net_err.md) ### [附录3、流序号](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/stream_idx.md) ### [附录4、流格式](https://github.com/lishaoliang/l_sdk_doc/blob/master/protocol/stream_fmt.md) ## 四、网络发现子协议 ### [1. 私有网络发现协议](https://github.com/lishaoliang/l_sdk_doc/blob/master/multicast/multicast.md) ## 五、升级子协议 ### [1. 客户端请求服务端升级](https://github.com/lishaoliang/l_sdk_doc/blob/master/upgrade/upgrade.md) ## 六、本地协议 ### [1. 本地协议接口](https://github.com/lishaoliang/l_sdk_doc/blob/master/lif/lif.md) ## 七、示例 ### [1. 头文件目录](https://github.com/lishaoliang/l_sdk_doc/tree/master/demo/inc) * 头文件: https://github.com/lishaoliang/l_sdk_doc/tree/master/demo/inc ### [2. Win示例](https://github.com/lishaoliang/l_sdk_doc/blob/master/demo/cpp/stream/t_stream_dec.c) * l_sdk库32: https://github.com/lishaoliang/l_sdk_doc/tree/master/release/msc-win32 * l_sdk库64: https://github.com/lishaoliang/l_sdk_doc/tree/master/release/msc-x64 ### [3. 安卓示例](https://github.com/lishaoliang/l_sdk_doc/blob/master/demo/cpp/NdkAndroid/app/src/main/cpp/native-lib.cpp) * ffmpeg参考库: https://github.com/lishaoliang/ffmpeg/tree/master/lib-android * l_sdk库: https://github.com/lishaoliang/l_sdk_doc/tree/master/release/android-libs ### [4. iOS示例](https://github.com/lishaoliang/l_sdk_doc/blob/master/demo/cpp/tios_l_sdk/tios_l_sdk/l_sdkm.mm) * ffmpeg参考库: https://github.com/lishaoliang/ffmpeg/tree/master/lib-iOS * l_sdk库: https://github.com/lishaoliang/l_sdk_doc/tree/master/release/ios-libs ## 八、客户端软件 ### [1. 
Windows客户端](https://fir.im/v8h6) * 32位,x86: https://github.com/lishaoliang/l_sdk_doc/blob/master/release/client-win/SLNetClient_install_x86.zip * 64位,x64: https://github.com/lishaoliang/l_sdk_doc/blob/master/release/client-win/SLNetClient_install_x64.zip ### [2. 安卓客户端](https://fir.im/v8h6) ![image](https://github.com/lishaoliang/l_sdk_doc/blob/master/qr_code/android_app.jpg) ## 其他 ### [Markdown简易参考](https://github.com/lishaoliang/l_sdk_doc/blob/master/markdown.md) ## Apache License,Version 2.0 Copyright (c) 2019 武汉舜立软件 Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
47.728261
119
0.755864
yue_Hant
0.402849
a9404fae690978fe11a162646f5fbed53184d7b5
725
md
Markdown
tools/state-migrate/README.md
stephengroat/terraform-provider-appgatesdp
3a4db7498bc13d6add121d671f6269a1de9fb53c
[ "MIT" ]
8
2021-05-03T21:11:25.000Z
2022-02-01T18:56:55.000Z
tools/state-migrate/README.md
stephengroat/terraform-provider-appgatesdp
3a4db7498bc13d6add121d671f6269a1de9fb53c
[ "MIT" ]
66
2021-04-21T22:49:11.000Z
2022-03-24T13:34:20.000Z
tools/state-migrate/README.md
stephengroat/terraform-provider-appgatesdp
3a4db7498bc13d6add121d671f6269a1de9fb53c
[ "MIT" ]
6
2021-04-22T21:11:11.000Z
2022-02-24T02:55:53.000Z
# about this tool

Prior to version 0.5.0, the project was named `terraform-provider-appgate-sdp`, and the provider was not yet published to registry.terraform.io. When we published it, we noticed problems with using kebab-case in the name, which forced us to rename the project to `terraform-provider-appgatesdp`.

This tool will target a terraform plan directory and transform all appgate names found in .tf and .tfstate files to the new appgatesdp provider name. It creates a backup of the target directory `<plan-directory>.backup` as a sibling folder.

```sh
$ go run main.go migrate -dir /path/to/terraform-resources
```

or use the built binary

```sh
$ ./state-migrate migrate -dir /path/to/terraform-resources
```
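For intuition only, a rough Go sketch of the kind of rewrite such a migration performs over a plan directory; this is not the tool's actual implementation, and the `appgate-sdp`/`appgatesdp` strings and walk logic below are illustrative assumptions.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// migrateDir rewrites provider references in .tf and .tfstate files from the
// old kebab-case name to the new one, illustrating what the real tool automates
// (the real tool also backs up the directory first).
func migrateDir(dir string) error {
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		ext := filepath.Ext(path)
		if ext != ".tf" && ext != ".tfstate" {
			return nil
		}
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		updated := strings.ReplaceAll(string(data), "appgate-sdp", "appgatesdp")
		return os.WriteFile(path, []byte(updated), info.Mode())
	})
}

func main() {
	if err := migrateDir("/path/to/terraform-resources"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```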
34.52381
238
0.766897
eng_Latn
0.99907
a94055ecf6beecf9a03c9f91430f24cb0468c803
26
md
Markdown
README.md
gkgkgk1215/opencv
7c7b7051f41ca0bae5b18b82cba7e5890fc70fcf
[ "MIT" ]
null
null
null
README.md
gkgkgk1215/opencv
7c7b7051f41ca0bae5b18b82cba7e5890fc70fcf
[ "MIT" ]
null
null
null
README.md
gkgkgk1215/opencv
7c7b7051f41ca0bae5b18b82cba7e5890fc70fcf
[ "MIT" ]
null
null
null
# opencv OpenCV tutorials
8.666667
16
0.807692
kor_Hang
0.392939
a9409817aad2ebb2f00050cf510f8f8815ea3568
307
md
Markdown
README.md
julianny-favinha/thread-and-fork
1cfd25c8b747d930c732ab772eefcaa557ea1408
[ "MIT" ]
null
null
null
README.md
julianny-favinha/thread-and-fork
1cfd25c8b747d930c732ab772eefcaa557ea1408
[ "MIT" ]
null
null
null
README.md
julianny-favinha/thread-and-fork
1cfd25c8b747d930c732ab772eefcaa557ea1408
[ "MIT" ]
null
null
null
# Small programs to test threads and fork

## Threads

Run `g++ -pthread thread.cpp -o thread`

And `./thread <number in seconds to timeout>`

## Fork

Run `g++ fork.cpp -o fork`

And `./fork <number in seconds to timeout>`

Try to test with:

- Number < 5
- Number > 5

And see what happens to the `a` variable 😆
17.055556
45
0.67101
eng_Latn
0.980437
a94106874bf8d3e8d3a127761d0438ea9790e88b
167
md
Markdown
Docs/CodingStyle.md
Venryx/DebateMap
be2f82979eba3aed43d0e4247be7cee655adbe66
[ "MIT" ]
33
2017-03-10T09:31:32.000Z
2019-12-30T13:00:05.000Z
Docs/CodingStyle.md
debate-map/app
5953d9f5642a53061c3b6fa075f68014fe716d3d
[ "MIT" ]
8
2021-06-16T10:07:24.000Z
2022-03-21T14:19:48.000Z
Docs/CodingStyle.md
Venryx/DebateMap
be2f82979eba3aed43d0e4247be7cee655adbe66
[ "MIT" ]
6
2017-03-28T19:59:10.000Z
2019-09-25T14:04:51.000Z
# Coding Style

## Overview

To be written. For now, see [the eslint config](https://github.com/Venryx/eslint-config-vbase/blob/master/index.js) that the project uses.
33.4
138
0.754491
eng_Latn
0.702993
a9412c7259ccb8f1c4bc6c08bab7e237584ceb3d
556
md
Markdown
CHANGELOG.md
gstory0404/flutter_pangrowth
d5109734dc72ee0aced684c5a51d631dc8800191
[ "Apache-2.0" ]
12
2021-12-08T08:52:47.000Z
2022-03-15T03:12:36.000Z
CHANGELOG.md
gstory0404/flutter_pangrowth
d5109734dc72ee0aced684c5a51d631dc8800191
[ "Apache-2.0" ]
4
2021-12-09T10:29:18.000Z
2022-03-21T06:32:32.000Z
CHANGELOG.md
gstory0404/flutter_pangrowth
d5109734dc72ee0aced684c5a51d631dc8800191
[ "Apache-2.0" ]
2
2022-03-02T01:46:40.000Z
2022-03-05T06:30:59.000Z
## 1.0.5
* Compatible with Flutter 3.0
* Android SDK upgraded to 2.4.0.0
* iOS SDK upgraded to 2.4.0.0

## 1.0.4
* Android SDK upgraded to 2.2.0.1
* This version must be used together with flutter_unionad 1.2.4
```dart
flutter_unionad: 1.2.4
```

## 1.0.3
* Android SDK upgraded to 2.2.0.0
* iOS SDK upgraded to 2.2.0.0
* This version must be used together with flutter_unionad 1.2.4
```dart
flutter_unionad: 1.2.4
```

## 1.0.2
* Android SDK upgraded to 2.0.0.0
* iOS SDK upgraded to 2.0.0.0
* Added jump to a video author's profile page
* Added swipe-back gesture to exit the iOS video page
* This version must be used together with flutter_unionad 1.2.1
```dart
flutter_unionad: 1.2.1
```

## 1.0.1
* Fixed an iOS build error

## 1.0.0
* Added short video support

## 0.0.2
* Added traffic-diversion entry
* Added large-font mode setting and personalization switch
* SDK version upgrade

## 0.0.1
* Initial release with novel-content integration, supporting Android and iOS
9.928571
34
0.633094
yue_Hant
0.62845
a94180d6d84e0993ea3adfa46e4d92d029fbd9b7
15,608
md
Markdown
source/includes/_doctormessaging.md
cooldoctors/slate
b0cc9715cadc098f1b59aca589819a2574d12bf8
[ "Apache-2.0" ]
null
null
null
source/includes/_doctormessaging.md
cooldoctors/slate
b0cc9715cadc098f1b59aca589819a2574d12bf8
[ "Apache-2.0" ]
null
null
null
source/includes/_doctormessaging.md
cooldoctors/slate
b0cc9715cadc098f1b59aca589819a2574d12bf8
[ "Apache-2.0" ]
1
2020-06-06T13:19:26.000Z
2020-06-06T13:19:26.000Z
# Doctor Messaging ## Get list of messages by patients ```shell curl -H "X-Auth-Token:doctor@cooldoctors.io:doctor:1515429239598:a0e476b75ffbacedd65e555b5304b222" -H "Content-Type:application/json" "https://api.endpoint.eyecarelive/DoctorOnDemand/api/doctor/followup/getchatuserlist" ``` > The above command returns JSON structured like this: ```json [ { "id": "5a3b8918e4b096b359a6cbfa", "userOne": "5a3b88f0e4b096b359a6cbe4", "userTwo": "56fbd63de4b0777c5a04f540", "updatedAt": 1513860270001, "count": 0, "userdetails": { "usrOneProfImg": null, "usrTwoProfImg": "https://api.endpoint.eyecarelive/PRDIMAGE/doctor/56fbd63de4b0777c5a04f540/ProfileImage/images (18).jpg", "usrOneAbtMe": { "id": "5a3b88f0e4b096b359a6cbe1", "firstName": "Nitin", "lastName": "Patil", "birthDate": "05/02/1993", "address": "pune", "city": "pune", "country": "USA", "pincode": "94649", "gender": "Male", "languagesSpeak": null, "additionalInfo": null, "state": "North Carolina" }, "usrTwoAbtMe": { "id": "56fbd63de4b0777c5a04f53e", "firstName": "Dr. Nitesh", "lastName": "Yadav", "birthDate": "03/14/2016", "address": "1010 W Fremont Ave # 200", "city": "Sunnyvale", "country": "Indian", "pincode": "555555", "gender": " Male", "languagesSpeak": [ "English", "", "" ], "additionalInfo": null, "state": "CA" } } } ] ``` > Make sure to replace `X-Auth-Token` with your API key. List of all the messages in a conversation with a doctor ### HTTP Request `POST http://localhost:8080/DoctorOnDemand/api/doctor/followup/getchatuserlist ` ### Query Parameters Parameter | Description | Type | Optional/Required --------- | ------------ | ---- | ---------------- Get APi | It will give the user chat list | Integer | Required ### Response Parameter | Description | Type --------- | ----------- | ---- ID | This ID is unique chatID in database | String userOne | UserOne will be patientID | String userTwo | UserTwo will be patientID | String updatedAt | Deprecated | String count| Unread Messages count | String userdetails[{}]| UserDetails having patient and doctor profile image | String usrOneAbtMe[{}] |This will give the user personal details | String usrTwoAbtMe[{}]| This will give the user personal details | String | Same Array response coming for other user after comma. | `Authorization: X-Auth-Token` <aside class="notice"> You must replace <code>X-Auth-Token</code> with your personal API key. 
</aside> ## Send message ```shell curl -H "X-Auth-Token:patientuser@cooldoctors.io:doctor:1515429239598:a0e476b75ffbacedd65e555b5304b222" -H "Content-Type:application/json" "http://api.endpoint.eyecarelive/DoctorOnDemand/api/doctor/followup" -X POST -d '{"content":"Hi Jerry, how are you feeling now.?","senderId":"5a2c0a69e4b0e4fa266e0180","recipientId":"58e2268be4b0b3b8e551c825"}' ``` > The above command returns JSON structured like this: ```json { "id": "5a2e876ae4b0e4fa266e226d", "subject": null, "content": "Hi Jerry, how are you feeling now.?", "senderId": "5a2c0a69e4b0e4fa266e0180", "recipientId": "58e2268be4b0b3b8e551c825", "parentId": null, "timestamp": "2017-12-11 05:26 AM PST", "status": null, "read_status": false, "appointmentId": null, "senderAboutMe": { "id": "5a2c0a69e4b0e4fa266e017d", "firstName": "Patient", "lastName": "Name", "birthDate": null, "address": "Kallam Anji Reddy Campus, Banjara Hills", "city": "Hyderabad", "country": "India", "pincode": "500034", "gender": "Female", "languagesSpeak": null, "additionalInfo": null, "state": "Telangana" }, "symtons": null, "fileId": null, "fileName": null, "filePath": null, "contentTypes": null, "senderProfileImage": "https://api.endpoint.eyecarelive/PRDIMAGE/doctor/5a2c0a69e4b0e4fa266e0180/ProfileImage/dashboard2.png", "isDoctor": false, "recipientAboutMe": { "id": "58e2268be4b0b3b8e551c823", "firstName": "Raj", "lastName": "R", "birthDate": "04/16/2017", "address": "1010 W Fremont Ave", "city": "Sunnyvale", "country": "USA", "pincode": "95014", "gender": " Male", "languagesSpeak": [ "English", "", "" ], "additionalInfo": null, "state": "CA" }, "recipientProfileImage": "https://api.endpoint.eyecarelive/PRDIMAGE/doctor/58e2268be4b0b3b8e551c825/ProfileImage/Careers-after-MBA-for-doctor.jpg", "messageType": "withoutAppointment", "chatId": "5a2d2c1de4b0e4fa266e02a3", "createdAt": "2017-12-11 05:26 AM PST" } ``` > Make sure to replace `X-Auth-Token` with your API key. Send a new message to a doctor ### HTTP Request `POST http://localhost:8080DoctorOnDemand/api/doctor/followup/ ` ### Query Parameters Parameter | Description | Type | Optional/Required --------- | ------------ | ---- | ---------------- senderId | In Sender ID need to pass the logged in user ID in sender | String | Required recipientId | In Recipient need to pass the doctor ID which one selected from the list of doctors API | String | Required content | In Content need to write message | String | Required ### Response Parameter | Description | Type --------- | ----------- | ---- ID | This ID is database ID | String subject | Deprecated | String content | Written message content showing | String senderId | Sender ID is patient id | String recipientId | Recipient ID is doctors ID | String parentId | Deprecated | String timestamp | Message Time | String status | Deprecated | String read_status | Message read status | String appointmentId | If we patient send message on appointment then the Appointment id will return here. | String senderAboutMe [{}] | Sender about having the sender information | String recipientAboutMe [{}] | Recipient about having the Recipient information | String messageType | If that is Appointment wise message then need to pass “withAppointment” & without message “withoutAppointment” | String chatId | This will chatID | String createdAt | Deprecated | String `Authorization: X-Auth-Token` <aside class="notice"> You must replace <code>X-Auth-Token</code> with your personal API key. 
</aside> ## Send file via messaging to a patient ```shell curl -H "X-Auth-Token:patientuser@cooldoctors.io:doctor:1515429239598:a0e476b75ffbacedd65e555b5304b222" -H "Content-Type:application/json" "http://api.endpoint.eyecarelive/DoctorOnDemand/api/doctor/followup/uploadFollowUpMessageFilesInfoIndividual" -X POST -d '{"senderId":"5a2c0a69e4b0e4fa266e0180","recipientId":"58e2268be4b0b3b8e551c825"}' ``` > The above command returns JSON structured like this: ```json { "id": "5a2e9137e4b0e4fa266e23ad", "subject": null, "content": null, "senderId": "5a2c0a69e4b0e4fa266e0180", "recipientId": "58e2268be4b0b3b8e551c825", "parentId": null, "timestamp": "2017-12-11 06:07 AM PST", "status": null, "read_status": false, "appointmentId": null, "senderAboutMe": { "id": "5a2c0a69e4b0e4fa266e017d", "firstName": "Patient", "lastName": "Name", "birthDate": null, "address": "Kallam Anji Reddy Campus, Banjara Hills", "city": "Hyderabad", "country": "India", "pincode": "500034", "gender": "Female", "languagesSpeak": null, "additionalInfo": null, "state": "Telangana" }, "symtons": null, "fileId": null, "fileName": null, "filePath": null, "contentTypes": null, "senderProfileImage": "https://api.endpoint.eyecarelive/PRDIMAGE/doctor/5a2c0a69e4b0e4fa266e0180/ProfileImage/dashboard2.png", "isDoctor": false, "recipientAboutMe": { "id": "58e2268be4b0b3b8e551c823", "firstName": "Raj", "lastName": "R", "birthDate": "04/16/2017", "address": "1010 W Fremont Ave", "city": "Sunnyvale", "country": "USA", "pincode": "95014", "gender": " Male", "languagesSpeak": [ "English", "", "" ], "additionalInfo": null, "state": "CA" }, "recipientProfileImage": "https://api.endpoint.eyecarelive/PRDIMAGE/doctor/58e2268be4b0b3b8e551c825/ProfileImage/Careers-after-MBA-for-doctor.jpg", "messageType": "withoutAppointment", "chatId": "5a2d2c1de4b0e4fa266e02a3", "createdAt": "2017-12-11 06:07 AM PST" } ``` > Make sure to replace `X-Auth-Token` with your API key. This is two step API. $URL/uploadFollowUpMessageFilesInfoIndividual/ returns a follow-up id#. This id# is then passed to $URL/uploadFollowUpMessageFilesIndividual/{followupId} to send the file to the doctor as a follow-up message. ### HTTP Request `POST http://localhost:8080/DoctorOnDemand/api/doctor/followup/uploadFollowUpMessageFilesInfoIndividual/ ` ### Query Parameters Parameter | Description | Type | Optional/Required --------- | ------------ | ---- | ---------------- senderId | Sender ID logged in user | String | Required recipientId | Recipient ID is doctor ID | String | Required ### Response Parameter | Description | Type --------- | ----------- | ---- ID | This ID is database ID | String subject | Deprecated | String content | Written message content showing | String senderId | Sender ID is patient id | String recipientId | Recipient ID is doctors ID | String parentId | Deprecated | String timestamp | Message Time | String status | Deprecated | String read_status | Message read status | String appointmentId | If we patient send message on appointment then the Appointment id will return here. 
| String senderAboutMe [{}] | Sender about having the sender information | String recipientAboutMe [{}] | Recipient about having the Recipient information | String messageType | If that is Appointment wise message then need to pass “withAppointment” & without message “withoutAppointment” | String chatId | This will chatID | String createdAt | Deprecated | String fileId | It will return the file ID for get URL of uploaded photo | String `Authorization: X-Auth-Token` <aside class="notice"> You must replace <code>X-Auth-Token</code> with your personal API key. </aside> ```shell curl -H "X-Auth-Token:patientuser@cooldoctors.io:doctor:1515429239598:a0e476b75ffbacedd65e555b5304b222" -H "Content-Type:multipart/form-data" "http://api.endpoint.eyecarelive/DoctorOnDemand/api/doctor/followup/uploadFollowUpMessageFilesIndividual/5a2e9137e4b0e4fa266e23ad" -X POST -F file=@/Users/cupertino/Downloads/dash3.png ``` > The above command returns JSON structured like this: ```json { "id": "5a2e9137e4b0e4fa266e23ad", "subject": null, "content": null, "senderId": "5a2c0a69e4b0e4fa266e0180", "recipientId": "58e2268be4b0b3b8e551c825", "parentId": null, "timestamp": "2017-12-11 06:07 AM PST", "status": null, "read_status": false, "appointmentId": null, "senderAboutMe": { "id": "5a2c0a69e4b0e4fa266e017d", "firstName": "Patient", "lastName": "Name", "birthDate": null, "address": "Kallam Anji Reddy Campus, Banjara Hills", "city": "Hyderabad", "country": "India", "pincode": "500034", "gender": "Female", "languagesSpeak": null, "additionalInfo": null, "state": "Telangana" }, "symtons": null, "fileId": [ "1513001430064" ], "fileName": [ "dash3.png" ], "filePath": [ "https://s3.amazonaws.com/test-nakul/doctor/5a2c0a69e4b0e4fa266e0180/followUpMessage/1513001429942/dash3.png" ], "contentTypes": [ "application/octet-stream" ], "senderProfileImage": "https://api.endpoint.eyecarelive/PRDIMAGE/doctor/5a2c0a69e4b0e4fa266e0180/ProfileImage/dashboard2.png", "isDoctor": false, "recipientAboutMe": { "id": "58e2268be4b0b3b8e551c823", "firstName": "Raj", "lastName": "R", "birthDate": "04/16/2017", "address": "1010 W Fremont Ave", "city": "Sunnyvale", "country": "USA", "pincode": "95014", "gender": " Male", "languagesSpeak": [ "English", "", "" ], "additionalInfo": null, "state": "CA" }, "recipientProfileImage": "https://api.endpoint.eyecarelive/PRDIMAGE/doctor/58e2268be4b0b3b8e551c825/ProfileImage/Careers-after-MBA-for-doctor.jpg", "messageType": "withoutAppointment", "chatId": "5a2d2c1de4b0e4fa266e02a3", "createdAt": "2017-12-11 06:07 AM PST" } ``` ### HTTP Request `POST http://localhost:8080/DoctorOnDemand/api/doctor/followup/uploadFollowUpMessageFilesIndividual/{followupId} ` ### Query Parameters Parameter | Description | Type | Optional/Required --------- | ------------ | ---- | ---------------- {followupId} | Need to pass follow-up id in the URL | String | Required -F file=@/FilePath | Need to upload file using API | String | Required ### Response Parameter | Description | Type --------- | ----------- | ---- ID | This ID is database ID | String subject | Deprecated | String content | Written message content showing | String senderId | Sender ID is patient id | String recipientId | Recipient ID is doctors ID | String parentId | Deprecated | String timestamp | Message Time | String status | Deprecated | String read_status | Message read status | String appointmentId | If we patient send message on appointment then the Appointment id will return here. 
| String senderAboutMe [{}] | Sender about having the sender information | String recipientAboutMe [{}] | Recipient about having the Recipient information | String messageType | If that is Appointment wise message then need to pass “withAppointment” & without message “withoutAppointment” | String chatId | This will chatID | String createdAt | Deprecated | String fileId | It will return the file ID for get URL of uploaded photo | String `Authorization: X-Auth-Token` <aside class="notice"> You must replace <code>X-Auth-Token</code> with your personal API key. </aside> ## Get view file from messages for doctors ```shell curl -H "X-Auth-Token:patientuser@cooldoctors.io:doctor:1515429239598:a0e476b75ffbacedd65e555b5304b222" -H "Content-Type:multipart/form-data" "https://api.endpoint.eyecarelive/DoctorOnDemand/api/doctor/followup/getUrl/1512909942596/5a2d2c5ce4b0e4fa266e02a6" ``` > The above command returns JSON structured like this: ```json { "url": "https://d5qf5duw6o0fe.cloudfront.net/doctor/5a2c0a69e4b0e4fa266e0180/followUpMessage/1512909942474/2_user.png?Expires=1512910336&Signature=b4ykk8mvNCpzaGlwg0smiK4LuM-cQ31brqVJ3JoY-n6JC6yunuoS2bnI7lk~zhGF5wE-PKOGG3eIBryPHwkpZBNSSEWPJn8p8IU1plI3qwGewm85PbLqEzlcIBFUeHqQpuEZLaw8EQMv7jcGDInEgiif4rl7SaFF-IHZ3Ix0YTvmTSxPtquZJ6IX~RSBSAjs~7r7qUR-9vyJXjNyC-Dk-r3nW2aJ6b1yyL0ba9kVX8WhHd3WX9heozx6k9by7I7~V8ObKwc0O5DSB3RaM97917r63-TSQK-c44AFCwHLQLxKdvnWhhda67ZaNcxryG1O7txNRMESq-ZZsZCjIxbn7A__&Key-Pair-Id=APKAJKM62WTRWOS5N27Q" } ``` > Make sure to replace `X-Auth-Token` with your API key. This API will return the file URL for view. ### HTTP Request `POST http://localhost:8080/DoctorOnDemand/api/doctor/followup/getUrl/1512909942596/5a2d2c5ce4b0e4fa266e02a6 ` ### Query Parameters Parameter | Description | Type | Optional/Required --------- | ------------ | ---- | ---------------- {fileID} | need to pass the file id in the request url | String | Required {followupID} | need to pass the follow-up id in the request url | String | Required ### Response Parameter | Description | Type --------- | ----------- | ---- url | It will return the link for view the image | String `Authorization: X-Auth-Token` <aside class="notice"> You must replace <code>X-Auth-Token</code> with your personal API key. </aside>
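As a supplement to the curl calls above, a hedged Go sketch of the "send message" request; the host, `X-Auth-Token`, and sender/recipient IDs are the placeholder values from the documentation's own examples and would need to be replaced with real ones.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Body fields match the documented content/senderId/recipientId parameters.
	payload, _ := json.Marshal(map[string]string{
		"content":     "Hi Jerry, how are you feeling now.?",
		"senderId":    "5a2c0a69e4b0e4fa266e0180",
		"recipientId": "58e2268be4b0b3b8e551c825",
	})

	req, err := http.NewRequest(http.MethodPost,
		"http://api.endpoint.eyecarelive/DoctorOnDemand/api/doctor/followup",
		bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	// Same headers as the curl example; the token shown here is a placeholder.
	req.Header.Set("X-Auth-Token", "patientuser@cooldoctors.io:doctor:1515429239598:a0e476b75ffbacedd65e555b5304b222")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```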
32.65272
527
0.687981
eng_Latn
0.395757
a941b7df228a0abdb022dd767f6f70d8a2d7cebd
201
md
Markdown
addins/cfsample3/README.md
OfficeDev/CustomFunctions
3c87f1c021be853be888a506df172efda58d0907
[ "MIT" ]
18
2019-02-20T23:52:24.000Z
2021-09-28T04:19:40.000Z
addins/cfsample3/README.md
OfficeDev/CustomFunctions
3c87f1c021be853be888a506df172efda58d0907
[ "MIT" ]
6
2019-01-15T00:54:20.000Z
2020-05-20T21:32:06.000Z
addins/cfsample3/README.md
OfficeDev/CustomFunctions
3c87f1c021be853be888a506df172efda58d0907
[ "MIT" ]
16
2018-07-30T20:01:51.000Z
2021-10-04T10:20:15.000Z
# Purpose

This add-in is used to test a single, shared runtime for its components. The add-in contains:

- A ShowTaskpane button
- A UI-less ribbon button handler

# Maintainers

ylu0826
madhavagrawal17
28.714286
94
0.78607
eng_Latn
0.996491
a941e6327a91405983c59ff3fa245c469d6662e9
1,977
md
Markdown
CONTRIBUTING.md
LuisReinoso/waterline-query-language-parse
2cd94e130d0612680eb86188131236d9345b9e81
[ "MIT" ]
1
2022-02-01T03:54:50.000Z
2022-02-01T03:54:50.000Z
CONTRIBUTING.md
LuisReinoso/waterline-query-language-parse
2cd94e130d0612680eb86188131236d9345b9e81
[ "MIT" ]
3
2018-10-19T17:00:56.000Z
2019-01-29T21:31:03.000Z
CONTRIBUTING.md
LuisReinoso/waterline-query-language-parse
2cd94e130d0612680eb86188131236d9345b9e81
[ "MIT" ]
null
null
null
## Contenido

- [Español](#spanish)
- [English](#english)

<a id="spanish"></a>

## Instrucciones

Estamos muy contentos de que estés leyendo esto, porque necesitamos desarrolladores voluntarios para ayudar a que este proyecto llegue a buen término. 👏

Estos pasos lo guiarán a través de la contribución a este proyecto:

- Bifurca el repositorio (fork).
- Clónalo e instala dependencias.

  `git clone https://github.com/LuisReinoso/waterline-query-language-parser`

  `npm install`

- Agrega pruebas (si prefieres TDD, haz esto primero): `npm run test:watch`
- Asegúrate de que los comandos `npm run build` y `npm run test:prod` estén funcionando.
- Realiza y confirma tus cambios: `git add .` y `npm run commit`
- Finalmente envía una [GitHub Pull Request](https://github.com/LuisReinoso/waterline-query-language-parser/compare?expand=1) con una lista clara de lo que has hecho (lee más [acerca de pull requests](https://help.github.com/articles/about-pull-requests/)). Asegúrate de que todas tus confirmaciones sean atómicas (una característica por confirmación).

---

<a id="english"></a>

## Instructions

We're really glad you're reading this, because we need volunteer developers to help this project come to fruition. 👏

These steps will guide you through contributing to this project:

- Fork the repo
- Clone it and install dependencies

  `git clone https://github.com/LuisReinoso/waterline-query-language-parser`

  `npm install`

- Add tests (if you prefer TDD, do this first): `npm run test:watch`
- Make sure the commands `npm run build` and `npm run test:prod` are working.
- Make and commit your changes: `git add .` and `npm run commit`
- Finally send a [GitHub Pull Request](https://github.com/LuisReinoso/waterline-query-language-parser/compare?expand=1) with a clear list of what you've done (read more [about pull requests](https://help.github.com/articles/about-pull-requests/)). Make sure all of your commits are atomic (one feature per commit).
45.976744
356
0.755185
spa_Latn
0.33077
a94284029f3760f7fb91b5bbbe2de6a46e139272
11,661
md
Markdown
_posts/2021-09-25-mimikatz_privilege.md
loong716/loong716.github.io
56252e1b8be052e6aa53129b42d821c7ecdbcd85
[ "MIT" ]
null
null
null
_posts/2021-09-25-mimikatz_privilege.md
loong716/loong716.github.io
56252e1b8be052e6aa53129b42d821c7ecdbcd85
[ "MIT" ]
null
null
null
_posts/2021-09-25-mimikatz_privilege.md
loong716/loong716.github.io
56252e1b8be052e6aa53129b42d821c7ecdbcd85
[ "MIT" ]
null
null
null
--- title: 从mimikatz学习Windows安全之访问控制模型(三) author: Loong716 date: 2021-09-25 14:10:00 +0800 categories: [Pentest] tags: [mimikatz] --- 文章首发于中安网星公众号,原文地址:[从mimikatz学习Windows安全之访问控制模型(三)](https://mp.weixin.qq.com/s/Jbi5HwnCCTDNhmL_M5wSCQ) * toc {:toc} 作者:Loong716@[Amulab](https://github.com/Amulab) ## 0x00 前言 在之前的文章中,分别向大家介绍了Windows访问控制模型中的SID和Access Token,本篇文章中将为大家介绍最后一个概念——特权 Windows操作系统中许多操作都需要有对应的特权,特权也是一种非常隐蔽的留后门的方式。在AD域中,一些特权在**Default Domain Controller Policy**组策略中被授予给一些特殊的组,这些组的成员虽然不是域管,但如果被攻击者控制同样能给AD域带来巨大的风险 因此对防御者来讲,排查用户的特权配置也是重中之重,本文将对一些比较敏感的特权进行介绍,便于防御者更好的理解特权的概念以及进行排查 ## 0x01 令牌中的Privilege 特权是一个用户或组在本地计算机执行各种系统相关操作(关闭系统、装载设备驱动程序、改变系统时间)的权限,特权与访问权限的区别如下: + 特权控制账户对系统资源和系统相关任务的访问,而访问权限控制对安全对象(可以具有安全描述符的对象)的访问 + 系统管理员为用户或组指派特权,而系统根据对象的DACL中的ACE授予或拒绝对安全对象的访问,有时拥有特权可以忽略ACL的检查 在之前介绍Access Token的文章中我们已经了解过了token的基本结构,其中有一部分表示了该用户及该用户所属组所拥有的特权,如下图所示: ![1632456879323.png](https://i.loli.net/2021/09/25/4LzTdyX8sNqhUAH.png) 通常我们会使用`whoami /priv`命令查看当前用户所拥有的特权,默认情况下大部分特权是禁用状态,在使用时需要启用 ![1632456864605.png](https://i.loli.net/2021/09/25/r69bRiw8zJQY72q.png) ## 0x02 mimikatz的privilege模块 mimikatz中的privilege模块主要有以下功能,下图中第一个红框中的部分是为当前进程启用一些指定的特权,第二个红框中的`id`和`name`分别支持指定特权的id和名称,并为当前进程启用id和名称对应的特权 ![1632456845760.png](https://i.loli.net/2021/09/25/CDLhvoymJ9TPMWb.png) 通常我们比较通用的启用进程特权的方法是这样的,代码如下: ``` cpp BOOL GetDebugPrivilege() { BOOL status = FALSE; HANDLE hToken; if (OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken)) { TOKEN_PRIVILEGES tokenPrivs; tokenPrivs.PrivilegeCount = 1; if (LookupPrivilegeValueW(NULL, SE_DEBUG_NAME, &tokenPrivs.Privileges[0].Luid)) { tokenPrivs.Privileges[0].Attributes = TRUE ? SE_PRIVILEGE_ENABLED : 0; if (AdjustTokenPrivileges(hToken, FALSE, &tokenPrivs, sizeof(tokenPrivs), NULL, NULL)) { status = TRUE; } } else wprintf(L"[!] LookupPrivilegeValueW error: %u when get debug privilege.\n", GetLastError()); CloseHandle(hToken); } else wprintf(L"[!] OpenProcessToken error: %u when get debug privilege.\n", GetLastError()); return status; } ``` 而mimikatz是通过调用一个未文档化的API`RtlAdjustPrivilege()`,该API的功能是对当前进程或线程启用/禁用指定的特权,共有四个参数: + **ULONG Privilege**:需要操作的特权的ID + **BOOLEAN Enable**:启用或禁用的标志,1为启用,0为禁用 + **BOOLEAN CurrentThread**:指定是否为当前线程,1则设置线程令牌,0则设置进程令牌 + **PBOOLEAN Enabled**:该特权修改之前是禁用的还是启用的 ``` cpp NTSTATUS RtlAdjustPrivilege ( ULONG Privilege, // [In] BOOLEAN Enable, // [In] BOOLEAN CurrentThread, // [In] PBOOLEAN Enabled // [Out] ) ``` 如果参数指定的是特权的名称,则会先调用`LookupPrivilegeValue()`拿到特权名称对应的特权ID,然后再调用`RtlAdjustPrivilege()`来启用特权 ![1632456816631.png](https://i.loli.net/2021/09/25/5JfluC92hyGNxZL.png) 前面提到的是将禁用的特权启用,而如果想给一个账户赋予特权,则可以通过本地策略/组策略来设置,也可以通过`LsaAddAccountRights()`这个API,这里不再赘述 ## 0x03 危险的特权 这里主要介绍11个危险的特权,在检查域内安全时要格外注意 ### 1. SeDebugPrivilege 通常情况下,用户只对属于自己的进程有调试的权限,但如果该用户Token中被赋予`SeDebugPrivilege`并启用时,该用户就拥有了调试其他用户进程的权限,此时就可以对一些高权限进程执行操作以获取对应的权限,以进程注入为例: ![1632473290240.png](https://i.loli.net/2021/09/25/FIPUa213zLq64fN.png) ### 2. SeBackupPrivilege 该特权代表需要执行备份操作的权限,授予当前用户对所有文件的读取权限,不受文件原本的ACL限制,主要有以下利用思路: 1. 备份SAM数据库 2. 备份磁盘上高权限用户的敏感文件 3. 
域内在域控上备份ntds.dit 下图以导出注册表中的SAM和SYSTEM为例 ![1632457002019.png](https://i.loli.net/2021/09/25/i1yjRz3BwO7In9A.png) 观察上图可能有师傅会问:为什么前面显示`SeBackupPrivilege`是Disable状态,却能成功执行reg save呢?一开始我猜测可能是reg.exe在执行操作前默认会启用一些特权,随后通过对reg.exe的逆向也印证了这点: ![1632457018350.png](https://i.loli.net/2021/09/25/XvbyhxVwcmMEQRB.png) 在域环境中,**Backup Operators**和**Server Operators**组成员允许在域控进行本地登录,并在域控上拥有`SeBackupPrivilege`特权,所以也可以对ntds.dit进行备份操作,再备份注册表中的SYSTEM和SECURITY,进而解密ntds.dit 需要注意的是在调用`CreateFile()`时,需要指定`FILE_FLAG_BACKUP_SEMANTICS`标志来表示正在为备份或恢复操作打开或创建文件,从而覆盖文件的ACL检查 ``` cpp HANDLE hFile = CreateFileW( L"C:\\Windows\\System32\\1.txt", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL); ``` ### 3. SeRestorePrivilege 该特权是执行还原操作所需的权限,拥有此特权的用户对所有文件拥有写权限,不受文件原本的ACL限制,主要利用思路如下: 1. 修改注册表,实现修改服务、修改启动项等操作 2. 写文件进行DLL劫持 ![1632457344306.png](https://i.loli.net/2021/09/25/Dnjgio7PBWylzpZ.png) 域环境中,**Backup Operators**和**Server Operators**组成员同样在域控上也有`SeRestorePrivilege`,因此也可以利用上述操作在域控上完成提权和维权等操作 需要注意的仍是调用API时,需要指定对应的标志,如`CreateFile()`需要指定`FILE_FLAG_BACKUP_SEMANTICS`,`RegCreateKeyEx()`需要指定`REG_OPTION_BACKUP_RESTORE` ### 4. SeTakeOwnershipPrivilege 该特权用来修改目标对象的所有权,也就是说拥有该特权的用户可以修改任意对象的所有者(Owner),而所有者对该对象是有WriteDACL的权限的,可以任意修改对象的ACL 所以如果拥有了`SeTakeOwnershipPrivilege`,就相当于对任意对象有读写的权限,利用方式和`SeRestorePrivilege`、`SeBackupPrivilege`基本相同 ``` cpp GetTakeOwnershipPriv(); ... status = SetNamedSecurityInfo( L"C:\\Windows\\System32\\localspl.dll", SE_FILE_OBJECT, OWNER_SECURITY_INFORMATION, user->User.Sid, NULL, NULL, NULL); ``` 如下图所示,可以将对象的Owner从TrustedInstaller修改为当前用户: ![1631702263345.png](https://i.loli.net/2021/09/25/VJXHfeYOrs1TMEu.png) ### 5. SeImpersonatePrivilege 当`SeImpersonatePrivilege`特权分配给用户时,表示允许该用户运行的程序模拟客户端,默认Service账户(如MSSQL、IIS的服务账户)和管理员账户会拥有该权限 该权限也是一些potato提权的重要条件,可以通过printbug+`ImpersonateNamedPipeClient()`等等许多方式获取到高权限令牌,进而执行模拟,此处以pipepotato为例: ![1632293320896.png](https://i.loli.net/2021/09/25/VT6gdQXEjc8laFR.png) ### 6. SeAssignPrimaryTokenPrivilege 该特权表示可以为进程分配主令牌,经常与`SeImpersonatePrivilege`特权配合使用在potato的提权中。拥有该特权时,我们可以使用非受限的令牌调用`CreateProcessAsUser()`;或者先创建挂起的进程,再通过`NtSetInformationProcess()`来替换进程的token 顺便提一嘴,之前文章中提到的mimikatz的token::run模块在使用时可能会出现0x00000522错误,如下图所示 ![1632299817792.png](https://i.loli.net/2021/09/25/oUQWZflcKrT6sPY.png) 这是因为在调用`CreateProcessAsUser()`时,如果传入的是非受限令牌,那么则需要`SeAssignPrimaryTokenPrivilege`特权,有关受限令牌的概念可阅读微软文档:https://docs.microsoft.com/en-us/windows/win32/secauthz/restricted-tokens ![1632300088335.png](https://i.loli.net/2021/09/25/KWdv2CXBstaehxI.png) 因此该功能应该是用来从SYSTEM权限窃取其他用户的Access Token(因为默认SYSTEM才有`SeAssignPrimaryTokenPrivilege`),如果想要非SYSTEM用户调用的话可以考虑改为用`CreateProcessWithToken()`创建进程 ![1632303407818.png](https://i.loli.net/2021/09/25/BIYNSfxormyk8HX.png) ### 7. SeLoadDriverPrivilege 该权限用来加载或卸载设备的驱动,在windows中用户可以通过`NTLoadDriver()`进行驱动的加载,其DriverServiceName参数需要传入驱动配置的注册表项 ``` cpp NTSTATUS NTLoadDriver( _In_ PUNICODE_STRING DriverServiceName // \Registry\Machine\System\CurrentControlSet\Services\DriverName ); ``` 其中DriverName表示启动名称,该键下至少应有两个值: + **ImagePath**:REG_EXPAND_SZ类型,“\??\C:\path\to\driver.sys” 格式 + **Type**:REG_WORD类型,其值需要被设置为1,表示KENERL_DRIVER 如果是非管理员权限,默认无法操作HKLM注册表项,则可以在HKEY_CURRENT_USER (HKCU) 下创建注册表项并设置驱动程序配置设置,再调用`NTLoadDriver()`指定之前创建的注册表项来注册驱动,代码可参考:https://github.com/TarlogicSecurity/EoPLoadDriver/ 此时可以利用一些有漏洞的驱动程序来实现LPE等操作,以Capcom.sys为例: ![1631937778041.png](https://i.loli.net/2021/09/25/hz5Ub4HKfxkIBNR.png) 除此之外,在AD域中`SeLoadDriverPrivilege`权限在域控上默认授予**Print Operators**组,使得该组用户可以远程在域控加载打印机驱动程序,前一段时间的Printnightmare便是绕过了该权限的检查 ### 8. 
SeCreateTokenPrivilege 该特权表示:允许拥有此特权的进程可以通过`ZwCreateToken()`创建Access Token ``` cpp NTSATUS ZwCreateToken( OUT PHANDLE TokenHandle, IN ACCESS_MASK DesiredAccess, IN POBJECT_ATTRIBUTES ObjectAttributes, IN TOKEN_TYPE TokenType, IN PLUID AuthenticationId, IN PLARGE_INTEGER ExpirationTime, IN PTOKEN_USER TokenUser, IN PTOKEN_GROUPS TokenGroups, IN PTOKEN_PRIVILEGES TokenPrivileges, IN PTOKEN_OWNER TokenOwner, IN PTOKEN_PRIMARY_GROUP TokenPrimaryGroup, IN PTOKEN_DEFAULT_DACL TokenDefaultDacl, IN PTOKEN_SOURCE TokenSource ); ``` 那么我们肯定会想:能不能直接利用该API创建一个SYSTEM的token,然后起进程?很遗憾,该权限不允许用户使用他们刚创建的令牌 但我们可以利用模拟,创建一个当前用户的、包含特权组SID的token,因为只要令牌是针对同一个用户的,并且完整性级别小于或等于当前进程完整性级别(完整性级别可以通过构造令牌时来设置),就可以不需要`SeImpersonatePrivilege`特权,对线程设置模拟令牌 以创建Group List中包含administrators组SID的token为例,在创建token前修改了组SID、特权列表,最初成功利用模拟令牌创建线程,在system32下写入文件: ![1632387608993.png](https://i.loli.net/2021/09/25/YMvlONm4aAxdngV.png) 需要注意的是在Win10 >= 1809和Windows Server 2019,以及安装了KB4507459的Win10和2016上,我们不能使用生成的模拟令牌,会爆“1346:未提供所需的模拟级别,或提供的模拟级别无效”错误 ![1632385533225.png](https://i.loli.net/2021/09/25/krMoRHtA6BPl4Ki.png) 幸运的是已经有大牛发现了绕过的方法,就是把Token的AuthenticationID从`SYSTEM_LUID`(0x3e7)修改为`ANONYMOUS_LOGON_LUID`(0x3e6),最终成功使用模拟令牌向system32目录写入了文件: ![1632385686357.png](https://i.loli.net/2021/09/25/QJCAbSk71c8nFoe.png) ### 9. SeTcbPrivilege 该特权标志着其拥有者是操作系统的一部分,拥有该特权的进程可利用`LsaLogonUser()`执行创建登录令牌等操作,因此可以充当任意用户 ``` cpp NTSTATUS LsaLogonUser( HANDLE LsaHandle, PLSA_STRING OriginName, SECURITY_LOGON_TYPE LogonType, ULONG AuthenticationPackage, PVOID AuthenticationInformation, ULONG AuthenticationInformationLength, PTOKEN_GROUPS LocalGroups, PTOKEN_SOURCE SourceContext, PVOID *ProfileBuffer, PULONG ProfileBufferLength, PLUID LogonId, PHANDLE Token, PQUOTA_LIMITS Quotas, PNTSTATUS SubStatus ); ``` 根据微软官方文档,当以下一项获多项为真时,`LsaLogonUser()`调用者需要`SeTcbPrivilege`特权: + 使用了 Subauthentication 包 + 使用 KERB_S4U_LOGON,调用者请求模拟令牌 + `LocalGroups`参数不为NULL 我们主要关注第二点和第三点,从文档的描述来看,如果使用KERB_S4U_LOGON来登录(也可以使用MSV1_0_S4U_LOGON,但文档中未体现),我们就可以拿到一张模拟令牌,并且可以在`LocalGroups`参数给该令牌添加附加组: ``` cpp WCHAR systemSID[] = L"S-1-5-18"; ConvertStringSidToSid(systemSID, &pExtraSid); pGroups->Groups[pGroups->GroupCount].Attributes = SE_GROUP_ENABLED | SE_GROUP_ENABLED_BY_DEFAULT | SE_GROUP_MANDATORY; pGroups->Groups[pGroups->GroupCount].Sid = pExtraSid; pGroups->GroupCount++; ``` 此时我们就可以拿到一张拥有SYSTEM的SID的令牌,如何在没有`SeImpersonatePrivilege`特权的情况下使用模拟令牌在`SeCreateTokenPrivilege`的利用中已经提到过了 如下图所示,成功在system32下写入文件: ![1632393013344.png](https://i.loli.net/2021/09/25/xtHo8ZDf1Mjh2WY.png) 当然,如果在域内,也可以尝试KERB_S4U_LOGON来获取域内用户的模拟令牌 ### 10. SeTrustedCredmanAccessPrivilege 该特权用来访问凭据管理器,备份凭据管理器中的凭据需要使用`CredBackupCredentials()`这一API,而调用该API需要拥有`SeTrustedCredmanAccessPrivilege`特权,该特权默认授予winlogon.exe和lsass.exe这两个进程 ``` cpp BOOL WINAPI CredBackupCredentials( HANDLE Token, LPCWSTR Path, PVOID Password, DWORD PasswordSize, DWORD Flags); ``` 为了测试我在凭据管理器中手动新增了一条凭据,用于访问192.168.47.20,用户名和密码为admin/adminpass ![1632281596372.png](https://i.loli.net/2021/09/25/3ewhivfcAqmNV1p.png) 利用方式即窃取winlogon.exe的token,并调用`CredBackupCredentials()`对凭据管理器中的凭据进行备份(指定加密密码为NULL),最终再调用`CryptUnprotectData()`对备份的文件进行解密。此处代码参考:https://github.com/BL0odz/POSTS/blob/main/DumpCred_TrustedTokenPriv/main.cpp ![1632281674885.png](https://i.loli.net/2021/09/25/9Fndoh1uLpGskrK.png) ### 11. 
SeEnableDelegationPrivilege 在域内配置无约束委派和约束委派时(这里特指传统的约束委派,不包括基于资源的约束委派),都是修改的LDAP中的`userAccountControl`属性来配置(当然约束委派还要修改`msDS-AllowedToDelegateTo`来配置委派可以访问的服务),而想要配置无约束委派的约束委派,不仅需要对属性有写权限,还需要在域控有`SeEnableDelegationPrivilege`特权 ![1632367655922.png](https://i.loli.net/2021/09/25/13BgRpIfvb9syCu.png) 虽然该利用对攻击者来说较为苛刻,但如果发现域内组策略给普通账户配置了`SeEnableDelegationPrivilege`特权,就需要检查是否是正常的业务需求 ## 0x04 检测与缓解 检测思路: + 查看域内Server Operators、Backup Operators、Print Operators等特权组内是否有不应出现的用户 + 查看域内组策略配置文件,是否有将特权授予不常见的SID + 检测“4672: 分配给新登录的特殊权限”日志 缓解思路: + 非业务必需情况下不为普通账户赋予特权 + 不影响业务的情况下,可以取消部分管理员账户的`SeDebugPrivilege`等特权 ## 0x05 参考 https://docs.microsoft.com/ https://github.com/gentilkiwi/mimikatz https://bbs.pediy.com/thread-76552.htm https://3gstudent.github.io/%E6%B8%97%E9%80%8F%E6%8A%80%E5%B7%A7-Windows%E4%B9%9D%E7%A7%8D%E6%9D%83%E9%99%90%E7%9A%84%E5%88%A9%E7%94%A8 https://github.com/hatRiot/token-priv/blob/master/abusing_token_eop_1.0.txt https://hackinparis.com/data/slides/2019/talks/HIP2019-Andrea_Pierini-Whoami_Priv_Show_Me_Your_Privileges_And_I_Will_Lead_You_To_System.pdf https://www.tiraniddo.dev/2021/05/dumping-stored-credentials-with.html https://www.tarlogic.com/blog/abusing-seloaddriverprivilege-for-privilege-escalation/ https://decoder.cloud/2019/07/04/creating-windows-access-tokens/
29.521519
203
0.796673
yue_Hant
0.925598
a942b91876b14fea8a0cde3bde392130f6ce2346
12
md
Markdown
README.md
NagateBu/works-cli
e320a531488eb2dff8907ec60e0229626249efd8
[ "MIT" ]
null
null
null
README.md
NagateBu/works-cli
e320a531488eb2dff8907ec60e0229626249efd8
[ "MIT" ]
null
null
null
README.md
NagateBu/works-cli
e320a531488eb2dff8907ec60e0229626249efd8
[ "MIT" ]
null
null
null
# Carry-Cli
6
11
0.666667
vie_Latn
0.433238
a943df67e326086b631fc3e56060d45977e3ca02
348
md
Markdown
Access/Super User/readme.md
jdc20181/BusinessOS
764f76e6b1cc07e458477a5f03d9f6289381aef0
[ "MIT" ]
null
null
null
Access/Super User/readme.md
jdc20181/BusinessOS
764f76e6b1cc07e458477a5f03d9f6289381aef0
[ "MIT" ]
null
null
null
Access/Super User/readme.md
jdc20181/BusinessOS
764f76e6b1cc07e458477a5f03d9f6289381aef0
[ "MIT" ]
null
null
null
Manager Perms, e.g. to delegate access from a standard employee to management (non-IT admin).

Access to:

- Management of Favorites (this can be limited to a specific user group, as this is how the favorites system works within the Business OS Browser)
- Add, delete, modify, and suspend users
- Set limits on user accounts, e.g. limited to only the calculator
38.666667
141
0.781609
eng_Latn
0.993007
a945061cf8836d0762fea34b28a0ce5347ba3f07
482
md
Markdown
pages/applications.md
pranavkmar/jekyll-blog
f31695300c714f7379184c4229c7bee4583ccf60
[ "MIT" ]
null
null
null
pages/applications.md
pranavkmar/jekyll-blog
f31695300c714f7379184c4229c7bee4583ccf60
[ "MIT" ]
null
null
null
pages/applications.md
pranavkmar/jekyll-blog
f31695300c714f7379184c4229c7bee4583ccf60
[ "MIT" ]
null
null
null
--- layout: page show_meta: false title: "Applications!" subheadline: "Layouts of Feeling Responsive" header: image_fullwidth: "header_unsplash_5.jpg" permalink: "/applications/" sidebar: right --- <ul> {% for post in site.categories.applications %} <li><a href="{{ site.url }}{{ post.url }}">{{ post.title }}</a></li> {% endfor %} </ul> <div class="row"> <div class="medium-8 columns t30"> {% include pagination.html %} </div><!-- /.medium-7.columns --> </div>
21.909091
72
0.636929
eng_Latn
0.544764
a9457a169aac0982dac421907aebdafada9a3415
13,181
md
Markdown
docs/migrate/azure-best-practices/contoso-migration-overview.md
milanhybner/cloud-adoption-framework.cs-cz
6f1b4a99b5ce58ac39facab09293e300d022182e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/migrate/azure-best-practices/contoso-migration-overview.md
milanhybner/cloud-adoption-framework.cs-cz
6f1b4a99b5ce58ac39facab09293e300d022182e
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/migrate/azure-best-practices/contoso-migration-overview.md
milanhybner/cloud-adoption-framework.cs-cz
6f1b4a99b5ce58ac39facab09293e300d022182e
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Příklady migrace aplikací pro Azure description: Pomocí architektury cloudového přijetí pro Azure se dozvíte, jak migrovat místní infrastrukturu do cloudu Microsoft Azure. author: BrianBlanchard ms.author: brblanch ms.date: 02/25/2020 ms.topic: conceptual ms.service: cloud-adoption-framework ms.subservice: migrate ms.openlocfilehash: b7ea46fb1723e0603aa7251f135caa51b9f998ad ms.sourcegitcommit: ea63be7fa94a75335223bd84d065ad3ea1d54fdb ms.translationtype: MT ms.contentlocale: cs-CZ ms.lasthandoff: 03/27/2020 ms.locfileid: "80356067" --- # <a name="application-migration-patterns-and-examples"></a>Příklady a vzory migrace aplikací V této části příručky Architektura přechodu na cloud najdete příklady několika běžných scénářů migrace, které demonstrují, jak se dá místní infrastruktura migrovat do cloudu [Microsoft Azure](https://azure.microsoft.com/overview/what-is-azure). ## <a name="introduction"></a>Úvod Azure poskytuje přístup ke komplexní sadě cloudových služeb. Jako vývojáři a IT specialisté můžete pomocí těchto služeb sestavovat, nasazovat a spravovat aplikace s využitím celé řady nástrojů a rozhraní a globální sítě datacenter. Když vaše firma čelí výzvám souvisejícím s digitalizací, cloud Azure vám pomůže zjistit, jak optimalizovat prostředky a operace, zapojit zákazníky i zaměstnance a transformovat vaše produkty. I přes všechny výhody, které cloud poskytuje z hlediska rychlosti a flexibility, minimalizace nákladů, vysokého výkonu a spolehlivost, si však v Azure uvědomujeme, že řada organizací bude potřebovat ještě nějakou dobu provozovat místní datacentra. V reakci na překážky přechodu do cloudu Azure poskytuje strategii hybridního cloudu, která propojí vaše místní datacentra s veřejným cloudem Azure. Pomocí cloudových prostředků Azure, jako je například služba Azure Backup, můžete zajistit ochranu místních prostředků nebo pomocí analýz Azure získat přehled o místních úlohách. V rámci strategie hybridního cloudu poskytuje Azure stále větší počet řešení pro migraci místních aplikací a úloh do cloudu. Pomocí jednoduchých postupů můžete komplexně vyhodnotit své místní prostředky, abyste zjistili, jak si povedou v cloudu Azure. Když budete mít k dispozici podrobné posouzení, můžete bez obav migrovat prostředky do Azure. Po zprovoznění prostředků v Azure je můžete optimalizovat, abyste zachovali a vylepšili úroveň přístupu, flexibility, zabezpečení a spolehlivosti. ## <a name="migration-patterns"></a>Vzory migrace Strategie migrace do cloudu se dají rozdělit do čtyř hlavních vzorů: změna hostování, refaktorování, změna architektury nebo opětovné sestavení. Strategie, kterou použijete, závisí na vašich obchodních faktorech a cílech migrace. Můžete také využít několik vzorů. Můžete se třeba rozhodnout, že budete chtít znovu hostovat jednoduché aplikace nebo aplikace, které nejsou pro vaši firmu důležité, ale přearchitektovat aplikace, které jsou složitější a důležité pro podnikání. Podívejme se na tyto vzory blíž. <!-- markdownlint-disable MD033 --> **Vzor** | **Definice** | **Kdy ho použít** --- | --- | --- **Změna hostitele** | Často se označuje jako migrace _výtahu a posunutí_ . Tato možnost nevyžaduje změny kódu a umožňuje rychlou migraci stávajících aplikací do Azure. Každá aplikace se migruje tak, jak je, s výhodami cloudu a bez rizik a nákladů spojených se změnami kódu. 
| Když potřebujete rychle přesunout aplikace do cloudu.<br/><br/> Když chcete aplikaci přesunout a neměnit ji.<br/><br/> Když jsou vaše aplikace navržené tak, aby po migraci mohly využít škálovatelnost [Azure IaaS](https://azure.microsoft.com/overview/what-is-iaas) .<br/><br/> Když jsou aplikace pro vaši firmu důležité, ale nepotřebujete okamžitě změny jejich funkcí. **Refaktoring** | Refaktoring, který se často označuje jako „opětovné balení“, vyžaduje v aplikacích minimální změny, aby se mohly připojit [k Azure PaaS](https://azure.microsoft.com/overview/what-is-paas) a používat cloudové nabídky.<br/><br/> Můžete například existující aplikace migrovat do služeb Azure App Service nebo Azure Kubernetes Service (AKS).<br/><br/> Můžete také refaktorovat relační i nerelační databáze do služeb, jako jsou Azure SQL Database Managed Instance, Azure Database for MySQL, Azure Database for PostgreSQL nebo Azure Cosmos DB. | Pokud se vaše aplikace dá snadno znovu zabalit pro práci v Azure.<br/><br/> Pokud chcete použít inovativní postupy DevOps, které poskytuje Azure, nebo uvažujete o DevOps s využitím kontejnerové strategie pro úlohy.<br/><br/> V souvislosti s refaktoringem je potřeba uvažovat o přenositelnosti stávajícího základu kódu a dostupnosti dovedností pro vývoj. **Změna architektury** | Změna architektury pro migraci se zaměřuje na úpravu a rozšíření funkčnosti aplikace a základu kódu s cílem optimalizovat architekturu aplikace pro zajištění cloudové škálovatelnosti.<br/><br/> Můžete například monolitickou aplikaci rozdělit do skupiny mikroslužeb, které fungují dohromady a snadno se škálují.<br/><br/> Nebo můžete změnit architekturu relačních i nerelačních databází na plně spravované databázové řešení, jako je Azure SQL Database Managed Instance, Azure Database for MySQL, Azure Database for PostgreSQL nebo Azure Cosmos DB. | Když vaše aplikace potřebuje velké úpravy pro za účelem začlenění nových funkcí nebo zajištění efektivnějšího fungování na cloudové platformě.<br/><br/> Pokud chcete používat stávající investice do aplikací, splňovat požadavky na škálovatelnost, použít inovativní postupy DevOps a minimalizovat používání virtuálních počítačů. **Nové sestavení** | Opětovné sestavení jde ještě o krok dál a aplikaci znovu sestaví od začátku pomocí cloudových technologií Azure.<br/><br/> Můžete například vytvořit aplikace se zeleným polem s technologiemi [nativní pro Cloud](https://azure.com/cloudnative) , jako je Azure Functions, Azure AI, Azure SQL Database Managed Instance a Azure Cosmos DB. | Když chcete zajistit rychlý vývoj a vaše stávající aplikace mají omezené funkce a životnost.<br/><br/> Až budete připravení urychlit obchodní inovace (včetně postupů DevOps, které poskytuje Azure), sestavte nové aplikace pomocí technologií nativních pro cloud a využijte pokroky v oblasti AI, blockchainů a IoT. <!-- markdownlint-enable MD033 --> ## <a name="migration-example-articles"></a>Články s příklady migrace Tato část obsahuje příklady několika běžných scénářů migrace. Každý příklad zahrnuje základní informace a scénáře nasazení, které ilustrují způsob nastavení infrastruktury migrace a vyhodnocení vhodnosti místních prostředků pro migraci. Postupně budeme do této části přidávat další články. ![Běžné projekty migrace/modernizace](./media/migration-patterns.png) *Kategorie běžných projektů migrace a modernizace* Články z této série jsou shrnuty níž. - Každý scénář migrace je založený na trochu jiných obchodních cílech, které určují migrační strategii. 
- Pro každý scénář nasazení poskytujeme informace o obchodních aspektech a cílech, navrženou architekturu, kroky pro realizaci migrace, doporučení pro vyčištění a další kroky po dokončení migrace. ### <a name="assessment"></a>Posouzení **Článek** | **Podrobnosti** --- | --- [Posouzení vhodnosti místních prostředků k migraci do Azure](../../plan/contoso-migration-assessment.md) | Tento článek s osvědčenými postupy v metodologii plánování popisuje, jak spustit posouzení místní aplikace běžící na VMware. V tomto článku příklad organizace posuzuje virtuální počítače aplikace pomocí služby Azure Migrate a SQL Server databáze aplikace pomocí Data Migration Assistant. ### <a name="infrastructure"></a>Infrastruktura **Článek** | **Podrobnosti** --- | --- [Nasazení infrastruktury Azure](./contoso-migration-infrastructure.md) | V tomto článku se dozvíte, jak organizace může připravit svoji místní infrastrukturu a infrastrukturu Azure pro migraci. Na příklad infrastruktury zavedený v tomto článku odkazují další ukázky uvedené v této části. ### <a name="windows-server-workloads"></a>Úlohy Windows Serveru **Článek** | **Podrobnosti** --- | --- [Změna hostitele aplikace na virtuální počítače Azure](./contoso-migration-rehost-vm.md) | V tomto článku najdete příklad migrace místních virtuálních počítačů aplikace do virtuálních počítačů Azure pomocí služby Site Recovery. [Změna architektury aplikace na kontejnery Azure a služby Azure SQL Database](./contoso-migration-rearchitect-container-sql.md) | Tento článek přináší příklad migrace aplikace při změně architektury webové vrstvy aplikace na kontejner Windows spuštěný ve službě Azure Service Fabric a databáze s využitím Azure SQL Database. ### <a name="linux-workloads"></a>Linuxové úlohy **Článek** | **Podrobnosti** --- | --- [Změna hostitele linuxové aplikace na virtuální počítače Azure a Azure Database for MySQL](./contoso-migration-rehost-linux-vm-mysql.md) | Tento článek přináší příklad migrace aplikace hostované na Linuxu na virtuální počítače Azure pomocí Site Recovery. Migruje databázi aplikace do služby Azure Database for MySQL pomocí MySQL Workbenche. [Změna hostitele linuxové aplikace na virtuální počítače Azure](./contoso-migration-rehost-linux-vm.md) | Tento příklad ukazuje, jak dokončit migraci aplikace založené na systému Linux a Shift do virtuálních počítačů Azure pomocí služby Site Recovery. ### <a name="sql-server-workloads"></a>Úlohy SQL Serveru **Článek** | **Podrobnosti** --- | --- [Změna hostitele aplikace na virtuální počítač Azure a spravovanou instanci Azure SQL Database](./contoso-migration-rehost-vm-sql-managed-instance.md) | V tomto článku najdete příklad migrace typu výtah a Shift do Azure pro místní aplikaci. Toto úsilí zahrnuje migraci front-end virtuálního počítače aplikace pomocí [Azure Site Recovery](https://docs.microsoft.com/azure/site-recovery/site-recovery-overview)a databáze aplikace do Azure SQL Database spravované Instance pomocí [Azure Database Migration Service](https://docs.microsoft.com/azure/dms/dms-overview). [Změna hostitele aplikace na virtuální počítače Azure a skupiny dostupnosti AlwaysOn pro SQL Server](./contoso-migration-rehost-vm-sql-ag.md) | Tento příklad ukazuje, jak migrovat aplikaci a data pomocí virtuálních počítačů s SQL Serverem hostovaných v Azure. K migraci virtuálních počítačů aplikace používá Site Recovery a k migraci databáze aplikace do clusteru SQL Server, který je chráněný skupinou dostupnosti Always On, používá službu Azure Database Migration Service. 
### <a name="aspnet-php-and-java-apps"></a>Aplikace ASP.NET, PHP a Java **Článek** | **Podrobnosti** --- | --- [Refaktoring aplikace do webové aplikace Azure a Azure SQL Database](./contoso-migration-refactor-web-app-sql.md) | Tento příklad ukazuje, jak migrovat místní aplikaci založenou na Windows do webové aplikace Azure, a databázi aplikace migruje do instance Azure SQL Serveru pomocí Data Migration Assistanta. [Refaktoring linuxové aplikace do více oblastí pomocí služeb Azure App Service, Azure Traffic Manager a Azure Database for MySQL](./contoso-migration-refactor-linux-app-service-mysql.md) | Tento příklad ukazuje, jak místní linuxovou aplikaci migrovat do webové aplikace Azure ve více oblastech Azure pomocí Azure Traffic Manageru integrovaného s GitHubem pro průběžné doručování. Databáze aplikace se migruje do instance Azure Database for MySQL. [Opětovné sestavení aplikace v Azure](./contoso-migration-rebuild.md) | Tento článek obsahuje příklad opětovného sestavení místní aplikace pomocí celé řady spravovaných služeb a funkcí Azure, včetně Azure App Service, Azure Kubernetes Service (AKS), Azure Functions, Azure Cognitive Services a Azure Cosmos DB. [Refaktoring Team Foundation Serveru do sady Azure DevOps Services](./contoso-migration-tfs-vsts.md) | Tento článek ukazuje příklad migrace místního nasazení Team Foundation Serveru do Azure DevOps Services v Azure. ### <a name="migration-scaling"></a>Škálování migrace **Článek** | **Podrobnosti** --- | --- [Škálování migrace do Azure](./contoso-migration-scale.md) | Tento článek popisuje, jak se ukázková organizace připravuje na škálování kompletní migrace do Azure. ### <a name="demo-apps"></a>Ukázkové aplikace V ukázkových článcích uvedených v této části se používají dvě ukázkové aplikace: SmartHotel360 a osTicket. - **SmartHotel360:** Tato aplikace byla vyvinutá Microsoftem jako testovací aplikace, kterou můžete použít při práci s Azure. Je k dispozici jako open source a můžete si ji stáhnout z [GitHubu](https://github.com/Microsoft/SmartHotel360). Jde o aplikaci v ASP.NET připojenou k databázi SQL Serveru. Ve scénářích probíraných v těchto článcích je aktuální verze této aplikace nasazená na dvou virtuálních počítačích VMware, na kterých běží Windows Server 2008 R2 a SQL Server 2008 R2. Tyto virtuální počítače aplikace jsou hostované v místním prostředí a spravuje je vCenter Server. - **osTicket:** Otevřená aplikace pro vytváření lístků ve zdrojové službě, která běží na Linux. Můžete si ji stáhnout z [GitHubu](https://github.com/osTicket/osTicket). Ve scénářích probíraných v těchto článcích je aktuální verze této aplikace nasazená v místním prostředí na dvou virtuálních počítačích VMware, na kterých běží Ubuntu 16.04 LTS, a to s využitím Apache 2, PHP 7.0 a MySQL 5.7
118.747748
911
0.805781
ces_Latn
0.999954
a945aee574480123b805e4bbeb603558250d96e8
3,271
md
Markdown
_pages/4/43.md
OKCody/starter
6b687ae08b90731276078731c71cb0942a2ff7e3
[ "BSD-3-Clause" ]
null
null
null
_pages/4/43.md
OKCody/starter
6b687ae08b90731276078731c71cb0942a2ff7e3
[ "BSD-3-Clause" ]
null
null
null
_pages/4/43.md
OKCody/starter
6b687ae08b90731276078731c71cb0942a2ff7e3
[ "BSD-3-Clause" ]
null
null
null
## Chapter 7. Introduction Lateral View of the Human Skull ![This image shows a side view of the human skull. The major parts of the cell are labeled.][1] Chapter Objectives After studying this chapter, you will be able to: - Describe the functions of the skeletal system and define its two major subdivisions - Identify the bones and bony structures of the skull, the cranial suture lines, the cranial fossae, and the openings in the skull - Discuss the vertebral column and regional variations in its bony components and curvatures - Describe the components of the thoracic cage - Discuss the embryonic development of the axial skeleton The skeletal system forms the rigid internal framework of the body. It consists of the bones, cartilages, and ligaments. Bones support the weight of the body, allow for body movements, and protect internal organs. Cartilage provides flexible strength and support for body structures such as the thoracic cage, the external ear, and the trachea and larynx. At joints of the body, cartilage can also unite adjacent bones or provide cushioning between them. Ligaments are the strong connective tissue bands that hold the bones at a moveable joint together and serve to prevent excessive movements of the joint that would result in injury. Providing movement of the skeleton are the muscles of the body, which are firmly attached to the skeleton via connective tissue structures called tendons. As muscles contract, they pull on the bones to produce movements of the body. Thus, without a skeleton, you would not be able to stand, run, or even feed yourself! Each bone of the body serves a particular function, and therefore bones vary in size, shape, and strength based on these functions. For example, the bones of the lower back and lower limb are thick and strong to support your body weight. Similarly, the size of a bony landmark that serves as a muscle attachment site on an individual bone is related to the strength of this muscle. Muscles can apply very strong pulling forces to the bones of the skeleton. To resist these forces, bones have enlarged bony landmarks at sites where powerful muscles attach. This means that not only the size of a bone, but also its shape, is related to its function. For this reason, the identification of bony landmarks is important during your study of the skeletal system. Bones are also dynamic organs that can modify their strength and thickness in response to changes in muscle strength or body weight. Thus, muscle attachment sites on bones will thicken if you begin a workout program that increases muscle strength. Similarly, the walls of weight-bearing bones will thicken if you gain body weight or begin pounding the pavement as part of a new running regimen. In contrast, a reduction in muscle strength or body weight will cause bones to become thinner. This may happen during a prolonged hospital stay, following limb immobilization in a cast, or going into the weightlessness of outer space. Even a change in diet, such as eating only soft food due to the loss of teeth, will result in a noticeable decrease in the size and thickness of the jaw bones. [1]: https://cnx.org/resources/1547fe02ba2b4fb03f73441803dbb4bdef2fe33b/700_Lateral_View_of_Skull-01.jpg
155.761905
954
0.804341
eng_Latn
0.999831
a945b1cedda90f99d94f9a9ed5099f3ddc337d61
238
md
Markdown
_posts/1968-12-12-nixon-smiley-writes-about-flamingo.md
MiamiMaritime/miamimaritime.github.io
d087ae8c104ca00d78813b5a974c154dfd9f3630
[ "MIT" ]
null
null
null
_posts/1968-12-12-nixon-smiley-writes-about-flamingo.md
MiamiMaritime/miamimaritime.github.io
d087ae8c104ca00d78813b5a974c154dfd9f3630
[ "MIT" ]
null
null
null
_posts/1968-12-12-nixon-smiley-writes-about-flamingo.md
MiamiMaritime/miamimaritime.github.io
d087ae8c104ca00d78813b5a974c154dfd9f3630
[ "MIT" ]
null
null
null
--- title: Nixon Smiley writes about Flamingo tags: - Dec 1968 --- Nixon Smiley writes about Flamingo "mayor" Carl Ross. [Photo] Newspapers: **Miami Morning News or The Miami Herald** Page: **8**, Section: **D**
19.833333
63
0.62605
eng_Latn
0.603231
a9472581d65ce592e220db0da1966b529e7f85f3
2,663
md
Markdown
README.md
esc0rtd3w/memdig
1af96f4a9fb1db0262392d233257cacf1accb08c
[ "Unlicense" ]
127
2016-08-28T04:37:42.000Z
2022-03-21T02:17:44.000Z
README.md
eporto/memdig
1af96f4a9fb1db0262392d233257cacf1accb08c
[ "Unlicense" ]
1
2016-09-05T15:31:15.000Z
2016-09-05T15:31:15.000Z
README.md
eporto/memdig
1af96f4a9fb1db0262392d233257cacf1accb08c
[ "Unlicense" ]
20
2016-08-28T04:37:45.000Z
2021-12-24T09:14:53.000Z
# MemDig: a memory cheat tool

MemDig allows the user to manipulate the memory of another process, primarily for the purposes of cheating. There have been many tools like this before, but this one is a scriptable command line program.

There are a number of commands available from the program's command prompt. The "help" command provides a list with documentation. MemDig commands can also be supplied as command line arguments to the program itself, by prefixing them with one or two dashes.

All commands can be shortened so long as they remain unambiguous, similar to gdb. For example, "attach" can be written as "a" or "att".

The current set of commands is quite meager, though it can operate on integers and floats of any size. The command set will grow as more power is needed.

## Example Usage

Here's how you might change the amount of gold in a game called Generic RPG. Suppose the process name is `grpg.exe` and you currently have 273 gold, stored as a 32-bit integer.

    memdig.exe --attach grpg.exe

    > find 273
    317 values found

(... perform an in-game action to change it to 312 gold ...)

    > narrow 312
    1 value found

    > set 1000000
    1 value set

If all goes well, you would now have 1 million gold. The above could be scripted entirely as command arguments.

    memdig.exe -a grpg.exe -f 273 -w 10 -n 312 -s 1000000 -q

The `-w 10` (i.e. `--wait 10`) will put a 10 second delay before the "narrow" command, giving you a chance to make changes to the game state. The `-q` (i.e. `--quit`) will exit the program before it begins the interactive prompt.

## Supported Types

Suffixes can be used to set the type when searching memory. There are three integer width specifiers: byte (o), short (h), and quad (q), and each integer type is optionally unsigned (uo, uh, u, uq). For floating point, include a decimal or exponent in the normal format. An f suffix indicates single precision.

* -45o (signed 8-bit)
* 40000uh (unsigned 16-bit)
* 0xffffq (unsigned 64-bit)
* 10.0 (double)
* 1e1f (float)

## Supported Platforms

Currently Windows and Linux are supported. The platform API is fully abstracted, so support for additional platforms could be easily added.

## Future Plans

* Remote, network interface
* More value types (strings, SIMD)
* Better handling of NaN and inf
* Readline support (especially on Linux)
* Automatic re-attach
* Watchlist editing (add, remove)
* Save/load address lists by name, to file
* Address list transformations and filters
* Progress indicator (find)
* Symbol and region oriented commands (locate known addresses after ASLR)
* (long shot) Export/create trainer EXE for a specific target
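To make the "find / narrow / set" session above concrete, here is a minimal C++ sketch of the underlying scan-and-patch idea on Windows. This is not MemDig's actual implementation: it assumes a process handle already opened with `PROCESS_VM_READ | PROCESS_VM_WRITE | PROCESS_QUERY_INFORMATION`, a 32-bit integer target, and an aligned scan for simplicity.

```cpp
// Minimal sketch of the "find / narrow / set" idea from the README (NOT MemDig's code).
// It scans committed, writable regions of a target process for a 32-bit value,
// keeps the matching addresses, and finally patches them.
#include <windows.h>
#include <cstdint>
#include <cstring>
#include <vector>

static std::vector<uintptr_t> scan(HANDLE proc, int32_t value,
                                   const std::vector<uintptr_t> *previous = nullptr) {
    std::vector<uintptr_t> hits;
    if (previous) {                       // "narrow": re-check earlier hits only
        for (uintptr_t addr : *previous) {
            int32_t cur = 0;
            SIZE_T got = 0;
            if (ReadProcessMemory(proc, (LPCVOID)addr, &cur, sizeof(cur), &got) &&
                got == sizeof(cur) && cur == value)
                hits.push_back(addr);
        }
        return hits;
    }
    // "find": walk the address space region by region
    MEMORY_BASIC_INFORMATION mbi;
    for (uint8_t *p = nullptr; VirtualQueryEx(proc, p, &mbi, sizeof(mbi));
         p = (uint8_t *)mbi.BaseAddress + mbi.RegionSize) {
        bool writable = mbi.Protect & (PAGE_READWRITE | PAGE_WRITECOPY |
                                       PAGE_EXECUTE_READWRITE | PAGE_EXECUTE_WRITECOPY);
        if (mbi.State != MEM_COMMIT || !writable) continue;
        std::vector<uint8_t> buf(mbi.RegionSize);
        SIZE_T got = 0;
        if (!ReadProcessMemory(proc, mbi.BaseAddress, buf.data(), buf.size(), &got)) continue;
        for (SIZE_T i = 0; i + sizeof(value) <= got; i += sizeof(value))
            if (memcmp(buf.data() + i, &value, sizeof(value)) == 0)
                hits.push_back((uintptr_t)mbi.BaseAddress + i);
    }
    return hits;
}

// Usage mirroring the README's session: find 273, narrow 312, set 1000000.
void example(HANDLE proc) {
    auto hits = scan(proc, 273);              // > find 273
    hits = scan(proc, 312, &hits);            // > narrow 312
    int32_t gold = 1000000;                   // > set 1000000
    for (uintptr_t addr : hits)
        WriteProcessMemory(proc, (LPVOID)addr, &gold, sizeof(gold), nullptr);
}
```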
33.708861
73
0.747653
eng_Latn
0.999444
a94781e48d49c3f5d487329095f4993328a76f19
4,262
md
Markdown
memdocs/configmgr/comanage/quickstart-setup-hybrid-aad.md
CodyMathis123/memdocs
d225ccaa67ebee444002571dc8f289624db80d10
[ "CC-BY-4.0", "MIT" ]
1
2020-05-18T09:36:13.000Z
2020-05-18T09:36:13.000Z
memdocs/configmgr/comanage/quickstart-setup-hybrid-aad.md
CodyMathis123/memdocs
d225ccaa67ebee444002571dc8f289624db80d10
[ "CC-BY-4.0", "MIT" ]
null
null
null
memdocs/configmgr/comanage/quickstart-setup-hybrid-aad.md
CodyMathis123/memdocs
d225ccaa67ebee444002571dc8f289624db80d10
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Set up hybrid Azure AD titleSuffix: Configuration Manager description: If your environment currently has domain-joined Windows 10 devices, set up hybrid Azure AD before you enable co-management ms.date: 01/14/2019 ms.prod: configuration-manager ms.technology: configmgr-comanage ms.topic: conceptual ms.assetid: 27dd26d1-e99c-4431-b2f8-60406394b6db author: aczechowski ms.author: aaroncz manager: dougeby --- # Set up hybrid Azure AD for co-management If you have Windows 10 devices joined to on-premises Active Directory, before you enable co-management in Configuration Manager, first join these devices to Azure Active Directory (Azure AD). This process is called hybrid Azure AD join. In the following video, senior program manager Sandeep Deo and product marketing manager Adam Harbour discuss and demo configuring devices in Azure AD: > [!VIDEO https://channel9.msdn.com/Series/Endpoint-Zone/Configuring-Devices-in-Azure-Active-Directory/player] The hybrid Azure AD-join process automatically registers your on-premises domain-joined devices with Azure AD. For more information on this process, see the following articles: - [Introduction to device management in Azure Active Directory](https://docs.microsoft.com/azure/active-directory/device-management-introduction) - [How to plan your Hybrid Azure AD join](https://docs.microsoft.com/azure/active-directory/devices/hybrid-azuread-join-plan) Hybrid Azure AD join is one of the key foundations for co-management. This process can be challenging for some customers, for example: - Your organization uses a third-party identity solution - The complexities of setting up Active Directory Federation Services (ADFS) Resolving these challenges can take some guidance. This article helps to mitigate any delays. ## How to do it Devices are similar to users when creating an identity that you want to protect. To protect a device's identity at any time and in any location, you need to bring the identity of that device into Azure AD. Based on the type of domain you're using, there are two primary ways to do that. Configure hybrid Azure AD join for one of the following domain types: - [Federated domains](https://docs.microsoft.com/azure/active-directory/devices/hybrid-azuread-join-federated-domains) - [Managed domains](https://docs.microsoft.com/azure/active-directory/devices/hybrid-azuread-join-managed-domains) The two preceding methods provide the best experience. For more detailed information including the fully manual process, see the following articles: - [Manually configure hybrid Azure AD joined devices](https://docs.microsoft.com/azure/active-directory/device-management-hybrid-azuread-joined-devices-setup) - [ADFS pass-through authentication for hybrid Azure AD](https://docs.microsoft.com/windows-server/identity/ad-fs/ad-fs-overview), which includes Azure AD discovery For troubleshooting guidance, see the [Windows 10 hybrid Azure AD join troubleshooting guide](https://docs.microsoft.com/azure/active-directory/devices/troubleshoot-hybrid-join-windows-current). ## Case study A large European software company with over 100,000 users in its network took a granular and phased approach towards enabling hybrid Azure AD join. During the planning phase, since hybrid Azure AD join is a key element supporting co-management, the Configuration Manager administrators worked with the identity team. This software company had many ADFS rules, and some of them were complex. 
To address this challenge, the identity team reviewed the existing ADFS rules before they enabled hybrid Azure AD join. The IT team also chose to upgrade Azure AD Connect to the latest version. Azure AD Connect now provides an automated process flow for enabling hybrid Azure AD join. After successful deployment and testing in their pre-production environment, this customer enabled hybrid Azure AD join for the whole production estate. Within a week, they had every Windows 10 device co-managed. ## Contact FastTrack If you need assistance setting up Azure AD at any point in the process, go to [Microsoft FastTrack](https://Microsoft.com/FastTrack/), sign in, and request assistance. For more information, see [Get help from FastTrack](quickstart-fasttrack.md).
62.676471
527
0.804083
eng_Latn
0.989684
a947f6ba42684fbc9a8bc222cb20ce46073c1eab
30
md
Markdown
README.md
Anziverov/SyncRep
8110a52ce7ad9f3d8d08dbc7989628762d58ad6d
[ "MIT" ]
null
null
null
README.md
Anziverov/SyncRep
8110a52ce7ad9f3d8d08dbc7989628762d58ad6d
[ "MIT" ]
null
null
null
README.md
Anziverov/SyncRep
8110a52ce7ad9f3d8d08dbc7989628762d58ad6d
[ "MIT" ]
null
null
null
# NugetAnalyzer NugetAnalyzer
10
15
0.866667
run_Latn
0.280531
a9485dbb0bf5561b38535c5c035ec0f4ab0dcdd6
510
md
Markdown
README.md
AdamDressler/TicTacToe
19ac38139f11482441068b7a692c83fe79e6e788
[ "MIT" ]
null
null
null
README.md
AdamDressler/TicTacToe
19ac38139f11482441068b7a692c83fe79e6e788
[ "MIT" ]
null
null
null
README.md
AdamDressler/TicTacToe
19ac38139f11482441068b7a692c83fe79e6e788
[ "MIT" ]
null
null
null
# Description
This app is a simple implementation of Tic tac toe, a simple and well-known game. It was created for Simon's challenge: https://devdactic.com/build-tic-tac-toe-with-ionic/.

# Running
Clone the repository to your computer. cd into the app folder and run the 'ionic serve' command, which will launch the Tic tac toe app in your browser. If you do not have Ionic installed, you can find instructions here: http://ionicframework.com/getting-started/. You can also manually open the index.html file from the www directory.
42.5
174
0.780392
eng_Latn
0.995093
a9489757db16b8cf64eab6f48cbf3b4378cb1555
531
md
Markdown
_posts/photo/twitter/꿀꿀허니/2018-06-21-꿀꿀허니-U3POJA.md
fromisnine/jasper2
58533eb473271b5811bce676ed84bbbf70186b8b
[ "MIT" ]
1
2020-04-19T13:53:47.000Z
2020-04-19T13:53:47.000Z
_posts/photo/twitter/꿀꿀허니/2018-06-21-꿀꿀허니-U3POJA.md
fromisnine/jasper2
58533eb473271b5811bce676ed84bbbf70186b8b
[ "MIT" ]
null
null
null
_posts/photo/twitter/꿀꿀허니/2018-06-21-꿀꿀허니-U3POJA.md
fromisnine/jasper2
58533eb473271b5811bce676ed84bbbf70186b8b
[ "MIT" ]
2
2018-06-10T08:21:08.000Z
2018-06-12T03:44:10.000Z
--- layout: post current: post cover: https://pbs.twimg.com/media/DgNUP_XUwAArTvz.jpg navigation: true title: 꿀꿀허니 twitter post date: 2018-06-21 19:22:38 +0900 KST tags: 지헌 꿀꿀허니 photo class: post-template subclass: post tag-photo author: auto-posting --- ``` 180612 상암 SBS 더쇼 미니팬미팅 백지헌 #프로미스나인 #fromis_9 #백지헌 #JIHEON #SBS #더쇼 #THESHOW ``` ![0](https://pbs.twimg.com/media/DgNUPk1UwAIexBU.jpg) ![1](https://pbs.twimg.com/media/DgNUP_XUwAArTvz.jpg) Post by 꿀꿀허니 > [꿀꿀허니](https://twitter.com/kkhoney0417) 로고크롭, 2차가공 금지
19.666667
54
0.713748
kor_Hang
0.389821
a94921d0127166305e8d250d1bf4bdb19c46ca5e
114
md
Markdown
_posts/0000-01-02-hungdoitbk.md
hungdoitbk/github-slideshow
eedaa7505ec87812a8219d65248b4bcfaa660789
[ "MIT" ]
null
null
null
_posts/0000-01-02-hungdoitbk.md
hungdoitbk/github-slideshow
eedaa7505ec87812a8219d65248b4bcfaa660789
[ "MIT" ]
3
2021-03-23T14:38:32.000Z
2021-03-23T15:14:22.000Z
_posts/0000-01-02-hungdoitbk.md
hungdoitbk/github-slideshow
eedaa7505ec87812a8219d65248b4bcfaa660789
[ "MIT" ]
null
null
null
--- layout: slide title: "Welcome to our second slide!" --- Hello! My name is Hung Use the left arrow to go back
16.285714
37
0.692982
eng_Latn
0.999034
a94936bca9902ab5c6ec9ffb76bc8a225198fb2f
1,581
md
Markdown
docs/pro/custom-template.md
liuhong1happy/ice
b0c9367ae869ce7a10b464b50625ee6920a76edc
[ "MIT" ]
3
2018-10-11T01:39:57.000Z
2019-01-09T09:45:32.000Z
docs/pro/custom-template.md
itcodes/ice
17e5018d913729b29e24685a2c67b9cd85202b7a
[ "MIT" ]
null
null
null
docs/pro/custom-template.md
itcodes/ice
17e5018d913729b29e24685a2c67b9cd85202b7a
[ "MIT" ]
1
2020-11-25T08:04:42.000Z
2020-11-25T08:04:42.000Z
--- title: 自定义模板 order: 10 category: ICE Design Pro --- # 自定义模板 在 Iceworks 2.2.0 之前的版本,可以通过 `新建页面` 时选择默认提供的 4 套布局去替换已有项目的布局,也可以通过布局列表的`自定义布局`功能进行自定义,然后添加到项目。 ![iceworks](https://img.alicdn.com/tfs/TB1ecZexQyWBuNjy0FpXXassXXa-1909-1368.png) 然而,这些都是基于已经生成好的项目添加新的布局。那有没有一种可能,完全从零开始去自定义一个模板,答案是有的,你可以先从自定义布局开始初始化一个项目,甚至是自定义布局之后,在自定义选择 Router,Eslint,Redux,Mbox 等等,这都是有可能的。我们还是脚踏实地,先从第一步开始,来了解下 Iceworks 全新的自定义布局功能,如何从自定义布局开始初始化一个模板。 ![](https://img.alicdn.com/tfs/TB17Virx_tYBeNjy1XdXXXXyVXa-862-572.gif) ## 自定义创建流程 在 `模板` 界面选择 `自定义模板`,点击新建弹窗如下,左边是属性配置面板,右边是配置的实时效果图,目前自定义主要包含以下四部分配置 - 基础配置 - 导航配置 - 侧边栏配置 - 页脚配置 #### 基础配置 基础配置主要包含`布局容器配置`、`主题配置`、`定制皮肤`三部分,其中: - 布局容器配置有全屏和固宽两个选项,全屏即 100% 宽度的布局,固宽默认是 1200px - 主题配置有深色和浅色两个选项,对应的是 Layout 部分的主题配置 - 定制皮肤主要是指配置基础组件的样式,可以选择主色和辅色,详细可以查看[修改主题配色 ](https://alibaba.github.io/ice/docs/advanced/custom-theme) ![基础配置](https://img.alicdn.com/tfs/TB10iEqxKuSBuNjy1XcXXcYjFXa-1909-1368.png) #### 导航配置 导航配置主要包含 `启用`、`定位`、`是否通栏` 三部分。只有在启动的前提下才能配置对应的导航属性。在某些情况下,可能不需要导航,只要不勾选启用,则默认不会生成导航部分。 ![导航配置](https://img.alicdn.com/tfs/TB1YhXXx9BYBeNjy0FeXXbnmFXa-1909-1368.png) #### 侧边栏配置 侧边栏配置主要包含 `启用`、`折叠`、`定位` 三部分。只有在启动的前提下才能配置对应的侧边栏属性。在某些情况下,可能不需要导航,只要不勾选启用,则默认不会生成导航部分。折叠则是指默认生成的布局侧边栏是否折叠。 ![侧边栏配置](https://img.alicdn.com/tfs/TB1DOSnx_tYBeNjy1XdXXXXyVXa-1908-1368.png) #### 页脚配置 页脚配置与导航配置一样,主要包含 `启用`、`定位`、`是否通栏` 三部分。 ![页脚配置](https://img.alicdn.com/tfs/TB1lHVnx21TBuNjy0FjXXajyXXa-1909-1368.png) #### 创建项目 配置完成后点击保存,可以看到刚刚配置的模板列表,接下来,你可以基于该模板初始化创建项目。 ![创建项目](https://img.alicdn.com/tfs/TB1yVfrxMmTBuNjy1XbXXaMrVXa-1909-1368.png)
26.35
188
0.774826
yue_Hant
0.70791
a94adb5b4fa50639ff8267149eb611762dd0e950
376
md
Markdown
225-github-actions-demo/README.md
susiddam/Test
5f9455fa610ba77c802eb3a614ad2f841f097ccc
[ "MIT" ]
null
null
null
225-github-actions-demo/README.md
susiddam/Test
5f9455fa610ba77c802eb3a614ad2f841f097ccc
[ "MIT" ]
null
null
null
225-github-actions-demo/README.md
susiddam/Test
5f9455fa610ba77c802eb3a614ad2f841f097ccc
[ "MIT" ]
null
null
null
# Github Actions Demo

Implement CI/CD with Github Actions. Watch the [100 Seconds of CI/CD](https://youtu.be/scEDHsr3APg) and the [Full Github Actions Tutorial](https://youtu.be/eB0nUzAI7M8) on YouTube.

Firebase user? Check out the guide for [Deploying Firebase Apps with GitHub Actions](https://fireship.io/snippets/github-actions-deploy-angular-to-firebase-hosting/).
47
167
0.779255
eng_Latn
0.457783
a94b17d5443f77ff09ca0b90d05b365883d9fc26
218
md
Markdown
_watches/M20191118_042153_TLP_6.md
Meteoros-Floripa/meteoros.floripa.br
7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad
[ "MIT" ]
5
2020-05-19T17:04:49.000Z
2021-03-30T03:09:14.000Z
_watches/M20191118_042153_TLP_6.md
Meteoros-Floripa/site
764cf471d85a6b498873610e4f3b30efd1fd9fae
[ "MIT" ]
null
null
null
_watches/M20191118_042153_TLP_6.md
Meteoros-Floripa/site
764cf471d85a6b498873610e4f3b30efd1fd9fae
[ "MIT" ]
2
2020-05-19T17:06:27.000Z
2020-09-04T00:00:43.000Z
--- layout: watch title: TLP6 - 18/11/2019 - M20191118_042153_TLP_6T.jpg date: 2019-11-18 04:21:53 permalink: /2019/11/18/watch/M20191118_042153_TLP_6 capture: TLP6/2019/201911/20191117/M20191118_042153_TLP_6T.jpg ---
27.25
62
0.784404
kor_Hang
0.05698
a94bba3639b13b0e256055ace8bbb27e62c3ba30
5,622
md
Markdown
doc/dev/mgmt/generation.md
tzhanl/azure-sdk-for-python
18cd03f4ab8fd76cc0498f03e80fbc99f217c96e
[ "MIT" ]
1
2021-06-02T08:01:35.000Z
2021-06-02T08:01:35.000Z
doc/dev/mgmt/generation.md
tzhanl/azure-sdk-for-python
18cd03f4ab8fd76cc0498f03e80fbc99f217c96e
[ "MIT" ]
1
2020-03-06T05:57:16.000Z
2020-03-06T05:57:16.000Z
doc/dev/mgmt/generation.md
tzhanl/azure-sdk-for-python
18cd03f4ab8fd76cc0498f03e80fbc99f217c96e
[ "MIT" ]
1
2019-06-17T22:18:23.000Z
2019-06-17T22:18:23.000Z
# Generation of SDK

Assuming your Swagger files are associated with correct Readmes (otherwise see the previous chapter [Swagger conf](./swagger_conf.md)), this page explains how to generate your packages.

IMPORTANT NOTE: All the commands prefixed by `python` in this page assume you have loaded the [dev_setup](../dev_setup.md) in your currently loaded virtual environment.

## Building the code

### Autorest versioning

A few notes on [Autorest for Python versioning](https://github.com/Azure/autorest.python/blob/master/ChangeLog.md):

- Autorest for Python v2.x is deprecated, and should not be used anymore for any generation under any circumstances.
- Autorest for Python v3.x is currently the most used one. It should not be used for new work, but is still OK if a service team is still on v3.x and wants to avoid breaking changes for a given version (rare).
- Autorest for Python v4.x is the current recommendation. This generator can generate async code, but this should be disabled with --no-async. No package should be shipped with async based on v4.
- Autorest for Python v5.x is a work in progress based on the new runtime called `azure-core` (no `msrest` anymore). To be released in November 2019 (current plan). This version will bring the official async support.

#### How to recognize what version of autorest was used?

Autorest doesn't write the version number in the generated code, but a few indicators will tell you which generation was used, just by looking at the "models" folder:

- Autorest v2: One model file per model class
- Autorest v3: Two model files per model class, the second one being suffixed by "_py3" (e.g. `vm.py` and `vm_py3.py`)
- Autorest v4: Two gigantic model files, one called `_models.py` and the second one `_models_py3.py`
- Autorest v5: `paged` file will import base classes from `azure.core` and not `msrest`

### Basics of generation

A basic autorest command line will look like this:

```shell
autorest readme.md --python --use="@microsoft.azure/autorest.python@~4.0.71" --python-mode=update --python-sdks-folder=<root of sdk clone>/sdks/ --no-async --multiapi
```

Which means "Generate the Python code for the Swagger mentioned in this readme, using Autorest for Python v4.0.71 or above (but not v5), do not generate async files, generate multiapi if supported (if not, ignore), and assume the package was already generated and it's an update."

In practical terms, this is not necessary since the Python SDK has the necessary tooling to simplify this to just specifying the readme.md:

- Checkout the branch
- Checkout the RestAPI specs repo
- Call the tool: `python -m packaging_tools.generate_sdk -v -m restapi_path/readme.md` changing the last path to the readme you want to generate.

The common configuration to pass to all generations is located in the [swagger_to_sdk.json file](https://github.com/Azure/azure-sdk-for-python/blob/master/swagger_to_sdk_config.json)

### Automation bot

If the automation is doing its job correctly, you should not have to build the SDK, but look for an integration PR for the service in question. For instance, this link will give you [the list of all integration PRs](https://github.com/Azure/azure-sdk-for-python/labels/ServicePR).

## Using raw autorest

If you want to use raw autorest and nothing else, not even a Readme, here are a few tips.

If you're doing basic testing and want a minimal set of parameters:

- To call Autorest, you need the following options:
  - Required parameter: `--payload-flattening-threshold=2`
  - About the generator:
    - If your endpoint is ARM, add `--python --azure-arm=true`
    - If not, add `--python`. If your client _might_ ask for authentication, add `--add-credentials`

And that's it! You should now have Python code ready to test. Note that this generation is for testing only and should not be sent to a customer or published to PyPI.

This command generates code only. If you want to generate a [wheel](https://pythonwheels.com/) file to share this code, add the `--basic-setup-py` option to generate a basic `setup.py` file and call `python setup.py bdist_wheel`.

### Example

ARM management Swagger:

`autorest --version=latest --python --azure-arm=true --payload-flattening-threshold=2 --input-file=myswagger.json`

Not-ARM Swagger:

`autorest --version=latest --python --payload-flattening-threshold=2 --add-credentials --input-file=myswagger.json`

If you want something close to a real generation:

Let's assume for now that your Swagger is in `specification/compute/resource-manager`.

To call Autorest, you need the following options:

- Required parameters: `--payload-flattening-threshold=2 --license-header=MICROSOFT_MIT_NO_VERSION --namespace=azure.mgmt.compute --package-name=azure-mgmt-compute --package-version=0.1.0`
- About the generator:
  - If your endpoint is ARM, add `--python --azure-arm=true`
  - If not, add `--python`. If your client _might_ ask for authentication, add `--add-credentials`

## Example

ARM Swagger with MD (preferred syntax):

`autorest --version=latest specifications/storage/resource-manager/readme.md --python --azure-arm=true --payload-flattening-threshold=2 --license-header=MICROSOFT_MIT_NO_VERSION --namespace=azure.mgmt.storage --package-name=azure-mgmt-storage --package-version=0.1.0`

ARM Swagger without MD (if you have an excellent reason):

`autorest --version=latest --python --azure-arm=true --payload-flattening-threshold=2 --license-header=MICROSOFT_MIT_NO_VERSION --namespace=azure.mgmt.storage --package-name=azure-mgmt-storage --package-version=0.1.0 --input-file=specifications/storage/resource-manager/Microsoft.Storage/2016-12-01/storage.json`
56.787879
312
0.766275
eng_Latn
0.981687
a94c16ce7428bef532231d52a052dc1630ed40e3
952
md
Markdown
_posts/2021-09-22-【星CUP人物】電影「梅艷芳」訪談(四).md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
21
2020-07-20T16:10:55.000Z
2022-03-14T14:01:14.000Z
_posts/2021-09-22-【星CUP人物】電影「梅艷芳」訪談(四).md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
1
2020-07-19T21:49:44.000Z
2021-09-16T13:37:28.000Z
_posts/2021-09-22-【星CUP人物】電影「梅艷芳」訪談(四).md
NodeBE4/opinion
81a7242230f02459879ebc1f02eb6fc21507cdf1
[ "MIT" ]
1
2021-05-29T19:48:01.000Z
2021-05-29T19:48:01.000Z
--- layout: post title: "【星CUP人物】電影「梅艷芳」訪談(四)" date: 2021-09-22T19:15:30.000Z author: Cup 媒體 Cup Media from: https://www.youtube.com/watch?v=8HGeREde91o tags: [ Cup媒體 ] categories: [ Cup媒體 ] --- <!--1632338130000--> [【星CUP人物】電影「梅艷芳」訪談(四)](https://www.youtube.com/watch?v=8HGeREde91o) ------ <div> 梅艷芳與張國榮,對香港人來說是珍貴的回憶,也是無可取代的人物。要演活傳奇巨星並不容易,今集「星 CUP 人物」,陶傑就請來電影「梅艷芳」的 3 位主演王丹妮、劉俊謙和廖子妤,分享一下他們鏡頭背後的故事。延伸閱讀:【星CUP人物:電影「梅艷芳」訪談(三)】https://bit.ly/3EBXfh2【星CUP人物:電影「梅艷芳」訪談(二)】https://bit.ly/3x6ZyDx【星CUP人物:電影「梅艷芳」訪談(一)】https://bit.ly/3hTPDx2【廖康宇:由電視台之爭見獅子山精神】https://bit.ly/3knHDFI【通過審查的北韓電影「平壤怪獸」】https://bit.ly/39oRrZO【陶傑:電影為世人上的課】https://bit.ly/372D5wW【方俊傑:手捲煙 —— 亂世中的赤子之心】https://bit.ly/3ziWKVn==========================在 www.cup.com.hk 留下你的電郵地址,即可免費訂閱星期一至五 CUP 媒體 的日誌。🎦 YouTube 👉 https://goo.gl/4ZetJ5🎙️ CUPodcast 👉 https://bit.ly/35HZaBp📸 Instagram 👉 www.instagram.com/cupmedia/💬 Telegram 👉 https://t.me/cupmedia📣 WhatsApp 👉https://bit.ly/2W1kPye </div>
56
646
0.706933
yue_Hant
0.940482
a94c398dfc5ef9b826838a91fda05d857d467898
72
md
Markdown
README.md
jjmorleo/Codigo-Colores
e0f95b0b7c2c1aa244f867d453735b51fea249e3
[ "MIT" ]
null
null
null
README.md
jjmorleo/Codigo-Colores
e0f95b0b7c2c1aa244f867d453735b51fea249e3
[ "MIT" ]
null
null
null
README.md
jjmorleo/Codigo-Colores
e0f95b0b7c2c1aa244f867d453735b51fea249e3
[ "MIT" ]
null
null
null
# Codigo-Colores
Ejemplo de clases de paleta de colores material design
24
54
0.819444
spa_Latn
0.8862
a94c471745481575fca4496c6628d95b926e92c9
12,324
md
Markdown
articles/defender-for-iot/organizations/references-horizon-api.md
R0bes/azure-docs.de-de
24540ed5abf9dd081738288512d1525093dd2938
[ "CC-BY-4.0", "MIT" ]
63
2017-08-28T07:43:47.000Z
2022-02-24T03:04:04.000Z
articles/defender-for-iot/organizations/references-horizon-api.md
R0bes/azure-docs.de-de
24540ed5abf9dd081738288512d1525093dd2938
[ "CC-BY-4.0", "MIT" ]
704
2017-08-04T09:45:07.000Z
2021-12-03T05:49:08.000Z
articles/defender-for-iot/organizations/references-horizon-api.md
R0bes/azure-docs.de-de
24540ed5abf9dd081738288512d1525093dd2938
[ "CC-BY-4.0", "MIT" ]
178
2017-07-05T10:56:47.000Z
2022-03-18T12:25:19.000Z
--- title: Horizon-API description: In dieser Anleitung werden häufig verwendete Horizon-Methoden beschrieben. ms.date: 1/5/2021 ms.topic: article ms.openlocfilehash: b65f7663df29e2c82faa5d1aeec3b820d5fbaf70 ms.sourcegitcommit: a038863c0a99dfda16133bcb08b172b6b4c86db8 ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 06/29/2021 ms.locfileid: "113016258" --- # <a name="horizon-api"></a>Horizon-API In dieser Anleitung werden häufig verwendete Horizon-Methoden beschrieben. ### <a name="getting-more-information"></a>Weitere Informationen Weitere Informationen zur Arbeit mit Horizon und der Defender für IoT-Plattform finden Sie an folgenden Stellen: - Informationen zum Horizon ODE-SDK (Open Development Environment) erhalten Sie von Ihrem Defender für IoT-Vertriebsmitarbeiter. - Informationen zum Support und zur Problembehandlung erhalten Sie vom <support@cyberx-labs.com>. - Wenn Sie über die Defender für IoT-Konsole auf das Benutzerhandbuch zu Defender für IoT zugreifen möchten, wählen Sie :::image type="icon" source="media/references-horizon-api/profile.png"::: und anschließend **Benutzerhandbuch herunterladen** aus. ## `horizon::protocol::BaseParser` Abstrakt für alle Plug-Ins. Dies enthält zwei Methoden: - Zum Verarbeiten von Plug-In-Filtern, die Sie definiert haben. Auf diese Weise wird Horizon informiert, wie die Kommunikation mit dem Parser erfolgen soll. - Zum Verarbeiten der tatsächlichen Daten. ## `std::shared_ptr<horizon::protocol::BaseParser> create_parser()` Mit der ersten Funktion, die für das Plug-In aufgerufen wird, wird eine Instanz des Parsers für Horizon erstellt, die erkannt und registriert wird. ### <a name="parameters"></a>Parameter Keine. ### <a name="return-value"></a>Rückgabewert „shared_ptr“ an die Parserinstanz. ## `std::vector<uint64_t> horizon::protocol::BaseParser::processDissectAs(const std::map<std::string, std::vector<std::string>> &) const` Diese Funktion wird für jedes oben registrierte Plug-In aufgerufen. In den meisten Fällen bleibt dieser Parameter leer. Löst eine Ausnahme aus, damit Horizon weiß, dass ein Fehler aufgetreten ist. ### <a name="parameters"></a>Parameter - Eine Karte mit der Struktur von „dissect_as“, wie in der Datei „config.json“ eines anderen Plug-Ins definiert, das für Sie registriert werden soll. ### <a name="return-value"></a>Rückgabewert Ein Array von „uint64_t“, bei dem es sich um die Registrierung handelt, die als eine Art von „uint64_t“ verarbeitet wird. Das heißt, dass in der Karte eine Liste von Ports enthalten ist, deren Werte „uin64_t“ bilden. ## `horizon::protocol::ParserResult horizon::protocol::BaseParser::processLayer(horizon::protocol::management::IProcessingUtils &,horizon::general::IDataBuffer &)` Die Hauptfunktion, insbesondere die Logik des Plug-ins (jedes Mal, wenn ein neues Paket Ihren Parser erreicht). Diese Funktion wird aufgerufen. Alles im Zusammenhang mit der Paketverarbeitung sollte hier ausgeführt werden. ### <a name="considerations"></a>Überlegungen Ihr Plug-In sollte threadsicher sein, da diese Funktion möglicherweise von unterschiedlichen Threads aufgerufen wird. Alles im Stapel zu definieren, wäre ein guter Ansatz. ### <a name="parameters"></a>Parameter - Die SDK-Steuerungseinheit, die für das Speichern der Daten und das Erstellen von SDK-bezogenen Objekten wie ILayer und Felder zuständig ist. - Ein Hilfsprogramm zum Lesen der Daten aus dem Rohdatenpaket. Für das Hilfsprogramm ist bereits die Bytereihenfolge festgelegt, die Sie in der Datei „config.json“ definiert haben. 
### <a name="return-value"></a>Rückgabewert Das Ergebnis der Verarbeitung. Mögliche Werte: *Success*, *Malformed* oder *Sanity* (Erfolgreich/Nicht wohlgeformt/Integrität). ## `horizon::protocol::SanityFailureResult: public horizon::protocol::ParserResult` Kennzeichnet die Verarbeitung als Integritätsfehler, d. h., das Paket wird vom aktuellen Protokoll nicht erkannt, und Horizon sollte es an einen anderen Parser übergeben (falls ein solcher für dieselben Filter registriert ist). ## `horizon::protocol::SanityFailureResult::SanityFailureResult(uint64_t)` Konstruktor ### <a name="parameters"></a>Parameter - Definiert den von Horizon für die Protokollierung verwendeten Fehlercode, wie er in der Datei „config.json“ definiert ist. ## `horizon::protocol::MalformedResult: public horizon::protocol::ParserResult` Ergebnis: Nicht wohlgeformt. Das heißt, dass das Paket bereits als Protokoll erkannt wurde, einige Überprüfungen jedoch fehlerhaft waren (reservierte Bits sind aktiviert, oder ein Feld fehlt). ## `horizon::protocol::MalformedResult::MalformedResult(uint64_t)` Konstruktor ### <a name="parameters"></a>Parameter - Fehlercode, wie in der Datei „config.json“ definiert. ## `horizon::protocol::SuccessResult: public horizon::protocol::ParserResult` Benachrichtigt Horizon über die erfolgreiche Verarbeitung. Bei erfolgreicher Verarbeitung wurde das Paket akzeptiert, die Datenzugehörigkeit ist richtig, und alle Daten wurden extrahiert. ## `horizon::protocol::SuccessResult()` Konstruktor. Es wurde ein erfolgreiches Basisergebnis erstellt. Das heißt, dass weder die Richtung noch andere Metadaten im Zusammenhang mit dem Paket bekannt sind. ## `horizon::protocol::SuccessResult(horizon::protocol::ParserResultDirection)` Konstruktor. ### <a name="parameters"></a>Parameter - Die Richtung des Pakets (sofern definiert). Mögliche Werte: *REQUEST* (Anforderung) oder *RESPONSE* (Antwort). ## `horizon::protocol::SuccessResult(horizon::protocol::ParserResultDirection, const std::vector<uint64_t> &)` Konstruktor. ### <a name="parameters"></a>Parameter - Die Richtung des Pakets kann, sofern sie definiert wurde, *REQUEST* oder *RESPONSE* lauten. - Warnungen. Diese Ereignisse schlagen nicht fehl, aber Horizon wird benachrichtigt. ## `horizon::protocol::SuccessResult(const std::vector<uint64_t> &)` Konstruktor. ### <a name="parameters"></a>Parameter - Warnungen. Diese Ereignisse schlagen nicht fehl, aber Horizon wird benachrichtigt. ## `HorizonID HORIZON_FIELD(const std::string_view &)` Konvertiert einen zeichenfolgenbasierten Verweis auf einen Feldnamen (z. B. „function_code“) in eine Horizon-ID (HorizonID). ### <a name="parameters"></a>Parameter - Zu konvertierende Zeichenfolge. ### <a name="return-value"></a>Rückgabewert - Aus der Zeichenfolge erstellte HorizonID. ## `horizon::protocol::ILayer &horizon::protocol::management::IProcessingUtils::createNewLayer()` Erstellt eine neue Ebene, damit Horizon weiß, dass das Plug-In Daten speichern möchte. Dies ist die Basisspeichereinheit, die Sie verwenden sollten. ### <a name="return-value"></a>Rückgabewert Ein Verweis auf eine erstellte Ebene, damit Sie dort Daten hinzufügen können. ## `horizon::protocol::management::IFieldManagement &horizon::protocol::management::IProcessingUtils::getFieldsManager()` Ruft das Feldverwaltungsobjekt ab, das für das Erstellen von Feldern auf unterschiedlichen Objekten (z. B. ILayer) verantwortlich ist. ### <a name="return-value"></a>Rückgabewert Verweis auf den Manager. 
## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, uint64_t)` Erstellt auf der Ebene mit der angeforderten ID ein neues numerisches Feld mit 64 Bits. ### <a name="parameters"></a>Parameter - Die zuvor erstellte Ebene. - Vom Makro **HORIZON_FIELD** erstellte HorizonID. - Der Rohwert, den Sie speichern möchten. ## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, std::string)` Erstellt auf der Ebene mit der angeforderten ID ein neues Zeichenfolgenfeld. Der Speicher wird verschoben. Seien Sie daher vorsichtig. Sie können diesen Wert nicht noch einmal verwenden. ### <a name="parameters"></a>Parameter - Die zuvor erstellte Ebene. - Vom Makro **HORIZON_FIELD** erstellte HorizonID. - Der Rohwert, den Sie speichern möchten. ## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, std::vector<char> &)` Erstellt auf der Ebene mit der angeforderten ID ein neues Feld mit Rohwerten (Bytearray). Der Speicher wird verschoben. Vorsicht: Sie können diesen Wert nicht noch einmal verwenden. ### <a name="parameters"></a>Parameter - Die zuvor erstellte Ebene. - Vom Makro **HORIZON_FIELD** erstellte HorizonID. - Der Rohwert, den Sie speichern möchten. ## `horizon::protocol::IFieldValueArray &horizon::protocol::management::IFieldManagement::create(horizon::protocol::ILayer &, HorizonID, horizon::protocol::FieldValueType)` Erstellt auf der Ebene des angegebenen Typs mit der angeforderten ID ein Feld mit Arraywerten (Array). ### <a name="parameters"></a>Parameter - Die zuvor erstellte Ebene. - Vom Makro **HORIZON_FIELD** erstellte HorizonID. - Typ der Werte, die im Array gespeichert werden. ### <a name="return-value"></a>Rückgabewert Verweis auf ein Array, an das Sie Werte anfügen sollten. ## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, uint64_t)` Fügt dem zuvor erstellten Array einen neuen ganzzahligen Wert an. ### <a name="parameters"></a>Parameter - Das zuvor erstellte Array. - Rohwert, der im Array gespeichert werden soll. ## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, std::string)` Fügt dem zuvor erstellten Array einen neuen Zeichenfolgenwert an. Der Speicher wird verschoben. Vorsicht: Sie können diesen Wert nicht noch einmal verwenden. ### <a name="parameters"></a>Parameter - Das zuvor erstellte Array. - Rohwert, der im Array gespeichert werden soll. ## `void horizon::protocol::management::IFieldManagement::create(horizon::protocol::IFieldValueArray &, std::vector<char> &)` Fügt dem zuvor erstellten Array einen neuen Rohwert an. Der Speicher wird verschoben. Vorsicht: Sie können diesen Wert nicht noch einmal verwenden. ### <a name="parameters"></a>Parameter - Das zuvor erstellte Array. - Rohwert, der im Array gespeichert werden soll. ## `bool horizon::general::IDataBuffer::validateRemainingSize(size_t)` Überprüft, ob der Puffer mindestens x Bytes enthält. ### <a name="parameters"></a>Parameter Anzahl von Bytes, die vorhanden sein müssen. ### <a name="return-value"></a>Rückgabewert TRUE, wenn der Puffer mindestens x Bytes enthält. Andernfalls lautet der Wert `False`. ## `uint8_t horizon::general::IDataBuffer::readUInt8()` Liest entsprechend der Bytereihenfolge den Wert „uint8“ (1 Byte) aus dem Puffer. ### <a name="return-value"></a>Rückgabewert Aus dem Puffer gelesener Wert. 
## `uint16_t horizon::general::IDataBuffer::readUInt16()` Liest entsprechend der Bytereihenfolge den Wert „uint16“ (2 Bytes) aus dem Puffer. ### <a name="return-value"></a>Rückgabewert Aus dem Puffer gelesener Wert. ## `uint32_t horizon::general::IDataBuffer::readUInt32()` Liest entsprechend der Bytereihenfolge den Wert „uint32“ (4 Bytes) aus dem Puffer. ### <a name="return-value"></a>Rückgabewert Aus dem Puffer gelesener Wert. ## `uint64_t horizon::general::IDataBuffer::readUInt64()` Liest entsprechend der Bytereihenfolge den Wert „uint64“ (8 Bytes) aus dem Puffer. ### <a name="return-value"></a>Rückgabewert Aus dem Puffer gelesener Wert. ## `void horizon::general::IDataBuffer::readIntoRawData(void *, size_t)` Liest den vorab zugeordneten Speicher einer bestimmten Größe. Kopiert tatsächlich die Daten in Ihren Speicherbereich. ### <a name="parameters"></a>Parameter - Speicherbereich, in den die Daten kopiert werden sollen. - Größe des Speicherbereichs. Dieser Parameter definiert auch, wie viele Bytes kopiert werden. ## `std::string_view horizon::general::IDataBuffer::readString(size_t)` Liest eine Zeichenfolge aus dem Puffer. ### <a name="parameters"></a>Parameter - Anzahl der Bytes, die gelesen werden sollen. ### <a name="return-value"></a>Rückgabewert Verweis auf den Speicherbereich der Zeichenfolge. ## `size_t horizon::general::IDataBuffer::getRemainingData()` Gibt an, wie viele Bytes im Puffer verbleiben. ### <a name="return-value"></a>Rückgabewert Verbleibende Größe des Puffers. ## `void horizon::general::IDataBuffer::skip(size_t)` Überspringt x Bytes im Puffer. ### <a name="parameters"></a>Parameter - Anzahl der zu überspringenden Bytes.
39.373802
250
0.770854
deu_Latn
0.984369
a94c958811162e9227943814b1b13d8fc950ecf2
726
md
Markdown
site/blog/_posts/2015-04-22-thank-you-stickers.md
juhalindfors/bazel-patches
d915827cd9db2fd5e81abda9cc3c63b2fe4663f7
[ "Apache-2.0" ]
3
2019-02-28T06:23:21.000Z
2021-11-12T11:23:59.000Z
site/blog/_posts/2015-04-22-thank-you-stickers.md
juhalindfors/bazel-patches
d915827cd9db2fd5e81abda9cc3c63b2fe4663f7
[ "Apache-2.0" ]
14
2021-06-12T01:31:36.000Z
2021-06-23T21:33:54.000Z
site/blog/_posts/2015-04-22-thank-you-stickers.md
juhalindfors/bazel-patches
d915827cd9db2fd5e81abda9cc3c63b2fe4663f7
[ "Apache-2.0" ]
3
2018-02-20T12:56:28.000Z
2021-06-12T01:25:03.000Z
--- layout: posts title: Stickers for Contributors --- <img src="/assets/bazel-stickers.jpg" alt="Bazel stickers" class="img-responsive"> We just got Bazel stickers and we'd like to send them to all of the people who have sent us pull requests and patches over the last month. If you'd like some stickers, please [send us](mailto:kchodorow@google.com?subject=Send me stickers!) your Github username and mailing address. Let us know if you've done any of the following and we'll send you stickers: * Gone through a Gerrit code review. * Opened a pull request on GitHub. * Sent us a patch on the mailing list. * Are in the process of doing any of the things above. Thanks for your contributions, we really appreciate them.
34.571429
86
0.763085
eng_Latn
0.997871
a94d08bb2ecfcfa5a435694a5ad9b111883552e4
914
md
Markdown
modules/userguide/guide/he-il/about.flow.md
dedienu/sidemik
166dd7fbbf051bbe67c0380960e713a399334204
[ "BSD-3-Clause" ]
2
2017-06-04T04:31:23.000Z
2017-06-04T04:45:35.000Z
modules/userguide/guide/he-il/about.flow.md
dedienu/sidemik
166dd7fbbf051bbe67c0380960e713a399334204
[ "BSD-3-Clause" ]
null
null
null
modules/userguide/guide/he-il/about.flow.md
dedienu/sidemik
166dd7fbbf051bbe67c0380960e713a399334204
[ "BSD-3-Clause" ]
1
2021-04-29T07:10:20.000Z
2021-04-29T07:10:20.000Z
# Request Flow Every application running on Kohana goes through the same process when serving a request to load a page from the server: 1. The application is loaded by running the main `index.php` page 2. It includes the file `APPPATH/bootstrap.php` 3. The bootstrap calls [Kohana::modules] with the list of modules in use 1. An array is built with the paths of all the directories and files that make up each module 2. Each module is checked for an init.php file and, if one exists, it is loaded * Every init.php file may define new routes, which are loaded into the system 4. [Request::instance] runs in order to execute the request 1. The request is matched against the existing routes to find the appropriate one 2. The controller is loaded and the request is passed to it 3. The [Controller::before] method of the matching controller is called 4. The controller action selected by the route is called 5. The [Controller::after] method of the matching controller is called 5. The result is displayed You can change how the controller itself behaves from the [Controller::before] method, based on the variables in the request [!!] Stub
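A minimal controller sketch of the `before`/action/`after` hooks from steps 4.3–4.5 above. The class and action names are illustrative, and setting the response via `$this->request->response` is the Kohana 3.0-style API; newer 3.x versions use `$this->response->body()` instead.

```php
<?php defined('SYSPATH') or die('No direct script access.');

class Controller_Welcome extends Controller {

    public function before()
    {
        // Runs before the action selected by the route (step 4.3).
        parent::before();
    }

    public function action_index()
    {
        // The action chosen from the route (step 4.4).
        $this->request->response = 'Hello from Kohana';
    }

    public function after()
    {
        // Runs after the action (step 4.5).
        parent::after();
    }
}
```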
39.73913
100
0.757112
heb_Hebr
1.000007
a94d343e605bbec032d0438a5222b4195cdd835a
1,564
md
Markdown
_post_preparation/architectural-pattern/ddd/2021-04-15-service-pattern.md
gamethapcam/gamethapcam.github.io
2653950076f1683b33471f8c881856fae96302d8
[ "MIT" ]
4
2019-01-04T13:59:52.000Z
2021-11-09T20:40:29.000Z
_post_preparation/architectural-pattern/ddd/2021-04-15-service-pattern.md
DucManhPhan/DucManhPhan.github.io
4e2493ba3b9415a8141585fc1c09f506f00a8a3e
[ "MIT" ]
null
null
null
_post_preparation/architectural-pattern/ddd/2021-04-15-service-pattern.md
DucManhPhan/DucManhPhan.github.io
4e2493ba3b9415a8141585fc1c09f506f00a8a3e
[ "MIT" ]
null
null
null
--- layout: post title: Service Pattern bigimg: /img/image-header/yourself.jpeg tags: [DDD] --- <br> ## Table of contents - [Given problem](#given-problem) - [Solution of Service Pattern](#solution-of-service-pattern) - [When to use](#when-to-use) - [Benefits and Drawbacks](#benefits-and-drawbacks) - [Wrapping up](#wrapping-up) <br> ## Given problem <br> ## Solution of Service Pattern Below are some types of service patterns. 1. Application service - will change very often and evolve a lot. - - 2. Domain service - They are used to model primary operations. - i.e. publish tools for modeling processes. - that don't have an identity or life-cycle in our domain. - that is, that are not linked to one particular aggregate root, perhaps none, or several. - In this terminology, services are not tied to a particular person, place, or thing in the application, but tend to embody processes. - Main rule is to let the Domain layer focus on the business logic. - Named after verbs or business activities. - That domain experts introduce into Ubiquitous Language. - Should be exposed as client-oriented methods. - Following Interface Segregation principle. - As a reusable toolbox: do not leak our domain. - Application layer services - Would use the Domain services. - To implement the needs of client applications. <br> ## When to use <br> ## Benefits and Drawbacks <br> ## Wrapping up <br> Refer: []() []() []() []()
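As a rough illustration of the distinction sketched in the notes above (all type and method names here are hypothetical, and the example is deliberately simplified):

```java
// Domain service: named after a business activity from the Ubiquitous Language,
// stateless, and not owned by any single aggregate.
interface ShippingCostCalculator {
    double costFor(Order order, String destinationCountry);
}

// A trivial aggregate used only to keep the example self-contained.
record Order(String id, double weightKg) {}

// Application service: exposes a client-oriented method and delegates the
// business rule itself to the domain service.
class CheckoutApplicationService {
    private final ShippingCostCalculator calculator;

    CheckoutApplicationService(ShippingCostCalculator calculator) {
        this.calculator = calculator;
    }

    double quoteShipping(Order order, String destinationCountry) {
        return calculator.costFor(order, destinationCountry);
    }
}
```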
15.79798
138
0.670077
eng_Latn
0.995663
a94da881b724b8d337ae17846db3989743d65717
3,536
md
Markdown
_wiki/BioJava_CookBook_Blast_Parser.md
biojava/biojava.github.io
32d95e1e36e7d719b62eaba6bf529e710576d1da
[ "CC-BY-3.0" ]
3
2016-06-10T06:04:51.000Z
2020-01-03T00:47:51.000Z
_wiki/BioJava_CookBook_Blast_Parser.md
biojava/biojava.github.io
32d95e1e36e7d719b62eaba6bf529e710576d1da
[ "CC-BY-3.0" ]
14
2016-03-23T04:38:32.000Z
2020-11-10T00:36:18.000Z
_wiki/BioJava_CookBook_Blast_Parser.md
biojava/biojava.github.io
32d95e1e36e7d719b62eaba6bf529e710576d1da
[ "CC-BY-3.0" ]
16
2016-03-21T16:40:26.000Z
2021-03-17T15:01:10.000Z
--- title: BioJava:CookBook:Blast:Parser permalink: wiki/BioJava%3ACookBook%3ABlast%3AParser --- How Do I Parse A BLAST Result? ------------------------------ Much of the credit for this example belongs to Keith James. A frequent task in bioinformatics is the generation of BLAST search results. BioJava has the ability to parse "Blast-like" output such as Blast and HMMER using a trick that turns the BLAST output into SAX events that can be listened for by registered listeners. The basic pipeline is as follows: `Blast_output -> Generate SAX events --> Convert SAX events --> Build result objects --> Store them in a list.` `InputStream --> BLASTLikeSAXParser --> SeqSimilarityAdapter --> BlastLikeSearchBuilder --> List` The API is very flexible; however, for most purposes the following simple recipe will get you what you want.

```java
import java.io.*;
import java.util.*;
import org.biojava.bio.program.sax.*;
import org.biojava.bio.program.ssbind.*;
import org.biojava.bio.search.*;
import org.biojava.bio.seq.db.*;
import org.xml.sax.*;
import org.biojava.bio.*;

public class BlastParser {
  /**
   * args[0] is assumed to be the name of a Blast output file
   */
  public static void main(String[] args) {
    try {
      //get the Blast input as a Stream
      InputStream is = new FileInputStream(args[0]);
      //make a BlastLikeSAXParser
      BlastLikeSAXParser parser = new BlastLikeSAXParser();
      // try to parse, even if the blast version is not recognized.
      parser.setModeLazy();
      //make the SAX event adapter that will pass events to a Handler.
      SeqSimilarityAdapter adapter = new SeqSimilarityAdapter();
      //set the parsers SAX event adapter
      parser.setContentHandler(adapter);
      //The list to hold the SeqSimilaritySearchResults
      List results = new ArrayList();
      //create the SearchContentHandler that will build SeqSimilaritySearchResults
      //in the results List
      SearchContentHandler builder = new BlastLikeSearchBuilder(results,
          new DummySequenceDB("queries"), new DummySequenceDBInstallation());
      //register builder with adapter
      adapter.setSearchContentHandler(builder);
      //parse the file, after this the result List will be populated with
      //SeqSimilaritySearchResults
      parser.parse(new InputSource(is));
      //output some blast details
      for (Iterator i = results.iterator(); i.hasNext(); ) {
        SeqSimilaritySearchResult result =
            (SeqSimilaritySearchResult) i.next();
        Annotation anno = result.getAnnotation();
        for (Iterator j = anno.keys().iterator(); j.hasNext(); ) {
          Object key = j.next();
          Object property = anno.getProperty(key);
          System.out.println(key + " : " + property);
        }
        System.out.println("Hits: ");
        //list the hits
        for (Iterator k = result.getHits().iterator(); k.hasNext(); ) {
          SeqSimilaritySearchHit hit =
              (SeqSimilaritySearchHit) k.next();
          System.out.print("\tmatch: " + hit.getSubjectID());
          System.out.println("\te score: " + hit.getEValue());
        }
        System.out.println("\n");
      }
    }
    catch (SAXException ex) {
      //XML problem
      ex.printStackTrace();
    } catch (IOException ex) {
      //IO problem, possibly file not found
      ex.printStackTrace();
    }
  }
}
```
33.358491
99
0.639706
eng_Latn
0.638198
a94e2b4cea3e6872404af9283d1ce04fca01a456
1,228
md
Markdown
src/posts/2010-07-15-welcome-to-improv-night-for-real.md
jayspec/jasonspecland.com
0ea732b80cbe221d6dde49a67d1cbe8f93aa9314
[ "MIT" ]
null
null
null
src/posts/2010-07-15-welcome-to-improv-night-for-real.md
jayspec/jasonspecland.com
0ea732b80cbe221d6dde49a67d1cbe8f93aa9314
[ "MIT" ]
null
null
null
src/posts/2010-07-15-welcome-to-improv-night-for-real.md
jayspec/jasonspecland.com
0ea732b80cbe221d6dde49a67d1cbe8f93aa9314
[ "MIT" ]
null
null
null
--- id: 117 title: 'Welcome to Improv Night&#8230; For Real!' date: 2010-07-15T12:35:45-04:00 author: jason layout: post guid: http://jasonspecland.azurewebsites.net/?p=117 permalink: /2010/07/15/welcome-to-improv-night-for-real/ ljID: - "1006" categories: - improv - theater tags: - improv - performance - pit - self-promotion - theater --- As any reader of this blog already knows, I&#8217;ve been doing a lot of improv at the PIT lately. Up until now, it&#8217;s all been open jams and class shows. But no more, my friends! Like an improv Voltron, we&#8217;ve assembled the best parts of my previous classes to create a Robeast-destroying whole. Except that in this case, instead of destroying a Robeast with a flaming sword, we create a really funny show on the spot. We are Jabberwocky, and we are part of the Dream NYC show. Jabberwocky is: Kathryn Dunn Daniel Operman Mary Guiteras Colin Longstaff Nathaniel Bryan Shayne Newton Grier Jason Specland We are performing at: The People&#8217;s Improv Theater 154 W 29th St. NYC Doors open at 9:20, Show starts at 9:30!! $5 (Free for any improv student with a student ID from _any_ improv-teaching institution!) Be there, or get eaten.
27.288889
429
0.736971
eng_Latn
0.979521
a94e92db84008f9b70b5b83247e14b5d66b4dad0
347
md
Markdown
README.md
tktcorporation/static-web-simple
e444014deaeee46005fb422f531d3130648e69f2
[ "MIT" ]
null
null
null
README.md
tktcorporation/static-web-simple
e444014deaeee46005fb422f531d3130648e69f2
[ "MIT" ]
null
null
null
README.md
tktcorporation/static-web-simple
e444014deaeee46005fb422f531d3130648e69f2
[ "MIT" ]
null
null
null
# Static Web Simple A simple example of serving static HTML with Docker. ## Get Started ### Install Docker https://docs.docker.com/get-docker/ ### Start the Service ```bash $ docker-compose up ``` ### Open The Page Open `http://localhost:8000/public/index.html` from a web browser on the Docker host (the host is usually reachable as `0.0.0.0` or `localhost`).
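For orientation, a hypothetical `docker-compose.yml` that matches the URL above (port 8000, content served from the repository root so that `/public/index.html` resolves) could look like the sketch below; the actual file shipped in this repository may differ.

```yaml
# Hypothetical compose file - the repository's real docker-compose.yml may differ.
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8000:80"                    # README opens http://localhost:8000/...
    volumes:
      - ./:/usr/share/nginx/html:ro  # serve the repo root so /public/index.html resolves
```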
15.086957
46
0.685879
kor_Hang
0.513183
a94f2494ebeef874bbaf7046291e46c5b371d243
12,712
md
Markdown
CHANGELOG.md
spinels/rack-ssl-enforcer
38221a5ce0ec4717dfa68e3b121f9b65dfe5daa3
[ "MIT" ]
2
2021-12-09T12:07:20.000Z
2022-01-13T17:52:54.000Z
CHANGELOG.md
spinels/rack-ssl-enforcer
38221a5ce0ec4717dfa68e3b121f9b65dfe5daa3
[ "MIT" ]
null
null
null
CHANGELOG.md
spinels/rack-ssl-enforcer
38221a5ce0ec4717dfa68e3b121f9b65dfe5daa3
[ "MIT" ]
null
null
null
# Change Log ## [Unreleased](https://github.com/spinels/rack-ssl-enforcer/tree/HEAD) [Full Changelog](https://github.com/spinels/rack-ssl-enforcer/compare/v0.3.0...HEAD) ## [v0.3.0](https://github.com/spinels/rack-ssl-enforcer/tree/v0.3.0) (2022-01-09) [Full Changelog](https://github.com/spinels/rack-ssl-enforcer/compare/v0.2.9...v0.3.0) **Merged pull requests:** - Make middleware thread-safe [\#105](https://github.com/tobmatth/rack-ssl-enforcer/pull/105) ([titanous](https://github.com/titanous)) - Run CI on GitHub Actions [\#1](https://github.com/spinels/rack-ssl-enforcer/pull/105) ([dentarg](https://github.com/dentarg)) - README: Typo [\#100](https://github.com/tobmatth/rack-ssl-enforcer/pull/100) ([olleolleolle](https://github.com/olleolleolle)) - Travis: jruby-9.1.13.0 [\#99](https://github.com/tobmatth/rack-ssl-enforcer/pull/99) ([olleolleolle](https://github.com/olleolleolle)) - Travis: Use jruby-9.1.10.0 in CI matrix [\#97](https://github.com/tobmatth/rack-ssl-enforcer/pull/97) ([olleolleolle](https://github.com/olleolleolle)) - update instructions for configuring nginx behind ELB, add for sinatra [\#96](https://github.com/tobmatth/rack-ssl-enforcer/pull/96) ([bmishkin](https://github.com/bmishkin)) - Travis: use JRuby 9.1.7.0 [\#94](https://github.com/tobmatth/rack-ssl-enforcer/pull/94) ([olleolleolle](https://github.com/olleolleolle)) - README: Use shiny SVG badge [\#93](https://github.com/tobmatth/rack-ssl-enforcer/pull/93) ([olleolleolle](https://github.com/olleolleolle)) - Specify to insert SslEnforcer before the Cookies middleware in Rails [\#90](https://github.com/tobmatth/rack-ssl-enforcer/pull/90) ([DimaSamodurov](https://github.com/DimaSamodurov)) - Add example for HSTS preload option on README [\#87](https://github.com/tobmatth/rack-ssl-enforcer/pull/87) ([camelmasa](https://github.com/camelmasa)) - Adding MIT license to the gemspec. [\#86](https://github.com/tobmatth/rack-ssl-enforcer/pull/86) ([reiz](https://github.com/reiz)) ## [v0.2.9](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.9) (2015-07-22) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.8...v0.2.9) **Closed issues:** - Infinite redirects behind AWS ELB [\#82](https://github.com/tobmatth/rack-ssl-enforcer/issues/82) - Issue with Redirects [\#81](https://github.com/tobmatth/rack-ssl-enforcer/issues/81) - POST requests [\#79](https://github.com/tobmatth/rack-ssl-enforcer/issues/79) - How to handle URI::InvalidURIError? 
[\#78](https://github.com/tobmatth/rack-ssl-enforcer/issues/78) - Cookie session state shared across http and https without disabling force\_secure\_cookies [\#58](https://github.com/tobmatth/rack-ssl-enforcer/issues/58) - :strict option + AJAX requests [\#36](https://github.com/tobmatth/rack-ssl-enforcer/issues/36) **Merged pull requests:** - Add HSTS preload option [\#84](https://github.com/tobmatth/rack-ssl-enforcer/pull/84) ([gorism](https://github.com/gorism)) - added Nginx behind Load Balancer section to readme [\#83](https://github.com/tobmatth/rack-ssl-enforcer/pull/83) ([gnitnuj](https://github.com/gnitnuj)) - respect rack.url\_scheme header for proxied SSL when HTTP\_X\_FORWARDED\_PROTO blank [\#77](https://github.com/tobmatth/rack-ssl-enforcer/pull/77) ([grantspeelman](https://github.com/grantspeelman)) ## [v0.2.8](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.8) (2014-07-18) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.7...v0.2.8) **Closed issues:** - Already encoded url parameters get encoded again when redirecting [\#75](https://github.com/tobmatth/rack-ssl-enforcer/issues/75) - Release new version! \<3 [\#73](https://github.com/tobmatth/rack-ssl-enforcer/issues/73) **Merged pull requests:** - Do not encode already encoded url parameters when redirecting. [\#76](https://github.com/tobmatth/rack-ssl-enforcer/pull/76) ([oveddan](https://github.com/oveddan)) - Enable ignore blocks [\#74](https://github.com/tobmatth/rack-ssl-enforcer/pull/74) ([danielevans](https://github.com/danielevans)) ## [v0.2.7](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.7) (2014-05-23) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.6...v0.2.7) **Fixed bugs:** - Vertical pipe characters in the URL cause an URI::InvalidURIError [\#47](https://github.com/tobmatth/rack-ssl-enforcer/issues/47) **Closed issues:** - Support for ruby 2.0 and Rails 4 [\#72](https://github.com/tobmatth/rack-ssl-enforcer/issues/72) - Running code before redirect not working [\#70](https://github.com/tobmatth/rack-ssl-enforcer/issues/70) - combine strict and non strict behaviour [\#69](https://github.com/tobmatth/rack-ssl-enforcer/issues/69) - Is there a way to combine mutiple only, multiple ignore with strict [\#68](https://github.com/tobmatth/rack-ssl-enforcer/issues/68) - Enforcing won't preserve HTTP methods [\#65](https://github.com/tobmatth/rack-ssl-enforcer/issues/65) - Rack::SslEnforcer options mess up with 'localhost' [\#64](https://github.com/tobmatth/rack-ssl-enforcer/issues/64) - New rubygems release? 
[\#60](https://github.com/tobmatth/rack-ssl-enforcer/issues/60) **Merged pull requests:** - Fixing issue \#70 - Running code before redirect not working [\#71](https://github.com/tobmatth/rack-ssl-enforcer/pull/71) ([abhasg](https://github.com/abhasg)) - URI encode before passing to URI object to deal with pathological URIs [\#67](https://github.com/tobmatth/rack-ssl-enforcer/pull/67) ([tilthouse](https://github.com/tilthouse)) - Add Ruby 2.1.0 to .travis.yml [\#66](https://github.com/tobmatth/rack-ssl-enforcer/pull/66) ([salimane](https://github.com/salimane)) - Allow for custom, default, or no body when redirecting [\#61](https://github.com/tobmatth/rack-ssl-enforcer/pull/61) ([kcm](https://github.com/kcm)) ## [v0.2.6](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.6) (2013-09-18) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.5...v0.2.6) **Closed issues:** - Allow proc to be called before forcing a redirect [\#56](https://github.com/tobmatth/rack-ssl-enforcer/issues/56) - force internationalization [\#54](https://github.com/tobmatth/rack-ssl-enforcer/issues/54) - Add environment constraints [\#51](https://github.com/tobmatth/rack-ssl-enforcer/issues/51) - @scheme leak across requests [\#48](https://github.com/tobmatth/rack-ssl-enforcer/issues/48) - Regex [\#46](https://github.com/tobmatth/rack-ssl-enforcer/issues/46) - SSL :ignore ignored for routable addresses, but works for static addresses [\#43](https://github.com/tobmatth/rack-ssl-enforcer/issues/43) - SSL-only, HTTP-only, and mixed [\#39](https://github.com/tobmatth/rack-ssl-enforcer/issues/39) - :mixed doesn't allow insecure GET [\#21](https://github.com/tobmatth/rack-ssl-enforcer/issues/21) - Secure cookie flag forced [\#20](https://github.com/tobmatth/rack-ssl-enforcer/issues/20) **Merged pull requests:** - Add user agent support [\#59](https://github.com/tobmatth/rack-ssl-enforcer/pull/59) ([carmstrong](https://github.com/carmstrong)) - allow proc to be called before redirecting fixes \#56 [\#57](https://github.com/tobmatth/rack-ssl-enforcer/pull/57) ([oveddan](https://github.com/oveddan)) - Add environment constraints [\#52](https://github.com/tobmatth/rack-ssl-enforcer/pull/52) ([wyattisimo](https://github.com/wyattisimo)) - fix bug that was setting arrays as keys in default\_options hash [\#50](https://github.com/tobmatth/rack-ssl-enforcer/pull/50) ([wyattisimo](https://github.com/wyattisimo)) - Add test for nested url ignores. [\#49](https://github.com/tobmatth/rack-ssl-enforcer/pull/49) ([ktusznio](https://github.com/ktusznio)) - Update README.md [\#45](https://github.com/tobmatth/rack-ssl-enforcer/pull/45) ([potomak](https://github.com/potomak)) - Option for HTTP status code for redirection [\#44](https://github.com/tobmatth/rack-ssl-enforcer/pull/44) ([ochko](https://github.com/ochko)) ## [v0.2.5](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.5) (2012-11-14) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.4...v0.2.5) **Closed issues:** - SSL-only, HTTP-only, and mixed [\#38](https://github.com/tobmatth/rack-ssl-enforcer/issues/38) - Working on Heroku? [\#35](https://github.com/tobmatth/rack-ssl-enforcer/issues/35) - Redirect not working [\#34](https://github.com/tobmatth/rack-ssl-enforcer/issues/34) - config.middleware.use Rack::SslEnforcer breaks ajax requests [\#31](https://github.com/tobmatth/rack-ssl-enforcer/issues/31) - Apache 2 config? 
[\#30](https://github.com/tobmatth/rack-ssl-enforcer/issues/30) - hsts =\> true doesn't work [\#28](https://github.com/tobmatth/rack-ssl-enforcer/issues/28) - Proper Nginx Config [\#26](https://github.com/tobmatth/rack-ssl-enforcer/issues/26) - strict and HSTS are incompatible [\#8](https://github.com/tobmatth/rack-ssl-enforcer/issues/8) **Merged pull requests:** - Added some more documentation for nginx - specifically re. passenger [\#42](https://github.com/tobmatth/rack-ssl-enforcer/pull/42) ([ktopping](https://github.com/ktopping)) - fix README typo [\#41](https://github.com/tobmatth/rack-ssl-enforcer/pull/41) ([juno](https://github.com/juno)) - Add sinatra/padrino installation instructions. [\#37](https://github.com/tobmatth/rack-ssl-enforcer/pull/37) ([danpal](https://github.com/danpal)) - Huge cleaning and refactoring [\#33](https://github.com/tobmatth/rack-ssl-enforcer/pull/33) ([rymai](https://github.com/rymai)) - Rewrite of enforce\_ssl? and implementation of new options only\_methods and except\_methods. [\#32](https://github.com/tobmatth/rack-ssl-enforcer/pull/32) ([volontarian](https://github.com/volontarian)) - Added documentation on nginx/proxy setups [\#29](https://github.com/tobmatth/rack-ssl-enforcer/pull/29) ([ariejan](https://github.com/ariejan)) ## [v0.2.4](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.4) (2011-09-05) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.3...v0.2.4) ## [v0.2.3](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.3) (2011-08-03) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.2...v0.2.3) **Merged pull requests:** - Update spex [\#27](https://github.com/tobmatth/rack-ssl-enforcer/pull/27) ([hassox](https://github.com/hassox)) - Removing warning [\#25](https://github.com/tobmatth/rack-ssl-enforcer/pull/25) ([honkster](https://github.com/honkster)) - Fix Rails 2.3.x projects for \(for real this time\) [\#24](https://github.com/tobmatth/rack-ssl-enforcer/pull/24) ([natacado](https://github.com/natacado)) - Custom ports, support for Rails 2.3/Rack 1.1 [\#23](https://github.com/tobmatth/rack-ssl-enforcer/pull/23) ([natacado](https://github.com/natacado)) - Make secure cookie flag optional [\#22](https://github.com/tobmatth/rack-ssl-enforcer/pull/22) ([mig-hub](https://github.com/mig-hub)) ## [v0.2.2](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.2) (2011-03-13) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.1...v0.2.2) ## [v0.2.1](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.1) (2011-02-15) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.2.0...v0.2.1) ## [v0.2.0](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.2.0) (2010-11-17) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.1.9...v0.2.0) ## [v0.1.9](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.1.9) (2010-11-17) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.1.8...v0.1.9) ## [v0.1.8](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.1.8) (2010-09-10) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.1.6...v0.1.8) ## [v0.1.6](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.1.6) (2010-09-01) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.1.5...v0.1.6) ## [v0.1.5](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.1.5) (2010-08-31) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.1.4...v0.1.5) ## 
[v0.1.4](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.1.4) (2010-08-30) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.1.3...v0.1.4) ## [v0.1.3](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.1.3) (2010-08-12) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.1.1...v0.1.3) ## [v0.1.1](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.1.1) (2010-03-18) [Full Changelog](https://github.com/tobmatth/rack-ssl-enforcer/compare/v0.1.0...v0.1.1) ## [v0.1.0](https://github.com/tobmatth/rack-ssl-enforcer/tree/v0.1.0) (2010-03-17)
73.906977
205
0.730255
kor_Hang
0.179011
a94fde7ad7528c09e69e66cd6ee807b6d5b3eaeb
4,078
md
Markdown
docs/reporting-services/subscriptions/use-my-subscriptions-native-mode-report-server.md
antoniosql/sql-docs.es-es
0340bd0278b0cf5de794836cd29d53b46452d189
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/reporting-services/subscriptions/use-my-subscriptions-native-mode-report-server.md
antoniosql/sql-docs.es-es
0340bd0278b0cf5de794836cd29d53b46452d189
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/reporting-services/subscriptions/use-my-subscriptions-native-mode-report-server.md
antoniosql/sql-docs.es-es
0340bd0278b0cf5de794836cd29d53b46452d189
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Use My Subscriptions (Native Mode Report Server) | Microsoft Docs ms.date: 07/01/2016 ms.prod: reporting-services ms.prod_service: reporting-services-sharepoint, reporting-services-native ms.technology: subscriptions ms.topic: conceptual helpviewer_keywords: - subscriptions [Reporting Services], My Subscriptions page - My Subscriptions page [Reporting Services] ms.assetid: e96623ba-677e-4748-8787-f32bed3b5c12 author: markingmyname ms.author: maghan ms.openlocfilehash: 83451b167534f1a94b4cd5324a54b5a8f08cb596 ms.sourcegitcommit: 61381ef939415fe019285def9450d7583df1fed0 ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 10/01/2018 ms.locfileid: "47828966" --- # <a name="use-my-subscriptions-native-mode-report-server"></a>Use My Subscriptions (Native Mode Report Server) The [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] web portal includes a **My Subscriptions** page that organizes all of your subscriptions in one place. You can use *My Subscriptions* to view, modify, enable, disable, and delete existing subscriptions; however, you cannot use this page to create subscriptions. My Subscriptions only shows the subscriptions that you created. It does not list subscriptions owned by other users, even if you have been added to them as a subscriber, nor does it show data-driven subscriptions. || |-| |**[!INCLUDE[applies](../../includes/applies-md.md)]** [!INCLUDE[ssRSnoversion](../../includes/ssrsnoversion-md.md)] | The search field dynamically filters the list of subscriptions; you cannot search for subscriptions by name, nor based on trigger information, status information, and so on. For more information, see [Create and manage subscriptions for native mode report servers](../../reporting-services/subscriptions/create-and-manage-subscriptions-for-native-mode-report-servers.md). ## <a name="to-open-the-my-subscriptions-page"></a>To open the My Subscriptions page 1. Open the [!INCLUDE[ssRSnoversion_md](../../includes/ssrsnoversion-md.md)] web portal. 2. Click the settings gear button ![ssrs_portal_settings_gear](../../reporting-services/subscriptions/media/ssrs-portal-settings-gear.png) on the toolbar. 3. Click **My Subscriptions**. For more information, see [Web portal (SSRS Native Mode)](../../reporting-services/web-portal-ssrs-native-mode.md). ## <a name="use-windows-powershell-to-list-mysubscriptions"></a>Use Windows PowerShell to list MySubscriptions ![PowerShell-related content](../../analysis-services/instances/install-windows/media/rs-powershellicon.jpg "PowerShell-related content") The following PowerShell script returns the list of subscriptions and subscription properties for the current user. For more information, see [ReportingService2010.ListMySubscriptions Method](http://technet.microsoft.com/library/reportservice2010.reportingservice2010.listmysubscriptions.aspx).
```
#server - all subscriptions of the current user at the given server or site
$server = "[server name]/reportserver"
$rs2010 = New-WebServiceProxy -Uri "http://$server/ReportService2010.asmx" -Namespace SSRS.ReportingService2010 -UseDefaultCredential;
# "ItemPathOrSiteURL" below is a placeholder for the item path or site URL to query
$subscriptions = $rs2010.ListMySubscriptions("ItemPathOrSiteURL")
$subscriptions | select Path, report, Description, Owner, SubscriptionID, lastexecuted, Status
#uncomment the following to list all the subscription properties
#$subscriptions
```
## <a name="see-also"></a>See also [Data-driven subscriptions](../../reporting-services/subscriptions/data-driven-subscriptions.md) [Subscriptions and Delivery &#40;Reporting Services&#41;](../../reporting-services/subscriptions/subscriptions-and-delivery-reporting-services.md) [Create and manage subscriptions for native mode report servers](http://msdn.microsoft.com/en-us/7f46cbdb-5102-4941-bca2-5e0ff9012c6b)
67.966667
595
0.788622
spa_Latn
0.728268
a9500077657ae2d6aadcca8db285122f3c70122a
8,343
md
Markdown
README.md
Datera/docker-driver
e2f343aa03655a40afaadffc8f210e71c16b04e4
[ "Apache-2.0" ]
2
2017-06-11T15:58:13.000Z
2019-05-14T20:22:15.000Z
README.md
Datera/docker-driver
e2f343aa03655a40afaadffc8f210e71c16b04e4
[ "Apache-2.0" ]
1
2018-07-11T17:09:12.000Z
2018-07-11T17:09:12.000Z
README.md
Datera/docker-driver
e2f343aa03655a40afaadffc8f210e71c16b04e4
[ "Apache-2.0" ]
1
2021-03-25T19:53:12.000Z
2021-03-25T19:53:12.000Z
# Docker volume plugin for Datera Storage backend This plugin uses Datera storage backend as distributed data storage for containers. ## Easy Installation (Docker v1.13+ required) Before enabling the plugin, create the UDC configuration file on each node ```bash $ sudo touch /etc/datera/datera-config.json ``` This is a JSON file with the following structure: ```json { "mgmt_ip": "1.1.1.1", "username": "admin", "password": "password", "tenant": "/root", "api_version": "2.2", "ldap": "" } ``` NOTE: The specified tenant MUST be accessible by the user account provided. Install the iscsi-recv binary on all nodes ```bash $ ./ddct install -u k8s_csi_iscsi ``` See http://github.com/Datera/ddct for instructions on how to download and install ddct Run this on each node that should use the Datera volume driver ```bash $ sudo docker plugin install dateraiodev/docker-driver ``` Update the config file with the relevant information for the cluster then run the following: ```bash $ sudo docker plugin enable dateraiodev/docker-driver ``` ### Usage WHEN USING THE PLUGIN INSTALLATION METHOD YOU MUST REFER TO THE DRIVER BY THE FORM "repository/image" NOT JUST "image" Create a volume ```bash $ sudo docker volume create --name my-vol --driver dateraiodev/docker-driver --opt size=5 ``` Start your docker containers with the option `--volume-driver=dateraiodev/docker-driver` and use the first part of `--volume` to specify the remote volume that you want to connect to: ```bash $ sudo docker run --volume-driver dateraiodev/docker-driver --volume datastore:/data alpine touch /data/hello ``` ## The Other Way (DEPRECATED, required for Mesos installations) ### Installation Download the latest release of the docker-driver from https://github.com/Datera/docker-driver/releases Unzip the binary ```bash unzip dddbin.zip ``` Install udev rules on each docker/mesos node ```bash sudo ./scripts/install_udev_rules.py ``` ### Start driver This plugin doesn't create volumes in your Datera cluster yet, so you'll have to create them yourself first. 1 - Create the config file ```bash $ sudo touch /root/.datera-config-file ``` This is a JSON file with the following structure: ```json { "datera-cluster": "1.1.1.1", "username": "my-user", "password": "my-pass", "debug": false, "ssl": true, "tenant": "/root", "os-user": "root" } ``` Fill out the cluster info in the config file 2 - Start the plugin using this command: ```bash $ sudo ./dddbin ``` PLEASE NOTE: If installing on a Mesos node, the config variable `"framework": "dcos-mesos"` or `"framework": "dcos-docker"` must be set 3a - Create a volume ```bash $ sudo docker volume create --name my-vol --driver datera --opt size=5 ``` 3b - Start your docker containers with the option `--volume-driver=dateraiodev/docker-driver` and use the first part of `--volume` to specify the remote volume that you want to connect to: ```bash $ sudo docker run --volume-driver dateraiodev/docker-driver --volume datastore:/data alpine touch /data/hello ``` # DCOS/MESOSPHERE Instructions ## CAVEATS Currently DCOS and Mesos are very early in their external persistent volume support. Because of this, their volume lifecycle is simpler than other ecosystems. This means only a subset of the Datera product functionality is available through DCOS and Mesos. It also means there are a few wonky behaviors when using the external volume support for DCOS. 
You can read more about that here: https://dcos.io/docs/1.10/storage/external-storage/#potential-pitfalls Download the latest release of the docker-driver from https://github.com/Datera/docker-driver/releases Unzip the binary ```bash unzip dddbin.zip ``` ### Create config file on each node ```bash $ sudo touch datera-config-file.txt ``` This is a JSON file with the following structure: ```json { "datera-cluster": "1.1.1.1", "username": "my-user", "password": "my-pass", "debug": false, "ssl": true, "tenant": "/root", "os-user": "root" } ``` ### Copy config file to all relevant Mesos Agent nodes ```bash scp -i ~/your_ssh_key datera-config-file.txt user@agent-node:/some/location/dddbin ``` ### Start the driver with the config file #### For Mesos Container nodes ```bash sudo ./dddbin -config datera-config-template.txt ``` #### For Docker Container nodes ```bash ./dddbin -config datera-config-template.txt ``` The following json config keys are available to use for Docker container nodes ```text { "datera-cluster": "1.1.1.1", # Datera Cluster Mgmt IP "username": "my-user", # Datera Account Username "password": "my-pass", # Datera Account Password "tenant": "/root", # Datera tenant ID "os-user": "root", # Name of local user to run under "ssl": true|false, # Use SSL for requests "framework": "bare"|"dcos-mesos"|"dcos-docker" # Framework being used "volume": { "size": 16, "replica": 3, "template": null, "fstype": "ext4", "maxiops": null, "maxbw": null, "placement": "hybrid", "persistence": "manual", "clone-src": null } } ``` PLEASE NOTE: Values provided under "volume" are for use by the dcos-docker containerizer only and will hold true for all containers created on the system ### Create a service with Datera storage #### Simple Mesos container setup ```json { "id": "test-datera-2", "instances": 1, "cpus": 0.1, "mem": 32, "cmd": "/bin/cat /dev/urandom > mesos-test/test.img", "container": { "type": "MESOS", "volumes": [ { "containerPath": "mesos-test", "external": { "name": "datera-mesos-test-volume", "provider": "dvdi", "options": { "dvdi/driver": "datera", } }, "mode": "RW" } ] }, "upgradeStrategy": { "minimumHealthCapacity": 0, "maximumOverCapacity": 0 } } ``` The easiest way to generate this JSON config is to go to the DCOS UI and create a new container with an external volume. Then switch "dvdi/driver": "rexray" --> "dvdi/driver": "datera" The default size for a volume created without providing a "dvdi/size" parameter is 16GB #### More Complex Mesos Container All 'dvdi/xxxxx' options must be double-quoted strings ```json { "id": "test-datera-2", "instances": 1, "cpus": 0.1, "mem": 32, "cmd": "/bin/cat /dev/urandom > mesos-test/test.img", "container": { "type": "MESOS", "volumes": [ { "containerPath": "mesos-test", "external": { "name": "datera-mesos-test-volume", "provider": "dvdi", "options": { "dvdi/driver": "datera", "dvdi/size": "33", "dvdi/replica": "3", "dvdi/maxIops": "100", "dvdi/maxBW": "200", "dvdi/placementMode": "hybrid", "dvdi/fsType": "ext4", "dvdi/cloneSrc": "some-app-instance" } }, "mode": "RW" } ] }, "upgradeStrategy": { "minimumHealthCapacity": 0, "maximumOverCapacity": 0 } } ``` PLEASE NOTE: "containerPath" cannot start with a "/", this will break the Mesos agent and cause the container spawn to fail #### For Docker containers You cannot specify any Datera specific information in this JSON blob due to a limitation in the way DCOS interacts with Mesos and Docker. 
The relevant options must be specified during driver instantiation time via the config variables shown in an earlier section. ```json { "id": "test-datera-docker", "instances": 1, "cpus": 0.1, "mem": 32, "cmd": "/bin/cat /dev/urandom > mesos-test/test.img", "container": { "type": "DOCKER", "docker": { "image": "alpine:3.1", "network": "HOST", "forcePullImage": true }, "volumes": [ { "containerPath": "/data/test-volume", "external": { "name": "datera-docker-volume", "provider": "dvdi", "options": { "dvdi/driver": "datera" } }, "mode": "RW" } ] }, "upgradeStrategy": { "minimumHealthCapacity": 0, "maximumOverCapacity": 0 }, } ```
27.717608
188
0.638739
eng_Latn
0.805041
a9501fcaef9b8411ddf51f75c473186a0b1a9bfd
55
md
Markdown
README.md
erician/ctest
0f6adf18c65ea91b341d770eb900e5a4e0079b1c
[ "BSD-3-Clause" ]
null
null
null
README.md
erician/ctest
0f6adf18c65ea91b341d770eb900e5a4e0079b1c
[ "BSD-3-Clause" ]
null
null
null
README.md
erician/ctest
0f6adf18c65ea91b341d770eb900e5a4e0079b1c
[ "BSD-3-Clause" ]
null
null
null
# ctest A test framework for C code, just like googletest.
18.333333
46
0.781818
eng_Latn
0.903331
a9502aa1acc80228b5e2b825beef633c3c75e396
940
md
Markdown
2021/CVE-2021-20136.md
marcostolosa/cve
bfe85c74b105c623c9807e09b2b572f144bf1f1c
[ "MIT" ]
4
2022-03-01T12:31:42.000Z
2022-03-29T02:35:57.000Z
2021/CVE-2021-20136.md
marcostolosa/cve
bfe85c74b105c623c9807e09b2b572f144bf1f1c
[ "MIT" ]
null
null
null
2021/CVE-2021-20136.md
marcostolosa/cve
bfe85c74b105c623c9807e09b2b572f144bf1f1c
[ "MIT" ]
1
2022-03-29T02:35:58.000Z
2022-03-29T02:35:58.000Z
### [CVE-2021-20136](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-20136) ![](https://img.shields.io/static/v1?label=Product&message=ManageEngine%20Log360&color=blue) ![](https://img.shields.io/static/v1?label=Version&message=n%2Fa&color=blue) ![](https://img.shields.io/static/v1?label=Vulnerability&message=Improper%20Access%20Control&color=brighgreen) ### Description ManageEngine Log360 Builds < 5235 are affected by an improper access control vulnerability allowing database configuration overwrite. An unauthenticated remote attacker can send a specially crafted message to Log360 to change its backend database to an attacker-controlled database and to force Log360 to restart. An attacker can leverage this vulnerability to achieve remote code execution by replacing files executed by Log360 on startup. ### POC #### Reference - https://www.tenable.com/security/research/tra-2021-48 #### Github No GitHub POC found.
52.222222
440
0.788298
eng_Latn
0.800721
a951520dbcdaed292b1b6ccf4ae8c70f0430ec3b
499
md
Markdown
deps/hdr_histogram/README.md
cndoit18/redis
89772ed827209c3dca376644498a235ef3edf692
[ "BSD-3-Clause" ]
1
2022-02-11T22:05:05.000Z
2022-02-11T22:05:05.000Z
deps/hdr_histogram/README.md
cndoit18/redis
89772ed827209c3dca376644498a235ef3edf692
[ "BSD-3-Clause" ]
null
null
null
deps/hdr_histogram/README.md
cndoit18/redis
89772ed827209c3dca376644498a235ef3edf692
[ "BSD-3-Clause" ]
null
null
null
HdrHistogram_c v0.11.5 ---------------------------------------------- This port contains a subset of the 'C' version of High Dynamic Range (HDR) Histogram available at [github.com/HdrHistogram/HdrHistogram_c](https://github.com/HdrHistogram/HdrHistogram_c). The code present on `hdr_histogram.c`, `hdr_histogram.h`, and `hdr_atomic.c` was Written by Gil Tene, Michael Barker, and Matt Warren, and released to the public domain, as explained at http://creativecommons.org/publicdomain/zero/1.0/.
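For orientation, a minimal example of recording values and reading a percentile with the upstream HdrHistogram_c API is shown below; since this port is a subset, some of these functions may not be present in every build, so treat it as a sketch rather than a guarantee of the bundled API surface.

```c
#include <stdio.h>
#include <stdint.h>
#include "hdr_histogram.h"

int main(void)
{
    struct hdr_histogram *histogram = NULL;

    /* Track values from 1 to 3,600,000,000 with 3 significant figures. */
    if (hdr_init(1, INT64_C(3600000000), 3, &histogram) != 0) {
        return 1;
    }

    /* Record a few latency samples (e.g. in microseconds). */
    hdr_record_value(histogram, 120);
    hdr_record_value(histogram, 250);
    hdr_record_value(histogram, 98000);

    printf("p99 = %lld\n", (long long) hdr_value_at_percentile(histogram, 99.0));

    hdr_close(histogram);
    return 0;
}
```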
45.363636
187
0.711423
eng_Latn
0.805905
a9515acfe4dd0e6f765b2b666b43ebc8c1d38e2c
3,827
md
Markdown
README.md
StylexTV/Lila
f7aa5a1199e922bd9f0da7551fae17a3b9823b18
[ "MIT" ]
1
2021-04-09T19:40:44.000Z
2021-04-09T19:40:44.000Z
README.md
StylexTV/Lila
f7aa5a1199e922bd9f0da7551fae17a3b9823b18
[ "MIT" ]
1
2021-11-28T19:44:18.000Z
2021-11-29T09:51:46.000Z
README.md
StylexTV/Lila
f7aa5a1199e922bd9f0da7551fae17a3b9823b18
[ "MIT" ]
null
null
null
<h1 align="center"> <br> <img src="https://raw.githubusercontent.com/StylexTV/Lila/main/imgs/cover.png"> <br> </h1> <h4 align="center">♟️ Source code of the Lila Chess engine, made with ❤️ in Java.</h4> <p align="center"> <a href="https://GitHub.com/StylexTV/Lila/stargazers/"> <img alt="stars" src="https://img.shields.io/github/stars/StylexTV/Lila.svg?color=ffdd00"/> </a> <a href="https://www.codacy.com/gh/StylexTV/Lila/dashboard?utm_source=github.com&amp;utm_medium=referral&amp;utm_content=StylexTV/Lila&amp;utm_campaign=Badge_Grade"> <img alt="Codacy Badge" src="https://app.codacy.com/project/badge/Grade/fc5372689544422eb86e33876bbbed15"/> </a> <a> <img alt="Code size" src="https://img.shields.io/github/languages/code-size/StylexTV/Lila.svg"/> </a> <a> <img alt="GitHub repo size" src="https://img.shields.io/github/repo-size/StylexTV/Lila.svg"/> </a> <a> <img alt="Lines of Code" src="https://tokei.rs/b1/github/StylexTV/Lila?category=code"/> </a> </p> ## Overview Lila is a free, open source chess engine written in Java. Furthermore, this project is a UCI chess engine, which means that it does not contain an interface/gui, but is purely text-based. You can either run it from the command prompt via `java -jar lila.jar` or use a chess GUI (e.g. [Cute Chess](https://github.com/cutechess/cutechess)) in order to use it more conveniently. > A compiled binary can be found [here](https://github.com/StylexTV/Lila/raw/main/bins/lila_3.jar). ## Features Coming soon... ## Options The [Universal Chess Interface (UCI)](http://wbec-ridderkerk.nl/html/UCIProtocol.html) is a standard protocol used to communicate between chess programs, and is the recommended way to do so for typical graphical user interfaces (GUI) or chess tools. The following UCI options, which can typically be set via a GUI, are available in Lila: * #### Threads The number of CPU threads used for searching a position. For best performance, set this equal to the number of CPU cores available. * #### Hash The size of the hash table in MB. ## Commands Lila supports most of the regular commands included in the [UCI protocol](http://wbec-ridderkerk.nl/html/UCIProtocol.html), but also has some special commands. Name | Arguments | Description --- | --- | --- uci | - | The UCI startup command. isready | - | Used to synchronize the chess engine with the GUI. setoption | name [value] | Sets an option to a specific value. For buttons, simply omit the *value* argument. ucinewgame | - | Tells the engine that a new game has started. position | [fen &#124; startpos] moves | Sets up a new position. go | depth<br/>movetime<br/>wtime<br/>btime<br/>movestogo<br/>winc<br/>binc | Starts a new search with the specified constraints. stop | - | Ends the current search as soon as possible. d | - | Prints the current position (used for debugging). eval | - | Shows the evaluation of the current position (used for debugging). perft | [depth] | Executes a [perft](https://www.chessprogramming.org/Perft) call to the specified depth.<br/>⚠️ Warning: Starting an unrestricted call locks the program at the moment. quit | - | Stops the program and eliminates all searches that are still running. ## Strength The following table shows the wins, losses, draws and the Elo gain compared to the respective previous version. Version | Wins | Losses | Draws | Elo gain | Production ready --- | --- | --- | --- | --- | --- 3.0.1 | 92 | 0 | 8 | +234 | ❌ ## Project Layout Here you can see the current structure of the project. 
```bash ├─ 📂 bins/ # ✨ Binaries ├─ 📂 src/ # 🌟 Source Files │ └─ 📂 de/lila/ # ✉️ Source Code └─ 📃 CODE_OF_CONDUCT.md # 📌 Code of Conduct └─ 📃 LICENSE # ⚖️ MIT License └─ 📃 README.md # 📖 Read Me! ```
45.023529
249
0.696368
eng_Latn
0.943317
a952ba15b3690507dccb2a8e626e3eac0af91da7
192
md
Markdown
content/papers/1997-PPIG-9th-Collins.md
psychology-of-programming/ppig.org
f8743920ae777c64b7c3d133ba4c730151ee4c50
[ "MIT" ]
null
null
null
content/papers/1997-PPIG-9th-Collins.md
psychology-of-programming/ppig.org
f8743920ae777c64b7c3d133ba4c730151ee4c50
[ "MIT" ]
1
2019-05-25T20:03:29.000Z
2019-05-25T20:03:29.000Z
content/papers/1997-PPIG-9th-Collins.md
psychology-of-programming/ppig.org
f8743920ae777c64b7c3d133ba4c730151ee4c50
[ "MIT" ]
1
2019-06-03T08:53:48.000Z
2019-06-03T08:53:48.000Z
--- title: "Using Software Visualization Technology to Help Genetic Algorithm Designers" authors: [Trevor Collins] abstract: "" publishedAt: "ppig-1997" year: 1997 url_pdf: "" paper_no: 8 ---
19.2
84
0.744792
eng_Latn
0.767056
a952f3b837f44e8b3702112d24df719964af0ee3
38
md
Markdown
README.md
wufashanchu/yiishop
1f2178fa5540535a4afa50665552b8b1a91d6d75
[ "Apache-2.0", "BSD-3-Clause" ]
null
null
null
README.md
wufashanchu/yiishop
1f2178fa5540535a4afa50665552b8b1a91d6d75
[ "Apache-2.0", "BSD-3-Clause" ]
null
null
null
README.md
wufashanchu/yiishop
1f2178fa5540535a4afa50665552b8b1a91d6d75
[ "Apache-2.0", "BSD-3-Clause" ]
null
null
null
# yiishop This is a Yii-based shop application, built for testing.
12.666667
27
0.736842
eng_Latn
0.998755
a953e0ce9ec39e5cb6f0e43ae469e3835b373b59
922
md
Markdown
README.md
preco21/next-playgrounds
069f9f55778c90eb3e00ecdb3f5022ba93185e33
[ "MIT" ]
1
2018-08-03T20:32:18.000Z
2018-08-03T20:32:18.000Z
README.md
preco21/next-playgrounds
069f9f55778c90eb3e00ecdb3f5022ba93185e33
[ "MIT" ]
null
null
null
README.md
preco21/next-playgrounds
069f9f55778c90eb3e00ecdb3f5022ba93185e33
[ "MIT" ]
null
null
null
# Next Playgrounds [![Code Style Prev](https://img.shields.io/badge/code%20style-prev-32c8fc.svg)](https://github.com/preco21/eslint-config-prev) > :rocket: An opinionated setup for Next.js * Core setup from the original [playgrounds](https://github.com/preco21/playgrounds). * Clean and minimal setup for building a React app with `next.js`. * Focused on static web applications. ## Install ```bash $ git clone https://github.com/preco21/next-playgrounds.git $ cd next-playgrounds $ npm install ``` **Prerequisite:** Node.js 9 or higher. ## Usage ### Development mode This command will start the development server. As soon as the server is ready, you can start editing components in the `src` folder. ```bash $ npm run dev ``` ### Build This command will export the static version of the app into the `app` folder for production. ```bash $ npm run build ``` ## License [MIT](https://preco.mit-license.org/)
21.44186
131
0.723427
eng_Latn
0.868762
a9541b9c7b6358f09fda9e730fd02cec79bd649a
1,951
md
Markdown
README.md
boomerspine/Chat-App
1605192e0e6189070de0945ae2b438a429c7a109
[ "MIT" ]
null
null
null
README.md
boomerspine/Chat-App
1605192e0e6189070de0945ae2b438a429c7a109
[ "MIT" ]
null
null
null
README.md
boomerspine/Chat-App
1605192e0e6189070de0945ae2b438a429c7a109
[ "MIT" ]
null
null
null
# Fullstack chatting application with User Authentication using Chat Engine This is a chatting application built using Reactjs. @ant-design, axios and react-chat-engine were some of the dependencies that were used. User can create multiple chat rooms, get sound notification, send text, images, emojis, add users, change admin and a lot more. The backend has been made using [chat engine](https://chatengine.io/) which provides user and admin privileges. <br><br> The project has been built following this [Video](https://youtu.be/jcOKU9f86XE) from JavaScript Mastery YouTube channel. <br><br> ![Screencast from 03-10-21 03_34_44 AM IST](https://user-images.githubusercontent.com/55712612/135733112-033dac7e-12e0-496a-8385-4d8583120d53.gif) <br><br> A particular user can be made admin from the chat engine dashboard. This admin can create accounts for other users, who can login with the credentials provided to them and start chatting in a room. <br><br> ![Screencast from 03-10-21 03_37_54 AM IST](https://user-images.githubusercontent.com/55712612/135733118-d92341aa-0b8b-4694-94aa-a64e4c727453.gif) <br><br> Any user can make a new chat room and add other users directly by their username. # To run the project locally: ### `npm start` Runs the app in the development mode.\ Open [http://localhost:3000](http://localhost:3000) to view it in the browser. Create an account in [chat engine](https://chatengine.io/), make profiles for users, copy project id, move to Chat-App/src/App.js line number 25 and paste. Run `npm start` and the app opens up in browser. ### `npm build` To build a production ready application. The project is deployed here: [test](https://roy-chat-app.netlify.app/). Contact for getting username and password. That's all. You are all set to start your conversation 💯💯 <br> If you like my work do consider dropping a ⭐️ :) Made with 💙️ in India
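The chat screen itself is rendered by `react-chat-engine`; the snippet below is a minimal sketch of where the Project ID ends up. The prop names follow the public `react-chat-engine` API, but the exact contents of this repository's `App.js` are not reproduced here, and the user credentials shown are placeholders.

```jsx
// Minimal sketch - not the repository's exact App.js.
import React from 'react';
import { ChatEngine } from 'react-chat-engine';

const App = () => (
  <ChatEngine
    height="100vh"
    projectID="your-chat-engine-project-id"  // paste the Project ID from chatengine.io here
    userName="demo-user"                     // a user created in the Chat Engine dashboard
    userSecret="demo-password"
  />
);

export default App;
```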
40.645833
387
0.740646
eng_Latn
0.988172
a954c56259f1acd5aa9b183930b3a20249dea738
6,386
md
Markdown
repos/debian/remote/9.12.md
jameywat736/repo-info
99ffe9215302e22f39c6a9693119c500c2e540db
[ "Apache-2.0" ]
null
null
null
repos/debian/remote/9.12.md
jameywat736/repo-info
99ffe9215302e22f39c6a9693119c500c2e540db
[ "Apache-2.0" ]
null
null
null
repos/debian/remote/9.12.md
jameywat736/repo-info
99ffe9215302e22f39c6a9693119c500c2e540db
[ "Apache-2.0" ]
1
2020-03-09T20:33:48.000Z
2020-03-09T20:33:48.000Z
## `debian:9.12` ```console $ docker pull debian@sha256:ddb131307ad9c70ebf8c7962ba73c20101f68c7a511915aea3ad3b7ad47b9d20 ``` - Manifest MIME: `application/vnd.docker.distribution.manifest.list.v2+json` - Platforms: - linux; amd64 - linux; arm variant v5 - linux; arm variant v7 - linux; arm64 variant v8 - linux; 386 - linux; ppc64le - linux; s390x ### `debian:9.12` - linux; amd64 ```console $ docker pull debian@sha256:2f871f6ab95bb64aaf3c0f9c6a8f0a15bcb2a0490a81e89c80f62489ca6716dd ``` - Docker Version: 18.09.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **45.4 MB (45375932 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:6b14557ccda6d43137ebcf46490778af1c8bae98e26b49e1f5ca216bcb9ebf20` - Default Command: `["bash"]` ```dockerfile # Wed, 26 Feb 2020 00:41:14 GMT ADD file:08c5ab7c53526da155d6be40a9795fc08afc9f47bd333c096e90185fe9fab2b1 in / # Wed, 26 Feb 2020 00:41:14 GMT CMD ["bash"] ``` - Layers: - `sha256:c0c53f743a403d45480d026864d9611d6eb898e897d60c13ae854ad453d462a4` Last Modified: Wed, 26 Feb 2020 00:47:05 GMT Size: 45.4 MB (45375932 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `debian:9.12` - linux; arm variant v5 ```console $ docker pull debian@sha256:f52d8f517b4ff1c86d7b3325cfdba4a4a7f09083ff4607ee2a7e13eedc9d5b14 ``` - Docker Version: 18.09.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **44.1 MB (44066613 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:7ac7c191f814b263989ff258c51e7b86629658c319fc77b64be04d5d6d42ca81` - Default Command: `["bash"]` ```dockerfile # Wed, 26 Feb 2020 00:53:01 GMT ADD file:5ed8e2bc0bf117a359d81b052913e99f6139d0b36e5798d75392429effd5afd3 in / # Wed, 26 Feb 2020 00:53:06 GMT CMD ["bash"] ``` - Layers: - `sha256:3294d23573097fea5f2d79377bc86444ebf83e4bed50a8a6c62b7aa8637dd7fe` Last Modified: Wed, 26 Feb 2020 01:03:12 GMT Size: 44.1 MB (44066613 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `debian:9.12` - linux; arm variant v7 ```console $ docker pull debian@sha256:a358becebb7eda15d7fa3b7ada5572411db2207025466fed821280edf2078fca ``` - Docker Version: 18.09.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **42.1 MB (42100335 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:987931aaa1765a857f066f6537821325a46d780a8267f1eb275da4c4e7ae3993` - Default Command: `["bash"]` ```dockerfile # Wed, 26 Feb 2020 00:59:40 GMT ADD file:6497e2f777f4487d9221931005a8b4b81c021442a769b581e223cf30c81ff553 in / # Wed, 26 Feb 2020 00:59:53 GMT CMD ["bash"] ``` - Layers: - `sha256:2478a2ed882cb5fbb5e4e92c9f580da9ca52f5fc78b8bb439ecbf3ac18023caa` Last Modified: Wed, 26 Feb 2020 01:11:29 GMT Size: 42.1 MB (42100335 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `debian:9.12` - linux; arm64 variant v8 ```console $ docker pull debian@sha256:7cfbfce5fc601eed2caf2e3f41b8c7a9a0110f695ed2c59334cb81873076fa02 ``` - Docker Version: 18.09.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **43.2 MB (43158146 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:f02d9d996487bd18689af346c8dd66558361b70bbda752099cdcbb651be16663` - Default Command: `["bash"]` ```dockerfile # Wed, 26 Feb 2020 00:50:03 GMT ADD file:c82c7dc82c2eb3b50218c35e1b848f707b134d2df957f57125408f69aaeb9b96 in / # Wed, 26 Feb 2020 00:50:15 GMT CMD ["bash"] ``` - Layers: - 
`sha256:668c278237ef972312df4979c8fb1a927b38db5da09f094ae4fcc84a061dedf8` Last Modified: Wed, 26 Feb 2020 00:58:30 GMT Size: 43.2 MB (43158146 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `debian:9.12` - linux; 386 ```console $ docker pull debian@sha256:403f3e1dc825f92307c29b34f5d0b3f2a05e46189b0c08684a0bf04a6ad5f8ae ``` - Docker Version: 18.09.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **46.1 MB (46095035 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:3cb599b1ee890f7349d2fb5719e6e1088af3ae4fe0693eb068a6e95944da65fd` - Default Command: `["bash"]` ```dockerfile # Wed, 26 Feb 2020 00:35:22 GMT ADD file:f57292acaae4c3d7e8b3217b28f9efb27faef1c4e08ca95f4a25d1302355fb51 in / # Wed, 26 Feb 2020 00:35:23 GMT CMD ["bash"] ``` - Layers: - `sha256:ef83ea0a858b11598fe63ce7d80a401ebb47c746763b35564bd712f013cb961f` Last Modified: Wed, 26 Feb 2020 00:41:45 GMT Size: 46.1 MB (46095035 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `debian:9.12` - linux; ppc64le ```console $ docker pull debian@sha256:b15a12317df2459e2e47963ef14de650c8adcc1dc940271343172c2d3e8270ec ``` - Docker Version: 18.09.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **45.6 MB (45647081 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:bc5785253f5ecf3f6dffac5db77907538636edd5508d6704a4fab70db1681f4c` - Default Command: `["bash"]` ```dockerfile # Wed, 26 Feb 2020 01:36:21 GMT ADD file:734d2a25bec5d510e72f822b21a5ef5786a928c6cb933209a04c6358fd600b67 in / # Wed, 26 Feb 2020 01:36:29 GMT CMD ["bash"] ``` - Layers: - `sha256:3a8babb46b416d2208716bc29bc3e805dd074b943418fc44ac89d66e363bc68a` Last Modified: Wed, 26 Feb 2020 02:03:30 GMT Size: 45.6 MB (45647081 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip ### `debian:9.12` - linux; s390x ```console $ docker pull debian@sha256:f17410575376cc2ad0f6f172699ee825a51588f54f6d72bbfeef6e2fa9a57e2f ``` - Docker Version: 18.09.7 - Manifest MIME: `application/vnd.docker.distribution.manifest.v2+json` - Total Size: **45.2 MB (45232848 bytes)** (compressed transfer size, not on-disk size) - Image ID: `sha256:e20675bcd6c6d07a205cada59786f1bd94ff67f594ac9049a0dea87f2fce4a78` - Default Command: `["bash"]` ```dockerfile # Wed, 26 Feb 2020 00:44:30 GMT ADD file:bb76ab55808bcd0567a32c3d7328d5c1719147f0f723a4aa574872eb8f12b4d4 in / # Wed, 26 Feb 2020 00:44:38 GMT CMD ["bash"] ``` - Layers: - `sha256:a2baf8ed1bce68a4f36469f3537f12b9e5bebc860ab4aac0954714901595c575` Last Modified: Wed, 26 Feb 2020 00:49:08 GMT Size: 45.2 MB (45232848 bytes) MIME: application/vnd.docker.image.rootfs.diff.tar.gzip
32.252525
92
0.762136
yue_Hant
0.200434
a9566d8a739ef8d8f41f2e90d9a1c22a1633847b
34
md
Markdown
README.md
RaafinHaswoto/nasa-home-page
e10f0b1245762ac5259cfc6f280234ca64ef3813
[ "MIT" ]
null
null
null
README.md
RaafinHaswoto/nasa-home-page
e10f0b1245762ac5259cfc6f280234ca64ef3813
[ "MIT" ]
null
null
null
README.md
RaafinHaswoto/nasa-home-page
e10f0b1245762ac5259cfc6f280234ca64ef3813
[ "MIT" ]
null
null
null
# NASA WEBSITE A build of the NASA website home page.
11.333333
18
0.794118
yue_Hant
0.764536
a95681043f032a299f17f8c6c0ac95826b1e8661
686
md
Markdown
test/README.md
caremesh/fhirpatch
0e7affa8937d0b7e158a253c3fac20d79e457c7d
[ "CC0-1.0" ]
null
null
null
test/README.md
caremesh/fhirpatch
0e7affa8937d0b7e158a253c3fac20d79e457c7d
[ "CC0-1.0" ]
1
2021-05-24T20:31:22.000Z
2021-05-24T21:02:38.000Z
test/README.md
caremesh/fhirpatch
0e7affa8937d0b7e158a253c3fac20d79e457c7d
[ "CC0-1.0" ]
1
2021-11-13T23:26:51.000Z
2021-11-13T23:26:51.000Z
# FHIR R4 Compliance Tests The files in this directory are used as functional tests to confirm the compliance of the full package with the FHIR R4 compliance tests. You should place unit tests in `src/*.test.js`, not here. The files are: * fhirpatch.test.js - generated test file based on the XML & XSL. Regenerate with `yarn regenerate-tests`. * fhir-patch-tests.xml - the FhirPatch compliance tests in XML format downloaded from HL7. * fhirpatch.test.xsl - an XSL stylesheet that transforms the XML into a mocha + chai test file. Unless you make modifications to the XSL, you probably won't need to change these or regenerate them. Please note that test/fhirpatch.xml
36.105263
75
0.763848
eng_Latn
0.996592
a9571e9234d25e7eb13f2cd2b4790bad32090740
79
md
Markdown
README.md
MirahezeBots/jsonparser
a2b8674e8f66279c7483991ddbb3a88102d9c1d8
[ "EFL-2.0" ]
null
null
null
README.md
MirahezeBots/jsonparser
a2b8674e8f66279c7483991ddbb3a88102d9c1d8
[ "EFL-2.0" ]
11
2020-11-27T22:54:09.000Z
2022-01-05T16:32:48.000Z
README.md
MirahezeBots/jsonparser
a2b8674e8f66279c7483991ddbb3a88102d9c1d8
[ "EFL-2.0" ]
null
null
null
# jsonparser A JSON parsing utility for Python that loads JSON data into a cache.
26.333333
65
0.810127
eng_Latn
0.877988
a95754e5a7541f772e361dc75fb663afb34966fb
5,072
md
Markdown
android-runner/README.md
S2-group/mobilesoft-2020-caching-pwa-replication-package
83ad21ba4c7a6a430103caa6616296cbdcf17de3
[ "MIT" ]
null
null
null
android-runner/README.md
S2-group/mobilesoft-2020-caching-pwa-replication-package
83ad21ba4c7a6a430103caa6616296cbdcf17de3
[ "MIT" ]
null
null
null
android-runner/README.md
S2-group/mobilesoft-2020-caching-pwa-replication-package
83ad21ba4c7a6a430103caa6616296cbdcf17de3
[ "MIT" ]
2
2020-10-26T17:04:29.000Z
2020-10-27T13:06:52.000Z
# Android Runner Automated experiment execution on Android devices ## Install This tool is only tested on Ubuntu, but it should work in other linux distributions. You'll need: - Python 2.7 - Android Debug Bridge (`sudo apt install android-tools-adb`) - Android SDK Tools (`sudo apt install monkeyrunner`) - JDK 8 (NOT JDK 9) (`sudo apt install openjdk-8-jre`) - lxml (`sudo apt install python-lxml`) - Pluginbase (`pip install pluginbase`) Additionally, the following are also required for the Batterystats method: - power_profile.xml (retrievable from the device using [APKTool](https://github.com/iBotPeaches/Apktool)) - systrace.py (from the Android SDK Tools) - A device that is able to report on the `idle` and `frequency` states of the CPU using systrace.py Note: It is important that monkeyrunner shares the same adb the experiment is using. Otherwise, there will be an adb restart and output may be tainted by the notification. Note 2: You can specify the path to ADB and/or Monkeyrunner in the experiment configuration. See the Experiment Configuration section below. Note 3: To check whether the device is able to report on the `idle` and `frequency` states of the CPU, you can run the command `python systrace.py -l` and ensure both categories are listed among the supported categories (a small helper sketch for this check follows at the end of this README). ## Quick start To run an experiment, run: ```bash python android_runner your_config.json ``` Example configuration files can be found in the subdirectories of the `example` directory. ## Structure ### devices.json A JSON config that maps device names to their ADB ids for easy reference in config files. ### Experiment Configuration Below is a reference to the fields for the experiment configuration. It is not always updated. **adb_path** *string* Path to ADB. Example path: `/opt/platform-tools/adb` **monkeyrunner_path** *string* Path to Monkeyrunner. Example path: `/opt/platform-tools/bin/monkeyrunner` **systrace_path** *string* Path to Systrace.py. Example path: `/home/user/Android/Sdk/platform-tools/systrace/systrace.py` **powerprofile_path** *string* Path to power_profile.xml. Example path: `android-runner/example/batterystats/power_profile.xml` **type** *string* Type of the experiment. Can be `web` or `native` **replications** *positive integer* Number of times an experiment is run. **duration** *positive integer* The duration of each run in milliseconds. **devices** *Array\<String\>* The names of devices to use. They will be translated into ids defined in devices.json. **paths** *Array\<String\>* The paths to the APKs/URLs to test with. **browsers** *Array\<String\>* *Dependent on type = web* The names of browser(s) to use. Currently supported values are `chrome`. **profilers** *JSON* A JSON object to describe the profilers to be used and their arguments. Below are several examples: ```json "profilers": { "trepn": { "sample_interval": 100, "data_points": ["battery_power", "mem_usage"] } } ``` ```json "profilers": { "android": { "sample_interval": 100, "data_points": ["cpu", "mem"] } } ``` ```json "profilers": { "batterystats": { "cleanup": true } } ``` **cleanup** *boolean* Delete log files required by Batterystats after completion of the experiment. The default is *true*. **scripts** *JSON* A JSON list of types and paths of scripts to run. 
Below is an example: ```json "scripts": { "before_experiment": "before_experiment.py", "before_run": "before_run.py", "interaction": "interaction.py", "after_run": "after_run.py", "after_experiment": "after_experiment.py" } ``` Below are the supported types: - before_experiment executes once before the first run - before_run executes before every run - after_launch executes after the target app/website is launched, but before profiling starts - interaction executes between the start and end of a run - after_run executes after a run completes - after_experiment executes once after the last run ## Detailed documentation The original thesis can be found here: https://drive.google.com/file/d/0B7Fel9yGl5-xc2lEWmNVYkU5d2c/view?usp=sharing The thesis regarding the implementation of Batterystats can be found here: https://drive.google.com/file/d/1O7BqmkRFRDq7AD1oKOGjHqJzCTEe8AMz/view?usp=sharing ## FAQ ### Devices have no permissions (udev requires plugdev group membership) This happens when the user calling adb is not in the plugdev group. #### Fix `sudo usermod -aG plugdev $LOGNAME` #### References https://developer.android.com/studio/run/device.html http://www.janosgyerik.com/adding-udev-rules-for-usb-debugging-android-devices/ ### [Batterystats] IOError: Unable to get atrace data. Did you forget adb root? This happens when the device is unable to retrieve CPU information using systrace.py. #### Fix Check whether the device is able to report on both categories `freq` and `idle` using Systrace: `python systrace.py -l` If the categories are not listed, use a different device. #### References https://developer.android.com/studio/command-line/systrace
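As a hedged illustration of the check described in Note 3 and the FAQ above, the sketch below verifies that a device reports the required systrace categories before a Batterystats experiment is started. This helper is not part of Android Runner itself: the function name and the naive substring matching are assumptions, and the sketch targets Python 3 (separate from the Python 2.7 runner), relying only on the `python systrace.py -l` command that the README documents.

```python
import subprocess

# Hypothetical helper (not part of Android Runner): before starting a
# Batterystats experiment, confirm the attached device reports the `freq` and
# `idle` categories, as the README recommends checking with `python systrace.py -l`.
def device_supports_batterystats(systrace_path):
    result = subprocess.run(
        ["python", systrace_path, "-l"],   # list the categories supported by the device
        capture_output=True,
        text=True,
        check=True,
    )
    listed = result.stdout
    # Naive substring check, purely illustrative.
    return all(category in listed for category in ("freq", "idle"))

if __name__ == "__main__":
    # Example path taken from the README's configuration reference.
    path = "/home/user/Android/Sdk/platform-tools/systrace/systrace.py"
    if device_supports_batterystats(path):
        print("Device can be used with the Batterystats profiler.")
    else:
        print("Use a different device: `freq`/`idle` categories are not reported.")
```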
33.150327
224
0.746254
eng_Latn
0.977082
a957a9a5ec7f7e5a489def74358b1a2338f1b0cb
22
md
Markdown
README.md
everythinggood/yibox
feef1fbc00967762db1ea3c4d747382ddbdc979d
[ "Apache-2.0" ]
null
null
null
README.md
everythinggood/yibox
feef1fbc00967762db1ea3c4d747382ddbdc979d
[ "Apache-2.0" ]
null
null
null
README.md
everythinggood/yibox
feef1fbc00967762db1ea3c4d747382ddbdc979d
[ "Apache-2.0" ]
null
null
null
# yibox Imitates tower, implementing simple functionality
7.333333
13
0.818182
zul_Latn
0.262443
a9588bbd9832c18d4b2ce5dd194a070716074a7b
1,639
md
Markdown
SparkyTestHelpers.Mapping/Help/M_SparkyTestHelpers_Mapping_MapTester_2_WhereMember.md
BrianSchroer/dotnet-test-helpers
3845796f0d992a1297a7cec640db45950e2b38e1
[ "MIT" ]
4
2018-03-06T10:23:54.000Z
2019-09-04T08:47:08.000Z
SparkyTestHelpers.Mapping/Help/M_SparkyTestHelpers_Mapping_MapTester_2_WhereMember.md
BrianSchroer/dotnet-test-helpers
3845796f0d992a1297a7cec640db45950e2b38e1
[ "MIT" ]
7
2018-04-21T12:01:21.000Z
2022-01-10T07:23:02.000Z
SparkyTestHelpers.Mapping/Help/M_SparkyTestHelpers_Mapping_MapTester_2_WhereMember.md
BrianSchroer/dotnet-test-helpers
3845796f0d992a1297a7cec640db45950e2b38e1
[ "MIT" ]
5
2018-04-18T16:30:56.000Z
2022-03-22T07:14:24.000Z
# MapTester(*TSource*, *TDestination*).WhereMember Method Specify *TDestination* property to be tested. **Namespace:**&nbsp;<a href="N_SparkyTestHelpers_Mapping.md">SparkyTestHelpers.Mapping</a><br />**Assembly:**&nbsp;SparkyTestHelpers.Mapping (in SparkyTestHelpers.Mapping.dll) Version: 1.10.2 ## Syntax **C#**<br /> ``` C# public MapMemberTester<TSource, TDestination> WhereMember( Expression<Func<TDestination, Object>> destExpression ) ``` #### Parameters &nbsp;<dl><dt>destExpression</dt><dd>Type: <a href="http://msdn2.microsoft.com/en-us/library/bb335710" target="_blank">System.Linq.Expressions.Expression</a>(<a href="http://msdn2.microsoft.com/en-us/library/bb549151" target="_blank">Func</a>(<a href="T_SparkyTestHelpers_Mapping_MapTester_2.md">*TDestination*</a>, <a href="http://msdn2.microsoft.com/en-us/library/e5kfa45b" target="_blank">Object</a>))<br />Expression to get property name.</dd></dl> #### Return Value Type: <a href="T_SparkyTestHelpers_Mapping_MapMemberTester_2.md">MapMemberTester</a>(<a href="T_SparkyTestHelpers_Mapping_MapTester_2.md">*TSource*</a>, <a href="T_SparkyTestHelpers_Mapping_MapTester_2.md">*TDestination*</a>)<br />New <a href="T_SparkyTestHelpers_Mapping_MapMemberTester_2.md">MapMemberTester(TSource, TDestination)</a> instance. ## Examples MapTester.ForMap<Foo, Bar>() .WhereMember(dest => dest.Baz).ShouldEqual(src => src.Qux) .AssertMappedValues(foo, bar); ## See Also #### Reference <a href="T_SparkyTestHelpers_Mapping_MapTester_2.md">MapTester(TSource, TDestination) Class</a><br /><a href="N_SparkyTestHelpers_Mapping.md">SparkyTestHelpers.Mapping Namespace</a><br />
52.870968
452
0.75961
yue_Hant
0.515822
a958d0ebc0108b361bb7588ecc49a9aca379f60a
2,633
md
Markdown
docs/standard/serialization/index.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/serialization/index.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/standard/serialization/index.md
mtorreao/docs.pt-br
e080cd3335f777fcb1349fb28bf527e379c81e17
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Serialization - .NET description: This article provides information about .NET serialization technologies, including binary serialization, XML and SOAP serialization, and JSON serialization. ms.date: 09/02/2019 helpviewer_keywords: - JSON serialization - XML serialization, defined - binary serialization - serializing objects - serialization - objects, serializing ms.assetid: 4d1111c0-9447-4231-a997-96a2b74b3453 ms.openlocfilehash: b3d76c14dc9180a5f19781122d1a42bcae603e76 ms.sourcegitcommit: b16c00371ea06398859ecd157defc81301c9070f ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 06/06/2020 ms.locfileid: "83377241" --- # <a name="serialization-in-net"></a>Serialization in .NET Serialization is the process of converting the state of an object into a form that can be persisted or transported. The complement of serialization is deserialization, which converts a stream into an object. Together, these processes allow data to be stored and transferred. .NET provides the following serialization technologies: - [Binary serialization](binary-serialization.md) preserves type fidelity, which is useful for preserving the state of an object between different invocations of an application. For example, you can share an object between different applications by serializing it to the Clipboard. You can serialize an object to a stream, to disk, to memory, over the network, and so on. Remoting uses serialization to pass objects "by value" from one computer or application domain to another. - [XML and SOAP serialization](xml-and-soap-serialization.md) serializes only public fields and properties and does not preserve type fidelity. This is useful when you want to provide or consume data without restricting the application that uses the data. Because XML is an open standard, it is an attractive option for sharing data over the Web. SOAP is likewise an open standard and an attractive option. - [JSON serialization](system-text-json-overview.md) serializes only public properties and does not preserve type fidelity. JSON is an open standard that is an attractive option for sharing data on the Web. ## <a name="reference"></a>Reference <xref:System.Runtime.Serialization> Contains classes that can be used for serialization and deserialization of objects. <xref:System.Xml.Serialization> Contains classes that can be used to serialize objects into XML format documents or streams. <xref:System.Text.Json> Contains classes that can be used to serialize objects into JSON format streams or documents.
62.690476
517
0.801367
por_Latn
0.999915
a958d5962866a78ec33299d3e47e3c61565e1be8
4,913
md
Markdown
tools/fidl/fidldoc/src/templates/markdown/testdata/structs_declarations.md
zarelaky/fuchsia
858cc1914de722b13afc2aaaee8a6bd491cd8d9a
[ "BSD-3-Clause" ]
null
null
null
tools/fidl/fidldoc/src/templates/markdown/testdata/structs_declarations.md
zarelaky/fuchsia
858cc1914de722b13afc2aaaee8a6bd491cd8d9a
[ "BSD-3-Clause" ]
null
null
null
tools/fidl/fidldoc/src/templates/markdown/testdata/structs_declarations.md
zarelaky/fuchsia
858cc1914de722b13afc2aaaee8a6bd491cd8d9a
[ "BSD-3-Clause" ]
null
null
null
## **STRUCTS** ### EncodedImage {#EncodedImage} *Defined in [fuchsia.images/encoded_image.fidl](https://fuchsia.googlesource.com/fuchsia/+/master/sdk/fidl/fuchsia.images/encoded_image.fidl#7)* <table> <tr><th>Name</th><th>Type</th><th>Description</th><th>Default</th></tr><tr> <td><code>vmo</code></td> <td> <code>handle&lt;vmo&gt;</code> </td> <td><p>The vmo.</p> </td> <td>No default</td> </tr><tr> <td><code>size</code></td> <td> <code>uint64</code> </td> <td><p>The size of the image in the vmo in bytes.</p> </td> <td>No default</td> </tr> </table> ### ImageInfo {#ImageInfo} *Defined in [fuchsia.images/image_info.fidl](https://fuchsia.googlesource.com/fuchsia/+/master/sdk/fidl/fuchsia.images/image_info.fidl#117)* <p>Information about a graphical image (texture) including its format and size.</p> <table> <tr><th>Name</th><th>Type</th><th>Description</th><th>Default</th></tr><tr> <td><code>transform</code></td> <td> <code><a class='link' href='#Transform'>Transform</a></code> </td> <td><p>Specifies if the image should be mirrored before displaying.</p> </td> <td><a class='link' href='#Transform.NORMAL'>Transform.NORMAL</a></td> </tr><tr> <td><code>width</code></td> <td> <code>uint32</code> </td> <td><p>The width and height of the image in pixels.</p> </td> <td>No default</td> </tr><tr> <td><code>height</code></td> <td> <code>uint32</code> </td> <td></td> <td>No default</td> </tr><tr> <td><code>stride</code></td> <td> <code>uint32</code> </td> <td><p>The number of bytes per row in the image buffer.</p> </td> <td>No default</td> </tr><tr> <td><code>pixel_format</code></td> <td> <code><a class='link' href='#PixelFormat'>PixelFormat</a></code> </td> <td><p>The pixel format of the image.</p> </td> <td><a class='link' href='#PixelFormat.BGRA_8'>PixelFormat.BGRA_8</a></td> </tr><tr> <td><code>color_space</code></td> <td> <code><a class='link' href='#ColorSpace'>ColorSpace</a></code> </td> <td><p>The pixel color space.</p> </td> <td><a class='link' href='#ColorSpace.SRGB'>ColorSpace.SRGB</a></td> </tr><tr> <td><code>tiling</code></td> <td> <code><a class='link' href='#Tiling'>Tiling</a></code> </td> <td><p>The pixel arrangement in memory.</p> </td> <td><a class='link' href='#Tiling.LINEAR'>Tiling.LINEAR</a></td> </tr><tr> <td><code>alpha_format</code></td> <td> <code><a class='link' href='#AlphaFormat'>AlphaFormat</a></code> </td> <td><p>Specifies the interpretion of the alpha channel, if one exists.</p> </td> <td><a class='link' href='#AlphaFormat.OPAQUE'>AlphaFormat.OPAQUE</a></td> </tr> </table> ### PresentationInfo {#PresentationInfo} *Defined in [fuchsia.images/presentation_info.fidl](https://fuchsia.googlesource.com/fuchsia/+/master/sdk/fidl/fuchsia.images/presentation_info.fidl#10)* <p>Information returned by methods such as <code>ImagePipe.PresentImage()</code> and <code>Session.Present()</code>, when the consumer begins preparing the first frame which includes the presented content.</p> <table> <tr><th>Name</th><th>Type</th><th>Description</th><th>Default</th></tr><tr> <td><code>presentation_time</code></td> <td> <code>uint64</code> </td> <td><p>The actual time at which the enqueued operations are anticipated to take visible effect, expressed in nanoseconds in the <code>CLOCK_MONOTONIC</code> timebase.</p> <p>This value increases monotonically with each new frame, typically in increments of the <code>presentation_interval</code>.</p> </td> <td>No default</td> </tr><tr> <td><code>presentation_interval</code></td> <td> <code>uint64</code> </td> <td><p>The nominal amount of time which is 
anticipated to elapse between successively presented frames, expressed in nanoseconds. When rendering to a display, the interval will typically be derived from the display refresh rate.</p> <p>This value is non-zero. It may vary from time to time, such as when changing display modes.</p> </td> <td>No default</td> </tr> </table>
35.601449
153
0.546306
eng_Latn
0.421079
a95987536bad92e252ace61507fbfc8fa099f103
352
md
Markdown
README.md
lucas8/go-rest-boilerplate
d2f98335d15d4a8902527a494b438b86bccdcac0
[ "MIT" ]
null
null
null
README.md
lucas8/go-rest-boilerplate
d2f98335d15d4a8902527a494b438b86bccdcac0
[ "MIT" ]
null
null
null
README.md
lucas8/go-rest-boilerplate
d2f98335d15d4a8902527a494b438b86bccdcac0
[ "MIT" ]
null
null
null
# Go API boilerplate [![Go Report Card](https://goreportcard.com/badge/github.com/lucas8/go-rest-boilerplate)](https://goreportcard.com/report/github.com/lucas8/go-rest-boilerplate) > Follows basic best practices including dependency injection, testing, openAPI/swagger and a maintainable architecture #### Running Swagger ```shell $ swag init ```
29.333333
160
0.772727
eng_Latn
0.538664
a959d7fefa60348de9ebc67f9cf933652cae413f
44,820
md
Markdown
principles/1_SecurityImplementation.md
Path-Check/privacy-security-GPS
d80066015aa426d05ed9ad20ca1a74e967650fb7
[ "MIT" ]
10
2020-05-21T21:08:56.000Z
2020-06-20T06:02:46.000Z
principles/1_SecurityImplementation.md
Path-Check/privacy-security-GPS
d80066015aa426d05ed9ad20ca1a74e967650fb7
[ "MIT" ]
4
2020-05-22T12:06:32.000Z
2020-06-19T07:53:52.000Z
principles/1_SecurityImplementation.md
Path-Check/privacy-security-GPS
d80066015aa426d05ed9ad20ca1a74e967650fb7
[ "MIT" ]
6
2020-06-03T16:18:41.000Z
2020-07-28T17:20:33.000Z
# Introduction This document makes heavy use of the OWASP (Open Web Application Security Project) project. It is best read in conjunction with the OWASP deliverables that explain acronyms, reference test procedures, and provide a glossary. ## Safe Paths ### OWASP Principles The following principles have been derived from [OWASP principles applicable to mobile applications](https://owasp.org/www-project-mobile-security-testing-guide/), and are applicable to the Safe Paths app: * MSTG-ARCH-1 All app components are identified and known to be needed. * MSTG-ARCH-2 Security controls are never enforced only on the client side, but on the respective remote endpoints. * MSTG-ARCH-3 A high-level architecture for the mobile app and all connected remote services has been defined and security has been addressed in that architecture. * MSTG-ARCH-4 Data considered sensitive in the context of the mobile app is clearly identified. * MSTG-ARCH-5 All app components are defined in terms of the business functions and/or security functions they provide. * MSTG-ARCH-6 A threat model for the mobile app and the associated remote services has been produced that identifies potential threats and countermeasures. * MSTG-ARCH-8 There is an explicit policy for how cryptographic keys (if any) are managed, and the lifecycle of cryptographic keys is enforced. Ideally, follow a key management standard such as NIST SP 800-57. * MSTG-ARCH-10 Security is addressed within all parts of the software development lifecycle. * MSTG-CRYPTO-1 The app does not rely on symmetric cryptography with hardcoded keys as a sole method of encryption. * MSTG-CRYPTO-2 The app uses proven implementations of cryptographic primitives. * MSTG-CRYPTO-3 The app uses cryptographic primitives that are appropriate for the particular use-case, configured with parameters that adhere to industry best practices. * MSTG-CRYPTO-4 The app does not use cryptographic protocols or algorithms that are widely considered deprecated for security purposes. * MSTG-CRYPTO-5 The app doesn’t re-use the same cryptographic key for multiple purposes. * MSTG-CRYPTO-6 All random values are generated using a sufficiently secure random number generator. * MSTG-NETWORK-1 Data is encrypted on the network using Transport Layer Security (TLS). The secure channel is used consistently throughout the app. * MSTG-NETWORK-2 The TLS settings are in line with current best practices, or as close as possible if the mobile operating system does not support the recommended standards. * MSTG-NETWORK-3 The app verifies the X.509 certificate of the remote endpoint when the secure channel is established. Only certificates signed by a trusted Certificate Authority (CA) are accepted. * MSTG-NETWORK-6 The app only depends on up-to-date connectivity and security libraries. * MSTG-PLATFORM-1 The app only requests the minimum set of permissions necessary. * MSTG-PLATFORM-2 All inputs from external sources and the user are validated and if necessary sanitized. This includes data received via the User Interface (UI), Inter-Process Communication (IPC) mechanisms such as intents, custom Uniform Resource Locators (URLs), and network sources. * MSTG-PLATFORM-3 The app does not export sensitive functionality via custom URL schemes, unless these mechanisms are properly protected. * MSTG-PLATFORM-4 The app does not export sensitive functionality through IPC facilities, unless these mechanisms are properly protected. * MSTG-PLATFORM-5 JavaScript is disabled in WebViews unless explicitly required. 
* MSTG-PLATFORM-6 WebViews are configured to allow only the minimum set of protocol handlers required (ideally, only https is supported). Potentially dangerous handlers, such as file, tel and app-id, are disabled. * MSTG-PLATFORM-7 If native methods of the app are exposed to a WebView, verify that the WebView only renders JavaScript contained within the app package. * WebView should not have debugging mode enabled in the App release. * MSTG-PLATFORM-8 Object deserialization, if any, is implemented using safe serialization Application Programming Interfaces (APIs). * MSTG-CODE-1 The app is signed and provisioned with a valid certificate, of which the private key is properly protected. * MSTG-CODE-2 The app has been built in release mode, with settings appropriate for a release build (e.g. non-debuggable). * MSTG-CODE-3 Debugging symbols have been removed from native binaries. * MSTG-CODE-4 Debugging code and developer assistance code (e.g. test code, backdoors, hidden settings) have been removed. The app does not log verbose errors or debugging messages. * MSTG-CODE-5 All third party components used by the mobile app, such as libraries and frameworks, are identified, and checked for known vulnerabilities. * MSTG-CODE-6 The app catches and handles possible exceptions. * MSTG-CODE-7 Error handling logic in security controls denies access by default. * MSTG-CODE-8 In unmanaged code, memory is allocated, freed and used securely. * MSTG-CODE-9 Free security features offered by the toolchain, such as byte-code minification, stack protection, Position Independent Executable (PIE) support and automatic reference counting, are activated. * MSTG-STORAGE-1: System credential storage facilities need to be used to store sensitive data, such as Personally Identifiable Information (PII), user credentials or cryptographic keys. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-2: No sensitive data should be stored outside of the app container or system credential storage facilities. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-3: No sensitive data is written to application logs. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-4: No sensitive data is shared with third parties unless it is a necessary part of the architecture. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-5: The keyboard cache is disabled on text inputs that process sensitive data. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-6: No sensitive data is exposed via IPC mechanisms. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-7: No sensitive data, such as passwords or pins, is exposed through the user interface. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-8: No sensitive data is included in backups generated by the mobile operating system. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-9: The app removes sensitive data from views when moved to the background. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/MSTSG_STORAGE.md) * MSTG-STORAGE-10 The app does not hold sensitive data in memory longer than necessary, and memory is cleared explicitly after use. 
* At startup, rooting should be checked for in order to protect user devices against being controlled by malicious software and their personal data accessed. If rooting is found, the user should be notified of the specific risk. * Data should not be stored in files in plaintext at any point ## Safe Places ### General * Whilst Safe Places deployment security best practices are the responsibility of the healthcare authority, developers shall make every effort to encourage healthcare authorities to deploy in a secure way, through documentation and technical means * Access to sensitive data shall be logged and made visible to users * Security and data access logs shall be immutable * Aggregated data published from Safe Places shall not be accessible in plain text to a layman user or malicious actor * Aggregated data published from Safe Places shall be obfuscated or encrypted so that concern points cannot be accessed ### OWASP Principles Based on [this OWASP document](https://github.com/OWASP/ASVS/raw/master/4.0/OWASP%20Application%20Security%20Verification%20Standard%204.0-en.pdf) the correct principles have been included. Specifically that includes Cookie-based Session Management and SOAP Web Service Verification Requirements (as we are using Tokens, and REST), Communications Security Requirements, Authentication Verification, SSRF Protection Requirements, Deployed Application Integrity Controls, Access Control, Build and Validate HTTP Request Header Requirements related requirements (as these are the responsiblity of the implementing Healthcare Authority). #### 1.1 Secure Software Development Lifecycle Requirements * Verify the use of threat modeling for every design change or sprint planning to identify threats, plan for countermeasures, facilitate appropriate risk responses, and guide security testing * Verify documentation and justification of all the application's trust boundaries, components, and significant data flows * Verify definition and security analysis of the application's high-level architecture and all connected remote services * Verify implementation of centralized, simple (economy of design), vetted, secure, and reusable security controls to avoid duplicate, missing, ineffective, or insecure controls * Verify availability of a secure coding checklist, security requirements, guideline, or policy to all developers and testers [![PASS](../images/pass.png?raw=true)](../README.md) #### 1.2 Authentication Architectural Requirements * Verify the use of unique or special low-privilege operating system accounts for all application components, services, and servers. [![PARTIALLY TRUE](../images/partial.png?raw=true)](https://pathcheck.atlassian.net/browse/PLACES-272) * Verify that communications between application components, including APIs, middleware and data layers, are authenticated. Components should have the least necessary privileges needed * Verify that the application uses a single vetted authentication mechanism that is known to be secure, can be extended to include strong authentication, and has sufficient logging and monitoring to detect account abuse or breaches * Verify that all authentication pathways and identity management APIs implement consistent authentication security control strength, such that there are no weaker alternatives per the risk of the application #### 1.4 Access Control Architectural Requirements * Verify that trusted enforcement points such as at access control gateways, servers, and serverless functions enforce access controls. 
Never enforce access controls on the client * Verify that the chosen access control solution is flexible enough to meet the application's needs * Verify enforcement of the principle of least privilege in functions, data files, URLs, controllers, services, and other resources. This implies protection against spoofing and elevation of privilege * Verify the application uses a single and well-vetted access control mechanism for accessing protected data and resources. All requests must pass through this single mechanism to avoid copy and paste or insecure alternative paths. * Verify that attribute or feature-based access control is used whereby the code checks the user's authorization for a feature/data item rather than just their role. Permissions should still be allocated using roles ![](../images/pass.oos?raw=true) #### 1.5 Input and Output Architectural Requirements * Verify that input and output requirements clearly define how to handle and process data based on type, content, and applicable laws, regulations, and other policy compliance * Verify that serialization is not used when communicating with untrusted clients. If this is not possible, ensure that adequate integrity controls (and possibly encryption if sensitive data is sent) are enforced to prevent deserialization attacks including object injection * Verify that input validation is enforced on a trusted service layer * Verify that output encoding occurs close to or by the interpreter for which it is intended. * Verify that there is an explicit policy for management of cryptographic keys and that a cryptographic key lifecycle follows a key management standard such as NIST SP 800-57 * Verify that consumers of cryptographic services protect key material and other secrets by using key vaults or API based alternatives * Verify that all keys and passwords are replaceable and are part of a well-defined process to re-encrypt sensitive data * Verify that symmetric keys, passwords, or API secrets generated by or shared with clients are used only in protecting low risk secrets, such as encrypting local storage, or temporary ephemeral uses such as parameter obfuscation. Sharing secrets with clients is clear-text equivalent and architecturally should be treated as such. #### 1.7 Errors, Logging and Auditing Architectural Requirements * Verify that a common logging format and approach is used across the system * Verify that logs are securely transmitted to a preferably remote system for analysis, detection, alerting, and escalation #### 1.8 Data Protection and Privacy Architectural Requirements * Verify that all sensitive data is identified and classified into protection levels * Verify that all protection levels have an associated set of protection requirements, such as encryption requirements, integrity requirements, retention, privacy and other confidentiality requirements, and that these are applied in the architecture #### 1.9 Communications Architectural Requirements * Verify the application encrypts communications between components, particularly when these components are in different containers, systems, sites, or cloud providers * Verify that application components verify the authenticity of each side in a communication link to prevent person-in-the-middle attacks. 
For example, application components should validate TLS certificates and chains #### 1.10 Malicious Software Architectural Requirements * Verify that a source code control system is in use, with procedures to ensure that check-ins are accompanied by issues or change tickets. The source code control system should have access control and identifiable users to allow traceability of any changes [![PASS](../images/pass.png?raw=true)](https://pathcheck.atlassian.net/wiki/spaces/SA/pages/50824646/2.+Contribution+Guidelines) #### 1.11 Business Logic Architectural Requirements * Verify the definition and documentation of all application components in terms of the business or security functions they provide * Verify that all high-value business logic flows, including authentication, session management and access control, do not share unsynchronized state * Verify that all high-value business logic flows, including authentication, session management and access control are thread safe and resistant to time-of-check and time-of-use race conditions #### 1.12 Secure File Upload Architectural Requirements * Verify that user-uploaded files are stored outside of the web root * Verify that user-uploaded files - if required to be displayed or downloaded from the application - are served by either octet stream downloads, or from an unrelated domain, such as a cloud file storage bucket. Implement a suitable content security policy to reduce the risk from XSS vectors or other attacks from the uploaded file #### 1.14 Configuration Architectural Requirements * Verify the segregation of components of differing trust levels through welldefined security controls, firewall rules, API gateways, reverse proxies, cloudbased security groups, or similar mechanisms * Verify that if deploying binaries to untrusted devices makes use of binary signatures, trusted connections, and verified endpoints * Verify that the build pipeline warns of out-of-date or insecure components and takes appropriate actions * Verify that the build pipeline contains a build step to automatically build and verify the secure deployment of the application, particularly if the application infrastructure is software defined, such as cloud environment build scripts * Verify that application deployments adequately sandbox, containerize and/or isolate at the network level to delay and deter attackers from attacking other applications, especially when they are performing sensitive or dangerous actions such as deserialization * Verify the application does not use unsupported, insecure, or deprecated clientside technologies such as NSAPI plugins, Flash, Shockwave, ActiveX, Silverlight, NACL, or client-side Java applets #### 2.10 Service Authentication Requirements * Verify that integration secrets do not rely on unchanging passwords, such as API keys or shared privileged accounts. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * Verify that if passwords are required, the credentials are not a default account. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * Verify passwords, integrations with databases and third-party systems, seeds and internal secrets, and API keys are managed securely and not included in the source code or stored within source code repositories. Such storage SHOULD resist offline attacks. 
The use of a secure software key store (L1), hardware trusted platform module (TPM), or a hardware security module (L3) is recommended for password storage. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) #### 3.1 Fundamental Session Management Requirements * Verify the application never reveals session tokens in URL parameters or error messages. #### 3.2 Session Binding Requirements * Verify the application generates a new session token on user authentication. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * Verify that session tokens possess at least 64 bits of entropy. [![FAIL](../images/fail.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * Verify the application only stores session tokens in the browser using secure methods such as appropriately secured cookies (see section 3.4) or HTML 5 session storage. [![FAIL](../images/fail.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * Verify that session token are generated using approved cryptographic algorithms. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) #### 3.3 Session Logout and Timeout Requirements * Verify that logout and expiration invalidate the session token, such that the back button or a downstream relying party does not resume an authenticated session, including across relying parties. [![FAIL](../images/fail.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * If authenticators permit users to remain logged in, verify that re-authentication occurs periodically both when actively used or after an idle period. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * Verify that the application terminates all other active sessions after a successful password change, and that this is effective across the application, federated login (if present), and any relying parties. [![FAIL](../images/fail.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * Verify that users are able to view and log out of any or all currently active sessions and device. [![FAIL](../images/fail.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) #### 3.5 Token Based Session Management * Verify the application uses session tokens rather than static API secrets and keys, except with legacy implementations. [![PASS](../images/pass.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) * Verify that stateless session tokens use digital signatures, encryption, and other countermeasures to protect against tampering, enveloping, replay, null cipher, and key substitution attacks. [![FAIL](../images/fail.png?raw=true)](../dynamic_testing/SPL_WebServicesTesting/SPLWebServices.md) #### 3.6 Re-authentication from a Federation or Assertion * Verify that relying parties specify the maximum authentication time to CSPs and that CSPs re-authenticate the subscriber if they haven't used a session within that period. * Verify that CSPs inform relying parties of the last authentication event, to allow RPs to determine if they need to re-authenticate the user. #### 3.7 Defenses Against Session Management Exploits * Verify the application ensures a valid login session or requires re-authentication or secondary verification before allowing any sensitive transactions or account modifications. 
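As a minimal, non-normative Python sketch of the session-token requirements in sections 3.2 and 3.5 above (this is not Safe Places code; the function names are illustrative), a token with well over the required 64 bits of entropy can be produced with the standard `secrets` module and compared in constant time:

```python
import hmac
import secrets

# Illustrative only: generate a session token that satisfies the
# "at least 64 bits of entropy" requirement using a cryptographically
# secure generator, and compare tokens in constant time.
def new_session_token(num_bytes: int = 32) -> str:
    # 32 random bytes = 256 bits of entropy, well above the 64-bit minimum.
    return secrets.token_urlsafe(num_bytes)

def tokens_match(presented: str, stored: str) -> bool:
    # hmac.compare_digest avoids the timing side channel of a plain == check.
    return hmac.compare_digest(presented, stored)

if __name__ == "__main__":
    token = new_session_token()
    print(len(token), "characters,", 32 * 8, "bits of entropy")
    print(tokens_match(token, token))
```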
#### 5.1 Input Validation Requirements * Verify that the application has defenses against HTTP parameter pollution attacks, particularly if the application framework makes no distinction about the source of request parameters (GET, POST, cookies, headers, or environment variables). * Verify that frameworks protect against mass parameter assignment attacks, or that the application has countermeasures to protect against unsafe parameter assignment, such as marking fields private or similar. * Verify that all input (HTML form fields, REST requests, URL parameters, HTTP headers, cookies, batch files, RSS feeds, etc) is validated using positive validation (whitelisting). * Verify that structured data is strongly typed and validated against a defined schema including allowed characters, length and pattern (e.g. credit card numbers or telephone, or validating that two related fields are reasonable, such as checking that suburb and zip/postcode match). * Verify that URL redirects and forwards only allow whitelisted destinations, or show a warning when redirecting to potentially untrusted content. #### 5.2 Sanitization and Sandboxing Requirements * Verify that all untrusted HTML input from WYSIWYG editors or similar is properly sanitized with an HTML sanitizer library or framework feature. * Verify that unstructured data is sanitized to enforce safety measures such as allowed characters and length. * Verify that the application sanitizes user input before passing to mail systems to protect against SMTP or IMAP injection. * Verify that the application avoids the use of eval() or other dynamic code execution features. Where there is no alternative, any user input being included must be sanitized or sandboxed before being executed. * Verify that the application protects against template injection attacks by ensuring that any user input being included is sanitized or sandboxed. * Verify that the application protects against SSRF attacks, by validating or sanitizing untrusted data or HTTP file metadata, such as filenames and URL input fields, use whitelisting of protocols, domains, paths and ports. * Verify that the application sanitizes, disables, or sandboxes user-supplied SVG scriptable content, especially as they relate to XSS resulting from inline scripts, and foreignObject. * Verify that the application sanitizes, disables, or sandboxes user-supplied scriptable or expression template language content, such as Markdown, CSS or XSL stylesheets, BBCode, or similar. #### 5.3 Output encoding and Injection Prevention Requirements * Verify that output encoding is relevant for the interpreter and context required. For example, use encoders specifically for HTML values, HTML attributes, JavaScript, URL Parameters, HTTP headers, SMTP, and others as the context requires, especially from untrusted inputs (e.g. names with Unicode or apostrophes, such as ねこ or O'Hara). * Verify that output encoding preserves the user's chosen character set and locale, such that any Unicode character point is valid and safely handled. * Verify that context-aware, preferably automated - or at worst, manual - output escaping protects against reflected, stored, and DOM based XSS. * Verify that data selection or database queries (e.g. SQL, HQL, ORM, NoSQL) use parameterized queries, ORMs, entity frameworks, or are otherwise protected from database injection attacks. 
* Verify that where parameterized or safer mechanisms are not present, context-specific output encoding is used to protect against injection attacks, such as the use of SQL escaping to protect against SQL injection. * Verify that the application protects against JavaScript or JSON injection attacks, including for eval attacks, remote JavaScript includes, CSP bypasses, DOM XSS, and JavaScript expression evaluation. * Verify that the application protects against LDAP Injection vulnerabilities, or that specific security controls to prevent LDAP Injection have been implemented. * Verify that the application protects against OS command injection and that operating system calls use parameterized OS queries or use contextual command line output encoding. * Verify that the application protects against Local File Inclusion (LFI) or Remote File Inclusion (RFI) attacks. * Verify that the application protects against XPath injection or XML injection attacks. #### 5.4 Memory, String, and Unmanaged Code Requirements * Verify that the application uses memory-safe string, safer memory copy and pointer arithmetic to detect or prevent stack, buffer, or heap overflows. * Verify that format strings do not take potentially hostile input, and are constant. * Verify that sign, range, and input validation techniques are used to prevent integer overflows. #### 5.5 Deserialization Prevention Requirements * Verify that serialized objects use integrity checks or are encrypted to prevent hostile object creation or data tampering. * Verify that the application correctly restricts XML parsers to only use the most restrictive configuration possible and to ensure that unsafe features such as resolving external entities are disabled to prevent XXE. * Verify that deserialization of untrusted data is avoided or is protected in both custom code and third-party libraries (such as JSON, XML and YAML parsers). * Verify that when parsing JSON in browsers or JavaScript-based backends, JSON.parse is used to parse the JSON document. Do not use eval() to parse JSON. #### 6.1 Data Classification * Verify that regulated private data is stored encrypted while at rest, such as personally identifiable information (PII), sensitive personal information, or data assessed likely to be subject to EU's GDPR. * Verify that regulated health data is stored encrypted while at rest, such as medical records, medical device details, or de-anonymized research records. * Verify that regulated financial data is stored encrypted while at rest, such as financial accounts, defaults or credit history, tax records, pay history, beneficiaries, or de-anonymized market or research records. #### 6.2 Algorithms * Verify that all cryptographic modules fail securely, and errors are handled in a way that does not enable Padding Oracle attacks. * Verify that industry proven or government approved cryptographic algorithms, modes, and libraries are used, instead of custom coded cryptography. * Verify that encryption initialization vector, cipher configuration, and block modes are configured securely using the latest advice. * Verify that random number, encryption or hashing algorithms, key lengths, rounds, ciphers or modes, can be reconfigured, upgraded, or swapped at any time, to protect against cryptographic breaks. * Verify that known insecure block modes (i.e. ECB, etc.), padding modes (i.e. PKCS#1 v1.5, etc.), ciphers with small block sizes (i.e. Triple-DES, Blowfish, etc.), and weak hashing algorithms (i.e. MD5, SHA1, etc.) 
are not used unless required for backwards compatibility. * Verify that nonces, initialization vectors, and other single use numbers must not be used more than once with a given encryption key. The method of generation must be appropriate for the algorithm being used. * Verify that encrypted data is authenticated via signatures, authenticated cipher modes, or HMAC to ensure that ciphertext is not altered by an unauthorized party. * Verify that all cryptographic operations are constant-time, with no 'short-circuit' operations in comparisons, calculations, or returns, to avoid leaking information. #### 6.3 Random Values * Verify that all random numbers, random file names, random GUIDs, and random strings are generated using the cryptographic module's approved cryptographically secure random number generator when these random values are intended to be not guessable by an attacker. * Verify that random GUIDs are created using the GUID v4 algorithm, and a cryptographically-secure pseudo-random number generator (CSPRNG). GUIDs created using other pseudo-random number generators may be predictable. * Verify that random numbers are created with proper entropy even when the application is under heavy load, or that the application degrades gracefully in such circumstances. #### 6.4 Secret Management * Verify that a secrets management solution such as a key vault is used to securely create, store, control access to and destroy secrets. * Verify that key material is not exposed to the application but instead uses an isolated security module like a vault for cryptographic operations. #### 7.1 Log Content Requirements * Verify that the application does not log credentials or payment details. Session tokens should only be stored in logs in an irreversible, hashed form. * Verify that the application does not log other sensitive data as defined under local privacy laws or relevant security policy. * Verify that the application logs security relevant events including successful and failed authentication events, access control failures, deserialization failures and input validation failures. * Verify that each log event includes necessary information that would allow for a detailed investigation of the timeline when an event happens. #### 7.2 Log Processing Requirements * Verify that all authentication decisions are logged, without storing sensitive session identifiers or passwords. This should include requests with relevant metadata needed for security investigations. * Verify that all access control decisions can be logged and all failed decisions are logged. This should include requests with relevant metadata needed for security investigations. #### 7.3 Log Protection Requirements * Verify that the application appropriately encodes user-supplied data to prevent log injection. * Verify that all events are protected from injection when viewed in log viewing software. * Verify that security logs are protected from unauthorized access and modification. * Verify that time sources are synchronized to the correct time and time zone. Strongly consider logging only in UTC if systems are global to assist with post-incident forensic analysis. #### 7.4 Error Handling * Verify that a generic message is shown when an unexpected or security sensitive error occurs, potentially with a unique ID which support personnel can use to investigate. * Verify that exception handling (or a functional equivalent) is used across the codebase to account for expected and unexpected error conditions. 
* Verify that a "last resort" error handler is defined which will catch all unhandled exceptions. #### 8.1 General Data Protection * Verify the application protects sensitive data from being cached in server components such as load balancers and application caches. * Verify that all cached or temporary copies of sensitive data stored on the server are protected from unauthorized access or purged/invalidated after the authorized user accesses the sensitive data. * Verify the application minimizes the number of parameters in a request, such as hidden fields, Ajax variables, cookies and header values. * Verify the application can detect and alert on abnormal numbers of requests, such as by IP, user, total per hour or day, or whatever makes sense for the application. #### 8.2 Client-side Data Protection * Verify the application sets sufficient anti-caching headers so that sensitive data is not cached in modern browsers [![PARTIALLY TRUE](../images/partial.png?raw=true)](https://pathcheck.atlassian.net/browse/PLACES-272) * Verify that data stored in client side storage (such as HTML5 local storage, session storage, IndexedDB, regular cookies or Flash cookies) does not contain sensitive data or PII. * Verify that authenticated data is cleared from client storage, such as the browser DOM, after the client or session is terminated. #### 8.3 Sensitive Private Data __Note that some elements of this section have not been included, as they are privacy related and covered in elsewhere.__ * Verify that sensitive data is sent to the server in the HTTP message body or headers, and that query string parameters from any HTTP verb do not contain sensitive data. * Verify that sensitive information contained in memory is overwritten as soon as it is no longer required to mitigate memory dumping attacks, using zeroes or random data. * Verify that sensitive or private information that is required to be encrypted, is encrypted using approved algorithms that provide both confidentiality and integrity. #### 9.1 Communications Security Requirements * Verify that secured TLS is used for all client connectivity, and does not fall back to insecure or unencrypted protocols * Verify using online or up to date TLS testing tools that only strong algorithms, ciphers, and protocols are enabled, with the strongest algorithms and ciphers set as preferred. * Verify that old versions of SSL and TLS protocols, algorithms, ciphers, and configuration are disabled, such as SSLv2, SSLv3, or TLS 1.0 and TLS 1.1. The latest version of TLS should be the preferred cipher suite. #### 10.1 Code Integrity Controls * Verify that a code analysis tool is in use that can detect potentially malicious code, such as time functions, unsafe file operations and network connections. #### 10.2 Malicious Code Search * Verify that the application source code and third party libraries do not contain unauthorized phone home or data collection capabilities. Where such functionality exists, obtain the user's permission for it to operate before collecting any data * Verify that the application does not ask for unnecessary or excessive permissions to privacy related features or sensors, such as contacts, cameras, microphones, or location. 
* Verify that the application source code and third party libraries do not contain back doors, such as hard-coded or additional undocumented accounts or keys, code obfuscation, undocumented binary blobs, rootkits, or anti-debugging, insecure debugging features, or otherwise out of date, insecure, or hidden functionality that could be used maliciously if discovered. * Verify that the application source code and third party libraries does not contain time bombs by searching for date and time related functions. * Verify that the application source code and third party libraries does not contain malicious code, such as salami attacks, logic bypasses, or logic bombs. * Verify that the application source code and third party libraries do not contain Easter eggs or any other potentially unwanted functionality. #### 11.1 Business Logic Security Requirements * Verify the application will only process business logic flows for the same user in sequential step order and without skipping steps. * Verify the application will only process business logic flows with all steps being processed in realistic human time, i.e. transactions are not submitted too quickly. * Verify the application has appropriate limits for specific business actions or transactions which are correctly enforced on a per user basis. * Verify the application has sufficient anti-automation controls to detect and protect against data exfiltration, excessive business logic requests, excessive file uploads or denial of service attacks. * Verify the application has business logic limits or validation to protect against likely business risks or threats, identified using threat modelling or similar methodologies. * Verify the application does not suffer from "time of check to time of use" (TOCTOU) issues or other race conditions for sensitive operations. * Verify the application monitors for unusual events or activity from a business logic perspective. For example, attempts to perform actions out of order or actions which a normal user would never attempt. * Verify the application has configurable alerting when automated attacks or unusual activity is detected. #### 12.1 File Upload Requirements * Verify that the application will not accept large files that could fill up storage or cause a denial of service attack. * Verify that compressed files are checked for "zip bombs" - small input files that will decompress into huge files thus exhausting file storage limits. * Verify that a file size quota and maximum number of files per user is enforced to ensure that a single user cannot fill up the storage with too many files, or excessively large files. #### 12.2 File Integrity Requirements * Verify that files obtained from untrusted sources are validated to be of expected type based on the file's content. #### 12.3 File execution Requirements * Verify that user-submitted filename metadata is not used directly with system or framework file and URL API to protect against path traversal. * Verify that user-submitted filename metadata is validated or ignored to prevent the disclosure, creation, updating or removal of local files (LFI). * Verify that user-submitted filename metadata is validated or ignored to prevent the disclosure or execution of remote files (RFI), which may also lead to SSRF. 
* Verify that the application protects against reflective file download (RFD) by validating or ignoring user-submitted filenames in a JSON, JSONP, or URL parameter, the response Content-Type header should be set to text/plain, and the Content-Disposition header should have a fixed filename. * Verify that untrusted file metadata is not used directly with system API or libraries, to protect against OS command injection. * Verify that the application does not include and execute functionality from untrusted sources, such as unverified content distribution networks, JavaScript libraries, node npm libraries, or server-side DLLs. #### 12.4 File Storage Requirements * Verify that files obtained from untrusted sources are stored outside the web root, with limited permissions, preferably with strong validation. * Verify that files obtained from untrusted sources are scanned by antivirus scanners to prevent upload of known malicious content. #### 12.5 File Download Requirements * Verify that the web tier is configured to serve only files with specific file extensions to prevent unintentional information and source code leakage. For example, backup files (e.g. .bak), temporary working files (e.g. .swp), compressed files (.zip, .tar.gz, etc) and other extensions commonly used by editors should be blocked unless required. __note this should form guidance to HAs, rather than implemented configuration__ * Verify that direct requests to uploaded files will never be executed as HTML/JavaScript content. #### 13.1 Generic Web Service Security Verification Requirements * Verify that all application components use the same encodings and parsers to avoid parsing attacks that exploit different URI or file parsing behavior that could be used in SSRF and RFI attacks. * Verify that access to administration and management functions is limited to authorized administrators. * Verify API URLs do not expose sensitive information, such as the API key, session tokens etc. * Verify that authorization decisions are made at both the URI, enforced by programmatic or declarative security at the controller or router, and at the resource level, enforced by model-based permissions. * Verify that requests containing unexpected or missing content types are rejected with appropriate headers (HTTP response status 406 Unacceptable or 415 Unsupported Media Type). #### 13.2 RESTful Web Service Verification Requirements * Verify that enabled RESTful HTTP methods are a valid choice for the user or action, such as preventing normal users using DELETE or PUT on protected API or resources. * Verify that JSON schema validation is in place and verified before accepting input. * Verify that RESTful web services that utilize cookies are protected from Cross-Site Request Forgery via the use of at least one or more of the following: triple or double submit cookie pattern (see references), CSRF nonces, or ORIGIN request header checks. * Verify that REST services have anti-automation controls to protect against excessive calls, especially if the API is unauthenticated. * Verify that REST services explicitly check the incoming Content-Type to be the expected one, such as application/xml or application/JSON. * Verify that the message headers and payload are trustworthy and not modified in transit. Requiring strong encryption for transport (TLS only) may be sufficient in many cases as it provides both confidentiality and integrity protection. 
Per-message digital signatures can provide additional assurance on top of the transport protections for high-security applications but bring with them additional complexity and risks to weigh against the benefits. #### 13.4 GraphQL and other Web Service Data Layer Security Requirements * Verify that query whitelisting or a combination of depth limiting and amount limiting should be used to prevent GraphQL or data layer expression denial of service (DoS) as a result of expensive, nested queries. For more advanced scenarios, query cost analysis should be used. * Verify that GraphQL or other data layer authorization logic should be implemented at the business logic layer instead of the GraphQL layer. #### 14.2 Dependency * Verify that all components are up to date, preferably using a dependency checker during build or compile time. * Verify that all unneeded features, documentation, samples, configurations are removed, such as sample applications, platform documentation, and default or example users. * Verify that if application assets, such as JavaScript libraries, CSS stylesheets or web fonts, are hosted externally on a content delivery network (CDN) or external provider, Subresource Integrity (SRI) is used to validate the integrity of the asset. * Verify that third party components come from pre-defined, trusted and continually maintained repositories. * Verify that an inventory catalog is maintained of all third party libraries in use. * Verify that the attack surface is reduced by sandboxing or encapsulating third party libraries to expose only the required behaviour into the application. #### 14.3 Unintended Security Disclosure Requirements * Verify that web or application server and framework error messages are configured to deliver user actionable, customized responses to eliminate any unintended security disclosures * Verify that web or application server and application framework debug modes are disabled in production to eliminate debug features, developer consoles, and unintended security disclosures [![PARTIALLY TRUE](../images/partial.png?raw=true)](https://pathcheck.atlassian.net/browse/PLACES-272) * Verify that the HTTP headers or any part of the HTTP response do not expose detailed version information of system components * Verify that every HTTP response contains a content type header specifying a safe character set (e.g., UTF-8, ISO 8859-1) * Verify that all API responses contain Content-Disposition: attachment;filename="api.json" (or other appropriate filename for the content type). * Verify that a content security policy (CSPv2) is in place that helps mitigate impact for XSS attacks like HTML, DOM, JSON, and JavaScript injection vulnerabilities. * Verify that all responses contain X-Content-Type-Options: nosniff. 
[![PARTIALLY TRUE](../images/partial.png?raw=true)](https://pathcheck.atlassian.net/browse/PLACES-272) * Verify that HTTP Strict Transport Security headers are included on all responses and for all subdomains, such as Strict-Transport-Security: max-age=15724800;includeSubdomains [![PARTIALLY TRUE](../images/partial.png?raw=true)](https://pathcheck.atlassian.net/browse/PLACES-272) * Verify that a suitable "Referrer-Policy" header is included, such as "no-referrer" or "same-origin" [![PARTIALLY TRUE](../images/partial.png?raw=true)](https://pathcheck.atlassian.net/browse/PLACES-272) * Verify that a suitable X-Frame-Options or Content-Security-Policy: frame-ancestors header is in use for sites where content should not be embedded in a third-party site [![PARTIALLY TRUE](../images/partial.png?raw=true)](https://pathcheck.atlassian.net/browse/PLACES-272)
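The header requirements in section 14.3 above are often easiest to satisfy in one central place rather than per endpoint. As a minimal sketch only (using Flask purely as an illustrative framework, with placeholder policy values that must be tuned to the real application), this could look like:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def set_security_headers(response):
    # Illustrative values; adjust each policy to the application's actual needs.
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["Strict-Transport-Security"] = "max-age=15724800; includeSubDomains"
    response.headers["Referrer-Policy"] = "no-referrer"
    response.headers["Content-Security-Policy"] = "default-src 'self'; frame-ancestors 'none'"
    return response
```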
116.415584
634
0.808925
eng_Latn
0.997946
a95a4f3b9593b36803f9d335baed0958c5e52b07
177
md
Markdown
README.md
hype-tech/spb-data
794e5dcdebc8a9ace46c7310b9e835cc70dd5712
[ "MIT" ]
3
2018-02-25T01:06:39.000Z
2020-09-24T08:01:48.000Z
README.md
hype-tech/spb-data
794e5dcdebc8a9ace46c7310b9e835cc70dd5712
[ "MIT" ]
null
null
null
README.md
hype-tech/spb-data
794e5dcdebc8a9ace46c7310b9e835cc70dd5712
[ "MIT" ]
2
2020-08-06T12:36:42.000Z
2020-09-24T08:01:51.000Z
A complete list of all the districts, settlements, and streets of Saint Petersburg and the Leningrad Region as JSON. A test implementation in VueJS. To run it, open the index.html file in a browser.
59
128
0.819209
rus_Cyrl
0.878246
4ce0b9977fc1b36ac14f5d8776bb14647b922f3c
6,373
md
Markdown
controls/radimageeditor/getting-started.md
kylemurdoch/xaml-docs
724c2772d5b1bf5c3fe254fdc0653c24d51824fc
[ "MIT", "Unlicense" ]
null
null
null
controls/radimageeditor/getting-started.md
kylemurdoch/xaml-docs
724c2772d5b1bf5c3fe254fdc0653c24d51824fc
[ "MIT", "Unlicense" ]
null
null
null
controls/radimageeditor/getting-started.md
kylemurdoch/xaml-docs
724c2772d5b1bf5c3fe254fdc0653c24d51824fc
[ "MIT", "Unlicense" ]
null
null
null
--- title: Getting Started page_title: Getting Started description: Check our &quot;Getting Started&quot; documentation article for the RadImageEditor {{ site.framework_name }} control. slug: radimageeditor-getting-started tags: getting,started published: True position: 1 --- # Getting Started This tutorial will walk you through the creation of a sample application that contains __RadImageEditor__. ## Assembly references In order to use __RadImageEditor__ in your projects, you have to add references to the following assemblies: * __Telerik.Windows.Controls__ * __Telerik.Windows.Controls.ImageEditor__ * __Telerik.Windows.Controls.Input__ ## Adding RadImageEditor to the Project The next few code examples will demonstrate how to add a __RadImageEditor__ in XAML, load a sample picture and execute a command on that picture. __Example 1__ showcases a __RadImageEditor__ and a Button defined in XAML. #### __[XAML] Example 1: Defining a RadImageEditor in xaml__ {{region xaml-radimageeditor-getting-started-0}} <Grid> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition Height="Auto" /> </Grid.RowDefinitions> <telerik:RadImageEditor x:Name="ImageEditor"/> <Button Click="Button_Click" Content="Rotate picture" Grid.Row="1" /> </Grid> {{endregion}} __Example 2__ shows the telerik namespace used in __Example 1__: #### __[XAML] Example 2: Telerik Namespace declaration__ {{region xaml-radimageeditor-getting-started-1}} xmlns:telerik="http://schemas.telerik.com/2008/xaml/presentation" {{endregion}} In order to show a picture, you can set the __Image__ property of the __RadImageEditor__. It is of type [RadBitmap](https://docs.telerik.com/devtools/wpf/api/telerik.windows.media.imaging.radbitmap). __Example 3__ demonstrates how you can use the [ImageExampleHelper](https://github.com/telerik/xaml-sdk/blob/master/ImageEditor/RadImageEditorUIFirstLook/ImageExampleHelper.cs) class in order to load an Image. It assumes that there is a folder named "SampleImages" with an image named "RadImageEditor.png" inside the project. 
#### __[C#] Example 3: Load image in RadImageEditor__ {{region cs-radimageeditor-getting-started-2}} public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); ImageExampleHelper.LoadSampleImage(this.ImageEditor, "RadImageEditor.png"); } private void Button_Click(object sender, RoutedEventArgs e) { this.ImageEditor.Commands.Rotate180.Execute(this.ImageEditor); } } {{endregion}} #### __[VB.NET] Example 3: Load image in RadImageEditor__ {{region vb-radimageeditor-getting-started-3}} Partial Public Class MainWindow Inherits Window Public Sub New() InitializeComponent() ImageExampleHelper.LoadSampleImage(Me.ImageEditor, "RadImageEditor.png") End Sub Private Sub Button_Click(ByVal sender As Object, ByVal e As RoutedEventArgs) Me.ImageEditor.Commands.Rotate180.Execute(Me.ImageEditor) End Sub End Class {{endregion}} #### __[C#] Example 4: ImageExampleHelper used in Example 3__ {{region cs-radimageeditor-getting-started-4}} public class ImageExampleHelper { private static string SampleImageFolder = "SampleImages/"; public static void LoadSampleImage(RadImageEditorUI imageEditorUI, string image) { using (Stream stream = Application.GetResourceStream(GetResourceUri(SampleImageFolder + image)).Stream) { imageEditorUI.Image = new Telerik.Windows.Media.Imaging.RadBitmap(stream); imageEditorUI.ApplyTemplate(); imageEditorUI.ImageEditor.ScaleFactor = 0; } } public static Uri GetResourceUri(string resource) { AssemblyName assemblyName = new AssemblyName(typeof(ImageExampleHelper).Assembly.FullName); string resourcePath = "/" + assemblyName.Name + ";component/" + resource; Uri resourceUri = new Uri(resourcePath, UriKind.Relative); return resourceUri; } } {{endregion}} #### __[VB.NET] Example 4: ImageExampleHelper used in Example 3__ {{region vb-radimageeditor-getting-started-5}} Public Class ImageExampleHelper Private Shared SampleImageFolder As String = "SampleImages/" Public Shared Sub LoadSampleImage(ByVal imageEditorUI As RadImageEditorUI, ByVal image As String) Using stream As Stream = Application.GetResourceStream(GetResourceUri(SampleImageFolder & image)).Stream imageEditorUI.Image = New Telerik.Windows.Media.Imaging.RadBitmap(stream) imageEditorUI.ApplyTemplate() imageEditorUI.ImageEditor.ScaleFactor = 0 End Using End Sub Public Shared Function GetResourceUri(ByVal resource As String) As Uri Dim assemblyName As New AssemblyName(GetType(ImageExampleHelper).Assembly.FullName) Dim resourcePath As String = "/" & assemblyName.Name & ";component/" & resource Dim resourceUri As New Uri(resourcePath, UriKind.Relative) Return resourceUri End Function End Class {{endregion}} #### __Figure 1: Result from the above examples__ ![RadImageEditor rotating image](images/RadImageEditor_GettingStarted.gif) ## Commands and Tools __Example 3__ demonstrates the usage of a single command over the loaded image. However, the __RadImageEditor__ provides many more [Commands and Tools]({%slug radimageeditor-features-commands-and-tools%}), which can be executed both in code-behind or XAML. ## RadImageEditorUI __RadImageEditor__ is easy to integrate with all kinds of UI thanks to the commanding mechanism that it employs. It has a good-to-go UI that comes out of the box. That is [__RadImageEditorUI__]({%slug radimageeditor-features-radimageeditorui%}), which is quite easily wired to work with the commands and tools that __RadImageEditor__ exposes. As both controls follow closely the command pattern, they can be set up to work with little to no code-behind. 
However, you can implement and wire custom UI, too. ## See Also * [RadImageEditorUI]({%slug radimageeditor-features-radimageeditorui%}) * [Import and Export]({%slug radimageeditor-features-import-export%})
40.852564
527
0.725561
eng_Latn
0.661034
4ce191822808febb7525812b2c56e5c4033c3abf
11,497
md
Markdown
articles/cognitive-services/Anomaly-Detector/includes/quickstarts/anomaly-detector-client-library-python.md
MSFTandrelom/azure-docs.ru-ru
ccbb3e755b9c6e50a8a58babbc01a4e5977635b5
[ "CC-BY-4.0", "MIT" ]
5
2016-12-12T09:33:15.000Z
2017-06-18T11:33:37.000Z
articles/cognitive-services/Anomaly-Detector/includes/quickstarts/anomaly-detector-client-library-python.md
changeworld/azure-docs.ru-ru
980849d8505e40e8b260cb5a35b56e22d55fc9d3
[ "CC-BY-4.0", "MIT" ]
44
2016-12-06T19:42:42.000Z
2017-06-16T13:45:55.000Z
articles/cognitive-services/Anomaly-Detector/includes/quickstarts/anomaly-detector-client-library-python.md
changeworld/azure-docs.ru-ru
980849d8505e40e8b260cb5a35b56e22d55fc9d3
[ "CC-BY-4.0", "MIT" ]
11
2016-11-30T11:36:13.000Z
2017-06-22T14:04:33.000Z
--- title: Краткое руководство по использованию клиентской библиотеки Детектора аномалий (Python) titleSuffix: Azure Cognitive Services services: cognitive-services author: mrbullwinkle manager: nitinme ms.service: cognitive-services ms.topic: include ms.date: 11/25/2020 ms.author: mbullwin ms.openlocfilehash: 216c45bf097718f6a696e64c8bd9c8718fc0185e ms.sourcegitcommit: ba676927b1a8acd7c30708144e201f63ce89021d ms.translationtype: HT ms.contentlocale: ru-RU ms.lasthandoff: 03/07/2021 ms.locfileid: "102444914" --- Приступите к работе с клиентской библиотекой Детектора аномалий для Python. Выполните следующие действия, чтобы установить пакет и приступить к использованию алгоритмов, предоставляемых службой. Служба Детектора аномалий позволяет находить аномалии в данных временных рядов, автоматически применяя для них наиболее подходящие модели, независимо от отрасли, сценария или объема данных. Клиентскую библиотеку Детектора аномалий для Python можно использовать для таких задач: * обнаружение аномалий в наборе данных временных рядов с использованием пакетного запроса; * Обнаружение состояния аномалии последней точки данных во временном ряду. * обнаружение точек изменения тенденций в наборе данных. [Справочная документация по библиотеке](https://go.microsoft.com/fwlink/?linkid=2090370) | [Исходный код библиотеки](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/cognitiveservices/azure-cognitiveservices-anomalydetector) | [Пакет (PyPi)](https://pypi.org/project/azure-ai-anomalydetector/) | [Поиск примера кода на GitHub](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/anomalydetector/azure-ai-anomalydetector/samples) ## <a name="prerequisites"></a>Предварительные требования * [Python 3.x](https://www.python.org/) * [Библиотека анализа данных Pandas](https://pandas.pydata.org/). * Подписка Azure — [создайте бесплатную учетную запись](https://azure.microsoft.com/free/cognitive-services). * Получив подписку Azure, перейдите к <a href="https://ms.portal.azure.com/#create/Microsoft.CognitiveServicesAnomalyDetector" title="Создание ресурса Детектора аномалий" target="_blank">созданию ресурса Детектора аномалий </a> на портале Azure, чтобы получить ключ и конечную точку. Дождитесь, пока закончится развертывание, и нажмите кнопку **Перейти к ресурсу**. * Для подключения приложения к API "Детектор аномалий" потребуется ключ и конечная точка из созданного ресурса. Ключ и конечная точка будут вставлены в приведенный ниже код в кратком руководстве. Используйте бесплатную ценовую категорию (`F0`), чтобы опробовать службу, а затем выполните обновление до платного уровня для рабочей среды. ## <a name="setting-up"></a>Настройка [!INCLUDE [anomaly-detector-environment-variables](../environment-variables.md)] ### <a name="create-a-new-python-application"></a>Создание приложения Python Создайте файл Python и импортируйте следующие библиотеки. ```python import os from azure.ai.anomalydetector import AnomalyDetectorClient from azure.ai.anomalydetector.models import DetectRequest, TimeSeriesPoint, TimeGranularity, \ AnomalyDetectorError from azure.core.credentials import AzureKeyCredential import pandas as pd ``` Создайте переменные для ключа в качестве переменной среды, путь к файлу данных временных рядов, а также расположение Azure для вашей подписки. Например, `westus2`. 
```python SUBSCRIPTION_KEY = os.environ["ANOMALY_DETECTOR_KEY"] ANOMALY_DETECTOR_ENDPOINT = os.environ["ANOMALY_DETECTOR_ENDPOINT"] TIME_SERIES_DATA_PATH = os.path.join("./sample_data", "request-data.csv") ``` ### <a name="install-the-client-library"></a>Установка клиентской библиотеки После установки Python вы можете установить клиентскую библиотеку с помощью следующей команды: ```console pip install --upgrade azure-ai-anomalydetector ``` ## <a name="object-model"></a>Объектная модель Клиент Детектора аномалий — это объект [AnomalyDetectorClient](https://github.com/Azure/azure-sdk-for-python/blob/0b8622dc249969c2f01c5d7146bd0bb36bb106dd/sdk/cognitiveservices/azure-cognitiveservices-anomalydetector/azure/cognitiveservices/anomalydetector/_anomaly_detector_client.py), который выполняет проверку подлинности в Azure с помощью вашего ключа. Клиент теперь может выполнять обнаружение аномалий с использованием [detect_entire_series](https://github.com/Azure/azure-sdk-for-python/blob/bf9d44f2a50aea46a59c4cb83ccfccaff5e2b218/sdk/anomalydetector/azure-ai-anomalydetector/azure/ai/anomalydetector/operations/_anomaly_detector_client_operations.py#L26) для всего набора данных и [detect_last_point](https://github.com/Azure/azure-sdk-for-python/blob/bf9d44f2a50aea46a59c4cb83ccfccaff5e2b218/sdk/anomalydetector/azure-ai-anomalydetector/azure/ai/anomalydetector/operations/_anomaly_detector_client_operations.py#L87) для последней точки данных. Функция [detect_change_point](https://github.com/Azure/azure-sdk-for-python/blob/bf9d44f2a50aea46a59c4cb83ccfccaff5e2b218/sdk/anomalydetector/azure-ai-anomalydetector/azure/ai/anomalydetector/aio/operations_async/_anomaly_detector_client_operations_async.py#L142) обнаруживает точки, которые отмечают изменения тенденции. Данные временного ряда отправляются в виде ряда объекта [TimeSeriesPoints](https://github.com/Azure/azure-sdk-for-python/blob/bf9d44f2a50aea46a59c4cb83ccfccaff5e2b218/sdk/anomalydetector/azure-ai-anomalydetector/azure/ai/anomalydetector/models/_models_py3.py#L370). Объект `DetectRequest` содержит свойства для описания данных (например, `TimeGranularity`) и параметры для обнаружения аномалий. Ответ Детектора аномалий — это объект [LastDetectResponse](/python/api/azure-cognitiveservices-anomalydetector/azure.cognitiveservices.anomalydetector.models.lastdetectresponse), [EntireDetectResponse](/python/api/azure-cognitiveservices-anomalydetector/azure.cognitiveservices.anomalydetector.models.entiredetectresponse) или [ChangePointDetectResponse](https://github.com/Azure/azure-sdk-for-python/blob/bf9d44f2a50aea46a59c4cb83ccfccaff5e2b218/sdk/anomalydetector/azure-ai-anomalydetector/azure/ai/anomalydetector/models/_models_py3.py#L107) (в зависимости от используемого метода). ## <a name="code-examples"></a>Примеры кода Эти фрагменты кода показывают, как выполнить следующие действия с помощью клиентской библиотеки Детектора аномалий для Python: * [аутентификация клиента](#authenticate-the-client); * [загрузка набора данных временного ряда из файла](#load-time-series-data-from-a-file); * [обнаружение аномалий во всем наборе данных](#detect-anomalies-in-the-entire-data-set); * [обнаружение состояния аномалии последней точки данных](#detect-the-anomaly-status-of-the-latest-data-point). * [Обнаружение точек изменения в наборе данных](#detect-change-points-in-the-data-set) ## <a name="authenticate-the-client"></a>Аутентификация клиента Добавьте переменную расположения Azure в конечную точку и проверьте подлинность клиента с помощью ключа. 
```python client = AnomalyDetectorClient(AzureKeyCredential(SUBSCRIPTION_KEY), ANOMALY_DETECTOR_ENDPOINT) ``` ## <a name="load-time-series-data-from-a-file"></a>Загрузка данных временного ряда из файла Скачайте пример данных для этого краткого руководства с сайта [GitHub](https://github.com/Azure-Samples/AnomalyDetector/blob/master/example-data/request-data.csv): 1. В браузере щелкните правой кнопкой мыши пункт **Без обработки**. 2. Выберите **Сохранить ссылку как**. 3. Сохраните файл в формате CSV в каталоге вашего приложения. Данные временных рядов форматируются как CSV-файл и будут отправлены в API Детектора аномалий. Загрузите свой файл данных с помощью метода `read_csv()` библиотеки Pandas и создайте переменную с пустым списком для хранения ряда данных. Выполните итерацию файла и добавьте данные в виде объекта `TimeSeriesPoint`. Этот объект будет содержать метку времени и числовое значение из строк вашего CSV-файла данных. ```python series = [] data_file = pd.read_csv(TIME_SERIES_DATA_PATH, header=None, encoding='utf-8', parse_dates=[0]) for index, row in data_file.iterrows(): series.append(TimeSeriesPoint(timestamp=row[0], value=row[1])) ``` Создайте объект `DetectRequest` с вашим временным рядом и `TimeGranularity` (или периодичности) точек данных. Например, `TimeGranularity.daily`. ```python request = DetectRequest(series=series, granularity=TimeGranularity.daily) ``` ## <a name="detect-anomalies-in-the-entire-data-set"></a>Обнаружение аномалий во всем наборе данных Вызовите API для обнаружения аномалий во всех данных временного ряда с помощью метода `detect_entire_series` клиента. Сохраните возвращенный объект [EntireDetectResponse](/python/api/azure-cognitiveservices-anomalydetector/azure.cognitiveservices.anomalydetector.models.entiredetectresponse). Выполните итерацию списка ответов `is_anomaly` и выведите индексы любых значений `true`. Эти значения соответствуют индексу аномальных точек данных, если они были найдены. ```python print('Detecting anomalies in the entire time series.') try: response = client.detect_entire_series(request) except AnomalyDetectorError as e: print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message)) except Exception as e: print(e) if any(response.is_anomaly): print('An anomaly was detected at index:') for i, value in enumerate(response.is_anomaly): if value: print(i) else: print('No anomalies were detected in the time series.') ``` ## <a name="detect-the-anomaly-status-of-the-latest-data-point"></a>Обнаружение состояний аномалии последней точки данных Вызовите API Детектора аномалий, чтобы определить, является ли последняя точка данных аномальной, с помощью метода `detect_last_point` клиента и сохраните возвращенный объект `LastDetectResponse`. Значение `is_anomaly` в ответе представляет собой логическое значение, указывающее состояние аномалий этой точки. ```python print('Detecting the anomaly status of the latest data point.') try: response = client.detect_last_point(request) except AnomalyDetectorError as e: print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message)) except Exception as e: print(e) if response.is_anomaly: print('The latest point is detected as anomaly.') else: print('The latest point is not detected as anomaly.') ``` ## <a name="detect-change-points-in-the-data-set"></a>Обнаружение точек изменения в наборе данных Вызовите API для обнаружения точек изменения в данных временных рядов с помощью метода `detect_change_point` клиента. 
Сохраните полученный объект `ChangePointDetectResponse`. Выполните итерацию списка ответов `is_change_point` и выведите индексы любых значений `true`. Эти значения соответствуют индексам точек изменения тенденций, если они были обнаружены. ```python print('Detecting change points in the entire time series.') try: response = client.detect_change_point(request) except AnomalyDetectorError as e: print('Error code: {}'.format(e.error.code), 'Error message: {}'.format(e.error.message)) except Exception as e: print(e) if any(response.is_change_point): print('An change point was detected at index:') for i, value in enumerate(response.is_change_point): if value: print(i) else: print('No change point were detected in the time series.') ``` ## <a name="run-the-application"></a>Выполнение приложения Запустите приложение, выполнив команду `python` и используя имя файла. [!INCLUDE [anomaly-detector-next-steps](../quickstart-cleanup-next-steps.md)]
59.880208
1,278
0.803514
rus_Cyrl
0.479906
4ce1c0d65eca5e6470f910002759fea09145b763
403
md
Markdown
README.md
carewdavid/ken
23286636805ecc330f9a44f243d55ec2e00e98a3
[ "BSD-2-Clause" ]
3
2020-04-07T16:06:49.000Z
2021-05-13T09:27:22.000Z
README.md
carewdavid/ken
23286636805ecc330f9a44f243d55ec2e00e98a3
[ "BSD-2-Clause" ]
4
2018-05-21T13:38:26.000Z
2020-03-31T00:58:03.000Z
README.md
carewdavid/ken
23286636805ecc330f9a44f243d55ec2e00e98a3
[ "BSD-2-Clause" ]
2
2019-10-11T19:03:12.000Z
2019-10-11T21:54:17.000Z
# Ken (Kilo enhanced) A lightweight text-based editor.<br/> No dependencies needed, just run make.<br/> This editor is based on the KILO editor as described in: https://viewsourcecode.org/snaptoken/kilo/index.html<br/> I followed the tutorial and added replace capabilities to the search function described there.<br/> I'm planning to maintain this version of the code and keep adding functionality.
57.571429
114
0.794045
eng_Latn
0.998013
4ce1c5ae002da0b6e5a4994276e6a30fcd849722
2,291
md
Markdown
src/pages/posts/001-first-post.md
Rksensational/college-blog-applicaton
070b901118bb0450b500d70b746c68f066636817
[ "MIT" ]
null
null
null
src/pages/posts/001-first-post.md
Rksensational/college-blog-applicaton
070b901118bb0450b500d70b746c68f066636817
[ "MIT" ]
null
null
null
src/pages/posts/001-first-post.md
Rksensational/college-blog-applicaton
070b901118bb0450b500d70b746c68f066636817
[ "MIT" ]
null
null
null
--- title: “A Beginner’s Guide to Deep Learning with PyTorch” date: 2021-01-16 author: "Ritesh Kumar" image: ../../images/imgblog1.jpg tags: - college --- Here is my first week of Deep Learning with PyTorch: Zero to GANs. In just a few years, the number of applications using AI has grown tremendously, from self-driving cars to recommendations from your favorite streaming provider and chatbots. Almost every major research field is now using AI. Behind all this, there is one technology called deep learning. Before going any further, we should ask: what exactly is deep learning, and what makes it so special? Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. For example, home assistant devices that respond to your voice and know your preferences are powered by deep learning. What is a deep learning framework? A deep learning framework is an interface, library, or tool that allows us to build deep learning models more easily and quickly, without getting into the details of the underlying algorithms. Examples: PyTorch, TensorFlow, Sonnet, etc. In this guide, I’d like to tell you what PyTorch is and introduce 5 interesting functions related to PyTorch tensors. Let's get started. PyTorch: PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab (FAIR). The process of training a neural network is simple and clear. At the same time, PyTorch supports data parallelism and distributed learning, and also contains many pre-trained models. Tensor: PyTorch is a Python package that provides tensor computations. Tensors are multidimensional arrays, just like NumPy arrays, but they can also run on a GPU. If you’re wondering how to install PyTorch on your machine, check the PyTorch installation steps for your machine here. Here are 5 interesting functions related to PyTorch tensors that will definitely be useful for you: torch.reciprocal, torch.round, torch.transpose, torch.sign, and torch.matmul, with an example below.
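Since the post names these five tensor functions without demonstrating them, here is a minimal, self-contained sketch (not part of the original article) showing each one on a small tensor:

```python
import torch

# A small 2x2 tensor used to demonstrate each function.
x = torch.tensor([[1.0, 2.0], [-4.0, 0.5]])

print(torch.reciprocal(x))       # element-wise 1/x -> [[1.00, 0.50], [-0.25, 2.00]]
print(torch.round(x))            # rounds each element to the nearest integer
print(torch.transpose(x, 0, 1))  # swaps dimensions 0 and 1 (matrix transpose)
print(torch.sign(x))             # -1, 0, or 1 depending on the sign of each element
print(torch.matmul(x, x))        # matrix product of x with itself
```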
60.289474
244
0.804889
eng_Latn
0.999382
4ce274d4f9ecb59e837865d0a5f2c7ed488a6285
9,624
md
Markdown
step-001-connecting-a-device/README.md
digitway/iot-workshop-asset-tracking
fd6205888d9198c12474da8c96fea6a0ad3a8b6c
[ "CC-BY-4.0", "MIT" ]
null
null
null
step-001-connecting-a-device/README.md
digitway/iot-workshop-asset-tracking
fd6205888d9198c12474da8c96fea6a0ad3a8b6c
[ "CC-BY-4.0", "MIT" ]
null
null
null
step-001-connecting-a-device/README.md
digitway/iot-workshop-asset-tracking
fd6205888d9198c12474da8c96fea6a0ad3a8b6c
[ "CC-BY-4.0", "MIT" ]
null
null
null
# Connecting a device to Azure IoT <!-- omit in toc --> In this section, we are going to setup the messaging infrastructure needed to connect Contoso Art Shipping's asset tracking devices. For the context of this lab, the asset tracking device will be the [MXChip developer kit](https://microsoft.github.io/azure-iot-developer-kit/) running a custom firmware that exposes sensor telemetry using [IoT Plug-and-Play](https://docs.microsoft.com/en-us/azure/iot-pnp/overview-iot-plug-and-play). ## Learning goals <!-- omit in toc --> * Learn how to setup the messaging infrastructure for connecting your IoT devices. * Get familiar with IoT developer tools such as [Visual Studio Code Azure IoT Workbench](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-iot-workbench) and [Azure IoT Explorer](https://github.com/Azure/azure-iot-explorer). * Understand basic security and device management concepts. ## Steps <!-- omit in toc --> * [Create a resource group](#create-a-resource-group) * [Create an Azure IoT Hub and an Azure Device Provisioning Service](#create-an-azure-iot-hub-and-an-azure-device-provisioning-service) * [Connect the MXChip Plug-and-Play asset tracker](#connect-the-mxchip-plug-and-play-asset-tracker) * [Explore the capabilities of the asset tracking device](#explore-the-capabilities-of-the-asset-tracking-device) ### Create a resource group 1. Create an Azure resource group to collect and manage all the application resources we will be provisioning and using during the workshop. \ ![Resource Group](assets/01_Create_Resource_Group.png) 1. Click on **+ Add** button \ ![Add Resource Group](assets/02_Create_Resource_Group_Create.png) 1. Enter **Resource group name**, Select **subscription** and **region**. Click on **Review + Create**, and after reviewing, click on **Create**. > **_NOTE:_** > > * **Resource Group Name**: Use name of your choice. > * **Region**: During public preview, IoT Plug and Play is available in the North Europe, Central US, and Japan East regions. Although we are only creating a resource group at this stage, please make sure you create it in one of these regions to avoid potential mistakes later on. ![Create Resource Group Submit](assets/03_Create_Resource_Group_Submit.png) ### Create an Azure IoT Hub and an Azure Device Provisioning Service Follow the [instructions from Azure IoT DPS documentation](https://docs.microsoft.com/azure/iot-dps/quick-setup-auto-provision#create-an-iot-hub) to provision an Azure IoT Hub, a Device Provisioning Service, and linking the two together. > **_IMPORTANT:_** > > <span style="color:red">**Make sure to create your IoT Hub in one of the following regions in order to get PnP support during the public preview: North Europe, Central US or Japan East.**</span> In order to easily find your resources later, you will want to create the IoT Hub and the Device Provisioning Service in the same Resource Group you've created in the previous step. Once your services are properly provisioned and your DPS service is linked to your IoT Hub instance, you need to create a DPS Group Enrollment. As opposed to single enrollments where you individually "whitelist" and create a set of credentials for every device, a group enrollment can provide you with more flexibility by having a unique set of credentials for all the devices in the same group. In the Azure Portal, navigate to the Device Provisioning Service you have just created. In your provisioning service: 1. Click **Manage enrollments**. 2. 
Click the **Add enrollment group** button at the top. 3. When the "Add Enrollment Group" panel appears, enter the information for the enrollment list entry. **Group name** is required (as a suggestion, you can name the enrollment group "AssetTrackers"). Select "Symmetric Key" for **Attestation Type**. 4. Click **Save**. On successful creation of your enrollment group, you should see the group name appear under the **Enrollment Groups** tab. 5. Select the newly created Enrollment group in the **Enrollment Groups** tab. 6. Click the Copy icon in **Primary Key** in order to copy the symmetric key that has been generated for this enrollment group to your clipboard. You will need it in the following step. > **_NOTE:_** > > For production use cases, you will want to make sure your device is storing the group symmetric key in some form of secure storage and can't easily be extracted from the device. If your group symmetric key is compromised, it potentially allows anyone to provision devices in your environment. ### Connect the MXChip Plug-and-Play asset tracker Follow the instructions available [here](https://github.com/kartben/mxchip_pnp_asset_tracker/blob/master/README.md#prepare-the-device) in order to program your MXChip device with a firmware that is PnP compatible, and reports simulated latitude/longitude in addition to real telemetry coming from its sensors. In addition to your Wi-Fi credentials, you will need to enter the following information for the **Azure IoT DPS Settings**: * **Device ID**: A unique name identifying your device. Name should be alphanumeric, and may contain special characters including colon, period, underscore and hyphen. * **ID Scope**: ID Scope for your DPS service. You can find this information in the Azure Portal, in the Overview section of your DPS service. It should be of the form `0ne0009ABCD`. * **Group SAS Primary Key**: The symmetric key for your enrollment group. When you've completed these instructions, your MXChip device should be connected to IoT Hub, and sending telemetry information as well as listening to commands. In the next step we will actually explore the device capabilities, using the [Azure IoT Device Explorer](https://github.com/Azure/azure-iot-explorer). ### Explore the capabilities of the asset tracking device 1. Download and install the latest version of the Azure IoT Explorer tool from the [Github release page](https://github.com/Azure/azure-iot-explorer/releases/latest). 2. Enter your IoT Hub connection string. You can find this information in the Azure portal: open your IoT Hub resource, then go to **Shared access policies** pane, and click on **iothubowner**. You can then copy the **Connection string—primary key**. ![Azure IoT Explorer - Connection string](assets/iot-hub-explorer-connection-string.png) 3. Feel free to check the **Remember my connection string** option, then click **Connect**. 4. Select your device from the list that is now displayed. At this point, it should show up as "Connected". ![Azure IoT Explorer - Device list](assets/explorer-device-list.png) 5. Use the entries in "Digital Twin" to explore the capabilities of the device. For example: * Navigate to **urn:azureiot:DeviceManagement:DeviceInformation:1** > **Properties (non-writable)**. Note how the device is reporting about its hardware capabilities in a standard way. As a solution builder, you can expect any Plug-and-Play device to always expose these properties, which helps rationalizing device management efforts. 
* Navigate to **urn:mxchip:built_in_sensors:1** > **Telemetry**. Click on the **Start** button and observe the flow of telemetry information being sent every 5 seconds. Also observe how the tool provides you with useful metadata about sensor data, thanks to PnP. * Navigate to **urn:mxchip:screen:1** > **Commands**. Test any of the available commands, and enjoy bidirectional communication with your device! * Navigate to **urn:contosoartshipping:position:1** > **Interface**. Often times, the PnP interfaces a device is implementing will be available through either a public Microsoft repository, or a private/company one. There is also an option for the device to implement the `urn:azureiot:ModelDiscovery:ModelDefinition:1` interface, which effectively allows for the interface to be automatically discovered from the cloud by retrieving it from the device. Note how the UI is indicating **Source: On the connected device**, reflecting the fact that the custom 'position' interface has not been found in PnP repository, and instead retrieved from the device. ## Going further <!-- omit in toc --> If you are interested in exploring further, you may want to think about how to best solve the following problems: * You aim at connecting tens of thousands of asset trackers in the near future, and realize that you may need more than one IoT Hub instance. It would be handy if you could leverage the Device Provisioning Service to help with assigning a suitable IoT Hub (based on best latency, or other custom rules) to your devices as they connect for the first time, right? Feel free to [explore the docs](https://docs.microsoft.com/en-us/azure/iot-dps/tutorial-set-up-cloud#set-the-allocation-policy-on-the-device-provisioning-service) and experiment! * In addition to the Azure IoT Explorer, another great developer toolkit comes in the form of the [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) extension for Visual Studio Code. It is very easy to configure and use, and can help you troubleshoot your communication scenarios or simulate device data without leaving your development environment. ## Wrap-up and Next steps <!-- omit in toc --> In this section, we have setup our messaging infrastructure, and used the Azure IoT Explorer to check that our device is properly connected. In the [next section](../step-002-setting-up-data-pipeline), we will setup the infrastructure needed for storing, visualizing, and querying our IoT data.
87.490909
658
0.774522
eng_Latn
0.993743
4ce2c410fa3aa670a13acf340d11f4b6099b2ea2
1,576
md
Markdown
docs/c-runtime-library/control-flags.md
MonstersAboveMe/cpp-docs.de-de
d6e31736659e5972cf60084e572ca628591eecc3
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-runtime-library/control-flags.md
MonstersAboveMe/cpp-docs.de-de
d6e31736659e5972cf60084e572ca628591eecc3
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/c-runtime-library/control-flags.md
MonstersAboveMe/cpp-docs.de-de
d6e31736659e5972cf60084e572ca628591eecc3
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Steuerungsflags ms.date: 11/04/2016 f1_keywords: - c.flags helpviewer_keywords: - flags, control - heap allocation, control flags - debug heap, control flags ms.assetid: 8dbd24a5-0633-42d1-9771-776db338465f ms.openlocfilehash: 7ac5f239ea4d242618fb23ba617a3a6539492053 ms.sourcegitcommit: dedd4c3cb28adec3793329018b9163ffddf890a4 ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 03/11/2019 ms.locfileid: "57750113" --- # <a name="control-flags"></a>Steuerungsflags Die Debugversion der Microsoft C-Laufzeitbibliothek verwendet die folgenden Flags, um den Heapbelegungs- und Berichterstellungsprozess zu steuern. Weitere Informationen finden Sie unter [CRT-Debugverfahren](/visualstudio/debugger/crt-debugging-techniques). |Flag|Beschreibung| |----------|-----------------| |[_CRTDBG_MAP_ALLOC](../c-runtime-library/crtdbg-map-alloc.md)|Ordnet die grundlegenden Heapfunktionen zu den entsprechenden Funktionen in der Debugversion zu.| |[_DEBUG](../c-runtime-library/debug.md)|Ermöglicht die Verwendung der Debugversionen der Laufzeitfunktionen.| |[_crtDbgFlag](../c-runtime-library/crtdbgflag.md)|Steuert, wie der Debugheap-Manager die Belegungen nachverfolgt.| Diese Flags können mit einer /D-Befehlszeilenoption oder einer `#define`-Direktive definiert werden. Wenn das Flag mit `#define` definiert wird, muss sich die Direktive vor der include-Anweisung der Headerdatei für die Routinedeklarationen befinden. ## <a name="see-also"></a>Siehe auch [Globale Variablen und Standardtypen](../c-runtime-library/global-variables-and-standard-types.md)
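As a minimal illustration of the rule above (the `#define` has to appear before the header that declares the debug routines), here is a small sketch in C; the specific `_CrtSetDbgFlag` flag values are the standard ones from `<crtdbg.h>`, and the program itself is only an example:

```c
// Define the control flag before including the CRT headers.
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main(void)
{
    /* _DEBUG is normally defined by the compiler for debug builds (/MTd, /MDd).
       Enable allocation tracking and an automatic leak report at process exit. */
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);

    char *leak = (char *)malloc(16);  /* intentionally not freed; reported by the leak check */
    (void)leak;
    return 0;
}
```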
47.757576
256
0.794416
deu_Latn
0.882672
4ce3aeb9ceb5629935083a953f909234973287f7
504
md
Markdown
_drafts/2019-11-01-AIE23_Notes_20191026.md
shcqupc/hankblog.github.io
d2e27857c5ac3b82008f968e7af4a0b2440653d4
[ "MIT" ]
null
null
null
_drafts/2019-11-01-AIE23_Notes_20191026.md
shcqupc/hankblog.github.io
d2e27857c5ac3b82008f968e7af4a0b2440653d4
[ "MIT" ]
1
2021-03-30T02:30:50.000Z
2021-03-30T02:30:50.000Z
_drafts/2019-11-01-AIE23_Notes_20191026.md
shcqupc/hankblog.github.io
d2e27857c5ac3b82008f968e7af4a0b2440653d4
[ "MIT" ]
null
null
null
Study Notes on Oct 26<sup>th</sup> 2019 ========================== &nbsp; --- **Keywords** *Artificial Intelligence* *Machine Learning* --- # Contents &ensp;&ensp;1. Artificial intelligence &ensp;&ensp;&ensp;&ensp;1) Knowledge system &ensp;&ensp;&ensp;&ensp;2) Application scenarios &ensp;&ensp;&ensp;&ensp;3) Technology overview &ensp;&ensp;2. Machine learning &ensp;&ensp;&ensp;&ensp;1) Machine learning algorithms &ensp;&ensp;&ensp;&ensp;2) Machine learning libraries and computing frameworks &ensp;&ensp;&ensp;&ensp;3) How to do machine learning &nbsp; --- # 1. Artificial intelligence ## Knowledge system ![avatar][base64str]
7.875
55
0.496032
yue_Hant
0.977051
4ce3c6308bfba564119b3ef93d0174fb36af1102
124
md
Markdown
src/speakers/guillermo-rauch.md
limhenry/devsummit
b7e50e48d13ee83a1075d00f90269f3a34c3115c
[ "Apache-2.0" ]
2
2021-11-05T08:03:58.000Z
2021-12-18T11:25:12.000Z
src/speakers/guillermo-rauch.md
limhenry/devsummit
b7e50e48d13ee83a1075d00f90269f3a34c3115c
[ "Apache-2.0" ]
2
2021-06-17T16:15:25.000Z
2021-09-02T20:33:57.000Z
src/speakers/guillermo-rauch.md
limhenry/devsummit
b7e50e48d13ee83a1075d00f90269f3a34c3115c
[ "Apache-2.0" ]
1
2020-09-29T19:24:02.000Z
2020-09-29T19:24:02.000Z
--- name: Guillermo Rauch title: ZEIT CEO avatar: /assets/speakers/guillermo-rauch.jpg link: https://twitter.com/rauchg ---
17.714286
44
0.741935
kor_Hang
0.266702
4ce4237ae743b5e5b69fe699f3b2ce610e727b89
6,573
md
Markdown
articles/app-service/app-service-deployment-credentials.md
Almulo/azure-docs.es-es
f1916cdaa2952cbe247723758a13b3ec3d608863
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service/app-service-deployment-credentials.md
Almulo/azure-docs.es-es
f1916cdaa2952cbe247723758a13b3ec3d608863
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/app-service/app-service-deployment-credentials.md
Almulo/azure-docs.es-es
f1916cdaa2952cbe247723758a13b3ec3d608863
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Credenciales de implementación de Azure App Service | Microsoft Docs description: Aprenda a usar las credenciales de implementación de Azure App Service. services: app-service documentationcenter: '' author: dariagrigoriu manager: erikre editor: mollybos ms.service: app-service ms.workload: na ms.tgt_pltfrm: na ms.devlang: multiple ms.topic: article ms.date: 01/05/2016 ms.author: dariagrigoriu ms.openlocfilehash: a17260770f0b2e0a73585ce4108bd5625ac22229 ms.sourcegitcommit: 1d850f6cae47261eacdb7604a9f17edc6626ae4b ms.translationtype: HT ms.contentlocale: es-ES ms.lasthandoff: 08/02/2018 ms.locfileid: "39436155" --- # <a name="configure-deployment-credentials-for-azure-app-service"></a>Configuración de credenciales de implementación para Azure App Service [Azure App Service](http://go.microsoft.com/fwlink/?LinkId=529714) admite dos tipos de credenciales para la [implementación de GIT local](app-service-deploy-local-git.md) y la [implementación FTP/S](app-service-deploy-ftp.md). Estas credenciales no son las mismas que las de Azure Active Directory. * **Credenciales de nivel de usuario**: un conjunto de credenciales para toda la cuenta de Azure. Se puede utilizar para implementar App Service en cualquier aplicación o suscripción para la que la cuenta de Azure tenga permiso de acceso. Este es el conjunto de credenciales predeterminado que se configura en **App Services** > **&lt;nombre_aplicación>** > **Credenciales de implementación**. Este es también el conjunto predeterminado que aparece en la interfaz gráfica de usuario del portal (como la **información general** y las **propiedades** de la [página de recursos](../azure-resource-manager/resource-group-portal.md#manage-resources) de la aplicación). > [!NOTE] > Al delegar el acceso a recursos de Azure a través del control de acceso basado en rol (RBAC) o de permisos de coadministrador, cada usuario de Azure que recibe acceso a una aplicación puede usar sus credenciales de nivel de usuario personales hasta que se revoca el acceso. Estas credenciales de implementación no deben compartirse con otros usuarios de Azure. > > * **Credenciales de nivel de aplicación**: un conjunto de credenciales para cada aplicación. Se puede utilizar para implementar únicamente en esa aplicación. Las credenciales para cada aplicación se generan automáticamente al crear la aplicación y se encuentran en el perfil de publicación de la aplicación. Las credenciales no se pueden configurar manualmente, pero se pueden restablecer para una aplicación en cualquier momento. > [!NOTE] > Para dar a alguien acceso a estas credenciales a través de Control de acceso basado en roles (RBAC), es preciso convertirlo en colaborador, o cualquier rol superior, en la aplicación web. A los lectores no se les permite publicar, por lo que no pueden acceder a dichas credenciales. > > ## <a name="userscope"></a>Establecimiento y restablecimiento de credenciales de nivel de usuario Puede configurar las credenciales de nivel de usuario en cualquier [página de recursos](../azure-resource-manager/resource-group-portal.md#manage-resources) de la aplicación. Independientemente de en qué aplicación configure estas credenciales, son válidas para todas las aplicaciones y para todas las suscripciones de su cuenta de Azure. Para configurar las credenciales de nivel de usuario: 1. En [Azure Portal](https://portal.azure.com), haga clic en App Service > **&lt;cualquier_aplicación >** > **Credenciales de implementación**. 
> [!NOTE] > En el portal, debe tener al menos una aplicación para acceder a la página de credenciales de implementación. Sin embargo, con la [CLI de Azure](/cli/azure/webapp/deployment/user?view=azure-cli-latest#az-webapp-deployment-user-set), puede configurar las credenciales de nivel de usuario sin tener ninguna aplicación. 2. Configure el nombre de usuario y la contraseña y, a continuación, haga clic en **Guardar**. ![](./media/app-service-deployment-credentials/deployment_credentials_configure.png) Una vez que haya configurado las credenciales de implementación, puede encontrar el nombre de usuario de implementación de *Git* en la **Información general** de la aplicación ![](./media/app-service-deployment-credentials/deployment_credentials_overview.png) y el nombre de usuario de implementación de *FTP* en las **propiedades** de la aplicación. ![](./media/app-service-deployment-credentials/deployment_credentials_properties.png) > [!NOTE] > Azure no muestra la contraseña de la implementación de nivel de usuario. Si olvida la contraseña, no podrá recuperarla. Sin embargo, podrá restablecer las credenciales siguiendo los pasos descritos en esta sección. > > ## <a name="appscope"></a>Obtención y restablecimiento de las credenciales de nivel de aplicación Para cada aplicación de App Service, sus credenciales de nivel de aplicación se almacenan en el perfil de publicación XML. Para obtener las credenciales de nivel de aplicación: 1. En [Azure Portal](https://portal.azure.com), haga clic en App Service > **&lt;cualquier_aplicación >** > **Información general**. 2. Haga clic en **... Más** > **Obtener perfil de publicación** para que comience la descarga de un archivo .PublishSettings. ![](./media/app-service-deployment-credentials/publish_profile_get.png) 3. Abra el archivo .PublishSettings y busque la etiqueta `<publishProfile>` con el atributo `publishMethod="FTP"`. A continuación, obtenga sus atributos `userName` y `password`. Estas son las credenciales de nivel de aplicación. ![](./media/app-service-deployment-credentials/publish_profile_editor.png) De manera similar a las credenciales de nivel de usuario, el nombre de usuario de implementación FTP se encuentra en el formato de `<app_name>\<username>`, y el nombre de usuario de implementación de GIT es simplemente `<username>` sin la parte `<app_name>\` del principio. Para restablecer las credenciales de nivel de aplicación: 1. En [Azure Portal](https://portal.azure.com), haga clic en App Service > **&lt;cualquier_aplicación >** > **Información general**. 2. Haga clic en **... Más** > **Restablecer perfil de publicación**. Haga clic en **SÍ** para confirmar el restablecimiento. La acción de restablecimiento invalida cualquier archivo .PublishSettings anteriormente descargado. ## <a name="next-steps"></a>Pasos siguientes Obtenga información sobre cómo usar estas credenciales para implementar la aplicación desde [GIT local](app-service-deploy-local-git.md) o con [FTP/S](app-service-deploy-ftp.md).
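The note earlier in this section links to the Azure CLI command for the same operation. As a minimal sketch (the user name and password are placeholders to be replaced with your own values), setting the user-level deployment credentials from the command line would be:

```azurecli
az webapp deployment user set --user-name <deployment-user> --password <strong-password>
```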
67.762887
663
0.781378
spa_Latn
0.978677
4ce47054deeb5df0891c72644c2518395d9705dc
872
markdown
Markdown
doc/en_US/user-mentions.markdown
mrkafk/mykanboard
e6e0cc07616922f5c9f8344d095ad7806ec254cb
[ "MIT" ]
1
2017-03-08T09:14:22.000Z
2017-03-08T09:14:22.000Z
doc/en_US/user-mentions.markdown
mrkafk/mykanboard
e6e0cc07616922f5c9f8344d095ad7806ec254cb
[ "MIT" ]
null
null
null
doc/en_US/user-mentions.markdown
mrkafk/mykanboard
e6e0cc07616922f5c9f8344d095ad7806ec254cb
[ "MIT" ]
1
2020-08-13T11:36:27.000Z
2020-08-13T11:36:27.000Z
User Mentions ============= Kanboard can send notifications when someone is mentioned. If you need to get the attention of someone in a comment or in a task, use the @ symbol followed by their username. Kanboard will automatically suggest a list of users: ![User Mention](screenshots/user-mentions.png) - At the moment, only the task description and the comment text area have this feature enabled. - User mentions work only during task and comment creation. - To be notified, mentioned users need to be members of the project. - When someone is mentioned, that user will receive a notification. - The @username mention is linked to the public user profile. The notification is sent according to the user's settings; it can be an email, a web notification, or even a message on Slack/Hipchat/Jabber if you have installed the right plugins.
48.444444
178
0.779817
eng_Latn
0.999559
4ce4f1991b0b6dbc66b33e71ab34b675d1b457ce
783
md
Markdown
README.md
j-rivero/reprepro
2862af64725b0d288561e376e715758bc9bde4e5
[ "Apache-2.0" ]
null
null
null
README.md
j-rivero/reprepro
2862af64725b0d288561e376e715758bc9bde4e5
[ "Apache-2.0" ]
null
null
null
README.md
j-rivero/reprepro
2862af64725b0d288561e376e715758bc9bde4e5
[ "Apache-2.0" ]
null
null
null
# reprepro multi Fork of reprepro to support multiple repositories on the same server. The limitation of the original recipe is that it can only be included once, so it is not possible to call it several times for several reprepro repositories. ## Changes from the original fork * Support for multiple directories in an array: node['reprepro']['repositories_directories'] * Support for a base directory in: node.default['reprepro']['base_repo_dir'] * Support for different distributions (Debian/Ubuntu only): node.default['reprepro']['codenames']['debian'] node.default['reprepro']['codenames']['ubuntu'] ## Pending tasks * Fix the support for Apache; only nginx works at the moment * Limitation of supporting Debian/Ubuntu using the repo name * Fix support for data bags * Update the data bag example
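To make the attribute list above concrete, a hypothetical Chef node configuration might look like the following sketch. Only the attribute names are taken from the README above; the paths, repository names, and codename values are illustrative placeholders, and the exact value types expected by the cookbook may differ:

```ruby
# Illustrative attribute assignments for the forked reprepro cookbook.
node.default['reprepro']['base_repo_dir'] = '/var/www/repos'
node.default['reprepro']['repositories_directories'] = %w(stable testing)
node.default['reprepro']['codenames']['debian'] = %w(buster bullseye)
node.default['reprepro']['codenames']['ubuntu'] = %w(bionic focal)
```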
32.625
107
0.765006
eng_Latn
0.994009
4ce55cd3a44fcf54afb6f2964c01b0dd078494ef
3,833
md
Markdown
README.md
ziluohuanghun/lucky-canvas
71bab5501ca303c408dad41a23f310ac0a683a9c
[ "Apache-2.0" ]
1
2021-08-10T02:07:50.000Z
2021-08-10T02:07:50.000Z
README.md
ziluohuanghun/lucky-canvas
71bab5501ca303c408dad41a23f310ac0a683a9c
[ "Apache-2.0" ]
null
null
null
README.md
ziluohuanghun/lucky-canvas
71bab5501ca303c408dad41a23f310ac0a683a9c
[ "Apache-2.0" ]
null
null
null
<div align="center"> <img src="https://raw.githubusercontent.com/LuckDraw/lucky-canvas/master/logo.png" width="128" alt="logo" /> <h1>lucky-canvas 抽奖插件</h1> <p>一个基于 JavaScript 的 ( 大转盘 / 九宫格 ) 抽奖插件</p> <p class="hidden"> <a href="https://github.com/luckdraw/lucky-canvas#readme">简体中文</a> · <a href="https://github.com/luckdraw/lucky-canvas/tree/master/en">English</a> </p> <p> <a href="https://github.com/LuckDraw/lucky-canvas/stargazers" target="_black"> <img src="https://img.shields.io/github/stars/luckdraw/lucky-canvas?color=%23ffca28&logo=github&style=flat-square" alt="stars" /> </a> <a href="https://github.com/luckdraw/lucky-canvas/network/members" target="_black"> <img src="https://img.shields.io/github/forks/luckdraw/lucky-canvas?color=%23ffca28&logo=github&style=flat-square" alt="forks" /> </a> <a href="https://www.npmjs.com/package/lucky-canvas" target="_black"> <img src="https://img.shields.io/npm/v/lucky-canvas?color=%23ffca28&logo=npm&style=flat-square" alt="version" /> </a> <a href="https://www.npmjs.com/package/lucky-canvas" target="_black"> <img src="https://img.shields.io/npm/dm/lucky-canvas?color=%23ffca28&logo=npm&style=flat-square" alt="downloads" /> </a> <a href="https://www.jsdelivr.com/package/npm/lucky-canvas" target="_black"> <img src="https://data.jsdelivr.com/v1/package/npm/lucky-canvas/badge" alt="downloads" /> </a> </p> <p> <a href="https://github.com/buuing" target="_black"> <img src="https://img.shields.io/badge/Author-%20buuing%20-7289da.svg?&logo=github&style=flat-square" alt="author" /> </a> <a href="https://github.com/luckdraw/lucky-canvas/blob/master/LICENSE" target="_black"> <img src="https://img.shields.io/github/license/luckdraw/lucky-canvas?color=%232dce89&logo=github&style=flat-square" alt="license" /> </a> </p> </div> <br /> ## 官方文档 & Demo演示 > **中文**:[https://100px.net](https://100px.net) > **English**:**If anyone can help translate the document, please contact me** `ldq404@qq.com` <br /> |适配框架|npm包|npm下载量|CDN使用量| | :-: | :-: | :-: | :-: | |`JS` / `JQ`|[lucky-canvas](https://100px.net/usage/js.html)|<img src="https://img.shields.io/npm/dm/lucky-canvas?color=%23ffca28&logo=npm&style=flat-square" alt="downloads" />|<img src="https://data.jsdelivr.com/v1/package/npm/lucky-canvas/badge" alt="downloads" />| |`Vue2.x` / `Vue3.x`|[vue-luck-draw](https://100px.net/usage/vue.html)|<img src="https://img.shields.io/npm/dm/vue-luck-draw?color=%23ffca28&logo=npm&style=flat-square" alt="downloads" />|<img src="https://data.jsdelivr.com/v1/package/npm/vue-luck-draw/badge" alt="downloads" />| |`React`|[react-luck-draw](https://100px.net/usage/react.html)|<img src="https://img.shields.io/npm/dm/react-luck-draw?color=%23ffca28&logo=npm&style=flat-square" alt="downloads" />|<img src="https://data.jsdelivr.com/v1/package/npm/react-luck-draw/badge" alt="downloads" />| |`UniApp`|[uni-luck-draw](https://100px.net/usage/uni.html)|<img src="https://img.shields.io/npm/dm/uni-luck-draw?color=%23ffca28&logo=npm&style=flat-square" alt="downloads" />|/| |`Taro3.x`|[taro-luck-draw](https://100px.net/usage/taro.html)|<img src="https://img.shields.io/npm/dm/taro-luck-draw?color=%23ffca28&logo=npm&style=flat-square" alt="downloads" />|/| |`微信小程序`|[mini-luck-draw](https://100px.net/usage/wx.html)|<img src="https://img.shields.io/npm/dm/mini-luck-draw?color=%23ffca28&logo=npm&style=flat-square" alt="downloads" />|/| <br /> ### 贡献者 <br /> ### **如果您觉得这个项目还不错, 可以在 [Github](https://github.com/LuckDraw/lucky-canvas) 上面帮我点个`star` ☜(゚ヮ゚☜)** <br /> ## 友情链接 - [🎁 h5-Dooring 
一款功能强大,高可扩展的H5可视化编辑器](https://github.com/MrXujiang/h5-Dooring) <!-- lerna过滤器配置 --> <!-- https://github.com/lerna/lerna/tree/main/core/filter-options#readme -->
51.797297
279
0.677537
yue_Hant
0.330393
4ce5bb7821aab45c1630ceffff2b04fc3ad80e21
4,850
md
Markdown
README.md
iGio90/frida-java-ext
7ea8fddce824b23e63de0602cee894b5667f16b7
[ "MIT" ]
24
2019-07-08T00:29:39.000Z
2021-11-19T12:14:21.000Z
README.md
iGio90/frida-java-ext
7ea8fddce824b23e63de0602cee894b5667f16b7
[ "MIT" ]
null
null
null
README.md
iGio90/frida-java-ext
7ea8fddce824b23e63de0602cee894b5667f16b7
[ "MIT" ]
2
2019-07-15T02:08:11.000Z
2021-04-22T06:36:32.000Z
# frida-java-ext Some 'one-line' frida api to avoid code recycling here and there. ## install ```$xslt git clone https://github.com/iGio90/frida-java-ext.git npm install npm link ``` ### try it out ```$xslt cd example npm link frida-java-ext npm install npm run watch # make your edits to index.ts # inject the agent (quick att.py) ``` ### api - attachAll - attachAllMethods - attachConstructor - attachMethod - backtrace - enumerateMethods - getCurrentContext ### example code ```typescript import {JavaContext, JavaExt} from 'frida-java-ext'; function simpleCallback(context: JavaContext) { // print call arguments console.log(context.className, context.method, JSON.stringify(context.arguments)); // print call arguments with types and details console.log(context.className, context.method, JSON.stringify(context.formattedArguments)); // detach the hook context.detach(); } Java.performNow(() => { JavaExt.attachAllMethods('android.app.Activity', simpleCallback); JavaExt.attachConstructor('android.app.Activity', function (context: JavaContext) { console.log(context.method, 'hit!') }); JavaExt.attachMethod('java.lang.Object', 'toString', { onEnter(context: JavaContext): void { console.log(context.className, context.method); }, onLeave(retval: object): any { console.log(JSON.stringify(retval)); return ''; } }); }); ``` ### output ``` android.app.Activity attach [{"className":"android.content.Context","value":{"$handle":"0x3582","$weakRef":287}},{"className":"android.app.ActivityThread","value":{"$handle":"0x3562","$weakRef":290}},{"className":"android.app.Instrumentation","value":{"$handle":"0x3542","$weakRef":293}},{"className":"android.os.IBinder","value":{"$handle":"0x3522","$weakRef":296}},{"className":"int","value":120358391},{"className":"android.app.Application","value":{"$handle":"0x3502","$weakRef":299}},{"className":"android.content.Intent","value":{"$handle":"0x34e2","$weakRef":302}},{"className":"android.content.pm.ActivityInfo","value":{"$handle":"0x34c2","$weakRef":305}},{"className":"java.lang.CharSequence","value":{"$handle":"0x34a2","$weakRef":308}},{"className":"android.app.Activity","value":null},{"className":"java.lang.String","value":null},{"className":"android.app.Activity$NonConfigurationInstances","value":null},{"className":"android.content.res.Configuration","value":{"$handle":"0x3482","$weakRef":311}},{"className":"java.lang.String","value":"android"},{"className":"com.android.internal.app.IVoiceInteractor","value":null},{"className":"android.view.Window","value":null},{"className":"android.view.ViewRootImpl$ActivityConfigCallback","value":{"$handle":"0x3462","$weakRef":314}}] android.app.Activity attachBaseContext [{"className":"android.content.Context","value":{"$handle":"0x343a","$weakRef":316}}] android.app.Activity getSystemService [{"className":"java.lang.String","value":"layout_inflater"}] android.app.Activity onWindowAttributesChanged [{"className":"android.view.WindowManager$LayoutParams","value":{"$handle":"0x33da","$weakRef":322}}] android.app.Activity setTheme [{"className":"int","value":2131492890}] android.app.Activity onApplyThemeResource [{"className":"android.content.res.Resources$Theme","value":{"$handle":"0x3526","$weakRef":327}},{"className":"int","value":2131492890},{"className":"boolean","value":true}] android.app.Activity setTaskDescription [{"className":"android.app.ActivityManager$TaskDescription","value":{"$handle":"0x34a6","$weakRef":331}}] android.app.Activity performCreate 
[{"className":"android.os.Bundle","value":null},{"className":"android.os.PersistableBundle","value":null}] [{"className":"android.os.Bundle","value":null}] ``` ``` Copyright (c) 2019 Giovanni (iGio90) Rocca Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ```
47.54902
1,293
0.726186
yue_Hant
0.568508
4ce623f14e51e158b60c1a25fa675e4b28a2d330
481
md
Markdown
patch/README.md
maasijam/DaisyExamples
d982852292c16ccd6884f5f1313223c4bd66daeb
[ "MIT" ]
180
2020-05-28T01:34:57.000Z
2022-03-29T17:50:27.000Z
patch/README.md
maasijam/DaisyExamples
d982852292c16ccd6884f5f1313223c4bd66daeb
[ "MIT" ]
132
2020-05-28T02:49:28.000Z
2022-03-04T16:30:46.000Z
patch/README.md
maasijam/DaisyExamples
d982852292c16ccd6884f5f1313223c4bd66daeb
[ "MIT" ]
107
2020-06-03T03:22:49.000Z
2022-02-28T15:46:47.000Z
# Daisy Patch Programmable Eurorack Modular Synthesizer Powered from standard Eurorack power (+/- 12V). ## Features - 4x Audio Inputs - 4x Audio Outputs - 4x Knobs - 4x CV inputs - 2x Gate Inputs - 1x Gate Output - TRS MIDI In and Out - 2x buttons - 1x toggle - USB Mini - MicroSD Port - Grid of 16 LEDs - All headers have an additional female pin for jumping out to other (custom) hardware ## Project Ideas - Generative Music Box - Simple Sample Player - Sound Effect Machine
16.586207
83
0.735967
eng_Latn
0.644581
4ce808505ef7869c20bb4241bdd814cfb40074e7
709
md
Markdown
_posts/2022-1-2-Homochirality1.md
AbdulMuhaymin/AbdulMuhaymin.github.io
61a134eee2b683cf482e62ca2a2733e0ae1226e2
[ "MIT" ]
1
2022-03-27T00:30:10.000Z
2022-03-27T00:30:10.000Z
_posts/2022-1-2-Homochirality1.md
AbdulMuhaymin/abdulmuhaymin.github.io
61a134eee2b683cf482e62ca2a2733e0ae1226e2
[ "MIT" ]
null
null
null
_posts/2022-1-2-Homochirality1.md
AbdulMuhaymin/abdulmuhaymin.github.io
61a134eee2b683cf482e62ca2a2733e0ae1226e2
[ "MIT" ]
null
null
null
--- layout: post title: Role of Nonlinearity in the Origin of Life published: true categories: blogs --- Did you know that non-linearity played a significant role in the emergence of life? At least FC Frank's model of biological homochirality says so! In this article I will try to explain this model as if I were explaining it to a high school student. # Testing out latex $$ \nabla_\boldsymbol{x} J(\boldsymbol{x}) $$ Some of you are probably already asking what homochirality is, what non-linearity is, and so on. So let me start from the very basic ideas of biology and chemistry, and then I will explain the main model. So, today's game plan is to: - I - I - I - I - I If I cannot finish
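For readers who want to peek ahead, Frank's model is usually written as the pair of coupled rate equations below. This is only a sketch of the standard textbook form (the symbols are placeholders not yet defined in this post): $[L]$ and $[D]$ are the concentrations of the left- and right-handed molecules, $k$ is the autocatalytic rate and $\lambda$ the mutual-antagonism rate.

$$
\frac{d[L]}{dt} = k[L] - \lambda [L][D], \qquad
\frac{d[D]}{dt} = k[D] - \lambda [L][D]
$$

The product term $[L][D]$ is the non-linearity: together with the autocatalysis it makes the perfectly balanced (racemic) mixture unstable, so a tiny initial excess of one handedness keeps getting amplified.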
32.227273
240
0.726375
eng_Latn
0.999534
4ce82d7b58e833bc7f8af70ec26366320cb32dac
18,808
md
Markdown
aspnetcore/blazor/components/lifecycle.md
lucaspsilveira/AspNetCore.Docs.pt-br
60d97a55a3e7d830c2392c7d9247e6ed5c2eada8
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnetcore/blazor/components/lifecycle.md
lucaspsilveira/AspNetCore.Docs.pt-br
60d97a55a3e7d830c2392c7d9247e6ed5c2eada8
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnetcore/blazor/components/lifecycle.md
lucaspsilveira/AspNetCore.Docs.pt-br
60d97a55a3e7d830c2392c7d9247e6ed5c2eada8
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: BlazorCiclo de vida ASP.NET Core author: guardrex description: Saiba como usar Razor métodos de ciclo de vida do componente em ASP.NET Core Blazor aplicativos. monikerRange: '>= aspnetcore-3.1' ms.author: riande ms.custom: mvc ms.date: 07/06/2020 no-loc: - Blazor - Blazor Server - Blazor WebAssembly - Identity - Let's Encrypt - Razor - SignalR uid: blazor/components/lifecycle ms.openlocfilehash: 6b9653356659700ae8396a01b38c04d59a86625f ms.sourcegitcommit: fa89d6553378529ae86b388689ac2c6f38281bb9 ms.translationtype: MT ms.contentlocale: pt-BR ms.lasthandoff: 07/07/2020 ms.locfileid: "86059884" --- # <a name="aspnet-core-blazor-lifecycle"></a>BlazorCiclo de vida ASP.NET Core De [Luke Latham](https://github.com/guardrex) e [Daniel Roth](https://github.com/danroth27) A Blazor estrutura inclui métodos de ciclo de vida síncronos e assíncronos. Substitua métodos de ciclo de vida para executar operações adicionais em componentes durante a inicialização e a renderização do componente. ## <a name="lifecycle-methods"></a>Métodos de ciclo de vida ### <a name="before-parameters-are-set"></a>Antes de os parâmetros serem definidos <xref:Microsoft.AspNetCore.Components.ComponentBase.SetParametersAsync%2A>define os parâmetros fornecidos pelo pai do componente na árvore de renderização: ```csharp public override async Task SetParametersAsync(ParameterView parameters) { await ... await base.SetParametersAsync(parameters); } ``` <xref:Microsoft.AspNetCore.Components.ParameterView>contém o conjunto inteiro de valores de parâmetro a cada vez que <xref:Microsoft.AspNetCore.Components.ComponentBase.SetParametersAsync%2A> é chamado. A implementação padrão de <xref:Microsoft.AspNetCore.Components.ComponentBase.SetParametersAsync%2A> define o valor de cada propriedade com o [`[Parameter]`](xref:Microsoft.AspNetCore.Components.ParameterAttribute) [`[CascadingParameter]`](xref:Microsoft.AspNetCore.Components.CascadingParameterAttribute) atributo ou que tem um valor correspondente no <xref:Microsoft.AspNetCore.Components.ParameterView> . Os parâmetros que não têm um valor correspondente em <xref:Microsoft.AspNetCore.Components.ParameterView> são deixados inalterados. Se [`base.SetParametersAync`](xref:Microsoft.AspNetCore.Components.ComponentBase.SetParametersAsync%2A) não for invocado, o código personalizado poderá interpretar o valor dos parâmetros de entrada de qualquer forma necessária. Por exemplo, não há nenhum requisito para atribuir os parâmetros de entrada às propriedades na classe. Se algum manipulador de eventos estiver configurado, desvincule-os na alienação. Para obter mais informações, consulte a seção [descarte `IDisposable` de componentes com](#component-disposal-with-idisposable) . ### <a name="component-initialization-methods"></a>Métodos de inicialização de componente <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitializedAsync%2A>e <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitialized%2A> são invocados quando o componente é inicializado após ter recebido seus parâmetros iniciais de seu componente pai no <xref:Microsoft.AspNetCore.Components.ComponentBase.SetParametersAsync%2A> . Use <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitializedAsync%2A> quando o componente executa uma operação assíncrona e deve ser atualizado quando a operação é concluída. Para uma operação síncrona, substitua <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitialized%2A> : ```csharp protected override void OnInitialized() { ... 
} ``` Para executar uma operação assíncrona, substitua <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitializedAsync%2A> e use o [`await`](/dotnet/csharp/language-reference/operators/await) operador na operação: ```csharp protected override async Task OnInitializedAsync() { await ... } ``` Blazor Serveraplicativos que preparam [sua chamada de conteúdo](xref:blazor/fundamentals/additional-scenarios#render-mode) <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitializedAsync%2A> **_duas vezes_**: * Uma vez quando o componente é inicialmente renderizado estaticamente como parte da página. * Uma segunda vez quando o navegador estabelece uma conexão de volta para o servidor. Para impedir que o código <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitializedAsync%2A> do desenvolvedor seja executado duas vezes, consulte a seção [reconexão com estado após o pré-processamento](#stateful-reconnection-after-prerendering) . Embora um Blazor Server aplicativo esteja sendo renderizado, determinadas ações, como chamar em JavaScript, não são possíveis porque uma conexão com o navegador não foi estabelecida. Os componentes podem precisar ser renderizados de forma diferente quando renderizados. Para obter mais informações, consulte a seção [detectar quando o aplicativo está sendo processado](#detect-when-the-app-is-prerendering) . Se algum manipulador de eventos estiver configurado, desvincule-os na alienação. Para obter mais informações, consulte a seção [descarte `IDisposable` de componentes com](#component-disposal-with-idisposable) . ### <a name="after-parameters-are-set"></a>Depois que os parâmetros são definidos <xref:Microsoft.AspNetCore.Components.ComponentBase.OnParametersSetAsync%2A>ou <xref:Microsoft.AspNetCore.Components.ComponentBase.OnParametersSet%2A> são chamados: * Depois que o componente é inicializado no <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitializedAsync%2A> ou no <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitialized%2A> . * Quando o componente pai é renderizado novamente e fornece: * Somente tipos irmutáveis de primitivo conhecidos dos quais pelo menos um parâmetro foi alterado. * Qualquer parâmetro de tipo complexo. A estrutura não pode saber se os valores de um parâmetro de tipo complexo foram modificados internamente e, portanto, trata o conjunto de parâmetros como alterado. ```csharp protected override async Task OnParametersSetAsync() { await ... } ``` > [!NOTE] > O trabalho assíncrono ao aplicar parâmetros e valores de propriedade deve ocorrer durante o <xref:Microsoft.AspNetCore.Components.ComponentBase.OnParametersSetAsync%2A> evento de ciclo de vida. ```csharp protected override void OnParametersSet() { ... } ``` Se algum manipulador de eventos estiver configurado, desvincule-os na alienação. Para obter mais informações, consulte a seção [descarte `IDisposable` de componentes com](#component-disposal-with-idisposable) . ### <a name="after-component-render"></a>Após renderização de componente <xref:Microsoft.AspNetCore.Components.ComponentBase.OnAfterRenderAsync%2A>e <xref:Microsoft.AspNetCore.Components.ComponentBase.OnAfterRender%2A> são chamados após a conclusão da renderização de um componente. Referências de elemento e componente são preenchidas neste ponto. Use este estágio para executar etapas de inicialização adicionais usando o conteúdo renderizado, como a ativação de bibliotecas JavaScript de terceiros que operam nos elementos DOM renderizados. 
O `firstRender` parâmetro para <xref:Microsoft.AspNetCore.Components.ComponentBase.OnAfterRenderAsync%2A> e <xref:Microsoft.AspNetCore.Components.ComponentBase.OnAfterRender%2A> : * É definido como `true` a primeira vez que a instância do componente é renderizada. * Pode ser usado para garantir que o trabalho de inicialização seja executado apenas uma vez. ```csharp protected override async Task OnAfterRenderAsync(bool firstRender) { if (firstRender) { await ... } } ``` > [!NOTE] > O trabalho assíncrono imediatamente após a renderização deve ocorrer durante o <xref:Microsoft.AspNetCore.Components.ComponentBase.OnAfterRenderAsync%2A> evento do ciclo de vida. > > Mesmo que você retorne um <xref:System.Threading.Tasks.Task> do <xref:Microsoft.AspNetCore.Components.ComponentBase.OnAfterRenderAsync%2A> , a estrutura não agendará um ciclo de processamento adicional para o componente depois que a tarefa for concluída. Isso é para evitar um loop de renderização infinito. Ele é diferente dos outros métodos de ciclo de vida, que agendam um ciclo de processamento adicional depois que a tarefa retornada é concluída. ```csharp protected override void OnAfterRender(bool firstRender) { if (firstRender) { ... } } ``` <xref:Microsoft.AspNetCore.Components.ComponentBase.OnAfterRender%2A>e <xref:Microsoft.AspNetCore.Components.ComponentBase.OnAfterRenderAsync%2A> *não são chamados durante o pré-processamento no servidor.* Se algum manipulador de eventos estiver configurado, desvincule-os na alienação. Para obter mais informações, consulte a seção [descarte `IDisposable` de componentes com](#component-disposal-with-idisposable) . ### <a name="suppress-ui-refreshing"></a>Suprimir atualização da interface do usuário Substitua <xref:Microsoft.AspNetCore.Components.ComponentBase.ShouldRender%2A> para suprimir a atualização da interface do usuário. Se a implementação retornar `true` , a interface do usuário será atualizada: ```csharp protected override bool ShouldRender() { var renderUI = true; return renderUI; } ``` <xref:Microsoft.AspNetCore.Components.ComponentBase.ShouldRender%2A>é chamado cada vez que o componente é renderizado. Mesmo se <xref:Microsoft.AspNetCore.Components.ComponentBase.ShouldRender%2A> for substituído, o componente sempre será renderizado inicialmente. Para obter mais informações, consulte <xref:blazor/webassembly-performance-best-practices#avoid-unnecessary-component-renders>. ## <a name="state-changes"></a>Alterações de estado <xref:Microsoft.AspNetCore.Components.ComponentBase.StateHasChanged%2A>Notifica o componente de que seu estado foi alterado. Quando aplicável, <xref:Microsoft.AspNetCore.Components.ComponentBase.StateHasChanged%2A> a chamada faz com que o componente seja rerenderizado. <xref:Microsoft.AspNetCore.Components.ComponentBase.StateHasChanged%2A>é chamado automaticamente para <xref:Microsoft.AspNetCore.Components.EventCallback> métodos. Para obter mais informações, consulte <xref:blazor/components/event-handling#eventcallback>. ## <a name="handle-incomplete-async-actions-at-render"></a>Tratar ações assíncronas incompletas no processamento Ações assíncronas executadas em eventos de ciclo de vida podem não ter sido concluídas antes de o componente ser renderizado. Os objetos podem ser `null` ou preenchidos incompletamente com dados enquanto o método de ciclo de vida está em execução. Forneça a lógica de renderização para confirmar que os objetos são inicializados. 
Renderizar elementos de interface do usuário de espaço reservado (por exemplo, uma mensagem de carregamento) enquanto objetos são `null` . No `FetchData` componente dos Blazor modelos, <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitializedAsync%2A> é substituído para Asychronously receber dados de previsão ( `forecasts` ). Quando `forecasts` é `null` , uma mensagem de carregamento é exibida para o usuário. Depois que o `Task` retornado por <xref:Microsoft.AspNetCore.Components.ComponentBase.OnInitializedAsync%2A> for concluído, o componente será rerenderizado com o estado atualizado. `Pages/FetchData.razor`no Blazor Server modelo: [!code-razor[](lifecycle/samples_snapshot/3.x/FetchData.razor?highlight=9,21,25)] ## <a name="component-disposal-with-idisposable"></a>Descarte de componentes com IDisposable Se um componente implementa <xref:System.IDisposable> , o [ `Dispose` método](/dotnet/standard/garbage-collection/implementing-dispose) é chamado quando o componente é removido da interface do usuário. O componente a seguir usa `@implements IDisposable` o e o `Dispose` método: ```razor @using System @implements IDisposable ... @code { public void Dispose() { ... } } ``` > [!NOTE] > A chamada <xref:Microsoft.AspNetCore.Components.ComponentBase.StateHasChanged%2A> no `Dispose` não tem suporte. <xref:Microsoft.AspNetCore.Components.ComponentBase.StateHasChanged%2A>pode ser invocado como parte da subdivisão do renderizador, portanto, não há suporte para a solicitação de atualizações da interface do usuário nesse ponto. Cancele a assinatura de manipuladores de eventos de eventos .NET. Os exemplos de [ Blazor formulário](xref:blazor/forms-validation) a seguir mostram como desvincular um manipulador de eventos no `Dispose` método: * Campo privado e abordagem lambda [!code-razor[](lifecycle/samples_snapshot/3.x/event-handler-disposal-1.razor?highlight=23,28)] * Abordagem do método privado [!code-razor[](lifecycle/samples_snapshot/3.x/event-handler-disposal-2.razor?highlight=16,26)] ## <a name="handle-errors"></a>Tratar erros Para obter informações sobre como lidar com erros durante a execução do método de ciclo de vida, consulte <xref:blazor/fundamentals/handle-errors#lifecycle-methods> . ## <a name="stateful-reconnection-after-prerendering"></a>Reconexão com estado após o pré-processamento Em um Blazor Server aplicativo quando <xref:Microsoft.AspNetCore.Mvc.TagHelpers.ComponentTagHelper.RenderMode> é <xref:Microsoft.AspNetCore.Mvc.Rendering.RenderMode.ServerPrerendered> , o componente é inicialmente renderizado estaticamente como parte da página. Depois que o navegador estabelece uma conexão de volta com o servidor, o componente é renderizado *novamente*e o componente agora é interativo. Se o [`OnInitialized{Async}`](#component-initialization-methods) método de ciclo de vida para inicializar o componente estiver presente, o método será executado *duas vezes*: * Quando o componente é renderizado estaticamente. * Depois que a conexão do servidor tiver sido estabelecida. Isso pode resultar em uma alteração perceptível nos dados exibidos na interface do usuário quando o componente é finalmente renderizado. Para evitar o cenário de processamento duplo em um Blazor Server aplicativo: * Passe um identificador que pode ser usado para armazenar em cache o estado durante o pré-processamento e para recuperar o estado após a reinicialização do aplicativo. * Use o identificador durante o pré-processamento para salvar o estado do componente. 
* Use o identificador após o pré-processamento para recuperar o estado armazenado em cache. O código a seguir demonstra uma atualização `WeatherForecastService` em um aplicativo baseado em modelo Blazor Server que evita a renderização dupla: ```csharp public class WeatherForecastService { private static readonly string[] summaries = new[] { "Freezing", "Bracing", "Chilly", "Cool", "Mild", "Warm", "Balmy", "Hot", "Sweltering", "Scorching" }; public WeatherForecastService(IMemoryCache memoryCache) { MemoryCache = memoryCache; } public IMemoryCache MemoryCache { get; } public Task<WeatherForecast[]> GetForecastAsync(DateTime startDate) { return MemoryCache.GetOrCreateAsync(startDate, async e => { e.SetOptions(new MemoryCacheEntryOptions { AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(30) }); var rng = new Random(); await Task.Delay(TimeSpan.FromSeconds(10)); return Enumerable.Range(1, 5).Select(index => new WeatherForecast { Date = startDate.AddDays(index), TemperatureC = rng.Next(-20, 55), Summary = summaries[rng.Next(summaries.Length)] }).ToArray(); }); } } ``` Para obter mais informações sobre o <xref:Microsoft.AspNetCore.Mvc.TagHelpers.ComponentTagHelper.RenderMode> , consulte <xref:blazor/fundamentals/additional-scenarios#render-mode> . ## <a name="detect-when-the-app-is-prerendering"></a>Detectar quando o aplicativo está sendo renderizado [!INCLUDE[](~/includes/blazor-prerendering.md)] ## <a name="cancelable-background-work"></a>Trabalho em segundo plano cancelável Os componentes geralmente executam trabalho em segundo plano de longa execução, como fazer chamadas de rede ( <xref:System.Net.Http.HttpClient> ) e interagir com bancos de dados. É desejável interromper o trabalho em segundo plano para conservar recursos do sistema em várias situações. Por exemplo, as operações assíncronas em segundo plano não são interrompidas automaticamente quando um usuário sai de um componente. Outros motivos pelos quais os itens de trabalho em segundo plano podem exigir cancelamento incluem: * Uma tarefa em execução em segundo plano foi iniciada com dados de entrada ou parâmetros de processamento com falha. * O conjunto atual de itens de trabalho em segundo plano em execução deve ser substituído por um novo conjunto de itens de trabalho. * A prioridade das tarefas em execução no momento deve ser alterada. * O aplicativo deve ser desligado para reimplantá-lo no servidor. * Os recursos do servidor se tornam limitados, exigindo o reagendamento de itens de trabalho em segundo plano. Para implementar um padrão de trabalho de segundo plano cancelável em um componente: * Use um <xref:System.Threading.CancellationTokenSource> e <xref:System.Threading.CancellationToken> . * Na [alienação do componente](#component-disposal-with-idisposable) e a qualquer cancelamento do ponto, é desejável cancelar manualmente o token, chamar [`CancellationTokenSource.Cancel`](xref:System.Threading.CancellationTokenSource.Cancel%2A) para sinalizar que o trabalho em segundo plano deve ser cancelado. * Após a chamada assíncrona retornar, chame <xref:System.Threading.CancellationToken.ThrowIfCancellationRequested%2A> no token. No exemplo a seguir: * `await Task.Delay(5000, cts.Token);`representa o trabalho em segundo plano assíncrono de execução longa. * `BackgroundResourceMethod`representa um método de plano de fundo de execução longa que não deve iniciar se o `Resource` for descartado antes de o método ser chamado. 
```razor @implements IDisposable @using System.Threading <button @onclick="LongRunningWork">Trigger long running work</button> @code { private Resource resource = new Resource(); private CancellationTokenSource cts = new CancellationTokenSource(); protected async Task LongRunningWork() { await Task.Delay(5000, cts.Token); cts.Token.ThrowIfCancellationRequested(); resource.BackgroundResourceMethod(); } public void Dispose() { cts.Cancel(); cts.Dispose(); resource.Dispose(); } private class Resource : IDisposable { private bool disposed; public void BackgroundResourceMethod() { if (disposed) { throw new ObjectDisposedException(nameof(Resource)); } ... } public void Dispose() { disposed = true; } } } ```
52.536313
580
0.780253
por_Latn
0.982256
4ce833f2966cc6ff278e5b7981e69af5b12cf10a
596
md
Markdown
docs/development/core/public/kibana-plugin-core-public.savedobjectsimportretry.ignoremissingreferences.md
AlexanderWert/kibana
ae64fc259222f1147c1500104d7dcb4cfa263b63
[ "Apache-2.0" ]
1
2020-10-30T21:12:27.000Z
2020-10-30T21:12:27.000Z
docs/development/core/public/kibana-plugin-core-public.savedobjectsimportretry.ignoremissingreferences.md
AlexanderWert/kibana
ae64fc259222f1147c1500104d7dcb4cfa263b63
[ "Apache-2.0" ]
4
2020-12-04T21:57:58.000Z
2022-03-24T03:49:27.000Z
docs/development/core/public/kibana-plugin-core-public.savedobjectsimportretry.ignoremissingreferences.md
jchakravarty/kibana
a6b2a6ef5b591a8231ee7cd3864a8b9087cfc8a4
[ "Apache-2.0" ]
1
2020-08-31T12:44:16.000Z
2020-08-31T12:44:16.000Z
<!-- Do not edit this file. It is automatically generated by API Documenter. --> [Home](./index.md) &gt; [kibana-plugin-core-public](./kibana-plugin-core-public.md) &gt; [SavedObjectsImportRetry](./kibana-plugin-core-public.savedobjectsimportretry.md) &gt; [ignoreMissingReferences](./kibana-plugin-core-public.savedobjectsimportretry.ignoremissingreferences.md) ## SavedObjectsImportRetry.ignoreMissingReferences property If `ignoreMissingReferences` is specified, reference validation will be skipped for this object. <b>Signature:</b> ```typescript ignoreMissingReferences?: boolean; ```
42.571429
281
0.788591
eng_Latn
0.381613
4ce84ca2422a26b1038fef22ca205ea1e42a71b2
316
md
Markdown
_posts/Template.md
CWKSC/CWKSC.github.io
6f5393ac4fc8960b3eaff625d7c36693e393aef4
[ "MIT" ]
3
2020-06-07T14:27:38.000Z
2020-06-12T18:45:26.000Z
_posts/Template.md
CWKSC/CWKSC.github.io
6f5393ac4fc8960b3eaff625d7c36693e393aef4
[ "MIT" ]
null
null
null
_posts/Template.md
CWKSC/CWKSC.github.io
6f5393ac4fc8960b3eaff625d7c36693e393aef4
[ "MIT" ]
null
null
null
--- date: 2020-06-15 00:00:00 layout: post title: subtitle: description: image: https://cdn.jsdelivr.net/gh/CWKSC/MyResources/Image/post11.jpg optimized_image: https://cdn.jsdelivr.net/gh/CWKSC/MyResources/Image/optimized/post11_opt.jpg category: tags: author: CWKSC paginate: false math: true --- Content here
19.75
93
0.765823
kor_Hang
0.250961
4ce88337ddcd0eee14ce41da7b7b5e41fee90a1d
15,609
md
Markdown
articles/data-lake-store/data-lake-store-performance-tuning-storm.md
changeworld/azure-docs.hu-hu
f0a30d78dd2458170473188ccce3aa7e128b7f89
[ "CC-BY-4.0", "MIT" ]
1
2019-09-29T16:59:33.000Z
2019-09-29T16:59:33.000Z
articles/data-lake-store/data-lake-store-performance-tuning-storm.md
Nike1016/azure-docs.hu-hu
eaca0faf37d4e64d5d6222ae8fd9c90222634341
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/data-lake-store/data-lake-store-performance-tuning-storm.md
Nike1016/azure-docs.hu-hu
eaca0faf37d4e64d5d6222ae8fd9c90222634341
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Az Azure Data Lake Storage Gen1 Storm teljesítmény-finomhangolási útmutató |} A Microsoft Docs description: Az Azure Data Lake Storage Gen1 Storm teljesítmény-finomhangolási útmutató services: data-lake-store documentationcenter: '' author: stewu manager: amitkul editor: stewu ms.assetid: ebde7b9f-2e51-4d43-b7ab-566417221335 ms.service: data-lake-store ms.devlang: na ms.topic: article ms.date: 12/19/2016 ms.author: stewu ms.openlocfilehash: 8066a759cf80be6e9ca232bcd3693a5fa4d2f2f9 ms.sourcegitcommit: d4dfbc34a1f03488e1b7bc5e711a11b72c717ada ms.translationtype: MT ms.contentlocale: hu-HU ms.lasthandoff: 06/13/2019 ms.locfileid: "61436477" --- # <a name="performance-tuning-guidance-for-storm-on-hdinsight-and-azure-data-lake-storage-gen1"></a>Teljesítmény-finomhangolási útmutató a Storm on HDInsight és az Azure Data Lake Storage Gen1 Ismerje meg, amelyek érdemes figyelembe venni, egy Azure Storm-topológia teljesítményének hangolása. Például fontos tudni, hogy a spoutok és a boltok (a munkahelyi e i/o- vagy memóriaigényes) által végzett munka jellemzőit. Ez a cikk ismerteti a széles körű teljesítmény-finomhangolási útmutató, beleértve a gyakori hibák elhárítása. ## <a name="prerequisites"></a>Előfeltételek * **Azure-előfizetés**. Lásd: [Ingyenes Azure-fiók létrehozása](https://azure.microsoft.com/pricing/free-trial/). * **Az Azure Data Lake Storage Gen1 fiók**. Létrehozásával kapcsolatos utasításokért lásd: [Ismerkedés az Azure Data Lake Storage Gen1](data-lake-store-get-started-portal.md). * **Egy Azure HDInsight-fürt** hozzáférést egy Data Lake Storage Gen1 fiókot. Lásd: [egy HDInsight-fürt létrehozása a Data Lake Storage Gen1](data-lake-store-hdinsight-hadoop-use-portal.md). Ellenőrizze, hogy engedélyezi a távoli asztal a fürtöt. * **Egy Storm-fürt futtatása a Data Lake Storage Gen1**. További információkért lásd: [HDInsight alatt futó Stormmal](https://docs.microsoft.com/azure/hdinsight/hdinsight-storm-overview). * **Teljesítmény-finomhangolási útmutató a Data Lake Storage Gen1**. Az általános teljesítmény fogalmak, lásd: [Data Lake Storage Gen1 teljesítményének hangolása útmutatója](https://docs.microsoft.com/azure/data-lake-store/data-lake-store-performance-tuning-guidance). ## <a name="tune-the-parallelism-of-the-topology"></a>A topológia párhuzamosságát hangolása Előfordulhat, hogy növelje az egyidejűség és a Data Lake Storage Gen1 az i/o teljesítmény javítása érdekében. Storm-topológia párhuzamosságát meghatározó konfigurációk készletével rendelkezik: * (A dolgozók egyenlően vannak elosztva a virtuális gépeket) a munkavégző folyamatok száma. * Spout végrehajtó példányok száma. * Bolt végrehajtó példányok száma. * Spout feladatok száma. * Bolt feladatok száma. Ha például 4 virtuális gépet, és 4 munkavégző folyamatok, 32 spout végrehajtóval és 32 spout feladatokat, és a 256 bolt végrehajtóval és 512 bolt feladatok fürtön, vegye figyelembe a következőket: Minden egyes felügyelő, amely egy munkavégző csomópont, egy egyetlen feldolgozói folyamat a Java virtuális gép (JVM) rendelkezik. A JVM folyamatról 4 spout a szálak és a 64 bolt szál kezeli. Mindegyik szál belül feladatok egymás után futnak. Az előző konfigurációval mindegyik spout szál 1 feladat, pedig minden egyes bolt szál 2 feladatok. A Storm az alábbiakban a különböző összetevők érintett, és azok rendelkezik párhuzamosság szintje: * A (Storm Nimbus elnevezésű) fő csomópontot és feladatok kezelése szolgál. Ezek a csomópontok semmilyen hatást a párhuzamosság foka nem. * A felügyeleti csomópontok. 
A HDInsight ez felel meg az Azure virtuális gép munkavégző csomópont. * A feldolgozó feladatok a virtuális gépeken futó Storm folyamatokat. Mindegyik feldolgozó tevékenység felel meg a JVM-példányra. A Storm osztja el a kell adnia a feldolgozó csomópontok lehető legegyenletesebben feldolgozó folyamatok számát. * Spout és boltok végrehajtó példányok. A szál a feldolgozók (JVMs) belül futó minden végrehajtó példány felel meg. * A Storm-feladatokat. Ezek a logikai feladatokat, hogy minden egyes szálait futtassa. Ez nem változtatja, párhuzamosság szintjét, így kell kiértékelni, ha több feladatot végrehajtó kiszolgálónként kell-e. ### <a name="get-the-best-performance-from-data-lake-storage-gen1"></a>A legjobb teljesítményt beolvasása a Data Lake Storage Gen1 Ha a Data Lake Storage Gen1 dolgozik, a lehető legjobb teljesítményt kap Ha tegye a következőket: * Egyesítse a kisméretű hozzáfűzi, nagyobb méretű (ideális esetben 4 MB). * Hajtsa végre, sok egyidejű kérés, amennyit csak lehet. Mindegyik bolt szál blokkolása olvasási állapotát, mert Ön szeretné valahol magonként 8 – 12 szálak tartományán. Ezt követi a hálózati adapter és a Processzor felhasznált is. Nagyobb méretű virtuális gép lehetővé teszi több egyidejű kéréseket. ### <a name="example-topology"></a>Példa topológia Tegyük fel, hogy egy 8 feldolgozó csomópontot tartalmazó fürt D13v2 Azure virtuális gépen. Ez a virtuális gép rendelkezik 8 mag, így többek között a 8 feldolgozó csomópontot kell 64 magok száma összesen. Tegyük fel, magonként 8 bolt szál végzünk. A megadott 64 mag, azt jelenti, hogy szeretnénk 512 teljes bolt végrehajtó példányok (vagyis a szál). Ebben az esetben tegyük fel, hogy virtuális gépenként több JVM kezdődnie, és főként használja a hozzászóláslánc egyidejűségi belül a JVM párhuzamosság elérése érdekében. Ez azt jelenti, 8 feldolgozó feladatokat (egy Azure virtuális gépenként), és 512 bolt végrehajtóval van szükségünk. Ebben a konfigurációban a Storm megpróbálja terjeszteni a dolgozók munkavégző csomópontok (más néven a felügyeleti csomópontok) közötti egyenletes jogosultságot ad az egyes feldolgozó csomópontok 1 JVM. Most a felettesi belül Storm próbál a végrehajtóval felügyelőt közötti egyenletes elosztása, így minden egyes felügyeleti (vagyis a JVM) 8 szálait minden. ## <a name="tune-additional-parameters"></a>További paraméterek hangolása Miután az alapszintű topológiát, érdemes lehet hogy szeretne-e a Teljesítménybeállítások paraméterek egyikét: * **JVMs száma feldolgozó csomópontonként.** Ha egy nagy mennyiségű adat struktúra (például egy keresési táblázat) a memóriában, minden egyes JVM gazdagép igényel külön példányt. Másik lehetőségként használhatja az adatok struktúrája számos szálakon Ha kevesebb JVMs. A bolt i/o JVMs száma nem derül lehető különbség a között ezek JVMs hozzáadott szálak számának. Az egyszerűség kedvéért célszerű worker kiszolgálónként egy JVM rendelkezik. Mi a bolt végez, illetve milyen alkalmazás feldolgozása, attól függően szükséges, azonban szükség lehet módosítani ezt a számot. * **Spout végrehajtóval száma.** Mivel az előző példában a boltok használ a Data Lake Storage Gen1 írása, a spoutok az értéke nem közvetlenül kapcsolódik a bolt teljesítményét. Azonban feldolgozás vagy i/o a spout történik az igényelt kreditmennyiség függvényében, egy célszerű a spoutok a legjobb teljesítmény hangolására. Győződjön meg arról, hogy rendelkezik-e elegendő spoutok tudják tartani a boltok foglalt. A különböző kimeneti mértéke meg kell egyeznie a boltok átviteli kapacitást. 
A tényleges konfiguráció a spout függ. * **Feladatok száma.** Minden egyes bolt egyetlen szálon fut. További feladatok / bolt nem ad meg semmilyen további egyidejűségi. Csak azok az előnyök, ha a rekord bosszankodnak, a folyamat vesz igénybe a bolt végrehajtási ideje nagy részét. Célszerű a csoporthoz be egy nagyobb számos rekordok hozzáfűzése nyugtázása a bolt történő elküldése előtt. Így a legtöbb esetben több feladat semmilyen további előnyt biztosítanak. * **Helyi vagy shuffle csoportosítást.** Ha ez a beállítás engedélyezve van, az azonos feldolgozó folyamaton belül boltok rekordokat tartalmazó érkeznek. Ez csökkenti a folyamatok közötti kommunikációt és a hálózati hívások. Ez a legtöbb topológiák ajánlott. Ez a alapvető forgatókönyv egy jó kiindulási pont. Tesztelje a saját adataival, a fenti paraméterek az optimális teljesítmény érdekében módosítania. ## <a name="tune-the-spout"></a>A spout hangolása A következő beállításokat a spout hangolásához módosíthatja. - **Rekord időtúllépés: topology.message.timeout.secs**. Ez a beállítás meghatározza, hogy egy üzenetet, majd nyugtázása, szükséges idő előtt, nem sikerült. - **Munkavégző folyamatok maximális memória: worker.childopts**. Ez a beállítás lehetővé teszi a Java Worker további parancssori paraméterek megadását. A leggyakrabban használt beállítás itt XmX, amely megadja, hogy a JVM-halommemória számára lefoglalt maximális memória. - **Függőben lévő maximális spout: topology.max.spout.pending**. Ez a beállítás határozza meg, hogy a flight (a topológia összes csomópontjának jelenleg még nem nyugtázott) spout szálanként bármikor rekordok számát. Egy jó számítási tennie, hogy a rekordok méretének becslése. Ezt követően döntse el, hogy mennyi memóriát egy spout szál rendelkezik. A teljes memória egy olyan hozzászólásláncra, és elosztja ezt az értéket, lefoglalt kell biztosítanak a felső határérték a maximális spout függőben lévő paraméter. ## <a name="tune-the-bolt"></a>A bolt hangolása A Data Lake Storage Gen1 ír le, amikor beállított mérete szinkronizálási házirend (puffer az ügyféloldalon) 4 MB-ra. Írta vagy hsync() majd történik, csak akkor, amikor a puffer mérete a következő ezt az értéket. A Data Lake Storage Gen1 illesztőprogram a feldolgozón a virtuális gép automatikusan végrehajtja a pufferelés, kivéve, ha explicit módon egy hsync() hajt végre. Az alapértelmezett Data Lake Storage Gen1 Storm bolt paramétereinek mérete szinkronizálási házirend (fileBufferSize) ezt a paramétert hangolására használható. I/O-igényes topológia esetén, egy célszerű mindegyik a saját fájl írása bolt szál lesz, és használatával úgy beállítani a fájl rotációja (fileRotationSize). A fájl bizonyos méretet elér, amikor a stream automatikusan ki van ürítve, és a egy új fájl beíródik. A rotációja javasolt fájl mérete 1 GB. ### <a name="handle-tuple-data"></a>Leíró rekord adatait A Storm a spout tartalmazza rekordot, mindaddig, amíg explicit módon a bolt által elfogadott. Ha egy rekordot a bolt által olvasott, de még nem nyugtázták, a spout előfordulhat, hogy nem rendelkezik állandóként létrehozni a Data Lake Storage Gen1 háttérrendszerének. Elfogadott, egy rekord, miután a spout garantálható adatmegőrzés a bolt által, és ezután törölheti az adatok bármilyen forrásból, olvasásakor. A Data Lake Storage Gen1 a legjobb teljesítmény érdekében a bolt rendelkezik tuple adatok 4 MB-os puffer. Ezután írni a Data Lake Storage Gen1 vissza vége, egy 4 MB-os írási. 
Miután az adatok sikeresen lett írva a tároló (hívó hflush()) által a bolt is igazolja vissza az adatokat a spout. Ez a az itt megadott példa bolt leírása. Emellett akkor is, amely tárolja a rekordokat tartalmazó, mielőtt a hflush() kezdeményezték, és a rekordok felsorolásának arra vonatkozik, nagyobb számú elfogadható. Ez azonban növeli, hogy a spout tárolására van szüksége, és így nő a memória mennyiségét szükséges JVM útban rekordok számát. > [!NOTE] > Előfordulhat, hogy az alkalmazások a követelmény, hogy tudomásul veszi a rekordok gyakrabban (a data kisebb, mint 4 MB-os) más nem teljesítmény javítása érdekében. Azonban, hogy az i/o átviteli sebességet, hogy a tároló háttér hatással lehetnek. Alaposan mérjük a kompromisszummal jár, szemben a bolt i/o-teljesítményt. Ha a rekordok sebessége nem nagy, ezért a 4 MB-os puffer töltse ki, gondolja át, ezt úgy csökkentése hosszú ideig tart: * A boltok számának csökkentését, így nincsenek töltse ki a kevesebb pufferek. * Minden kiürítéseinek száma x vagy minden y ezredmásodperc kellene egy idő- vagy száma-alapú szabályzat, ahol egy hflush() az aktivált, és a rekordok felsorolásának eddig összegyűlt ismernek vissza. Vegye figyelembe, hogy az átviteli sebességet ebben az esetben alacsonyabb, de események lassú arány, maximális átviteli sebesség nem a legnagyobb célja mégis. Ezek a megoldások a teljes rekordhoz áramlanak keresztül az áruházban a szükséges idő csökkentése érdekében. Ez előfordulhat, hogy számít, ha azt szeretné, hogy még egy kis gyakorisága rendelkező valós idejű folyamatot. Vegye figyelembe, hogy ha a rekord aránya alacsony, úgy kell módosítania a topology.message.timeout_secs paramétert, hogy a rekordok felsorolásának nem időtúllépés, amíg azok kihozhatják pufferelt vagy nem dolgozott fel. ## <a name="monitor-your-topology-in-storm"></a>A topológia a Storm figyelése A topológia futása közben figyelheti a Storm felhasználói felületén. A fő paramétereket, és tekintse meg a következők: * **Teljes folyamat-végrehajtás késése.** Ez az egy rekordot a spout által kibocsátott, a bolt által feldolgozott, és arra vonatkozik, átlagos idő. * **Teljes bolt folyamat késleltetésére.** Ez a, amíg nem kap nyugtázása a rekord, a bolt által eltöltött átlagos időt. * **Teljes bolt késése hajtható végre.** Ez az az execute metódus a bolt által eltöltött átlagos időt. * **Hibák száma.** Ez vonatkozik, amelyek nem tudta teljesen feldolgozni azokat időkorlátjának lejártáig rekordok számát. * **A kapacitás.** Ez a mérték, hogyan foglalt a rendszer. Ha ez a szám 1, a boltok gyors akkor is működik. Ha kevesebb mint 1, növelheti a párhuzamosságot. Ha nagyobb, mint 1, csökkentheti a párhuzamosság. ## <a name="troubleshoot-common-problems"></a>Gyakori hibák elhárítása Az alábbiakban néhány gyakori hibaelhárítási forgatókönyveket. * **Számos rekordokat tartalmazó időkorlátja.** Tekintse meg annak megállapításához, ahol a szűk keresztmetszetet a topológia összes csomópontján. Ennek leggyakoribb oka az, hogy a boltok nem érhetők el a különböző tartani. Ez belső puffer eltömődés közben feldolgozásra váró rekordok vezet. Fontolja meg az időtúllépés értéke növelésével vagy csökkentésével a maximális spout függőben van. * **Nincs olyan magas teljes folyamat végrehajtása késés, de egy folyamat alacsony bolt késése.** Ebben az esetben is lehet, hogy a rekordok felsorolásának nincs folyamatban ismernek elég gyorsan. Ellenőrizze, hogy nincsenek-e elegendő számú acknowledgers. Egy másik lehetőség az, hogy azok vannak várakozik a túl hosszú ahhoz a boltok feldolgozását. 
Csökkentheti a maximális spout függőben van. * **Van egy nagy bolt késése hajtható végre.** Ez azt jelenti, hogy az execute() metódust a bolt-túl sokáig tart. Optimalizálhatja a kódot, vagy írási méretek tekintse meg és viselkedés ürítése. ### <a name="data-lake-storage-gen1-throttling"></a>Data Lake Storage Gen1 szabályozása Ha eléri a Data Lake Storage Gen1 által nyújtott sávszélesség korlátai, megjelenhet tevékenységhibákat okozna. Tekintse meg a feladat naplókat szabályozási hibákat tapasztal. A párhuzamosság tároló méretének növelésével csökkenthető. Annak ellenőrzéséhez, hogy a, szabályozott első, a hibakeresési ügyféloldali naplózás engedélyezése: 1. A **Ambari** > **Storm** > **Config** > **storm-feldolgozó – log4j speciális**, módosítása **&lt;gyökér szintű = "info"&gt;** való **&lt;gyökér szintű = "debug"&gt;** . Indítsa újra az összes a csomópontok/szolgáltatást a konfigurációjának érvénybe léptetéséhez. 2. A figyelő a Storm-topológia bejelentkezik a munkavégző csomópontok (/var/log/storm/worker-artifacts alatt /&lt;TopologyName&gt;/&lt;port&gt;/worker.log) a Data Lake Storage Gen1 szabályozási kivételeket. ## <a name="next-steps"></a>További lépések További teljesítményhangolás, a Storm lehet hivatkozni [ebben a blogbejegyzésben](https://blogs.msdn.microsoft.com/shanyu/2015/05/14/performance-tuning-for-hdinsight-storm-and-microsoft-azure-eventhubs/). Egy további példa futtatásához: [erre a Githubon](https://github.com/hdinsight/storm-performance-automation).
109.922535
788
0.809981
hun_Latn
1.00001
4ce89497ca9094ba0c0843dc24874b8e112f1ae7
3,597
md
Markdown
about/README.md
pascalvanhecke/dutchblockchainweek
55f962f4ff665d267825a733271db2fe2cae9ab9
[ "MIT" ]
null
null
null
about/README.md
pascalvanhecke/dutchblockchainweek
55f962f4ff665d267825a733271db2fe2cae9ab9
[ "MIT" ]
null
null
null
about/README.md
pascalvanhecke/dutchblockchainweek
55f962f4ff665d267825a733271db2fe2cae9ab9
[ "MIT" ]
null
null
null
--- ### DON'T MAKE CHANGES BELOW THIS LINE! ### title: About --- # About Dutch Blockchain Week is taking place from the 2nd till the 7th of June 2019 and is the first of its kind with events hosted not in just one city but across the country in Amsterdam, Rotterdam, Utrecht, The Hague and Arnhem, Tilburg and Almelo. Blockchain Netherlands, as the initiator of this week, brings together the Dutch ecosystem creating a truly community-driven blockchain week. They aim to connect, strengthen, expose the Dutch blockchain community to an international stage. What makes this Dutch Blockchain Week unique is that the program is completely created and organized by the community, thus offering a variety of topics, events accessible regardless of knowledge level and covering all types of organization in the blockchain space. Dutch Blockchain Week takes place in parallel to the world-renowned Money2020 conferences with a focus on innovation in the financial industry. The momentum that is created by the influx of thousands of international entrepreneurs, executives and regulators gives an enormous impulse for the Dutch blockchain ecosystem and therefore Dutch Blockchain Week. Together with partners, such as BlockchainTalks, Lisk Center Utrecht, BlockDAM, Blockbar, Bitcoin Magazine NL a program is created that covers everything from fintech to regulation, from women in tech to social good, new economic models to technical deep-dives. At the moment, 15 events are confirmed to take place in Amsterdam, Rotterdam, Utrecht, The Hague and Arnhem. More events in Amsterdam, Tilburg and Almelo will be announced soon. ## Partners - [Bitcoin Magazine NL (Media Partner)](https://bitcoinmagazine.nl/) - [NEVER BEEN BEFORE](https://www.neverbeenbefore.com/) - [Blockchain Talent Lab](http://www.blockchaintalentlab.com/) - [Work on Blockchain](https://workonblockchain.com/) - [Meet Berlage (Location sponsor)](https://meetberlage.com/) - [Epicenter (Location sponsor)](https://epicenteramsterdam.com) - [Mindspace (Location sponsor)](https://www.mindspace.me/amsterdam/) - [Windesheim Flevoland (Location sponsor)](https://www.windesheimflevoland.nl/) ## Organizers - [BlockchainTalks](https://blockchaintalks.io/) - [Crypto Canal](https://cryptocanal.org/) - [BlockDAM](https://www.meetup.com/Permissionless-Society/) - [Blockbar | Open Blockchain Lab The Hague](https://www.blockbar.nl/) - [Lisk Center Utrecht](https://www.liskcenter.io/) - [Rotterdam Blockchain Community](https://rotterdamblockchain.com/) - [Smilo](https://smilo.io/) - [Ethereum DEV NL](http://www.ethereum.nl/) - [Unibright](https://unibright.io/) - [Hyperledger NL](https://www.meetup.com/Hyperledger-Netherlands/) - [Token Agency](https://token.agency) - [Lekkercryptisch](https://lekkercryptisch.nl/) - [Arnhem Bitcoin City](https://www.arnhembitcoinstad.nl/) - [Bitcoin Wednesday](https://www.bitcoinwednesday.com/) - [Algorand](https://www.algorand.com/) ## Speakers Blockdata, Blockchain 4 Humanity, Blockchain Investments & Co., BchainWise, BitFury, BonSanca, Bx3 Consulting, Bloxy.Info, Coinstone Capital, Cybercapital, Cement DAO, Eldorado.io, Emerge, HumanVenture Global SA, IQ-EQ, IT To Profit, Krull Smart Solutions, MakerDAO, Nik Page Experience, Parallel Industries, Request, RIDDLE&CODE, Scrypt.Media, Tykn, The Fork, V-ID ## Communication Channels - [Email](mailto:mail@dutchblockchainweek.com) - [Twitter](https://twitter.com/DutchBlockWeek) @DutchBlockWeek - [Telegram](https://t.me/Dutchblockchainweek) #BCNL #DBC #DutchBlockchainWeek <!-- ### DON'T MAKE 
CHANGES BELOW THIS LINE! ### -->
64.232143
506
0.77231
eng_Latn
0.758808
4ce8a4b5252a729e4d77550222bd5424635760c2
2,660
md
Markdown
miniconda/README.md
andersy005/anaconda-deploy-cheyenne
dfe2a1ef126b80208f127396f74721e426e7b014
[ "MIT" ]
null
null
null
miniconda/README.md
andersy005/anaconda-deploy-cheyenne
dfe2a1ef126b80208f127396f74721e426e7b014
[ "MIT" ]
2
2017-12-06T20:47:35.000Z
2018-01-31T15:57:11.000Z
miniconda/README.md
andersy005/anaconda-deploy-cheyenne
dfe2a1ef126b80208f127396f74721e426e7b014
[ "MIT" ]
null
null
null
# Conda deployment on Cheyenne - [Conda deployment on Cheyenne](#conda-deployment-on-cheyenne) - [Installing using HPCinstall](#installing-using-hpcinstall) - [Usage](#usage) Documented script example of Miniconda deployment on [Cheyenne Supercomputer](https://www2.cisl.ucar.edu/resources/computational-systems/cheyenne). ## Installing using [HPCinstall](https://github.com/NCAR/HPCinstall) - Get the source code ```bash $ git clone https://github.com/andersy005/modules-cheyenne.git $ cd modules-cheyenne/miniconda ``` - Load HPCinstall module ```bash $ module purge $ module use $MODULEPATH_ROOT/system $ module load hpcinstall ``` NOTE: By default, `hpcinstall` module works with Python 2 only. Therefore, please make sure that you have `python2` as your default on your path by running the following commands: ```bash $ which python $ python --version ``` - Install using HPCinstall. `force` argument is used to force overwrite of existing install. For more details on how to use `hpcinstall`, just run `hpcinstall --help`. ```bash $ hpcinstall build_conda --force ``` ## Usage Hpcinstall sets some environment variables that are needed when loading a newly created module. To get these environment variables, look for `hpci.main.log` file in the directory from which you executed the installation command above. You will see something that looks like: ```bash abanihi@cheyenne1: /glade/work/abanihi/devel/personal/modules-cheyenne/conda $ cat hpci.main.log On 2018-10-11T11:36:58.229174 abanihi called HPCinstall from /gpfs/u/apps/ch/opt/hpcinstall/1.2.6/hpcinstall.py invoked as /gpfs/u/apps/ch/opt/hpcinstall/1.2.6/hpcinstall build_miniconda --force Setting environmental variables: HPCI_SW_DIR = /glade/work/abanihi/softwares/miniconda/3/ HPCI_SW_NAME = miniconda HPCI_SW_VERSION = 3 HPCI_MOD_DIR = /glade/work/abanihi/softwares/modulefiles/ HPCI_MOD_DIR_IDEP = /glade/work/abanihi/softwares/modulefiles/idep/ HPCI_MOD_DIR_CDEP = not_compiler_dependent HPCI_MOD_PREREQ = Running ./build_conda... Done running ./build_conda - exited with code 0 For more details about this code, see URL: https://conda.io/docs/ Hashdir: d41d8cd98f00b204e9800998ecf8427e /glade/work/abanihi/softwares/miniconda/3 ``` To load the `conda` module, you will need to run: `$ module use $HPCI_MOD_DIR` `$ module load conda` NOTE: Replace `HPCI_MOD_DIR` with the corresponding value printed above. ```bash $ module use /glade/work/abanihi/softwares/modulefiles/ $ module load miniconda # Load the module $ module help miniconda # Use module help to find more info on the module ``` **You are now all set!**
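As a quick sanity check after loading the module, you can confirm that `conda` resolves to the new install. This is only a sketch; the module path and name below are taken from the example log above (`HPCI_MOD_DIR` / `HPCI_SW_NAME`) and will differ for your account:

```bash
module use /glade/work/abanihi/softwares/modulefiles/
module load miniconda

which conda      # should resolve under the HPCI_SW_DIR printed in hpci.main.log
conda --version
conda env list   # the base environment should be listed
```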
32.839506
179
0.761278
eng_Latn
0.867286
4ce9395f13b6e43766fad4994b5ee6a989e93a6e
8,052
md
Markdown
lib/tcllib/embedded/md/tcllib/files/modules/tar/tar.md
marsman57/TclIdentityServerREST
e4bbb856cf9481f662fcaa8370500d03908e5a50
[ "MIT" ]
86
2015-01-29T03:48:33.000Z
2022-03-10T16:55:04.000Z
lib/tcllib/embedded/md/tcllib/files/modules/tar/tar.md
marsman57/TclIdentityServerREST
e4bbb856cf9481f662fcaa8370500d03908e5a50
[ "MIT" ]
7
2015-06-02T08:29:21.000Z
2019-06-13T05:48:17.000Z
lib/tcllib/embedded/md/tcllib/files/modules/tar/tar.md
marsman57/TclIdentityServerREST
e4bbb856cf9481f662fcaa8370500d03908e5a50
[ "MIT" ]
53
2015-02-13T01:31:07.000Z
2021-10-20T11:54:48.000Z
[//000000001]: # (tar \- Tar file handling) [//000000002]: # (Generated from file 'tar\.man' by tcllib/doctools with format 'markdown') [//000000003]: # (tar\(n\) 0\.11 tcllib "Tar file handling") <hr> [ <a href="../../../../toc.md">Main Table Of Contents</a> &#124; <a href="../../../toc.md">Table Of Contents</a> &#124; <a href="../../../../index.md">Keyword Index</a> &#124; <a href="../../../../toc0.md">Categories</a> &#124; <a href="../../../../toc1.md">Modules</a> &#124; <a href="../../../../toc2.md">Applications</a> ] <hr> # NAME tar \- Tar file creation, extraction & manipulation # <a name='toc'></a>Table Of Contents - [Table Of Contents](#toc) - [Synopsis](#synopsis) - [Description](#section1) - [Bugs, Ideas, Feedback](#section2) - [Keywords](#keywords) - [Category](#category) # <a name='synopsis'></a>SYNOPSIS package require Tcl 8\.4 package require tar ?0\.11? [__::tar::contents__ *tarball* ?__\-chan__?](#1) [__::tar::stat__ *tarball* ?file? ?__\-chan__?](#2) [__::tar::untar__ *tarball* *args*](#3) [__::tar::get__ *tarball* *fileName* ?__\-chan__?](#4) [__::tar::create__ *tarball* *files* *args*](#5) [__::tar::add__ *tarball* *files* *args*](#6) [__::tar::remove__ *tarball* *files*](#7) # <a name='description'></a>DESCRIPTION Note: Starting with version 0\.8 the tar reader commands \(contents, stats, get, untar\) support the GNU LongName extension \(header type 'L'\) for large paths\. - <a name='1'></a>__::tar::contents__ *tarball* ?__\-chan__? Returns a list of the files contained in *tarball*\. The order is not sorted and depends on the order files were stored in the archive\. If the option __\-chan__ is present *tarball* is interpreted as an open channel\. It is assumed that the channel was opened for reading, and configured for binary input\. The command will *not* close the channel\. - <a name='2'></a>__::tar::stat__ *tarball* ?file? ?__\-chan__? Returns a nested dict containing information on the named ?file? in *tarball*, or all files if none is specified\. The top level are pairs of filename and info\. The info is a dict with the keys "__mode__ __uid__ __gid__ __size__ __mtime__ __type__ __linkname__ __uname__ __gname__ __devmajor__ __devminor__" % ::tar::stat tarball.tar foo.jpg {mode 0644 uid 1000 gid 0 size 7580 mtime 811903867 type file linkname {} uname user gname wheel devmajor 0 devminor 0} If the option __\-chan__ is present *tarball* is interpreted as an open channel\. It is assumed that the channel was opened for reading, and configured for binary input\. The command will *not* close the channel\. - <a name='3'></a>__::tar::untar__ *tarball* *args* Extracts *tarball*\. *\-file* and *\-glob* limit the extraction to files which exactly match or pattern match the given argument\. No error is thrown if no files match\. Returns a list of filenames extracted and the file size\. The size will be null for non regular files\. Leading path seperators are stripped so paths will always be relative\. * __\-dir__ dirName Directory to extract to\. Uses __pwd__ if none is specified * __\-file__ fileName Only extract the file with this name\. The name is matched against the complete path stored in the archive including directories\. * __\-glob__ pattern Only extract files patching this glob style pattern\. The pattern is matched against the complete path stored in the archive\. * __\-nooverwrite__ Dont overwrite files that already exist * __\-nomtime__ Leave the file modification time as the current time instead of setting it to the value in the archive\. 
* __\-noperms__ In Unix, leave the file permissions as the current umask instead of setting them to the values in the archive\. * __\-chan__ If this option is present *tarball* is interpreted as an open channel\. It is assumed that the channel was opened for reading, and configured for binary input\. The command will *not* close the channel\. % foreach {file size} [::tar::untar tarball.tar -glob *.jpg] { puts "Extracted $file ($size bytes)" } - <a name='4'></a>__::tar::get__ *tarball* *fileName* ?__\-chan__? Returns the contents of *fileName* from the *tarball*\. % set readme [::tar::get tarball.tar doc/README] { % puts $readme } If the option __\-chan__ is present *tarball* is interpreted as an open channel\. It is assumed that the channel was opened for reading, and configured for binary input\. The command will *not* close the channel\. An error is thrown when *fileName* is not found in the tar archive\. - <a name='5'></a>__::tar::create__ *tarball* *files* *args* Creates a new tar file containing the *files*\. *files* must be specified as a single argument which is a proper list of filenames\. * __\-dereference__ Normally __create__ will store links as an actual link pointing at a file that may or may not exist in the archive\. Specifying this option will cause the actual file point to by the link to be stored instead\. * __\-chan__ If this option is present *tarball* is interpreted as an open channel\. It is assumed that the channel was opened for writing, and configured for binary output\. The command will *not* close the channel\. % ::tar::create new.tar [glob -nocomplain file*] % ::tar::contents new.tar file1 file2 file3 - <a name='6'></a>__::tar::add__ *tarball* *files* *args* Appends *files* to the end of the existing *tarball*\. *files* must be specified as a single argument which is a proper list of filenames\. * __\-dereference__ Normally __add__ will store links as an actual link pointing at a file that may or may not exist in the archive\. Specifying this option will cause the actual file point to by the link to be stored instead\. * __\-prefix__ string Normally __add__ will store files under exactly the name specified as argument\. Specifying a ?\-prefix? causes the *string* to be prepended to every name\. * __\-quick__ The only sure way to find the position in the *tarball* where new files can be added is to read it from start, but if *tarball* was written with a "blocksize" of 1 \(as this package does\) then one can alternatively find this position by seeking from the end\. The ?\-quick? option tells __add__ to do the latter\. - <a name='7'></a>__::tar::remove__ *tarball* *files* Removes *files* from the *tarball*\. No error will result if the file does not exist in the tarball\. Directory write permission and free disk space equivalent to at least the size of the tarball will be needed\. % ::tar::remove new.tar {file2 file3} % ::tar::contents new.tar file3 # <a name='section2'></a>Bugs, Ideas, Feedback This document, and the package it describes, will undoubtedly contain bugs and other problems\. Please report such in the category *tar* of the [Tcllib Trackers](http://core\.tcl\.tk/tcllib/reportlist)\. Please also report any ideas for enhancements you may have for either package and/or documentation\. When proposing code changes, please provide *unified diffs*, i\.e the output of __diff \-u__\. Note further that *attachments* are strongly preferred over inlined patches\. 
Attachments can be made by going to the __Edit__ form of the ticket immediately after its creation, and then using the left\-most button in the secondary navigation bar\. # <a name='keywords'></a>KEYWORDS [archive](\.\./\.\./\.\./\.\./index\.md\#archive), [tape archive](\.\./\.\./\.\./\.\./index\.md\#tape\_archive), [tar](\.\./\.\./\.\./\.\./index\.md\#tar) # <a name='category'></a>CATEGORY File formats
37.626168
135
0.664804
eng_Latn
0.991796
4ce9c1d2161742e2f4bc43f3d3786cb624830bb8
377
md
Markdown
README.md
Aryssimon/Captain-Sonoz
407863e5a416c75f0cd2aee1974dff7d9619a4d9
[ "MIT" ]
null
null
null
README.md
Aryssimon/Captain-Sonoz
407863e5a416c75f0cd2aee1974dff7d9619a4d9
[ "MIT" ]
null
null
null
README.md
Aryssimon/Captain-Sonoz
407863e5a416c75f0cd2aee1974dff7d9619a4d9
[ "MIT" ]
null
null
null
# Captain Sonoz ## Installation and Utilisation 1. Adjust the game configuration in **Input.oz** 2. Choose the players from the following atoms: **random**, **kamikaze**, **rocketman**. 3. Compile all the .oz files using `make` 4. Run the game using `ozengine Main.ozf` > The compiled .ozf files can be cleaned using `make clean` ## Authors * Arys Simon * Thirifay Louis
20.944444
88
0.71618
eng_Latn
0.972235
4cea5b8b2ce2cc214baa002f351d72daef09ba07
3,336
md
Markdown
windows-driver-docs-pr/debugger/-vtop.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/debugger/-vtop.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
windows-driver-docs-pr/debugger/-vtop.md
Ryooooooga/windows-driver-docs.ja-jp
c7526f4e7d66ff01ae965b5670d19fd4be158f04
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: vtop description: Vtop 拡張機能では、仮想のアドレスを対応する物理アドレスに変換し、他のページ テーブルとページのディレクトリ情報を表示します。 ms.assetid: 41f4accc-3eb9-4406-a6cc-a05022166e14 keywords: - Windows デバッグ vtop ms.date: 05/23/2017 topic_type: - apiref api_name: - vtop api_type: - NA ms.localizationpriority: medium ms.openlocfilehash: 343265d0d42a14ed1c0780ca66831e8112d60234 ms.sourcegitcommit: 0cc5051945559a242d941a6f2799d161d8eba2a7 ms.translationtype: MT ms.contentlocale: ja-JP ms.lasthandoff: 04/23/2019 ms.locfileid: "63341780" --- # <a name="vtop"></a>!vtop **! Vtop**拡張機能は、対応する物理アドレスを仮想アドレスに変換し、他のページ テーブルとページのディレクトリ情報が表示されます。 構文 ```dbgcmd !vtop PFN VirtualAddress !vtop 0 VirtualAddress ``` ## <a name="span-idddkvtopdbgspanspan-idddkvtopdbgspanparameters"></a><span id="ddk__vtop_dbg"></span><span id="DDK__VTOP_DBG"></span>パラメーター <span id="_______DirBase______"></span><span id="_______dirbase______"></span><span id="_______DIRBASE______"></span> *DirBase* プロセスの基本クラスをディレクトリを指定します。 各プロセスでは、独自の仮想アドレス空間があります。 使用して、 [ **! プロセス**](-process.md)拡張機能、プロセスのディレクトリの基数を判断します。 <span id="_______PFN______"></span><span id="_______pfn______"></span> *PFN* プロセスのディレクトリ ベースのページのフレーム数 (PFN) を指定します。 <span id="_______0______"></span> **0** により **! vtop**現在を使用する[プロセス コンテキスト](changing-contexts.md#process-context)アドレス変換します。 <span id="_______VirtualAddress______"></span><span id="_______virtualaddress______"></span><span id="_______VIRTUALADDRESS______"></span> *virtualAddress* そのページが必要な仮想アドレスを指定します。 ### <a name="span-iddllspanspan-iddllspandll"></a><span id="DLL"></span><span id="dll"></span>DLL Kdexts.dll ### <a name="span-idadditionalinformationspanspan-idadditionalinformationspanspan-idadditionalinformationspanadditional-information"></a><span id="Additional_Information"></span><span id="additional_information"></span><span id="ADDITIONAL_INFORMATION"></span>追加情報 これらの結果を実現するための他のメソッドでは、次を参照してください。[仮想のアドレスを物理アドレスを変換する](converting-virtual-addresses-to-physical-addresses.md)します。 参照してください[ **! ptov**](-ptov.md)します。 ページのテーブルとページのディレクトリについては、次を参照してください。 *Microsoft Windows internals 』*、Mark Russinovich と David Solomon します。 <a name="remarks"></a>注釈 ------- このコマンドを使用する、 [ **! プロセス**](-process.md)拡張機能、プロセスのディレクトリの基数を判断します。 (つまり、12 ビット数の右シフト) によって次の 3 つの後続の 16 進数 0 を削除することで、このディレクトリ ベースのページのフレーム数 (PFN) を確認できます。 以下に例を示します。 ```dbgcmd kd> !process 0 0 **** NT ACTIVE PROCESS DUMP **** .... PROCESS ff779190 SessionId: 0 Cid: 04fc Peb: 7ffdf000 ParentCid: 0394 DirBase: 098fd000 ObjectTable: e1646b30 TableSize: 8. Image: MyApp.exe ``` 0x098FD000 ディレクトリ ベースなので、その PFN は 0x098FD です。 ```dbgcmd kd> !vtop 98fd 12f980 Pdi 0 Pti 12f 0012f980 09de9000 pfn(09de9) ``` 後続の 3 つのゼロは省略可能な方法に注意してください。 **! Vtop**拡張機能は、ページ directory インデックス (PDI)、ページ テーブルのインデックス (PTI)、最初に入力した仮想のアドレス、物理ページのページのフレーム数 (PFN) の先頭の物理アドレスが表示されます。ページ テーブル エントリ (PTE)。 仮想アドレス 0x0012F980 物理アドレスに変換する場合は、単に、最後の 3 つの 16 進数字 (0x980) を受け取り、物理アドレス (0x09DE9000) ページの先頭に追加する必要があります。 これは、物理アドレス 0x09DE9980 を与えます。 次の 3 つのゼロを削除し、完全なディレクトリを基本に渡すを忘れた場合 **! vtop** PFN ではなく、結果は通常なる正しい。 これはときに **! vtop**番号は PFN をするのには大きすぎて、その右-シフトが代わりに番号を使用して 12 ビット。 ```dbgcmd kd> !vtop 98fd 12f980 Pdi 0 Pti 12f 0012f980 09de9000 pfn(09de9) kd> !vtop 98fd000 12f980 Pdi 0 Pti 12f 0012f980 09de9000 pfn(09de9) ``` ただし、これをお勧め、PFN を常に使用するため、この方法でいくつかのディレクトリの基本値は変換されません。
31.17757
264
0.764089
yue_Hant
0.749035
4ceac44ff9dece9cdf0b9df96b8535adb2d72dab
1,610
md
Markdown
README.md
mauro-midolo/MouseMover
9fe58caa779611ba438c39650ec38168b5be2f9d
[ "Apache-2.0" ]
null
null
null
README.md
mauro-midolo/MouseMover
9fe58caa779611ba438c39650ec38168b5be2f9d
[ "Apache-2.0" ]
null
null
null
README.md
mauro-midolo/MouseMover
9fe58caa779611ba438c39650ec38168b5be2f9d
[ "Apache-2.0" ]
null
null
null
# Mouse Mover

[![Apache License, Version 2.0, January 2004](https://img.shields.io/github/license/apache/maven.svg?label=License)](https://github.com/mauro-midolo/MouseMover/blob/master/LICENSE) [![Maven Central](https://img.shields.io/maven-central/v/org.apache.maven.plugins/maven-javadoc-plugin.svg?label=Maven%20Central)](https://mvnrepository.com/artifact/com.github.mauro-midolo/MouseMover) ![GitHub issues](https://img.shields.io/github/issues-raw/mauro-midolo/mousemover)

## Tool usage

You can use the application directly using the .exe file (for Windows) or the .jar file (for Linux or macOS). <br>
You can download the latest version from [here](https://repo1.maven.org/maven2/com/github/mauro-midolo/MouseMover/1.4.1/MouseMover-1.4.1-distribution.zip) <br>
This is the application interface:<br>
![Mouse Mover example](https://github.com/mauro-midolo/MouseMover/blob/master/src/main/resources/MouseMover.PNG?raw=true)

## Java API Usage

You can also activate the mouse mover inside your Java application using the API.<br>
Follow these steps to import and use the library:

* Import the project as a Maven dependency <br>
`
<dependency>
    <groupId>com.github.mauro-midolo</groupId>
    <artifactId>MouseMover</artifactId>
    <version>1.4.1</version>
</dependency>
`
* Create and execute the following code<br>
`
import com.github.mauromidolo.mousemover.controll.MouseMoverController;
MouseMoverController.getInstance().switchOn(30);
`

You can also see the Javadoc documentation [here](https://mauro-midolo.github.io/MouseMover/com/github/mauromidolo/mousemover/controll/MouseMoverController.html)
61.923077
479
0.776398
yue_Hant
0.333131
4ceb1467456c313565e9c3113b17037f04c0e27b
1,363
md
Markdown
dynamicsax2012-technet/paymentmanagerservice-constructor-microsoft-dynamics-commerce-runtime-services.md
MicrosoftDocs/DynamicsAX2012-technet
4e3ffe40810e1b46742cdb19d1e90cf2c94a3662
[ "CC-BY-4.0", "MIT" ]
9
2019-01-16T13:55:51.000Z
2021-11-04T20:39:31.000Z
dynamicsax2012-technet/paymentmanagerservice-constructor-microsoft-dynamics-commerce-runtime-services.md
MicrosoftDocs/DynamicsAX2012-technet
4e3ffe40810e1b46742cdb19d1e90cf2c94a3662
[ "CC-BY-4.0", "MIT" ]
265
2018-08-07T18:36:16.000Z
2021-11-10T07:15:20.000Z
dynamicsax2012-technet/paymentmanagerservice-constructor-microsoft-dynamics-commerce-runtime-services.md
MicrosoftDocs/DynamicsAX2012-technet
4e3ffe40810e1b46742cdb19d1e90cf2c94a3662
[ "CC-BY-4.0", "MIT" ]
32
2018-08-09T22:29:36.000Z
2021-08-05T06:58:53.000Z
---
title: PaymentManagerService Constructor (Microsoft.Dynamics.Commerce.Runtime.Services)
TOCTitle: PaymentManagerService Constructor
ms:assetid: M:Microsoft.Dynamics.Commerce.Runtime.Services.PaymentManagerService.#ctor
ms:mtpsurl: https://technet.microsoft.com/library/microsoft.dynamics.commerce.runtime.services.paymentmanagerservice.paymentmanagerservice(v=AX.60)
ms:contentKeyID: 62207761
author: Khairunj
ms.date: 05/18/2015
mtps_version: v=AX.60
f1_keywords:
- Microsoft.Dynamics.Commerce.Runtime.Services.PaymentManagerService.#ctor
dev_langs:
- CSharp
- C++
- VB
---

# PaymentManagerService Constructor

[!INCLUDE[archive-banner](includes/archive-banner.md)]

**Namespace:**  [Microsoft.Dynamics.Commerce.Runtime.Services](microsoft-dynamics-commerce-runtime-services-namespace.md)

**Assembly:**  Microsoft.Dynamics.Commerce.Runtime.Services (in Microsoft.Dynamics.Commerce.Runtime.Services.dll)

## Syntax

``` vb
'Declaration
Public Sub New
'Usage
Dim instance As New PaymentManagerService()
```

``` csharp
public PaymentManagerService()
```

``` c++
public:
PaymentManagerService()
```

## See Also

#### Reference

[PaymentManagerService Class](paymentmanagerservice-class-microsoft-dynamics-commerce-runtime-services.md)

[Microsoft.Dynamics.Commerce.Runtime.Services Namespace](microsoft-dynamics-commerce-runtime-services-namespace.md)
25.716981
147
0.801174
yue_Hant
0.834498
4cebc633c8df8916e5a8aad711574aa6d9c1f97a
4,800
md
Markdown
_posts/2018-12-12-Download-2013-acura-rdx-engine.md
Camille-Conlin/26
00f0ca24639a34f881d6df937277b5431ae2dd5d
[ "MIT" ]
null
null
null
_posts/2018-12-12-Download-2013-acura-rdx-engine.md
Camille-Conlin/26
00f0ca24639a34f881d6df937277b5431ae2dd5d
[ "MIT" ]
null
null
null
_posts/2018-12-12-Download-2013-acura-rdx-engine.md
Camille-Conlin/26
00f0ca24639a34f881d6df937277b5431ae2dd5d
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download 2013 acura rdx engine book them was offered brandy. south of Port Dickson they run to the river bank, the story says. " He realized that he'd trashed a deserted bathroom. " Holmgren, with thy permission. At all events, grudges or resentments, he can no longer keep the ship in 2013 acura rdx engine from a distance but must track it closely. She reeling off the stool. She had drapery auroras are instead common, and the hide used for _baydars_, and said. she licked her fingers. "How long had Harry been dead?" representative from another studio been here already this morning?" of the Russians to correspond with those of the Portuguese and the ruled their departments in academia. foaming breakers. Sepharad?" Agnes asked. His mother had been an agent of hope and freedom in a 2013 acura rdx engine spanning not merely worlds but "ice-house," 2013 acura rdx engine. ' Quoth the king, and then the edges of the large holes closed so much the price, behind the bars, Barty raised off the gurney pillow. " And this was the end and the beginning? The console has a funny electric smell, paraded their newfound wealth and arrogance through the fashion houses and auction rooms of London, "I bought thee with my money and looked for fidelity from thee, deadly metal-on-metal rasp. And then suddenly he, as though cast loose stones that rattle like dice into the darkness, and In becoming brothers. " ABOUT TWO HUNDRED feet below the ridgeline, grudges or resentments. He wasn't going to what-if himself into a panic. She was straining the milk and setting out the pans? It is taught in winter and spring, the manufacturers pushing for deregulation of cheap (i. " And he was certified of this, or angel dust. cards. " With a smile that might as well have been a sheer! The King's Son and the Ogress dlxxxi professional boxing. Starck employed arguments difficult to refute. But the words had no weight or meaning? On the back there was a picture of her sitting in this same room, Matt, They're dead serious about it. It is at all events Owzyn's So saying, if I recollect right, and Junior was amount to much that I can see, thinks of that, must be in some ran her hand lovingly over the gossamer wall, whose inspiring widespread suspicion of conspiracy, although he was only a finder, 89, backward-hooked fangs exposed 2013 acura rdx engine their full wicked arc. The fin-like feet 2013 acura rdx engine 211: As specimens of the sub-fossil mollusc fauna of the He smiled at her. oiled and rattle-free. 3 deg. His features were not merely pan- Reminding himself that nature was merely a dumb machine, three elderly men, and straight out told me what studio or network you're with, Populus, and that we. Micky -David T. Not the veins, was stretched over them! She took a deep breath. "Anywhere. just concentrate on action and ignore the disgusting aftermath. 230. [233] stately banquet in honour of 2013 acura rdx engine _Vega_ expedition. " It was probably a curious mirth infected the twins, and there before him would be those nailhead eyes, and on that account the Navy had done nothing wrong. Lately she had made her way from day to day in a curious and fragile state of dietary, Queen Es Shuhba is come to thee. The Changer 2013 acura rdx engine openly at her! of addiction and insanity. "I have 2013 acura rdx engine idea on that," McKillian said. We must assume that he has absconded. 
" insufficient to illuminate the boy or to draw the attention of any 2013 acura rdx engine rocketing by at seventy or eighty THE TWO KINGS AND THE VIZIER'S DAUGHTERS. His Diary of a Book Reader, even the seats were like glass. Ah, you in writing (or by e-mail) within 30 days of receipt that she sense. Hand said, most married couples end up not saying 2013 acura rdx engine, nor did he win thereto save after sore travail. "A witchwind coming. 145. Awe readily mixes with the surface water and cools it, and I wondered what he was doing, but has a kernel. According to Leilani. Vanadium said, dreary, and generally lending Curtis quickly feels his way past the sink. " Bridges were made for people like her. this summer festival of the damned. themselves into false gods, fair of seeming and great. Two were of the year occurred on a pleasant afternoon in early April, smiling and confidential, i, ii. I went to the asparagus festival in Stockton once. Against his chest. No mammalia "But. I just wanted you to know fair enough the cracked-glass dwindled into trifles. A man of power had come to heal the cattle, who were the sailors C. " Sirocco snorted, where he had passed the summer in great want of sparing us the trouble of paying income tax on it. that no trace of it was left. "All I have is a nose," he by their interest in aftermath.
533.333333
4,705
0.784167
eng_Latn
0.999927
4cec27be67d4389fe98747a8f6c5bb69a4e9333a
8,707
md
Markdown
_posts/2013-02-14-my-first-game-html5-lightcycles.md
JDStraughan/jdstraughan.github.io
b1148c8baf168967fbc27995722dc4eea15f59cd
[ "Apache-2.0" ]
1
2016-10-03T09:51:49.000Z
2016-10-03T09:51:49.000Z
_posts/2013-02-14-my-first-game-html5-lightcycles.md
JDStraughan/jdstraughan.github.com
b1148c8baf168967fbc27995722dc4eea15f59cd
[ "Apache-2.0" ]
null
null
null
_posts/2013-02-14-my-first-game-html5-lightcycles.md
JDStraughan/jdstraughan.github.com
b1148c8baf168967fbc27995722dc4eea15f59cd
[ "Apache-2.0" ]
3
2015-05-20T15:18:37.000Z
2020-09-09T06:31:43.000Z
---
layout: post
published: true
title: My first game - HTML5 lightcycles
tags: ["html5", "javascript", "game development"]
description: "For the past few months I have been playing with the HTML5 canvas element off and on, eventually building TRON, my first canvas game."
header-img: "img/header/tron.jpg"
image-credit: Pacifier (Own work) [<a href="https://creativecommons.org/licenses/by-sa/3.0/">CC-BY-SA-3.0</a>], <a href="http://commons.wikimedia.org/wiki/File%3ATron_bryce.jpg">via Wikimedia Commons</a>
---

For the past few months I have been playing with the [HTML5 canvas element](https://en.wikipedia.org/wiki/Canvas_element) off and on. In recent years it has been the subject of much excitement in the web development community, and has matured rapidly in modern browsers. Now there are a wide variety of games and activities that utilize this relatively new element.

For whatever reasons, I just assumed manipulating the canvas would be a pain in the ass, so I avoided it. I thought, "I am not a game developer, so I have no need for it." I understood that not all canvas use was in the realm of game development; however, the other use cases weren't inspiring me to learn. So I ignored the canvas. Until a few months ago.

I was reading a thread on reddit and commenting to a new web developer about how learning to program is a process of writing code that does not work properly, finding the issues, fixing them, and then moving on. In this same thread, someone had commented that when they get bored learning a particular programming lesson, they just fire up [jsfiddle](http://jsfiddle.net/) and play with the canvas - because, they said - "it is just so fun and simple".

Fun and simple? Canvas? "Canvas is simple and fun", they said. "Reddit is full of trolls", I said. I don't like to feed the trolls. If it was simple, everyone would be doing it. If it were simple, then even I could write a game in javascript. So like any good redditor, I fired up my browser and was going to prove to the world that this cannot be, in any way, simple.

A few minutes later I had a square drawn on my screen. A few hours later I had multiple shapes in different colors. And they were moving. By the end of that first day, I had accomplished no real work, but I had shapes moving on my screen, under my control.

My mind exploded with ideas. Every mirror I passed had a cheerful Gabe Newell staring back at me. It had been over twenty years since I thought about writing games when I grew up. I have written code all my life, but somewhere along the way I lost the dream of my childhood. I had never written a game.

Unfortunately, I have a job. Where I write code. I can't really complain, I like writing code, but every day my mind was aching to get back to the canvas. Although I have not been able to devote large swaths of time to this endeavor, a few hours here and there is all that is needed to learn the basics required to get a game going in HTML5.

I'd draw shapes, make them spin, move, wrap, warp, and collide. I had finally done it - I was writing a game. Several, in fact. I had started a snake game, an asteroids game, and a math puzzle game. I was like Peter Pan. I tinkered with each, wrote and replaced a lot of bad code, contemplated how to determine if my shape collided with another, read stack overflow, blogs, and docs.

A few days ago I decided to lower the bar, and just sit down and complete a game. Something that could be played, had a beginning and an end, and was fun.
Thinking about what I wanted to build, I started with a small blue box and started moving it around the canvas. Then I commented out the code that redraws the board several times a second and kept "playing". As you can guess, the box did not just move anymore, it just "expanded" in one direction. Like the lightcycles from Tron.

I was 7 (the same age my son is now) when Tron came to the movie theater, and it changed my life. Programs, users, computers, graphics, gaming: Tron had it all. It was also one of the first VHS movies we had, and I am sure I just about wore the tape out. Once the arcade game was released, like many kids in the 80s, I dropped every quarter I could find into it. Of all the games it offered, I loved the lightcycles. I fought for the Users.

<div class="video">
  <iframe width="640" height="480" src="http://www.youtube.com/embed/ONg0rUogiEg?rel=0" frameborder="0" allowfullscreen></iframe>
  <h5>Original Tron arcade game - lightcycle level</h5>
</div>

I had my inspiration. I'd build a lightcycle game.

For the uninitiated, like myself, the logic behind gaming is a mystery. I have contemplated it often, but in all honesty have not read much on the subject. Based on my previous experiments, reading lots of blog posts (I wish I could list them all here) and a fair amount of generous stackoverflow answers, I could make my cycle appear, move, and I had figured out how to detect if I had left the game grid.

The canvas, after all, is just a bitmap with a simple API. Like an etch-a-sketch. You can move to a location, draw, fill, relocate, etc. Because this etch-a-sketch is in a computer, you can refresh it so many times a second no one will ever be the wiser. That is how animation works.

The tutorials, blogs, and answers got me that far. Beyond this was still covered in a cloud of mystery.

In my snake game I had not crossed the issue of the snake colliding with itself, and randomizing the location of the bait after each consumption was pretty easy. While this attempt at a game has not found completion, it was the early sandbox that I used to work with.

This lightcycle game was going to require an enemy, not just a piece of bait. This enemy would need to have some intelligence, the game would need to know when there was a collision, and it would need to determine a winner. These all seemed like daunting tasks.

I thought about studying up on some game development, finding a good framework, and putting together a game. Someday I may do this, but I decided to stick with native js, learn the ropes, see what is happening under the hood. I'd just try to solve some of these problems and then go see how others do it.

For the lightcycle's walls, for example, I just made a <code>history</code> array with the coordinates of each movement. Each time a cycle moves, it checks that the new coordinates are not outside the map, or in the history of either cycle. Why did I not just lump them all into one array? Who knows, I was too busy trying to figure out how to make it work. Is there a better way? I sure hope so, but this was working so I moved on.

> Note: A redditor pointed out it would be better to have a game board 2D array that stores the x,y coords of used tiles and instead of checking <code>indexOf</code> for each cycle's history, I can just look for existence of the x,y in the array. I'll add this in the next iteration of the game. Thanks _dmcinnes_!

Soon I had collisions, movement, light. I felt close to completion.
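To make that bookkeeping concrete, here is a minimal TypeScript sketch of the idea: keep every tile a cycle has visited in a "history", and before committing a move, check it against the grid bounds and both histories. This is not the post's actual source; it uses a coordinate-keyed `Set` (closer to the redditor's lookup suggestion than the original array-plus-`indexOf` approach), and the grid size, names, and the little smoke test are all invented for illustration.

```typescript
// Illustrative sketch only: trail-and-collision bookkeeping for a lightcycle game.

type Point = { x: number; y: number };

const GRID_W = 64;
const GRID_H = 48;

// Turn a coordinate into a string key so Set lookups are cheap.
const key = (p: Point) => `${p.x},${p.y}`;

class Cycle {
  history = new Set<string>(); // every tile this cycle's light wall occupies
  constructor(public pos: Point, public dir: Point) {
    this.history.add(key(pos));
  }
  nextTile(): Point {
    return { x: this.pos.x + this.dir.x, y: this.pos.y + this.dir.y };
  }
}

// A move is fatal if it leaves the grid or hits either cycle's light wall.
function crashes(tile: Point, cycles: Cycle[]): boolean {
  if (tile.x < 0 || tile.y < 0 || tile.x >= GRID_W || tile.y >= GRID_H) return true;
  return cycles.some(c => c.history.has(key(tile)));
}

// One game tick: advance the cycle, or report that it crashed.
function step(cycle: Cycle, all: Cycle[]): boolean {
  const next = cycle.nextTile();
  if (crashes(next, all)) return false; // cycle derezzed
  cycle.pos = next;
  cycle.history.add(key(next)); // the trail never gets erased
  return true;
}

// Tiny smoke test: drive one cycle straight into the right-hand wall.
const player = new Cycle({ x: 1, y: 1 }, { x: 1, y: 0 });
let alive = true;
while (alive) alive = step(player, [player]);
console.log(`player crashed at x=${player.pos.x}`);
```

The main difference from the array approach is only the cost of the membership test: a `Set` (or the suggested 2D board array) answers "is this tile taken?" in constant time, where `indexOf` over a growing history gets slower every tick.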
Then I realized I had to create some way to control the enemy cycle, and it was not going to write itself. For my first attempt, I simply had the enemy cycle start each turn looking right, and if it could go right it did. Then it looked up, then left, then down. It was not the smartest cycle in the world, but it gave me something to play against while testing my other code.

In my second attempt at "AI" (I use that term very loosely), I decided to create an advisor method. This function would scan the 4 directions, determine the best one, and advise the cycle of the direction with the most runway. If the current direction was blocked in the next tile, it would turn using the best route. To keep the player on their toes, it also has a ~10% chance of turning to the advisor's best option on each turn. Not the most elegant approach, I admit, but it works.

As of the time of this writing, it is what is currently controlling the Program bike. This method does beat me more often than I'd care to admit. Of all the areas I'd like to spend more time perfecting, this is the most intriguing to me.

Overall, I am quite happy that for under 250 lines of javascript I was able to produce a playable game that is both fun and challenging. If you are looking for a simple and fun way to expand your programming chops, I highly recommend trying out the canvas element.

You can play my HTML5 lightcycle game [in 640x480 mode here](http://jsfiddle.net/PxpVr/17/embedded/result/), or [in 320x240 mode here](http://jsfiddle.net/PxpVr/16/embedded/result/). As always, source code is [available on GitHub](https://github.com/JDStraughan/html5-lightcycles).
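For readers curious what such an "advisor" might look like, here is a rough TypeScript sketch of the idea described above: scan the four directions, measure the open runway in each, and turn to the best one when the tile ahead is blocked or on a roughly 10% whim. It is not the game's actual code; the `Board` interface, the names, and the demo values are invented for illustration.

```typescript
// Illustrative sketch only: a runway-counting "advisor" for the enemy cycle.

type Vec = { x: number; y: number };

const DIRS: Vec[] = [
  { x: 1, y: 0 }, { x: 0, y: -1 }, { x: -1, y: 0 }, { x: 0, y: 1 },
];

interface Board {
  width: number;
  height: number;
  blocked(x: number, y: number): boolean; // wall, or any cycle's trail
}

// How many tiles can we travel from `from` in direction `dir` before hitting something?
function runway(board: Board, from: Vec, dir: Vec): number {
  let steps = 0;
  let x = from.x + dir.x;
  let y = from.y + dir.y;
  while (x >= 0 && y >= 0 && x < board.width && y < board.height && !board.blocked(x, y)) {
    steps++;
    x += dir.x;
    y += dir.y;
  }
  return steps;
}

// The advisor: the direction with the longest runway from the current position.
function advise(board: Board, pos: Vec): Vec {
  return DIRS.reduce(
    (best, dir) => (runway(board, pos, dir) > runway(board, pos, best) ? dir : best),
    DIRS[0]
  );
}

// Per-tick steering: keep going unless the next tile is blocked, with a small
// random chance of taking the advisor's suggestion anyway to keep the player guessing.
function steer(board: Board, pos: Vec, current: Vec): Vec {
  const ahead = { x: pos.x + current.x, y: pos.y + current.y };
  const aheadBlocked =
    ahead.x < 0 || ahead.y < 0 || ahead.x >= board.width || ahead.y >= board.height ||
    board.blocked(ahead.x, ahead.y);
  if (aheadBlocked || Math.random() < 0.1) return advise(board, pos);
  return current;
}

// Tiny demo: a board with one blocked column, so the advisor turns away from it.
const demo: Board = { width: 10, height: 10, blocked: (x, _y) => x === 5 };
console.log(steer(demo, { x: 4, y: 4 }, { x: 1, y: 0 })); // ahead (5,4) is blocked -> turns
```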
138.206349
742
0.764328
eng_Latn
0.999935
4cec394a30938a09d14087eeae285c791b815730
1,299
md
Markdown
i18n/zh/docusaurus-plugin-content-docs/version-v1.3/end-user/components/cloud-services/terraform/gcp-cloudsql.md
StevenLeiZhang/kubevela.io
5eb6843a9004025d86bdafe67716fad41af91b84
[ "Apache-2.0" ]
null
null
null
i18n/zh/docusaurus-plugin-content-docs/version-v1.3/end-user/components/cloud-services/terraform/gcp-cloudsql.md
StevenLeiZhang/kubevela.io
5eb6843a9004025d86bdafe67716fad41af91b84
[ "Apache-2.0" ]
null
null
null
i18n/zh/docusaurus-plugin-content-docs/version-v1.3/end-user/components/cloud-services/terraform/gcp-cloudsql.md
StevenLeiZhang/kubevela.io
5eb6843a9004025d86bdafe67716fad41af91b84
[ "Apache-2.0" ]
null
null
null
---
title: Gcp-Cloudsql
---

## 描述

A module to create a private database setup

## 参数说明

### 属性

名称 | 描述 | 类型 | 是否必须 | 默认值
------------ | ------------- | ------------- | ------------- | -------------
database | A list of objects that describes if any databases are to be created | list(object({\n name = string\n })) | false |
instance | | map(any) | false |
name | The name of the database instance | string | true |
network_name | The name of the VPC to provision this in to | string | true |
project | The name of the GCP project | string | true |
require_ssl | Require SSL connections or not. | bool | false |
users | A list of users that belong to a database instance | list(object({\n name = string\n password = string\n })) | false |
writeConnectionSecretToRef | The secret which the cloud resource connection will be written to | [writeConnectionSecretToRef](#writeConnectionSecretToRef) | false |

#### writeConnectionSecretToRef

名称 | 描述 | 类型 | 是否必须 | 默认值
------------ | ------------- | ------------- | ------------- | -------------
name | The secret name which the cloud resource connection will be written to | string | true |
namespace | The secret namespace which the cloud resource connection will be written to | string | false |
40.59375
167
0.602771
eng_Latn
0.991349
4cec5bcbf112192db81e59719535711547318eb6
8,120
md
Markdown
docs/ssms/ssms-utility.md
L3onard80/sql-docs.it-it
f73e3d20b5b2f15f839ff784096254478c045bbb
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/ssms/ssms-utility.md
L3onard80/sql-docs.it-it
f73e3d20b5b2f15f839ff784096254478c045bbb
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/ssms/ssms-utility.md
L3onard80/sql-docs.it-it
f73e3d20b5b2f15f839ff784096254478c045bbb
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Utilità SSMS ms.prod: sql ms.prod_service: sql-tools ms.technology: ssms ms.topic: conceptual helpviewer_keywords: - SQL Server Management Studio [SQL Server], opening - command prompt utilities [SQL Server], sqlwb - sqlwb utility - Management Studio command line - opening SQL Server Management Studio ms.assetid: aafda520-9e2a-4e1e-b936-1b165f1684e8 author: markingmyname ms.author: maghan ms.reviewer: '' ms.custom: seo-lt-2019 ms.date: 08/07/2019 ms.openlocfilehash: 5a31fb94fad2e063fe9846bd820957abb4ce9b32 ms.sourcegitcommit: ff82f3260ff79ed860a7a58f54ff7f0594851e6b ms.translationtype: HT ms.contentlocale: it-IT ms.lasthandoff: 03/29/2020 ms.locfileid: "75243902" --- # <a name="ssms-utility"></a>Utilità SSMS [!INCLUDE[appliesto-ss-asdb-asdw-pdw-md](../includes/appliesto-ss-asdb-asdw-pdw-md.md)] L'utilità **Ssms** consente di aprire [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)]. Se specificato, **Ssms** anche una connessione a un server e apre query, script, file, progetti e soluzioni. È possibile specificare file che includono query, progetti o soluzioni. I file contenenti query vengono automaticamente connessi a un server se si specificano le informazioni di connessione e il tipo di file è associato al tipo corrispondente di server. Ad esempio, i file SQL aprono una finestra dell'Editor di query SQL in [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)], mentre i file MDX aprono una finestra dell'Editor di query MDX in [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)]. L'apertura di **Soluzioni e progetti di SQL Server** avviene in [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)]. > [!NOTE] > L'utilità **Ssms** non esegue query. Per eseguire query dalla riga di comando, usare l'utilità **sqlcmd** . ## <a name="syntax"></a>Sintassi ``` Ssms [scriptfile] [projectfile] [solutionfile] [-S servername] [-d databasename] [-G] [-U username] [-E] [-nosplash] [-log [filename]?] [-?] ``` ## <a name="arguments"></a>Argomenti *scriptfile* Specifica uno o più file di script da aprire. Il parametro deve includere il percorso completo dei file. *projectfile* Specifica un progetto di script da aprire. Il parametro deve includere il percorso completo del file del progetto script. *solutionfile* Specifica una soluzione da aprire. Il parametro deve includere il percorso completo del file di soluzione. [ **-S** _servername_] Nome del server [ **-d** _databasename_] Nome del database [ **-G**] Connessione con l'autenticazione di Azure Active Directory. Il tipo di connessione dipende dalla presenza di **-U**. > [!Note] > L'opzione **Active Directory - Universale con supporto MFA** non è attualmente supportata. [ **-U** _username_] Nome utente per la connessione con l'autenticazione SQL [ **-E**] Viene stabilita la connessione con l'autenticazione di Windows [ **-nosplash**] Impedisce a [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] di visualizzare la grafica della schermata iniziale durante l'apertura. Utilizzare questa opzione in caso di connessione al computer che esegue [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] per mezzo di Servizi terminal tramite una connessione a larghezza di banda limitata. Questo argomento non supporta la distinzione tra maiuscole e minuscole e può trovarsi prima o dopo altri argomenti. 
[ **-log** _[filename]?_ ] Registra l'attività di [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] nel file specificato per la risoluzione dei problemi [ **-?** ] Visualizza la guida relativa alla riga di comando ## <a name="remarks"></a>Osservazioni Tutte le opzioni sono facoltative e separate da uno spazio, ad eccezione dei file che devono essere separati da virgole. Se non viene specificata alcuna opzione, **Ssms** apre [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] in base alle impostazioni definite in **Opzioni** nel menu **Strumenti** . Se ad esempio l'opzione **All'avvio** della pagina **Ambiente/Generale** specifica **Apri nuova finestra Query**, **Ssms** si apre con un editor di query vuoto. L'opzione **-log** deve trovarsi alla fine della riga di comando, dopo tutte le altre opzioni. L'argomento del nome del file è facoltativo. Se il nome del file è specificato e il file non esiste, il file viene creato. Se non è possibile creare il file, ad esempio a causa dell'accesso in scrittura insufficiente, il log viene invece scritto nella posizione APPDATA non localizzata (vedere di seguito). Se l'argomento del nome del file non viene specificato, i file vengono scritti nella cartella di dati dell'applicazione non localizzata dell'utente corrente. La cartella di dati dell'applicazione non localizzata per SQL Server può essere individuata tramite la variabile di ambiente APPDATA. Ad esempio, per SQL Server 2012, la cartella è \<unità di sistema>:\Users\\<nomeutente\>\AppData\Roaming\Microsoft\AppEnv\10.0\\. Per impostazione predefinita, i due file sono denominati ActivityLog.xml e ActivityLog.xsl. Il primo contiene i dati del log attività, mentre il secondo è un foglio di stile XML che fornisce un modo più comodo per visualizzare il file XML. Usare i passaggi seguenti per visualizzare il file di log nel visualizzatore XML predefinito, ad esempio Internet Explorer: fare clic su Start, scegliere Esegui..., quindi digitare "\<unità di sistema>:\Users\\<nomeutente\>\AppData\Roaming\Microsoft\AppEnv\10.0\ActivityLog.xml" nel campo visualizzato e quindi premere Invio. I file che contengono query richiedono la connessione a un server se si specificano le informazioni di connessione e il tipo di file è associato al tipo corrispondente di server. Ad esempio, i file SQL aprono una finestra dell'Editor di query SQL in [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)], mentre i file MDX aprono una finestra dell'Editor di query MDX in [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)]. L'apertura di **Soluzioni e progetti di SQL Server** avviene in [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)]. Nella tabella seguente viene eseguito il mapping dei tipi di server alle estensioni di file. |Tipo di server|Estensione| |-----------------|---------------| |[!INCLUDE[ssNoVersion](../includes/ssnoversion-md.md)]|sql| |SQL Server Analysis Services|mdx<br /><br /> xmla| ## <a name="examples"></a>Esempi Lo script seguente apre [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] da un prompt dei comandi in base alle impostazioni predefinite. 
``` Ssms ``` Il codice seguente apre [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] da un prompt dei comandi usando *Active Directory - Integrata*: ``` Ssms.exe -S servername.database.windows.net -G ``` Lo script seguente apre [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] da un prompt dei comandi utilizzando l'autenticazione di Windows, con l'editor del codice impostato sul server `ACCTG and the database AdventureWorks2012,` senza visualizzare la schermata iniziale. ``` Ssms -E -S ACCTG -d AdventureWorks2012 -nosplash ``` Lo script seguente apre [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] da un prompt dei comandi e quindi apre lo script MonthEndQuery. ``` Ssms "C:\Documents and Settings\username\My Documents\SQL Server Management Studio Projects\FinanceScripts\FinanceScripts\MonthEndQuery.sql" ``` Lo script seguente apre [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] da un prompt dei comandi e quindi apre il progetto NewReportsProject nel computer denominato `developer`. ``` Ssms "\\developer\fin\ReportProj\ReportProj\NewReportProj.ssmssqlproj" ``` Lo script seguente apre [!INCLUDE[ssManStudioFull](../includes/ssmanstudiofull-md.md)] da un prompt dei comandi e quindi apre la soluzione MonthlyReports. ``` Ssms "C:\solutionsfolder\ReportProj\MonthlyReports.ssmssln" ``` ## <a name="see-also"></a>Vedere anche [Utilizzo di SQL Server Management Studio](https://msdn.microsoft.com/library/f289e978-14ca-46ef-9e61-e1fe5fd593be)
63.937008
1,389
0.766872
ita_Latn
0.990995
4cec632b416469e57d000a21626120cb9982f0b6
188
md
Markdown
docs/ja/data_types/special_data_types/expression.md
sunadm/ClickHouse
55903fbe23ef6dff8fc7ec25ae68e04919bc9b7f
[ "Apache-2.0" ]
7
2021-02-26T04:34:22.000Z
2021-12-31T08:15:47.000Z
docs/ja/data_types/special_data_types/expression.md
sunadm/ClickHouse
55903fbe23ef6dff8fc7ec25ae68e04919bc9b7f
[ "Apache-2.0" ]
1
2019-10-13T16:06:13.000Z
2019-10-13T16:06:13.000Z
docs/ja/data_types/special_data_types/expression.md
sunadm/ClickHouse
55903fbe23ef6dff8fc7ec25ae68e04919bc9b7f
[ "Apache-2.0" ]
3
2020-02-24T12:57:54.000Z
2021-10-04T13:29:00.000Z
# Expression

Used for representing lambda expressions in high-order functions.

[Original article](https://clickhouse.tech/docs/en/data_types/special_data_types/expression/) <!--hide-->
26.857143
105
0.787234
eng_Latn
0.551986
4cecee08f08eb00b705bd55a41d7a562e0f2c5b2
5,117
md
Markdown
articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
SkyChang/azure-docs.zh-tw
549d15e2207fd02b09ecfad7768e6e960957939b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
SkyChang/azure-docs.zh-tw
549d15e2207fd02b09ecfad7768e6e960957939b
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/azure-vmware/vrealize-operations-for-azure-vmware-solution.md
SkyChang/azure-docs.zh-tw
549d15e2207fd02b09ecfad7768e6e960957939b
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 設定 Azure VMware 解決方案的 vRealize 作業 description: 瞭解如何為您的 Azure VMware 解決方案私人雲端設定 vRealize 作業。 ms.topic: how-to ms.date: 09/22/2020 ms.openlocfilehash: 9e512d107ddc4d9bca28323658d09f4b4b378dc3 ms.sourcegitcommit: 829d951d5c90442a38012daaf77e86046018e5b9 ms.translationtype: MT ms.contentlocale: zh-TW ms.lasthandoff: 10/09/2020 ms.locfileid: "91579940" --- # <a name="set-up-vrealize-operations-for-azure-vmware-solution"></a>設定 Azure VMware 解決方案的 vRealize 作業 vRealize Operations Manager 是一種操作管理平臺,可讓 VMware 基礎結構系統管理員監視系統資源。 這些系統資源可以是應用層級或基礎結構層級 (實體和虛擬) 物件。 在過去,大部分的 VMware 系統管理員已使用 vRealize 作業來監視和管理 VMware 私用雲端元件(vCenter、ESXi、NSX-T、vSAN 和混合式雲端擴充 (HCX) )。 每個 Azure VMware 解決方案私人雲端都會布建專用的 vCenter、NSX-T、vSAN 和 HCX 部署。 [在開始之前](#before-you-begin)和[必要條件](#prerequisites)之前,請先仔細檢查。 接著,我們將逐步引導您使用 Azure VMware 解決方案 vRealize Operations Manager 的兩個典型部署拓撲: > [!div class="checklist"] > * [管理 Azure VMware 解決方案部署的內部部署 vRealize 作業](#on-premises-vrealize-operations-managing-azure-vmware-solution-deployment) > * [在 Azure VMware 解決方案部署上執行的 vRealize 作業](#vrealize-operations-running-on-azure-vmware-solution-deployment) ## <a name="before-you-begin"></a>開始之前 * 請參閱 [vRealize Operations Manager 產品檔](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) ,以深入瞭解如何部署 vRealize 作業。 * 檢閱基本 Azure VMware 解決方案軟體定義資料中心 (SDDC) [教學課程系列](tutorial-network-checklist.md)。 * (選擇性)請參閱管理 Azure VMware 解決方案部署選項之內部部署 vRealize 作業的 [VRealize Operations 遠端控制器](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-263F9219-E801-4383-8A59-E84F3D01ED6B.html) 產品檔。 ## <a name="prerequisites"></a>必要條件 * 您應在內部部署與 Azure VMware Solution SDDC 之間設定 VPN 或 Azure ExpressRoute。 * Azure VMware 解決方案私人雲端已部署在 Azure 中。 ## <a name="on-premises-vrealize-operations-managing-azure-vmware-solution-deployment"></a>管理 Azure VMware 解決方案部署的內部部署 vRealize 作業 大部分的客戶都有現有的內部部署 vRealize 作業,可用來管理一或多個內部部署 vCenters 網域。 當客戶在 Azure 中布建新的 Azure VMware 解決方案私人雲端時,它們通常會使用 Azure ExpressRoute 或第3層 VPN 解決方案,將其內部部署環境連線至 Azure VMware 解決方案,如下所示。 :::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-1.png" alt-text="管理 Azure VMware 解決方案部署的內部部署 vRealize 作業" border="false"::: 若要將 vRealize 作業功能延伸至 Azure VMware 解決方案私人雲端,您可以建立 [Azure Vmware 解決方案私用雲端資源](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.config.doc/GUID-640AD750-301E-4D36-8293-1BFEB67E2600.html) 的介面卡實例,以從 Azure vmware 解決方案私人雲端收集資料,並將它帶入內部部署的 vRealize 作業。 內部部署 vRealize Operations Manager 實例可以直接連接到 Azure VMware 解決方案上的 vCenter 和 NSX-T 管理員,或者您可以選擇在 Azure VMware 解決方案私人雲端上部署 vRealize 作業遠端收集器。 VRealize 作業會從 Azure VMware 解決方案私人雲端壓縮和加密收集到的資料,然後再透過 ExpressRoute 或 VPN 網路將其傳送至內部部署的 vRealize Operations Manager。 > [!TIP] > 如需安裝 vRealize Operations Manager 的逐步指南,請參閱 [VMware 檔](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) 。 ## <a name="vrealize-operations-running-on-azure-vmware-solution-deployment"></a>在 Azure VMware 解決方案部署上執行的 vRealize 作業 另一個部署選項是在 Azure VMware Solution 私用雲端中的其中一個 vSphere 叢集上部署 vRealize Operations Manager 的實例,如下所示。 :::image type="content" source="media/vrealize-operations-manager/vrealize-operations-deployment-option-2.png" alt-text="管理 Azure VMware 解決方案部署的內部部署 vRealize 作業" border="false"::: 部署 vRealize 作業的 Azure VMware 解決方案實例之後,您可以設定 vRealize 作業,以收集來自 vCenter、ESXi、NSX-T、vSAN 和 HCX 的資料。 > [!TIP] > 如需安裝 vRealize Operations Manager 的逐步指南,請參閱 [VMware 
檔](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.vapp.doc/GUID-7FFC61A0-7562-465C-A0DC-46D092533984.html) 。 ## <a name="known-limitations"></a>已知限制 - **cloudadmin@vsphere.local**Azure VMware 解決方案中的使用者具有[有限的許可權](concepts-role-based-access-control.md)。 在 Azure VMware 解決方案上) (Vm 的虛擬機器,不支援使用 VMware 工具的內部來賓記憶體收集。 在此情況下,作用中和已耗用的記憶體使用率仍會繼續運作。 - 以主機為基礎之商務意圖的工作負載優化無法運作,因為 Azure VMware 解決方案會管理叢集設定,包括 DRS 設定。 - VRealize Operations Manager 8.0 和更新版本中,已完全支援使用叢集型商務意圖在 SDDC 中放置跨叢集的工作負載優化。 但是,工作負載優化不會察覺資源集區,而是將虛擬機器放置在叢集層級。 使用者可以在 Azure VMware Solution vCenter Server 介面中手動更正此錯誤。 - 您無法使用 Azure VMware 解決方案 vCenter Server 認證來登入 vRealize Operations Manager。 - Azure VMware 解決方案不支援 vRealize Operations Manager 外掛程式。 使用 vCenter Server 雲端帳戶將 Azure VMware 解決方案 vCenter 連接到 vRealize Operations Manager 時,您將會遇到下列警告: :::image type="content" source="./media/vrealize-operations-manager/warning-adapter-instance-creation-succeeded.png" alt-text="管理 Azure VMware 解決方案部署的內部部署 vRealize 作業"::: 因為 **cloudadmin@vsphere.local** Azure VMware 解決方案中的使用者沒有足夠的許可權來執行註冊所需的所有 vCenter Server 動作,所以會發生此警告。 不過,這些許可權足以讓介面卡實例進行資料收集,如下所示: :::image type="content" source="./media/vrealize-operations-manager/adapter-instance-to-perform-data-collection.png" alt-text="管理 Azure VMware 解決方案部署的內部部署 vRealize 作業"::: 如需詳細資訊,請參閱設定 [VCenter Adapter 實例所需的許可權](https://docs.vmware.com/en/vRealize-Operations-Manager/8.1/com.vmware.vcom.core.doc/GUID-3BFFC92A-9902-4CF2-945E-EA453733B426.html)。 <!-- LINKS - external --> <!-- LINKS - internal -->
58.816092
518
0.79656
yue_Hant
0.744198
4cecf57e114d1d2d856950e62e4d456870d62d17
9,894
md
Markdown
vendor/bundle/ruby/2.3.0/gems/fast_gettext-1.1.2/Readme.md
thejonanshow/my-boxen
ba07d2b90a2a086511e2f63bae732ba6413157c0
[ "MIT" ]
null
null
null
vendor/bundle/ruby/2.3.0/gems/fast_gettext-1.1.2/Readme.md
thejonanshow/my-boxen
ba07d2b90a2a086511e2f63bae732ba6413157c0
[ "MIT" ]
null
null
null
vendor/bundle/ruby/2.3.0/gems/fast_gettext-1.1.2/Readme.md
thejonanshow/my-boxen
ba07d2b90a2a086511e2f63bae732ba6413157c0
[ "MIT" ]
null
null
null
FastGettext =========== GetText but 3.5 x faster, 560 x less memory, simple, clean namespace (7 vs 34) and threadsafe! It supports multiple backends (.mo, .po, .yml files, Database(ActiveRecord + any other), Chain, Loggers) and can easily be extended. [Example Rails application](https://github.com/grosser/gettext_i18n_rails_example) Comparison ========== <table> <tr> <td></td> <td width="100">Hash</td> <td width="150">FastGettext</td> <td width="100">GetText</td> <td width="100">ActiveSupport I18n::Simple</td> </tr> <tr> <td>Speed*</td> <td>0.82s</td> <td>1.36s</td> <td>4.88s</td> <td>21.77s</td> </tr> <tr> <td>RAM*</td> <td>4K</td> <td>8K</td> <td>4480K</td> <td>10100K</td> </tr> <tr> <td>Included backends</td> <td></td> <td>db, yml, mo, po, logger, chain</td> <td>mo</td> <td>yml (db/key-value/po/chain in other I18n backends)</td> </tr> </table> <small>*50.000 translations with ruby enterprise 1.8.6 through `rake benchmark`</small> Setup ===== ### 1. Install sudo gem install fast_gettext ### 2. Add a translation repository From mo files (traditional/default) FastGettext.add_text_domain('my_app',:path => 'locale') Or po files (less maintenance than mo) FastGettext.add_text_domain('my_app',:path => 'locale', :type => :po) # :ignore_fuzzy => true to not use fuzzy translations # :report_warning => false to hide warnings about obsolete/fuzzy translations Or yaml files (use I18n syntax/indentation) FastGettext.add_text_domain('my_app', :path => 'config/locales', :type => :yaml) Or database (scaleable, good for many locales/translators) # db access is cached <-> only first lookup hits the db require "fast_gettext/translation_repository/db" FastGettext::TranslationRepository::Db.require_models #load and include default models FastGettext.add_text_domain('my_app', :type => :db, :model => TranslationKey) ### 3. Choose text domain and locale for translation Do this once in every Thread. (e.g. Rails -> ApplicationController) FastGettext.text_domain = 'my_app' FastGettext.available_locales = ['de','en','fr','en_US','en_UK'] # only allow these locales to be set (optional) FastGettext.locale = 'de' ### 4. Start translating include FastGettext::Translation _('Car') == 'Auto' _('not-found') == 'not-found' s_('Namespace|not-found') == 'not-found' n_('Axis','Axis',3) == 'Achsen' #German plural of Axis _('Hello %{name}!') % {:name => "Pete"} == 'Hello Pete!' Managing translations ============ ### mo/po-files Generate .po or .mo files using GetText parser (example tasks at [gettext_i18n_rails](http://github.com/grosser/gettext_i18n_rails)) Tell Gettext where your .mo or .po files lie, e.g. for locale/de/my_app.po and locale/de/LC_MESSAGES/my_app.mo FastGettext.add_text_domain('my_app',:path=>'locale') Use the [original GetText](http://github.com/mutoh/gettext) to create and manage po/mo-files. (Work on a po/mo parser & reader that is easier to use has started, contributions welcome @ [get_pomo](http://github.com/grosser/get_pomo) ) ###Database [Example migration for ActiveRecord](http://github.com/grosser/fast_gettext/blob/master/examples/db/migration.rb)<br/> The default plural seperator is `||||` but you may overwrite it (or suggest a better one..). This is usable with any model DataMapper/Sequel or any other(non-database) backend, the only thing you need to do is respond to the self.translation(key, locale) call. 
If you want to use your own models, have a look at the [default models](http://github.com/grosser/fast_gettext/tree/master/lib/fast_gettext/translation_repository/db_models) to see what you want/need to implement. To manage translations via a Web GUI, use a [Rails application and the translation_db_engine](http://github.com/grosser/translation_db_engine) Rails ======================= Try the [gettext_i18n_rails plugin](http://github.com/grosser/gettext_i18n_rails), it simplifies the setup.<br/> Try the [translation_db_engine](http://github.com/grosser/translation_db_engine), to manage your translations in a db. Setting `available_locales`,`text_domain` or `locale` will not work inside the `evironment.rb`, since it runs in a different thread then e.g. controllers, so set them inside your application_controller. #environment.rb after initializers Object.send(:include,FastGettext::Translation) FastGettext.add_text_domain('accounting',:path=>'locale') FastGettext.add_text_domain('frontend',:path=>'locale') ... #application_controller.rb class ApplicationController ... include FastGettext::Translation before_filter :set_locale def set_locale FastGettext.available_locales = ['de','en',...] FastGettext.text_domain = 'frontend' FastGettext.set_locale(params[:locale] || session[:locale] || request.env['HTTP_ACCEPT_LANGUAGE']) session[:locale] = I18n.locale = FastGettext.locale end Advanced features ================= ### Abnormal pluralisation Plurals are selected by index, think of it as `['car', 'cars'][index]`<br/> A pluralisation rule decides which form to use e.g. in english its `count == 1 ? 0 : 1`.<br/> If you have any languages that do not fit this rule, you have to add a custom pluralisation rule. Via Ruby: FastGettext.pluralisation_rule = lambda{|count| count > 5 ? 1 : (count > 2 ? 0 : 2)} Via mo/pofile: Plural-Forms: nplurals=2; plural=n==2?3:4; [Plural expressions for all languages](http://translate.sourceforge.net/wiki/l10n/pluralforms). ###default_text_domain If you only use one text domain, setting `FastGettext.default_text_domain = 'app'` is sufficient and no more `text_domain=` is needed ###default_locale If the simple rule of "first `availble_locale` or 'en'" is not suficcient for you, set `FastGettext.default_locale = 'de'`. ###default_available_locales Fallback when no available_locales are set ###Chains You can use any number of repositories to find a translation. Simply add them to a chain and when the first cannot translate a given key, the next is asked and so forth. repos = [ FastGettext::TranslationRepository.build('new', :path=>'....'), FastGettext::TranslationRepository.build('old', :path=>'....') ] FastGettext.add_text_domain 'combined', :type=>:chain, :chain=>repos ###Logger When you want to know which keys could not be translated or were used, add a Logger to a Chain: repos = [ FastGettext::TranslationRepository.build('app', :path=>'....') FastGettext::TranslationRepository.build('logger', :type=>:logger, :callback=>lambda{|key_or_array_of_ids| ... }), } FastGettext.add_text_domain 'combined', :type=>:chain, :chain=>repos If the Logger is in position #1 it will see all translations, if it is in position #2 it will only see the unfound. Unfound may not always mean missing, if you choose not to translate a word because the key is a good translation, it will appear nevertheless. A lambda or anything that responds to `call` will do as callback. A good starting point may be `examples/missing_translations_logger.rb`. ###Plugins Want a xml version ? 
Write your own TranslationRepository! #fast_gettext/translation_repository/xxx.rb module FastGettext module TranslationRepository class Wtf define initialize(name,options), [key], plural(*keys) and either inherit from TranslationRepository::Base or define available_locales and pluralisation_rule end end end ###Multi domain support If you have more than one gettext domain, there are two sets of functions available: include FastGettext::TranslationMultidomain d_("domainname", "string") # finds 'string' in domain domainname dn_("domainname", "string", "strings", 1) # ditto # etc. These are helper methods so you don't need to write: FastGettext.text_domain = "domainname" _("string") It is useful in Rails plugins in the views for example. The second set of functions are D functions which search for string in _all_ domains. If there are multiple translations in different domains, it returns them in random order (depends on the Ruby hash implementation): include FastGettext::TranslationMultidomain D_("string") # finds 'string' in any domain # etc. FAQ === - [Problems with ActiveRecord messages?](http://wiki.github.com/grosser/fast_gettext/activerecord) - [Iconv require error in 1.9.2](http://exceptionz.wordpress.com/2010/02/03/how-to-fix-the-iconv-require-error-in-ruby-1-9) TODO ==== - Add a fallback for Iconv.conv in ruby 1.9.4 -> lib/fast_gettext/vendor/iconv - YML backend that reads ActiveSupport::I18n files Author ====== Mo/Po-file parsing from Masao Mutoh, see vendor/README ### [Contributors](http://github.com/grosser/fast_gettext/contributors) - [geekq](http://www.innoq.com/blog/vd) - [Matt Sanford](http://blog.mzsanford.com) - [Antonio Terceiro](http://softwarelivre.org/terceiro) - [J. Pablo Fernández](http://pupeno.com) - Rudolf Gavlas - [Ramón Cahenzli](http://www.psy-q.ch) - [Rainux Luo](http://rainux.org) - [Dmitry Borodaenko](https://github.com/angdraug) - [Kouhei Sutou](https://github.com/kou) - [Hoang Nghiem](https://github.com/hoangnghiem) - [Costa Shapiro](https://github.com/costa) - [Jamie Dyer](https://github.com/kernow) - [Stephan Kulow](https://github.com/coolo) - [Fotos Georgiadis](https://github.com/fotos) - [Lukáš Zapletal](https://github.com/lzap) - [Dominic Cleal](https://github.com/domcleal) [Michael Grosser](http://grosser.it)<br/> michael@grosser.it<br/> License: MIT, some vendor parts under the same license terms as Ruby (see headers)<br/> [![Build Status](https://travis-ci.org/grosser/fast_gettext.png)](https://travis-ci.org/grosser/fast_gettext)
37.619772
213
0.710026
eng_Latn
0.761482
4ced28123286c215e92ca81c75edfb2c9339794c
11,591
md
Markdown
includes/resource-graph/samples/bycat/networking.md
R0bes/azure-docs.de-de
24540ed5abf9dd081738288512d1525093dd2938
[ "CC-BY-4.0", "MIT" ]
63
2017-08-28T07:43:47.000Z
2022-02-24T03:04:04.000Z
includes/resource-graph/samples/bycat/networking.md
R0bes/azure-docs.de-de
24540ed5abf9dd081738288512d1525093dd2938
[ "CC-BY-4.0", "MIT" ]
704
2017-08-04T09:45:07.000Z
2021-12-03T05:49:08.000Z
includes/resource-graph/samples/bycat/networking.md
R0bes/azure-docs.de-de
24540ed5abf9dd081738288512d1525093dd2938
[ "CC-BY-4.0", "MIT" ]
178
2017-07-05T10:56:47.000Z
2022-03-18T12:25:19.000Z
--- author: DCtheGeek ms.service: resource-graph ms.topic: include ms.date: 09/03/2021 ms.author: dacoulte ms.custom: generated ms.openlocfilehash: f309bd9e75abdaace892273c0ef74e79cad80c80 ms.sourcegitcommit: f2d0e1e91a6c345858d3c21b387b15e3b1fa8b4c ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 09/07/2021 ms.locfileid: "123535154" --- ### <a name="count-resources-that-have-ip-addresses-configured-by-subscription"></a>Zählen von Ressourcen, für die IP-Adressen konfiguriert sind (nach Abonnement) Mithilfe der Beispielabfrage „Alle öffentlichen IP-Adressen auflisten“ und Hinzufügen von `summarize` und `count()` können wir eine Liste nach Abonnement von Ressourcen mit konfigurierten IP-Adressen abrufen. ```kusto Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | summarize count () by subscriptionId ``` # <a name="azure-cli"></a>[Azure-Befehlszeilenschnittstelle](#tab/azure-cli) ```azurecli-interactive az graph query -q "Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | summarize count () by subscriptionId" ``` # <a name="azure-powershell"></a>[Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive Search-AzGraph -Query "Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | summarize count () by subscriptionId" ``` # <a name="portal"></a>[Portal](#tab/azure-portal) :::image type="icon" source="../../../../articles/governance/resource-graph/media/resource-graph-small.png":::Probieren Sie im Azure Resource Graph-Explorer die folgende Abfrage aus: - Azure-Portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty(properties.ipAddress)%0a%7c%20summarize%20count%20()%20by%20subscriptionId" target="_blank">portal.azure.com</a> - Azure Government-Portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty(properties.ipAddress)%0a%7c%20summarize%20count%20()%20by%20subscriptionId" target="_blank">portal.azure.us</a> - Azure China 21Vianet-Portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty(properties.ipAddress)%0a%7c%20summarize%20count%20()%20by%20subscriptionId" target="_blank">portal.azure.cn</a> --- ### <a name="get-virtual-networks-and-subnets-of-network-interfaces"></a>Abrufen von virtuellen Netzwerken und Subnetzen von Netzwerkschnittstellen Verwenden Sie einen regulären `parse`-Ausdruck, um die Namen des virtuellen Netzwerks und des Subnetzes aus der Ressourcen-ID-Eigenschaft zu erhalten. Während `parse` das Abrufen von Daten aus einem komplexen Feld ermöglicht, ist es besser, direkt auf die vorhandenen Eigenschaften zuzugreifen, anstatt `parse` zu verwenden. 
```kusto Resources | where type =~ 'microsoft.network/networkinterfaces' | project id, ipConfigurations = properties.ipConfigurations | mvexpand ipConfigurations | project id, subnetId = tostring(ipConfigurations.properties.subnet.id) | parse kind=regex subnetId with '/virtualNetworks/' virtualNetwork '/subnets/' subnet | project id, virtualNetwork, subnet ``` # <a name="azure-cli"></a>[Azure-Befehlszeilenschnittstelle](#tab/azure-cli) ```azurecli-interactive az graph query -q "Resources | where type =~ 'microsoft.network/networkinterfaces' | project id, ipConfigurations = properties.ipConfigurations | mvexpand ipConfigurations | project id, subnetId = tostring(ipConfigurations.properties.subnet.id) | parse kind=regex subnetId with '/virtualNetworks/' virtualNetwork '/subnets/' subnet | project id, virtualNetwork, subnet" ``` # <a name="azure-powershell"></a>[Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive Search-AzGraph -Query "Resources | where type =~ 'microsoft.network/networkinterfaces' | project id, ipConfigurations = properties.ipConfigurations | mvexpand ipConfigurations | project id, subnetId = tostring(ipConfigurations.properties.subnet.id) | parse kind=regex subnetId with '/virtualNetworks/' virtualNetwork '/subnets/' subnet | project id, virtualNetwork, subnet" ``` # <a name="portal"></a>[Portal](#tab/azure-portal) :::image type="icon" source="../../../../articles/governance/resource-graph/media/resource-graph-small.png":::Probieren Sie im Azure Resource Graph-Explorer die folgende Abfrage aus: - Azure-Portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20%3d%7e%20%27microsoft.network%2fnetworkinterfaces%27%0a%7c%20project%20id%2c%20ipConfigurations%20%3d%20properties.ipConfigurations%0a%7c%20mvexpand%20ipConfigurations%0a%7c%20project%20id%2c%20subnetId%20%3d%20tostring(ipConfigurations.properties.subnet.id)%0a%7c%20parse%20kind%3dregex%20subnetId%20with%20%27%2fvirtualNetworks%2f%27%20virtualNetwork%20%27%2fsubnets%2f%27%20subnet%0a%7c%20project%20id%2c%20virtualNetwork%2c%20subnet" target="_blank">portal.azure.com</a> - Azure Government-Portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20%3d%7e%20%27microsoft.network%2fnetworkinterfaces%27%0a%7c%20project%20id%2c%20ipConfigurations%20%3d%20properties.ipConfigurations%0a%7c%20mvexpand%20ipConfigurations%0a%7c%20project%20id%2c%20subnetId%20%3d%20tostring(ipConfigurations.properties.subnet.id)%0a%7c%20parse%20kind%3dregex%20subnetId%20with%20%27%2fvirtualNetworks%2f%27%20virtualNetwork%20%27%2fsubnets%2f%27%20subnet%0a%7c%20project%20id%2c%20virtualNetwork%2c%20subnet" target="_blank">portal.azure.us</a> - Azure China 21Vianet-Portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20%3d%7e%20%27microsoft.network%2fnetworkinterfaces%27%0a%7c%20project%20id%2c%20ipConfigurations%20%3d%20properties.ipConfigurations%0a%7c%20mvexpand%20ipConfigurations%0a%7c%20project%20id%2c%20subnetId%20%3d%20tostring(ipConfigurations.properties.subnet.id)%0a%7c%20parse%20kind%3dregex%20subnetId%20with%20%27%2fvirtualNetworks%2f%27%20virtualNetwork%20%27%2fsubnets%2f%27%20subnet%0a%7c%20project%20id%2c%20virtualNetwork%2c%20subnet" target="_blank">portal.azure.cn</a> --- ### <a name="list-all-public-ip-addresses"></a>Auflisten aller öffentlichen IP-Adressen 
Ähnlich wie bei der Abfrage „Anzeigen von Ressourcen mit Speicher“ wird jeder Typ mit dem Wort **publicIPAddresses** gefunden. Diese Abfrage baut auf diesem Muster auf, um nur Ergebnisse mit **properties.ipAddress** `isnotempty` einzuschließen, um nur die **properties.ipAddress** zurückzugeben und die Ergebnisse auf die obersten 100 zu begrenzen (`limit`). Je nach Ihrer ausgewählten Shell müssen Sie Anführungszeichen möglicherweise mit Escapezeichen versehen. ```kusto Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | project properties.ipAddress | limit 100 ``` # <a name="azure-cli"></a>[Azure-Befehlszeilenschnittstelle](#tab/azure-cli) ```azurecli-interactive az graph query -q "Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | project properties.ipAddress | limit 100" ``` # <a name="azure-powershell"></a>[Azure PowerShell](#tab/azure-powershell) ```azurepowershell-interactive Search-AzGraph -Query "Resources | where type contains 'publicIPAddresses' and isnotempty(properties.ipAddress) | project properties.ipAddress | limit 100" ``` # <a name="portal"></a>[Portal](#tab/azure-portal) :::image type="icon" source="../../../../articles/governance/resource-graph/media/resource-graph-small.png":::Probieren Sie im Azure Resource Graph-Explorer die folgende Abfrage aus: - Azure-Portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty(properties.ipAddress)%0a%7c%20project%20properties.ipAddress%0a%7c%20limit%20100" target="_blank">portal.azure.com</a> - Azure Government-Portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty(properties.ipAddress)%0a%7c%20project%20properties.ipAddress%0a%7c%20limit%20100" target="_blank">portal.azure.us</a> - Azure China 21Vianet-Portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20contains%20%27publicIPAddresses%27%20and%20isnotempty(properties.ipAddress)%0a%7c%20project%20properties.ipAddress%0a%7c%20limit%20100" target="_blank">portal.azure.cn</a> --- ### <a name="show-unassociated-network-security-groups"></a>Anzeigen nicht zugeordneter Netzwerksicherheitsgruppen Diese Abfrage gibt Netzwerksicherheitsgruppen (Network Security Groups, NSGs) zurück, die keiner Netzwerkschnittstelle und keinem Subnetz zugeordnet sind. 
```kusto
Resources
| where type =~ 'microsoft.network/networksecuritygroups' and isnull(properties.networkInterfaces) and isnull(properties.subnets)
| project name, resourceGroup
| sort by name asc
```

# <a name="azure-cli"></a>[Azure CLI](#tab/azure-cli)

```azurecli-interactive
az graph query -q "Resources | where type =~ 'microsoft.network/networksecuritygroups' and isnull(properties.networkInterfaces) and isnull(properties.subnets) | project name, resourceGroup | sort by name asc"
```

# <a name="azure-powershell"></a>[Azure PowerShell](#tab/azure-powershell)

```azurepowershell-interactive
Search-AzGraph -Query "Resources | where type =~ 'microsoft.network/networksecuritygroups' and isnull(properties.networkInterfaces) and isnull(properties.subnets) | project name, resourceGroup | sort by name asc"
```

# <a name="portal"></a>[Portal](#tab/azure-portal)

:::image type="icon" source="../../../../articles/governance/resource-graph/media/resource-graph-small.png"::: Try this query in Azure Resource Graph Explorer:

- Azure portal: <a href="https://portal.azure.com/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20%3d%7e%20%27microsoft.network%2fnetworksecuritygroups%27%20and%20isnull(properties.networkInterfaces)%20and%20isnull(properties.subnets)%0a%7c%20project%20name%2c%20resourceGroup%0a%7c%20sort%20by%20name%20asc" target="_blank">portal.azure.com</a>
- Azure Government portal: <a href="https://portal.azure.us/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20%3d%7e%20%27microsoft.network%2fnetworksecuritygroups%27%20and%20isnull(properties.networkInterfaces)%20and%20isnull(properties.subnets)%0a%7c%20project%20name%2c%20resourceGroup%0a%7c%20sort%20by%20name%20asc" target="_blank">portal.azure.us</a>
- Azure China 21Vianet portal: <a href="https://portal.azure.cn/?feature.customportal=false#blade/HubsExtension/ArgQueryBlade/query/Resources%0a%7c%20where%20type%20%3d%7e%20%27microsoft.network%2fnetworksecuritygroups%27%20and%20isnull(properties.networkInterfaces)%20and%20isnull(properties.subnets)%0a%7c%20project%20name%2c%20resourceGroup%0a%7c%20sort%20by%20name%20asc" target="_blank">portal.azure.cn</a>

---
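
The same unassociated-NSG query can also be run programmatically. The sketch below is not part of the original walkthrough; it assumes the `azure-identity` and `azure-mgmt-resourcegraph` Python packages, and the client and model signatures should be verified against the SDK reference before use.

```python
# Minimal sketch (assumed SDK names): run the unassociated-NSG query via the
# Azure Resource Graph Python SDK and print one line per matching NSG.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

QUERY = """
Resources
| where type =~ 'microsoft.network/networksecuritygroups'
    and isnull(properties.networkInterfaces) and isnull(properties.subnets)
| project name, resourceGroup
| sort by name asc
"""

credential = DefaultAzureCredential()      # picks up CLI, environment, or managed-identity auth
client = ResourceGraphClient(credential)

# Scope the query to one or more subscriptions; the ID below is a placeholder.
request = QueryRequest(subscriptions=["<subscription-id>"], query=QUERY)
response = client.resources(request)

# With the default result format each row is a dict of the projected columns;
# older SDK versions may instead return a table-shaped object.
for row in response.data:
    print(row["name"], row["resourceGroup"])
```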
77.791946
638
0.799068
kor_Hang
0.196927
4cede7782536fc95167761bd0f6b74ce824f8c1d
211
md
Markdown
doc/dplypy/index.md
ccharp/dplyPY
681af27b6ca595bec88e1a9b98ea90c9ac848f1b
[ "MIT" ]
1
2022-03-10T17:41:25.000Z
2022-03-10T17:41:25.000Z
doc/dplypy/index.md
ccharp/dplyPY
681af27b6ca595bec88e1a9b98ea90c9ac848f1b
[ "MIT" ]
44
2022-02-18T19:19:06.000Z
2022-03-16T01:09:46.000Z
doc/dplypy/index.md
ccharp/dplyPY
681af27b6ca595bec88e1a9b98ea90c9ac848f1b
[ "MIT" ]
null
null
null
Package dplypy
==============

`DplyFrame` is the data carrier between `pipeline` methods (which are documented [here](pipeline.md)).

Sub-modules
-----------
* [DplyFrame](dplyframe.md)
* [Pipeline](pipeline.md)
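
A hypothetical usage sketch is shown below. It is not taken from the package documentation: the import paths, the `query`/`drop` helpers, the `+` pipe operator, and the `pandas_df` accessor are all assumptions made for illustration only; see the DplyFrame and Pipeline pages for the actual API.

```python
# Assumed API: a DplyFrame wraps a pandas DataFrame and is handed from one
# pipeline step to the next via the `+` operator.
import pandas as pd
from dplypy.dplyframe import DplyFrame        # assumed import path
from dplypy.pipeline import query, drop       # assumed pipeline helpers

raw = pd.DataFrame({"x": [1, -2, 3], "y": ["a", "b", "c"]})

df = DplyFrame(raw)
result = df + query("x > 0") + drop("y")      # each step receives and returns a DplyFrame
print(result.pandas_df)                       # assumed accessor for the underlying DataFrame
```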
23.444444
102
0.663507
eng_Latn
0.910424
4cee1a6ec9964d7a0af1e150290baa657d0340d7
611
md
Markdown
_posts/review/TIL(Today I Learned)/2020/09-september/2020-09-14-200914TIL.md
Yadon079/yadon079.github.io
3c2387ab60ed3964c8e1671f05b47fcc7981cb43
[ "MIT" ]
2
2020-09-01T05:41:51.000Z
2021-07-30T04:37:42.000Z
_posts/review/TIL(Today I Learned)/2020/09-september/2020-09-14-200914TIL.md
Yadon079/yadon079.github.io
3c2387ab60ed3964c8e1671f05b47fcc7981cb43
[ "MIT" ]
null
null
null
_posts/review/TIL(Today I Learned)/2020/09-september/2020-09-14-200914TIL.md
Yadon079/yadon079.github.io
3c2387ab60ed3964c8e1671f05b47fcc7981cb43
[ "MIT" ]
6
2021-01-31T03:32:07.000Z
2021-08-13T14:01:19.000Z
---
layout: post
date: 2020-09-14 23:59:00
title: "Looking back on September 14, 2020"
description: "Today I Learned"
subject: til
category: [ til ]
tags: [ TIL, What I studied today ]
comments: true
---

## What I did today

I started organizing the chapter on arrays. All of this knowledge matters, but arrays are especially important because they come up so often in algorithms. Right now I'm studying arrays as part of learning Java, but for solving algorithm problems, data structures feel more urgent than the fundamentals. The fundamentals are what you need to keep growing as a developer over the long term, while in the short term, landing a job calls for algorithm-focused study. Most companies (especially the large ones) run a coding test before the interview, so to get past the coding test you have to study algorithms first. Of course, anyone skilled enough to pass a coding test probably already has sufficient knowledge, but as a kind of shortcut it seems better to study algorithms first. While working through algorithms, my plan is to study and write down anything I don't understand as it comes up.
32.157895
232
0.708674
kor_Hang
1.00001