---
title: Data considerations for microservices
description: Data considerations for microservices.
author: MikeWasson
ms.date: 02/25/2019
ms.topic: guide
ms.service: architecture-center
ms.subservice: reference-architecture
ms.custom: microservices
---

# Data considerations for microservices

This article describes considerations for managing data in a microservices architecture. Because every microservice manages its own data, data integrity and data consistency are critical challenges.

A basic principle of microservices is that each service manages its own data. Two services should not share a data store. Instead, each service is responsible for its own private data store, which other services cannot access directly.

The reason for this rule is to avoid unintentional coupling between services, which can result if services share the same underlying data schemas. If there is a change to the data schema, the change must be coordinated across every service that relies on that database. By isolating each service's data store, we can limit the scope of change and preserve the agility of truly independent deployments. Another reason is that each microservice may have its own data models, queries, or read/write patterns. Using a shared data store limits each team's ability to optimize data storage for their particular service.

![Diagram of a wrong approach to CQRS](../../guide/architecture-styles/images/cqrs-microservices-wrong.png)

This approach naturally leads to [polyglot persistence](https://martinfowler.com/bliki/PolyglotPersistence.html) — the use of multiple data storage technologies within a single application. One service might require the schema-on-read capabilities of a document database. Another might need the referential integrity provided by an RDBMS. Each team is free to make the best choice for their service.
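The data-ownership rule above can be sketched in a few lines. This is a hypothetical illustration (the service and field names are invented, not taken from any reference implementation): each service keeps a private store, and other services reach the data only through the owning service's public API.

```javascript
// Sketch of "each service owns its data": the store is a private field,
// so nothing outside CustomerService can read or write it directly.
class CustomerService {
  #store = new Map(); // private data store; no shared tables

  upsert(id, customer) {
    this.#store.set(id, customer);
  }

  // Public API: the only way other services see customer data,
  // and it exposes only the fields the caller actually needs.
  getShippingInfo(id) {
    const c = this.#store.get(id);
    return c ? { id, name: c.name } : null;
  }
}

// ShippingService depends on the API, never on CustomerService's schema.
class ShippingService {
  constructor(customerApi) {
    this.customerApi = customerApi;
  }

  labelFor(customerId) {
    const info = this.customerApi.getShippingInfo(customerId);
    return info ? `Deliver to ${info.name}` : 'unknown customer';
  }
}
```

Because ShippingService sees only what `getShippingInfo` returns, CustomerService can change its internal schema (or swap its store entirely) without coordinating the change with other teams.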
For more about the general principle of polyglot persistence, see [Use the best data store for the job](../../guide/design-principles/use-the-best-data-store.md).

> [!NOTE]
> It's fine for services to share the same physical database server. The problem occurs when services share the same schema, or read and write to the same set of database tables.

## Challenges

Some challenges arise from this distributed approach to managing data. First, there may be redundancy across the data stores, with the same item of data appearing in multiple places. For example, data might be stored as part of a transaction, then stored elsewhere for analytics, reporting, or archiving. Duplicated or partitioned data can lead to issues of data integrity and consistency.

When data relationships span multiple services, you can't use traditional data management techniques to enforce the relationships. Traditional data modeling uses the rule of "one fact in one place." Every entity appears exactly once in the schema. Other entities may hold references to it but not duplicate it. The obvious advantage of the traditional approach is that updates are made in a single place, which avoids problems with data consistency. In a microservices architecture, you have to consider how updates are propagated across services, and how to manage eventual consistency when data appears in multiple places without strong consistency.

## Approaches to managing data

There is no single approach that's correct in all cases, but here are some general guidelines for managing data in a microservices architecture.

- Embrace eventual consistency where possible. Understand the places in the system where you need strong consistency or ACID transactions, and the places where eventual consistency is acceptable.

- When you need strong consistency guarantees, one service may represent the source of truth for a given entity, which is exposed through an API.
  Other services might hold their own copy of the data, or a subset of the data, that is eventually consistent with the master data but not considered the source of truth. For example, imagine an e-commerce system with a customer order service and a recommendation service. The recommendation service might listen to events from the order service, but if a customer requests a refund, it is the order service, not the recommendation service, that has the complete transaction history.

- For transactions, use patterns such as [Scheduler Agent Supervisor](../../patterns/scheduler-agent-supervisor.md) and [Compensating Transaction](../../patterns/compensating-transaction.md) to keep data consistent across several services. You may need to store an additional piece of data that captures the state of a unit of work that spans multiple services, to avoid partial failure among multiple services. For example, keep a work item on a durable queue while a multi-step transaction is in progress.

- Store only the data that a service needs. A service might only need a subset of information about a domain entity. For example, in the Shipping bounded context, we need to know which customer is associated with a particular delivery. But we don't need the customer's billing address — that's managed by the Accounts bounded context. Thinking carefully about the domain, and using a DDD approach, can help here.

- Consider whether your services are coherent and loosely coupled. If two services are continually exchanging information with each other, resulting in chatty APIs, you may need to redraw your service boundaries, by merging two services or refactoring their functionality.

- Use an [event driven architecture style](../../guide/architecture-styles/event-driven.md). In this architecture style, a service publishes an event when there are changes to its public models or entities. Interested services can subscribe to these events.
  For example, another service could use the events to construct a materialized view of the data that is more suitable for querying.

- A service that owns events should publish a schema that can be used to automate serializing and deserializing the events, to avoid tight coupling between publishers and subscribers. Consider JSON schema or a framework like [Microsoft Bond](https://github.com/Microsoft/bond), Protobuf, or Avro.

- At high scale, events can become a bottleneck on the system, so consider using aggregation or batching to reduce the total load.

## Example: Choosing data stores for the Drone Delivery application

The previous articles in this series discuss a drone delivery service as a running example. You can read more about the scenario and the corresponding reference implementation [here](./index.md).

To recap, this application defines several microservices for scheduling deliveries by drone. When a user schedules a new delivery, the client request includes information about the delivery, such as pickup and dropoff locations, and about the package, such as size and weight. This information defines a unit of work.

The various backend services care about different portions of the information in the request, and also have different read and write profiles.

![Diagram of data considerations](../images/data-considerations.png)

### Delivery service

The Delivery service stores information about every delivery that is currently scheduled or in progress. It listens for events from the drones, and tracks the status of deliveries that are in progress. It also sends domain events with delivery status updates.

It's expected that users will frequently check the status of a delivery while they are waiting for their package. Therefore, the Delivery service requires a data store that emphasizes throughput (read and write) over long-term storage.
Also, the Delivery service does not perform any complex queries or analysis; it simply fetches the latest status for a given delivery. The Delivery service team chose Azure Redis Cache for its high read-write performance. The information stored in Redis is relatively short-lived. Once a delivery is complete, the Delivery History service is the system of record.

### Delivery History service

The Delivery History service listens for delivery status events from the Delivery service. It stores this data in long-term storage. There are two different use cases for this historical data, which have different data storage requirements.

The first scenario is aggregating the data for the purpose of data analytics, in order to optimize the business or improve the quality of the service. Note that the Delivery History service doesn't perform the actual analysis of the data. It's only responsible for the ingestion and storage. For this scenario, the storage must be optimized for data analysis over a large set of data, using a schema-on-read approach to accommodate a variety of data sources. [Azure Data Lake Store](/azure/data-lake-store/) is a good fit for this scenario. Data Lake Store is an Apache Hadoop file system compatible with Hadoop Distributed File System (HDFS), and is tuned for performance for data analytics scenarios.

The other scenario is enabling users to look up the history of a delivery after the delivery is completed. Azure Data Lake is not particularly optimized for this scenario. For optimal performance, Microsoft recommends storing time-series data in Data Lake in folders partitioned by date. (See [Tuning Azure Data Lake Store for performance](/azure/data-lake-store/data-lake-store-performance-tuning-guidance).) However, that structure is not optimal for looking up individual records by ID. Unless you also know the timestamp, a lookup by ID requires scanning the entire collection.
Therefore, the Delivery History service also stores a subset of the historical data in Cosmos DB for quicker lookup. The records don't need to stay in Cosmos DB indefinitely. Older deliveries can be archived — say, after a month. This could be done by running an occasional batch process.

### Package service

The Package service stores information about all of the packages. The storage requirements for the Package service are:

- Long-term storage.
- Able to handle a high volume of packages, requiring high write throughput.
- Support simple queries by package ID. No complex joins or requirements for referential integrity.

Because the package data is not relational, a document-oriented database is appropriate, and Cosmos DB can achieve very high throughput by using sharded collections. The team that works on the Package service is familiar with the MEAN stack (MongoDB, Express.js, AngularJS, and Node.js), so they select the [MongoDB API](/azure/cosmos-db/mongodb-introduction) for Cosmos DB. That lets them leverage their existing experience with MongoDB, while getting the benefits of Cosmos DB, which is a managed Azure service.
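The event-driven guidance in this article can be sketched as follows. This is a hypothetical illustration, not the reference implementation: the event fields, the inline "schema," and the handler names are assumptions. A subscriber validates incoming delivery-status events against a published contract and maintains a materialized view holding the latest status per delivery, which is exactly the shape of query the Delivery service needs to answer.

```javascript
// A minimal published "schema": required fields and their expected types.
// In practice this would be JSON Schema, Avro, or Protobuf, as the
// article suggests, so publishers and subscribers stay decoupled.
const deliveryEventSchema = {
  deliveryId: 'string',
  status: 'string',
  timestamp: 'string', // ISO 8601, so string comparison orders correctly
};

function validateEvent(event) {
  return Object.entries(deliveryEventSchema).every(
    ([field, type]) => typeof event[field] === type
  );
}

// Materialized view: latest status keyed by delivery ID, suitable for
// the "check my delivery status" read path.
const latestStatus = new Map();

function onDeliveryEvent(event) {
  if (!validateEvent(event)) {
    return false; // reject events that break the published contract
  }
  const current = latestStatus.get(event.deliveryId);
  // Keep only the newest event per delivery, tolerating out-of-order arrival.
  if (!current || current.timestamp <= event.timestamp) {
    latestStatus.set(event.deliveryId, event);
  }
  return true;
}
```

A real subscriber would receive these events from a message broker and persist the view in a store like Redis rather than an in-memory map, but the validate-then-project flow is the same.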
# hanwha practice git process
---
title: CLUSTER_INFO
summary: Learn the `CLUSTER_INFO` cluster topology information table.
aliases: ['/docs/dev/system-tables/system-table-cluster-info/','/docs/dev/reference/system-databases/cluster-info/','/tidb/dev/system-table-cluster-info/']
---

# CLUSTER_INFO

The `CLUSTER_INFO` cluster topology table provides the current topology information of the cluster, the version information of each instance, the Git Hash corresponding to the instance version, the starting time of each instance, and the running time of each instance.

{{< copyable "sql" >}}

```sql
USE information_schema;
DESC cluster_info;
```

```sql
+----------------+-------------+------+------+---------+-------+
| Field          | Type        | Null | Key  | Default | Extra |
+----------------+-------------+------+------+---------+-------+
| TYPE           | varchar(64) | YES  |      | NULL    |       |
| INSTANCE       | varchar(64) | YES  |      | NULL    |       |
| STATUS_ADDRESS | varchar(64) | YES  |      | NULL    |       |
| VERSION        | varchar(64) | YES  |      | NULL    |       |
| GIT_HASH       | varchar(64) | YES  |      | NULL    |       |
| START_TIME     | varchar(32) | YES  |      | NULL    |       |
| UPTIME         | varchar(32) | YES  |      | NULL    |       |
+----------------+-------------+------+------+---------+-------+
7 rows in set (0.00 sec)
```

Field description:

* `TYPE`: The instance type. The optional values are `tidb`, `pd`, and `tikv`.
* `INSTANCE`: The instance address, which is a string in the format of `IP:PORT`.
* `STATUS_ADDRESS`: The service address of the HTTP API. Some commands in tikv-ctl, pd-ctl, or tidb-ctl might use this API and this address. You can also get more cluster information via this address. Refer to the [TiDB HTTP API document](https://github.com/pingcap/tidb/blob/master/docs/tidb_http_api.md) for details.
* `VERSION`: The semantic version number of the corresponding instance. To be compatible with the MySQL version number, the TiDB version is displayed in the format of `${mysql-version}-${tidb-version}`.
* `GIT_HASH`: The Git Commit Hash when compiling the instance version, which is used to identify whether two instances are of the absolutely consistent version.
* `START_TIME`: The starting time of the corresponding instance.
* `UPTIME`: The uptime of the corresponding instance.

{{< copyable "sql" >}}

```sql
SELECT * FROM cluster_info;
```

```sql
+------+-----------------+-----------------+--------------+------------------------------------------+---------------------------+---------------------+
| TYPE | INSTANCE        | STATUS_ADDRESS  | VERSION      | GIT_HASH                                 | START_TIME                | UPTIME              |
+------+-----------------+-----------------+--------------+------------------------------------------+---------------------------+---------------------+
| tidb | 0.0.0.0:4000    | 0.0.0.0:10080   | 4.0.0-beta.2 | 0df3b74f55f8f8fbde39bbd5d471783f49dc10f7 | 2020-07-05T09:25:53-06:00 | 26h39m4.352862693s  |
| pd   | 127.0.0.1:2379  | 127.0.0.1:2379  | 4.1.0-alpha  | 1ad59bcbf36d87082c79a1fffa3b0895234ac862 | 2020-07-05T09:25:47-06:00 | 26h39m10.352868103s |
| tikv | 127.0.0.1:20165 | 127.0.0.1:20180 | 4.1.0-alpha  | b45e052df8fb5d66aa8b3a77b5c992ddbfbb79df | 2020-07-05T09:25:50-06:00 | 26h39m7.352869963s  |
+------+-----------------+-----------------+--------------+------------------------------------------+---------------------------+---------------------+
3 rows in set (0.00 sec)
```
# googleapprating-prediction

A data analytics project to predict app ratings.

The problem is to identify the apps that are going to be good for Google to promote. App ratings, which are provided by the customers, are always a great indicator of the goodness of the app. The problem reduces to: predict which apps will have high ratings.

Fields in the data:

- App: Application name
- Category: Category to which the app belongs
- Rating: Overall user rating of the app
- Reviews: Number of user reviews for the app
- Size: Size of the app
- Installs: Number of user downloads/installs for the app
- Type: Paid or Free
- Price: Price of the app
- Content Rating: Age group the app is targeted at - Children / Mature 21+ / Adult
- Genres: An app can belong to multiple genres (apart from its main category). For example, a musical family game will belong to the Music, Game, and Family genres.
- Last Updated: Date when the app was last updated on Play Store
- Current Ver: Current version of the app available on Play Store
- Android Ver: Minimum required Android version
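The framing above ("predict which apps will have high ratings") can be sketched as a simple labeling step over records with the listed fields. This is a hypothetical illustration only: the 4.0 threshold and the sample records are assumptions, not specified by the project.

```javascript
// Hypothetical threshold for what counts as a "high" rating.
const HIGH_RATING_THRESHOLD = 4.0;

// Turn the regression-style Rating field into a binary target,
// matching the "predict which apps will have high ratings" framing.
function labelHighRating(app) {
  return { ...app, highRating: app.Rating >= HIGH_RATING_THRESHOLD };
}

// Invented sample records using the field names described above.
const sample = [
  { App: 'Example Game', Category: 'GAME', Rating: 4.5, Type: 'Free' },
  { App: 'Example Tool', Category: 'TOOLS', Rating: 3.2, Type: 'Paid' },
];

const labeled = sample.map(labelHighRating);
```

From here, the remaining fields (Reviews, Installs, Category, and so on) would serve as features for whatever classifier the analysis uses.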
---
name: Planned Feature
about: Planned Feature
title: ''
labels: enhancement
assignees: ''
---

<!-- PLEASE FILL OUT THE FOLLOWING MARKDOWN TEMPLATE -->
<!-- Give your PR a title that is sufficient to understand what is being changed. -->

## Description

<!-- Describe the change you made, the motivation for it, and the impact it will have. Reference issues or pull requests where possible (use '#XX' or 'GH-XX' where XX is the issue or pull request number). -->

## Consensus Changes

<!-- If this PR introduces a change to the validation of blocks in the chain or consensus in general, please describe the impact. -->

## API Changes

<!-- If this PR introduces API changes, please describe the changes here. What will developers need to know before upgrading to this version? -->

## Documentation Additions

<!-- List all the information that needs to be added to the documentation after merge. -->
<!-- Start Article -->
<span id="ArticleContent">

# SwipeClouds® Angular 8 Mobile App Video Framework #

# Angular 8 Native-Language Mobile Apps for iPhone (Swift) & Android (Java) #

# NO React Native, NO Ionic, NO Flutter, NO Unnecessary 3rd-Party Frameworks! #

## by William SerGio ##

<p>You can download all the source code at:</p>
<p>http://www.codeproject.com/script/Articles/MemberArticles.aspx?amid=53972</p>

<img height="520px" src="/mobile-drawing-app/src/assets/swipeclouds.gif" width="264px" />

# Old Frameworks NO Longer Needed #

- **NO React Native**
- **NO Ionic**
- **NO Intel XDK**
- **NO Onsen UI**
- **NO Trigger.IO**
- **NO ng2-Bootstrap**
- **NO TopCoat**
- **NO Sencha Touch**
- **NO NativeScript**
- **NO Angular UI**
- **NO Flutter**
- **NO Framework 7**
- **NO Kendo UI**
- **NO Xamarin**

# A Few Features Included #

- **Ability to Stream Video to Smart TV Sets**
- **Face ID & Touch ID for BOTH iPhone & Android**
- **Plays 3-D Video on BOTH iPhone & Android**
- **Encryption for Passwords That NOBODY Can Crack**
- **How to Use JQuery Plugins in Angular**
- **Cool Animated Canvas Backgrounds**
- **JSONP & Observables for Remote Data**
- **Pinch Clouds to Expand & Contract**
- **Angular Component Plays Embedded Videos**
- **Delivers Targeted Ads Based on Zip Code Radius**
- **Allows Use of Full-Featured SASS**
- **Angular ListView, Toolbar & NavBar**
- **iOS7 Frosted Panels & HTML Games Like Chess**
- **Angular Dialog Popup Component**
- **Barcode Scanner, UserData & Compass Plugins**
- **Angular LocalStorage Component**
- **How to Load an External Website Using Angular**
- **Angular Back Button for External Sites**

## Introduction ##

<p>In my opinion, frameworks like React Native and Ionic are completely unnecessary and only add unwanted size and maintenance with each new release. I am an expert in every mobile framework, including React Native, Ionic, Cordova, PhoneGap, etc.,
and I have written hundreds of apps in these frameworks, like React Native, for clients who were misinformed into thinking that they wanted a "uniform look and approach to coding," which isn't the case with any of these old and unnecessary frameworks.</p>

<p>We need to understand how mobile apps are created today. We start with a native-language iPhone app and Android app that hosts a browser that displays a web-based app on a company's server. Let's look at both the iPhone and Android as follows:</p>

## Native iPhone App in SwipeClouds ##

<p>We use Swift over Objective-C in our native iPhone app. UIWebView is the now-deprecated, but still available, browser interface in Xcode iOS apps for displaying web content. It has been replaced with WKWebView (which is what Safari uses) from iOS 8 onwards. You can also use the iOS 9 SFSafariViewController. In our native iPhone app we make all three of these browser interfaces available because, to date, UIWebView is still the most reliable in terms of running Angular 8 web-based apps.</p>

<p>I have added a lot of pre-built components in my SwipeClouds using SwiftUI. SwiftUI is an interactive and revolutionary framework that enables Apple developers to develop and design apps for iOS, macOS, and tvOS. The SwiftUI framework we have in SwipeClouds has tools and APIs to bring iPhone app solutions to the Mac and other Apple platforms while saving the time and resources otherwise required. In my opinion, the SwiftUI framework makes Google's Flutter UI framework and Facebook's React Native framework OBSOLETE.</p>

We added to our native iPhone app in Swift a simple two-way JavaScript bridge: with a single line of code, our JavaScript can call any Swift function and get data back, and with a single line of code our Swift code can call any JavaScript function, making all those 3rd-party frameworks obsolete in my opinion.
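On the JavaScript side, a two-way bridge like the one described above might look something like this. This is a hypothetical sketch, not the actual SwipeClouds source: the handler name `swipeclouds`, the `SwipeCloudsBridge` object, and the callback registry are assumptions. What is standard is the plumbing: WKWebView exposes `window.webkit.messageHandlers` for JS-to-native calls, and on Android an object injected with `addJavascriptInterface` plays the same role.

```javascript
// Registry of JS functions the native side is allowed to call back into.
const jsHandlers = {};

function registerHandler(name, fn) {
  jsHandlers[name] = fn;
}

// Entry point for native code. Swift or Java invokes it with one line, e.g.
// webView.evaluateJavaScript("receiveFromNative('status', '{\"ok\":true}')")
function receiveFromNative(name, json) {
  const handler = jsHandlers[name];
  return handler ? handler(JSON.parse(json)) : undefined;
}

// Single call site for JS-to-native messages that works on both platforms.
function callNative(message) {
  if (typeof window !== 'undefined' && window.webkit?.messageHandlers?.swipeclouds) {
    // iOS: WKWebView script message handler
    window.webkit.messageHandlers.swipeclouds.postMessage(message);
  } else if (typeof window !== 'undefined' && window.SwipeCloudsBridge) {
    // Android: object injected via addJavascriptInterface (assumed name)
    window.SwipeCloudsBridge.send(JSON.stringify(message));
  } else {
    // Plain browser: no native host, so just log for debugging
    console.log('no native bridge present:', message);
  }
}
```

The key design point is that the web app never branches on platform beyond this one function, so the Angular code stays identical across iPhone, Android, and the desktop browser.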
## Native Android App in SwipeClouds ##

In our native Android app in Java, we use android.webkit.WebView and WebChromeClient to display server-based apps in this browser interface. And, as we do on the iPhone, we added a simple two-way JavaScript bridge: with a single line of code, our JavaScript can call any Java or C++ function and get data back, and with a single line of code our Java/C++ code can call any JavaScript function, making all those 3rd-party frameworks obsolete in my opinion.

I believe in eliminating as many 3rd-party components as possible, and I have always liked the look and feel of JQuery Mobile, so I wanted to see how JQuery Mobile's styling would look in an Angular mobile app where we let Angular control the DOM but take advantage of JQuery Mobile's cool plugins and styling. You can easily add Cordova or PhoneGap to this project. All the frameworks listed above, like Ionic, etc., <strong>are all great frameworks</strong> that I have used for our clients, BUT they are <strong>NOT</strong> needed to build a really cool, fully functioning Angular mobile app. This article demonstrates how to use JQuery plugins in an Angular mobile app.

For the main GUI in our Angular mobile app, I decided to use an amazing JavaScript canvas plugin by Graham Breach, which I made some changes to for this project. For scrolling I used the JQuery plugin iScroll v4.2 by Matteo Spinelli. If you are tired of the same boring look and feel of most mobile apps, then check out my approach, which doesn't use Ionic or any of the third-party components listed above.</p>

<p>To run the compiled code and see what the Angular mobile app looks like, just open the www.zip folder in Visual Studio as a website. Later, after you have created the project, you can run it from Node.js.
If you just want to see the working app, I posted the compiled .apk file for Android on my website, and you can just scan the QR code below to load swipeclouds.apk on your Android mobile phone.</p>

<p>I believe that a first app in Angular should have all the basics, so this sample project includes:</p>

<ul>
  <li>SwipeClouds Interface</li>
  <li>Videos &amp; Movies from Hundreds of Tube Servers</li>
  <li>Super Good <strong>Chess</strong> Game</li>
  <li>Internal Browser for Loading Local &amp; Remote HTML</li>
  <li>iOS7 Frosted Panels, Toolbars &amp; Navbars</li>
  <li><strong>Barcode Scanner</strong> (Full JavaScript &amp; Java Code)</li>
  <li><strong>Multiple Compasses</strong> (5 Compasses &amp; Full JavaScript &amp; Java Code)</li>
  <li><strong>User Data Scraper &amp; Targeted Niche TV Commercials Delivery</strong> (Full JavaScript &amp; Java &amp; Swift Code)</li>
</ul>

<p>Creating powerful Angular mobile apps is fast and easy. You just run Angular-CLI, then unzip the src.zip file above and copy the contents into the "src" directory created by the CLI, and you have all the features above ready to go with full source code---WOW! If you would like to download the compiled Android apk file, you can find it on my SwipeClouds website at: <a href="http://www.swipeclouds.com/" target="_blank">http://www.swipeclouds.com</a></p>

## Main GUI - Pinch to Size Cloud ##

<p>The main GUI is a SwipeCloud of floating images, and you can swirl this cloud by swiping any of the images with your finger. Pinching the SwipeCloud with your fingers will increase and decrease the size of the SwipeCloud.
This SwipeCloud is the main means of navigation for the Angular mobile app, and clicking on any of the images in the SwipeCloud will load a different view, which, in most cases, will load the VideoComponent view for that particular group of video feeds from any tube server that allows embedding.</p>

Angular Mobile App with a Very Different Look &amp; Feel<br>
with a Novel Approach to Navigation. JQuery has a lot of<br>
really cool, already-built CANVAS plugins like SwipeClouds.

**We set pinchZoom = true**

```javascript
function TouchDown(e) {
  var tg = EventToCanvasId(e),
      tc = (tg && TagCanvas.tc[tg]),
      p;
  if (tc && e.changedTouches) {
    if (e.touches.length == 1 && tc.touchState == 0) {
      // Single touch: start dragging the cloud
      tc.touchState = 1;
      tc.BeginDrag(e);
      if (p = EventXY(e, tc.canvas)) {
        tc.mx = p.x;
        tc.my = p.y;
        tc.drawn = 0;
      }
    } else if (e.targetTouches.length == 2 && tc.pinchZoom) {
      // Two touches: switch from dragging to pinch-zoom
      tc.touchState = 3;
      tc.EndDrag();
      tc.BeginPinch(e);
    } else {
      // Anything else cancels both gestures
      tc.EndDrag();
      tc.EndPinch();
      tc.touchState = 0;
    }
  }
}
```

## Layouts for Portrait vs. Landscape ##

<p>I decided that the best layout for video and the other views was to retain the Toolbar and Navbar in the Portrait orientation and to hide them in the Landscape orientation. You can see this below. I added a button and code to stream the selected video from your mobile phone to any smart TV set, using pairing from the tube server's site.</p>

<img height="320px" src="http://www.swipeclouds.com/img/orientation.jpg" width="628px" />

<p>I also used this approach for the SwipeClouds view.
It made sense that if the user needs access to the Toolbar or Navbar from the Landscape orientation, the user just rotates the phone to Portrait and the controls appear.</p>

## Install Node.js ##

<p>Let's get started building this Angular mobile app by downloading Node.js, which includes npm, at <a href="https://nodejs.org/en/" target="_blank">https://nodejs.org/en/</a></p>

<p>At this point, if you tried using npm it will most likely give you the dreaded and now famous error:<br>
<br>
<code><strong>npm ERR! Windows_NT 6.1.7601</strong></code><br>
<br>
There are numerous working fixes for this error if you are behind a proxy, but if you are not behind a proxy, then trying to fix this error can make you crazy. Run the commands below in a CMD window launched as <b>ADMINISTRATOR</b>:</p>

<pre lang="text">npm config delete http-proxy
npm config delete https-proxy
npm config delete proxy -g
npm config delete http-proxy -g

THE REAL MAGIC TO FIXING THIS ERROR IS:

npm config set registry "http://registry.npmjs.org"
npm config set strict-ssl false
</pre>

<h2><strong>Angular-CLI</strong></h2>

<p>I like the speed of development with Angular-CLI, but I dislike how buggy it is to work with at this time. I really dislike companies like Google telling me what my app should look like or what IDE I should use.
The purpose of this article is to walk beginners through creating an Angular app using the CLI, so let's start.</p>

<p>Install Angular-CLI, which will also install Angular's "ng" command globally on your system. Directions for installing Angular-CLI are at: <a href="https://github.com/angular/angular-cli#updating-angular-cli">https://github.com/angular/angular-cli#updating-angular-cli</a></p>

<pre lang="text">npm uninstall -g @angular/cli
npm cache clean
npm install -g @angular/cli@latest
</pre>

<p>To verify whether your installation completed successfully, you can run:</p>

<pre lang="text">ng version
@angular/cli: 1.0.0-beta.32.3
node: 7.4.0
os: win32 x64
</pre>

## Create Our Angular Mobile App ##

<p>Now that you have Angular-CLI installed, you can generate an Angular mobile app. First create a directory for your Angular projects. On my computer I have a directory called "Angular." From inside that directory, using the <b>CMD prompt in administrator mode</b>, create your "first-app" as follows:</p>

<pre lang="text">REM Select a folder - I used C:\Angular
C:\Angular&gt;ng new first-app --routing --style=scss
</pre>

## Installing &amp; Using Visual Studio Code IDE ##

<p>I used Microsoft's Visual Studio Code IDE, which you can easily download and install from: <a href="http://code.visualstudio.com/" target="_blank">http://code.visualstudio.com/</a></p>

<p>Open <i><b>Visual Studio Code</b></i> and select the project folder "first-app", then open the Integrated Terminal window as shown below. In the <i><b>Integrated Terminal window</b></i> in <i><b>Visual Studio Code</b></i>, run the command below, which will create the "dist" directory for your finished project.</p>

<p><img height="319px" src="http://www.swipeclouds.com/img/ngbuild.jpg" width="474px"></p>

<p>Then run the default app installed by Angular-CLI as follows:</p>

<pre lang="text">C:\Angular\first-app&gt;ng serve
</pre>

<p>This will start our Node.js server running on port 4200, so if you open your Chrome web browser to http://localhost:4200 you will see the application running.
This will run the default Angular app that comes with Angular-CLI.</p> ## What Can Go Wrong When You Run The App? ## <p>In the sample project I left the includes for Cordova. If you leave these Cordova includes in the "index.html" then you will get the message below -<strong> DO NOT HIT THE "OK" BUTTON</strong> or the app will not load. Just hit the "<strong>CANCEL</strong>" button and the app will run in the browser.</p> <p><img height="444px" src="http://www.swipeclouds.com/img/run_cordova.jpg" width="469px"></p> ## How to Compile Angular Apps as Mobile Apps Using Ahead-of-Time Compilation (aot) ## <p>Next we will add the source code for our mobile app to this default project. Download the zipped <strong>src.zip</strong> file posted above and empty the contents of the zipped "src" folder into the src folder of the project. And again run the command:</p> C:\Angular&gt;first-app&gt;ng serve <p>I will jump ahead here to explain how to build your "www" folder for mobile. There is a BIG SECRET to compiling an Angular App for Mobile that isn't obvious: to build an Angular App so it will work as a Mobile App in XCODE or Android Studio, you must set up the pathways correctly. Look at the index.html from the "src" folder you added to the project and notice the following:</p> &lt;script&gt;document.write('&lt;base href="' + document.location + '" /&gt;');&lt;/script&gt; <p>Next we want to bundle our Angular Mobile App for importing into XCODE (iPhone) or Android Studio. I will just discuss Android Studio here to keep this article short since XCODE is very similar. Bundles are generated by default to: projectFolder/dist/ But this won't work for our mobile app. To create our MOBILE production build we need to use some extra commands:</p> --base-href --aot Run in the command line when the directory is projectFolder; the prod flag bundles for production &amp; the aot flag enables ahead-of-time compilation, also known as offline compilation. 
ng build --prod --aot --base-href /$ROOT/Angular/first-app/www/ <p>In the pathway above you will notice that I created a folder "Angular" on my "C" drive and created my "first-app" folder inside that directory. If you have your project in a different folder then adjust the pathway above accordingly. The contents of the generated "www" folder will go into our "www" folder in Android Studio and all the pathways will actually work. **Voila!**</p> ## Routing in Our Angular Mobile App ## <p>We have only a few simple views in our app, namely, swipe-clouds, video, legal, cordova, and blank. In the video view the user can select videos to watch in the responsive Angular video player contained in that view. And blank is used as a fudge to keep down the coding.</p> // We have four real views and one fake view, i.e., blank: let NavigationExtras: Array<any>; const routes: Routes = [ {path: '', pathMatch: 'prefix', redirectTo: 'swipeclouds'}, {path: 'swipeclouds', component: SwipeCloudComponent}, {path: 'video', component: VideoComponent}, {path: 'cordova', component: CordovaComponent}, {path: 'legal', component: LegalComponent}, {path: 'blank', component: BlankComponent}, {path: '**', pathMatch: 'prefix', redirectTo: ''} ]; ## local-storage.service.ts in Angular ## To pass data throughout our mobile app I decided to use localStorage, so I created this simple class for localStorage that imports Injectable. So when you see references to localStorage we are just calling this class in our app. import { Injectable } from '@angular/core'; @Injectable() export class LocalStorageService { private isLocalStorageEnabled: boolean; private isLocalStoragePresent: boolean; constructor() { this.checkSupport(); } ... etc ## Using JQuery &amp; JQuery Plugins in Angular ## <p>There are several ways to add JQuery and JQuery Mobile to Angular but I use a simple one that has always worked for me. 
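Returning to the LocalStorageService above: the class is truncated with "... etc", but the essential idea — round-tripping objects through a string store via JSON, with an in-memory fallback when localStorage is unavailable — can be sketched as below. This is my own minimal sketch; the method names beyond `get`/`set` and the fallback class are assumptions, not the project's actual code.

```typescript
// Minimal storage shape shared by window.localStorage and an in-memory fallback.
interface StringStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// In-memory fallback used when localStorage is unavailable (e.g. some WebViews).
class MemoryStore implements StringStore {
  private data = new Map<string, string>();
  getItem(key: string): string | null {
    return this.data.has(key) ? (this.data.get(key) as string) : null;
  }
  setItem(key: string, value: string): void {
    this.data.set(key, value);
  }
}

// JSON round-trip wrapper in the spirit of LocalStorageService.
class JsonStorage {
  constructor(private store: StringStore = new MemoryStore()) {}
  set(key: string, value: unknown): void {
    this.store.setItem(key, JSON.stringify(value));
  }
  get<T>(key: string): T | null {
    const raw = this.store.getItem(key);
    return raw === null ? null : (JSON.parse(raw) as T);
  }
}

// Usage: the same get/set pattern the app uses for its settings objects.
const storage = new JsonStorage();
storage.set('settings_swipeclouds', { themeid: 'ios7light', bgimage: 2 });
const settings = storage.get<{ themeid: string; bgimage: number }>('settings_swipeclouds');
console.log(settings ? settings.themeid : 'none');
```

In the real app you would hand the wrapper `window.localStorage` instead of the in-memory store; the point of the injected store is that the same code works in both environments.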
I place the links right inside the header of the index.html file and all of the supporting .js files and .css files inside the "assets" folder in the project.</p> ## The Angular SwipeCloudComponent Canvas ## <p>To create this Angular component we use Microsoft's TypeScript. We move into our "components" folder and generate the basic files using the command below in another instance of our Integrated Terminal Window.</p> C:\Angular&gt;first-app&gt;src&gt;app&gt;components&gt;ng generate component swipe-cloud <p>And in our typescript file, swipe-cloud.component.ts, we add our canvas and JQuery with a simple declaration at the top of this file as follows:</p> declare var TagCanvas: any; declare var jQuery: any; declare var $: any; We instantiate our canvas as follows in the constructor, where 'swipeCanvas' is the ID of our canvas element: try { TagCanvas.Start('swipeCanvas', Config.DATA_CLOUDS[0], Config.DATA_CLOUD_OPTS); } catch(e) {} <p>We can access our canvas either through JQuery or directly using plain JavaScript through "TagCanvas", which is very straightforward. In a similar manner we create a SwipecloudHeaderComponent that has buttons to rotate through our array of clouds and to change the backgrounds of our canvas. The click event of the Clouds Button in this header component is shown below. 
I used route parameters and ActivatedRoute to receive those parameters in the same view we are sending them from, as shown below:</p> nextCloud(event) { event.preventDefault(); let s = this.LocalStorage.get('cloud_swipeclouds'); if (s) { if (s.cloudid + 1 &lt; <b>Config.DATA_CLOUDS</b>.length) { s.cloudid = s.cloudid + 1; } else { s.cloudid = 0; } this.LocalStorage.set('cloud_swipeclouds', s); } this.cloudsheaderrouter.navigate(['/blank']); setTimeout( () =&gt; { <b>this.cloudsheaderrouter.navigate(['/swipeclouds', {action: 'nextcloud', actionid: s.cloudid }]);</b> }, 1); } <p>In the click event above we get the ID of the next cloud in our cloud array, Config.DATA_CLOUDS; we cheat a bit by telling our router that we want to load our "blank" view, and then we tell our router to go back to our current view, passing the parameters "action" and "actionid" to it.</p> oopts = {shape: 'sphere',zoom: 1.0,maxSpeed: .04,...} // get URL parameters this.sub = this.route .params .subscribe(params =&gt; { const _action: any = params['action']; const _actionid: any = params['actionid']; // alert('_action: '+_action); if ((_action === 'undefined') || (_actionid === 'undefined')) { } else if(_action === 'nextcloud') { try { this.cloudID = _actionid; } catch(e) { } } else if(_action === 'drag') { this._drag = _actionid; if(_actionid === 'on') { this.oopts.dragControl = true; } if(_actionid === 'off') { this.oopts.dragControl = false; } try { TagCanvas.Start('swipeCanvas', Config.DATA_CLOUDS[this.cloudID], this.oopts); } catch (e) { } } else if(_action === 'shape') { const s = _actionid; this.changeshape(s); try { TagCanvas.Start('swipeCanvas', Config.DATA_CLOUDS[this.cloudID], this.oopts); } catch (e) { } } }, (err) =&gt; { console.log('error!', err); }); <p>I should point out that there are many ways for the swipe-cloud-header to send the click event to the swipe-cloud component, but because of the relative positioning of these components it turns out that this approach 
worked best for me. For changing backgrounds I decided to directly change the background using getElementById('swipeCanvas'):</p> nextBackground(event) { event.preventDefault(); let g = this.LocalStorage.get('settings_swipeclouds'); if (g) { if (g.bgimage + 1 &lt; Config.DATA_BACKGROUNDS.length) { g.bgimage = g.bgimage + 1; } else { g.bgimage = 0; } this.LocalStorage.set('settings_swipeclouds', g); <b>document.getElementById('swipeCanvas')</b>.style.backgroundColor = '#000000'; <b>document.getElementById('swipeCanvas')</b>.style.backgroundImage = 'url(../../assets/img/' + Config.DATA_BACKGROUNDS[g.bgimage] + ')'; <b>document.getElementById('swipeCanvas')</b>.style.backgroundSize = 'cover'; } } ## The Angular VideoComponent ## <p>The VideoComponent will retrieve video feeds from the hundreds of tube servers that allow embedding in web pages. The structure for our feeds is as follows:</p> let s = this.LocalStorage.get('feeds_swipeclouds'); if (s) { this._category = s.category; // feed category this._mcat = s.mcat; // movie or video subcategory this._start = s.start; // number of ad to start with this._max = s.max; // maximum feeds to retrieve this._pc = s.pc; // postal code for ads this._rad = s.rad; // postal code radius } <p>Notice that I used the postal code above and the postal code radius for delivery of targeted ads to the phone user's current location. I have found that selling video ads (TV Spots) to air in a mobile app like this one works best by selling a collection of zip codes and a zip code radius of, say, 50 miles. That means that the video ads retrieved from the server will be ads set to match those zip codes and zip code radius from local advertisers. Let's begin by looking at how we load the Video Component View. 
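Before we do, a quick aside: the wrap-around logic that both nextCloud() and nextBackground() above repeat inline boils down to one pure helper, shown here standalone (the function name is mine, not the project's):

```typescript
// Advance an index through an array, wrapping back to 0 at the end —
// the same if/else pattern used in nextCloud() and nextBackground().
function nextIndex(current: number, length: number): number {
  return current + 1 < length ? current + 1 : 0;
}

// Cycling through a background list the way nextBackground() does:
const backgrounds = ['bg1.jpg', 'bg2.jpg', 'bg3.jpg'];
let i = 0;
i = nextIndex(i, backgrounds.length); // 1
i = nextIndex(i, backgrounds.length); // 2
i = nextIndex(i, backgrounds.length); // wraps to 0
console.log(backgrounds[i]);
```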
In our Swipe Cloud Component, clicking on any of the floating images in our Swipe Cloud that load videos causes the CallRoute method to be called as shown below:</p> &lt;a (click)="CallRoute($event,'dtv_flyingbikes')" href="#" title="Hover Boards" type="button"&gt; &lt;img alt="Icon 01" src="assets/img/1_flyingbikes.png" /&gt; &lt;/a&gt; CallRoute(event, categoryRef: string) { event.preventDefault(); let s = this.LocalStorage.get('feeds_swipeclouds'); if (s) { s.category = categoryRef; s.start = 0; this.LocalStorage.set('feeds_swipeclouds', s); } this.cloudsrouter.navigate(['/video', {category: categoryRef, start: 0}]); } <p>As you can see above we call navigate on the cloudsrouter and pass our url parameters, namely "category" and "start", into the Video View. In the Video Component we receive the passed url parameters and call getFeeds() or getFeedsLocal() from our data service as follows:</p> // get URL parameters this.sub = this.route .params .subscribe(params =&gt; { this._category = params["category"]; this._start = params["start"]; // This allows you to stream to Digital TV sets! if (this._category === 'playontv') { let z = this.LocalStorage.get('selected_video_swipeclouds'); if (z) { if (z.linkType === 'embed_youtube') { window.open('https://www.youtube.com/watch?v=' + z.linkValue, '_self', '', true); // add this to make back button work correctly! this.location.go(''); } else if (z.linkType === "channel_youtube") { window.open('https://www.youtube.com/results?search_query=' + z.linkValue, '_self', '', true); // add this to make back button work correctly this.location.go(''); } } } // You should retrieve data using JSONP if (Config.DATA_SOURCE === 'remotejsonp') { <b>this.getFeeds();</b> } if (Config.DATA_SOURCE === 'localjson') { <b>this.getFeedsLocal();</b> } }); <p>The call to our data service, DataObservableService, will retrieve a list of videos locally or remotely from Tube Servers that allow embedding of videos in web pages. 
The retrieval of data locally is trivial, so I will focus on the retrieval of remote data and this app's use of JSONP to accomplish it, which is the preferred method for retrieving data, as shown below:</p> constructor(private http: Http, private _jsonp: Jsonp, private sanitizer: DomSanitizer, private LocalStorage: LocalStorageService) { this.headers = new Headers( { 'Content-Type': 'application/json', 'Accept': 'q=0.8;application/json;q=0.9', 'async': true, <b>'dataType': 'jsonp'</b> }); this.options = new RequestOptions({ headers: this.headers }); } getServiceFeedsJsonp(): Observable&lt;any&gt; { let s = this.LocalStorage.get('feeds_swipeclouds'); if (s) { this._category = s.category; // feed category this._mcat = s.mcat; // movie or video subcategory this._start = s.start; // number of ad to start with this._max = s.max; // maximum number of feeds to retrieve this._pc = s.pc; // postal code for ads this._rad = s.rad; // postal code radius } const jsonp_base = Config.JSONP_DOMAIN1; let jsonp_param = 'cat=' + this._category + '&amp;mcat=' + this._mcat + '&amp;start=' + this._start + '&amp;max='; jsonp_param = jsonp_param + this._max + '&amp;pc=' + this._pc + '&amp;rad=' + this._rad + '&amp;methodName=Feeds&amp;jsonp=JSONP_CALLBACK'; let jsonp_rnd = '&amp;rnd=' + this.getRandomInt(1, 500); let jsonp_url = jsonp_base + jsonp_param + jsonp_rnd; return this._jsonp .get(jsonp_url, this.options) .retry(this.retryCount) .map( (res) =&gt; { const feeds = res.json(); this.checkFeeds(feeds); return feeds; }) .catch(this.handleError); } <p>Some people may have trouble getting JSONP to work, so I will explain some of the things you need to know. You can run a JSONP server easily on your own server with a few lines of PHP code, or you can use C# .NET, etc. 
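On the client side, the URL assembly inside getServiceFeedsJsonp() above can be pulled out as a pure function, which makes it easier to inspect and test. This is a sketch: the field names mirror the feed settings object the app keeps in localStorage, but the base URL here is a placeholder, not a real endpoint.

```typescript
// The feed settings the app stores under 'feeds_swipeclouds'.
interface FeedSettings {
  category: string;
  mcat: string; // movie or video subcategory
  start: number; // number of the feed to start with
  max: number;   // maximum number of feeds to retrieve
  pc: string;    // postal code for targeted ads
  rad: number;   // postal code radius
}

// Assemble the JSONP request URL the same way getServiceFeedsJsonp() does.
function buildFeedUrl(base: string, s: FeedSettings, rnd: number): string {
  const params = [
    `cat=${s.category}`,
    `mcat=${s.mcat}`,
    `start=${s.start}`,
    `max=${s.max}`,
    `pc=${s.pc}`,
    `rad=${s.rad}`,
    'methodName=Feeds',
    'jsonp=JSONP_CALLBACK', // the parameter name depends on your JSONP server
    `rnd=${rnd}`,           // cache-buster, like getRandomInt(1, 500) above
  ];
  return base + params.join('&');
}

// Placeholder base URL for illustration only.
const url = buildFeedUrl(
  'https://example.com/feeds?',
  { category: 'dtv_flyingbikes', mcat: 'video', start: 0, max: 20, pc: '90210', rad: 50 },
  42
);
console.log(url);
```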
The design concept I used was that instead of having multiple generic handlers, or a single handler with lots of switch/if statements, I decided to use a single generic handler with a Factory design pattern. The Factory returns a class, based upon the methodName that is passed, which is used to handle only that request. The Factory reads the methodName and its class from the web.config file and instantiates the handler. The generic handler asks the Factory for the handler and performs some pre/post processing. Here is a link to an article I wrote on creating a JSONP server using C# .NET, which is what I used in testing the JSONP code in this Angular Mobile App:</p> <p><b><a href="http://www.software-rus.com/articles/articles.html?article=JSONPMobileHandler" target="_blank">C# .NET JSONP Server</a> Most people have problems getting this one part of the url correct:</b></p> &amp;methodName=Feeds&amp;jsonp=JSONP_CALLBACK' <p>Look at the "jsonp" in the line above - the parameter name is determined by your JSONP server and will vary from server to server. In my C# JSONP Server I used "jsonp", but in other servers you will also see "callback", or just "c", and its value is based on the server. 
In other words, don't just use what you commonly see in articles, like "callback", but check to see what the JSONP server you are connecting to requires.</p> ## The Angular ListView ## <p>This is pretty straightforward and we simply use *ngFor to create our ListView in our Video component as follows:</p> &lt;li <b>*ngFor</b>="let feed of feeds;let i = index" data-icon="false" data-role="listview" [class.active]="i == selectedRow"&gt; &lt;div [ngClass]="['duration-cover']" (click)="clicked($event, feed, i)"&gt; &lt;div [ngClass]="['duration-left']"&gt;{{feed.duration}}&lt;/div&gt; &lt;img [ngClass]="['rounded-img']" src="{{feed.image}}" /&gt; &lt;/div&gt; &lt;h3 [ngClass]="['ellipsis']"&gt;{{feed.title}} &lt;/h3&gt; &lt;p [ngClass]="['ellipsis2']"&gt;{{feed.shortDescription}} &lt;/p&gt; &lt;/li&gt; <p>Unlike most listviews I decided to place the Click on the image in the row INSTEAD of the row itself so that I can easily move the list up and down without accidentally clicking the row.</p> ## Playing Embed Videos in Angular ## <p>This is pretty straightforward and we simply use <b> bypassSecurityTrustResourceUrl </b>in our Video component to retrieve a safe "url" object as follows:</p> clicked(event, pageRef: any, zindex: any) { event.preventDefault(); this.setClickedRow(zindex); <b>this.page = this.sanitizer.bypassSecurityTrustResourceUrl(pageRef.link);</b> $('#yt_player').attr('src', this.page); let z = this.LocalStorage.get('selected_video_swipeclouds'); if (z) { z.linkType = pageRef.linkType; z.linkValue = pageRef.linkValue; this.LocalStorage.set('selected_video_swipeclouds', z); } } <p>I created an Angular Tabbed NavBar Component, namely VideoNavbarComponent, and for our Video Component we have Prev and Next tabs that work as shown below to fetch the next group of videos from our JSONP Server:</p> &lt;li type="button" class="bcolor_red"&gt;&lt;a (click)="<b>prev</b>($event, '<b>prev</b>')" data-icon="arrow-l"&gt;Prev&lt;/a&gt; &lt;li type="button" 
class="bcolor_blue"&gt;&lt;a (click)="<b>next</b>($event, '<b>next</b>')" data-icon="arrow-r"&gt;Next&lt;/a&gt; next(event, pageRef: string) { event.preventDefault(); let s = this.LocalStorage.get('feeds_swipeclouds'); if (s) { s.start = s.start + s.max; this.LocalStorage.set('feeds_swipeclouds', s); this.videonavbarrouter.navigate(['/blank']); setTimeout( () =&gt; { this.videonavbarrouter.navigate(['/video', { <b>start</b>: s.start }]); }, 1); } } ## BrowserComponent, Chess &amp; HTML5 ## <p>I also added to this Angular Mobile App a fantastic HTML chess game by Stefano Gioffre. The big advantage to adding JQuery to Angular 2 is that there are hundreds of cool HTML games like chess that can be easily dropped into the sample project by simply loading them into an iFrame. So I decided to add a BrowserComponent that contains an iFrame in the html, as shown below, that can be loaded with a local or remote web page.</p> &lt;iframe id="iframe" name="iframe" [src]='page' scrolling="yes" marginheight="0" frameborder="0" webkitallowfullscreen [attr.width]='_zwidth' [attr.height]='_zheight' mozallowfullscreen allowfullscreen&gt;&lt;/iframe&gt; <p>And in the BrowserComponent we subscribe to the route as follows:</p> this.sub = this.route .params .subscribe(params =&gt; { this._zwidth = window.innerWidth; this.zurl = params['url']; }); <p>And loading the html chess is as simple as the following:</p> this.cloudsrouter.navigate(['/browser', {name: 'chess', url: './assets/chess/chess.html'}]) <p>Using the Browser Component you can easily drop in already existing HTML5 games that you don't have time to rewrite in Angular 2. 
The BrowserComponent lets you easily display HTML that is local or remote, from your license agreement to existing games like the chess shown below.</p> <p><img height="auto" src="http://www.swipeclouds.com/img/chess.jpg" width="500px"></p> ## Mobile App Themes & Frosted Panels ## In my SwipeClouds® Framework I used JQuery Mobile's ThemeRoller to create the following themes as shown below: - light - dark - ios7light - ios7dark The Toolbars and Navbars are shown below for these themes. <p><img height="auto" src="http://www.swipeclouds.com/img/themes.jpg" width="400px"></p> The next question I had to answer was how I would switch between these themes in the mobile app, and the technique that worked the best was to set the following in the index.html page: <link id="link_swipeclouds" href="" rel="stylesheet" type="text/css" /> Then to change a theme we simply call changeTheme as shown below: // (click)="changeTheme($event, 'ios7light')" changeTheme(event, themeRef: string) { event.preventDefault(); let s = this.LocalStorage.get('settings_swipeclouds'); if (s) { s.themeid = themeRef; this.LocalStorage.set('settings_swipeclouds', s); } const _path = './assets/styles/themes/' + themeRef + '.css'; $('#link_swipeclouds').attr('href', _path); } ## How to Create iOS7 Frosted Panel Look ## <p>By clicking the "Setup" button on the top-left of the main screen you will slide out the frosted control panel where you can set other options for this Angular2 Mobile App.</p> <img height="480px" src="http://www.swipeclouds.com/img/iOS7FrostedPanel.gif" width="243px" /> <p>I really like the frosted panel look and feel on iPhones<br> so I added it to this mobile app. 
To create the cool<br> iOS7 Frosted Panel Look I found there are 3 "tricks"<br> that I used, namely:</p> <ul> <li><b>Set panel z-index to -1 in order to<br> prevent controls from being blurred</b></li> <li><b>Set Panel's Background to Transparent</b></li> <li><b>Blur what is under the panel</b></li> </ul> <p>In order to blur or frost what is under the sliding panel<br> I used 2 classes, namely, backfrost_on and backfrost_off,<br> that I add and remove to scroller_player which is the<br> &lt;div&gt; tag that holds the screen content, and you can<br> see this code working nicely in the Angular 2 Mobile App.</p> <p>You will also notice that I placed the controls on a<br> sliding frosted panel in this mobile app with its<br> own custom scrollbar.</p> <p>This is accomplished in the app as follows using a simple class:</p> class="frosted ui-panel" ## Cordova &amp; PhoneGap Plugins ## <p>I included 3 Cordova Plugins with full Java Source Code in this project. You DO NOT have to include these plugins in your own project. If you do want to add these Cordova Plugins then download the <strong>android&nbsp;file </strong>at the top of this article, which includes the Java Source Code for the Cordova Plugins below, and add that to Android Studio. Notice that since we are adding the Java Source Code directly to Android Studio for these plugins, all we need to call them is "cordova.exec" in javascript.</p> <p>Clicking on a floating image in the swipecloud can directly launch a Cordova Plugin. 
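As an aside on the bridge itself: every plugin call goes through the same cordova.exec(success, failure, service, action, args) callback signature, and it can be handy to wrap it in a Promise. The sketch below is my own (not part of the sample project); the exec function is injected so the wrapper can be exercised without a device — in a real app you would pass cordova.exec.

```typescript
// The shape of cordova.exec: success/failure callbacks, then the native
// service name, action name, and an arguments array.
type ExecFn = (
  success: (result: unknown) => void,
  failure: (error: unknown) => void,
  service: string,
  action: string,
  args: unknown[]
) => void;

// Promisify a cordova.exec-style call so plugin invocations compose cleanly.
function execAsync(
  exec: ExecFn,
  service: string,
  action: string,
  args: unknown[] = []
): Promise<unknown> {
  return new Promise((resolve, reject) => exec(resolve, reject, service, action, args));
}

// A fake exec standing in for the native bridge, for illustration only.
const fakeExec: ExecFn = (success, _failure, service, action) => {
  success(`${service}.${action} ok`);
};

execAsync(fakeExec, 'CompassPlugin', 'showCompass', ['Compass']).then((r) => console.log(r));
```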
But for the purposes of illustrating the Cordova Plugins in the attached sample Angular 2 Mobile App I thought I would create a scrolling list of all of the included Cordova Plugins from an array that describes the plugins in our app, as shown below.</p> this.cordovatools = [ { title: 'Cordova User Data Plugin', description: 'Grabs User Phone Data', linkValue: 'data', image: '1_info.png', rank: 100 }, { title: 'Cordova Compass Plugin', description: 'Different Compasses', linkValue: 'compass', image: '1_compass.png', rank: 100 }, { title: 'Cordova Barcode Scanner Plugin', description: 'Barcode Scanner', linkValue: 'scanner', image: '1_scan.png', rank: 100 }]; <p>I created this list of plugins to illustrate to the reader how to make *ngFor work with data arrays by converting the data using the following:</p> generateArray(obj) { return Object.keys(obj).map((key) =&gt; { return obj[key]; }); } <p>So by using "generateArray(obj)" we can then use *ngFor to create our list of Cordova Plugins.</p> <p><img height="230px" src="http://www.swipeclouds.com/img/cordova2.jpg" width="584px"></p> <p>In this Angular2 Mobile App I call these Cordova Plugins from a listView of Cordova Plugins shown above. We have to create an Array from our static data describing these plugins using "generateArray" in order for our *ngFor loop to work as shown below. 
I recommend writing your own Java code in Android Studio and using Cordova's bridge to call it as shown below.</p> // Our listView of Cordova Plugins Using *ngFor in html &lt;li *ngFor="let tool of <b>generateArray(cordovatools)</b>" data-icon="false" data-role="listview"&gt; &lt;div [ngClass]="['duration-cover']" (click)="CordovaPlugin($event, tool.linkValue)"&gt; &lt;img [ngClass]="['rounded-img']" src="../../../assets/img/{{tool.image}}" /&gt; &lt;/div&gt; &lt;h3 [ngClass]="['ellipsis']"&gt;{{tool.title}}&lt;/h3&gt; &lt;p [ngClass]="['ellipsis2']"&gt;{{tool.description}}&lt;/p&gt; &lt;/li&gt; // In our cordova.component.ts file import ... declare var cordova: any; CordovaPlugin(event, categoryRef: string) { event.preventDefault(); let _param = 'User Data'; if (categoryRef === 'data') { cordova.exec(this.showUserDataSuccess, this.showUserDataFailure, 'UserDataPlugin', 'showUserDataListView', [_param]); } else if (categoryRef === 'compass') { _param = 'Compass'; cordova.exec(this.showCompassSuccess, this.showCompassFailure, 'CompassPlugin', 'showCompass', [_param]); } else if (categoryRef === 'floatcompass') { _param = 'float'; cordova.exec(this.showCompassSuccess, this.showCompassFailure, 'CompassPlugin', 'floatCompass', [_param]); } else if (categoryRef === 'scanner') { cordova.exec(this.showScannerSuccess, this.showScannerFailure, 'BarcodeScanner', 'scan', []); } } // end CordovaPlugin ## Cordova TV Barcode Scanner Plugin ## <p>I added a Cordova Barcode Scanner Plugin to this Angular 2 Mobile App. You have in the download at the top of this article all of the Java source code for all of the Cordova Plugins for Android, including this barcode scanner. 
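As a quick aside, the generateArray() helper shown earlier can be exercised on its own; the sample data here is a trimmed-down stand-in for the cordovatools entries, for illustration only:

```typescript
// generateArray() from the component, standalone: it maps an object's keys to
// its values so *ngFor has a plain array to iterate over.
function generateArray(obj: { [key: string]: unknown }): unknown[] {
  return Object.keys(obj).map((key) => obj[key]);
}

// Trimmed-down stand-in for the cordovatools data described above.
const cordovatools: { [key: string]: unknown } = {
  data: { title: 'Cordova User Data Plugin' },
  compass: { title: 'Cordova Compass Plugin' },
  scanner: { title: 'Cordova Barcode Scanner Plugin' },
};

const tools = generateArray(cordovatools); // three plugin entries
console.log(tools.length);
```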
You should be aware that since you have all of the source code for all of the plugins, and you are adding that Java code to your Android Studio project, you can now call these DIRECTLY by just calling "cordova.exec", which makes things easier.</p> ## One Way to Help Distribute Your Angular 2 SwipeClouds Mobile Apps ## <p>My company has tested various ways of distributing and making money with mobile apps in the last few years, and one way that does work is to create a one- to two-minute video with a resolution of 1280 x 720 that has a QR Code to allow viewers to download and install your mobile app. Use one QR Code for both Android and iPhone, where the link in your QR Code goes to a web service that records the type of device and then redirects to your Android .apk or iPhone .ipa file on your own server or on one of the many shareware sites that allow mobile apps.</p> <p>I don't recommend using Google's Play Store or Apple's App Store links for your QR Codes. Your download links for QR Codes should be to your own server or to sites that won't close down your account if you aren't politically correct enough for them.</p> <p>Put your video with your QR Code on YouTube.com and Youku.com, which are the two best tube server sites we have found for this purpose. Youku is the largest television network in the world and it reaches consumers who will buy almost anything. The Angular 2 Mobile App in the sample project with this article includes a Cordova Barcode Scanner that uses the camera zoom to make it easy for people to scan QR Codes on their Smart TV Sets from their sofas without having to get up close to their TV set. After a TV viewer has scanned the QR Code on their TV Set and installed your mobile app, you can then easily stream TV Commercials, i.e., videos, to their mobile phones and Smart TV sets if those users grant you permission to do so. 
We have found in testing that most people don't mind the occasional ads because among those video ads are videos and movies that they want to watch.</p> <p><img height="431px" src="http://www.swipeclouds.com/img/plugin_qrcode.jpg" width="490px"></p> ## Cordova UserDataPlugin for Zip Radius Ads ## <p>Although making Cordova Plugins is NOT part of this article, I will briefly outline one of the included Cordova Plugins in this project that the reader may find helpful. It is important when a user downloads your app to collect, with the user's permission, as much useful data as possible. In addition to Android displaying the permissions list on installation, you should also provide a clear and easy-to-understand description of the data you collect and how you use it in your Terms of Service (TOS), and make users scroll your TOS and click that they have read and understood it.</p> <p>In my Android UserDataPlugin I call an AlertDialog in Java on Android and Objective C on the iPhone with a scrolling list of this data. The ability to deliver targeted advertising in your app always increases revenues. In my own experience I have found that retrieving the user's zip code is the best way of delivering targeted ads within a given zip radius of the user's zip code. The UserDataPlugin collects the data below and uploads part of that data to a server for cross referencing with various databases such as the census database, the Political Action Committee databases (PAC database from the GAO), state databases like the DMV (Dept of Motor Vehicles), etc. Of all these databases we have found the U.S. 
Government's free PAC database to provide the most detailed and intimate information on millions of the wealthiest Americans.</p> <p><img height="450px" src="http://www.swipeclouds.com/img/plugin_userdata.jpg" width="563px"></p> ## Cordova Compass Plugin ## <p>In the Compass Plugin I created for this project there are 5 different types of compasses you can select from the menu, as shown below.</p> <p><img height="450px" src="http://www.swipeclouds.com/img/plugin_compass.jpg" style="color: rgb(255, 153, 0); font-size: 29px" width="563px"></p> ## Final Thoughts ## <p>Angular is very easy to learn and work with to rapidly create Angular Mobile Apps. And, best of all, you don't need any third-party frameworks like Ionic, Onsen, NativeScript, etc. JQuery Mobile does a great job for the GUI and there are plenty of JQuery plugins you can use as is.</p> <p>Cordova is completely unnecessary since both our iPhone App and Android App have a bridge allowing full two-way communication between JavaScript and the native language code.</p> <p>If you would like to download the compiled Android apk file you can find that on my website at: <a href="http://www.swipeclouds.com/" target="_blank">http://www.swipeclouds.com</a></p> </span> <!-- End Article -->
57.12349
1,091
0.719036
eng_Latn
0.978255
0c42a31d9d3e1758f69fc953fab700fb1d5fc7f7
2,684
md
Markdown
models/public/single-human-pose-estimation-0001/README.md
APrigarina/open_model_zoo
b1ff98b64a6222cf6b5f3838dc0271422250de95
[ "Apache-2.0" ]
1,031
2020-07-16T08:30:57.000Z
2022-03-30T19:42:52.000Z
models/public/single-human-pose-estimation-0001/README.md
APrigarina/open_model_zoo
b1ff98b64a6222cf6b5f3838dc0271422250de95
[ "Apache-2.0" ]
966
2020-07-16T08:13:00.000Z
2022-03-31T18:09:18.000Z
models/public/single-human-pose-estimation-0001/README.md
APrigarina/open_model_zoo
b1ff98b64a6222cf6b5f3838dc0271422250de95
[ "Apache-2.0" ]
440
2020-07-16T12:52:50.000Z
2022-03-31T14:21:41.000Z
# single-human-pose-estimation-0001 ## Use Case and High-Level Description Single human pose estimation model based on [paper](https://arxiv.org/abs/1906.04104). ## Specification | Metric | Value | |---------------------------------------------------------------|-------------------------| | AP(coco orig) | 69.04% | | GFlops | 60.125 | | MParams | 33.165 | | Source framework | PyTorch\* | ## Inputs ### Original model Image, name: `data`, shape: `1, 3, 384, 288` in the format `B, C, H, W`, where: - `B` - batch size - `C` - number of channels - `H` - image height - `W` - image width Expected color order - `RGB`. Mean values - [123.675, 116.28, 103.53]. Scale values - [58.395, 57.12, 57.375]. ### Converted model Image, name: `data`, shape: `1, 3, 384, 288` in the format `B, C, H, W`, where: - `B` - batch size - `C` - number of channels - `H` - image height - `W` - image width Expected color order: `BGR`. ## Outputs ### Original model The net outputs a list of 6 tensors, each with shape `1, 17, 48, 36` (one heatmap per keypoint). All six outputs are necessary in order to calculate the loss during training, but for obtaining predictions and postprocessing them, only the last output is used. Each successive tensor gives more accurate predictions (in terms of the AP metric). ### Converted model The net output is a tensor with name `heatmaps` and shape `1, 17, 48, 36` (one heatmap per keypoint). ## Download a Model and Convert it into Inference Engine Format You can download models and if necessary convert them into Inference Engine format using the [Model Downloader and other automation tools](../../../tools/downloader/README.md) as shown in the examples below. 
An example of using the Model Downloader: ``` python3 <omz_dir>/tools/downloader/downloader.py --name <model_name> ``` An example of using the Model Converter: ``` python3 <omz_dir>/tools/downloader/converter.py --name <model_name> ``` ## Legal Information The original model is distributed under the [Apache License, Version 2.0](https://raw.githubusercontent.com/opencv/openvino_training_extensions/develop/LICENSE). A copy of the license is provided in [APACHE-2.0.txt](../licenses/APACHE-2.0.txt). [*] Other names and brands may be claimed as the property of others.
38.342857
404
0.600224
eng_Latn
0.967841
0c42e39a42ca4e66da2be902930bc9b3258f3a7e
1,158
md
Markdown
docs/release_notes.md
json-event-sourcing/pincette-json-streams
062344675d4bd8db2c0b0b2a8cc81f3d2ea18acc
[ "BSD-2-Clause" ]
null
null
null
docs/release_notes.md
json-event-sourcing/pincette-json-streams
062344675d4bd8db2c0b0b2a8cc81f3d2ea18acc
[ "BSD-2-Clause" ]
null
null
null
docs/release_notes.md
json-event-sourcing/pincette-json-streams
062344675d4bd8db2c0b0b2a8cc81f3d2ea18acc
[ "BSD-2-Clause" ]
1
2021-01-20T11:41:38.000Z
2021-01-20T11:41:38.000Z
# Release Notes

## 2.1.1

* Fix the issue where multiple parameter references in the same string are not replaced correctly.
* Add config and build tracing.
* Fix config injection for parameters that are objects or arrays.

## 2.1

* Add the operators `$base64Encode`, `$base64Decode`, `$uriEncode`, `$uriDecode`, `$jsonToString` and `$stringToJson`.
* Add the command-line option `-d, --directory` to the commands `doc` and `dot`. This writes their outputs to the given directory using the application name to construct the filename in it.
* Make the command-line option `-a, --application` optional for the commands `doc` and `dot`. When no application is given all the deployed applications are run.
* Add the command-line option `-g, --global` to the `dot` command. It generates a graph that connects topics and applications for all the deployed applications.
* Make it possible to add prefixes and suffixes in parameter references.
* Add the `work.maximumInstances` configuration entry, which is used to normalise the excess message lag between 0 and 100.
* Fix leader and keep-alive exception.
* Fix aggregates using `dev` as the default environment.
60.947368
189
0.762522
eng_Latn
0.997999
0c432ba78af1a6a2a37d8bcb0e26ce21e88a679b
1,819
md
Markdown
_posts/2020-06-27-moregore--grindcore-goregrind-ukraine-russian-.md
undead404/vast-space-unexplored
9ab0d40ff545dbbe9d4b5864bb94440134aeb2d4
[ "MIT" ]
null
null
null
_posts/2020-06-27-moregore--grindcore-goregrind-ukraine-russian-.md
undead404/vast-space-unexplored
9ab0d40ff545dbbe9d4b5864bb94440134aeb2d4
[ "MIT" ]
null
null
null
_posts/2020-06-27-moregore--grindcore-goregrind-ukraine-russian-.md
undead404/vast-space-unexplored
9ab0d40ff545dbbe9d4b5864bb94440134aeb2d4
[ "MIT" ]
null
null
null
---
date: 2020-06-27T10:40:55
layout: post
photo: https://res.cloudinary.com/vast-space-unexplored/image/upload/q_auto,dpr_auto,w_auto/photos/photo_1006_27-06-2020_10-40-55.jpg
tags: [grindcore, goregrind, Ukraine, Russian, 10s]
title: "MoreGore: #grindcore #goregrind #Ukraine #Russian "
---

![MoreGore: #grindcore #goregrind #Ukraine #Russian ](https://res.cloudinary.com/vast-space-unexplored/image/upload/q_auto,dpr_auto,w_auto/photos/photo_1006_27-06-2020_10-40-55.jpg)

MoreGore: [#grindcore](/tags/#grindcore) [#goregrind](/tags/#goregrind) [#Ukraine](/tags/#Ukraine) [#Russian](/tags/#Russian) [#10s](/tags/#10s)

**БільшеКрові** is a goregrind band from Kremenchuk in which the vocalist-guitarist of **Paranomia**, Karag, played drums, while that band's drummer, VeseloDrobylnyk, handled guitar, bass and vocals. (__By the way, where would one find *Paranomia*'s material other than "The Third Legion"...__) At first they intended to play brutal death-grind, but the idea evolved into degenerate goregrind with two vocalists, xTupimx and Zmorshka. It's curious how they arrived at the idea of writing track titles that imitate some kind of mental disorder (__"Графиня ордена Галимого Ебала"__ is my favorite). The album "Тупий набір ціфр" ("Dull Set of Numbers") sounds absolutely brutal, pummeling and spectacular. A pity there are no lyrics.

[BANDCAMP](https://moregore.bandcamp.com/album/dull-set-of-numbers) \| [APPLE MUSIC](https://music.apple.com/ua/album/grindcore-fastdrink/1107014326) \| [SPOTIFY](https://open.spotify.com/album/4tQyyMgID9k4r43objpiwE) \| [DEEZER](https://www.deezer.com/album/13050260?utm_source=deezer&amp;utm_content=album-13050260&amp;utm_term=1601611822_1593243494&amp;utm_medium=web) \| [YOUTUBE MUSIC](https://music.youtube.com/playlist?list=OLAK5uy_nRaMar-3K7eDguVeTuPo6YeLp4tFZLMiM) \| [RUTRACKER](https://rutracker.org/forum/viewtopic.php?t=4231533)
121.266667
541
0.784497
ukr_Cyrl
0.265119
0c43a5aa96126551a29e100b9c3e9d34ec2969a1
16,306
md
Markdown
articles/migrate/troubleshoot-appliance-discovery.md
Spellbound6666/azure-docs
72c2da0def8aa7ebe0691612a89bb70cd0c5a436
[ "CC-BY-4.0", "MIT" ]
null
null
null
articles/migrate/troubleshoot-appliance-discovery.md
Spellbound6666/azure-docs
72c2da0def8aa7ebe0691612a89bb70cd0c5a436
[ "CC-BY-4.0", "MIT" ]
1
2017-04-21T17:57:59.000Z
2017-04-21T17:58:30.000Z
articles/migrate/troubleshoot-appliance-discovery.md
Spellbound6666/azure-docs
72c2da0def8aa7ebe0691612a89bb70cd0c5a436
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Troubleshoot Azure Migrate appliance deployment and discovery
description: Get help with deploying the Azure Migrate appliance and discovering machines.
author: musa-57
ms.manager: abhemraj
ms.author: hamusa
ms.topic: troubleshooting
ms.date: 01/02/2020
---

# Troubleshoot the Azure Migrate appliance and discovery

This article helps you troubleshoot issues when deploying the [Azure Migrate](migrate-services-overview.md) appliance, and using the appliance to discover on-premises machines.

## What's supported?

[Review](migrate-appliance.md) the appliance support requirements.

## "Invalid OVF manifest entry"

If you receive the error "The provided manifest file is invalid: Invalid OVF manifest entry", do the following:

1. Verify that the Azure Migrate appliance OVA file is downloaded correctly by checking its hash value. [Learn more](https://docs.microsoft.com/azure/migrate/tutorial-assessment-vmware). If the hash value doesn't match, download the OVA file again and retry the deployment.
2. If deployment still fails, and you're using the VMware vSphere client to deploy the OVF file, try deploying it through the vSphere web client. If deployment still fails, try using a different web browser.
3. If you're using the vSphere web client and trying to deploy it on vCenter Server 6.5 or 6.7, try to deploy the OVA directly on the ESXi host:
    - Connect to the ESXi host directly (instead of vCenter Server) with the web client (https://<*host IP Address*>/ui).
    - In **Home** > **Inventory**, select **File** > **Deploy OVF template**. Browse to the OVA and complete the deployment.
4. If the deployment still fails, contact Azure Migrate support.

## Can't connect to the internet

This can happen if the appliance machine is behind a proxy.

- Make sure you provide the authorization credentials if the proxy needs them.
- If you're using a URL-based firewall proxy to control outbound connectivity, add [these URLs](migrate-appliance.md#url-access) to an allow list.
- If you're using an intercepting proxy to connect to the internet, import the proxy certificate onto the appliance VM using [these steps](https://docs.microsoft.com/azure/migrate/concepts-collector).

## Date/time synchronization error

An error about date and time synchronization (802) indicates that the server clock might be out of synchronization with the current time by more than five minutes. Change the clock time on the collector VM to match the current time:

1. Open an admin command prompt on the VM.
2. To check the time zone, run **w32tm /tz**.
3. To synchronize the time, run **w32tm /resync**.

## "UnableToConnectToServer"

If you get this connection error, you might be unable to connect to vCenter Server *Servername*.com:9443. The error details indicate that there's no endpoint listening at https://*servername*.com:9443/sdk that can accept the message.

- Check whether you're running the latest version of the appliance. If you're not, upgrade the appliance to the [latest version](https://docs.microsoft.com/azure/migrate/concepts-collector).
- If the issue still occurs in the latest version, the appliance might be unable to resolve the specified vCenter Server name, or the specified port might be wrong. By default, if the port is not specified, the collector will try to connect to port number 443.
    1. Ping *Servername*.com from the appliance.
    2. If step 1 fails, try to connect to the vCenter server using the IP address.
    3. Identify the correct port number to connect to vCenter Server.
    4. Verify that vCenter Server is up and running.
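The hash check in the "Invalid OVF manifest entry" section above can be scripted. This is a minimal sketch, not part of the Azure Migrate tooling; the file path and expected digest passed to it are placeholders you would substitute with the values from the download page.

```python
import hashlib

def file_sha256(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in 1 MB chunks so a
    large OVA image never has to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_hex):
    """Return True when the downloaded file matches the published hash."""
    return file_sha256(path) == expected_hex.lower()
```

If `verify_download` returns False, re-download the OVA before retrying the deployment, as step 1 of that section advises.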
## Error 60052/60039: Appliance might not be registered

- Error 60052, "The appliance might not be registered successfully to the Azure Migrate project" occurs if the Azure account used to register the appliance has insufficient permissions.
    - Make sure that the Azure user account used to register the appliance has at least Contributor permissions on the subscription.
    - [Learn more](https://docs.microsoft.com/azure/migrate/migrate-appliance#appliance---vmware) about required Azure roles and permissions.
- Error 60039, "The appliance might not be registered successfully to the Azure Migrate project" can occur if registration fails because the Azure Migrate project used to register the appliance can't be found.
    - In the Azure portal, check whether the project exists in the resource group.
    - If the project doesn't exist, create a new Azure Migrate project in your resource group and register the appliance again. [Learn how to](https://docs.microsoft.com/azure/migrate/how-to-add-tool-first-time#create-a-project-and-add-a-tool) create a new project.

## Error 60030/60031: Key Vault management operation failed

If you receive the error 60030 or 60031, "An Azure Key Vault management operation failed", do the following:

- Make sure the Azure user account used to register the appliance has at least Contributor permissions on the subscription.
- Make sure the account has access to the key vault specified in the error message, and then retry the operation.
- If the issue persists, contact Microsoft support.
- [Learn more](https://docs.microsoft.com/azure/migrate/migrate-appliance#appliance---vmware) about the required Azure roles and permissions.

## Error 60028: Discovery couldn't be initiated

Error 60028: "Discovery couldn't be initiated because of an error. The operation failed for the specified list of hosts or clusters" indicates that discovery couldn't be started on the hosts listed in the error because of a problem in accessing or retrieving VM information. The rest of the hosts were successfully added.

- Add the hosts listed in the error again, using the **Add host** option.
- If there's a validation error, review the remediation guidance to fix the errors, and then try the **Save and start discovery** option again.

## Error 60025: Azure AD operation failed

Error 60025: "An Azure AD operation failed.
The error occurred while creating or updating the Azure AD application" occurs when the Azure user account used to initiate the discovery is different from the account used to register the appliance. Do one of the following:

- Ensure that the user account initiating the discovery is the same as the one used to register the appliance.
- Provide Azure Active Directory application access permissions to the user account for which the discovery operation is failing.
- Delete the resource group previously created for the Azure Migrate project. Create another resource group to start again.
- [Learn more](https://docs.microsoft.com/azure/migrate/migrate-appliance#appliance---vmware) about Azure Active Directory application permissions.

## Error 50004: Can't connect to host or cluster

Error 50004: "Can't connect to a host or cluster because the server name can't be resolved. WinRM error code: 0x803381B9" might occur if the Azure DNS service for the appliance can't resolve the cluster or host name you provided.

- If you see this error on the cluster, specify the FQDN of the cluster.
- You might also see this error for hosts in a cluster. This indicates that the appliance can connect to the cluster, but the cluster returns host names that aren't FQDNs. To resolve this error, update the hosts file on the appliance by adding a mapping of the IP address and host names:
    1. Open Notepad as an admin.
    2. Open the C:\Windows\System32\Drivers\etc\hosts file.
    3. Add the IP address and host name in a row. Repeat for each host or cluster where you see this error.
    4. Save and close the hosts file.
    5. Check whether the appliance can connect to the hosts, using the appliance management app. After 30 minutes, you should see the latest information for these hosts in the Azure portal.

## Discovered VMs not in portal

If discovery state is "Discovery in progress", but you don't yet see the VMs in the portal, wait a few minutes:

- It takes around 15 minutes for a VMware VM.
- It takes around two minutes for each added host for Hyper-V VM discovery.

If you wait and the state doesn't change, select **Refresh** on the **Servers** tab. This should show the count of the discovered servers in Azure Migrate: Server Assessment and Azure Migrate: Server Migration.

If this doesn't work and you're discovering VMware servers:

- Verify that the vCenter account you specified has permissions set correctly, with access to at least one VM.
- Azure Migrate can't discover VMware VMs if the vCenter account has access granted at vCenter VM folder level. [Learn more](tutorial-assess-vmware.md#set-the-scope-of-discovery) about scoping discovery.

## VM data not in portal

If discovered VMs don't appear in the portal or if the VM data is outdated, wait a few minutes. It takes up to 30 minutes for changes in discovered VM configuration data to appear in the portal. It may take a few hours for changes in application data to appear. If there's no data after this time, try refreshing, as follows:

1. In **Servers** > **Azure Migrate Server Assessment**, select **Overview**.
2. Under **Manage**, select **Agent Health**.
3. Select **Refresh agent**.
4. Wait for the refresh operation to complete. You should now see up-to-date information.

## Deleted VMs appear in portal

If you delete VMs and they still appear in the portal, wait 30 minutes. If they still appear, refresh as described above.

## Common app discovery errors

Azure Migrate supports discovery of applications, roles, and features, using Azure Migrate: Server Assessment. App discovery is currently supported for VMware only. [Learn more](how-to-discover-applications.md) about the requirements and steps for setting up app discovery.

Typical app discovery errors are summarized in the table.

**Error** | **Cause** | **Action**
--- | --- | ---
10000: "Unable to discover the applications installed on the server". | This might occur if the machine operating system isn't Windows or Linux. | Only use app discovery for Windows/Linux.
10001: "Unable to retrieve the applications installed on the server". | Internal error - some missing files in the appliance. | Contact Microsoft Support.
10002: "Unable to retrieve the applications installed on the server". | The discovery agent on the appliance might not be working properly. | If the issue doesn't resolve itself within 24 hours, contact support.
10003: "Unable to retrieve the applications installed on the server". | The discovery agent on the appliance might not be working properly. | If the issue doesn't resolve itself within 24 hours, contact support.
10004: "Unable to discover installed applications for <Windows/Linux> machines." | Credentials to access <Windows/Linux> machines weren't provided in the appliance. | Add a credential to the appliance that has access to the <Windows/Linux> machines.
10005: "Unable to access the on-premises server". | The access credentials might be wrong. | Update the appliance credentials and make sure you can access the relevant machine with them.
10006: "Unable to access the on-premises server". | This might occur if the machine operating system isn't Windows or Linux. | Only use app discovery for Windows/Linux.
10007: "Unable to process the metadata retrieved". | This internal error occurred while trying to deserialize JSON. | Contact Microsoft Support for a resolution.
9000/9001/9002: "Unable to discover the applications installed on the server". | VMware tools might not be installed or might be corrupted. | Install/reinstall VMware tools on the relevant machine, and check that it's running.
9003: "Unable to discover the applications installed on the server". | This might occur if the machine operating system isn't Windows or Linux. | Only use app discovery for Windows/Linux.
9004: "Unable to discover the applications installed on the server". | This might happen if the VM is powered off. | For discovery, make sure the VM is on.
9005: "Unable to discover the applications installed on the VM." | This might occur if the machine operating system isn't Windows or Linux. | Only use app discovery for Windows/Linux.
9006/9007: "Unable to retrieve the applications installed on the server". | The discovery agent on the appliance might not be working properly. | If the issue doesn't resolve itself within 24 hours, contact support.
9008: "Unable to retrieve the applications installed on the server". | Might be an internal error. | If the issue doesn't resolve itself within 24 hours, contact support.
9009: "Unable to retrieve the applications installed on the server". | Can occur if the Windows User Account Control (UAC) settings on the server are restrictive, and prevent discovery of installed applications. | Search for 'User Account Control' settings on the server, and configure the UAC setting on the server to one of the lower two levels.
9010: "Unable to retrieve the applications installed on the server". | Might be an internal error. | If the issue doesn't resolve itself within 24 hours, contact support.
9011: "File to download from guest is not found on the guest VM." | The issue can occur due to an internal error. | The issue should automatically get resolved in 24 hours. If the issue still persists, please contact Microsoft Support.
9012: "Result file contents are empty." | The issue can occur due to an internal error. | The issue should automatically get resolved in 24 hours. If the issue still persists, please contact Microsoft Support.
9013: "A new temporary profile is created for every login to the VMware VM." | A new temporary profile is created for every login into the VM. | Ensure the user name provided in the guest VM credentials is in UPN format.
9015: "Unable to connect to VMware VMs due to insufficient privileges on vCenter." | The Guest Operations role is not enabled on the vCenter user account. | Ensure the Guest Operations role is enabled on the vCenter user account.
9016: "Unable to connect to VMware VMs as the guest operations agent is out of date." | VMware tools is not properly installed or is not up to date. | Ensure that VMware tools is properly installed and up to date.
9017: "File with discovered metadata is not found on the VM." | The issue can occur due to an internal error. | Contact Microsoft Support for a resolution.
9018: "PowerShell is not installed in the Guest VMs." | PowerShell is not available in the guest VM. | Install PowerShell in the guest VM.
9019: "Unable to discover due to guest VM operation failures." | A VMware guest operation failed on the VM. | Ensure that the VM credentials are valid and that the user name provided in the guest VM credentials is in UPN format.
9020: "File creation permission is denied." | The role associated with the user, or the group policy, is restricting the user from creating the file in the folder. | Check that the guest user provided has create permission for the file in the folder. See **Notifications** in Server Assessment for the name of the folder.
9021: "File create permission is denied in folder System Temp Path." | The VMware tools version on the VM is unsupported. | Upgrade your VMware tools version above 10.2.0.
9022: "Get WMI object access is denied." | The role associated with the user, or the group policy, is restricting the user from accessing the WMI object. | Please contact Microsoft Support.
9023: "SystemRoot environment variable value is empty." | Not known. | Please contact Microsoft Support.
9024: "TEMP environment variable value is empty." | Not known. | Please contact Microsoft Support.
9025: "PowerShell is corrupt in the Guest VMs." | Not known. | Reinstall PowerShell in the guest VM and check that PowerShell can be run on the guest VM.
8084: "Unable to discover applications due to VMware error: <Exception from VMware>." | The Azure Migrate appliance uses VMware APIs to discover applications. This issue can happen if an exception is thrown by vCenter Server while trying to discover applications. The fault message from VMware is displayed in the error message shown in the portal. | Search for the message in the [VMware documentation](https://pubs.vmware.com/vsphere-51/topic/com.vmware.wssdk.apiref.doc/index-faults.html), and follow the steps to fix it. If you can't fix it, contact Microsoft support.

## Next steps

Set up an appliance for [VMware](how-to-set-up-appliance-vmware.md), [Hyper-V](how-to-set-up-appliance-hyper-v.md), or [physical servers](how-to-set-up-appliance-physical.md).
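For the hosts-file workaround in the "Error 50004" section above, each entry is simply an IP address followed by one or more names for that host. A hypothetical pair of entries might look like this (the addresses and host names below are made up for illustration, not taken from this article):

```
10.0.0.21   esxihost01.contoso.local   esxihost01
10.0.0.22   esxihost02.contoso.local   esxihost02
```

Listing both the FQDN and the short name on the same line lets either form resolve from the appliance.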
91.606742
561
0.779958
eng_Latn
0.997222
0c443bed913f17e1dbdde9e3dd4fda940703b932
18,636
md
Markdown
source/develop/infra/testing/ovirttestday3.1.html.md
didib/ovirt-site
c80907faf68651ee931ae7de3f478fc0cab8c2ef
[ "MIT" ]
null
null
null
source/develop/infra/testing/ovirttestday3.1.html.md
didib/ovirt-site
c80907faf68651ee931ae7de3f478fc0cab8c2ef
[ "MIT" ]
null
null
null
source/develop/infra/testing/ovirttestday3.1.html.md
didib/ovirt-site
c80907faf68651ee931ae7de3f478fc0cab8c2ef
[ "MIT" ]
null
null
null
---
title: oVirtTestDay3.1
authors: aglitke, gcheresh, jhernand, mburns, mkolesni, rmiddle, rvaknin, snmishra, ykaul
wiki_title: Testing/OvirtTestDay3.1
wiki_revision_count: 33
wiki_last_updated: 2012-06-15
---

# oVirt Test Day 3.1

## Objective

The purpose of the test days initiative is to accomplish the following goals:

* Get multiple engineers and stakeholders to have a hands-on opportunity to learn more about oVirt functionality and features.
* Improve the quality of oVirt, towards its second release (aka '3.1').
* While learning about the project, the stakeholders can come up with their own test cases, in different categories:
    - General/Project installation
    - Storage
    - Networking
    - APIs, SDK, CLI
    - Spice
    - User Interface
    - Tools

## Participants

Test Days are open to anyone. If you have your own setup we will provide all the software packages and the required information. Please refer to the "What to do as a participant" section below; if you're willing to participate, please add yourself to the table below:

| Name | General | Storage | Networking | APIs | Spice | User Interface | Tools | Distribution |
|----------|---------|---------|------------|------|-------|----------------|-------|--------------|
| snmishra | V | | Basic | | | V | | RHEL 6.2 |
| rvaknin | | | | SDK | | V | | Fedora 16 |
| mkolesni | | | Advanced | | | | | Fedora 17 |
| gcheresh | | | Basic | | | V | | Fedora 17 |
| rmiddle | V | | Basic | | | V | | Fedora 17 |

## Test Dates

The overall test dates are spread across multiple durations which are driven by the beta releases from engineering. The following is the list of scheduled test days:

* Jan 18th, 2012 - First Release (3.0)
* Jun 14th, 2012 - Second Release (3.1)

## Execution Plan and Guidelines

The following is the list of categories which we would like to focus on. However the scope is not limited and they are guidelines only. Feel free to extend it to the limitations of the software.
### General You need at least two physical servers to install and configure a basic yet complete oVirt environment with shared storage to exercise the following: | Scenario | Bugs | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------| | Setup oVirt engine using either Active Directory or Local IPA, two hosts configured as hypervisors (Fedora / Ovirt-Node / other) with power management (Storage Domains - Data Domain / ISO Domain and Export Domain) | | | Use ISO Uploader to populate images of OS and tools | | | Basic Network Configuration | | | Create virtual machines and assign them to users | | | Migrate Virtual Machines between the hypervisors | | | Collect log file using the log collector tool | | ### Configuration | Scenario | Bugs | |---------------------------------------------------------------------------------------------------------------------------------------------------------------------|------| | Configure high availability for virtual machines which are running mission critical workloads, so they will be restarted on another host if hardware failure occurs | | | Use the multi-level administration feature to assign different levels of user permissions | | | Live Migration Scenarios | | ### Storage | Scenario | Bugs | |---------------------------------------------------------------------------------------------------------------------|------| | Use the General configuration as a base configuration | | | Create different types of storage domains (NFS, ISCSI, FC, local storage) with 2 or more data domains | | | Install at least 2 VMs on each of the Data Centers | | | Move the master domain to a different domain within the Data Center | | | Export one of the installed VMs, delete it, import it to another Data Center | | | Create a template from one of the VMs and then create a new VM 
based on this template | | | Move the newly created VM to another data domain | | | Create several snapshots from a VM (Each time change something in the guest) | | | Restore a previous snapshot | | | Storage Failovers | | | Host Failovers | | | Power User Portal provides power users the ability to create and manage virtual machines from the power user portal | | | High Availability scenarios provides instructions to configure virtual machine and host high availability | | ### Network * Base config - single NIC, bridge on top, VMs attached to NIC * Advanced configurations: ![](Vlan bonding.jpg "fig:Vlan bonding.jpg") make sure each of the configs can: * survive a reboot * test network at both host and VM level * ping and transfer large amounts of data (2Gb size files should be enough) * remain operational over time (1hr of uptime should be sufficient for the basic testing) ### APIs by default we'll be using the webadmin as our API for testing on this section we'll try to have default deployment with the different APIs | Scenario | Webadmin | UserPortal | Rest | Python-SDK | CLI | |--------------------------------|----------|------------|------|------------|-----| | Create a data-center | | | | V | | | Create a cluster | | | | V | | | Update cluster | | | | V | | | Install a host | | | | V | | | Create a storage domain on DC | | | | V | | | Attach export/ISO domain to DC | | | | V | | | Create vm | | | | V | | | Delete vm | | | | V | | | Import vm | | | | V | | | Start/hibernate/resume/stop vm | | | | V | | | Create a snapshot to vm | | | | V | | | Create a template from vm | | | | V | | | Create vm from template | | | | V | | | Sign out | | | | V | | | General | | | | V | | Python API of the above scenarios can be found in: <http://www.ovirt.org/wiki/Testing/PythonApi> ### Spice For details about configuration check <http://www.ovirt.org/wiki/Testing/Spice> | Scenario | Bugs | 
|------------------------------------------------------------------------------------------------------------------------|------| | Install Windows VM and a Linux VM with Guest Tools (QXL graphic driver and spice vdagent) | | | Assign user to these vms, login to a user portal, from your client machine, and connect to it using the Spice protocol | | | Try to watch a clip via YouTube or any other web based video (with QXL driver installed on VM) | | | Try to watch a Local movie (with QXL driver installed on VM) | | | Try to use client mouse mode and clipboard share between client and VM (with spice agent installed on VM) | | | Install AutoCAD or any other graphic application a try to work with it (with QXL driver installed on VM) | | ### User Interface Webadmin: BZ#832046, BZ#832064, BZ#832128 ### Node Pre-built node available [here](http://ovirt.org/releases/beta/binary/ovirt-node-iso-2.4.0-1.1.fc17.iso). Please check [Node_Release_Notes](Node_Release_Notes) prior to testing for information on current known issues | Scenario | Bugs | |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------| | Boot image and install using the TUI on a single disk | | | Boot image and install using the TUI on multiple disks | | | Boot image and install using autoinstall parameters from [here](http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Hypervisor_Deployment_Guide/sect-Deployment_Guide-Installing_Red_Hat_Enterprise_Virtualization_Hypervisors-RHEV_Hypervisor_Kernel_Parameters_and_Automated_Installation.html) | | | Configure Networking using the TUI | | | Configure ssh password auth and attempt to login using ssh | | | Configure remote logging | | | Configure kdump | | | Configure iscsi initiator name | | | 
Configure collectd | |
| Configure oVirt Engine hostname and port | |
| Configure admin password on oVirt Engine screen and add host through ovirt engine | |
| Once registered, run vms on top of ovirt-node | |
| | |

## oVirt Information Details

Beta RPMs for Fedora 17 are available from <http://www.ovirt.org/releases/beta/fedora/17/>. In order to use them, create a `/etc/yum.repos.d/ovirt-engine-beta.repo` file with the following content:

    [ovirt-beta]
    name=ovirt-beta
    baseurl=http://ovirt.org/releases/beta/fedora/17
    enabled=1
    skip_if_unavailable=1
    gpgcheck=0

Then run `yum install ovirt-engine`.

***Note:** Take into account that if this repository is not configured properly you will be installing version 3.0 of the engine (it is part of Fedora 17), which is not the subject of this test day.*

Please refer to the following documents for more information on hardware requirements, installation procedure and software download locations:

* <http://ovirt.org/wiki/Installing_ovirt_from_rpm>
* <http://ovirt.org/wiki/Installing_ovirt-node_from_rpm>

Please refer to the following document for the oVirt installation guide, bits location and admin guide:

* <http://ovirt.org/wiki/Documentation>

Please refer to the following document for the 'virt-to-date' tool, a simple tool for setting up a local yum repo with all required packages and easy deployment:

* <http://ovirt.org/wiki/virt-to-date>

In case you would like to test a product with a new test case, there is a template to be used for creating test cases. Please copy this template for the test case, and update the link in this document to point to the results table below. It is not necessary that the person who is writing the test case will also be the person executing the test case; please make sure the instructions are explicit enough that anyone who may want to participate in the test day can follow them and execute it.
## What to do as a participant

* If you already have the hardware, verify that it meets the hardware requirements; refer to the information details section below.
* Update the Participants section.
* Accomplish the goals set in the objective section, run the tests, update the test matrix.
* Running into any issues - contact any participant in the list.

## Bug Reporting

* ovirt - <https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt>
* Spice - <https://bugs.freedesktop.org/> under Spice product
* VDSM - <https://bugzilla.redhat.com/show_bug.cgi?id=831998>

Track bug for the release - <https://bugzilla.redhat.com/show_bug.cgi?id=822145>

## Miscellaneous

IRC - #ovirt at irc.oftc.net

## Current Issues

* VM state shown as Non-responsive in UI even though VM is Up according to vdsm
* [Bug 832158 - ISO List is not refreshed after new ISO is uploaded](https://bugzilla.redhat.com/show_bug.cgi?id=832158)
* Fixed in ovirt-node-iso-2.4.0-1.0.fc17.iso - ovirt-node-iso network interface config not working. New build with a fix expected shortly.
* [Bug 769571 - Console icon doesn't get reenabled when spice console is closed](https://bugzilla.redhat.com/769571)
* ovirt-node-iso [Bug 832196 - Getting directory not found error.](https://bugzilla.redhat.com/show_bug.cgi?id=832196)
* [Bug 832199 - vdsmd init script times out due to lengthy semanage operation](https://bugzilla.redhat.com/show_bug.cgi?id=832199)
* Workaround for the sanlock error "Readonly leases are not supported.": comment out `lock_manager="sanlock"` in /etc/libvirt/qemu.conf and restart libvirt and vdsm
83.569507
494
0.406042
eng_Latn
0.978778
0c4462508ded9e473f8e65bf5b187d2d7055091f
22
md
Markdown
README.md
q1178249326/pubgradar
6259ad36f9f144f9bb5c2c594be6299cbc4d1875
[ "MIT" ]
1
2018-06-20T08:14:20.000Z
2018-06-20T08:14:20.000Z
README.md
q1178249326/pubgradar
6259ad36f9f144f9bb5c2c594be6299cbc4d1875
[ "MIT" ]
1
2018-05-10T08:37:30.000Z
2018-05-10T08:37:30.000Z
README.md
q1178249326/pubgradar
6259ad36f9f144f9bb5c2c594be6299cbc4d1875
[ "MIT" ]
5
2018-05-10T09:43:56.000Z
2019-05-17T03:51:42.000Z
# pubgradar pubgradar
7.333333
11
0.818182
nno_Latn
0.650207
0c4522df847eb6d4eea0bba4de4180a9b808a534
728
md
Markdown
index.md
ngeraci/pca-python
59b23cb0909132ad196f433055d324a6e0c59196
[ "CC-BY-4.0" ]
null
null
null
index.md
ngeraci/pca-python
59b23cb0909132ad196f433055d324a6e0c59196
[ "CC-BY-4.0" ]
null
null
null
index.md
ngeraci/pca-python
59b23cb0909132ad196f433055d324a6e0c59196
[ "CC-BY-4.0" ]
1
2019-03-21T20:26:24.000Z
2019-03-21T20:26:24.000Z
--- layout: lesson root: . --- This lesson introduces Python, specifically using the pandas library to work with metadata. > ## Prerequisites > > Learners should understand the concepts of files and directories (including the working directory) before tackling this lesson. {: .prereq} ### Getting Started To get started, follow the installation instructions in the "[Setup](setup/)" tab to install Python 3 on your computer. You'll also need to download the file **[python-lesson.zip](https://raw.githubusercontent.com/ngeraci/pca-python/gh-pages/data/python-lesson.zip)** from GitHub to your desktop and extract it there (once you have unzipped/extracted the file, you should end up with a folder called “python-lesson”).
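As a first taste of what the lesson builds toward, a minimal pandas session might look like the following. Note that the column names and values here are invented for illustration; the actual lesson works with the files in the python-lesson folder:

```python
import pandas as pd

# Build a tiny metadata table in memory (illustrative only; the real
# lesson reads files extracted from python-lesson.zip).
records = pd.DataFrame({
    "title": ["Annual report", "Field notes", "Oral history"],
    "year": [1995, 2003, 2003],
})

# Inspect the table's dimensions, then filter rows by a column value --
# two of the first operations a pandas lesson typically introduces.
print(records.shape)  # (3, 2)
print(records[records["year"] == 2003]["title"].tolist())
```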
45.5
417
0.769231
eng_Latn
0.997022
0c45978a5214e23773d1e36555675df027e84f60
16,233
md
Markdown
mr-docs/legal/guides-preview.md
Mamaylya/dynamics-365-mixed-reality
c53d1558f51ec1479bdd7e820b183a299ab738cb
[ "CC-BY-4.0", "MIT" ]
1
2019-12-04T01:36:18.000Z
2019-12-04T01:36:18.000Z
mr-docs/legal/guides-preview.md
Mamaylya/dynamics-365-mixed-reality
c53d1558f51ec1479bdd7e820b183a299ab738cb
[ "CC-BY-4.0", "MIT" ]
null
null
null
mr-docs/legal/guides-preview.md
Mamaylya/dynamics-365-mixed-reality
c53d1558f51ec1479bdd7e820b183a299ab738cb
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- author: ReneeW-CPub description: Microsoft Software License Terms MICROSOFT DYNAMICS 365 GUIDES PREVIEW ms.author: renwe ms.date: 02/24/2019 ms.service: crm-online ms.topic: article title: Microsoft Software License Terms MICROSOFT DYNAMICS 365 GUIDES PREVIEW robots: noindex, nofollow --- # MICROSOFT SOFTWARE LICENSE TERMS<br>MICROSOFT DYNAMICS 365 GUIDES PREVIEW These license terms are an agreement between you and Microsoft Corporation (or one of its affiliates). They apply to the Preview software named above and any Microsoft services or software updates (the “Preview”) (except to the extent such services or updates are accompanied by new or additional terms, in which case those different terms apply prospectively and do not alter your or Microsoft’s rights relating to pre-updated software or services). IF YOU COMPLY WITH THESE LICENSE TERMS, YOU HAVE THE RIGHTS BELOW. BY USING THE SOFTWARE, YOU ACCEPT THESE TERMS. **1. INSTALLATION AND USE RIGHTS.** > **a) General.** You may install and run one instance of the software on Windows devices you own or control in order to access and interact with the software. Your access and use of this software must comply with these terms. You must uninstall the software after the trial period ends unless you have purchased a subscription covering the device on which the software is installed. > **b) Limitations and Exclusions.** The Microsoft Online Services Terms (see [http://microsoftvolumelicensing.com/](http://microsoftvolumelicensing.com/)) and any Microsoft volume licensing agreement you have entered into with Microsoft, or any other applicable agreement (collectively, the “Microsoft Licensing Agreement” or “MLA”), and any references or statements made in the Trust Center do not apply to the Preview, including, but not limited to, data security, GDPR, and professional services related terms. 
Microsoft also may, in its sole discretion, limit the: (a) rate at which the Preview may be called or made available; and (b) the amount of data that may be uploaded to, or served from, the Preview (all of the foregoing being forms of "Throttling"). Microsoft may perform this Throttling globally across the Preview. You will not take steps to circumvent or disable any technical measures Microsoft may put in place to enforce Throttling. The Preview is subject to reduced or different security, compliance and privacy commitments. Any data provided to Microsoft through your use of the software and the Preview may be transferred, stored, and processed in the United States, or in any other country where Microsoft or its subcontractors operate. >> **Important** >> If you are the administrator for the organization and have authority to act on behalf of and to bind the organization, then by installing or using the Preview, you consent to allow authorized users of Microsoft Dynamics 365 online services to activate, configure and enable certain functionality which transmits Customer Data (as defined in the MLA) to external systems. Please consult the feature technical documentation available at: [https://docs.microsoft.com/en-us/dynamics365/mixed-reality/guides/](https://docs.microsoft.com/en-us/dynamics365/mixed-reality/guides/). > **c) Required Microsoft Applications.** The software requires a license to Microsoft Dynamics 365 CDS and Microsoft Power BI Pro and a preview license to access and use such applications may be available to you in connection with this Preview. These license terms apply to those applications, unless other license terms are provided with your preview or paid subscription to such other Microsoft applications, in which case those other terms govern. Some functionality may require additional licenses for other Microsoft software or applications. 
You will not be able to use this functionality unless you have a separate license for such Microsoft software or application. **2. TIME-LIMITED SUBSCRIPTION.** > **a) Period.** Unless Microsoft extends the term by notice in writing, this agreement is effective on your acceptance and terminates on the earlier of (i) 30 days following first availability of a commercial release of the software or (ii) upon termination of this agreement by Microsoft. Microsoft may extend this agreement in its discretion. > **b) Access to data.** You may not be able to access data used in the software when the preview term ends. **3. DATA COLLECTION.** The software may collect information about you and your use of the software and send that to Microsoft. Microsoft may use this information to provide services and improve Microsoft’s products and services. Your opt-out rights, if any, are described in the product documentation. Some features in the software may enable collection of data from users of your applications that access or use the software. If you use these features to enable data collection in your applications, you must comply with applicable law, including getting any required user consent, and maintain a prominent privacy policy that accurately informs users about how you use, collect, and share their data. You can learn more about Microsoft’s data collection and use in the product documentation and the Microsoft Privacy Statement at [https://go.microsoft.com/fwlink/?LinkId=521839](https://go.microsoft.com/fwlink/?LinkId=521839). You agree to comply with all applicable provisions of the Microsoft Privacy Statement. **4. DATA TRANSFERS.** Customer Data and Personal Data (as defined in the MLA) that Microsoft processes on your behalf may be transferred to, and stored and processed in, the United States or any other country in which Microsoft or its subprocessors operate. 
You appoint Microsoft to perform any such transfer of Customer Data and Personal Data to any such country and to store and process Customer Data and Personal Data to provide the Preview. Microsoft will abide by the requirements of European Economic Area and Swiss data protection law regarding the collection, use, transfer, retention, and other processing of Personal Data from the European Economic Area and Switzerland. All transfers of Personal Data to a third country or an international organization will be subject to appropriate safeguards as described in Article 46 of the GDPR ((EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of Personal Data and on the free movement of such data, and repealing Directive 95/46/EC) and such transfers and safeguards will be documented according to Article 30(2) of the GDPR. In addition, Microsoft is certified to the EU-U.S. and Swiss-U.S. Privacy Shield Frameworks and the commitments they entail. Microsoft agrees to notify you if it makes a determination that it can no longer meet its obligation to provide the same level of protection as is required by the Privacy Shield principles. **5. SCOPE OF LICENSE.** The software is licensed, not sold. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you will not (and have no right to): > a) work around any technical limitations in the software that only allow you to use it in certain ways; > b) reverse engineer, decompile or disassemble the software; > c) remove, minimize, block, or modify any notices of Microsoft or its suppliers in the software; > d) use the software in any way that is against the law or to create or propagate malware; or > e) share, publish, distribute, or lend the software, provide the software as a stand-alone hosted solution for others to use, or transfer the software or this agreement to any third party. **6. 
EXPORT RESTRICTIONS.** You must comply with all domestic and international export laws and regulations that apply to the software, which include restrictions on destinations, end users, and end use. For further information on export restrictions, visit [http://aka.ms/exporting](http://aka.ms/exporting). **7. SUPPORT SERVICES.** Microsoft is not obligated under this agreement to provide any support services for the software. Any support provided is “as is”, “with all faults”, and without warranty of any kind and Microsoft may cease providing support (if any) at any time, without notice. If Microsoft elects to provide support, Microsoft will solely own any enhancements or improvements to the Preview arising from provision of such support services. **8. UPDATES.** The software may periodically check for updates, and download and install them for you. You may obtain updates only from Microsoft or authorized sources. Microsoft may need to update your system to provide you with updates. You agree to receive these automatic updates without any additional notice. Updates may not include or support all existing software features, services, or peripheral devices. **9. FEEDBACK.** “Feedback” is all suggestions, comments, input, ideas, or know-how, in any form, that you provide to Microsoft about the Preview. If you give Feedback to Microsoft, you give to Microsoft, without charge, the right to use, share and commercialize your Feedback in any way and for any purpose. You will not give Feedback that is subject to a license that requires Microsoft to license its software or documentation to third parties because Microsoft includes your Feedback in them. These rights survive this agreement. **10. ENTIRE AGREEMENT.** This agreement, and any other terms Microsoft may provide for supplements, updates, or third-party applications, is the entire agreement for the software. **11. 
APPLICABLE LAW AND PLACE TO RESOLVE DISPUTES.** If you acquired the software in the United States or Canada, the laws of the state or province where you live (or, if a business, where your principal place of business is located) govern the interpretation of this agreement, claims for its breach, and all other claims (including consumer protection, unfair competition, and tort claims), regardless of conflict of laws principles. If you acquired the software in any other country, its laws apply. If U.S. federal jurisdiction exists, you and Microsoft consent to exclusive jurisdiction and venue in the federal court in King County, Washington for all disputes heard in court. If not, you and Microsoft consent to exclusive jurisdiction and venue in the Superior Court of King County, Washington for all disputes heard in court. **12. CONSUMER RIGHTS; REGIONAL VARIATIONS.** This agreement describes certain legal rights. You may have other rights, including consumer rights, under the laws of your state, province, or country. Separate and apart from your relationship with Microsoft, you may also have rights with respect to the party from which you acquired the software. This agreement does not change those other rights if the laws of your state, province, or country do not permit it to do so. For example, if you acquired the software in one of the below regions, or mandatory country law applies, then the following provisions apply to you: > **a) Australia.** You have statutory guarantees under the Australian Consumer Law and nothing in this agreement is intended to affect those rights. > **b) Canada.** If you acquired this software in Canada, you may stop receiving updates by turning off the automatic update feature, disconnecting your device from the Internet (if and when you re-connect to the Internet, however, the software will resume checking for and installing updates), or uninstalling the software. 
The product documentation, if any, may also specify how to turn off updates for your specific device or software. > **c) Germany and Austria.** >> **i.Warranty.** The properly licensed software will perform substantially as described in any Microsoft materials that accompany the software. However, Microsoft gives no contractual guarantee in relation to the licensed software. >> **ii.Limitation of Liability.** In case of intentional conduct, gross negligence, claims based on the Product Liability Act, as well as, in case of death or personal or physical injury, Microsoft is liable according to the statutory law. >>Subject to the foregoing clause ii., Microsoft will only be liable for slight negligence if Microsoft is in breach of such material contractual obligations, the fulfillment of which facilitate the due performance of this agreement, the breach of which would endanger the purpose of this agreement and the compliance with which a party may constantly trust in (so-called "cardinal obligations"). In other cases of slight negligence, Microsoft will not be liable for slight negligence. **13. DISCLAIMER OF WARRANTY. THE SOFTWARE IS LICENSED “AS IS.” YOU BEAR THE RISK OF USING IT. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. TO THE EXTENT PERMITTED UNDER APPLICABLE LAWS, MICROSOFT EXCLUDES ALL IMPLIED WARRANTIES, INCLUDING MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.** **14. LIMITATION ON AND EXCLUSION OF DAMAGES. IF YOU HAVE ANY BASIS FOR RECOVERING DAMAGES DESPITE THE PRECEDING DISCLAIMER OF WARRANTY, YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. 
YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.** >**This limitation applies to (a) anything related to the software, services, content (including code) on third party Internet sites, or third party applications; and (b) claims for breach of contract, warranty, guarantee, or condition; strict liability, negligence, or other tort; or any other claim; in each case to the extent permitted by applicable law.** >**It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your state, province, or country may not allow the exclusion or limitation of incidental, consequential, or other damages.** **Please note: As this software is distributed in Canada, some of the clauses in this agreement are provided below in French.** **Remarque: Ce logiciel étant distribué au Canada, certaines des clauses dans ce contrat sont fournies ci-dessous en français.** **EXONÉRATION DE GARANTIE. Le logiciel visé par une licence est offert « tel quel ». Toute utilisation de ce logiciel est à votre seule risque et péril. Microsoft n’accorde aucune autre garantie expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection des consommateurs, que ce contrat ne peut modifier. La ou elles sont permises par le droit locale, les garanties implicites de qualité marchande, d’adéquation à un usage particulier et d’absence de contrefaçon sont exclues.** **LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES DOMMAGES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de dommages directs uniquement à hauteur de 5,00 $ US. 
Vous ne pouvez prétendre à aucune indemnisation pour les autres dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de bénéfices.** **Cette limitation concerne:** >**- tout ce qui est relié au logiciel, aux services ou au contenu (y compris le code) figurant sur des sites Internet tiers ou dans des programmes tiers; et** >**- les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité stricte, de négligence ou d’une autre faute dans la limite autorisée par la loi en vigueur.** **Elle s’applique également, même si Microsoft connaissait ou devrait connaître l’éventualité d’un tel dommage. Si votre pays n’autorise pas l’exclusion ou la limitation de responsabilité pour les dommages indirects, accessoires ou de quelque nature que ce soit, il se peut que la limitation ou l’exclusion ci-dessus ne s’appliquera pas à votre égard.** **EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois de votre pays si celles-ci ne le permettent pas.**
84.108808
966
0.794369
eng_Latn
0.994534
0c461d054f2d68e5fc205d4dc661147abfd393f5
2,663
md
Markdown
README.md
thinhnguyen112000/BTTR
8dd28d898e3ef8b5f16dcb8765181b59b8193b85
[ "MIT" ]
63
2021-05-13T20:58:37.000Z
2022-03-30T01:05:34.000Z
README.md
thinhnguyen112000/BTTR
8dd28d898e3ef8b5f16dcb8765181b59b8193b85
[ "MIT" ]
9
2021-05-20T02:43:32.000Z
2022-03-24T09:54:20.000Z
README.md
thinhnguyen112000/BTTR
8dd28d898e3ef8b5f16dcb8765181b59b8193b85
[ "MIT" ]
27
2021-05-13T20:11:50.000Z
2022-03-28T07:33:16.000Z
<div align="center">

# Handwritten Mathematical Expression Recognition with Bidirectionally Trained Transformer

[![arXiv](https://img.shields.io/badge/arXiv-2105.02412-b31b1b.svg)](https://arxiv.org/abs/2105.02412) [![Springer](https://badgen.net/badge/Springer/BTTR-paper/purple)](https://link.springer.com/chapter/10.1007%2F978-3-030-86331-9_37)

</div>

## Description

Convert offline handwritten mathematical expressions to LaTeX sequences using a bidirectionally trained transformer.

## How to run

First, install the dependencies:

```bash
# clone project
git clone https://github.com/Green-Wood/BTTR

# install project
cd BTTR
conda create -y -n bttr python=3.7
conda activate bttr
conda install --yes -c pytorch pytorch=1.7.0 torchvision cudatoolkit=<your-cuda-version>
pip install -e .
```

Next, navigate to any file and run it. It may take **6~7** hours to converge on **4** GPUs using DDP.

```bash
# module folder
cd BTTR

# train bttr model using 4 gpus and ddp
python train.py --config config.yaml
```

Single-GPU users can change the `config.yaml` file to:

```yaml
gpus: 1
# gpus: 4
# accelerator: ddp
```

## Imports

This project is set up as a package, which means you can easily import any file into any other file like so:

```python
from bttr.datamodule import CROHMEDatamodule
from bttr import LitBTTR
from pytorch_lightning import Trainer

# model
model = LitBTTR()

# data
dm = CROHMEDatamodule(test_year=test_year)

# train
trainer = Trainer()
trainer.fit(model, datamodule=dm)

# test using the best model!
trainer.test(datamodule=dm)
```

## Note

Metrics used in validation are not accurate. For more accurate metrics:

1. use `test.py` to generate result.zip
2. download and install [crohmelib](http://saskatoon.cs.rit.edu:10001/root/crohmelib), [lgeval](http://saskatoon.cs.rit.edu:10001/root/lgeval), and the [tex2symlg](https://www.cs.rit.edu/~crohme2019/downloads/convert2symLG.zip) tool
3. convert the tex files to symLg files using the `tex2symlg` command
4.
evaluate the two folders using the `evaluate` command

### Citation

```
@article{zhao2021handwritten,
  title={Handwritten Mathematical Expression Recognition with Bidirectionally Trained Transformer},
  author={Zhao, Wenqi and Gao, Liangcai and Yan, Zuoyu and Peng, Shuai and Du, Lin and Zhang, Ziyin},
  journal={arXiv preprint arXiv:2105.02412},
  year={2021}
}
```

```
@inproceedings{Zhao2021HandwrittenME,
  title={Handwritten Mathematical Expression Recognition with Bidirectionally Trained Transformer},
  author={Wenqi Zhao and Liangcai Gao and Zuoyu Yan and Shuai Peng and Lin Du and Ziyin Zhang},
  booktitle={ICDAR},
  year={2021}
}
```
29.588889
229
0.744273
eng_Latn
0.818449
0c464c43472cfe88f4120b42b843fa56bd5cb3e0
42
md
Markdown
README.md
svennickel/foosassistant
c0924315bc53dcc663afe1caa19d14fffa272fbf
[ "MIT" ]
null
null
null
README.md
svennickel/foosassistant
c0924315bc53dcc663afe1caa19d14fffa272fbf
[ "MIT" ]
null
null
null
README.md
svennickel/foosassistant
c0924315bc53dcc663afe1caa19d14fffa272fbf
[ "MIT" ]
null
null
null
# Foos Assistant Table Soccer Referee App
14
24
0.809524
kor_Hang
0.429696
0c47493121202404a902c61f8b9874d82ff13a89
1,631
md
Markdown
desktop-src/Controls/en-endcomposition.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
3
2020-04-24T13:02:42.000Z
2021-07-17T15:32:03.000Z
desktop-src/Controls/en-endcomposition.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/Controls/en-endcomposition.md
npherson/win32
28da414b56bb3e56e128bf7e0db021bad5343d2d
[ "CC-BY-4.0", "MIT" ]
1
2022-03-09T23:50:05.000Z
2022-03-09T23:50:05.000Z
--- title: EN_ENDCOMPOSITION notification code description: Notifies a rich edit control parent window that the user has entered new data or has finished entering data while using IME or Text Services Framework. ms.assetid: 3956313F-F82F-41A2-AEDA-52E63218977C keywords: - EN_ENDCOMPOSITION notification code Windows Controls topic_type: - apiref api_name: - EN_ENDCOMPOSITION api_location: - Richedit.h api_type: - HeaderDef ms.topic: article ms.date: 05/31/2018 --- # EN\_ENDCOMPOSITION notification code Notifies a rich edit control parent window that the user has entered new data or has finished entering data while using IME or [Text Services Framework](https://docs.microsoft.com/windows/desktop/TSF/text-services-framework). ```C++ EN_ENDCOMPOSITION pEndComp = (ENDCOMPOSITIONNOTIFY *)lParam; ``` ## Parameters <dl> <dt> *lParam* </dt> <dd> A [**ENDCOMPOSITIONNOTIFY**](/windows/win32/api/richedit/ns-richedit-endcompositionnotify) structure that receives information about the end composition condition. </dd> </dl> ## Requirements | | | |-------------------------------------|---------------------------------------------------------------------------------------| | Minimum supported client<br/> | Windows 8 \[desktop apps only\]<br/> | | Minimum supported server<br/> | Windows Server 2012 \[desktop apps only\]<br/> | | Header<br/> | <dl> <dt>Richedit.h</dt> </dl> |
25.888889
225
0.586143
eng_Latn
0.611665
0c47e13392b156905c5109da5d4dde37b9d03661
680
md
Markdown
README.md
ghaninia/Captcha
e2572b2208ae875a6d2792820e720835063ef6f8
[ "MIT" ]
10
2018-08-19T21:54:54.000Z
2018-08-20T03:13:38.000Z
README.md
ghaninia/Captcha
e2572b2208ae875a6d2792820e720835063ef6f8
[ "MIT" ]
null
null
null
README.md
ghaninia/Captcha
e2572b2208ae875a6d2792820e720835063ef6f8
[ "MIT" ]
1
2021-10-03T23:28:32.000Z
2021-10-03T23:28:32.000Z
![Captcha](https://ghaninia.ir/filemanager/uploads/photos/1/download.png)

Installation
------------

Run the following command to install Captcha:

```shell
$ composer require ghaninia/captcha
```

Then register the package's service provider in `config/app.php`:

```php
// config/app.php
'providers' => [
    ...
    GhaniniaIR\Captcha\CaptchaServiceProvider::class
],
```

Usage
-----

Use the following markup wherever you want to display the captcha image:

```html
<img src="{{ route("captcha") }}">
```

Validation Rules
----------------

You can validate the submitted form with Laravel's validator:

```php
$this->validate([
    "field" => 'required|captcha'
])
```
16.190476
81
0.669118
eng_Latn
0.948528
0c48b8907600cfca44b84230535066e2ef0b8942
784
md
Markdown
README.md
rmic/pycalendars
bafbfd32a8edfea917cb90d7180a88dbda1b9499
[ "MIT" ]
null
null
null
README.md
rmic/pycalendars
bafbfd32a8edfea917cb90d7180a88dbda1b9499
[ "MIT" ]
null
null
null
README.md
rmic/pycalendars
bafbfd32a8edfea917cb90d7180a88dbda1b9499
[ "MIT" ]
null
null
null
# pycalendars

PyCalendars helps you retrieve events from your Google and iCloud calendars in Python.

## Installing

Using pip:

```pip install pycalendars```

Using conda:

```conda install -c rmic pycalendars```

## Getting Started

### Google Calendar

The Google Calendar API requires that you create a project in the Google developer console (https://console.developers.google.com/) and retrieve credentials. Depending on what type of application you plan to develop and how you want to use the calendar, you may want to get credentials either for a web/desktop application or for a service account.

### iCloud

iCloud only requires your Apple ID username and password. However, depending on how your account is configured, you may need to use a 2FA verification code.
30.153846
192
0.770408
eng_Latn
0.985326
0c496314039ab63a80a436fc5bbd9c044634bc8b
10,192
md
Markdown
socrata/q8g4-wqta.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
7
2017-05-02T16:08:17.000Z
2021-05-27T09:59:46.000Z
socrata/q8g4-wqta.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
5
2017-11-27T15:40:39.000Z
2017-12-05T14:34:14.000Z
socrata/q8g4-wqta.md
axibase/open-data-catalog
18210b49b6e2c7ef05d316b6699d2f0778fa565f
[ "Apache-2.0" ]
3
2017-03-03T14:48:48.000Z
2019-05-23T12:57:42.000Z
# Green Connections Network ## Dataset | Name | Value | | :--- | :---- | | Catalog | [Link](https://catalog.data.gov/dataset/green-connections-network-3fdfa) | | Metadata | [Link](https://data.sfgov.org/api/views/q8g4-wqta) | | Data: JSON | [100 Rows](https://data.sfgov.org/api/views/q8g4-wqta/rows.json?max_rows=100) | | Data: CSV | [100 Rows](https://data.sfgov.org/api/views/q8g4-wqta/rows.csv?max_rows=100) | | Host | data.sfgov.org | | Id | q8g4-wqta | | Name | Green Connections Network | | Category | Energy and Environment | | Tags | planning | | Created | 2016-07-28T21:18:39Z | | Publication Date | 2016-08-19T21:36:05Z | ## Description Green Connections aims to increase access to parks, open spaces, and the waterfront by envisioning a network of green connectors -- city streets that will be upgraded incrementally over the next 20 years to make it safer and more pleasant to travel to parks by walking, biking, and other forms of active transportation. The dataset is a zipped GIS shapefile of the Green Connections Network which is shown in this map: http://www.sf-planning.org/ftp/files/Citywide/green_connections/GC_Final_Network_Map_03-2014.pdf. 
Further information can be found on the Green Connections website: http://greenconnections.sfplanning.org ## Columns ```ls | Included | Schema Type | Field Name | Name | Data Type | Render Type | | ======== | ============== | =========== | ========== | ========= | =========== | | No | time | :updated_at | updated_at | meta_data | meta_data | | Yes | numeric metric | cnn | cnn | number | number | | Yes | series tag | gc_rt_name | gc_rt_name | text | text | | Yes | series tag | gc_rt_nmbr | gc_rt_nmbr | text | text | | Yes | numeric metric | lf_fadd | lf_fadd | number | number | | Yes | numeric metric | lf_toadd | lf_toadd | number | number | | Yes | series tag | objectid | objectid | text | number | | Yes | numeric metric | rt_fadd | rt_fadd | number | number | | Yes | numeric metric | rt_toadd | rt_toadd | number | number | | Yes | series tag | street | street | text | text | | Yes | series tag | st_type | st_type | text | text | | Yes | series tag | geometry | geometry | line | line | | Yes | series tag | multigeom | multigeom | checkbox | checkbox | ``` ## Time Field ```ls Value = updated_at Format & Zone = seconds ``` ## Data Commands ```ls series e:q8g4-wqta d:2016-08-19T04:00:50.000Z t:gc_rt_nmbr=14 t:gc_rt_name="Presidio to Park Merced" t:street=22ND t:multigeom=false t:st_type=AVE t:objectid=181899 t:geometry="LINESTRING (-122.48111286328042 37.77458337860856, -122.48097876563916 37.77271628309879)" m:lf_fadd=700 m:rt_toadd=799 m:lf_toadd=798 m:rt_fadd=701 m:cnn=1138000 series e:q8g4-wqta d:2016-08-19T04:00:50.000Z t:gc_rt_nmbr=2 t:gc_rt_name="China Beach to Bay" t:street=WASHINGTON t:multigeom=false t:st_type=ST t:objectid=168499 t:geometry="LINESTRING (-122.41053787168633 37.79463122541162, -122.41136404327203 37.794526652839004)" m:lf_fadd=1051 m:rt_toadd=1098 m:lf_toadd=1099 m:rt_fadd=1058 m:cnn=13423000 series e:q8g4-wqta d:2016-08-19T04:00:50.000Z t:gc_rt_nmbr=8/24 t:gc_rt_name="Noe Valley to Central Waterfront/Shoreline" t:street=ILLINOIS 
t:multigeom=false t:st_type=ST t:objectid=175410 t:geometry="LINESTRING (-122.38706585219497 37.75544622090616, -122.38693869664368 37.75416779649186)" m:lf_fadd=1301 m:rt_toadd=1398 m:lf_toadd=1399 m:rt_fadd=1300 m:cnn=7172000 ``` ## Meta Commands ```ls metric m:cnn p:long l:cnn t:dataTypeName=number metric m:lf_fadd p:long l:lf_fadd t:dataTypeName=number metric m:lf_toadd p:long l:lf_toadd t:dataTypeName=number metric m:rt_fadd p:long l:rt_fadd t:dataTypeName=number metric m:rt_toadd p:long l:rt_toadd t:dataTypeName=number entity e:q8g4-wqta l:"Green Connections Network" t:url=https://data.sfgov.org/api/views/q8g4-wqta property e:q8g4-wqta t:meta.view v:id=q8g4-wqta v:category="Energy and Environment" v:averageRating=0 v:name="Green Connections Network" property e:q8g4-wqta t:meta.view.license v:name="Open Data Commons Public Domain Dedication and License" v:termsLink=http://opendatacommons.org/licenses/pddl/1.0/ property e:q8g4-wqta t:meta.view.owner v:id=dbag-6qd9 v:screenName=OpenData v:displayName=OpenData property e:q8g4-wqta t:meta.view.tableauthor v:id=dbag-6qd9 v:screenName=OpenData v:roleName=publisher v:displayName=OpenData ``` ## Top Records ```ls | :updated_at | cnn | gc_rt_name | gc_rt_nmbr | lf_fadd | lf_toadd | objectid | rt_fadd | rt_toadd | street | st_type | geometry | multigeom | | =========== | ========== | ========================================== | ========== | ======= | ======== | ======== | ======= | ======== | ========== | ======= | ========================================================================================================================================================================================================================================================================================== | ========= | | 1471579250 | 1138000.0 | Presidio to Park Merced | 14 | 700.0 | 798.0 | 181899 | 701.0 | 799.0 | 22ND | AVE | LINESTRING (-122.48111286328042 37.77458337860856, -122.48097876563916 37.77271628309879) | false | | 
1471579250 | 13423000.0 | China Beach to Bay | 2 | 1051.0 | 1099.0 | 168499 | 1058.0 | 1098.0 | WASHINGTON | ST | LINESTRING (-122.41053787168633 37.79463122541162, -122.41136404327203 37.794526652839004) | false | | 1471579250 | 7172000.0 | Noe Valley to Central Waterfront/Shoreline | 8/24 | 1301.0 | 1399.0 | 175410 | 1300.0 | 1398.0 | ILLINOIS | ST | LINESTRING (-122.38706585219497 37.75544622090616, -122.38693869664368 37.75416779649186) | false | | 1471579250 | 10382000.0 | Crosstown Trail | 23 | 701.0 | 799.0 | 171793 | 700.0 | 798.0 | PERU | AVE | LINESTRING (-122.4241347788771 37.72594540801444, -122.42354330063168 37.72566302133307, -122.42332549523945 37.725138138378526) | false | | 1471579250 | 5705000.0 | Folsom Street | 20 | 3299.0 | 3399.0 | 177036 | 3300.0 | 3398.0 | FOLSOM | ST | LINESTRING (-122.41337632273402 37.745149133538405, -122.413280063816 37.744150044881245) | false | | 1471579250 | 1188000.0 | Noe Valley to Central Waterfront | 8 | 3051.0 | 3099.0 | 181857 | 3050.0 | 3098.0 | 22ND | ST | LINESTRING (-122.41547327494038 37.75563410833778, -122.41656664078371 37.75556827813656) | false | | 1471579250 | 3026201.0 | Crosstown Trail | 23 | 0.0 | 0.0 | 179905 | 500.0 | 598.0 | BOSWORTH | ST | LINESTRING (-122.43297411597455 37.73331082636995, -122.43320106219707 37.73343882604965, -122.4340207120184 37.73358796266472, -122.43414453633044 37.733550343266714) | false | | 1471579250 | 3258000.0 | Lake Merced to Candlestick | 12 | 601.0 | 649.0 | 179625 | 600.0 | 648.0 | BRUNSWICK | ST | LINESTRING (-122.44550334417616 37.70981331484346, -122.44670507359385 37.70940240109001) | false | | 1471579250 | 3516000.0 | Market to Beach | 3 | 4801.0 | 4899.0 | 179361 | 4800.0 | 4898.0 | CABRILLO | ST | LINESTRING (-122.51003695605 37.7732594694463, -122.51014643498205 37.77314440739929, -122.51041858382143 37.77313191655599, -122.51054322242608 37.773230418334855, -122.51066621449738 37.773222453514855, -122.51093996180835 37.772957707958625, 
-122.5112821019844 37.77295994414043) | false | | 1471579250 | 5654000.0 | Folsom Street | 20 | 353.0 | 399.0 | 177162 | 350.0 | 398.0 | FOLSOM | ST | LINESTRING (-122.39330569042441 37.78822645225682, -122.39359716393014 37.78799630950512) | false | ```
---

**File:** `docs/visio/line-format-section.md` (Markdown, 1,574 bytes) · **Repo:** `sloppyjuicy/office-developer-client-docs.fr-FR` @ `7eaaee25` · **Licenses:** CC-BY-4.0, MIT · **Stars:** 4
---
title: Line Format section
manager: soliver
ms.date: 11/16/2014
ms.audience: Developer
ms.topic: reference
f1_keywords:
- Vis_DSS.chm82251230
ms.localizationpriority: medium
ms.assetid: e3399716-44de-f8cc-8b42-446284d2fbd4
description: Contains the cells that control the line attributes of a shape, such as pattern, weight, and color. These attributes determine whether the line ends are formatted (with an arrowhead, for example), the size of the line-end formats, the radius of the rounding applied to the line, and the line cap style (round or square).
---

# <a name="line-format-section"></a>Line Format section

Contains the cells that control the line attributes of a shape, such as pattern, weight, and color. These attributes determine whether the line ends are formatted (with an arrowhead, for example), the size of the line-end formats, the radius of the rounding applied to the line, and the line cap style (round or square).

## <a name="remarks"></a>Remarks

You can set line formats by using the Format Shape pane (on the **Home** tab, in the **Shape Styles** group, click **Line**, and then click **Line Options**), by applying a line style, or by entering a formula in a Line Format cell.
---

**File:** `bootstrap/templates/default/README.md` (Markdown, 717 bytes) · **Repo:** `Cadene/bootstrap.pytorch` @ `e7d55b52` · **License:** BSD-3-Clause · **Stars:** 196
# {PROJECT_NAME}

## Install [Conda](https://docs.conda.io/en/latest/miniconda.html)

```bash
conda create --name {PROJECT_NAME_LOWER} python=3
source activate {PROJECT_NAME_LOWER}
cd $HOME
git clone --recursive https://github.com/{PROJECT_NAME}/{PROJECT_NAME_LOWER}.bootstrap.pytorch.git
cd {PROJECT_NAME_LOWER}.bootstrap.pytorch
pip install -r requirements.txt
```

## Reproducing results

Run experiment:

```bash
python -m bootstrap.run \
    -o {PROJECT_NAME_LOWER}/options/{PROJECT_NAME_LOWER}.yaml \
    --exp.dir logs/{PROJECT_NAME_LOWER}/1_exp
```

Display training and evaluation figures:

```bash
open logs/{PROJECT_NAME_LOWER}/1_exp/view.html
```

Display table of results:

```bash
python -m bootstrap.compare -o
```
---

**File:** `src/components/ui/README.md` (Markdown, 45 bytes) · **Repo:** `d-sektionen/medlem` @ `ccbcbef2` · **License:** MIT · **Stars:** 2
UI components for shared usage between apps.
---

**File:** `content/blog/09.md` (Markdown, 19,041 bytes) · **Repo:** `dottjt/juliusreade` @ `3d800a43` · **License:** MIT
+++
tags = [ "depression", "Mental Illness guide", "Mental Illness" ]
categories = [ "depression", "mental illness", "blog" ]
keywords = "adulthood, adult, on growing up, Mental Illness guide, successful adults, Mental Illness"
layout = "layout"
date = "2017-02-19T03:23:16+02:00"
draft = false
slug = "what-it-means-to-be-an-effective-adult"
title = "What It Means To Be An Effective Adult"
thumbnail = "/img/blog/09.png"
thumbnailalt = "What It Means To Be An Effective Adult"
description = "What It Means To Be An Effective Adult. A guide written for millennials, like myself. Cause #YOLO"
+++

<!-- What adult do you want to become? What kind of adult do you want to become? What kind of adult are you? How to adult effectively. -->

Millennials like myself are retarded. We have all these false expectations of what life should be, and in believing so strongly in the advice that has been thrown at us, have failed miserably to cope with the harsh reality of...

...not becoming the next big thing on YouTube.

Yep. That's me.

Despite being 'adults', many of us fear commitment, rely too heavily on our parents for financial support, and fail to kickstart our careers in any meaningful way. Which is to say that many of us struggle to adult. You know, that thing we apparently are, though cannot quite identify with?

Yep. That's us.

Part of the reason why we struggle is that we falsely believe adulthood will entail some kind of dramatic shift in attitude and perspective, as if to be whisked off our feet and towards a stable family and six-figure income. A shift which never actually takes place. Instead, we simply wake up one year older, completely oblivious to the world around us, and our lives continue almost unscathed.

As a result, we fail to treat our lives with urgency, and instead of embracing commitment, responsibility and hard work, we instead become broken children. A far cry from all the admirable traits we ideally should be transferring over to future generations.
You know, like the children we expect to have in our 40s, even though the [chance of conceiving at 40 is around 20%](http://www.babycenter.com.au/a1013991/getting-pregnant-in-your-40s), falling to less than 5% by your mid-40s. Our false millennial ideals of "live now and settle later" simply don't add up.

Which is ultimately to say that there are very real consequences of not playing The Adult. Because when it comes down to it, society functions on the basis of adulthood, and if you don't put in the time and effort, you're gonna get crushed.

Of course, excessive pessimism doesn't work. What I mean to say is that life can be incredibly comfortable and enjoyable later on, if you dedicate some of your 20s towards commitment and responsibility.

Otherwise, you're going to be unhealthy. You're going to be mentally unprepared and let depression get the best of you when life becomes even harder. You're going to be broke and without assets. You won't have the strength to maintain a loving relationship.

Though the truth is even more vital: You're not going to feel depressed and hopeless NOW if you commit to your adulthood.

So today, we're going to focus on the positive, enabling attributes of being an adult that will essentially allow us to have our cake and eat it too. In other words, to strive for a balance in life so we can play the role of the adult, as well as afford the inherent retardation that is being a millennial. Cause we silly.

### What Is Effective Adulthood?

![What Is Effective Adulthood?](/img/blog/09-02.png)

Obviously, there's no one way to adult. Some of us choose to earn our living solving life's most pressing issues, while others of us are still in a frenzy of cluelessness as we extend our hand towards the bong for our ritual wake n' bake. Adults are only human.

However, since we're striving to become successful adults, not dysfunctional ones, we'll need to dive into the enabling traits that will help us become most effective.
Which, when you boil it down, is quite simple:

**The only two things you need in life in order to become a successful adult are skills and attitude.**

It's honestly that simple.

...simple enough that we overthink it and completely fuck it up.

To deliver you this well-timed spiel: without useful skills you're basically deemed worthless by society, since you cannot contribute to the functioning of society. And when you cannot contribute to society, you have zero opportunity to exert any amount of influence, leverage or attention within your own sphere of influence.

That is, unless you're exceedingly attractive or funny (which are essentially skills within themselves) - which brings us to our second point.

Your skills, or perhaps your amazingly gorgeous face or your stunning wit, are useless without the right attitude. Attitude is what essentially allows us to use our skills appropriately, and to leverage them to the best of our ability. For example, you could be the most talented painter in the whole wide world, however if you don't have the right attitude to sell yourself - no one is going to discover your works. In addition, attitude is what allows us to commit to our skills, so that we can improve upon them and not get distracted by \<Insert Mental Illness Here\>.

Fundamentally, with the right attitude and the right skills, you can do and achieve almost anything in life. The issue is that most people will very rarely have a healthy balance of skill and attitude. Hence why we are 'unadjusted'. In a lot of cases, our failure to adult can extend from an arrogance or ignorance in our attitude, and this is often what gets in the way of our success.

Part of the problem is that a) we don't understand the urgency, nor the importance, of why we must commit to these skills, b) we're unsure of what skills we should commit to, and c) we're too inexperienced to be able to understand how to effectively utilise our skills.
This really is to say that our expectations of life, and of reality, are incompatible. We're deluded millennials, remember? And so in the process of our retardation, our ambition turns into mental illness, we get taken advantage of by our bosses, and we end up even more cynical than ever before about life. Because at the end of the day, we don't know if our skills are valid, nor can we truly understand whether our attitude is truly helpful. Which is to say that we're completely clueless.

But that's a good thing, because it gives us a starting point from which to approach and tackle these issues. We begin with attitude.

### Developing Your Attitude Towards Life

![Developing Your Attitude Towards Life](/img/blog/09-03.png)

Our attitude towards life is almost always the limiting factor in most situations. (There's a reason sales is a huge industry.) To assist us, we're going to set some healthy expectations so that you can spend a little less time guessing, and a little more time getting your shit together.

#### Honestly, We're All Clueless

Here is the first and most important rule of adulting:

**No one has a damn clue what they're doing.**

And by gosh do people take advantage of that. Because at the end of the day, it's all a game of confidence. If you act with authority, people will listen to you. If you say you can do something with conviction, they will believe you. People barely know better.

It's the reason people get underpaid. It's the reason people get bullied under conditions they believe are 'fair'. It's the reason people believe they're worthless. It's the reason 90% of people lose money on the stock exchange.

Don't buy into other people's cluelessness and, furthermore, don't buy into your own cluelessness. Because always remember: they are no different to you. They don't know any more about life than you do. They simply believe it and know how to get others to believe it too.
Of course, that's not to say that you shouldn't respect them and their schemes of influence. The police, or even your boss, for example. It simply means you should always be aware of your place in society, and what you have to do to be where you wish to be.

#### Take Risks

In a world of such clueless people, it makes perfect sense to take risks, because your chances of succeeding are high. You can get away with so much, whether it be theft or simply taking advantage of knowledge someone doesn't have.

There's an interesting quote about the stock market that resonates with this idea: **"Everyone agrees on price, but disagrees on value"**. And when you think about it, that really is the great trickery of society - leading people to believe they are getting value, at a price which significantly suits you. Of course, this doesn't have to be sinister. For example, if you believe the price of spending four hours each day in your 20s towards improving your skills is worth the value it can deliver at 40, then you've comfortably beaten the market.

The reason most people don't succeed is that they over-estimate the consequences of taking risks. They genuinely believe the cluelessness, aka propaganda, they've been told, and they buy into the fear that keeps them clueless. The kind of people who end up working at the one place for five years at minimum wage, because they worry they have 'too little experience' to find anything else.

Hilarious. Absolutely hilarious. And certainly, a poor attitude towards life. On a more fundamental level, it indicates something more serious: a lack of faith in one's own ability to sell themselves.

Educate yourself and see risks for what they really are: **opportunities**. The opportunity to change and to defy your own self-doubt. Because whether you believe it or not, you always have leverage. Well, assuming you have skills, at the very least.
#### Work Won't Always Suck

Here's what I can tell you: **your first few working years are probably going to suck**. Because ultimately, when we're starting out, we have no real leverage to dictate our own terms, and that's simply part of the game.

The good news is that you only need 1 - 2 years of experience before you can start doing what you like. Personally speaking, I only made it 10 months in before I quit what I would consider my first "career-focused" job, and that worked out wonderfully for me. However, there's one important detail that made it work: **I was incredibly passionate about my skills, and I worked hard each day to improve them**.

Because at the end of the day, being an adult is all about building leverage in your life. Skills provide you with leverage, just as attitude can too. Get through those first few shitty years and you'll be fine.

#### You Will Be Exhausted

Ultimately, adulthood comes down to effort and routine. And if there's one thing us millennials hate more than anything, it's effort and routine. Certainly, the idea of working 9-to-5 sounds terrifying, and personally speaking it was a huge fear of mine. But don't worry. Because here's what I can tell you: **working full-time will make you a happier person**.

Why? Because despite dedicating 38 hours each week towards work, you'll actually have more time, and you'll be more motivated to spend your time where it truly counts. Sounds crazy, but it's true.

Here's the deal: when you're just lazing around at home, there's no urgency to get anything done. There's no incentive to wake up at 5:50am in the morning and study for the next 2 hours, because in your head you feel as if you have all the time in the world. When you're working full-time, however, you become ultra-aware of the time you do have, and it drives you to make the most of it.

However, that's not even the main thing: there's something about having limited time that makes you defy exhaustion.
Previously I would get exhausted after 40 minutes of study, but now it's like I can remain engaged for hours. When you realise you only have 4 hours each day to study, you make sure those four hours are quality study time, and your mind refuses to switch off. I think part of the drive is in realising how awful entry-level positions are. Which motivates you to want more.

I even wrote an entire article about it, if you'd like to know more:

<div class="cross-post-template">
<a class="cross-post-template-title" href="https://juliusreade.com/2016/11/27/why-obtaining-a-full-time-job-should-be-your-number-one-priority/">Why Obtaining A Full-time Job Should Be Your Number One Priority</a>
</div>

You should totally read it.

#### Don't Stop. Don't Overthink It

Inevitably, there are times where you will no longer want to adult. Where you'll be fed up with looking after yourself, cleaning the floors for the eighth time, or even dealing with people in general.

Don't stop. Seriously, don't stop. You only get one chance to adult, so do it right.

One of the main reasons why people stop is that they overthink things. They worry about whether they're committing to the right thing, or whether they're happy, or whatever thought du jour they're preoccupying themselves with. Here's a key piece of advice they don't tell you in school: don't think. Seriously, it's a waste of time.

Here's the crux of it: productive people don't have time to think, because they're busy actually doing stuff. The idea is better explained within this article:

<div class="cross-post-template">
<a class="cross-post-template-title" href="https://juliusreade.com/2016/12/11/smart-people-dont-think.-they-plan-ahead/">Smart People Don't Think. They Plan Ahead</a>
</div>

Hopefully this list covers a few important points for you to grasp onto, at least as a starting point.

### Commitment Is Beautiful

![Commitment Is Beautiful](/img/blog/09-04.png)

Commitment is beautiful. You know why?
Because most people are too afraid to commit, and by committing, that already puts you ahead of 95% of most people. Seriously.

The funny thing is that there's this perception that by not committing, you're somehow experiencing more of the world. And yet in reality, by not committing you simply achieve less over the long run and never quite master anything. Whether you commit towards your mental health, your career or even your relationship - you're honestly doing more than most people.

The other reason commitment is important is that it opens up opportunity in life. This is another concept that people can't quite seem to grasp: they believe that by committing, they're actually locking themselves in, rather than enabling themselves to grow. On the one hand, people want children, yet they freak out at the prospect of being in a relationship. They let their fears get in the way of what they want, because that's all it is. Irrationality rearing its ugly head and making us indecisive adults.

If there's one rule you should follow, it's that you should always commit when given the chance. What people don't realise is that you don't lose anything by committing. People worry about losing choice, when in reality they haven't committed to anything.

### What Keeps Me Going

![What Keeps Me Going](/img/blog/09-05.png)

Certainly, there's one theme I haven't touched on: motivation. Previously I stated that adulthood doesn't entail a dramatic shift in attitude towards life. Well, that's not true. Adulthood does entail a dramatic shift in attitude, except most people never experience it.

For me, my shift in attitude occurred once I'd moved out of home and got serious with my girlfriend. I believe time pressure has something to do with it. However, ultimately, I know it's because of my future children. The reason that keeps me going is knowing that my future children will have an amazing future, because of all the hard work I've put in now.
That's what keeps me going and will continue to motivate me, for as long as I live. I just didn't quite realise it until later in life. <!-- Currently as we speak, I have a left dagger in my eye and while the pain is excruciating, makes for a wonderful facebook post as I attempt to concoct the perfect selfie. I digress. However there's also a subtle beauty to being an adult, which I think needs to be cherished and embraced. Today I want to talk about what it means to be an adult, because I think it's something ill-defined. Not only in this quest to become a better adult, but also to allow you to live more effectively as a human being as well. Because guess what? The problem is that a lot of us grew up with shitty adult figures who didn't embrace the power of adulthood. Who quite frankly, were shit adults. Who were lazy, had no aspirations, and who suffered and who normalised this suffering as a basic fact of life. Being able to construct a building is a skill, being exceedingly attractive is a skill, having wit or wealth is a skill, including the one thing that transcends these all: Your attitude towards life. These are top In most cases attitude is usually the limiting factor. Quite simply because, we have zero expectations of what it means to be an adult and so we don't convey ourselves effectively. Now while these are the things that make us adult, in practice it's not as easy as simply undesrtanding what needs to be done. So here's some insight into what personally helps me be myself: I mean, god knows how much time I spend writing these articles The problem of course, is that it's something most of us are unprepared for. ...which leads to acting like a complete spastic into your early 20s. Certainly, I have been culprit to this behaviour. Because at the end of the day, all effective adults have one thing in common: Everything they do prepares them for the future. 
Whether you want to raise three children or invent something spectactular (or perhaps both), you need to have the skills and attitude to do it. In fact, it will allow you to prepare for anything in life. Now this is where it gets tricky for us millenials, who are apparently retarded and hash-tag a little too much. Believe it or not, attitude is a skill. I like to think of it as an overarching skill that helps calibrate everything else in place. The great thing about developing skills is that it doesn't need to be so lenient as your attitude. You can develop whatever skills you please. Ultimately passion sells itself and if you're really good at what you can do, you can get away with anything. So for example, you can be amazing at building things, yet not have the self-confidence or charisma to sell your creations. In the game of 'the adult' this is a failure of attitude. Which is totally fine, if that's your intention. You may not want to sell your creations, nor do you really care about creating things which are necessarily marketable. Self-expression and self-discovery are important, however remember that so is your adulthood. Balance. That's what we're going to talk about. -->
---

**File:** `README.md` (Markdown, 13,765 bytes) · **Repo:** `SingingBush/unit-threaded` @ `5d2d1aa9` · **License:** BSD-3-Clause · **Stars:** 117
unit-threaded
=============

[![Build Status](https://github.com/atilaneves/unit-threaded/workflows/CI/badge.svg)](https://github.com/atilaneves/unit-threaded/actions)
[![Coverage](https://codecov.io/gh/atilaneves/unit-threaded/branch/master/graph/badge.svg)](https://codecov.io/gh/atilaneves/unit-threaded)

[My DConf2016 Lightning talk demonstrating unit-threaded](https://www.youtube.com/watch?v=vNPb4Mg6F6Y#t=6m50s).

Multi-threaded advanced unit test framework for [the D programming language](https://dlang.org/). Augments D's `unittest` blocks with:

* Tests can be named and individually run
* Custom assertions for better error reporting (e.g. `1.should == 2`)
* Runs in threads by default
* UDAs for customisation of tests
* Property-based testing
* Mocking

## Quick start with dub

Note: while getting started this way is easy, it also increases build times and may run into edge cases. See below for how to do it manually.

dub runs tests with `dub test`. Unfortunately, due to the nature of D's compile-time reflection, to use this library a test runner file listing all modules to reflect on must exist. Since this is a tedious task and easily automated, unit-threaded has a dub configuration called `gen_ut_main` to do just that.
To use unit-threaded with a dub project, you can use a `unittest` configuration, as exemplified in this `dub.json`:

```json
{
    "name": "myproject",
    "targetType": "executable",
    "targetPath": "bin",
    "configurations": [
        { "name": "executable" },
        {
            "name": "unittest",
            "targetType": "executable",
            "preBuildCommands": ["$DUB run --compiler=$$DC unit-threaded -c gen_ut_main -- -f bin/ut.d -d $DUB"],
            "mainSourceFile": "bin/ut.d",
            "excludedSourceFiles": ["src/main.d"],
            "dependencies": {
                "unit-threaded": "*"
            }
        }
    ]
}
```

With `dub.sdl`:

```
configuration "executable" {
}

configuration "unittest" {
    dependency "unit-threaded" version="*"
    mainSourceFile "bin/ut.d"
    excludedSourceFiles "src/main.d"
    targetType "executable"
    preBuildCommands "$DUB run --compiler=$$DC unit-threaded -c gen_ut_main -- -f bin/ut.d -d $DUB"
}
```

`excludedSourceFiles` is there to avoid compiling the file containing the `main` function, which would otherwise cause linker errors. As an alternative to using `excludedSourceFiles`, the "real" `main` can be versioned out:

```d
version(unittest) {
    import unit_threaded;
    mixin runTestsMain!(
        "module1",
        "module2",
        // ...
    );
} else {
    void main() {
        //...
    }
}
```

### Manually listing the D modules with tests

Alternatively to the above, the recommended way is to manually (unfortunately) list all the modules with tests in the unittest main function. There's a mixin for that:

```d
import unit_threaded;
mixin runTestsMain!(
    "mypkg.mymod0",
    "mypkg.mymod1",
    // ...
);
```

Your unittest blocks will now be run in threads and can be run individually. To name each unittest, simply attach a string UDA to it:

```d
@("Test that 2 + 3 is 5")
unittest {
    assert(2 + 3 == 5);
}
```

You can also have multiple configurations for running unit tests, e.g.
one that uses the standard D runtime unittest runner and one that uses unit-threaded:

```
"configurations": [
    {"name": "ut_default"},
    {
        "name": "unittest",
        "preBuildCommands": ["$DUB run --compiler=$$DC unit-threaded -c gen_ut_main -- -f bin/ut.d -d $DUB"],
        "mainSourceFile": "bin/ut.d",
        ...
    }
]
```

In this example, `dub test -c ut_default` runs as usual if you don't use this library, and `dub test` runs with the unit-threaded test runner.

To use unit-threaded's assertions or UDA-based features, you must import the library:

```d
// Don't use `version(unittest)` here - if anyone depends on your code and
// doesn't depend on unit-threaded they won't be able to test their own
// code!
version(TestingMyCode) {
    import unit_threaded;
} else {
    enum ShouldFail; // so production builds compile
}

int adder(int i, int j) { return i + j; }

@("Test adder")
unittest {
    adder(2, 3).shouldEqual(5);
}

@("Test adder fails", ShouldFail)
unittest {
    adder(2, 3).shouldEqual(7);
}
```

If using a custom dub configuration for unit-threaded as shown above, a version block can be used on `Have_unit_threaded` (this is added by dub to the build).

Custom Assertions
-----------------

Code speaks louder than words:

```d
1.should == 1;
1.should.not == 2;
1.should in [1, 2, 3];
4.should.not in [1, 2, 3];

void funcThrows() { throw new Exception("oops"); }
funcThrows.shouldThrow;

// or with .be
1.should.be == 1;
1.should.not.be == 2;
1.should.be in [1, 2, 3];
4.should.not.be in [1, 2, 3];

// I know this is operator overload abuse. I still like it.
[1, 2, 3].should ~ [3, 2, 1];
[1, 2, 3].should.not ~ [1, 2, 2];
1.0.should ~ 1.0001;
1.0.should.not ~ 2.0;
```

See more in the `unit_threaded.should` module.

Fast compilation mode
---------------------

Set the version to `unitThreadedLight` and the library will compile much faster, but with no error reporting, and certain features might not work. Experimental support.
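Putting the named-`unittest` UDA and the `should` assertion helpers together, a complete minimal test module might look like the sketch below. The module name and the `add` function are illustrative, not part of the library; the example only assumes unit-threaded is available as a dub dependency.

```d
// Illustrative test module; mypkg.calc and add() are hypothetical.
module mypkg.calc;

import unit_threaded;

int add(int a, int b) { return a + b; }

@("add sums two integers")
unittest {
    add(2, 3).should == 5;       // readable failure output, unlike bare assert
    add(2, 3).should.not == 6;
    [add(1, 2), add(2, 2)].should ~ [4, 3]; // same elements, any order
}
```

Because the test is named via the string UDA, it shows up under that name in the runner's output and can be selected individually on the command line.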
Advanced Usage: Attributes
--------------------------

`@ShouldFail` is used to decorate a test that is expected to fail, and can be passed a string to explain why. `@ShouldFail` should be preferred to `@HiddenTest`: if the relevant bug is fixed or the not-yet-implemented functionality is done, the test will then fail, which makes such tests harder to sweep under the carpet and forget about.

Since code under test might not be thread-safe, the `@Serial` attribute can be used on a test. This causes all tests in the same module that have this attribute to be executed sequentially, so they don't interleave with one another.

Although not the best practice, it happens sometimes that a test is flaky. It is recommended to fix the test, but as a stopgap measure the `@Flaky` UDA can be used to rerun the test up to a default number of 10 times. This can be customized by passing it a number (e.g. `@Flaky(12)`).

The `@Name` UDA can be used instead of a plain string in order to name a `unittest` block.

unit-threaded uses D's package and module system to make it possible to select a subset of tests to run. Sometimes, however, tests in different modules address cross-cutting concerns and it may be desirable to indicate this grouping in order to select only those tests. The `@Tags` UDA can be used to do that. Any number of tags can be applied to a test:

```d
@Tags("foo", "tagged")
unittest { ... }
```

The strings a test is tagged with can be used by the test runner binary to constrain which tests to run, either by selecting tests with or without tags:

    ./ut @foo ~@bar

That will run all tests that have the "foo" tag and don't have the "bar" tag.

If using value- or type-parameterized tests, the `@AutoTags` UDA will give each sub-test a tag corresponding to its parameter:

```d
@Values("foo", "bar")
@AutoTags // equivalent to writing @Tags("foo", "bar")
@("autotag_test")
unittest {
    // ...
}
```

The `@Setup` and `@Shutdown` UDAs can be attached to a free function in a module.
If they are, they will be run before/after each `unittest` block in a composite (usually a module). This feature currently only works for `unittest` blocks, not free functions. Classes can already override `setup` and `shutdown`.

Property-based testing
----------------------

There is preliminary support for property-based testing. To check a property, use the `check` function from `unit_threaded.property` with a function returning `bool`:

```d
check!((int a) => a % 2 == 0);
```

The above example will obviously fail. By default `check` runs the property function with 100 random values; pass it a different runtime parameter to change that:

```d
check!((int a) => a % 2 == 0)(10_000); // will still fail
```

If using compile-time delegates as above, the types of the input parameters must be explicitly stated. Multiple parameters can be used as long as each one is of one of the currently supported types.

Mocking
-------

Classes and interfaces can be mocked like so:

```d
interface Foo {
    int foo(int, string);
}

int fun(Foo f, int i, string s) {
    return f.foo(i * 2, s ~ "stuff");
}

auto m = mock!Foo;
m.expect!"foo";
fun(m, 3, "bar");
m.verify; // throws if not called
```

To check the values passed in, pass them to `expect`:

```d
m.expect!"foo"(6, "barstuff");
fun(m, 3, "bar");
m.verify;
```

Either call `expect` then `verify`, or call `expectCalled` at the end:

```d
fun(m, 3, "bar");
m.expectCalled!"foo"(6, "barstuff");
```

The return value is `T.init` unless `returnValue` is called (it's variadic):

```d
m.returnValue!"foo"(2, 3, 4);
assert(fun(m, 3, "bar") == 2);
assert(fun(m, 3, "bar") == 3);
assert(fun(m, 3, "bar") == 4);
assert(fun(m, 3, "bar") == 0);
```

Structs can also be mocked:

```d
int fun(T)(T f, int i, string s) {
    return f.foo(i * 2, s ~ "stuff");
}

auto m = mockStruct(2, 3, 4); // the ints are return values (none need be passed)
assert(fun(m, 3, "bar") == 2);
m.expectCalled!"foo"(6, "barstuff");
```

If a struct is needed that returns different types for
different functions:

```d
auto m = mockStruct!(ReturnValues!("length", 5, 3),
                     ReturnValues!("greet", "hello", "g'day"));
m.length.shouldEqual(5);
m.length.shouldEqual(3);
m.greet.shouldEqual("hello");
m.greet.shouldEqual("g'day");
```

Structs that always throw:

```d
{
    auto m = throwStruct;
    m.foo.shouldThrow!UnitTestException;
}
{
    auto m = throwStruct!MyException;
    m.foo.shouldThrow!MyException;
}
```

Command-line Parameters
-----------------------

To run in single-threaded mode, use `-s`. There is support for debug prints in the tests with the `-d` switch. TestCases and test functions can print debug output with the function `writelnUt` available [here](source/unit_threaded/io.d).

Tests can be run in random order instead of in threads. To do so, use the `-r` option. A seed will be printed so that the same run can be repeated by using the `--seed` option. This implies running in a single thread.

Integration tests and a sandbox environment
------------------------------------------

If you want to write tests that read from and write to the file system, you can use the `Sandbox` struct from [`unit_threaded.integration`](source/unit_threaded/integration) like so:

```d
with(immutable Sandbox()) {
    writeFile("foo.txt", "foobarbaz\ntoto"); // can also pass string[] for lines
    shouldExist("foo.txt");
    shouldNotExist("bar.txt");
    shouldEqualLines("foo.txt", ["foobarbaz", "toto"]);
}
```

By default the sandbox main path is `tmp/unit-threaded`, but you can change that by calling `Sandbox.setPath`.

Test Registration and Test Runner
---------------------------------

There are two example programs in the [`example`](example/) folder, one with passing unit tests and the other failing, to show what the output looks like in each case. Because of the way D packages work, they must be run from the top-level directory of the repository.
The built-in D `unittest` blocks are included automatically, as seen in the output of both example programs (`example.tests.pass_tests.unittest` and its counterpart in [`example_fail`](example/example_fail)). A name will be automatically generated for them. The user can specify a name by decorating them with a string UDA or the included `@Name` UDA.

The easiest way to run tests is by doing what the failing example code does: mixing in `runTestsMain()` in [`runner.d`](subpackages/runner/source/unit_threaded/runner/runner.d) with the modules containing the tests as compile-time arguments (as strings). There is no need to register tests. The registration is implicit and happens with:

* D's `unittest` blocks
* Functions with a camelCase name beginning with `test` (e.g. `testFoo()`)
* Classes that derive from `TestCase` and override `test()`

The modules to be reflected on must be specified when calling `runTests` or `runTestsMain`, but that's usually done as shown in the dub configuration above. Private functions are skipped. `TestCase` also has support for `setup()` and `shutdown()`; child classes need only override the appropriate function(s).

Tests can be hidden with the `@HiddenTest` attribute. This means that particular test doesn't get run by default, but it can still be run by passing its name as a command-line argument. `@HiddenTest` takes a compile-time string to list the reason why the test is hidden. This would usually be a bug id, but it can be anything the user wants.

Since D packages are just directories and the compiler can't read the filesystem at compile-time, there is no way to automatically add all tests in a package. To mitigate this and avoid having to manually write the name of all the modules containing tests, a dub configuration called `gen_ut_main` runs unit-threaded as a command-line utility to write the file for you.
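As a concrete sketch of the implicit registration described above — the module names here are hypothetical, substitute your own test modules — a minimal `runner.d` might look like:

```d
// runner.d -- a sketch; "mytests.math" and "mytests.strings" are hypothetical
// module names. runTestsMain reflects on the listed modules at compile time
// and registers their unittest blocks, test* functions and TestCase classes.
import unit_threaded;

mixin runTestsMain!(
    "mytests.math",
    "mytests.strings",
);
```

With dub, a file like this would typically serve as the `mainSourceFile` of a test configuration, so that `dub run -c unittest` (or similar) builds and runs the test binary.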
Related Projects
----------------

- [dunit](https://github.com/linkrope/dunit): xUnit testing framework for D
- [DMocks-revived](https://github.com/QAston/DMocks-revived): a mock-object framework that allows mocking of interfaces or classes
- [deject](https://github.com/bgertzfield/deject): automatic dependency injection
- [specd](https://github.com/jostly/specd): a unit testing framework inspired by [specs2](http://etorreborre.github.io/specs2/) and [ScalaTest](http://www.scalatest.org)
- [DUnit](https://github.com/kalekold/dunit): a toolkit of test assertions and a template mixin to enable mocking
# unbeknownst - Word: unbeknownst - Cognate: - Similar: - Story: If someone plans your birthday party unbeknownst to you — that is, you're completely unaware of it — it will probably be a surprise party. - Story: Used as an adjective or adverb, unbeknownst is descended from unbeknown (1848), which combines the prefix un- ("not") with be ("by, about") and know. Sometimes the FBI might be secretly working on a case, unbeknownst to the CIA, which is also secretly working on it. Imagine their frustration when everyone finds out they could have shared information and work, while saving time and manpower. ## adjective - Comparative: - Meaning: happening without a particular person knowing about it - Chinese: 未知的,不得而知的 - Tags: - Synonyms: - Antonyms: - Use: - Eg.: - Picture:
---
categories:
  - "pro"
  - "integrations"
title: "Slack"
page_id: "slack"
tags:
  - "pro"
warning: false
redirect_from:
  - /docs/slack_integration
---

In the Postman Pro to Slack integration, there are three ways to connect to receive notifications from Postman:

### Team Activity Feed

The [Postman Team Activity Feed](http://blog.getpostman.com/2016/10/27/new-more-useful-activity-feed-in-postman-collections/) is a real-time feed showing the creates, updates, subscriptions, and deletes performed across the shared Postman Collections in your team. Using the Postman Pro to Slack Integration, you can pipe your team’s Activity Feed to a Slack channel of your choosing. This can help, for example, with tracking progress of a particular Postman Collection, or seeing development momentum within your team environment.

### Postman Search

This element of the Postman Pro to Slack Integration enables you to easily search across the names and descriptions of your shared Postman Collections, folders, requests, and responses, and then display results in any of your team or personal Slack channels. After enabling this integration, a new slash command, `/postman`, will find its way into your team’s Slack global slash commands. Upon invoking a search, `/postman search [keyword(s)]`, the top ten search results will display in the current channel. This is super useful when searching for particular requests, or trying to point a colleague to a set of Collections related to your current discussion.

### Monitor Results

[Postman Monitors](/docs/postman/monitors/intro_monitors) enable you to set up recurring runs of your Postman Collections at scheduled intervals. By using Postman Monitors, you can ensure that your systems are stable and your APIs are working as they should. The Postman Monitor to Slack connection allows you to pipe any set of Monitoring run results to a pre-configured Slack channel. This provides extra visibility on your Monitor run results, right in Slack.
Plus, you can add further Slack alerting and notifications, based on Monitor run results. ### Add the Slack Integration From the [Integrations page](https://app.getpostman.com/dashboard/integrations), select the [Slack Integration](https://app.getpostman.com/dashboard/integrations): ![select slack integration](https://s3.amazonaws.com/postman-static-getpostman-com/postman-docs/slackINT.png) Click `Add` next to the Integration you’d like to activate. ![add slack](https://s3.amazonaws.com/postman-static-getpostman-com/postman-docs/slack_add.png) Authenticate with Slack, and if prompted, choose a Slack channel to post results to. ![authenticate slack](https://s3.amazonaws.com/postman-static-getpostman-com/postman-docs/slack_auth.png)
---
title: Determining Effective Database Engine Permissions | Microsoft Docs
ms.custom: ''
ms.date: 01/03/2017
ms.prod: sql
ms.prod_service: database-engine, sql-database, sql-data-warehouse, pdw
ms.reviewer: ''
ms.technology: security
ms.topic: conceptual
helpviewer_keywords:
- permissions, effective
- effective permissions
ms.assetid: 273ea09d-60ee-47f5-8828-8bdc7a3c3529
author: VanMSFT
ms.author: vanto
monikerRange: '>=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current'
ms.openlocfilehash: 40f30fd646e166cc9b8db433934d22a378c907cb
ms.sourcegitcommit: b2e81cb349eecacee91cd3766410ffb3677ad7e2
ms.translationtype: HT
ms.contentlocale: de-DE
ms.lasthandoff: 02/01/2020
ms.locfileid: "67995627"
---
# <a name="determining-effective-database-engine-permissions"></a>Determining Effective Database Engine Permissions

[!INCLUDE[appliesto-ss-asdb-asdw-pdw-md](../../../includes/appliesto-ss-asdb-asdw-pdw-md.md)]

This article describes how to determine who has permissions to various objects in the SQL Server Database Engine. SQL Server implements two permission systems for the Database Engine. An older system of fixed database roles has preconfigured permissions. Starting with SQL Server 2005, a more flexible and precise system is available. (The information in this article applies to SQL Server 2005 and later. Some types of permissions are not available in some versions of SQL Server.)

> [!IMPORTANT]
> * The effective permissions are the aggregate of both permission systems.
> * A denial of a permission overrides a grant of a permission.
> * If a user is a member of the sysadmin fixed server role, permissions are not checked any further, so denials are not enforced.
> * The old and new systems are similar. For example, membership in the `sysadmin` fixed server role is similar to having the `CONTROL SERVER` permission. The systems are not identical, however. For example, if a login only holds the `CONTROL SERVER` permission and a stored procedure checks for membership in the `sysadmin` fixed server role, the permission check fails. The reverse is also true.

## <a name="summary"></a>Summary

* A server-level permission can come from membership in a fixed server role or in user-defined server roles. Everyone belongs to the `public` fixed server role and receives any permission assigned there.
* Server-level permissions can come from permissions granted to logins or to user-defined server roles.
* Database-level permissions can come from membership in fixed database roles or user-defined database roles in each database. Everyone belongs to the `public` fixed database role and receives any permission assigned there.
* Database-level permissions can come from permissions granted to users or to user-defined database roles in each database.
* Permissions can be received through the `guest` login or the enabled `guest` database user. The `guest` logins and users are disabled by default.
* Windows users can be members of Windows groups, which can have logins. SQL Server learns of the Windows group membership when a Windows user connects and presents a Windows token containing the security ID of a Windows group. Because SQL Server does not manage or receive automatic updates about Windows group memberships, SQL Server cannot reliably report the permissions of Windows users that are received through Windows group membership.
* Permissions can be acquired by switching to an application role and providing the password.
* Permissions can be acquired by executing a stored procedure that includes the `EXECUTE AS` clause.
* Permissions can be acquired by logins or users that hold the `IMPERSONATE` permission.
* Members of the local computer Administrators group can always elevate their permissions to `sysadmin`. (Does not apply to SQL Database.)
* Members of the `securityadmin` fixed server role can elevate many of their permissions, and in some cases up to `sysadmin`. (Does not apply to SQL Database.)
* SQL Server administrators can see information about all logins and users. Less privileged users typically only see information about their own identities.

## <a name="older-fixed-role-permission-system"></a>Older fixed role permission system

Fixed server roles and fixed database roles have preconfigured permissions that cannot be changed. To determine who is a member of the fixed server roles, execute the following query:

> [!NOTE]
> This does not apply to SQL Database or SQL Data Warehouse, where server-level permissions are not available. The `is_fixed_role` column of `sys.server_principals` was added in SQL Server 2012 and is not available in earlier versions of SQL Server.
> ```sql
> SELECT SP1.name AS ServerRoleName,
>    isnull (SP2.name, 'No members') AS LoginName
> FROM sys.server_role_members AS SRM
> RIGHT OUTER JOIN sys.server_principals AS SP1
>    ON SRM.role_principal_id = SP1.principal_id
> LEFT OUTER JOIN sys.server_principals AS SP2
>    ON SRM.member_principal_id = SP2.principal_id
> WHERE SP1.is_fixed_role = 1 -- Remove for SQL Server 2008
> ORDER BY SP1.name;
> ```

> [!NOTE]
> * All logins are members of the public roles and cannot be removed.
> * This query checks tables in the master database, but it can be executed in any database.

To determine who the members of the fixed database roles are, execute the following query in each database.

```sql
SELECT DP1.name AS DatabaseRoleName,
   isnull (DP2.name, 'No members') AS DatabaseUserName
FROM sys.database_role_members AS DRM
RIGHT OUTER JOIN sys.database_principals AS DP1
   ON DRM.role_principal_id = DP1.principal_id
LEFT OUTER JOIN sys.database_principals AS DP2
   ON DRM.member_principal_id = DP2.principal_id
WHERE DP1.is_fixed_role = 1
ORDER BY DP1.name;
```

To understand the permissions granted to each role, see the role descriptions and illustrations in the documentation ([Server-level roles](../../../relational-databases/security/authentication-access/server-level-roles.md) and [Database-level roles](../../../relational-databases/security/authentication-access/database-level-roles.md)).

## <a name="newer-granular-permission-system"></a>Newer granular permission system

This system is flexible, which means it can get complicated when the people who set it up want to be precise. To simplify, you could create roles, grant permissions to roles, and then add groups of people to the roles. It is also easier if the database development team separates activities by schema, and then grants role permissions to an entire schema instead of to individual tables or procedures. Real-world scenarios are complex, and business requirements can create unexpected security requirements.

[!INCLUDE[database-engine-permissions](../../../includes/paragraph-content/database-engine-permissions.md)]

### <a name="security-classes"></a>Security classes

Permissions can be granted at the server level, the database level, the schema level, the object level, and so on. There are 26 levels (called classes). The complete list of classes in alphabetical order is: `APPLICATION ROLE`, `ASSEMBLY`, `ASYMMETRIC KEY`, `AVAILABILITY GROUP`, `CERTIFICATE`, `CONTRACT`, `DATABASE`, `DATABASE SCOPED CREDENTIAL`, `ENDPOINT`, `FULLTEXT CATALOG`, `FULLTEXT STOPLIST`, `LOGIN`, `MESSAGE TYPE`, `OBJECT`, `REMOTE SERVICE BINDING`, `ROLE`, `ROUTE`, `SCHEMA`, `SEARCH PROPERTY LIST`, `SERVER`, `SERVER ROLE`, `SERVICE`, `SYMMETRIC KEY`, `TYPE`, `USER`, and `XML SCHEMA COLLECTION`. (Some classes are not available for certain types of SQL Server.) Complete information about each class requires a different query.

### <a name="principals"></a>Principals

Permissions are granted to principals. Principals can be server roles, logins, database roles, or users. Logins can represent Windows groups that contain many Windows users. Since Windows groups are not managed by SQL Server, SQL Server does not always know who is a member of a Windows group. When a Windows user connects to SQL Server, the login package contains the token of the user's Windows group memberships. When a Windows user connects through a login based on a Windows group, some activities might require SQL Server to create a login or user representing the individual Windows user. For example, a Windows group (Engineers) contains users (Mary, Todd, Pat), and the Engineers group has a database user account. If Mary has permission and creates a table, a user (namely Mary) can be created as the owner of the table. If Todd is denied a permission that the rest of the Engineers group has, then the user Todd must be created so the denial of the permission can be tracked.

Keep in mind that a Windows user could be a member of more than one Windows group (for example, both Engineers and Managers). Permissions granted or denied to the Engineers or Managers logins, to the individual users, or to roles the user is a member of, are all aggregated and evaluated for the effective permissions.

The `HAS_PERMS_BY_NAME` function can show whether a user or login has a specific permission. However, there is no obvious way to determine the source of the grant or denial of the permission. Browse the list of permissions, and perhaps try them out.

## <a name="useful-queries"></a>Useful queries

### <a name="server-permissions"></a>Server permissions

The following query returns a list of permissions that have been granted or denied at the server level. This query should be executed in the master database.

> [!NOTE]
> Server-level permissions cannot be queried or granted on SQL Database or SQL Data Warehouse.
> ```sql
> SELECT pr.type_desc, pr.name,
>   isnull (pe.state_desc, 'No permission statements') AS state_desc,
>   isnull (pe.permission_name, 'No permission statements') AS permission_name
> FROM sys.server_principals AS pr
> LEFT OUTER JOIN sys.server_permissions AS pe
>   ON pr.principal_id = pe.grantee_principal_id
> WHERE is_fixed_role = 0 -- Remove for SQL Server 2008
> ORDER BY pr.name, type_desc;
> ```

### <a name="database-permissions"></a>Database permissions

The following query returns a list of permissions that have been granted or denied at the database level. This query should be executed in each database.

```sql
SELECT pr.type_desc, pr.name,
  isnull (pe.state_desc, 'No permission statements') AS state_desc,
  isnull (pe.permission_name, 'No permission statements') AS permission_name
FROM sys.database_principals AS pr
LEFT OUTER JOIN sys.database_permissions AS pe
  ON pr.principal_id = pe.grantee_principal_id
WHERE pr.is_fixed_role = 0
ORDER BY pr.name, type_desc;
```

Each class of the permission table can be joined with other system views that provide related information about the class of securable. For example, the following query includes the name of the database object affected by the permission.

```sql
SELECT pr.type_desc, pr.name,
  pe.state_desc, pe.permission_name,
  s.name + '.' + oj.name AS Object, major_id
FROM sys.database_principals AS pr
JOIN sys.database_permissions AS pe
  ON pr.principal_id = pe.grantee_principal_id
JOIN sys.objects AS oj
  ON oj.object_id = pe.major_id
JOIN sys.schemas AS s
  ON oj.schema_id = s.schema_id
WHERE class_desc = 'OBJECT_OR_COLUMN';
```

To determine whether a specific user (in this case `TestUser`) has a permission, use the `HAS_PERMS_BY_NAME` function. For example:

```sql
EXECUTE AS USER = 'TestUser';
SELECT HAS_PERMS_BY_NAME ('dbo.T1', 'OBJECT', 'SELECT');
REVERT;
```

For details of the syntax, see [HAS_PERMS_BY_NAME](../../../t-sql/functions/has-perms-by-name-transact-sql.md).

## <a name="see-also"></a>See also

[Getting Started with Database Engine Permissions](../../../relational-databases/security/authentication-access/getting-started-with-database-engine-permissions.md)
[Tutorial: Getting Started with the Database Engine](Tutorial:%20Getting%20Started%20with%20the%20Database%20Engine.md)
# Diesel Demo Application
# switchblade Lightweight deployment to EC2 over SSH for node apps that use forever
---
title: System.ServiceModel.ServiceHostCreation
ms.date: 03/30/2017
ms.assetid: 0b9cb4f7-48bb-4e89-b5c2-d2d22e0e8088
ms.openlocfilehash: 257abe09b8a6e00af8376e79d8a093c762650cf4
ms.sourcegitcommit: bc293b14af795e0e999e3304dd40c0222cf2ffe4
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 11/26/2020
ms.locfileid: "96273827"
---
# <a name="systemservicemodelservicehostcreation"></a>System.ServiceModel.ServiceHostCreation

System.ServiceModel.ServiceHostCreation

## <a name="description"></a>Description

Creating the ServiceHost.

## <a name="see-also"></a>See also

- [Tracing](index.md)
- [Using Tracing to Troubleshoot Your Application](using-tracing-to-troubleshoot-your-application.md)
- [Administration and Diagnostics](../index.md)
# Ichigonica text
---
title: 2019.8.22-python3-functions
tags: Python
categories: Python
---
* TOC
{:toc}

Python 3 function study notes from 2019.8.22

# Notes

~~~Python
# Functions
# n = 7
# def test():
#     global n  # reference the global variable
#     if n % 2 == 0:
#         raise Exception('must be an odd number')
#     mid = n // 2 + 1  # find the middle row
#     i = 1
#     while n > 0:  # loop n times: 1, 2, 3
#         space_num = abs(mid - i)  # spaces for each iteration: 3, 2, 1
#         base_num = mid - space_num  # star count for half of each row: 1, 2, 3
#         star_num = base_num * 2 - 1  # total star count: 1, 3, 5
#         if base_num == 1:
#             print(" " * space_num + "*" * star_num)  # print the first and last rows
#         else:
#             print(" " * space_num + "*" + " " * (star_num - 2) + "*")  # print the hollow rows
#         i += 1
#         n -= 1

# With a parameter
# def test(n):
#     if n % 2 == 0:
#         raise Exception('must be an odd number')
#     mid = n // 2 + 1  # find the middle row
#     i = 1
#     while n > 0:  # loop n times: 1, 2, 3
#         space_num = abs(mid - i)  # spaces for each iteration: 3, 2, 1
#         base_num = mid - space_num  # star count for half of each row: 1, 2, 3
#         star_num = base_num * 2 - 1  # total star count: 1, 3, 5
#         if base_num == 1:
#             print(" " * space_num + "*" * star_num)  # print the first and last rows
#         else:
#             print(" " * space_num + "*" + " " * (star_num - 2) + "*")  # print the hollow rows
#         i += 1
#         n -= 1
# test(7)

# With a return value
# def discount(amount):
#     amount = amount * 0.8
#     return amount
# print(discount(100))

# Recursion
# def test0(n):
#     if n > 16:
#         return n
#     return test0(2 * n)
# print(test0(2))

# Print the sum 1 + 2 + ... + 100
# Recursive sum
# def plus(n):
#     if n == 1:
#         return n
#     else:
#         return n + plus(n - 1)
# print(plus(100))

# Recursive factorial
# def jiecheng(n):
#     if n == 1:
#         return n
#     else:
#         return n * jiecheng(n - 1)
# print(jiecheng(5))

# Fibonacci numbers 1 1 2 3 5 8 13 21 34 55 89
# n = 6 --> 8
# n = 7 --> 13
# def fbo(n):
#     if n == 1 or n == 2:
#         return 1
#     else:
#         return fbo(n-1) + fbo(n-2)
# print(fbo(11))

# Tower of Hanoi algorithm
# def hannota(n, x, y, z):  # n is the number of disks; x, y, z are the pegs
#     if n == 1:  # recursion base case
#         print("move from", x, "to", z)
#         return  # returns None
#     else:
#         hannota(n-1, x, z, y)  # move the top n-1 disks from x to y, using z
#         hannota(1, x, y, z)  # move the bottom disk from x to z
#         hannota(n-1, y, x, z)  # finally move the n-1 disks from y to z, using x -- done
# hannota(9, 'peg X', 'peg Y', 'peg Z')  # pass in the actual arguments

# First-class functions: they can be passed around just like variables
# def fun1():
#     print('running...')
# x = fun1
# b = x
# b()

# Higher-order functions
# sorting with sorted(list, key, reverse) and list.sort(key, reverse)
# fruits = ['strawberry','fig','apple','cherry','raspberry','banana']
# x = sorted(fruits, key=len)
# print(x)
#
# def first_letter(item):  # custom sort key: sort by first letter
#     return item[0]
# y = sorted(fruits, key=first_letter, reverse=True)
# print(y)
# fruits.sort(key=first_letter, reverse=True)
# print(fruits)
# z = sorted(fruits, key=lambda item: item[0])  # anonymous lambda: item is the parameter, item[0] the return value
# print(z)

# Decorators
def dec(func):
    def dec1(*args, **kwargs):  # generic parameter pattern
        print('dec running')
        n = func(*args, **kwargs)  # call the wrapped function and keep its return value
        return n
    return dec1  # return dec1 itself without calling it; calling the decorated name then runs dec1

# def tar():
#     print('running tar()')
# tar = dec(tar)  # dec() adds extra behaviour; tar still runs as usual
# tar()

@dec  # decorator: applied as soon as the .py file runs, even if nothing is called; usually built with nested functions
def tar1():
    print('running tar1()')
    return 1
n = tar1()
print(n)

# Universal (generic) parameter practice
# def all_args(*args, **kwargs):  # args takes positional arguments; kwargs takes keyword arguments key=xx
#     return args, kwargs
# print(all_args(5,4,3,k=1,y=2))
~~~
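The decorator pattern from the notes above can be condensed into a small runnable example. This is a sketch of the same idea, with `functools.wraps` added so the wrapped function keeps its name:

```python
import functools

def dec(func):
    """Decorator that logs before delegating to the wrapped function."""
    @functools.wraps(func)  # preserve func's __name__ and docstring
    def wrapper(*args, **kwargs):  # generic parameter pattern: accepts anything
        print('dec running')
        return func(*args, **kwargs)  # forward the wrapped function's return value
    return wrapper

@dec
def tar1():
    print('running tar1()')
    return 1

result = tar1()  # prints 'dec running' then 'running tar1()'
print(result)    # prints 1
```

Because of `functools.wraps`, `tar1.__name__` stays `'tar1'` instead of becoming `'wrapper'`, which matters when debugging decorated code.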
---
title: Diagnose graphics performance issues in Remote Desktop - Azure
description: This article explains how to diagnose performance issues related to graphics in Windows Virtual Desktop by using the RemoteFX graphics counters in Remote Desktop Protocol sessions.
services: virtual-desktop
author: Heidilohr
ms.service: virtual-desktop
ms.topic: troubleshooting
ms.date: 05/23/2019
ms.author: helohr
manager: lizross
ms.openlocfilehash: 84cee86dbddff77f6142925eec01889cf793a466
ms.sourcegitcommit: 2ec4b3d0bad7dc0071400c2a2264399e4fe34897
ms.translationtype: MT
ms.contentlocale: ko-KR
ms.lasthandoff: 03/28/2020
ms.locfileid: "79127561"
---
# <a name="diagnose-graphics-performance-issues-in-remote-desktop"></a>Diagnose graphics performance issues in Remote Desktop

Counters in the RemoteFX graphics section of Performance Monitor are provided to diagnose quality issues with the experience in a remote session. This article helps you use these counters to find and fix graphics-related performance bottlenecks during a Remote Desktop Protocol (RDP) session.

## <a name="find-your-remote-session-name"></a>Find your remote session name

You need the remote session name to identify the graphics performance counters. Follow the instructions in this section to identify each counter's instance.

1. Open a Windows command prompt in the remote session.
2. Run the **qwinsta** command and find the session name.
   - If the session is hosted on a multi-session virtual machine (VM): each counter's instance is suffixed with the same number that suffixes the session name, such as "rdp-tcp 37".
   - If the session is hosted on a VM that supports a virtual graphics processing unit (vGPU): each counter's instance is stored on the server rather than the VM. Counter instances include the VM name, such as "Win8 Enterprise VM", instead of the session name's number.

>[!NOTE]
> The counters say RemoteFX, but they also include Remote Desktop graphics in vGPU scenarios.

## <a name="access-performance-counters"></a>Access performance counters

After you determine the remote session name, follow these instructions to collect the RemoteFX Graphics performance counters for the remote session.

1. Select **Start** > **Administrative Tools** > **Performance Monitor**.
2. In the **Performance Monitor** dialog, expand **Monitoring Tools**, select **Performance Monitor**, and then select **Add**.
3. In the **Add Counters** dialog, expand the section for **RemoteFX Graphics** in the list of available counters.
4. Select the counters to monitor.
5. In the **Instances of selected object** list, select the specific instances to monitor for the selected counters, and then select **Add**. To select all available counter instances, select **All instances**.
6. After you add the counters, select **OK**. The selected performance counters appear on the Performance Monitor screen.

>[!NOTE]
>Each active session on a host has its own instance of each performance counter.

## <a name="diagnose-issues"></a>Diagnose issues

Graphics-related performance issues usually fall into four categories:

- Low frame rate
- Random stalls
- High input latency
- Poor frame quality

### <a name="addressing-low-frame-rate-random-stalls-and-high-input-latency"></a>Addressing low frame rate, random stalls, and high input latency

First, check the Output Frames/Second counter. It measures the number of frames made available to the client. If this value is less than the Input Frames/Second counter, frames are being skipped. To identify the bottleneck, use the Frames Skipped/Second counters. There are three types of Frames Skipped/Second counters:

- Frames Skipped/Second (Insufficient Server Resources)
- Frames Skipped/Second (Insufficient Network Resources)
- Frames Skipped/Second (Insufficient Client Resources)

A high value for any Frames Skipped/Second counter implies that the problem is related to the resource the counter tracks. For example, if the client doesn't decode and display frames at the same rate the server provides them, the Frames Skipped/Second (Insufficient Client Resources) counter will be high.

If the Output Frames/Second counter matches the Input Frames/Second counter but you still notice unusual lag or stalling, the Average Encoding Time may be the cause. Encoding is a synchronous process that occurs on the server in the single-session (vGPU) scenario and on the VM in the multi-session scenario. Average encoding time should be under 33 ms. If the average encoding time is under 33 ms but you still have performance issues, there may be a problem with the app or operating system you're using. For more information about diagnosing app-related issues, see [User Input Delay performance counters](/windows-server/remote/remote-desktop-services/rds-rdsh-performance-counters/).

RDP supports an average encoding time of 33 ms, which allows an input frame rate of up to 30 frames per second. Note that 30 frames per second is the maximum supported frame rate. In most cases, the frame rate the user experiences will be lower, depending on how often frames are provided to RDP by the source. For example, tasks such as watching a video require a full input frame rate of 30 frames per second, but less compute-intensive tasks, such as occasionally editing a document, can show much lower Input Frames/Second values without degrading the user's quality of experience.

### <a name="addressing-poor-frame-quality"></a>Addressing poor frame quality

Use the Frame Quality counter to diagnose frame quality issues. This counter expresses the quality of the output frame as a percentage of the source frame quality. The quality loss may be due to RemoteFX, or it may be inherent to the graphics source. If RemoteFX caused the quality loss, the issue may be a lack of network or server resources to send higher-fidelity content.

## <a name="mitigation"></a>Mitigation

If server resources are causing the bottleneck, try one of the following approaches to improve performance:

- Reduce the number of sessions per host.
- Increase the server's memory and compute resources.
- Lower the resolution of the connection.

If network resources are causing the bottleneck, try one of the following approaches to improve network availability per session:

- Reduce the number of sessions per host.
- Use a higher-bandwidth network.
- Lower the resolution of the connection.

If client resources are causing the bottleneck, try one of the following approaches to improve performance:

- Install the latest Remote Desktop client.
- Increase the client machine's memory and compute resources.

> [!NOTE]
> The Source Frames/Second counter is not currently supported. For now, the Source Frames/Second counter always displays 0.

## <a name="next-steps"></a>Next steps

- To create a GPU-optimized Azure virtual machine, see [Configure graphics processing unit (GPU) acceleration for your Windows Virtual Desktop environment](configure-vm-gpu.md).
- For an overview of troubleshooting and escalation tracks, see [Troubleshooting overview, feedback, and support](troubleshoot-set-up-overview.md).
- To learn more about the service, see [Windows Virtual Desktop environment](environment-setup.md).
38.87963
278
0.714694
kor_Hang
1.00001
0c4cd6d25001f8748a9c4e0237b2ffcf1937d8e8
4,711
markdown
Markdown
_posts/2011-11-30-question-closed-and-its-probably-best-that-way.markdown
NaeemShaikh/git-github.com-StackExchange-stack-blog
04f94931e5c99cfe434124225353992cb34cb735
[ "MIT" ]
176
2015-07-14T06:39:15.000Z
2021-11-14T17:39:32.000Z
_posts/2011-11-30-question-closed-and-its-probably-best-that-way.markdown
NaeemShaikh/git-github.com-StackExchange-stack-blog
04f94931e5c99cfe434124225353992cb34cb735
[ "MIT" ]
71
2015-07-13T14:09:09.000Z
2019-02-25T21:23:16.000Z
_posts/2011-11-30-question-closed-and-its-probably-best-that-way.markdown
NaeemShaikh/git-github.com-StackExchange-stack-blog
04f94931e5c99cfe434124225353992cb34cb735
[ "MIT" ]
145
2015-07-15T06:53:49.000Z
2022-03-29T09:58:26.000Z
---
author: sbrand
comments: true
date: 2011-11-30 19:12:43+00:00
layout: post
redirect_from: /2011/11/question-closed-and-its-probably-best-that-way
hero:
slug: question-closed-and-its-probably-best-that-way
title: Question [Closed]… and it's probably best that way
wordpress_id: 10556
tags:
- company
- chaos
- community
---

[![](http://chaos.blogoverflow.com/files/2011/11/no-unicorns.jpg)](http://chaos.blogoverflow.com/files/2011/11/no-unicorns.jpg)

If you’ve poked around our network, then you've probably noticed that [we hate fun](http://blog.stackoverflow.com/2010/01/stack-overflow-where-we-hate-fun/) at Stack Exchange. Hard-line Q&A is in our evil DNA. And you know what, I kinda like it that way. But I haven't always been onboard...

Flashback to late September, when I asked the following question at our Skeptics site:

_New York pizza is the best pizza, sure. But is it really because of "the water"?_

I would link you to the question, but it no longer exists; it wasn't just closed, but deleted forever by a moderator because the question did not improve the Internet. Or maybe it was because the question was [“extremely off topic,”](http://skeptics.stackexchange.com/faq#deletion) or because it was based on a false premise, or maybe because I did not prove with a hyperlink that anyone, other than myself, actually believes NY pizza is the best in the world. In any case, my lazy grab at an objective answer for a subjective question is now banished to the sewers of the Internet where only one of our savvy devs can retrieve it.

It's probably best that way.

But at the time, I thought the moderator might have closed my question because of his own personal taste. His surname is spelled with double consonants and ends in a vowel. He must be, I thought, a hardcore pizza traditionalist. But now I know he's just a good mod. I know this because the [Skeptics site](http://skeptics.stackexchange.com/) works. It is one of my favorite sites in the network.

But I also know this because I've seen the light. Even as my questions continue to [get](http://english.stackexchange.com/q/49932/11461) [shut](http://skeptics.stackexchange.com/q/6486/4070) [down](http://android.stackexchange.com/q/15705/6852) across the network, I've come to realize that the conservative school of community moderation is the right school of community moderation, at least for Stack Exchange.

When Joel & Jeff first sat around the campfire and dreamed up Stack Overflow, they did so with an insight in mind: They weren't going to just create a forum where a user can receive an answer. SO (and later, SE) would be a platform to encourage intelligent, invested answers deserving of links across the Internet and useful for generations to come.

Too local? Take it to Yelp. Too easy? Take it to Google. Too subjective? Take it to Quora. Too fun? Take it to Facebook. Stack Exchange is about objectively correct answers that stand the test of time. There is little room here for questions that ask for something less correct or less permanent.

Of course, this focus on “canon” -- a word we find ourselves using a lot around here -- has its drawbacks. Try promoting [Stack Parenting](http://parenting.stackexchange.com/) to moms who want to share personal insights about child rearing. Or try selling the [Bicycles site](http://bicycles.stackexchange.com/) to an overwhelmed blogger who seeks for his readers an online outlet where they can continue "the discussion" he started at his own site. Stack Exchange can do little to instantly appease these Internetters. And that makes my job hard.

But the toughest jobs are very often the most rewarding. (My job does kick ass.) And the most rigorous answer is very often the most helpful. (See [here](http://skeptics.stackexchange.com/questions/4498/does-torture-work-well-as-an-interrogation-technique) for just one of countless examples.)

Hard human work isn’t necessary to participate in Stack Exchange, but power users and bursts of focused use are the biggest assets we’ve got.

Which brings me to yesterday. It was late afternoon. The sky was gray. And I watched over Joel's shoulder as he personally closed [a question](http://travel.stackexchange.com/questions/3677/easiest-way-to-join-the-mile-high-club-mhc) that was causing some buzz at the Travel site. Joel said the question was crude and intentionally provocative. I suggested maybe there wasn’t enough information available to make an assumption about the user's intentions. Joel said maybe, and he proceeded to close the question. I swiveled back to my desk and got back to whatever it was I was doing.

We’re pretty serious at Stack Exchange. And I'm pretty sure we’re better off because of it.
107.068182
660
0.781787
eng_Latn
0.998842
0c4d4dcee0830061989722e22174acc4003df79b
7,724
markdown
Markdown
_posts/2020-11-08-csharp.markdown
donhuvy/donhuvy.github.io
917436b3cc23e16b14cf21872d4c7385875bac5d
[ "BSD-2-Clause" ]
1
2020-10-04T05:13:14.000Z
2020-10-04T05:13:14.000Z
_posts/2020-11-08-csharp.markdown
donhuvy/donhuvy.github.io
917436b3cc23e16b14cf21872d4c7385875bac5d
[ "BSD-2-Clause" ]
null
null
null
_posts/2020-11-08-csharp.markdown
donhuvy/donhuvy.github.io
917436b3cc23e16b14cf21872d4c7385875bac5d
[ "BSD-2-Clause" ]
1
2020-10-28T16:34:29.000Z
2020-10-28T16:34:29.000Z
---
layout: post
title: "C#"
date: 2020-11-08 17:44:00 +0700
categories: csharp
---

### Parallel.Invoke()

File `Program.cs`

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Listing_1._1Parallel_Invoke
{
    class Program
    {
        static void Task1()
        {
            Console.WriteLine("Task 1 starting");
            Thread.Sleep(2000);
            Console.WriteLine("Task 1 ending");
        }

        static void Task2()
        {
            Console.WriteLine("Task 2 starting");
            Thread.Sleep(1000);
            Console.WriteLine("Task 2 ending");
        }

        static void Main(string[] args)
        {
            Parallel.Invoke(() => Task1(), () => Task2());
            Console.WriteLine("Finished processing. Press a key to end.");
            Console.ReadKey();
        }
    }
}
```

### Parallel.ForEach()

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

namespace Listing_1._1Parallel_Invoke
{
    class Program
    {
        static void WorkOnItem(object item)
        {
            Console.WriteLine("Started working on: " + item);
            Thread.Sleep(100);
            Console.WriteLine("Finished working on: " + item);
        }

        static void Main(string[] args)
        {
            var items = Enumerable.Range(0, 500);
            Parallel.ForEach(items, item =>
            {
                WorkOnItem(item);
            });
            Console.WriteLine("Finished processing. Press a key to end.");
            Console.ReadKey();
        }
    }
}
```

### Parallel.For()

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

namespace Listing_1._1Parallel_Invoke
{
    class Program
    {
        static void WorkOnItem(object item)
        {
            Console.WriteLine("Started working on: " + item);
            Thread.Sleep(100);
            Console.WriteLine("Finished working on: " + item);
        }

        static void Main(string[] args)
        {
            var items = Enumerable.Range(0, 500).ToArray();
            Parallel.For(0, items.Length, i =>
            {
                WorkOnItem(items[i]);
            });
            Console.WriteLine("Finished processing. Press a key to end.");
            Console.ReadKey();
        }
    }
}
```

### ParallelLoopResult

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

namespace Listing_1._1Parallel_Invoke
{
    class Program
    {
        static void WorkOnItem(object item)
        {
            Console.WriteLine("Started working on: " + item);
            Thread.Sleep(100);
            Console.WriteLine("Finished working on: " + item);
        }

        static void Main(string[] args)
        {
            var items = Enumerable.Range(0, 500).ToArray();
            ParallelLoopResult result = Parallel.For(0, items.Count(),
                (int i, ParallelLoopState loopState) =>
                {
                    if (i == 200)
                        loopState.Stop();
                    WorkOnItem(items[i]);
                });
            Console.WriteLine("Completed: " + result.IsCompleted);
            Console.WriteLine("Items: " + result.LowestBreakIteration);
            Console.WriteLine("Finished processing. Press a key to end.");
            Console.ReadKey();
        }
    }
}
```

### LINQ

```csharp
using System;
using System.Linq;

namespace Listing_1._5_A_parallel_LINQ_query
{
    class Program
    {
        class Person
        {
            public string Name { get; set; }
            public string City { get; set; }
        }

        static void Main(string[] args)
        {
            Person[] people = new Person[] {
                new Person { Name = "Alan", City = "Hull" },
                new Person { Name = "Beryl", City = "Seatle" },
                new Person { Name = "Charles", City = "London" },
                new Person { Name = "David", City = "Seatle" },
                new Person { Name = "Eddy", City = "Paris" },
                new Person { Name = "Fred", City = "Berlin" },
                new Person { Name = "Gordon", City = "Hull" },
                new Person { Name = "Henry", City = "Seatle" },
                new Person { Name = "Issac", City = "Seatle" },
                new Person { Name = "James", City = "London" }
            };

            var result = from person in people.AsParallel()
                         where person.City == "Seatle"
                         select person;

            foreach (var person in result)
                Console.WriteLine(person.Name);

            Console.WriteLine("Finished processing. Press a key to end.");
            Console.Read();
        }
    }
}
```

### Configuring the parallel query further

```csharp
using System;
using System.Linq;

namespace Listing_1._5_A_parallel_LINQ_query
{
    class Program
    {
        class Person
        {
            public string Name { get; set; }
            public string City { get; set; }
        }

        static void Main(string[] args)
        {
            Person[] people = new Person[] {
                new Person { Name = "Alan", City = "Hull" },
                new Person { Name = "Beryl", City = "Seatle" },
                new Person { Name = "Charles", City = "London" },
                new Person { Name = "David", City = "Seatle" },
                new Person { Name = "Eddy", City = "Paris" },
                new Person { Name = "Fred", City = "Berlin" },
                new Person { Name = "Gordon", City = "Hull" },
                new Person { Name = "Henry", City = "Seatle" },
                new Person { Name = "Issac", City = "Seatle" },
                new Person { Name = "James", City = "London" }
            };

            var result = from person in people.AsParallel()
                             .WithDegreeOfParallelism(4)
                             .WithExecutionMode(ParallelExecutionMode.ForceParallelism)
                         where person.City == "Seatle"
                         select person;

            foreach (var person in result)
                Console.WriteLine(person.Name);

            Console.WriteLine("Finished processing. Press a key to end.");
            Console.Read();
        }
    }
}
```

### Keep order

```csharp
using System;
using System.Linq;

namespace Listing_1._5_A_parallel_LINQ_query
{
    class Program
    {
        class Person
        {
            public string Name { get; set; }
            public string City { get; set; }
        }

        static void Main(string[] args)
        {
            Person[] people = new Person[] {
                new Person { Name = "Alan", City = "Hull" },
                new Person { Name = "Beryl", City = "Seatle" },
                new Person { Name = "Charles", City = "London" },
                new Person { Name = "David", City = "Seatle" },
                new Person { Name = "Eddy", City = "Paris" },
                new Person { Name = "Fred", City = "Berlin" },
                new Person { Name = "Gordon", City = "Hull" },
                new Person { Name = "Henry", City = "Seatle" },
                new Person { Name = "Issac", City = "Seatle" },
                new Person { Name = "James", City = "London" }
            };

            var result = from person in people.AsParallel().AsOrdered()
                         where person.City == "Seatle"
                         select person;

            foreach (var person in result)
                Console.WriteLine(person.Name);

            Console.WriteLine("Finished processing. Press a key to end.");
            Console.Read();
        }
    }
}
```

### Get a part of a stream

```csharp
```
27.101754
110
0.510487
yue_Hant
0.463523
0c4d575620cbf84648909faa8269b9d722394bc0
2,041
md
Markdown
portfolio/_posts/2018-03-2-myReplace.md
moralss/moralss.github.io
e2d1bf89ed1c2ab39ad6738ff3a0cb090c16dd66
[ "MIT" ]
null
null
null
portfolio/_posts/2018-03-2-myReplace.md
moralss/moralss.github.io
e2d1bf89ed1c2ab39ad6738ff3a0cb090c16dd66
[ "MIT" ]
null
null
null
portfolio/_posts/2018-03-2-myReplace.md
moralss/moralss.github.io
e2d1bf89ed1c2ab39ad6738ff3a0cb090c16dd66
[ "MIT" ]
null
null
null
---
Post:
Title: "myReplace"
Date: 2018-02-16
Categories:
---

# freeCodeCampChallenges

## Introduction

## Instructions

Perform a search and replace on the sentence using the arguments provided and return the new sentence.

The first argument is the sentence to perform the search and replace on. The second argument is the word that you will be replacing (before). The third argument is what you will be replacing the second argument with (after).

NOTE: Preserve the case of the original word when you are replacing it. For example, if you mean to replace the word "Book" with the word "dog", it should be replaced as "Dog".

## Solution

```javascript
function myReplace(str, before, after) {
  // Split the sentence into words and find the position of the word to replace.
  var words = str.split(' ');
  var index = words.indexOf(before);

  // Compare the first character of "before" with its uppercase form to decide
  // whether the replacement word should be capitalized.
  var firstChar = before.charAt(0);
  var finalString;

  if (firstChar === firstChar.toUpperCase()) {
    words[index] = after.charAt(0).toUpperCase() + after.slice(1);
  } else {
    words[index] = after;
  }
  finalString = words.join(' ');
  return finalString;
}

console.log(myReplace("A quick brown fox jumped over the lazy dog", "jumped", "leaped"));
console.log(myReplace("He is Sleeping on the couch", "Sleeping", "sitting"));
```

I started by creating my variables: one to convert the string into an array of words, one to find the index of the word that had to be changed, and one to hold the first character of the `before` parameter. I then wrote an if statement that compares that first character to its uppercase form to determine whether `before` starts with a capital letter. If it does, `after` is capitalized before it replaces `before` in the array; otherwise `after` is inserted as-is. Finally the array is joined back into a string, which is returned.

## Conclusion

Learned about `charAt`. Had so much fun.
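For comparison, the same case-preserving replacement can be done without splitting the sentence into an array, using `String.prototype.replace` directly. This is a sketch, not part of the original challenge solution; `myReplaceCompact` is a hypothetical name chosen here.

```javascript
// Hypothetical compact alternative: replace "before" with "after" while
// preserving the capitalization of the first letter of the replaced word.
function myReplaceCompact(str, before, after) {
  // If "before" starts with an uppercase letter, capitalize "after" too;
  // otherwise force "after" to start lowercase.
  var isCapitalized = before[0] === before[0].toUpperCase();
  var replacement = isCapitalized
    ? after[0].toUpperCase() + after.slice(1)
    : after[0].toLowerCase() + after.slice(1);
  return str.replace(before, replacement);
}

console.log(myReplaceCompact("He is Sleeping on the couch", "Sleeping", "sitting"));
// "He is Sitting on the couch"
```

This sidesteps the index bookkeeping, at the cost of replacing only the first occurrence of `before` (which is all the challenge requires).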
28.347222
265
0.721705
eng_Latn
0.995483
0c4d5ab16522bc4da7abab8c83a7015a46c24934
1,340
md
Markdown
docs/csharp/misc/cs0171.md
meterpaffay/docs.de-de
1e51b03044794a06ba36bbc139a23b738ca9967a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/misc/cs0171.md
meterpaffay/docs.de-de
1e51b03044794a06ba36bbc139a23b738ca9967a
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/csharp/misc/cs0171.md
meterpaffay/docs.de-de
1e51b03044794a06ba36bbc139a23b738ca9967a
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Compiler Error CS0171
ms.date: 07/20/2015
f1_keywords:
- CS0171
helpviewer_keywords:
- CS0171
ms.assetid: 8c1d76c9-1048-4579-9031-23e3566e6288
ms.openlocfilehash: ed53f0a261729bd2446c4dd2259042aca28b9974
ms.sourcegitcommit: 44a7cd8687f227fc6db3211ccf4783dc20235e51
ms.translationtype: MT
ms.contentlocale: de-DE
ms.lasthandoff: 02/26/2020
ms.locfileid: "77627915"
---
# <a name="compiler-error-cs0171"></a>Compiler Error CS0171

Backing field for automatically implemented property 'name' must be fully assigned before control is returned to the caller. Consider calling the parameterless constructor from a constructor initializer.

A constructor in a [struct](../language-reference/builtin-types/struct.md) must initialize all fields in the struct. For more information, see [Constructors](../programming-guide/classes-and-structs/constructors.md).

The following sample generates CS0171:

```csharp
// CS0171.cs
struct MyStruct
{
    MyStruct(int initField)   // CS0171
    {
        // i = initField;   // uncomment this line to resolve this error
    }
    public int i;
}

class MyClass
{
    public static void Main()
    {
        MyStruct aStruct = new MyStruct();
    }
}
```
31.904762
272
0.742537
deu_Latn
0.776664
0c4dc97c6c6d38b02481f092c10a2843fc56e709
565
md
Markdown
_project/a-family-lifestyle-website-that-features-delicious-and-easy-recipes-pregnancy-and-parenting-tips-travel.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
_project/a-family-lifestyle-website-that-features-delicious-and-easy-recipes-pregnancy-and-parenting-tips-travel.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
_project/a-family-lifestyle-website-that-features-delicious-and-easy-recipes-pregnancy-and-parenting-tips-travel.md
rumnamanya/rumnamanya.github.io
2deadeff04c8a48cf683b885b7fa6ab9acc1d9d9
[ "MIT" ]
null
null
null
---
layout: project_single
title: "A family lifestyle website that features delicious and easy recipes, pregnancy and parenting tips, travel tips and reviews, and more - all to make your life easier! #diychristmas"
slug: "a-family-lifestyle-website-that-features-delicious-and-easy-recipes-pregnancy-and-parenting-tips-travel"
parent: "diy-parenting-hacks-to-make-your-life-easier"
---
A family lifestyle website that features delicious and easy recipes, pregnancy and parenting tips, travel tips and reviews, and more - all to make your life easier! #diychristmas
80.714286
188
0.79823
eng_Latn
0.965678
0c4dd33cf4d22c4c7973fe9a955547b8d5e979f8
13,578
md
Markdown
reference/docs-conceptual/dsc/pull-server/pullClientConfigID.md
lisandros82/powerShell-Docs.es-es
9224ebf3471e84ddf3d9367b6044b4dd159e8c21
[ "CC-BY-4.0", "MIT" ]
null
null
null
reference/docs-conceptual/dsc/pull-server/pullClientConfigID.md
lisandros82/powerShell-Docs.es-es
9224ebf3471e84ddf3d9367b6044b4dd159e8c21
[ "CC-BY-4.0", "MIT" ]
null
null
null
reference/docs-conceptual/dsc/pull-server/pullClientConfigID.md
lisandros82/powerShell-Docs.es-es
9224ebf3471e84ddf3d9367b6044b4dd159e8c21
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
ms.date: 12/12/2018
keywords: dsc,powershell,configuration,setup
title: Set up a pull client using configuration IDs in PowerShell 5.0 and later
ms.openlocfilehash: bd173a1079b916c450a0292dca7a595a9bcff985
ms.sourcegitcommit: debd2b38fb8070a7357bf1a4bf9cc736f3702f31
ms.translationtype: HT
ms.contentlocale: es-ES
ms.lasthandoff: 12/05/2019
ms.locfileid: "74417239"
---
# <a name="set-up-a-pull-client-using-configuration-ids-in-powershell-50-and-later"></a>Set up a pull client using configuration IDs in PowerShell 5.0 and later

> Applies to: Windows PowerShell 5.0

> [!IMPORTANT]
> The pull server (Windows feature *DSC-Service*) is a supported component of Windows Server; however, there are no plans to offer new features or capabilities. It is recommended to begin transitioning managed clients to [Azure Automation DSC](/azure/automation/automation-dsc-getting-started) (includes features beyond the pull server on Windows Server) or one of the community solutions listed [here](pullserver.md#community-solutions-for-pull-service).

Before setting up a pull client, you should set up a pull server. Although this order is not required, it helps with troubleshooting and helps you ensure that the registration was successful. To set up a pull server, you can use the following guides:

- [Set up a DSC SMB pull server](pullServerSmb.md)
- [Set up a DSC HTTP pull server](pullServer.md)

Each target node can be configured to download configurations and resources, and even to report its status. The following sections show how to configure a pull client with an SMB share or an HTTP DSC pull server. When the node's LCM refreshes, it reaches out to the configured location to download any assigned configurations. If any required resources do not exist on the node, it automatically downloads them from the configured location. If the node is configured with a [report server](reportServer.md), it then reports the status of the operation.

> [!NOTE]
> This topic applies to PowerShell 5.0. For information on setting up a pull client in PowerShell 4.0, see [Set up a pull client using configuration ID in PowerShell 4.0](pullClientConfigID4.md).

## <a name="configure-the-pull-client-lcm"></a>Configure the pull client LCM

Running any of the examples below creates a new output folder named **PullClientConfigID** and puts a metaconfiguration MOF file there. In this case, the metaconfiguration MOF file will be named `localhost.meta.mof`.

To apply the configuration, call the **Set-DscLocalConfigurationManager** cmdlet with **Path** set to the location of the metaconfiguration MOF file. For example:

```powershell
Set-DSCLocalConfigurationManager -ComputerName localhost -Path .\PullClientConfigId -Verbose
```

## <a name="configuration-id"></a>Configuration ID

The examples below set the **ConfigurationID** property of the LCM to a **GUID** that was previously created for this purpose. The **ConfigurationID** property is what the LCM uses to find the appropriate configuration on the pull server. The configuration MOF file on the pull server must be named `ConfigurationID.mof`, where *ConfigurationID* is the value of the **ConfigurationID** property of the target node's LCM. For more information, see [Publish configurations to a pull server (v4/v5)](publishConfigs.md).

You can create a random **GUID** using the following example, or by using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) cmdlet.

```powershell
[System.Guid]::NewGuid()
```

For more information about using **GUIDs** in your environment, see [Plan for GUIDs](/powershell/scripting/dsc/secureserver#guids).

## <a name="set-up-a-pull-client-to-download-configurations"></a>Set up a pull client to download configurations

Each client must be configured in **Pull** mode and given the URL of the pull server where its configuration is stored. To do this, you have to configure the Local Configuration Manager (LCM) with the necessary information. To configure the LCM, you create a special type of configuration, decorated with the **DSCLocalConfigurationManager** attribute. For more information about configuring the LCM, see [Configuring the Local Configuration Manager](../managing-nodes/metaConfig.md).

### <a name="http-dsc-pull-server"></a>HTTP DSC pull server

The following script configures the LCM to pull configurations from a server named "CONTOSO-PullSrv".

```powershell
[DSCLocalConfigurationManager()]
configuration PullClientConfigID
{
    Node localhost
    {
        Settings
        {
            RefreshMode = 'Pull'
            ConfigurationID = '1d545e3b-60c3-47a0-bf65-5afc05182fd0'
            RefreshFrequencyMins = 30
            RebootNodeIfNeeded = $true
        }

        ConfigurationRepositoryWeb CONTOSO-PullSrv
        {
            ServerURL = 'https://CONTOSO-PullSrv:8080/PSDSCPullServer.svc'
        }
    }
}
PullClientConfigID
```

In the script, the **ConfigurationRepositoryWeb** block defines the pull server. **ServerUrl** specifies the URL of the DSC pull server.

### <a name="smb-share"></a>SMB share

The following script configures the LCM to pull configurations from the SMB share `\\SMBPullServer\Pull`.

```powershell
[DSCLocalConfigurationManager()]
configuration PullClientConfigID
{
    Node localhost
    {
        Settings
        {
            RefreshMode = 'Pull'
            ConfigurationID = '1d545e3b-60c3-47a0-bf65-5afc05182fd0'
            RefreshFrequencyMins = 30
            RebootNodeIfNeeded = $true
        }

        ConfigurationRepositoryShare SMBPullServer
        {
            SourcePath = '\\SMBPullServer\Pull'
        }
    }
}
PullClientConfigID
```

In the script, the **ConfigurationRepositoryShare** block defines the pull server, which in this case is simply an SMB share.

## <a name="set-up-a-pull-client-to-download-resources"></a>Set up a pull client to download resources

If you specify only a **ConfigurationRepositoryWeb** or **ConfigurationRepositoryShare** block in the LCM configuration (as in the previous examples), the pull client pulls resources from the same location where it retrieves its configurations. You can also specify separate locations for resources. To specify a resource location as a separate server, use the **ResourceRepositoryWeb** block. To specify a resource location as an SMB share, use the **ResourceRepositoryShare** block.

> [!NOTE]
> You can combine **ConfigurationRepositoryWeb** with **ResourceRepositoryShare**, or **ConfigurationRepositoryShare** with **ResourceRepositoryWeb**. Examples of this are not shown below.

### HTTP DSC pull server

The following metaconfiguration configures a pull client to get its configurations from **CONTOSO-PullSrv** and its resources from **CONTOSO-ResourceSrv**.

```powershell
[DSCLocalConfigurationManager()]
configuration PullClientConfigID
{
    Node localhost
    {
        Settings
        {
            RefreshMode = 'Pull'
            ConfigurationID = '1d545e3b-60c3-47a0-bf65-5afc05182fd0'
            RefreshFrequencyMins = 30
            RebootNodeIfNeeded = $true
        }

        ConfigurationRepositoryWeb CONTOSO-PullSrv
        {
            ServerURL = 'https://CONTOSO-PullSrv:8080/PSDSCPullServer.svc'
        }

        ResourceRepositoryWeb CONTOSO-ResourceSrv
        {
            ServerURL = 'https://CONTOSO-REsourceSrv:8080/PSDSCPullServer.svc'
        }
    }
}
PullClientConfigID
```

### SMB share

The following example shows a metaconfiguration that sets up a client to pull configurations from the SMB share `\\SMBPullServer\Configurations` and resources from the SMB share `\\SMBPullServer\Resources`.

```powershell
[DSCLocalConfigurationManager()]
configuration PullClientConfigID
{
    Node localhost
    {
        Settings
        {
            RefreshMode = 'Pull'
            ConfigurationID = '1d545e3b-60c3-47a0-bf65-5afc05182fd0'
            RefreshFrequencyMins = 30
            RebootNodeIfNeeded = $true
        }

        ConfigurationRepositoryShare SMBPullServer
        {
            SourcePath = '\\SMBPullServer\Configurations'
        }

        ResourceRepositoryShare SMBResourceServer
        {
            SourcePath = '\\SMBPullServer\Resources'
        }
    }
}
PullClientConfigID
```

#### <a name="automatically-download-resources-in-push-mode"></a>Automatically download resources in push mode

Beginning in PowerShell 5.0, pull clients can download modules from an SMB share even when they are configured for **Push** mode. This is especially useful in scenarios where you do not want to set up a pull server. The **ResourceRepositoryShare** block can be used without specifying a **ConfigurationRepositoryShare**. The following example shows a metaconfiguration that sets up a client to pull resources from the SMB share `\\SMBPullServer\Resources`. When the node is **PUSHED** a configuration, it automatically downloads any required resources from the specified share.

```powershell
[DSCLocalConfigurationManager()]
configuration PullClientConfigID
{
    Node localhost
    {
        Settings
        {
            RefreshMode = 'Push'
            ConfigurationID = '1d545e3b-60c3-47a0-bf65-5afc05182fd0'
        }

        ResourceRepositoryShare SMBResourceServer
        {
            SourcePath = '\\SMBPullServer\Resources'
        }
    }
}
PullClientConfigID
```

## <a name="set-up-a-pull-client-to-report-status"></a>Set up a pull client to report status

By default, nodes do not send reports to a configured pull server. You can use a single pull server for configurations, resources, and reporting, but you must create a **ReportServerWeb** block to set up reporting.

### HTTP DSC pull server

The following example shows a metaconfiguration that sets up a client to send report data, and pull configurations and resources, to a single pull server.

```powershell
[DSCLocalConfigurationManager()]
configuration PullClientConfigID
{
    Node localhost
    {
        Settings
        {
            RefreshMode = 'Pull'
            ConfigurationID = '1d545e3b-60c3-47a0-bf65-5afc05182fd0'
            RefreshFrequencyMins = 30
            RebootNodeIfNeeded = $true
        }

        ConfigurationRepositoryWeb CONTOSO-PullSrv
        {
            ServerURL = 'https://CONTOSO-PullSrv:8080/PSDSCPullServer.svc'
        }

        ReportServerWeb CONTOSO-PullSrv
        {
            ServerURL = 'https://CONTOSO-PullSrv:8080/PSDSCPullServer.svc'
        }
    }
}
PullClientConfigID
```

To specify a report server, use a **ReportServerWeb** block. A report server cannot be an SMB server. The following metaconfiguration configures a pull client to get its configurations from **CONTOSO-PullSrv** and its resources from **CONTOSO-ResourceSrv**, and to send status reports to **CONTOSO-ReportSrv**:

```powershell
[DSCLocalConfigurationManager()]
configuration PullClientConfigID
{
    Node localhost
    {
        Settings
        {
            RefreshMode = 'Pull'
            ConfigurationID = '1d545e3b-60c3-47a0-bf65-5afc05182fd0'
            RefreshFrequencyMins = 30
            RebootNodeIfNeeded = $true
        }

        ConfigurationRepositoryWeb CONTOSO-PullSrv
        {
            ServerURL = 'https://CONTOSO-PullSrv:8080/PSDSCPullServer.svc'
        }

        ResourceRepositoryWeb CONTOSO-ResourceSrv
        {
            ServerURL = 'https://CONTOSO-REsourceSrv:8080/PSDSCPullServer.svc'
        }

        ReportServerWeb CONTOSO-ReportSrv
        {
            ServerURL = 'https://CONTOSO-REsourceSrv:8080/PSDSCPullServer.svc'
        }
    }
}
PullClientConfigID
```

### SMB share

A report server cannot be an SMB share.

## <a name="next-steps"></a>Next steps

Now that the pull client has been set up, you can use the following guides for the next steps:

- [Publish configurations to a pull server (v4/v5)](publishConfigs.md)
- [Package and upload resources to a pull server (v4)](package-upload-resources.md)

## <a name="see-also"></a>See also

* [Set up a pull client using configuration names](pullClientConfigNames.md)
45.563758
686
0.74142
spa_Latn
0.922804
0c4fbe282d0fd33d8c03d2755dd932073f1123d3
5,461
md
Markdown
reference/5.0/Microsoft.PowerShell.Utility/New-TimeSpan.md
jwmoss/PowerShell-Docs
25ae434ae90eaa2b64f16a721d557d790972c331
[ "CC-BY-4.0", "MIT" ]
1
2021-07-13T15:43:44.000Z
2021-07-13T15:43:44.000Z
reference/5.0/Microsoft.PowerShell.Utility/New-TimeSpan.md
jwmoss/PowerShell-Docs
25ae434ae90eaa2b64f16a721d557d790972c331
[ "CC-BY-4.0", "MIT" ]
1
2016-12-28T14:28:54.000Z
2016-12-28T14:28:54.000Z
reference/5.0/Microsoft.PowerShell.Utility/New-TimeSpan.md
jwmoss/PowerShell-Docs
25ae434ae90eaa2b64f16a721d557d790972c331
[ "CC-BY-4.0", "MIT" ]
1
2021-07-13T15:43:21.000Z
2021-07-13T15:43:21.000Z
---
ms.date: 06/09/2017
schema: 2.0.0
locale: en-us
keywords: powershell,cmdlet
online version: http://go.microsoft.com/fwlink/?LinkId=821837
external help file: Microsoft.PowerShell.Commands.Utility.dll-Help.xml
title: New-TimeSpan
---

# New-TimeSpan

## SYNOPSIS
Creates a TimeSpan object.

## SYNTAX

### Date (Default)
```
New-TimeSpan [[-Start] <DateTime>] [[-End] <DateTime>] [<CommonParameters>]
```

### Time
```
New-TimeSpan [-Days <Int32>] [-Hours <Int32>] [-Minutes <Int32>] [-Seconds <Int32>] [<CommonParameters>]
```

## DESCRIPTION
The **New-TimeSpan** cmdlet creates a **TimeSpan** object that represents a time interval.
You can use a **TimeSpan** object to add or subtract time from **DateTime** objects.

Without parameters, a **New-TimeSpan** command returns a **TimeSpan** object that represents a time interval of zero.

## EXAMPLES

### Example 1: Create a TimeSpan object for a specified duration
```
PS C:\> $TimeSpan = New-TimeSpan -Hour 1 -Minute 25
```

This command creates a **TimeSpan** object with a duration of 1 hour and 25 minutes and stores it in a variable named $TimeSpan.
It displays a representation of the **TimeSpan** object.

### Example 2: Create a TimeSpan object for a time interval
```
PS C:\> new-timespan -end (get-date -year 2010 -month 1 -day 1)
```

This example creates a new **TimeSpan** object that represents the interval between the time that the command is run and January 1, 2010.

This command does not require the *Start* parameter, because the default value of the *Start* parameter is the current date and time.

### Example 3: Get the date 90 days from the current date
```
PS C:\> $90days = New-TimeSpan -Days 90
PS C:\> (Get-Date) + $90days
```

These commands return the date that is 90 days after the current date.
### Example 4: Discover the TimeSpan since a file was updated
```
PS C:\> dir $pshome\en-us\about_remote.help.txt | New-TimeSpan
Days              : 321
Hours             : 21
Minutes           : 59
Seconds           : 22
Milliseconds      : 312
Ticks             : 278135623127728
TotalDays         : 321.916230471907
TotalHours        : 7725.98953132578
TotalMinutes      : 463559.371879547
TotalSeconds      : 27813562.3127728
TotalMilliseconds : 27813562312.7728

PS C:\> # Equivalent to:
PS C:\> New-TimeSpan -Start (dir $pshome\en-us\about_remote.help.txt).lastwritetime
```

This command tells you how long it has been since the about_remote.help.txt file was last updated.
You can use this command format on any file, and on any other object that has a **LastWriteTime** property.

This command works because the *Start* parameter of **New-TimeSpan** has an alias of LastWriteTime.
When you pipe an object that has a **LastWriteTime** property to **New-TimeSpan**, Windows PowerShell uses the value of the **LastWriteTime** property as the value of the *Start* parameter.

## PARAMETERS

### -Days
Specifies the days in the time span.
The default value is 0.

```yaml
Type: Int32
Parameter Sets: Time
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -End
Specifies the end of a time span.
The default value is the current date and time.

```yaml
Type: DateTime
Parameter Sets: Date
Aliases:

Required: False
Position: 1
Default value: None
Accept pipeline input: True (ByPropertyName)
Accept wildcard characters: False
```

### -Hours
Specifies the hours in the time span.
The default value is zero.

```yaml
Type: Int32
Parameter Sets: Time
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Minutes
Specifies the minutes in the time span.
The default value is 0.
```yaml
Type: Int32
Parameter Sets: Time
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Seconds
Specifies the length of the time span in seconds.
The default value is 0.

```yaml
Type: Int32
Parameter Sets: Time
Aliases:

Required: False
Position: Named
Default value: None
Accept pipeline input: False
Accept wildcard characters: False
```

### -Start
Specifies the start of a time span.
Enter a string that represents the date and time, such as "3/15/09", or a **DateTime** object, such as one from a Get-Date command.
The default value is the current date and time.

You can use *Start* or its alias, LastWriteTime.
The LastWriteTime alias lets you pipe objects that have a **LastWriteTime** property, such as files in the file system (System.IO.FileInfo), to the *Start* parameter of **New-TimeSpan**.

```yaml
Type: DateTime
Parameter Sets: Date
Aliases: LastWriteTime

Required: False
Position: 0
Default value: None
Accept pipeline input: True (ByPropertyName, ByValue)
Accept wildcard characters: False
```

### CommonParameters
This cmdlet supports the common parameters: -Debug, -ErrorAction, -ErrorVariable, -InformationAction, -InformationVariable, -OutVariable, -OutBuffer, -PipelineVariable, -Verbose, -WarningAction, and -WarningVariable.
For more information, see about_CommonParameters (http://go.microsoft.com/fwlink/?LinkID=113216).

## INPUTS

### System.DateTime
You can pipe a **DateTime** object that represents the start time to **New-TimeSpan**.

## OUTPUTS

### System.TimeSpan
**New-TimeSpan** returns an object that represents the time span.

## NOTES

## RELATED LINKS

[Get-Date](Get-Date.md)

[Set-Date](Set-Date.md)
26.509709
314
0.732833
eng_Latn
0.862582
0c5023e0763d93658357571f646caeee21f76220
3,568
md
Markdown
distribution/gcp/Item/Networking/CloudCdn.md
tmorin/plantuml-libs
2c71c27bb6f9aae3013a3140eab2fd4f994e60c5
[ "MIT" ]
71
2020-02-01T06:58:53.000Z
2022-03-16T14:58:44.000Z
distribution/gcp/Item/Networking/CloudCdn.md
tmorin/plantuml-libs
2c71c27bb6f9aae3013a3140eab2fd4f994e60c5
[ "MIT" ]
9
2020-05-08T10:39:42.000Z
2022-01-24T08:22:18.000Z
distribution/gcp/Item/Networking/CloudCdn.md
tmorin/plantuml-libs
2c71c27bb6f9aae3013a3140eab2fd4f994e60c5
[ "MIT" ]
21
2020-01-11T20:50:13.000Z
2021-09-29T16:21:28.000Z
# CloudCdn

```text
gcp/Item/Networking/CloudCdn
```

```text
include('gcp/Item/Networking/CloudCdn')
```

| Illustration | CloudCdn | CloudCdnCard | CloudCdnGroup |
| :---: | :---: | :---: | :---: |
| ![illustration for Illustration](../../../gcp/Item/Networking/CloudCdn.png) | ![illustration for CloudCdn](../../../gcp/Item/Networking/CloudCdn.Local.png) | ![illustration for CloudCdnCard](../../../gcp/Item/Networking/CloudCdnCard.Local.png) | ![illustration for CloudCdnGroup](../../../gcp/Item/Networking/CloudCdnGroup.Local.png) |

## CloudCdn

### Load remotely

```plantuml
@startuml
' configures the library
!global $LIB_BASE_LOCATION="https://github.com/tmorin/plantuml-libs/distribution"

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('gcp/bootstrap')

' loads the Item which embeds the element CloudCdn
include('gcp/Item/Networking/CloudCdn')

' renders the element
CloudCdn('CloudCdn', 'Cloud Cdn', 'an optional tech label')
@enduml
```

### Load locally

```plantuml
@startuml
' configures the library
!global $INCLUSION_MODE="local"
!global $LIB_BASE_LOCATION="../../.."

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('gcp/bootstrap')

' loads the Item which embeds the element CloudCdn
include('gcp/Item/Networking/CloudCdn')

' renders the element
CloudCdn('CloudCdn', 'Cloud Cdn', 'an optional tech label')
@enduml
```

## CloudCdnCard

### Load remotely

```plantuml
@startuml
' configures the library
!global $LIB_BASE_LOCATION="https://github.com/tmorin/plantuml-libs/distribution"

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('gcp/bootstrap')

' loads the Item which embeds the element CloudCdnCard
include('gcp/Item/Networking/CloudCdn')

' renders the element
CloudCdnCard('CloudCdnCard', 'Cloud Cdn Card', 'an optional description')
@enduml
```

### Load locally

```plantuml
@startuml
' configures the library
!global $INCLUSION_MODE="local"
!global $LIB_BASE_LOCATION="../../.."

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('gcp/bootstrap')

' loads the Item which embeds the element CloudCdnCard
include('gcp/Item/Networking/CloudCdn')

' renders the element
CloudCdnCard('CloudCdnCard', 'Cloud Cdn Card', 'an optional description')
@enduml
```

## CloudCdnGroup

### Load remotely

```plantuml
@startuml
' configures the library
!global $LIB_BASE_LOCATION="https://github.com/tmorin/plantuml-libs/distribution"

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('gcp/bootstrap')

' loads the Item which embeds the element CloudCdnGroup
include('gcp/Item/Networking/CloudCdn')

' renders the element
CloudCdnGroup('CloudCdnGroup', 'Cloud Cdn Group', 'an optional tech label') {
    note as note
        the content of the group
    end note
}
@enduml
```

### Load locally

```plantuml
@startuml
' configures the library
!global $INCLUSION_MODE="local"
!global $LIB_BASE_LOCATION="../../.."

' loads the library's bootstrap
!include $LIB_BASE_LOCATION/bootstrap.puml

' loads the package bootstrap
include('gcp/bootstrap')

' loads the Item which embeds the element CloudCdnGroup
include('gcp/Item/Networking/CloudCdn')

' renders the element
CloudCdnGroup('CloudCdnGroup', 'Cloud Cdn Group', 'an optional tech label') {
    note as note
        the content of the group
    end note
}
@enduml
```
22.582278
337
0.738789
kor_Hang
0.407436
0c5068968b132b093938b57a0aca1412633cc99d
997
md
Markdown
README.md
tehuel/retrotanks-2600
a81626d6cf7012fe3de0f55fa06fdccef1a226c2
[ "MIT" ]
null
null
null
README.md
tehuel/retrotanks-2600
a81626d6cf7012fe3de0f55fa06fdccef1a226c2
[ "MIT" ]
null
null
null
README.md
tehuel/retrotanks-2600
a81626d6cf7012fe3de0f55fa06fdccef1a226c2
[ "MIT" ]
null
null
null
# retrotanks-2600

Game made for the mini contest in the BennuGD forum
http://forum.bennugd.org/index.php?topic=4390.0

# Planned Features

- 2 players
  - (maybe AI, need to test how hard it is to do)
- 3 starting lives for each player
  - the player who eliminates the opponent 3 times wins
- the tank advances automatically; the player controls the direction
  - (need to test GTA-style and Pac-Man-style steering)
- tanks can have an energy level and lose it with each hit
- several arenas to choose from
  - (the arenas should just be the hardness maps directly, to keep it simple)
  - the advanced version could have different terrain types:
    - terrain that slows you down
    - doors that open and close
    - terrain that damages you
- there are various powerups
  - faster movement
  - faster firing
  - several shots at once
  - higher shot damage
  - energy recovery
  - slow down the enemy

## TODO

- everything

## DONE

- nothing
22.659091
78
0.758275
spa_Latn
0.994877
0c5098e811280ea1777a11415a5d3cf5683859af
7,328
md
Markdown
articles/multi-factor-authentication/multi-factor-authentication-get-started-server-radius.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/multi-factor-authentication/multi-factor-authentication-get-started-server-radius.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
articles/multi-factor-authentication/multi-factor-authentication-get-started-server-radius.md
OpenLocalizationTestOrg/azure-docs-pr15_pl-PL
18fa7535e7cdf4b159e63a40776995fa95f1f314
[ "CC-BY-3.0", "CC-BY-4.0", "MIT" ]
null
null
null
<properties
    pageTitle="RADIUS authentication and Azure Multi-Factor Authentication Server"
    description="This is the Azure Multi-Factor Authentication page that helps with deploying Azure Multi-Factor Authentication Server and RADIUS authentication."
    services="multi-factor-authentication"
    documentationCenter=""
    authors="kgremban"
    manager="femila"
    editor="curtand"/>

<tags
    ms.service="multi-factor-authentication"
    ms.workload="identity"
    ms.tgt_pltfrm="na"
    ms.devlang="na"
    ms.topic="get-started-article"
    ms.date="08/15/2016"
    ms.author="kgremban"/>

# <a name="radius-authentication-and-azure-multi-factor-authentication-server"></a>RADIUS authentication and Azure Multi-Factor Authentication Server

The RADIUS Authentication section lets you enable and configure RADIUS authentication on the Azure Multi-Factor Authentication Server. RADIUS is a standard protocol for accepting and processing authentication requests. The Azure Multi-Factor Authentication Server acts as a RADIUS server and inserts itself between the RADIUS client (for example, a VPN appliance) and the authentication target — which can be Active Directory (AD), an LDAP directory, or another RADIUS server — in order to add Azure Multi-Factor Authentication. For Azure Multi-Factor Authentication to work, you must configure the Azure Multi-Factor Authentication Server so that it can communicate with both the client servers and the authentication target. The Azure Multi-Factor Authentication Server accepts requests from a RADIUS client, validates the credentials against the authentication target, adds Azure Multi-Factor Authentication, and sends a response back to the RADIUS client. The overall authentication succeeds only if both the primary authentication and the Azure Multi-Factor Authentication succeed.

>[AZURE.NOTE]
>The MFA Server supports only PAP (password authentication protocol) and MSCHAPv2 (Microsoft's challenge-handshake authentication protocol) RADIUS protocols when acting as a RADIUS server. Other protocols, like EAP (extensible authentication protocol), can be used when the MFA Server acts as a RADIUS proxy to another RADIUS server that supports that protocol, such as Microsoft NPS.
></br>
>When using other protocols in this configuration, one-way SMS and OATH tokens will not work, because the MFA Server is not able to initiate a successful RADIUS challenge response using the protocol.

![RADIUS authentication](./media/multi-factor-authentication-get-started-server-rdg/radius.png)

## <a name="radius-authentication-configuration"></a>RADIUS authentication configuration

To configure RADIUS authentication, install the Azure Multi-Factor Authentication Server on a Windows server. If you have an Active Directory environment, the server should be joined to the domain inside the network. Use the following procedure to configure the Azure Multi-Factor Authentication Server:

1. In the Azure Multi-Factor Authentication Server, click the RADIUS Authentication icon in the left menu.
2. Check the Enable RADIUS authentication checkbox.
3. On the Clients tab, change the Authentication and Accounting ports if the Azure Multi-Factor Authentication RADIUS service needs to bind to non-standard ports to listen for RADIUS requests from the clients you will configure.
4. Click the Add button.
5. In the Add RADIUS Client dialog box, enter the IP address of the appliance/server that will authenticate to the Azure Multi-Factor Authentication Server, an application name (optional), and a shared secret. The shared secret must be the same on both the Azure Multi-Factor Authentication Server and the appliance/server.

   The application name appears in Azure Multi-Factor Authentication reports and may be displayed within SMS or mobile app authentication messages.

6. Check the Require Multi-Factor Authentication user match box if all users have been, or will be, imported into the Server and are subject to multi-factor authentication. If a significant number of users have not been imported into the Server and/or will be exempt from multi-factor authentication, leave the box unchecked. See the help file for additional information on this feature.
7. Check the Enable fallback OATH token box if users will be using the Azure Multi-Factor Authentication mobile app and you want to use OATH passcodes as a fallback to phone call, SMS, or push notification authentication.
8. Click OK.
9. Repeat steps 4 through 8 as needed to add additional RADIUS clients.
10. Click the Target tab.
11. If the Azure Multi-Factor Authentication Server is installed on a domain-joined server in an Active Directory environment, select Windows domain.
12. If users should be authenticated against an LDAP directory, select LDAP bind. When using LDAP bind, you must click the Directory Integration icon and edit the LDAP configuration on the Settings tab so that the Server can bind to your directory. Instructions for configuring LDAP can be found in the LDAP Proxy configuration guide.
13. If users should be authenticated against another RADIUS server, select RADIUS server(s).
14. Configure the server that the Server will proxy RADIUS requests to by clicking the Add button.
15. In the Add RADIUS Server dialog box, enter the IP address of the RADIUS server and a shared secret. The shared secret must be the same on both the Azure Multi-Factor Authentication Server and the RADIUS server.

    Change the Authentication port and Accounting port if different ports are used by the RADIUS server.

16. Click OK.
17. Add the Azure Multi-Factor Authentication Server as a RADIUS client on the other RADIUS server so that it will process access requests sent to it from the Azure Multi-Factor Authentication Server. Use the same shared secret that is configured in the Azure Multi-Factor Authentication Server.
18. Repeat these steps as needed to add additional RADIUS servers, and configure the order in which the Server should call them with the Move Up and Move Down buttons.

This completes the Azure Multi-Factor Authentication Server configuration. The Server is now listening on the configured ports for RADIUS access requests from the configured clients.

## <a name="radius-client-configuration"></a>RADIUS client configuration

To configure the RADIUS client, use these guidelines:

- Configure your appliance/server to authenticate via RADIUS to the Azure Multi-Factor Authentication Server's IP address, which will act as the RADIUS server.
- Use the same shared secret that was configured earlier.
- Configure the RADIUS timeout to 30–60 seconds so that there is time to validate the user's credentials, perform the multi-factor authentication, receive the response, and respond to the RADIUS access request.
114.5
1,145
0.841294
pol_Latn
0.999992
0c5164b677f4b6e36a4359d6402c4ee25f8bd28b
1,502
md
Markdown
src/et/2018-02/05/01.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
68
2016-10-30T23:17:56.000Z
2022-03-27T11:58:16.000Z
src/et/2018-02/05/01.md
PrJared/sabbath-school-lessons
94a27f5bcba987a11a698e5e0d4279b81a68bc9a
[ "MIT" ]
367
2016-10-21T03:50:22.000Z
2022-03-28T23:35:25.000Z
src/et/2018-02/05/01.md
OsArts/Bible-study
cfcefde42e21795e217d192a8b7a703ebb7a6c01
[ "MIT" ]
109
2016-08-02T14:32:13.000Z
2022-03-31T10:18:41.000Z
---
title: Christ in the Heavenly Sanctuary
date: 28/04/2018
---

### This week's study

Rom 8:3; John 1:29; Rev 5:12; Heb 7:1–28; 9:11–15; Lev 16:13; Heb 9:20–23.

> <p>Memory text</p>
> "Therefore God exalted him to the highest place and gave him the name that is above every name, that at the name of Jesus every knee should bow, in heaven and on earth" (Phil 2:9, 10).

Speaking of Jesus in the heavenly sanctuary, the book of Hebrews says: "where our forerunner, Jesus, has entered on our behalf, having become a high priest forever in the order of Melchizedek" (Heb 6:20). Scripture, especially the New Testament, is very explicit that Christ is our High Priest in the heavenly sanctuary — a role He took on after completing His work on earth as a sacrifice for us (see Heb 10:12).

This week we will study Christ's ministry in the heavenly sanctuary. His work of mediation is of decisive importance in preparing His people for the end time. Thus we are given a very important admonition: "God's people should clearly understand the truth about the sanctuary and the investigative judgment. Everyone needs a personal knowledge of the position and work of our great High Priest. Otherwise it is impossible to exercise the faith that our time requires, or to fulfill the duty that God has appointed us to fulfill." — Ellen G. White, The Great Controversy, p. 488.

What is Christ doing on our behalf in the heavenly sanctuary, and why is it so important that we understand this, especially in the last days?

_By studying this week's lesson, you prepare for Sabbath, May 5._
71.52381
509
0.784953
est_Latn
1.00001
0c51a3454781bbf3c3d16667f942631bfabfc10c
3,119
md
Markdown
HISTORY.md
wikimedia/mediawiki-libs-RemexHtml
613b2f7605fe7fe2805b907c211fe49763bd5a57
[ "MIT" ]
2
2017-03-07T01:55:32.000Z
2018-04-14T02:11:26.000Z
HISTORY.md
wikimedia/mediawiki-libs-RemexHtml
613b2f7605fe7fe2805b907c211fe49763bd5a57
[ "MIT" ]
null
null
null
HISTORY.md
wikimedia/mediawiki-libs-RemexHtml
613b2f7605fe7fe2805b907c211fe49763bd5a57
[ "MIT" ]
null
null
null
# Release History

## RemexHtml x.x.x (not yet released)

## RemexHtml 3.0.1 (2021-11-19)
* Fix duplicate sourceLength output for `<tr></table>`.
* In DOMBuilder, catch invalid character errors from createAttribute.

## RemexHtml 3.0.0 (2021-10-25)
* Removed the RemexHtml\ namespace aliases.
* Added Attributes::clone()
* Added Dispatcher::flushTableText().

## RemexHtml 2.3.2 (2021-08-07)
* Changed package namespace from RemexHtml to Wikimedia\RemexHtml to match
  package name. PHP's `class_alias` has been used so that existing code using
  the old namespace will continue to work, but this is now deprecated; it is
  expected the next major release of RemexHtml will remove the aliases.
* Fix handling of `<body>` tag in "after head" state that would incorrectly
  result in a parse error being raised.
* Made DOMBuilder::createNode protected (rather than private) so that
  standards-compliant DOM implementations can override it.

## RemexHtml 2.3.1 (2021-04-20)
* Don't pass null arguments to DOMImplementation::createDocument(): nulls are
  technically allowed and converted to the empty string, but this is
  deprecated legacy behavior.

## RemexHtml 2.3.0 (2021-02-05)
* Allow use of third-party DOM implementations (like wikimedia/dodo) via the
  new `domImplementation` parameter to DOMBuilder.

## RemexHtml 2.2.2 (2021-01-30)
* Support wikimedia/utfnormal ^3.0.1

## RemexHtml 2.2.1 (2021-01-11)
* Various minor changes for PHP 8.0 support.
* Remove dead code about old phpunit version

## RemexHtml 2.2.0 (2020-04-29)
* Update dependencies.
* Fix warnings emitted by PHP 7.4.
* Bug fix in TreeBuilder\ForeignAttributes::offsetGet().
* Drop PHP 7.0/7.1 and HHVM support; require PHPUnit 8.

## RemexHtml 2.1.0 (2019-09-16)
* Call the non-standard \DOMElement::setIdAttribute() method by default.
* Add scriptingFlag option to Tokenizer, and make it true by default.
* Attributes bug fixes.
* Added RelayTreeHandler and RelayTokenHandler for subclassing convenience.
* Normalize text nodes during tree building, to match HTML parsing spec.

## RemexHtml 2.0.3 (2019-05-10)
* Don't decode char refs if ignoreCharRefs is set, even if they are simple.
  (This fixes a regression introduced in 2.0.2.)
* Performance improvements to character entity decoding and tokenizer
  preprocessing.

## RemexHtml 2.0.2 (2019-03-13)
* Performance improvements to tokenization and tree building.
* Provide an option to suppress namespace for HTML elements, working around a
  performance bug in PHP's dom_reconcile_ns (T217708).

## RemexHtml 2.0.1 (2018-10-15)
* Don't double-decode HTML entities when running on PHP (not HHVM) (T207088).

## RemexHtml 2.0.0 (2018-08-13)
* Drop support for PHP < 7.0.
* Remove descendant nodes when we get an endTag() event (T200827).
* Improved tracing.
* Added NullTreeHandler and NullTokenHandler.

## RemexHtml 1.0.3 (2018-02-28)
* Drop support for PHP < 5.5.

## RemexHtml 1.0.2 (2018-01-01)
* Fix linked list manipulation in CachedScopeStack (T183379).

## RemexHtml 1.0.1 (2017-03-14)
* Fix missing breaks in switch statements.

## RemexHtml 1.0.0 (2017-02-24)
* Initial release.
37.130952
77
0.749599
eng_Latn
0.895717
0c51cab885bfae036ed6618b96134b1cb045acec
5,971
md
Markdown
README.md
denyncrawford/deno-livereload
6b75aab84a790b195ec54318cf23b33cca2f25e4
[ "MIT" ]
6
2021-05-03T14:51:32.000Z
2022-03-29T18:51:56.000Z
README.md
denyncrawford/deno-livereload
6b75aab84a790b195ec54318cf23b33cca2f25e4
[ "MIT" ]
3
2021-04-29T21:49:40.000Z
2021-06-07T23:28:26.000Z
README.md
denyncrawford/deno-livereload
6b75aab84a790b195ec54318cf23b33cca2f25e4
[ "MIT" ]
1
2021-12-28T03:58:38.000Z
2021-12-28T03:58:38.000Z
# Deno LiveReload

<a href="LICENSE">
  <img src="https://img.shields.io/badge/license-MIT-brightgreen.svg" alt="Software License" />
</a>
<a href="https://github.com/denyncrawford/deno-livereload/issues">
  <img src="https://img.shields.io/github/issues/denyncrawford/deno-livereload.svg" alt="Issues" />
</a>
<a href="https://github.com/standard/ts-standard/">
  <img src="https://img.shields.io/badge/code%20style-standard-brightgreen.svg" alt="Typescript Style Guide" />
</a>
<a href="https://deno.land/x/livereload">
  <img src="https://img.shields.io/badge/deno-^1.8.1-informational.svg?style=flat-squar" alt="Deno.land" />
</a>
<a href="https://github.com/denyncrawford/deno-livereload/releases">
  <img src="https://img.shields.io/github/release/denyncrawford/deno-livereload.svg" alt="Latest Version" />
</a>

Deno LiveReload is a development tool for watching changes in your filesystem and automatically reloading your browser. It is highly configurable and is intended to work with Deno, but you can use it wherever you want by writing a simple watcher file.

The official CLI is WIP.

## Import

```typescript
import LiveReload from 'https://deno.land/x/livereload@0.1.0/src/mod.ts'
```

## Usage

To use LiveReload you must create a watcher and then inject some code into your HTML.

1. Create a server or handler:

**watcher.ts**

```typescript
const live = new LiveReload('public');

// foo code

live.watch()
```

2. Inject the script into the HTML files that you need to reload:

> If you're serving your files, use the second option.

**index.html**

```html
<html>
  <head>
    <!-- 1. No served files -->
    <script src="http://localhost:39430/livereload/client.js" defer></script>
    <!-- 2. Injecting for served files -->
    <script defer>
      document.write(`<script src="http://${(location.host || 'localhost').split(':')[0]}:39430/livereload/client.js"><\/script>`)
    </script>
    <!-- 3. Using the same dev server (with live.handle) -->
    <script src="/livereload/client.js" defer></script>
  </head>
  <body>
    <h1>Hello world with Deno LiveReload</h1>
  </body>
</html>
```

LiveReload serves the backend instance and the client, so it is standalone by default. But if you already have a development server, you might want to disable the livereload server in the config object and use your own port and protocol; you then have to handle the incoming requests for the `/livereload` and `/livereload/client.js` endpoints with `LiveReload(options).handle(req)`. Please check [this example](#handling-request-with-opine-example) to learn how to use your own server.

## Instantiating

You can use it with just the base path, like in the example above, or with an array of base paths:

```typescript
const live = new LiveReload(['public','assets', '.']);
```

> Relative paths will resolve into CWD.

## Config

If you pass a config object, these are the options available:

- `options` **WatchOptions**
  - `port` **number** - The port that livereload will use to connect to the server
    - *default*: `39430`
  - `base` **Array(string) | string** - Base path(s) to watch
    - *default*: `Deno.cwd()`
  - `recursive` **boolean** - If true it will watch the specified directory and all subdirectories.
    - *default*: `true`
  - `serve` **boolean** - If false it will not serve, and you might want to handle the server requests on your own.
    - *default*: `true`
  - `secure` **boolean** - Tells the client bundle to listen as http or https
    - *default*: `false`
  - `exclude` *optional* **Array(string)** - List of glob patterns for excluding reloads of matching paths.
    - *default*: `undefined`

## API

The constructed LiveReload class exposes just two methods:

- `LiveReload.watch()` **void** - Starts watching for changes. This method prevents declaring the instance where you don't want it, so calling watch will fire the changes.
  - *required*: `true`
- `LiveReload.handle(request: ServerRequest)` **void** - If the serve option is false, you can use this method to handle each request of your own http server or http framework.
  - *required*: `false`

## Handling request with opine example:

```typescript
import { opine } from "https://deno.land/x/opine@1.0.2/mod.ts"; // Note the version
import LiveReload from '../../mod.ts'
import { ServerRequest } from 'https://deno.land/std@0.83.0/http/server.ts';

const app = opine();
const port = 3000;

const live = new LiveReload({
  base: 'test',
  exclude: ['*.css'],
  serve: false,
  port
});

app.get(['/livereload', '/livereload/client.js'], (req: ServerRequest) => {
  live.handle(req)
})

app.get("/", async (req: ServerRequest) => {
  const name = new TextDecoder().decode(await Deno.readFile('name.txt'));
  req.respond({
    status: 200,
    headers: new Headers({
      "content-type": "text/html",
    }),
    body: `<script src="http://localhost:${port}/livereload/client.js" defer></script>
    <h1>My name is ${name}</h1>`
  });
});

live.watch()

app.listen(port);
```

> NOTE: For now, this module is only compatible with the std@0.83.0 http ServerRequest interface. If you want to use a custom framework or dev server, you must first check the http version of the module and match it with the compatible version.

## Building the web client

> The client is a ts file and you don't need to import it directly from the file system; instead, livereload serves the constructed client as a js file. This is because it sets the port dynamically and builds the client in real time every time a request is made. For normal usage you won't need to build your own client.

The client handles the incoming notifications from the server, and you can bundle your own custom client:

1. Clone the repo
2. Edit the src/client.ts
3. Run the bundler `deno run -A --unstable bundler.ts`

## Credits

- [Miguel Rangel](https://github.com/denyncrawford)

## License

The MIT License (MIT). Please see [License File](LICENSE) for more information.
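The injection snippet in the README builds the client URL at runtime from the page's host. As a rough illustration of that logic only — the helper `clientScriptUrl` is hypothetical, not part of the library's API:

```typescript
// Hypothetical helper mirroring the README's injection snippet: derive the
// livereload client URL from a host string (which may carry a port, as
// location.host does) and the configured livereload port.
function clientScriptUrl(host: string, port: number, secure = false): string {
  const hostname = (host || "localhost").split(":")[0]; // drop any port suffix
  const scheme = secure ? "https" : "http";
  return `${scheme}://${hostname}:${port}/livereload/client.js`;
}

console.log(clientScriptUrl("example.com:8080", 39430));
// -> http://example.com:39430/livereload/client.js
```

This mirrors why the README's snippet calls `.split(':')[0]`: `location.host` includes the dev server's port, which must be replaced by the livereload port.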
34.918129
487
0.701223
eng_Latn
0.944631
0c51d7748a6b8b3002c7d17c2cbfc44f22145db3
2,480
md
Markdown
site/content/post/20181025-150635.md
hisashi629/smart-health-assoc-cms
af71b831c8473202756f07b2191fbdeabd12e391
[ "MIT" ]
null
null
null
site/content/post/20181025-150635.md
hisashi629/smart-health-assoc-cms
af71b831c8473202756f07b2191fbdeabd12e391
[ "MIT" ]
2
2021-03-09T01:14:22.000Z
2022-02-12T07:06:50.000Z
site/content/post/20181025-150635.md
hisashi629/smart-health-assoc-cms
af71b831c8473202756f07b2191fbdeabd12e391
[ "MIT" ]
null
null
null
---
title: "[Notice from the Smart Healthcare Association] 'Pharmacy Award Chosen by Everyone' to be held"
description: — What is the Pharmacy Award Chosen by Everyone? — Its purpose is "to let the general public know more about what pharmacies are doing." We invited entries of creative initiatives from pharmacies across the country. From among them, representative pharmacies are selected based on our own screening criteria and give presentations. The judges and the audience then vote to decide the grand-prize pharmacy. After listening to the presentations, please vote for the pharmacy that made you think "I want to go there!" or that moved you.
date: 2018-04-08T15:00:00.000Z
categories: topic
---

This time we would like to announce the "Pharmacy Award Chosen by Everyone," run by our association's director, Takenaka.

Below is an introduction from Takenaka.

If you are interested, please contact Takenaka.

==============================

Thank you for your continued support.
My name is Takenaka, director of the Smart Healthcare Association.
This is an announcement for an event run by the Pharmacy Support Association, which I operate.
Dr. Okazaki, our representative director, serves as a special judge.

\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-*-

[The 2nd Pharmacy Award Chosen by Everyone]

"Aren't all pharmacies the same?"
"I want to find a pharmacy I can really trust."
This event answers those questions.

◆Official website
http://pharmacyaward.com/

\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-\*-*-

— What is the Pharmacy Award Chosen by Everyone? —

Its purpose is "to let the general public know more about what pharmacies are doing."

We invited entries of creative initiatives from pharmacies across the country. From among them, representative pharmacies are selected based on our own screening criteria and give presentations. The judges and the audience then vote to decide the grand-prize pharmacy. After listening to the presentations, please vote for the pharmacy that made you think "I want to go there!" or that moved you.

— Scenes from the 1st Pharmacy Award Chosen by Everyone —
https://www.onenationworkingtogether.org/55141

From the pharmacies entered nationwide, six representative pharmacies were selected to present.

Representative pharmacy presenters:

■ Konomi Pharmacy (Nagoya, Aichi)
 "Appropriate use of antimicrobials in a community pharmacy using Gram staining"
■ Asahikawa Chuo Pharmacy (Asahikawa, Hokkaido)
 "Pharmacy-first health support: making the community healthy with the power of the 'pharmacy you visit before getting sick'!"
■ Kanayama Sumire Pharmacy (Nagoya, Aichi)
 "Let's deliver medicine, dreams, and hope to remote islands and rural areas! — the drone pharmacy project"
■ Kimura Pharmacy (Beppu, Oita)
 "An evolved network connected with the pharmacy as its hub"
■ Tsurusan Pharmacy (Meguro, Tokyo)
 "The pharmacy is a theater, the pharmacist an entertainer: helping you find the hospital that suits you"
■ Pharmacy Nakusuri-na (Koga, Ibaraki)
 "Aiming for true self-medication support"

↓ Detailed information on the presenting pharmacies ↓
http://pharmacyaward.com/award2018.html

— Organizer: Pharmacy Support Association —

◆Official Facebook page
https://www.facebook.com/pharmacy.shien/

◆Official website
http://ph-support.jp/

+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-
 Event details
+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-:+:-

◆Date: Sunday, May 20, 13:00–17:30 (doors open and reception starts at 12:00)

◆Venue: Recruit GINZA8 Building, 11F Hall
 8-4-17 Ginza, Chuo-ku, Tokyo
 Nearest stations: Ginza subway station; Shimbashi JR/subway station

◆Admission
 General: 2,000 yen
 Students: free
 ※Capacity: 300

◆Register to attend here
https://docs.google.com/forms/d/e/1FAIpQLSfO80y95imH2UVl0yaxvH-UW3LArgNjVUmgd_JU1DKy-v_WKQ/viewform

■■□―――――――――――――――――――□■■

Crowdfunding in progress!! (until 23:00 on Saturday, March 31!)
[We want to share pharmacies' wonderful initiatives and passion with the world!]
https://readyfor.jp/projects/pharmacyaward

Bamboo Co., Ltd.
http://bambooo.co.jp
Head office: 1-12 Funakoshi-cho, Yokosuka, Kanagawa
TEL: 046-860-1681 FAX: 046-860-1682
Ganban-yoku yoga studio asyik Sengendai
http://asyik.org
CEO Takayuki Takenaka

■■□―――――――――――――――――――□■■
12.4
222
0.639919
yue_Hant
0.594063
0c5238b55318814fb81e8dfdb1a9d293849ae32a
22,762
md
Markdown
docs/relational-databases/search/improve-the-performance-of-full-text-indexes.md
baleng/sql-docs.it-it
80bb05c3cc6a68564372490896545d6211a9fa26
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/search/improve-the-performance-of-full-text-indexes.md
baleng/sql-docs.it-it
80bb05c3cc6a68564372490896545d6211a9fa26
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/search/improve-the-performance-of-full-text-indexes.md
baleng/sql-docs.it-it
80bb05c3cc6a68564372490896545d6211a9fa26
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: Migliorare le prestazioni degli indici full-text | Microsoft Docs ms.custom: '' ms.date: 03/14/2017 ms.prod: sql ms.prod_service: search, sql-database ms.reviewer: '' ms.technology: search ms.topic: conceptual helpviewer_keywords: - performance [SQL Server], full-text search - full-text queries [SQL Server], performance - crawls [full-text search] - full-text indexes [SQL Server], performance - full-text search [SQL Server], performance - batches [SQL Server], full-text search ms.assetid: ef39ef1f-f0b7-4582-8e9c-31d4bd0ad35d author: douglaslMS ms.author: douglasl manager: craigg monikerRange: =azuresqldb-current||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current ms.openlocfilehash: d79d404e72f13ade55f6bd64f261741d86b78347 ms.sourcegitcommit: 2429fbcdb751211313bd655a4825ffb33354bda3 ms.translationtype: HT ms.contentlocale: it-IT ms.lasthandoff: 11/28/2018 ms.locfileid: "52532544" --- # <a name="improve-the-performance-of-full-text-indexes"></a>Miglioramento delle prestazioni di indici full-text [!INCLUDE[appliesto-ss-asdb-xxxx-xxx-md](../../includes/appliesto-ss-asdb-xxxx-xxx-md.md)] Questo argomento descrive alcune delle cause comuni della riduzione delle prestazioni per gli indici e le query full-text. Vengono inoltre forniti alcuni suggerimenti per limitare i problemi e migliorare le prestazioni. ## <a name="causes"></a> Common causes of performance issues ### <a name="hardware-resource-issues"></a>Problemi relativi alle risorse hardware Le prestazioni di esecuzione dell'indicizzazione e delle query full-text possono dipendere da risorse hardware quali memoria e velocità del disco e della CPU, nonché dall'architettura del computer. La causa principale del calo delle prestazioni di esecuzione dell'indicizzazione full-text è data dai limiti delle risorse hardware. - **CPU**. 
Se l'uso della CPU da parte del processo host del daemon di filtri (fdhost.exe) o del processo [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] (sqlservr.exe) ha quasi raggiunto il 100%, il collo di bottiglia è rappresentato dalla CPU stessa. - **Memoria**. In caso di memoria fisica insufficiente, il collo di bottiglia è rappresentato dalla memoria. - **Disco**. Se la lunghezza media della coda di attesa del disco è superiore al doppio del numero di testine, il collo di bottiglia è rappresentato dal disco. La soluzione alternativa principale consiste nella creazione di cataloghi full-text separati dai file e dai log del database di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] . Posizionare i log, i file di database e i cataloghi full-text su dischi separati. Per migliorare le prestazioni di esecuzione dell'indicizzazione, è inoltre possibile installare dischi più veloci e usare RAID. > [!NOTE] > A partire da [!INCLUDE[ssKatmai](../../includes/sskatmai-md.md)], il motore di ricerca full-text può usare memoria AWE, in quanto parte del processo sqlservr.exe. ### <a name="full-text-batching-issues"></a>Problemi relativi ai batch full-text Se nel sistema non vengono rilevati colli di bottiglia a livello dell'hardware, le prestazioni di indicizzazione della ricerca full-text dipendono principalmente dagli elementi seguenti: - Tempo necessario per la creazione di batch full-text in [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] . - Velocità di utilizzo di tali batch da parte del daemon di filtri. ### <a name="full-text-index-population-issues"></a>Problemi relativi al popolamento dell'indice full-text - **Tipo di popolamento**. A differenza del popolamento completo, i popolamenti incrementale, manuale e con rilevamento automatico delle modifiche non sono progettati per ottimizzare le risorse hardware ai fini di una maggiore velocità. 
Di conseguenza, i suggerimenti per l'ottimizzazione in questo argomento potrebbero non migliorare le prestazioni per l'indicizzazione full-text quando si usa il popolamento incrementale, manuale o con rilevamento automatico delle modifiche. - **Unione nell'indice master**. Al termine di un popolamento, viene attivato un processo di unione conclusivo che associa i frammenti di indice in un singolo indice full-text master. Ciò consente prestazioni di query superiori poiché è necessario eseguire query solo sull'indice master anziché su alcuni frammenti di indice ed è possibile utilizzare statistiche di punteggio migliori per la classificazione della pertinenza. Tuttavia, l'unione nell'indice master può richiedere l'esecuzione di molte operazioni di I/O, in quanto è necessario leggere e scrivere grandi quantità di dati, ma questa operazione non blocca le query in entrata. L'unione nell'indice master di una grande quantità di dati può comportare la creazione di una transazione con esecuzione prolungata, con il conseguente ritardo del troncamento del log delle transazioni durante il checkpoint. In questo caso, le dimensioni del log delle transazioni potrebbero aumentare notevolmente, se si utilizza il modello di recupero con registrazione completa. È consigliabile verificare che il log delle transazioni contenga spazio sufficiente per una transazione con esecuzione prolungata prima di riorganizzare un indice full-text di grandi dimensioni in un database in cui viene utilizzato il modello di recupero con registrazione completa. Per altre informazioni, vedere [Gestione delle dimensioni del file di log delle transazioni](../../relational-databases/logs/manage-the-size-of-the-transaction-log-file.md). 
## <a name="tuning"></a> Ottimizzare le prestazioni degli indici full-text Per ottimizzare le prestazioni degli indici full-text, implementare le procedure consigliate seguenti: - Per usare al meglio tutti i processori o i core CPU, impostare [sp_configure](../../relational-databases/system-stored-procedures/sp-configure-transact-sql.md) '**max full-text crawl range**' sul numero di CPU nel sistema. Per informazioni su questa opzione di configurazione, vedere [Opzione di configurazione del server max full-text crawl range](../../database-engine/configure-windows/max-full-text-crawl-range-server-configuration-option.md). - Verificare che la tabella di base includa un indice cluster. Utilizzare un tipo di dati integer per la prima colonna dell'indice cluster. Evitare l'utilizzo di GUID nella prima colonna dell'indice cluster. Un popolamento a più intervalli in un indice cluster garantisce la massima velocità di popolamento. È consigliabile che la colonna utilizzata come chiave full-text sia di un tipo di dati integer. - Aggiornare le statistiche della tabella di base utilizzando l'istruzione [UPDATE STATISTICS](../../t-sql/statements/update-statistics-transact-sql.md) . Un'operazione ancora più importante consiste nell'aggiornamento delle statistiche nell'indice cluster o nella chiave full-text per un popolamento completo. In questo modo, tramite un popolamento a più intervalli è possibile generare partizioni ottimali nella tabella. - Prima di eseguire un popolamento completo in un computer di grandi dimensioni con più CPU, è consigliabile limitare temporaneamente la dimensione del pool di buffer impostando il valore di **max server memory** in modo tale da lasciare una quantità di memoria sufficiente per il processo fdhost.exe e il sistema operativo. Per ulteriori informazioni, vedere "Stima dei requisiti di memoria del processo host del daemon di filtri (fdhost.exe)" più avanti in questo argomento. 
- Se si usa il popolamento incrementale basato su una colonna timestamp, compilare un indice secondario sulla colonna **timestamp** per migliorare le prestazioni di esecuzione del popolamento incrementale. ## <a name="full"></a> Risolvere i problemi relativi alle prestazioni di popolamenti completi ### <a name="review-the-full-text-crawl-logs"></a>Esaminare i log di ricerca per indicizzazione full-text Per diagnosticare problemi di prestazioni, analizzare i log della ricerca per indicizzazione full-text. Quando si verifica un errore durante una ricerca per indicizzazione, la funzionalità di registrazione corrispondente per la ricerca full-text crea e gestisce un log di tipo ricerca per indicizzazione in formato testo normale. Ogni log di tipo ricerca per indicizzazione corrisponde a un catalogo full-text specifico. Per impostazione predefinita, i log di ricerca per indicizzazione per un'istanza specifica, ad esempio l'istanza predefinita, si trovano nella cartella `%ProgramFiles%\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\LOG`. Il file del log di tipo ricerca per indicizzazione segue lo schema di denominazione seguente: `SQLFT<DatabaseID\><FullTextCatalogID\>.LOG[<n\>]` Di seguito sono riportate le parti variabili del nome del file del log di ricerca per indicizzazione. - <**IDDatabase**>: ID di un database. <**dbid**> è un numero a cinque cifre con zeri iniziali. - <**IDCatalogoFullText**>: ID del catalogo full-text. <**catid**> è un numero a cinque cifre con zeri iniziali. - <**n**>: numero intero che indica l'esistenza di uno o più log di ricerca per indicizzazione per lo stesso catalogo full-text. Ad esempio, `SQLFT0000500008.2` è il file del log di ricerca per indicizzazione per un database con ID database = 5 e ID catalogo full-text = 8. Il 2 alla fine del nome file indica che sono disponibili due file del log di tipo ricerca per indicizzazione per questa coppia di database/catalogo. 
### <a name="check-physical-memory-usage"></a>Controllare l'utilizzo della memoria fisica Durante un popolamento full-text, è possibile che la memoria disponibile per fdhost.exe o sqlservr.exe diventi insufficiente o si esaurisca. - Se il log delle ricerche per indicizzazione full-text indica il riavvio frequente del processo fdhost.exe o la restituzione frequente del codice di errore 8007008, uno di questi processi non dispone di memoria sufficiente. - Se fdhost.exe produce dump, in particolare in computer di grandi dimensioni con più CPU, è possibile che la memoria si esaurisca. - Per ottenere informazioni sui buffer di memoria usati da una ricerca per indicizzazione full-text, vedere [sys.dm_fts_memory_buffers &#40;Transact-SQL&#41;](../../relational-databases/system-dynamic-management-views/sys-dm-fts-memory-buffers-transact-sql.md). Di seguito sono elencate le possibili cause di memoria insufficiente: - **Memoria insufficiente**. Se la quantità di memoria fisica disponibile durante un popolamento completo è pari a zero, è possibile che il pool di buffer di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] stia utilizzando la maggior parte della memoria fisica presente nel sistema. Il processo sqlservr.exe tenta di acquisire tutta la memoria disponibile per il pool di buffer, fino alla quantità massima di memoria del server configurata. Se l'allocazione di **max server memory** è eccessiva, per il processo fdhost.exe possono verificarsi condizioni di memoria insufficiente e l'impossibilità di allocare memoria condivisa. È possibile risolvere questo problema impostando in modo appropriato il valore **max server memory** del pool di buffer di [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] . Per ulteriori informazioni, vedere "Stima dei requisiti di memoria del processo host del daemon di filtri (fdhost.exe)" più avanti in questo argomento. Può inoltre risultare utile ridurre la dimensione del batch per l'indicizzazione full-text. 
- **Contesa di memoria**. Durante un popolamento full-text in un computer con più CPU, tra fdhost.exe e sqlservr.exe può verificarsi una contesa per la memoria del pool di buffer. La conseguente mancanza di memoria condivisa genera tentativi batch, sovraccarico della memoria e dump da parte del processo fdhost.exe. - **Problemi di paging**. Anche le dimensioni insufficienti del file di paging, ad esempio in un sistema con un file di paging ridotto con crescita limitata, possono generare condizioni di memoria insufficiente per fdhost.exe o sqlservr.exe. Se nei log delle ricerche per indicizzazione full-text non sono riportati errori di memoria, è probabile che le prestazioni non siano ottimali a causa del paging eccessivo. ### <a name="estimate-the-memory-requirements-of-the-filter-daemon-host-process-fdhostexe"></a>Stimare i requisiti di memoria per il processo host del daemon di filtri (fdhost.exe) La quantità di memoria richiesta dal processo fdhost.exe per un popolamento dipende principalmente dal numero di intervalli di ricerca per indicizzazione full-text utilizzati, dalla dimensione della memoria condivisa in ingresso e dal numero massimo di istanze di tale memoria. È possibile stimare approssimativamente la quantità di memoria (in byte) utilizzata dall'host del daemon di filtri tramite la formula seguente: `number_of_crawl_ranges * ism_size * max_outstanding_isms * 2` I valori predefiniti delle variabili nella formula precedente sono i seguenti: |**Variabile**|**Valore predefinito**| |------------------|-----------------------| |*number_of_crawl_ranges*|Numero di CPU| |*ism_size*|1 MB per computer x86<br /><br /> 4 MB, 8 MB o 16 MB per computer x64, a seconda della memoria fisica totale| |*max_outstanding_isms*|25 per computer x86<br /><br /> 5 per computer x64| Nella tabella seguente vengono illustrate le linee guida da seguire per stimare i requisiti di memoria di fdhost.exe. Le formule contenute in questa tabella utilizzano i valori riportati di seguito. 
- *F*: stima della memoria richiesta da fdhost.exe (in MB). - *T*: memoria fisica totale disponibile nel sistema (in MB). - *M*: impostazione ottimale per **max server memory**. Per informazioni essenziali sulle formule seguenti, vedere le note dopo la tabella. |Piattaforma|Stima dei requisiti di memoria di fdhost.exe in MB: *F*^1|Formula per il calcolo di max server memory: *M*^2| |--------------|-----------------------------------------------------------|-----------------------------------------------------| |x86|*F* = *Numero di intervalli di ricerca per indicizzazione* * 50|*M* = minimo(*T*, 2000) - *F* - 500| |x64|*F* = *Numero di intervalli di ricerca per indicizzazione* * 10 * 8|*M* = *T* - *F* - 500| **Note sulle formule** 1. Se sono in corso più popolamenti completi, calcolare i requisiti di memoria di fdhost.exe per ciascuno separatamente, ad esempio *F1*, *F2* e così via. Calcolare quindi *M* come *T* - Σ(*F*i). 2. 500 MB è una stima della memoria necessaria per gli altri processi del sistema. Se nel sistema sono in corso processi aggiuntivi, aumentare questo valore di conseguenza. 3. Si presuppone che *ism_size* sia 8 MB per le piattaforme x64. #### <a name="example-estimate-the-memory-requirements-of-fdhostexe"></a>Esempio: Stima dei requisiti di memoria di fdhost.exe Questo esempio è relativo a un computer a 64 bit con 8 GB di RAM e 4 processori dual core. Il primo calcolo consente di stimare i requisiti di memoria di fdhost.exe, ovvero *F*. Il numero di intervalli di ricerca per indicizzazione è `8`. `F = 8*10*8=640` Il calcolo successivo ottiene il valore ottimale per **max server memory**, *M*. La memoria fisica totale disponibile in questo sistema in MB, *T*, è `8192`.
`M = 8192-640-500=7052` #### <a name="example-setting-max-server-memory"></a>Esempio: Impostazione del valore max server memory Questo esempio usa le istruzioni [sp_configure](../../relational-databases/system-stored-procedures/sp-configure-transact-sql.md) e [RECONFIGURE](../../t-sql/language-elements/reconfigure-transact-sql.md) [!INCLUDE[tsql](../../includes/tsql-md.md)] per impostare **max server memory** sul valore calcolato per *M* nell'esempio precedente, `7052`: ``` USE master; GO EXEC sp_configure 'max server memory', 7052; GO RECONFIGURE; GO ``` Per altre informazioni sulle opzioni per la memoria del server, vedere [Opzioni di configurazione del server Server Memory](../../database-engine/configure-windows/server-memory-server-configuration-options.md). ### <a name="check-cpu-usage"></a>Controllare l'utilizzo della CPU Le prestazioni di esecuzione dei popolamenti completi non sono ottimali quando l'utilizzo medio della CPU è inferiore al 30%. Di seguito vengono illustrati alcuni fattori che influiscono sull'utilizzo della CPU. - Tempi di attesa lunghi delle pagine Per determinare il tempo di attesa delle pagine, eseguire l'istruzione [!INCLUDE[tsql](../../includes/tsql-md.md)] seguente: ``` Execute SELECT TOP 10 * FROM sys.dm_os_wait_stats ORDER BY wait_time_ms DESC; ``` Nella tabella seguente vengono descritti i tipi di attesa relativi a questo contesto. 
|Tipo di attesa|Descrizione|Possibile soluzione| |---------------|-----------------|-------------------------| |PAGEIO_LATCH_SH (_EX o _UP)|Può indicare un collo di bottiglia a livello di IO, caso in cui anche la lunghezza media della coda del disco sarebbe elevata.|Lo spostamento dell'indice full-text in un filegroup diverso in un disco diverso potrebbe contribuire a ridurre il collo di bottiglia a livello di IO.| |PAGELATCH_EX (o _UP)|Può indicare contese tra i thread che tentano di scrivere nello stesso file di database.|L'aggiunta di file al filegroup in cui risiede l'indice full-text potrebbe contribuire ad attenuare queste contese.| Per altre informazioni, vedere [sys.dm_os_wait_stats &#40;Transact-SQL&#41;](../../relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql.md). - Analisi inefficaci della tabella di base Un popolamento completo esegue l'analisi della tabella di base per produrre batch. Tali analisi potrebbero risultare inefficaci negli scenari seguenti: - Se la tabella di base dispone di una percentuale elevata di colonne esterne alle righe sottoposte a indicizzazione full-text, il collo di bottiglia potrebbe essere causato proprio dall'analisi della tabella di base per produrre batch. In questo caso, lo spostamento dei dati di dimensioni inferiori all'interno delle righe tramite **varchar(max)** o **nvarchar(max)** potrebbe risolvere il problema. - Se la tabella di base è molto frammentata, l'analisi potrebbe risultare inefficace. Per informazioni sul calcolo dei dati esterni alle righe e sulla frammentazione dell'indice, vedere [sys.dm_db_partition_stats &#40;Transact-SQL&#41;](../../relational-databases/system-dynamic-management-views/sys-dm-db-partition-stats-transact-sql.md) e [sys.dm_db_index_physical_stats &#40;Transact-SQL&#41;](../../relational-databases/system-dynamic-management-views/sys-dm-db-index-physical-stats-transact-sql.md). 
Per ridurre la frammentazione, è possibile riorganizzare o ricompilare l'indice cluster. Per altre informazioni, vedere [Riorganizzare e ricompilare gli indici](../../relational-databases/indexes/reorganize-and-rebuild-indexes.md). ## <a name="filters"></a> Risolvere i problemi relativi all'indicizzazione lenta dei documenti > [!NOTE] > Questa sezione descrive un problema che riguarda solo gli utenti che indicizzano documenti (ad esempio documenti di Microsoft Word) in cui sono incorporati altri tipi di documento. Il motore di ricerca full-text usa due tipi di filtri durante il popolamento di un indice full-text: a thread singolo e multithread. - Alcuni documenti, quali i documenti di [!INCLUDE[msCoName](../../includes/msconame-md.md)] Word, vengono filtrati utilizzando un filtro multithread, - mentre altri, ad esempio i documenti PDF (Portable Document Format) di Adobe Acrobat, vengono filtrati utilizzando un filtro a thread singolo. Ai fini della sicurezza, i filtri vengono caricati dai processi dell'host del daemon di filtri. In un'istanza del server viene utilizzato un processo a thread multipli per tutti i filtri a thread multipli e un processo a thread singolo per tutti i filtri a thread singolo. Quando un documento che utilizza un filtro multithread contiene un documento incorporato che utilizza un filtro a thread singolo, il motore di ricerca full-text avvia un processo a thread singolo per il documento incorporato. Nel caso di un documento di Word che contiene un documento PDF, il motore di ricerca full-text usa il processo multithread per esaminare il contenuto in formato Word e avvia un processo a thread singolo per esaminare il contenuto in formato PDF. Un filtro a thread singolo potrebbe tuttavia non funzionare in modo corretto in questo ambiente e potrebbe destabilizzare il processo di filtraggio. In alcune situazioni in cui i documenti incorporati rappresentano una prassi comune, la destabilizzazione potrebbe causare arresti anomali del processo. 
In questo caso, il motore di ricerca full-text reindirizza tutti i documenti che hanno provocato l'errore, ad esempio un documento di Word in cui è incorporato contenuto in formato PDF, al processo di filtraggio a thread singolo. Se il reindirizzamento viene eseguito di frequente, le prestazioni del processo di indicizzazione full-text risultano ridotte. Per risolvere questo problema, è necessario contrassegnare il filtro per il documento contenitore, in questo esempio il documento di Word, come filtro a thread singolo. Per contrassegnare un filtro come filtro a thread singolo, impostare il valore **ThreadingModel** del Registro di sistema per il filtro su **Apartment Threaded**. Per informazioni sugli apartment a thread singolo, vedere il white paper [Understanding and Using COM Threading Models](https://go.microsoft.com/fwlink/?LinkId=209159). ## <a name="see-also"></a>Vedere anche [Opzioni di configurazione del server Server Memory](../../database-engine/configure-windows/server-memory-server-configuration-options.md) [Opzione di configurazione del server max full-text crawl range](../../database-engine/configure-windows/max-full-text-crawl-range-server-configuration-option.md) [Popolamento degli indici full-text](../../relational-databases/search/populate-full-text-indexes.md) [Creazione e gestione di indici full-text](../../relational-databases/search/create-and-manage-full-text-indexes.md) [sys.dm_fts_memory_buffers &#40;Transact-SQL&#41;](../../relational-databases/system-dynamic-management-views/sys-dm-fts-memory-buffers-transact-sql.md) [sys.dm_fts_memory_pools &#40;Transact-SQL&#41;](../../relational-databases/system-dynamic-management-views/sys-dm-fts-memory-pools-transact-sql.md) [Risoluzione dei problemi nell'indicizzazione full-text](../../relational-databases/search/troubleshoot-full-text-indexing.md)
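The x64 sizing rules quoted in this article (F = crawl ranges × 10 × 8 MB; M = T − F − 500 MB) can be wrapped in a small helper that reproduces the worked example. This is a sketch for illustration only — the function names are my own, and the constants simply restate the document's formulas:

```python
def fdhost_memory_mb(crawl_ranges):
    """x64 row of the sizing table: F = crawl ranges * 10 * 8 (in MB)."""
    return crawl_ranges * 10 * 8


def max_server_memory_mb(total_mb, fdhost_mb, reserve_mb=500):
    """x64 row of the sizing table: M = T - F - 500.

    The 500 MB default is the document's estimate of memory needed by
    other processes on the system."""
    return total_mb - fdhost_mb - reserve_mb


# Worked example from the text: 8 crawl ranges, 8 GB (8192 MB) of physical RAM
f = fdhost_memory_mb(8)            # 640
m = max_server_memory_mb(8192, f)  # 7052
```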
102.071749
1,406
0.769704
ita_Latn
0.997655
0c52d7be1f8f461243760553e53ecbe63f35edb4
2,981
md
Markdown
ATPDocs/install-atp-step7.md
AzureMentor/ATADocs
8d4b7a96a841ddc52a93465b2a559a055eae0c57
[ "CC-BY-4.0", "MIT" ]
1
2020-06-16T22:07:32.000Z
2020-06-16T22:07:32.000Z
ATPDocs/install-atp-step7.md
AzureMentor/ATADocs
8d4b7a96a841ddc52a93465b2a559a055eae0c57
[ "CC-BY-4.0", "MIT" ]
null
null
null
ATPDocs/install-atp-step7.md
AzureMentor/ATADocs
8d4b7a96a841ddc52a93465b2a559a055eae0c57
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- # required metadata title: Azure Advanced Threat Protection configure detection exclusions and honeytoken accounts | Microsoft Docs description: Configuration of detection exclusions and honeytoken user accounts. keywords: author: shsagir ms.author: shsagir manager: rkarlin ms.date: 10/22/2019 ms.topic: conceptual ms.collection: M365-security-compliance ms.service: azure-advanced-threat-protection ms.assetid: 1ad5e923-9bbd-4f56-839a-b11a9f387d4b # optional metadata #ROBOTS: #audience: #ms.devlang: ms.reviewer: itargoet ms.suite: ems #ms.tgt_pltfrm: #ms.custom: --- # Configure detection exclusions and honeytoken accounts Azure ATP enables the exclusion of specific IP addresses or users from a number of detections. For example, a **DNS Reconnaissance exclusion** could be a security scanner that uses DNS as a scanning mechanism. The exclusion helps Azure ATP ignore such scanners. Azure ATP also enables the configuration of honeytoken accounts, which are used as traps for malicious actors - any authentication associated with these honeytoken accounts (normally dormant) triggers an alert. To configure, follow these steps: 1. From the Azure ATP portal, click on the settings icon and select **Configuration**. ![Azure ATP configuration settings](media/atp-config-menu.png) 2. Under **Detection**, click **Entity tags**. 3. Under **Honeytoken accounts**, enter the Honeytoken account name and click the **+** sign. The Honeytoken accounts field is searchable and automatically displays entities in your network. Click **Save**. ![Honeytoken](media/honeytoken-sensitive.png) 4. Click **Exclusions**. Enter a user account or IP address to be excluded from the detection, for each type of threat. 5. Click the *plus* sign. The **Add entity** (user or computer) field is searchable and will autofill with entities in your network.
For more information, see [Excluding entities from detections](excluding-entities-from-detections.md) and the [security alert guide](suspicious-activity-guide.md). ![Exclusions](media/exclusions.png) 6. Click **Save**. Congratulations, you have successfully deployed Azure Advanced Threat Protection! Check the attack timeline to view security alerts generated from detected activities and search for users or computers, and view their profiles. Azure ATP scanning starts immediately. Some detections, such as [Suspicious additions to sensitive groups](atp-domain-dominance-alerts.md#suspicious-additions-to-sensitive-groups-external-id-2024), require a learning period and aren't available immediately after Azure ATP deployment. The learning period for each alert is listed in the detailed [security alert guide](suspicious-activity-guide.md). ## See Also - [Azure ATP sizing tool](https://aka.ms/aatpsizingtool) - [Configure event collection](configure-event-collection.md) - [Azure ATP prerequisites](atp-prerequisites.md) - [Check out the Azure ATP forum!](https://aka.ms/azureatpcommunity)
43.202899
399
0.786649
eng_Latn
0.9649
0c53f7e128dfb8e3b68b1ff197014c6af8328c18
58
md
Markdown
README.md
pair-up-solutions/Paer-Up
dfbfcff073ff52c707d6ff534c0a0e7b8722c3c2
[ "BSD-3-Clause" ]
3
2021-06-12T17:36:37.000Z
2021-07-16T15:43:46.000Z
README.md
realNitinKumar/Paer-Up
7456470cc84a7fc1033b735883f4e1cebfde850e
[ "BSD-3-Clause" ]
null
null
null
README.md
realNitinKumar/Paer-Up
7456470cc84a7fc1033b735883f4e1cebfde850e
[ "BSD-3-Clause" ]
7
2021-06-12T14:29:16.000Z
2021-08-10T18:00:12.000Z
# Paer-Up A solution to finding pair programming partners
19.333333
47
0.810345
eng_Latn
0.974352
0c54398df9fb8240c700bcc5e3bcde9a2d44607d
113
md
Markdown
README.md
spencer-p/wavwrite
fc17f0909dbcb0dbb6fc4bf0dcc688a1bfaeb47f
[ "MIT" ]
null
null
null
README.md
spencer-p/wavwrite
fc17f0909dbcb0dbb6fc4bf0dcc688a1bfaeb47f
[ "MIT" ]
null
null
null
README.md
spencer-p/wavwrite
fc17f0909dbcb0dbb6fc4bf0dcc688a1bfaeb47f
[ "MIT" ]
null
null
null
# WavWrite A tiny package that lets you write raw bytes to a `.wav` file, handling the file formatting legwork.
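As an illustration of the "file formatting legwork" such a package handles — computing and emitting a correct WAV header around raw PCM bytes — here is a short sketch using Python's stdlib `wave` module. The WavWrite package itself targets its own language and API; this is not its interface, just the same idea expressed in a few lines:

```python
import wave


def write_raw_to_wav(path, raw_bytes, channels=1, sample_width=2, framerate=44100):
    """Wrap raw PCM bytes in a .wav container.

    The header fields are derived from the parameters and the byte count."""
    with wave.open(path, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(sample_width)  # bytes per sample; 2 -> 16-bit audio
        w.setframerate(framerate)
        w.writeframes(raw_bytes)      # also patches the frame count in the header
```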
22.6
80
0.761062
eng_Latn
0.997659
0c547786e7395fd6cca2d5e978fc4fe52dce229a
462
md
Markdown
CHANGELOG.md
pschiffmann/verbose-regexp
750b0e46d6ae2ace8619bcb8c4f33467b8148a2b
[ "MIT" ]
2
2019-09-17T16:15:04.000Z
2021-04-11T23:53:49.000Z
CHANGELOG.md
pschiffmann/verbose-regexp
750b0e46d6ae2ace8619bcb8c4f33467b8148a2b
[ "MIT" ]
null
null
null
CHANGELOG.md
pschiffmann/verbose-regexp
750b0e46d6ae2ace8619bcb8c4f33467b8148a2b
[ "MIT" ]
null
null
null
# Changelog ## 1.0.0+1 - Add SDK constraint so the package can be used in Dart 2.0.0. - Run `dartfmt --fix`. ## 1.0.0 - Bump version to 1.0.0. Nothing changed really, but since I don't expect any design/API changes, I should have used this version number a year ago. - Cosmetic changes to appease pana: Add analysis_options.yaml and CHANGELOG.md. - Fix strong_mode and linter warnings. - Update repository name in pubspec.yaml. ## 0.1.0 - Initial version.
25.666667
148
0.725108
eng_Latn
0.971704
0c5490aa9b5a4c2dce0d701f6ad5b8a3b075d0ce
1,360
md
Markdown
content/ru/getting-started/external-learning-resources/index.md
gohugo-ru/hugoDocs
0f1a4044a5cdc1fa4b430ec5c31c6d6d9a022d2d
[ "Apache-2.0" ]
null
null
null
content/ru/getting-started/external-learning-resources/index.md
gohugo-ru/hugoDocs
0f1a4044a5cdc1fa4b430ec5c31c6d6d9a022d2d
[ "Apache-2.0" ]
null
null
null
content/ru/getting-started/external-learning-resources/index.md
gohugo-ru/hugoDocs
0f1a4044a5cdc1fa4b430ec5c31c6d6d9a022d2d
[ "Apache-2.0" ]
null
null
null
--- title: Внешние ресурсы обучения linktitle: Внешние ресурсы обучения description: Список руководств и книг по Хьюго. date: 2019-10-20 publishdate: 2019-10-20 lastmod: 2019-10-20 keywords: [books, tutorials, learning, usage, книги, руководства, туториалы, применение, использование, мануалы, обучение] menu: docs: parent: "getting-started" weight: 70 weight: 70 sections_weight: 70 draft: false toc: false --- ## Книги ### Хьюго в действии [![Хьюго в действии](hia.jpg)](https://www.manning.com/books/hugo-in-action) Hugo in Action - это пошаговое руководство по использованию Hugo для создания статических веб-сайтов. Работая с полным примером веб-сайта и образцами исходного кода, Вы узнаете, как создать и разместить не требующий обслуживания, высокопроизводительный сайт, который поразит Ваших пользователей и будет оставаться стабильным, не полагаясь на сторонний сервер. [Домашняя страница Хьюго в действии](https://www.manning.com/books/hugo-in-action) ### Создавайте сайты с Hugo [Создавайте сайты с Hugo - быстрая веб-разработка с Markdown (2020)](https://pragprog.com/titles/bhhugo/) Брайан П. Хоган. ## Видео уроки ### Видео плейлист Майка Дейна Майк Дейн объясняет различные особенности Hugo в специальных обучающих материалах на [YouTube](https://www.youtube.com/watch?list=PLLAZ4kZ9dFpOnyRlyS-liKL5ReHDcj4G3&v=qtIqKaDlqXo).
35.789474
359
0.780147
rus_Cyrl
0.835576
0c549f9219ebec03b9a3b05a4af27057cfd0c0e4
58
md
Markdown
README.md
imndszy/novamysql
84f6a7c7d0d80b015ca9ef99d0758904459ba618
[ "MIT" ]
null
null
null
README.md
imndszy/novamysql
84f6a7c7d0d80b015ca9ef99d0758904459ba618
[ "MIT" ]
null
null
null
README.md
imndszy/novamysql
84f6a7c7d0d80b015ca9ef99d0758904459ba618
[ "MIT" ]
null
null
null
# novamysql A MySQL encapsulation module powered by PyMySQL
19.333333
45
0.844828
eng_Latn
0.619332
0c55066b15132815d073e775c724094e81fd019f
1,219
md
Markdown
site2/docs/functions-deploy-cluster-resource.md
HQebupt/pulsar
415c6ed244b2c0bd53c8884b258528cae7b175ce
[ "Apache-2.0" ]
null
null
null
site2/docs/functions-deploy-cluster-resource.md
HQebupt/pulsar
415c6ed244b2c0bd53c8884b258528cae7b175ce
[ "Apache-2.0" ]
null
null
null
site2/docs/functions-deploy-cluster-resource.md
HQebupt/pulsar
415c6ed244b2c0bd53c8884b258528cae7b175ce
[ "Apache-2.0" ]
null
null
null
---
id: functions-deploy-cluster-resource
title: Allocate resources to function instance
sidebar_label: "Allocate resources to function instance"
---

When running functions in cluster mode, you can specify the resources that can be allocated to each function instance. The following table outlines the resources that can be allocated to function instances.

| Resource   | Specified as        | Supported runtime |
|------------|---------------------|-------------------|
| CPU        | The number of cores | Kubernetes        |
| RAM        | The number of bytes | Kubernetes        |
| Disk space | The number of bytes | Kubernetes        |

For example, the following command allocates 8 cores, 8GB of RAM, and 10GB of disk space to a function.

```bash
bin/pulsar-admin functions create \
  --jar target/my-functions.jar \
  --classname org.example.functions.MyFunction \
  --cpu 8 \
  --ram 8589934592 \
  --disk 10737418240
```

:::note

The resources allocated to a given function are applied to each instance of the function. For example, if you apply 8GB of RAM to a function with a [parallelism](functions-deploy-cluster-parallelism.md) of 5, you are applying 40GB of RAM for the function in total.

:::
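Since resources are applied per instance, the cluster-wide footprint is simply the per-instance allocation multiplied by the function's parallelism. A minimal Python sketch of that arithmetic (the helper name is ours for illustration, not part of any Pulsar API):

```python
def total_resources(cpu_cores, ram_bytes, disk_bytes, parallelism):
    # Each instance receives the full allocation, so the cluster-wide
    # footprint is the per-instance figure times the instance count.
    return {
        "cpu": cpu_cores * parallelism,
        "ram": ram_bytes * parallelism,
        "disk": disk_bytes * parallelism,
    }

# 8 cores / 8 GB RAM / 10 GB disk per instance, with a parallelism of 5
totals = total_resources(8, 8 * 1024**3, 10 * 1024**3, 5)
print(totals["ram"] // 1024**3)  # 40 GB of RAM in total, matching the note
```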
34.828571
265
0.69073
eng_Latn
0.987688
0c556cc125db7e091a25103308be2b5f0c5d4202
1,142
md
Markdown
content/integrations.md
scopeink/docs
c7fcd9865874b25d37d0ab786ddd02c957fe5e9b
[ "MIT" ]
1
2020-05-24T11:36:27.000Z
2020-05-24T11:36:27.000Z
content/integrations.md
scope-ink/docs
c7fcd9865874b25d37d0ab786ddd02c957fe5e9b
[ "MIT" ]
2
2019-10-30T13:23:05.000Z
2019-11-05T15:32:35.000Z
content/integrations.md
scopeink/docs
c7fcd9865874b25d37d0ab786ddd02c957fe5e9b
[ "MIT" ]
2
2020-05-24T11:36:20.000Z
2021-12-25T00:25:11.000Z
---
title: "Scope Integrations"
metaTitle: "Scope Integrations - Scope Docs"
metaDescription: "Scope.ink Integrations"
---

Scope is integrated with GitHub and GitLab. Please visit the appropriate section to integrate your SCM.

- [GitHub](https://docs.scope.ink/integrations/1-github)
- [GitHub Enterprise](https://docs.scope.ink/integrations/2-github-enterprise)
- [GitLab](https://docs.scope.ink/integrations/3-gitlab)
- [GitLab Enterprise](https://docs.scope.ink/integrations/4-gitlab-enterprise)

## The integration process depends on how you log in:

**1. If you log in with GitHub**

![Log in with GitHub](https://user-images.githubusercontent.com/48650098/81928153-1e805e00-95e5-11ea-9486-650a9c07e2ee.png)

- Your token will be created automatically, and you will be able to add more integrations through the Integrations panel.

**2. If you log in with GitLab**

![Log in with GitLab](https://user-images.githubusercontent.com/48650098/81928184-293af300-95e5-11ea-85ee-af73c4617002.png)

- Your token will be created automatically, and you will be able to add more integrations through the Integrations panel.
43.923077
124
0.76007
eng_Latn
0.735416
0c562f596703f6f2f34490d63a88cebb872c8bbd
1,312
md
Markdown
programming/node-js/mini-rest-api/step4.ru.md
dorinesinenco/EDUQATION
84a1b5a8d36d65c74f59e87335ad074b806821dd
[ "MIT" ]
5
2020-03-05T17:49:11.000Z
2021-08-08T09:40:31.000Z
programming/node-js/mini-rest-api/step4.ru.md
dorinesinenco/EDUQATION
84a1b5a8d36d65c74f59e87335ad074b806821dd
[ "MIT" ]
2
2019-10-13T13:00:36.000Z
2019-10-16T20:05:52.000Z
programming/node-js/mini-rest-api/step4.ru.md
dorinesinenco/EDUQATION
84a1b5a8d36d65c74f59e87335ad074b806821dd
[ "MIT" ]
23
2019-08-02T15:31:07.000Z
2022-03-29T08:59:01.000Z
## Refining the project

* We continue improving the code

> To complete this step, you must first finish steps 2 and 3

---

* After adding the **Route** class, whose objects store data about paths:

1. Rewrite the routing logic of the application's main file `index.js` following this pattern:

```js
const server = http.createServer(({url}, res)=>{
    // routing
    if ( url == "/" ) {
    ....
```

replace with

```js
const server = http.createServer(({url}, res)=>{
    const route = new Route(url)
    // routing
    if ( route.isPath("/") ) {
    ....
```

2. Following the same pattern, rewrite all the conditional expressions that follow, so that the API serves data and grants access only through a predefined key
3. Move the functions developed in step 3 into the "files.js" module
4. Rework the `index.js` code so that your functions are used instead of the synchronous **fs** calls, for example:

```js
...
const html = fs.readFileSync("./server/public/index.html")
res.end(html)
...
```

rework into

```js
...
readHTMLPage("index", (content)=>{
    res.end(content)
})
...
```
25.230769
150
0.54878
rus_Cyrl
0.880104
0c5665b61e276ca4dac2247661a7c55b34d1b8cb
708
md
Markdown
README.md
zacharylandes/smart_up
63aa7d4ef7deeb8b5d8b00f9c1c32d4f1540211d
[ "MIT" ]
null
null
null
README.md
zacharylandes/smart_up
63aa7d4ef7deeb8b5d8b00f9c1c32d4f1540211d
[ "MIT" ]
null
null
null
README.md
zacharylandes/smart_up
63aa7d4ef7deeb8b5d8b00f9c1c32d4f1540211d
[ "MIT" ]
null
null
null
# Smart Up!

## Your online education home base

Visit it live here: https://aqueous-hamlet-68346.herokuapp.com/ (Please be patient - free dynos!)

This app was built using:

- Ruby on Rails 6.1.3
- Ruby 2.7.2
- A Vue JS frontend (https://github.com/zacharylandes/smart_up_front_end)

To run the app locally:

- clone this repo
- run `bundle install`
- run `rspec` to see passing tests
- run `rails db:{create,migrate,seed}` to set up the database
- run `rails s` to see the app locally

NEW FEATURE: Your place will be saved if you leave the series and return!

TO DO:

- Implement authentication using Google and/or Facebook OAuth. Currently there is only one user!
- Implement improved security
16.857143
94
0.733051
eng_Latn
0.958035
0c5726dde0a96df49ac08d0a887ed7cfe7a1d1d9
9,957
md
Markdown
_posts/2019-04-07-Vulnhub-machine-walktrhough-MrRobot.md
reg1reg1/reg1reg1.github.io
2c46fc7887b608213bd556d16d1c25a83ea19766
[ "MIT" ]
null
null
null
_posts/2019-04-07-Vulnhub-machine-walktrhough-MrRobot.md
reg1reg1/reg1reg1.github.io
2c46fc7887b608213bd556d16d1c25a83ea19766
[ "MIT" ]
2
2019-11-12T07:00:10.000Z
2019-11-12T07:00:13.000Z
_posts/2019-04-07-Vulnhub-machine-walktrhough-MrRobot.md
reg1reg1/reg1reg1.github.io
2c46fc7887b608213bd556d16d1c25a83ea19766
[ "MIT" ]
null
null
null
---
layout: article
title: MrRobot Vulnhub writeup
key: 201904070
tags: ctf
excerpt: This post is a walkthrough of getting root on MrRobot, a VM hosted on VulnHub. The machine is at a beginner level and was one of the first machines that I broke into on VulnHub. You are strongly encouraged to try everything on your own before proceeding.
---

## Basic Description

The basic description of the machine can be found on the VulnHub website, where it is hinted that the machine does not involve any reverse engineering or advanced exploitation techniques. This is nice because I am a newbie to reverse engineering, kernel exploitation, exploiting race conditions, and so on. This machine, and many more such VMs for pentesting, can be found on the VulnHub website, which is an absolute treasure for people like me who are just starting out in security. The name suggests it is based on the TV show Mr Robot, which I haven't seen. <a href="https://www.vulnhub.com/entry/mr-robot-1,151/">MrRobot VM Link</a>.

### Disclaimer

Please disregard the IP addresses in the screenshots for the most part. As this pentest was performed over a few days, DHCP would often assign a new IP address (NAT) to the VM.

## Initiation

We are going to attack the machine following standard pentesting methodology. It is always better to go stepwise, which helps us avoid overlooking any open port or other details. So instead of immediately jumping on the wagon and firing up Metasploit, we first look for open ports on the machine. Start up the VM and make sure you are able to reach the MrRobot VM by pinging it from your attack machine. To avoid disturbances and other scanning issues while running nmap, I put both machines on a shared host-only segment in VMware. You can also put both machines in NAT mode. The following image shows the results.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/reconrobot.PNG) |
|:--:|
| *Some interesting ports are open* |
Since port 80 is open, we can visit the webpage, if any, being served by the MrRobot VM. On opening the URL in a browser, we see a video, most probably related to Mr Robot, and an interactive web simulation of a fancy command shell pops up. This makes the web attack vector our first choice. I perceived the home page to be a mere smokescreen, there purely for aesthetic appeal. Since we are on the website, the next good idea is to map the website for paths, URLs, etc.

## Web Application Mapping

Using a tool like DirBuster (a noisy tool) we can begin mapping the URLs by trying entries from some common wordlists. Alternatively, you may use Burp Suite. By default, I also always visit the *robots.txt* of any website to see if there are suspicious URLs. The following is the output of robots.txt.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/robotstxt.PNG) |
|:--:|
| *robots.txt has the first key* |

We have found the first key, which is inside the *key-1-of-3.txt* file. We download the fsocity.dic file as well; the extension seems to suggest it is a dictionary file of some sort. DirBuster reveals that WordPress and PHP are at play here.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/reconwplogin.PNG) |
|:--:|
| *Interesting URLs revealed by DirBuster* |

On visiting the URL *wp-login*, we are presented with a login form. Let us attack this login mechanism and see if we can get access to the WordPress management interface.

## Gaining WordPress access

Initially, I tried the user *admin* with some of the words in *fsocity.dic*. Nothing worked out and I did not get any password matches. This made me wonder whether I was going in the right direction. Once I had given it some thought, it occurred to me that the user admin might not exist. This is where I thought about looking for a way to enumerate users.
A good place to start was the *forgot password* functionality. When we enter a random user that does not exist in the database, we see the following output.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/invuser.PNG) |
|:--:|
| *Invalid user admin* |

Since the application lets us know whether a user is valid, finding credentials becomes much easier. Instead of a multiplicative search, we only need an additive one: first find a valid user, then look for that user's password. The steps are as follows.

- __Brute-forcing usernames__: Using the forgot-password functionality, we can brute-force words from the *fsocity.dic* file and check whether the response changes. We can achieve this with __Burp Suite__, which has a built-in payload brute-forcer called Intruder. It is basically point and click and hence very easy to use. Once we initiate this attack, we get the following output.
  - Most of the users will be invalid and will have a similar response length.
  - A response length that differs by a large amount indicates a different page response, and checking lengths is a quick way to spot an interesting response without rendering the HTML page every time.
  - We notice the valid username *Elliot* in the screenshot below.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/intruder.PNG) |
|:--:|
| *Valid user found!* |

- __Brute-forcing passwords__: Once we have figured out the username, we can go back to the original problem and try to brute-force the credentials for this user. I used the same *fsocity.dic*. The technique is exactly the same as above using Burp Suite, the only differences being a different URL and a different login form. We obtain the correct credentials as shown below.
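The length-based filtering that Intruder performs can be sketched in Python. The usernames and response lengths below are fabricated for illustration; only the outlier-detection logic matters:

```python
from collections import Counter

def likely_valid_users(response_lengths, tolerance=50):
    # Most candidates are invalid and produce near-identical pages, so the
    # modal response length is the "invalid user" baseline; any candidate
    # whose length strays far from it deserves a manual look.
    baseline, _count = Counter(response_lengths.values()).most_common(1)[0]
    return [user for user, length in response_lengths.items()
            if abs(length - baseline) > tolerance]

# Response lengths as an Intruder attack might report them (made up)
lengths = {"admin": 3042, "root": 3042, "guest": 3041, "Elliot": 4175}
print(likely_valid_users(lengths))  # ['Elliot']
```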
| ![reconrobost.PNG]({{site.url}}/public/img/vulnhub/possible_password.PNG) |
|:--:|
| *Valid password found!* |

On logging in, we find that life is really kind: the credentials we found had admin access to the site.

## Gaining a shell

Once we have admin-level access on the web, we can do a lot of things: deface the website, bring it down, wipe the database, and even pop a shell. This part was simple. We just need to host a PHP backdoor webpage. There are several ways to do it, but I just replaced the default template for the __404 page__. The steps are as follows.

- __Generating the payload__: Replace the IP address in the PHP command shell with your own IP and port. On loading, this page will connect back to your machine on the specified IP and port.
- __Preparing the Metasploit handler__: We prepare a Metasploit handler and set a listening port, which must be the same as the port we filled in on the backdoor webpage. The options are shown below. The only payload available for this kind of handler is "sh", so no Meterpreter. Since it is a standalone handler, we will be using the auxiliary handler module of Metasploit.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/reverseshell.PNG) |
|:--:|
| *Auxiliary handler options* |

- __Executing the payload__: On running, the module opens a command shell session as shown below.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/commandshell.PNG) |
|:--:|
| *Running the exploit and visiting an invalid URL* |

## Finding the second key

Once we have the bash shell, we can use the find command to locate the file called *key-2-of-3.txt*. We cannot read it because of a lack of permission. The other file in the directory is an *md5* file. It has the credential format usually seen in */etc/shadow*, namely *user:passwordhash*. That user has access to the file we need to read, so we need to recover the password from the hash. It is an MD5 hash, and we can try to break it with a dictionary attack.
After reversing it via a dictionary attack, we find the plaintext phrase *"abcdefghijklmnopqrstuvwxyz"*, which is the *password* for the *robot* user. After switching to this user, we can read *key-2-of-3.txt* and get our second key.

## Privilege Escalation

Now the third key is all that remains. We can search the file system for the third key, but many folders and paths will be skipped because we are underprivileged. We need to get root access, and here things get interesting. On Windows, when users or processes need to perform actions outside of their native privilege level, they use impersonation tokens. In Linux this functionality is implemented with __uids__. When this bit on a file or script is set to match a user, other users executing it assume that user's role. For example, the ping functionality is allowed to be executed only by root, but since ping's setuid bit is set with root as the owner, any user who pings executes the functionality as root. In Linux, *setuid* and *getuid* are two functions used to manage this feature.

So here our aim is basically to find a program that has the __setuid__ bit set for the __root__ user and then get a shell from within that program. The query for finding programs in this category is shown below.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/getuidroot.PNG) |
|:--:|
| *Getuid* |

We are basically finding and filtering for files whose permission bits include "4000", which corresponds to files with the setuid bit set. All the listed programs are pretty usual, and we cannot execute shell commands from *ping* or *umount*, except via __nmap__. Nmap has an interactive mode option. Let us execute nmap in interactive mode and give it a shot.

| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/nmap.PNG) |
|:--:|
| *nmap in interactive mode* |

Now we call *sh* from within the interactive mode of nmap and Bazingo!
| ![reconrobot.PNG]({{site.url}}/public/img/vulnhub/privesc.PNG) |
|:--:|
| *Executing shell gives root* |

We can then find the third key file, which is present in */root*, and get the key.
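For reference, the SUID hunt above boils down to checking the setuid bit on each file. A Python equivalent of `find / -perm -4000` might look like this; it is a sketch of the idea, not what was run on the box:

```python
import os
import stat

def find_setuid_files(root):
    # Walk the tree and keep files whose mode includes the setuid bit --
    # the same files `find <root> -perm -4000` would report.
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # skip unreadable entries, as an unprivileged user must
            if mode & stat.S_ISUID:
                matches.append(path)
    return matches
```

Running this against `/` as an unprivileged user surfaces the same candidate list shown in the screenshot, from which nmap stands out because of its interactive mode.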
70.119718
533
0.766094
eng_Latn
0.99953
0c57367a493f279b1a5da4999831ca2b3e07cfcd
31,479
md
Markdown
aspnet/aspnet/overview/owin-and-katana/an-overview-of-project-katana.md
seangwright/Docs
45ae8e4113d3a137641f16a79a1f7ee0aac056eb
[ "CC-BY-4.0", "MIT" ]
1
2019-01-10T22:40:49.000Z
2019-01-10T22:40:49.000Z
aspnet/aspnet/overview/owin-and-katana/an-overview-of-project-katana.md
Mikejo5000/Docs-1
b785ebe7072290584cba9f3de2435914fa3cc19e
[ "CC-BY-4.0", "MIT" ]
null
null
null
aspnet/aspnet/overview/owin-and-katana/an-overview-of-project-katana.md
Mikejo5000/Docs-1
b785ebe7072290584cba9f3de2435914fa3cc19e
[ "CC-BY-4.0", "MIT" ]
1
2019-06-20T17:39:47.000Z
2019-06-20T17:39:47.000Z
---
uid: aspnet/overview/owin-and-katana/an-overview-of-project-katana
title: "An Overview of Project Katana | Microsoft Docs"
author: howarddierking
description: "The ASP.NET Framework has been around for over ten years, and the platform has enabled the development of countless Web sites and services. As Web applicatio..."
ms.author: aspnetcontent
manager: wpickett
ms.date: 08/30/2013
ms.topic: article
ms.assetid: 0ee21741-c1bf-4025-a9b0-24580cae24bc
ms.technology: 
ms.prod: .net-framework
msc.legacyurl: /aspnet/overview/owin-and-katana/an-overview-of-project-katana
msc.type: authoredcontent
---
An Overview of Project Katana
====================
by [Howard Dierking](https://github.com/howarddierking)

> The ASP.NET Framework has been around for over ten years, and the platform has enabled the development of countless Web sites and services. As Web application development strategies have evolved, the framework has been able to evolve in step with technologies like ASP.NET MVC and ASP.NET Web API. As Web application development takes its next evolutionary step into the world of cloud computing, project [Katana](https://channel9.msdn.com/Shows/Web+Camps+TV/The-Katana-Project-OWIN-for-ASPNET) provides the underlying set of components to ASP.NET applications, enabling them to be flexible, portable, lightweight, and provide better performance – put another way, project [Katana](https://channel9.msdn.com/Shows/Web+Camps+TV/The-Katana-Project-OWIN-for-ASPNET) cloud-optimizes your ASP.NET applications.

## Why Katana – Why Now?

Regardless of whether one is discussing a developer framework or an end-user product, it's important to understand the underlying motivations for creating the product – and part of that includes knowing who the product was created for. ASP.NET was originally created with two customers in mind.
**The first group of customers was classic ASP developers.** At the time, ASP was one of the primary technologies for creating dynamic, data-driven Web sites and applications by interweaving markup and server-side script. The ASP runtime supplied server-side script with a set of objects that abstracted core aspects of the underlying HTTP protocol and Web server and provided access to additional services such as session and application state management, cache, etc. While powerful, classic ASP applications became a challenge to manage as they grew in size and complexity. This was largely due to the lack of structure found in scripting environments coupled with the duplication of code resulting from the interleaving of code and markup. In order to capitalize on the strengths of classic ASP while addressing some of its challenges, ASP.NET took advantage of the code organization provided by the object-oriented languages of the .NET Framework while also preserving the server-side programming model to which classic ASP developers had grown accustomed.

**The second group of target customers for ASP.NET was Windows business application developers.** Unlike classic ASP developers, who were accustomed to writing HTML markup and the code to generate more HTML markup, WinForms developers (like the VB6 developers before them) were accustomed to a design-time experience that included a canvas and a rich set of user interface controls. The first version of ASP.NET – also known as "Web Forms" – provided a similar design-time experience along with a server-side event model for user interface components and a set of infrastructure features (such as ViewState) to create a seamless developer experience between client- and server-side programming. Web Forms effectively hid the Web's stateless nature under a stateful event model that was familiar to WinForms developers.
### Challenges Raised by the Historical Model

**The net result was a mature, feature-rich runtime and developer programming model.** However, with that feature-richness came a couple of notable challenges. Firstly, the framework was **monolithic**, with logically disparate units of functionality being tightly coupled in the same System.Web.dll assembly (for example, the core HTTP objects with the Web Forms framework). Secondly, ASP.NET was included as a part of the larger .NET Framework, which meant that the **time between releases was on the order of years.** This made it difficult for ASP.NET to keep pace with all of the changes happening in rapidly evolving Web development. Finally, System.Web.dll itself was coupled in a few different ways to a specific Web hosting option: Internet Information Services (IIS).

### Evolutionary steps: ASP.NET MVC and ASP.NET Web API

And lots of change was happening in Web development! Web applications were increasingly being developed as a series of small, focused components rather than large frameworks. The number of components, as well as the frequency with which they were released, was increasing at an ever faster rate. It was clear that keeping pace with the Web would require frameworks to get smaller, decoupled and more focused rather than larger and more feature-rich; therefore, the **ASP.NET team took several evolutionary steps to enable ASP.NET as a family of pluggable Web components rather than a single framework**.

One of the early changes was the rise in popularity of the well-known model-view-controller (MVC) design pattern, thanks to Web development frameworks like Ruby on Rails. This style of building Web applications gave the developer greater control over her application's markup while still preserving the separation of markup and business logic, which was one of the initial selling points for ASP.NET.
To meet the demand for this style of Web application development, Microsoft took the opportunity to position itself better for the future by **developing ASP.NET MVC out of band** (and not including it in the .NET Framework). ASP.NET MVC was released as an independent download. This gave the engineering team the flexibility to deliver updates much more frequently than had been previously possible.

Another major shift in Web application development was the shift from dynamic, server-generated Web pages to static initial markup with dynamic sections of the page generated from client-side script communicating **with backend Web APIs through AJAX requests**. This architectural shift helped propel the rise of Web APIs, and the development of the ASP.NET Web API framework. As in the case of ASP.NET MVC, the release of ASP.NET Web API provided another opportunity to evolve ASP.NET further as a more modular framework. The engineering team took advantage of the opportunity and **built ASP.NET Web API such that it had no dependencies on any of the core framework types found in System.Web.dll**. This enabled two things: first, it meant that ASP.NET Web API could evolve in a completely self-contained manner (and it could continue to iterate quickly because it is delivered via NuGet). Second, because there were no external dependencies on System.Web.dll, and therefore no dependencies on IIS, ASP.NET Web API included the capability to run in a custom host (for example, a console application, a Windows service, etc.)

### The Future: A Nimble Framework

By decoupling framework components from one another and then releasing them on NuGet, frameworks could now **iterate more independently and more quickly**. Additionally, the power and flexibility of Web API's self-hosting capability proved very attractive to developers who wanted a **small, lightweight host** for their services.
It proved so attractive, in fact, that other frameworks also wanted this capability, and this surfaced a new challenge: each framework ran in its own host process on its own base address and needed to be managed (started, stopped, etc.) independently. A modern Web application generally supports static file serving, dynamic page generation, Web API, and more recently real-time/push notifications. Expecting that each of these services should be run and managed independently was simply not realistic. What was needed was a single hosting abstraction that would enable a developer to compose an application from a variety of different components and frameworks, and then run that application on a supporting host.

## The Open Web Interface for .NET (OWIN)

Inspired by the benefits achieved by [Rack](http://rack.github.io/) in the Ruby community, several members of the .NET community set out to create an abstraction between Web servers and framework components. Two design goals for the OWIN abstraction were that it be simple and that it take the fewest possible dependencies on other framework types. These two goals help ensure:

- New components could be more easily developed and consumed.
- Applications could be more easily ported between hosts and potentially entire platforms/operating systems.

The resulting abstraction consists of two core elements. The first is the environment dictionary. This data structure is responsible for storing all of the state necessary for processing an HTTP request and response, as well as any relevant server state. The environment dictionary is defined as follows:

[!code-console[Main](an-overview-of-project-katana/samples/sample1.cmd)]

An OWIN-compatible Web server is responsible for populating the environment dictionary with data such as the body streams and header collections for an HTTP request and response.
It is then the responsibility of the application or framework components to populate or update the dictionary with additional values and write to the response body stream. In addition to specifying the type for the environment dictionary, the OWIN specification defines a list of core dictionary key-value pairs. For example, the following table shows the required dictionary keys for an HTTP request:

| Key Name | Value Description |
| --- | --- |
| `"owin.RequestBody"` | A Stream with the request body, if any. Stream.Null MAY be used as a placeholder if there is no request body. See [Request Body](http://owin.org/html/owin.html#34-request-body-100-continue-and-completed-semantics). |
| `"owin.RequestHeaders"` | An `IDictionary<string, string[]>` of request headers. See [Headers](http://owin.org/html/owin.html#3-3-headers). |
| `"owin.RequestMethod"` | A `string` containing the HTTP request method of the request (e.g., `"GET"`, `"POST"`). |
| `"owin.RequestPath"` | A `string` containing the request path. The path MUST be relative to the "root" of the application delegate; see [Paths](http://owin.org/html/owin.html#5-3-paths). |
| `"owin.RequestPathBase"` | A `string` containing the portion of the request path corresponding to the "root" of the application delegate; see [Paths](http://owin.org/html/owin.html#5-3-paths). |
| `"owin.RequestProtocol"` | A `string` containing the protocol name and version (e.g. `"HTTP/1.0"` or `"HTTP/1.1"`). |
| `"owin.RequestQueryString"` | A `string` containing the query string component of the HTTP request URI, without the leading "?" (e.g., `"foo=bar&baz=quux"`). The value may be an empty string. |
| `"owin.RequestScheme"` | A `string` containing the URI scheme used for the request (e.g., `"http"`, `"https"`); see [URI Scheme](http://owin.org/html/owin.html#5-1-uri-scheme). |

The second key element of OWIN is the application delegate.
This is a function signature which serves as the primary interface between all components in an OWIN application. The definition for the application delegate is as follows:

`Func<IDictionary<string, object>, Task>`

The application delegate, then, is simply an implementation of the Func delegate type where the function accepts the environment dictionary as input and returns a Task. This design has several implications for developers:

- There are a very small number of type dependencies required in order to write OWIN components. This greatly increases the accessibility of OWIN to developers.
- The asynchronous design enables the abstraction to be efficient with its handling of computing resources, particularly in more I/O-intensive operations.
- Because the application delegate is an atomic unit of execution and because the environment dictionary is carried as a parameter on the delegate, OWIN components can be easily chained together to create complex HTTP processing pipelines.

From an implementation perspective, OWIN is a specification ([http://owin.org/html/owin.html](http://owin.org/html/owin.html)). Its goal is not to be the next Web framework, but rather a specification for how Web frameworks and Web servers interact.

If you've investigated [OWIN](http://owin.org/) or [Katana](https://github.com/aspnet/AspNetKatana/wiki), you may also have noticed the [Owin NuGet package](http://nuget.org/packages/Owin) and Owin.dll. This library contains a single interface, [IAppBuilder](https://github.com/owin/owin/blob/master/src/Owin/IAppBuilder.cs), which formalizes and codifies the startup sequence described in [section 4](http://owin.org/html/owin.html#4-application-startup) of the OWIN specification. While not required in order to build OWIN servers, the [IAppBuilder](https://github.com/owin/owin/blob/master/src/Owin/IAppBuilder.cs) interface provides a concrete reference point, and it is used by the Katana project components.
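OWIN itself is a .NET contract, but the environment-dictionary-plus-delegate idea is language-agnostic (Python's WSGI is built on the same shape). As a rough, non-.NET sketch of why the design composes, here is the pattern in Python: a "delegate" takes the environment dict, and middleware is just a function from delegate to delegate.

```python
def app(environ):
    # Innermost "application delegate": reads request state from the
    # environment dictionary and writes response state back into it.
    environ["response.body"] = "Hello from " + environ.get("request.path", "/")
    return environ

def logging_middleware(next_delegate):
    # Middleware wraps one delegate to produce another, which is what
    # lets OWIN-style components chain into arbitrary pipelines.
    def wrapper(environ):
        environ.setdefault("log", []).append(environ.get("request.path", "/"))
        return next_delegate(environ)
    return wrapper

pipeline = logging_middleware(app)
result = pipeline({"request.path": "/hello"})
print(result["response.body"])  # Hello from /hello
```

In real OWIN the delegate is asynchronous (`Task`-returning) and the keys follow the `owin.*` naming shown in the table above; this sketch only illustrates the composition idea.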
## Project Katana

Whereas both the [OWIN](http://owin.org/html/owin.html) specification and *Owin.dll* are community-owned and community-run open source efforts, the [Katana](https://github.com/aspnet/AspNetKatana/wiki) project represents the set of OWIN components that, while still open source, are built and released by Microsoft. These components include both infrastructure components, such as hosts and servers, as well as functional components, such as authentication components and bindings to frameworks such as [SignalR](../../../signalr/index.md) and [ASP.NET Web API](../../../web-api/overview/getting-started-with-aspnet-web-api/index.md). The project has the following three high-level goals:

- **Portable** – Components should be able to be easily substituted for new components as they become available. This includes all types of components, from the framework to the server and host. The implication of this goal is that third-party frameworks can seamlessly run on Microsoft servers while Microsoft frameworks can potentially run on third-party servers and hosts.
- **Modular/flexible** – Unlike many frameworks which include a myriad of features that are turned on by default, Katana project components should be small and focused, giving control over to the application developer in determining which components to use in her application.
- **Lightweight/performant/scalable** – By breaking the traditional notion of a framework into a set of small, focused components which are added explicitly by the application developer, a resulting Katana application can consume fewer computing resources, and as a result, handle more load than with other types of servers and frameworks. As the requirements of the application demand more features from the underlying infrastructure, those can be added to the OWIN pipeline, but that should be an explicit decision on the part of the application developer.
Additionally, the substitutability of lower-level components means that as they become available, new high-performance servers can seamlessly be introduced to improve the performance of OWIN applications without breaking those applications.

## Getting Started with Katana Components

When it was first introduced, one aspect of the [Node.js](http://nodejs.org/) framework that immediately drew people's attention was the simplicity with which one could author and run a Web server. If Katana goals were framed in light of [Node.js](http://nodejs.org/), one might summarize them by saying that Katana brings many of the benefits of [Node.js](http://nodejs.org/) (and frameworks like it) without forcing the developer to throw out everything she knows about developing ASP.NET Web applications. For this statement to hold true, getting started with the Katana project should be equally simple in nature to [Node.js](http://nodejs.org/).

## Creating "Hello World!"

One notable difference between JavaScript and .NET development is the presence (or absence) of a compiler. As such, the starting point for a simple Katana server is a Visual Studio project. However, we can start with the most minimal of project types: the Empty ASP.NET Web Application.

[![](an-overview-of-project-katana/_static/image1.png)](http://nuget.org/packages/Microsoft.Owin.Host.SystemWeb)

Next, we will install the [Microsoft.Owin.Host.SystemWeb](http://nuget.org/packages/Microsoft.Owin.Host.SystemWeb) NuGet package into the project. This package provides an OWIN server that runs in the ASP.NET request pipeline.
It can be found on the [NuGet gallery](http://nuget.org/packages/Microsoft.Owin.Host.SystemWeb) and can be installed using either the Visual Studio package manager dialog or the package manager console with the following command:

[!code-console[Main](an-overview-of-project-katana/samples/sample2.cmd)]

Installing the `Microsoft.Owin.Host.SystemWeb` package will install a few additional packages as dependencies. One of those dependencies is `Microsoft.Owin`, a library which provides several helper types and methods for developing OWIN applications. We can use those types to quickly write the following "hello world" server.

[!code-csharp[Main](an-overview-of-project-katana/samples/sample3.cs)]

This very simple Web server can now be run using Visual Studio's **F5** command and includes full support for debugging.

## Switching hosts

By default, the previous "hello world" example runs in the ASP.NET request pipeline, which uses System.Web in the context of IIS. This can by itself add tremendous value, as it enables us to benefit from the flexibility and composability of an OWIN pipeline with the management capabilities and overall maturity of IIS. However, there may be cases where the benefits provided by IIS are not required and the desire is for a smaller, more lightweight host. What is needed, then, to run our simple Web server outside of IIS and System.Web?

To illustrate the portability goal, moving from a Web-server host to a command-line host requires simply adding the new server and host dependencies to the project's output folder and then starting the host. In this example, we'll host our Web server in a Katana host called `OwinHost.exe` and will use the Katana HttpListener-based server.
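The referenced "hello world" sample file is not reproduced on this page. A minimal server along those lines — a sketch, not the literal sample, assuming the `Microsoft.Owin` helper types and the SystemWeb host package are installed — might look like:

```csharp
using Owin;

// Illustrative "hello world" Startup; the class name follows the Katana
// convention, but the body is a sketch rather than the referenced sample.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        app.Run(context =>
        {
            context.Response.ContentType = "text/plain";
            return context.Response.WriteAsync("Hello, world.");
        });
    }
}
```

With the SystemWeb host package installed, the host discovers the `Startup` class by convention, so no further wiring is needed before pressing **F5**.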
Similarly to the other Katana components, these will be acquired from NuGet using the following command:

[!code-console[Main](an-overview-of-project-katana/samples/sample4.cmd)]

From the command line, we can then navigate to the project root folder and simply run `OwinHost.exe` (which was installed in the tools folder of its respective NuGet package). By default, `OwinHost.exe` is configured to look for the HttpListener-based server, so no additional configuration is needed. Navigating in a Web browser to `http://localhost:5000/` shows the application now running through the console.

![](an-overview-of-project-katana/_static/image2.png)

## Katana Architecture

The Katana component architecture divides an application into four logical layers, as depicted below: *host, server, middleware,* and *application*. The component architecture is factored in such a way that implementations of these layers can be easily substituted, in many cases without requiring recompilation of the application.

![](an-overview-of-project-katana/_static/image3.png)

## Host

The host is responsible for:

- Managing the underlying process.
- Orchestrating the workflow that results in the selection of a server and the construction of an OWIN pipeline through which requests will be handled.

At present, there are three primary hosting options for Katana-based applications:

**IIS/ASP.NET**: Using the standard HttpModule and HttpHandler types, OWIN pipelines can run on IIS as a part of an ASP.NET request flow. ASP.NET hosting support is enabled by installing the Microsoft.Owin.Host.SystemWeb NuGet package into a Web application project. Additionally, because IIS acts as both a host and a server, the OWIN server/host distinction is conflated in this NuGet package, meaning that if using the SystemWeb host, a developer cannot substitute an alternate server implementation.
**Custom Host**: The Katana component suite gives a developer the ability to host applications in her own custom process, whether that is a console application, a Windows service, etc. This capability looks similar to the self-host capability provided by Web API. The following example shows a custom host of Web API code:

[!code-csharp[Main](an-overview-of-project-katana/samples/sample5.cs)]

The self-host setup for a Katana application is similar:

[!code-csharp[Main](an-overview-of-project-katana/samples/sample6.cs)]

One notable difference between the Web API and Katana self-host examples is that the Web API configuration code is missing from the Katana self-host example. In order to enable both portability and composability, Katana separates the code that starts the server from the code that configures the request processing pipeline. The code that configures Web API, then, is contained in the class `Startup`, which is additionally specified as the type parameter in `WebApplication.Start`.

[!code-csharp[Main](an-overview-of-project-katana/samples/sample7.cs)]

The startup class will be discussed in greater detail later in the article. However, the code required to start a Katana self-host process looks strikingly similar to the code that you may be using today in ASP.NET Web API self-host applications.

**OwinHost.exe**: While some will want to write a custom process to run Katana Web applications, many would prefer to simply launch a pre-built executable that can start a server and run their application. For this scenario, the Katana component suite includes `OwinHost.exe`. When run from within a project's root directory, this executable will start a server (it uses the HttpListener server by default) and use conventions to find and run the user's startup class. For more granular control, the executable provides a number of additional command-line parameters.
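The self-host samples referenced above follow this general shape — a sketch, assuming the `Microsoft.Owin.Hosting` and `Microsoft.Owin.Host.HttpListener` packages are installed (later Katana releases expose this API as `WebApp.Start`, superseding the earlier `WebApplication.Start` mentioned in the text; the port is an arbitrary choice here):

```csharp
using System;
using Microsoft.Owin.Hosting;

// Illustrative console host for a Katana self-host process.
public class Program
{
    public static void Main(string[] args)
    {
        // The Startup class configures the pipeline; the host only starts it.
        using (WebApp.Start<Startup>("http://localhost:5000"))
        {
            Console.WriteLine("Server running at http://localhost:5000; press ENTER to exit.");
            Console.ReadLine();
        }
    }
}
```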
![](an-overview-of-project-katana/_static/image4.png)

## Server

While the host is responsible for starting and maintaining the process within which the application runs, the responsibility of the server is to open a network socket, listen for requests, and send them through the pipeline of OWIN components specified by the user (as you may have already noticed, this pipeline is specified in the application developer's `Startup` class). Currently, the Katana project includes two server implementations:

- **Microsoft.Owin.Host.SystemWeb**: As previously mentioned, IIS in concert with the ASP.NET pipeline acts as both a host and a server. Therefore, when choosing this hosting option, IIS both manages host-level concerns such as process activation and listens for HTTP requests. For ASP.NET Web applications, it then sends the requests into the ASP.NET pipeline. The Katana SystemWeb host registers an ASP.NET HttpModule and HttpHandler to intercept requests as they flow through the HTTP pipeline and send them through the user-specified OWIN pipeline.
- **Microsoft.Owin.Host.HttpListener**: As its name indicates, this Katana server uses the .NET Framework's HttpListener class to open a socket and send requests into a developer-specified OWIN pipeline. This is currently the default server selection for both the Katana self-host API and OwinHost.exe.

## Middleware/framework

As previously mentioned, when the server accepts a request from a client, it is responsible for passing it through a pipeline of OWIN components, which are specified by the developer's startup code. These pipeline components are known as middleware. At a very basic level, an OWIN middleware component simply needs to implement the OWIN application delegate so that it is callable.
[!code-console[Main](an-overview-of-project-katana/samples/sample8.cmd)]

However, in order to simplify the development and composition of middleware components, Katana supports a handful of conventions and helper types for middleware components. The most common of these is the `OwinMiddleware` class. A custom middleware component built using this class would look similar to the following:

[!code-csharp[Main](an-overview-of-project-katana/samples/sample9.cs)]

This class derives from `OwinMiddleware`, implements a constructor that accepts an instance of the next middleware in the pipeline as one of its arguments, and then passes it to the base constructor. Additional arguments used to configure the middleware are also declared as constructor parameters after the next-middleware parameter.

At runtime, the middleware is executed via the overridden `Invoke` method. This method takes a single argument of type `OwinContext`. This context object is provided by the `Microsoft.Owin` NuGet package described earlier and provides strongly-typed access to the request, response, and environment dictionary, along with a few additional helper types.

The middleware class can be easily added to the OWIN pipeline in the application startup code as follows:

[!code-csharp[Main](an-overview-of-project-katana/samples/sample10.cs)]

Because the Katana infrastructure simply builds up a pipeline of OWIN middleware components, and because the components simply need to support the application delegate to participate in the pipeline, middleware components can range in complexity from simple loggers to entire frameworks like ASP.NET, Web API, or [SignalR](../../../signalr/index.md).
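The `OwinMiddleware` pattern just described can be sketched as follows — an illustrative logging component (the class name and logging calls are assumptions, not the referenced sample):

```csharp
using System.Threading.Tasks;
using Microsoft.Owin;

// Hypothetical middleware that logs each request and response status code.
public class LoggerMiddleware : OwinMiddleware
{
    // The next middleware in the pipeline is supplied by the Katana
    // infrastructure and passed to the base constructor.
    public LoggerMiddleware(OwinMiddleware next) : base(next)
    {
    }

    public override async Task Invoke(IOwinContext context)
    {
        System.Diagnostics.Debug.WriteLine("Request: " + context.Request.Path);

        // Pass control to the rest of the pipeline, then observe the result.
        await Next.Invoke(context);

        System.Diagnostics.Debug.WriteLine("Response: " + context.Response.StatusCode);
    }
}
```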
For example, adding ASP.NET Web API to the previous OWIN pipeline requires adding the following startup code:

[!code-csharp[Main](an-overview-of-project-katana/samples/sample11.cs)]

The Katana infrastructure will build the pipeline of middleware components based on the order in which they were added to the `IAppBuilder` object in the `Configuration` method. In our example, then, `LoggerMiddleware` can handle all requests that flow through the pipeline, regardless of how those requests are ultimately handled. This enables powerful scenarios where a middleware component (e.g. an authentication component) can process requests for a pipeline that includes multiple components and frameworks (e.g. ASP.NET Web API, SignalR, and a static file server).

## Applications

As illustrated by the previous examples, OWIN and the Katana project should not be thought of as a new application programming model, but rather as an abstraction to decouple application programming models and frameworks from server and hosting infrastructure. For example, when building Web API applications, the developer will continue to use the ASP.NET Web API framework, irrespective of whether or not the application runs in an OWIN pipeline using components from the Katana project. The one place where OWIN-related code will be visible to the application developer is the application startup code, where the developer composes the OWIN pipeline. In the startup code, the developer registers a series of UseXx statements, generally one for each middleware component that will process incoming requests. This experience has the same effect as registering HTTP modules in the current System.Web world. Typically, a larger framework middleware, such as ASP.NET Web API or [SignalR](../../../signalr/index.md), will be registered at the end of the pipeline.
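The startup composition described above can be sketched like this — an illustrative `Startup`, not the referenced sample, assuming the `Microsoft.AspNet.WebApi.Owin` package for `UseWebApi` and the hypothetical `LoggerMiddleware` from the middleware discussion:

```csharp
using Owin;
using System.Web.Http;

// Sketch of a Startup that composes a custom middleware with ASP.NET Web API.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Registered first, so it sees every request in the pipeline.
        app.Use<LoggerMiddleware>();

        // Web API is configured separately and registered last.
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
        app.UseWebApi(config);
    }
}
```

Registration order matters: because `LoggerMiddleware` precedes `UseWebApi`, it observes requests whether or not Web API ultimately handles them.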
Cross-cutting middleware components, such as those for authentication or caching, are generally registered towards the beginning of the pipeline so that they process requests for all of the frameworks and components registered later in the pipeline. This separation of the middleware components from each other and from the underlying infrastructure components enables the components to evolve at different velocities while ensuring that the overall system remains stable.

## Components – NuGet Packages

Like many current libraries and frameworks, the Katana project components are delivered as a set of NuGet packages. For the upcoming version 2.0, the Katana package dependency graph looks as follows. (Click on the image for a larger view.)

[![](an-overview-of-project-katana/_static/image6.png)](an-overview-of-project-katana/_static/image5.png)

Nearly every package in the Katana project depends, directly or indirectly, on the Owin package. You may remember that this is the package that contains the IAppBuilder interface, which provides a concrete implementation of the application startup sequence described in section 4 of the OWIN specification. Additionally, many of the packages depend on Microsoft.Owin, which provides a set of helper types for working with HTTP requests and responses. The remainder of the packages can be classified as either hosting infrastructure packages (servers or hosts) or middleware. Packages and dependencies that are external to the Katana project are displayed in orange.

The hosting infrastructure for Katana 2.0 includes both the SystemWeb and HttpListener-based servers, the OwinHost package for running OWIN applications using OwinHost.exe, and the Microsoft.Owin.Hosting package for self-hosting OWIN applications in a custom host (e.g. a console application, a Windows service, etc.). For Katana 2.0, the middleware components are primarily focused on providing different means of authentication.
One additional middleware component for diagnostics is provided, which enables support for a start page and an error page. As OWIN grows into the de facto hosting abstraction, the ecosystem of middleware components, both those developed by Microsoft and by third parties, will also grow in number.

## Conclusion

From its beginning, the Katana project's goal has not been to create and thereby force developers to learn yet another Web framework. Rather, the goal has been to create an abstraction to give .NET Web application developers more choice than has previously been possible. By breaking up the logical layers of a typical Web application stack into a set of replaceable components, the Katana project enables components throughout the stack to improve at whatever rate makes sense for those components. By building all components around the simple OWIN abstraction, Katana enables frameworks and the applications built on top of them to be portable across a variety of different servers and hosts. By putting the developer in control of the stack, Katana ensures that the developer makes the ultimate choice about how lightweight or how feature-rich her Web stack should be.

## For more information about Katana

- The Katana project on GitHub: [https://github.com/aspnet/AspNetKatana/](https://github.com/aspnet/AspNetKatana/).
- Video: [The Katana Project - OWIN for ASP.NET](https://channel9.msdn.com/Shows/Web+Camps+TV/The-Katana-Project-OWIN-for-ASPNET), by Howard Dierking.

## Acknowledgements

- [Rick Anderson](https://blogs.msdn.com/b/rickandy/): (twitter [@RickAndMSFT](http://twitter.com/RickAndMSFT)) Rick is a senior programming writer for Microsoft focusing on Azure and MVC.
- [Scott Hanselman](http://www.hanselman.com/blog/): (twitter [@shanselman](https://twitter.com/shanselman))
- [Jon Galloway](https://weblogs.asp.net/jgalloway/default.aspx): (twitter [@jongalloway](https://twitter.com/jongalloway))
0c5795042d9633804dbd59e5e39c6e31ed0446a5
384
md
Markdown
src/pages/letters/2016-03-03-adult-ed-class/2016-03-03-Z-Budner-to-Marie-Curie.md
kimadactyl/dearfriend-v2
f8180205b893c487dd53c86493abf7ba31c9f5be
[ "MIT" ]
---
sender: Zaneta Budner
recipient: Marie Curie
description: Physicist and chemist who conducted pioneering research into radioactivity
website: https://en.wikipedia.org/wiki/Marie_Curie
born: 1867
died: 1934
received: 2016-03-03
---

Dear Marie Curie

Thank you for your work on radiation, which led to X-rays and radiotherapy in medicine. You worked to help people.

Best wishes

Zaneta
0c57aff751ed9ad50db972667cbc348de4dcb6a3
80
md
Markdown
README.md
zerak/mahjong
740bb6c23cbf30c7fc53731ea90bf5120d5ae780
[ "MIT" ]
# mahjong

A game server based on the [EGO](https://github.com/zerak/ego) framework.
0c57bd7b2f62b50fe7a72158fdc564280c7d7522
8,812
md
Markdown
how-to-use-azureml/work-with-data/dataset-api-change-notice.md
f-urbano/MachineLearningNotebooks
c13a255a96bc67ef29d6e1312e2eaf5963b4bef4
[ "MIT" ]
# Dataset API change notice

## Why are Dataset API changes essential?

The existing Dataset class only supports data in tabular format. In order to support binary data and address a wider range of machine learning scenarios, including deep learning, we will introduce Dataset types. Datasets are categorized into various types based on how users consume them in training.

List of Dataset types:

- **TabularDataset**: Represents data in a tabular format by parsing the provided file or list of files. TabularDataset can be created from csv, tsv, parquet files, SQL query results, etc. For the complete list, please visit our [documentation](https://aka.ms/tabulardataset-api-reference). It provides you with the ability to materialize the data into a pandas DataFrame.
- **FileDataset**: References single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute.

In order to transition from the current Dataset design to typed Datasets, we will deprecate the following methods over time.

## Which methods on the Dataset class will be deprecated in upcoming releases?

Methods to be deprecated|Replacement in the new version
----|--------
[Dataset.get()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#get-workspace--name-none--id-none-)|[Dataset.get_by_name()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#get-by-name-workspace--name--version--latest--)
[Dataset.from_pandas_dataframe()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#from-pandas-dataframe-dataframe--path-none--in-memory-false-)|Creating a Dataset from an in-memory DataFrame or local files will cause errors in training on remote compute. Therefore, the new Dataset design will only support creating Datasets from paths in datastores or public web urls. If you are using pandas, you can write the DataFrame into a parquet file, upload it to the cloud, and create a TabularDataset referencing the parquet file using [Dataset.Tabular.from_parquet_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py#from-parquet-files-path--validate-true--include-path-false--set-column-types-none-)
[Dataset.from_delimited_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#from-delimited-files-path--separator------header--promoteheadersbehavior-all-files-have-same-headers--3---encoding--fileencoding-utf8--0---quoting-false--infer-column-types-true--skip-rows-0--skip-mode--skiplinesbehavior-no-rows--0---comment-none--include-path-false--archive-options-none--partition-format-none-)|[Dataset.Tabular.from_delimited_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separator------header--promoteheadersbehavior-all-files-have-same-headers--3--)
[Dataset.auto_read_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#auto-read-files-path--include-path-false--partition-format-none-)|`auto_read_files` does not always produce results that match users' expectations. To avoid confusion, this method is not introduced with TabularDataset for now. Please use [Dataset.Tabular.from_parquet_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py#from-parquet-files-path--validate-true--include-path-false--set-column-types-none-) or [Dataset.Tabular.from_delimited_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separator------header--promoteheadersbehavior-all-files-have-same-headers--3--) depending on your file format.
[Dataset.from_parquet_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#from-parquet-files-path--include-path-false--partition-format-none-)|[Dataset.Tabular.from_parquet_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py#from-parquet-files-path--validate-true--include-path-false--set-column-types-none-)
[Dataset.from_sql_query()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#from-sql-query-data-source--query-)|[Dataset.Tabular.from_sql_query()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py#from-sql-query-query--validate-true--set-column-types-none-)
[Dataset.from_excel_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#from-excel-files-path--sheet-name-none--use-column-headers-false--skip-rows-0--include-path-false--infer-column-types-true--partition-format-none-)|We will support creating a TabularDataset from Excel files in a future release.
[Dataset.from_json_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#from-json-files-path--encoding--fileencoding-utf8--0---flatten-nested-arrays-false--include-path-false--partition-format-none-)|[Dataset.Tabular.from_json_lines_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory?view=azure-ml-py#from-json-lines-files-path--validate-true--include-path-false--set-column-types-none--partition-format-none-)
[Dataset.to_pandas_dataframe()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#to-pandas-dataframe--)|[TabularDataset.to_pandas_dataframe()](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py#to-pandas-dataframe--)
[Dataset.to_spark_dataframe()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#to-spark-dataframe--)|[TabularDataset.to_spark_dataframe()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py#to-spark-dataframe--)
[Dataset.head(3)](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#head-count-)|[TabularDataset.take(3).to_pandas_dataframe()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py#take-count-)
[Dataset.sample()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#sample-sample-strategy--arguments-)|[TabularDataset.take_sample()](https://docs.microsoft.com/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py#take-sample-probability--seed-none-)
[Dataset.from_binary_files()](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#from-binary-files-path-)|`Dataset.File.from_files()`

## Why should I use the new Dataset API if I'm only dealing with tabular data?

The current Dataset will be kept around for backward compatibility, but we strongly encourage you to move to TabularDataset for the new capabilities listed below:

- You are able to version and track the new typed Datasets. [Learn How](https://aka.ms/azureml/howto/versiondata)
- You are able to use TabularDatasets as automated ML input. [Learn How](https://aka.ms/automl-dataset)
- You are able to use the new typed Datasets as ScriptRun, Estimator, HyperDrive input. [Learn How](https://aka.ms/train-with-datasets)
- You are able to use the new typed Datasets in Azure Machine Learning Pipelines. [Learn How](https://aka.ms/pl-datasets)

## How to migrate registered Datasets to new typed Datasets?

We handled the migration for you. All legacy datasets are migrated to new typed Datasets automatically. To use registered datasets, simply call [Dataset.get_by_name](https://docs.microsoft.com/python/api/azureml-core/azureml.core.dataset.dataset?view=azure-ml-py#get-by-name-workspace--name--version--latest--).

## How to provide feedback?

If you have any feedback about our product, or if there is any missing capability that is essential for you to use the new Dataset API, please email us at [AskAzureMLData@microsoft.com](mailto:AskAzureMLData@microsoft.com).

![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/work-with-data/dataset-api-change-notice.png)
0c58800629e8c845a112c974e1db88d5e27dd3fc
4,774
md
Markdown
_wiki/BioJava_CookBook_PDB_CE_Algorithm.md
biojava/biojava.github.io
32d95e1e36e7d719b62eaba6bf529e710576d1da
[ "CC-BY-3.0" ]
3
2016-06-10T06:04:51.000Z
2020-01-03T00:47:51.000Z
---
title: BioJava:CookBook:PDB:CE Algorithm
permalink: wiki/BioJava%3ACookBook%3APDB%3ACE_Algorithm
---

CE Algorithm
============

The BioJava 3 release provides a version of the **Combinatorial Extension Algorithm** (CE), originally developed by Shindyalov and Bourne ([original manuscript](http://peds.oxfordjournals.org/cgi/content/short/11/9/739)).

User Interface
==============

**Required modules**: *biojava-structure, biojava-structure-gui, alignment*

A user interface for running structure alignments manually is available through the biojava3-structure-gui module.

```java
public static void main(String[] args) {
    System.setProperty("PDB_DIR", "/tmp/");

    AlignmentGui.getInstance();
}
```

The *PDB\_DIR* property allows you to specify the path in the local file system where PDB files are stored.

Local Execution
===============

**Required modules**: *biojava-structure, alignment*

**Optional module**: *biojava-structure-gui* for the 3D visualisation

Using BioJava3 it is possible to align any set of atoms with the CE algorithm. This example demonstrates how to align two protein chains and edit some of the parameters.
```java
public static void main(String[] args) {

    String pdbFilePath = "/tmp/";

    boolean isSplit = true;

    String name1 = "1cdg.A";
    String name2 = "1tim.B";

    AtomCache cache = new AtomCache(pdbFilePath, isSplit);

    Structure structure1 = null;
    Structure structure2 = null;
    try {
        StructureAlignment algorithm = StructureAlignmentFactory.getAlgorithm(CeMain.algorithmName);

        structure1 = cache.getStructure(name1);
        structure2 = cache.getStructure(name2);

        Atom[] ca1 = StructureTools.getAtomCAArray(structure1);
        Atom[] ca2 = StructureTools.getAtomCAArray(structure2);

        // get default parameters
        CeParameters params = new CeParameters();

        // print more detail
        params.setShowAFPRanges(true);

        // set the maximum gap size to unlimited
        params.setMaxGapSize(-1);

        // The results are stored in an AFPChain object
        AFPChain afpChain = algorithm.align(ca1, ca2, params);
        afpChain.setName1(name1);
        afpChain.setName2(name2);

        // show a nice summary print
        System.out.println(AfpChainWriter.toWebSiteDisplay(afpChain, ca1, ca2));

        // print rotation matrices
        System.out.println(afpChain.toRotMat());
        //System.out.println(afpChain.toCE(ca1, ca2));

        // print XML representation
        //System.out.println(AFPChainXMLConverter.toXML(afpChain, ca1, ca2));

        // This line requires the biojava3-structure-gui module
        StructureAlignmentDisplay.display(afpChain, ca1, ca2);

    } catch (Exception e) {
        e.printStackTrace();
        return;
    }
}
```

CE Parameters
=============

This CE implementation allows you to specify several custom parameters:

1. private int **maxGapSize** (default 30): The max gap size parameter G, which was obtained empirically in the CE paper. The larger the max gap size, the longer the compute time, but in some cases drastically improved results (e.g. 1CDG.A vs. 1TIM.A). For no limit, set this parameter to -1.
2. boolean **checkCircular** (default false): A flag that determines if CE should check for circular permutations (CP). Increases calculation time significantly, but can detect CPs.
3. int **winSize** (default 8): The window size used for the calculation of the initial aligned fragment pairs (AFPs).
4. double **rmsdThr** (default 3.0): RMSD threshold used while tracing the AFP fragments.
5. double **rmsdThrJoin** (default 4.0): RMSD threshold used to decide if two AFPs should be joined.
6. String[] **alignmentAtoms** (default CA): Allows configuring which atoms to use. At present this only supports the "CA" and "CA","CB" settings.
7. boolean **showAFPRanges** (default false): A print flag that allows viewing the ranges of the initial AFPs, prior to alignment optimization.

back to <BioJava:CookBook:PDB:align>

See also
========

- [Combinatorial Extension with Circular Permutations](Combinatorial Extension with Circular Permutations "wikilink")
0c58b969c79fb4fa97de887b03edd2a3f9d8bec2
3,089
md
Markdown
tensorflow/g3doc/get_started/index.md
aksaxena80/test
db0b5da485e1d1f23003ee08ed2e191451ee0319
[ "Apache-2.0" ]
4
2021-06-11T09:43:32.000Z
2021-11-17T11:15:52.000Z
# Introduction <a class="md-anchor" id="AUTOGENERATED-introduction"></a>

Let's get you up and running with TensorFlow!

But before we even get started, let's give you a sneak peek at what TensorFlow code looks like in the Python API, just so you have a sense of where we're headed.

Here's a little Python program that makes up some data in three dimensions, and then fits a plane to it.

```python
import tensorflow as tf
import numpy as np

# Make 100 phony data points in NumPy.
x_data = np.float32(np.random.rand(2, 100)) # Random input
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Construct a linear model.
b = tf.Variable(tf.zeros([1]))
W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0))
y = tf.matmul(W, x_data) + b

# Minimize the squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

# For initializing the variables.
init = tf.initialize_all_variables()

# Launch the graph
sess = tf.Session()
sess.run(init)

# Fit the plane.
for step in xrange(0, 201):
    sess.run(train)
    if step % 20 == 0:
        print step, sess.run(W), sess.run(b)

# Learns best fit is W: [[0.100  0.200]], b: [0.300]
```

To whet your appetite further, we suggest you check out what a classical machine learning problem looks like in TensorFlow. In the land of neural networks the most "classic" classical problem is the MNIST handwritten digit classification. We offer two introductions here, one for machine learning newbies, and one for pros. If you've already trained dozens of MNIST models in other software packages, please take the red pill. If you've never even heard of MNIST, definitely take the blue pill. If you're somewhere in between, we suggest skimming blue, then red.
<div style="width:100%; margin:auto; margin-bottom:10px; margin-top:20px; display: flex; flex-direction: row"> <a href="../tutorials/mnist/beginners/index.md" title="MNIST for ML Beginners tutorial"> <img style="flex-grow:1; flex-shrink:1; border: 1px solid black;" src="blue_pill.png" alt="MNIST for machine learning beginners tutorial" /> </a> <a href="../tutorials/mnist/pros/index.md" title="Deep MNIST for ML Experts tutorial"> <img style="flex-grow:1; flex-shrink:1; border: 1px solid black;" src="red_pill.png" alt="Deep MNIST for machine learning experts tutorial" /> </a> </div> <p style="font-size:10px;">Images licensed CC BY-SA 4.0; original by W. Carter</p> If you're already sure you want to learn and install TensorFlow you can skip these and charge ahead. Don't worry, you'll still get to see MNIST -- we'll also use MNIST as an example in our technical tutorial where we elaborate on TensorFlow features. ## Recommended Next Steps: <a class="md-anchor" id="AUTOGENERATED-recommended-next-steps-"></a> * [Download and Setup](../get_started/os_setup.md) * [Basic Usage](../get_started/basic_usage.md) * [TensorFlow Mechanics 101](../tutorials/mnist/tf/index.md) <div class='sections-order' style="display: none;"> <!-- <!-- os_setup.md --> <!-- basic_usage.md --> --> </div>
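The plane-fitting program above can be cross-checked without TensorFlow: because the model is linear, ordinary least squares recovers W and b in closed form. The following NumPy-only sketch is not part of the original tutorial (the fixed seed is an arbitrary choice for reproducibility); it regenerates the same kind of phony data and solves for the parameters directly.

```python
import numpy as np

np.random.seed(0)
x_data = np.float32(np.random.rand(2, 100))      # same phony data recipe as above
y_data = np.dot([0.100, 0.200], x_data) + 0.300

# Stack the inputs with a column of ones so the intercept b is fit jointly.
A = np.vstack([x_data, np.ones(100)]).T          # shape (100, 3)
theta, _, _, _ = np.linalg.lstsq(A, y_data, rcond=None)
W, b = theta[:2], theta[2]
print(W, b)  # W is close to [0.1, 0.2] and b is close to 0.3
```

Since the synthetic data is exactly linear, the closed-form solution matches the values the gradient-descent loop converges toward.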
37.216867
145
0.725154
eng_Latn
0.912656
0c58df5bb8fb3ad794158a217b4c0dac52ca52d8
4,775
md
Markdown
docs/profiling/da0023-high-gc-cpu-time.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/profiling/da0023-high-gc-cpu-time.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
1
2018-10-19T08:00:06.000Z
2018-10-19T08:00:06.000Z
docs/profiling/da0023-high-gc-cpu-time.md
doodz/visualstudio-docs.fr-fr
49c7932ec7a761e4cd7c259a5772e5415253a7a5
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: "DA0023: High GC CPU time | Microsoft Docs" ms.custom: ms.date: 11/04/2016 ms.reviewer: ms.suite: ms.technology: vs-ide-debug ms.tgt_pltfrm: ms.topic: article f1_keywords: - vs.performance.DA0023 - vs.performance.23 - vs.performance.rules.DA0023 ms.assetid: aba875fe-9cbc-418d-a2c4-6eb47519a5bb caps.latest.revision: "10" author: mikejo5000 ms.author: mikejo manager: ghogen ms.openlocfilehash: b1675f6e090de5b1e3fdcd3e30d706a38481a16a ms.sourcegitcommit: f40311056ea0b4677efcca74a285dbb0ce0e7974 ms.translationtype: HT ms.contentlocale: fr-FR ms.lasthandoff: 10/31/2017 --- # <a name="da0023-high-gc-cpu-time"></a>DA0023: High GC CPU time ||| |-|-| |Rule ID|DA0023| |Category|.NET Framework Usage| |Profiling method|Any| |Message|The % Time in GC is relatively high. This level of garbage collection overhead can impact the responsiveness of your application. You can collect .NET memory allocation data and object lifetime information to better understand the pattern of memory allocation your application uses.| |Rule type|Information| When you profile by using the sampling, .NET memory, or resource contention method, you must collect at least 10 samples to trigger this rule. ## <a name="cause"></a>Cause System performance data collected during profiling indicates that the time spent in garbage collection was significant compared to the application's total processing time. ## <a name="rule-description"></a>Rule description The Microsoft .NET common language runtime (CLR) provides an automatic memory management mechanism that uses a garbage collector to reclaim the memory of objects that the application no longer uses. 
The garbage collector is generation-oriented and operates on the assumption that many allocations are short-lived. Local variables, for example, should be short-lived. Newly created objects start in generation 0 (gen 0), and then they are promoted to generation 1 when they survive a garbage collection pass. Finally, they move to generation 2 if the application is still using them. Generation 0 objects are collected frequently and usually very efficiently. Generation 1 objects are collected less frequently and less efficiently. Finally, long-lived objects in generation 2 need to be collected even less frequently. A generation 2 collection, which is a full garbage collection, is the most expensive operation. This rule fires when the time spent in garbage collection is significant compared to the application's total processing time. > [!NOTE] > When the time spent in garbage collection is excessive compared to the total application processing time, the [DA0024: Excessive GC CPU time](../profiling/da0024-excessive-gc-cpu-time.md) warning fires instead of this rule. ## <a name="how-to-investigate-a-warning"></a>How to investigate a warning Double-click the message in the Error List window to navigate to the [Marks view](../profiling/marks-view.md) of the profiling data. Find the **.NET CLR Memory\\% Time in GC** column. Determine whether there are specific phases of program execution where the overhead of managed-memory garbage collection is higher. Compare the values in the % Time in GC column with the garbage collection rates in the **# Gen 0 Collections**, **# Gen 1 Collections**, and **# Gen 2 Collections** columns. 
The % Time in GC column contains the percentage of time that an application spent in garbage collection, relative to total processing time. Note, however, that this value can be very high without excessive garbage collection being the cause. For more information about how the % Time in GC value is calculated, see the post [Difference Between Perf Data Reported by Different Tools - 4](http://go.microsoft.com/fwlink/?LinkId=177863) on **Maoni's Weblog** on MSDN. If page faults occur, or if the application is preempted by higher-priority tasks on the computer during garbage collection, the % Time in GC counter values will reflect those additional delays.
91.826923
782
0.784921
fra_Latn
0.974038
0c59518c19a7574afdab5d7a92a0ac3603faf6d4
2,516
md
Markdown
README.md
eddyerburgh/mississippi
4070f017bf1d0175c54ab383f880ef815cc887c6
[ "MIT" ]
886
2017-01-17T15:28:02.000Z
2022-01-22T06:25:07.000Z
README.md
Nepenthe-fmn/avoriaz
4070f017bf1d0175c54ab383f880ef815cc887c6
[ "MIT" ]
146
2017-01-23T13:47:16.000Z
2022-03-02T04:32:14.000Z
README.md
Nepenthe-fmn/avoriaz
4070f017bf1d0175c54ab383f880ef815cc887c6
[ "MIT" ]
97
2017-01-23T14:59:35.000Z
2021-06-07T12:38:18.000Z
# avoriaz [![Build Status](https://travis-ci.org/eddyerburgh/avoriaz.svg?branch=master)](https://travis-ci.org/eddyerburgh/avoriaz) > a Vue.js testing utility library ## Deprecation This library will be deprecated once [vue-test-utils](https://github.com/vuejs/vue-test-utils) is released. ## Installation ``` npm install --save-dev avoriaz ``` ## Documentation [Visit the docs](https://eddyerburgh.gitbooks.io/avoriaz/content/) ## Examples - [Example using karma and mocha](https://github.com/eddyerburgh/avoriaz-karma-mocha-example) - [Example using karma and jasmine](https://github.com/eddyerburgh/avoriaz-karma-jasmine-example) - [Example using Jest](https://github.com/eddyerburgh/avoriaz-jest-example) - [Example using mocha-webpack](https://github.com/eddyerburgh/avoriaz-mocha-example) - [Example using tape](https://github.com/eddyerburgh/avoriaz-tape-example) - [Example using ava](https://github.com/eddyerburgh/avoriaz-ava-example) ##### Assert wrapper contains a child ```js import { mount } from 'avoriaz' import Foo from './Foo.vue' const wrapper = mount(Foo) expect(wrapper.contains('.bar')).to.equal(true) ``` ##### Shallow render components ```js import { shallow } from 'avoriaz' import Foo from './Foo.vue' import Bar from './Bar.vue' const wrapper = shallow(Foo) expect(wrapper.contains(Bar)).to.equal(true) ``` ##### Assert style is rendered ```js const button = wrapper.find('div > button .button-child')[0] expect(button.hasStyle('color', 'red')).to.equal(true) ``` ##### Assert method is called when DOM event is triggered ```js const clickHandler = sinon.stub() const wrapper = mount(Foo, { propsData: { clickHandler } }) wrapper.find('div .bar')[0].trigger('click') expect(clickHandler.called).to.equal(true) ``` ##### Assert wrapper contains text ```js const title = wrapper.find('h1.title')[0] expect(title.text()).to.equal('some text') ``` ##### Inject globals ```js const $route = { path: 'http://www.example-path.com' } const wrapper = mount(Foo, { globals: { $route 
} }) expect(wrapper.vm.$route.path).to.equal($route.path) ``` ##### Inject slots ```js const wrapper = mount(Foo, { slots: { default: Foo } }) ``` ##### Set data ```js wrapper.setData({ someData: 'some data' }) expect(wrapper.vm.someData).to.equal('some data') ``` ##### Update props ```js wrapper.setProps({ someProp: 'some prop', anotherProp: 'another prop' }) ``` For more examples, [see the docs](https://eddyerburgh.gitbooks.io/avoriaz/content/)
23.296296
131
0.692369
eng_Latn
0.555799
0c59936093cbb0ddf359ef0ff345320e55da1d05
2,579
md
Markdown
src/posts/2013-08-16-refactoring-directory-into-git-submodule.md
spikeheap/spikeheap.github.io
4757dc4fd4a46a5bfcc2347b775072eeeacab901
[ "CC-BY-4.0" ]
4
2015-03-07T17:02:03.000Z
2015-03-25T15:23:35.000Z
src/posts/2013-08-16-refactoring-directory-into-git-submodule.md
spikeheap/spikeheap.github.io
4757dc4fd4a46a5bfcc2347b775072eeeacab901
[ "CC-BY-4.0" ]
7
2015-05-21T11:45:55.000Z
2015-10-27T11:26:28.000Z
src/posts/2013-08-16-refactoring-directory-into-git-submodule.md
spikeheap/spikeheap.github.io
4757dc4fd4a46a5bfcc2347b775072eeeacab901
[ "CC-BY-4.0" ]
1
2015-03-07T17:02:05.000Z
2015-03-07T17:02:05.000Z
--- layout: post tags: ['post','technology','git','puppet','linux'] title: "Refactoring a directory into a git submodule" date: 2013-08-16 18:10:00+00:00 comments: true description: To release the Rsnapshot Puppet module I needed to extract the module directory from our entire Puppet configuration Git repository --- To release the Rsnapshot Puppet module I needed to extract the module directory from our entire Puppet configuration Git repository (we know it's wrong, but it's how we started and are moving away from it slowly). Fortunately Git has some great functionality (hint: it's submodules) to allow the directory to be pulled out into a new repository, keeping the commit history. The steps I wanted to achieve were: 1. Create a new repository containing only the Rsnapshot directory (and the commit history for that directory). 2. Remove the directory from the Puppet repository and replace it with a pointer to the Rsnapshot repository. [Git submodules](http://git-scm.com/book/en/Git-Tools-Submodules) solve this use-case. A submodule is treated as a separate repository, but resides within the parent repository's directory structure. The simplest way to filter the repository is to clone it, apply the filter to the clone and then push the filtered repository to your origin server (if you use one). The following example assumes you've created an empty repository ready for the submodule: ``` bash git clone $REPO_URL $SUBMODULE_NAME cd $SUBMODULE_NAME git filter-branch --subdirectory-filter "$PATH_TO_SUBMODULE" --prune-empty -- --all # Clean the repository git clean -xd -f # Update the remote (replace "origin" with your remote name) git remote rm origin git remote add origin $SUBMODULE_REPO_URL # Push the new submodule git push origin master ``` All that's left to do then is remove the existing directory and add the submodule. Even if you're paranoid, the removed directory's history is still in Git, so it's easy to roll back. 
``` bash git rm $PATH_TO_SUBMODULE git commit -m "Removing directory to replace with submodule" $PATH_TO_SUBMODULE git submodule add $SUBMODULE_REPO_URL $PATH_TO_SUBMODULE git add .gitmodules $PATH_TO_SUBMODULE git commit -m "Adding submodule X" git push ``` For the Puppet example the repository resides at /etc/puppet and I used the following variables: ``` bash SUBMODULE_NAME = puppet-rsnapshot PATH_TO_SUBMODULE = modules/rsnapshot ``` # References 1. http://git-scm.com/book/en/Git-Tools-Submodules 2. http://stackoverflow.com/questions/12514197/convert-a-git-folder-to-a-submodule-retrospectively
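One quoting detail worth calling out when substituting these variables: single quotes suppress shell variable expansion, so a path wrapped in single quotes reaches `git filter-branch` as the literal string `$PATH_TO_SUBMODULE` rather than the directory name. A quick sketch (using the hypothetical values from this post):

```shell
PATH_TO_SUBMODULE=modules/rsnapshot

echo '$PATH_TO_SUBMODULE'   # single quotes: prints the literal text, no expansion
echo "$PATH_TO_SUBMODULE"   # double quotes: prints modules/rsnapshot
```

Use double quotes (or no quotes, if the path has no spaces) whenever the value needs to be substituted into the command.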
43.711864
374
0.785576
eng_Latn
0.980746
0c59df011e002308862c1fa95ce386b85360962d
129
md
Markdown
content/v1/glossary.md
drndos/specification
8f47ee179b126c6d5eb2f7bda4018368c501b60b
[ "MIT" ]
12
2016-02-01T15:00:50.000Z
2019-11-14T21:56:02.000Z
content/v1/glossary.md
drndos/specification
8f47ee179b126c6d5eb2f7bda4018368c501b60b
[ "MIT" ]
66
2016-01-22T12:32:08.000Z
2021-03-09T01:10:09.000Z
content/v1/glossary.md
drndos/specification
8f47ee179b126c6d5eb2f7bda4018368c501b60b
[ "MIT" ]
4
2017-12-29T15:30:34.000Z
2021-04-11T08:25:56.000Z
--- id: v1-glossary title: Glossary url: /v1/glossary version: v1 --- This section defines all concepts part of the data model.
14.333333
57
0.728682
eng_Latn
0.952136
0c5a4baa90e98d029b43a4cb1b924c98b14dcc95
9,366
md
Markdown
news/_posts/core-weekly/2019-05-27-coreweekly-week-20-2019.md
prestascott/prestashop.github.io
5ada918dc801b89c349b1c6bc40a731325363482
[ "CC0-1.0" ]
40
2015-03-20T22:57:22.000Z
2022-03-13T21:00:56.000Z
news/_posts/core-weekly/2019-05-27-coreweekly-week-20-2019.md
prestascott/prestashop.github.io
5ada918dc801b89c349b1c6bc40a731325363482
[ "CC0-1.0" ]
455
2015-04-04T19:50:25.000Z
2022-03-31T10:02:11.000Z
news/_posts/core-weekly/2019-05-27-coreweekly-week-20-2019.md
prestascott/prestashop.github.io
5ada918dc801b89c349b1c6bc40a731325363482
[ "CC0-1.0" ]
50
2015-04-04T13:17:59.000Z
2021-09-21T17:33:42.000Z
--- layout: post title: "PrestaShop Core Weekly - Week 20 of 2019" subtitle: "An inside look at the PrestaShop codebase" date: 2019-05-27 16:30:00 authors: [ AntoineThomas ] icon: icon-calendar tags: - core-weekly --- This edition of the Core Weekly report highlights changes in PrestaShop's core codebase from Monday 13th to Sunday 19th of May 2019. ![Core Weekly banner](/assets/images/2018/12/banner-core-weekly.jpg) ## General messages Dear Developers, [PSD Paris 2019](https://www.prestashop.com/fr/evenements/prestashop-day-paris) is just next week, and there will be [a dedicated space for developers](http://build.prestashop.com/news/psd-2019-developer-space/) :-) The whole PrestaShop team is eager to meet you there in real life. See you next week! ## A quick update about PrestaShop's GitHub issues and pull requests: - [58 new issues](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Aissue+created%3A2019-05-13..2019-05-19) have been created in the project repositories; - [76 issues have been closed](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Aissue+closed%3A2019-05-13..2019-05-19), including [18 fixed issues](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Aissue+label%3Afixed+closed%3A2019-05-13..2019-05-19) on the core; - [55 pull requests have been opened](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Apr+created%3A2019-05-13..2019-05-19) in the project repositories; - [42 pull requests have been closed](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Apr+closed%3A2019-05-13..2019-05-19), including [37 merged pull requests](https://github.com/search?q=org%3APrestaShop+is%3Apublic++-repo%3Aprestashop%2Fprestashop.github.io++is%3Apr+merged%3A2019-05-13..2019-05-19). 
## Code changes in the 'develop' branch ### Core * [#13736](https://github.com/PrestaShop/PrestaShop/pull/13736): Replace all calls to $this->l() in controllers, by [@eternoendless](https://github.com/eternoendless) * [#13789](https://github.com/PrestaShop/PrestaShop/pull/13789): Merge 1.7.6.x to develop - 15/05/2019, by [@matks](https://github.com/matks) * [#13803](https://github.com/PrestaShop/PrestaShop/pull/13803): Make the movement of the Grid columns really easy, by [@mickaelandrieu](https://github.com/mickaelandrieu) * [#13818](https://github.com/PrestaShop/PrestaShop/pull/13818): Remove redundant condition in cart.php. Thank you [@davidglezz](https://github.com/davidglezz) ### Back office * [#13593](https://github.com/PrestaShop/PrestaShop/pull/13593): Automated hooks discovery and updating hooks list in xml and sql files. Thank you [@tomas862](https://github.com/tomas862) * [#13798](https://github.com/PrestaShop/PrestaShop/pull/13798): Change input type for imap password, by [@marionf](https://github.com/marionf) ### Tests * [#13811](https://github.com/PrestaShop/PrestaShop/pull/13811): Download in headless mode in tests, by [@boubkerbribri](https://github.com/boubkerbribri) * [#13824](https://github.com/PrestaShop/PrestaShop/pull/13824): Update tests and package-lock, by [@PierreRambaud](https://github.com/PierreRambaud) * [#13827](https://github.com/PrestaShop/PrestaShop/pull/13827): Revert "Only run deploy if the triggered commit is recent", by [@PierreRambaud](https://github.com/PierreRambaud) ## Code changes in the "1.7.6.x" branch (for v1.7.6.0) ### Core * [#13665](https://github.com/PrestaShop/PrestaShop/pull/13665): Final retail price is not displayed due to missing CLDR files, by [@PierreRambaud](https://github.com/PierreRambaud) * [#13766](https://github.com/PrestaShop/PrestaShop/pull/13766): Merge 1.7.5.2 into 1.7.6.x, by [@eternoendless](https://github.com/eternoendless) * [#13768](https://github.com/PrestaShop/PrestaShop/pull/13768): Merge 
beta release into 1.7.6.x, by [@eternoendless](https://github.com/eternoendless) * [#13778](https://github.com/PrestaShop/PrestaShop/pull/13778): Add new hooks for Symfony migrated pages in XML install file and SQL upgrade, by [@matks](https://github.com/matks) * [#13808](https://github.com/PrestaShop/PrestaShop/pull/13808): Update Symfony to latest patch version to fix ICU version problem, by [@PierreRambaud](https://github.com/PierreRambaud) ### Back office * [#13165](https://github.com/PrestaShop/PrestaShop/pull/13165): Fix multilanguage fields configuration. Thank you [@sarjon](https://github.com/sarjon) * [#13722](https://github.com/PrestaShop/PrestaShop/pull/13722): Filter themes without override in email generation form, by [@jolelievre](https://github.com/jolelievre) * [#13763](https://github.com/PrestaShop/PrestaShop/pull/13763): Fix help sidebar display in Customers page. Thank you [@sarjon](https://github.com/sarjon) * [#13764](https://github.com/PrestaShop/PrestaShop/pull/13764): Fixes customer view url in notifications bar. Thank you [@sarjon](https://github.com/sarjon) * [#13765](https://github.com/PrestaShop/PrestaShop/pull/13765): Fix sql manager bulk actions. Thank you [@sarjon](https://github.com/sarjon) * [#13768](https://github.com/PrestaShop/PrestaShop/pull/13768): Merge beta release into 1.7.6.x, by [@eternoendless](https://github.com/eternoendless) * [#13777](https://github.com/PrestaShop/PrestaShop/pull/13777): Incorrect translation arguments passed in cms page form. Thank you [@tomas862](https://github.com/tomas862) * [#13779](https://github.com/PrestaShop/PrestaShop/pull/13779): Allow to overwrite theme mails if they have modules OR mail templates, by [@jolelievre](https://github.com/jolelievre) * [#13821](https://github.com/PrestaShop/PrestaShop/pull/13821): Fix manufacturers, taxes lists id filtering. 
Thank you [@zuk3975](https://github.com/zuk3975) ### Front office * [#12891](https://github.com/PrestaShop/PrestaShop/pull/12891): Fix bug on block social in footer. Thank you [@YeLnatSs](https://github.com/YeLnatSs) * [#13780](https://github.com/PrestaShop/PrestaShop/pull/13780): fix displayed discount on tax excluded cart display, by [@tomlev](https://github.com/tomlev) ### Tests * [#13726](https://github.com/PrestaShop/PrestaShop/pull/13726): Moving tests High to full or to broken tests, by [@boubkerbribri](https://github.com/boubkerbribri) * [#13772](https://github.com/PrestaShop/PrestaShop/pull/13772): Correct usage of fixtures on behat tests for taxes, by [@tomlev](https://github.com/tomlev) * [#13776](https://github.com/PrestaShop/PrestaShop/pull/13776): Improve tests orders and category, by [@boubkerbribri](https://github.com/boubkerbribri) * [#13831](https://github.com/PrestaShop/PrestaShop/pull/13831): Force report name, by [@PierreRambaud](https://github.com/PierreRambaud) ## Code changes in modules, themes & tools ### PrestaShop Coding Standards * [#1](https://github.com/PrestaShop/php-coding-standards/pull/1): Integrate php cs fixer, by [@PierreRambaud](https://github.com/PierreRambaud) * [#2](https://github.com/PrestaShop/php-coding-standards/pull/2): Add license in composer.json, by [@PierreRambaud](https://github.com/PierreRambaud) ### Docker Internal images * [#25](https://github.com/PrestaShop/docker-internal-images/pull/25): Call localhost to trigger the cache generation, by [@Quetzacoalt91](https://github.com/Quetzacoalt91) ### Live demo devices * [#5](https://github.com/PrestaShop/live-demo-devices/pull/5): Request periodically the shop before setting its URL in iFrame, by [@Quetzacoalt91](https://github.com/Quetzacoalt91) ### Faceted search * [#57](https://github.com/PrestaShop/ps_facetedsearch/pull/57): Update cldr javascript library, by [@PierreRambaud](https://github.com/PierreRambaud) * 
[#58](https://github.com/PrestaShop/ps_facetedsearch/pull/58): Add phpunit tests and quality, by [@PierreRambaud](https://github.com/PierreRambaud) ## Changes in Documentation * [#248](https://github.com/PrestaShop/docs/pull/248): Add deprecation notice. Thank you [@dennispw](https://github.com/dennispw) * [#265](https://github.com/PrestaShop/docs/pull/265): Adds options form, identifiable object forms and grid hooks docs. Thank you [@tomas862](https://github.com/tomas862) * [#267](https://github.com/PrestaShop/docs/pull/267): Global Smarty vars updated. Thank you [@d-roduit](https://github.com/d-roduit) <hr /> Thank you to the contributors whose pull requests were merged since the last Core Weekly Report: @d-roduit, @davidglezz, @dennispw, @sarjon, @tomas862, @YeLnatSs, @zuk3975! Thank you to the contributors whose PRs haven't been merged yet! And of course, a big thank you to all those who contribute with issues and comments [on GitHub](https://github.com/PrestaShop/PrestaShop)! If you want to contribute to PrestaShop with code, please read these pages first: * [Contributing code to PrestaShop](https://devdocs.prestashop.com/1.7/contribute/contribution-guidelines/) * [Coding standards](https://devdocs.prestashop.com/1.7/development/coding-standards/) ...and if you do not know how to fix an issue but wish to report it, please read this: [How to use GitHub to report an issue](https://devdocs.prestashop.com/1.7/contribute/contribute-reporting-issues/). Thank you! Happy contributin' everyone!
66.425532
376
0.75678
yue_Hant
0.296277
0c5b783360b51cba1f60662c8d21f45c9a590389
1,339
md
Markdown
README.md
Steakeye/rollup-plugin-favicons
38deececb73327f7187703cc6d14f1079e7e30be
[ "MIT" ]
null
null
null
README.md
Steakeye/rollup-plugin-favicons
38deececb73327f7187703cc6d14f1079e7e30be
[ "MIT" ]
null
null
null
README.md
Steakeye/rollup-plugin-favicons
38deececb73327f7187703cc6d14f1079e7e30be
[ "MIT" ]
null
null
null
# rollup-plugin-favicons [Rollup](https://github.com/rollup/rollup) plugin for generating favicons and their associated files. It uses the [favicons](https://github.com/itgalaxy/favicons) generator under the hood. This plugin was inspired by the [favicons-webpack-plugin](https://github.com/jantimon/favicons-webpack-plugin). The plugin can be used alongside the [rollup-plugin-html2](https://github.com/mentaljam/rollup-plugin-html2). In this case `rollup-plugin-favicons` should be placed before `rollup-plugin-html2` in the plugin list. ## Install ```sh npm i -D rollup-plugin-favicons ``` ## Usage ```js // rollup.config.js import favicons from 'rollup-plugin-favicons' import html2 from 'rollup-plugin-html2' export default { input: 'index.js', output: { dir: 'dist', format: 'es', }, plugins: [ favicons({ source: 'icon.svg', configuration: { appName: process.env.npm_package_displayName, }, }), html2({ template: 'index.html', }), ], } ``` ## Options ### `source: string` A path to a source image which would be used to generate icons. ### `configuration: object` A configuration for the [favicons](https://github.com/itgalaxy/favicons). For details, please follow the link. ## License [MIT](LICENSE) © [Petr Tsymbarovich](mailto:petr@tsymbarovich.ru)
21.596774
109
0.696042
eng_Latn
0.520241
0c5b79d824fb0b088760ed33921cdaa87193ae43
3,854
md
Markdown
wdk-ddi-src/content/wdfdmaenabler/nf-wdfdmaenabler-wdfdmaenablersetmaximumscattergatherelements.md
jesweare/windows-driver-docs-ddi
a6e73cac25d8328115822ec266dabdf87d395bc7
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/wdfdmaenabler/nf-wdfdmaenabler-wdfdmaenablersetmaximumscattergatherelements.md
jesweare/windows-driver-docs-ddi
a6e73cac25d8328115822ec266dabdf87d395bc7
[ "CC-BY-4.0", "MIT" ]
null
null
null
wdk-ddi-src/content/wdfdmaenabler/nf-wdfdmaenabler-wdfdmaenablersetmaximumscattergatherelements.md
jesweare/windows-driver-docs-ddi
a6e73cac25d8328115822ec266dabdf87d395bc7
[ "CC-BY-4.0", "MIT" ]
1
2021-12-08T21:34:31.000Z
2021-12-08T21:34:31.000Z
--- UID: NF:wdfdmaenabler.WdfDmaEnablerSetMaximumScatterGatherElements title: WdfDmaEnablerSetMaximumScatterGatherElements function (wdfdmaenabler.h) description: The WdfDmaEnablerSetMaximumScatterGatherElements method sets the maximum number of scatter/gather elements that a device supports, for a specified DMA enabler object. old-location: wdf\wdfdmaenablersetmaximumscattergatherelements.htm tech.root: wdf ms.assetid: fdfcb8bc-bc42-4c34-ae19-b40401bea41e ms.date: 02/26/2018 keywords: ["WdfDmaEnablerSetMaximumScatterGatherElements function"] ms.keywords: DFDmaObjectRef_d9f2c46d-5981-4997-96b6-5a9db0dbfd8d.xml, WdfDmaEnablerSetMaximumScatterGatherElements, WdfDmaEnablerSetMaximumScatterGatherElements method, kmdf.wdfdmaenablersetmaximumscattergatherelements, wdf.wdfdmaenablersetmaximumscattergatherelements, wdfdmaenabler/WdfDmaEnablerSetMaximumScatterGatherElements req.header: wdfdmaenabler.h req.include-header: Wdf.h req.target-type: Universal req.target-min-winverclnt: req.target-min-winversvr: req.kmdf-ver: 1.0 req.umdf-ver: req.ddi-compliance: DriverCreate, KmdfIrql, KmdfIrql2 req.unicode-ansi: req.idl: req.max-support: req.namespace: req.assembly: req.type-library: req.lib: Wdf01000.sys (see Framework Library Versioning.) req.dll: req.irql: PASSIVE_LEVEL targetos: Windows req.typenames: f1_keywords: - WdfDmaEnablerSetMaximumScatterGatherElements - wdfdmaenabler/WdfDmaEnablerSetMaximumScatterGatherElements topic_type: - APIRef - kbSyntax api_type: - LibDef api_location: - Wdf01000.sys - Wdf01000.sys.dll api_name: - WdfDmaEnablerSetMaximumScatterGatherElements --- # WdfDmaEnablerSetMaximumScatterGatherElements function ## -description <p class="CCE_Message">[Applies to KMDF only]</p> The <b>WdfDmaEnablerSetMaximumScatterGatherElements</b> method sets the maximum number of scatter/gather elements that a device supports, for a specified DMA enabler object. 
## -parameters ### -param DmaEnabler [in] A handle to a DMA enabler object that the driver obtained from a previous call to <a href="/windows-hardware/drivers/ddi/wdfdmaenabler/nf-wdfdmaenabler-wdfdmaenablercreate">WdfDmaEnablerCreate</a>. ### -param MaximumFragments [in] The maximum number of scatter/gather elements that the driver and device can support. ## -remarks A bug check occurs if the driver supplies an invalid object handle. If your driver calls <b>WdfDmaEnablerSetMaximumScatterGatherElements</b>, it must do so within the <a href="/windows-hardware/drivers/ddi/wdfdriver/nc-wdfdriver-evt_wdf_driver_device_add">EvtDriverDeviceAdd</a> or <a href="/windows-hardware/drivers/ddi/wdfdevice/nc-wdfdevice-evt_wdf_device_prepare_hardware">EvtDevicePrepareHardware</a> callback function. If your driver does not call <b>WdfDmaEnablerSetMaximumScatterGatherElements</b>, the framework uses a default value of WDF_DMA_ENABLER_UNLIMITED_FRAGMENTS, which means that there is no limit to the number of scatter/gather elements. For more information about this method, see <a href="/windows-hardware/drivers/wdf/enabling-dma-transactions">Enabling DMA Transactions</a>. #### Examples The following code example sets the maximum number of scatter/gather elements for a specified DMA enabler object. ```cpp WdfDmaEnablerSetMaximumScatterGatherElements( DmaEnabler, NIC_MAX_PHYS_BUF_COUNT ); ``` ## -see-also <a href="/windows-hardware/drivers/ddi/wdfdmaenabler/nf-wdfdmaenabler-wdfdmaenablercreate">WdfDmaEnablerCreate</a> <a href="/windows-hardware/drivers/ddi/wdfdmaenabler/nf-wdfdmaenabler-wdfdmaenablergetmaximumscattergatherelements">WdfDmaEnablerGetMaximumScatterGatherElements</a>
40.145833
357
0.777634
eng_Latn
0.374475
0c5b7fa4c1dac1c94510839dee66c579b1c93bca
1,797
md
Markdown
desktop-src/Controls/lvm-sortgroups.md
crushonme/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/Controls/lvm-sortgroups.md
crushonme/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
null
null
null
desktop-src/Controls/lvm-sortgroups.md
crushonme/win32
f5099e1e3e455bb162771d80b0ba762ee5c974ec
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: LVM_SORTGROUPS message description: Uses an application-defined comparison function to sort groups by ID within a list-view control. ms.assetid: 553e96d6-a982-4482-8fba-ef11a74fb82e keywords: - LVM_SORTGROUPS message Windows Controls topic_type: - apiref api_name: - LVM_SORTGROUPS api_location: - Commctrl.h api_type: - HeaderDef ms.topic: reference ms.date: 05/31/2018 --- # LVM\_SORTGROUPS message Uses an application-defined comparison function to sort groups by ID within a list-view control. ## Parameters <dl> <dt> *wParam* </dt> <dd>Pointer to an application-defined comparison function, <a href="https://docs.microsoft.com/windows/desktop/api/commctrl/nc-commctrl-pfnlvgroupcompare">LVGroupCompare</a>.</dd> <dt> *lParam* </dt> <dd>Void pointer to the application-defined information.</dd> </dl> ## Return value Returns 1 if successful, or 0 otherwise. ## Remarks > [!Note] > To use this message, you must provide a manifest specifying Comctl32.dll version 6.0. For more information on manifests, see [Enabling Visual Styles](cookbook-overview.md). ## Requirements | | | |-------------------------------------|---------------------------------------------------------------------------------------| | Minimum supported client<br/> | Windows Vista \[desktop apps only\]<br/> | | Minimum supported server<br/> | Windows Server 2003 \[desktop apps only\]<br/> | | Header<br/> | <dl> <dt>Commctrl.h</dt> </dl> | ## See also <dl> <dt> [**LVGroupCompare**](https://msdn.microsoft.com/en-us/library/Bb775142(v=VS.85).aspx) </dt> </dl>
25.309859
190
0.58709
eng_Latn
0.481377
0c5c337f9f2b898167ef2022efb8e474112abb78
874
md
Markdown
_publications/2019-12-19-sewanha.md
janicejihyeon/janicejihyeon.github.io
341ee721bf7d6cf9ace853b4e9ab29b34d3bdee4
[ "MIT" ]
null
null
null
_publications/2019-12-19-sewanha.md
janicejihyeon/janicejihyeon.github.io
341ee721bf7d6cf9ace853b4e9ab29b34d3bdee4
[ "MIT" ]
null
null
null
_publications/2019-12-19-sewanha.md
janicejihyeon/janicejihyeon.github.io
341ee721bf7d6cf9ace853b4e9ab29b34d3bdee4
[ "MIT" ]
null
null
null
--- title: "Cryptanalysis of Kumar et al.’s Authentication Protocol for Wireless Sensor Networks" collection: publications permalink: /publication/2019-12-19-sewanha date: 2019-12-19 venue: 'Information Science and Applications' paperurl: 'http://janicejihyeon.github.io/files/SewanHa.pdf' citation: 'Sewan Ha, <b>Jihyeon Ryu</b>, Hyoungshick Kim, Dongho Won, Youngsook Lee. (2019). &quot;Cryptanalysis of Kumar et al.’s Authentication Protocol for Wireless Sensor Networks.&quot; <i>Information Science and Applications</i>. 329 - 340.' --- [Download paper here](http://janicejihyeon.github.io/files/SewanHa.pdf) Recommended citation: Sewan Ha, <b>Jihyeon Ryu</b>, Hyoungshick Kim, Dongho Won, Youngsook Lee. (2019). &quot;Cryptanalysis of Kumar et al.’s Authentication Protocol for Wireless Sensor Networks.&quot; <i>Information Science and Applications</i>. 329 - 340.
62.428571
257
0.771167
kor_Hang
0.327046
0c5c69ea130545e9de445e4fcab5bb3da8480851
12,974
md
Markdown
docs/vs-2015/modeling/accessing-models-from-text-templates.md
jcarmon4/visualstudio-docs.es-es
2f133c9f0a90eb92429dcca0573a0b3f458cdcf3
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/modeling/accessing-models-from-text-templates.md
jcarmon4/visualstudio-docs.es-es
2f133c9f0a90eb92429dcca0573a0b3f458cdcf3
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/vs-2015/modeling/accessing-models-from-text-templates.md
jcarmon4/visualstudio-docs.es-es
2f133c9f0a90eb92429dcca0573a0b3f458cdcf3
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Accessing Models from Text Templates | Microsoft Docs
ms.date: 11/15/2016
ms.prod: visual-studio-dev14
ms.technology: vs-ide-modeling
ms.topic: conceptual
helpviewer_keywords:
- text templates, accessing models
ms.assetid: cf65395a-0ca3-4826-89c7-b1869562685c
caps.latest.revision: 35
author: gewarren
ms.author: gewarren
manager: jillfra
ms.openlocfilehash: e9eba4a919f159462080688c64ed765d3c1fec86
ms.sourcegitcommit: 2da366ba9ad124366f6502927ecc720985fc2f9e
ms.translationtype: MT
ms.contentlocale: es-ES
ms.lasthandoff: 08/09/2019
ms.locfileid: "68871982"
---
# <a name="accessing-models-from-text-templates"></a>Accessing Models from Text Templates
[!INCLUDE[vs2017banner](../includes/vs2017banner.md)]

By using text templates, you can create report files, source code files, and other text files that are based on domain-specific language models. For basic information about text templates, see [Code Generation and T4 Text Templates](../modeling/code-generation-and-t4-text-templates.md). The text templates will work in experimental mode when you debug your DSL, and will also work on a computer on which you have deployed the DSL.

> [!NOTE]
> When you create a DSL solution, sample **\*.tt** text template files are generated in the Debugging project. When you rename the domain classes, these templates will no longer work. Nevertheless, they include the basic directives that you need, and provide examples that you can update to match your DSL.

To access a model from a text template:

- Set the inherits property of the template directive to [ModelingTextTransformation](/previous-versions/bb893209(v=vs.140)). This provides access to the Store.

- Specify directive processors for the DSL that you want to access. This loads the assemblies of the DSL, so that you can use its domain classes, properties, and relationships in the code of your text template. It also loads the model file that you specify.

A `.tt` file similar to the following example is created in the Debugging project when you create a new [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] solution from the Minimal Language DSL template.

```
<#@ template inherits="Microsoft.VisualStudio.TextTemplating.VSHost.ModelingTextTransformation" #>
<#@ output extension=".txt" #>
<#@ MyLanguage processor="MyLanguageDirectiveProcessor" requires="fileName='Sample.myDsl1'" #>
This text will be output directly.

This is the name of the model:
<#= this.ModelRoot.Name #>

Here is a list of elements in the model:
<# // When you change the DSL Definition, some of the code below may not work.
   foreach (ExampleElement element in this.ExampleModel.Elements)
   { #>
    <#= element.Name #>
<# } #>
```

Notice the following points about this template:

- The template can use the domain classes, properties, and relationships that are defined in the DSL Definition.

- The template loads the model file that is specified in the `requires` property.

- A property in `this` contains the root element. From there, the code can navigate to other elements of the model. The name of the property is usually the same as the root domain class of the DSL. In this example, it is `this.ExampleModel`.

- Although the language in which the code fragments are written is C#, you can generate text of any kind. Alternatively, you can write the code in [!INCLUDE[vbprvb](../includes/vbprvb-md.md)] by adding the attribute `language="VB"` to the `template` directive.

- To debug the template, add `debug="true"` to the `template` directive. The template will open in another instance of [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] if an exception occurs. If you want the debugger to break at a specific point in the code, insert the statement `System.Diagnostics.Debugger.Break();`. For more information, see [Debugging a T4 Text Template](../modeling/debugging-a-t4-text-template.md).

## <a name="about-the-dsl-directive-processor"></a>About the DSL directive processor
The template can use the domain classes that you defined in your DSL Definition. This is achieved by a directive that usually appears near the beginning of the template. In the previous example, it is the following.

```
<#@ MyLanguage processor="MyLanguageDirectiveProcessor" requires="fileName='Sample.myDsl1'" #>
```

The name of the directive (`MyLanguage` in this example) is derived from the name of your DSL. It invokes a *directive processor* that is generated as part of the DSL. You can find its source code in **Dsl\GeneratedCode\DirectiveProcessor.cs**.

The DSL directive processor performs two main tasks:

- It effectively inserts assembly and import directives into the template that reference your DSL. This lets you use the domain classes in your template code.

- It loads the file that is specified in the `requires` parameter, and sets a property in `this` that refers to the root element of the loaded model.

## <a name="validating-the-model-before-running-the-template"></a>Validating the model before running the template
You can have the model validated before the template runs.

```
<#@ MyLanguage processor="MyLanguageDirectiveProcessor" requires="fileName='Sample.myDsl1';validation='open|load|save|menu'" #>
```

Notice that:

1. The `filename` and `validation` parameters are separated by ";" and there must be no other separators or spaces.

2. The list of validation categories determines which validation methods will be executed. Multiple categories must be separated by "|" and there must be no other separators or spaces.

If an error is found, it will be reported in the errors window, and the output file will contain an error message.

## <a name="Multiple"></a>Accessing multiple models from a text template

> [!NOTE]
> This method lets you read several models into the same template, but it does not support ModelBus references. To read models that are interlinked by ModelBus references, see [Using Visual Studio ModelBus in a Text Template](../modeling/using-visual-studio-modelbus-in-a-text-template.md).

If you want to access more than one model from the same text template, you must call the generated directive processor once for each model. You must specify the file name of each model in the `requires` parameter, and the name that you want to use for the root domain class in the `provides` parameter. You must specify different values for the `provides` parameter in each of the directive calls. For example, suppose you have three model files named Library.xyz, School.xyz, and Work.xyz. To access them from the same text template, you must write three directive calls similar to the following.

```
<#@ ExampleModel processor="<YourLanguageName>DirectiveProcessor"
      requires="fileName='Library.xyz'"
      provides="ExampleModel=LibraryModel" #>
<#@ ExampleModel processor="<YourLanguageName>DirectiveProcessor"
      requires="fileName='School.xyz'"
      provides="ExampleModel=SchoolModel" #>
<#@ ExampleModel processor="<YourLanguageName>DirectiveProcessor"
      requires="fileName='Work.xyz'"
      provides="ExampleModel=WorkModel" #>
```

> [!NOTE]
> This example code is for a language that is based on the Minimal Language solution template.

To access the models in the text template, you can now write code similar to the following example.

```csharp
<#
foreach (ExampleElement element in this.LibraryModel.Elements)
...
foreach (ExampleElement element in this.SchoolModel.Elements)
...
foreach (ExampleElement element in this.WorkModel.Elements)
...
#>
```

```vb
<#
For Each element As ExampleElement In Me.LibraryModel.Elements
...
For Each element As ExampleElement In Me.SchoolModel.Elements
...
For Each element As ExampleElement In Me.WorkModel.Elements
...
#>
```

## <a name="loading-models-dynamically"></a>Loading models dynamically
If you want to determine at run time which models to load, you can load a model file dynamically in your program code, instead of using the DSL-specific directive.

However, one of the functions of the DSL-specific directive is to import the namespace of the DSL, so that the template code can use the domain classes defined in that DSL. Because you are not using the directive, you must add **\<assembly>** and **\<import>** directives for all the models that might be loaded. This is easy if the different models that might be loaded are all instances of the same DSL.

To load the file, the most effective method is to use [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] ModelBus. In a typical scenario, the text template uses a DSL-specific directive to load the first model in the usual manner. That model contains ModelBus references to another model. You can use ModelBus to open the referenced model and to access a particular element. For more information, see [Using Visual Studio ModelBus in a Text Template](../modeling/using-visual-studio-modelbus-in-a-text-template.md).

In a less common scenario, you might want to open a model file for which you have only a file name, and which might not be in the current [!INCLUDE[vsprvs](../includes/vsprvs-md.md)] project. In this case, you can open the file by using the technique described in [How to: Open a Model from File in Program Code](../modeling/how-to-open-a-model-from-file-in-program-code.md).

## <a name="generating-multiple-files-from-a-template"></a>Generating multiple files from a template
If you want to generate several files (for example, a separate file for each element in a model), there are several possible approaches. By default, only one file is generated from each template file.

### <a name="splitting-a-long-file"></a>Splitting a long file
In this method, you use one template to generate a single file, separated by a delimiter, and then split that file into its parts. There are two templates: one that generates the single file, and one that splits it.

**LoopTemplate.t4** generates the single long file. Note that its file extension is ".t4", because it should not be processed directly when you click **Transform All Templates**. This template takes a parameter that specifies the delimiter string separating the segments:

```
<#@ template inherits="Microsoft.VisualStudio.TextTemplating.VSHost.ModelingTextTransformation" #>
<#@ parameter name="delimiter" type="System.String" #>
<#@ output extension=".txt" #>
<#@ MyDSL processor="MyDSLDirectiveProcessor" requires="fileName='SampleModel.mydsl1';validation='open|load|save|menu'" #>
<#
  // Create a file segment for each element:
  foreach (ExampleElement element in this.ExampleModel.Elements)
  {
    // First item is the delimiter:
#>
<#= string.Format(delimiter, element.Id) #>
 Element: <#= element.Name #>
<#
    // Here you generate more content derived from the element.
  }
#>
```

`LoopSplitter.tt` invokes `LoopTemplate.t4`, and then splits the resulting file into its segments. Note that this template does not have to be a modeling template, because it does not read the model.
``` <#@ template hostspecific="true" language="C#" #> <#@ output extension=".txt" #> <#@ import namespace="Microsoft.VisualStudio.TextTemplating" #> <#@ import namespace="System.Runtime.Remoting.Messaging" #> <#@ import namespace="System.IO" #> <# // Get the local path: string itemTemplatePath = this.Host.ResolvePath("LoopTemplate.t4"); string dir = Path.GetDirectoryName(itemTemplatePath); // Get the template for generating each file: string loopTemplate = File.ReadAllText(itemTemplatePath); Engine engine = new Engine(); // Pass parameter to new template: string delimiterGuid = Guid.NewGuid().ToString(); string delimiter = "::::" + delimiterGuid + ":::"; CallContext.LogicalSetData("delimiter", delimiter + "{0}:::"); string joinedFiles = engine.ProcessTemplate(loopTemplate, this.Host); string [] separateFiles = joinedFiles.Split(new string [] {delimiter}, StringSplitOptions.None); foreach (string nameAndFile in separateFiles) { if (string.IsNullOrWhiteSpace(nameAndFile)) continue; string[] parts = nameAndFile.Split(new string[]{":::"}, 2, StringSplitOptions.None); if (parts.Length < 2) continue; #> Generate: [<#= dir #>] [<#= parts[0] #>] <# // Generate a file from this item: File.WriteAllText(Path.Combine(dir, parts[0] + ".txt"), parts[1]); } #> ```
58.441441
676
0.768075
spa_Latn
0.96788
0c5cccc0c75b55cfe467595fef065e3ac5243910
424
md
Markdown
README.md
JingMatrix/yoke-android
9e336b12d6607ec43bc062f2457c72bf445482bc
[ "MIT" ]
1
2022-01-13T15:07:52.000Z
2022-01-13T15:07:52.000Z
README.md
JingMatrix/yoke-android
9e336b12d6607ec43bc062f2457c72bf445482bc
[ "MIT" ]
null
null
null
README.md
JingMatrix/yoke-android
9e336b12d6607ec43bc062f2457c72bf445482bc
[ "MIT" ]
null
null
null
# Yoke (Android app) ## How to build See the official instructions at https://developer.android.com/studio/build/building-cmdline. For example, on Linux: ``` ./gradlew assembleDebug ``` [<img src="https://f-droid.org/badge/get-it-on.png" alt="Get it on F-Droid" height="80">](https://f-droid.org/packages/com.simonramstedt.yoke/) ![Flightgear](media/flightgear.gif) See https://github.com/rmst/yoke for more.
21.2
72
0.700472
yue_Hant
0.251844
0c5dae7a2d07f0837d37b000466f493c7e66d948
4,445
md
Markdown
articles/hdinsight/hdinsight-apache-spark-job-server.md
SunnyDeng/azure-content-dede
edb0ac8eec176b64971ec219274a4a922dd00fec
[ "CC-BY-3.0" ]
2
2020-08-29T21:10:59.000Z
2021-07-25T10:13:02.000Z
articles/hdinsight/hdinsight-apache-spark-job-server.md
SunnyDeng/azure-content-dede
edb0ac8eec176b64971ec219274a4a922dd00fec
[ "CC-BY-3.0" ]
null
null
null
articles/hdinsight/hdinsight-apache-spark-job-server.md
SunnyDeng/azure-content-dede
edb0ac8eec176b64971ec219274a4a922dd00fec
[ "CC-BY-3.0" ]
null
null
null
<properties pageTitle="Apache Spark job server on HDInsight | Microsoft Azure" description="Learn how to use the Spark job server to remotely submit and manage jobs on a Spark cluster." services="hdinsight" documentationCenter="" authors="nitinme" manager="paulettm" editor="cgronlun" tags="azure-portal"/> <tags ms.service="hdinsight" ms.workload="big-data" ms.tgt_pltfrm="na" ms.devlang="na" ms.topic="article" ms.date="07/10/2015" ms.author="nitinme"/>

# Spark job server on Azure HDInsight clusters

The Apache Spark cluster on Azure HDInsight packages the Spark job server as part of the cluster deployment. The Spark job server provides REST APIs for creating a Spark context, submitting a Spark application to the context, checking job status, removing a context, and so on. This article provides some examples of using curl to perform common tasks on a Spark cluster that uses the job server.

>[AZURE.NOTE]For detailed documentation on the Spark job server, see [https://github.com/spark-jobserver/spark-jobserver](https://github.com/spark-jobserver/spark-jobserver).

## <a name="uploadjar"></a>Upload a JAR file to a Spark cluster

    curl.exe -k -u "<hdinsight user>:<user password>" --data-binary @<location of jar on the computer> https://<cluster name>.azurehdinsight.net/sparkjobserver/jars/<application name>

For example:

    curl.exe -k -u "myuser:myPass@word1" --data-binary @C:\mylocation\eventhubs-examples\target\spark-streaming-eventhubs-example-0.1.0-jar-with-dependencies.jar https://mysparkcluster.azurehdinsight.net/sparkjobserver/jars/streamingjar

##<a name="createcontext"></a>Create a new persistent context on the job server

    curl.exe -k -u "<hdinsight user>:<user password>" -d "" "https://<cluster name>.azurehdinsight.net/sparkjobserver/contexts/<context name>?num-cpu-cores=<value>&memory-per-node=<value>"

For example:

    curl.exe -k -u "myuser:myPass@word1" -d "" "https://mysparkcluster.azurehdinsight.net/sparkjobserver/contexts/mystreaming?num-cpu-cores=4&memory-per-node=1024m"

##<a name="submitapp"></a>Submit an application to the cluster

    curl.exe -k -u "<hdinsight user>:<user password>" -d @<input file name> "https://<cluster name>.azurehdinsight.net/sparkjobserver/jobs?appName=<app name>&classPath=<class path>&context=<context>"

For example:

    curl.exe -k -u "myuser:myPass@word1" -d @mypostdata.txt "https://mysparkcluster.azurehdinsight.net/sparkjobserver/jobs?appName=streamingjar&classPath=org.apache.spark.streaming.eventhubs.example.EventCountJobServer&context=mystreaming"

where mypostdata.txt defines your application.

##<a name="submitapp"></a>Delete a job

    curl.exe -X DELETE -k -u "<hdinsight user>:<user password>" "https://<cluster name>.azurehdinsight.net/sparkjobserver/contexts/<context>"

For example:

    curl.exe -X DELETE -k -u "myuser:myPass@word1" "https://mysparkcluster.azurehdinsight.net/sparkjobserver/contexts/mystreaming"

##<a name="seealso"></a>See also

* [Overview: Apache Spark on Azure HDInsight](hdinsight-apache-spark-overview.md)
* [Provision Spark on an HDInsight cluster](hdinsight-apache-spark-provision-clusters.md)
* [Perform interactive data analysis using Spark in HDInsight with BI tools](hdinsight-apache-spark-use-bi-tools.md)
* [Use Spark in HDInsight to build machine learning applications](hdinsight-apache-spark-ipython-notebook-machine-learning.md)
* [Use Spark in HDInsight to build real-time streaming applications](hdinsight-apache-spark-csharp-apache-zeppelin-eventhub-streaming.md)
* [Manage resources for the Apache Spark cluster in Azure HDInsight](hdinsight-apache-spark-resource-manager.md)

[hdinsight-versions]: ../hdinsight-component-versioning/ [hdinsight-upload-data]: ../hdinsight-upload-data/ [hdinsight-storage]: ../hdinsight-use-blob-storage/ [azure-purchase-options]: http://azure.microsoft.com/pricing/purchase-options/ [azure-member-offers]: http://azure.microsoft.com/pricing/member-offers/ [azure-free-trial]: http://azure.microsoft.com/pricing/free-trial/ [azure-management-portal]: https://manage.windowsazure.com/ [azure-create-storageaccount]: ../storage-create-storage-account/ <!---HONumber=Oct15_HO3-->
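The curl commands above all target a small set of REST endpoints on the same base URL. As a rough illustration of how those endpoint URLs fit together (the cluster name, context name, and application names below are placeholders taken from the examples, not real resources), the URL construction can be sketched like this:

```python
# Sketch: assemble Spark job server endpoint URLs as used in the curl examples above.
# All names (cluster, context, app) are hypothetical placeholders.

BASE = "https://{cluster}.azurehdinsight.net/sparkjobserver"

def jar_upload_url(cluster, app_name):
    """Endpoint for uploading a JAR (sent as a POST binary body)."""
    return f"{BASE.format(cluster=cluster)}/jars/{app_name}"

def create_context_url(cluster, context, num_cpu_cores, memory_per_node):
    """Endpoint for creating a persistent context."""
    return (f"{BASE.format(cluster=cluster)}/contexts/{context}"
            f"?num-cpu-cores={num_cpu_cores}&memory-per-node={memory_per_node}")

def submit_job_url(cluster, app_name, class_path, context):
    """Endpoint for submitting an application to an existing context."""
    return (f"{BASE.format(cluster=cluster)}/jobs"
            f"?appName={app_name}&classPath={class_path}&context={context}")

if __name__ == "__main__":
    print(create_context_url("mysparkcluster", "mystreaming", 4, "1024m"))
```

An HTTP client such as `requests` could then issue the same calls as curl (POST for upload/submit, DELETE for removing a context), passing the HDInsight credentials with basic authentication.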
52.294118
480
0.782677
deu_Latn
0.461128
0c5de1735ddf1a885a1c2b62510810c1dc6e0a11
1,456
md
Markdown
node_modules/qiniu/CHANGELOG.md
liuaotian/hhitCampus
3e00a64b3ea06ccbe62928590094161badf060ac
[ "Apache-2.0" ]
2
2020-12-12T01:52:49.000Z
2021-02-06T15:14:52.000Z
node_modules/qiniu/CHANGELOG.md
liuaotian/hhitCampus
3e00a64b3ea06ccbe62928590094161badf060ac
[ "Apache-2.0" ]
null
null
null
node_modules/qiniu/CHANGELOG.md
liuaotian/hhitCampus
3e00a64b3ea06ccbe62928590094161badf060ac
[ "Apache-2.0" ]
null
null
null
## CHANGE LOG

### v6.1.11 2016-05-06
- automated npm publishing via Travis

### v6.1.10 2016-04-25
- added delimiter support to list
- added forced copy/move
- switched the internals to putReadable, thanks @thesadabc
- fixed result handling, thanks @loulin
- fix Unhandled stream error in pipe, thanks @loulin
- putExtra: fixed parameter name paras to params

### v6.1.9 2015-12-03
- Make secure base url
- policy add fsizeMin
- fixed getEncodedEntryUri(bucket, key)
- documentation fixes

### v6.1.8 2015-05-13
- upload: added putpolicy2

### v6.1.7 2015-05-09
- upload putpolicy2: added callbackHost, persistentPipeline, callbackFetchKey
- added a fetch function
- imageview -> imageview2

### v6.1.6 2014-10-31
- upload putpolicy2: added fsizelimit, insertonly

### v6.1.5 2014-7-23 issue [#111](https://github.com/qiniu/nodejs-sdk/pull/111)
- [#109] unified user agent
- [#110] updated put policy

### v6.1.4 2014-7-10 issue [#108](https://github.com/qiniu/nodejs-sdk/pull/108)
- [#107] adjusted the upload host

### v6.1.3 2014-4-03 issue [#102](https://github.com/qiniu/nodejs-sdk/pull/102)
- [#98] added pfop support
- [#99] added verification of Qiniu callbacks

### v6.1.2 2014-2-17 issue [#96](https://github.com/qiniu/nodejs-sdk/pull/96)
- fixed an edge case when Content-Length = 0

### v6.1.1 2013-12-5 issue [#90](https://github.com/qiniu/nodejs-sdk/pull/90)
- check before creating the buffer

### v6.1.0 2013-10-08 issues [#81](https://github.com/qiniu/nodejs-sdk/pull/81)
- switched to urllib
- fixed a callbackUrl bug
- adjusted the bucket download domain

### v6.0.0 2013-07-16 issue [#56](https://github.com/qiniu/nodejs-sdk/pull/56)
- follows [sdkspec v6.0.4](https://github.com/qiniu/sdkspec/tree/v6.0.4)
14.707071
68
0.679945
yue_Hant
0.324619
0c5e01f083e7764bf6ee8ecf8500418f1968f763
3,333
md
Markdown
README.md
nofun97/frozen
d1f100e9e5414a8ab6375a2fc8afaaae4a7666e7
[ "Apache-2.0" ]
8
2020-01-11T01:25:58.000Z
2021-07-10T04:09:30.000Z
README.md
nofun97/frozen
d1f100e9e5414a8ab6375a2fc8afaaae4a7666e7
[ "Apache-2.0" ]
35
2020-01-09T10:20:54.000Z
2021-05-04T04:00:11.000Z
README.md
marcelocantos/frozen
4fbb61d1413d0d980adc27c7291550da18c73e06
[ "Apache-2.0" ]
2
2020-01-09T20:42:32.000Z
2020-01-13T00:46:10.000Z
# Frozen ![Go build status](https://github.com/arr-ai/frozen/workflows/Go/badge.svg) Efficient immutable data types. ## Types Map and Set both use a hashed array trie. - Map: Associates keys with values. - Set: Stores sets of values. ## Performance The following benchmarks test the base node implementation against several other key-value map implementations. All implementations are tested for insertions against an empty map, a map prepopulated with 1k elements and one prepopulated with 1M elements. The implementations are as follows: | Benchmark | Type | | --------------- | ------------------------------ | | MapInt | map[int]int | | MapInterface | map[interface{}]interface{} | | FrozenMap | frozen.Map | | FrozenNode | frozen.node | | SetInt | set = map[int]struct{} | | SetInterface | set = map[interface{}]struct{} | | FrozenSet | frozen.Set | In all cases, ints are mapped to ints. ```bash $ go test -run ^$ -cpuprofile cpu.prof -memprofile mem.prof -benchmem -bench ^BenchmarkInsert . goos: linux goarch: amd64 pkg: github.com/arr-ai/frozen BenchmarkInsertMapInt0-24 8532830 175 ns/op 72 B/op 0 allocs/op BenchmarkInsertMapInt1k-24 10379329 164 ns/op 60 B/op 0 allocs/op BenchmarkInsertMapInt1M-24 6760242 185 ns/op 78 B/op 0 allocs/op BenchmarkInsertMapInterface0-24 3579843 348 ns/op 152 B/op 2 allocs/op BenchmarkInsertMapInterface1k-24 3675631 365 ns/op 148 B/op 2 allocs/op BenchmarkInsertMapInterface1M-24 6517272 354 ns/op 115 B/op 2 allocs/op BenchmarkInsertFrozenMap0-24 5443401 225 ns/op 240 B/op 6 allocs/op BenchmarkInsertFrozenMap1k-24 2553954 446 ns/op 635 B/op 10 allocs/op BenchmarkInsertFrozenMap1M-24 1263691 960 ns/op 954 B/op 13 allocs/op BenchmarkInsertFrozenNode0-24 8220901 141 ns/op 144 B/op 4 allocs/op BenchmarkInsertFrozenNode1k-24 3294789 388 ns/op 539 B/op 8 allocs/op BenchmarkInsertFrozenNode1M-24 1316443 871 ns/op 858 B/op 11 allocs/op BenchmarkInsertSetInt0-24 12816358 155 ns/op 29 B/op 0 allocs/op BenchmarkInsertSetInt1k-24 12738687 155 ns/op 29 
B/op 0 allocs/op BenchmarkInsertSetInt1M-24 7613054 171 ns/op 39 B/op 0 allocs/op BenchmarkInsertSetInterface0-24 5121948 302 ns/op 58 B/op 1 allocs/op BenchmarkInsertSetInterface1k-24 5051988 303 ns/op 58 B/op 1 allocs/op BenchmarkInsertSetInterface1M-24 3172472 329 ns/op 62 B/op 1 allocs/op BenchmarkInsertFrozenSet0-24 5400745 236 ns/op 296 B/op 6 allocs/op BenchmarkInsertFrozenSet1k-24 2460313 512 ns/op 787 B/op 11 allocs/op BenchmarkInsertFrozenSet1M-24 1132215 1046 ns/op 1106 B/op 14 allocs/op PASS ok github.com/arr-ai/frozen 65.909s ``` ![Benchmarks Graph](assets/benchmarks.png)
52.078125
95
0.60276
kor_Hang
0.232419
0c5e6c4d2e5a5be5110c1f1ead6a0a8f2eaeff66
4,481
md
Markdown
_posts/CPP_STL/C_hw/2018-11-15-Queue.md
Mr-dingo/Mr-dingo.github.io
d65c78077242b7e9430c0ba06dfa414e5be97c0b
[ "MIT" ]
null
null
null
_posts/CPP_STL/C_hw/2018-11-15-Queue.md
Mr-dingo/Mr-dingo.github.io
d65c78077242b7e9430c0ba06dfa414e5be97c0b
[ "MIT" ]
null
null
null
_posts/CPP_STL/C_hw/2018-11-15-Queue.md
Mr-dingo/Mr-dingo.github.io
d65c78077242b7e9430c0ba06dfa414e5be97c0b
[ "MIT" ]
null
null
null
---
layout: post
title: "Implementing a Queue with a Linked List in C (Assignment)"
date: 2018-11-25 00:00:00
author: luis lee
categories: 구현과제
---

<!-- TOC -->

- [Problem](#problem)
- [Implementation](#implementation)

<!-- /TOC -->

This post is an assignment from another school that I worked on for personal reasons and am simply publishing here. The assignment is given below; since I had to finish the implementation as quickly as possible, the code is not organized very cleanly.

# Problem

Create a special queue using linked-list. If the linked-list is not used, 0 point will be given. There are three operations of the special queue. 'I' (In) operation to enqueue a value to a queue, 'O' (Out) operation to dequeue a value in a queue, and 'P' (Priority) operation to dequeue a priority value in a queue. Given an 'I' or 'P' operation, an integer is given together. In the 'I' operation, the value is enqueued to the queue. In the 'P' operation, the value is immediately dequeued regardless of the order. If there is no value in the queue, no action is taken. If there is more than one value in the queue, dequeue the value at the front of the queue. Given the operations of a queue in an empty queue state, print the values that are dequeued and the queued values in order after completing all the operations. Global variables should not be declared or used in this problem.

Sample

| input |
|------|
| 10 <br> I 7 <br> I 3 <br> I 4 <br> I 2 <br> O <br> I 9 <br> I 2 <br> I 6 <br> P 2 <br> O |

| output |
| ------ |
| 7 <br> 2 <br> 3 <br> 4 9 2 6 |

# Implementation

The implementation took about an hour and ten minutes. The input/output format is fixed, and because I am not very used to C, the `scanf` part took a bit of time. The code is not fully optimized, but I am posting it as-is.
```c
#include <stdlib.h>
#include <stdio.h>
#include <stdbool.h>

typedef struct Queue
{
    struct Queue *next; // pointer to the next Queue node
    int data;
} Queue;

// insert a node holding value after target
Queue *I_operator(Queue *target, int value);
// remove the node after target (the dequeued node is target->next)
bool D_operator(Queue *target);

// since this is a singly linked list, we need a helper to find the previous node
Queue *findParent(Queue *target, Queue *head)
{
    Queue *resultQ;
    if (head->next == NULL)
        return NULL;
    for (resultQ = head; resultQ; resultQ = resultQ->next)
    {
        if (resultQ->next == target)
        {
            return resultQ;
        }
    }
    return resultQ;
}

// find the node holding a given value
Queue *findValue(int value, Queue *head)
{
    Queue *resultQ;
    if (head->next == NULL)
        return NULL;
    for (resultQ = head->next; resultQ; resultQ = resultQ->next)
    {
        if (resultQ->data == value)
        {
            return resultQ;
        }
    }
    return resultQ;
}

// find the last node
Queue *findLast(Queue *head)
{
    Queue *resultQ = head;
    if (head->next == NULL)
        return resultQ;
    for (resultQ = head->next; resultQ->next; resultQ = resultQ->next)
    {
    }
    return resultQ;
}

void doOperation(char OpType, int value, Queue *head)
{
    if (OpType == 'O')
    {
        D_operator(head);
    }
    else if (OpType == 'I')
    {
        Queue *last = findLast(head);
        I_operator(last, value);
    }
    else if (OpType == 'P')
    {
        Queue *found = findValue(value, head);
        Queue *target = findParent(found, head);
        D_operator(target);
    }
    else
    {
        printf("wrong input");
    }
}

// In operator: put a node holding value right after target
Queue *I_operator(Queue *target, int value)
{
    Queue *newQueue = (Queue *)malloc(sizeof(Queue));
    newQueue->data = value;
    newQueue->next = target->next;
    target->next = newQueue;
    return newQueue;
}

// remove (and print) the node right after target; returns false if there is none
bool D_operator(Queue *target)
{
    if (target == NULL)
        return false;
    Queue *del = target->next;
    if (del == NULL)
    {
        return false;
    }
    printf("%d\n", del->data);
    target->next = del->next;
    free(del);
    return true;
}

int main(int argc, char const *argv[])
{
    Queue *head = (Queue *)malloc(sizeof(Queue));
    head->next = NULL; // initialize the head node (a dummy node in this code)
    int inputSize;
    scanf("%d", &inputSize);
    for (int i = 0; i < inputSize; i++)
    {
        char OpType; // operation type (I, O, P)
        int value = 0;
        scanf(" %c", &OpType);
        if (OpType != 'O') // if the operation is not 'O', also read a value
        {
            scanf("%d", &value);
        }
        doOperation(OpType, value, head);
    }
    for (Queue *curQ = head->next; curQ; curQ = curQ->next)
    {
        printf("%d ", curQ->data);
    }
    return 0;
}
```
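As a quick sanity check against the sample above, the same "special queue" semantics can be sketched in Python. This is an independent reference implementation written for this post, not part of the assignment (which requires a linked list in C):

```python
from collections import deque

def run_ops(ops):
    """Simulate the special queue: 'I v' enqueues v, 'O' dequeues the front,
    'P v' removes the first occurrence of v regardless of its position.
    Returns (dequeued values in order, remaining queue contents)."""
    q = deque()
    out = []
    for op in ops:
        if op[0] == "I":
            q.append(int(op.split()[1]))
        elif op[0] == "O":
            if q:                       # empty queue: no action
                out.append(q.popleft())
        elif op[0] == "P":
            v = int(op.split()[1])
            if v in q:                  # value absent: no action
                q.remove(v)             # removes the occurrence closest to the front
                out.append(v)
    return out, list(q)

ops = ["I 7", "I 3", "I 4", "I 2", "O", "I 9", "I 2", "I 6", "P 2", "O"]
print(run_ops(ops))  # matches the sample: dequeued 7, 2, 3; remaining 4 9 2 6
```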
23.835106
345
0.602544
kor_Hang
0.873641
0c5fc4e2fbbd6372d3b041f9afc511495dbc8b43
9,212
md
Markdown
docs/2014/relational-databases/partitions/partitioned-tables-and-indexes.md
masahiko-sotta/sql-docs.ja-jp
f9e587be8d74ad47d0cc2c31a1670e2190a0aab7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/relational-databases/partitions/partitioned-tables-and-indexes.md
masahiko-sotta/sql-docs.ja-jp
f9e587be8d74ad47d0cc2c31a1670e2190a0aab7
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/2014/relational-databases/partitions/partitioned-tables-and-indexes.md
masahiko-sotta/sql-docs.ja-jp
f9e587be8d74ad47d0cc2c31a1670e2190a0aab7
[ "CC-BY-4.0", "MIT" ]
null
null
null
---
title: Partitioned Tables and Indexes | Microsoft Docs
ms.custom: ''
ms.date: 06/13/2017
ms.prod: sql-server-2014
ms.reviewer: ''
ms.technology: ''
ms.topic: conceptual
helpviewer_keywords:
- partitioned tables [SQL Server], about partitioned tables
- partitioned indexes [SQL Server], architecture
- partitioned tables [SQL Server], architecture
- partitioned indexes [SQL Server], about partitioned indexes
ms.assetid: cc5bf181-18a0-44d5-8bd7-8060d227c927
author: MikeRayMSFT
ms.author: mikeray
manager: craigg
ms.openlocfilehash: 5f96f82919b9f4a130ce8a533e6ffcf31e765f5f
ms.sourcegitcommit: 3026c22b7fba19059a769ea5f367c4f51efaf286
ms.translationtype: MT
ms.contentlocale: ja-JP
ms.lasthandoff: 06/15/2019
ms.locfileid: "65092040"
---
# <a name="partitioned-tables-and-indexes"></a>Partitioned Tables and Indexes
[!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] supports table and index partitioning. The data of partitioned tables and indexes is divided into units that can be spread across more than one filegroup in a database. The data is partitioned horizontally, so that groups of rows are mapped into individual partitions. All partitions of a single index or table must reside in the same database. The table or index is treated as a single logical entity when queries or updates are performed on the data.

Partitioned tables and indexes are not available in every edition of [!INCLUDE[msCoName](../../includes/msconame-md.md)] [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)]. For a list of the features supported by the editions of [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)], see [Features Supported by the Editions of SQL Server 2014](../../getting-started/features-supported-by-the-editions-of-sql-server-2014.md).

> [!IMPORTANT]
> [!INCLUDE[ssCurrent](../../includes/sscurrent-md.md)] supports up to 15,000 partitions by default. In versions earlier than [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)], the number of partitions was limited to 1,000 by default. On x86-based systems, creating a table or index with more than 1,000 partitions is possible, but is not supported.

## <a name="benefits-of-partitioning"></a>Benefits of Partitioning
Partitioning large tables or indexes can have the following manageability and performance benefits.

- You can transfer or access subsets of data quickly and efficiently, while maintaining the integrity of the data collection as a whole. For example, an operation such as loading data from an OLTP to an OLAP system takes only seconds, instead of the minutes or hours it takes when the data is not partitioned.

- You can perform maintenance operations on one or more partitions more quickly. The operations are more efficient because they target only these data subsets, instead of the whole table. For example, you can choose to compress data in one or more partitions, or rebuild one or more partitions of an index.

- You may improve query performance, based on the types of queries you frequently run and on your hardware configuration. For example, when the query optimizer processes an equi-join query between two or more partitioned tables whose partitioning columns are the same, the partitions themselves can be joined, which makes processing faster.

When [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] performs data sorting for I/O operations, it sorts the data first by partition. [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] accesses one drive at a time, and this might reduce performance. To improve the performance of data sorting, configure a RAID to stripe the data files of your partitions across multiple disks. That way, although [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] still sorts data by partition, it can access all the drives of each partition at the same time.

In addition, you can improve performance by enabling lock escalation at the partition level instead of at the level of the whole table. This can reduce lock contention on the table.

## <a name="components-and-concepts"></a>Components and Concepts
The following terms are applicable to table and index partitioning.

Partition function
A database object that defines how the rows of a table or index are mapped to a set of partitions based on the values of a certain column, called the partitioning column. That is, the partition function defines how many partitions the table will have and how the boundaries of the partitions are defined. For example, a table that contains sales order data could be divided into twelve monthly partitions based on a `datetime` column such as the sales date.

Partition scheme
A database object that maps the partitions of a partition function to a set of filegroups. The main reason for placing partitions on separate filegroups is to make it possible to perform backup operations on partitions independently, because backups can be performed on individual filegroups.

Partitioning column
The column of a table or index that a partition function uses to partition the table or index. Computed columns that participate in a partition function must be explicitly marked PERSISTED. All data types that are valid for use as index columns can be used as a partitioning column, except `timestamp`. The `ntext`, `text`, `image`, `xml`, `varchar(max)`, `nvarchar(max)`, or `varbinary(max)` data types cannot be specified. Also, Microsoft .NET Framework common language runtime (CLR) user-defined type columns and alias data type columns cannot be specified.

Aligned index
An index that is built on the same partition scheme as its corresponding table. When a table and its indexes are aligned, [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] can switch partitions quickly and efficiently while maintaining the partition structure of both the table and its indexes. An index does not have to participate in a partition function of the same name in order to be aligned with its base table. However, the partition functions of the index and the base table must be essentially identical, in that 1) the arguments of the partition functions have the same data type, 2) they define the same number of partitions, and 3) they define the same boundary values for the partitions.

Nonaligned index
An index partitioned independently from its corresponding table. That is, the index has a different partition scheme, or is placed on a different filegroup from the base table. Designing a nonaligned partitioned index can be useful in the following cases:

- The base table has not been partitioned.

- The index key is unique and does not contain the partitioning column of the table.

- You want the base table to participate in collocated joins with many tables that use different join columns.

Partition elimination
The process by which the query optimizer accesses only the relevant partitions to satisfy the filter criteria of the query.

## <a name="performance-guidelines"></a>Performance Guidelines
The new, higher limit of 15,000 partitions affects memory, partitioned index operations, DBCC commands, and queries. This section describes the performance implications of increasing the number of partitions above 1,000, and provides workarounds as needed. With the maximum number of partitions raised to 15,000, you can store data for a longer time. However, you should retain data only for as long as it is needed, and maintain a balance between performance and the number of partitions.

### <a name="memory-usage-and-guidelines"></a>Memory Usage and Guidelines
We recommend that you use at least 16 GB of RAM if a large number of partitions are in use. If the system does not have enough memory, data manipulation language (DML) statements, data definition language (DDL) statements, and other operations can fail due to insufficient memory. On a system with 16 GB of RAM that runs many memory-intensive processes, operations that run on a large number of partitions may run out of memory. Therefore, the more memory you have beyond 16 GB, the fewer performance and memory problems you are likely to encounter.

The performance with which [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] builds a partitioned index can be limited by memory. When the table already has a clustered index applied to it, memory is a particular constraint if the partitioned index is not aligned with the base table or with the clustered index.

### <a name="partitioned-index-operations"></a>Partitioned Index Operations
The performance with which [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] builds a partitioned index can be limited by memory. Nonaligned indexes are especially affected. Creating and rebuilding nonaligned indexes on a table with more than 1,000 partitions is possible, but is not supported. Doing so may cause degraded performance or excessive memory consumption during these operations.

Creating and rebuilding aligned indexes takes longer as the number of partitions increases. To avoid performance and memory problems, we recommend that you do not run multiple create-index and rebuild-index commands at the same time.

When [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] performs sorting to build a partitioned index, it first builds one sort table for each partition. It then builds the sort tables either in the respective filegroup of each partition, or in `tempdb` if the SORT_IN_TEMPDB index option is specified. Each sort table requires a minimum amount of memory to build. When you build a partitioned index that is aligned with its base table, the sort tables are built one at a time, so less memory is consumed. However, for a nonaligned partitioned
インデックスを作成すると、複数の並べ替えテーブルが同時に作成されます。 そのため、このように同時に並べ替えを行うには十分なメモリが必要です。 パーティションの数が多いと、必要なメモリも増えます。 1 つの並べ替えテーブル、つまりパーティションあたり最低必要なサイズは 40 ページ (1 ページは 8 KB) です。 たとえば、100 個のパーティションから構成される固定されないパーティション インデックスは、同時に 4,000 (40 * 100) ページを同時に並べ替えることができるメモリが必要です。 これだけのメモリを使用できれば、作成操作は成功しますがパフォーマンスが低下する場合があります。 これだけのメモリを使用できない場合、作成操作は失敗します。 一方、100 個のパーティションから構成される固定されたパーティション インデックスは、複数の並べ替えが同時に行われることがないので、40 ページを並べ替えることができるメモリがあれば十分です。 固定されたインデックス、固定されないインデックスを問わず、SQL Server がマルチプロセッサ コンピューターで 2 次以上の並列処理によって作成操作を実行している場合、メモリの要件がさらに高くなる場合もあります。 これは並列処理の次数が多いと、メモリの要件も高くなるためです。 たとえば、SQL Server の並列処理の次数が 4 に設定されている場合、100 個のパーティションから構成される固定されないパーティション インデックスは、同時に 4 基のプロセッサで 4,000 ページを並べ替えるために 16,000 ページ分のメモリが必要です。 パーティション インデックスが固定されている場合、4 基のプロセッサで 40 ページを並べ替えるため、メモリの要件は 160 (4 * 40) ページまで下がります。 MAXDOP インデックス オプションを使用して、手動で並列処理の次数を減らすことができます。 ### <a name="dbcc-commands"></a>DBCC コマンド パーティション数が多い場合、DBCC コマンドの実行にかかる時間は、パーティション数が増えるほど長くなります。 ### <a name="queries"></a>クエリ パーティションの解消を使用するクエリは、パーティション数が多くなると、それに応じてパフォーマンスが向上する可能性があります。 パーティションの解消を使用しないクエリの場合、その実行にかかる時間は、パーティション数が増えるほど長くなります。 たとえば、テーブルの行数が 10 億で、 `A`、 `B`、および `C`の列があるとします。 シナリオ 1 では、テーブルが列 `A`で 1,000 個のパーティションに分割されます シナリオ 2 では、テーブルが列 `A`で 10,000 個のパーティションに分割されます 列 `A` でフィルタリングする WHERE 句を持つテーブルでのクエリは、パーティションの解消を実行し、1 つのパーティションをスキャンします。 シナリオ 2 の場合は、パーティション内でスキャンする行数が少ないので、同じクエリがより高速に実行される可能性があります。 列 B でフィルタリングする WHERE 句を持つクエリは、すべてのパーティションをスキャンします。 シナリオ 1 の場合は、スキャンするパーティション数が少ないので、同じクエリがシナリオ 2 より高速に実行される可能性があります。 パーティション分割列以外の列に対して TOP や MAX/MIN のような演算子を使用するクエリは、すべてのパーティションを評価する必要があるため、パーティション分割によってパフォーマンスが低下する可能性があります。 ## <a name="behavior-changes-in-statistics-computation-during-partitioned-index-operations"></a>パーティション インデックス操作中の統計計算での動作の変更 [!INCLUDE[ssSQL11](../../includes/sssql11-md.md)]以降では、パーティション インデックスが作成または再構築された場合、テーブル内のすべての行をスキャンして統計を作成することはできません。 代わりに、クエリ オプティマイザーが既定のサンプリング アルゴリズムを使用して統計を生成します。 パーティション インデックスでデータベースをアップグレードした後で、これらのインデックスのヒストグラム データに違いが見つかる場合があります。 この動作の変更はクエリ パフォーマンスに影響しない可能性があります。 
テーブル内のすべての行をスキャンしてパーティション インデックスの統計を作成するには、FULLSCAN 句で CREATE STATISTICS または UPDATE STATISTICS を使用します。 ## <a name="related-tasks"></a>Related Tasks ||| |-|-| |**タスク**|**トピック**| |パーティション関数とパーティション構成の作成方法、およびそれらをテーブルおよびインデックスに適用する方法について説明します。|[パーティション テーブルとパーティション インデックスの作成](create-partitioned-tables-and-indexes.md)| ||| ## <a name="related-content"></a>関連コンテンツ 次のホワイトペーパーには、パーティション テーブルおよびパーティション インデックスの戦略と有用な実装について記述されています。 - [SQL Server 2008 を使用したパーティション テーブルとパーティション インデックス](https://msdn.microsoft.com/library/dd578580\(SQL.100\).aspx) - [自動スライディング ウィンドウを実装する方法](https://msdn.microsoft.com/library/aa964122\(SQL.90\).aspx) - [パーティション テーブルの一括読み込み](https://msdn.microsoft.com/library/cc966380.aspx) - [パーティション テーブルとパーティション インデックスに対するクエリ処理の機能強化](https://msdn.microsoft.com/library/ms345599.aspx) - [大規模なリレーショナル データ ウェアハウスを構築するためのトップ 10 のベスト プラクティス](http://sqlcat.com/top10lists/archive/2008/02/06/top-10-best-practices-for-building-a-large-scale-relational-data-warehouse.aspx)
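Although partition functions are defined in T-SQL, the boundary-to-partition mapping they perform can be sketched in a few lines. The following Python snippet (an illustration only; the function name and the numeric example are hypothetical, not part of SQL Server) models how RANGE LEFT and RANGE RIGHT boundary semantics assign a value to a partition:

```python
from bisect import bisect_left, bisect_right

def partition_for(value, boundaries, range_right=True):
    """Return the 0-based partition number for `value`.

    `boundaries` is the sorted list of boundary values of the partition
    function; n boundaries define n + 1 partitions. With RANGE RIGHT a
    boundary value belongs to the partition on its right; with RANGE LEFT
    it belongs to the partition on its left.
    """
    if range_right:
        return bisect_right(boundaries, value)
    return bisect_left(boundaries, value)

# Three boundaries define four partitions (a monthly scheme would use
# eleven boundaries for twelve partitions).
bounds = [10, 20, 30]
assert partition_for(5, bounds) == 0
assert partition_for(10, bounds) == 1               # RANGE RIGHT: boundary goes right
assert partition_for(10, bounds, range_right=False) == 0  # RANGE LEFT: boundary goes left
assert partition_for(35, bounds) == 3
```

Partition elimination is then simply the observation that a filter on the partitioning column restricts which of these partition numbers a query needs to visit.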
74.894309
748
0.824142
jpn_Jpan
0.844233
0c5fd0ccde5273775c42ccd1a437dc669ab7b0ef
6,358
md
Markdown
docs/test/live-unit-testing-whats-new.md
Jteve-Sobs/visualstudio-docs.de-de
59bd3c5d2776a76ef8d28407c5cc97efc9e72f84
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/test/live-unit-testing-whats-new.md
Jteve-Sobs/visualstudio-docs.de-de
59bd3c5d2776a76ef8d28407c5cc97efc9e72f84
[ "CC-BY-4.0", "MIT" ]
1
2020-07-24T14:57:38.000Z
2020-07-24T14:57:38.000Z
docs/test/live-unit-testing-whats-new.md
angelobreuer/visualstudio-docs.de-de
f553469c026f7aae82b7dc06ba7433dbde321350
[ "CC-BY-4.0", "MIT" ]
null
null
null
--- title: What's new in Live Unit Testing in Visual Studio 2017 titleSuffix: '' ms.date: 10/11/2017 ms.topic: conceptual helpviewer_keywords: - Live Unit Testing - Live Unit Testing What's New author: mikejo5000 ms.author: mikejo ms.workload: - dotnet monikerRange: vs-2017 ms.openlocfilehash: 7f7ab0c257bfed4521e95d9da12eaa0b9e25a71e ms.sourcegitcommit: cc841df335d1d22d281871fe41e74238d2fc52a6 ms.translationtype: HT ms.contentlocale: de-DE ms.lasthandoff: 03/18/2020 ms.locfileid: "76114271" --- # <a name="whats-new-in-live-unit-testing-for-visual-studio-2017"></a>What's new in Live Unit Testing for Visual Studio 2017 This article lists the new features in Live Unit Testing for each release of Visual Studio, starting with Visual Studio 2017 version 15.3. For an overview of using Live Unit Testing, see [Live Unit Testing with Visual Studio](live-unit-testing.md). ## <a name="version-154"></a>Version 15.4 Starting with Visual Studio 2017 version 15.4, Live Unit Testing includes improvements and enhancements in several areas: - **Improved discoverability.** For users who are not yet familiar with the Live Unit Testing feature, the Visual Studio IDE displays a gold bar pointing to Live Unit Testing whenever the user opens a solution that contains unit tests but has Live Unit Testing disabled. The information shown in the gold bar tells the user about Live Unit Testing and how to enable it. The gold bar also displays information when the prerequisites for Live Unit Testing are not met, including: - Test adapters are missing - Older versions of test adapters are present - A restore of NuGet packages referenced in the solution is needed - **Integration with Task Center notifications.** The Visual Studio IDE now displays a Task Center notification for Live Unit Testing background processing, so users can easily understand what is happening when Live Unit Testing is enabled. This addresses the main pain point when starting Live Unit Testing in a large solution: previously, users could not tell for a few minutes (until coverage icons appeared) whether Live Unit Testing was really enabled and working. That has changed. - **Support for version 1 of the MSTest framework.** Live Unit Testing supports three popular unit testing frameworks: xUnit, NUnit, and MSTest. Previously, Live Unit Testing only worked for MSTest unit test projects that used version 2 of MSTest. Starting with Visual Studio 2017 version 15.4, MSTest version 1 is also supported. - **Reliability and performance.** Live Unit Testing now ensures that the system is made aware when projects have not completely finished loading, and prevents Live Unit Testing from crashing. Build performance improvements also avoid re-evaluating MSBuild projects when the system is not aware of any changes to the project file. - **Various user interface improvements.** The confusing right-click option **Live Test Set - Include/Exclude** has been renamed to **Live Unit Testing Include/Exclude**. The **Reset clean** option has been removed from the **Test** > **Live Unit Testing** menu; it is now available via **Tools** > **Options** > **Live Unit Testing** > **Delete Persisted Data**. ## <a name="version-153"></a>Version 15.3 Starting with Visual Studio 2017 version 15.3, Live Unit Testing includes improvements and enhancements in two major areas: - Support for .NET Core and .NET Standard. You can use Live Unit Testing for .NET Core and .NET Standard solutions written in either C# or Visual Basic. - Performance improvements. You will notice significantly faster performance after the first complete build and test run under Live Unit Testing. You will also see clear performance improvements on subsequent starts of Live Unit Testing in the same solution. Data generated by Live Unit Testing is now persisted and reused as much as possible for up-to-date checks. In addition to these major additions, Live Unit Testing also includes the following improvements: - Test methods are now distinguished from regular methods by a beaker icon. An empty beaker means that the test is not included in Live Unit Testing. - When you click a test method in the pop-up UI of a Live Unit Testing coverage icon, you can debug the test directly in that execution context from the UI window, without leaving the code editor. This is extremely convenient, especially when you are examining a failing test. - Several additional configurable options have been added under **Tools** > **Options** > **Live Unit Testing** > **General**. You can set an upper limit on the memory used by Live Unit Testing, and you can specify the file path for persisted Live Unit Testing data for the open solution. - Several additional menu items have been added under the **Test** > **Live Unit Testing** menu bar. **Reset clean** deletes the persisted data and regenerates it. **Option** navigates to **Tools** > **Options** > **Live Unit Testing** > **General**. - You can now use the following attributes to specify in source code that you want to exclude targeted test methods from Live Unit Testing: - For xUnit: `[Trait("Category", "SkipWhenLiveUnitTesting")]` - For NUnit: `[Category("SkipWhenLiveUnitTesting")]` - For MSTest: `[TestCategory("SkipWhenLiveUnitTesting")]` ## <a name="see-also"></a>See also - [Introducing Live Unit Testing](live-unit-testing-intro.md) - [Live Unit Testing with Visual Studio 2017](live-unit-testing.md)
89.549296
605
0.811104
deu_Latn
0.994844
0c60f87f5751e348aecd77754cb151d611e42842
1,868
md
Markdown
artwork/README.md
mikepfrank/dynamic
01581e5f671f3ab34eb5bec45c2cab508a1b6928
[ "Unlicense" ]
2
2019-01-25T07:18:56.000Z
2021-12-18T05:16:40.000Z
artwork/README.md
mikepfrank/dynamic
01581e5f671f3ab34eb5bec45c2cab508a1b6928
[ "Unlicense" ]
null
null
null
artwork/README.md
mikepfrank/dynamic
01581e5f671f3ab34eb5bec45c2cab508a1b6928
[ "Unlicense" ]
null
null
null
# Dynamic artwork source files (`dynamic/artwork/`). Source files and reference images for the graphic artwork associated with the Dynamic application. ## 1. PowerPoint for logo assembly ([`Dynamic-logo.pptx`](Dynamic-logo.pptx "Dynamic-logo.pptx file")). Contains several slides used to compose different versions of the Dynamic logo. ## 2. Dynamic logo ([`dynamic-logo.gif`](dynamic-logo.gif "dynamic-logo.gif file"), [`dynamic-logo.png`](dynamic-logo.png "dynamic-logo.png file")). Main version of the logo image, composed for the splash page. ## 3. Simplified logo ([`dynamic-logo-nosplash.png`](dynamic-logo-nosplash.png "dynamic-logo-nosplash.png file")). Just the main text-box part of the logo, for use in other contexts. ## 4. Faded logo ([`dynamic-splash-faded.png`](dynamic-splash-faded.png "dynamic-splash-faded.png")). Faded-to-white version of the logo, for use as a slide background. ## 5. Subtitled logo ([`dynamic-splash-subtitled.png`](dynamic-splash-subtitled.png "dynamic-splash-subtitled.png file"), [`dynamic-splash-subtitled.ppm`](dynamic-splash-subtitled.ppm "dynamic-splash-subtitled.ppm file")). A version of the main logo that also includes the subtitle "The Nonlinear Dynamical Network Simulator." ## 6. Window dump ([`dynamic-window.png`](dynamic-window.png "dynamic-window.png file")). Window dump (screenshot) showing how the splash page image looks in the main console window. ## 7. Alternate logo ([`DYNAMIXXXX.jpg`](DYNAMIXXXX.jpg "DYNAMIXXXX.jpg file"), [`DYNAMIXXXX.ppm`](DYNAMIXXXX.ppm "DYNAMIXXXX.ppm file")). An alternate, higher-resolution, artistically complex version of the logo. ## 8. Smaller version of alternate logo ([`dynamix2.jpg`](dynamix2.jpg "dynamix2.jpg file")). Compressed, lower-resolution version of the alternate logo. ## 9. README file ([`README.md`](README.md "README.md file")). This file.
44.47619
222
0.748394
eng_Latn
0.562345
0c6118355587dcb37d9b4dfa633b237a3f0c5f76
326
md
Markdown
README.md
subdigital/swift-coding
b28d38f6b62296f1cd99313b950281a43b130a7c
[ "MIT" ]
11
2021-04-25T18:02:37.000Z
2022-01-27T23:02:47.000Z
README.md
subdigital/swift-coding
b28d38f6b62296f1cd99313b950281a43b130a7c
[ "MIT" ]
1
2021-06-25T12:41:45.000Z
2021-06-28T12:23:44.000Z
README.md
subdigital/swift-coding
b28d38f6b62296f1cd99313b950281a43b130a7c
[ "MIT" ]
1
2021-05-10T15:10:50.000Z
2021-05-10T15:10:50.000Z
# swift-coding ![Bitrise Build Badge](https://app.bitrise.io/app/e6e8163c7f141139.svg?token=aVhm-ktqKllsLkhHQ_JrvQ) A protocol witness-oriented library for creating functional encoding and decodings, built on top of Codable. [NSScreencasts series on Codable witnesses](https://nsscreencast.com/series/54-codable-witnesses)
40.75
108
0.812883
eng_Latn
0.821399
0c6137c8d6795587d73c7dad7e5fe55b8589b31b
2,309
md
Markdown
docs/access/desktop-database-reference/parameters-collection-ado.md
JakubK44/office-developer-client-docs
7c36a87ab45654d6a9313f3bc4365c08c4a616c6
[ "CC-BY-4.0", "MIT" ]
50
2018-11-08T14:51:56.000Z
2022-03-28T18:56:54.000Z
docs/access/desktop-database-reference/parameters-collection-ado.md
JakubK44/office-developer-client-docs
7c36a87ab45654d6a9313f3bc4365c08c4a616c6
[ "CC-BY-4.0", "MIT" ]
510
2018-05-17T01:01:02.000Z
2022-03-31T22:20:22.000Z
docs/access/desktop-database-reference/parameters-collection-ado.md
JakubK44/office-developer-client-docs
7c36a87ab45654d6a9313f3bc4365c08c4a616c6
[ "CC-BY-4.0", "MIT" ]
98
2018-05-10T08:39:19.000Z
2022-03-31T09:41:54.000Z
--- title: Parameters collection (ADO) TOCTitle: Parameters collection (ADO) ms:assetid: 554387c3-3572-5391-3b24-c7d3443844cd ms:mtpsurl: https://msdn.microsoft.com/library/JJ249283(v=office.15) ms:contentKeyID: 48544923 ms.date: 09/18/2015 mtps_version: v=office.15 f1_keywords: - ado210.chm1231103 f1_categories: - Office.Version=v15 ms.localizationpriority: medium --- # Parameters collection (ADO) **Applies to**: Access 2013, Office 2013 Contains all the [Parameter](parameter-object-ado.md) objects of a [Command](command-object-ado.md) object. ## Remarks A **Command** object has a **Parameters** collection made up of **Parameter** objects. Using the [Refresh](refresh-method-ado.md) method on a **Command** object's **Parameters** collection retrieves provider parameter information for the stored procedure or parameterized query specified in the **Command** object. Some providers do not support stored procedure calls or parameterized queries; calling the **Refresh** method on the **Parameters** collection when using such a provider will return an error. If you have not defined your own **Parameter** objects and you access the **Parameters** collection before calling the **Refresh** method, ADO will automatically call the method and populate the collection for you. You can minimize calls to the provider to improve performance if you know the properties of the parameters associated with the stored procedure or parameterized query you wish to call. Use the [CreateParameter](createparameter-method-ado.md) method to create **Parameter** objects with the appropriate property settings and use the [Append](append-method-ado.md) method to add them to the **Parameters** collection. This lets you set and return parameter values without having to call the provider for the parameter information. 
If you are writing to a provider that does not supply parameter information, you must manually populate the **Parameters** collection using this method to be able to use parameters at all. Use the [Delete](delete-method-ado-parameters-collection.md) method to remove **Parameter** objects from the **Parameters** collection if necessary. The objects in the **Parameters** collection of a **Recordset** go out of scope (therefore becoming unavailable) when the **Recordset** is closed.
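The lazy-population behavior described above (an implicit **Refresh** on first access, versus pre-appending parameters to avoid the provider round trip) can be modeled in a few lines. This Python sketch is illustrative only; the class and function names are hypothetical stand-ins, not ADO's actual API:

```python
class Parameters:
    """Toy model of the ADO Parameters collection semantics: iterating an
    empty collection triggers an implicit provider refresh, while
    pre-appended parameters avoid that round trip entirely."""

    def __init__(self, provider_lookup):
        self._provider_lookup = provider_lookup  # stands in for the provider call
        self._items = []
        self.refresh_calls = 0

    def refresh(self):
        # Explicit Refresh: ask the "provider" for parameter information.
        self.refresh_calls += 1
        self._items = list(self._provider_lookup())

    def append(self, param):
        # Mirrors the CreateParameter + Append pattern.
        self._items.append(param)

    def __iter__(self):
        if not self._items:  # implicit Refresh on first access
            self.refresh()
        return iter(self._items)


def provider():  # pretend round trip to the provider
    return ["@p1", "@p2"]

lazy = Parameters(provider)
assert list(lazy) == ["@p1", "@p2"] and lazy.refresh_calls == 1

manual = Parameters(provider)
manual.append("@p1")  # parameters defined up front: no provider call
assert list(manual) == ["@p1"] and manual.refresh_calls == 0
```

The second object shows why pre-defining parameters improves performance: the provider is never consulted for parameter metadata.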
65.971429
866
0.786055
eng_Latn
0.986432
0c61621b0dcc854f3ff2d61b01d296ff70a38558
10,364
md
Markdown
versioned_docs/version-v1.25.x/user-guide/configure-certificates.md
pinpan/docs-site
71df8cf11d0b07c997d1c3fe48fc2db43794689d
[ "CC-BY-4.0" ]
63
2018-08-01T19:01:43.000Z
2022-01-26T13:59:33.000Z
versioned_docs/version-v1.25.x/user-guide/configure-certificates.md
pinpan/docs-site
71df8cf11d0b07c997d1c3fe48fc2db43794689d
[ "CC-BY-4.0" ]
1,073
2018-08-01T11:40:33.000Z
2022-03-31T13:56:30.000Z
versioned_docs/version-v1.25.x/user-guide/configure-certificates.md
pinpan/docs-site
71df8cf11d0b07c997d1c3fe48fc2db43794689d
[ "CC-BY-4.0" ]
163
2018-08-01T11:25:19.000Z
2022-03-31T14:02:22.000Z
# Configuring Zowe certificates As a system administrator, review this article to learn about the key concepts of Zowe certificates. Zowe uses a certificate to encrypt data for communication across secure sockets. An instance of Zowe references a USS directory referred to as a `KEYSTORE_DIRECTORY` which contains information about where the certificate is located. <!--issue: Make separate pages for keyring/keystore instructions.--> ## Northbound Certificate The Zowe certificate is used by the API Mediation Layer on its northbound edge when identifying itself and encrypting `https://` traffic to web browsers or REST client applications. If the Zowe Command Line Interface (CLI) is configured to use the Zowe API Mediation Layer, then the CLI is a client of the Zowe certificate. For more information, see [Using the Zowe Command Line Interface, Integrating with the API Mediation Layer](./cli-usingcli.md#integrating-with-api-mediation-layer). ## Southbound Certificate As well as being a server, Zowe itself is a client to services on the southbound edge of its API Mediation Layer. Zowe communicates to these services over secure sockets. These southbound services use certificates to encrypt their data, and Zowe uses a trust store to store its relationship to these certificates. The southbound services that are started by Zowe itself and run as address spaces under its `ZWESVSTC` started task (such as the API discovery service, the explorer JES REST API server) re-use the same Zowe certificate used by the API Mediation Layer on its northbound client edge. ## Trust store In addition to Zowe using the intra-address space of certificates, Zowe uses external services on z/OS (such as z/OSMF or Zowe conformant extensions that have registered themselves with the API Mediation Layer) to encrypt messages between its servers. 
These services present their own certificate to the API Mediation Layer, in which case the trust store is used to capture the relationship between Zowe's southbound edge and these external certificates. To disable trust store validation of southbound certificates, set `VERIFY_CERTIFICATES=false` in the `zowe-setup-certificates.env` file in the `KEYSTORE_DIRECTORY`. This is recommended when the certificate presented to the API Mediation Layer is self-signed or issued by an unknown certificate authority. For example, the z/OSMF certificate may be self-signed, in which case the Zowe API Mediation Layer does not recognize the signing authority. To enable certificate validation without hostname validation, set `NONSTRICT_VERIFY_CERTIFICATES=true`. With this setting, the certificate Common Name or Subject Alternate Name (SAN) is not checked. This facilitates deployment to environments where certificates are valid but do not contain a valid hostname. This configuration is for development purposes only and should not be used in production. The utility script `zowe-setup-certificates.sh` or the `ZWEKRING` JCL can help you import the z/OSMF certificate authority into the trust store. If you are not using Zowe to generate certificates, or if you want to trust other external services, you can customize `zowe-setup-certificates.env` or the `ZWEKRING` JCL to import them as external certificate authorities. A proper trust store setup is mandatory to successfully start Zowe with `VERIFY_CERTIFICATES` or `NONSTRICT_VERIFY_CERTIFICATES` enabled in `zowe-setup-certificates.env` and used by `zowe-setup-certificates.sh`.
The following diagram is a section of the architecture annotated to describe the role of certificates and trust stores. <img src={require("../images/common/zowe-ssl.png").default} alt="Zowe SSL" width="700px"/> The lines shown in bold red are communication over a TCP/IP connection that is encrypted with the Zowe certificate. - On the northbound edge of the API gateway, the certificate is used between client applications such as web browsers, Zowe CLI, or any other application wishing to access Zowe's REST APIs. - On the southbound edge of the API Gateway, there are a number of Zowe micro services providing HTML GUIs for the Zowe desktop or REST APIs for the API Catalog. These also use the Zowe certificate for data encryption. The lines in bold green are external certificates for servers that are not managed by Zowe, such as z/OSMF itself or any Zowe conformant REST API or App Framework servers that are registered with the API Mediation Layer. For the API Mediation Layer to be able to accept these certificates, they either need to be signed by a recognized certificate authority, or else the API Mediation Layer needs to be configured to accept unverified certificates. Even if the API Mediation Layer is configured to accept certificates signed by unverified CAs on its southbound edge, client applications on the northbound edge of the API gateway will be presented with the Zowe certificate. ## Keystore versus key ring Zowe supports certificates that are stored in a USS directory **Java KeyStore** format. Beginning with release 1.15, Zowe is including the ability to work with certificates held in a **z/OS Keyring**. Support for Keyring certificates is currently incomplete and being provided as a beta technical preview for early preview by customers. If you have any feedback using keyrings please create an issue in the [zowe-install-packaging repo](https://github.com/zowe/zowe-install-packaging/issues). 
It is expected that in a future release keyring support will be made available as a fully supported feature. <!-- Zowe supports certificates that are stored either in a USS directory **Java KeyStore** format or else held in a **z/OS Keyring**. z/OS keystore are the preferred choice for storing certificates where system programmers are already familiar with their operation and usage. The user ID setting up a keystore and connecting it with certificates requires elevated permissions, and in scenarios where you need to create a Zowe sandbox environment or for testing purposes and your TSO user ID doesn't have authority to manipulate key rings, USS keystores are a good alternative. --> <!-- If you are using a USS keystore, then the script `zowe-setup-certificates.env` is the only configuration step required. This is described in detail in [Configuring Zowe certificates in a USS KeyStore](./configure-certificates-keystore.md). If you are using a key ring, the sample JCL member `ZWEKRING` provided in the PDS library `SZWESAMP` contains the security commands to create a key ring and manage its associated certificates. This is described in [Configuring Zowe certificates in a key ring](./configure-certificates-keyring.md). For both scenarios, where the certificate is held in a USS Java Keystore or a z/OS key ring, the USS `KEYSTORE_DIRECTORY` is still required which is created with the script `zowe-setup-certificates.sh`. --> ## Keystore directory creation The `KEYSTORE_DIRECTORY` is created by running the script `<RUNTIME_DIR>/bin/zowe-setup-certificates.sh`. This script has a number of input parameters that are specified in a configuration file whose location is passed as an argument to the `-p` parameter. The configuration file `<RUNTIME_DIR>/bin/zowe-setup-certificates.env` is provided for setting up a Keystore directory that contains the Zowe certificate in JavaKeystore format. 
The configuration file `<RUNTIME_DIR>/bin/zowe-setup-certificates-keyring.env` is provided for setting up a Keystore directory that references the Zowe certificate held in a z/OS keyring. The `.env` configuration file should be customized based on security rules and practices for the z/OS environment. Once the script has been successfully executed and the `KEYSTORE_DIRECTORY` is created successfully, it is referenced by a Zowe launch `instance.env` file. A `KEYSTORE_DIRECTORY` can be used by more than one instance of Zowe. See [Creating and configuring the Zowe instance directory](../user-guide/configure-instance-directory.md#keystore-configuration) for more information. The Zowe launch diagram shows the relationship between a Zowe instance directory, a Zowe runtime directory, the Zowe keystore directory, and (if used to store the Zowe certificate) the z/OS keyring. <img src={require("../images/common/zowe-directories-keys.png").default} alt="Zowe Directories" width="700"/> You create a `KEYSTORE_DIRECTORY` in USS by using the script `zowe-setup-certificates.sh` (1) with a `-p` argument that specifies a `.env` configuration file. - If the `-p` argument file `zowe-setup-certificates.env` (2) is used, the `KEYSTORE_DIRECTORY` will contain the certificate, the certificate authority, the trust store, and the JWT Secret. - If the `-p` argument file `zowe-setup-keyring-certificates.env` (3) is used, the `KEYSTORE_DIRECTORY` contains no certificates and is a pass-through to configure a Zowe instance to use a z/OS keyring. The JCL member `ZWEKRING` (4) is used to create a z/OS Keyring to hold the Zowe certificate and its signing certificate authority. At launch time, a Zowe instance is started using the script `<INSTANCE_DIR>/bin/zowe-start.sh` which takes configuration arguments from `<INSTANCE_DIR>/instance.env`. The argument (5) `KEYSTORE_DIRECTORY=<KEYSTORE_DIRECTORY>` specifies the path to the keystore directory that Zowe will use. 
**Note:** If you generated your own server certificate, and you want to enable Client Authentication for it, your server certificate must contain the `TLS Web Client Authentication (1.3.6.1.5.5.7.3.2)` value in the Extended Key Usage section. Additionally, the `Digital signature and/or key agreement` must also be set as extension value in the Key Usage section. For more information, see [key usage extensions and extended key usage](https://help.hcltechsw.com/domino/10.0.1/admin/conf_keyusageextensionsandextendedkeyusage_r.html). For more information on the Zowe launch topology, see [Topology of the Zowe z/OS launch process](./installandconfig.md#topology-of-the-zowe-z-os-launch-process).
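The distinction between full verification (`VERIFY_CERTIFICATES`) and non-strict verification (`NONSTRICT_VERIFY_CERTIFICATES`, chain checked but hostname not matched) is the same one exposed by most TLS stacks. As an illustration only — Zowe's own Java-based implementation differs — Python's standard `ssl` module expresses the two modes like this:

```python
import ssl

# Full (strict) verification: the CA chain and the hostname are both
# checked -- analogous to VERIFY_CERTIFICATES=true.
strict = ssl.create_default_context()
assert strict.verify_mode == ssl.CERT_REQUIRED
assert strict.check_hostname

# Non-strict verification: the CA chain is still validated, but the
# certificate's Common Name / SAN is not matched against the hostname --
# analogous to NONSTRICT_VERIFY_CERTIFICATES=true.
nonstrict = ssl.create_default_context()
nonstrict.check_hostname = False           # hostname matching off...
nonstrict.verify_mode = ssl.CERT_REQUIRED  # ...chain validation stays on
assert nonstrict.verify_mode == ssl.CERT_REQUIRED
assert not nonstrict.check_hostname
```

As in Zowe, the non-strict variant is only appropriate when certificates are known to be valid but carry the wrong hostname, and should not be used in production.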
120.511628
677
0.797858
eng_Latn
0.997015
0c61f0c0b99d1b7426157399d849e4bd4884e773
258
md
Markdown
practice/regex/01-introduction/matching-specific-string.md
yogendra-revanna/hackerrank-solutions
a21bab0b613926b046675bdbaa2bcb9d1884381a
[ "MIT" ]
1
2021-02-01T16:08:55.000Z
2021-02-01T16:08:55.000Z
practice/regex/01-introduction/matching-specific-string.md
yogendra-revanna/hackerrank-solutions
a21bab0b613926b046675bdbaa2bcb9d1884381a
[ "MIT" ]
null
null
null
practice/regex/01-introduction/matching-specific-string.md
yogendra-revanna/hackerrank-solutions
a21bab0b613926b046675bdbaa2bcb9d1884381a
[ "MIT" ]
null
null
null
# Matching Specific String

https://www.hackerrank.com/challenges/matching-specific-string/problem

### Python

    Regex_Pattern = r'hackerrank'

### JavaScript (node.js)

    var Regex_Pattern = 'hackerrank';

### PHP

    $Regex_Pattern = '/hackerrank/';
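A literal pattern like this matches the exact character sequence `hackerrank` anywhere in the input, and matching is case-sensitive by default. A quick Python check (for illustration; not part of the HackerRank submission):

```python
import re

Regex_Pattern = r'hackerrank'

# The literal pattern matches the exact sequence "hackerrank"...
assert re.search(Regex_Pattern, "hackerrank") is not None
# ...including as a substring of a longer string...
assert re.search(Regex_Pattern, "I love hackerrank!") is not None
# ...but matching is case-sensitive by default.
assert re.search(Regex_Pattern, "HackerRank") is None
```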
16.125
70
0.697674
yue_Hant
0.469942
0c630b09eaddca005c66d11674456cbe90dd6e45
5,459
md
Markdown
README.md
batuhanyndny/one-boilerplate
febe5346dbbdebd918dba2617f07dc61f155976a
[ "MIT", "Unlicense" ]
2
2022-03-09T23:18:04.000Z
2022-03-10T06:35:30.000Z
README.md
batuhanyndny/one-boilerplate
febe5346dbbdebd918dba2617f07dc61f155976a
[ "MIT", "Unlicense" ]
null
null
null
README.md
batuhanyndny/one-boilerplate
febe5346dbbdebd918dba2617f07dc61f155976a
[ "MIT", "Unlicense" ]
null
null
null
<div id="top"></div> <div align="center"> [![Contributors][contributors-shield]][contributors-url] [![Forks][forks-shield]][forks-url] [![Stargazers][stars-shield]][stars-url] [![Issues][issues-shield]][issues-url] </div> <div align="center"> [![MIT License][license-shield]][license-url] [![LinkedIn][linkedin-shield]][linkedin-url] </div> <div align="center"> <h3 align="center">💍 One Boilerplate to Rule Them All</h3> <p align="center"> Production-ready React boilerplate to kickstart your next product! <br /> <a href="https://github.com/batuhanyndny/one-boilerplate/issues">Report Bug</a> · <a href="https://github.com/batuhanyndny/one-boilerplate/issues">Request Feature</a> </p> </div> <!-- TABLE OF CONTENTS --> <details> <summary>Table of Contents</summary> <ol> <li> <a href="#about-the-project">About The Project</a> <ul> <li><a href="#built-with">Built With</a></li> </ul> </li> <li> <a href="#getting-started">Getting Started</a> <ul> <li><a href="#prerequisites">Prerequisites</a></li> <li><a href="#installation">Installation</a></li> </ul> </li> <li><a href="#usage">Usage</a></li> <li><a href="#roadmap">Roadmap</a></li> <li><a href="#contributing">Contributing</a></li> <li><a href="#license">License</a></li> <li><a href="#contact">Contact</a></li> <li><a href="#acknowledgments">Acknowledgments</a></li> </ol> </details> <!-- ABOUT THE PROJECT --> ## About The Project (⚠️ UNDER DEVELOPMENT ⚠️) One Boilerplate is a React boilerplate that includes state management, code splitting, service architecture, and more. You can check out [this](https://github.com/batuhanyndny/one-boilerplate/tree/old-redux-saga) branch for the old redux-saga implementation <p align="right">(<a href="#top">back to top</a>)</p> ### Built With - [React 18](https://reactjs.org/) - [react-router](https://reactrouter.com/) - state management will be implemented soon... <p align="right">(<a href="#top">back to top</a>)</p> <!-- GETTING STARTED --> ## Getting Started It's as simple as cloning the project!
Tip: You can use `degit` ### Prerequisites You need `node@16` to run this boilerplate. You can use `nvm` or `fnm` to install the required version via the `.nvmrc` file. ### Installation Install dependencies via `npm` or `yarn` - npm ```sh npm install ``` - yarn ```sh yarn ``` <p align="right">(<a href="#top">back to top</a>)</p> ### Usage - Start up the development server ```sh yarn start ``` - Test Project ```sh yarn test ``` - Build Project ```sh yarn build ``` - Lint ```sh yarn lint ``` <!-- ROADMAP --> ## Roadmap - [ ] Add state management - [ ] Add translations - [ ] Add lazy image component - [ ] Add tests See the [open issues](https://github.com/batuhanyndny/one-boilerplate/issues) for a full list of proposed features (and known issues). <p align="right">(<a href="#top">back to top</a>)</p> <!-- CONTRIBUTING --> ## Contributing Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**. If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again! 1. Fork the Project 2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`) 3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`) 4. Push to the Branch (`git push origin feature/AmazingFeature`) 5. Open a Pull Request <p align="right">(<a href="#top">back to top</a>)</p> <!-- LICENSE --> ## License Distributed under the MIT License. See `LICENSE` for more information. <p align="right">(<a href="#top">back to top</a>)</p> <!-- CONTACT --> ## Contact Batuhan Yenidunya - [@batuhanyndny](https://twitter.com/batuhanyndny) - batuhanyndny@gmail.com <p align="right">(<a href="#top">back to top</a>)</p> <!-- ACKNOWLEDGMENTS --> ## Acknowledgments This project uses `create-react-app`.
<p align="right">(<a href="#top">back to top</a>)</p> <!-- MARKDOWN LINKS & IMAGES --> <!-- https://www.markdownguide.org/basic-syntax/#reference-style-links --> [contributors-shield]: https://img.shields.io/github/contributors/batuhanyndny/one-boilerplate.svg?style=for-the-badge [contributors-url]: https://github.com/batuhanyndny/one-boilerplate/graphs/contributors [forks-shield]: https://img.shields.io/github/forks/batuhanyndny/one-boilerplate.svg?style=for-the-badge [forks-url]: https://github.com/batuhanyndny/one-boilerplate/network/members [stars-shield]: https://img.shields.io/github/stars/batuhanyndny/one-boilerplate.svg?style=for-the-badge [stars-url]: https://github.com/batuhanyndny/one-boilerplate/stargazers [issues-shield]: https://img.shields.io/github/issues/batuhanyndny/one-boilerplate.svg?style=for-the-badge [issues-url]: https://github.com/batuhanyndny/one-boilerplate/issues [license-shield]: https://img.shields.io/github/license/batuhanyndny/one-boilerplate?style=for-the-badge [license-url]: https://github.com/batuhanyndny/one-boilerplate/blob/master/LICENSE [linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555 [linkedin-url]: https://linkedin.com/in/batuhanyndny [product-screenshot]: images/screenshot.png
28.883598
172
0.686206
eng_Latn
0.292432
0c634d6bfdc01aaed9f6fef6a6af425f681fec3d
3,105
md
Markdown
docs/relational-databases/native-client-ole-db-interfaces/irowsetfastload-ole-db.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/native-client-ole-db-interfaces/irowsetfastload-ole-db.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
null
null
null
docs/relational-databases/native-client-ole-db-interfaces/irowsetfastload-ole-db.md
CeciAc/sql-docs.fr-fr
0488ed00d9a3c5c0a3b1601a143c0a43692ca758
[ "CC-BY-4.0", "MIT" ]
1
2020-03-04T05:50:54.000Z
2020-03-04T05:50:54.000Z
--- title: IRowsetFastLoad (OLE DB) | Microsoft Docs ms.custom: '' ms.date: 03/14/2017 ms.prod: sql ms.prod_service: database-engine, sql-database, sql-data-warehouse, pdw ms.reviewer: '' ms.technology: native-client ms.topic: reference apitype: COM helpviewer_keywords: - IRowsetFastLoad interface ms.assetid: d19a7097-48d9-409a-aff9-277891b7aca7 author: MightyPen ms.author: genemi monikerRange: '>=aps-pdw-2016||=azuresqldb-current||=azure-sqldw-latest||>=sql-server-2016||=sqlallproducts-allversions||>=sql-server-linux-2017||=azuresqldb-mi-current' ms.openlocfilehash: 39a9f660f39a27a189c81d24d4d155d8764037d1 ms.sourcegitcommit: 856e42f7d5125d094fa84390bc43048808276b57 ms.translationtype: MT ms.contentlocale: fr-FR ms.lasthandoff: 11/07/2019 ms.locfileid: "73789397" --- # <a name="irowsetfastload-ole-db"></a>IRowsetFastLoad (OLE DB) [!INCLUDE[appliesto-ss-asdb-asdw-pdw-md](../../includes/appliesto-ss-asdb-asdw-pdw-md.md)] L'interface **IRowsetFastLoad** expose la prise en charge des opérations de copie en bloc basées sur mémoire [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] . les consommateurs de fournisseurs OLE DB Native Client [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] utilisent l’interface pour ajouter rapidement des données à une table de [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] existante. Si vous affectez la valeur VARIANT_TRUE à SSPROP_ENABLEFASTLOAD pour une session, vous ne pouvez pas lire les données des ensembles de lignes retournés ultérieurement à partir de cette session. Lorsque SSPROP_ENABLEFASTLOAD a la valeur VARIANT_TRUE, tous les ensembles de lignes créés sur la session sont du type IRowsetFastLoad. Les ensembles de lignes IRowsetFastLoad ne prennent pas en charge la fonctionnalité de récupération (fetch) des ensembles de lignes. Par conséquent, les données issues de ces ensembles de lignes ne peuvent pas être lues. 
## <a name="in-this-section"></a>Dans cette section |Méthode|Description| |------------|-----------------| |[IRowsetFastLoad :: Commit &#40;OLE DB&#41;](../../relational-databases/native-client-ole-db-interfaces/irowsetfastload-commit-ole-db.md)|Marque la fin d'un lot de lignes insérées et écrit les lignes dans la table [!INCLUDE[ssNoVersion](../../includes/ssnoversion-md.md)] .| |[IRowsetFastLoad :: InsertRow &#40;OLE DB&#41;](../../relational-databases/native-client-ole-db-interfaces/irowsetfastload-insertrow-ole-db.md)|Ajoute une ligne à l'ensemble de lignes de copie en bloc.| ## <a name="see-also"></a>Voir aussi [Interfaces &#40;OLE DB&#41; ](https://msdn.microsoft.com/library/34c33364-8538-45db-ae41-5654481cda93) [Copier des données en bloc avec IRowsetFastLoad &#40;OLE DB&#41;](../../relational-databases/native-client-ole-db-how-to/bulk-copy-data-using-irowsetfastload-ole-db.md) [Envoyer des données BLOB vers SQL SERVER en utilisant IROWSETFASTLOAD et ISEQUENTIALSTREAM &#40;OLE DB&#41;](../../relational-databases/native-client-ole-db-how-to/send-blob-data-to-sql-server-using-irowsetfastload-and-isequentialstream-ole-db.md)
70.568182
553
0.761353
fra_Latn
0.280811
0c6361d34a4398d82255fce7dce76d4cd0f9f52e
537
md
Markdown
_connectors/kc-source-weathercompany/index.md
tmitchell10/event-streams
aa5552697179e5c9e5e5b0b4745a8f5a667b5d0c
[ "Apache-2.0", "BSD-3-Clause", "MIT" ]
null
null
null
_connectors/kc-source-weathercompany/index.md
tmitchell10/event-streams
aa5552697179e5c9e5e5b0b4745a8f5a667b5d0c
[ "Apache-2.0", "BSD-3-Clause", "MIT" ]
null
null
null
_connectors/kc-source-weathercompany/index.md
tmitchell10/event-streams
aa5552697179e5c9e5e5b0b4745a8f5a667b5d0c
[ "Apache-2.0", "BSD-3-Clause", "MIT" ]
1
2020-07-30T09:39:01.000Z
2020-07-30T09:39:01.000Z
--- title: "Weather Company Data" sortTitle: "Weather Company" connectorID: kc-source-weathercompany direction: source support: community type: kafkaConnect icon: weatherComp.svg documentationURL: https://github.com/ibm-messaging/kafka-connect-weather-source/blob/master/README.md download: - { type: 'GitHub', url: 'https://github.com/ibm-messaging/kafka-connect-weather-source' } --- Kafka Connect for Weather Company Data is a source connector for importing data from the IBM Cloud Weather Company Data service into Apache Kafka.
35.8
146
0.787709
eng_Latn
0.411096
0c638313dca7a2c0db3975c2c29f4bc542411a61
469
md
Markdown
repos/photon/local/latest.md
666zimzum666/repo-info
07e1884c1d23c0d8011c73d1767c61b4be5e3e92
[ "Apache-2.0" ]
1
2021-12-22T14:06:48.000Z
2021-12-22T14:06:48.000Z
repos/photon/local/latest.md
666zimzum666/repo-info
07e1884c1d23c0d8011c73d1767c61b4be5e3e92
[ "Apache-2.0" ]
null
null
null
repos/photon/local/latest.md
666zimzum666/repo-info
07e1884c1d23c0d8011c73d1767c61b4be5e3e92
[ "Apache-2.0" ]
null
null
null
# `photon:4.0` ## Docker Metadata - Image ID: `sha256:84e902a593028d26ba7379ddeb36ec52815f061bc5810ea915ea33b98fb76c86` - Created: `2021-12-06T20:31:52.113064582Z` - Virtual Size: ~ 38.80 Mb (total size of all layers on-disk) - Arch: `linux`/`amd64` - Command: `["/bin/bash"]` - Environment: - `PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin` - Labels: - `build-date=20211206` - `name=Photon OS x86_64/4.0 Base Image` - `vendor=VMware`
27.588235
85
0.690832
yue_Hant
0.143049
0c6388865acabc171a2eb9a7fdbc5bb8dfd1d8ad
12,594
md
Markdown
doc/source/api/VoxelTool.md
lhinuz/godot_voxel
3430c3cf9fcd34887c7523e645aca2c7bfe1e2ec
[ "MIT" ]
null
null
null
doc/source/api/VoxelTool.md
lhinuz/godot_voxel
3430c3cf9fcd34887c7523e645aca2c7bfe1e2ec
[ "MIT" ]
null
null
null
doc/source/api/VoxelTool.md
lhinuz/godot_voxel
3430c3cf9fcd34887c7523e645aca2c7bfe1e2ec
[ "MIT" ]
null
null
null
# VoxelTool Inherits: [Reference](https://docs.godotengine.org/en/stable/classes/class_reference.html) Helper class to easily access and modify voxels ## Description: Abstract interface to access and edit voxels. It allows accessing individual voxels, or doing bulk operations such as carving large chunks or copy/paste boxes. It's not a class to instantiate alone, you may get it from the voxel objects you want to work with. ## Properties: Type | Name | Default -------- | -------------------------------- | -------- `int` | [channel](#i_channel) | 0 `int` | [eraser_value](#i_eraser_value) | 0 `int` | [mode](#i_mode) | 0 `float` | [sdf_scale](#i_sdf_scale) | 0.002 `int` | [value](#i_value) | 0 <p></p> ## Methods: Return | Signature ----------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- [void](#) | [do_box](#i_do_box) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) begin, [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) end ) [void](#) | [do_point](#i_do_point) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos ) [void](#) | [do_sphere](#i_do_sphere) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) center, [float](https://docs.godotengine.org/en/stable/classes/class_float.html) radius ) [int](https://docs.godotengine.org/en/stable/classes/class_int.html) | [get_voxel](#i_get_voxel) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos ) [float](https://docs.godotengine.org/en/stable/classes/class_float.html) | 
[get_voxel_f](#i_get_voxel_f) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos ) [Variant](https://docs.godotengine.org/en/stable/classes/class_variant.html) | [get_voxel_metadata](#i_get_voxel_metadata) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos ) [bool](https://docs.godotengine.org/en/stable/classes/class_bool.html) | [is_area_editable](#i_is_area_editable) ( [AABB](https://docs.godotengine.org/en/stable/classes/class_aabb.html) box ) [void](#) | [paste](#i_paste) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) dst_pos, [Reference](https://docs.godotengine.org/en/stable/classes/class_reference.html) src_buffer, [int](https://docs.godotengine.org/en/stable/classes/class_int.html) src_mask_value ) [VoxelRaycastResult](VoxelRaycastResult.md) | [raycast](#i_raycast) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) origin, [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) direction, [float](https://docs.godotengine.org/en/stable/classes/class_float.html) max_distance=10.0, [int](https://docs.godotengine.org/en/stable/classes/class_int.html) collision_mask=4294967295 ) [void](#) | [set_voxel](#i_set_voxel) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos, [int](https://docs.godotengine.org/en/stable/classes/class_int.html) v ) [void](#) | [set_voxel_f](#i_set_voxel_f) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos, [float](https://docs.godotengine.org/en/stable/classes/class_float.html) v ) [void](#) | [set_voxel_metadata](#i_set_voxel_metadata) ( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos, [Variant](https://docs.godotengine.org/en/stable/classes/class_variant.html) meta ) <p></p> ## Enumerations: enum **Mode**: - **MODE_ADD** = **0** --- When editing [enum VoxelBuffer.CHANNEL_SDF], will add matter. 
Useful for building. - **MODE_REMOVE** = **1** --- When editing [enum VoxelBuffer.CHANNEL_SDF], will subtract matter. Useful for digging. - **MODE_SET** = **2** --- Replace voxel values without any blending. Useful for blocky voxels. ## Property Descriptions - [int](https://docs.godotengine.org/en/stable/classes/class_int.html)<span id="i_channel"></span> **channel** = 0 Set which channel will be edited. When used on a terrain node, it will default to the first available channel, based on the stream and generator. - [int](https://docs.godotengine.org/en/stable/classes/class_int.html)<span id="i_eraser_value"></span> **eraser_value** = 0 Sets which value will be used to erase voxels when editing the enum VoxelBuffer.CHANNEL_TYPE channel in enum MODE_REMOVE mode. - [int](https://docs.godotengine.org/en/stable/classes/class_int.html)<span id="i_mode"></span> **mode** = 0 Sets how `do_*` functions will behave. This may vary depending on the channel. - [float](https://docs.godotengine.org/en/stable/classes/class_float.html)<span id="i_sdf_scale"></span> **sdf_scale** = 0.002 When working with smooth voxels, applies a scale to the signed distance field. A high scale (1 or higher) will tend to produce blocky results, and a low scale (below 1, but not too close to zero) will tend to be smoother. This is related to the enum VoxelBuffer.Depth configuration on voxels. For 8-bit and 16-bit, there is a limited range of values the Signed Distance Field can take, and by default it is clamped to -1..1, so the gradient can only range across 2 voxels. But when LOD is used, it is better to stretch that range over a longer distance, and this is achieved by scaling SDF values. - [int](https://docs.godotengine.org/en/stable/classes/class_int.html)<span id="i_value"></span> **value** = 0 Sets which voxel value will be used. This is not relevant when editing enum VoxelBuffer.CHANNEL_SDF. 
## Method Descriptions - [void](#)<span id="i_do_box"></span> **do_box**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) begin, [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) end ) Operate on a rectangular cuboid section of the terrain. `begin` and `end` are inclusive. Choose operation and which voxel to use by setting `value` and `mode` before calling this function. - [void](#)<span id="i_do_point"></span> **do_point**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos ) - [void](#)<span id="i_do_sphere"></span> **do_sphere**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) center, [float](https://docs.godotengine.org/en/stable/classes/class_float.html) radius ) - [int](https://docs.godotengine.org/en/stable/classes/class_int.html)<span id="i_get_voxel"></span> **get_voxel**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos ) - [float](https://docs.godotengine.org/en/stable/classes/class_float.html)<span id="i_get_voxel_f"></span> **get_voxel_f**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos ) - [Variant](https://docs.godotengine.org/en/stable/classes/class_variant.html)<span id="i_get_voxel_metadata"></span> **get_voxel_metadata**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos ) - [bool](https://docs.godotengine.org/en/stable/classes/class_bool.html)<span id="i_is_area_editable"></span> **is_area_editable**( [AABB](https://docs.godotengine.org/en/stable/classes/class_aabb.html) box ) - [void](#)<span id="i_paste"></span> **paste**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) dst_pos, [Reference](https://docs.godotengine.org/en/stable/classes/class_reference.html) src_buffer, [int](https://docs.godotengine.org/en/stable/classes/class_int.html) src_mask_value ) - [VoxelRaycastResult](VoxelRaycastResult.md)<span 
id="i_raycast"></span> **raycast**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) origin, [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) direction, [float](https://docs.godotengine.org/en/stable/classes/class_float.html) max_distance=10.0, [int](https://docs.godotengine.org/en/stable/classes/class_int.html) collision_mask=4294967295 ) - [void](#)<span id="i_set_voxel"></span> **set_voxel**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos, [int](https://docs.godotengine.org/en/stable/classes/class_int.html) v ) - [void](#)<span id="i_set_voxel_f"></span> **set_voxel_f**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos, [float](https://docs.godotengine.org/en/stable/classes/class_float.html) v ) - [void](#)<span id="i_set_voxel_metadata"></span> **set_voxel_metadata**( [Vector3](https://docs.godotengine.org/en/stable/classes/class_vector3.html) pos, [Variant](https://docs.godotengine.org/en/stable/classes/class_variant.html) meta ) _Generated on Feb 16, 2021_
104.95
467
0.507623
yue_Hant
0.491355
0c63947f428be4fdaf78c139d225dc1ca7c21125
1,789
md
Markdown
h2o-docs/src/booklets/v2_2015/source/LaTeX_StyleGuide.md
ahmedengu/h2o-3
ac2c0a6fbe7f8e18078278bf8a7d3483d41aca11
[ "Apache-2.0" ]
6,098
2015-05-22T02:46:12.000Z
2022-03-31T16:54:51.000Z
h2o-docs/src/booklets/v2_2015/source/LaTeX_StyleGuide.md
ahmedengu/h2o-3
ac2c0a6fbe7f8e18078278bf8a7d3483d41aca11
[ "Apache-2.0" ]
2,517
2015-05-23T02:10:54.000Z
2022-03-30T17:03:39.000Z
h2o-docs/src/booklets/v2_2015/source/LaTeX_StyleGuide.md
ahmedengu/h2o-3
ac2c0a6fbe7f8e18078278bf8a7d3483d41aca11
[ "Apache-2.0" ]
2,199
2015-05-22T04:09:55.000Z
2022-03-28T22:20:45.000Z
Booklet Workflow ---------------------- - Get latest version of .tex file from h2o-3 repo - Make revisions (please refer to "Notes" below before making changes) - Make a PDF to test for errors - if errors, please message me to troubleshoot, don't push until fixed - Delete baggage files (please refer to "Notes" below) - Push changes to master Notes ------- - Please search for %% - these contain questions about sections that need more work. (% = explanation of function) - Please don't change any formatting or \ operators - the doc should generate without any errors, if this is not the case please let me know before changing anything. - Please use the following syntax: - \texttt{} for parameter references - \begin{lstlisting}[breaklines,basicstyle=\ttfamily] & \end{lstlisting} for code blocks (lstlisting prevents margin overrun and places line breaks so that users can copy/paste code) - {\url{}} for links. Please use this format so that it will display the link for print and break the lines nicely. - Please remember to use a \ before a _ if you are using \texttt - if you don't add a \ before the _, it can show up as italicized. I'll try to fix these if I find them but this is especially relevant for parameter names using underscores. - If you are trying to get a character to display that LaTeX thinks is code/math (for example, # or ~), try surrounding the character with $ or adding a \ before it. - Please try not to save if you have errors - they can be time-consuming to find & fix. If something's generating a ! by the line number, please let me know so I can help troubleshoot. - LaTeX can generate a lot of baggage files (.aux, .log, .out, etc) - please clean these out before pushing changes so that the folder stays clean.
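A small illustrative snippet (the parameter name and code content are hypothetical, not from any particular booklet) pulling the conventions above together — `\texttt` with an escaped underscore, a `lstlisting` code block, and `\url` inside braces:

```latex
% \texttt for parameter references; note the \ before the underscore
The \texttt{max\_depth} parameter controls tree depth.

% lstlisting for code blocks: breaklines prevents margin overrun
\begin{lstlisting}[breaklines,basicstyle=\ttfamily]
model <- h2o.gbm(x = predictors, y = response, max_depth = 5)
\end{lstlisting}

% \url inside braces so the link displays in print and breaks nicely
See {\url{http://docs.h2o.ai}} for details.

% escape characters LaTeX treats as special, e.g. \# and \~{}
```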
81.318182
240
0.729458
eng_Latn
0.999233
0c639b6214774cd5a7cac810f39f9621ebc59709
522
md
Markdown
_posts/2021-02-03-33.Improved Variational Inference with Inverse Autoregressive Flow (2016).md
seunghan96/seunghan96.github.io
f1e5dc838a372c490ce3cd07da08dd822e06d190
[ "MIT" ]
null
null
null
_posts/2021-02-03-33.Improved Variational Inference with Inverse Autoregressive Flow (2016).md
seunghan96/seunghan96.github.io
f1e5dc838a372c490ce3cd07da08dd822e06d190
[ "MIT" ]
null
null
null
_posts/2021-02-03-33.Improved Variational Inference with Inverse Autoregressive Flow (2016).md
seunghan96/seunghan96.github.io
f1e5dc838a372c490ce3cd07da08dd822e06d190
[ "MIT" ]
1
2021-11-13T17:49:17.000Z
2021-11-13T17:49:17.000Z
--- title: 33.Improved Variational Inference with Inverse Autoregressive Flow (2016) categories: [BNN] tags: excerpt: Paper Review by Seunghan Lee --- 33.Improved Variational Inference with Inverse Autoregressive Flow (2016) ========================================================================= [Paper Review] by Seunghan Lee ( 이승한 ) <embed src="/assets/pdf/BNN/review/[review]33.Improved Variational Inference with Inverse Autoregressive Flow (2016).pdf#toolbar=0&navpanes=0&scrollbar=0" type="application/pdf" />
40.153846
180
0.668582
eng_Latn
0.284404
0c63ab35f1115127585f43cecc01fae79b058d4a
54
md
Markdown
README.md
YanliangWu/CS452-Marklin-Project
c2b19a7d4af90987a9656e84d42d277ee42c7537
[ "Unlicense" ]
null
null
null
README.md
YanliangWu/CS452-Marklin-Project
c2b19a7d4af90987a9656e84d42d277ee42c7537
[ "Unlicense" ]
null
null
null
README.md
YanliangWu/CS452-Marklin-Project
c2b19a7d4af90987a9656e84d42d277ee42c7537
[ "Unlicense" ]
null
null
null
# CS452-Marklin-Project UWaterloo CS452 Train Project
18
29
0.833333
kor_Hang
0.397958
0c63b799b9dac3ceba17af19312d92a66b5950f0
607
md
Markdown
_posts/2014-03-28-which_-a.md
asalamon74/commandlineblog
420004f18d4ee107aa0510d4fcc8cb060b0034ef
[ "CC0-1.0" ]
null
null
null
_posts/2014-03-28-which_-a.md
asalamon74/commandlineblog
420004f18d4ee107aa0510d4fcc8cb060b0034ef
[ "CC0-1.0" ]
null
null
null
_posts/2014-03-28-which_-a.md
asalamon74/commandlineblog
420004f18d4ee107aa0510d4fcc8cb060b0034ef
[ "CC0-1.0" ]
null
null
null
--- layout: post title: 'which -a' permalink: /2014/03/28/which_-a post_id: 5882522 categories: - which --- Hosszú évek óta használom a [which](/2012/11/07/which_543) parancsot, de sosem merült fel bennem a kérdés, vajon mit ír ki, ha többször is szerepel a keresett program a PATH-on. Alapesetben a legelső előfordulást írja ki: ``` $ which rails /usr/bin/rails ``` A -a kapcsolóval viszont az összeset kiírja: ``` $ which -a rails /usr/bin/rails /bin/rails /home/user/bin/rails ``` Utóbbi akkor nagyon hasznos, ha (kellően el nem ítélhető módon) több verziója is fent van a keresett programnak.  
19.580645
149
0.729819
hun_Latn
0.999905
0c640f77a6ca8ace4caf0c416e31d8054fc24047
95
md
Markdown
README.md
gabrielaraujo3/grid-de-precos
51ae2b2c156eb5311cf67d6c43e9246eb8f417f2
[ "MIT" ]
null
null
null
README.md
gabrielaraujo3/grid-de-precos
51ae2b2c156eb5311cf67d6c43e9246eb8f417f2
[ "MIT" ]
null
null
null
README.md
gabrielaraujo3/grid-de-precos
51ae2b2c156eb5311cf67d6c43e9246eb8f417f2
[ "MIT" ]
null
null
null
# grid-de-precos Vou tentar fazer uma Grid de Preços simples, mas sem seguir nenhum tutorial.
31.666667
77
0.778947
por_Latn
0.999998
0c645672e0eec0317b8d4899be2e9a5a3f2545c7
3,474
md
Markdown
_posts/2016-02-28-frontend-design-ux.md
cristinafsanz/projects
2af5914310202b13a427f6c5210076a01b0a0a90
[ "MIT" ]
2
2017-02-07T13:08:47.000Z
2019-01-03T11:02:50.000Z
_posts/2016-02-28-frontend-design-ux.md
cristinafsanz/projects
2af5914310202b13a427f6c5210076a01b0a0a90
[ "MIT" ]
null
null
null
_posts/2016-02-28-frontend-design-ux.md
cristinafsanz/projects
2af5914310202b13a427f6c5210076a01b0a0a90
[ "MIT" ]
4
2017-11-25T20:18:16.000Z
2020-04-16T05:40:42.000Z
--- layout: post title: Should Frontend designers know UX? date: 2016-02-27 --- Brad Frost recently wrote an article about <a href="http://bradfrost.com/blog/post/frontend-design/">Frontend design</a> and after reading it I thought, yes, this is what I am! I am a Frontend Designer! He says that Frontend Design involves creating the HTML, CSS, and presentational JavaScript code that makes up a user interface, but it also helps bridge the divide between the design and development worlds because we know the basics of both. - I have a past as a backend developer, so I know how it works. - I also create AngularJS applications, so in my case I am learning to write application-level JavaScript and not only presentational JavaScript. - I am doing a graphic design course, so I can start comparing color palettes or creating basic illustrations and icons. - But do I understand UX principles and best practices? I don't. It is not that I want to be a Full-stack designer, but it would be useful to know the UX basics just in case I need them. Or to better understand a design that I'm going to turn into code. As the article <a href="http://webdesignerdepot.com/2015/06/what-is-a-full-stack-designer-and-should-you-be-one/">What is a Full-stack designer, and should you be one?</a> said, the benefits of expanding our skill sets are quite nice. Full-stack designers often end up with a more thorough understanding of their work, making it more consistent from research to production phases. Knowing the limitations and what to expect in development, while planning UX/UI wireframes or mockups, can keep concepts realistic. Although Full-stack design would be nice, I think it is not necessary. I can focus on web design to improve myself as a Frontend Designer, but I can get closer to UX to understand the whole process better. Brad Frost also points out that Frontend development should be a core part of the design process. We must not forget that we work on the same user interface.
An article in UXPA magazine, <a href="http://uxpamagazine.org/building-it-right/">Building it Right! Bridging the Gap Between UX Designers and Developers</a>, brings up the same idea: that there should be more cooperation between UX and Frontend Design. While UX designers extract and translate user requirements into user-friendly wireframes, Frontend designers convert those wireframes into real products. Since each role requires a different educational background, the two players sometimes find themselves in conflict with one another due to differing beliefs. To bridge the gap between these two points of view, the UXPA invited three Frontend designers to present their point of view to a local audience of UX professionals. They spoke about bringing Frontend designers into the conceptualization phase. Frontend designers can also bring technical ways to improve the interaction, and working together can increase trust and confidence in each other. In my case I can't take this approach at work right now; we don't have a UX team currently. But I can start learning on my own, and hopefully I will start to understand the whole process better and use it for my personal projects as well. By the way, next week is the first Women Techmakers Madrid, and one talk is about these issues. I am looking forward to hearing what they are going to say. The title is "War of the Worlds: designers and developers" by Laura Andina and Josefina Pérez.
93.891892
569
0.790443
eng_Latn
0.999784
0c649ddf0913dec424a14253c99cad2d3818ea87
2,077
md
Markdown
docs/migration-guide.md
annez/lighthouse-ci
8b3fde99a22082f9c4ed8ce355f151a794f42fbb
[ "Apache-2.0" ]
1
2020-08-27T23:46:21.000Z
2020-08-27T23:46:21.000Z
docs/migration-guide.md
annez/lighthouse-ci
8b3fde99a22082f9c4ed8ce355f151a794f42fbb
[ "Apache-2.0" ]
70
2020-07-23T16:57:36.000Z
2021-07-27T13:30:22.000Z
docs/migration-guide.md
annez/lighthouse-ci
8b3fde99a22082f9c4ed8ce355f151a794f42fbb
[ "Apache-2.0" ]
1
2020-09-29T19:20:11.000Z
2020-09-29T19:20:11.000Z
# Migration Guide ## Overview This document provides guidance on upgrading your LHCI version in both the CLI and the server. ### Patch Updates Patch version updates typically require no specific action on the user's part (see [the version policy](./version-policy.md) for details). If any SQL migrations are needed for new features, the server will automatically execute them on startup. ### Server-CLI Version Compatibility LHCI commits to ensuring that the CLI of version `n` will always be compatible with the server of version `n - 1`. This ensures that smooth migrations between versions proceed by first updating your `@lhci/cli` dependencies, followed by updating the server to match. The reverse statement is not true. Upgrading the server before upgrading clients may result in lost data. ## `0.3.0` to `0.4.0` ### Affected Usage Patterns - `staticDistDir` usage without explicit `url` (new autodetection logic, different URLs could be collected) - `lighthouse:*` assertion preset usage (new audits asserted, assertions could fail) - `assert` usage (some audits removed and scores changed, assertions could fail) - Custom LHCI server API usage (new headers required and statistic names changed, API calls could fail) - Custom `.lighthouseci/` folder usage (HTML reports deleted on every `collect` invocation, data could be lost) If your use of Lighthouse CI doesn't follow any of these patterns, you will likely see no breaking changes.
### Breaking Changes - build statistics use the `_median` suffix, not `_average` - category scores changed to lighthouse 6.0 weighting - collect will now recurse into subdirectories when no URLs are provided to collect and a `staticDistDir` is given - default assertion will now be minScore=0.9 - HTML reports in .lighthouseci/ will also be deleted on running of `collect`, not just the JSON - presets now assert lighthouse 6.0 audits - x-lhci-build-token header now required on POST /projects/:id/builds - x-lhci-build-token header now required on POST /projects/:id/builds/:id/runs and PUT /projects/:id/builds/:id/lifecycle
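For custom API clients, the new header requirement simply means attaching `x-lhci-build-token` to build-mutating requests. A minimal sketch with Python's standard library — the server URL, project ID, and token below are placeholders, and the request is only constructed, not sent:

```python
import urllib.request

# Hypothetical values -- substitute your own server, project ID, and build token.
SERVER = "http://localhost:9001"
req = urllib.request.Request(
    f"{SERVER}/projects/abc123/builds",
    data=b'{"branch": "main"}',
    headers={
        "x-lhci-build-token": "example-token",  # required as of 0.4.0
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib normalizes header names; the token header is attached and ready to send.
print(req.get_header("X-lhci-build-token"))  # → example-token
```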
56.135135
371
0.778527
eng_Latn
0.996578